Imperial College London

Dr Benny Lo

Faculty of Medicine, Department of Metabolism, Digestion and Reproduction

Visiting Reader
 
 
 

Contact

 

+44 (0)20 7594 0806
benny.lo
Website

 
 

Location

 

Bessemer Building, South Kensington Campus



 

Publications


292 results found

Jia W, Li B, Xu Q, Chen G, Mao ZH, McCrory MA, Baranowski T, Burke LE, Lo B, Anderson AK, Frost G, Sazonov E, Sun M et al., 2024, Image-based volume estimation for food in a bowl, Journal of Food Engineering, Vol: 372, ISSN: 0260-8774

Image-assisted dietary assessment has become popular in dietary monitoring studies in recent years. However, food volume estimation is still a challenging problem due to the lack of 3D information in a 2D image and the occlusion of the food by itself or by its container (e.g., bowl, cup). This study aims to investigate the relationship between the observable surface of food in a bowl and a normalized index (i.e., bowl fullness) that represents its volume. A mathematical model is established for describing different shapes of bowls, and a convenient experimental method is proposed to determine the bowl shape. An image feature called Food Area Ratio (FAR) is used to estimate the volume of food in a bowl based on the relationship between bowl fullness and the FAR calculated from the image. Both simulations and experiments with real food/liquid demonstrate the feasibility and accuracy of the proposed approach.

Journal article

Lo BPL, Au KWS, Chiu PWY, 2024, Guest Editorial Special section on The Hamlyn Symposium 2022 - MedTech Reimagined, IEEE Transactions on Medical Robotics and Bionics, Vol: 6, Pages: 2-3

Journal article

Han J, Gu X, Yang G-Z, Lo B et al., 2024, Noise-Factorized Disentangled Representation Learning for Generalizable Motor Imagery EEG Classification, IEEE J Biomed Health Inform, Vol: 28, Pages: 765-776

Motor Imagery (MI) Electroencephalography (EEG) is one of the most common Brain-Computer Interface (BCI) paradigms that has been widely used in neural rehabilitation and gaming. Although considerable research efforts have been dedicated to developing MI EEG classification algorithms, they are mostly limited in handling scenarios where the training and testing data are not from the same subject or session. Such poor generalization capability significantly limits the realization of BCI in real-world applications. In this paper, we proposed a novel framework to disentangle the representation of raw EEG data into three components, subject/session-specific, MI-task-specific, and random noises, so that the subject/session-specific feature extends the generalization capability of the system. This is realized by a joint discriminative and generative framework, supported by a series of fundamental training losses and training strategies. We evaluated our framework on three public MI EEG datasets, and detailed experimental results show that our method can achieve superior performance by a large margin compared to current state-of-the-art benchmark algorithms.

Journal article

Gu X, Fani D, Han J, Liu X, Chen W, Guang-Zhong Y, Lo B et al., 2024, Beyond supervised learning for pervasive healthcare, IEEE Reviews in Biomedical Engineering, Vol: 17, Pages: 42-62, ISSN: 1937-3333

The integration of machine/deep learning and sensing technologies is transforming healthcare and medical practice. However, inherent limitations in healthcare data, namely scarcity, quality, and heterogeneity, hinder the effectiveness of supervised learning techniques which are mainly based on pure statistical fitting between data and labels. In this paper, we first identify the challenges present in machine learning for pervasive healthcare and we then review the current trends beyond fully supervised learning that are developed to address these three issues. Rooted in the inherent drawbacks of empirical risk minimization that underpins pure fully supervised learning, this survey summarizes seven key lines of learning strategies, to promote the generalization performance for real-world deployment. In addition, we point out several directions that are emerging and promising in this area, to develop data-efficient, scalable, and trustworthy computational models, and to leverage multi-modality and multi-source sensing informatics, for pervasive healthcare.

Journal article

Qiu J, Li L, Sun J, Peng J, Shi P, Zhang R, Dong Y, Lam K, Lo PW, Xiao B, Yuan W, Xu D, Lo B et al., 2023, Large AI Models in Health Informatics: Applications, Challenges, and the Future, arXiv

Journal article

Wang Z, Lo PW, Huang Y, Chen J, Calo JC, Chen W, Lo BPL et al., 2023, Tactile perception: a biomimetic whisker-based method for clinical gastrointestinal diseases screening, npj Robotics, Vol: 1, ISSN: 2731-4278

Early screening for gastrointestinal diseases is of vital importance for reducing mortality through introducing early intervention. In this paper, a biomimetic artificial whisker-based hardware system with artificial intelligence-enabled self-learning capability is proposed for endoluminal diagnosis. The proposed method provides an end-to-end screening strategy based on tactile information to extract the structural and textural details of the tissues in the lumen, enabling objective screening and reducing the inter-endoscopist variability. Benchmark performance analysis of the proposed system was conducted to assess its electrical characteristics and core functions. To validate the feasibility of the proposed method for endoluminal diagnosis, an ex-vivo study was conducted to detect some common tissue structures, and our method shows promising results with a test accuracy of up to 94.44% and a kappa of 0.9167. This previously unexplored tactile-based method could potentially enhance or complement current endoluminal diagnosis.

Journal article

Jobarteh ML, McCrory MA, Lo B, Triantafyllidis KK, Qiu J, Griffin JP, Sazonov E, Sun M, Jia W, Baranowski T, Anderson AK, Maitland K, Frost G et al., 2023, Evaluation of acceptability, functionality, and validity of a passive image-based dietary intake assessment method in adults and children of Ghanaian and Kenyan origin living in London, UK, Nutrients, Vol: 15, ISSN: 2072-6643

BACKGROUND: Accurate estimation of dietary intake is challenging. However, whilst some progress has been made in high-income countries, low- and middle-income countries (LMICs) remain behind, contributing to critical nutritional data gaps. This study aimed to validate an objective, passive image-based dietary intake assessment method against weighed food records in London, UK, for onward deployment to LMICs. METHODS: Wearable camera devices were used to capture food intake on eating occasions in 18 adults and 17 children of Ghanaian and Kenyan origin living in London. Participants were provided pre-weighed meals of Ghanaian and Kenyan cuisine and camera devices to automatically capture images of the eating occasions. Food images were assessed for portion size, energy, nutrient intake, and the relative validity of the method compared to the weighed food records. RESULTS: The Pearson and Intraclass correlation coefficients of estimates of intakes of food, energy, and 19 nutrients ranged from 0.60 to 0.95 and 0.67 to 0.90, respectively. Bland-Altman analysis showed good agreement between the image-based method and the weighed food record. Under-estimation of dietary intake by the image-based method ranged from 4 to 23%. CONCLUSIONS: Passive food image capture and analysis provides an objective assessment of dietary intake comparable to weighed food records.

Journal article

Jiang S, Strout Z, He B, Peng D, Shull PBB, Lo BPL et al., 2023, Dual Stream Meta Learning for Road Surface Classification and Riding Event Detection on Shared Bikes, IEEE Transactions on Systems, Man, and Cybernetics: Systems, ISSN: 2168-2216

Journal article

Ghosh T, McCrory MA, Marden T, Higgins J, Anderson AK, Domfe CA, Jia W, Lo B, Frost G, Steiner-Asiedu M, Baranowski T, Sun M, Sazonov E et al., 2023, I2N: image to nutrients, a sensor guided semi-automated tool for annotation of images for nutrition analysis of eating episodes, Frontiers in Nutrition, Vol: 10, Pages: 1-9, ISSN: 2296-861X

INTRODUCTION: Dietary assessment is important for understanding nutritional status. Traditional methods of monitoring food intake through self-report such as diet diaries, 24-hour dietary recall, and food frequency questionnaires may be subject to errors and can be time-consuming for the user. METHODS: This paper presents a semi-automatic dietary assessment tool we developed - a desktop application called Image to Nutrients (I2N) - to process sensor-detected eating events and images captured during these eating events by a wearable sensor. I2N has the capacity to offer multiple food and nutrient databases (e.g., USDA-SR, FNDDS, USDA Global Branded Food Products Database) for annotating eating episodes and food items. I2N estimates energy intake, nutritional content, and the amount consumed. The components of I2N are three-fold: 1) sensor-guided image review, 2) annotation of food images for nutritional analysis, and 3) access to multiple food databases. Two studies were used to evaluate the feasibility and usefulness of I2N: 1) a US-based study with 30 participants and a total of 60 days of data and 2) a Ghana-based study with 41 participants and a total of 41 days of data. RESULTS: In both studies, a total of 314 eating episodes were annotated using at least three food databases. Using I2N's sensor-guided image review, the number of images that needed to be reviewed was reduced by 93% and 85% for the two studies, respectively, compared to reviewing all the images. DISCUSSION: I2N is a unique tool that allows for simultaneous viewing of food images, sensor-guided image review, and access to multiple databases in one tool, making nutritional analysis of food images efficient. The tool is flexible, allowing for nutritional analysis of images even if sensor signals are not available.

Journal article

Gu X, Han J, Yang G-Z, Lo B et al., 2023, Generalizable movement intention recognition with multiple heterogeneous EEG datasets, the 2023 IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE, Pages: 9858-9864

Human movement intention recognition is important for human-robot interaction. Existing work based on motor imagery electroencephalogram (EEG) provides a non-invasive and portable solution for intention detection. However, the data-driven methods may suffer from the limited scale and diversity of the training datasets, which result in poor generalization performance on new test subjects. It is practically difficult to directly aggregate data from multiple datasets for training, since they often employ different channels and collected data suffers from significant domain shifts caused by different devices, experiment setup, etc. On the other hand, the inter-subject heterogeneity is also substantial due to individual differences in EEG representations. In this work, we developed two networks to learn from both the shared and the complete channels across datasets, handling inter-subject and inter-dataset heterogeneity respectively. Based on both networks, we further developed an online knowledge co-distillation framework to collaboratively learn from both networks, achieving coherent performance boosts. Experimental results have shown that our proposed method can effectively aggregate knowledge from multiple datasets, demonstrating better generalization in the context of cross-subject validation.

Conference paper

Calo J, Lo B, 2023, IoT Federated Blockchain Learning at the Edge., Pages: 1-4

IoT devices are sorely underutilized in the medical field, especially within machine learning for medicine, yet they offer unrivaled benefits. IoT devices are low-cost, energy-efficient, small and intelligent devices [1]. In this paper, we propose a distributed federated learning framework for IoT devices, more specifically for IoMT (Internet of Medical Things), using blockchain to allow for a decentralized scheme improving privacy and efficiency over a centralized system; this allows us to move from the prevalent cloud-based architectures to the edge. The system is designed for three paradigms: 1) Training neural networks on IoT devices to allow for collaborative training of a shared model whilst decoupling the learning from the dataset [2] to ensure privacy [3]. Training is performed in an online manner simultaneously amongst all participants, allowing for training on actual data that may not have been present in a dataset collected in the traditional way, and for dynamically adapting the system whilst it is being trained. 2) Training of an IoMT system in a fully private manner so as to mitigate the issue of confidentiality of medical data and to build robust, and potentially bespoke [4], models where not much, if any, data exists. 3) Distribution of the actual network training, something federated learning itself does not do, to allow hospitals, for example, to utilize their spare computing resources to train network models.

Conference paper

Li Y, Luo S, Zhang H, Zhang Y, Zhang Y, Lo B et al., 2023, MtCLSS: Multi-Task Contrastive Learning for Semi-Supervised Pediatric Sleep Staging, IEEE Journal of Biomedical and Health Informatics, Vol: 27, Pages: 2647-2655, ISSN: 2168-2194

Journal article

Calo J, Lo B, 2023, Federated Blockchain Learning at the Edge, Information, Vol: 14

Journal article

Zhang R, Chen J, Wang Z, Yang Z, Ren Y, Shi P, Calo J, Lam K, Purkayastha S, Lo B et al., 2023, A Step Towards Conditional Autonomy - Robotic Appendectomy, IEEE Robotics and Automation Letters, Vol: 8, Pages: 2429-2436, ISSN: 2377-3766

Journal article

Zhou X, Yang Z, Ren Y, Bai W, Lo B, Yeatman EMM et al., 2023, Modified Bilateral Active Estimation Model: A Learning-Based Solution to the Time Delay Problem in Robotic Tele-Control, IEEE Robotics and Automation Letters, Vol: 8, Pages: 2653-2660, ISSN: 2377-3766

Journal article

Qiu J, Lo FP-W, Gu X, Jobarteh ML, Jia W, Baranowski T, Steiner-Asiedu M, Anderson AK, McCrory MA, Sazonov E, Sun M, Frost G, Lo B et al., 2023, Egocentric image captioning for privacy-preserved passive dietary intake monitoring, IEEE Transactions on Cybernetics, Vol: PP, Pages: 1-14, ISSN: 1083-4419

Camera-based passive dietary intake monitoring is able to continuously capture the eating episodes of a subject, recording rich visual information, such as the type and volume of food being consumed, as well as the eating behaviors of the subject. However, there currently is no method that is able to incorporate these visual clues and provide a comprehensive context of dietary intake from passive recording (e.g., is the subject sharing food with others, what food the subject is eating, and how much food is left in the bowl). On the other hand, privacy is a major concern while egocentric wearable cameras are used for capturing. In this article, we propose a privacy-preserved secure solution (i.e., egocentric image captioning) for dietary assessment with passive monitoring, which unifies food recognition, volume estimation, and scene understanding. By converting images into rich text descriptions, nutritionists can assess individual dietary intake based on the captions instead of the original images, reducing the risk of privacy leakage from images. To this end, an egocentric dietary image captioning dataset has been built, which consists of in-the-wild images captured by head-worn and chest-worn cameras in field studies in Ghana. A novel transformer-based architecture is designed to caption egocentric dietary images. Comprehensive experiments have been conducted to evaluate the effectiveness and to justify the design of the proposed architecture for egocentric dietary image captioning. To the best of our knowledge, this is the first work that applies image captioning for dietary intake assessment in real-life settings.

Journal article

Zhang C, Jovanov E, Liao H, Zhang Y-T, Lo B, Zhang Y, Guan C et al., 2023, Video Based Cocktail Causal Container for Blood Pressure Classification and Blood Glucose Prediction, IEEE Journal of Biomedical and Health Informatics, Vol: 27, Pages: 1118-1128, ISSN: 2168-2194

Journal article

Alian A, Zari E, Wang Z, Franco E, Avery JP, Runciman M, Lo B, Rodriguez y Baena F, Mylonas G et al., 2023, Current engineering developments for robotic systems in flexible endoscopy, Techniques and Innovations in Gastrointestinal Endoscopy, Vol: 25, Pages: 67-81, ISSN: 2590-0307

The past four decades have seen an increase in the incidence of early-onset gastrointestinal cancer. Because early-stage cancer detection is vital to reduce mortality rate, mass screening colonoscopy provides the most effective prevention strategy. However, conventional endoscopy is a painful and technically challenging procedure that requires sedation and experienced endoscopists to be performed. To overcome the current limitations, technological innovation is needed in colonoscopy. In recent years, researchers worldwide have worked to enhance the diagnostic and therapeutic capabilities of endoscopes. The new frontier of endoscopic interventions is represented by robotic flexible endoscopy. Among all options, self-propelling soft endoscopes are particularly promising thanks to their dexterity and adaptability to the curvilinear gastrointestinal anatomy. For these devices to replace the standard endoscopes, integration with embedded sensors and advanced surgical navigation technologies must be investigated. In this review, the progress in robotic endoscopy was divided into the fundamental areas of design, sensing, and imaging. The article offers an overview of the most promising advancements on these three topics since 2018. Continuum endoscopes, capsule endoscopes, and add-on endoscopic devices were included, with a focus on fluid-driven, tendon-driven, and magnetic actuation. Sensing methods employed for the shape and force estimation of flexible endoscopes were classified into model- and sensor-based approaches. Finally, some key contributions in molecular imaging technologies, artificial neural networks, and software algorithms are described. Open challenges are discussed to outline a path toward clinical practice for the next generation of endoscopic devices.

Journal article

Rosa BMG, Wales D, Lo B, 2023, Towards E-Nose Detection of Volatile Organic Compounds as Disease Biomarkers with Complementary Cardiovascular Assessment

Monitoring of volatile organic compounds (VOCs) in body fluids (blood, urine, sweat and saliva) or exhaled breath is a recent trend in medical research with the potential to unveil information about internal body processes or metabolic pathways dysregulated due to bacterial infection, tissue cancer, inflammation, and injury. Typically, these low-weight chemical compounds are analyzed from collected samples by expensive methods involving gas chromatography-mass spectrometry, proton-transfer-reaction mass spectrometry or ion mobility spectrometry, which makes it difficult to translate into wearable technology for day-to-day use by patients. Recently, E-noses have been proposed to sense chemicals and/or odors from gas exchanges taking place in the upper body, in an attempt to replace the human nose, with different degrees of success. In this paper, we propose a prototype for an E-nose device with sensing modules for VOC detection by graphene field-effect transistor (GFET) technology, combined with modules for the detection of body temperature, motion and sounds produced by the cardiovascular system. We successfully tested the prototype in the neck region (carotid artery) for monitoring the latter variables, whereas 12 clinically relevant VOCs were monitored inside a controlled setup for metrics such as the change of graphene's resistance and spectral noise upon exposure to these vapors. This can then constitute the basis for the development of a fully integrated system that directly correlates physiological variables with disease biomarkers sensed from gas exchanges.

Conference paper

Lo FPW, Guo Y, Sun Y, Qiu J, Lo B et al., 2023, An Intelligent Vision-Based Nutritional Assessment Method for Handheld Food Items, IEEE Transactions on Multimedia, Vol: 25, Pages: 5840-5851, ISSN: 1520-9210

Dietary assessment has proven to be effective in evaluating the dietary intake of patients with diabetes and obesity. The traditional approach to assessing dietary intake is to conduct a 24-hour dietary recall, a structured interview designed to obtain information on the food categories and volumes consumed by the participants. Due to unconscious biases in such self-reporting approaches, many research studies have explored the use of vision-based approaches to provide accurate and objective assessments. Despite the promising results of food recognition by deep neural networks, there still exist several hurdles in deep learning-based food volume estimation, ranging from the domain shift between synthetic and raw 3D models and shape completion ambiguity to the lack of a large-scale paired training dataset. Therefore, this paper proposed an intelligent nutritional assessment approach via weakly-supervised point cloud completion, which aims to close the reality gap in 3D point cloud completion tasks and address the targeted challenges. The volume can then be easily estimated from the completed representation of the food. Another major merit of our system is that it can be used to estimate the volume of handheld food items without requiring constraints such as placing the food items on a table or next to fiducial markers, which facilitates implementation on both wearable and handheld cameras. Comprehensive experiments have been carried out on major benchmark datasets and a self-constructed volume-annotated dataset respectively, in which the proposed method demonstrates comparable results with several strong fully-supervised baseline methods and shows superior completion ability in handling food volume estimation.

Journal article

Shi P, Peng J, Qiu J, Ju X, Lo FP-W, Lo B et al., 2023, EVEN: An Event-Based Framework for Monocular Depth Estimation at Adverse Night Conditions

Accurate depth estimation under adverse night conditions has practical impact and applications, such as in autonomous driving and rescue robots. In this work, we studied monocular depth estimation at night time in which various adverse weather, light, and road conditions exist, with data captured in both RGB and event modalities. An event camera can better capture intensity changes by virtue of its high dynamic range (HDR), which makes it particularly suitable for adverse night conditions in which the amount of light in the scene is limited. Although event data can retain visual perception that a conventional RGB camera may fail to capture, the lack of texture and color information in event data hinders its ability to accurately estimate depth alone. To tackle this problem, we propose an event-vision based framework that integrates low-light enhancement for the RGB source and exploits the complementary merits of RGB and event data. A dataset that includes paired RGB and event streams, and ground truth depth maps, has been constructed. Comprehensive experiments have been conducted, and the impact of different adverse weather combinations on the performance of the framework has also been investigated. The results have shown that our proposed framework can better estimate monocular depth under adverse night conditions than six baselines.

Conference paper

Fard SE, Ghosh T, Hossain D, McCrory MA, Thomas G, Higgins J, Jia W, Baranowski T, Steiner-Asiedu M, Anderson AK, Sun M, Frost G, Lo B, Sazonov E et al., 2023, Development of a Method for Compliance Detection in Wearable Sensors

One of the crucial elements in studies relying on wearable sensors for the quantification of human activities (such as physical activity or food intake) is the assessment of wear time (compliance). In this paper, we propose a novel method based on the Automatic Ingestion Monitor v2 (AIM-2), deployed for measuring nutrient and energy intake. The proposed method was developed using data from a study of 30 participants for two days each (US dataset) and tested with an independent dataset (Ghana dataset) of 10 households (30 participants, 3 days each, a total of 90 days). The signals from the accelerometer sensor of the AIM-2 were used to extract features and train a gradient-boosting tree classifier. To reduce the error in the classification of non-compliance in situations where the sensor changes its position with respect to gravity, a two-stage classifier followed by post-processing was introduced. Previously, we developed an offline compliance classifier; this work aimed to develop a classifier for a cloud-based feedback system. The accuracy and F1-score of the developed two-phase classifier based on K-fold validation were 95.37% and 96.93% for the training and validation dataset, and 95.86% and 92.56% for the Ghana dataset, respectively, showing satisfactory performance. The trained classifier can be deployed to monitor compliance with device wear in real-time applications. Clinical Relevance - Food intake and physical activity studies can contribute to detecting, controlling, and even addressing eating- or physical activity-related problems, such as obesity and diabetes, and can support eating planning. To ensure effective monitoring, compliance with wearing the device is crucial.

Conference paper

Shu Y, Gu X, Yang G-Z, Lo B et al., 2022, Revisiting self-supervised contrastive learning for facial expression recognition, British Machine Vision Conference, Publisher: British Machine Vision Association, Pages: 1-14

The success of most advanced facial expression recognition works relies heavily on large-scale annotated datasets. However, it poses great challenges in acquiring clean and consistent annotations for facial expression datasets. On the other hand, self-supervised contrastive learning has gained great popularity due to its simple yet effective instance discrimination training strategy, which can potentially circumvent the annotation issue. Nevertheless, there remain inherent disadvantages of instance-level discrimination, which are even more challenging when faced with complicated facial representations. In this paper, we revisit the use of self-supervised contrastive learning and explore three core strategies to enforce expression-specific representations and to minimize the interference from other facial attributes, such as identity and face styling. Experimental results show that our proposed method outperforms the current state-of-the-art self-supervised learning methods, in terms of both categorical and dimensional facial expression recognition tasks. Our project page: https://claudiashu.github.io/SSLFER.

Conference paper

Diao H, Chen C, Liu X, Yuan W, Amara A, Tamura T, Lo B, Fan J, Meng L, Pun SH, Zhang Y-T, Chen W et al., 2022, Real-Time and Cost-Effective Smart Mat System Based on Frequency Channel Selection for Sleep Posture Recognition in IoMT, IEEE Internet of Things Journal, Vol: 9, Pages: 21421-21431, ISSN: 2327-4662

Journal article

Lam K, Lo FP-W, An Y, Darzi A, Kinross JM, Purkayastha S, Lo B et al., 2022, Deep Learning for Instrument Detection and Assessment of Operative Skill in Surgical Videos, IEEE Transactions on Medical Robotics and Bionics, Vol: 4, Pages: 1068-1071

Journal article

Gu X, Guo Y, Li Z, Qiu J, Dou Q, Liu Y, Lo B, Yang G-Z et al., 2022, Tackling long-tailed category distribution under domain shifts, European Conference on Computer Vision (ECCV 2022), Publisher: Springer, Pages: 727-743, ISSN: 0302-9743

Machine learning models fail to perform well on real-world applications when 1) the category distribution P(Y) of the training dataset suffers from long-tailed distribution and 2) the test data is drawn from different conditional distributions P(X|Y). Existing approaches cannot handle the scenario where both issues exist, which, however, is common for real-world applications. In this study, we took a step forward and looked into the problem of long-tailed classification under domain shifts. We designed three novel core functional blocks including Distribution Calibrated Classification Loss, Visual-Semantic Mapping and Semantic-Similarity Guided Augmentation. Furthermore, we adopted a meta-learning framework which integrates these three blocks to improve domain generalization on unseen target domains. Two new datasets were proposed for this problem, named AWA2-LTS and ImageNet-LTS. We evaluated our method on the two datasets and extensive experimental results demonstrate that our proposed method can achieve superior performance over state-of-the-art long-tailed/domain generalization approaches and their combinations. Source codes and datasets can be found at our project page https://xiaogu.site/LTDS.

Conference paper

Zhang D, Ren Y, Barbot A, Seichepine F, Lo B, Ma Z-C, Yang G-Z et al., 2022, Fabrication and optical manipulation of micro-robots for biomedical applications, Matter, Vol: 5, Pages: 3135-3160, ISSN: 2590-2393

Journal article

Qiu J, Chen L, Gu X, Lo FP-W, Tsai Y-Y, Sun J, Liu J, Lo B et al., 2022, Egocentric Human Trajectory Forecasting With a Wearable Camera and Multi-Modal Fusion, IEEE Robotics and Automation Letters, Vol: 7, Pages: 8799-8806, ISSN: 2377-3766

Journal article

Cerminaro C, Sazonov E, McCrory MA, Steiner-Asiedu M, Bhaskar V, Gallo S, Laing E, Jia W, Sun M, Baranowski T, Frost G, Lo B, Anderson AK et al., 2022, Feasibility of the automatic ingestion monitor (AIM-2) for infant feeding assessment: a pilot study among breast-feeding mothers from Ghana, Public Health Nutrition, Vol: 25, Pages: 2897-2907, ISSN: 1368-9800

Journal article

Sun Y, Lo FP-W, Lo B, 2022, Light-weight internet-of-things device authentication, encryption and key distribution using end-to-end neural cryptosystems, IEEE Internet of Things Journal, Vol: 9, Pages: 14978-14987, ISSN: 2327-4662

Device authentication, encryption, and key distribution are of vital importance to any Internet-of-Things (IoT) systems, such as the new smart city infrastructures. This is due to the concern that attackers could easily exploit the lack of strong security in IoT devices to gain unauthorized access to the system or to hijack IoT devices to perform denial-of-service attacks on other networks. With the rise of fog and edge computing in IoT systems, increasing numbers of IoT devices have been equipped with computing capabilities to perform data analysis with deep learning technologies. Deep learning on edge devices can be deployed in numerous applications, such as local cardiac arrhythmia detection on a smart sensing patch, but it is rarely applied to device authentication and wireless communication encryption. In this paper, we propose a novel lightweight IoT device authentication, encryption, and key distribution approach using neural cryptosystems and binary latent space. The neural cryptosystems adopt three types of end-to-end encryption schemes: symmetric, public-key, and without keys. A series of experiments were conducted to test the performance and security strength of the proposed neural cryptosystems. The experimental results demonstrate the potential of this novel approach as a promising security and privacy solution for the next-generation of IoT systems.

Journal article

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
