Imperial College London

Dr Benny Lo

Faculty of Medicine, Department of Surgery & Cancer

Senior Lecturer
 
 
 

Contact

 

+44 (0)20 7594 0806 | benny.lo | Website

 
 

Location

 

B414B, Bessemer Building, South Kensington Campus



 

Publications

210 results found

Zhang D, Wu Z, Chen J, Gao A, Chen X, Li P, Wang Z, Yang G, Lo B, Yang G-Z et al., 2020, Automatic Microsurgical Skill Assessment Based on Cross-Domain Transfer Learning, IEEE Robotics and Automation Letters, Vol: 5, Pages: 4148-4155

Journal article

Varghese RJ, Lo BPL, Yang G-Z, 2020, Design and Prototyping of a Bio-Inspired Kinematic Sensing Suit for the Shoulder Joint: Precursor to a Multi-DoF Shoulder Exosuit, Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC

Working paper

Jobarteh ML, McCrory MA, Lo B, Sun M, Sazonov E, Anderson AK, Jia W, Maitland K, Qiu J, Steiner-Asiedu M, Higgins JA, Baranowski T, Olupot-Olupot P, Frost G et al., 2020, Development and validation of objective, passive dietary assessment method for estimating food and nutrient intake in households in Low and Middle-Income Countries (LMICs): a study protocol, Current Developments in Nutrition, Vol: 4, Pages: 1-11, ISSN: 2475-2991

Malnutrition is a major concern in low- and middle-income countries (LMIC), but the full extent of nutritional deficiencies remains unknown largely due to lack of accurate assessment methods. This study seeks to develop and validate an objective, passive method of estimating food and nutrient intake in households in Ghana and Uganda. Household members (including under-5s and adolescents) are assigned a wearable camera device to capture images of their food intake during waking hours. Using custom software, images captured are then used to estimate an individual's food and nutrient (i.e., protein, fat, carbohydrate, energy, and micronutrients) intake. Passive food image capture and assessment provides an objective measure of food and nutrient intake in real time, minimizing some of the limitations associated with self-reported dietary intake methods. Its use in LMIC could potentially increase the understanding of a population's nutritional status, and the contribution of household food intake to the malnutrition burden. This project is registered at clinicaltrials.gov (NCT03723460).

Journal article

Zhang Y, Guo Y, Yang P, Chen W, Lo B et al., 2020, Epilepsy Seizure Prediction on EEG Using Common Spatial Pattern and Convolutional Neural Network, IEEE Journal of Biomedical and Health Informatics, Vol: 24, Pages: 465-474, ISSN: 2168-2194

Journal article

Zhang Y, Zhang Y, Lo B, Xu W et al., 2020, Wearable ECG signal processing for automated cardiac arrhythmia classification using CFASE-based feature selection, Expert Systems, Vol: 37, Pages: 1-13, ISSN: 0266-4720

Classification of electrocardiogram (ECG) signals is essential for the automatic diagnosis of cardiovascular disease. With the recent advancement of low-cost wearable ECG devices, it becomes more feasible to utilize ECG for cardiac arrhythmia classification in daily life. In this paper, we propose a lightweight approach to classify five types of cardiac arrhythmia, namely, normal beat (N), atrial premature contraction (A), premature ventricular contraction (V), left bundle branch block beat (L), and right bundle branch block beat (R). A combined method of frequency analysis and Shannon entropy is applied to extract appropriate statistical features. An information gain criterion is employed to select features, and the results show that 10 highly effective features can achieve performance comparable to that obtained using the complete feature set. The selected features are then fed to Random Forest, K-Nearest Neighbour, and J48 classifiers. To evaluate classification performance, tenfold cross validation is used to verify the effectiveness of our method. Experimental results show that the Random Forest classifier performs best, with the highest sensitivity of 98.1%, specificity of 99.5%, precision of 98.1%, and accuracy of 98.08%, outperforming other representative approaches for automated cardiac arrhythmia classification.
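The pipeline described above (frequency-domain and Shannon-entropy features, information-gain style feature selection, and a Random Forest evaluated with tenfold cross validation) can be illustrated with a short sketch. The feature definitions, window length, and placeholder data below are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch of the described pipeline: band-power/Shannon-entropy features,
# information-gain style feature selection, and a Random Forest with tenfold CV.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

def shannon_entropy(power_spectrum):
    """Shannon entropy of a normalized power spectrum (one candidate feature)."""
    p = power_spectrum / (power_spectrum.sum() + 1e-12)
    return -(p * np.log2(p + 1e-12)).sum()

def beat_features(beat):
    """Toy feature vector for one heartbeat: coarse band powers plus spectral entropy."""
    spectrum = np.abs(np.fft.rfft(beat)) ** 2
    band_power = np.array([b.sum() for b in np.array_split(spectrum, 8)])
    return np.append(band_power, shannon_entropy(spectrum))

# X: per-beat feature matrix, y: beat labels in {N, A, V, L, R}; placeholders here.
rng = np.random.default_rng(0)
X = np.vstack([beat_features(rng.standard_normal(256)) for _ in range(500)])
y = rng.integers(0, 5, size=500)

clf = make_pipeline(
    SelectKBest(mutual_info_classif, k=5),        # keep only the most informative features
    RandomForestClassifier(n_estimators=100, random_state=0),
)
print("tenfold CV accuracy:", cross_val_score(clf, X, y, cv=10).mean())
```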

Journal article

Lo FP-W, Sun Y, Qiu J, Lo BPL et al., 2020, Point2Volume: A vision-based dietary assessment approach using view synthesis, IEEE Transactions on Industrial Informatics, Vol: 16, Pages: 577-586, ISSN: 1551-3203

Dietary assessment is an important tool for nutritional epidemiology studies. To assess the dietary intake, the common approach is to carry out 24-h dietary recall (24HR), a structured interview conducted by experienced dietitians. Due to the unconscious biases in such self-reporting methods, many research works have proposed the use of vision-based approaches to provide accurate and objective assessments. In this article, a novel vision-based method based on real-time three-dimensional (3-D) reconstruction and deep learning view synthesis is proposed to enable accurate portion size estimation of food items consumed. A point completion neural network is developed to complete partial point cloud of food items based on a single depth image or video captured from any convenient viewing position. Once 3-D models of food items are reconstructed, the food volume can be estimated through meshing. Compared to previous methods, our method has addressed several major challenges in vision-based dietary assessment, such as view occlusion and scale ambiguity, and it outperforms previous approaches in accurate portion size estimation.
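As a rough illustration of the final step of such a pipeline, once a completed 3-D point cloud of a food item is available, its volume can be estimated from a mesh. The sketch below substitutes a convex-hull mesh (SciPy) and a synthetic point cloud for the paper's point completion network and meshing stage.

```python
# Simplified sketch: estimate portion volume from a (completed) 3-D point cloud.
# A convex-hull mesh replaces the paper's point completion network and meshing.
import numpy as np
from scipy.spatial import ConvexHull

def estimate_volume_litres(points_m):
    """Approximate volume of a point cloud whose coordinates are in metres."""
    hull = ConvexHull(points_m)      # watertight triangulated surface around the points
    return hull.volume * 1000.0      # m^3 -> litres

# Placeholder "completed" cloud: points on a 3 cm-radius sphere (roughly apple-sized).
rng = np.random.default_rng(1)
pts = rng.standard_normal((2000, 3))
pts = 0.03 * pts / np.linalg.norm(pts, axis=1, keepdims=True)
print(f"Estimated volume: {estimate_volume_litres(pts):.3f} L")
```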

Journal article

Chen C-M, Anastasova S, Zhang K, Rosa BG, Lo B, Assender HE, Yang G-Z et al., 2019, Towards Wearable and Flexible Sensors and Circuits Integration for Stress Monitoring, IEEE Journal of Biomedical and Health Informatics

Excessive stress is one of the main causes of mental illness. Long-term exposure to stress can affect one's physiological wellbeing (such as hypertension) and psychological condition (such as depression). Multisensory information such as heart rate variability (HRV) and pH can provide suitable indicators of mental and physical stress. This paper proposes a novel approach for stress condition monitoring using disposable flexible sensors. By integrating flexible amplifiers with a commercially available flexible polyvinylidene difluoride (PVDF) mechanical deformation sensor and a pH-type chemical sensor, the proposed system can detect arterial pulses from the neck and pH levels from sweat on the back. The system uses organic thin film transistor (OTFT)-based signal amplification front-end circuits with modifications to accommodate the dynamic signal ranges obtained from the sensors. The OTFTs were manufactured on a low-cost flexible polyethylene naphthalate (PEN) substrate using a coater capable of Roll-to-Roll (R2R) deposition. The proposed system can capture physiological indicators, allows the data to be interrogated via Near Field Communication (NFC), and has been validated with healthy subjects, demonstrating its application for real-time stress monitoring.

Journal article

Zheng Y, Ghovanloo M, Lo BPL, Atef M, Jiang H et al., 2019, Introduction to the special issue on wearable and flexible integrated sensors for screening, diagnostics, and treatment, IEEE Transactions on Biomedical Circuits and Systems, Vol: 13, Pages: 1300-1303, ISSN: 1932-4545

The papers in this special issue present a selection of high-quality research on wearable and flexible integrated sensors for screening, diagnostics, and treatment. Emerging flexible and wearable physical sensing devices create huge potential for many vital healthcare and biomedical applications, including artificial electronic skins, physiological monitoring and assessment systems, and therapeutic and drug delivery platforms. Monitoring of vital physiological parameters in hospital and/or home environments has been of tremendous interest to healthcare practitioners for a long time. Robust and reliable sensors with excellent flexibility and stretchability are essential in the development of pervasive health monitoring systems capable of continuously tracking physiological signals of the human body without conspicuous discomfort and invasiveness.

Journal article

Lo B, Zhang Y, Inan OT, Ellul J et al., 2019, Guest editorial: special issue on pervasive sensing and machine learning for mental health, IEEE Journal of Biomedical and Health Informatics, Vol: 23, Pages: 2245-2246, ISSN: 2168-2194

The seven papers in this special section focus on machine learning applications for mental health. Mental health is one of the major global health issues, affecting substantially more people than other noncommunicable diseases. Much research has focused on developing novel technologies to tackle this global health challenge, including advanced analytical techniques based on extensive datasets and multimodal acquisition for early detection and treatment of mental illnesses. The papers in this issue cover topics on technological advancements for mental health care and diagnosis, with a focus on pervasive sensing and machine learning.

Journal article

Berthelot M, Ashcroft J, Boshier P, Henry FP, Hunter J, Lo B, Yang G-Z, Leff D et al., 2019, Use of near infrared spectroscopy and implantable Doppler for postoperative monitoring of free tissue transfer for breast reconstruction: a systematic review and meta-analysis, Plastic and Reconstructive Surgery Global Open, Vol: 7, Pages: 1-8, ISSN: 2169-7574

Background: Failure to accurately assess the perfusion of free tissue transfer (FTT) in the early postoperative period may contribute to failure, which is a source of major patient morbidity and healthcare costs. Goal: This systematic review and meta-analysis aims to evaluate and appraise current evidence for the use of near infrared spectroscopy (NIRS) and/or implantable Doppler (ID) devices compared with conventional clinical assessment (CCA) for postoperative monitoring of FTT in reconstructive breast surgery. Methods: A systematic literature search was performed in accordance with the PRISMA guidelines. Studies in human subjects published within the last decade relevant to the review question were identified. Meta-analysis using random effects models of FTT failure rate and STARD scoring were then performed on the retrieved publications. Results: 19 studies met the inclusion criteria. For NIRS and ID, the mean sensitivity for the detection of FTT failure is 99.36% and 100% respectively, with average specificity of 99.36% and 97.63% respectively. From studies with sufficient reported data, meta-analysis results demonstrated that both NIRS (OR = 0.09 [0.02, 0.36], P < 0.001) and ID (OR = 0.39 [0.27, 0.95], P = 0.04) were associated with significant reduction of FTT failure rates compared to CCA. Conclusion: The use of ID and NIRS provides equivalent outcomes in detecting FTT failure and was superior to CCA. The ability to acquire continuous objective physiological data regarding tissue perfusion is a perceived advantage of these techniques. Reduced clinical staff workload and minimised hospital costs are also perceived as positive consequences of their use.

Journal article

Chen C-M, Kwasnicki RM, Curto VF, Yang G-Z, Lo BPL et al., 2019, Tissue oxygenation sensor and an active in vitro phantom for sensor validation, IEEE Sensors Journal, Vol: 19, Pages: 8233-8240, ISSN: 1530-437X

A free flap is a tissue reconstruction procedure where healthy tissue is harvested to cover up vital structures after wound debridement. Microvascular anastomoses are carried out to join the arteries and veins of the flap with recipient vessels near the target site. Continuous monitoring is required to identify flap failure and enable early intervention to salvage the flap. Although there are medical instruments that can assist surgeons in monitoring flap viability, high upfront costs and time-consuming data interpretation have hindered the use of such technologies in practice. Surgeons still rely largely on clinical examination to monitor flaps after operations. This paper presents a low-cost, low-power (6.6 mW), and miniaturized Hamlyn StO2 (tissue oxygen saturation) sensor that can be embodied as a plaster and attached to a flap for real-time monitoring. Similar to the design of oxygen saturation (SpO2/SaO2) sensors, the Hamlyn StO2 sensor was designed based on photoplethysmography (PPG), but with a different target of quantifying tissue perfusion rather than capturing pulsatile flow. To understand the spectral response to oxygenation/deoxygenation and vascular flow, an active in vitro silicone phantom was developed. The new sensor was validated using the silicone phantom and compared with a commercially available photospectroscopy and laser Doppler machine (O2C, LEA, Germany). In addition, in vivo experiments were conducted using a brachial pressure cuff forearm ischemia model. The experimental results show a high correlation between the proposed sensor and the O2C machine (r = 0.672 and p < 0.001), demonstrating the potential value of the proposed low-cost sensor in post-operative free flap monitoring.

Journal article

Guo Y, Sun M, Lo FPW, Lo B et al., 2019, Visual guidance and automatic control for robotic personalized stent graft manufacturing, 2019 International Conference on Robotics and Automation (ICRA), Publisher: IEEE, Pages: 8740-8746

Personalized stent grafts are designed to treat Abdominal Aortic Aneurysms (AAA). Due to individual differences in arterial structures, a stent graft has to be custom made for each AAA patient. Robotic platforms for autonomous personalized stent graft manufacturing have recently been proposed which rely upon stereo vision systems to coordinate multiple robots in fabricating customized stent grafts. This paper proposes a novel hybrid vision system for real-time visual servoing for personalized stent graft manufacturing. To coordinate the robotic arms, this system projects a dynamic stereo microscope coordinate system onto a static wide-angle stereo webcam coordinate system. The multiple stereo camera configuration enables accurate localization of the needle in 3D during the sewing process. The scale-invariant feature transform (SIFT) method and color filtering are implemented for stereo matching and feature identification for object localization. To maintain a clear view of the sewing process, a visual servoing system is developed to guide the stereo microscopes in tracking the needle movements. The deep deterministic policy gradient (DDPG) reinforcement learning algorithm is developed for real-time intelligent robotic control. Experimental results have shown that the robotic arm can learn to reach the desired targets autonomously.
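The SIFT-based stereo matching step mentioned in the abstract can be sketched with OpenCV using Lowe's ratio test to keep reliable correspondences; the synthetic image pair and the 0.75 ratio threshold below are placeholders, not the paper's setup.

```python
# Illustrative OpenCV sketch of SIFT stereo feature matching with Lowe's ratio test,
# the kind of correspondence search used before triangulating a needle position.
import cv2
import numpy as np

rng = np.random.default_rng(0)
noise = (rng.random((480, 640)) * 255).astype(np.uint8)
left = cv2.GaussianBlur(noise, (0, 0), 3)       # blob-like texture so SIFT finds keypoints
right = np.roll(left, 5, axis=1)                # crude horizontal disparity

sift = cv2.SIFT_create()
kp_l, des_l = sift.detectAndCompute(left, None)
kp_r, des_r = sift.detectAndCompute(right, None)

matches = cv2.BFMatcher().knnMatch(des_l, des_r, k=2)
good = []
for pair in matches:
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:   # ratio test
        good.append(pair[0])
print(f"{len(good)} reliable correspondences available for triangulation")
```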

Conference paper

Zhang K, Chen C-M, Anastasova S, Gil B, Lo B, Assender H et al., 2019, Roll-to-roll processable OTFT-based amplifier and application for pH sensing, IEEE 16th International Conference on Wearable and Implantable Body Sensor Networks (BSN), Publisher: IEEE, ISSN: 2376-8886

The prospect of roll-to-roll (R2R) processable Organic Thin Film Transistors (OTFTs) and circuits has attracted attention due to their mechanical flexibility and low cost of manufacture. This work presents a flexible electronics application for pH sensing with flexible and wearable signal processing circuits. A transimpedance amplifier was designed and fabricated on a polyethylene naphthalate (PEN) substrate prototype sheet consisting of 54 transistors. Different types and current ratios of current mirrors were initially created, and a simple 1:3 current mirror (200 nA) was then selected as providing the best performance for the proposed OTFT-based transimpedance amplifier (TIA). Finally, the transimpedance amplifier was connected to a customized needle-based pH sensor, used as a microfluidic collector, for potential disease diagnosis and healthcare monitoring.

Conference paper

Lo FP-W, Sun Y, Qiu J, Lo B et al., 2019, A novel vision-based approach for dietary assessment using deep learning view synthesis, IEEE 16th International Conference on Wearable and Implantable Body Sensor Networks (BSN), Publisher: IEEE, ISSN: 2376-8886

Dietary assessment systems have proven to be effective tools for evaluating the eating behavior of patients suffering from diabetes and obesity. To assess dietary intake, the traditional method is to carry out a 24-hour dietary recall (24HR), a structured interview aimed at capturing information on food items and portion sizes consumed by participants. However, this self-reporting technique is prone to unconscious biases arising from individuals' subjective perception, which may lead to inaccuracy. Thus, this paper proposes a novel vision-based approach for estimating the volume of food items based on deep learning view synthesis and depth sensing techniques. A point completion network is applied to perform 3D reconstruction of food items from a single depth image captured from any convenient viewing angle. Compared to previous approaches, the proposed method addresses several key challenges in vision-based dietary assessment, such as view occlusion and scale ambiguity. Experiments have been carried out to examine this approach and show the feasibility of the algorithm for accurate estimation of food volume.

Conference paper

Qiu J, Lo FP-W, Lo B, 2019, Assessing individual dietary intake in food sharing scenarios with a 360 camera and deep learning, IEEE 16th International Conference on Wearable and Implantable Body Sensor Networks (BSN), Publisher: IEEE, ISSN: 2376-8886

A novel vision-based approach for estimating individual dietary intake in food sharing scenarios is proposed in this paper, which incorporates food detection, face recognition and hand tracking techniques. The method is validated using panoramic videos which capture subjects' eating episodes. The results demonstrate that the proposed approach is able to reliably estimate food intake of each individual as well as the food eating sequence. To identify the food items ingested by the subject, a transfer learning approach is designed. 4,200 food images with segmentation masks, among which 1,500 are newly annotated, are used to fine-tune the deep neural network for the targeted food intake application. In addition, a method for associating detected hands with subjects is developed and the outcomes of face recognition are refined to enable the quantification of individual dietary intake in communal eating settings.

Conference paper

Chen S, Kang L, Lu Y, Wang N, Lu Y, Lo B, Yang G-Z et al., 2019, Discriminative information added by wearable sensors for early screening - a case study on diabetic peripheral neuropathy, IEEE 16th International Conference on Wearable and Implantable Body Sensor Networks (BSN), Publisher: IEEE, Pages: 1-4, ISSN: 2376-8886

Wearable inertial sensors have demonstrated their potential to screen for various neuropathies and neurological disorders. Most such research has been based on classification algorithms that differentiate the control group from the pathological group, using biomarkers extracted from wearable data as predictors. However, such methods often lack quantitative evaluation of how much the information provided by the wearable biomarkers contributes to the overall prediction. Despite promising results from internal cross validation, their utility in clinical practice remains unclear. In this paper, we highlight, in a case study on early screening for diabetic peripheral neuropathy (DPN), evaluation methods for quantifying the contribution of wearable inertial sensors. Using a quick-to-deploy wearable sensor system, we collected gait data from 106 in-hospital diabetic patients and developed logistic regression models to predict the risk of a diabetic patient having DPN. Adopting various metrics, we evaluated the discriminative information added by the gait biomarkers and how much it improved screening. The results show that the proposed wearable system added significant useful information to the existing clinical standards, boosting the C-index from 0.75 to 0.84 and surpassing the current survey-based screening methods used in clinics.
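The evaluation idea, comparing a clinical-only logistic regression against one augmented with gait biomarkers via the C-index (equivalent to ROC AUC for a binary outcome), might look roughly like the following sketch on synthetic data; the variable names, feature counts and effect sizes are illustrative only.

```python
# Synthetic-data sketch: C-index of a clinical-only logistic regression versus one
# augmented with gait biomarkers. Not the study's variables or data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 106
clinical = rng.standard_normal((n, 3))            # e.g. age, diabetes duration, HbA1c
gait = rng.standard_normal((n, 4))                # wearable-derived gait biomarkers
logit = 0.8 * clinical[:, 0] + 1.2 * gait[:, 0]   # gait carries extra predictive signal
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

Xc_tr, Xc_te, Xg_tr, Xg_te, y_tr, y_te = train_test_split(
    clinical, np.hstack([clinical, gait]), y, test_size=0.3, random_state=0, stratify=y)

for name, (tr, te) in {"clinical only": (Xc_tr, Xc_te),
                       "clinical + gait": (Xg_tr, Xg_te)}.items():
    model = LogisticRegression().fit(tr, y_tr)
    c_index = roc_auc_score(y_te, model.predict_proba(te)[:, 1])
    print(f"{name}: C-index = {c_index:.2f}")
```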

Conference paper

Rosa BG, Anastasova-Ivanova S, Lo B, Yang GZ et al., 2019, Towards a fully automatic food intake recognition system using acoustic, image capturing and glucose measurements, IEEE 16th International Conference on Wearable and Implantable Body Sensor Networks (BSN), Publisher: IEEE, ISSN: 2376-8886

Food intake is a major healthcare issue in developed countries that has become an economic and social burden across all sectors of society. Poor food intake habits lead to an increased risk of obesity in children, young people and adults, with the latter more prone to diseases such as diabetes, shortening life expectancy. Environmental, cultural and behavioural factors have been identified as responsible for altering the balance between energy intake and expenditure, resulting in excess body weight. Methods to counteract the food intake problem are numerous and include self-reported food questionnaires, body-worn sensors that record sound, pressure or movements in the mouth and GI tract, and image-based approaches that recognize the different types of food being ingested. In this paper we present an ear-worn device to track food intake habits by recording the acoustic signal produced by chewing movements as well as the glucose level measured amperometrically. Combined with a small camera in a future version of the device, we hope to deliver a complete system for managing dietary habits, with caloric intake estimation during satiation and deficit during satiety periods, which can be adapted to the physiology of each user.

Conference paper

Sun Y, Lo FP-W, Lo B, 2019, A deep learning approach on gender and age recognition using a single inertial sensor, IEEE 16th International Conference on Wearable and Implantable Body Sensor Networks (BSN), Publisher: IEEE, ISSN: 2376-8886

Extracting human attributes, such as gender and age, from biometrics has received much attention in recent years. Gender and age recognition can provide crucial information for applications such as security, healthcare, and gaming. In this paper, a novel deep learning approach for gender and age recognition using a single inertial sensor is proposed. The proposed approach is tested using the largest available inertial sensor-based gait database, with data collected from more than 700 subjects. To demonstrate the robustness and effectiveness of the proposed approach, 10 trials of inter-subject Monte-Carlo cross validation were conducted. The results show that the proposed approach can achieve an average accuracy of 86.6%±2.4% for distinguishing two age groups, teen and adult, and can recognize gender with average accuracies of 88.6%±2.5% and 73.9%±2.8% for adults and teens respectively.

Conference paper

Sun Y, Lo FPW, Lo B, 2019, EEG-based user identification system using 1D-convolutional long short-term memory neural networks, Expert Systems with Applications, Vol: 125, Pages: 259-267, ISSN: 0957-4174

Electroencephalographic (EEG) signals have been widely used in medical applications, yet the use of EEG signals for user identification in healthcare and Internet of Things (IoT) systems has only gained interest in the last few years. The advantages of EEG-based user identification systems lie in the dynamic nature of EEG signals and their uniqueness among different individuals. However, for the same reason, manually designed features are not always well suited to the task. Therefore, a novel approach based on a 1D Convolutional Long Short-term Memory Neural Network (1D-Convolutional LSTM) for EEG-based user identification is proposed in this paper. The performance of the proposed approach was validated with a public database consisting of EEG data from 109 subjects. The experimental results show that the proposed network achieves a very high average accuracy of 99.58% when using only 16 channels of EEG signals, outperforming the state-of-the-art EEG-based user identification methods. The combined use of CNNs and LSTMs in the proposed 1D-Convolutional LSTM can greatly improve the accuracy of user identification systems by exploiting the spatiotemporal features of the EEG signals with the LSTM, and lower the cost of the systems by reducing the number of EEG electrodes used.
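A minimal Keras sketch of a 1D-convolutional LSTM of the kind described, where 1D convolutions extract local features from 16-channel EEG windows and an LSTM models temporal structure before a softmax over subject identities, is shown below; layer sizes and window length are assumptions, not the paper's configuration.

```python
# Minimal Keras sketch of a 1D-convolutional LSTM for EEG-based user identification.
from tensorflow.keras import layers, models

n_channels, window_len, n_subjects = 16, 160, 109   # 109 subjects in the public dataset

model = models.Sequential([
    layers.Input(shape=(window_len, n_channels)),
    layers.Conv1D(32, kernel_size=5, padding="same", activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, kernel_size=5, padding="same", activation="relu"),
    layers.MaxPooling1D(2),
    layers.LSTM(64),                                  # temporal modelling of the window
    layers.Dense(n_subjects, activation="softmax"),   # one class per user identity
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(eeg_windows, subject_ids, ...) would follow with real windowed EEG data.
```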

Journal article

Lo FPW, Sun Y, Lo B, 2019, Depth estimation based on a single close-up image with volumetric annotations in the wild: A pilot study, Pages: 513-518

A novel depth estimation technique based on a single close-up image is proposed in this paper for better understanding of the geometry of an unknown scene. Previous works focus mainly on depth estimation from global view information. Our technique, which is designed based on a deep neural network framework, utilizes monocular color images with volumetric annotations to train a two-stage neural network to estimate the depth information from close-up images. RGBVOL, a database of RGB images with volumetric annotations, has also been constructed by our group to validate the proposed methodology. Compared to previous depth estimation techniques, our method improves the accuracy of depth estimation under the condition that global cues of the scene are not available due to viewing angle and distance constraints.

Conference paper

Berthelot M, Henry FP, Hunter J, Leff D, Wood S, Jallali N, Dex E, Ladislava L, Lo B, Yang GZ et al., 2019, Pervasive wearable device for free tissue transfer monitoring based on advanced data analysis: clinical study report, Journal of Biomedical Optics, Vol: 24, Pages: 067001-1-067001-8, ISSN: 1083-3668

Free tissue transfer (FTT) surgery for breast reconstruction following mastectomy has become a routine operation with high success rates. Although failure is low, it can have a devastating impact on patient recovery, prognosis and psychological well-being. Continuous and objective monitoring of tissue oxygen saturation (StO2) has been shown to reduce failure rates through rapid detection of postoperative vascular complications. We have developed a pervasive wearable wireless device that employs near infrared spectroscopy (NIRS) to continuously monitor FTT via StO2 measurement. Previously tested on different models, this paper introduces the results of a clinical study. The goal of the study is to demonstrate that the developed device can reliably detect StO2 variations in a clinical setting: 14 patients were recruited. Advanced data analyses were performed on the StO2 variations, the relative StO2 gradient change, and the classification of the StO2 within different clusters of blood occlusion level (from 0% to 100% in 25% steps) based on previous studies made on a vascular phantom and animals. The outcomes of the clinical study concur with previous experimental results and the expected biological responses. This suggests the device is able to correctly detect perfusion changes and provide real-time assessment of the viability of the FTT in a clinical setting.

Journal article

McCrory M, Sun M, Sazonov E, Frost G, Anderson A, Jia W, Jobarteh ML, Maitland K, Steiner-Asiedu M, Ghosh T, Higgins JA, Baranowski T, Lo B et al., 2019, Methodology for objective, passive, image- and sensor-based assessment of dietary intake, meal-timing, and food-related activity in Ghana and Kenya (P13-028-19), Current Developments in Nutrition, Vol: 3, Pages: 1247-1247, ISSN: 2475-2991

Objectives: Herein we describe a new system we have developed for assessment of dietary intake, meal timing, and food-related activities, adapted for use in low- and middle-income countries. Methods: System components include one or more wearable cameras (the Automatic Ingestion Monitor-2 (AIM), an eyeglasses-mounted wearable chewing sensor and micro-camera; ear-worn camera; the eButton, a camera attached to clothes; and eHat, a camera attached to a visor worn by the mother when feeding infants and toddlers), and custom software for evaluation of dietary intake from food-based images and sensor-detected food intake. General protocol: The primary caregiver of the family uses one or more wearable cameras during all waking hours. The cameras aim directly in front of the participant and capture images every few seconds, thereby providing multiple images of all food-related activities throughout the day. The camera may be temporarily removed for short periods to preserve privacy, such as during bathing and personal care. For analysis, images and sensor signals are processed by the study team in custom software. The images are time-stamped, arranged in chronological order, and linked with sensor-detected eating occasions. The software also incorporates food composition databases of choice such as the West African Foods Database, a Kenyan Foods Database, and the USDA Food Composition Database, allowing for image-based dietary assessment by trained nutritionists. Images can be linked with nutritional analysis and tagged with an activity label (e.g., food shopping, child feeding, cooking, eating). Assessment of food-related activities such as food-shopping, food gathering from gardens, cooking, and feeding of other family members by the primary caregiver can help provide context for dietary intake and additional information to increase accuracy of dietary assessment and analysis of eating behavior. Examples of the latter include assessment of specific ingredients in prepared

Journal article

Sun Y, Lo B, 2019, An artificial neural network framework for gait based biometrics, IEEE Journal of Biomedical and Health Informatics, Vol: 23, Pages: 987-998, ISSN: 2168-2194

As the popularity of wearable and implantable Body Sensor Network (BSN) devices increases, there is a growing concern regarding the data security of such power-constrained miniaturized medical devices. With limited computational power, BSN devices are often not able to provide strong security mechanisms to protect sensitive personal and health information, such as one's physiological data. Consequently, many new methods of securing Wireless Body Area Networks (WBANs) have been proposed recently. One effective solution is the Biometric Cryptosystem (BCS) approach. BCS exploits physiological and behavioral biometric traits, including face, iris, fingerprints, Electrocardiogram (ECG), and Photoplethysmography (PPG). In this paper, we propose a new BCS approach for securing wireless communications for wearable and implantable healthcare devices using gait signal energy variations and an Artificial Neural Network (ANN) framework. By simultaneously extracting similar features from BSN sensors using our approach, binary keys can be generated on demand without user intervention. Through an extensive analysis on our BCS approach using a gait dataset, the results have shown that the binary keys generated using our approach have high entropy for all subjects. The keys can pass both NIST and Dieharder statistical tests with high efficiency. The experimental results also show the robustness of the proposed approach in terms of the similarity of intra-class keys and the discriminability of the inter-class keys.
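A much-simplified sketch of the key-generation intuition, two on-body sensors observing the same gait quantizing per-cycle signal energy into matching bits, is given below; it stands in for, and does not reproduce, the paper's BCS/ANN pipeline.

```python
# Simplified stand-in: each sensor quantizes its own per-gait-cycle signal energy
# against the mean cycle energy; sensors seeing the same gait should derive (nearly)
# the same bits without exchanging them.
import numpy as np

def gait_key_bits(accel_mag, cycle_len=100):
    """One key bit per gait cycle: 1 if the cycle's energy exceeds the mean cycle energy."""
    n_cycles = len(accel_mag) // cycle_len
    cycles = accel_mag[:n_cycles * cycle_len].reshape(n_cycles, cycle_len)
    energy = (cycles ** 2).sum(axis=1)
    return (energy > energy.mean()).astype(np.uint8)

rng = np.random.default_rng(0)
gait = np.sin(np.linspace(0, 40 * np.pi, 2000)) + 0.1 * rng.standard_normal(2000)
bits_a = gait_key_bits(gait + 0.05 * rng.standard_normal(2000))   # sensor A's observation
bits_b = gait_key_bits(gait + 0.05 * rng.standard_normal(2000))   # sensor B's observation
print("bit agreement between the two sensors:", (bits_a == bits_b).mean())
```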

Journal article

Singh RK, Varghese RJ, Liu J, Zhang Z, Lo B et al., 2019, A multi-sensor fusion approach for intention detection, Biosystems and Biorobotics, Pages: 454-458

For assistive devices to seamlessly and promptly assist users with activities of daily living (ADL), it is important to understand the user's intention. Current assistive systems are mostly driven by unimodal sensory input which hinders their accuracy and responses. In this paper, we propose a context-aware sensor fusion framework to detect intention for assistive robotic devices which fuses information from a wearable video camera and wearable inertial measurement unit (IMU) sensors. A Naive Bayes classifier is used to predict the intent to move from IMU data and the object classification results from the video data. The proposed approach can achieve an accuracy of 85.2% in detecting movement intention.
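A hedged sketch of the fusion idea, a Naive Bayes classifier over IMU-derived features combined with the object class reported by the wearable camera, could look like this; the features, toy labels and data are illustrative only.

```python
# Illustrative sketch: Naive Bayes fusing IMU features with the camera's object class
# to predict intention to move/reach. Feature meanings and data are assumptions.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
n = 300
imu_feat = rng.standard_normal((n, 6))        # e.g. mean/variance of accel and gyro axes
object_class = rng.integers(0, 4, size=n)     # output of the camera's object classifier
obj_onehot = np.eye(4)[object_class]          # one-hot so it can be fused with IMU features
X = np.hstack([imu_feat, obj_onehot])
# Toy ground truth: intention depends on one IMU feature and on one object type.
y = ((imu_feat[:, 0] + (object_class == 2)) > 0.5).astype(int)

print("5-fold CV accuracy:", cross_val_score(GaussianNB(), X, y, cv=5).mean())
```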

Book chapter

Sun Y, Lo FP-W, Lo B, 2019, Security and Privacy for the Internet of Medical Things Enabled Healthcare Systems: A Survey, IEEE Access, Vol: 7, Pages: 183339-183355, ISSN: 2169-3536

Journal article

Bernstein A, Varghese RJ, Liu J, Zhang Z, Lo B et al., 2019, An Assistive Ankle Joint Exoskeleton for Gait Impairment, Biosystems and Biorobotics, Pages: 658-662

Motor rehabilitation and assistance post-stroke are becoming a major concern for healthcare services with an increasingly aging population. Wearable robots can be a technological solution to support gait rehabilitation and to provide assistance to enable users to carry out activities of daily living independently. To address the need for long-term assistance for stroke survivors suffering from drop foot, this paper proposes a low-cost, assistive ankle joint exoskeleton for gait assistance. The proposed exoskeleton is designed to provide ankle-foot support, thus enabling a normal walking gait. Baseline gait readings were recorded from two force sensors attached to a custom-built shoe insole of the exoskeleton. From our experiments, the average maximum forces during heel-strike (63.95 N) and toe-off (54.84 N) were found, in addition to the average gait cycle period (1.45 s). The timing and force data were used to control the actuation of the tendons of the exoskeleton to prevent the foot from preemptively hitting the ground during swing phase.
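A minimal sketch of threshold-based gait-phase detection from the two insole force sensors is shown below, using fractions of the reported average peak forces (63.95 N heel, 54.84 N toe) as thresholds; the 30% threshold fractions and the "actuate during swing" decision are assumptions for illustration.

```python
# Minimal sketch of gait-phase detection from the two insole force sensors.
HEEL_THRESHOLD = 0.3 * 63.95   # N, fraction of the average heel-strike peak force
TOE_THRESHOLD = 0.3 * 54.84    # N, fraction of the average toe-off peak force

def gait_phase(heel_force_n, toe_force_n):
    """Classify the instantaneous gait phase from the two insole force readings."""
    heel, toe = heel_force_n > HEEL_THRESHOLD, toe_force_n > TOE_THRESHOLD
    if heel and not toe:
        return "heel-strike"
    if toe and not heel:
        return "toe-off"
    if heel and toe:
        return "mid-stance"
    return "swing"   # neither sensor loaded: actuate the tendons for foot clearance

for heel_f, toe_f in [(70.0, 5.0), (40.0, 60.0), (2.0, 1.0)]:
    print((heel_f, toe_f), "->", gait_phase(heel_f, toe_f))
```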

Book chapter

Lo FP-W, Sun Y, Qiu J, Lo B et al., 2018, Food volume estimation based on deep learning view synthesis from a single depth map, Nutrients, Vol: 10, Pages: 1-20, ISSN: 2072-6643

An objective dietary assessment system can help users to understand their dietary behavior and enable targeted interventions to address underlying health problems. To accurately quantify dietary intake, measurement of the portion size or food volume is required. For volume estimation, previous research studies mostly focused on using model-based or stereo-based approaches, which rely on manual intervention or require users to capture multiple frames from different viewing angles, which can be tedious. In this paper, a view synthesis approach based on deep learning is proposed to reconstruct 3D point clouds of food items and estimate the volume from a single depth image. A distinct neural network is designed to use a depth image from one viewing angle to predict another depth image captured from the corresponding opposite viewing angle. The whole 3D point cloud map is then reconstructed by fusing the initial data points with the synthesized points of the object items through the proposed point cloud completion and Iterative Closest Point (ICP) algorithms. Furthermore, a database with depth images of food object items captured from different viewing angles is constructed with image rendering and used to validate the proposed neural network. The methodology is then evaluated by comparing the volume estimated by the synthesized 3D point cloud with the ground truth volume of the object items.

Journal article

Ahmed MR, Zhang Y, Feng Z, Lo B, Inan OT, Liao H et al., 2018, Neuroimaging and machine learning for dementia diagnosis: recent advancements and future prospects, IEEE Reviews in Biomedical Engineering, Vol: 12, Pages: 19-33, ISSN: 1941-1189

Dementia, a chronic and progressive decline of cognitive brain function caused by disease or impairment, is becoming more prevalent due to the aging population. A major challenge in dementia is achieving accurate and timely diagnosis. In recent years, neuroimaging with computer-aided algorithms has made remarkable advances in addressing this challenge. The success of these approaches is mostly attributed to the application of machine learning techniques for neuroimaging. In this review paper, we present a comprehensive survey of automated diagnostic approaches for dementia using medical image analysis and machine learning algorithms published in recent years. Based on a rigorous review of the existing works, we have found that, while most of the studies focused on Alzheimer's disease and recent research has demonstrated reasonable performance, the identification of other types of dementia remains a major challenge. Multimodal imaging analysis and deep learning approaches have shown promising results in the diagnosis of these other types of dementia. The main contributions of this review paper are as follows. 1) Based on the detailed analysis of the existing literature, this paper discusses neuroimaging procedures for dementia diagnosis. 2) It systematically explains the most recent machine learning techniques and, in particular, deep learning approaches for early detection of dementia.

Journal article

Teachasrisaksakul K, Wu L, Yang G-Z, Lo B et al., 2018, Hand Gesture Recognition with Inertial Sensors, 40th International Conference of the IEEE Engineering in Medicine and Biology Society, Publisher: IEEE, Pages: 3517-3520, ISSN: 1557-170X

Dyscalculia is a learning difficulty hindering fundamental arithmetical competence. Children with dyscalculia often have difficulties in engaging with lessons taught using traditional teaching methods. In contrast, an educational game is an attractive alternative. Recent educational studies have shown that gestures can have a positive impact on learning. With the recent development of low-cost wearable sensors, a gesture-based educational game could be used as a tool to improve learning outcomes, particularly for children with dyscalculia. In this paper, two generic gesture recognition methods are proposed for developing an interactive educational game with wearable inertial sensors. The first method is a multilayer perceptron classifier that recognizes hand gestures based on the accelerometer and gyroscope readings. As the gyroscope is more power demanding and not all low-cost wearable devices have one, we simplified the method using a nearest centroid classifier that classifies hand gestures with only the accelerometer readings. The method has been integrated into open-source educational games. Experimental results based on 5 subjects demonstrate the accuracy of inertial sensor based hand gesture recognition, showing that both methods can recognize 15 different hand gestures with accuracy over 93%.
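The simplified accelerometer-only recogniser mentioned above can be sketched with a nearest-centroid classifier over fixed-length accelerometer windows; the windowing, synthetic gesture templates and features below are placeholders rather than the study's setup.

```python
# Placeholder sketch: nearest-centroid classification of flattened 3-axis accelerometer
# windows, standing in for the accelerometer-only gesture recogniser.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import NearestCentroid

rng = np.random.default_rng(0)
n_gestures, reps, window = 15, 20, 64
templates = rng.standard_normal((n_gestures, window * 3))          # one template per gesture
X = np.vstack([t + 0.3 * rng.standard_normal((reps, window * 3)) for t in templates])
y = np.repeat(np.arange(n_gestures), reps)

print("5-fold CV accuracy:", cross_val_score(NearestCentroid(), X, y, cv=5).mean())
```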

Conference paper

Sun Y, Lo B, 2018, Random number generation using inertial measurement unit signals for on-body IoT devices, Living in the Internet of Things: Cybersecurity of the IoT - A PETRAS, IoTUK and IET Event, Publisher: IET

With the increasing popularity of wearable and implantable technologies for medical applications, there is a growing concern about the security and data protection of on-body Internet-of-Things (IoT) devices. As a solution, cryptographic systems are often adopted to encrypt the data, and a Random Number Generator (RNG) is of vital importance to such systems. This paper proposes a new random number generation method for securing on-body IoT devices based on temporal signal variations of the outputs of the Inertial Measurement Units (IMUs) worn by users while walking. As most new wearable and implantable devices have built-in IMUs and walking gait signals can be extracted from these body sensors, this method can be applied and integrated into the cryptographic systems of these new devices. To generate the random numbers, the method divides IMU signals into gait cycles and generates bits by comparing energy differences between the sensor signals in a gait cycle and the averaged IMU signals over multiple gait cycles. The generated bits are then re-indexed in descending order of the absolute values of the associated energy differences to further randomise the data and generate high-entropy random numbers. Two datasets were used in the studies to generate random numbers, which were rigorously tested and passed four well-known randomness test suites, namely NIST-STS, ENT, Dieharder, and RaBiGeTe.
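The bit-generation scheme described above, one bit per gait cycle from the sign of the cycle-energy deviation followed by re-ordering on the magnitude of that deviation, can be sketched as follows; fixed-length windows stand in for properly segmented gait cycles.

```python
# Hedged sketch of the bit-generation scheme with fixed-length windows as "gait cycles".
import numpy as np

def imu_random_bits(imu_signal, cycle_len=100):
    n = len(imu_signal) // cycle_len
    cycles = imu_signal[:n * cycle_len].reshape(n, cycle_len)
    energy = (cycles ** 2).sum(axis=1)
    diff = energy - energy.mean()                 # deviation from the averaged cycle energy
    bits = (diff > 0).astype(np.uint8)
    return bits[np.argsort(-np.abs(diff))]        # re-index by descending |energy difference|

rng = np.random.default_rng(0)
walking = np.sin(np.linspace(0, 60 * np.pi, 3000)) + 0.2 * rng.standard_normal(3000)
print("generated bits:", imu_random_bits(walking))
```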

Conference paper

