Imperial College London

Dr Benny Lo

Faculty of Medicine, Department of Metabolism, Digestion and Reproduction

Visiting Reader
 
 
 

Contact

 

+44 (0)20 7594 0806 · benny.lo · Website

 
 

Location

 

Bessemer Building, South Kensington Campus


Publications


292 results found


Qiu J, Lo FPW, Sun Y, Wang S, Lo B, et al., 2019, Mining discriminative food regions for accurate food recognition, BMVC 2019, Publisher: British Machine Vision Conference

Automatic food recognition is the very first step towards passive dietary monitoring. In this paper, we address the problem of food recognition by mining discriminative food regions. Taking inspiration from Adversarial Erasing, a strategy that progressively discovers discriminative object regions for weakly supervised semantic segmentation, we propose a novel network architecture in which a primary network maintains the base accuracy of classifying an input image, an auxiliary network adversarially mines discriminative food regions, and a region network classifies the resulting mined regions. The global (the original input image) and the local (the mined regions) representations are then integrated for the final prediction. The proposed architecture, denoted PAR-Net, is end-to-end trainable and highlights discriminative regions in an online fashion. In addition, we introduce a new fine-grained food dataset named Sushi-50, which consists of 50 different sushi categories. Extensive experiments have been conducted to evaluate the proposed approach. On the three food datasets chosen (Food-101, Vireo-172, and Sushi-50), our approach performs consistently and achieves state-of-the-art results (top-1 testing accuracy of 90.4%, 90.2%, and 92.0%, respectively) compared with other existing approaches.

Conference paper

Guo Y, Sun M, Lo FPW, Lo B, et al., 2019, Visual guidance and automatic control for robotic personalized stent graft manufacturing, 2019 International Conference on Robotics and Automation (ICRA), Publisher: IEEE, Pages: 8740-8746

Personalized stent grafts are designed to treat Abdominal Aortic Aneurysms (AAA). Due to individual differences in arterial structures, a stent graft has to be custom made for each AAA patient. Robotic platforms for autonomous personalized stent graft manufacturing have recently been proposed which rely upon stereo vision systems for coordinating multiple robots for fabricating customized stent grafts. This paper proposes a novel hybrid vision system for real-time visual servoing for personalized stent graft manufacturing. To coordinate the robotic arms, this system is based on projecting a dynamic stereo microscope coordinate system onto a static wide-angle-view stereo webcam coordinate system. The multiple stereo camera configuration enables accurate localization of the needle in 3D during the sewing process. The scale-invariant feature transform (SIFT) method and color filtering are implemented for stereo matching and feature identification for object localization. To maintain a clear view of the sewing process, a visual servoing system is developed for guiding the stereo microscopes to track the needle movements. The deep deterministic policy gradient (DDPG) reinforcement learning algorithm is developed for real-time intelligent robotic control. Experimental results have shown that the robotic arm can learn to reach the desired targets autonomously.

Conference paper

Zhang K, Chen C-M, Anastasova S, Gil B, Lo B, Assender H, et al., 2019, Roll-to-roll processable OTFT-based amplifier and application for pH sensing, IEEE 16th International Conference on Wearable and Implantable Body Sensor Networks (BSN), Publisher: IEEE, ISSN: 2376-8886

The prospect of roll-to-roll (R2R) processable Organic Thin Film Transistors (OTFTs) and circuits has attracted attention due to their mechanical flexibility and low cost of manufacture. This work presents a flexible electronics application for pH sensing with flexible and wearable signal processing circuits. A transimpedance amplifier was designed and fabricated on a polyethylene naphthalate (PEN) substrate prototype sheet that consists of 54 transistors. Different types and current ratios of current mirrors were initially created, and a simple 1:3 current mirror (200 nA) was then selected to achieve the best performance of the proposed OTFT-based transimpedance amplifier (TIA). Finally, this transimpedance amplifier was connected to a customized needle-based pH sensor, used as a microfluidic collector, for potential disease diagnosis and healthcare monitoring.

Conference paper

Lo FP-W, Sun Y, Qiu J, Lo B, et al., 2019, A novel vision-based approach for dietary assessment using deep learning view synthesis, IEEE 16th International Conference on Wearable and Implantable Body Sensor Networks (BSN), Publisher: IEEE, ISSN: 2376-8886

Dietary assessment systems have proven to be effective tools for evaluating the eating behavior of patients suffering from diabetes and obesity. The traditional method of assessing dietary intake is a 24-hour dietary recall (24HR), a structured interview aimed at capturing information on food items and portion sizes consumed by participants. However, unconscious biases develop easily in this self-reporting technique due to individuals' subjective perception, which may lead to inaccuracy. Thus, this paper proposes a novel vision-based approach for estimating the volume of food items based on deep learning view synthesis and depth sensing techniques. A point completion network is applied to perform 3D reconstruction of food items using a single depth image captured from any convenient viewing angle. Compared to previous approaches, the proposed method addresses several key challenges in vision-based dietary assessment, such as view occlusion and scale ambiguity. Experiments have been carried out to examine this approach and showed the feasibility of the algorithm in accurately estimating food volume.

Conference paper

Qiu J, Lo FP-W, Lo B, 2019, Assessing individual dietary intake in food sharing scenarios with a 360 camera and deep learning, IEEE 16th International Conference on Wearable and Implantable Body Sensor Networks (BSN), Publisher: IEEE, ISSN: 2376-8886

A novel vision-based approach for estimating individual dietary intake in food sharing scenarios is proposed in this paper, which incorporates food detection, face recognition and hand tracking techniques. The method is validated using panoramic videos which capture subjects' eating episodes. The results demonstrate that the proposed approach is able to reliably estimate food intake of each individual as well as the food eating sequence. To identify the food items ingested by the subject, a transfer learning approach is designed. 4,200 food images with segmentation masks, among which 1,500 are newly annotated, are used to fine-tune the deep neural network for the targeted food intake application. In addition, a method for associating detected hands with subjects is developed and the outcomes of face recognition are refined to enable the quantification of individual dietary intake in communal eating settings.

Conference paper

Rosa BG, Anastasova-Ivanova S, Lo B, Yang GZ, et al., 2019, Towards a fully automatic food intake recognition system using acoustic, image capturing and glucose measurements, IEEE 16th International Conference on Wearable and Implantable Body Sensor Networks (BSN), Publisher: IEEE, ISSN: 2376-8886

Food intake is a major healthcare issue in developed countries that has become an economic and social burden across all sectors of society. Bad food intake habits lead to increased risk of obesity in children, young people and adults, with the latter more prone to diseases such as diabetes, shortening life expectancy. Environmental, cultural and behavioural factors have been identified as responsible for altering the balance between energy intake and expenditure, resulting in excess body weight. Methods to counteract the food intake problem are vast and include self-reported food questionnaires, body-worn sensors that record the sound, pressure or movements in the mouth and GI tract, and image-based approaches that recognize the different types of food being ingested. In this paper we present an ear-worn device to track food intake habits by recording the acoustic signal produced by chewing movements as well as the glucose level, measured amperometrically. Combined with a small camera in a future version of the device, we hope to deliver a complete system to control dietary habits, with caloric intake estimation during satiation and deficit estimation during satiety periods, which can be adapted to the physiology of each user.

Conference paper

Chen S, Kang L, Lu Y, Wang N, Lu Y, Lo B, Yang G-Z, et al., 2019, Discriminative information added by wearable sensors for early screening - a case study on diabetic peripheral neuropathy, IEEE 16th International Conference on Wearable and Implantable Body Sensor Networks (BSN), Publisher: IEEE, Pages: 1-4, ISSN: 2376-8886

Wearable inertial sensors have demonstrated their potential to screen for various neuropathies and neurological disorders. Most such research has been based on classification algorithms that differentiate the control group from the pathological group, using biomarkers extracted from wearable data as predictors. However, such methods often lack quantitative evaluation of how much the information provided by the wearable biomarkers contributes to the overall prediction. Despite promising results from internal cross-validation, their utility in clinical practice remains unclear. In this paper, we highlight, in a case study - early screening for diabetic peripheral neuropathy (DPN) - evaluation methods for quantifying the contribution of wearable inertial sensors. Using a quick-to-deploy wearable sensor system, we collected gait data from 106 in-hospital diabetic patients and developed logistic regression models to predict the risk of a diabetic patient having DPN. Adopting various metrics, we evaluated the discriminative information added by gait biomarkers and how much it improved screening. The results show that the proposed wearable system added significant useful information to the existing clinical standards, boosting the C-index from 0.75 to 0.84 and surpassing the current survey-based screening methods used in clinics.

Conference paper
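As background for the C-index figures reported above: for a binary outcome, the concordance index is the fraction of (case, control) pairs in which the case received the higher risk score, with ties counting half, which makes it equivalent to ROC AUC. A minimal sketch (the risk scores and outcomes below are invented for illustration, not the study's data):

```python
import numpy as np

def c_index(risk, outcome):
    """Concordance index for a binary outcome: fraction of (case, control)
    pairs where the case has the higher risk score; ties count 0.5."""
    cases = risk[outcome == 1][:, None]      # risk scores of positives
    controls = risk[outcome == 0][None, :]   # risk scores of negatives
    wins = (cases > controls).sum() + 0.5 * (cases == controls).sum()
    return wins / (cases.shape[0] * controls.shape[1])

risk = np.array([0.9, 0.8, 0.3, 0.2])
outcome = np.array([1, 1, 0, 0])
print(c_index(risk, outcome))  # 1.0: every case outranks every control
```

A model with no discriminative information scores 0.5 on this metric, which is why the reported jump from 0.75 to 0.84 is meaningful.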

Sun Y, Lo FP-W, Lo B, 2019, A deep learning approach on gender and age recognition using a single inertial sensor, IEEE 16th International Conference on Wearable and Implantable Body Sensor Networks (BSN), Publisher: IEEE, ISSN: 2376-8886

Extracting human attributes, such as gender and age, from biometrics has received much attention in recent years. Gender and age recognition can provide crucial information for applications such as security, healthcare, and gaming. In this paper, a novel deep learning approach to gender and age recognition using a single inertial sensor is proposed. The proposed approach is tested using the largest available inertial sensor-based gait database, with data collected from more than 700 subjects. To demonstrate the robustness and effectiveness of the proposed approach, 10 trials of inter-subject Monte-Carlo cross-validation were conducted. The results show that the proposed approach can achieve an average accuracy of 86.6%±2.4% for distinguishing two age groups (teen and adult), and can recognize gender with average accuracies of 88.6%±2.5% and 73.9%±2.8% for adults and teens respectively.

Conference paper

Sun Y, Lo FPW, Lo B, 2019, EEG-based user identification system using 1D-convolutional long short-term memory neural networks, Expert Systems with Applications, Vol: 125, Pages: 259-267, ISSN: 0957-4174

Electroencephalographic (EEG) signals have been widely used in medical applications, yet the use of EEG signals as user identification systems for healthcare and Internet of Things (IoT) systems has only gained interest in the last few years. The advantages of EEG-based user identification systems lie in their dynamic properties and uniqueness among different individuals; however, for the same reason, manually designed features are not always well adapted to the needs of such systems. Therefore, a novel approach based on a 1D Convolutional Long Short-term Memory Neural Network (1D-Convolutional LSTM) for EEG-based user identification is proposed in this paper. The performance of the proposed approach was validated with a public database consisting of EEG data from 109 subjects. The experimental results showed that the proposed network achieves a very high average accuracy of 99.58% when using only 16 channels of EEG signals, which outperforms the state-of-the-art EEG-based user identification methods. The combined use of CNNs and LSTMs in the proposed 1D-Convolutional LSTM can greatly improve the accuracy of user identification systems by exploiting the spatiotemporal features of the EEG signals with the LSTM, and lower the cost of the systems by reducing the number of EEG electrodes used.

Journal article
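The convolution-then-LSTM pattern described above can be sketched in PyTorch as follows; the layer sizes, window length and kernel sizes are illustrative assumptions, not the paper's actual architecture (only the 16-channel input and 109-subject output come from the abstract):

```python
import torch
import torch.nn as nn

class ConvLSTMIdentifier(nn.Module):
    """Sketch of a 1D-convolutional LSTM classifier for EEG-based user
    identification. Layer widths are illustrative placeholders."""
    def __init__(self, n_channels=16, n_subjects=109):
        super().__init__()
        # 1D convolutions extract local spatial/spectral features per time step
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # the LSTM models the temporal dynamics of the convolved features
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, n_subjects)

    def forward(self, x):           # x: (batch, channels, time)
        h = self.conv(x)            # (batch, 32, time/2)
        h = h.permute(0, 2, 1)      # LSTM expects (batch, time, features)
        out, _ = self.lstm(h)
        return self.fc(out[:, -1])  # classify from the last hidden state

model = ConvLSTMIdentifier()
logits = model(torch.randn(4, 16, 160))  # 4 windows, 16 channels, 160 samples
print(logits.shape)  # torch.Size([4, 109])
```

The last hidden state summarizes the whole window, so each EEG segment yields one identity prediction.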

Berthelot M, Henry FP, Hunter J, Leff D, Wood S, Jallali N, Dex E, Ladislava L, Lo B, Yang GZ, et al., 2019, Pervasive wearable device for free tissue transfer monitoring based on advanced data analysis: clinical study report, Journal of Biomedical Optics, Vol: 24, Pages: 067001-1-067001-8, ISSN: 1083-3668

Free tissue transfer (FTT) surgery for breast reconstruction following mastectomy has become a routine operation with high success rates. Although failure is rare, it can have a devastating impact on patient recovery, prognosis and psychological well-being. Continuous and objective monitoring of tissue oxygen saturation (StO2) has been shown to reduce failure rates through rapid detection of postoperative vascular complications. We have developed a pervasive wearable wireless device that employs near-infrared spectroscopy (NIRS) to continuously monitor FTT via StO2 measurement. Previously tested on different models, this paper introduces the results of a clinical study. The goal of the study is to demonstrate that the developed device can reliably detect StO2 variations in a clinical setting: 14 patients were recruited. Advanced data analysis was performed on the StO2 variations, the relative StO2 gradient change, and the classification of the StO2 within different clusters of blood occlusion level (from 0% to 100% in 25% steps) based on previous studies on a vascular phantom and animals. The outcomes of the clinical study concur with previous experimental results and the expected biological responses. This suggests the device is able to correctly detect perfusion changes and provide real-time assessment of the viability of the FTT in a clinical setting.

Journal article

McCrory M, Sun M, Sazonov E, Frost G, Anderson A, Jia W, Jobarteh ML, Maitland K, Steiner-Asiedu M, Ghosh T, Higgins JA, Baranowski T, Lo B, et al., 2019, Methodology for objective, passive, image- and sensor-based assessment of dietary intake, meal-timing, and food-related activity in Ghana and Kenya (P13-028-19), Current Developments in Nutrition, Vol: 3, Pages: 1247-1247, ISSN: 2475-2991

Objectives: Herein we describe a new system we have developed for assessment of dietary intake, meal timing, and food-related activities, adapted for use in low- and middle-income countries. Methods: System components include one or more wearable cameras (the Automatic Ingestion Monitor-2 (AIM), an eyeglasses-mounted wearable chewing sensor and micro-camera; ear-worn camera; the eButton, a camera attached to clothes; and eHat, a camera attached to a visor worn by the mother when feeding infants and toddlers), and custom software for evaluation of dietary intake from food-based images and sensor-detected food intake. General protocol: The primary caregiver of the family uses one or more wearable cameras during all waking hours. The cameras aim directly in front of the participant and capture images every few seconds, thereby providing multiple images of all food-related activities throughout the day. The camera may be temporarily removed for short periods to preserve privacy, such as during bathing and personal care. For analysis, images and sensor signals are processed by the study team in custom software. The images are time-stamped, arranged in chronological order, and linked with sensor-detected eating occasions. The software also incorporates food composition databases of choice such as the West African Foods Database, a Kenyan Foods Database, and the USDA Food Composition Database, allowing for image-based dietary assessment by trained nutritionists. Images can be linked with nutritional analysis and tagged with an activity label (e.g., food shopping, child feeding, cooking, eating). Assessment of food-related activities such as food-shopping, food gathering from gardens, cooking, and feeding of other family members by the primary caregiver can help provide context for dietary intake and additional information to increase accuracy of dietary assessment and analysis of eating behavior. Examples of the latter include assessment of specific ingredients in prepared

Journal article

Sun Y, Lo B, 2019, An artificial neural network framework for gait based biometrics, IEEE Journal of Biomedical and Health Informatics, Vol: 23, Pages: 987-998, ISSN: 2168-2194

As the popularity of wearable and implantable Body Sensor Network (BSN) devices increases, there is a growing concern regarding the data security of such power-constrained miniaturized medical devices. With limited computational power, BSN devices are often not able to provide strong security mechanisms to protect sensitive personal and health information, such as one's physiological data. Consequently, many new methods of securing Wireless Body Area Networks (WBANs) have been proposed recently. One effective solution is the Biometric Cryptosystem (BCS) approach. BCS exploits physiological and behavioral biometric traits, including face, iris, fingerprints, Electrocardiogram (ECG), and Photoplethysmography (PPG). In this paper, we propose a new BCS approach for securing wireless communications for wearable and implantable healthcare devices using gait signal energy variations and an Artificial Neural Network (ANN) framework. By simultaneously extracting similar features from BSN sensors using our approach, binary keys can be generated on demand without user intervention. Through an extensive analysis of our BCS approach using a gait dataset, the results have shown that the binary keys generated using our approach have high entropy for all subjects. The keys can pass both NIST and Dieharder statistical tests with high efficiency. The experimental results also show the robustness of the proposed approach in terms of the similarity of intra-class keys and the discriminability of inter-class keys.

Journal article

Lo FP-W, Sun Y, Lo B, 2019, Depth estimation based on a single close-up image with volumetric annotations in the wild: a pilot study, IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), Publisher: IEEE, Pages: 513-518, ISSN: 2159-6255

A novel depth estimation technique based on a single close-up image is proposed in this paper for better understanding of the geometry of an unknown scene. Previous works focus mainly on depth estimation from global view information. Our technique, which is designed based on a deep neural network framework, utilizes monocular color images with volumetric annotations to train a two-stage neural network to estimate the depth information from close-up images. RGBVOL, a database of RGB images with volumetric annotations, has also been constructed by our group to validate the proposed methodology. Compared to previous depth estimation techniques, our method improves the accuracy of depth estimation under the condition that global cues of the scene are not available due to viewing angle and distance constraints.

Conference paper

Singh RK, Varghese RJ, Liu J, Zhang Z, Lo B, et al., 2019, A multi-sensor fusion approach for intention detection, 4th International Conference on NeuroRehabilitation (ICNR), Publisher: Springer International Publishing AG, Pages: 454-458, ISSN: 2195-3562

For assistive devices to seamlessly and promptly assist users with activities of daily living (ADL), it is important to understand the user's intention. Current assistive systems are mostly driven by unimodal sensory input, which limits their accuracy and responsiveness. In this paper, we propose a context-aware sensor fusion framework to detect intention for assistive robotic devices, which fuses information from a wearable video camera and wearable inertial measurement unit (IMU) sensors. A Naive Bayes classifier is used to predict the intent to move from the IMU data and the object classification results from the video data. The proposed approach can achieve an accuracy of 85.2% in detecting movement intention.

Conference paper
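The Naive Bayes fusion step can be illustrated with a toy model: under the usual conditional-independence assumption, the per-modality likelihoods (an IMU motion cue and a camera object cue) simply multiply into a posterior over intent. All priors and likelihood values below are invented for illustration, not taken from the paper:

```python
import numpy as np

prior = np.array([0.5, 0.5])                # P(intent) for [no-move, move]

# P(observation | intent) per modality; the Naive Bayes step assumes the
# sensors are conditionally independent given the intent
p_imu_given_intent = np.array([0.2, 0.9])   # P(motion detected | intent)
p_cam_given_intent = np.array([0.3, 0.8])   # P(graspable object seen | intent)

def fuse(imu_motion: bool, cam_object: bool):
    """Posterior over [no-move, move] after fusing both binary cues."""
    l_imu = p_imu_given_intent if imu_motion else 1 - p_imu_given_intent
    l_cam = p_cam_given_intent if cam_object else 1 - p_cam_given_intent
    post = prior * l_imu * l_cam            # unnormalized posterior
    return post / post.sum()

print(fuse(True, True))   # both cues present: posterior strongly favors "move"
```

In the paper's setting the camera cue would come from an object classifier rather than a hand-set likelihood table, but the combination rule is the same.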

Rother R, Sun Y, Lo B, 2019, Internet of things based pervasive sensing of psychological anxiety via wearable devices under naturalistic settings

Psychological anxiety is highly prevalent in dementia patients, and reduces the quality of life of the afflicted and their caregivers. Still, not much technological research has been conducted to address this issue in the field. This study aimed to develop a wearable system which could detect anxiety in dementia patients under naturalistic settings, and alert caregivers via a web application. The wearable system designed included an accelerometer, pulse sensor, skin conductivity sensor, and a camera. The readings would be fed into a machine learning model to output an anxiety state prediction. One participant was trialled under both controlled and naturalistic settings. The model achieved classification accuracy of 0.95 under controlled settings and 0.82 under naturalistic settings. Implications of the study include achieving relatively high classification accuracy under naturalistic settings, and the novel discovery of movement as a potential predictor of anxiety states.

Conference paper

Lo FP-W, Sun Y, Qiu J, Lo B, et al., 2018, Food volume estimation based on deep learning view synthesis from a single depth map, Nutrients, Vol: 10, Pages: 1-20, ISSN: 2072-6643

An objective dietary assessment system can help users to understand their dietary behavior and enable targeted interventions to address underlying health problems. To accurately quantify dietary intake, measurement of the portion size or food volume is required. For volume estimation, previous research studies mostly focused on model-based or stereo-based approaches, which rely on manual intervention or require users to capture multiple frames from different viewing angles, which can be tedious. In this paper, a view synthesis approach based on deep learning is proposed to reconstruct 3D point clouds of food items and estimate the volume from a single depth image. A distinct neural network is designed to use a depth image from one viewing angle to predict another depth image captured from the corresponding opposite viewing angle. The whole 3D point cloud map is then reconstructed by fusing the initial data points with the synthesized points of the object items through the proposed point cloud completion and Iterative Closest Point (ICP) algorithms. Furthermore, a database with depth images of food object items captured from different viewing angles is constructed with image rendering and used to validate the proposed neural network. The methodology is then evaluated by comparing the volume estimated from the synthesized 3D point cloud with the ground truth volume of the object items.

Journal article
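Once a completed 3D point cloud of a food item is available, one simple way to turn it into a volume estimate is voxel occupancy counting. This is only a stand-in for the paper's full pipeline (which fuses synthesized points and runs ICP first), and the voxel size is an arbitrary assumption:

```python
import numpy as np

def voxel_volume(points, voxel=0.005):
    """Rough volume estimate (m^3) from a 3D point cloud (metres) by
    quantizing points onto a voxel grid and counting occupied cells."""
    idx = np.floor(points / voxel).astype(int)   # voxel index of each point
    occupied = np.unique(idx, axis=0)            # distinct occupied cells
    return len(occupied) * voxel ** 3            # cells x cell volume

# dense cloud filling a 5 cm cube: true volume is 0.05^3 = 1.25e-4 m^3
rng = np.random.default_rng(1)
cloud = rng.uniform(0.0, 0.05, size=(200_000, 3))
print(voxel_volume(cloud))  # ~1.25e-4
```

The estimate converges to the true volume as the cloud gets denser; too coarse a voxel overestimates, too fine a voxel undercounts sparse regions.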

Ahmed MR, Zhang Y, Feng Z, Lo B, Inan OT, Liao H, et al., 2018, Neuroimaging and machine learning for dementia diagnosis: recent advancements and future prospects, IEEE Reviews in Biomedical Engineering, Vol: 12, Pages: 19-33, ISSN: 1941-1189

Dementia, a chronic and progressive cognitive decline of brain function caused by disease or impairment, is becoming more prevalent due to the aging population. A major challenge in dementia is achieving accurate and timely diagnosis. In recent years, neuroimaging with computer-aided algorithms has made remarkable advances in addressing this challenge. The success of these approaches is mostly attributed to the application of machine learning techniques for neuroimaging. In this review paper, we present a comprehensive survey of automated diagnostic approaches for dementia using medical image analysis and machine learning algorithms published in recent years. Based on a rigorous review of the existing work, we have found that, while most studies have focused on Alzheimer's disease, the identification of other types of dementia remains a major challenge; multimodal imaging analysis and deep learning approaches have shown promising results in diagnosing these other types. The main contributions of this review paper are as follows. 1) Based on a detailed analysis of the existing literature, this paper discusses neuroimaging procedures for dementia diagnosis. 2) It systematically explains the most recent machine learning techniques and, in particular, deep learning approaches for the early detection of dementia.

Journal article

Bernstein A, Varghese RJ, Liu J, Zhang Z, Lo B, et al., 2018, An assistive ankle joint exoskeleton for gait impairment, 4th International Conference on NeuroRehabilitation (ICNR), Publisher: Springer International Publishing AG, Pages: 658-662, ISSN: 2195-3562

Motor rehabilitation and assistance post-stroke are becoming a major concern for healthcare services with an increasingly aging population. Wearable robots can be a technological solution to support gait rehabilitation and to provide assistance to enable users to carry out activities of daily living independently. To address the need for long-term assistance for stroke survivors suffering from drop foot, this paper proposes a low-cost, assistive ankle joint exoskeleton for gait assistance. The proposed exoskeleton is designed to provide ankle foot support, thus enabling a normal walking gait. Baseline gait readings were recorded from two force sensors attached to a custom-built shoe insole of the exoskeleton. From our experiments, the average maximum forces during heel-strike (63.95 N) and toe-off (54.84 N) were found, in addition to the average period of a gait cycle (1.45 s). The timing and force data were used to control the actuation of the tendons of the exoskeleton to prevent the foot from preemptively hitting the ground during the swing phase.

Conference paper

Teachasrisaksakul K, Wu L, Yang G-Z, Lo B, et al., 2018, Hand gesture recognition with inertial sensors, 40th International Conference of the IEEE Engineering in Medicine and Biology Society, Publisher: IEEE, Pages: 3517-3520, ISSN: 1557-170X

Dyscalculia is a learning difficulty hindering fundamental arithmetical competence. Children with dyscalculia often have difficulties in engaging in lessons taught with traditional teaching methods. In contrast, an educational game is an attractive alternative. Recent educational studies have shown that gestures can have a positive impact on learning. With the recent development of low-cost wearable sensors, a gesture-based educational game could be used as a tool to improve learning outcomes, particularly for children with dyscalculia. In this paper, two generic gesture recognition methods are proposed for developing an interactive educational game with wearable inertial sensors. The first method is a multilayer perceptron classifier based on the accelerometer and gyroscope readings to recognize hand gestures. As gyroscopes are more power demanding and not all low-cost wearable devices have one, we simplified the method using a nearest centroid classifier for classifying hand gestures with only the accelerometer readings. The methods have been integrated into open-source educational games. Experimental results based on 5 subjects have demonstrated the accuracy of inertial sensor based hand gesture recognition: both methods can recognize 15 different hand gestures with accuracy over 93%.

Conference paper
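The simplified accelerometer-only method above is a nearest centroid classifier, which fits in a few lines: store the mean feature vector per gesture class, then assign new samples to the closest centroid. The two-dimensional toy features below are placeholders for real accelerometer-derived features:

```python
import numpy as np

class NearestCentroidGestures:
    """Minimal nearest-centroid classifier, a sketch of the simplified
    accelerometer-only gesture recognizer described above."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        # one centroid per gesture class: the mean training feature vector
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # Euclidean distance from every sample to every centroid
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]  # closest centroid wins

X = np.array([[0.0, 0.1], [0.1, 0.0], [1.0, 1.1], [1.1, 1.0]])
y = np.array([0, 0, 1, 1])
clf = NearestCentroidGestures().fit(X, y)
preds = clf.predict(np.array([[0.05, 0.05], [1.05, 1.05]]))
print(preds)  # [0 1]
```

Compared with a multilayer perceptron, this needs no training beyond averaging, which is why it suits low-power wearables.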

Sun Y, Lo B, 2018, Random number generation using inertial measurement unit signals for on-body IoT devices, Living in the Internet of Things: Cybersecurity of the IoT - A PETRAS, IoTUK and IET Event, Publisher: IET

With the increasing popularity of wearable and implantable technologies for medical applications, there is a growing concern about the security and data protection of on-body Internet-of-Things (IoT) devices. As a solution, cryptographic systems are often adopted to encrypt the data, and a Random Number Generator (RNG) is of vital importance to such systems. This paper proposes a new random number generation method for securing on-body IoT devices based on temporal signal variations of the outputs of the Inertial Measurement Units (IMUs) worn by users while walking. As most new wearable and implantable devices have built-in IMUs and walking gait signals can be extracted from these body sensors, this method can be applied and integrated into the cryptographic systems of these new devices. To generate random numbers, the method divides IMU signals into gait cycles and generates bits by comparing energy differences between the sensor signals in a gait cycle and the averaged IMU signals over multiple gait cycles. The generated bits are then re-indexed in descending order of the absolute values of the associated energy differences to further randomise the data and generate high-entropy random numbers. Two datasets were used to generate random numbers, which were rigorously tested against and passed four well-known randomness test suites, namely NIST-STS, ENT, Dieharder, and RaBiGeTe.

Conference paper
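The bit-generation scheme described above (cycle segmentation, energy comparison against the average cycle, re-indexing by the magnitude of the energy difference) can be sketched as follows. The fixed cycle length and synthetic signal are assumptions for illustration; the paper segments real gait cycles from IMU data:

```python
import numpy as np

def gait_random_bits(signal, cycle_len):
    """Sketch of IMU-based bit generation: one bit per gait cycle,
    reordered by |energy difference| to raise entropy."""
    n_cycles = len(signal) // cycle_len
    cycles = signal[:n_cycles * cycle_len].reshape(n_cycles, cycle_len)
    energies = (cycles ** 2).sum(axis=1)    # per-cycle signal energy
    diffs = energies - energies.mean()      # difference vs. average cycle
    bits = (diffs > 0).astype(int)          # 1 if above-average energy
    order = np.argsort(-np.abs(diffs))      # re-index: descending |difference|
    return bits[order]

rng = np.random.default_rng(0)
bits = gait_random_bits(rng.standard_normal(1000), cycle_len=100)
print(len(bits))  # 10 cycles -> 10 bits
```

In practice the bitstream would then be fed to the randomness test suites named above (NIST-STS, ENT, Dieharder, RaBiGeTe) before use as key material.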

Berthelot M, Lo B, Yang G-Z, Leff D, et al., 2018, Pilot study: free flap monitoring using a new tissue oxygen saturation (StO2) device, European Journal of Surgical Oncology, Vol: 44, Pages: 900-900, ISSN: 0748-7983

Journal article

Guo Y, Zhang Y, Mursalin M, Xu W, Lo BPL, et al., 2018, Automated epileptic seizure detection by analyzing wearable EEG signals using extended correlation-based feature selection, IEEE BSN 2018, Publisher: IEEE, Pages: 66-69

Electroencephalogram (EEG), which measures the electrical activity of the brain, has been widely employed for diagnosing epilepsy, a type of brain abnormality. With the advancement of low-cost wearable brain-computer interface devices, it is possible to monitor EEG for epileptic seizure detection in daily use. However, it is still challenging to develop seizure classification algorithms with considerably higher accuracy and lower complexity. In this study, we propose a lightweight method which can reduce the number of features for multiclass classification to identify three different seizure statuses (i.e., healthy, interictal and epileptic seizure) from EEG signals acquired with wearable EEG sensors, using Extended Correlation-Based Feature Selection (ECFS). There are three steps in our proposed approach. Firstly, the EEG signals were segmented into five frequency bands. Secondly, we extracted the features and eliminated the unnecessary feature space using the proposed ECFS method. Finally, the features were fed into five different classification algorithms: Random Forest, Support Vector Machine, Logistic Model Trees, RBF Network and Multilayer Perceptron. Experimental results have shown that Logistic Model Trees provide the highest accuracy of 97.6% compared to the other classifiers.

Conference paper

Gu X, Deligianni F, Lo B, Chen W, Yang G et al., 2018, Markerless gait analysis based on a single RGB camera, International Conference on Wearable and Implantable Body Sensor Networks, Publisher: IEEE, ISSN: 2376-8894

Gait analysis is an important tool for monitoring and preventing injuries, as well as for quantifying functional decline in neurological diseases and in elderly people. In most cases, it is more meaningful to monitor patients in natural living environments with low-end equipment such as cameras and wearable sensors. However, inertial sensors cannot provide enough detail on angular dynamics. This paper presents a method that uses a single RGB camera to track the 2D joint coordinates with state-of-the-art vision algorithms. Reconstruction of the 3D trajectories uses a sparse representation of an active shape model. Subsequently, we extract gait features and validate our results against a state-of-the-art commercial multi-camera tracking system. Our results are comparable to those in the current literature based on depth cameras and optical markers for extracting gait characteristics.
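Once joint coordinates are tracked, gait features can be derived from them geometrically. As a minimal, hedged example (not the paper's pipeline, and with illustrative coordinates), a knee flexion angle can be computed from hip, knee and ankle positions:

```python
import numpy as np

def joint_angle(hip, knee, ankle):
    """Angle (degrees) at the knee formed by the hip-knee and
    ankle-knee segments; a simple gait feature derived from
    tracked joint coordinates."""
    a = np.asarray(hip, dtype=float) - np.asarray(knee, dtype=float)
    b = np.asarray(ankle, dtype=float) - np.asarray(knee, dtype=float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
```

A fully extended leg gives an angle near 180 degrees; flexion reduces it. The same formula applies to 2D or 3D coordinates.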

Conference paper

Berthelot M, Yang G-Z, Lo B, 2018, Tomographic probe for perfusion analysis in deep layer tissue, 15th International Conference on Biomedical and Health Informatics (BHI) and Wearable and Implantable Body Sensor Networks (BSN) of the IEEE-Engineering-in-Medicine-and-Biology-Society, Publisher: IEEE, Pages: 86-89, ISSN: 2376-8886

Continuous postoperative monitoring of buried soft tissue free flaps is crucial to detect flap failure and enable early intervention. In this case, clinical assessment is challenging as the flap is buried, and only implantable or handheld devices can be used for regular monitoring. These devices have limitations in their price, usability and specificity. Near-infrared spectroscopy (NIRS) has shown promising results for superficial free flap postoperative monitoring, but it has not been considered for buried free flaps, mainly due to the limited penetration depth of conventional approaches. A wearable wireless tomographic probe has been developed for continuous monitoring of tissue perfusion at different depths. Using the NIRS method, blood flow can be continuously measured at different tissue depths. The device was designed following conclusions from extensive computerised simulations and has been validated using a vascular phantom.

Conference paper

Lo BPL, 2018, Innovative sensing technologies for developing countries, IEEE Biomedical and Health Informatics BHI 2018

Conference paper

Friedl KE, Hixson JD, Buller MJ, Lo B et al., 2018, Guest editorial - 13th Body Sensor Networks Symposium, IEEE Journal of Biomedical and Health Informatics, Vol: 22, Pages: 3-4, ISSN: 2168-2194

Journal article

Gao A, Lo P, Lo B, 2017, Food volume estimation for quantifying dietary intake with a wearable camera, Body Sensor Networks Conference 2018, Publisher: IEEE

A novel food volume measurement technique is proposed in this paper for accurate quantification of the daily dietary intake of the user. The technique is based on simultaneous localisation and mapping (SLAM), a modified version of the convex hull algorithm, and a 3D mesh object reconstruction technique. This paper explores the feasibility of applying SLAM techniques for continuous food volume measurement with a monocular wearable camera. A sparse map is generated by SLAM after capturing images of the food item with the camera, and the multiple convex hull algorithm is applied to form a 3D mesh object. The volume of the target object can then be computed from the mesh object. Compared with previous volume measurement techniques, the proposed method can measure food volume continuously with no prior information such as a pre-defined food shape model. Experiments have been carried out to evaluate this new technique and demonstrated the feasibility and accuracy of the proposed algorithm in measuring food volume.
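The final step, computing a volume from a 3D mesh object, can be illustrated with the standard signed-tetrahedra (divergence theorem) formula for a closed triangular mesh. This is a hedged sketch with a synthetic unit-cube mesh, not the paper's reconstruction pipeline:

```python
import numpy as np

def mesh_volume(vertices, faces):
    """Volume of a closed triangular mesh via the divergence theorem:
    sum the signed volumes of tetrahedra formed by each face and the
    origin. Assumes faces are consistently oriented (outward normals)."""
    v = np.asarray(vertices, dtype=float)
    total = 0.0
    for i, j, k in faces:
        total += np.dot(v[i], np.cross(v[j], v[k])) / 6.0
    return abs(total)

# Synthetic example: a unit cube as 8 vertices and 12 outward-oriented triangles.
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
         (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
faces = [(0, 2, 1), (0, 3, 2),   # bottom (z = 0)
         (4, 5, 6), (4, 6, 7),   # top (z = 1)
         (0, 1, 5), (0, 5, 4),   # front (y = 0)
         (2, 3, 7), (2, 7, 6),   # back (y = 1)
         (1, 2, 6), (1, 6, 5),   # right (x = 1)
         (3, 0, 4), (3, 4, 7)]   # left (x = 0)
```

In the paper, the vertices would come from the SLAM point cloud and the faces from the convex hull / mesh reconstruction step.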

Conference paper

Sun Y, Yang G, Lo B, 2017, An artificial neural network framework for lower limb motion signal estimation with foot-mounted inertial sensors, IEEE Conference on Body Sensor Networks (BSN) 2018, Publisher: IEEE

This paper proposes a novel artificial neural network based method for real-time gait analysis with a minimal number of Inertial Measurement Units (IMUs). Accurate lower limb attitude estimation has great potential for clinical gait diagnosis for orthopaedic patients and patients with neurological diseases. However, the use of multiple wearable sensors hinders the ubiquitous use of inertial sensors for detailed gait analysis. This paper proposes the use of two IMUs mounted on the shoes to estimate the IMU signals at the shin, thigh and waist for accurate attitude estimation of the lower limbs. Using the artificial neural network framework, gait parameters such as the angles, velocities and displacements of the IMUs can be estimated. The experimental results have shown that the proposed method can accurately estimate the IMU signals on the lower limbs based only on the IMU signals on the shoes, which demonstrates its potential for lower limb motion tracking and real-time gait analysis.
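The core idea, a neural network regressing unobserved body-segment signals from foot-mounted IMU features, can be sketched with a tiny one-hidden-layer network trained by gradient descent. Everything here (synthetic data, network size, learning rate) is an assumption for illustration, not the paper's architecture:

```python
import numpy as np

# Synthetic stand-in for foot-IMU features (e.g. accel + gyro channels)
# and a target signal from another body location (e.g. waist IMU).
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 6))
true_W = rng.normal(size=(6, 1))
y = np.tanh(X @ true_W)                  # synthetic "waist" signal

# One hidden layer, tanh activation, linear output.
W1 = rng.normal(size=(6, 16)) * 0.1
b1 = np.zeros(16)
W2 = rng.normal(size=(16, 1)) * 0.1
b2 = np.zeros(1)

mse0 = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))

lr = 0.05
for _ in range(500):
    h = np.tanh(X @ W1 + b1)             # hidden activations
    pred = h @ W2 + b2                   # estimated target signal
    err = pred - y
    # Backpropagate mean-squared-error gradients.
    gW2 = h.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)     # tanh derivative
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```

Training should reduce the regression error well below its initial value, mirroring how the paper's network learns the mapping from shoe-mounted IMU signals to signals at the shin, thigh and waist.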

Conference paper

Berthelot ME, Yang GZ, Lo B, 2017, A self-calibrated tissue viability sensor for free flap monitoring, IEEE Journal of Biomedical and Health Informatics, Vol: 22, Pages: 5-14, ISSN: 2168-2194

In fasciocutaneous free flap surgery, close postoperative monitoring is crucial for detecting flap failure, as around 10% of cases require additional surgery due to compromised anastomosis. Different biochemical and biophysical techniques have been developed for continuous flap monitoring; however, they all have shortcomings in terms of reliability, elevated cost, potential risks to the patient, and inability to adapt to the patient's phenotype. A wearable wireless device based on near-infrared spectroscopy (NIRS) has been developed for continuous blood flow and perfusion monitoring by quantifying tissue oxygen saturation (StO2). This miniaturised, low-cost device is designed for postoperative monitoring of flap viability. With self-calibration, the device can adapt itself to the characteristics of the patient's skin, such as tone and thickness. An extensive study was conducted with 32 volunteers. The experimental results show that the device can obtain reliable StO2 measurements across different phenotypes (age, sex, skin tone and thickness). To assess its ability to detect flap failure, the sensor was validated in an animal study: free groin flaps were performed on 16 Sprague Dawley rats. The results demonstrate the accuracy of the sensor in assessing flap viability and identifying the origin of failure (venous or arterial thrombosis).

Journal article

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
