Imperial College London

Dr Benny Lo

Faculty of Medicine, Department of Surgery & Cancer

Senior Lecturer

Contact

+44 (0)20 7594 0806
benny.lo

Location

B414B, Bessemer Building, South Kensington Campus

Publications

253 results found

Gil B, Anastasova S, Lo B, 2022, Graphene field-effect transistors array for detection of liquid conductivities in the physiological range through novel time-multiplexed impedance measurements, Carbon, Vol: 193, Pages: 394-403, ISSN: 0008-6223

In medical applications, graphene field-effect transistors (GFETs) have been employed as sensors for the detection of biological biomarkers and other compounds due to graphene's high sensitivity towards immobilized molecules at its surface. Commonly, the resistivity of a graphene channel is measured under direct drain-source current (DC) and gate voltage sweep. However, like other materials, the electrical response of graphene can also be studied in the form of impedance measurements using alternating drain-source current (AC), which alters the distribution and migration of ionic species across the graphene channel. In this study, an array with 12 non-functionalized GFETs is used for pioneering electrical measurements of prepared liquid solutions in the DC and AC regimes of stimulation. In particular, the transistors were characterized under saline solutions with conductivity levels ranging from 84 μS/cm to 1413 μS/cm. We report, for the first time, a larger variation in graphene's charge neutrality point during gate sweeps in the AC regime with varying conductivity levels (ΔVGS = −0.00033·ΔS) compared to DC (ΔVGS = −0.00023·ΔS), which can potentially be exploited as a novel way of detecting electrical changes in graphene induced by physiological solutions, chemical analytes, immobilized substances, or cell substrates when direct current cannot penetrate and influence these structures.
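As an illustration, the linear sensitivities quoted in the abstract can be applied directly to compute the expected charge-neutrality-point shift over the characterised conductivity range (a minimal sketch in pure Python; units assumed to be volts per μS/cm):

```python
# Sketch: shift of graphene's charge-neutrality point (V_GS) over the
# conductivity range characterised in the study, using the linear
# sensitivities from the abstract (assumed units: V per uS/cm).

def cnp_shift_v(delta_s_us_cm, sensitivity_v_per_us_cm):
    """Linear model of the charge-neutrality-point shift: dV_GS = k * dS."""
    return sensitivity_v_per_us_cm * delta_s_us_cm

delta_s = 1413.0 - 84.0                    # uS/cm, full range tested
shift_ac = cnp_shift_v(delta_s, -0.00033)  # AC stimulation regime
shift_dc = cnp_shift_v(delta_s, -0.00023)  # DC stimulation regime
print(f"AC: {shift_ac:.3f} V, DC: {shift_dc:.3f} V")  # AC: -0.439 V, DC: -0.306 V
```

The larger shift magnitude in the AC regime is what the authors propose to exploit for sensing.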

Journal article

Zhang D, Barbot A, Seichepine F, Lo FP-W, Bai W, Yang G-Z, Lo B et al., 2022, Micro-object pose estimation with sim-to-real transfer learning using small dataset, Communications Physics, Vol: 5, ISSN: 2399-3650

Journal article

Li Y, Peng C, Zhang Y, Zhang Y, Lo B et al., 2022, Adversarial learning for semi-supervised pediatric sleep staging with single-EEG channel, Methods

Despite the progress recently made towards automatic sleep staging for adults, children have complicated sleep structures that require dedicated attention to pediatric sleep staging. Semi-supervised learning, i.e., training networks with both labeled and unlabeled data, greatly reduces the burden of epoch-by-epoch annotation for physicians. However, the inherent class-imbalance problem in the sleep staging task undermines the effectiveness of semi-supervised methods such as pseudo-labeling. In this paper, we propose a Bi-Stream Adversarial Learning network (BiSALnet) to generate pseudo-labels with higher confidence for network optimization. An adversarial learning strategy is adopted in the Student and Teacher branches of the two-stream network. The similarity measurement function minimizes the divergence between the outputs of the Student and Teacher branches, while the discriminator continuously enhances its discriminative ability. In addition, we employ a powerful symmetric positive definite (SPD) manifold structure in the Student branch to capture the desired feature distribution properties. The joint discriminative power of convolutional features and the nonlinear complex information aggregated by SPD matrices is combined by an attention feature fusion module to improve sleep stage classification performance. BiSALnet was tested on a pediatric dataset collected from a local hospital. Experimental results show that our method yields an overall classification accuracy of 0.80, kappa of 0.73 and F1-score of 0.76. We also examined the generality of our method on the well-known public Sleep-EDF dataset, where BiSALnet exhibits noticeable performance with an accuracy of 0.91, kappa of 0.85 and F1-score of 0.77. Remarkably, we obtained performance comparable to state-of-the-art supervised approaches with fairly limited labeled data.
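The confidence-based pseudo-labelling that such semi-supervised pipelines build on can be sketched as follows (NumPy; the threshold and the toy teacher outputs are illustrative assumptions, not values from the paper):

```python
import numpy as np

def pseudo_labels(probs, threshold=0.9):
    """Keep only high-confidence teacher predictions as pseudo-labels.
    probs: (N, K) class probabilities over K sleep stages for N epochs.
    Returns indices of retained epochs and their hard labels."""
    confidence = probs.max(axis=1)
    keep = np.flatnonzero(confidence >= threshold)
    return keep, probs[keep].argmax(axis=1)

# Toy teacher output for 4 EEG epochs over 5 sleep stages
probs = np.array([
    [0.95, 0.02, 0.01, 0.01, 0.01],  # confident -> kept, stage 0
    [0.30, 0.25, 0.20, 0.15, 0.10],  # ambiguous -> discarded
    [0.05, 0.92, 0.01, 0.01, 0.01],  # confident -> kept, stage 1
    [0.50, 0.45, 0.03, 0.01, 0.01],  # ambiguous -> discarded
])
idx, labels = pseudo_labels(probs)
print(idx, labels)  # [0 2] [0 1]
```

BiSALnet's contribution is to raise the confidence of such pseudo-labels through the adversarial Student/Teacher interplay before they are used for optimization.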

Journal article

Lam K, Chen J, Wang Z, Iqbal F, Darzi A, Lo B, Purkayastha S, Kinross J et al., 2022, Machine learning for technical skill assessment in surgery: a systematic review, npj Digital Medicine, Vol: 5, ISSN: 2398-6352

Accurate and objective performance assessment is essential for both trainees and certified surgeons. However, existing methods can be time-consuming, labor-intensive and subject to bias. Machine learning (ML) has the potential to provide rapid, automated and reproducible feedback without the need for expert reviewers. We aimed to systematically review the literature, determine the ML techniques used for technical surgical skill assessment, and identify challenges and barriers in the field. A systematic literature search, in accordance with the PRISMA statement, was performed to identify studies detailing the use of ML for technical skill assessment in surgery. Of the 1896 studies that were retrieved, 66 were included. The most common ML methods used were Hidden Markov Models (HMM, 14/66), Support Vector Machines (SVM, 17/66) and Artificial Neural Networks (ANN, 17/66). 40/66 studies used kinematic data, 19/66 used video or image data, and 7/66 used both. Studies assessed performance on benchtop tasks (48/66), simulator tasks (10/66), and real-life surgery (8/66). Accuracy rates of over 80% were achieved, although tasks and participants varied between studies. Barriers to progress in the field included a focus on basic tasks, lack of standardization between studies, and lack of datasets. ML has the potential to produce accurate and objective surgical skill assessment through methods including HMM, SVM, and ANN. Future ML-based assessment tools should move beyond the assessment of basic tasks, towards real-life surgery, and provide interpretable feedback with clinical value for the surgeon.

Journal article

Gu X, Guo Y, Yang G-Z, Lo B et al., 2022, Cross-domain self-supervised complete geometric representation learning for real-scanned point cloud based pathological gait analysis, IEEE Journal of Biomedical and Health Informatics, Vol: 26, Pages: 1034-1044, ISSN: 2168-2194

Accurate lower-limb pose estimation is a prerequisite of skeleton-based pathological gait analysis. To achieve this goal in free-living environments for long-term monitoring, a single depth sensor has been proposed in research. However, the depth map acquired from a single viewpoint encodes only partial geometric information of the lower limbs and exhibits large variations across different viewpoints. Existing off-the-shelf three-dimensional (3D) pose tracking algorithms and public datasets for depth-based human pose estimation are mainly targeted at activity recognition applications. They are relatively insensitive to skeleton estimation accuracy, especially at the foot segments. Furthermore, acquiring ground truth skeleton data for detailed biomechanics analysis also requires considerable efforts. To address these issues, we propose a novel cross-domain self-supervised complete geometric representation learning framework, with knowledge transfer from the unlabelled synthetic point clouds of full lower-limb surfaces. The proposed method can significantly reduce the number of ground truth skeletons (with only 1%) in the training phase, meanwhile ensuring accurate and precise pose estimation and capturing discriminative features across different pathological gait patterns compared to other methods.

Journal article

Jia W, Ren Y, Li B, Beatrice B, Que J, Cao S, Wu Z, Mao Z-H, Lo B, Anderson AK, Frost G, McCrory MA, Sazonov E, Steiner-Asiedu M, Baranowski T, Burke LE, Sun M et al., 2022, A novel approach to dining bowl reconstruction for image-based food volume estimation, Sensors, Vol: 22

Journal article

Qiu J, Lo FP-W, Gu X, Sun Y, Jiang S, Lo B et al., 2021, Indoor future person localization from an egocentric wearable camera, 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, Pages: 8586-8592

Accurate prediction of future person location and movement trajectory from an egocentric wearable camera can benefit a wide range of applications, such as assisting visually impaired people in navigation and the development of mobility assistance for people with disabilities. In this work, a new egocentric dataset was constructed using a wearable camera, with 8,250 short clips of a targeted person either walking 1) toward, 2) away, or 3) across the camera wearer in indoor environments, or 4) staying still in the scene, and 13,817 person bounding boxes were manually labelled. Apart from the bounding boxes, the dataset also contains the estimated pose of the targeted person as well as the IMU signal of the wearable camera at each time point. An LSTM-based encoder-decoder framework was designed to predict the future location and movement trajectory of the targeted person in this egocentric setting. Extensive experiments conducted on the new dataset show that the proposed method reliably predicts future person location and trajectory in egocentric videos captured by the wearable camera, outperforming three baselines.
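The shape of an LSTM-based encoder-decoder for this task can be sketched in NumPy. The weights below are random and untrained, and the use of 2-D box centres, hidden size 16 and a 5-step horizon are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Minimal LSTM cell with random (untrained) weights -- illustration only."""
    def __init__(self, in_dim, hid_dim, rng):
        self.hid_dim = hid_dim
        self.W = rng.standard_normal((4 * hid_dim, in_dim + hid_dim)) * 0.1
        self.b = np.zeros(4 * hid_dim)

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, o, g = np.split(z, 4)                    # gate pre-activations
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)   # cell-state update
        h = sigmoid(o) * np.tanh(c)                    # hidden-state update
        return h, c

def predict_trajectory(encoder, decoder, W_out, past, horizon):
    """Encode observed 2-D positions, then roll the decoder forward."""
    h = np.zeros(encoder.hid_dim)
    c = np.zeros(encoder.hid_dim)
    for x in past:                        # encoder consumes the observed steps
        h, c = encoder.step(x, h, c)
    preds, x = [], past[-1]
    for _ in range(horizon):              # decoder emits future positions
        h, c = decoder.step(x, h, c)
        x = W_out @ h                     # project hidden state to (x, y)
        preds.append(x)
    return np.stack(preds)

rng = np.random.default_rng(0)
enc, dec = LSTMCell(2, 16, rng), LSTMCell(2, 16, rng)
W_out = rng.standard_normal((2, 16)) * 0.1
past = rng.standard_normal((10, 2))       # 10 observed positions of the person
future = predict_trajectory(enc, dec, W_out, past, horizon=5)
print(future.shape)  # (5, 2)
```

In the paper's setting, the encoder inputs would additionally include the target's pose and the camera's IMU signal, and the weights would of course be trained on the labelled clips.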

Conference paper

Zhang D, Wang R, Lo B, 2021, Surgical gesture recognition based on bidirectional multi-layer independently RNN with explainable spatial feature extraction, IEEE International Conference on Robotics and Automation (ICRA) 2021, Publisher: IEEE, Pages: 1350-1356

Minimally invasive surgery mainly consists of a series of sub-tasks, which can be decomposed into basic gestures or contexts. As a prerequisite of autonomous operation, surgical gesture recognition can assist motion planning and decision-making, and build up context-aware knowledge to improve the control quality of surgical robots. In this work, we aim to develop an effective surgical gesture recognition approach with an explainable feature extraction process. A Bidirectional Multi-Layer independently RNN (BML-indRNN) model is proposed in this paper, while spatial feature extraction is implemented via fine-tuning of a Deep Convolutional Neural Network (DCNN) model constructed based on the VGG architecture. To eliminate the black-box effects of the DCNN, Gradient-weighted Class Activation Mapping (Grad-CAM) is employed. It provides explainable results by showing the regions of the surgical images that have a strong relationship with the surgical gesture classification results. The proposed method was evaluated on the suturing task with data obtained from the publicly available JIGSAWS database. Comparative studies were conducted to verify the proposed framework. Results indicated that the testing accuracy for the suturing task based on our proposed method is 87.13%, which outperforms most state-of-the-art algorithms.
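Once the last convolutional layer's activations and the gradients of the class score with respect to them are available, Grad-CAM itself reduces to a few array operations (a minimal NumPy sketch with toy arrays standing in for a real network's tensors):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heatmap from the last conv layer's activations and the
    gradients of the class score w.r.t. them; both of shape (C, H, W)."""
    weights = gradients.mean(axis=(1, 2))              # channel importance
    cam = np.tensordot(weights, feature_maps, axes=1)  # weighted channel sum
    cam = np.maximum(cam, 0.0)                         # ReLU keeps positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                          # normalise to [0, 1]
    return cam

rng = np.random.default_rng(1)
fmaps = rng.random((8, 7, 7))            # toy conv activations
grads = rng.standard_normal((8, 7, 7))   # toy class-score gradients
heatmap = grad_cam(fmaps, grads)
print(heatmap.shape)  # (7, 7)
```

The heatmap is then upsampled to the input image size and overlaid on the surgical frame to show which regions drove the gesture prediction.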

Conference paper

Wang Y, Lo B, 2021, A soft inflatable elbow-assistive robot for children with cerebral palsy

Cerebral palsy can severely impair children's motor function and lead to permanent disability. Compared to adults, children are more vulnerable and susceptible to external harm. Wearable robotics has gained much attention in rehabilitation and has shown its potential in supporting the recovery of people with motor dysfunctions. Conventional adult-oriented wearable assistive robots are tendon-driven, and the force and inertia they generate are too large for children and could cause injury. To address this issue, this paper proposes a novel soft inflatable robot that can aid children in elbow movement whilst minimising the risk of harm. Thermoplastic Polyurethane (TPU) and pneumatic actuation were used in developing the soft robot. In experiments, the maximum bending angle was 142.2°, with a maximum generated moment of 0.784 Nm, which is suitable for the elbow support needed by young children with cerebral palsy.

Conference paper

Pavlidou A, Lo B, 2021, Artificial ear - A wearable device for the hearing impaired

Hearing aid devices have been around for decades; one of the most recent approaches is the cochlear implant, which is designed for patients with severe hearing loss. This paper introduces the design of a haptic-signal based hearing aid targeting patients suffering from inner ear malfunction, for whom conventional assistive hearing devices will not suffice. The device is designed to record incoming sound, filter and analyze it into its harmonics, and classify it into phonemes. The output is then transformed into tactile feedback via vibrating motors, with each phoneme activating a respective combination of motors.

Conference paper

Rosa BG, Anastasova S, Lo B, 2021, Small-form wearable device for long-term monitoring of cardiac sounds on the body surface

Sound monitoring from sources inside the human body can have important diagnostic relevance in medicine. Cardiac sounds, which originate from the pumping activity of the heart, are one such example, with valuable cardiovascular parameters extracted from the signal, including heart rate (HR) and the systolic intervals. Novel non-invasive methods for early detection of potential life-threatening risks associated with unbalanced cardiovascular parameters are essential to reduce the mortality rates associated with cardiac diseases. In this paper, we propose a small-form wearable device for long-term monitoring of cardiac sounds through a miniaturized microphone in contact with the body surface at specific locations, extending from the chest region to the upper and lower body parts. Powered by a battery, the device can measure signals for 28 consecutive hours in continuous recording mode, extendable to 7 days in discontinuous mode, achieving a signal amplitude resolution of 0.81 μV and an optimal bandwidth of 5 to 20 Hz (infrasound range). The proposed device was able to detect cardiac sound patterns in locations as distant as the forehead, wrist, or ankle, thus paving the way for the use of acoustic signals in wearable heartbeat estimators, which currently rely on optical or bio-potential methods, while replacing the obtrusive and expensive cardiography equipment dedicated to estimating the systolic intervals directly from the chest.

Conference paper

Hu M, Kassanos P, Keshavarz M, Yeatman E, Lo B et al., 2021, Electrical and mechanical characterization of carbon-based elastomeric composites for printed sensors and electronics

Printing technologies have attracted significant interest in recent years, particularly for the development of flexible and stretchable electronics and sensors. Conductive elastomeric composites are a popular choice for these new generations of devices. This paper examines the electrical and mechanical properties of elastomeric composites of polydimethylsiloxane (PDMS), an insulating elastomer, with carbon-based fillers (graphite powder and various types of carbon black, CB), as a function of their composition. The results can direct the choice of material composition to address specific device and application requirements. Molding and stencil printing are used to demonstrate their use.

Conference paper

Han J, Gu X, Lo B, 2021, Semi-supervised contrastive learning for generalizable motor imagery eeg classification, 17th IEEE International Conference on Wearable and Implantable Body Sensor Networks, Publisher: IEEE

Electroencephalography (EEG) is one of the most widely used brain-activity recording methods in non-invasive brain-computer interfaces (BCIs). However, EEG data is highly nonlinear, and its datasets often suffer from issues such as data heterogeneity, label uncertainty, and data/label scarcity. To address these, we propose a domain-independent, end-to-end semi-supervised learning framework with contrastive learning and adversarial training strategies. Our method was evaluated in experiments with different amounts of labels and an ablation study on a motor imagery EEG dataset. The experiments demonstrate that the proposed framework with two different backbone deep neural networks shows improved performance over its supervised counterparts under the same conditions.
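The contrastive component of such a framework typically optimises an objective like the NT-Xent loss over two augmented views of each trial's embedding. The paper does not specify this exact loss, so the sketch below (NumPy) should be read as a representative choice rather than the authors' implementation:

```python
import numpy as np

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss for two views z1, z2 of shape (N, D):
    pull each pair (z1[i], z2[i]) together, push all other pairs apart."""
    z = np.concatenate([z1, z2])                       # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine-similarity space
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # positive indices
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

rng = np.random.default_rng(0)
z1 = rng.standard_normal((8, 32))       # toy embeddings of 8 EEG trials
loss_aligned = nt_xent(z1, z1)          # identical views: low loss
loss_opposed = nt_xent(z1, -z1)         # opposite views: high loss
print(loss_aligned < loss_opposed)  # True
```

Minimising this loss encourages the backbone to map augmentations of the same trial close together, which is what lets the unlabeled EEG data shape the representation.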

Conference paper

Yang X, Zhang Y, Lo B, Wu D, Liao H, Zhang Y-T et al., 2021, DBAN: adversarial network with multi-scale features for cardiac MRI segmentation, IEEE Journal of Biomedical and Health Informatics, Vol: 25, Pages: 2018-2028, ISSN: 2168-2194

Journal article

Jiang S, Kang P, Song X, Lo B, Shull PB et al., 2021, Emerging wearable interfaces and algorithms for hand gesture recognition: a survey, IEEE Reviews in Biomedical Engineering, Vol: PP, ISSN: 1941-1189

Hands are vital in a wide range of fundamental daily activities, and neurological diseases that impede hand function can significantly affect quality of life. Wearable hand gesture interfaces hold promise to restore and assist hand function and to enhance human-human and human-computer communication. The purpose of this review is to synthesize current novel sensing interfaces and algorithms for hand gesture recognition, and the scope of applications covers rehabilitation, prosthesis control, sign language recognition, and human-computer interaction. Results showed that electrical, dynamic, acoustical/vibratory, and optical sensing were the primary input modalities in gesture recognition interfaces. Two categories of algorithms were identified: 1) classification algorithms for predefined, fixed hand poses and 2) regression algorithms for continuous finger and wrist joint angles. Conventional machine learning algorithms, including linear discriminant analysis, support vector machines, random forests, and non-negative matrix factorization, have been widely used for a variety of gesture recognition applications, and deep learning algorithms have more recently been applied to further facilitate the complex relationship between sensor signals and multi-articulated hand postures. Future research should focus on increasing recognition accuracy with larger hand gesture datasets, improving reliability and robustness for daily use outside of the laboratory, and developing softer, less obtrusive interfaces.

Journal article

Qiu J, Lo FP-W, Jiang S, Tsai Y-Y, Sun Y, Lo B et al., 2021, Counting bites and recognizing consumed food from videos for passive dietary monitoring, IEEE Journal of Biomedical and Health Informatics, Vol: 25, Pages: 1471-1482, ISSN: 2168-2194

Assessing dietary intake in epidemiological studies is predominantly based on self-reports, which are subjective, inefficient, and prone to error. Technological approaches are therefore emerging to provide objective dietary assessments. Using only egocentric dietary intake videos, this work aims to provide accurate estimation of individual dietary intake by recognizing consumed food items and counting the number of bites taken. This differs from previous studies that rely on inertial sensing to count bites, and from those that only recognize visible food items but not consumed ones. As a subject may not consume all food items visible in a meal, recognizing consumed food items is more valuable. A new dataset of 1,022 dietary intake video clips was constructed to validate our concept of bite counting and consumed food item recognition from egocentric videos. 12 subjects participated and 52 meals were captured. A total of 66 unique food items, including food ingredients and drinks, were labelled in the dataset along with a total of 2,039 labelled bites. Deep neural networks were used to perform bite counting and food item recognition in an end-to-end manner. Experiments have shown that counting bites directly from video clips can reach 74.15% top-1 accuracy (classifying between 0-4 bites in 20-second clips), and an MSE value of 0.312 (when using regression). Our experiments on video-based food recognition also show that recognizing consumed food items is indeed harder than recognizing visible ones, with a drop of 25% in F1 score.

Journal article

Chen G, Jia W, Zhao Y, Mao Z-H, Lo B, Anderson AK, Frost G, Jobarteh ML, McCrory MA, Sazonov E, Steiner-Asiedu M, Ansong RS, Baranowski T, Burke L, Sun M et al., 2021, Food/non-food classification of real-life egocentric images in low- and middle-income countries based on image tagging features, Frontiers in Artificial Intelligence, Vol: 4, ISSN: 2624-8212

Malnutrition, including both undernutrition and obesity, is a significant problem in low- and middle-income countries (LMICs). In order to study malnutrition and develop effective intervention strategies, it is crucial to evaluate nutritional status in LMICs at the individual, household, and community levels. In a multinational research project supported by the Bill & Melinda Gates Foundation, we have been using a wearable technology to conduct objective dietary assessment in sub-Saharan Africa. Our assessment includes multiple diet-related activities in urban and rural families, including food sources (e.g., shopping, harvesting, and gathering), preservation/storage, preparation, cooking, and consumption (e.g., portion size and nutrition analysis). Our wearable device ("eButton" worn on the chest) acquires real-life images automatically during wake hours at preset time intervals. The recorded images, in amounts of tens of thousands per day, are post-processed to obtain the information of interest. Although we expect future Artificial Intelligence (AI) technology to extract the information automatically, at present we utilize AI to separate the acquired images into two binary classes: images with (Class 1) and without (Class 0) edible items. As a result, researchers need only to study Class-1 images, reducing their workload significantly. In this paper, we present a composite machine learning method to perform this classification, meeting the specific challenges of high complexity and diversity in the real-world LMIC data. Our method consists of a deep neural network (DNN) and a shallow learning network (SLN) connected by a novel probabilistic network interface layer. After presenting the details of our method, an image dataset acquired from Ghana is utilized to train and evaluate the machine learning system. Our comparative experiment indicates that the new composite method performs better than the conventional deep learning method assessed by integra

Journal article

Sun Y, Lo FP-W, Lo B, 2021, Light-weight internet-of-things device authentication, encryption and key distribution using end-to-end neural cryptosystems, IEEE Internet of Things Journal, ISSN: 2327-4662

Device authentication, encryption, and key distribution are of vital importance to any Internet-of-Things (IoT) system, such as the new smart city infrastructures. This is due to the concern that attackers could easily exploit the lack of strong security in IoT devices to gain unauthorized access to the system or to hijack IoT devices to perform denial-of-service attacks on other networks. With the rise of fog and edge computing in IoT systems, increasing numbers of IoT devices have been equipped with computing capabilities to perform data analysis with deep learning technologies. Deep learning on edge devices can be deployed in numerous applications, such as local cardiac arrhythmia detection on a smart sensing patch, but it is rarely applied to device authentication and wireless communication encryption. In this paper, we propose a novel lightweight IoT device authentication, encryption, and key distribution approach using neural cryptosystems and a binary latent space. The neural cryptosystems adopt three types of end-to-end encryption schemes: symmetric, public-key, and without keys. A series of experiments were conducted to test the performance and security strength of the proposed neural cryptosystems. The experimental results demonstrate the potential of this novel approach as a promising security and privacy solution for the next generation of IoT systems.

Journal article

Zhang C, Liu S, Han F, Nie Z, Lo B, Zhang Y et al., 2021, Hybrid manifold-deep convolutional neural network for sleep staging, Methods, Pages: 1-9, ISSN: 1046-2023

Analysis of the electroencephalogram (EEG) is a crucial diagnostic criterion for many sleep disorders, of which sleep staging is an important component. Manual stage classification is a labor-intensive process and usually suffers from many subjective factors. Recently, more and more computer-aided techniques have been applied to this task, among which the deep convolutional neural network has performed well as an effective automatic classification model. Although some comprehensive models have been developed to improve classification results, the accuracy required for clinical applications has not been reached, due to the lack of sufficient labeled data and the limitations of extracting latent discriminative EEG features. Therefore, we propose a novel hybrid manifold-deep convolutional neural network with hyperbolic attention. To overcome the shortage of labeled data, we adopt a semi-supervised training scheme. To extract the latent feature representation, we introduce a manifold learning module and a hyperbolic module that capture more discriminative information. Eight subjects from a public dataset were used to evaluate our pipeline; the model achieved 89% accuracy, 70% precision, 80% sensitivity, 72% F1-score and a kappa coefficient of 78%. The proposed model demonstrates a powerful ability to extract feature representations and achieves promising results using a semi-supervised training scheme, showing strong potential for future clinical development.

Journal article

Lei J, Qiu J, Lo FP-W, Lo B et al., 2021, Assessing individual dietary intake in food sharing scenarios with food and human pose detection, 6th International Workshop on Multimedia Assisted Dietary Management (MADiMa 2020), Publisher: Springer International Publishing, Pages: 549-557, ISSN: 0302-9743

Food sharing and communal eating are very common in some countries. To assess individual dietary intake in food sharing scenarios, this work proposes a vision-based approach that first captures the food sharing scenario with a 360-degree camera, and then uses a neural network to infer the eating states of each individual based on their body pose and relative positions to the dishes. The number of bites each individual has taken of each dish is then deduced by analyzing the inferred eating states. A new dataset with 14 panoramic food sharing videos was constructed to validate our approach. The results show that our approach is able to reliably predict different eating states as well as each individual's bite count with respect to each dish in food sharing scenarios.

Conference paper

Li W, Tsai Y-Y, Yang G-Z, Lo B et al., 2021, A novel endoscope design using spiral technique for robotic-assisted endoscopy insertion, 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, Pages: 3119-3124

Gastrointestinal (GI) endoscopy is a conventional and prevalent procedure used to diagnose and treat diseases in the digestive tract. The procedure requires inserting an endoscope, equipped with a camera and instruments, to the target of interest inside the patient. To manoeuvre the endoscope, an endoscopist rotates the knob at the handle to change the direction of the distal tip and applies a feeding force to advance the endoscope. However, due to the nature of this design, a looping problem often arises during insertion, making it difficult to advance the endoscope further into deeper sections of the tract such as the transverse and ascending colon. To this end, we propose a novel robotic endoscope which is covered by a rotating screw-like sheath and uses a spiral insertion technique to generate 'pull' forces at the distal tip to facilitate insertion. The whole shaft of the endoscope can be actively rotated, providing crawling ability from the attached spiral sheath. With redundant control of a spring-like continuum joint, the bending tip is capable of maintaining its orientation to assist endoscope navigation. To test its functions and its feasibility in addressing the looping problem, three experiments were carried out. The first two analysed the kinematics of the device and tested its ability to hold the distal tip at different orientation angles during spiral insertion. In the third experiment, we inserted the device into a bent colon phantom to evaluate the effectiveness of the proposed design against looping when advancing through a curved section of a colon. Results demonstrate the device's ability to advance using the spiral technique and verify its potential for clinical application.

Conference paper

Gu X, Guo Y, Deligianni F, Lo B, Yang G-Z et al., 2021, Cross-subject and cross-modal transfer for generalized abnormal gait pattern recognition, IEEE Transactions on Neural Networks and Learning Systems, Vol: 32, Pages: 546-560, ISSN: 1045-9227

For abnormal gait recognition, pattern-specific features indicating abnormalities are interleaved with the subject-specific differences representing biometric traits. Deep representations are, therefore, prone to overfitting, and the models derived cannot generalize well to new subjects. Furthermore, there is limited availability of abnormal gait data obtained from precise Motion Capture (Mocap) systems because of regulatory issues and slow adaptation of new technologies in health care. On the other hand, data captured from markerless vision sensors or wearable sensors can be obtained in home environments, but noises from such devices may prevent the effective extraction of relevant features. To address these challenges, we propose a cascade of deep architectures that can encode cross-modal and cross-subject transfer for abnormal gait recognition. Cross-modal transfer maps noisy data obtained from RGBD and wearable sensors to accurate 4-D representations of the lower limb and joints obtained from the Mocap system. Subsequently, cross-subject transfer allows disentangling subject-specific from abnormal pattern-specific gait features based on a multiencoder autoencoder architecture. To validate the proposed methodology, we obtained multimodal gait data based on a multicamera motion capture system along with synchronized recordings of electromyography (EMG) data and 4-D skeleton data extracted from a single RGBD camera. Classification accuracy was improved significantly in both Mocap and noisy modalities.

Journal article

Li W, Shen M, Gao A, Yang GZ, Lo B et al., 2021, Towards a snake-like flexible robot for endoscopic submucosal dissection, IEEE Transactions on Medical Robotics and Bionics, Vol: 3, Pages: 257-260

The advance of flexible robots enables a more efficient and safer way to perform endoscopic submucosal dissection (ESD) surgery. The robot should be flexible enough for easy insertion yet able to maintain a rigid shape to transmit the forces applied to instrumentation during the operation. This article presents a snake-like flexible endoscope design which consists of an active snake robot and a passive flexible body. The active section is composed of metal-printed spring-like joints actuated by tendons arranged in a novel fashion. To analyse the performance and clinical feasibility of the proposed flexible robot, Finite Element Analysis, workspace analysis, path-following accuracy tests and force tests have been performed. The results show that the robot can reach a minimum retroflex bending radius of 23 mm, and the distance errors of each joint when advancing along a simulated colon path are analysed. Validation of the proposed robot demonstrates its potential for ESD surgeries.

Journal article

Lo FPW, Guo Y, Sun Y, Qiu J, Lo B et al., 2021, Deep3DRanker: A Novel Framework for Learning to Rank 3D Models with Self-Attention in Robotic Vision, IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE, Pages: 4341-4347, ISSN: 1050-4729

Conference paper

Kang P, Jiang S, Shull PB, Lo B et al., 2021, Feasibility Validation on Healthy Adults of a Novel Active Vibrational Sensing Based Ankle Band for Ankle Flexion Angle Estimation, IEEE Open Journal of Engineering in Medicine and Biology, Vol: 2, Pages: 314-319

Journal article

Bai W, Cursi F, Guo X, Huang B, Lo B, Yang GZ, Yeatman EM et al., 2021, Task-Based LSTM Kinematic Modelling for a Tendon-Driven Flexible Surgical Robot, IEEE Transactions on Medical Robotics and Bionics

Tendon-driven flexible surgical robots typically suffer from inaccurate modelling and imprecise motion control due to the nonlinearities of tendon transmission. Learning-based approaches, driven by experimental data with uncertainties modelled empirically, can be adopted to mitigate these issues. This work proposes an LSTM-based kinematic modelling approach using task-based data for a flexible tendon-driven surgical robot to improve control accuracy. Real experiments demonstrate the effectiveness and superiority of the proposed learned model in path-following tasks, especially compared with traditional modelling.

Journal article

Wang R, Zhang D, Li Q, Xiao-Yun Z, Lo B et al., 2021, Real-time Surgical Environment Enhancement for Robot-Assisted Minimally Invasive Surgery Based on Super-Resolution, IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE, Pages: 3434-3440, ISSN: 1050-4729

Conference paper

Chen X, Jiang S, Lo B, 2020, Subject-independent slow fall detection with wearable sensors via deep learning, 2020 IEEE SENSORS, Publisher: IEEE, Pages: 1-4

One of the major healthcare challenges is falls among the elderly. A fall can lead to disability and even mortality. During the Covid-19 pandemic, insufficient resources could be provided for elderly care, and care workers often may not be able to visit. A fall may therefore go undetected, or its detection may be delayed, leading to serious harm or consequences. Automatic fall detection systems could provide the necessary detection and warnings for timely intervention. Although many sensor-based fall detection systems have been proposed, most focus on sudden falls and have not considered the slow fall scenario, a typical fall instance for elderly fallers. In this paper, a robust activity (RA) and slow fall detection system is proposed. The system consists of a waist-worn wearable sensor embedded with an inertial measurement unit (IMU) and a barometer, together with a reference ambient barometer. A deep neural network (DNN) is developed for fusing the sensor data and classifying fall events. The results show that the IMU-barometer design yields better detection of fall events and that the DNN approach (90.33% accuracy) outperforms traditional machine learning algorithms.
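The key signal-level idea above, pairing the waist barometer with an ambient reference so that height change (rather than weather drift) dominates the residual, can be illustrated with a small feature-fusion sketch. The sampling rate, window length, and feature set below are assumptions for illustration, not the paper's actual DNN input pipeline.

```python
import numpy as np

FS, WIN = 50, 100  # assumed 50 Hz sampling, 2 s windows (illustrative)

def fuse_features(acc, p_waist, p_ambient):
    """Fuse an IMU window with the wearable/ambient barometer pair.

    Subtracting the reference ambient barometer cancels weather-driven
    pressure drift, so the residual tracks the sensor's height change --
    the cue that separates a slow descent to the floor from sitting still.
    """
    p_rel = p_waist - p_ambient              # height-related component
    mag = np.linalg.norm(acc, axis=1)        # acceleration magnitude
    t = np.arange(WIN) / FS
    return np.array([
        mag.mean(), mag.std(), mag.max() - mag.min(),
        p_rel[-1] - p_rel[0],                # net pressure change over window
        np.polyfit(t, p_rel, 1)[0],          # pressure rise rate (descent speed)
    ])

# Synthetic slow fall: a quiet accelerometer while pressure rises ~0.1 hPa
# (the sensor descending roughly a metre), against a constant ambient reference.
rng = np.random.default_rng(1)
acc = 0.02 * rng.standard_normal((WIN, 3))
p_ambient = np.full(WIN, 1013.25)
p_waist = p_ambient + np.linspace(0.0, 0.1, WIN)
feats = fuse_features(acc, p_waist, p_ambient)
```

A slow fall shows little acceleration signature, so the barometric descent-rate features carry most of the discriminative information; a classifier (the paper's DNN, or a simpler model) would consume such windows.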

Conference paper

Chen X, Jiang S, Li Z, Lo B et al., 2020, A pervasive respiratory monitoring sensor for COVID-19 pandemic, IEEE Open Journal of Engineering in Medicine and Biology, Vol: 2, Pages: 11-16, ISSN: 2644-1276

Goal: SARS-CoV-2 infection can cause severe acute respiratory syndrome, disturbing regular breathing and leading to continuous coughing. Automatic respiration monitoring systems could provide the necessary metrics and warnings for timely intervention, especially for those with mild symptoms. Current respiration detection systems are expensive and too obtrusive for any large-scale deployment; thus, a low-cost pervasive ambient sensor is proposed. Methods: A barometer is placed on the working desk, and a novel signal processing algorithm with a sparsity-based filter is developed to remove similar-frequency noise. Three modes (coughing, breathing and others) are considered to detect coughing and estimate different respiration rates. Results: The proposed system achieved 97.33% accuracy in cough detection and 98.98% specificity in respiration rate estimation. Conclusions: The system could be used as an effective screening tool for detecting subjects suffering from COVID-19 symptoms and enable large-scale monitoring of patients diagnosed with or recovering from COVID-19.
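To make the respiration-rate estimation step concrete, here is a minimal spectral baseline: band-limit the desk-barometer trace to plausible breathing frequencies and take the dominant peak. This is not the paper's sparsity-based filter, which is more involved; it is a hedged sketch of the overall idea, with the sampling rate and frequency band chosen as assumptions.

```python
import numpy as np

def respiration_rate_bpm(pressure, fs):
    """Estimate breaths per minute from a barometric pressure trace.

    Restricts the spectrum to an assumed 0.1-0.7 Hz breathing band
    (~6-42 breaths/min) and returns the dominant peak frequency.
    """
    x = pressure - pressure.mean()                 # remove DC offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= 0.1) & (freqs <= 0.7)
    f_peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * f_peak

# Simulated 60 s recording at an assumed 25 Hz: breathing at 0.25 Hz.
fs = 25.0
t = np.arange(0.0, 60.0, 1.0 / fs)
sim = 0.5 * np.sin(2 * np.pi * 0.25 * t)
rate = respiration_rate_bpm(sim, fs)  # ≈ 15 breaths/min
```

In practice, desk vibrations and air-conditioning produce noise close to the breathing band, which is exactly why the paper replaces a plain spectral peak with a sparsity-based filter before estimation.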

Journal article

Zhang G, Mei Z, Zhang Y, Ma X, Lo B, Chen D, Zhang Y et al., 2020, A noninvasive blood glucose monitoring system based on smartphone PPG signal processing and machine learning, IEEE Transactions on Industrial Informatics, Vol: 16, Pages: 7209-7218, ISSN: 1551-3203

Blood glucose level needs to be monitored regularly to manage the health condition of hyperglycemic patients. Current glucose measurement approaches still rely on invasive techniques, which are uncomfortable and raise the risk of infection. To facilitate daily care at home, in this article, we propose an intelligent, noninvasive blood glucose monitoring system which can classify a user's blood glucose level as normal, borderline, or warning based on smartphone photoplethysmography (PPG) signals. The main implementation processes of the proposed system include: 1) a novel algorithm for acquiring PPG signals using only smartphone camera videos; 2) a fitting-based sliding window algorithm to remove varying degrees of baseline drift and segment the signal into single periods; 3) extracting characteristic features from the Gaussian functions by comparing PPG signals at different blood glucose levels; 4) categorizing the valid samples into three glucose levels by applying machine learning algorithms. Our proposed system was evaluated on a data set of 80 subjects. Experimental results demonstrate that the system can separate valid signals from invalid ones with an accuracy of 97.54%, and the overall accuracy of estimating the blood glucose levels reaches 81.49%. The proposed system provides a reference for the introduction of noninvasive blood glucose technology into daily or clinical applications. This article also indicates that smartphone-based PPG signals have great potential to assess an individual's blood glucose level.
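Step 2 of the pipeline, fitting-based sliding-window baseline removal, can be sketched as fitting a low-order polynomial trend inside each window and subtracting it, leaving the pulsatile component. The 2 s window and quadratic order below are assumptions for illustration, not values taken from the paper.

```python
import numpy as np

def remove_baseline(ppg, fs, win_s=2.0, order=2):
    """Fit-and-subtract a low-order polynomial trend in each window.

    A rough sketch of fitting-based sliding-window baseline removal:
    within each window the polynomial captures slow drift (motion,
    pressure changes of the fingertip) while the faster pulse wave
    survives in the residual.
    """
    out = np.empty_like(ppg, dtype=float)
    win = int(win_s * fs)
    for start in range(0, len(ppg), win):
        seg = ppg[start:start + win]
        t = np.arange(len(seg))
        trend = np.polyval(np.polyfit(t, seg, order), t)
        out[start:start + win] = seg - trend
    return out

# Synthetic PPG: a 1.2 Hz pulse wave riding on a strong linear drift.
fs = 100.0
t = np.arange(0.0, 10.0, 1.0 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t) + 5.0 * t
cleaned = remove_baseline(ppg, fs)
```

After detrending, single periods can be segmented at the residual's zero crossings or peaks, ready for the Gaussian-function feature extraction in step 3.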

Journal article

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
