Gu X, Guo Y, Yang G-Z, et al., 2021, Cross-domain self-supervised complete geometric representation learning for real-scanned point cloud based pathological gait analysis, IEEE Journal of Biomedical and Health Informatics, ISSN: 2168-2194
Accurate lower-limb pose estimation is a prerequisite of skeleton-based pathological gait analysis. To achieve this goal in free-living environments for long-term monitoring, single depth sensors have been proposed in research. However, the depth map acquired from a single viewpoint encodes only partial geometric information of the lower limbs and exhibits large variations across different viewpoints. Existing off-the-shelf three-dimensional (3D) pose tracking algorithms and public datasets for depth-based human pose estimation are mainly targeted at activity recognition applications. They are relatively insensitive to skeleton estimation accuracy, especially at the foot segments. Furthermore, acquiring ground truth skeleton data for detailed biomechanics analysis also requires considerable effort. To address these issues, we propose a novel cross-domain self-supervised complete geometric representation learning framework, with knowledge transfer from unlabelled synthetic point clouds of full lower-limb surfaces. The proposed method can significantly reduce the number of ground truth skeletons required in the training phase (to only 1%), while ensuring accurate and precise pose estimation and capturing discriminative features across different pathological gait patterns compared to other methods.
Hu M, Kassanos P, Keshavarz M, et al., 2021, Electrical and Mechanical Characterization of Carbon-Based Elastomeric Composites for Printed Sensors and Electronics
Printing technologies have attracted significant interest in recent years, particularly for the development of flexible and stretchable electronics and sensors. Conductive elastomeric composites are a popular choice for these new generations of devices. This paper examines the electrical and mechanical properties of elastomeric composites of polydimethylsiloxane (PDMS), an insulating elastomer, with carbon-based fillers (graphite powder and various types of carbon black, CB), as a function of their composition. The results can direct the choice of material composition to address specific device and application requirements. Molding and stencil printing are used to demonstrate their use.
Han J, Gu X, Lo B, 2021, Semi-supervised contrastive learning for generalizable motor imagery eeg classification, 17th IEEE International Conference on Wearable and Implantable Body Sensor Networks, Publisher: IEEE
Electroencephalography (EEG) is one of the most widely used brain-activity recording methods in non-invasive brain-computer interfaces (BCIs). However, EEG data is highly nonlinear, and its datasets often suffer from issues such as data heterogeneity, label uncertainty and data/label scarcity. To address these, we propose a domain-independent, end-to-end semi-supervised learning framework with contrastive learning and adversarial training strategies. Our method was evaluated in experiments with different amounts of labels and an ablation study on a motor imagery EEG dataset. The experiments demonstrate that the proposed framework, with two different backbone deep neural networks, shows improved performance over the supervised counterparts under the same conditions.
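The contrastive-learning component of such a framework is commonly built on an NT-Xent-style loss over two augmented views of each trial; the sketch below is a generic NumPy illustration of that loss, not the paper's exact objective (the temperature and embedding sizes are arbitrary assumptions).

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over two augmented views z1, z2 of shape (n, d)."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # work in cosine-similarity space
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity terms
    n = z1.shape[0]
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # i <-> i+n are positives
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

rng = np.random.default_rng(0)
a = rng.normal(size=(8, 16))
loss_aligned = nt_xent_loss(a, a + 0.01 * rng.normal(size=a.shape))  # near-identical views
loss_random = nt_xent_loss(a, rng.normal(size=a.shape))              # unrelated views
```

Aligned views yield a lower loss than unrelated ones, which is the signal the encoder is trained to amplify.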
Yang X, Zhang Y, Lo B, et al., 2021, DBAN: Adversarial Network With Multi-Scale Features for Cardiac MRI Segmentation, IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, Vol: 25, Pages: 2018-2028, ISSN: 2168-2194
Jiang S, Kang P, Song X, et al., 2021, Emerging wearable interfaces and algorithms for hand gesture recognition: a survey., IEEE Reviews in Biomedical Engineering, Vol: PP, ISSN: 1941-1189
Hands are vital in a wide range of fundamental daily activities, and neurological diseases that impede hand function can significantly affect quality of life. Wearable hand gesture interfaces hold promise to restore and assist hand function and to enhance human-human and human-computer communication. The purpose of this review is to synthesize current novel sensing interfaces and algorithms for hand gesture recognition, and the scope of applications covers rehabilitation, prosthesis control, sign language recognition, and human-computer interaction. Results showed that electrical, dynamic, acoustical/vibratory, and optical sensing were the primary input modalities in gesture recognition interfaces. Two categories of algorithms were identified: 1) classification algorithms for predefined, fixed hand poses and 2) regression algorithms for continuous finger and wrist joint angles. Conventional machine learning algorithms, including linear discriminant analysis, support vector machines, random forests, and non-negative matrix factorization, have been widely used for a variety of gesture recognition applications, and deep learning algorithms have more recently been applied to model the complex relationship between sensor signals and multi-articulated hand postures. Future research should focus on increasing recognition accuracy with larger hand gesture datasets, improving reliability and robustness for daily use outside of the laboratory, and developing softer, less obtrusive interfaces.
Qiu J, Lo FP-W, Jiang S, et al., 2021, Counting bites and recognizing consumed food from videos for passive dietary monitoring., IEEE Journal of Biomedical and Health Informatics, Vol: 25, Pages: 1471-1482, ISSN: 2168-2194
Dietary intake assessment in epidemiological studies is predominantly based on self-reports, which are subjective, inefficient, and prone to error. Technological approaches are therefore emerging to provide objective dietary assessments. Using only egocentric dietary intake videos, this work aims to provide accurate estimation of individual dietary intake by recognizing consumed food items and counting the number of bites taken. This differs from previous studies that rely on inertial sensing to count bites, and from those that only recognize visible food items but not consumed ones. As a subject may not consume all food items visible in a meal, recognizing the consumed food items is more valuable. A new dataset of 1,022 dietary intake video clips was constructed to validate our concept of bite counting and consumed food item recognition from egocentric videos. 12 subjects participated and 52 meals were captured. A total of 66 unique food items, including food ingredients and drinks, were labelled in the dataset, along with a total of 2,039 labelled bites. Deep neural networks were used to perform bite counting and food item recognition in an end-to-end manner. Experiments have shown that counting bites directly from video clips can reach 74.15% top-1 accuracy (classifying between 0-4 bites in 20-second clips), and an MSE value of 0.312 (when using regression). Our experiments on video-based food recognition also show that recognizing consumed food items is indeed harder than recognizing visible ones, with a drop of 25% in F1 score.
Chen G, Jia W, Zhao Y, et al., 2021, Food/non-food classification of real-life egocentric images in low- and middle-income countries based on image tagging features, Frontiers in Artificial Intelligence, Vol: 4, ISSN: 2624-8212
Malnutrition, including both undernutrition and obesity, is a significant problem in low- and middle-income countries (LMICs). In order to study malnutrition and develop effective intervention strategies, it is crucial to evaluate nutritional status in LMICs at the individual, household, and community levels. In a multinational research project supported by the Bill & Melinda Gates Foundation, we have been using a wearable technology to conduct objective dietary assessment in sub-Saharan Africa. Our assessment includes multiple diet-related activities in urban and rural families, including food sources (e.g., shopping, harvesting, and gathering), preservation/storage, preparation, cooking, and consumption (e.g., portion size and nutrition analysis). Our wearable device ("eButton" worn on the chest) acquires real-life images automatically during wake hours at preset time intervals. The recorded images, in amounts of tens of thousands per day, are post-processed to obtain the information of interest. Although we expect future Artificial Intelligence (AI) technology to extract the information automatically, at present we utilize AI to separate the acquired images into two binary classes: images with (Class 1) and without (Class 0) edible items. As a result, researchers need only to study Class-1 images, reducing their workload significantly. In this paper, we present a composite machine learning method to perform this classification, meeting the specific challenges of high complexity and diversity in the real-world LMIC data. Our method consists of a deep neural network (DNN) and a shallow learning network (SLN) connected by a novel probabilistic network interface layer. After presenting the details of our method, an image dataset acquired from Ghana is utilized to train and evaluate the machine learning system. Our comparative experiment indicates that the new composite method performs better than the conventional deep learning method assessed by integra
Sun Y, Lo FP-W, Lo B, 2021, Light-weight internet-of-things device authentication, encryption and key distribution using end-to-end neural cryptosystems, IEEE Internet of Things Journal, ISSN: 2327-4662
Device authentication, encryption, and key distribution are of vital importance to any Internet-of-Things (IoT) systems, such as the new smart city infrastructures. This is due to the concern that attackers could easily exploit the lack of strong security in IoT devices to gain unauthorized access to the system or to hijack IoT devices to perform denial-of-service attacks on other networks. With the rise of fog and edge computing in IoT systems, increasing numbers of IoT devices have been equipped with computing capabilities to perform data analysis with deep learning technologies. Deep learning on edge devices can be deployed in numerous applications, such as local cardiac arrhythmia detection on a smart sensing patch, but it is rarely applied to device authentication and wireless communication encryption. In this paper, we propose a novel lightweight IoT device authentication, encryption, and key distribution approach using neural cryptosystems and a binary latent space. The neural cryptosystems adopt three types of end-to-end encryption schemes: symmetric, public-key, and without keys. A series of experiments were conducted to test the performance and security strength of the proposed neural cryptosystems. The experimental results demonstrate the potential of this novel approach as a promising security and privacy solution for the next generation of IoT systems.
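The paper's learned neural cryptosystems are not reproduced here, but the binary-latent-space interface they operate over can be illustrated with a deliberately simple, non-learned stand-in: binarise a latent vector, then mix it with a shared key. XOR plays the role of the learned key-dependent transform, and all sizes are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def binarize(latent):
    """Map a real-valued latent vector into a binary latent space."""
    return (latent > 0).astype(np.uint8)

def sym_encrypt(bits, key):
    return bits ^ key    # toy key-dependent mixing; a trained network in the paper

def sym_decrypt(cipher, key):
    return cipher ^ key  # XOR is its own inverse

latent = rng.normal(size=128)                          # stand-in for an encoder output
bits = binarize(latent)
key = rng.integers(0, 2, size=128, dtype=np.uint8)     # shared symmetric key
cipher = sym_encrypt(bits, key)
wrong_key = rng.integers(0, 2, size=128, dtype=np.uint8)
```

The legitimate receiver recovers the latent bits exactly, while decryption with a wrong key flips roughly half of them; the learned schemes replace XOR with trained encoder/decoder networks.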
Zhang D, Wang R, Lo B, 2021, Surgical gesture recognition based on bidirectional multi-layer independently RNN with explainable spatial feature extraction, IEEE International Conference on Robotics and Automation (ICRA) 2021, Publisher: IEEE
Minimally invasive surgery mainly consists of a series of sub-tasks, which can be decomposed into basic gestures or contexts. As a prerequisite of autonomous operation, surgical gesture recognition can assist motion planning and decision-making, and build up context-aware knowledge to improve the control quality of the surgical robot. In this work, we aim to develop an effective surgical gesture recognition approach with an explainable feature extraction process. A Bidirectional Multi-Layer independently RNN (BML-indRNN) model is proposed in this paper, while spatial feature extraction is implemented via fine-tuning of a Deep Convolutional Neural Network (DCNN) model constructed based on the VGG architecture. To eliminate the black-box effects of the DCNN, Gradient-weighted Class Activation Mapping (Grad-CAM) is employed. It provides explainable results by showing the regions of the surgical images that have a strong relationship with the surgical gesture classification results. The proposed method was evaluated on the suturing task with data obtained from the publicly available JIGSAWS database. Comparative studies were conducted to verify the proposed framework. Results indicated that the testing accuracy for the suturing task based on our proposed method is 87.13%, which outperforms most of the state-of-the-art algorithms.
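Grad-CAM itself reduces to a small computation once the last-convolutional-layer activations and the gradients of the class score with respect to them are available; a framework-free NumPy sketch (array shapes are illustrative only):

```python
import numpy as np

def grad_cam(activations, gradients):
    """activations, gradients: (C, H, W) for the target class at the last conv layer."""
    weights = gradients.mean(axis=(1, 2))             # global-average-pool the gradients
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over channels -> (H, W)
    cam = np.maximum(cam, 0)                          # ReLU keeps positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalise to [0, 1] for overlay
    return cam

# Toy check: channel 0 fires top-left and carries all the gradient signal,
# so the class-activation map should highlight the top-left corner.
acts = np.zeros((2, 4, 4)); acts[0, 0, 0] = 1.0; acts[1, 3, 3] = 1.0
grads = np.zeros((2, 4, 4)); grads[0] = 1.0
cam = grad_cam(acts, grads)
```

In practice the normalised map is upsampled to the input resolution and overlaid on the surgical frame.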
Zhang C, Liu S, Han F, et al., 2021, Hybrid manifold-deep convolutional neural network for sleep staging, Methods, Pages: 1-9, ISSN: 1046-2023
Analysis of the electroencephalogram (EEG) is a crucial diagnostic criterion for many sleep disorders, of which sleep staging is an important component. Manual stage classification is a labor-intensive process and usually suffers from many subjective factors. Recently, more and more computer-aided techniques have been applied to this task, among which the deep convolutional neural network has performed well as an effective automatic classification model. Although some comprehensive models have been developed to improve classification results, the accuracy required for clinical applications has not been reached, due to the lack of sufficient labeled data and the limitations of extracting latent discriminative EEG features. Therefore, we propose a novel hybrid manifold-deep convolutional neural network with hyperbolic attention. To overcome the shortage of labeled data, we adopt a semi-supervised training scheme as an optimal solution. To extract latent feature representations, we introduce a manifold learning module and a hyperbolic module to capture more discriminative information. Eight subjects from the public dataset were used to evaluate our pipeline; the model achieved 89% accuracy, 70% precision, 80% sensitivity, a 72% F1-score and a kappa coefficient of 78%. The proposed model demonstrates a powerful ability to extract feature representations and achieves promising results using the semi-supervised training scheme. Our approach therefore shows strong potential for future clinical development.
Lei J, Qiu J, Lo FP-W, et al., 2021, Assessing individual dietary intake in food sharing scenarios with food and human pose detection, 6th International Workshop on Multimedia Assisted Dietary Management (MADiMa 2020), Publisher: Springer International Publishing, Pages: 549-557, ISSN: 0302-9743
Food sharing and communal eating are very common in some countries. To assess individual dietary intake in food sharing scenarios, this work proposes a vision-based approach to first capturing the food sharing scenario with a 360-degree camera, and then using a neural network to infer different eating states of each individual based on their body pose and relative positions to the dishes. The number of bites each individual has taken of each dish is then deduced by analyzing the inferred eating states. A new dataset with 14 panoramic food sharing videos was constructed to validate our approach. The results show that our approach is able to reliably predict different eating states as well as each individual's bite count with respect to each dish in food sharing scenarios.
Li W, Tsai Y-Y, Yang G-Z, et al., 2021, A novel endoscope design using spiral technique for robotic-assisted endoscopy insertion, 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, Pages: 3119-3124
Gastrointestinal (GI) endoscopy is a conventional and prevalent procedure used to diagnose and treat diseases in the digestive tract. This procedure requires inserting an endoscope equipped with a camera and instruments inside a patient to the target of interest. To manoeuvre the endoscope, an endoscopist rotates the knob at the handle to change the direction of the distal tip and applies a feeding force to advance the endoscope. However, due to the nature of the design, this often causes a looping problem during insertion, making it difficult to advance the endoscope further into the deeper sections of the tract, such as the transverse and ascending colon. To this end, in this paper, we propose a novel robotic endoscope which is covered by a rotating screw-like sheath and uses a spiral insertion technique to generate 'pull' forces at the distal tip of the endoscope to facilitate insertion. The whole shaft of the endoscope can be actively rotated, providing a crawling ability from the attached spiral sheath. With the redundant control of a spring-like continuum joint, the bending tip is capable of maintaining its orientation to assist endoscope navigation. To test its functions and feasibility in addressing the looping problem, three experiments were carried out. The first two experiments analysed the kinematics of the device and tested its ability to hold the distal tip at different orientation angles during spiral insertion. In the third experiment, we inserted the device into a bent colon phantom to evaluate the effectiveness of the proposed design against looping when advancing through a curved section of a colon. Results demonstrate the crawling ability of the spiral insertion technique and verify its potential for clinical application.
Gu X, Guo Y, Deligianni F, et al., 2021, Cross-subject and cross-modal transfer for generalized abnormal gait pattern recognition, IEEE Transactions on Neural Networks and Learning Systems, Vol: 32, Pages: 546-560, ISSN: 1045-9227
For abnormal gait recognition, pattern-specific features indicating abnormalities are interleaved with the subject-specific differences representing biometric traits. Deep representations are, therefore, prone to overfitting, and the models derived cannot generalize well to new subjects. Furthermore, there is limited availability of abnormal gait data obtained from precise Motion Capture (Mocap) systems because of regulatory issues and slow adaptation of new technologies in health care. On the other hand, data captured from markerless vision sensors or wearable sensors can be obtained in home environments, but noises from such devices may prevent the effective extraction of relevant features. To address these challenges, we propose a cascade of deep architectures that can encode cross-modal and cross-subject transfer for abnormal gait recognition. Cross-modal transfer maps noisy data obtained from RGBD and wearable sensors to accurate 4-D representations of the lower limb and joints obtained from the Mocap system. Subsequently, cross-subject transfer allows disentangling subject-specific from abnormal pattern-specific gait features based on a multiencoder autoencoder architecture. To validate the proposed methodology, we obtained multimodal gait data based on a multicamera motion capture system along with synchronized recordings of electromyography (EMG) data and 4-D skeleton data extracted from a single RGBD camera. Classification accuracy was improved significantly in both Mocap and noisy modalities.
Chen X, Jiang S, Lo B, 2020, Subject-independent slow fall detection with wearable sensors via deep learning, 2020 IEEE SENSORS, Publisher: IEEE, Pages: 1-4
Falls among the elderly are a major healthcare challenge: a fall can lead to disability and even mortality. With the current Covid-19 pandemic, insufficient resources can be provided for the care of the elderly, and care workers often may not be able to visit them. A fall may therefore go undetected or be detected late, leading to serious harm or consequences. Automatic fall detection systems could provide the necessary detection and warnings for timely intervention. Although many sensor-based fall detection systems have been proposed, most focus on sudden falls and have not considered the slow fall scenario, a typical fall instance for elderly fallers. In this paper, a robust activity (RA) and slow fall detection system is proposed. The system consists of a waist-worn wearable sensor embedded with an inertial measurement unit (IMU) and a barometer, and a reference ambient barometer. A deep neural network (DNN) is developed for fusing the sensor data and classifying fall events. The results show that the IMU-barometer design yields better detection of fall events and that the DNN approach (90.33% accuracy) outperforms traditional machine learning algorithms.
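The core of a barometer-assisted slow-fall rule can be sketched without the paper's DNN: convert the waist-minus-reference pressure difference to height and flag a sustained drop that lacks a high-g impact. The sampling window, thresholds, and the 12 Pa-per-metre constant are illustrative assumptions, not the paper's values.

```python
import numpy as np

PA_PER_M = 12.0   # approximate pressure change per metre of height near sea level

def detect_slow_fall(p_waist, p_ref, acc_mag_g, drop_m=0.5, max_acc_g=2.0):
    """Flag a sustained height drop with no high-g impact within one window."""
    height = -(p_waist - p_ref) / PA_PER_M     # differential pressure -> relative height
    drop = height[0] - height[-1]              # net descent across the window
    return bool(drop > drop_m and acc_mag_g.max() < max_acc_g)

n, p0 = 50, 101325.0
p_ref = np.full(n, p0)
slow = p0 + np.linspace(0.0, 9.6, n)               # waist sinks ~0.8 m, pressure rises
acc_quiet = np.full(n, 1.0)                        # ~1 g throughout: no impact spike
acc_spike = acc_quiet.copy(); acc_spike[-1] = 3.0  # hard impact: a *sudden* fall instead
```

The quiet descending trace fires the slow-fall rule, while the spiking trace is left to a conventional impact-based detector.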
Chen X, Jiang S, Li Z, et al., 2020, A pervasive respiratory monitoring sensor for COVID-19 pandemic, IEEE Open Journal of Engineering in Medicine and Biology, Vol: 2, Pages: 11-16, ISSN: 2644-1276
Goal: The SARS-CoV-2 viral infection can cause severe acute respiratory syndrome, disturbing regular breathing and leading to continuous coughing. Automatic respiration monitoring systems could provide the necessary metrics and warnings for timely intervention, especially for those with mild symptoms. Current respiration detection systems are expensive and too obtrusive for any large-scale deployment. Thus, a low-cost pervasive ambient sensor is proposed. Methods: A barometer is placed on the working desk, and a novel signal processing algorithm with a sparsity-based filter is developed to remove similar-frequency noise. Three modes (coughing, breathing and others) are considered to detect coughing and estimate different respiration rates. Results: The proposed system achieved 97.33% accuracy in cough detection and 98.98% specificity in respiration rate estimation. Conclusions: This system could be used as an effective screening tool for detecting subjects suffering from COVID-19 symptoms and enable large-scale monitoring of patients diagnosed with or recovering from COVID-19.
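Stripped of the paper's sparsity-based filtering, respiration-rate estimation from a desk barometer reduces to finding the dominant spectral peak in the typical breathing band; a minimal NumPy sketch, where the band limits and sampling rate are assumptions:

```python
import numpy as np

def respiration_rate_bpm(pressure, fs, band=(0.1, 0.7)):
    """Estimate respiration rate (breaths/min) from the dominant peak in `band` Hz."""
    x = pressure - pressure.mean()                 # remove the static pressure offset
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    mag = np.abs(np.fft.rfft(x))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return 60.0 * freqs[mask][np.argmax(mag[mask])]

fs = 10
t = np.arange(0, 60, 1.0 / fs)
sig = 101325.0 + 2.0 * np.sin(2 * np.pi * 0.25 * t)   # 0.25 Hz -> 15 breaths/min
rate = respiration_rate_bpm(sig, fs)
```

A 60-second window gives a frequency resolution of 1/60 Hz, i.e. 1 breath/min, which is adequate for screening-level monitoring.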
Zhang G, Mei Z, Zhang Y, et al., 2020, A noninvasive blood glucose monitoring system based on smartphone PPG signal processing and machine learning, IEEE Transactions on Industrial Informatics, Vol: 16, Pages: 7209-7218, ISSN: 1551-3203
Blood glucose level needs to be monitored regularly to manage the health condition of hyperglycemic patients. The current glucose measurement approaches still rely on invasive techniques which are uncomfortable and raise the risk of infection. To facilitate daily care at home, in this article, we propose an intelligent, noninvasive blood glucose monitoring system which can differentiate a user's blood glucose level into normal, borderline, and warning based on smartphone photoplethysmography (PPG) signals. The main implementation processes of the proposed system include 1) a novel algorithm for acquiring PPG signals using only smartphone camera videos; 2) a fitting-based sliding window algorithm to remove varying degrees of baseline drifts and segment the signal into single periods; 3) extracting characteristic features from the Gaussian functions by comparing PPG signals at different blood glucose levels; 4) categorizing the valid samples into three glucose levels by applying machine learning algorithms. Our proposed system was evaluated on a data set of 80 subjects. Experimental results demonstrate that the system can separate valid signals from invalid ones at an accuracy of 97.54% and the overall accuracy of estimating the blood glucose levels reaches 81.49%. The proposed system provides a reference for the introduction of noninvasive blood glucose technology into daily or clinical applications. This article also indicates that smartphone-based PPG signals have great potential to assess an individual's blood glucose level.
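Step 2) of the pipeline above, fitting-based baseline removal, can be sketched generically: fit a low-order polynomial to each window and subtract it, leaving the pulsatile component. The window length, polynomial order, and synthetic drift below are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def remove_baseline(ppg, fs, win_s=2.0, order=3):
    """Subtract a per-window polynomial fit to remove slow baseline drift."""
    out = np.empty_like(ppg, dtype=float)
    w = max(int(win_s * fs), order + 1)
    for start in range(0, len(ppg), w):
        seg = ppg[start:start + w]
        if len(seg) <= order:                     # tail too short to fit: just centre it
            out[start:start + w] = seg - seg.mean()
            continue
        t = np.arange(len(seg))
        baseline = np.polyval(np.polyfit(t, seg, order), t)
        out[start:start + w] = seg - baseline
    return out

fs = 100
t = np.arange(0, 6, 1.0 / fs)
pulse = np.sin(2 * np.pi * 1.5 * t)                   # ~90 bpm pulsatile component
drift = 5.0 * t / 6.0 + np.sin(2 * np.pi * t / 30.0)  # slow baseline wander
ppg = pulse + drift
clean = remove_baseline(ppg, fs)
```

Because each 2-second window holds several pulse cycles but only a near-linear slice of the drift, the cubic fit absorbs the drift while leaving the pulse waveform largely intact.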
Di Camillo B, Nicosia G, Buffa F, et al., 2020, Guest editorial data science in smart healthcare: Challenges and opportunities, IEEE Journal of Biomedical and Health Informatics, Vol: 24, Pages: 3041-3043, ISSN: 2168-2194
The fifteen articles in this special section focus on data science for smart healthcare applications. A shift toward a data-driven socio-economic health model is occurring. This is the result of the increased volume, velocity and variety of data collected from the public and private sectors in healthcare, and biology in general. In the past five years, there has been an impressive development of computational intelligence and informatics methods for application to health and biomedical science. However, the effective use of data to address the scale and scope of human health problems has yet to realize its full potential. The barriers limiting the practical impact of standard data mining and machine learning methods have been inherent to the characteristics of health data: besides their volume ('big data'), these data are challenging due to their heterogeneity, complexity, variability and dynamic nature. Finally, data management and the interpretability of results have been limited by practical challenges in implementing new and existing standards across different health providers and research institutions. The scope of this special issue is to discuss some of these challenges and opportunities in health and biological data science, with particular focus on the infrastructure, software, methods and algorithms needed to analyze large datasets in biological and clinical research.
Mu F, Gu X, Guo Y, et al., 2020, Unsupervised domain adaptation for position-independent IMU based gait analysis, 2020 IEEE SENSORS, Publisher: IEEE, Pages: 1-4
Inertial measurement units (IMUs) together with advanced machine learning algorithms have enabled pervasive gait analysis. However, the worn positions of IMUs can vary due to movement and are difficult to standardize across different trials, causing signal variations. Such variation introduces a bias between the underlying distributions of training and testing data and hinders the generalization ability of a computational gait analysis model. In this paper, we propose a position-independent IMU-based gait analysis framework based on unsupervised domain adaptation, which transfers knowledge from the positions seen during training to a novel position without labels. Our framework was validated on gait event detection and pathological gait pattern recognition tasks with different computational models, and achieved consistently high performance on both tasks.
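The paper's adaptation network is not reproduced here, but the underlying idea of aligning feature statistics between the trained positions (source) and the novel position (target) can be illustrated with classical CORAL alignment in NumPy. CORAL is a stand-in for the paper's method, chosen because it fits in a few lines.

```python
import numpy as np

def coral(source, target, eps=1e-5):
    """Re-colour source features so their covariance matches the target's (CORAL)."""
    cs = np.cov(source, rowvar=False) + eps * np.eye(source.shape[1])
    ct = np.cov(target, rowvar=False) + eps * np.eye(target.shape[1])
    whiten = np.linalg.cholesky(np.linalg.inv(cs))   # Cs^(-1/2) (up to rotation)
    colour = np.linalg.cholesky(ct)                  # Ct^(1/2)  (up to rotation)
    return (source - source.mean(0)) @ whiten @ colour.T + target.mean(0)

rng = np.random.default_rng(1)
src = rng.normal(size=(500, 4)) @ np.diag([1.0, 2.0, 3.0, 4.0]) + 5.0  # one IMU position
tgt = rng.normal(size=(500, 4))                                        # novel position
aligned = coral(src, tgt)
```

After alignment, a model trained on the source features sees target-like first- and second-order statistics, which is the same intuition driving the learned adaptation.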
Chen J, Zhang D, Munawar A, et al., 2020, Supervised semi-autonomous control for surgical robot based on Bayesian optimization, 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, Pages: 2943-2949
The recent development of Robot-Assisted Minimally Invasive Surgery (RAMIS) has brought much benefit in easing the performance of complex Minimally Invasive Surgery (MIS) tasks, leading to better clinical outcomes. Compared to direct master-slave manipulation, semi-autonomous control of the surgical robot can enhance the efficiency of the operation, particularly for repetitive tasks. However, operating in a highly dynamic in-vivo environment is complex, and supervisory control functions should be included to ensure flexibility and safety during the autonomous control phase. This paper presents a haptic rendering interface to enable supervised semi-autonomous control for a surgical robot. Bayesian optimization is used to tune user-specific parameters during the surgical training process. User studies were conducted on a customized simulator for validation. Detailed comparisons are made between operation with and without the supervised semi-autonomous control mode in terms of the number of clutching events, task completion time, master robot end-effector trajectory and average control speed of the slave robot. The effectiveness of the Bayesian optimization is also evaluated, demonstrating that the optimized parameters can significantly improve users' performance. Results indicate that the proposed control method can reduce the operator's workload and enhance operation efficiency.
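The tuning loop can be sketched with a minimal 1-D Gaussian-process surrogate and an expected-improvement acquisition; the cost function, kernel length-scale, and search range below are hypothetical stand-ins for the user-specific parameters in the paper.

```python
import numpy as np
from math import erf, sqrt, pi

def rbf(a, b, ls=0.3):
    """Squared-exponential kernel between 1-D point sets."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_posterior(x_tr, y_tr, x_q, noise=1e-6):
    """GP posterior mean and std at query points (unit prior variance)."""
    K = rbf(x_tr, x_tr) + noise * np.eye(len(x_tr))
    Ks = rbf(x_tr, x_q)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ y_tr
    var = np.clip(1.0 - np.sum(Ks * sol, axis=0), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    """EI for minimisation: expected amount by which a query lands below `best`."""
    z = (best - mu) / sigma
    cdf = np.array([0.5 * (1.0 + erf(v / sqrt(2.0))) for v in z])
    pdf = np.exp(-0.5 * z ** 2) / sqrt(2.0 * pi)
    return (best - mu) * cdf + sigma * pdf

cost = lambda x: (x - 0.6) ** 2          # hypothetical per-user cost to minimise
x_tr = np.array([0.0, 0.5, 1.0])
y_tr = cost(x_tr)
x_q = np.linspace(0.0, 1.0, 101)
for _ in range(5):                       # five acquisition steps
    mu, sd = gp_posterior(x_tr, y_tr, x_q)
    x_next = x_q[np.argmax(expected_improvement(mu, sd, y_tr.min()))]
    x_tr = np.append(x_tr, x_next)
    y_tr = np.append(y_tr, cost(x_next))
best_x = x_tr[np.argmin(y_tr)]
```

Each step refits the surrogate to the evaluations so far and queries the point with the highest expected improvement, concentrating evaluations near the optimum rather than sweeping the whole range.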
Zhang D, Lo FP-W, Zheng J-Q, et al., 2020, Data-driven microscopic pose and depth estimation for optical microrobot manipulation, ACS Photonics, Vol: 7, Pages: 3003-3014, ISSN: 2330-4022
Optical microrobots have a wide range of applications in biomedical research for both in vitro and in vivo studies. In most microrobotic systems, the video captured by a monocular camera is the only way for visualizing the movements of microrobots, and only planar motion, in general, can be captured by a monocular camera system. Accurate depth estimation is essential for 3D reconstruction or autofocusing of microplatforms, while the pose and depth estimation are necessary to enhance the 3D perception of the microrobotic systems to enable dexterous micromanipulation and other tasks. In this paper, we propose a data-driven method for pose and depth estimation in an optically manipulated microrobotic system. Focus measurement is used to obtain features for Gaussian Process Regression (GPR), which enables precise depth estimation. For mobile microrobots with varying poses, a novel method is developed based on a deep residual neural network with the incorporation of prior domain knowledge about the optical microrobots encoded via GPR. The method can simultaneously track microrobots with complex shapes and estimate the pose and depth values of the optical microrobots. Cross-validation has been conducted to demonstrate the submicron accuracy of the proposed method and precise pose and depth perception for microrobots. We further demonstrate the generalizability of the method by adapting it to microrobots of different shapes using transfer learning with few-shot calibration. Intuitive visualization is provided to facilitate effective human-robot interaction during micromanipulation based on pose and depth estimation results.
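The focus-measurement step that feeds the GPR can be illustrated with a standard variance-of-Laplacian sharpness score over a synthetic defocus stack. The blur model and depth values here are illustrative assumptions, and the paper's GPR stage is omitted.

```python
import numpy as np

def laplacian_var(img):
    """Sharpness score: variance of the 4-neighbour Laplacian response."""
    lap = (-4.0 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

def estimate_depth(stack, depths):
    """Assign the depth of the sharpest slice in a defocus stack."""
    scores = [laplacian_var(s) for s in stack]
    return depths[int(np.argmax(scores))]

def box_blur(img, n):
    """Crude defocus model: n passes of a 5-point neighbourhood average."""
    for _ in range(n):
        p = np.pad(img, 1, mode="edge")
        img = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2]
               + p[1:-1, 2:] + p[1:-1, 1:-1]) / 5.0
    return img

target = (np.indices((32, 32)).sum(axis=0) % 2).astype(float)   # checkerboard scene
depths = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])                  # focal offsets (a.u.)
stack = [box_blur(target, 2 * int(abs(d))) for d in depths]     # blur grows off-focus
```

In the paper, scalar focus measures like this become the features on which the GPR interpolates, turning the discrete best-slice estimate into a continuous, sub-micron depth prediction.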
Zhang D, Barbot A, Lo B, et al., 2020, Distributed force control for microrobot manipulation via planar multi-spot optical tweezer, Advanced Optical Materials, Vol: 8, Pages: 1-15, ISSN: 2195-1071
Optical tweezers (OT) represent a versatile tool for micro-manipulation. To avoid damage to living cells caused by direct laser illumination, microrobots controlled by OT can be used for the manipulation of cells or living organisms at the microscopic scale. Translation and planar rotation of microrobots can be realized using a multi-spot planar OT. However, out-of-plane manipulation of microrobots is difficult to achieve with a planar OT. This paper presents a distributed manipulation scheme based on multiple laser spots, which can control the out-of-plane pose of a microrobot along multiple axes. Different microrobot designs have been investigated and fabricated for experimental validation. The main contributions of this paper include: i) development of a generic model for the structural design of microrobots which enables multi-dimensional (6D) control via a conventional multi-spot OT; ii) introduction of distributed force control for microrobot manipulation based on characteristic distance and power intensity distribution. Experiments are performed to demonstrate the effectiveness of the proposed method and its potential applications, which include the indirect manipulation of micro-objects.
Chen C-M, Anastasova S, Zhang K, et al., 2020, Towards wearable and flexible sensors and circuits integration for stress monitoring, IEEE Journal of Biomedical and Health Informatics, Vol: 24, Pages: 2208-2215, ISSN: 2168-2194
Excessive stress is one of the main causes of mental illness. Long-term exposure to stress can affect one's physiological wellbeing (such as hypertension) and psychological condition (such as depression). Multisensory information such as heart rate variability (HRV) and pH can provide suitable indicators of mental and physical stress. This paper proposes a novel approach for stress condition monitoring using disposable flexible sensors. By integrating flexible amplifiers with a commercially available flexible polyvinylidene difluoride (PVDF) mechanical deformation sensor and a pH-type chemical sensor, the proposed system can detect arterial pulses from the neck and pH levels from sweat on the back of the body. The system uses organic thin film transistor (OTFT)-based signal amplification front-end circuits with modifications to accommodate the dynamic signal ranges obtained from the sensors. The OTFTs were manufactured on a low-cost flexible polyethylene naphthalate (PEN) substrate using a coater capable of Roll-to-Roll (R2R) deposition. The proposed system can capture physiological indicators, with data interrogated by Near Field Communication (NFC). The device has been successfully tested with healthy subjects, demonstrating its feasibility for real-time stress monitoring.
Zhang D, Wu Z, Chen J, et al., 2020, Automatic microsurgical skill assessment based on cross-domain transfer learning, IEEE Robotics and Automation Letters, Vol: 5, Pages: 4148-4155, ISSN: 2377-3766
The assessment of microsurgical skills for Robot-Assisted Microsurgery (RAMS) still relies primarily on subjective observations and expert opinions; a general and automated evaluation method is desirable. Deep neural networks can be used for skill assessment from raw kinematic data, with the advantages of being objective and efficient. However, one of the major issues of deep learning for the analysis of surgical skills is that it requires a large database to train the desired model, and the training process can be time-consuming. This letter presents a transfer learning scheme for training a model with limited RAMS datasets for microsurgical skill assessment. An in-house Microsurgical Robot Research Platform Database (MRRPD) is built with data collected from a microsurgical robot research platform (MRRP). It is used to verify the proposed cross-domain transfer learning for RAMS skill level assessment. The model is fine-tuned after training with the data obtained from the MRRP. Moreover, microsurgical tool tracking is developed to provide visual feedback, while task-specific metrics and other general evaluation metrics are provided to the operator as a reference. The proposed method has been shown to offer the potential to guide operators towards a higher level of skill in microsurgical operation.
Kassanos P, Berthelot M, Kim JA, et al., 2020, Smart sensing for surgery from tethered devices to wearables and implantables, IEEE Systems Man and Cybernetics Magazine, Vol: 6, Pages: 39-48, ISSN: 2333-942X
Recent developments in wearable electronics have fueled research into new materials, sensors, and microelectronic technologies for the realization of devices that have increased functionality and performance. This is further enhanced by advances in fabrication methods and printing techniques, stimulating research on implantables and the advancement of existing medical devices. This article provides an overview of new designs, embodiments, fabrication methods, instrumentation, and informatics as well as the challenges in developing and deploying such devices and clinical applications that can benefit from them. The need for and use of these technologies across the perioperative surgical-care pathway are highlighted, along with a vision for the future and how these tools can be adopted by potential end users and health-care systems.
Lo FPW, Sun Y, Qiu J, et al., 2020, Image-based food classification and volume estimation for dietary assessment: a review., IEEE Journal of Biomedical and Health Informatics, Vol: 24, Pages: 1926-1939, ISSN: 2168-2194
A daily dietary assessment method named 24-hour dietary recall has commonly been used in nutritional epidemiology studies to capture detailed information about the food eaten by participants and to help understand their dietary behaviour. However, in this self-reporting technique, the food types and portion sizes reported depend highly on users' subjective judgement, which may lead to biased and inaccurate dietary analysis results. As a result, a variety of visual-based dietary assessment approaches have been proposed recently. While these methods show promise in tackling issues in nutritional epidemiology studies, several challenges and forthcoming opportunities, as detailed in this study, still exist. This study provides an overview of computing algorithms, mathematical models and methodologies used in the field of image-based dietary assessment. It also provides a comprehensive comparison of state-of-the-art approaches in food recognition and volume/weight estimation in terms of their processing speed, model accuracy, efficiency and constraints. This is followed by a discussion of deep learning methods and their efficacy in dietary assessment. After a comprehensive exploration, we found that integrated dietary assessment systems combining different approaches could be the potential solution to tackling the challenges of accurate dietary intake assessment.
Varghese RJ, Nguyen A, Burdet E, et al., 2020, Nonlinearity compensation in a multi-DoF shoulder sensing exosuit for real-time teleoperation, 3rd IEEE International Conference on Soft Robotics (RoboSoft), Publisher: IEEE, Pages: 668-675
The compliant nature of soft wearable robots makes them ideal for complex multiple degrees of freedom (DoF) joints, but also introduces additional structural nonlinearities. Intuitive control of these wearable robots requires robust sensing to overcome the inherent nonlinearities. This paper presents a joint kinematics estimator for a bio-inspired multi-DoF shoulder exosuit capable of compensating for the encountered nonlinearities. To overcome the nonlinearities and hysteresis inherent to the soft and compliant nature of the suit, we developed a deep learning-based method to map the sensor data to the joint space. The experimental results show that the new learning-based framework outperforms recent state-of-the-art methods by a large margin while achieving a 12 ms inference time using only a GPU-based edge-computing device. The effectiveness of our combined exosuit and learning framework is demonstrated through real-time teleoperation with a simulated NAO humanoid robot.
Xiong J, Liang X, Zhao L, et al., 2020, Improving accuracy of heart failure detection using data refinement, Entropy: international and interdisciplinary journal of entropy and information studies, Vol: 22, Pages: 520-520, ISSN: 1099-4300
Due to the wide inter- and intra-individual variability, short-term heart rate variability (HRV) analysis (usually 5 min) might lead to inaccuracy in detecting heart failure. Therefore, RR interval segmentation, which can reflect the individual heart condition, has been a key research challenge for accurate detection of heart failure. Previous studies mainly focus on analyzing the entire 24-h ECG recordings from all individuals in the database, which often leads to poor detection rates. In this study, we propose a set of data refinement procedures, which can automatically extract heart failure segments and yield better detection of heart failure. The procedure comprises three steps: (1) select fast heart rate sequences, (2) apply a dynamic time warping (DTW) measure to filter out dissimilar segments, and (3) pick out individuals with large numbers of segments preserved. A physical threshold-based Sample Entropy (SampEn) was applied to distinguish congestive heart failure (CHF) subjects from normal sinus rhythm (NSR) ones, and results using the traditional threshold were also discussed. Experiments on the PhysioNet/MIT RR Interval Databases showed that in SampEn analysis (embedding dimension m = 1, tolerance threshold r = 12 ms and time series length N = 300), the accuracy after data refinement increased from 75.07% to 90.46%. Meanwhile, for the proposed procedures, the area under the receiver operating characteristic curve (AUC) reached 95.73%, which outperforms the original method (i.e., without applying the proposed data refinement procedures) with an AUC of 76.83%. The results show that our proposed data refinement procedures can significantly improve the accuracy of heart failure detection.
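As a rough illustration of the entropy measure described in this abstract, the following is a minimal, unoptimized sketch of Sample Entropy with a physical (absolute, in milliseconds) tolerance threshold, assuming an RR-interval series in ms. The function name and the O(N²) pairwise matching are illustrative only, not the authors' implementation.

```python
import math

def sample_entropy(rr, m=1, r=12.0):
    # rr: RR-interval series in ms; m: embedding dimension;
    # r: physical tolerance threshold in ms (absolute, not a fraction of SD)
    n = len(rr)

    def count_matches(k):
        # count template pairs of length k within Chebyshev distance r,
        # excluding self-matches (i < j)
        templates = [rr[i:i + k] for i in range(n - k + 1)]
        c = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    c += 1
        return c

    b = count_matches(m)      # matches at length m
    a = count_matches(m + 1)  # matches at length m + 1
    if a == 0 or b == 0:
        return float("inf")   # SampEn undefined when no matches occur
    return -math.log(a / b)
```

A highly regular series yields a value near zero, while a series with no close templates at the given physical threshold yields an infinite (undefined) entropy.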
Varghese RJ, Lo BPL, Yang G-Z, 2020, Design and prototyping of a bio-inspired kinematic sensing suit for the shoulder joint: precursor to a multi-DoF shoulder exosuit, IEEE Robotics and Automation Letters, Vol: 5, Pages: 540-547, ISSN: 2377-3766
Soft wearable robots represent a promising new design paradigm for rehabilitation and active assistance applications. Their compliant nature makes them ideal for complex joints, but intuitive control of these robots requires robust and compliant sensing mechanisms. In this work, we introduce the sensing framework for a multiple degrees-of-freedom shoulder exosuit capable of sensing the kinematics of the joint. The proposed sensing system is inspired by the body's embodied kinematic sensing, and the organisation of muscles and muscle synergies responsible for shoulder movements. A motion-capture-based evaluation study of the developed framework confirmed conformance with the behaviour of the muscles that inspired its routing. This validation of the tendon-routing hypothesis allows for it to be extended to the actuation framework of the exosuit in the future. The sensor-to-joint-space mapping is based on multivariate multiple regression and derived using an Artificial Neural Network. Evaluation of the derived mapping achieved root mean square errors of ≈5.43° and ≈3.65° for the azimuth and elevation joint angles, measured over 29,500 frames (4+ minutes) of motion-capture data.
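The paper derives the sensor-to-joint-space mapping with an Artificial Neural Network; as a hedged sketch of the same multivariate multiple regression idea, the snippet below fits a least-squares linear baseline and computes the per-joint RMSE metric used in the evaluation. All function names, array shapes, and the linear model itself are assumptions for illustration, not the authors' code.

```python
import numpy as np

def fit_linear_mapping(S, Q):
    # S: (frames, n_sensors) tendon-sensor readings
    # Q: (frames, 2) joint angles [azimuth, elevation] in degrees
    # Least-squares baseline for the sensor-to-joint-space mapping
    # (the paper itself derives this mapping with an ANN).
    Sb = np.hstack([S, np.ones((S.shape[0], 1))])  # append bias column
    W, *_ = np.linalg.lstsq(Sb, Q, rcond=None)
    return W

def predict(W, S):
    # apply the fitted mapping to new sensor readings
    Sb = np.hstack([S, np.ones((S.shape[0], 1))])
    return Sb @ W

def rmse_per_joint(Q_true, Q_pred):
    # root-mean-square error reported separately per joint angle,
    # matching the per-angle figures quoted in the abstract
    return np.sqrt(np.mean((Q_true - Q_pred) ** 2, axis=0))
```

On synthetic data generated by a linear sensor model, this baseline recovers the mapping exactly; the ANN in the paper is needed precisely because the real suit's sensor-to-joint relationship is nonlinear.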
Jobarteh ML, McCrory MA, Lo B, et al., 2020, Development and validation of objective, passive dietary assessment Method for estimating food and nutrient intake in households in Low and Middle-Income Countries (LMICs): a study protocol, Current Developments in Nutrition, Vol: 4, Pages: 1-11, ISSN: 2475-2991
Malnutrition is a major concern in low- and middle-income countries (LMIC), but the full extent of nutritional deficiencies remains unknown largely due to lack of accurate assessment methods. This study seeks to develop and validate an objective, passive method of estimating food and nutrient intake in households in Ghana and Uganda. Household members (including under-5s and adolescents) are assigned a wearable camera device to capture images of their food intake during waking hours. Using custom software, images captured are then used to estimate an individual's food and nutrient (i.e., protein, fat, carbohydrate, energy, and micronutrients) intake. Passive food image capture and assessment provides an objective measure of food and nutrient intake in real time, minimizing some of the limitations associated with self-reported dietary intake methods. Its use in LMIC could potentially increase the understanding of a population's nutritional status, and the contribution of household food intake to the malnutrition burden. This project is registered at clinicaltrials.gov (NCT03723460).
Zhang Y, Guo Y, Yang P, et al., 2020, Epilepsy seizure prediction on EEG using common spatial pattern and convolutional neural network, IEEE Journal of Biomedical and Health Informatics, Vol: 24, Pages: 465-474, ISSN: 2168-2194
Epilepsy seizure prediction paves the way for timely warnings, allowing patients to take more active and effective intervention measures. Compared to seizure detection, which only identifies the inter-ictal state and the ictal state, far less research has been conducted on seizure prediction, because the high similarity between the pre-ictal state and the inter-ictal state makes them challenging to distinguish. In this paper, a novel solution for seizure prediction is proposed using common spatial pattern (CSP) and convolutional neural network (CNN). Firstly, artificial pre-ictal EEG signals are generated from the original ones by combining segmented pre-ictal signals, to solve the trial imbalance problem between the two states. Secondly, a feature extractor employing wavelet packet decomposition and CSP is designed to extract the distinguishing features in both the time domain and the frequency domain. It can improve overall accuracy while reducing the training time. Finally, a shallow CNN is applied to discriminate between the pre-ictal state and the inter-ictal state. Our proposed solution is evaluated on 23 patients' data from the Boston Children's Hospital-MIT scalp EEG dataset employing leave-one-out cross-validation, and it achieves a sensitivity of 92.2% and a false prediction rate of 0.12/h. Experimental results demonstrate that the proposed approach outperforms most state-of-the-art methods.
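To illustrate the CSP step of the feature extractor described in this abstract (the wavelet packet decomposition and the CNN are omitted), here is a minimal sketch of common spatial pattern filtering with log-variance features. The whitening-based formulation and all names are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def csp_filters(X_pre, X_inter, n_components=4):
    # X_pre, X_inter: lists of trials, each (channels, samples),
    # for the pre-ictal and inter-ictal classes respectively.
    # Returns spatial filters maximizing the variance ratio between classes.
    def avg_cov(trials):
        covs = []
        for x in trials:
            c = x @ x.T
            covs.append(c / np.trace(c))  # trace-normalized covariance
        return np.mean(covs, axis=0)

    C1, C2 = avg_cov(X_pre), avg_cov(X_inter)
    # solve the generalized eigenproblem C1 w = lambda (C1 + C2) w via whitening
    vals, vecs = np.linalg.eigh(C1 + C2)
    P = vecs @ np.diag(vals ** -0.5) @ vecs.T   # whitening matrix
    w_vals, w_vecs = np.linalg.eigh(P @ C1 @ P.T)
    W = w_vecs.T @ P                            # one spatial filter per row
    # keep filters from both ends of the eigenvalue spectrum
    idx = np.argsort(w_vals)
    pick = np.concatenate([idx[:n_components // 2], idx[-(n_components // 2):]])
    return W[pick]

def csp_features(W, x):
    # normalized log-variance features of a spatially filtered trial
    z = W @ x
    var = np.var(z, axis=1)
    return np.log(var / var.sum())
```

In the paper's pipeline, features like these (computed per wavelet packet sub-band) would feed the shallow CNN that discriminates pre-ictal from inter-ictal segments.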
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.