A primary motivation of our research is the monitoring of physical, physiological, and biochemical parameters - in any environment and without activity restriction or behaviour modification - using miniaturised, wireless Body Sensor Networks (BSNs). Key research issues currently being addressed include novel sensor designs, ultra-low-power microprocessor and wireless platforms, energy scavenging, biocompatibility, system integration and miniaturisation, processing-on-node technologies combined with novel ASIC design, autonomic sensor networks and lightweight communication protocols. Our research is aimed at addressing the future needs of life-long health, wellbeing and healthcare, particularly those related to demographic changes associated with an ageing population and patients with chronic illnesses. This research theme is therefore closely aligned with the IGHI's vision of providing safe, effective and accessible technologies for both developed and developing countries.
Some of our latest works were exhibited at the 2015 Royal Society Summer Science Exhibition.
Conference paper: Han J, Gu X, Lo B, 2021,
Semi-supervised contrastive learning for generalizable motor imagery EEG classification, 17th IEEE International Conference on Wearable and Implantable Body Sensor Networks, Publisher: IEEE
Electroencephalography (EEG) is one of the most widely used brain-activity recording methods in non-invasive brain-computer interfaces (BCIs). However, EEG data are highly nonlinear, and EEG datasets often suffer from issues such as data heterogeneity, label uncertainty and data/label scarcity. To address these issues, we propose a domain-independent, end-to-end semi-supervised learning framework with contrastive learning and adversarial training strategies. Our method was evaluated in experiments with different amounts of labelled data and in an ablation study on a motor imagery EEG dataset. The experiments demonstrate that the proposed framework, with two different backbone deep neural networks, shows improved performance over its supervised counterparts under the same conditions.
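The abstract above does not give implementation details, but the contrastive-learning component it mentions is commonly realised with an NT-Xent loss over two augmented views of the same signal window. The sketch below is an illustrative, generic version of that loss (the function name, dimensions and temperature are assumptions, not taken from the paper):

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """Generic NT-Xent contrastive loss (illustrative, not the paper's exact loss).

    z1, z2: (N, D) embeddings of two augmented views of the same EEG windows.
    Each sample's positive is its other view; all remaining 2N - 2 samples
    in the batch act as negatives.
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit norm
    sim = z @ z.t() / temperature                        # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                    # exclude self-similarity
    # the positive of sample i is its other view, at index (i + n) mod 2n
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    loss = F.cross_entropy(sim, targets)
    return loss

# toy usage: random embeddings standing in for an EEG encoder's outputs
z1, z2 = torch.randn(8, 32), torch.randn(8, 32)
loss = nt_xent_loss(z1, z2)
```

In a semi-supervised setup like the one described, a loss of this form would typically be combined with a supervised classification loss on the labelled subset.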
Journal article: Smith M, Withnall R, Anastasova S, et al., 2021,
Journal article: Li W, Shen M, Gao A, et al., 2021,
Advances in flexible robots enable a more efficient and safer way to perform endoscopic submucosal dissection (ESD) surgery. The robot should be flexible enough for easy insertion yet able to maintain a rigid shape to transmit the forces applied to the instrumentation during the operation. This article presents a snake-like flexible endoscope design consisting of an active snake robot and a passive flexible body. The active section is composed of metal-printed spring-like joints actuated by tendons arranged in a novel fashion. To analyse the performance and clinical feasibility of the proposed flexible robot, finite element analysis, workspace analysis, path-following accuracy tests and force tests were performed. The results show that the robot can reach a minimum retroflex bending radius of 23 mm, and the distance errors of each joint when advancing along a simulated colon path are analysed. Validation of the proposed robot demonstrates its potential for ESD surgery.
Journal article: Gu X, Guo Y, Deligianni F, et al., 2021,
For abnormal gait recognition, pattern-specific features indicating abnormalities are interleaved with the subject-specific differences that represent biometric traits. Deep representations are therefore prone to overfitting, and the derived models cannot generalize well to new subjects. Furthermore, there is limited availability of abnormal gait data obtained from precise Motion Capture (Mocap) systems because of regulatory issues and the slow adoption of new technologies in health care. On the other hand, data captured from markerless vision sensors or wearable sensors can be obtained in home environments, but noise from such devices may prevent the effective extraction of relevant features. To address these challenges, we propose a cascade of deep architectures that can encode cross-modal and cross-subject transfer for abnormal gait recognition. Cross-modal transfer maps noisy data obtained from RGBD and wearable sensors to accurate 4-D representations of the lower limb and joints obtained from the Mocap system. Subsequently, cross-subject transfer allows disentangling subject-specific from abnormal pattern-specific gait features based on a multi-encoder autoencoder architecture. To validate the proposed methodology, we obtained multimodal gait data based on a multi-camera motion capture system along with synchronized recordings of electromyography (EMG) data and 4-D skeleton data extracted from a single RGBD camera. Classification accuracy was improved significantly in both Mocap and noisy modalities.
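The multi-encoder autoencoder idea mentioned in this abstract can be sketched in a minimal form: two encoders factor each input into a subject-specific code and a pattern-specific code, and a single decoder reconstructs the input from both. All layer sizes, names and the plain MSE objective below are illustrative assumptions; the paper's actual architecture and losses (e.g. any adversarial or classification terms) are not reproduced here.

```python
import torch
import torch.nn as nn

class MultiEncoderAE(nn.Module):
    """Minimal two-encoder autoencoder sketch (dimensions are illustrative).

    enc_subject captures biometric, subject-specific traits;
    enc_pattern captures abnormality-related gait patterns;
    the decoder reconstructs the input from the concatenated codes.
    """
    def __init__(self, in_dim=60, subj_dim=16, patt_dim=16):
        super().__init__()
        self.enc_subject = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, subj_dim))
        self.enc_pattern = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, patt_dim))
        self.decoder = nn.Sequential(
            nn.Linear(subj_dim + patt_dim, 64), nn.ReLU(), nn.Linear(64, in_dim))

    def forward(self, x):
        s = self.enc_subject(x)          # subject-specific code
        p = self.enc_pattern(x)          # pattern-specific code
        recon = self.decoder(torch.cat([s, p], dim=-1))
        return recon, s, p

x = torch.randn(4, 60)                   # a toy batch of flattened gait frames
recon, s, p = MultiEncoderAE()(x)
loss = nn.functional.mse_loss(recon, x)  # reconstruction term only
```

Disentanglement in such architectures usually comes from extra objectives (for instance, making the pattern code uninformative about subject identity), which are omitted from this sketch.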
Journal article: Kassanos P, Seichepine F, Yang G-Z, 2021,
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.