Anastasova S, Kassanos P, Yang G-Z, 2018, Multi-parametric rigid and flexible, low-cost, disposable sensing platforms for biomedical applications, BIOSENSORS & BIOELECTRONICS, Vol: 102, Pages: 668-675, ISSN: 0956-5663
Berthelot M, Yang G-Z, Lo B, 2018, A Self-Calibrated Tissue Viability Sensor for Free Flap Monitoring, IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, Vol: 22, Pages: 5-14, ISSN: 2168-2194
Deligianni F, Wong C, Lo B, et al., 2018, A fusion framework to estimate plantar ground force distributions and ankle dynamics, INFORMATION FUSION, Vol: 41, Pages: 255-263, ISSN: 1566-2535
Fabelo H, Ortega S, Lazcano R, et al., 2018, An Intraoperative Visualization System Using Hyperspectral Imaging to Aid in Brain Tumor Delineation., Sensors (Basel), Vol: 18
Hyperspectral imaging (HSI) allows the acquisition of a large number of spectral bands throughout the electromagnetic spectrum (within and beyond the visual range) from the surface of the scene captured by the sensor. Using this information and a set of complex classification algorithms, it is possible to determine which material or substance is located in each pixel. The work presented in this paper aims to exploit the characteristics of HSI to develop a demonstrator capable of delineating tumor tissue from normal brain tissue during neurosurgical operations. Improved delineation of tumor boundaries is expected to improve the results of surgery. The developed demonstrator is composed of two hyperspectral cameras covering a spectral range of 400-1700 nm. Furthermore, a hardware accelerator connected to a control unit is used to speed up the hyperspectral brain cancer detection algorithm so that processing can be completed during surgery. A labeled dataset comprising more than 300,000 spectral signatures is used to train the supervised stage of the classification algorithm. In this preliminary study, thematic maps obtained from a validation database of seven hyperspectral images of in vivo brain tissue, captured and processed during neurosurgical operations, demonstrate that the system is able to discriminate between normal and tumor tissue in the brain. The results can be provided during the surgical procedure (~1 min), making it a practical system for neurosurgeons to use in the near future to improve excision and potentially improve patient outcomes.
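The supervised classification stage described in the abstract — assigning each pixel's spectral signature to a tissue class learned from labeled training data — can be sketched minimally with a nearest-centroid rule. This is an illustrative stand-in for the paper's (more complex) classification algorithms; the class names and spectra below are invented.

```python
import numpy as np

def train_centroids(signatures, labels):
    """Learn one mean spectrum (centroid) per tissue class.

    signatures: (n, bands) array of labeled spectral signatures.
    labels: length-n sequence of class names.
    """
    classes = sorted(set(labels))
    return {c: np.mean([s for s, l in zip(signatures, labels) if l == c], axis=0)
            for c in classes}

def classify_pixel(spectrum, centroids):
    """Assign a pixel's spectrum to the class with the nearest centroid."""
    return min(centroids, key=lambda c: np.linalg.norm(spectrum - centroids[c]))
```

Applied per pixel over the whole hyperspectral cube, this produces a thematic map like the ones described above.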
Fujii K, Gras G, Salerno A, et al., 2018, Gaze gesture based human robot interaction for laparoscopic surgery, MEDICAL IMAGE ANALYSIS, Vol: 44, Pages: 196-214, ISSN: 1361-8415
Gowers SAN, Hamaoui K, Cunnea P, et al., 2018, High temporal resolution delayed analysis of clinical microdialysate streams, ANALYST, Vol: 143, Pages: 715-724, ISSN: 0003-2654
Orihuela-Espina F, Leff DR, James DRC, et al., 2018, Imperial College near infrared spectroscopy neuroimaging analysis framework., Neurophotonics, Vol: 5, ISSN: 2329-423X
This paper describes the Imperial College near infrared spectroscopy neuroimaging analysis (ICNNA) software tool for functional near infrared spectroscopy neuroimaging data. ICNNA is a MATLAB-based object-oriented framework encompassing an application programming interface and a graphical user interface. ICNNA incorporates reconstruction based on the modified Beer-Lambert law along with basic processing and data validation capabilities. Emphasis is placed on the full experiment, rather than individual neuroimages, as the central element of analysis. The software offers three types of analysis: classical statistical methods based on comparing changes in relative hemoglobin concentrations between task and baseline periods; graph theory-based metrics of connectivity; and, distinctively, an analysis approach based on manifold embedding. This paper presents the different capabilities of ICNNA in its current version.
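The modified Beer-Lambert reconstruction mentioned in the abstract amounts to solving a small linear system per channel: attenuation changes measured at two wavelengths are inverted for changes in oxy- and deoxy-hemoglobin concentration. A minimal sketch follows; the extinction coefficients, source-detector distance, and differential pathlength factor are illustrative assumptions, not values taken from ICNNA.

```python
def mbll_concentrations(dA1, dA2, eps, d, dpf):
    """Invert the modified Beer-Lambert law for two wavelengths.

    dA1, dA2: attenuation changes at wavelengths 1 and 2.
    eps: 2x2 extinction coefficients ((e1_HbO, e1_HbR), (e2_HbO, e2_HbR)).
    d: source-detector distance; dpf: differential pathlength factor.
    Solves dA_i = d * dpf * (eps[i][0]*dHbO + eps[i][1]*dHbR) for (dHbO, dHbR).
    """
    a, b = eps[0]
    c, e = eps[1]
    det = (a * e - b * c) * d * dpf          # determinant of the scaled system
    dHbO = (e * dA1 - b * dA2) / det
    dHbR = (a * dA2 - c * dA1) / det
    return dHbO, dHbR
```

With more than two wavelengths the same inversion is done by least squares, but the two-wavelength case shows the core computation.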
Zhou X-Y, Yang G-Z, Lee S-L, 2018, A real-time and registration-free framework for dynamic shape instantiation, MEDICAL IMAGE ANALYSIS, Vol: 44, Pages: 86-97, ISSN: 1361-8415
Anastasova S, Crewther B, Bembnowicz P, et al., 2017, A wearable multisensing patch for continuous sweat monitoring (vol 93, pg 139, 2017), BIOSENSORS & BIOELECTRONICS, Vol: 94, Pages: 730-730, ISSN: 0956-5663
Anastasova S, Crewther B, Bembnowicz P, et al., 2017, A wearable multisensing patch for continuous sweat monitoring, BIOSENSORS & BIOELECTRONICS, Vol: 93, Pages: 139-145, ISSN: 0956-5663
Andreu-Perez J, Garcia-Gancedo L, McKinnell J, et al., 2017, Developing Fine-Grained Actigraphies for Rheumatoid Arthritis Patients from a Single Accelerometer Using Machine Learning, SENSORS, Vol: 17, ISSN: 1424-8220
Avci E, Grammatikopoulou M, Yang G-Z, 2017, Laser-Printing and 3D Optical-Control of Untethered Microrobots, ADVANCED OPTICAL MATERIALS, Vol: 5, ISSN: 2195-1071
Bao S-D, Chen M, Yang G-Z, 2017, A Method of Signal Scrambling to Secure Data Storage for Healthcare Applications, IEEE Journal of Biomedical and Health Informatics, Vol: 21, Pages: 1487-1494, ISSN: 2168-2194
A body sensor network consisting of wearable and/or implantable biosensors has become an important front-end for collecting personal health records. It is expected that the full integration of outside-hospital personal health information and hospital electronic health records will further promote preventative health services as well as global health. However, the integration and sharing of health information is bound to bring with it security and privacy issues. With the extensive development of healthcare applications, these issues are becoming increasingly important. This paper addresses the potential security risks of healthcare data in Internet-based applications and proposes a method of signal scrambling as an add-on security mechanism in the application layer for a variety of healthcare information, in which a tiny piece of data is used to scramble healthcare records. The tiny data is kept locally, while the scrambled records, along with security protection, are sent for cloud storage. The tiny data can be derived from a random number generator or even from a piece of healthcare data, which makes the method more flexible. The computational complexity and security performance have been investigated through theoretical and experimental analysis to demonstrate the efficiency and effectiveness of the proposed method. The method is applicable to all kinds of data that require extra security protection within complex networks.
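The split the abstract describes — a tiny piece of data kept locally, with the scrambled record sent to cloud storage — can be illustrated with a toy sketch in which the tiny data is a PRNG seed driving an XOR keystream. This is my stand-in for the paper's actual scrambling scheme, chosen only to show the keep-local/send-scrambled idea.

```python
import random

def scramble(record: bytes, tiny_key: int) -> bytes:
    """Scramble a healthcare record with a keystream derived from tiny data.

    tiny_key is the small local secret; only the scrambled bytes leave
    the device for cloud storage.
    """
    rng = random.Random(tiny_key)                    # keystream from the tiny data
    return bytes(b ^ rng.randrange(256) for b in record)

# XOR with the same keystream undoes itself, so descrambling reuses scramble.
descramble = scramble
```

A real deployment would use a vetted cipher rather than Python's PRNG; the point here is only the architecture: the tiny key never accompanies the stored record.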
Berthelot M, Yang G-Z, Lo B, 2017, Preliminary Study for Hemodynamics Monitoring using a Wearable Device Network, 14th Annual IEEE International Conference on Wearable and Implantable Body Sensor Networks (BSN), Publisher: IEEE, Pages: 115-118, ISSN: 2376-8886
Chi W, Rafii-Tari H, Payne CJ, et al., 2017, A learning based training and skill assessment platform with haptic guidance for endovascular catheterization, Pages: 2357-2363, ISSN: 1050-4729
Increasing demands in endovascular intervention have motivated technical skill training and competency-based measures of performance. However, there are no well-established online metrics for technical skill assessment, and few studies have explored operator behavioral patterns derived from catheter and hand motions. This paper proposes a platform for active online training and objective assessment of endovascular skills that learns optimum catheter motions from multiple demonstrations. An ungrounded hand-held haptic device is also proposed to provide intuitive haptic guidance to novice users based on the learnt information. Statistical models are implemented to extract the underlying catheter motion patterns and to use them for performance evaluation and haptic guidance. The results show significant improvements in endovascular navigation for inexperienced operators, with finer catheter motions achieved under the provided haptic guidance. These results suggest that the proposed platform can be integrated into current clinical training setups and motivate the development of more realistic endovascular training platforms.
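The assessment idea in the abstract — learning a reference motion from multiple expert demonstrations and scoring a trainee's trial by its deviation from it — can be sketched with a simple per-sample mean/standard-deviation model. This stands in for the paper's statistical models and assumes the trajectories are already time-aligned.

```python
import numpy as np

def build_reference(demos):
    """Combine time-aligned expert demonstrations into a reference model.

    demos: (n_demos, n_samples, dims) array-like of catheter trajectories.
    Returns a per-sample mean trajectory and standard deviation envelope.
    """
    demos = np.asarray(demos, dtype=float)
    return demos.mean(axis=0), demos.std(axis=0) + 1e-9  # avoid divide-by-zero

def deviation_score(trial, mean, std):
    """Mean normalised deviation from the reference; lower is more expert-like."""
    trial = np.asarray(trial, dtype=float)
    return float(np.mean(np.abs((trial - mean) / std)))
```

The same envelope can drive haptic guidance: force feedback proportional to the normalised deviation pushes the trainee back toward the expert motion.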
Constantinescu M, Lee SL, Ernst S, et al., 2017, Statistical atlases for electroanatomical mapping of cardiac arrhythmias, Pages: 301-310, ISSN: 0302-9743
Electroanatomical mapping is a mandatory, time-consuming planning step in cardiac catheter ablation. In practice, interventional cardiologists target specific endocardial areas for mapping based on personal experience, general electrophysiology principles, and preoperative anatomical scans. Effective fusion of all available information into a useful mapping strategy has not been standardised, and achieving the optimal map within time and space constraints is challenging. In this paper, a novel framework for computing optimal endocardial mapping locations in patients with congenital heart disease (CHD) is proposed. The method is based on a statistical electroanatomical model (SEAM) which is instantiated from the preoperative anatomy to achieve an initial prediction of the electrical map. Simultaneously, the anatomical areas most frequently mapped among similar cases in the dataset are detected, and a classifier is trained to filter these points based on the electroanatomical data. The framework was tested on retrospective clinical data from 66 CHD cases, in an iterative process of adding mapping points to the SEAM and computing the instantiation error.
Freer DR, Liu J, Yang G-Z, 2017, Optimization of EMG Movement Recognition for Use in an Upper Limb Wearable Robot, 14th Annual IEEE International Conference on Wearable and Implantable Body Sensor Networks (BSN), Publisher: IEEE, Pages: 202-205, ISSN: 2376-8886
Gambini J, Quinn T, Vila R, et al., 2017, Upgraded portable Indocyanine Green (ICG) detection system - towards Image Guided Cancer Surgery, Annual Meeting of the Society-of-Nuclear-Medicine-and-Molecular-Imaging (SNMMI), Publisher: SOC NUCLEAR MEDICINE INC, ISSN: 0161-5505
Grammatikopoulou M, Yang GZ, 2017, Gaze contingent control for optical micromanipulation, Pages: 5989-5995, ISSN: 1050-4729
Optical Tweezers (OT) have the advantage of non-contact interaction with target objects such as cells, overcoming the pitfall of obstructive adhesion forces which are present in contact micromanipulation. It is also feasible to manipulate a number of small microparts simultaneously or 3D structures by using multiple laser traps. These capabilities give rise to the potential to develop a human-robot interface to facilitate microassembly tasks. This paper presents a gaze contingent control framework and a method for 3D orientation estimation for optical micromanipulation. The proposed strategy aims to use OT as an interactive microassembly platform. The framework comprises I) a strategy to recognize the operator's intentions in order to interactively place and reconfigure the optical traps using the operator's eye fixation point, II) haptic constraints generated from the user's eye gaze to assist positioning of the assembled microparts and III) a method for 3D orientation estimation. The performance of the proposed framework is assessed through a set of experiments comparing it to the standard OT user interface. Three-dimensional manipulation and orientation estimation of a non-spherical microstructure are also performed.
Grammatikopoulou M, Zhang L, Yang GZ, 2017, Depth estimation of optically transparent laser-driven microrobots, Pages: 2994-2999, ISSN: 2153-0858
Six degree-of-freedom (DoF) pose feedback is essential for the development of closed-loop control techniques for microrobotics. This paper presents two methods for depth estimation of transparent microrobots inside an Optical Tweezers (OT) setup using image sharpness measurements and model-based tracking. The x-y position and the 3D orientation of the object are estimated using online model-based template matching. The proposed depth estimation methodologies are validated experimentally by comparing the results with the ground truth.
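Sharpness-based depth estimation, as named in the abstract, rests on a simple principle: the focal position at which an image-sharpness score peaks is taken as the object's depth. A common sharpness measure is the variance of a discrete Laplacian; the abstract does not specify which measure the authors use, so this is an illustrative sketch.

```python
import numpy as np

def laplacian_variance(img):
    """Sharpness score: variance of a 4-neighbour discrete Laplacian."""
    lap = (img[:-2, 1:-1] + img[2:, 1:-1] + img[1:-1, :-2] + img[1:-1, 2:]
           - 4.0 * img[1:-1, 1:-1])
    return float(np.var(lap))

def estimate_depth(image_stack, depths):
    """Return the depth whose image in the focal stack is sharpest."""
    scores = [laplacian_variance(img) for img in image_stack]
    return depths[int(np.argmax(scores))]
```

A blurred (defocused) image has a nearly flat Laplacian and hence a low score, so the in-focus slice wins.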
Gras G, Leibrandt K, Wisanuvej P, et al., 2017, Implicit gaze-assisted adaptive motion scaling for highly articulated instrument manipulation, Pages: 4233-4239, ISSN: 1050-4729
Traditional robotic surgical systems rely entirely on robotic arms to triangulate articulated instruments inside the human anatomy. This configuration can be ill-suited to working in tight spaces or during single-access approaches, where little to no triangulation between the instrument shafts is possible. The control of these instruments is further hindered by ergonomic issues: the presence of motion scaling imposes the use of clutching mechanics to avoid the workspace limitations of master devices, and forces the user to choose between slow, precise movements and fast, less accurate ones. This paper presents a bi-manual system using novel self-triangulating 6-degrees-of-freedom (DoF) tools with a flexible elbow, mounted on robotic arms. The control scheme for the resulting 9-DoF system is detailed, with particular emphasis on retaining maximum dexterity close to joint limits. Furthermore, this paper introduces the concept of gaze-assisted adaptive motion scaling. By combining eye tracking with hand motion and instrument information, the system is capable of inferring the user's destination and modifying the motion scaling accordingly. This safe, novel approach allows the user to quickly reach distant locations while retaining full precision for delicate manoeuvres. The performance and usability of the adaptive motion scaling are evaluated in a user study, showing a clear improvement in task completion speed and a reduced need for clutching.
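The adaptive motion scaling described above can be reduced to one idea: scale master-to-slave motion up when the inferred destination (e.g. the gaze fixation point) is far from the instrument tip, and down as the tip approaches it. The interpolation law and the numeric bounds below are assumptions for illustration, not the paper's actual mapping.

```python
def adaptive_scale(dist_to_target, near=0.005, far=0.05,
                   precise=0.2, coarse=1.0):
    """Map distance-to-inferred-target (metres) to a motion scaling factor.

    Below `near` the fine scaling `precise` applies; beyond `far` the fast
    scaling `coarse` applies; in between, linear interpolation.
    All four parameters are illustrative assumptions.
    """
    if dist_to_target <= near:
        return precise
    if dist_to_target >= far:
        return coarse
    t = (dist_to_target - near) / (far - near)
    return precise + t * (coarse - precise)
```

Because the scaling follows the inferred destination rather than a clutch pedal, the user keeps fine control near the target without sacrificing travel speed.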
Gu Y, Vyas K, Yang J, et al., 2017, Unsupervised feature learning for endomicroscopy image retrieval, Pages: 64-71, ISSN: 0302-9743
Learning visual representations for medical images is a critical task in computer-aided diagnosis. In this paper, we propose Unsupervised Multimodal Graph Mining (UMGM) to learn discriminative features for probe-based confocal laser endomicroscopy (pCLE) mosaics of breast tissue. We build a multiscale multimodal graph based on both pCLE mosaics and histology images. Positive pairs are mined via cycle consistency and negative pairs are extracted based on geodesic distance. Given the positive and negative pairs, the latent feature space is discovered by reconstructing the similarity between pCLE and histology images. Experiments on a database with 700 pCLE mosaics demonstrate that the proposed method outperforms previous work on pCLE feature learning. Specifically, the top-1 accuracy in an eight-class retrieval task is 0.659, a 10% improvement over the state-of-the-art method.
Huang B, Ye M, Lee SL, et al., 2017, A vision-guided multi-robot cooperation framework for learning-by-demonstration and task reproduction, Pages: 4797-4804, ISSN: 2153-0858
This paper presents a vision-based learning-by-demonstration approach for multi-robot manipulation. With this method, a vision system is involved in both the task demonstration and reproduction stages, and the speed and accuracy of the task reproduction are adapted according to the context of the demonstration. An expert first demonstrates how to use tools to perform a task, while the tool motion is observed by a vision system. The demonstrations are then encoded using a statistical model to generate a reference motion trajectory. Equipped with the same tools and the learned model, the robot is guided by vision to reproduce the task. Task performance was evaluated in terms of both accuracy and speed. However, simply increasing the robot's speed can decrease reproduction accuracy. To address this, a dual-rate Kalman filter is employed to compensate for the latency between the robot and the vision system. More importantly, the robot speed is adapted according to the learned motion model. We demonstrate the effectiveness of our approach on two tasks: a trajectory reproduction task and a bimanual sewing task. We show that using our vision-based approach, the robots can learn effectively from demonstrations and perform accurate and fast task reproduction. The proposed approach generalises to other manipulation tasks where bimanual or multi-robot cooperation is required.
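The latency problem the abstract raises — vision measurements arriving more slowly than the robot's control loop — is typically handled by predicting the filtered state forward between (and ahead of) measurements. The toy below is a 1-D constant-velocity Kalman filter, a simplified stand-in for the paper's dual-rate Kalman filter; the noise parameters are assumptions.

```python
import numpy as np

class LatencyCompensatingKF:
    """1-D constant-velocity Kalman filter; predict() runs at the fast robot
    rate and can be called ahead of delayed vision measurements."""

    def __init__(self, dt, q=1e-3, r=1e-2):
        self.x = np.zeros(2)                          # state: [position, velocity]
        self.P = np.eye(2)                            # state covariance
        self.F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity model
        self.Q = q * np.eye(2)                        # process noise (assumed)
        self.R = r                                    # measurement noise (assumed)
        self.H = np.array([1.0, 0.0])                 # vision measures position only

    def predict(self):
        """Propagate the state one control step; returns predicted position."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[0]

    def update(self, z):
        """Fuse a (possibly delayed) vision position measurement z."""
        S = self.H @ self.P @ self.H + self.R         # innovation variance
        K = (self.P @ self.H) / S                     # Kalman gain
        self.x = self.x + K * (z - self.H @ self.x)
        self.P = self.P - np.outer(K, self.H @ self.P)
```

Between vision frames the robot controller keeps calling predict(), so the commanded motion is not stalled by the slower, delayed measurements.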
Huang B, Ye M, Hu Y, et al., 2017, A Multi-Robot Cooperation Framework for Sewing Personalized Stent Grafts, IEEE Transactions on Industrial Informatics, ISSN: 1551-3203
This paper presents a multi-robot system for manufacturing personalized medical stent grafts. The proposed system adopts a modular design comprising a (personalized) mandrel module, a bimanual sewing module, and a vision module. The mandrel module incorporates the personalized geometry of the patient, while the bimanual sewing module adopts a learning-by-demonstration approach to transfer human hand-sewing skills to the robots. The human demonstrations were first observed by the vision module and then encoded using a statistical model to generate the reference motion trajectories. During autonomous robot sewing, the vision module coordinates the multi-robot collaboration. Experimental results show that the robots can adapt to generalized stent designs. The proposed system can also be used for other manipulation tasks, especially the flexible production of customized products where bimanual or multi-robot cooperation is required.
Leff DR, Yongue G, Vlaev I, et al., 2017, "Contemplating the Next Maneuver" Functional Neuroimaging Reveals Intraoperative Decision-making Strategies, ANNALS OF SURGERY, Vol: 265, Pages: 320-330, ISSN: 0003-4932
Leibrandt K, Bergeles C, Yang G-Z, 2017, Concentric Tube Robots Rapid, Stable Path-Planning and Guidance for Surgical Use, IEEE ROBOTICS & AUTOMATION MAGAZINE, Vol: 24, Pages: 42-53, ISSN: 1070-9932
Leibrandt K, Bergeles C, Yang GZ, 2017, Implicit active constraints for concentric tube robots based on analysis of the safe and dexterous workspace, Pages: 193-200, ISSN: 2153-0858
The use of concentric tube robots has recognized advantages for accessing target lesions while conforming to certain anatomical constraints. However, their complex kinematics makes their safe telemanipulation in convoluted anatomy a challenging task. Collaborative control schemes, which guide the operator through haptic and visual feedback, can simplify this task and reduce the cognitive burden of the operator. Guaranteeing stable, collision-free robot configurations during manipulation, however, is computationally demanding and, until now, either required long periods of pre-computation time or distributed computing clusters. Furthermore, the operator is often presented with guidance paths which have to be followed approximately. This paper presents a heterogeneous (CPU/GPU) computing approach to enable rapid workspace analysis on a single computer. The method is used in a new navigation scheme that guides the robot operator towards locations of high dexterity or manipulability of the robot. Under this guidance scheme, the user can make informed decisions and maintain full control of the path planning and manipulation processes, with intuitive visual feedback on when the robot's limitations are being reached.
Leibrandt K, Wisanuvej P, Gras G, et al., 2017, Effective Manipulation in Confined Spaces of Highly Articulated Robotic Instruments for Single Access Surgery, IEEE ROBOTICS AND AUTOMATION LETTERS, Vol: 2, Pages: 1704-1711, ISSN: 2377-3766
Leibrandt K, Yang G-Z, 2017, Efficient Proximity Queries for Continuum Robots on Parallel Computing Hardware, IEEE ROBOTICS AND AUTOMATION LETTERS, Vol: 2, Pages: 1548-1555, ISSN: 2377-3766
Leonhardt S, Yang GZ, Habetha J, 2017, Welcome message
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.