Imperial College London


Faculty of Engineering, Department of Computing

Research Postgraduate







Dyson Building, South Kensington Campus






9 results found

Berkovic A, Laganier C, Chappell D, Nanayakkara T, Kormushev P, Bello F, Rojas N et al., 2022, A multi-modal haptic armband for finger-level sensory feedback from a prosthetic hand, EuroHaptics, Publisher: Springer

This paper presents the implementation and evaluation of three specific, yet complementary, mechanisms of haptic feedback—namely, normal displacement, tangential position, and vibration—to render, at a finger-level, aspects of touch and proprioception from a prosthetic hand without specialised sensors. This feedback is executed by an armband worn around the upper arm divided into five somatotopic modules, one per each finger. To evaluate the system, just-noticeable difference experiments for normal displacement and tangential position were carried out, validating that users are most sensitive to feedback from modules located on glabrous (hairless) skin regions of the upper arm. Moreover, users identifying finger-level contact using multi-modal feedback of vibration followed by normal displacement performed significantly better than those using vibration feedback alone, particularly when reporting exact combinations of fingers. Finally, the point of subjective equality of tangential position feedback was measured simultaneously for all modules, which showed promising results, but indicated that further development is required to achieve full finger-level position rendering.
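As an illustration of how a just-noticeable difference (JND) can be estimated in experiments like those described above, the sketch below runs a one-up-two-down adaptive staircase against a simulated observer. The procedure, threshold, and parameter values are illustrative assumptions, not details taken from the paper.

```python
import random

def staircase_jnd(threshold_true=2.0, noise=0.5, start=5.0, step=0.25,
                  trials=200, seed=0):
    """One-up-two-down adaptive staircase: converges on the stimulus level
    detected on ~70.7% of trials, a common just-noticeable-difference estimate."""
    random.seed(seed)
    level, streak, reversals, going_down = start, 0, [], True
    for _ in range(trials):
        # Simulated observer: detects the stimulus when it exceeds a
        # noisy internal threshold.
        detected = level + random.gauss(0.0, noise) > threshold_true
        if detected:
            streak += 1
            if streak == 2:                    # two correct -> make it harder
                streak = 0
                if not going_down:
                    reversals.append(level)    # direction change: record level
                going_down = True
                level = max(step, level - step)
        else:
            streak = 0                         # one miss -> make it easier
            if going_down:
                reversals.append(level)
            going_down = False
            level += step
    tail = reversals[-6:]                      # average the last reversals
    return sum(tail) / len(tail)
```

Averaging the last few reversal levels gives the threshold estimate; a one-up-two-down rule targets the roughly 70.7%-correct point on the psychometric function.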

Conference paper

Chappell D, Son HW, Clark AB, Yang Z, Bello F, Kormushev P, Rojas N et al., 2022, Virtual reality pre-prosthetic hand training with physics simulation and robotic force interaction, IEEE Robotics and Automation Letters, Vol: 7, Pages: 1-1, ISSN: 2377-3766

Virtual reality (VR) rehabilitation systems have been proposed to enable prosthetic hand users to perform training before receiving their prosthesis. Improving pre-prosthetic training to be more representative and better prepare the patient for prosthesis use is a crucial step forwards in rehabilitation. However, existing VR platforms lack realism and accuracy in terms of the virtual hand and the forces produced when interacting with the environment. To address these shortcomings, this work presents a VR training platform based on accurate simulation of an anthropomorphic prosthetic hand, utilising an external robot arm to render realistic forces that the user would feel at the attachment point of their prosthesis. Experimental results with non-disabled participants show that training with this platform leads to a significant improvement in Box and Block scores compared to training in VR alone and a control group with no prior training. Results from pick-and-place tasks with a wider range of objects demonstrate that training in VR alone negatively impacts performance, whereas the proposed platform has no significant impact on performance. User perception results highlight that the platform is much closer to using a physical prosthesis in terms of physical demand and effort, although frustration is significantly higher during training.

Journal article

AlAttar A, Chappell D, Kormushev P, 2022, Kinematic-model-free predictive control for robotic manipulator target reaching with obstacle avoidance, Frontiers in Robotics and AI, Vol: 9, Pages: 1-9, ISSN: 2296-9144

Model predictive control is a widely used optimal control method for robot path planning and obstacle avoidance. This control method, however, requires a system model to optimize control over a finite time horizon and possible trajectories. Certain types of robots, such as soft robots, continuum robots, and transforming robots, can be challenging to model, especially in unstructured or unknown environments. Kinematic-model-free control can overcome these challenges by learning local linear models online. This paper presents a novel perception-based robot motion controller, the kinematic-model-free predictive controller, that is capable of controlling robot manipulators without any prior knowledge of the robot's kinematic structure and dynamic parameters and is able to perform end-effector obstacle avoidance. Simulations and physical experiments were conducted to demonstrate the ability and adaptability of the controller to perform simultaneous target reaching and obstacle avoidance.
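A minimal sketch of the kinematic-model-free idea the abstract describes: fit a local linear model from recent actuation/effect pairs, then score candidate actions one step ahead against target distance with an obstacle penalty. Function names, the candidate-sampling scheme, and the penalty weighting are illustrative assumptions, not the paper's controller.

```python
import numpy as np

def local_linear_model(dU, dY):
    """Fit dy ~ J du from recent (actuation delta, end-effector delta)
    pairs by least squares; no kinematic model is assumed."""
    J, *_ = np.linalg.lstsq(dU, dY, rcond=None)
    return J.T   # maps actuation deltas to end-effector deltas

def kmf_step(J, x, target, obstacle, candidates, safe_dist=0.3):
    """Pick the candidate actuation whose one-step predicted motion best
    approaches the target while keeping clear of the obstacle."""
    best_u, best_cost = None, np.inf
    for u in candidates:
        x_pred = x + J @ u
        cost = np.linalg.norm(x_pred - target)
        if np.linalg.norm(x_pred - obstacle) < safe_dist:
            cost += 100.0                      # heavy obstacle penalty
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u
```

Refitting the local model from a sliding window of recent data is what lets such a controller adapt online to robots that are hard to model analytically.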

Journal article

Yang Z, Clark A, Chappell D, Rojas N et al., 2022, Instinctive real-time sEMG-based control of prosthetic hand with reduced data acquisition and embedded deep learning training, IEEE International Conference on Robotics and Automation

Achieving instinctive multi-grasp control of prosthetic hands typically still requires a large number of sensors, such as electromyography (EMG) electrodes mounted on a residual limb, that can be costly and time consuming to position, with their signals difficult to classify. Deep-learning-based EMG classifiers, however, have shown promising results over traditional methods, yet due to high computational requirements, limited work has been done with in-prosthetic training. By targeting specific muscles non-invasively, separating grasping action into hold and release states, and implementing data augmentation, we show in this paper that accurate results for embedded, instinctive, multi-grasp control can be achieved with only 2 low-cost sensors, a simple neural network, and a minimal amount of training data. The presented controller, which is based on only 2 surface EMG (sEMG) channels, is implemented in an enhanced version of the OLYMPIC prosthetic hand. Results demonstrate that the controller is capable of identifying all 7 specified grasps and gestures with 93% accuracy, and is successful in achieving several real-life tasks in a real world setting.
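To give a concrete sense of scale for "a simple neural network" on 2 sEMG channels, here is a tiny one-hidden-layer softmax classifier in plain NumPy, trained on synthetic 2-D features. The architecture, hyperparameters, and data are illustrative guesses, not the network or dataset from the paper.

```python
import numpy as np

def train_semg_classifier(X, y, n_classes, hidden=16, lr=0.1, epochs=500, seed=0):
    """Tiny one-hidden-layer softmax classifier, small enough to run embedded:
    maps low-dimensional sEMG features to grasp labels. Returns a predictor."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, n_classes)); b2 = np.zeros(n_classes)
    Y = np.eye(n_classes)[y]                       # one-hot targets
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)
        logits = H @ W2 + b2
        P = np.exp(logits - logits.max(1, keepdims=True))
        P /= P.sum(1, keepdims=True)               # softmax probabilities
        G = (P - Y) / len(X)                       # cross-entropy gradient
        GH = (G @ W2.T) * (1 - H ** 2)             # backprop through tanh
        W2 -= lr * H.T @ G; b2 -= lr * G.sum(0)
        W1 -= lr * X.T @ GH; b1 -= lr * GH.sum(0)
    return lambda Xq: np.argmax(np.tanh(Xq @ W1 + b1) @ W2 + b2, axis=1)
```

The entire model here is a few hundred parameters, which is the kind of footprint that makes in-prosthetic training and inference plausible on embedded hardware.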

Conference paper

Cursi F, Chappell D, Kormushev P, 2022, Augmenting loss functions of feedforward neural networks with differential relationships for robot kinematic modelling, Ljubljana, Slovenia, 20th International Conference on Advanced Robotics (ICAR), Publisher: IEEE, Pages: 201-207

Model learning is a crucial aspect of robotics as it enables the use of traditional and consolidated model-based controllers to perform desired motion tasks. However, due to the increasing complexity of robotic structures, modelling robots is becoming more and more challenging, and analytical models are very difficult to build, particularly for redundant robots. Machine learning approaches have shown great capabilities in learning complex mappings and have widely been used in robot model learning and control. Generally, inverse kinematics is learned, directly obtaining the desired control commands given a desired task. However, learning forward kinematics is simpler and allows the computation of the robot Jacobian and enables the exploitation of the optimality of controllers. Nevertheless, typical learning methods have no knowledge about the differential relationship between the position and velocity mappings. In this work, we present two novel loss functions to train feedforward artificial neural networks (ANNs) which incorporate this information in learning the forward kinematic model of robotic structures, and carry out a comparison with standard ANN training using position data only. Simulation results show that incorporating the knowledge of the velocity mapping improves the suitability of the learnt model for control tasks.
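The differential relationship the abstract refers to can be written as J(q) q̇ = ẋ, where J is the Jacobian of the learned forward-kinematics map. Below is a hedged sketch of a loss augmented with this velocity-consistency term, for a one-hidden-layer network whose input Jacobian is available in closed form; the names and the weighting λ are illustrative, and the paper's exact loss functions may differ.

```python
import numpy as np

def fk_net(q, W1, b1, W2, b2):
    """One-hidden-layer forward-kinematics approximator; returns the
    end-effector position x = f(q) and the input Jacobian dx/dq."""
    h = np.tanh(W1 @ q + b1)
    x = W2 @ h + b2
    J = W2 @ np.diag(1.0 - h ** 2) @ W1    # chain rule through tanh
    return x, J

def augmented_loss(q, qdot, x_true, xdot_true, params, lam=1.0):
    """Position error plus a velocity-consistency term ||J(q) qdot - xdot||^2,
    tying the learned position mapping to the measured velocity mapping."""
    x, J = fk_net(q, *params)
    return np.sum((x - x_true) ** 2) + lam * np.sum((J @ qdot - xdot_true) ** 2)
```

Penalising the Jacobian-velocity mismatch pushes the network towards a model whose analytic Jacobian is usable by standard differential-kinematics controllers, not just its position output.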

Conference paper

Banerjee M, Chiew D, Patel K, Johns I, Chappell D, Linton N, Cole G, Francis D, Szram J, Ross J, Zaman S et al., 2021, The impact of artificial intelligence on clinical education: Perceptions of postgraduate trainee doctors in London (UK) and recommendations for trainers., BMC Medical Education, Vol: 21, Pages: 1-10, ISSN: 1472-6920

Background: Artificial intelligence (AI) technologies are increasingly used in clinical practice. Although there is robust evidence that AI innovations can improve patient care, reduce clinicians' workload and increase efficiency, their impact on medical training and education remains unclear.

Methods: A survey of trainee doctors' perceived impact of AI technologies on clinical training and education was conducted at UK NHS postgraduate centers in London between October and December 2020. Impact assessment mirrored domains in training curricula such as 'clinical judgement', 'practical skills' and 'research and quality improvement skills'. Significance between Likert-type data was analysed using Fisher's exact test. Response variations between clinical specialities were analysed using k-modes clustering. Free-text responses were analysed by thematic analysis.

Results: 210 doctors responded to the survey (response rate 72%). The majority (58%) perceived an overall positive impact of AI technologies on their training and education. Respondents agreed that AI would reduce clinical workload (62%) and improve research and audit training (68%). Trainees were skeptical that it would improve clinical judgement (46% agree, p=0.12) and practical skills training (32% agree, p<0.01). The majority reported insufficient AI training in their current curricula (92%), and supported having more formal AI training (81%).

Conclusions: Trainee doctors have an overall positive perception of AI technologies' impact on clinical training. There is optimism that it will improve 'research and quality improvement' skills and facilitate 'curriculum mapping'. There is skepticism that it may reduce educational opportunities to develop 'clinical judgement' and 'practical skills'. Medical educators should be mindful that these domains are protected as AI develops. We recommend that 'Applied AI' …
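The survey analysis above applies Fisher's exact test to Likert-type data, typically after collapsing responses into a 2x2 table (e.g. agree/disagree by group). As a self-contained illustration using only the standard library (not the authors' analysis code), a two-sided test can be computed directly from the hypergeometric distribution:

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of every table with the same
    margins that is no more likely than the observed one."""
    row1, col1, n = a + b, a + c, a + b + c + d

    def p(x):  # probability of the table whose top-left cell equals x
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = p(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    # Small tolerance guards against float ties when comparing probabilities.
    return sum(p(x) for x in range(lo, hi + 1) if p(x) <= p_obs * (1 + 1e-9))
```

For the classic 4-cup tea-tasting table [[3, 1], [1, 3]] this yields p ≈ 0.486, matching the textbook value; in practice a library routine such as `scipy.stats.fisher_exact` would normally be used.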

Journal article

Zaman S, Seligman H, Lloyd FH, Patel KT, Chappell D, O'Hare D, Cole GD, Francis DP, Petraco R, Linton NWF et al., 2021, Aerosolised fluorescein can quantify FFP mask faceseal leakage: a cost-effective adaptation to the existing point of care fit test, Clinical Medicine, Vol: 21, Pages: E263-E268, ISSN: 1470-2118

Journal article

Saputra RP, Rakicevic N, Chappell D, Wang K, Kormushev P et al., 2021, Hierarchical decomposed-objective model predictive control for autonomous casualty extraction, IEEE Access, Vol: 9, Pages: 39656-39679, ISSN: 2169-3536

In recent years, several robots have been developed and deployed to perform casualty extraction tasks. However, the majority of these robots are overly complex, and require teleoperation via either a skilled operator or a specialised device, and often the operator must be present at the scene to navigate safely around the casualty. Instead, improving the autonomy of such robots can reduce the reliance on expert operators and potentially unstable communication systems, while still extracting the casualty in a safe manner. There are several stages in the casualty extraction procedure, from navigating to the location of the emergency, safely approaching and loading the casualty, to finally navigating back to the medical assistance location. In this paper, we propose a Hierarchical Decomposed-Objective based Model Predictive Control (HiDO-MPC) method for safely approaching and manoeuvring around the casualty. We implement this controller on ResQbot — a proof-of-concept mobile rescue robot we previously developed — capable of safely rescuing an injured person lying on the ground, i.e. performing the casualty extraction procedure. HiDO-MPC achieves the desired casualty extraction behaviour by decomposing the main objective into multiple sub-objectives with a hierarchical structure. At every time step, the controller evaluates this hierarchical decomposed objective and generates the optimal control decision. We have conducted a number of experiments both in simulation and using the real robot to evaluate the proposed method’s performance, and compare it with baseline approaches. The results demonstrate that the proposed control strategy gives significantly better results than baseline approaches in terms of accuracy, robustness, and execution time, when applied to casualty extraction scenarios.
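One way to read the "hierarchical decomposed objective" above is as a lexicographic selection over candidate controls: each sub-objective, in priority order, prunes the candidate set before the next is consulted. The sketch below is an illustrative interpretation of that structure, not the paper's HiDO-MPC formulation; all names and tolerances are assumptions.

```python
def hido_select(candidates, objectives, tol=1e-3):
    """Lexicographic selection over a hierarchy of sub-objectives:
    for each objective in priority order, keep only the candidates
    within `tol` of the best score, then consult the next objective."""
    pool = list(candidates)
    for obj in objectives:
        scores = [obj(c) for c in pool]
        best = min(scores)
        pool = [c for c, s in zip(pool, scores) if s <= best + tol]
        if len(pool) == 1:          # hierarchy fully resolved the choice
            break
    return pool[0]
```

Ordering the sub-objectives this way lets a safety-critical term (e.g. clearance around the casualty) dominate a progress term without hand-tuning a single weighted sum.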

Journal article

Wang K, Marsh DM, Saputra RP, Chappell D, Jiang Z, Raut A, Kon B, Kormushev P et al., 2020, Design and control of SLIDER: an ultra-lightweight, knee-less, low-cost bipedal walking robot, Las Vegas, USA, International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, Pages: 3488-3495

Most state-of-the-art bipedal robots are designed to be highly anthropomorphic and therefore possess legs with knees. Whilst this facilitates more human-like locomotion, there are implementation issues that make walking with straight or near-straight legs difficult. Most bipedal robots have to move with a constant bend in the legs to avoid singularities at the knee joints, and to keep the centre of mass at a constant height for control purposes. Furthermore, having a knee on the leg increases the design complexity as well as the weight of the leg, hindering the robot's performance in agile behaviours such as running and jumping.

We present SLIDER, an ultra-lightweight, low-cost bipedal walking robot with a novel knee-less leg design. This non-anthropomorphic straight-legged design reduces the weight of the legs significantly whilst keeping the same functionality as anthropomorphic legs. Simulation results show that SLIDER's low-inertia legs contribute to less vertical motion in the center of mass (CoM) than anthropomorphic robots during walking, indicating that SLIDER's model is closer to the widely used Inverted Pendulum (IP) model. Finally, stable walking on flat terrain is demonstrated both in simulation and in the physical world, and feedback control is implemented to address challenges with the physical robot.
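The Inverted Pendulum model mentioned above has simple closed-form dynamics when the centre of mass is held at a constant height z_c: the horizontal CoM motion obeys ẍ = (g/z_c)(x − p), where p is the stance-foot position. A minimal Euler-integration sketch of this linear model follows; the parameter values are generic assumptions, not SLIDER's.

```python
def lip_step(x, xdot, z_c=0.7, g=9.81, p_foot=0.0, dt=0.01):
    """One Euler step of the Linear Inverted Pendulum: with the CoM at
    constant height z_c, horizontal dynamics are xddot = (g/z_c)(x - p)."""
    xddot = (g / z_c) * (x - p_foot)
    return x + xdot * dt, xdot + xddot * dt

def simulate(x0, xdot0, steps=100):
    """Roll out the CoM trajectory over a single stance phase."""
    x, xdot = x0, xdot0
    traj = [x]
    for _ in range(steps):
        x, xdot = lip_step(x, xdot)
        traj.append(x)
    return traj
```

Starting ahead of the foot, the CoM accelerates away from it (the pendulum "falls"); walking controllers based on this model steer the motion by repeatedly choosing the next footstep location p.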

Conference paper

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
