Imperial College London

Dr Philip Pratt

Faculty of Medicine, Department of Surgery & Cancer

Honorary Senior Research Fellow
 
 
 

Contact

 

+44 (0)20 3312 5525 | p.pratt | Website

 
 

Location

 

005 Paterson Wing, St Mary's Campus


Summary

 

Publications


80 results found

Sivananthan A, Gueroult A, Zijlstra G, Martin G, Baheerathan A, Pratt P, Darzi A, Patel N, Kinross J et al., 2022, A feasibility trial of HoloLens 2™: using mixed reality headsets to deliver remote bedside teaching during COVID-19, JMIR Formative Research, Vol: 6, Pages: 1-7, ISSN: 2561-326X

Background: COVID-19 has had a catastrophic impact measured in human lives. Medical education has also been impacted: appropriately stringent infection control policies have precluded medical trainees from attending clinical teaching. Lecture-based education has been easily transferred to a digital platform, but bedside teaching has not. This study aims to assess the feasibility of using a mixed reality (MR) headset to deliver remote bedside teaching.

Methods: Two MR sessions were led by senior doctors wearing the HoloLens™ headset. The trainers selected patients requiring their specialist input. The headset allowed bidirectional audiovisual communication between the trainer and trainee doctors. Trainee doctors' conceptions of bedside teaching, the impact of COVID-19 on bedside teaching, and the MR sessions were evaluated using pre- and post-round questionnaires with Likert scales. Data on clinician exposure to at-risk patients and use of personal protective equipment (PPE) were collected.

Results: Pre-questionnaire respondents (n=24) strongly agreed that bedside teaching is key to educating clinicians (median 7, IQR 6-7). Post-session questionnaires showed that overall users subjectively agreed the MR session was helpful to their learning (median 6, IQR 5.25-7) and that it was worthwhile (median 6, IQR 5.25-7). Mixed reality versus in-person teaching led to a 79.5% reduction in cumulative clinician exposure time and an 83.3% reduction in PPE use.

Conclusions: This study is proof of principle that HoloLens™ can be used effectively to deliver clinical bedside teaching. This novel format confers significant advantages: minimising trainee exposure to COVID-19, saving PPE, enabling larger attendance, and providing convenient, accessible, real-time clinical training.

Journal article

Sivananthan A, Gueroult A, Zijlstra G, Martin G, Baheerathan A, Pratt P, Darzi A, Patel N, Kinross J et al., 2021, Using Mixed Reality Headsets to Deliver Remote Bedside Teaching During the COVID-19 Pandemic: Feasibility Trial of HoloLens 2 (Preprint)

Background: COVID-19 has had a catastrophic impact in terms of human lives lost. Medical education has also been impacted as appropriately stringent infection control policies precluded medical trainees from attending clinical teaching. Lecture-based education has been easily transferred to a digital platform, but bedside teaching has not.

Objective: This study aims to assess the feasibility of using a mixed reality (MR) headset to deliver remote bedside teaching.

Methods: Two MR sessions were led by senior doctors wearing the HoloLens headset. The trainers selected patients requiring their specialist input. The headset allowed bidirectional audiovisual communication between the trainer and trainee doctors. Trainee doctor conceptions of bedside teaching, impact of the COVID-19 pandemic on bedside teaching, and the MR sessions were evaluated using pre- and postround questionnaires, using Likert scales. Data related to clinician exposure to at-risk patients and use of personal protective equipment (PPE) were collected.

Results: Prequestionnaire respondents (n=24) strongly agreed that bedside teaching is key to educating clinicians (median 7, IQR 6-7). Postsession questionnaires showed that, overall, users subjectively agreed the MR session was helpful to their learning (median 6, IQR 5.25-7) and that it was worthwhile (median 6, IQR 5.25-7). Mixed reality versus in-person teaching led to a 79.5% reduction in cumulat…

Journal article

Amiras D, Hurkxkens TJ, Figueroa D, Pratt PJ, Pitrola B, Watura C, Rostampour S, Shimshon GJ, Hamady M et al., 2021, Augmented reality simulator for CT-guided interventions, European Radiology, Vol: 31, Pages: 8897-8902, ISSN: 0938-7994

Journal article

Bala L, Kinross J, Martin G, Koizia LJ, Kooner AS, Shimshon GJ, Hurkxkens TJ, Pratt PJ, Sam AH et al., 2021, A remote access mixed reality teaching ward round, The Clinical Teacher, Vol: 18, Pages: 386-390, ISSN: 1743-4971

Background: Heterogeneous access to clinical learning opportunities and inconsistency in teaching is a common source of dissatisfaction among medical students. This was exacerbated during the COVID-19 pandemic, with limited exposure to patients for clinical teaching.

Methods: We conducted a proof-of-concept study at a London teaching hospital using mixed reality (MR) technology (HoloLens2™) to deliver a remote access teaching ward round.

Results: Students unanimously agreed that use of this technology was enjoyable and provided teaching that was otherwise inaccessible. The majority of participants gave positive feedback on the MR (holographic) content used (n = 8 out of 11) and agreed they could interact with and have their questions answered by the clinician leading the ward round (n = 9). Quantitative and free-text feedback from students, patients and faculty members demonstrated that this is a feasible, acceptable and effective method for delivery of clinical education.

Discussion: We have used this technology in a novel way to transform the delivery of medical education and enable consistent access to high-quality teaching. This can now be integrated across the curriculum and will include remote access to specialist clinics and surgery. A library of bespoke MR educational resources will be created for future generations of medical students and doctors to use on an international scale.

Journal article

Martin G, Koizia L, Kooner A, Cafferkey J, Ross C, Purkayastha S, Sivananthan A, Tanna A, Pratt P, Kinross J et al., 2020, Protecting healthcare workers during the COVID-19 pandemic with new technologies: acceptability, feasibility and impact of the HoloLens2™ mixed reality headset across multiple clinical settings, Journal of Medical Internet Research, Vol: 22, Pages: 1-9, ISSN: 1438-8871

Background: The COVID-19 pandemic has led to rapid acceleration in the deployment of new digital technologies to improve both accessibility and quality of care, and to protect staff. Mixed reality technology is the latest iteration of telemedicine innovation and is a logical next step in the move towards the provision of digitally supported clinical care and medical education. The technology has the potential to revolutionise care both during and after the COVID-19 pandemic.

Objective: This pilot project sought to deploy the HoloLens2™ mixed reality (MR) device to support the delivery of remote care in COVID-19 hospital environments.

Methods: A prospective observational nested cohort evaluation of the HoloLens2™ was undertaken across three distinct clinical clusters in a UK teaching hospital. Data pertaining to staff exposure to high-risk COVID-19 environments and PPE use were collected, and assessments of acceptability and feasibility conducted.

Results: The deployment of HoloLens2™ led to a 51.5% reduction in time exposed to harm for staff looking after COVID-19 patients (3.32 vs. 1.63 hours/day/staff member, p=0.002), and an 83.1% reduction in the amount of PPE used (178 vs. 30 items/round/day, p=0.017). This represents 222.98 hours of reduced staff exposure to COVID-19, and 3,100 fewer items of PPE used each week across the three clusters evaluated. The majority of staff using the device agreed it was easy to set up and comfortable to wear, improved the quality of care and decision making, and led to better teamwork and communication. 89.3% of users felt that their clinical team was safer when using the HoloLens2™.

Conclusions: New technologies have a role in minimising exposure to nosocomial infection, optimising the use of PPE and enhancing aspects of care. Deploying such technologies at pace requires context-specific information security, infection control, and user experience and workflow integration to…

Journal article

Martin G, Koizia L, Kooner A, Cafferkey J, Ross C, Purkayastha S, Sivananthan A, Tanna A, Pratt P, Kinross J et al., 2020, Use of the HoloLens2 Mixed Reality Headset for Protecting Health Care Workers During the COVID-19 Pandemic: Prospective, Observational Evaluation (Preprint)

Background: The coronavirus disease (COVID-19) pandemic has led to rapid acceleration in the deployment of new digital technologies to improve both accessibility to and quality of care, and to protect staff. Mixed-reality (MR) technology is the latest iteration of telemedicine innovation; it is a logical next step in the move toward the provision of digitally supported clinical care and medical education. This technology has the potential to revolutionize care both during and after the COVID-19 pandemic.

Objective: This pilot project sought to deploy the HoloLens2 MR device to support the delivery of remote care in COVID-19 hospital environments.

Methods: A prospective, observational, nested cohort evaluation of the HoloLens2 was undertaken across three distinct clinical clusters in a teaching hospital in the United Kingdom. Data pertaining to staff exposure to high-risk COVID-19 environments and personal protective equipment (PPE) use by clinical staff (N=28) were collected, and assessments of acceptability and feasibility were conducted.

Results: The deployment of the HoloLens2 led to a 51.5% reduction in time exposed to harm for staff looking after COVID-19 patients (3.32 vs 1.63 hours/day/staff member; P=.002), and an 83.1% reduction in the amount of PPE used (178 vs 30 items/round/day; P=.02). This represents 222.98 hours of reduced staff e…

Journal article

Dilley J, Singh H, Pratt P, Darzi A, Mayer E et al., 2020, Visual behaviour in robotic surgery – demonstrating the validity of the simulated environment, International Journal of Medical Robotics and Computer Assisted Surgery, Vol: 16, ISSN: 1478-5951

Background: Eye metrics provide insight into surgical behaviour, allowing differentiation of performance; however, they have not been used in robotic surgery. This study explores the eye metrics of robotic surgeons in training in simulated and real tissue environments.

Methods: Following the Fundamentals of Robotic Surgery (FRS) training curriculum, novice robotic surgeons were trained to expert-derived benchmark proficiency using real tissue on the da Vinci Si and on the da Vinci Skills Simulator (dVSS). Surgeons' eye metrics were recorded using eye-tracking glasses when both "novice" and "proficient" in both environments. Performance was assessed using Global Evaluative Assessment of Robotic Skills (GEARS) and numeric psychomotor test score (NPMTS) scores.

Results: Significant (P ≤ .05) correlations were seen between pupil size, rate of change and entropy, and associated GEARS/NPMTS scores in "novice" and "proficient" surgeons. Only the number of blinks per minute differed significantly between the simulated and real tissue environments.

Conclusions: This study illustrates the value of eye tracking as an objective physiological tool in the robotic setting. Pupillometrics significantly correlate with established assessment methods and could be incorporated into robotic surgery assessments.

Journal article

Feather C, Appelbaum N, Clarke J, Franklin B, Sinha R, Pratt P, Maconochie I, Darzi A et al., 2019, Medication errors during simulated paediatric resuscitations: a prospective, observational human reliability analysis, BMJ Open, Vol: 9, Pages: 1-13, ISSN: 2044-6055

Introduction: Medication errors during paediatric resuscitation are thought to be common. However, there is little evidence about the individual process steps that contribute to such medication errors in this context.

Objectives: To describe the incidence, nature and severity of medication errors in simulated paediatric resuscitations, and to employ human reliability analysis to understand the contribution of discrepancies in individual process steps to the occurrence of these errors.

Methods: We conducted a prospective observational study of simulated resuscitations subjected to video micro-analysis, identification of medication errors, severity assessment and human reliability analysis in a large English teaching hospital. Fifteen resuscitation teams of two doctors and two nurses each conducted one of two simulated paediatric resuscitation scenarios.

Results: At least one medication error was observed in every simulated case, and a large-magnitude (>25% discrepant) or clinically significant error in 11 of 15 cases. Medication errors were observed in 29% of 180 simulated medication administrations, 40% of which were considered to be moderate or severe. These errors were the result of 884 observed discrepancies at a number of steps in the drug ordering, preparation and administration stages of medication use, 8% of which made a major contribution to a resultant medication error. Most errors were introduced by discrepancies during drug preparation and administration.

Conclusions: Medication errors were common, with a considerable proportion likely to result in patient harm. There is an urgent need to optimise existing systems and to commission research into new approaches to increase the reliability of human interactions during administration of medication in the paediatric emergency setting.

Journal article

Dawda S, Camara M, Pratt P, Vale J, Darzi A, Mayer E et al., 2019, Patient-specific simulation of pneumoperitoneum for laparoscopic surgical planning, Journal of Medical Systems, Vol: 43, ISSN: 0148-5598

Gas insufflation in laparoscopy deforms the abdomen and stretches the overlying skin. This limits the use of surgical image-guidance technologies and challenges the appropriate placement of trocars, which influences the operative ease and potential quality of laparoscopic surgery. This work describes the development of a platform that simulates pneumoperitoneum in a patient-specific manner, using preoperative CT scans as input data. This aims to provide a more realistic representation of the intraoperative scenario and guide trocar positioning to optimize the ergonomics of laparoscopic instrumentation. The simulation was developed by generating 3D reconstructions of insufflated and deflated porcine CT scans and simulating an artificial pneumoperitoneum on the deflated model. Simulation parameters were optimized by minimizing the discrepancy between the simulated pneumoperitoneum and the ground truth model extracted from insufflated porcine scans. Insufflation modeling in humans was investigated by correlating the simulation's output to real post-insufflation measurements obtained from patients in theatre. The simulation returned an average error of 7.26 mm and 10.5 mm in the most and least accurate datasets, respectively. In the context of the initial discrepancy without simulation (23.8 mm and 19.6 mm), the methods proposed here provide a significantly improved picture of the intraoperative scenario. The framework was also demonstrated to be capable of simulating pneumoperitoneum in humans. This study proposes a method for realistically simulating pneumoperitoneum to achieve optimal ergonomics during laparoscopy. Although further studies to validate the simulation in humans are needed, there is the opportunity to provide a more realistic, interactive simulation platform for future image-guided minimally invasive surgery.

Journal article

Dilley J, Camara M, Omar I, Carter A, Pratt P, Vale J, Darzi A, Mayer EK et al., 2019, Evaluating the impact of image guidance in the surgical setting: A systematic review, Surgical Endoscopy, Vol: 33, Pages: 2785-2793, ISSN: 0930-2794

BACKGROUND: Image guidance has been clinically available for over 20 years. Although research increasingly has a translational emphasis, overall the clinical uptake of image guidance systems in surgery remains low. The objective of this review was to establish the metrics used to report on the impact of surgical image guidance systems used in a clinical setting. METHODS: A systematic review of the literature was carried out on all relevant publications between January 2000 and April 2016. Ovid MEDLINE and Embase databases were searched using a title strategy. Reported outcome metrics were grouped into clinically relevant domains and subsequent sub-categories for analysis. RESULTS: In total, 232 publications were eligible for inclusion. Analysis showed that clinical outcomes and system interaction were consistently reported. However, metrics focusing on surgeon, patient and economic impact were reported less often. No increase in the quality of reporting was observed during the study time period, associated with study design, or when the clinical setting involved a surgical specialty that had been using image guidance for longer. CONCLUSIONS: Publications reporting on the clinical use of image guidance systems are evaluating traditional surgical outcomes and neglecting important human and economic factors, which are pertinent to the uptake, diffusion and sustainability of image-guided surgery. A framework is proposed to assist researchers in providing comprehensive evaluation metrics, which should also be considered in the design phase. Use of these would help demonstrate the impact in the clinical setting, leading to increased clinical integration of image guidance systems.

Journal article

Camara M, Dawda S, Mayer E, Darzi A, Pratt P et al., 2019, Subject-specific modelling of pneumoperitoneum: model implementation, validation and human feasibility assessment, International Journal of Computer Assisted Radiology and Surgery, Vol: 14, Pages: 841-850, ISSN: 1861-6429

PURPOSE: The aim of this study is to propose a model that simulates patient-specific anatomical changes resulting from pneumoperitoneum, using preoperative data as input. The framework can assist the surgeon through real-time visualisation of and interaction with the model. This could further facilitate surgical planning preoperatively, by defining a surgical strategy, and intraoperatively, by estimating port positions. METHODS: The biomechanical model that simulates pneumoperitoneum was implemented within the GPU-accelerated NVIDIA FleX position-based dynamics framework. Datasets of multiple porcine subjects before and after abdominal insufflation were used to generate, calibrate and validate the model. The feasibility of modelling pneumoperitoneum in human subjects was assessed by comparing distances between specific landmarks from a patient abdominal wall to the same landmark measurements on the simulated model. RESULTS: The calibration of simulation parameters resulted in the successful estimation of an optimal set of parameters. A correspondence between the simulation pressure parameter and the experimental insufflation pressure was determined. The simulation of pneumoperitoneum in a porcine subject resulted in a mean Hausdorff distance error of 5-6 mm. Feasibility of modelling pneumoperitoneum in humans was successfully demonstrated. CONCLUSION: Simulation of pneumoperitoneum provides an accurate subject-specific 3D model of the inflated abdomen, which is a more realistic representation of the intraoperative scenario when compared to preoperative imaging alone. The simulation results in a stable and interactive framework that performs in real time and supports patient-specific data, which can assist in surgical planning.

Journal article

Camara M, Mayer E, Darzi A, Pratt P et al., 2019, Intraoperative ultrasound for improved 3D tumour reconstruction in robot-assisted surgery: An evaluation of feedback modalities, International Journal of Medical Robotics and Computer Assisted Surgery, Vol: 15, Pages: 1-9, ISSN: 1478-5951

BACKGROUND: Intraoperative ultrasound scanning induces deformation of the tissue in the absence of a feedback modality, which results in a 3D tumour reconstruction that is not directly representative of the real anatomy. METHODS: A biomechanical model with different feedback modalities (haptic, visual, or auditory) was implemented in a simulation environment. A user study with 20 clinicians was performed to assess which modality resulted in the 3D tumour volume reconstruction that most resembled the reference configuration from the respective computed tomography (CT) scans. RESULTS: Integrating a feedback modality significantly improved scanning performance across all participants and data sets. The optimal feedback modality to adopt varied depending on the evaluation; nonetheless, guidance with feedback was always preferable to none. CONCLUSIONS: The results demonstrated the urgency of integrating a feedback modality framework into clinical practice to ensure improved scanning performance. Furthermore, this framework enabled an evaluation that cannot be performed in vivo.

Journal article

Dilley J, Hughes-Hallett A, Pratt P, Pucher P, Camara M, Darzi A, Mayer E et al., 2019, Perfect registration leads to imperfect performance: a randomised trial of multimodal intraoperative image guidance, Annals of Surgery, Vol: 269, Pages: 236-242, ISSN: 0003-4932

Objective – To compare the surgical safety and efficiency of two image guidance modalities, perfect augmented reality (AR) and side-by-side unregistered image guidance (IG), against a no guidance control (NG), when performing a simulated laparoscopic cholecystectomy (LC).

Background – Image guidance using AR offers the potential to improve understanding of subsurface anatomy, with positive ramifications for surgical safety and efficiency. No intra-abdominal study has demonstrated any advantage for the technology. Perfect AR cannot be provided in the operative setting in a patient; however, it can be generated in the simulated setting.

Methods – Thirty-six experienced surgeons performed a baseline LC using the LapMentor™ simulator before randomisation to one of three study arms: AR, IG or NG. Each performed three further LCs. Safety- and efficiency-related simulator metrics and task workload (SURG-TLX) were collected.

Results – The IG group had a shorter total instrument path length and fewer movements than the NG and AR groups. Both IG and NG took a significantly shorter time than AR to complete dissection of Calot's triangle. Use of IG and AR resulted in significantly fewer perforations and serious complications than the NG group. IG had significantly fewer perforations and serious complications than the AR group. Compared to IG, AR guidance was found to be significantly more distracting.

Conclusion – Side-by-side unregistered image guidance (IG) improved safety and surgical efficiency in a simulated setting when compared to AR or NG. IG provides a more tangible opportunity for integrating image guidance into the existing surgical workflow, as well as delivering the desired safety and efficiency benefits.

Journal article

Dilley J, Pratt P, Kyrgiou M, Flott K, Darzi A, Mayer E et al., 2018, Current and future use of radiological images in the management of gynecological malignancies - a survey of practice in the UK, Anticancer Research, Vol: 38, Pages: 5867-5876, ISSN: 0250-7005

Background/Aim: Radiology provides increasingly accurate and complex information. Understanding clinicians' interpretation of scans could improve surgical planning and decision-making, inform training, and guide the development of augmented imaging. This survey explored the interpretation of imaging by clinicians and its use in operative preparation and prediction.

Materials and Methods: The survey was open for two months and circulated online to British Gynaecological Cancer Society members.

Results: Seventy-three (19%) members completed the survey. Respondents had a confidence level of 51% in their ability to interpret computed tomography (CT) and/or magnetic resonance imaging (MRI) images independently. Preoperative imaging was commonly used to plan operations and to predict complications and complete resection. Images were reviewed for primary (96.3%) and interval (92.6%) ovarian debulking, but less so for vulvectomy (45%). Scan (79.6%) and multidisciplinary team meeting (MDT) (66.6%) reports were used more often than scan images (50%) for operative planning. The amount and pattern of disease on the scan were the most important factors predicting operating time.

Conclusion: Imaging influences the surgeon's planning; however, respondents lack confidence. Training of clinicians in radiological interpretation needs to improve. Augmented image interfaces could facilitate this.

Journal article

Linte CA, Kersten-Oertel M, Yaniv Z, Xiao Y, Essert C, Jannin P, Lau J, Pratt P, Reinertsen I, Rivaz H et al., 2018, Guest Editorial: Papers from the 12th workshop on Augmented Environments for Computer-Assisted Interventions, Healthcare Technology Letters, Vol: 5, Pages: 136-136, ISSN: 2053-3713

Journal article

Edgcumbe P, Singla R, Pratt P, Schneider C, Nguan C, Rohling R et al., 2018, Follow the light: projector-based augmented reality intracorporeal system for laparoscopic surgery, Journal of Medical Imaging, Vol: 5, ISSN: 2329-4302

Journal article

Pratt P, Ives M, Lawton G, Simmons J, Radev N, Spyropoulou L, Amiras D et al., 2018, Through the HoloLens looking glass: augmented reality for extremity reconstruction surgery using 3D vascular models with perforating vessels, European Radiology Experimental, Vol: 2, ISSN: 2509-9280

Precision and planning are key to reconstructive surgery. Augmented reality (AR) can bring the information within preoperative computed tomography angiography (CTA) imaging to life, allowing the surgeon to ‘see through’ the patient’s skin and appreciate the underlying anatomy without making a single incision. This work has demonstrated that AR can assist the accurate identification, dissection and execution of vascular pedunculated flaps during reconstructive surgery. Separate volumes of osseous, vascular, skin, soft tissue structures and relevant vascular perforators were delineated from preoperative CTA scans to generate three-dimensional images using two complementary segmentation software packages. These were converted to polygonal models and rendered by means of a custom application within the HoloLens™ stereo head-mounted display. Intraoperatively, the models were registered manually to their respective subjects by the operating surgeon using a combination of tracked hand gestures and voice commands; AR was used to aid navigation and accurate dissection. Identification of the subsurface location of vascular perforators through AR overlay was compared to the positions obtained by audible Doppler ultrasound. Through a preliminary HoloLens-assisted case series, the operating surgeon was able to demonstrate precise and efficient localisation of perforating vessels.

Journal article

Pratt P, Arora A, 2018, Transoral Robotic Surgery: Image Guidance and Augmented Reality, ORL - Journal for Oto-Rhino-Laryngology, Head and Neck Surgery, Vol: 80, Pages: 204-212, ISSN: 0301-1569

Journal article

Fallavollita P, Kersten M, Linte CA, Pratt P, Yaniv Z et al., 2017, Foreword: Special Issue on Augmented Environments for Computer-Assisted Interventions, Healthcare Technology Letters, Vol: 4, Pages: 149-149

Journal article

Singla R, Edgcumbe P, Pratt P, Nguan C, Rohling R et al., 2017, Intra-operative ultrasound-based augmented reality guidance for laparoscopic surgery, Healthcare Technology Letters, Vol: 4, Pages: 204-209

Journal article

Camara M, Pratt P, Darzi A, Mayer E et al., 2017, Simulation of Patient-Specific Deformable Ultrasound Imaging in Real Time, Lecture Notes in Computer Science, Vol: 10549, Pages: 11-18, ISSN: 0302-9743

Intraoperative ultrasound is an imaging modality frequently used to provide delineation of tissue boundaries. This paper proposes a simulation platform that enables rehearsal of patient-specific deformable ultrasound scanning in real-time, using preoperative CT as the data source. The simulation platform was implemented within the GPU-accelerated NVIDIA FleX position-based dynamics framework. The high-resolution particle model is used to deform both surface and volume meshes. The latter is used to compute the barycentric coordinates of each simulated ultrasound image pixel in the surrounding volume, which is then mapped back to the original undeformed CT volume. To validate the computation of simulated ultrasound images, a kidney phantom with an embedded tumour was CT-scanned in the rest position and at five different levels of probe-induced deformation. Measures of normalised cross-correlation and similarity between features were adopted to compare pairs of simulated and ground truth images. The accurate results demonstrate the potential of this approach for clinical translation.

Journal article

Zhang L, Ye M, Giannarou S, Pratt P, Yang GZ et al., 2017, Motion-compensated autonomous scanning for tumour localisation using intraoperative ultrasound, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol: 10434, Pages: 619-627, ISSN: 0302-9743

Intraoperative ultrasound facilitates localisation of tumour boundaries during minimally invasive procedures. Autonomous ultrasound scanning systems have been recently proposed to improve scanning accuracy and reduce surgeons’ cognitive load. However, current methods mainly consider static scanning environments typically with the probe pressing against the tissue surface. In this work, a motion-compensated autonomous ultrasound scanning system using the da Vinci® Research Kit (dVRK) is proposed. An optimal scanning trajectory is generated considering both the tissue surface shape and the ultrasound transducer dimensions. An effective vision-based approach is proposed to learn the underlying tissue motion characteristics. The learned motion model is then incorporated into the visual servoing framework. The proposed system has been validated with both phantom and ex vivo experiments.

Journal article

Abeles A, Kwasnicki RM, Geoghegan L, Pratt P, Darzi A et al., 2017, Wearable activity sensors: using physical activity to predict length of hospital stay?, International Congress of the Association-of-Surgeons-of-Great-Britain-and-Ireland, Publisher: Wiley, Pages: 53-53, ISSN: 1365-2168

Conference paper

Omar I, Dilley J, Pucher P, Pratt P, Ameen T, Vale J, Darzi A, Mayer E et al., 2017, The RobotiX simulator: face and content validation using the Fundamentals of Robotic Surgery (FRS) curriculum, Annual Meeting of the American-Urological-Association (AUA), Publisher: Elsevier, Pages: E700-E701, ISSN: 0022-5347

Conference paper

Tarunina M, Hernandez D, Kronsteiner-Dobramysl B, Pratt P, Watson T, Hua P, Gullo F, Van der Garde M, Zhang Y, Hook L, Choo Y, Watt S et al., 2016, A novel high throughput screening platform reveals an optimised cytokine formulation for human hematopoietic progenitor cell expansion, Stem Cells and Development, ISSN: 1557-8534

The main limitations of hematopoietic cord blood (CB) transplantation, viz. low cell dosage and delayed reconstitution, can be overcome by ex vivo expansion. CB expansion under conventional culture causes rapid cell differentiation and depletion of the hematopoietic stem/progenitor cells (HSPCs) responsible for engraftment. Here, we use combinatorial cell culture technology (CombiCult®) to identify media formulations that promote CD133+ CB HSPC proliferation while maintaining their phenotypic characteristics. We employed second-generation CombiCult® screens that use electro-spraying technology to encapsulate CB cells in alginate beads. Our results suggest that not only the combination, but also the order of addition of individual components has a profound influence on the expansion of specific HSPC populations. The top protocols identified by the CombiCult® screen were used to culture human CD133+ CB HSPCs on nanofiber scaffolds and to validate the expansion of the phenotypically defined CD34+CD38lo/-CD45RA-CD90+CD49f+ population of hematopoietic stem cells and their differentiation into defined progeny.

Journal article

Camara M, Mayer E, Darzi A, Pratt P et al., 2016, Soft tissue deformation for surgical simulation: a position-based dynamics approach, International Journal of Computer Assisted Radiology and Surgery, Vol: 11, Pages: 919-928, ISSN: 1861-6410

Journal article

Di Marco AN, Jeyakumar J, Pratt PJ, Yang G-Z, Darzi AW et al., 2016, Evaluating a novel 3D stereoscopic visual display for transanal endoscopic surgery: a randomized controlled crossover study, Annals of Surgery, Vol: 263, Pages: 36-42, ISSN: 1528-1140

Journal article

Pratt P, Hughes-Hallett A, Zhang L, Patel N, Mayer E, Darzi A, Yang G-Z et al., 2015, Autonomous Ultrasound-Guided Tissue Dissection, 18th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Publisher: Springer International Publishing, Pages: 249-257, ISSN: 0302-9743

Intraoperative ultrasound imaging can act as a valuable guide during minimally invasive tumour resection. However, contemporaneous bimanual manipulation of the transducer and cutting instrument presents significant challenges for the surgeon. Both cannot occupy the same physical location, and so a carefully coordinated relative motion is required. Using robotic partial nephrectomy as an index procedure, and employing PVA cryogel tissue phantoms in a reduced dimensionality setting, this study sets out to achieve autonomous tissue dissection with a high-velocity waterjet under ultrasound guidance. The open-source da Vinci Research Kit (DVRK) provides the foundation for a novel multimodal visual servoing approach, based on the simultaneous processing and analysis of endoscopic and ultrasound images. Following an accurate and robust Jacobian estimation procedure, dissections are performed with specified theoretical tumour margin distances. The resulting margins, with a mean difference of 0.77 mm, indicate that the overall system performs accurately, and that future generalisation to 3D tumour and organ surface morphologies is warranted.

Conference paper

Gras G, Marcus HJ, Payne CJ, Pratt P, Yang GZ et al., 2015, Visual Force Feedback for Hand-Held Microsurgical Instruments, Medical Image Computing and Computer-Assisted Intervention, ISSN: 0302-9743

Conference paper

Edgcumbe P, Pratt P, Yang G-Z, Nguan C, Rohling R et al., 2015, Pico Lantern: Surface reconstruction and augmented reality in laparoscopic surgery using a pick-up laser projector, Medical Image Analysis, Vol: 25, Pages: 95-102, ISSN: 1361-8415

Journal article

