Imperial College London

Dr Fabio Tatti

Faculty of Engineering, Department of Bioengineering

Core Facility Manager (Multilimb Virtual Environment)
 
 
 

Contact

 

f.tatti

 
 

Location

 

U619, Sir Michael Uren Hub, White City Campus


Summary

 

Publications


15 results found

Darwood A, Hurst SA, Villatte G, Tatti F, El Daou H, Reilly P, Baena FRY, Majed A, Emery R et al., 2022, Novel robotic technology for the rapid intraoperative manufacture of patient-specific instrumentation allowing for improved glenoid component accuracy in shoulder arthroplasty: a cadaveric study, Journal of Shoulder and Elbow Surgery, Vol: 31, Pages: 561-570, ISSN: 1058-2746

Journal article

Schlueter-Brust K, Henckel J, Katinakis F, Buken C, Opt-Eynde J, Pofahl T, Rodriguez y Baena F, Tatti F et al., 2021, Augmented-reality-assisted K-wire placement for glenoid component positioning in reversed shoulder arthroplasty: a proof-of-concept study, Journal of Personalized Medicine, Vol: 11, Pages: 1-8, ISSN: 2075-4426

The accuracy of the implant’s post-operative position and orientation in reverse shoulder arthroplasty is known to play a significant role in both clinical and functional outcomes. Whilst technologies such as navigation and robotics have demonstrated superior radiological outcomes in many fields of surgery, the impact of augmented reality (AR) assistance in the operating room is still unknown. Malposition of the glenoid component in shoulder arthroplasty is known to result in implant failure and early revision surgery. The use of AR has many promising advantages, including allowing the detailed study of patient-specific anatomy without the need for invasive procedures such as arthroscopy to interrogate the joint’s articular surface. In addition, this technology has the potential to assist surgeons intraoperatively in aiding the guidance of surgical tools. It offers the prospect of increased component placement accuracy, reduced surgical procedure time, and improved radiological and functional outcomes, without recourse to the use of large navigation or robotic instruments, with their associated high overhead costs. This feasibility study describes the surgical workflow from a standardised CT protocol, via 3D reconstruction, 3D planning, and use of a commercial AR headset, to AR-assisted k-wire placement. Post-operative outcome was measured using a high-resolution laser scanner on the patient-specific 3D printed bone. In this proof-of-concept study, the discrepancy between the planned and the achieved glenoid entry point and guide-wire orientation was approximately 3 mm with a mean angulation error of 5°.

Journal article

Iqbal H, Tatti F, Baena FRY, 2021, Augmented reality in robotic assisted orthopaedic surgery: A pilot study, Journal of Biomedical Informatics, Vol: 120, ISSN: 1532-0464

Journal article

Tatti F, Iqbal H, Jaramaz B, Rodriguez y Baena F et al., 2020, A novel computer-assisted workflow for treatment of osteochondral lesions in the knee, CAOS 2020. The 20th Annual Meeting of the International Society for Computer Assisted Orthopaedic Surgery, Publisher: EasyChair, Pages: 250-253

Computer-Assisted Orthopaedic Surgery (CAOS) is now becoming more prevalent, especially in knee arthroplasty. CAOS systems have the potential to improve the accuracy and repeatability of surgical procedures by means of digital preoperative planning and intraoperative tracking of the patient and surgical instruments. One area where the accuracy and repeatability of computer-assisted interventions could prove especially beneficial is the treatment of osteochondral defects (OCD). OCDs represent a common problem in the patient population, and are often a cause of pain and discomfort. The use of synthetic implants is a valid option for patients who cannot be treated with regenerative methods, but the outcome can be negatively impacted by incorrect positioning of the implant and lack of congruency with the surrounding anatomy. In this paper, we present a novel computer-assisted surgical workflow for the treatment of osteochondral defects. The software we developed automatically selects the implant that most closely matches the patient’s anatomy and computes the best pose. By combining this software with the existing capabilities of the Navio™ surgical system (Smith & Nephew Inc.), we were able to create a complete workflow that incorporates both surgical planning and assisted bone preparation. Our preliminary testing on plastic bone models was successful and demonstrated that the workflow can be used to select and position an appropriate implant for a given defect.

Conference paper

Hu X, Fabrizio C, Tatti F, Rodriguez y Baena F et al., 2020, Automatic calibration of commercial optical see-through head-mounted displays for medical applications, 2020 IEEE Conference on Virtual Reality and 3D User Interfaces, Publisher: IEEE, Pages: 754-755

The simplified, manual calibration of commercial Optical See-Through Head-Mounted Displays (OST-HMDs) is neither accurate nor convenient for medical applications. An interaction-free calibration method that can be easily implemented in commercial headsets is thus desired. State-of-the-art automatic calibrations simplify the eye-screen system as a pinhole camera, and tedious offline calibrations are required. Furthermore, they have never been tested on original commercial products. We present a gaze-based automatic calibration method that can be easily implemented in commercial headsets without knowing hardware details. The location of the virtual target is revised in world coordinates according to the real-time tracked eye gaze. The algorithm has been tested with the Microsoft HoloLens. Current quantitative and qualitative user studies show that the automatically calibrated display is statistically comparable with the manually calibrated display under both monocular and binocular rendering modes. Since it is cumbersome to ask users to perform manual calibrations every time the HMD is re-positioned, our method provides a comparably accurate but more convenient and practical solution to HMD calibration.

Conference paper

Iqbal H, Tatti F, Rodriguez Y Baena F, 2019, Augmented-reality within computer assisted orthopaedic surgery workflows: a proof of concept study, CAOS 2019. The 19th Annual Meeting of the International Society for Computer Assisted Orthopaedic Surgery, Publisher: EasyChair

The integration of augmented reality (AR) in medical robotics has been shown to reduce cognitive burden and improve information management in the typically cluttered environment of computer-assisted surgery. A key benefit of such systems is the ability to generate a composite view of medical informatics and the real environment, streamlining the pathway for delivering patient-specific data. Consequently, AR was integrated within an orthopaedic setting by designing a system that captured and replicated the user interface of a commercially available surgical robot onto a commercial head-mounted see-through display. Thus, a clinician could simultaneously view the operating site and real-time informatics when carrying out an assisted patellofemoral arthroplasty (PFA). The system was tested with 10 surgeons to examine its usability and impact on procedure completion times when conducting simulated PFA on sawbone models. A statistically insignificant mean increase in procedure completion time (+23.7s, p=0.240) was found, and the results of a post-operative qualitative evaluation indicated a strongly positive consensus on the system, with a large majority of subjects agreeing the system provided value to the procedure without incurring noticeable physical discomfort. Overall, this study provides an encouraging insight into the high levels of engagement AR has with a clinical audience as well as its ability to enhance future generations of medical robotics.

Conference paper

Darwood A, Hurst S, Villatte G, Fenton R, Tatti F, El-Daou H, Reilly P, Emery R, Rodriguez y Baena F et al., 2019, Towards a commercial system for intraoperative manufacture of patient-specific guides for shoulder arthroplasty, CAOS 2019. The 19th Annual Meeting of the International Society for Computer Assisted Orthopaedic Surgery, Publisher: EasyChair, Pages: 110-114

The accurate placement of orthopaedic implants according to a biomechanically derived preoperative plan is an important consideration in the long-term success of these interventions. Guidance technologies are widely described; however, high cost, complex theatre integration, intraoperative inefficiency and functional limitations have prevented their widespread use. A novel intraoperative mechatronics platform is presented, capable of the rapid, intraoperative manufacture of low-cost patient-specific guides. The device consists of a tableside robot with sterile drapes and some low-cost, sterile disposable components. The robot comprises a 3D optical scanner, a three-axis sterile computer numerical control (CNC) drill and a two-axis receptacle into which the disposable consumables may be inserted. The sterile consumable comprises a region of rapidly setting mouldable material and a clip allowing it to be reversibly attached to the tableside robot. In use, patient computed tomography (CT) imaging is obtained at any point prior to surgery and a surgical plan is created on associated software. This plan describes the axis and positioning of one or more guidewires which may, in turn, locate the prosthesis into position. Intraoperatively, osseous anatomy is exposed, and the sterile disposable is used to rapidly create a mould of the joint surface. Once set, the mould is inserted into the robot and an optical scan of the surface is taken, followed by automatic surface registration, bringing the optical scan into the same coordinate frame of reference as the CT data and plan. The CNC drill is orientated such that the drill axis and position exactly match the planned axis and position with respect to the moulded surface. A guide hole is drilled into the mould blank, which is removed from the robot and placed back into the patient, with the moulded surface ensuring exact replacement. A wire is subsequently driven through the guide hole into the osseous anatomy in accordance with

Conference paper

Tatti F, Baud-Bovy G, 2018, Force sharing and other collaborative strategies in a dyadic force perception task, PLoS ONE, Vol: 13, ISSN: 1932-6203

When several persons perform a physical task jointly, such as transporting an object together, the interaction force that each person experiences is the sum of the forces applied by all other persons on the same object. Therefore, there is a fundamental ambiguity about the origin of the force that each person experiences. This study investigated the ability of a dyad (two persons) to identify the direction of a small force produced by a haptic device and applied to a jointly held object. In this particular task, the dyad might split the force produced by the haptic device (the external force) in an infinite number of ways, depending on how the two partners interacted physically. A major objective of this study was to understand how the two partners coordinated their action to perceive the direction of the third force that was applied to the jointly held object. This study included a condition where each participant responded independently and another one where the two participants had to agree upon a single negotiated response. The results showed a broad range of behaviors. In general, the external force was not split in a way that would maximize the joint performance. In fact, the external force was often split very unequally, leaving one person without information about the external force. However, the performance was better than expected in this case, which led to the discovery of an unanticipated strategy whereby the person who took all the force transmitted this information to the partner by moving the jointly held object. When the dyad could negotiate the response, we found that the participant with less force information tended to switch his or her response more often.

Journal article

Tatti F, Baud-Bovy G, 2016, Error feedback does not change response strategies in a joint force detection task, IEEE International Conference on Systems, Man, and Cybernetics (SMC), Publisher: IEEE, Pages: 1159-1164, ISSN: 1062-922X

Conference paper

Tatti F, Baud-Bovy G, 2015, Force Sharing Strategies in a Collaborative Force Detection Task, IEEE World Haptics Conference, Publisher: IEEE, Pages: 463-468

Conference paper

Baud-Bovy G, Tatti F, Borghese NA, 2014, Ability of Low-Cost Force-Feedback Device to Influence Postural Stability, IEEE Transactions on Haptics, Vol: 8, Pages: 130-139, ISSN: 1939-1412

Journal article

Tatti F, Gurari N, Baud-Bovy G, 2014, Static Force Rendering Performance of Two Commercial Haptic Systems, 9th International Conference on Haptics - Neuroscience, Devices, Modeling, and Applications (EuroHaptics), Publisher: Springer-Verlag Berlin, Pages: 342-350, ISSN: 0302-9743

Conference paper

Surer E, Pirovano M, Mainetti R, Tatti F, Baud-Bovy G, Borghese A et al., 2014, Video-games based neglect rehabilitation using haptics, 22nd IEEE Signal Processing and Communications Applications Conference (SIU), Publisher: IEEE, Pages: 1726-1729, ISSN: 2165-0608

Conference paper

Baud-Bovy G, Tatti F, Borghese NA, 2013, An Evaluation of the Effects on Postural Stability of a Force Feedback Rendered by a Low-Cost Haptic Device in Various Tasks, IEEE World Haptics Conference (WHC), Publisher: IEEE, Pages: 667-672

Conference paper

Chessa M, Sabatini SP, Solari F, Tatti F et al., 2011, A quantitative comparison of speed and reliability for log-polar mapping techniques, Pages: 41-50, ISBN: 9783642239670

A space-variant representation of images is of great importance for active vision systems capable of interacting with the environment. A precise processing of the visual signal is achieved in the fovea, and, at the same time, a coarse computation in the periphery provides enough information to detect new saliences on which to bring the focus of attention. In this work, different techniques to implement the blind-spot model for the log-polar mapping are quantitatively analyzed to assess the visual quality of the transformed images and to evaluate the associated computational load. The technique with the best trade-off between these two aspects is expected to show the most efficient behaviour in robotic vision systems, where the execution time and the reliability of the visual information are crucial. © 2011 Springer-Verlag.

Book chapter
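The space-variant sampling described in the abstract above (fine in the fovea, coarse in the periphery) can be illustrated with a generic log-polar resampling sketch. Note this is not the blind-spot model variants benchmarked in the paper; the function name, output shape, and `r_min` parameter below are all chosen for this illustration only, and simple nearest-neighbour sampling stands in for the interpolation schemes the paper compares.

```python
import numpy as np

def log_polar_map(img, out_shape=(64, 128), r_min=1.0):
    """Resample a 2D image onto a (rho, theta) log-polar grid.

    Radii are log-spaced from r_min to the largest circle that fits in
    the image, so rows near rho=0 sample the centre (fovea) densely
    while outer rows sample the periphery coarsely.
    """
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    n_rho, n_theta = out_shape
    # Log-spaced radii and uniformly spaced angles
    rhos = np.logspace(np.log10(r_min), np.log10(r_max), n_rho)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(rhos, thetas, indexing="ij")
    # Map each (rho, theta) sample back to Cartesian pixel coordinates
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    # Nearest-neighbour lookup into the source image
    return img[ys, xs]
```

The inverse mapping, interpolation scheme, and treatment of the central blind spot are exactly the design choices whose speed/quality trade-offs the chapter analyses.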

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
