Imperial College London

Dr Thrishantha Nanayakkara

Faculty of Engineering, Dyson School of Design Engineering

Reader in Design Engineering and Robotics



+44 (0)20 7594 0965
t.nanayakkara




RCS1 M229, Dyson Building, South Kensington Campus






BibTeX format

@article{cotugno2018,
author = {Cotugno, G and Konstantinova, J and Althoefer, K and Nanayakkara, DPT},
doi = {10.1371/journal.pone.0208228},
journal = {PLoS ONE},
title = {Modelling the structure of object-independent human affordances of approaching to grasp for robotic hands},
url = {},
volume = {13},
year = {2018}
}

RIS format (EndNote, RefMan)

AB - Grasp affordances in robotics represent different ways to grasp an object involving a variety of factors from vision to hand control. A model of grasp affordances that is able to scale across different objects, features and domains is needed to provide robots with advanced manipulation skills. The existing frameworks, however, can be difficult to extend towards a more general and domain independent approach. This work is the first step towards a modular implementation of grasp affordances that can be separated into two stages: approach to grasp and grasp execution. In this study, human experiments of approaching to grasp are analysed, and object-independent patterns of motion are defined and modelled analytically from the data. Human subjects performed a specific action (hammering) using objects of different geometry, size and weight. Motion capture data relating the hand-object approach distance was used for the analysis. The results showed that approach to grasp can be structured in four distinct phases that are best represented by non-linear models, independent from the objects being handled. This suggests that approaching to grasp patterns are following an intentionally planned control strategy, rather than implementing a reactive execution.
AU - Cotugno,G
AU - Konstantinova,J
AU - Althoefer,K
AU - Nanayakkara,DPT
DO - 10.1371/journal.pone.0208228
PY - 2018///
SN - 1932-6203
TI - Modelling the structure of object-independent human affordances of approaching to grasp for robotic hands
UR -
VL - 13
ER -