Research
In our laboratory, we perform research on intelligent robotic devices that interact with their users, learn from them, and adapt their assistance to maximise their users' physical, cognitive and social well-being. Our research spans several topic areas, including machine learning, user modelling, cognitive architectures, human action analysis, and shared control. We aim to advance fundamental theoretical concepts in these fields without ignoring the engineering challenges of the real world, so our experiments involve real robots, real humans, and real tasks. Do feel free to contact us if you have any queries, are interested in joining us as a student or a researcher, or have a great idea for scientific collaboration.

Research Themes
Adaptive Cognitive Architectures for Human-Robot Interaction
Over the past 15 years, we have been developing a core distributed cognitive architecture for understanding human actions, predicting the intention behind them, and, if needed, generating assistance to ensure that the human achieves their desired goal. The core of our architecture relies on learned hierarchical ensembles of inverse and forward models that predict future states of an observed system. To ensure scalability and real-time operation on embedded devices (such as our robots), we employ an attention mechanism that optimally distributes the computational and sensorimotor resources of the robotic device.
We have evaluated this architecture in many diverse tasks, including human-robot collaboration, multiagent computer games, intelligent robotic wheelchairs for disabled adults and children, collaborative music generation, physical education tasks (e.g. dance), and multirobot coordination and control. We generally use the label HAMMER (for "Hierarchical Attentive Multiple Models for Execution and Recognition") to describe this architectural approach, with the implementation details of the inverse and forward models frequently optimised for the particular task.
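To make the mechanism concrete, here is a minimal sketch of the paired inverse/forward model idea in Python. The proportional controllers, trivial dynamics, and additive confidence rule are illustrative assumptions for exposition, not the learned models used in our publications.

```python
import numpy as np

class Hypothesis:
    """One candidate intention: an inverse model proposing the command
    needed to achieve its goal, paired with a forward model predicting
    that command's outcome."""
    def __init__(self, name, inverse_model, forward_model):
        self.name = name
        self.inverse_model = inverse_model    # state -> command (for this hypothesis' goal)
        self.forward_model = forward_model    # (state, command) -> predicted next state
        self.confidence = 0.0

def observe(hypotheses, state, observed_next, top_k=2):
    """Update each hypothesis' confidence from its prediction error, and
    return the top_k hypotheses that attention would allocate resources to."""
    for h in hypotheses:
        command = h.inverse_model(state)
        predicted = h.forward_model(state, command)
        error = np.linalg.norm(predicted - observed_next)
        h.confidence += 1.0 / (1.0 + error)   # accurate predictors gain confidence
    return sorted(hypotheses, key=lambda h: -h.confidence)[:top_k]

# Example: is the observed hand moving towards the cup or the phone?
towards = lambda goal: (lambda s: 0.1 * (goal - s))   # toy proportional controller
integrate = lambda s, u: s + u                        # toy dynamics
cup, phone = np.array([1.0, 0.0]), np.array([0.0, 1.0])
hyps = [Hypothesis("reach-cup", towards(cup), integrate),
        Hypothesis("reach-phone", towards(phone), integrate)]
state = np.zeros(2)
print(observe(hyps, state, observed_next=state + 0.1 * (cup - state))[0].name)
```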
Key Publications:
- Demiris Y, Aziz-Zadeh L, Bonaiuto J (2014) Information processing in the mirror neuron system in primates and machines, Neuroinformatics, pp: 3357-3360.
- Demiris Y (2007) Prediction of intent in robotics and multi-agent systems, Cognitive Processing (8), pp: 151-158.
Assistive Robotics
We design and implement algorithms for robots that can assist humans in their daily lives. Specifically, we are interested in adaptive robots: we first build user models, and then employ these models to provide personalised assistance. The main focus of our research is robot-assisted dressing.
More information can be found on Robot-Assisted Dressing.
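As a sketch of the personalisation idea, the example below fits a simple per-user pose model from observed poses and uses it to adapt a nominal dressing waypoint. The averaging model and blending weight are illustrative assumptions, much simpler than the latent-space and probabilistic posture models in the publications below.

```python
import numpy as np

class UserPoseModel:
    """Per-user model of typical arm poses, fitted from observation."""
    def __init__(self):
        self.mean = None

    def fit(self, observed_poses):
        # observed_poses: (N, D) array of arm joint positions for one user
        self.mean = observed_poses.mean(axis=0)
        return self

    def adapt_waypoint(self, nominal_waypoint, weight=0.5):
        # Pull a nominal dressing waypoint towards this user's typical pose,
        # so the garment path respects their range of motion.
        return weight * self.mean + (1.0 - weight) * np.asarray(nominal_waypoint)

# Example: a user who holds their elbow lower than the default path assumes.
poses = np.array([[0.30, 0.10], [0.32, 0.08], [0.28, 0.12]])
model = UserPoseModel().fit(poses)
print(model.adapt_waypoint([0.30, 0.40]))
```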
Key Publications:
- Zhang F, Demiris Y (2020). Learning Grasping Points for Garment Manipulation in Robot-Assisted Dressing. International Conference on Robotics and Automation (ICRA 2020).
- Zhang F, Cully A, Demiris Y (2019). Probabilistic Real-Time User Posture Tracking for Personalized Robot-Assisted Dressing. IEEE Transactions on Robotics, 35.4: 873-888.
- Zhang F, Cully A, Demiris Y (2017). Personalized Robot-assisted Dressing using User Modeling in Latent Spaces. International Conference on Intelligent Robots and Systems (IROS 2017), pp: 3603-3610.
- Gao Y, Chang HJ, Demiris Y, (2016). Iterative Path Optimisation for Personalised Dressing Assistance using Vision and Force Information, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2016), pp: 4398-4403.
- Gao Y, Chang HJ, Demiris Y (2015), User Modelling for Personalised Dressing Assistance by Humanoid Robots, IEEE International Conference on Intelligent Robots and Systems (IROS 2015), pp:1840-1845.
Hierarchical Task Representations and Machine Learning
When you observe a person performing an action, there are multiple levels of abstraction that you can use to describe what they are doing. You can describe the trajectories of their body parts, the objects they are using, and/or the effects they are having on their environment. Additionally, if you observe them long enough, you might notice particular usage patterns, traits, and preferences. We research algorithms for learning task representations that accommodate these abstraction levels. Our published work includes representations at the trajectory level using statistical methods (including Gaussian processes, quantum statistics, and Dirichlet processes, among others), neural networks (including reservoir computing algorithms), and linguistic approaches (for example, stochastic context-free grammars).
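The sketch below illustrates the multi-level idea with a simple task tree whose internal nodes are sub-tasks and whose leaves can carry trajectory-level motion data; the tree structure and the make-tea example are illustrative assumptions, far simpler than the learned representations in the publications below.

```python
from dataclasses import dataclass, field
from typing import List, Optional
import numpy as np

@dataclass
class TaskNode:
    """A node in a hierarchical task representation: internal nodes are
    sub-tasks, leaves can carry trajectory-level motion data."""
    label: str
    children: List["TaskNode"] = field(default_factory=list)
    trajectory: Optional[np.ndarray] = None

    def describe(self, depth=0):
        # Walk the tree, printing the task at every abstraction level.
        print("  " * depth + self.label)
        for child in self.children:
            child.describe(depth + 1)

make_tea = TaskNode("make-tea", [
    TaskNode("boil-water", [TaskNode("reach-kettle"), TaskNode("press-switch")]),
    TaskNode("pour", [TaskNode("grasp-kettle"), TaskNode("tilt-kettle")]),
])
make_tea.describe()
```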
Key Publications:
- Zambelli M, Cully A, Demiris Y, 2020, Multimodal representation models for prediction and control from partial information, Robotics and Autonomous Systems, Vol: 123, ISSN: 0921-8890
- Soh H, Demiris Y (2014) Spatio-Temporal Learning with the Online Finite and Infinite Echo-state Gaussian Processes, IEEE Transactions on Neural Networks and Learning Systems
- Lee K, Su Y, Kim TK, Demiris Y (2013) A syntactic approach to robot imitation learning using probabilistic activity grammars, Robotics and Autonomous Systems (61) pp: 1323-34
In-vehicle Intelligent Systems
We design and implement algorithms for modelling the user's behaviour during driving. Our aim is to provide personalised assistance and training, as well as to predict and help avoid forthcoming critical situations.
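As a sketch of the prediction-for-safety idea, the example below rolls a (here, hand-written) model of vehicle dynamics and driver behaviour forward and flags predicted states that breach a safety threshold; the linear drift model, lane width, and horizon are all illustrative assumptions.

```python
import numpy as np

def predict_states(state, model, horizon=10):
    """Roll a learned dynamics/behaviour model forward `horizon` steps."""
    states = [np.asarray(state, dtype=float)]
    for _ in range(horizon):
        states.append(model(states[-1]))
    return np.array(states)

def critical(states, lane_half_width=1.8):
    """Flag predicted states whose lateral position leaves the lane."""
    return np.abs(states[:, 1]) > lane_half_width

# Example: a constant-speed model with a slight lateral drift; the final
# predicted states breach the lane boundary and would trigger assistance.
drift = lambda s: s + np.array([1.0, 0.25])   # (longitudinal, lateral) step
print(critical(predict_states([0.0, 0.0], drift)))
```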
Key Publications:
- Amadori P, Fischer T, Wang R, Demiris Y, Decision Anticipation for Driving Assistance Systems, IEEE International Conference on Intelligent Transportation Systems 2020.
- Georgiou T, Demiris Y (2017), Adaptive user modelling in car racing games using behavioural and physiological data, User Modeling and User-Adapted Interaction (UMUAI), pp: 1-45.
- Georgiou T, Demiris Y, (2016), Personalised Track Design in Car Racing Games, IEEE Computational Intelligence and Games
- Georgiou T, Demiris Y, (2015), Predicting car states through learned models of vehicle dynamics and user behaviours, Intelligent Vehicles Symposium (IV), Publisher: IEEE, Pages: 1240-1245.
Robot Vision
Building kinematic structures of articulated objects from visual input data is an active research topic in computer vision and robotics. An accurately estimated kinematic structure captures both the motion properties and the shape information of an object in a topological manner, encoding the relationships between rigid body parts connected by kinematic joints.
Accurate and efficient estimation of kinematic correspondences between heterogeneous objects benefits many high-level tasks in computer vision and robotics, such as learning by imitation, human motion retargeting to robots, human action recognition from different sensors, viewpoint-invariant human action recognition by 3D skeletons, behaviour discovery and alignment, affordance-based object/tool categorisation, body scheme learning for robotic manipulators, and articulated object manipulation. Our lab therefore focuses on estimating accurate kinematic structures and on finding correspondences between two articulated kinematic structures extracted from different objects.
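The sketch below illustrates the representation: a kinematic structure as a graph of rigid parts linked by joints, with a naive degree-based node similarity standing in for the multi-order hypergraph matching developed in the publications below.

```python
import numpy as np

class KinematicStructure:
    """A kinematic structure: rigid parts (nodes) linked by joints (edges)."""
    def __init__(self, parts, joints):
        self.parts = parts
        self.adj = {p: set() for p in parts}
        for a, b in joints:                   # each joint connects two parts
            self.adj[a].add(b)
            self.adj[b].add(a)

def node_similarity(s1, s2):
    """Crude first-order similarity between structures: compare how many
    joints each part participates in (its connectivity degree)."""
    sim = np.zeros((len(s1.parts), len(s2.parts)))
    for i, p in enumerate(s1.parts):
        for j, q in enumerate(s2.parts):
            sim[i, j] = 1.0 / (1.0 + abs(len(s1.adj[p]) - len(s2.adj[q])))
    return sim

human = KinematicStructure(["torso", "arm", "leg"],
                           [("torso", "arm"), ("torso", "leg")])
robot = KinematicStructure(["base", "link1", "link2"],
                           [("base", "link1"), ("link1", "link2")])
print(node_similarity(human, robot))
```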
Key Publications:
- Buizza C, Fischer T, Demiris Y, 2019, Real-time multi-person pose tracking using data assimilation, IEEE Winter Conference on Applications of Computer Vision.
- Fischer T, Chang HJ, Demiris Y, 2018, RT-GENE: Real-Time Eye Gaze Estimation in Natural Environments, Proceedings of the European Conference on Computer Vision, pp:339-357
- Chang HJ, Fischer T, Petit M, Zambelli M, Demiris Y, 2018, Learning Kinematic Structure Correspondences Using Multi-Order Similarities, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 12, pp:2920-2934
- Chang HJ, Fischer T, Petit M, Zambelli M, Demiris Y, 2016, Kinematic Structure Correspondences via Hypergraph Matching, IEEE Conference on Computer Vision and Pattern Recognition, pp:4216-4225
- Chang HJ, Demiris Y, 2015, Unsupervised Learning of Complex Articulated Kinematic Structures combining Motion and Skeleton Information, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp:3138-3146
Sensorimotor Self and Mirroring
The aim of this research theme is to design and implement efficient learning algorithms for self-exploration and body-schema building in humanoid robots, and for understanding the objects in their environment. These features are embedded into a cognitive-developmental framework so that the robot acquires a mirror system, bootstrapping its understanding of the actions of others from the learned model of the self.
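The sketch below illustrates the bootstrapping idea: learn a forward model of the robot's own body from motor babbling, then reuse it to score how self-like an observed motion is. The linear body, least-squares model, and scoring rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Motor babbling: random commands and the body's (hypothetical) response.
commands = rng.uniform(-1.0, 1.0, size=(200, 2))
true_body = np.array([[0.9, 0.1], [0.0, 1.1]])           # unknown to the robot
responses = commands @ true_body + 0.01 * rng.normal(size=(200, 2))

# Body schema: least-squares forward model mapping command -> body motion.
body_model, *_ = np.linalg.lstsq(commands, responses, rcond=None)

def mirror_score(observed_motion, candidate_command):
    """Low prediction error means the observed motion looks like something
    the robot itself could have produced (the mirror-system idea)."""
    predicted = candidate_command @ body_model
    return -np.linalg.norm(predicted - observed_motion)

# A self-producible motion scores higher than an arbitrary one.
cmd = np.array([0.5, 0.5])
print(mirror_score(cmd @ true_body, cmd) > mirror_score(np.array([5.0, -5.0]), cmd))
```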
Key Publications:
- Fischer T, Demiris Y, Computational Modelling of Embodied Visual Perspective-taking, IEEE Transactions on Cognitive and Developmental Systems, to appear
- Petit M, Fischer T, Demiris Y, 2016, Lifelong Augmentation of Multi-Modal Streaming Autobiographical Memories, IEEE Transactions on Cognitive and Developmental Systems, vol. 8, no. 3, pp:201-213
- Fischer T, Demiris Y, 2016, Markerless Perspective Taking for Humanoid Robots in Unconstrained Environments, IEEE International Conference on Robotics and Automation (ICRA), pp:3309-3316
- Zambelli M, Demiris Y, 2016, Multimodal Imitation using Self-learned Sensorimotor Representations, IEEE/RAS International Conference on Intelligent Robots and Systems
- Petit M, Demiris Y, 2016, Hierarchical action learning by instruction through interactive grounding of body parts and proto-actions, IEEE International Conference on Robotics and Automation, pp:3375-3382
- Chang HJ, Fischer T, Petit M, Zambelli M, Demiris Y, 2016, Kinematic Structure Correspondences via Hypergraph Matching, IEEE Conference on Computer Vision and Pattern Recognition, pp:4216-4225
Shared Control & Haptic Telepresence
One of the important lines of research in our lab concerns how the control system of a robotic device can be shared between a human collaborator and a sensor-based autonomous decision-making process, so that the final outcome takes advantage of the strengths of both. Our shared (or collaborative) control methods receive input from the human collaborator, estimate the current environmental state, form predictions regarding the intention of the user as well as the expected outcome of the current control commands, and generate assistance (complementary control signals) when, and only when, it is needed. We have applied these shared control methods in multiple domains, including shared control of robotic wheelchairs for the elderly and for disabled children and adults, as well as in high-performance scenarios such as F1 racing.
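A minimal sketch of the blending idea follows, assuming a scalar risk estimate that arbitrates between the human's command and an autonomous correction; the linear blending rule is illustrative, whereas our published methods derive when and how to assist from learned user and task models.

```python
import numpy as np

def blend(u_human, u_auto, predicted_risk):
    """Shift authority to the autonomous controller in proportion to risk;
    at predicted_risk == 0 the human's command passes through unchanged."""
    alpha = float(np.clip(predicted_risk, 0.0, 1.0))
    return (1.0 - alpha) * np.asarray(u_human) + alpha * np.asarray(u_auto)

# Example: a wheelchair heading towards a doorway edge receives a gentle
# corrective component from the autonomous controller.
print(blend(u_human=[1.0, 0.0], u_auto=[0.8, 0.3], predicted_risk=0.4))
```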
Key Publications:
- Zolotas M, Elsdon J, Demiris Y, 2018, Head-mounted augmented reality for explainable robotic wheelchair assistance, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE
- Chacon Quesada R, Demiris Y, 2018, Augmented reality control of smart wheelchair using eye-gaze–enabled selection of affordances, https://www.idiap.ch/workshop/iros2018/files/, IROS 2018 Workshop on Robots for Assisted Living
- Kucukyilmaz A, Demiris Y (2015) One-shot assistance estimation from expert demonstrations for a shared control wheelchair system, IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pp:438-443
- Soh H, Demiris Y (2013) When and How To Help: An Iterative Probabilistic Model for Learning Assistance by Demonstration, IROS, pp:3230-3236.
- Carlson T, Demiris Y (2012) Collaborative Control for a Robotic Wheelchair: Evaluation of Performance, Attention, and Workload, IEEE Transactions on Systems, Man and Cybernetics, Part B (42), pp:876-888.
Related European Research Projects

Research Sponsors and Industrial Collaborators
