Imperial College London


Faculty of Engineering, Department of Computing

Professor in Machine Learning & Computer Vision



+44 (0)20 7594 8461
s.zafeiriou




375 Huxley Building, South Kensington Campus






BibTeX format

@inproceedings{Zhou2018,
author = {Zhou, Y and Deng, J and Zafeiriou, S},
doi = {10.1109/FG.2018.00077},
pages = {480--484},
publisher = {IEEE},
title = {Improve accurate pose alignment and action localization by dense pose estimation},
url = {},
year = {2018}
}

RIS format (EndNote, RefMan)

AB - In this work we explore the use of shape-based representations as an auxiliary source of supervision for pose estimation and action recognition. We show that shape-based representations can act as a source of 'privileged information' that complements and extends the pure landmark-level annotations. We explore 2D shape-based supervision signals, such as Support Vector Shape. Our experiments show that shape-based supervision signals substantially improve pose alignment accuracy in the form of a cascade architecture. We outperform state-of-the-art methods on the MPII and LSP datasets, while using substantially shallower networks. For action localization in untrimmed videos, our method introduces additional classification signals based on structured segment networks (SSN) and further improves performance. Specifically, dense human pose and landmark localization signals are incorporated into the detection process. We applied our network to all frames of the videos, alongside the output from SSN, to further improve detection accuracy, especially for pose-related and sparsely annotated videos. The method achieves state-of-the-art performance on the Activity Detection Task of the ActivityNet Challenge 2017 test set and shows remarkable improvement on pose-related and sparsely annotated categories, e.g. sports.
AU - Zhou,Y
AU - Deng,J
AU - Zafeiriou,S
DO - 10.1109/FG.2018.00077
EP - 484
PY - 2018///
SN - 2326-5396
SP - 480
TI - Improve accurate pose alignment and action localization by dense pose estimation
UR -
ER -