Imperial College London

Dr Stefan Leutenegger

Faculty of Engineering, Department of Computing

Senior Lecturer

Contact

 

+44 (0)20 7594 7123
s.leutenegger
Website


Location

 

360, ACE Extension, South Kensington Campus



Publications

Citation

BibTeX format

@unpublished{Li:2018,
author = {Li, M and Songur, N and Orlov, P and Leutenegger, S and Faisal, AA},
title = {Towards an embodied semantic fovea: Semantic 3D scene reconstruction from ego-centric eye-tracker videos},
url = {http://arxiv.org/abs/1807.10561v1},
year = {2018}
}

RIS format (EndNote, RefMan)

TY  - UNPB
AB - Incorporating the physical environment is essential for a complete understanding of human behavior in unconstrained every-day tasks. This is especially important in ego-centric tasks where obtaining 3 dimensional information is both limiting and challenging with the current 2D video analysis methods proving insufficient. Here we demonstrate a proof-of-concept system which provides real-time 3D mapping and semantic labeling of the local environment from an ego-centric RGB-D video-stream with 3D gaze point estimation from head mounted eye tracking glasses. We augment existing work in Semantic Simultaneous Localization And Mapping (Semantic SLAM) with collected gaze vectors. Our system can then find and track objects both inside and outside the user field-of-view in 3D from multiple perspectives with reasonable accuracy. We validate our concept by producing a semantic map from images of the NYUv2 dataset while simultaneously estimating gaze position and gaze classes from recorded gaze data of the dataset images.
AU - Li,M
AU - Songur,N
AU - Orlov,P
AU - Leutenegger,S
AU - Faisal,AA
PY - 2018///
TI - Towards an embodied semantic fovea: Semantic 3D scene reconstruction from ego-centric eye-tracker videos
UR - http://arxiv.org/abs/1807.10561v1
UR - http://hdl.handle.net/10044/1/71564
ER -