Imperial College London

Dr Stefan Leutenegger

Faculty of Engineering, Department of Computing

Senior Lecturer

Contact

+44 (0)20 7594 7123 / s.leutenegger / Website

Location

360, ACE Extension, South Kensington Campus

Publications

Citation

BibTeX format

@inproceedings{Bloesch,
author = {Bloesch, M and Czarnowski, J and Clark, R and Leutenegger, S and Davison, AJ},
publisher = {IEEE},
title = {CodeSLAM - Learning a Compact, Optimisable Representation for Dense Visual SLAM},
url = {http://hdl.handle.net/10044/1/58316},
}

RIS format (EndNote, RefMan)

TY  - CPAPER
AB - The representation of geometry in real-time 3D perception systems continues to be a critical research issue. Dense maps capture complete surface shape and can be augmented with semantic labels, but their high dimensionality makes them computationally costly to store and process, and unsuitable for rigorous probabilistic inference. Sparse feature-based representations avoid these problems, but capture only partial scene information and are mainly useful for localisation only. We present a new compact but dense representation of scene geometry which is conditioned on the intensity data from a single image and generated from a code consisting of a small number of parameters. We are inspired by work both on learned depth from images, and auto-encoders. Our approach is suitable for use in a keyframe-based monocular dense SLAM system: While each keyframe with a code can produce a depth map, the code can be optimised efficiently jointly with pose variables and together with the codes of overlapping keyframes to attain global consistency. Conditioning the depth map on the image allows the code to only represent aspects of the local geometry which cannot directly be predicted from the image. We explain how to learn our code representation, and demonstrate its advantageous properties in monocular SLAM.
AU - Bloesch,M
AU - Czarnowski,J
AU - Clark,R
AU - Leutenegger,S
AU - Davison,AJ
PB - IEEE
TI - CodeSLAM - Learning a Compact, Optimisable Representation for Dense Visual SLAM
UR - http://hdl.handle.net/10044/1/58316
ER -
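
The abstract above describes depth maps decoded from a compact per-keyframe code, conditioned on the image intensity, with codes optimised jointly with pose variables. As a purely illustrative sketch (hypothetical, not the authors' implementation), the PyTorch snippet below decodes a small code into a depth map conditioned on image features and refines two keyframes' codes and simplified 6-parameter poses by gradient descent on a stand-in consistency loss; the network sizes, loss, and pose handling are assumptions chosen only to make the idea concrete.

# Illustrative sketch only (hypothetical, not the CodeSLAM implementation):
# a small per-keyframe "code" is decoded into a depth map conditioned on the
# image, and codes are refined jointly with simplified pose parameters by
# gradient descent. Architecture, loss, and pose handling are assumptions.
import torch
import torch.nn as nn

CODE_SIZE = 32          # "small number of parameters" per keyframe (assumed value)
H, W = 64, 64           # toy image resolution

class ConditionedDepthDecoder(nn.Module):
    # Decode a compact code into a positive depth map, conditioned on intensity.
    def __init__(self, code_size=CODE_SIZE):
        super().__init__()
        self.image_encoder = nn.Sequential(          # intensity -> conditioning features
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(                # fuse features with broadcast code
            nn.Conv2d(16 + code_size, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Softplus(),
        )

    def forward(self, image, code):
        feats = self.image_encoder(image)                          # (B, 16, H, W)
        code_map = code[:, :, None, None].expand(-1, -1, H, W)     # broadcast code spatially
        return self.decoder(torch.cat([feats, code_map], dim=1))   # (B, 1, H, W) depth

decoder = ConditionedDepthDecoder()
for p in decoder.parameters():
    p.requires_grad_(False)   # decoder is assumed pre-trained and frozen at SLAM time

# Two toy overlapping keyframes: each gets an optimisable code and a placeholder
# 6-parameter pose vector (a real system would use an se(3) parameterisation).
images = torch.rand(2, 1, H, W)
codes = torch.zeros(2, CODE_SIZE, requires_grad=True)
poses = torch.zeros(2, 6, requires_grad=True)
optimiser = torch.optim.Adam([codes, poses], lr=1e-2)

def toy_residual(depth_a, depth_b, pose_a, pose_b):
    # Stand-in for a photometric/geometric consistency term; a real system
    # would warp one keyframe into the other using depth and relative pose.
    return (depth_a - depth_b).abs().mean() + 1e-3 * (pose_a - pose_b).pow(2).sum()

for step in range(100):
    optimiser.zero_grad()
    depths = decoder(images, codes)               # depth from image + code
    loss = toy_residual(depths[0], depths[1], poses[0], poses[1])
    loss.backward()                               # gradients w.r.t. codes and poses only
    optimiser.step()                              # joint refinement of codes and poses

In the paper the code representation is learned beforehand (auto-encoder-style) and the decoder is then kept fixed while codes and poses are optimised; here the decoder is simply frozen with random weights to keep the sketch self-contained.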