Imperial College London

Dr Stefan Leutenegger

Faculty of Engineering, Department of Computing

Visiting Reader

Contact

 

s.leutenegger


Location

 

ACE Extension, South Kensington Campus


Publications

Citation

BibTeX format

@unpublished{Nicastro:2019,
author = {Nicastro, A and Clark, R and Leutenegger, S},
title = {X-Section: cross-section prediction for enhanced RGBD fusion},
url = {http://arxiv.org/abs/1903.00987v2},
year = {2019}
}

RIS format (EndNote, RefMan)

TY  - UNPB
AB - Detailed 3D reconstruction is an important challenge with application to robotics, augmented and virtual reality, which has seen impressive progress throughout the past years. Advancements were driven by the availability of depth cameras (RGB-D), as well as increased compute power, e.g. in the form of GPUs -- but also thanks to inclusion of machine learning in the process. Here, we propose X-Section, an RGB-D 3D reconstruction approach that leverages deep learning to make object-level predictions about thicknesses that can be readily integrated into a volumetric multi-view fusion process, where we propose an extension to the popular KinectFusion approach. In essence, our method allows to complete shape in general indoor scenes behind what is sensed by the RGB-D camera, which may be crucial e.g. for robotic manipulation tasks or efficient scene exploration. Predicting object thicknesses rather than volumes allows us to work with comparably high spatial resolution without exploding memory and training data requirements on the employed Convolutional Neural Networks. In a series of qualitative and quantitative evaluations, we demonstrate how we accurately predict object thickness and reconstruct general 3D scenes containing multiple objects.
AU - Nicastro,A
AU - Clark,R
AU - Leutenegger,S
PY - 2019///
TI - X-Section: cross-section prediction for enhanced RGBD fusion
UR - http://arxiv.org/abs/1903.00987v2
UR - http://hdl.handle.net/10044/1/71988
ER -
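
The abstract above describes integrating predicted object thicknesses into a volumetric multi-view fusion process that extends KinectFusion. The sketch below is a minimal, hypothetical illustration of how a per-ray thickness prediction could extend a standard truncated signed distance function (TSDF) update to also fill voxels behind the observed surface; the function name, truncation band, and weighting scheme are assumptions for illustration and do not reproduce the authors' actual implementation.

import numpy as np

def fuse_depth_with_thickness(tsdf, weights, depth, thickness, voxel_depths, trunc=0.04):
    """
    Illustrative per-ray TSDF update (KinectFusion-style) extended with a
    predicted object thickness, so that voxels between the observed front
    surface and the predicted back face are also marked as occupied.

    tsdf, weights : 1D arrays over the voxels along one camera ray
    depth         : observed depth of the surface along this ray (metres)
    thickness     : predicted object thickness along this ray (metres)
    voxel_depths  : depth of each voxel centre along the ray (metres)
    trunc         : truncation band of the standard TSDF (metres)
    """
    # Standard signed distance to the observed (front) surface.
    sdf_front = depth - voxel_depths

    # Hypothetical extension: signed distance to the predicted back face.
    sdf_back = (depth + thickness) - voxel_depths

    # Truncated SDF toward the front surface, as in plain KinectFusion.
    new_tsdf = np.clip(sdf_front / trunc, -1.0, 1.0)

    # Voxels behind the front surface but in front of the predicted back
    # face lie inside the object and are marked as occupied.
    inside = (sdf_front < 0) & (sdf_back > 0)
    new_tsdf[inside] = -1.0

    # Update free space, the truncation band, and the predicted interior;
    # leave voxels far behind the back face untouched.
    update = (sdf_front > -trunc) | inside
    w_new = 1.0

    # Weighted running average of TSDF values, as in KinectFusion.
    tsdf[update] = (weights[update] * tsdf[update] + w_new * new_tsdf[update]) / (
        weights[update] + w_new
    )
    weights[update] += w_new
    return tsdf, weights

Under these assumptions, predicting a scalar thickness per ray rather than a full volume keeps the network output two-dimensional, which is consistent with the abstract's point about avoiding exploding memory and training data requirements.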