An RGB-D 3D reconstruction approach that leverages deep learning to make object-level predictions about thicknesses.

Detailed 3D reconstruction is an important challenge with applications in robotics and augmented and virtual reality, and it has seen impressive progress over the past years. Advances were driven by the availability of depth (RGB-D) cameras and increased compute power, e.g. in the form of GPUs, but also by the inclusion of machine learning in the process. Here, we propose X-Section, an RGB-D 3D reconstruction approach that leverages deep learning to make object-level predictions about thicknesses that can be readily integrated into a volumetric multi-view fusion process, for which we propose an extension to the popular KinectFusion approach. In essence, our method completes shapes in general indoor scenes behind what is sensed by the RGB-D camera, which may be crucial e.g. for robotic manipulation tasks or efficient scene exploration. Predicting object thicknesses rather than volumes allows us to work with comparably high spatial resolution without exploding the memory and training-data requirements of the employed Convolutional Neural Networks. In a series of qualitative and quantitative evaluations, we demonstrate how we accurately predict object thickness and reconstruct general 3D scenes containing multiple objects.
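To make the fusion idea concrete, below is a minimal NumPy sketch of how a per-pixel thickness map could be folded into a KinectFusion-style TSDF update: voxels that project between the sensed depth d and a predicted back face d + t are pushed towards "inside" rather than left unobserved. This is an illustrative assumption about the integration, not the paper's actual implementation; the function name fuse_frame and all parameters (e.g. trunc, voxel_size) are hypothetical.

```python
import numpy as np

def fuse_frame(tsdf, weight, depth, thickness, K, T_cw,
               origin, voxel_size, trunc=0.04):
    """Hypothetical TSDF update using a per-pixel thickness map.

    tsdf, weight : (Z, Y, X) float arrays, updated in place
    depth, thickness : (H, W) maps in metres (thickness from a CNN)
    K : 3x3 intrinsics, T_cw : 4x4 world-to-camera transform
    """
    dz, dy, dx = tsdf.shape
    # World coordinates of every voxel centre.
    zz, yy, xx = np.meshgrid(np.arange(dz), np.arange(dy), np.arange(dx),
                             indexing="ij")
    pts = origin + voxel_size * np.stack([xx, yy, zz], -1).reshape(-1, 3)

    # Transform into the camera frame and project with the pinhole model.
    pts_c = pts @ T_cw[:3, :3].T + T_cw[:3, 3]
    z = pts_c[:, 2]
    z_safe = np.where(z > 1e-6, z, 1.0)          # avoid divide-by-zero
    u = np.round(pts_c[:, 0] * K[0, 0] / z_safe + K[0, 2]).astype(int)
    v = np.round(pts_c[:, 1] * K[1, 1] / z_safe + K[1, 2]).astype(int)

    h, w = depth.shape
    ok = (z > 1e-6) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    ui, vi = u.clip(0, w - 1), v.clip(0, h - 1)
    d = depth[vi, ui]
    t = thickness[vi, ui]
    ok &= d > 0

    sdf = d - z                                   # + in front, - behind
    front = ok & (sdf >= -trunc)                  # standard KinectFusion band
    inside = ok & (z > d) & (z <= d + t)          # predicted cross-section
    upd = front | inside

    val = np.clip(sdf, -trunc, trunc) / trunc     # normalised TSDF in [-1, 1]
    val = np.where(inside & ~front, -1.0, val)    # deep inside: fully "in"

    val = val.reshape(tsdf.shape)
    upd = upd.reshape(tsdf.shape)
    # Weighted running average, as in KinectFusion.
    tsdf[upd] = (tsdf[upd] * weight[upd] + val[upd]) / (weight[upd] + 1.0)
    weight[upd] += 1.0
```

The only change relative to plain KinectFusion in this sketch is the extra "inside" band contributed by d + t; a practical system would additionally cap the running weight and gate the thickness term on the network's confidence.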

Andrea Nicastro, Ronald Clark, Stefan Leutenegger. X-Section: Cross-Section Prediction for Enhanced RGB-D Fusion. IEEE International Conference on Computer Vision (ICCV), 2019.

The X-Section software is available through the link on the right and is free to use for non-commercial purposes. The full terms and conditions that govern its use are detailed here.