The SceneNet RGB-D generator randomly samples and positions objects, then runs a physics simulation to produce a physically plausible scene description. A camera trajectory is then generated from a two-body simulation, with OpenGL z-buffer collision detection keeping the camera clear of scene geometry. Finally, the renderer (a modified version of the Opposite Renderer, built on the OptiX framework) renders the trajectory, outputting RGB, depth, and instance masks.
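The two-body trajectory idea can be sketched as two points, a camera eye and a look-at target, wandering under random forces with velocity damping. This is a minimal illustration, not the authors' implementation; the damping constant, time step, force model, and bounds are assumptions, and collision handling is only indicated in a comment:

```python
import numpy as np

def two_body_trajectory(n_steps=100, dt=1.0 / 25.0, drag=0.8, seed=0):
    """Sketch of a two-body random camera trajectory (hypothetical parameters)."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0.0, 5.0, size=(2, 3))  # row 0: camera eye, row 1: look-at target
    vel = np.zeros((2, 3))
    poses = []
    for _ in range(n_steps):
        force = rng.normal(0.0, 1.0, size=(2, 3))  # random accelerations for both bodies
        vel = drag * vel + force * dt              # damped velocity update
        pos = pos + vel * dt
        # A real implementation would reject or deflect steps that collide
        # with scene geometry (e.g. via an OpenGL z-buffer test).
        poses.append((pos[0].copy(), pos[1].copy()))
    return poses  # each entry is (eye_position, lookat_position)

poses = two_body_trajectory()
```

Looking from the eye toward the target at each step yields a smooth but unscripted camera path through the scene.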

Because the SceneNet RGB-D generator provides perfect camera poses and depth data, it supports investigation of geometric computer vision problems such as optical flow, camera pose estimation, and 3D scene labelling. Random sampling permits virtually unlimited scene configurations.
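Because depth and poses are exact, ground-truth optical flow for a static scene can be derived by back-projecting each pixel into 3D and re-projecting it into the next frame. A hedged sketch, assuming pinhole intrinsics `K` and a relative pose `T_1to2` (both hypothetical inputs, not part of the released tooling):

```python
import numpy as np

def flow_from_depth(depth, K, T_1to2):
    """Ground-truth optical flow from perfect depth and relative camera pose.

    depth:  (h, w) depth map of frame 1
    K:      (3, 3) pinhole intrinsics
    T_1to2: (4, 4) rigid transform from camera 1 to camera 2 coordinates
    Assumes a static scene; returns an (h, w, 2) flow field in pixels.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # (3, N)
    rays = np.linalg.inv(K) @ pix                 # back-project pixels to rays
    pts1 = rays * depth.reshape(1, -1)            # 3-D points in camera 1
    pts2 = T_1to2[:3, :3] @ pts1 + T_1to2[:3, 3:4]  # transform into camera 2
    proj = K @ pts2
    uv2 = proj[:2] / proj[2:3]                    # re-project to pixel coords
    return (uv2 - pix[:2]).T.reshape(h, w, 2)
```

With the identity transform the flow is zero everywhere; a pure x-translation of the camera shifts every pixel by `fx * tx / z`, which makes the function easy to sanity-check.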


John McCormac, Ankur Handa, Stefan Leutenegger, Andrew J. Davison. SceneNet RGB-D: Can 5M Synthetic Images Beat Generic ImageNet Pre-training on Indoor Segmentation? International Conference on Computer Vision (ICCV), 2017.

The SceneNet RGB-D generator is available through the following link and is free to use for non-commercial purposes. The full terms and conditions governing its use are detailed here:

https://bitbucket.org/dysonroboticslab/scenenetrgb-d

A dataset generated using the SceneNet RGB-D software is available here.