Imperial College London

Professor Andrew Davison

Faculty of Engineering, Department of Computing

Professor of Robot Vision

Contact

+44 (0)20 7594 8316
a.davison

Assistant

Ms Lucy Atthis +44 (0)20 7594 8259

Location

303 William Penney Laboratory
South Kensington Campus


Publications

Citation

BibTeX format

@article{James:2020:10.1109/lra.2020.2974707,
author = {James, S and Ma, Z and Arrojo, DR and Davison, AJ},
doi = {10.1109/lra.2020.2974707},
journal = {IEEE Robotics and Automation Letters},
pages = {3019--3026},
title = {RLBench: The robot learning benchmark \& learning environment},
url = {http://dx.doi.org/10.1109/lra.2020.2974707},
volume = {5},
year = {2020}
}

RIS format (EndNote, RefMan)

TY  - JOUR
AB - We present a challenging new benchmark and learning environment for robot learning: RLBench. The benchmark features 100 completely unique, hand-designed tasks, ranging in difficulty from simple target reaching and door opening to longer multi-stage tasks, such as opening an oven and placing a tray in it. We provide an array of both proprioceptive observations and visual observations, which include RGB, depth, and segmentation masks from an over-the-shoulder stereo camera and an eye-in-hand monocular camera. Uniquely, each task comes with an infinite supply of demos through the use of motion planners operating on a series of waypoints given during task creation time; enabling an exciting flurry of demonstration-based learning possibilities. RLBench has been designed with scalability in mind; new tasks, along with their motion-planned demos, can be easily created and then verified by a series of tools, allowing users to submit their own tasks to the RLBench task repository. This large-scale benchmark aims to accelerate progress in a number of vision-guided manipulation research areas, including: reinforcement learning, imitation learning, multi-task learning, geometric computer vision, and in particular, few-shot learning. With the benchmark's breadth of tasks and demonstrations, we propose the first large-scale few-shot challenge in robotics. We hope that the scale and diversity of RLBench offers unparalleled research opportunities in the robot learning community and beyond. Benchmarking code and videos can be found at https://sites.google.com/view/rlbench .
AU - James,S
AU - Ma,Z
AU - Arrojo,DR
AU - Davison,AJ
DO - 10.1109/lra.2020.2974707
EP - 3026
PY - 2020///
SN - 2377-3766
SP - 3019
TI - RLBench: The robot learning benchmark & learning environment
T2 - IEEE Robotics and Automation Letters
UR - http://dx.doi.org/10.1109/lra.2020.2974707
UR - https://ieeexplore.ieee.org/document/9001253
UR - http://hdl.handle.net/10044/1/77812
VL - 5
ER -
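
The abstract describes RLBench's task interface, its camera observations, and its on-demand motion-planned demonstrations. The sketch below shows one way these fit together in a Python session. Module and class names follow the public RLBench repository (https://github.com/stepjam/RLBench) and may differ between versions; the fixed 8-dimensional action (7 joint velocities plus a gripper action for the Franka Panda arm) is an assumption for illustration, not part of the paper.

import numpy as np

from rlbench.environment import Environment
from rlbench.action_modes.action_mode import MoveArmThenGripper
from rlbench.action_modes.arm_action_modes import JointVelocity
from rlbench.action_modes.gripper_action_modes import Discrete
from rlbench.observation_config import ObservationConfig
from rlbench.tasks import ReachTarget

# Request the observations described in the abstract: proprioceptive
# state plus RGB, depth, and segmentation masks from the cameras.
obs_config = ObservationConfig()
obs_config.set_all(True)

# Drive the arm with joint velocities and the gripper with a
# discrete open/close action.
action_mode = MoveArmThenGripper(
    arm_action_mode=JointVelocity(),
    gripper_action_mode=Discrete(),
)

env = Environment(action_mode, obs_config=obs_config, headless=True)
env.launch()

task = env.get_task(ReachTarget)  # one of the hand-designed tasks

# Each task can generate motion-planned demonstrations on demand.
demos = task.get_demos(2, live_demos=True)

# A standard reset/step loop, here with a random policy.
descriptions, obs = task.reset()
for _ in range(100):
    # 7 joint velocities + 1 gripper action (assumes the Panda arm).
    action = np.random.uniform(-1.0, 1.0, size=8)
    obs, reward, terminate = task.step(action)
    if terminate:
        break

env.shutdown()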