Imperial College London

Professor Wayne Luk

Faculty of Engineering, Department of Computing

Professor of Computer Engineering
 
 
 

Contact

 

+44 (0)20 7594 8313
w.luk

 
 

Location

 

434 Huxley Building, South Kensington Campus



Publications

Citation

BibTeX format

@inproceedings{Shao:2018:10.1109/ASAP.2018.8445099,
author = {Shao, S and Tsai, J and Mysior, M and Luk, W and Chau, T and Warren, A and Jeppesen, B},
doi = {10.1109/ASAP.2018.8445099},
title = {Towards Hardware Accelerated Reinforcement Learning for Application-Specific Robotic Control},
url = {http://dx.doi.org/10.1109/ASAP.2018.8445099},
year = {2018}
}

RIS format (EndNote, RefMan)

TY  - CPAPER
AB  - © 2018 IEEE. Reinforcement Learning (RL) is an area of machine learning in which an agent interacts with the environment by making sequential decisions. The agent receives reward from the environment based on how good the decisions are and tries to find an optimal decision-making policy that maximises its long-term cumulative reward. This paper presents a novel approach which has shown promise in applying accelerated simulation of RL policy training to automating the control of a real robot arm for specific applications. The approach has two steps. First, design space exploration techniques are developed to enhance the performance of an FPGA accelerator for RL policy training based on Trust Region Policy Optimisation (TRPO), which results in a 43% speed improvement over a previous FPGA implementation, while achieving a 4.65 times speed-up against deep learning libraries running on GPU and a 19.29 times speed-up against CPU. Second, the trained RL policy is transferred to a real robot arm. Our experiments show that the trained arm can successfully reach and pick up predefined objects, demonstrating the feasibility of our approach.
AU - Shao,S
AU - Tsai,J
AU - Mysior,M
AU - Luk,W
AU - Chau,T
AU - Warren,A
AU - Jeppesen,B
DO - 10.1109/ASAP.2018.8445099
PY - 2018///
SN - 1063-6862
TI - Towards Hardware Accelerated Reinforcement Learning for Application-Specific Robotic Control
UR - http://dx.doi.org/10.1109/ASAP.2018.8445099
UR - http://hdl.handle.net/10044/1/64190
ER -
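The abstract above describes policy-gradient training: an agent samples actions, receives rewards, and adjusts its policy to maximise long-term cumulative reward. As an illustration only (not the paper's TRPO algorithm or its FPGA implementation), here is a minimal REINFORCE-style sketch in Python on a hypothetical two-armed bandit, where all names and hyperparameters are assumptions for demonstration:

```python
import math
import random

# Toy two-armed bandit: action 0 pays reward 0.2, action 1 pays 1.0.
# A softmax policy over two logits is trained with a REINFORCE-style
# policy-gradient update and a running-average reward baseline.
REWARDS = [0.2, 1.0]

def softmax(logits):
    """Convert logits into a probability distribution over actions."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def train(episodes=2000, lr=0.1, seed=0):
    """Train the policy and return the final action probabilities."""
    rng = random.Random(seed)
    logits = [0.0, 0.0]
    baseline = 0.0
    for _ in range(episodes):
        probs = softmax(logits)
        action = rng.choices([0, 1], weights=probs)[0]
        reward = REWARDS[action]
        baseline += 0.01 * (reward - baseline)   # running-average baseline
        advantage = reward - baseline
        # grad of log pi(action) w.r.t. logits is one_hot(action) - probs
        for i in range(2):
            grad = (1.0 if i == action else 0.0) - probs[i]
            logits[i] += lr * advantage * grad
    return softmax(logits)

probs = train()
```

After training, the policy should strongly prefer the higher-reward action, mirroring the abstract's point that the agent learns to maximise cumulative reward; TRPO refines this basic scheme by constraining each policy update to a trust region.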