Publications from our Researchers

Several of our current PhD candidates and fellow researchers at the Data Science Institute have published, or are in the process of publishing, papers presenting their research.

Citation

BibTeX format

@unpublished{Arulkumaran:2016,
author = {Arulkumaran, K and Dilokthanakul, N and Shanahan, M and Bharath, AA},
publisher = {IJCAI},
title = {Classifying options for deep reinforcement learning},
url = {http://arxiv.org/abs/1604.08153v1},
year = {2016}
}

RIS format (EndNote, RefMan)

TY  - UNPB
AB - Deep reinforcement learning is the learning of multiple levels of hierarchical representations for reinforcement learning. Hierarchical reinforcement learning focuses on temporal abstractions in planning and learning, allowing temporally-extended actions to be transferred between tasks. In this paper we combine one method for hierarchical reinforcement learning - the options framework - with deep Q-networks (DQNs) through the use of different "option heads" on the policy network, and a supervisory network for choosing between the different options. We show that in a domain where we have prior knowledge of the mapping between states and options, our augmented DQN achieves a policy competitive with that of a standard DQN, but with much lower sample complexity. This is achieved through a straightforward architectural adjustment to the DQN, as well as an additional supervised neural network.
AU - Arulkumaran,K
AU - Dilokthanakul,N
AU - Shanahan,M
AU - Bharath,AA
PB - IJCAI
PY - 2016///
TI - Classifying options for deep reinforcement learning
UR - http://arxiv.org/abs/1604.08153v1
UR - http://hdl.handle.net/10044/1/32327
ER -
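
The abstract above describes the architecture at a high level: a standard DQN trunk augmented with several "option heads", plus a supervisory network that selects which head to use for a given state. Below is a minimal, hypothetical PyTorch sketch of that idea; the class name, layer sizes, and greedy selection rule are assumptions for illustration only and are not taken from the paper's implementation.

# Illustrative sketch only: a DQN-style network with multiple "option heads"
# and a supervisory head for choosing between options. Layer sizes and names
# are assumptions, not the authors' code.
import torch
import torch.nn as nn


class OptionHeadDQN(nn.Module):
    def __init__(self, state_dim: int, num_actions: int, num_options: int, hidden: int = 128):
        super().__init__()
        # Shared trunk over the state representation.
        self.trunk = nn.Sequential(
            nn.Linear(state_dim, hidden),
            nn.ReLU(),
        )
        # One Q-value head per option (the "option heads" on the policy network).
        self.option_heads = nn.ModuleList(
            [nn.Linear(hidden, num_actions) for _ in range(num_options)]
        )
        # Supervisory network: predicts which option head to use for this state.
        self.supervisor = nn.Linear(hidden, num_options)

    def forward(self, state: torch.Tensor):
        features = self.trunk(state)
        # Q-values per option: shape (batch, num_options, num_actions).
        q_per_option = torch.stack([head(features) for head in self.option_heads], dim=1)
        # Logits over options: shape (batch, num_options).
        option_logits = self.supervisor(features)
        return q_per_option, option_logits

    def act(self, state: torch.Tensor) -> torch.Tensor:
        # Greedy action: pick the option suggested by the supervisor,
        # then the best action under that option's Q-head.
        q_per_option, option_logits = self.forward(state)
        option = option_logits.argmax(dim=-1)
        q_selected = q_per_option[torch.arange(state.shape[0]), option]
        return q_selected.argmax(dim=-1)

In this sketch the supervisory head would be trained with a standard supervised loss against the known state-to-option mapping mentioned in the abstract, while each option head is trained with the usual DQN temporal-difference update on the transitions assigned to its option.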

Contact us

Data Science Institute

William Penney Laboratory
Imperial College London
South Kensington Campus
London SW7 2AZ
United Kingdom
