Search results

  • CONFERENCE PAPER
    Ceran ET, Gündüz D, György A, 2018, Average age of information with hybrid ARQ under a resource constraint, IEEE Wireless Communications and Networking Conference, WCNC, Vol: 2018-April, Pages: 1-6, ISSN: 1525-3511

    © 2018 IEEE. Scheduling the transmission of status updates over an error-prone communication channel is studied in order to minimize the long-term average age of information (AoI) at the destination under a constraint on the average number of transmissions at the source node. After each transmission, the source receives instantaneous ACK/NACK feedback and decides on the next update without prior knowledge of the success of future transmissions. First, the optimal scheduling policy is studied under different feedback mechanisms when the channel statistics are known; in particular, the standard automatic repeat request (ARQ) and hybrid ARQ (HARQ) protocols are considered. Then, for an unknown environment, an average-cost reinforcement learning (RL) algorithm is proposed that learns the system parameters and the transmission policy in real time. The effectiveness of the proposed methods is verified through numerical simulations.
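
    The scheduling setup described above is easy to experiment with in simulation. The sketch below is a minimal illustration, not the paper's method: it assumes a fixed age-threshold policy and standard ARQ (every transmission carries a fresh update) and simply measures the resulting average AoI and transmission rate; the function and parameter names are invented for this example.

    import random

    def simulate_aoi_arq(horizon=100_000, p_err=0.3, age_threshold=2, seed=0):
        """Monte-Carlo sketch of the long-term average age of information
        (AoI) under standard ARQ with instantaneous ACK/NACK feedback.

        The source transmits a fresh update whenever the destination's age
        exceeds `age_threshold`; each transmission fails independently with
        probability `p_err`. On success the age resets; otherwise it grows.
        """
        rng = random.Random(seed)
        age, age_sum, tx_count = 1, 0, 0
        for _ in range(horizon):
            if age > age_threshold:          # threshold scheduling policy
                tx_count += 1
                if rng.random() > p_err:     # ACK received: update delivered
                    age = 0                  # reset; becomes 1 after the tick
            age += 1                         # one time slot elapses
            age_sum += age
        return age_sum / horizon, tx_count / horizon

    avg_age, tx_rate = simulate_aoi_arq()
    print(f"average AoI ~ {avg_age:.2f}, transmission rate ~ {tx_rate:.2f}")

    Sweeping `age_threshold` trades average AoI against the transmission rate, which plays the role of the paper's resource constraint; under HARQ the failure probability would additionally decrease with each retransmission.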

  • JOURNAL ARTICLE
    Chamberlain B, Levy-Kramer J, Humby C, Deisenroth MP et al., 2018, Real-time community detection in full social networks on a laptop, PLoS ONE, Vol: 13, ISSN: 1932-6203

    For a broad range of research and practical applications, it is important to understand the allegiances, communities and structure of key players in society. One promising direction towards extracting this information is to exploit the rich relational data in digital social networks (the social graph). As global social networks (e.g., Facebook and Twitter) are very large, most approaches make use of distributed computing systems for this purpose. Distributing graph processing requires solving many difficult engineering problems, which has led some researchers to look at single-machine solutions that are faster and easier to maintain. In this article, we present an approach for analyzing full social networks on a standard laptop, allowing for interactive exploration of the communities in the locality of a set of user-specified query vertices. The key idea is that the aggregate actions of large numbers of users can be compressed into a data structure that encapsulates the edge weights between vertices in a derived graph. Local communities can be constructed by selecting vertices that are connected to the query vertices with high edge weights in the derived graph. This compression is robust to noise and allows for interactive queries of local communities in real-time, which we define to be less than the average human reaction time of 0.25s. We achieve single-machine real-time performance by compressing the neighborhood of each vertex using minhash signatures and facilitate rapid queries through Locality Sensitive Hashing. These techniques reduce query times from hours using industrial desktop machines operating on the full graph to milliseconds on standard laptops. Our method allows exploration of strongly associated regions (i.e., communities) of large graphs in real-time on a laptop. It has been deployed in software that is actively used by social network analysts and offers another channel for media owners to monetize their data, helping them to continue to provide…
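
    The core compression idea in this abstract, minhash signatures whose collision rate estimates neighbourhood overlap, fits in a few lines. This is an illustrative sketch under assumed names, not the deployed system (which adds edge weighting, Locality Sensitive Hashing for fast lookup, and community construction on top):

    import random

    PRIME = (1 << 31) - 1   # modulus for the universal hash family

    def minhash_signature(neighbours, num_hashes=128, seed=0):
        """Compress a vertex's neighbour set into a short signature.
        Two signatures agree in roughly a fraction of positions equal
        to the Jaccard similarity of the underlying sets."""
        rng = random.Random(seed)
        params = [(rng.randrange(1, PRIME), rng.randrange(PRIME))
                  for _ in range(num_hashes)]
        return [min((a * hash(v) + b) % PRIME for v in neighbours)
                for a, b in params]

    def estimated_weight(sig_u, sig_v):
        """Edge weight in the derived graph: fraction of matching
        signature positions (an unbiased Jaccard estimate)."""
        return sum(x == y for x, y in zip(sig_u, sig_v)) / len(sig_u)

    # Toy usage: two users whose follower sets overlap in 3 of 5 members.
    sig_a = minhash_signature({"u1", "u2", "u3", "u4"})
    sig_b = minhash_signature({"u2", "u3", "u4", "u5"})
    print(estimated_weight(sig_a, sig_b))   # close to 3/5 = 0.6

    Because each vertex is reduced to a fixed-size signature, a query touches only a small, constant amount of data per vertex, which is what makes laptop-scale, sub-0.25 s responses plausible.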

  • CONFERENCE PAPER
    Kamthe S, Deisenroth MP, 2018, Data-Efficient Reinforcement Learning with Probabilistic Model Predictive Control, Artificial Intelligence and Statistics, Publisher: PMLR, Pages: 1701-1710

  • CONFERENCE PAPER
    Pardo F, Tavakoli A, Levdik V, Kormushev P et al., 2018, Time limits in reinforcement learning, International Conference on Machine Learning, Pages: 4042-4051

    In reinforcement learning, it is common to let an agent interact for a fixed amount of time with its environment before resetting it and repeating the process in a series of episodes. The task that the agent has to learn can either be to maximize its performance over (i) that fixed period, or (ii) an indefinite period where time limits are only used during training to diversify experience. In this paper, we provide a formal account for how time limits could effectively be handled in each of the two cases and explain why not doing so can cause state-aliasing and invalidation of experience replay, leading to suboptimal policies and training instability. In case (i), we argue that the terminations due to time limits are in fact part of the environment, and thus a notion of the remaining time should be included as part of the agent’s input to avoid violation of the Markov property. In case (ii), the time limits are not part of the environment and are only used to facilitate learning. We argue that this insight should be incorporated by bootstrapping from the value of the state at the end of each partial episode. For both cases, we illustrate empirically the significance of our considerations in improving the performance and stability of existing reinforcement learning algorithms, showing state-of-the-art results on several control tasks.
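
    Both remedies from the abstract reduce to a few lines. The following is a schematic sketch with invented names, not the authors' code: case (i) appends the normalised remaining time to the observation, and case (ii) keeps bootstrapping through timeouts instead of treating them as terminal.

    def with_remaining_time(obs, t, time_limit):
        """Case (i): termination is part of the environment, so expose the
        normalised remaining time to keep the state Markov."""
        return (*obs, (time_limit - t) / time_limit)

    def td_target(reward, next_value, done, timed_out, gamma=0.99):
        """Case (ii): one-step TD target with partial-episode bootstrapping.
        A timeout is not a true terminal state, so we still bootstrap from
        the next state's value; zeroing it would alias time-limited states
        with genuine failures and corrupt experience replay."""
        if done and not timed_out:
            return reward                     # true terminal: no future value
        return reward + gamma * next_value    # continue bootstrapping

    print(td_target(1.0, 5.0, done=True, timed_out=True))    # 5.95
    print(td_target(1.0, 5.0, done=True, timed_out=False))   # 1.0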

  • CONFERENCE PAPER
    Sæmundsson S, Hofmann K, Deisenroth MP, 2018, Meta Reinforcement Learning with Latent Variable Gaussian Processes, Uncertainty in Artificial Intelligence

  • CONFERENCE PAPER
    Saputra RP, Kormushev P, 2018, ResQbot: A Mobile Rescue Robot for Casualty Extraction, Pages: 239-240

  • CONFERENCE PAPER
    Saputra RP, Kormushev P, 2018, Casualty Detection from 3D Point Cloud Data for Autonomous Ground Mobile Rescue Robots, SSRR 2018

  • CONFERENCE PAPER
    Saputra RP, Kormushev P, 2018, Casualty Detection for Mobile Rescue Robots via Ground-Projected Point Clouds

  • CONFERENCE PAPER
    Saputra RP, Kormushev P, 2018, ResQbot: A Mobile Rescue Robot with Immersive Teleperception for Casualty Extraction

  • CONFERENCE PAPER
    Tavakoli A, Pardo F, Kormushev P, 2018, Action Branching Architectures for Deep Reinforcement Learning

    Discrete-action algorithms have been central to numerous recent successes of deep reinforcement learning. However, applying these algorithms to high-dimensional action tasks requires tackling the combinatorial increase of the number of possible actions with the number of action dimensions. This problem is further exacerbated for continuous-action tasks that require fine control of actions via discretization. In this paper, we propose a novel neural architecture featuring a shared decision module followed by several network branches, one for each action dimension. This approach achieves a linear increase of the number of network outputs with the number of degrees of freedom by allowing a level of independence for each individual action dimension. To illustrate the approach, we present a novel agent, called Branching Dueling Q-Network (BDQ), as a branching variant of the Dueling Double Deep Q-Network (Dueling DDQN). We evaluate the performance of our agent on a set of challenging continuous control tasks. The empirical results show that the proposed agent scales gracefully to environments with increasing action dimensionality and indicate the significance of the shared decision module in coordination of the distributed action branches. Furthermore, we show that the proposed agent performs competitively against a state-of-the-art continuous control algorithm, Deep Deterministic Policy Gradient (DDPG).
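
    The linear-versus-combinatorial output scaling described above is the easiest part to illustrate. The class below is a toy numpy forward pass under assumed names, not the paper's BDQ agent (which builds on Dueling DDQN with learned features and full training machinery):

    import numpy as np

    class BranchingQHead:
        """Shared features feed one small linear branch per action
        dimension, so the head has dims * bins outputs (linear) instead
        of bins ** dims (combinatorial) for a flat discrete head."""

        def __init__(self, feat_dim, action_dims, bins_per_dim, seed=0):
            rng = np.random.default_rng(seed)
            # One weight matrix per action dimension (branch).
            self.branches = [rng.standard_normal((feat_dim, bins_per_dim))
                             for _ in range(action_dims)]

        def act(self, shared_features):
            # Greedy sub-action chosen independently within each branch.
            return [int(np.argmax(shared_features @ w)) for w in self.branches]

    head = BranchingQHead(feat_dim=32, action_dims=6, bins_per_dim=11)
    print(head.act(np.ones(32)))   # 6 sub-actions from 6 * 11 = 66 outputs,
                                   # versus 11 ** 6 = 1,771,561 for a flat head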

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
