Search results

  • Conference paper
    Ahmadzadeh SR, Kormushev P, Caldwell DG, 2013,

    Interactive Robot Learning of Visuospatial Skills

  • Journal article
    Silk D, Filippi S, Stumpf MPH, 2013,

    Optimizing threshold-schedules for sequential approximate Bayesian computation: applications to molecular systems

    , Statistical Applications in Genetics and Molecular Biology, Vol: 12, Pages: 603-618, ISSN: 2194-6302

    The likelihood–free sequential Approximate Bayesian Computation (ABC) algorithms are increasingly popular inference tools for complex biological models. Such algorithms proceed by constructing a succession of probability distributions over the parameter space conditional upon the simulated data lying in an ε–ball around the observed data, for decreasing values of the threshold ε. While in theory, the distributions (starting from a suitably defined prior) will converge towards the unknown posterior as ε tends to zero, the exact sequence of thresholds can impact upon the computational efficiency and success of a particular application. In particular, we show here that the current preferred method of choosing thresholds as a pre-determined quantile of the distances between simulated and observed data from the previous population, can lead to the inferred posterior distribution being very different to the true posterior. Threshold selection thus remains an important challenge. Here we propose that the threshold–acceptance rate curve may be used to determine threshold schedules that avoid local optima, while balancing the need to minimise the threshold with computational efficiency. Furthermore, we provide an algorithm based upon the unscented transform, that enables the threshold–acceptance rate curve to be efficiently predicted in the case of deterministic and stochastic state space models.
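The quantile-based threshold schedule the abstract critiques can be sketched in a few lines. This is a toy illustration, not the paper's method: the model (normal data with unknown mean), summary statistic, quantile level, and perturbation kernel are all assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: data are draws from Normal(theta, 1); summary = sample mean.
def simulate(theta, n=50):
    return rng.normal(theta, 1.0, size=n).mean()

observed = 0.0

# Sequential ABC with a quantile-based threshold schedule: each new
# threshold eps is a fixed quantile (here the median) of the distances
# in the current particle population.
population = rng.uniform(-5, 5, size=1000)   # draws from a uniform prior
eps = np.inf
for t in range(5):
    dists = np.array([abs(simulate(th) - observed) for th in population])
    eps = np.quantile(dists, 0.5)            # next threshold from the distances
    accepted = population[dists <= eps]
    # Resample accepted particles and perturb to form the next population.
    population = rng.choice(accepted, size=1000) + rng.normal(0, 0.1, size=1000)
```

As the thresholds shrink, the particles concentrate around the true parameter (0 here); the paper's point is that this automatic schedule can stall in local optima, motivating the threshold–acceptance rate curve as an alternative.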

  • Conference paper
    Leonetti M, Ahmadzadeh SR, Kormushev P, 2013,

    On-line Learning to Recover from Thruster Failures on Autonomous Underwater Vehicles

  • Conference paper
    Ahmadzadeh SR, Leonetti M, Kormushev P, 2013,

    Online Direct Policy Search for Thruster Failure Recovery in Autonomous Underwater Vehicles

  • Conference paper
    Jamali N, Kormushev P, Caldwell DG, 2013,

    Contact State Estimation using Machine Learning

  • Conference paper
    Kormushev P, Caldwell DG, 2013,

    Comparative Evaluation of Reinforcement Learning with Scalar Rewards and Linear Regression with Multidimensional Feedback

  • Conference paper
    Kormushev P, Caldwell DG, 2013,

    Towards Improved AUV Control Through Learning of Periodic Signals

  • Book
    Deisenroth MP, Neumann G, Peters J, 2013,

    A Survey on Policy Search for Robotics

    , Publisher: now Publishers

Policy search is a subfield in reinforcement learning which focuses on finding good parameters for a given policy parametrization. It is well suited for robotics as it can cope with high-dimensional state and action spaces, one of the main challenges in robot learning. We review recent successes of both model-free and model-based policy search in robot learning. Model-free policy search is a general approach to learn policies based on sampled trajectories. We classify model-free methods based on their policy evaluation strategy, policy update strategy, and exploration strategy and present a unified view on existing algorithms. Learning a policy is often easier than learning an accurate forward model, and, hence, model-free methods are more frequently used in practice. However, for each sampled trajectory, it is necessary to interact with the robot, which can be time consuming and challenging in practice. Model-based policy search addresses this problem by first learning a simulator of the robot’s dynamics from data. Subsequently, the simulator generates trajectories that are used for policy learning. For both model-free and model-based policy search methods, we review their respective properties and their applicability to robotic systems.
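The model-free approach the abstract describes (learning policy parameters directly from sampled trajectories) can be illustrated with a minimal hill-climbing sketch. The task, reward function, and policy class below are invented for illustration and are not drawn from the survey.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D task: a linear policy u = w * x acts on states x ~ N(0, 1);
# the (unknown to the learner) target behaviour is u = 2x.
def rollout(w, n=100):
    x = rng.normal(size=n)
    u = w * x
    return -np.mean((u - 2 * x) ** 2)    # negative cost as episodic reward

# Model-free policy search: perturb the parameters, evaluate by sampling
# trajectories, and keep improvements (a minimal instance of the
# sampled-trajectory approach the text describes).
w = 0.0
best = rollout(w)
for _ in range(200):
    cand = w + rng.normal(scale=0.5)     # exploration in parameter space
    r = rollout(cand)
    if r > best:
        w, best = cand, r
```

Each parameter evaluation costs a full rollout, which is exactly the interaction burden the survey cites as motivation for model-based methods, where a learned simulator stands in for the robot.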

  • Conference paper
    Kormushev P, Caldwell DG, 2013,

    Reinforcement Learning with Heterogeneous Policy Representations

  • Conference paper
    Kryczka P, Hashimoto K, Takanishi A, Kormushev P, Tsagarakis N, Caldwell DG et al., 2013,

    Walking Despite the Passive Compliance: Techniques for Using Conventional Pattern Generators to Control Intrinsically Compliant Humanoid Robots

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
