Search results

  • Conference paper
  Carrera A, Palomeras N, Hurtos N, Kormushev P, Carreras M et al., 2014,

    Learning by demonstration applied to underwater intervention

  • Book chapter
  Maimari N, Broda K, Kakas A, Krams R, Russo A et al., 2014,

    Symbolic Representation and Inference of Regulatory Network Structures

    , Logical Modeling of Biological Systems, Publisher: John Wiley & Sons, Inc., Pages: 1-48, ISBN: 9781119005223
  • Conference paper
  Turliuc C-R, Maimari N, Russo A, Broda K et al., 2013,

    On Minimality and Integrity Constraints in Probabilistic Abduction

    , LPAR: Logic for Programming, Artificial Intelligence and Reasoning, Publisher: Springer Verlag
  • Journal article
    Goodman DF, Benichoux V, Brette R, 2013,

    Decoding neural responses to temporal cues for sound localization

    , eLife, Vol: 2, ISSN: 2050-084X

    The activity of sensory neural populations carries information about the environment. This information may be extracted from neural activity using different strategies. In the auditory brainstem, a recent theory proposes that sound location in the horizontal plane is decoded from the relative summed activity of two populations in each hemisphere, whereas earlier theories hypothesized that the location was decoded from the identity of the most active cells. We tested the performance of various decoders of neural responses in increasingly complex acoustical situations, including spectrum variations, noise, and sound diffraction. We demonstrate that there is insufficient information in the pooled activity of each hemisphere to estimate sound direction in a reliable way consistent with behavior, whereas robust estimates can be obtained from neural activity by taking into account the heterogeneous tuning of cells. These estimates can still be obtained when only contralateral neural responses are used, consistent with unilateral lesion studies. DOI: http://dx.doi.org/10.7554/eLife.01312.001.

  • Conference paper
    Ahmadzadeh SR, Kormushev P, Caldwell DG, 2013,

    Autonomous robotic valve turning: A hierarchical learning approach

    , 2013 IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE, Pages: 4629-4634, ISSN: 1050-4729

    Autonomous valve turning is an extremely challenging task for an Autonomous Underwater Vehicle (AUV). To resolve this challenge, this paper proposes a set of different computational techniques integrated in a three-layer hierarchical scheme. Each layer realizes specific subtasks to improve the persistent autonomy of the system. In the first layer, the robot acquires the motor skills of approaching and grasping the valve by kinesthetic teaching. A Reactive Fuzzy Decision Maker (RFDM), devised in the second layer, reacts to the relative movement between the valve and the AUV and alters the robot's movement accordingly. An apprenticeship learning method, implemented in the third layer, tunes the RFDM based on expert knowledge. Although the long-term goal is to perform the valve turning task on a real AUV, as a first step the proposed approach is tested in a laboratory environment. © 2013 IEEE.

  • Conference paper
    Ahmadzadeh SR, Kormushev P, Caldwell DG, 2013,

    Interactive Robot Learning of Visuospatial Skills

  • Conference paper
    Karras GC, Bechlioulis CP, Leonetti M, Palomeras N, Kormushev P, Kyriakopoulos KJ, Caldwell DG et al., 2013,

    On-Line Identification of Autonomous Underwater Vehicles through Global Derivative-Free Optimization

  • Conference paper
    Kormushev P, Caldwell DG, 2013,

    Improving the Energy Efficiency of Autonomous Underwater Vehicles by Learning to Model Disturbances

  • Conference paper
    Ahmadzadeh SR, Kormushev P, Caldwell DG, 2013,

    Visuospatial Skill Learning for Object Reconfiguration Tasks

  • Journal article
    Koos S, Cully A, Mouret J-B, 2013,

    Fast damage recovery in robotics with the T-resilience algorithm

    , The International Journal of Robotics Research, Vol: 32, Pages: 1700-1723, ISSN: 0278-3649

    Damage recovery is critical for autonomous robots that need to operate for a long time without assistance. Most current methods are complex and costly because they require anticipating potential damage in order to have a contingency plan ready. As an alternative, we introduce the T-resilience algorithm, a new algorithm that allows robots to quickly and autonomously discover compensatory behavior in unanticipated situations. This algorithm equips the robot with a self-model and discovers new behavior by learning to avoid those that perform differently in the self-model and in reality. Our algorithm thus does not identify the damaged parts but it implicitly searches for efficient behavior that does not use them. We evaluate the T-resilience algorithm on a hexapod robot that needs to adapt to leg removal, broken legs and motor failures; we compare it to stochastic local search, policy gradient and the self-modeling algorithm proposed by Bongard et al. The behavior of the robot is assessed on-board thanks to an RGB-D sensor and a SLAM algorithm. Using only 25 tests on the robot and an overall running time of 20 min, T-resilience consistently leads to substantially better results than the other approaches.

  • Conference paper
    Leonetti M, Ahmadzadeh SR, Kormushev P, 2013,

    On-line Learning to Recover from Thruster Failures on Autonomous Underwater Vehicles

  • Conference paper
    Kormushev P, Caldwell DG, 2013,

    Comparative Evaluation of Reinforcement Learning with Scalar Rewards and Linear Regression with Multidimensional Feedback

  • Conference paper
    Ahmadzadeh SR, Leonetti M, Kormushev P, 2013,

    Online Direct Policy Search for Thruster Failure Recovery in Autonomous Underwater Vehicles

  • Conference paper
    Kormushev P, Caldwell DG, 2013,

    Towards Improved AUV Control Through Learning of Periodic Signals

  • Conference paper
    Jamali N, Kormushev P, Caldwell DG, 2013,

    Contact State Estimation using Machine Learning

  • Book
    Deisenroth MP, Neumann G, Peters J, 2013,

    A Survey on Policy Search for Robotics

    , Publisher: Now Publishers

    Policy search is a subfield in reinforcement learning which focuses on finding good parameters for a given policy parametrization. It is well suited for robotics as it can cope with high-dimensional state and action spaces, one of the main challenges in robot learning. We review recent successes of both model-free and model-based policy search in robot learning. Model-free policy search is a general approach to learn policies based on sampled trajectories. We classify model-free methods based on their policy evaluation strategy, policy update strategy, and exploration strategy and present a unified view on existing algorithms. Learning a policy is often easier than learning an accurate forward model, and, hence, model-free methods are more frequently used in practice. However, for each sampled trajectory, it is necessary to interact with the robot, which can be time consuming and challenging in practice. Model-based policy search addresses this problem by first learning a simulator of the robot's dynamics from data. Subsequently, the simulator generates trajectories that are used for policy learning. For both model-free and model-based policy search methods, we review their respective properties and their applicability to robotic systems.

  • Conference paper
    Kormushev P, Caldwell DG, 2013,

    Reinforcement Learning with Heterogeneous Policy Representations

  • Conference paper
    Cully AHR, Mouret J-B, 2013,

    Behavioral repertoire learning in robotics

    , Proceedings of the 15th annual conference on Genetic and evolutionary computation, Publisher: ACM, Pages: 175-182

    Learning in robotics typically involves choosing a simple goal (e.g. walking) and assessing the performance of each controller with regard to this task (e.g. walking speed). However, learning advanced, input-driven controllers (e.g. walking in each direction) requires testing each controller on a large sample of the possible input signals. This costly process makes it difficult to learn useful low-level controllers in robotics. Here we introduce BR-Evolution, a new evolutionary learning technique that generates a behavioral repertoire by taking advantage of the candidate solutions that are usually discarded. Instead of evolving a single, general controller, BR-Evolution evolves a collection of simple controllers, one for each variant of the target behavior; to distinguish similar controllers, it uses a performance objective that allows it to produce a collection of diverse but high-performing behaviors. We evaluated this new technique by evolving gait controllers for a simulated hexapod robot. Results show that a single run of the EA quickly finds a collection of controllers that allows the robot to reach each point of the reachable space. Overall, BR-Evolution opens a new kind of learning algorithm that simultaneously optimizes all the achievable behaviors of a robot.

  • Conference paper
    Kryczka P, Hashimoto K, Takanishi A, Kormushev P, Tsagarakis N, Caldwell DG et al., 2013,

    Walking Despite the Passive Compliance: Techniques for Using Conventional Pattern Generators to Control Intrinsically Compliant Humanoid Robots

  • Conference paper
    Carrera A, Carreras M, Kormushev P, Palomeras N, Nagappa S et al., 2013,

    Towards valve turning with an AUV using Learning by Demonstration

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
