Download a PDF with the full list of our publications: Robot-Intelligence-Lab-Publications-2021.pdf

A comprehensive list can also be found on Google Scholar, or by searching for publications by author Kormushev, Petar.

  • Conference paper
    Kormushev P, Nenchev DN, Calinon S, Caldwell DG et al., 2011, Upper-body Kinesthetic Teaching of a Free-standing Humanoid Robot, Pages: 3970-3975
  • Journal article
    Kormushev P, Nomoto K, Dong F, Hirota K et al., 2011, Time Hopping Technique for Faster Reinforcement Learning in Simulations, International Journal of Cybernetics and Information Technologies, Vol: 11, Pages: 42-59
  • Conference paper
    Kormushev P, Calinon S, Saegusa R, Metta G et al., 2010, Learning the skill of archery by a humanoid robot iCub, Pages: 417-423
  • Conference paper
    Kormushev P, Calinon S, Caldwell DG, 2010, Approaches for Learning Human-like Motor Skills which Require Variable Stiffness During Execution
  • Conference paper
    Kormushev P, Calinon S, Caldwell DG, 2010, Robot Motor Skill Coordination with EM-based Reinforcement Learning, Pages: 3232-3237
  • Conference paper
    Sato F, Nishii T, Takahashi J, Yoshida Y, Mitsuhashi M, Kormushev P, Kanamiya Y et al., 2010, Whiteboard Cleaning Task Realization with HOAP-2, Pages: 426-429
  • Conference paper
    Kormushev P, Dong F, Hirota K, 2009, Probability redistribution using time hopping for reinforcement learning
  • Journal article
    Kormushev P, Nomoto K, Dong F, Hirota K et al., 2009, Eligibility Propagation to Speed up Time Hopping for Reinforcement Learning, Journal of Advanced Computational Intelligence and Intelligent Informatics, Vol: 13, No. 6
  • Journal article
    Kormushev P, Nomoto K, Dong F, Hirota K et al., 2008, Time manipulation technique for speeding up reinforcement learning in simulations, Cybernetics and Information Technologies, Vol: 8, Pages: 12-24, ISSN: 1311-9702

    A technique for speeding up reinforcement learning algorithms by using time manipulation is proposed. It is applicable to failure-avoidance control problems running in a computer simulation. Turning the time of the simulation backwards on failure events is shown to speed up the learning by 260% and improve the state space exploration by 12% on the cart-pole balancing task, compared to the conventional Q-learning and Actor-Critic algorithms. (A minimal illustrative sketch of this time-rewind idea appears after the publications list.)

  • Conference paper
    Yamazaki Y, Dong F, Masuda Y, Uehara Y, Kormushev P, Vu HA, Le PQ, Hirota K et al., 2007, Fuzzy inference based mentality estimation for eye robot agent
  • Conference paper
    Yamazaki Y, Dong F, Masuda Y, Uehara Y, Kormushev P, Vu HA, Le PQ, Hirota K et al., 2007, Intent expression using eye robot for mascot robot system
  • Journal article
    Agre G, Kormushev P, Dilov I, 2006, INFRAWEBS Axiom Editor - A graphical ontology-driven tool for creating complex logical expressions, International Journal of Information Theories and Applications, Vol: 13, Pages: 169-178
  • Conference paper
    Agre G, Kormushev P, Dilov I, 2005, INFRAWEBS Capability Editor - A graphical ontology-driven tool for creating capabilities of Semantic Web Services, Pages: 228-228
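
The abstract of the 2008 time-manipulation paper above describes the core mechanism: on a failure event the simulation is turned backwards in time rather than restarted, which the authors report speeds up learning and improves exploration on the cart-pole balancing task. The sketch below is a minimal illustration of that idea, not the authors' implementation; the simulator interface (reset / get_state / set_state / step), the rewind_depth parameter, and the use of a discretised, hashable observation are all assumptions made for this example.

import random
from collections import defaultdict

def q_learning_with_rewind(sim, episodes=500, steps_per_episode=1000,
                           rewind_depth=10, alpha=0.1, gamma=0.99,
                           epsilon=0.1, actions=(0, 1)):
    """Tabular Q-learning in which a failure rewinds the simulation a few
    steps instead of terminating the episode.

    `sim` is a hypothetical simulator assumed to expose:
        reset() -> obs            (obs must be discretised / hashable)
        get_state() -> snapshot   (full simulator state)
        set_state(snapshot)       (turn simulation time back to a snapshot)
        step(action) -> (obs, reward, failed)
    """
    Q = defaultdict(float)
    for _ in range(episodes):
        obs = sim.reset()
        history = []  # (snapshot, observation) pairs for recent time steps
        for _ in range(steps_per_episode):
            history.append((sim.get_state(), obs))
            # Epsilon-greedy action selection over the tabular Q-values.
            if random.random() < epsilon:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda x: Q[(obs, x)])
            nxt, r, failed = sim.step(a)
            # Standard Q-learning update.
            best_next = max(Q[(nxt, x)] for x in actions)
            Q[(obs, a)] += alpha * (r + gamma * best_next - Q[(obs, a)])
            if failed:
                if len(history) <= rewind_depth:
                    break  # not enough history to rewind; end the episode
                # Turn simulation time backwards on the failure event
                # instead of restarting the whole episode.
                snapshot, obs = history[-rewind_depth]
                sim.set_state(snapshot)
                del history[-rewind_depth:]
                continue
            obs = nxt
    return Q

Rewinding only a few steps keeps the agent practising near the failure boundary rather than replaying the easy early part of each episode, which is broadly the intuition behind the exploration improvement reported in the abstract.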

