Search results
- Conference paper: Dallali H, Mosadeghzad M, Medrano-Cerda GA, et al., 2013, Development of a dynamic simulator for a compliant humanoid robot based on a symbolic multibody approach, Pages: 598-603
- Conference paper: Kryczka P, Shiguematsu YM, Kormushev P, et al., 2013, Towards dynamically consistent real-time gait pattern generation for full-size humanoid robots
- Journal article: Deisenroth MP, Turner RD, Huber MF, et al., 2012, Robust Filtering and Smoothing with Gaussian Processes, IEEE Transactions on Automatic Control, Vol: 57, Pages: 1865-1871, ISSN: 0018-9286. Citations: 68
- Conference paper: Kormushev P, Caldwell DG, 2012, Direct policy search reinforcement learning based on particle filtering
- Journal article: Colasanto L, Kormushev P, Tsagarakis N, et al., 2012, Optimization of a compact model for the compliant humanoid robot COMAN using reinforcement learning, International Journal of Cybernetics and Information Technologies, Vol: 12, Pages: 76-85, ISSN: 1311-9702
  Abstract: COMAN is a compliant humanoid robot. The introduction of passive compliance in some of its joints affects the dynamics of the whole system. Unlike traditional stiff robots, the joint angle deflects from the desired one whenever an external torque is applied. Following a bottom-up approach, the dynamic equations of the joints are defined first. Then a new model is proposed which combines the inverted-pendulum approach with a three-dimensional (Cartesian) compliant model at the level of the center of mass. This compact model rests on assumptions that reduce complexity but also affect precision. To address this, additional parameters are inserted into the model equation and tuned via an optimization procedure using reinforcement learning. The optimized model is experimentally validated on the COMAN robot using several ZMP-based walking gaits.
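The abstract above describes a compact model combining inverted-pendulum dynamics with a Cartesian compliant element at the center of mass. As a rough illustration only (not the authors' code; the one-dimensional reduction and every parameter name and value below are invented for this sketch), such a model can be written as standard linear-inverted-pendulum dynamics plus a spring-damper correction:

```python
# Illustrative sketch of an inverted pendulum with a compliant (spring-damper)
# centre of mass, loosely in the spirit of the compact model described in the
# abstract. All constants are made up; none come from the paper or the robot.

G = 9.81      # gravity, m/s^2
Z0 = 0.5      # nominal CoM height, m (assumed)
K = 200.0     # illustrative spring stiffness, N/m
D = 20.0      # illustrative damping, N*s/m
M = 30.0      # illustrative robot mass, kg

def com_acceleration(x, v, x_zmp):
    """CoM acceleration = classic LIPM term + compliant correction."""
    pendulum = (G / Z0) * (x - x_zmp)     # linear inverted-pendulum dynamics
    compliance = -(K * x + D * v) / M     # spring-damper acting at the CoM
    return pendulum + compliance

def simulate(x0=0.01, v0=0.0, x_zmp=0.0, dt=1e-3, steps=1000):
    """Integrate the 1-D model with explicit Euler; returns final (x, v)."""
    x, v = x0, v0
    for _ in range(steps):
        a = com_acceleration(x, v, x_zmp)
        v += a * dt
        x += v * dt
    return x, v
```

In the paper the analogous free parameters are tuned by reinforcement learning against the real robot; here they are fixed constants, so with these illustrative values the pendulum term dominates and the CoM slowly drifts away from upright.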
- Conference paper: Kormushev P, Caldwell DG, 2012, Simultaneous discovery of multiple alternative optimal policies by reinforcement learning, Pages: 202-207
- Journal article: Shen H, Yosinski J, Kormushev P, et al., 2012, Learning Fast Quadruped Robot Gaits with the RL PoWER Spline Parameterization, International Journal of Cybernetics and Information Technologies, Vol: 12
- Conference paper: Lane DM, Maurelli F, Kormushev P, et al., 2012, Persistent Autonomy: the Challenges of the PANDORA Project
- Journal article: Leonetti M, Kormushev P, Sagratella S, 2012, Combining Local and Global Direct Derivative-free Optimization for Reinforcement Learning, International Journal of Cybernetics and Information Technologies, Vol: 12
- Journal article: Carrera A, Ahmadzadeh SR, Ajoudani A, et al., 2012, Towards Autonomous Robotic Valve Turning, Cybernetics and Information Technologies, Vol: 12, Pages: 17-26
- Conference paper: Kormushev P, Calinon S, Ugurlu B, et al., 2012, Challenges for the policy representation when applying reinforcement learning in robotics, Pages: 1-8
- Conference paper: Kormushev P, Ugurlu B, Colasanto L, et al., 2012, The anatomy of a fall: Automated real-time analysis of raw force sensor data from bipedal walking robots and humans, Pages: 3706-3713
- Journal article: Calinon S, Kormushev P, Caldwell DG, 2012, Compliant skills acquisition and multi-optima policy search with EM-based reinforcement learning, Robotics and Autonomous Systems
- Conference paper: Dickens L, Molloy I, Lobo J, et al., 2012, Learning Stochastic Models of Information Flow, 28th IEEE International Conference on Data Engineering (ICDE), Publisher: IEEE Computer Society, Pages: 570-581, ISSN: 1063-6382
- Journal article: Dallali H, Kormushev P, Li Z, et al., 2012, On Global Optimization of Walking Gaits for the Compliant Humanoid Robot COMAN Using Reinforcement Learning, International Journal of Cybernetics and Information Technologies, Vol: 12
- Conference paper: Kormushev P, Ugurlu B, Calinon S, et al., 2011, Bipedal Walking Energy Minimization by Reinforcement Learning with Evolving Policy Parameterization, Pages: 318-324
- Journal article: Kormushev P, Calinon S, Caldwell DG, 2011, Imitation Learning of Positional and Force Skills Demonstrated via Kinesthetic Teaching and Haptic Input, Advanced Robotics, Vol: 25, Pages: 581-603
- Journal article: Kormushev P, Nomoto K, Dong F, et al., 2011, Time Hopping Technique for Faster Reinforcement Learning in Simulations, International Journal of Cybernetics and Information Technologies, Vol: 11, Pages: 42-59
- Conference paper: Goodman DFM, Brette R, 2010, Learning to localise sounds with spiking neural networks
  Abstract: To localise the source of a sound, we use location-specific properties of the signals received at the two ears caused by the asymmetric filtering of the original sound by our head and pinnae, the head-related transfer functions (HRTFs). These HRTFs change throughout an organism's lifetime, during development for example, and so the required neural circuitry cannot be entirely hardwired. Since HRTFs are not directly accessible from perceptual experience, they can only be inferred from filtered sounds. We present a spiking neural network model of sound localisation based on extracting location-specific synchrony patterns, and a simple supervised algorithm to learn the mapping between synchrony patterns and locations from a set of example sounds, with no previous knowledge of HRTFs. After learning, our model was able to accurately localise new sounds in both azimuth and elevation, including the difficult task of distinguishing sounds coming from the front and back.
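The model in the abstract above learns location-specific synchrony patterns from spiking input. A far simpler binaural cue in the same spirit is the interaural time difference (ITD), which can be estimated by cross-correlating the two ear signals. The sketch below is illustrative only, not the paper's spiking method; the 500 Hz tone and 10-sample delay are invented for the demo:

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Return the delay (seconds) of `right` relative to `left`,
    taken as the lag that maximises their cross-correlation."""
    corr = np.correlate(right, left, mode="full")
    lag = np.argmax(corr) - (len(left) - 1)   # centre index is zero lag
    return lag / fs

# Demo: a pure tone arriving 10 samples later at the right ear.
fs = 44100
t = np.arange(0, 0.05, 1.0 / fs)
signal = np.sin(2 * np.pi * 500 * t)
delay = 10                                    # samples (invented)
left = signal
right = np.concatenate([np.zeros(delay), signal[:-delay]])
print(estimate_itd(left, right, fs))          # 10 / fs ≈ 2.27e-4 s
```

Cross-correlation of pure tones is periodic, so this cue alone is ambiguous for high frequencies and says nothing about elevation or front/back, which is precisely why the paper learns richer HRTF-induced synchrony patterns instead.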
- Conference paper: Kormushev P, Calinon S, Caldwell DG, 2010, Robot Motor Skill Coordination with EM-based Reinforcement Learning, Pages: 3232-3237
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
Contact us
Artificial Intelligence Network
South Kensington Campus
Imperial College London
SW7 2AZ
To reach the elected speaker of the network, Dr Rossella Arcucci, please contact:
To reach the network manager, Diana O'Malley - including to join the network - please contact: