Found 13 results.

discovery of complex behaviors through contact-invariant optimization

Author(s): Igor Mordatch, Emanuel Todorov, Zoran Popovic
Venue: ACM Transactions on Graphics
Year Published: 2012
Keywords: planning, contact dynamics, trajectory optimization, locomotion, reinforcement learning
Expert Opinion: The paper demonstrates that with an accurate internal model, planning of complex behaviors including contacts and dynamic interaction with the environment is possible from scratch. I see it as an important result supporting the need for good internal representations, which in the case of real-world interactions need to be at least partially learned.

policy gradient reinforcement learning for fast quadrupedal locomotion

Author(s): Nate Kohl, Peter Stone
Venue: IEEE International Conference on Robotics and Automation (ICRA)
Year Published: 2004
Keywords: reinforcement learning, policy gradients, locomotion, legged robots
Expert Opinion: The work is practical in that it allowed the authors to improve the walking speed of Aibos, something essential to creating top-flight RoboCup players. The reason I adore this work and frequently cite it in my talks on machine learning is the fantastic way it allowed the robots to learn autonomously. In particular, for the Aibo robots to succeed in RoboCup, they need to be able to localize on the field based on their perception of provided markers. The authors enabled the robots to measure their own walking speed by leveraging this capability. By marching a team of robots back and forth across the width of the pitch, experimenting with and evaluating different gaits each time, the robots were able to find movement patterns that surpassed hand-designed ones. It's a beautiful example of exploiting measurable quantities to drive learning---a key enabling technology for robot learning.
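For readers curious about the mechanics behind this loop, the gait is a small real-valued parameter vector, and each iteration scores a batch of randomly perturbed gaits by measured walking speed and moves along the estimated gradient. A minimal sketch of a finite-difference policy-gradient step in this spirit (the `evaluate` callback is a hypothetical stand-in for the on-robot speed measurement; the exact constants in the paper differ):

```python
import math
import random

def policy_gradient_step(theta, evaluate, epsilon=0.05, n_policies=15, step_size=0.02):
    """One finite-difference policy-gradient iteration over gait parameters.

    Each trial perturbs every parameter by -epsilon, 0, or +epsilon; the
    per-parameter adjustment compares the average scores of the three groups.
    """
    perturbations = [[random.choice((-epsilon, 0.0, epsilon)) for _ in theta]
                     for _ in range(n_policies)]
    scores = [evaluate([t + d for t, d in zip(theta, pert)])
              for pert in perturbations]

    def mean(xs):
        return sum(xs) / len(xs) if xs else None

    adjustment = []
    for i in range(len(theta)):
        a_plus = mean([s for p, s in zip(perturbations, scores) if p[i] > 0])
        a_zero = mean([s for p, s in zip(perturbations, scores) if p[i] == 0])
        a_minus = mean([s for p, s in zip(perturbations, scores) if p[i] < 0])
        if a_plus is None or a_minus is None or (
                a_zero is not None and a_zero > a_plus and a_zero > a_minus):
            adjustment.append(0.0)  # leaving this parameter alone looks best
        else:
            adjustment.append(a_plus - a_minus)
    norm = math.sqrt(sum(a * a for a in adjustment)) or 1.0
    return [t + step_size * a / norm for t, a in zip(theta, adjustment)]
```

On the real robots, `evaluate` was the localization-based speed measurement described above; here any scalar score works.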

model-agnostic meta-learning for fast adaptation of deep networks

Author(s): Chelsea Finn, Pieter Abbeel, Sergey Levine
Venue: International Conference on Machine Learning
Year Published: 2017
Keywords: policy gradients, reinforcement learning, neural networks, locomotion
Expert Opinion: Most people probably wouldn't compare MAML with HER because the algorithms and the problems they address are vastly different; MAML tackles transfer learning while HER tackles sparse reward issues. But from a certain perspective, HER and MAML can make a very complementary pair of parents. HER is like an encouraging parent who, in hindsight, thinks everything the child did is a useful learning experience. MAML, on the other hand, thinks in foresight and wants the child to learn only skills for future job prospects. Does that sound like your parents?

modeling and learning walking gaits of biped robots

Author(s): Matthias Hebbel, Ralf Kosse and Walter Nistico
Venue: IEEE-RAS International Conference on Humanoid Robots
Year Published: 2006
Keywords: locomotion, legged robots, genetic algorithms, evolution
Expert Opinion: This paper describes the open-loop modelling of a robot gait which mimics the human walking style. The authors develop a parameterized model for the leg and arm motions. They then compare various machine learning methods for finding the best parameters, i.e., the ones that provide the best walk. The paper is very interesting because it is one of the pioneering works on robot gait learning, it raises many issues related to the practical application of machine learning methods on real hardware, and it gives many insights (again, mainly practical) on how to develop a robot learning framework. While the scientific contribution may be limited, the paper is of great importance for its presentation of practical issues. For this reason, I recommend it to young students interested in studying this topic for the first time.

an evolutionary approach to gait learning for four-legged robots

Author(s): Sonia Chernova, Manuela Veloso
Venue: International Conference on Intelligent Robots and Systems
Year Published: 2004
Keywords: planning, mobile robots, evolution, legged robots, genetic algorithms, locomotion
Expert Opinion: This paper presents a clear and concrete mapping of genetic algorithms to a compelling hardware domain: the Sony AIBO walking gait and the RoboCup soccer competition. The AIBO was an example of a platform where parameter tuning by hand is particularly tedious (54 parameters), and the platform was safe to leave "practicing" on its own overnight (a rarity for mobile robots)---offering an opportunity for fully autonomous, on-hardware optimization-based learning, where each generation could be evaluated (according to the fitness function) on the actual robot platform without human supervision or intervention. The learned walk that resulted outperformed all hand-tuned and learned walks that participated in the RoboCup 2003 competition, including the one that won it. ** This recommendation is getting a bit into the weeds of specific algorithms---not quite sure if the list is planning to go that deep. It's a work I would present in a class, as a great example of a CS algorithm being translated for use on real robot hardware. Again, not quite sure if that sort of categorization fits the bill.
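As a concrete illustration of the recipe (a population of gait parameter vectors, each scored on the robot, with selection, crossover, and mutation producing the next generation), here is a minimal sketch. The `fitness` callback stands in for an on-robot walk-speed measurement, and the operators are generic textbook ones rather than the paper's exact choices:

```python
import random

def evolve_gait(fitness, n_params, pop_size=20, generations=30,
                mutation_std=0.1, n_elite=2):
    """Toy generational GA over real-valued gait parameter vectors."""
    pop = [[random.uniform(-1.0, 1.0) for _ in range(n_params)]
           for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)   # one trial per individual
        parents = ranked[:pop_size // 2]                  # truncation selection
        children = []
        while len(children) < pop_size - n_elite:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_params)           # one-point crossover
            children.append([g + random.gauss(0.0, mutation_std)  # Gaussian mutation
                             for g in a[:cut] + b[cut:]])
        pop = ranked[:n_elite] + children                 # elitism keeps the best
    return max(pop, key=fitness)
```

With 54 parameters and fitness measured by timing real walks, each generation corresponds to one unsupervised overnight practice session.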

adjustable bipedal gait generation using genetic algorithm optimized fourier series formulation

Author(s): L. Yang, C. M. Chew, A. N. Poo, T. Zielinska
Venue: IEEE/RSJ International Conference on Intelligent Robots and Systems
Year Published: 2006
Keywords: locomotion, legged robots, genetic algorithms, planning
Expert Opinion: This paper presents a method for optimally generating stable bipedal walking gaits, based on a truncated Fourier series formulation with coefficients tuned by genetic algorithms. It also provides a way to adjust the stride frequency, step length, or walking pattern in real time. The proposed approach can be adapted to the robot's kinematic structure and to different terrains. As with my previous suggestion, the paper, albeit simple, is useful for bridging the gap between robot kinematics (model-based design) and machine learning (model-free design). This is why I recommend it.
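The core representation is easy to state: each joint trajectory is a truncated Fourier series whose coefficients are the genes the genetic algorithm tunes, and whose fundamental frequency sets the stride frequency. A generic sketch of such a trajectory (the paper's exact truncation and coefficient layout differ):

```python
import math

def joint_angle(t, a, b, omega, offset=0.0):
    """Truncated Fourier series trajectory for one joint:

        q(t) = offset + sum_k [ a_k sin(k*omega*t) + b_k cos(k*omega*t) ]

    omega is the fundamental (stride) frequency; a and b hold the tuned
    coefficients, one pair per retained harmonic."""
    return offset + sum(
        ak * math.sin(k * omega * t) + bk * math.cos(k * omega * t)
        for k, (ak, bk) in enumerate(zip(a, b), start=1))
```

Because the series is periodic with period 2*pi/omega, changing omega rescales the stride frequency in real time without re-optimizing the coefficients, which is the kind of on-line adjustability the paper highlights.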

robots that can adapt like animals

Author(s): Antoine Cully, Jeff Clune, Danesh Tarapore, Jean-Baptiste Mouret
Venue: Nature
Year Published: 2015
Keywords: gaussians, probabilistic models, locomotion
Expert Opinion: I recommend this paper because it shows how you can leverage models in simulation to learn how to recover from damage, without necessarily re-learning the model of the damaged robot. Also, they learn in very few trials on the real robot, which is fundamental when working with real robots, where experiments are expensive.

abandoning objectives: evolution through the search for novelty alone

Author(s): Joel Lehman and Kenneth O. Stanley
Venue: Evolutionary Computation
Year Published: 2011
Keywords: evolution, neural networks, locomotion
Expert Opinion: This work nicely demonstrates that optimizing a reward function is not necessarily the best way to find a solution in a complex search space (especially when the search space is deceptive). It proposes to replace the reward function with a behavioral novelty score, which echoes much of the work in developmental robotics. The experiments described in this paper led to an inspirational book (Why Greatness Cannot Be Planned: The Myth of the Objective, Springer, 2015).
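The key ingredient is the novelty score itself: instead of a task reward, each individual is scored by how far its behavior descriptor (e.g., the robot's final position) lies from the behaviors already encountered. A minimal sketch of the standard k-nearest-neighbor formulation (the behavior descriptor shown here is illustrative; the paper uses domain-specific descriptors):

```python
import math

def novelty(behavior, archive, k=5):
    """Mean distance from a behavior descriptor to its k nearest
    neighbors among the previously seen behaviors in the archive."""
    if not archive:
        return float("inf")  # the first behavior is maximally novel
    dists = sorted(math.dist(behavior, other) for other in archive)
    return sum(dists[:k]) / len(dists[:k])
```

Search then favors individuals with high novelty, so the population keeps spreading into unvisited regions of behavior space rather than climbing a (possibly deceptive) reward gradient.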

end-to-end training of deep visuomotor policies

Author(s): Sergey Levine, Chelsea Finn, Trevor Darrell, Pieter Abbeel
Venue: Journal of Machine Learning Research
Year Published: 2016
Keywords: manipulation, probabilistic models, planning, locomotion, learning from demonstration, reinforcement learning, neural networks, visual perception
Expert Opinion: It introduced end-to-end training with impressive results going from pixels to torques on several interesting tasks.

automatic gait optimization with gaussian process regression

Author(s): Daniel Lizotte, Tao Wang, Michael Bowling, Dale Schuurmans
Venue: International Joint Conference on Artificial Intelligence
Year Published: 2007
Keywords: locomotion, legged robots, gaussians
Expert Opinion: This paper is from the line of papers on Aibo gait optimization started by Kohl and Stone in 2004. It introduced the idea of using Gaussian process regression for learning so as to avoid local optima, make full use of all historical data, and explicitly model noise in gait evaluation. The authors achieved impressive results for optimizing both speed and smoothness with dramatically fewer gait evaluations than prior approaches.
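The loop behind this data efficiency is: fit a Gaussian process to the (gait parameters, measured score) pairs gathered so far, then evaluate next wherever the posterior looks most promising. A minimal 1-D sketch using NumPy, with an upper-confidence-bound acquisition standing in for the paper's exact criterion (kernel, length scale, and beta below are illustrative choices, not the paper's):

```python
import numpy as np

def gp_posterior(X, y, Xs, length=0.2, noise=1e-4):
    """GP regression with an RBF kernel: posterior mean and variance at Xs."""
    def k(A, B):
        d = A[:, None] - B[None, :]
        return np.exp(-0.5 * (d / length) ** 2)
    K = k(X, X) + noise * np.eye(len(X))   # jitter keeps K positive definite
    Ks = k(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(k(Xs, Xs)) - np.sum(v ** 2, axis=0)
    return mu, var

def propose_next(X, y, candidates, beta=2.0):
    """Pick the next gait to try by upper confidence bound: mean + beta * std."""
    mu, var = gp_posterior(X, y, candidates)
    return candidates[np.argmax(mu + beta * np.sqrt(np.maximum(var, 0.0)))]
```

Because the GP models evaluation noise explicitly and reuses every past trial, each new walk is chosen where it is most informative, which is how the approach gets away with so few on-robot evaluations.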

learning to control a low-cost manipulator using data-efficient reinforcement learning

Author(s): Marc Peter Deisenroth, Carl Edward Rasmussen, Dieter Fox
Venue: Robotics: Science and Systems VII
Year Published: 2011
Keywords: manipulation, reinforcement learning, probabilistic models, locomotion, planning, gaussians
Expert Opinion: While this was neither the first nor the last publication from Deisenroth and colleagues on PILCO (probabilistic inference for learning control), this is the paper that I remember my colleagues talking about that led me to learn about the approach. This paper was prescient, bringing our attention to, and attempting to address, a number of problems in robot learning that remain important today: data-efficient learning and transfer between related tasks. This paper has had a lasting impact on the field, forming the basis of other impressive works in areas of robotics ranging from manipulation to learning underwater swimming gaits.

resilient machines through continuous self-modeling

Author(s): Josh Bongard, Victor Zykov, and Hod Lipson
Venue: Science
Year Published: 2006
Keywords: legged robots, locomotion
Expert Opinion: This article shows the potential of active model learning for adaptation, and applies it to damage recovery for a legged robot. While the technique (active model learning) was not new to robotics, the paper pushed further by identifying more than just parameters (e.g., the presence or absence of a leg). The application to damage recovery inspired a lot of my own work.

learning agile and dynamic motor skills for legged robots

Author(s): Jemin Hwangbo, Joonho Lee, Alexey Dosovitskiy, Dario Bellicoso, Vassilios Tsounis, Vladlen Koltun, and Marco Hutter
Venue: Science Robotics
Year Published: 2019
Keywords: policy gradients, neural networks, legged robots, locomotion, dynamical systems
Expert Opinion: Very nice work that combines supervised learning of internal models (deep networks) of the series-elastic actuator dynamics with reinforcement learning (specifically, Trust Region Policy Optimization) for learning locomotion policies. They obtained excellent locomotion gaits and were able to learn complex standing-up sequences.