pilco: a model-based and data-efficient approach to policy search

Author(s): Marc Peter Deisenroth, Carl Edward Rasmussen
Venue: International Conference on Machine Learning (ICML)
Year Published: 2011
Keywords: state estimation, reinforcement learning, probabilistic models, gaussians, dynamical systems, visual perception, policy gradients
Expert Opinion: In principle, model-based RL offers many advantages for robot learning, such as efficient use of data and the ability to predict in advance how a trajectory will roll out. In practice, however, getting model-based RL to work has proved very difficult. In this work, the authors tackle a key difficulty: when optimizing a policy for a dynamics model that was learned from data, model errors get exploited by the optimization algorithm. A very elegant solution is proposed: uncertainty estimation should be incorporated into the decision-making process, thereby discouraging the optimization from visiting states where model uncertainty is high and the predictions are likely to be wrong. This intuitive idea is implemented using Gaussian processes, which offer a principled approach to modeling uncertainty in continuous dynamical systems. The resulting algorithm - PILCO - is demonstrated to be very efficient in sample complexity, improving upon the state of the art by orders of magnitude. This paper introduced several key ideas that have since been adopted in many subsequent works on robot learning and model-based RL.
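
For a concrete feel for the core idea, here is a minimal sketch under toy assumptions: a 1-D system, a plain RBF Gaussian process, and a variance penalty standing in for the paper's full moment-matching rollouts. All names and constants are illustrative, not PILCO's actual implementation.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    d = A[:, None] - B[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_predict(X, y, x_star, noise=1e-2):
    # GP posterior mean and variance at x_star, from 1-D training pairs (X, y).
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    k_star = rbf_kernel(X, np.atleast_1d(x_star))                  # shape (n, 1)
    mean = k_star.T @ np.linalg.solve(K, y)
    var = (rbf_kernel(np.atleast_1d(x_star), np.atleast_1d(x_star))
           - k_star.T @ np.linalg.solve(K, k_star))
    return mean.item(), max(var.item(), 0.0)

def rollout_cost(X, y, x0, policy, horizon=10, target=0.0, beta=1.0):
    # Simulate the policy through the *model*; the beta * var term penalizes
    # visiting states where the model is uncertain, which is the PILCO idea.
    x, cost = x0, 0.0
    for _ in range(horizon):
        mean, var = gp_predict(X, y, x + policy(x))  # toy input: state + action
        cost += (mean - target) ** 2 + beta * var
        x = mean
    return cost

X = np.linspace(-1.0, 1.0, 20); y = np.sin(X)        # toy transition dataset
print(rollout_cost(X, y, x0=0.5, policy=lambda x: -0.1 * x))
```

A policy optimizer minimizing this cost is steered toward regions the model has actually seen data for, which is exactly the mechanism that stops model errors from being exploited.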

learning and generalization of motor skills by learning from demonstration

Author(s): Peter Pastor, Heiko Hoffmann, Tamim Asfour, and Stefan Schaal
Venue: IEEE International Conference on Robotics and Automation (ICRA)
Year Published: 2009
Keywords: planning, learning from demonstration
Expert Opinion: DMPs (Dynamic Movement Primitives) are a good representation for learning robot movements from demonstration, as well as for doing reinforcement learning based on demonstrations. This paper explains a variant of the original DMP formulation that makes the primitives stable when generalizing movements to accommodate new goals or obstacles in the robot's path. It then shows how the new DMPs can be used for one-shot learning of tasks such as pick-and-place operations or water serving. More robust than a bare trajectory, and less complex than learning with many trials, this is a nice tool to have in your robot learning toolkit.
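
As background, here is a minimal sketch of the common DMP spring-damper form with a phase-gated, goal-scaled forcing term. This is the generic formulation, not the paper's exact reformulation, and the constants are assumed for illustration.

```python
import numpy as np

def dmp_rollout(x0, g, forcing, tau=1.0, K=100.0, D=20.0, dt=0.01, T=1.0):
    # Integrate tau*v' = K*(g - x) - D*v + f(s),  tau*x' = v, with a canonical
    # phase s that decays from 1 to 0 and gates the learned forcing term.
    x, v, s = x0, 0.0, 1.0
    alpha_s = 4.0                      # canonical decay rate (assumed value)
    traj = []
    for _ in range(int(T / dt)):
        f = forcing(s) * s * (g - x0)  # phase-gated, goal-scaled forcing
        v += dt * (K * (g - x) - D * v + f) / tau
        x += dt * v / tau
        s += dt * (-alpha_s * s) / tau
        traj.append(x)
    return np.array(traj)

# The same primitive re-aims at a new goal just by changing g:
path_a = dmp_rollout(x0=0.0, g=1.0, forcing=lambda s: 0.0)
path_b = dmp_rollout(x0=0.0, g=2.0, forcing=lambda s: 0.0)
```

The attractor dynamics guarantee convergence to the goal; the forcing term, fit from a demonstration, shapes how the movement gets there.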

end-to-end training of deep visuomotor policies

Author(s): Sergey Levine, Chelsea Finn, Trevor Darrell, Pieter Abbeel
Venue: Journal of Machine Learning Research
Year Published: 2016
Keywords: manipulation, probabilistic models, planning, locomotion, learning from demonstration, reinforcement learning, neural networks, visual perception
Expert Opinion: This was the paper that first seriously popularized robotics as a problem of interest to the deep learning community. Many of the previous papers applying deep learning techniques to robotics tasks looked at the perception problem in isolation. This paper hinted that the entire stack, from perception to actuation, could potentially be recast as a learning problem without suffering a catastrophic degradation in sample efficiency.

probabilistic movement primitives

Author(s): Alexandros Paraschos, Christian Daniel, Jan Peters, and Gerhard Neumann
Venue: Neural Information Processing Systems Conference (NeurIPS)
Year Published: 2013
Keywords: manipulation, probabilistic models, gaussians, planning, learning from demonstration
Expert Opinion: I recommend this and the follow-up papers using ProMPs because they provide a very nice formulation for representing probabilistic movement primitives. ProMPs have many advantages, and I found them better than classical DMPs in many robotics applications, from gestures to whole-body manipulation.
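
A minimal ProMP-style sketch, assuming Gaussian basis functions and a least-squares weight fit (an illustrative simplification of the paper's formulation). The payoff of the probabilistic view is that the learned weight distribution can be conditioned on via-points in closed form.

```python
import numpy as np

def basis(t, n_basis=10, width=0.02):
    # Normalized Gaussian basis functions over a phase in [0, 1].
    centers = np.linspace(0.0, 1.0, n_basis)
    phi = np.exp(-0.5 * (t - centers) ** 2 / width)
    return phi / phi.sum()

def fit_promp(demos, ts):
    # Fit one weight vector per demo by least squares, then summarize the
    # demos as a Gaussian over weights: w ~ N(mu_w, Sigma_w).
    Phi = np.stack([basis(t) for t in ts])                       # (T, n_basis)
    W = np.stack([np.linalg.lstsq(Phi, d, rcond=None)[0] for d in demos])
    return W.mean(axis=0), np.cov(W.T)

def condition(mu_w, Sigma_w, t_star, y_star, sigma_y=1e-4):
    # Gaussian conditioning on passing through (t_star, y_star).
    phi = basis(t_star)
    gain = Sigma_w @ phi / (phi @ Sigma_w @ phi + sigma_y)
    mu_new = mu_w + gain * (y_star - phi @ mu_w)
    Sigma_new = Sigma_w - np.outer(gain, phi) @ Sigma_w
    return mu_new, Sigma_new

ts = np.linspace(0.0, 1.0, 50)
demos = [np.sin(np.pi * ts) + 0.05 * np.random.randn(50) for _ in range(5)]
mu_w, Sigma_w = fit_promp(demos, ts)
mu_c, _ = condition(mu_w, Sigma_w, t_star=0.5, y_star=1.2)  # add a via-point
```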

dynamical movement primitives: learning attractor models for motor behaviors

Author(s): Auke Jan Ijspeert, Jun Nakanishi, Heiko Hoffmann, Peter Pastor, Stefan Schaal
Venue: Neural Computation (Volume 25, Issue 2)
Year Published: 2013
Keywords: planning, learning from demonstration, dynamical systems, nonlinear systems
Expert Opinion: Foundational work on motion planning using iterative learning methods.

reinforcement learning: an introduction

Author(s): Richard S. Sutton and Andrew G. Barto
Venue: Book
Year Published: 2018
Keywords: mobile robots, reinforcement learning, unsupervised learning, optimal control, genetic algorithms
Expert Opinion: It presents the definitive theoretical basis of reinforcement learning, which is used widely in robotics.

intrinsic motivation systems for autonomous mental development

Author(s): Pierre-Yves Oudeyer, Frederic Kaplan, and Verena V. Hafner
Venue: IEEE Transactions on Evolutionary Computation (Volume 11, Issue 2)
Year Published: 2007
Keywords: reinforcement learning, evolution, neural networks
Expert Opinion: This work contributes to the general question of obtaining life-long learning robotic systems. A large body of the existing robot learning literature focuses on methods that enable robots to learn particular pre-defined skills and achieve particular tasks. Life-long learning, on the other hand, requires robots to learn skills and adapt to situations that were not (and cannot be) foreseen. Inspired by human development, intrinsic motivation is an important drive that guides a robot towards regions that can be most effectively and efficiently learned with the capabilities developed so far, exploiting metrics such as novelty, curiosity, and diversity. This paper, in particular, is a seminal study that exploits maximization of learning progress in a real robot that explores its continuous sensorimotor space. It nicely shows that the robot exhibits stage-like development, learning easy tasks first and focusing on more complex problems later, progressively developing more advanced skills.
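
A minimal sketch of the learning-progress signal, assuming a fixed region partition and a simple windowed error average (the paper's IAC system adaptively splits the sensorimotor space; everything below is illustrative):

```python
import numpy as np
from collections import defaultdict, deque

class LearningProgress:
    # Intrinsic reward = recent decrease of prediction error in a region,
    # so the agent prefers regions where its model is currently improving
    # (already-mastered and hopelessly noisy regions both score near zero).
    def __init__(self, window=5):
        self.window = window
        self.errors = defaultdict(lambda: deque(maxlen=2 * window))

    def update(self, region, prediction_error):
        self.errors[region].append(prediction_error)

    def reward(self, region):
        e = list(self.errors[region])
        if len(e) < 2 * self.window:
            return 0.0
        old, new = np.mean(e[: self.window]), np.mean(e[self.window :])
        return max(old - new, 0.0)

lp = LearningProgress()
for i in range(20):
    lp.update("region-A", prediction_error=1.0 / (i + 1))  # shrinking error
print(lp.reward("region-A"))  # positive: the agent is making progress here
```

It is this "seek where you are improving" signal, rather than raw novelty, that produces the stage-like developmental trajectory described above.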

autonomous helicopter aerobatics through apprenticeship learning

Author(s): Pieter Abbeel, Adam Coates and Andrew Y. Ng
Venue: International Journal of Robotics Research
Year Published: 2010
Keywords: learning from demonstration, optimal control, dynamical systems
Expert Opinion: The helicopter stunts achieved in this work are some of the most compelling examples in robotics of both imitation learning and reinforcement learning. (The combination of the two is called apprenticeship learning.) In this work, multiple, imperfect trajectory demonstrations are used to generate ideal trajectories, and then reinforcement learning is used to learn sequences of linear feedback controllers that reproduce those trajectories. When people say things like "but there haven't really been many successes in using reinforcement learning on *real* robots, right?" you can point to this work and say, "sure there are! Have you *seen* these crazy helicopter tricks?"

hindsight experience replay

Author(s): Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
Venue: Neural Information Processing Systems Conference (NeurIPS)
Year Published: 2017
Keywords: manipulation, humanoid robotics, reinforcement learning, neural networks
Expert Opinion: A really nice, simple idea for learning parameterized skills (building on UVFAs) and efficiently dealing with sparse rewards. I think Learning Parameterized Motor Skills on a Humanoid Robot (da Silva et al.) has a much better description of the parameterized skill learning problem than the HER or UVFA papers, but the HER paper has better practical ideas.
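
A minimal sketch of the relabeling trick, with an assumed episode format and reward function (the paper combines this with DDPG and several relabeling strategies beyond the 'final' one shown here):

```python
def her_relabel(episode, reward_fn):
    # episode: list of (state, action, next_state, goal, achieved_goal) tuples.
    # Each transition is stored twice: once with the original goal, once
    # relabeled with the goal the episode actually achieved, so even a
    # failed episode yields useful positive-reward experience.
    buffer = []
    final_achieved = episode[-1][4]          # the 'final' relabeling strategy
    for s, a, s2, g, ag in episode:
        buffer.append((s, a, reward_fn(ag, g), s2, g))
        buffer.append((s, a, reward_fn(ag, final_achieved), s2, final_achieved))
    return buffer

# e.g. with the kind of sparse reward HER targets:
sparse = lambda achieved, goal: 0.0 if achieved == goal else -1.0
```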

a survey on policy search for robotics

Author(s): Marc Peter Deisenroth, Gerhard Neumann, Jan Peters
Venue: Book
Year Published: 2013
Keywords: survey, reinforcement learning
Expert Opinion: For learning optimal robot behavior, reinforcement learning is an essential tool. Whereas the standard textbook by Sutton & Barto mainly covers value-function based methods, this survey covers the policy-based methods that are very popular in robotics applications, with a specific focus on the robotics context.

from skills to symbols: learning symbolic representations for abstract high-level planning

Author(s): George Konidaris, Leslie Pack Kaelbling, Tomas Lozano-Perez
Venue: Journal of Artificial Intelligence Research
Year Published: 2018
Keywords: probabilistic models, planning
Expert Opinion: Abstraction is an important aspect of robot learning. This paper addresses the issue of learning state abstractions for efficient high-level planning. Importantly, the state abstraction should be induced from the set of skills/options that the robot is capable of executing. The resulting abstraction can then be used to determine whether any given plan is feasible. The paper addresses both deterministic and probabilistic planning. It is also a great example of learning the preconditions and effects of skills for planning complex tasks.
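
A minimal sketch of the feasibility-checking idea with hand-coded abstract state sets; the paper learns these sets from data and handles the probabilistic case, so the Skill structure and example names below are purely illustrative.

```python
class Skill:
    def __init__(self, name, precondition, effect):
        self.name = name
        self.precondition = precondition  # abstract states where it can run
        self.effect = effect              # abstract states it can end up in

def plan_feasible(plan, start_states):
    # Chain effect sets into preconditions: the plan is feasible only if every
    # state the previous skill can reach satisfies the next skill's precondition.
    reachable = set(start_states)
    for skill in plan:
        if not reachable <= skill.precondition:
            return False
        reachable = set(skill.effect)
    return True

walk = Skill("walk-to-door", precondition={"hall"}, effect={"door"})
open_door = Skill("open-door", precondition={"door"}, effect={"room"})
print(plan_feasible([walk, open_door], start_states={"hall"}))  # True
```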

movement imitation with nonlinear dynamical systems in humanoid robots

Author(s): Auke Jan Ijspeert, Jun Nakanishi, Stefan Schaal
Venue: IEEE International Conference on Robotics and Automation (ICRA)
Year Published: 2002
Keywords: probabilistic models, nonlinear systems, dynamical systems, learning from demonstration, humanoid robotics
Expert Opinion: First work that proposes a practical movement primitive representation for robotics. Very concise paper: shows how much can be packed into six pages.

supersizing self-supervision: learning to grasp from 50k tries and 700 robot hours

Author(s): Lerrel Pinto, Abhinav Gupta
Venue: IEEE International Conference on Robotics and Automation (ICRA)
Year Published: 2016
Keywords: manipulation, reinforcement learning, neural networks
Expert Opinion: Pinto et al. were the first to exploit deep learning techniques to process large amounts of data collected by a robot running 24x7, significantly improving grasping accuracy without making any object-specific assumptions or requiring 3D models of objects. This paper inspired several works on using large-scale data to learn intuitive physics and manipulation of deformable objects, as well as impressive grasping works such as Google's arm farm and DexNet.

a reduction of imitation learning and structured prediction to no-regret online learning

Author(s): Stephane Ross, Geoffrey J. Gordon, J. Andrew Bagnell
Venue: 14th International Conference on Artificial Intelligence and Statistics (AISTATS)
Year Published: 2011
Keywords: neural networks, learning from demonstration, dynamical systems
Expert Opinion: Imitation learning is a very appealing approach to learning robot skills. This paper shows that the straightforward technique of 'behavioral cloning' - simply copying the expert demonstrations - is actually not a good idea in sequential tasks. The reason is an effect of accumulating errors: once the learning agent strays away from states seen in the demonstration, its learned policy is no longer accurate, causing it to stray even further away from the demonstration. The beauty of the paper is in capturing this idea mathematically, using a no-regret theoretical framework, and suggesting a simple algorithmic solution to the problem. The method, dubbed Dataset Aggregation (DAgger), asks for additional expert actions *on states visited by the policy*. The idea of controlling the distribution shift between the expert and the learner has since been fundamental to robotic imitation learning, and has manifested in various other methods.
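
A minimal sketch of the DAgger loop; the `rollout`, `expert`, and `fit` callables are assumed placeholders, not anything from the paper.

```python
def dagger(expert, fit, rollout, n_iters=10):
    # expert(s) -> action, fit(dataset) -> policy, rollout(policy) -> states.
    dataset = []
    policy = expert                  # iteration 0 effectively rolls out the expert
    for _ in range(n_iters):
        states = rollout(policy)     # states visited by the *current* policy
        dataset += [(s, expert(s)) for s in states]  # expert labels those states
        policy = fit(dataset)        # supervised learning on the aggregate
    return policy
```

The crucial difference from behavioral cloning is the query step: the expert labels the states the learner actually reaches, so the training distribution tracks the learner's own mistakes.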

probabilistic robotics

Author(s): Sebastian Thrun, Wolfram Burgard, Dieter Fox
Venue: Book
Year Published: 2005
Keywords: probabilistic models
Expert Opinion: Probabilistic Robotics is a tour de force, replete with material for students and practitioners alike.

policy gradient reinforcement learning for fast quadrupedal locomotion

Author(s): Nate Kohl, Peter Stone
Venue: IEEE International Conference on Robotics and Automation (ICRA)
Year Published: 2004
Keywords: reinforcement learning, policy gradients, locomotion, legged robots
Expert Opinion: The work is practical in that it allowed the authors to improve the walking speed of Aibos, something essential to creating top-flight RoboCup players. The reason I adore this work and frequently cite it in my talks on machine learning is the fantastic way it allowed the robots to learn autonomously. In particular, for the Aibo robots to succeed in RoboCup, they need to be able to localize on the field based on their perception of provided markers. The authors enabled the robots to measure their own walking speed by leveraging this capability. By marching a team of robots back and forth across the width of the pitch, experimenting with and evaluating different gaits each time, the robots were able to find movement patterns that surpassed hand-designed ones. It's a beautiful example of exploiting measurable quantities to drive learning, a key enabling technology for robot learning.
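
A minimal sketch of finite-difference-style policy search over gait parameters, with `walk_speed` standing in for the robots timing their own laps across the field (the perturbation scheme is simplified from the paper's, and all constants are illustrative):

```python
import numpy as np

def gait_search_step(theta, walk_speed, eps=0.05, n_samples=15, lr=0.1):
    # Perturb the gait parameters, score each perturbation by measured walking
    # speed, and move along the estimated gradient of speed w.r.t. parameters.
    base = walk_speed(theta)
    grad = np.zeros_like(theta)
    for _ in range(n_samples):
        delta = eps * np.random.choice([-1.0, 0.0, 1.0], size=theta.shape)
        grad += (walk_speed(theta + delta) - base) * delta
    grad /= n_samples
    return theta + lr * grad / (np.linalg.norm(grad) + 1e-8)

# toy stand-in for timed laps, peaking at parameter value 0.3:
speed = lambda th: -np.sum((th - 0.3) ** 2)
theta = np.zeros(12)                 # e.g. a dozen gait parameters
for _ in range(200):
    theta = gait_search_step(theta, speed)
```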

alvinn: an autonomous land vehicle in a neural network

Author(s): Dean A. Pomerleau
Venue: MITP
Year Published: 1989
Keywords: mobile robots, learning from demonstration, neural networks
Expert Opinion: On the theoretical side, the first paper to recognize covariate shift in imitation learning and provide a simple data-augmentation style strategy to improve it. On the implementation side, a real self-driving first that led to "No Hands Across America".

robotic grasping of novel objects using vision

Author(s): Ashutosh Saxena, Justin Driemeyer, Andrew Y. Ng
Venue: International Journal of Robotics Research
Year Published: 2008
Keywords: neural networks, dynamical systems, visual perception, learning from demonstration, manipulation, planning
Expert Opinion: This paper led a generation of PhD students to reimagine how grasping, and manipulation more generally, could be approached as a machine learning problem. By treating grasp learning as a supervised learning problem without explicit human demonstrations or reinforcement learning, Saxena and colleagues' work stood as an example of how manipulation could be approached from a perceptual angle. A decade before deep learning made a splash in robotics, this work showed how robots could be trained to manipulate previously unseen objects without a need for complete 3D or dynamics models. While the learning techniques and features may have changed, the general formulation still stands as the initial approach many researchers take when implementing a grasp planning algorithm.

maximum entropy inverse reinforcement learning

Author(s): Brian D. Ziebart, Andrew Maas, J. Andrew Bagnell, and Anind K. Dey
Venue: AAAI Conference on Artificial Intelligence
Year Published: 2008
Keywords: probabilistic models, learning from demonstration, reinforcement learning
Expert Opinion: This work is one of the first to connect probabilistic inference with robot policy learning. Maximum Entropy Inverse Reinforcement Learning poses the classical Inverse Reinforcement Learning problem, well studied for several years before this work, as maximizing the likelihood of an observed state distribution given a noisily optimal agent w.r.t. an unknown reward function. The inference method, model, and general principles not only inspired future IRL works (such as RelEnt-IRL, GP-IRL, and Guided Cost Learning), they have also been applied in Human-Robot Interaction and general policy search algorithms.
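
A minimal sketch of the resulting gradient ascent step; `expected_features` abstracts away the soft-value dynamic program the paper uses to compute the model's expected feature counts, so this is the shape of the update rather than a full implementation.

```python
import numpy as np

def maxent_irl_step(w, expert_features, expected_features, lr=0.01):
    # Under P(trajectory) proportional to exp(w . features), the log-likelihood
    # gradient is the expert's empirical feature counts minus the model's
    # expected counts, so ascent matches feature expectations at max entropy.
    grad = expert_features - expected_features(w)
    return w + lr * grad
```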
