
intrinsic motivation systems for autonomous mental development

Author(s): Pierre-Yves Oudeyer, Frederic Kaplan, and Verena V. Hafner
Venue: IEEE Transactions on Evolutionary Computation (Volume 11, Issue 2)
Year Published: 2007
Keywords: reinforcement learning, evolution, neural networks
Expert Opinion: This paper proposes exploration algorithms based on the idea of intrinsic motivation, in particular the motivation to explore in order to maximize a robot's learning progress. It is a prominent example of work from the Developmental Robotics community, drawing links between developmental psychology, neuroscience, and concrete robotic implementations. It shows that using this kind of exploration to learn to predict the consequences of actions (forward models) results in behavior that is organized and shows similarities with human behavior.
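
As a minimal illustration of the learning-progress idea (not the authors' exact algorithm; the region structure and progress measure are simplified here), an agent can track forward-model prediction errors per region of sensorimotor space and prefer the region where error is dropping fastest:

```python
import numpy as np

class LearningProgressExplorer:
    """Toy learning-progress bandit: the intrinsic reward for a region of
    sensorimotor space is the recent decrease in forward-model prediction
    error there, so the agent prefers regions where it is improving."""

    def __init__(self, n_regions, window=20):
        self.errors = [[] for _ in range(n_regions)]
        self.window = window

    def record_error(self, region, prediction_error):
        self.errors[region].append(prediction_error)

    def learning_progress(self, region):
        errs = self.errors[region]
        if len(errs) < 2 * self.window:
            return float("inf")  # try under-sampled regions first
        older = np.mean(errs[-2 * self.window:-self.window])
        recent = np.mean(errs[-self.window:])
        return older - recent  # positive while predictions keep improving

    def choose_region(self):
        return max(range(len(self.errors)), key=self.learning_progress)
```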

alvinn: an autonomous land vehicle in a neural network

Author(s): Dean A. Pomerleau
Venue: MITP
Year Published: 1989
Keywords: mobile robots, learning from demonstration, neural networks
Expert Opinion: On the theoretical side, the first paper to recognize covariate shift in imitation learning and to provide a simple data-augmentation-style strategy to mitigate it. On the implementation side, a real self-driving first that led to "No Hands Across America".

from skills to symbols: learning symbolic representations for abstract high-level planning

Author(s): George Konidaris, Leslie Pack Kaelbling, Tomas Lozano-Perez
Venue: Journal of Artificial Intelligence Research
Year Published: 2018
Keywords: probabilistic models, planning
Expert Opinion: As we get better at low-level robotic control, the community will need to start thinking more about longer-horizon problems and how to smoothly flow between reasoning at different levels of abstraction. This paper presents a theoretically-grounded formal treatment of the problem, proves some nice results about what constitutes necessary and sufficient symbols for various types of planning, and shows some nice demos on a real robot. It is by far the best analysis of hierarchical learning / planning that I know of and provides a much-needed theoretical foundation for moving this area of research forward.

learning and generalization of motor skills by learning from demonstration

Author(s): Peter Pastor, Heiko Hoffmann, Tamim Asfour, and Stefan Schaal
Venue: IEEE International Conference on Robotics and Automation (ICRA)
Year Published: 2009
Keywords: planning, learning from demonstration
Expert Opinion: DMPs (Dynamic Movement Primitives) are a good representation for learning robot movements from demonstration, as well as for doing reinforcement learning based on demonstrations. This paper explains a variant of the original DMP formulation that makes them stable when generalizing movements to accommodate new goals or obstacles in the robot's path. It then shows how the new DMPs can be used for one-shot learning of tasks such as pick-and-place operations or water serving. More robust than just a trajectory, and less complex than learning with many trials, this is a nice tool to have in your robot learning toolkit.

policy gradient reinforcement learning for fast quadrupedal locomotion

Author(s): Nate Kohl, Peter Stone
Venue: IEEE International Conference on Robotics and Automation (ICRA)
Year Published: 2004
Keywords: reinforcement learning, policy gradients, locomotion, legged robots
Expert Opinion: The work is practical in that it allowed the authors to improve the walking speed of Aibos, something essential to creating top-flight RoboCup players. The reason I adore this work and frequently cite it in my talks on machine learning is the fantastic way it allowed the robots to learn autonomously. In particular, for the Aibo robots to succeed in RoboCup, they need to be able to localize on the field based on their perception of provided markers. The authors enabled the robots to measure their own walking speed by leveraging this capability. By marching a team of robots back and forth across the width of the pitch, experimenting with and evaluating different gaits each time, the robots were able to find movement patterns that surpassed hand-designed ones. It's a beautiful example of exploiting measurable quantities to drive learning, a key enabling technology for robot learning.
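
The mechanism is easy to sketch. Roughly in the spirit of the paper, the gradient of walking speed with respect to the gait parameters can be estimated from a handful of perturbed evaluations; in this illustrative sketch, walk_speed stands in for the robots timing themselves across the pitch:

```python
import numpy as np

def gait_policy_gradient_step(theta, walk_speed, eps=0.05, n_perturb=15, lr=2.0):
    """One finite-difference policy-gradient update: perturb the gait
    parameters, measure the resulting walking speed, regress a gradient
    estimate from the perturbations, and step uphill."""
    perturbations = np.random.choice([-eps, 0.0, eps], size=(n_perturb, theta.size))
    speeds = np.array([walk_speed(theta + p) for p in perturbations])
    # Least-squares fit of speed as a linear function of the perturbation
    # yields a finite-difference estimate of the gradient (plus intercept).
    A = np.hstack([perturbations, np.ones((n_perturb, 1))])
    grad = np.linalg.lstsq(A, speeds, rcond=None)[0][:-1]
    return theta + lr * grad / (np.linalg.norm(grad) + 1e-8)
```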

hindsight experience replay

Author(s): Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
Venue: Neural Information Processing Systems Conference (NeurIPS)
Year Published: 2017
Keywords: manipulation, humanoid robotics, reinforcement learning, neural networks
Expert Opinion: HER addresses the issue of sample inefficiency in DRL, especially for problems with sparse, binary reward functions. It has become one of the most effective algorithms for learning problems with multiple goals, and it has the potential to solve many challenging manipulation tasks. The idea that "EVERY experience is a good experience for SOME task" is a powerful insight that succinctly reflects how we teach our children to be lifelong learners. We should teach our robots the same way.
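
A minimal sketch of the relabeling trick, assuming an illustrative (obs, action, goal, next_obs) transition format and a goal-conditioned reward_fn (e.g., 0 on success, -1 otherwise): each transition is stored again with its goal replaced by a state the agent actually reached later in the episode, HER's "future" strategy.

```python
import random

def her_relabel(episode, reward_fn, k=4):
    """Hindsight relabeling: besides the original transition, store k
    copies whose goal is swapped for a state reached later in the same
    episode, so even failed episodes yield successes for SOME goal."""
    relabeled = []
    for t, (obs, action, goal, next_obs) in enumerate(episode):
        relabeled.append((obs, action, goal, reward_fn(next_obs, goal), next_obs))
        for _ in range(k):
            future = random.randint(t, len(episode) - 1)
            new_goal = episode[future][3]  # a state the agent actually reached
            relabeled.append(
                (obs, action, new_goal, reward_fn(next_obs, new_goal), next_obs)
            )
    return relabeled
```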

probabilistic robotics

Author(s): Sebastian Thrun, Wolfram Burgard, Dieter Fox
Venue: Book
Year Published: 2005
Keywords: probabilistic models
Expert Opinion: Probabilistic Robotics is a tour de force, replete with material for students and practitioners alike.

autonomous helicopter aerobatics through apprenticeship learning

Author(s): Pieter Abbeel, Adam Coates and Andrew Y. Ng
Venue: International Journal of Robotics Research
Year Published: 2010
Keywords: learning from demonstration, optimal control, dynamical systems
Expert Opinion: The helicopter stunts achieved in this work are some of the most compelling examples in robotics of both imitation learning and reinforcement learning. (The combination of the two is called apprenticeship learning.) In this work, multiple, imperfect trajectory demonstrations are used to generate ideal trajectories, and then reinforcement learning is used to learn sequences of linear feedback controllers that reproduce those trajectories. When people say things like "but there haven't really been many successes in using reinforcement learning on *real* robots, right?" you can point to this work and say, "sure there are! Have you *seen* these crazy helicopter tricks?"
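
Schematically, the resulting controllers track the distilled ideal trajectory with time-varying linear feedback. A toy sketch, where dynamics, the gains K[t], and the reference trajectory are placeholders for what the paper's learning pipeline actually produces:

```python
import numpy as np

def track_trajectory(dynamics, x0, x_ref, u_ref, K):
    """Follow an ideal trajectory (distilled from several imperfect
    demonstrations) with time-varying linear feedback:
    u_t = u_ref[t] - K[t] (x_t - x_ref[t])."""
    x, xs = x0, [x0]
    for t in range(len(u_ref)):
        u = u_ref[t] - K[t] @ (x - x_ref[t])  # feedforward + feedback correction
        x = dynamics(x, u)                    # helicopter model or hardware
        xs.append(x)
    return np.array(xs)
```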

apprenticeship learning via inverse reinforcement learning

Author(s): Pieter Abbeel, Andrew Y. Ng
Venue: International Conference on Machine Learning
Year Published: 2004
Keywords: reinforcement learning, learning from demonstration
Expert Opinion: Provided a convincing demonstration of the usefulness of inverse reinforcement learning.

maximum entropy inverse reinforcement learning

Author(s): Brian D. Ziebart, Andrew Maas, J. Andrew Bagnell, and Anind K. Dey
Venue: AAAI Conference on Artificial Intelligence
Year Published: 2008
Keywords: probabilistic models, learning from demonstration, reinforcement learning
Expert Opinion: This is a seminal paper for IRL. It has not only become a standard way to think about IRL, but the observation model for a demonstration given the reward has propagated to many other related areas, like goal inference, human prediction, etc.
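
Concretely, that observation model is the maximum-entropy trajectory distribution: with reward weights θ and trajectory feature counts f_ζ, a demonstrated trajectory ζ is assumed exponentially more likely the higher its cumulative reward,

```latex
P(\zeta \mid \theta) \;=\; \frac{\exp\!\left(\theta^{\top} f_{\zeta}\right)}{Z(\theta)},
\qquad
f_{\zeta} \;=\; \sum_{s_t \in \zeta} f_{s_t},
```

where Z(θ) is the partition function normalizing over all trajectories.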

robotic grasping of novel objects using vision

Author(s): Ashutosh Saxena, Justin Driemeyer, Andrew Y. Ng
Venue: International Journal of Robotics Research
Year Published: 2008
Keywords: neural networks, dynamical systems, visual perception, learning from demonstration, manipulation, planning
Expert Opinion: This is one of the first works in the literature to utilize machine learning for the robotic manipulation problem. The proposed framework is still useful for designing similar robot learning solutions. The particular importance of this work is that it identifies local features that are relevant to manipulation planning.

end-to-end training of deep visuomotor policies

Author(s): Sergey Levine, Chelsea Finn, Trevor Darrell, Pieter Abbeel
Venue: Journal of Machine Learning Research
Year Published: 2016
Keywords: manipulation, probabilistic models, planning, locomotion, learning from demonstration, reinforcement learning, neural networks, visual perception
Expert Opinion: It introduced end-to-end training of visuomotor policies, with impressive results going directly from pixels to torques on several interesting tasks.

pilco: a model-based and data-efficient approach to policy search

Author(s): Marc Peter Deisenroth, Carl Edward Rasmussen
Venue: International Conference on Machine Learning
Year Published: 2011
Keywords: state estimation, reinforcement learning, probabilistic models, gaussians, dynamical systems, visual perception, policy gradients
Expert Opinion: This paper showed in an impressive way how to leverage modern probabilistic methods and model-based reinforcement learning to enable fast policy search. It has become THE reference for modeling and inference in nondeterministic tasks. The authors use analytical gradients for efficient policy updates, thereby eschewing the typical problems related to sampling methods. The result is an approach that can learn the cart-pole swing-up on a real device in about 20 seconds of interaction. If you are doing anything related to reinforcement learning with probabilistic methods, this is a must-read.
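
To make the structure concrete, here is a heavily simplified skeleton of the loop, assuming illustrative env, fit_model, and policy interfaces. PILCO proper uses a Gaussian-process dynamics model with moment matching and exact analytic policy gradients; a crude finite-difference update stands in for those below:

```python
import numpy as np

def model_based_policy_search(env, fit_model, policy, theta, x0,
                              n_iters=10, horizon=50, eps=1e-2, lr=0.1):
    """PILCO-style loop: alternate between collecting real data, fitting
    a dynamics model, and improving the policy purely on model
    predictions (never on the robot itself)."""
    data = []
    for _ in range(n_iters):
        # 1) Run the current policy on the real system, record transitions.
        x = x0
        for _ in range(horizon):
            u = policy(theta, x)
            x_next = env.step(x, u)
            data.append((x, u, x_next))
            x = x_next
        # 2) Fit the dynamics model to everything observed so far.
        model = fit_model(data)
        # 3) Evaluate and improve the policy on the model, not the robot.
        def predicted_cost(params):
            xm, cost = x0, 0.0
            for _ in range(horizon):
                xm = model(xm, policy(params, xm))  # simulate with learned model
                cost += env.cost(xm)
            return cost
        base = predicted_cost(theta)
        grad = np.zeros_like(theta)
        for i in range(theta.size):
            d = np.zeros_like(theta)
            d[i] = eps
            grad[i] = (predicted_cost(theta + d) - base) / eps
        theta = theta - lr * grad
    return theta
```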

a survey on policy search for robotics

Author(s): Marc Peter Deisenroth, Gerhard Neumann, Jan Peters
Venue: Book
Year Published: 2013
Keywords: survey, reinforcement learning
Expert Opinion: For learning optimal robot behavior, reinforcement learning is an essential tool. Whereas the standard textbook by Sutton & Barto mainly covers value-function-based methods, this survey covers policy-search methods, which are very popular in robotics applications.

dynamical movement primitives: learning attractor models for motor behaviors

Author(s): Auke Jan Ijspeert, Jun Nakanishi, Heiko Hoffmann, Peter Pastor, Stefan Schaal
Venue: Neural Computation (Volume 25, Issue 2)
Year Published: 2013
Keywords: planning, learning from demonstration, dynamical systems, nonlinear systems
Expert Opinion: Dynamic Movement Primitives (DMPs) specify a way to model goal-directed behaviors as nonlinear dynamical systems with a learnable attractor landscape. In this way, the movement trajectory can be of almost arbitrary complexity but remains well-behaved and stable. DMPs are interesting for robot learning as they provide a simple way to learn from demonstrations: the forcing term that shapes the movement trajectory is linear in a set of learnable weights, so any function approximator can be used to learn them. Locally weighted regression has been of particular interest as it is a very easy one-shot learning procedure.
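
A minimal one-dimensional sketch of the discrete formulation (the constants and basis-width heuristic are illustrative; the weights w would be fit from a demonstration, e.g. by regressing the demonstration's target forcing term):

```python
import numpy as np

def dmp_rollout(w, y0, g, tau=1.0, dt=0.01, alpha=25.0, beta=6.25, alpha_x=8.0):
    """One-dimensional discrete DMP: a stable spring-damper attractor to
    the goal g, shaped by a forcing term that is linear in the weights w
    of Gaussian basis functions on the phase variable x."""
    n = len(w)
    centers = np.exp(-alpha_x * np.linspace(0, 1, n))  # basis centers in phase space
    widths = n / centers                               # heuristic basis widths
    y, yd, x = y0, 0.0, 1.0
    traj = [y]
    while x > 1e-3:  # phase x decays from 1 toward 0
        psi = np.exp(-widths * (x - centers) ** 2)
        f = x * (g - y0) * (psi @ w) / (psi.sum() + 1e-10)  # forcing term
        ydd = (alpha * (beta * (g - y) - tau * yd) + f) / tau ** 2
        yd += ydd * dt
        y += yd * dt
        x += -alpha_x * x * dt / tau  # canonical system: tau dx/dt = -alpha_x x
        traj.append(y)
    return np.array(traj)
```

Because the forcing term vanishes as the phase decays, the trajectory always converges to the goal regardless of the learned weights, which is the stability property the opinion above highlights.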

movement imitation with nonlinear dynamical systems in humanoid robots

Author(s): Auke Jan Ijspeert, Jun Nakanishi, Stefan Schaal
Venue: IEEE International Conference on Robotics and Automation (ICRA)
Year Published: 2002
Keywords: probabilistic models, nonlinear systems, dynamical systems, learning from demonstration, humanoid robotics
Expert Opinion: First work that proposes a practical movement primitive representation for robotics. A very concise paper: it shows how much can be packed into six pages.

a reduction of imitation learning and structured prediction to no-regret online learning

Author(s): Stephane Ross, Geoffrey J. Gordon, J. Andrew Bagnell
Venue: 14th International Conference on Artificial Intelligence and Statistics
Year Published: 2011
Keywords: neural networks, learning from demonstration, dynamical systems
Expert Opinion: DAgger points to a problem (compounding errors from covariate shift) that keeps popping up in everyone's research. Every robot learning person should know about it.
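
For reference, the core algorithm fits in a few lines. This sketch omits DAgger's expert-mixing schedule (the beta_i coefficients) and assumes illustrative env and train interfaces:

```python
def dagger(env, expert_policy, train, n_iters=10, horizon=100):
    """DAgger sketch: roll out the *learner's* current policy, ask the
    expert what it would have done in the states the learner actually
    visits, aggregate those labels, and retrain with plain supervised
    learning. This directly attacks the compounding-error problem that
    behavior cloning suffers from."""
    states, actions = [], []
    policy = expert_policy  # iteration 0 collects data like behavior cloning
    for _ in range(n_iters):
        s = env.reset()
        for _ in range(horizon):
            states.append(s)
            actions.append(expert_policy(s))  # expert labels the learner's states
            s = env.step(policy(s))           # ...but the *learner* keeps driving
        policy = train(states, actions)       # supervised learning on the aggregate
    return policy
```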

supersizing self-supervision: learning to grasp from 50k tries and 700 robot hours

Author(s): Lerrel Pinto, Abhinav Gupta
Venue: IEEE International Conference on Robotics and Automation (ICRA)
Year Published: 2016
Keywords: manipulation, reinforcement learning, neural networks
Expert Opinion: Pinto et al. were the first to exploit deep learning techniques to process large amounts of data collected by a robot running 24x7, significantly improving grasping accuracy without making any object-specific assumptions or requiring 3D models of objects. This paper inspired several works on using large-scale data to learn intuitive physics and manipulation of deformable objects, as well as impressive grasping works such as Google's arm farm and DexNet.

probabilistic movement primitives

Author(s): Alexandros Paraschos, Christian Daniel, Jan Peters, and Gerhard Neumann
Venue: Neural Information Processing Systems Conference (NeurIPS)
Year Published: 2013
Keywords: manipulation, probabilistic models, gaussians, planning, learning from demonstration
Expert Opinion: This work proposes a probabilistic movement primitive representation that can be trained through least squares regression from demonstrations. The most important feature of this model is its ability to model coupled systems: by exploiting the learned covariance between limbs or other dimensions, whole-body motion can be completed and predicted. The approach also provides a closed-form solution for the optimal feedback controller at each time step, assuming local Gaussian models.
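
A rough single-DoF sketch of both steps, with illustrative basis-function choices: fit per-demonstration weights by ridge-regularized least squares, model them with a Gaussian, and condition that Gaussian on a via-point.

```python
import numpy as np

def fit_promp(demos, n_basis=10, reg=1e-6):
    """ProMP-style fit for one DoF: represent each demonstrated trajectory
    (rows of `demos`, shape n_demos x T) by basis-function weights via
    least squares, then model the weights with a Gaussian to capture
    variance and correlations across the movement."""
    T = demos.shape[1]
    z = np.linspace(0, 1, T)
    centers = np.linspace(0, 1, n_basis)
    Phi = np.exp(-0.5 * ((z[:, None] - centers[None, :]) / 0.05) ** 2)
    Phi /= Phi.sum(axis=1, keepdims=True)  # normalized Gaussian bases
    W = np.linalg.solve(Phi.T @ Phi + reg * np.eye(n_basis), Phi.T @ demos.T).T
    return Phi, W.mean(axis=0), np.cov(W.T)

def condition_on_viapoint(mu_w, Sigma_w, phi_t, y_t, sigma_y=1e-4):
    """Gaussian conditioning on an observed point y_t at basis row phi_t,
    giving a weight distribution that passes (softly) through it."""
    S = Sigma_w @ phi_t
    K = S / (sigma_y + phi_t @ S)          # Kalman-style gain
    mu_new = mu_w + K * (y_t - phi_t @ mu_w)
    Sigma_new = Sigma_w - np.outer(K, S)
    return mu_new, Sigma_new
```

The same conditioning, applied across the stacked weight vector of several joints, is what lets a learned covariance between limbs complete a whole-body motion from a partial observation.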

reinforcement learning: an introduction

Author(s): Richard S. Sutton and Andrew G. Barto
Venue: Book
Year Published: 2018
Keywords: mobile robots, reinforcement learning, unsupervised learning, optimal control, genetic algorithms
Expert Opinion: Reinforcement learning is the branch of machine learning that is concerned with decision making under uncertainty, and can be treated as sitting at the intersection of stochastic optimal control theory and machine learning. As such, it is one of the primary tools that is used for learning on robots, where it has appeared in many forms from mobile robots learning to navigate, to manipulators learning to handle different kinds of objects. This book is really the primary text on reinforcement learning, and covers everything from the basic concepts in the field to more recent developments. It is a must-read for anyone interested in robot learning.
