Found 37 results.



Learning Object Affordances: From Sensory-Motor Coordination to Imitation

Author(s): Luis Montesano, Manuel Lopes, Alexandre Bernardino, Jose Santos-Victor
Venue: IEEE Transactions on Robotics (Volume 24, Issue 1)
Year Published: 2008
Keywords: humanoid robotics, learning from demonstration, planning
Expert Opinion: Affordances have been very influential in robotics over the last decade, and this study laid the foundations of affordance-based robot learning research, which emphasizes the importance of exploration and of learning the relations between objects, actions, and observed effects. The authors showed how learned affordances can be used for goal-oriented action execution and for imitation.

A Survey of Iterative Learning Control

Author(s): D.A. Bristow, M. Tharayil, A.G. Alleyne
Venue: IEEE Control Systems Magazine (Volume 26, Issue 3)
Year Published: 2006
Keywords: learning from demonstration, survey, nonlinear systems, gaussians
Expert Opinion: The paper gives the reader a broad perspective on the important ideas, potential, and limitations of iterative learning control (ILC). Besides design techniques, it discusses problems in stability, performance, learning transient behavior, and robustness.
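The core ILC idea the survey covers can be sketched in a few lines: update the feedforward input for the next trial from the tracking error of the last one, u_{k+1}(t) = u_k(t) + L * e_k(t). The toy plant, learning gain, and reference below are illustrative assumptions, not taken from the survey.

```python
import numpy as np

def simulate(u, a=0.8, b=1.0):
    """Discrete first-order plant y[t+1] = a*y[t] + b*u[t], starting at y[0] = 0."""
    y = np.zeros(len(u) + 1)
    for t in range(len(u)):
        y[t + 1] = a * y[t] + b * u[t]
    return y[1:]                      # outputs aligned with the inputs

ref = np.ones(20)                     # reference trajectory to track
u = np.zeros(20)                      # initial feedforward input
L = 0.5                               # learning gain

for k in range(200):                  # repeat the same task, learning across trials
    e = ref - simulate(u)             # tracking error of trial k
    u = u + L * e                     # P-type ILC update

final_error = np.max(np.abs(ref - simulate(u)))
```

Because the task repeats exactly, the error is driven toward zero across trials even though no model of the plant is used, which is the transient-behavior and robustness story the survey examines in depth.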

Agnostic System Identification for Model-Based Reinforcement Learning

Author(s): Stephane Ross, J. Andrew Bagnell
Venue: International Conference on Machine Learning
Year Published: 2012
Keywords: reinforcement learning, optimal control, learning from demonstration
Expert Opinion: Formalizes the common practice of performing system identification, attempting the task with an optimal controller, and repeating until the system works. A powerful technique that provides performance guarantees even when the true system model cannot be realized.
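The identify-control-repeat loop described above can be sketched on a scalar linear system; the toy dynamics, exploration noise, and deadbeat controller here are illustrative assumptions, not the paper's algorithm or guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)
a_true, b_true = 0.9, 0.5             # "unknown" dynamics x' = a*x + b*u

def rollout(k, n=30, noise=0.1):
    """Run the linear policy u = -k*x with exploration noise; log transitions."""
    x, data = 1.0, []
    for _ in range(n):
        u = -k * x + noise * rng.standard_normal()
        x_next = a_true * x + b_true * u
        data.append((x, u, x_next))
        x = x_next
    return data

k, data = 0.0, []
for _ in range(5):                    # collect -> fit -> re-plan -> repeat
    data += rollout(k)                # aggregate data gathered under the current controller
    X = np.array([(x, u) for x, u, _ in data])
    y = np.array([x_next for _, _, x_next in data])
    a_hat, b_hat = np.linalg.lstsq(X, y, rcond=None)[0]   # least-squares system id
    k = a_hat / b_hat                 # deadbeat controller for the fitted model
```

The important structural point, which the paper makes rigorous, is that the model is fit on data gathered under the controllers actually deployed, so the loop keeps improving where it matters rather than on some fixed exploration distribution.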

An Introduction to Deep Reinforcement Learning

Author(s): Vincent Francois-Lavet, Peter Henderson, Riashat Islam, Marc G. Bellemare, Joelle Pineau
Venue: Foundations and Trends in Machine Learning
Year Published: 2018
Keywords: neural networks, reinforcement learning, policy gradients, learning from demonstration
Expert Opinion: There have been astounding achievements in deep reinforcement learning in recent years, with complex decision-making problems suddenly becoming solvable. This book is written by experts in the field and, on top of that, it is free!

Robotic Grasping of Novel Objects Using Vision

Author(s): Ashutosh Saxena, Justin Driemeyer, Andrew Y. Ng
Venue: International Journal of Robotics Research
Year Published: 2008
Keywords: neural networks, dynamical systems, visual perception, learning from demonstration, manipulation, planning
Expert Opinion: A key paper in grasp learning. The approach is relatively simple in nature (a hack to get the grasp orientation, simple features as anchors), but at the time it was a very clear example of an approach that starkly contrasted with mainstream grasping.

A Robot Controller Using Learning by Imitation

Author(s): Gillian Hayes, Yiannis Demiris
Venue: Neural Information Processing Systems Conference (NeurIPS)
Year Published: 1995
Keywords: reinforcement learning, learning from demonstration, dynamical systems
Expert Opinion: This paper introduced learning from imitation. Imitation has proved useful in its own right (e.g., as a means for non-expert users to program robots), and also as a way of initializing robot controllers (most prominently by Schaal and later Peters) to a reasonable policy that is then refined by learning. Schaal's work on this was probably more influential, but it was preceded by, and possibly inspired by, this work of Gillian Hayes.

Dynamical Movement Primitives: Learning Attractor Models for Motor Behaviors

Author(s): Auke Jan Ijspeert, Jun Nakanishi, Heiko Hoffmann, Peter Pastor, Stefan Schaal
Venue: Neural Computation (Volume 25, Issue 2)
Year Published: 2013
Keywords: planning, learning from demonstration, dynamical systems, nonlinear systems
Expert Opinion: Not the first paper on dynamical movement primitives, but a great update on DMPs.
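A single-DoF discrete DMP, as formulated in this line of work, is a spring-damper "transformation system" shaped by a learned forcing term f(x) and clocked by a decaying phase variable x. The sketch below uses the standard equations with f = 0 (so the system simply reaches the goal); the gains and the forcing-term interface are illustrative assumptions.

```python
import numpy as np

def dmp_rollout(y0, g, forcing=lambda x: 0.0, tau=1.0, dt=0.001,
                alpha_z=25.0, beta_z=6.25, alpha_x=1.0, T=1.5):
    """Integrate  tau*z' = alpha_z*(beta_z*(g - y) - z) + f(x),  tau*y' = z,
    with canonical system  tau*x' = -alpha_x * x  (phase decays from 1 to 0)."""
    y, z, x = y0, 0.0, 1.0            # position, scaled velocity, phase
    path = [y]
    for _ in range(int(round(T / dt))):
        f = forcing(x) * x * (g - y0)                 # phase-gated, goal-scaled forcing
        z += dt / tau * (alpha_z * (beta_z * (g - y) - z) + f)
        y += dt / tau * z
        x += dt / tau * (-alpha_x * x)
        path.append(y)
    return np.array(path)

path = dmp_rollout(y0=0.0, g=1.0)     # with f = 0: smooth point-to-point reach
```

With beta_z = alpha_z / 4 the spring is critically damped, so the trajectory converges to the goal without overshoot; a forcing term learned from a demonstration would reshape the transient while keeping that guaranteed attractor behavior.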

A Reduction of Imitation Learning and Structured Prediction to No-Regret Online Learning

Author(s): Stephane Ross, Geoffrey J. Gordon, J. Andrew Bagnell
Venue: 14th International Conference on Artificial Intelligence and Statistics
Year Published: 2011
Keywords: neural networks, learning from demonstration, dynamical systems
Expert Opinion: Introduces DAgger (Dataset Aggregation) and the general approach of viewing policy optimization as online learning. Formalizes the notion of interaction with an expert as a surrogate objective for the usual policy optimization objective.
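The dataset-aggregation loop can be sketched on a toy 1-D chain world; the environment, expert, and table-lookup learner below are illustrative assumptions, not from the paper. The key move is that the expert labels the states the *learner's own* rollouts visit, so the training distribution tracks the states the learner actually reaches.

```python
import numpy as np

N_STATES, GOAL = 10, 9

def expert(s):
    return +1                          # expert always steps right, toward the goal

def rollout(policy, start=0, horizon=15):
    s, visited = start, []
    for _ in range(horizon):
        visited.append(s)
        s = int(np.clip(s + policy[s], 0, N_STATES - 1))
    return visited

policy = -np.ones(N_STATES, dtype=int) # bad initial policy: always steps left
dataset = []                           # aggregated (state, expert action) pairs
for _ in range(10):                    # DAgger iterations
    # 1. run the current policy; 2. ask the expert to label the visited states;
    # 3. aggregate; 4. retrain on everything collected so far
    dataset += [(s, expert(s)) for s in rollout(policy)]
    for s, a in dataset:
        policy[s] = a                  # trivial "supervised learning" step
```

Plain behavior cloning would only ever see the expert's own states; here each iteration corrects the policy on one more state its mistakes expose, until the rollout reaches the goal.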

Intrinsically Motivated Goal Exploration Processes with Automatic Curriculum Learning

Author(s): Sebastien Forestier, Yoan Mollard, Pierre-Yves Oudeyer
Venue: arXiv
Year Published: 2017
Keywords: learning from demonstration
Expert Opinion: The paper shows how an agent or robot can find out by itself how to manipulate its environment, driven by a simple intrinsic motivation. In my eyes, this is the first practical demonstration of Schmidhuber's idea of learning-progress maximization, which is probably one of the most powerful generic drives. The paper shows how an agent can discover more and more complex interactions with an environment without a specific task in mind. I believe these kinds of studies are important steps towards an intelligently learning robot.

Maximum Entropy Inverse Reinforcement Learning

Author(s): Brian D. Ziebart, Andrew Maas, J. Andrew Bagnell, Anind K. Dey
Venue: AAAI Conference on Artificial Intelligence
Year Published: 2008
Keywords: probabilistic models, learning from demonstration, reinforcement learning
Expert Opinion: This work is one of the first to connect probabilistic inference with robot policy learning. Maximum Entropy Inverse Reinforcement Learning poses the classical inverse reinforcement learning problem, well studied for several years before this work, as maximizing the likelihood of an observed state distribution given a noisily optimal agent with respect to an unknown reward function. The inference method, model, and general principles not only inspired later IRL works (such as RelEnt-IRL, GP-IRL, and Guided Cost Learning), but have also been applied in human-robot interaction and in general policy search algorithms.
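In its simplest form, with a reward linear in state features $f_s$, the noisily optimal agent described above is modeled as generating trajectories with probability exponential in their total reward:

```latex
P(\tau \mid \theta) \;=\; \frac{\exp\!\big(\theta^{\top} f_{\tau}\big)}{Z(\theta)},
\qquad f_{\tau} = \sum_{s \in \tau} f_s ,
```

and maximizing the likelihood of the demonstrations gives the gradient

```latex
\nabla_{\theta} \mathcal{L}(\theta) \;=\; \tilde{f} \;-\; \sum_{s} D_s \, f_s ,
```

i.e., the empirical feature expectation of the demonstrations minus the expected feature counts under the model's state visitation frequencies $D_s$, which is why fitting $\theta$ amounts to matching feature expectations.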

Algorithms for Inverse Reinforcement Learning

Author(s): Andrew Y. Ng, Stuart Russell
Venue: International Conference on Machine Learning
Year Published: 2000
Keywords: reinforcement learning, optimal control, learning from demonstration
Expert Opinion: Another influential work that gives a new and useful perspective on inverse optimal control, with many interesting follow-ups, including the PhD work of Pieter Abbeel.

Movement Imitation with Nonlinear Dynamical Systems in Humanoid Robots

Author(s): Auke Jan Ijspeert, Jun Nakanishi, Stefan Schaal
Venue: IEEE International Conference on Robotics and Automation (ICRA)
Year Published: 2002
Keywords: probabilistic models, nonlinear systems, dynamical systems, learning from demonstration, humanoid robotics
Expert Opinion: The first work to propose a practical movement primitive representation for robotics. A very concise paper: it shows how much can be packed into six pages.

Robot Programming by Demonstration

Author(s): Aude Billard, Sylvain Calinon, Ruediger Dillmann, Stefan Schaal
Venue: Book
Year Published: 2008
Keywords: humanoid robotics, learning from demonstration, dynamical systems
Expert Opinion: Provides a clear presentation of robot learning from demonstration, from the authors who made the approach popular.

Autonomous Helicopter Control Using Reinforcement Learning Policy Search Methods

Author(s): J. Andrew Bagnell, Jeff G. Schneider
Venue: IEEE International Conference on Robotics and Automation (ICRA)
Year Published: 2001
Keywords: learning from demonstration, reinforcement learning, dynamic programming
Expert Opinion: One of the first real demonstrations of RL on an actual robot performing a complex control problem.

End-to-End Training of Deep Visuomotor Policies

Author(s): Sergey Levine, Chelsea Finn, Trevor Darrell, Pieter Abbeel
Venue: Journal of Machine Learning Research
Year Published: 2016
Keywords: manipulation, probabilistic models, planning, locomotion, learning from demonstration, reinforcement learning, neural networks, visual perception
Expert Opinion: This work drew attention to end-to-end learning with neural networks, which I think marked the beginning of the big boom of deep learning in robotics.

Reinforcement Learning in Robotics: A Survey

Author(s): Jens Kober, J. Andrew Bagnell, Jan Peters
Venue: International Journal of Robotics Research
Year Published: 2014
Keywords: survey, reinforcement learning, learning from demonstration, optimal control, mobile robots
Expert Opinion: This survey was published at a time when there was still a significant gap between reinforcement learning and its practical deployment on real robot hardware. For the majority of real-world domains, rollouts are impractical to perform on actual hardware (for example, the state/action spaces are continuous, exploration can be dangerous, and rollouts take much longer when physically executed), and simulators are often too dissimilar to the real world and hardware for what is learned to transfer well. To make reinforcement learning effective on a real hardware system, therefore, the devil is in the details, and this article addresses just that. Today the gap is narrowing, in part because of advances in computation, but also because implementation "tricks" are becoming codified. This article is a bit of a one-stop shop for pulling together a lot of these tricks, and for putting some theoretical rigor and thought behind why and when they work.

One-Shot Imitation Learning

Author(s): Yan Duan, Marcin Andrychowicz, Bradly C. Stadie, Jonathan Ho, Jonas Schneider, Ilya Sutskever, Pieter Abbeel, Wojciech Zaremba
Venue: Neural Information Processing Systems Conference (NeurIPS)
Year Published: 2017
Keywords: learning from demonstration, reinforcement learning, neural networks
Expert Opinion: This paper tackles a very challenging problem in robot learning: obtaining a generalized policy for a task from very limited user supervision. The presented framework has great potential for designing robot learning algorithms with realistic data expectations.

ALVINN: An Autonomous Land Vehicle in a Neural Network

Author(s): Dean A. Pomerleau
Venue: MITP
Year Published: 1989
Keywords: mobile robots, learning from demonstration, neural networks
Expert Opinion: This was probably the first real learning-based controller for an autonomous vehicle. It pioneered techniques such as data augmentation to handle the problem of reaching states on which the network was not trained.
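The augmentation idea mentioned above can be sketched as follows; the shift-to-steering mapping and all constants here are illustrative assumptions, not ALVINN's actual scheme. The point is to synthesize laterally shifted camera views with correspondingly corrected steering labels, so the learner sees the off-center states a clean human demonstration never produces.

```python
import numpy as np

def augment(image, steering, shift_px, steer_per_px=0.01):
    """Shift image columns sideways and adjust the steering target toward recovery."""
    shifted = np.roll(image, shift_px, axis=1)   # crude horizontal shift
    return shifted, steering - steer_per_px * shift_px

cam = np.zeros((30, 32))                         # toy low-resolution input retina
aug_img, aug_steer = augment(cam, steering=0.0, shift_px=5)
```

A view shifted 5 pixels to the right gets a leftward steering correction, teaching the network to recover toward the lane center instead of compounding its drift.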
