learning grasping points with shape context

Author(s): Jeannette Bohg, Danica Kragic
Venue: International Conference on Advanced Robotics
Year Published: 2009
Keywords: planning, manipulation, visual perception
Expert Opinion: This is one of the first works in the literature to use machine learning for the robotic manipulation problem. The proposed framework is still useful as a template for designing similar robot learning solutions. The particular importance of this work lies in its use of a global representation of the target object (the goal) for manipulation planning.

discovery of complex behaviors through contact-invariant optimization

Author(s): Igor Mordatch, Emanuel Todorov, Zoran Popovic
Venue: ACM Transactions on Graphics
Year Published: 2012
Keywords: planning, contact dynamics, trajectory optimization, locomotion, reinforcement learning
Expert Opinion: The paper demonstrates that, with an accurate internal model, complex behaviors involving contacts and dynamic interaction with the environment can be planned from scratch. I see it as an important result supporting the need for good internal representations, which in the case of real-world interactions need to be at least partially learned.

probabilistic movement primitives

Author(s): Alexandros Paraschos, Christian Daniel, Jan Peters, and Gerhard Neumann
Venue: Neural Information Processing Systems Conference (NeurIPS)
Year Published: 2013
Keywords: manipulation, probabilistic models, gaussians, planning, learning from demonstration
Expert Opinion: This work proposes a probabilistic movement primitive (ProMP) representation that can be trained through least-squares regression from demonstrations. The most important feature of this model is its ability to represent coupled systems: by exploiting the learned covariance between limbs or other dimensions, whole-body motion can be completed and predicted. The approach also provides a closed-form solution for the optimal feedback controller at each time step, assuming local Gaussian models.
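
As a rough illustration of the least-squares training step mentioned above, the sketch below fits a ProMP-style weight distribution from a set of demonstrated trajectories with numpy; the basis construction, regularization value, and function names are my own assumptions, not the authors' code.

```python
import numpy as np

def gaussian_basis(T, n_basis):
    """Normalized Gaussian basis functions evaluated on a length-T phase grid."""
    z = np.linspace(0.0, 1.0, T)
    centers = np.linspace(0.0, 1.0, n_basis)
    width = 1.0 / n_basis ** 2
    Phi = np.exp(-(z[:, None] - centers[None, :]) ** 2 / (2.0 * width))
    return Phi / Phi.sum(axis=1, keepdims=True)          # shape (T, n_basis)

def fit_promp(demos, n_basis=20, reg=1e-6):
    """Fit a ProMP-style weight distribution from demonstrations.

    demos: array of shape (n_demos, T, D), D dimensions (e.g. joints).
    Returns the mean and covariance of the stacked weight vector; the
    off-diagonal blocks of the covariance capture the coupling between
    dimensions that the opinion above refers to.
    """
    n_demos, T, D = demos.shape
    Phi = gaussian_basis(T, n_basis)
    W = []
    for demo in demos:
        # Ridge-regularized least squares, solved jointly for all D columns
        w = np.linalg.solve(Phi.T @ Phi + reg * np.eye(n_basis), Phi.T @ demo)
        W.append(w.T.ravel())                             # (D * n_basis,)
    W = np.asarray(W)
    return W.mean(axis=0), np.cov(W, rowvar=False)
```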

dynamical movement primitives: learning attractor models for motor behaviors

Author(s): Auke Jan Ijspeert, Jun Nakanishi, Heiko Hoffmann, Peter Pastor, Stefan Schaal
Venue: Neural Computation (Volume 25, Issue 2)
Year Published: 2013
Keywords: planning, learning from demonstration, dynamical systems, nonlinear systems
Expert Opinion: Dynamic Movement Primitives (DMPs) specify a way to model goal-directed behaviours as a non-linear dynamical system with a learnable attractor behaviour. In this way, the movement trajectory can be of almost arbitrary complexity yet remains well-behaved and stable. DMPs are interesting for robot learning because they provide a simple way to learn from demonstrations: the forcing term that shapes the movement trajectory is linear in a set of learnable weights, so any function approximator can be used to learn these parameters. Locally weighted regression has been of particular interest, as it is a very simple one-shot learning procedure.
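
A minimal sketch of that one-shot learning step for a single 1-D DMP is given below, assuming the standard discrete DMP formulation (second-order attractor plus phase-dependent forcing term); the constants, basis placement, and helper name are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np

def learn_dmp_forcing_weights(y_demo, dt, n_basis=30,
                              alpha=25.0, beta=6.25, alpha_x=3.0):
    """One-shot fit of the forcing-term weights of a 1-D discrete DMP from a
    single demonstrated trajectory y_demo sampled at interval dt, using
    locally weighted regression (constants are common textbook choices)."""
    T = len(y_demo)
    yd = np.gradient(y_demo, dt)
    ydd = np.gradient(yd, dt)
    y0, g = y_demo[0], y_demo[-1]
    tau = (T - 1) * dt

    # Canonical system: phase x decays from 1 towards 0 over the movement
    t = np.arange(T) * dt
    x = np.exp(-alpha_x * t / tau)

    # Invert the transformation system to get the forcing term implied by the
    # demonstration:  tau^2 ydd = alpha (beta (g - y) - tau yd) + f
    f_target = tau ** 2 * ydd - alpha * (beta * (g - y_demo) - tau * yd)

    # Gaussian basis functions placed in phase space
    centers = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))
    widths = n_basis ** 1.5 / centers
    psi = np.exp(-widths[None, :] * (x[:, None] - centers[None, :]) ** 2)

    # Locally weighted regression: one independent weight per basis function
    s = x * (g - y0)
    w = np.array([(s * psi[:, i] * f_target).sum() /
                  ((s ** 2 * psi[:, i]).sum() + 1e-12)
                  for i in range(n_basis)])
    return w
```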

robotic grasping of novel objects using vision

Author(s): Ashutosh Saxena, Justin Driemeyer, Andrew Y. Ng
Venue: International Journal of Robotics Research
Year Published: 2008
Keywords: neural networks, dynamical systems, visual perception, learning from demonstration, manipulation, planning
Expert Opinion: One of the first papers to use general visual features for grasping.

everyday robotic action: lessons from human action control

Author(s): Roy de Kleijn, George Kachergis, Bernhard Hommel
Venue: Frontiers in Neurorobotics
Year Published: 2014
Keywords: planning, manipulation
Expert Opinion: Roboticists are not the only researchers working on motion representation and generation. Researchers in human motor control approach the problem from a different angle, and their work has often served as an inspiration for me. This paper provides a very nice, easy-to-understand overview of some topics in the field of human action control, with many interesting citations to follow up on.

adaptive representation of dynamics during learning a motor task

Author(s): Reza Shadmehr and Ferdinando A. Mussa-Ivaldi
Venue: The Journal of Neuroscience
Year Published: 1994
Keywords: dynamical systems, visual perception, planning
Expert Opinion: The reason I picked these articles and books is that I think robot learning cannot be separated from the cognitive architecture supporting the learning processes. The first two references highlight the importance and role of embodiment (in humans and robots) and the fact that in physical systems part of the learning process is embedded in the morphology and material.

learning attractor landscapes for learning motor primitives

Author(s): Auke Jan Ijspeert, Jun Nakanishi, and Stefan Schaal
Venue: Neural Information Processing Systems Conference (NeurIPS)
Year Published: 2003
Keywords: manipulation, planning, learning from demonstration, reinforcement learning, humanoid robotics
Expert Opinion: Dynamical Movement Primitives have definitely been very influential, mainly in learning-from-demonstration studies. They encode the demonstrated trajectory as a set of differential equations and offer advantages such as one-shot learning of non-linear movements, real-time stability and robustness under perturbations with guarantees on reaching the goal state, generalization of the movement to different goals, and linear combination of parameters. DMPs can easily be extended with additional terms: e.g., memorized force and tactile profiles can be used to modulate learned movement primitives in difficult manipulation tasks with high degrees of noise in perception or actuation. DMPs are also intuitive and easy to understand and implement, and have therefore been widely used.

guided policy search

Author(s): Sergey Levine, Vladlen Koltun
Venue: International Conference on Machine Learning
Year Published: 2013
Keywords: planning, trajectory optimization, reinforcement learning, neural networks
Expert Opinion: This paper, as well as its successors, tries to make learning complex behaviors from experience more tractable when only little data is available (which is of course a common situation for learning robots). In particular, I like that the paper combines well-established planning methods that have long been studied in AI and robotics with learning methods to establish a new procedure that combines advantages from both worlds.

belief space planning assuming maximum likelihood observations

Author(s): Robert Platt Jr., Russ Tedrake, Leslie Kaelbling, Tomas Lozano-Perez
Venue: Robotics: Science and Systems VI
Year Published: 2010
Keywords: manipulation, dynamical systems, planning, gaussians
Expert Opinion: This isn't a learning paper, but a planning paper. Nonetheless, I feel the need to include it, as it strongly influenced my thinking about manipulation learning. It was the first work that I had seen make POMDPs work for real robotics problems. It teaches students that reasoning about uncertainty is important, but that you need to make the right assumptions in order to make it work for any reasonably sized problem. Whereas PILCO aims to reduce model uncertainty, this work assumes a correct model and leverages information-gathering actions to reduce state uncertainty. Thus, these papers are complementary when discussing uncertainty.
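
To make the "maximum likelihood observations" assumption concrete, here is a small numpy sketch of one deterministic belief-space step: an EKF prediction and update in which the robot is assumed to receive the most likely observation, so the innovation is zero and the belief dynamics become deterministic. The function and argument names are mine, and this is only the propagation step, not the paper's full planner.

```python
import numpy as np

def belief_step_ml(mean, cov, u, f, F, H, Q, R):
    """One deterministic belief-space step under the maximum-likelihood
    observation assumption (hypothetical helper, not the paper's code).

    f, F : dynamics model and its Jacobian; H : observation Jacobian;
    Q, R : process and measurement noise covariances."""
    # EKF prediction
    mean_pred = f(mean, u)
    A = F(mean, u)
    cov_pred = A @ cov @ A.T + Q

    # Assume the most likely observation z = h(mean_pred) is received, so the
    # innovation z - h(mean_pred) is zero and the mean is unchanged by the
    # update, while the covariance still contracts. This is what makes the
    # belief dynamics deterministic and lets them be handed to a planner.
    C = H(mean_pred)
    S = C @ cov_pred @ C.T + R
    K = cov_pred @ C.T @ np.linalg.inv(S)
    cov_new = (np.eye(len(mean)) - K @ C) @ cov_pred
    return mean_pred, cov_new
```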

learning and generalization of motor skills by learning from demonstration

Author(s): Peter Pastor, Heiko Hoffmann, Tamim Asfour, and Stefan Schaal
Venue: IEEE International Conference on Robotics and Automation (ICRA)
Year Published: 2009
Keywords: planning, learning from demonstration
Expert Opinion: DMPs (Dynamic Movement Primitives) are a good representation for learning robot movements from demonstration, as well as for doing reinforcement learning based on demonstrations. This paper explains a variant of the original DMP formulation that makes them stable when generalizing movements to accommodate new goals, or obstacles in the robot's path. It then shows how the new DMPs can be used for one-shot learning of tasks such as pick-and-place operations or water serving. More robust than just a trajectory, and less complex than learning with many trials, this is a nice tool to have in your robot learning toolkit.

iterative linearization methods for approximately optimal control and estimation of non-linear stochastic system

Author(s): W. Li, E. Todorov
Venue: International Journal of Control
Year Published: 2007
Keywords: planning, nonlinear systems, optimal control, dynamical systems, state estimation
Expert Opinion: This paper presents one of the most effective and fundamental optimal control frameworks for nonlinear systems. This framework (together with DDP) and its extensions have been widely applied to motion planning and generation in complex robotic systems. The work has been quite influential in the field of motion generation and control of robotic systems, and after its publication there have been many follow-up studies applying this kind of optimal control approach to robot motion control.
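
The core subroutine of such iterative linearization methods is a time-varying LQR backward pass around a nominal trajectory. The sketch below shows that backward Riccati recursion for the deterministic, quadratic-cost case (a simplification of the paper's stochastic iLQG); the interface and variable names are my own assumptions.

```python
import numpy as np

def lqr_backward_pass(A, B, Qx, Qu):
    """Backward Riccati recursion for the time-varying LQR subproblem that
    arises after linearizing the dynamics along a nominal trajectory.

    A, B   : lists of linearized dynamics matrices, one pair per time step
    Qx, Qu : quadratic state and control cost matrices
    Returns the feedback gains K_t of the policy u_t = -K_t x_t; an iterative
    method rolls these out, relinearizes, and repeats until convergence."""
    T = len(A)
    P = Qx.copy()                    # value-function Hessian at the horizon
    gains = [None] * T
    for t in reversed(range(T)):
        S = Qu + B[t].T @ P @ B[t]
        K = np.linalg.solve(S, B[t].T @ P @ A[t])
        P = Qx + A[t].T @ P @ (A[t] - B[t] @ K)
        gains[t] = K
    return gains
```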

a bayesian view on motor control and planning

Author(s): Marc Toussaint, Christian Goerick
Venue: Studies in Computational Intelligence (SCI, volume 264)
Year Published: 2010
Keywords: planning, probabilistic models
Expert Opinion: This paper nicely introduces the relation between classical robot control algorithms and probabilistic inference. Not all of the introduced concepts were novel, but the paper contains nice examples and a good overview, and it helped me start to think about robot control in a new way.

end-to-end training of deep visuomotor policies

Author(s): Sergey Levine, Chelsea Finn, Trevor Darrell, Pieter Abbeel
Venue: Journal of Machine Learning Research
Year Published: 2016
Keywords: manipulation, probabilistic models, planning, locomotion, learning from demonstration, reinforcement learning, neural networks, visual perception
Expert Opinion: This work has shown a robot performing a variety of contact-rich manipulation tasks with learned controllers that close the loop around RGB images. This work spawned a flurry of research in reinforcement and representation learning.

from skills to symbols: learning symbolic representations for abstract high-level planning

Author(s): George Konidaris, Leslie Pack Kaelbling, Tomas Lozano-Perez
Venue: Journal of Artificial Intelligence Research
Year Published: 2018
Keywords: probabilistic models, planning
Expert Opinion: Abstraction is an important aspect of robot learning. This paper addresses the issue of learning state abstractions for efficient high-level planning. Importantly, the state abstraction should be induced from the set of skills/options that the robot is capable of executing. The resulting abstraction can then be used to determine if any plan is feasible. The paper addresses both deterministic and probabilistic planning. It is also a great example of learning the preconditions and effects of skills for planning complex tasks.
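
As a toy illustration of how skill preconditions and effects support feasibility checking, the sketch below uses a STRIPS-like, set-based abstraction; this is a hypothetical simplification of mine, whereas the paper learns such symbolic representations from the agent's own skills and sensor data rather than hand-specifying them.

```python
from dataclasses import dataclass

@dataclass
class Skill:
    """Set-based abstraction of a skill: the symbols that must hold before
    execution and the symbols the skill adds or deletes."""
    name: str
    preconditions: frozenset
    adds: frozenset
    deletes: frozenset

def plan_is_feasible(initial_symbols, plan):
    """Check whether a sequence of skills can be executed from an initial
    abstract state by chaining preconditions and effects."""
    state = set(initial_symbols)
    for skill in plan:
        if not skill.preconditions <= state:
            return False
        state = (state - skill.deletes) | skill.adds
    return True

# Toy usage: the robot must grasp an object before it can place it
grasp = Skill("grasp", frozenset({"hand_empty", "object_on_table"}),
              frozenset({"holding_object"}),
              frozenset({"hand_empty", "object_on_table"}))
place = Skill("place", frozenset({"holding_object"}),
              frozenset({"object_in_bin", "hand_empty"}),
              frozenset({"holding_object"}))
print(plan_is_feasible({"hand_empty", "object_on_table"}, [grasp, place]))  # True
```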

square root sam: simultaneous localization and mapping via square root information smoothing

Author(s): Frank Dellaert, Michael Kaess
Venue: International Journal of Robotics Research
Year Published: 2006
Keywords: manipulation, planning, mobile robots, state estimation, visual perception, probabilistic models
Expert Opinion: This paper, as well as the follow-up iSAM2 paper, focuses on treating localization and mapping problems as nonlinear least squares and then optimizing as efficiently as possible. This is a powerful technique that serves as the backbone for many SAM solvers and can be applied more generally to all sorts of inference problems in robotics. Techniques building on this work have been used in planning, manipulation, and control.
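
To show what "treating localization and mapping as nonlinear least squares" looks like in miniature, here is a Gauss-Newton sketch on a toy 1-D pose graph; square root SAM instead factorizes the measurement Jacobian directly (e.g. by QR or Cholesky on the square-root information matrix), and everything here, including the linear toy residuals, is an illustrative assumption.

```python
import numpy as np

def solve_pose_graph_1d(n_poses, odom, loops, n_iters=5):
    """Gauss-Newton on a toy 1-D pose graph: unknowns x_0..x_{n-1}, odometry
    measurements odom[i] ~ x_{i+1} - x_i, loop closures (i, j, z) with
    z ~ x_j - x_i, and a prior anchoring x_0 at 0."""
    x = np.zeros(n_poses)
    for _ in range(n_iters):
        rows, resid = [], []
        # Prior factor on the first pose
        row = np.zeros(n_poses); row[0] = 1.0
        rows.append(row); resid.append(0.0 - x[0])
        # Odometry factors
        for i, z in enumerate(odom):
            row = np.zeros(n_poses); row[i] = -1.0; row[i + 1] = 1.0
            rows.append(row); resid.append(z - (x[i + 1] - x[i]))
        # Loop-closure factors
        for i, j, z in loops:
            row = np.zeros(n_poses); row[i] = -1.0; row[j] = 1.0
            rows.append(row); resid.append(z - (x[j] - x[i]))
        J, r = np.asarray(rows), np.asarray(resid)
        # Normal equations J^T J dx = J^T r; square root SAM avoids forming
        # J^T J and factorizes J (the square-root information matrix) instead.
        dx = np.linalg.solve(J.T @ J, J.T @ r)
        x = x + dx
    return x
```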

value iteration networks

Author(s): Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel
Venue: Advances in Neural Information Processing Systems
Year Published: 2017
Keywords: planning, trajectory optimization
Expert Opinion: To my knowledge, this is the first paper that embeds a planner into a deep neural network framework, combining learning and planning in a more seamless manner. The paper gives a planning semantics to the convolution operation and shows the potential of such an approach. I think many useful methods in robotics could be derived from this work.
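
The connection between value iteration and convolution can be seen in the small numpy sketch below, which runs value iteration on a deterministic gridworld by shifting the value map once per action and maximizing over the action channel; a VIN replaces these fixed shifts with learned convolution kernels. The toroidal wrap-around and all names are simplifying assumptions of mine.

```python
import numpy as np

def value_iteration_as_conv(reward, n_iters=50, gamma=0.95):
    """Value iteration on a deterministic gridworld, written as the operation
    a VIN module performs: per-action shifts (convolutions) of the value map
    followed by a max over the action channel. Edges wrap around (toroidal
    grid) purely to keep the sketch short."""
    V = np.zeros_like(reward, dtype=float)
    # One "kernel" per action: stay in place plus the four grid moves
    shifts = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(n_iters):
        q_maps = [reward + gamma * np.roll(V, shift, axis=(0, 1))
                  for shift in shifts]
        V = np.max(np.stack(q_maps), axis=0)   # max over the action channel
    return V
```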

an algorithmic perspective on imitation learning

Author(s): Takayuki Osa, Joni Pajarinen, Gerhard Neumann, J. Andrew Bagnell, Pieter Abbeel, Jan Peters
Venue: Foundations and Trends in Robotics
Year Published: 2018
Keywords: survey, learning from demonstration, reinforcement learning, planning
Expert Opinion: A focused overview of imitation learning and perspectives from some of the leaders in the field. Not a complete review but an excellent highlighting of important contributions in the field and perspective on future challenges.

an evolutionary approach to gait learning for four-legged robots

Author(s): Sonia Chernova, Manuela Veloso
Venue: International Conference on Intelligent Robots and Systems
Year Published: 2004
Keywords: planning, mobile robots, evolution, legged robots, genetic algorithms, locomotion
Expert Opinion: This paper presents a clear and concrete mapping of genetic algorithms to a compelling hardware domain: the Sony AIBO walking gait and the RoboCup soccer competition. The AIBO was an example of a platform where parameter tuning by hand is particularly tedious (54 parameters), and the platform was safe to let "practice" on its own overnight (i.e., without human supervision, a rarity for mobile robots), offering an opportunity for fully autonomous, on-hardware, optimization-based learning in which each generation could be evaluated (according to the fitness function) on the actual robot platform without human supervision or intervention. The resulting learned walk outperformed all hand-tuned and learned walks that participated in the RoboCup 2003 competition, including the winning one. ** This recommendation is getting a bit into the weeds of specific algorithms; not quite sure if the list is planning to go that deep. It's a work I would present in a class, as a great example of a CS algorithm being translated for use on real robot hardware. Again, not quite sure if that sort of categorization fits the bill.
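
For readers who want the shape of the approach, here is a minimal genetic-algorithm loop over a 54-dimensional gait parameter vector; the population size, operators, and the evaluate_on_robot placeholder (which would execute the walk on hardware and return a fitness such as walking speed) are assumptions of mine, not the paper's exact settings.

```python
import numpy as np

def evolve_gait(evaluate_on_robot, n_params=54, pop_size=30, n_gens=20,
                mutation_std=0.05, elite_frac=0.2, seed=0):
    """Minimal genetic-algorithm loop over a gait parameter vector.
    evaluate_on_robot(params) is a placeholder that would execute the walk on
    the robot and return a fitness value (e.g. measured walking speed)."""
    rng = np.random.default_rng(seed)
    pop = rng.normal(0.0, 1.0, size=(pop_size, n_params))
    n_elite = max(1, int(elite_frac * pop_size))
    for _ in range(n_gens):
        fitness = np.array([evaluate_on_robot(ind) for ind in pop])
        elite = pop[np.argsort(fitness)[::-1][:n_elite]]    # keep the best
        children = []
        while len(children) < pop_size - n_elite:
            a, b = elite[rng.integers(n_elite, size=2)]
            mask = rng.random(n_params) < 0.5                # uniform crossover
            child = np.where(mask, a, b) + rng.normal(0.0, mutation_std, n_params)
            children.append(child)
        pop = np.vstack([elite, np.array(children)])
    fitness = np.array([evaluate_on_robot(ind) for ind in pop])
    return pop[np.argmax(fitness)]
```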

learning object affordances: from sensory-motor coordination to imitation

Author(s): Luis Montesano, Manuel Lopes, Alexandre Bernardino, Jose Santos-Victor
Venue: IEEE Transactions on Robotics (Volume 24, Issue 1)
Year Published: 2008
Keywords: humanoid robotics, learning from demonstration, planning
Expert Opinion: Affordances have been very influential in robotics in the last decade, and this study laid the foundations of affordance-based robot learning research, which emphasizes the importance of exploration and of learning the relations between objects, actions, and the observed effects. The authors showed how learned affordances can be used for goal-oriented action execution and for imitation.
