discovery of complex behaviors through contact-invariant optimization

Author(s): Igor Mordatch, Emanuel Todorov, Zoran Popovic
Venue: ACM Transactions on Graphics
Year Published: 2012
Keywords: planning, contact dynamics, trajectory optimization, locomotion, reinforcement learning
Expert Opinion: The paper demonstrates that, given an accurate internal model, complex behaviors involving contacts and dynamic interaction with the environment can be planned from scratch. I see it as an important result supporting the need for good internal representations, which in the case of real-world interactions need to be at least partially learned.

value iteration networks

Author(s): Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel
Venue: Advances in Neural Information Processing Systems
Year Published: 2016
Keywords: planning, trajectory optimization
Expert Opinion: To my knowledge, this is the first paper to embed a planner inside a deep neural network, combining learning and planning in a more seamless manner. The paper gives a semantic interpretation of the convolution operation and shows the potential of such an approach. I think many useful methods in robotics could be derived from this work.
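The core observation behind Value Iteration Networks can be illustrated in a few lines. This is a minimal sketch, not the paper's architecture: it assumes a deterministic 2D grid world with five actions, and shows how the value-iteration backup becomes a local stencil (a convolution-like operation) over the value map followed by a max over actions. All function and variable names here are illustrative.

```python
import numpy as np

def value_iteration_grid(reward, gamma=0.9, iters=50):
    """reward: 2D array of per-cell rewards; returns the converged value map."""
    H, W = reward.shape
    V = np.zeros((H, W))
    for _ in range(iters):
        # Pad so every cell can read its 4 neighbors (edge cells see themselves).
        Vp = np.pad(V, 1, mode="edge")
        # Next-state values for the 5 actions: stay, up, down, left, right.
        neighbors = np.stack([
            Vp[1:-1, 1:-1],  # stay
            Vp[:-2, 1:-1],   # up
            Vp[2:, 1:-1],    # down
            Vp[1:-1, :-2],   # left
            Vp[1:-1, 2:],    # right
        ])
        # Bellman backup: Q(s, a) = r(s) + gamma * V(next state), then max over a.
        Q = reward[None] + gamma * neighbors
        V = Q.max(axis=0)
    return V

# Toy example: one goal cell with reward 1, small step penalty elsewhere.
r = np.full((5, 5), -0.01)
r[4, 4] = 1.0
V = value_iteration_grid(r)
```

Because the backup only touches a fixed local neighborhood, the same update can be expressed with a learned convolution kernel plus channel-wise max, which is what makes the planner differentiable end to end.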

learning control in robotics

Author(s): Stefan Schaal, Christopher G. Atkeson
Venue: IEEE Robotics & Automation Magazine
Year Published: 2010
Keywords: survey, reinforcement learning, policy gradients, optimal control, trajectory optimization
Expert Opinion: This review from Schaal and Atkeson does an excellent job of concisely covering the many approaches to learning control in robotics. It is useful not only as an overview of this subtype of robot learning, but also as a jumping-off point for further research, as the works cited are extensive. The paper is also notable for considering robot learning from a control perspective, rather than the more common computer science or statistical perspectives. The authors also discuss practical aspects of learning control, such as the robustness of learned control policies to unexpected perturbations.

robot trajectory optimization using approximate inference

Author(s): Marc Toussaint
Venue: International Conference on Machine Learning
Year Published: 2009
Keywords: probabilistic models, trajectory optimization, optimal control
Expert Opinion: In this study, a direct link is drawn between optimal control and the general framework of probabilistic inference. The inference formulation makes it possible to swap out the inference algorithm or solver, and enables prioritizing multiple concurrent objectives in a principled way.
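The control-as-inference link can be seen in a toy case. This is a hedged sketch, not Toussaint's AICO algorithm: for a 1D linear system x_{t+1} = x_t + u_t with a quadratic control cost and a quadratic terminal cost, the optimal controls coincide with the MAP solution of a Gaussian model in which u_t ~ N(0, 1) plays the role of the control cost and a tight Gaussian "observation" x_T ~ N(goal, sigma^2) plays the role of the terminal cost. Since everything is Gaussian, MAP inference reduces to least squares; the variable names are illustrative.

```python
import numpy as np

T, goal, sigma2 = 5, 1.0, 1e-6

# x_T = x_0 + sum(u) with x_0 = 0, so the objective is
#   sum(u_t^2) + (sum(u) - goal)^2 / sigma2,
# i.e. a stacked least-squares problem: prior rows (identity) plus one
# heavily weighted "observation" row enforcing the terminal constraint.
A = np.vstack([np.eye(T), np.ones((1, T)) / np.sqrt(sigma2)])
b = np.concatenate([np.zeros(T), [goal / np.sqrt(sigma2)]])
u = np.linalg.lstsq(A, b, rcond=None)[0]

# The symmetric Gaussian objective spreads control effort evenly over time,
# so each u_t approaches goal / T as sigma2 -> 0.
```

Swapping the least-squares solver for message passing on the same Gaussian model gives the same trajectory, which is exactly the exchangeability of inference algorithms the paper exploits.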

motion planning under uncertainty using iterative local optimization in belief space

Author(s): Jur van den Berg, Sachin Patil, Ron Alterovitz
Venue: International Journal of Robotics Research
Year Published: 2012
Keywords: planning, trajectory optimization, dynamical systems, gaussians
Expert Opinion: This paper presents one of the first efficient solutions for continuous POMDPs. Many follow-up papers on solving POMDPs used similar ideas (i.e., trajectory optimization in belief space). While applications of this algorithm have been largely limited to simulation, solving continuous POMDPs is a very important topic for robotics, and I expect it to have much more impact in the future.

guided policy search

Author(s): Sergey Levine, Vladlen Koltun
Venue: International Conference on Machine Learning
Year Published: 2013
Keywords: planning, trajectory optimization, reinforcement learning, neural networks
Expert Opinion: This paper and its successors try to make learning complex behaviors from experience more tractable when only little data is available (which is of course a common situation for learning robots). In particular, I like that the paper combines well-established planning methods, long studied in AI and robotics, with learning methods to establish a new procedure that draws on the advantages of both worlds.