the coordination of arm movements: an experimentally confirmed mathematical model

Author(s): Tamar Flash, Neville Hogan
Venue: Journal of Neuroscience
Year Published: 1985
Keywords: optimal control, cognitive sciences, dynamical systems
Expert Opinion: This paper is part of a set of papers that outlines important points about synergy formation in neuroscience and robotics along a common thread running through the last 40 years: a) the coordination of multiple joints in goal-directed movement, b) the characterization of "biological motion", c) the equilibrium point hypothesis, d) the role of force fields in motor coordination, e) the extension of the equilibrium point hypothesis from overt (real) movements to covert (imagined) movements, f) the characterization of synergy formation as the simulation of an internal body model. In particular, this paper explained the bell shape of the speed profile of human arm reaching movements in terms of optimal control, namely as the minimizer of the integrated squared jerk (the "minimum-jerk" model). The observation had first been made in Morasso P (1981), "Spatial control of arm movements," Experimental Brain Research.
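
Concretely, the minimum-jerk solution for a point-to-point reach is a fifth-order polynomial in normalized time, and its speed profile is bell-shaped. The sketch below reproduces that profile; the standard form of the model is used, but the numerical values and code are ours, not the paper's.

```python
# A minimal sketch of the minimum-jerk point-to-point trajectory that
# Flash & Hogan derive: position is a fifth-order polynomial in
# normalized time, and the resulting speed profile is bell-shaped.
import numpy as np

def minimum_jerk(x0, xf, T, n=101):
    """Minimum-jerk position and speed profiles from x0 to xf over duration T."""
    t = np.linspace(0.0, T, n)
    tau = t / T  # normalized time in [0, 1]
    pos = x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)
    vel = (xf - x0) / T * (30 * tau**2 - 60 * tau**3 + 30 * tau**4)
    return t, pos, vel

t, pos, vel = minimum_jerk(x0=0.0, xf=0.3, T=0.8)
# The speed profile is symmetric and unimodal ("bell-shaped"), peaking
# at the movement midpoint, as observed by Morasso (1981).
print(f"peak speed {vel.max():.3f} m/s at t = {t[vel.argmax()]:.2f} s")
```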

a new approach to linear filtering and prediction problems

Author(s): R. E. Kalman
Venue: Transactions of the ASME–Journal of Basic Engineering
Year Published: 1960
Keywords: probabilistic models, optimal control, dynamical systems, state estimation
Expert Opinion: The paper that introduced the Kalman Filter: probably the most used inference algorithm in science and engineering. The paper *also* introduced the linear quadratic regulator as a bonus: probably the most used optimal control algorithm. It is clearly written and easy to understand.
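
For readers who have not seen it, the filter alternates a prediction through the linear dynamics with a measurement update. Below is a minimal sketch of one such cycle in standard textbook form; the variable names are ours, not Kalman's original notation.

```python
# A minimal sketch of one Kalman filter step for a linear-Gaussian system
#   x' = A x + w,  w ~ N(0, Q);   z = H x + v,  v ~ N(0, R).
# Standard textbook form, not the paper's original notation.
import numpy as np

def kalman_step(x, P, z, A, H, Q, R):
    """One predict/update cycle: returns the posterior mean and covariance."""
    # Predict: propagate mean and covariance through the dynamics.
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update: correct the prediction with the measurement z.
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```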

reinforcement learning: an introduction

Author(s): Richard S. Sutton and Andrew G. Barto
Venue: Book
Year Published: 2018
Keywords: mobile robots, reinforcement learning, unsupervised learning, optimal control, genetic algorithms
Expert Opinion: Somewhat repeating myself from the last suggestion: for learning robot behavior, reinforcement learning is an essential tool. While Sutton & Barto do not focus specifically on the case of robotics, their book is a very accessible text that nevertheless manages to cover many aspects, techniques, and challenges in reinforcement learning.
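
As a taste of the book's core material, the tabular Q-learning update it covers fits in a few lines. The sketch below runs it on a toy chain environment of our own invention, not an example from the book.

```python
# A minimal sketch of tabular Q-learning, a canonical algorithm covered
# at length in Sutton & Barto; the chain environment here is our own toy.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(state, action):
    """Hypothetical chain world: action 1 moves right, action 0 moves left;
    reaching the rightmost state yields reward 1."""
    nxt = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if nxt == n_states - 1 else 0.0
    return nxt, reward

for episode in range(200):
    s = 0
    for _ in range(20):
        # Epsilon-greedy action selection.
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next, r = step(s, a)
        # Q-learning TD update toward the bootstrapped one-step target.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print("greedy policy:", Q.argmax(axis=1))
```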

applied nonlinear control

Author(s): Jean-Jacques E Slotine, Weiping Li
Venue: Book
Year Published: 1991
Keywords: nonlinear systems, optimal control
Expert Opinion: This book laid the basis for the adaptive nonlinear control techniques commonly used in robotics.

deep reinforcement learning in a handful of trials using probabilistic dynamics models

Author(s): Kurtland Chua, Roberto Calandra, Rowan McAllister, Sergey Levine
Venue: Neural Information Processing Systems Conference (NeurIPS)
Year Published: 2018
Keywords: reinforcement learning, dynamical systems, probabilistic models, optimal control
Expert Opinion: Model-based reinforcement learning had a reputation for underperforming model-free methods on complicated problems. Conceptually, however, model-based methods have many advantages when it comes to data efficiency and transfer between tasks. This paper shows that with ensembles of probabilistic dynamics models, state-of-the-art robot learning benchmarks (Gym MuJoCo environments) can be solved with high performance in significantly fewer steps. This data efficiency is particularly important for real-robot applications.
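
The key ingredient is easy to sketch: train several dynamics models on bootstrapped data and read predictive uncertainty off their disagreement. The illustration below simplifies heavily, using linear least-squares models where the paper uses probabilistic neural networks and a sampling-based MPC planner.

```python
# A minimal sketch of the ensemble idea behind the paper: fit several
# dynamics models on bootstrapped data and treat their disagreement as
# predictive uncertainty. The paper uses probabilistic neural networks
# and MPC; linear least-squares models are used here for brevity.
import numpy as np

rng = np.random.default_rng(0)

def fit_linear_model(X, U, X_next):
    """Least-squares fit of x' ~ [x, u] @ W from transition data."""
    Z = np.hstack([X, U])
    W, *_ = np.linalg.lstsq(Z, X_next, rcond=None)
    return W  # shape (dim_x + dim_u, dim_x)

def fit_ensemble(X, U, X_next, n_models=5):
    models = []
    for _ in range(n_models):
        idx = rng.integers(len(X), size=len(X))  # bootstrap resample
        models.append(fit_linear_model(X[idx], U[idx], X_next[idx]))
    return models

def predict(models, x, u):
    z = np.concatenate([x, u])
    preds = np.stack([z @ W for W in models])
    return preds.mean(axis=0), preds.std(axis=0)  # mean and disagreement

# Toy usage with random transitions (dim_x = 2, dim_u = 1).
X = rng.normal(size=(100, 2))
U = rng.normal(size=(100, 1))
X_next = X + 0.1 * U + 0.01 * rng.normal(size=(100, 2))
models = fit_ensemble(X, U, X_next)
mean, spread = predict(models, x=np.zeros(2), u=np.array([1.0]))
```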

algorithms for inverse reinforcement learning

Author(s): Andrew Y. Ng, Stuart Russell
Venue: International Conference on Machine Learning
Year Published: 2000
Keywords: reinforcement learning, optimal control, learning from demonstration
Expert Opinion: Another influential work that gives a new and useful perspective on inverse optimal control, with many interesting follow-ups, including the PhD work of Pieter Abbeel.

learning control in robotics

Author(s): Stefan Schaal, Christopher G. Atkeson
Venue: IEEE Robotics & Automation Magazine
Year Published: 2010
Keywords: survey, reinforcement learning, policy gradients, optimal control, trajectory optimization
Expert Opinion: This review from Schaal and Atkeson does an excellent job of concisely covering the many approaches to learning control in robotics. It is useful not only as an overview of this subtype of robot learning, but also as a jumping-off point for further research, as the works cited are extensive. The paper is also of note because it considers the problem of robot learning from a control perspective, rather than the more common computer science or statistical perspectives. The authors also discuss practical aspects of learning control, such as the robustness of learned control policies to unexpected perturbations.

iterative linearization methods for approximately optimal control and estimation of non-linear stochastic system

Author(s): W. Li, E. Todorov
Venue: International Journal of Control
Year Published: 2007
Keywords: planning, nonlinear systems, optimal control, dynamical systems, state estimation
Expert Opinion: This paper presents one of the most effective and fundamental optimal control frameworks for nonlinear systems. This framework (together with DDP) and its extensions have been widely applied to motion planning and generation in complex robotic systems, and the work has been quite influential in the field of motion generation and control of robots. Since its publication, there have been many follow-up studies applying this kind of optimal control approach to robot motion control.
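
The backbone of such methods is repeated local linearization followed by a finite-horizon LQR backward pass. Below is a minimal sketch of that backward pass alone, with a time-invariant linearization and quadratic costs in our notation; the paper's iterative relinearization and estimation components are omitted.

```python
# A minimal sketch of the finite-horizon LQR backward pass at the heart
# of iterative-linearization methods: around a nominal trajectory the
# dynamics are linearized to x' = A x + B u, costs are quadratic, and a
# Riccati recursion yields feedback gains. The outer loop that
# re-linearizes and iterates is omitted here.
import numpy as np

def lqr_backward_pass(A, B, Q, R, Qf, horizon):
    """Returns feedback gains K[t] such that u_t = -K[t] @ x_t."""
    P = Qf
    gains = []
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
        gains.append(K)
    return gains[::-1]  # gains ordered t = 0 .. horizon-1

# Toy usage: a discretized double integrator.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
K = lqr_backward_pass(A, B, Q=np.eye(2), R=np.eye(1), Qf=10 * np.eye(2), horizon=50)
```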

embed to control: a locally linear latent dynamics model for control from raw images

Author(s): Manuel Watter, Jost Tobias Springenberg, Joschka Boedecker, Martin Riedmiller
Venue: Neural Information Processing Systems Conference (NeurIPS)
Year Published: 2015
Keywords: neural networks, optimal control, dynamical systems
Expert Opinion: This work shows how the idea of representation learning, fundamental to deep learning, can take advantage of the robotics context. The learned representation is explicitly constrained to be useful in an optimal control setting: from raw vision it produces simple, locally linear latent models that make control efficient. It is also an excellent example of how to integrate learning while exploiting known efficient algorithms (optimal control) instead of resorting to monolithic end-to-end approaches.
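
The structural idea is compact: an encoder maps an image to a latent state z, and the latent transition is constrained to the locally linear form z' = A z + B u + o, with A, B, and o predicted around the current latent. The sketch below stubs out the learned networks with hypothetical placeholders; in the paper both are trained jointly in a variational framework.

```python
# A minimal sketch of the locally linear latent transition used in
# Embed to Control. The encoder and the matrix-predicting network are
# stubbed out with hypothetical placeholders; in the paper both are
# convolutional networks trained jointly inside a variational autoencoder.
import numpy as np

latent_dim, control_dim = 3, 1

def encode(image):
    """Placeholder encoder; the paper uses a learned VAE encoder."""
    return image.reshape(-1)[:latent_dim].astype(float)

def local_dynamics(z):
    """Placeholder for the network that outputs (A, B, o) around z."""
    A = np.eye(latent_dim) + 0.01 * np.outer(z, z)  # hypothetical values
    B = 0.1 * np.ones((latent_dim, control_dim))    # hypothetical values
    o = np.zeros(latent_dim)
    return A, B, o

def latent_step(z, u):
    """One locally linear transition in latent space: z' = A z + B u + o."""
    A, B, o = local_dynamics(z)
    return A @ z + B @ u + o

z = encode(np.zeros((8, 8)))
z_next = latent_step(z, u=np.array([0.5]))
# Because each step is linear in (z, u), standard optimal control tools
# such as iLQG can plan directly in the latent space.
```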

optimality principles in sensorimotor control

Author(s): Emanuel Todorov
Venue: Nature Neuroscience
Year Published: 2004
Keywords: evolution, learning from demonstration, optimal control, dynamical systems
Expert Opinion: From the paper's abstract: "The sensorimotor system is a product of evolution, development, learning and adaptation, which work on different time scales to improve behavioral performance. Consequently, many theories of motor function are based on 'optimal performance': they quantify task goals as cost functions, and apply the sophisticated tools of optimal control theory to obtain detailed behavioral predictions. The resulting models, although not without limitations, have explained more empirical phenomena than any other class." This paper provides a solid theoretical perspective on how to think about control principally in terms of objectives. It makes a very good case for sensory feedback, the use of which is a key aspect of robot learning work.
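
In the framework the paper surveys, behavior is modeled as the minimizer of an expected movement cost. A generic form of that objective, written in our notation but standard in this literature, is:

```latex
% Generic finite-horizon stochastic optimal control objective (our
% notation; standard in the literature this paper surveys).
J(\pi) = \mathbb{E}\left[ h(x_T) + \sum_{t=0}^{T-1} \ell(x_t, u_t) \right],
\qquad x_{t+1} = f(x_t, u_t) + w_t, \qquad u_t = \pi_t(x_t).
```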

model learning for robot control: a survey

Author(s): Duy Nguyen-Tuong, Jan Peters
Venue: Cognitive Processing
Year Published: 2011
Keywords: gaussians, survey, dynamical systems, optimal control, unsupervised learning, reinforcement learning
Expert Opinion: The only non-RL paper on my list :). Modelling of robots is part of both very classical control approaches and modern learning approaches. There are many excellent papers on this topic; I chose this one because it provides a wide overview. One of my favourite papers in this area, by the same authors and included in this survey, combines insights from analytic modelling (allowing fast identification of a small set of parameters) with Gaussian process modelling (allowing precise and flexible modelling, but at the cost of requiring more data). I chose the survey instead, as it gives the wider overview and is thus what I would be more likely to suggest to a student or mentee.
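
The combination mentioned above is simple to sketch: fit the analytic model first, then learn its residual error with a Gaussian process. The toy illustration below uses scikit-learn and a hypothetical one-degree-of-freedom model, not the authors' implementation.

```python
# A minimal sketch of the semiparametric idea mentioned above: an
# analytic (rigid-body style) model captures the bulk of the dynamics,
# and a Gaussian process learns the residual. Uses scikit-learn; the
# model, data, and parameters are our own toy choices.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def analytic_model(q, dq, ddq):
    """Placeholder analytic inverse-dynamics model (1-DoF, hypothetical)."""
    inertia, damping = 1.2, 0.3  # hypothetical identified parameters
    return inertia * ddq + damping * dq

# Toy training data: joint states and measured torques with an
# unmodelled gravity-like term plus noise.
rng = np.random.default_rng(0)
q, dq, ddq = rng.normal(size=(3, 200))
tau = analytic_model(q, dq, ddq) + 0.5 * np.sin(q) + 0.05 * rng.normal(size=200)

X = np.column_stack([q, dq, ddq])
residual = tau - analytic_model(q, dq, ddq)
gp = GaussianProcessRegressor().fit(X, residual)

# Prediction = analytic part + learned residual (with GP uncertainty).
mean_res, std_res = gp.predict(X[:5], return_std=True)
tau_hat = analytic_model(q[:5], dq[:5], ddq[:5]) + mean_res
```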

autonomous helicopter aerobatics through apprenticeship learning

Author(s): Pieter Abbeel, Adam Coates and Andrew Y. Ng
Venue: International Journal of Robotics Research
Year Published: 2010
Keywords: learning from demonstration, optimal control, dynamical systems
Expert Opinion: This paper presents a beautiful and compelling demonstration of the strength of learning dynamical models and using optimal control to master complex tasks on intrinsically unstable systems, even when the learned models are rather crude and the optimal controllers are based on linearization, both strong approximations of reality. Furthermore, it addresses the problem of learning from demonstrations and improving upon them to beat human performance. To the best of my knowledge, it is one of the first papers demonstrating the combined use of learning from demonstration, model learning, and optimal control to achieve acrobatic tasks.

optimization-based iterative learning for precise quadrocopter trajectory tracking

Author(s): Angela Schoellig, Raffaello D'Andrea
Venue: Autonomous Robots
Year Published: 2012
Keywords: state estimation, optimal control
Expert Opinion: The authors propose an iterative approach to improve the flight controller of a quadcopter during repetitive task execution. While at a high level the paper has the same general setting as recent work in policy learning and robotics, it takes a very different approach, one grounded in control theory and state estimation. In my opinion, this paper is one of the best examples of "model-based" learning in robotics from both an algorithmic and a systems perspective. The task studied is dynamic, has non-trivial dynamic disturbances, and the proposed control technique is theoretically justified while being simple enough to analyze. Also, it worked.
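
The underlying iteration pattern is classical iterative learning control: after each repetition of the task, the feedforward input is corrected using the recorded tracking error. The sketch below shows only that bare pattern on a toy lag plant of our own; the paper's contribution is an optimization-based, model-informed version of this loop.

```python
# A minimal sketch of generic iterative learning control (ILC): after
# each execution of a repetitive task, correct the feedforward input with
# the recorded tracking error. The plant and gains are our own toy
# choices; the paper develops an optimization-based variant of this loop.
import numpy as np

def execute(u):
    """Toy first-order lag plant: y[t] = 0.8 y[t-1] + 0.2 u[t-1]."""
    y = np.zeros_like(u)
    for t in range(1, len(u)):
        y[t] = 0.8 * y[t - 1] + 0.2 * u[t - 1]
    return y

def ilc_update(u, error, gain=0.5):
    """u_{k+1}[t] = u_k[t] + gain * e_k[t+1]; the shift matches the plant's one-step delay."""
    u_new = u.copy()
    u_new[:-1] += gain * error[1:]
    return u_new

reference = np.sin(np.linspace(0.0, np.pi, 50))
u = np.zeros(50)
for trial in range(30):
    error = reference - execute(u)
    u = ilc_update(u, error)
print(f"max tracking error after 30 trials: {np.abs(reference - execute(u)).max():.2e}")
```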

optimal control and estimation

Author(s): Robert Stengel
Venue: Book
Year Published: 1994
Keywords: optimal control, state estimation
Expert Opinion: Robot learning practitioners must be aware of and understand optimal control. :)

dynamic programming and optimal control (vol. i+ii)

Author(s): D.P. Bertsekas
Venue: Book
Year Published: 2017
Keywords: optimal control, dynamic programming
Expert Opinion: The optimal control formulation and the dynamic programming algorithm are the theoretical foundation of many approaches to learning for control and reinforcement learning (RL). In brief, many RL problems can be understood as optimal control without a priori knowledge of a model. Thus, many algorithms and much of the understanding in RL and robot learning build on optimal control. The series of books by Bertsekas provides an excellent introduction to and reference for this field. While the first volume addresses primarily classical (model-based) optimal control, the second volume treats approximate dynamic programming, which includes addressing optimal control/dynamic programming problems with sampling-based methods. While these books do not directly target (machine) learning techniques, the underlying principles are key for addressing learning in robotics, and I thus consider these books absolutely fundamental for this area. (Coincidentally, the new edition of Bertsekas' textbooks, announced for this year, will be called "Reinforcement Learning and Optimal Control.")
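
The central object in these books is the dynamic programming recursion. Below is a minimal value-iteration sketch on a randomly generated toy MDP; the MDP is our own example, while the algorithm itself is the standard one from the dynamic programming literature.

```python
# A minimal sketch of value iteration, the dynamic programming recursion
# at the core of these books:
#   V(s) <- max_a sum_s' P(s'|s,a) * (R(s,a) + gamma * V(s')).
# The tiny random MDP here is our own toy example.
import numpy as np

n_states, n_actions, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)
# P[a, s, s'] transition probabilities; R[s, a] expected rewards.
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))
R = rng.normal(size=(n_states, n_actions))

V = np.zeros(n_states)
for _ in range(200):
    # Q[s, a] = R[s, a] + gamma * sum_s' P[a, s, s'] * V[s']
    Q = R + gamma * np.einsum("asn,n->sa", P, V)
    V = Q.max(axis=1)

policy = Q.argmax(axis=1)  # greedy policy from the converged values
```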

reinforcement learning and optimal control

Author(s): Dimitri P. Bertsekas
Venue: Book
Year Published: 2019
Keywords: reinforcement learning, optimal control, dynamic programming, neural networks
Expert Opinion: An accessible take on reinforcement learning that pairs well with Bertsekas' classic and influential book(s) on dynamic programming.

reinforcement learning in robotics: a survey

Author(s): Jens Kober, J. Andrew Bagnell, Jan Peters
Venue: International Journal of Robotics Research
Year Published: 2014
Keywords: survey, reinforcement learning, learning from demonstration, optimal control, mobile robots
Expert Opinion: This survey was published at a time when there was still a significant gap between reinforcement learning and its practical deployment on real robot hardware. For the majority of real-world domains, rollouts are impractical to perform on actual hardware (for example, the state/action spaces are continuous, exploration can be dangerous, and rollouts take much longer when physically executed), and simulators are often too dissimilar to the real world and hardware for what is learned to transfer well. To get reinforcement learning to be effective on a real hardware system, therefore, the devil is in the details, and this article addresses just that. Today the gap is narrowing, in part because of advances in computation, but also because implementation "tricks" are becoming codified. This article is a bit of a one-stop shop for pulling together a lot of these tricks and putting some theoretical rigor and thought behind why and when they work.
