
a new approach to linear filtering and prediction problems

Author(s): R. E. Kalman
Venue: Transactions of the ASME–Journal of Basic Engineering
Year Published: 1960
Keywords: probabilistic models, optimal control, dynamical systems, state estimation
Expert Opinion: The paper that introduced the Kalman Filter: probably the most used inference algorithm in science and engineering. The paper *also* introduced the linear quadratic regulator as a bonus: probably the most used optimal control algorithm. It is clearly written and easy to understand.
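For readers who have not seen it, a minimal sketch of the predict/update recursion of a generic linear-Gaussian filter (variable names are illustrative, not the paper's notation):

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.

    x, P : prior state mean and covariance
    z    : new measurement
    F, H : state-transition and observation matrices
    Q, R : process and measurement noise covariances
    """
    # Predict: propagate mean and covariance through the dynamics.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q

    # Update: correct the prediction with the measurement.
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```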

robot trajectory optimization using approximate inference

Author(s): Marc Toussaint
Venue: International Conference on Machine Learning
Year Published: 2009
Keywords: probabilistic models, trajectory optimization, optimal control
Expert Opinion: In this study, a direct link is drawn between optimal control and the general framework of probabilistic inference. Formulating the problem as inference makes it possible to swap in different inference algorithms or solvers, and enables the prioritization of multiple concurrent objectives in a principled way.
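Roughly, the idea (in my notation, not the paper's): treat task fulfillment at each time step as an observed binary variable $z_t$ and infer the trajectory posterior,

$$
p(x_{0:T} \mid z_{1:T} = 1) \;\propto\; p(x_0) \prod_{t=1}^{T} p(x_t \mid x_{t-1}) \, p(z_t = 1 \mid x_t),
\qquad
p(z_t = 1 \mid x_t) \propto \exp(-c_t(x_t)),
$$

so that any Gaussian message-passing scheme can serve as the trajectory optimizer.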

probabilistic robotics

Author(s): Sebastian Thrun, Wolfram Burgard, Dieter Fox
Venue: Book
Year Published: 2005
Keywords: probabilistic models
Expert Opinion: Probabilistic Robotics is a tour de force, replete with material for students and practitioners alike.

pilco: a model-based and data-efficient approach to policy search

Author(s): Marc Peter Deisenroth, Carl Edward Rasmussen
Venue: International Conference on Machine Learning
Year Published: 2011
Keywords: state estimation, reinforcement learning, probabilistic models, gaussians, dynamical systems, visual perception, policy gradients
Expert Opinion: The paper shows how data-efficient model-based RL control methods can actually get. PILCO is an idea that, in different variations, is still around.

from skills to symbols: learning symbolic representations for abstract high-level planning

Author(s): George Konidaris, Leslie Pack Kaelbling, Tomas Lozano-Perez
Venue: Journal of Artificial Intelligence Research
Year Published: 2018
Keywords: probabilistic models, planning
Expert Opinion: Abstraction is an important aspect of robot learning. This paper addresses the issue of learning state abstractions for efficient high-level planning. Importantly, the state abstraction should be induced from the set of skills/options that the robot is capable of executing. The resulting abstraction can then be used to determine if any plan is feasible. The paper addresses both deterministic and probabilistic planning. It is also a great example of learning the preconditions and effects of skills for planning complex tasks.

robots that can adapt like animals

Author(s): Antoine Cully, Jeff Clune, Danesh Tarapore, Jean-Baptiste Mouret
Venue: Nature
Year Published: 2015
Keywords: gaussians, probabilistic models, locomotion
Expert Opinion: This paper shows how you can leverage models in simulation to learn how to recover from damage, without necessarily re-learning the damaged model. Also, the robot learns in very few trials on the real hardware, which is fundamental when working with real robots, where experiments are expensive.

probabilistic movement primitives

Author(s): Alexandros Paraschos, Christian Daniel, Jan Peters, and Gerhard Neumann
Venue: Neural Information Processing Systems Conference (NeurIPS)
Year Published: 2013
Keywords: manipulation, probabilistic models, gaussians, planning, learning from demonstration
Expert Opinion: This work proposes a probabilistic movement primitive representation that can be trained through least-squares regression from demonstrations. The most important feature of this model is its ability to model coupled systems: by exploiting the learned covariance between limbs or other dimensions, whole-body motion can be completed and predicted. The approach also provides a closed-form solution for the optimal feedback controller at each time step, assuming local Gaussian models.
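A rough sketch of the least-squares fit mentioned above, under simplifying assumptions (one degree of freedom, normalized radial basis features; `fit_promp_weights` and its parameters are illustrative, and the full model does more, e.g. conditioning the weight distribution on via-points):

```python
import numpy as np

def fit_promp_weights(demos, n_basis=10, lam=1e-6):
    """Fit basis-function weights to each demonstrated trajectory by
    ridge regression; the mean and covariance of the weights across
    demonstrations define the Gaussian that ProMPs condition on.

    demos: array of shape (n_demos, T), n_demos >= 2."""
    n_demos, T = demos.shape
    z = np.linspace(0.0, 1.0, T)                    # phase variable
    centers = np.linspace(0.0, 1.0, n_basis)
    Phi = np.exp(-0.5 * ((z[:, None] - centers[None, :]) / 0.1) ** 2)
    Phi /= Phi.sum(axis=1, keepdims=True)           # (T, n_basis), normalized RBFs
    # Per-demo least squares: w = (Phi^T Phi + lam I)^-1 Phi^T y
    solver = np.linalg.inv(Phi.T @ Phi + lam * np.eye(n_basis)) @ Phi.T
    W = demos @ solver.T                            # (n_demos, n_basis)
    return W.mean(axis=0), np.cov(W.T)
```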

a review of robot learning for manipulation: challenges, representations, and algorithms

Author(s): Oliver Kroemer, Scott Niekum, George Konidaris
Venue: arXiv
Year Published: 2019
Keywords: survey, probabilistic models, manipulation, reinforcement learning
Expert Opinion: This paper presents an incredibly extensive recent survey on learning in robot manipulation (440 citations!!). Surveys are always useful, especially for new grad students. This one presents a single framework to formalise the robot manipulation problem.

relative entropy policy search

Author(s): Jan Peters, Katharina Mülling, Yasemin Altün
Venue: AAAI Conference on Artificial Intelligence
Year Published: 2010
Keywords: policy gradients, reinforcement learning, probabilistic models
Expert Opinion: This work proposes an information-theoretic, gradient-based policy learning algorithm with adaptive step sizes. These adaptive step sizes, or learning rates, are essential for real-robot implementations, where large jumps in policy updates might damage a real system.
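In rough form, the resulting update reweights samples by exponentiated returns and refits the policy by weighted maximum likelihood (a simplified sketch; the temperature `eta` is found by solving the REPS dual problem, which is omitted here):

```python
import numpy as np

def reps_weights(returns, eta):
    """Exponentiated-return weights for the REPS update: the new
    policy is refit by weighted maximum likelihood, which keeps it
    within a KL bound of the old policy (eta comes from the dual)."""
    adv = returns - returns.max()     # shift for numerical stability
    w = np.exp(adv / eta)
    return w / w.sum()
```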

movement imitation with nonlinear dynamical systems in humanoid robots

Author(s): Auke Jan Ijspeert, Jun Nakanishi, Stefan Schaal
Venue: IEEE International Conference on Robotics and Automation (ICRA)
Year Published: 2002
Keywords: probabilistic models, nonlinear systems, dynamical systems, learning from demonstration, humanoid robotics
Expert Opinion: In this work, a robust and scalable movement primitive learning approach is proposed. The key insight is the embedding of motion trajectories in a second-order dynamical system. Goal attractors enable generalization to different targets and simplify the learning of the model parameters from rewards. Complex motions can be learned through least-squares regression from demonstrations.
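A compressed sketch of such a second-order attractor system (a 1-D discrete movement primitive; the gains and the forcing function `f` are illustrative, and `f` would in practice be fit by least squares from a demonstration):

```python
import numpy as np

def dmp_rollout(y0, g, f, tau=1.0, dt=0.01, steps=1000, alpha=25.0, beta=6.25):
    """Integrate a 1-D dynamic movement primitive:
        tau * dy/dt = z
        tau * dz/dt = alpha * (beta * (g - y) - z) + forcing
    with a phase variable x decaying from 1 to 0 that gates the
    learned forcing term, so the system always converges to goal g."""
    y, z, x = y0, 0.0, 1.0
    traj = []
    for _ in range(steps):
        forcing = f(x) * x * (g - y0)          # vanishes as x -> 0
        z += dt / tau * (alpha * (beta * (g - y) - z) + forcing)
        y += dt / tau * z
        x += dt / tau * (-2.0 * x)             # canonical system
        traj.append(y)
    return np.array(traj)

# With f = lambda x: 0.0 the primitive is a pure goal attractor.
```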

reinforcement learning: a survey

Author(s): Leslie Pack Kaelbling, Michael L. Littman, Andrew W. Moore
Venue: Journal of Artificial Intelligence Research
Year Published: 1996
Keywords: neural networks, survey, reinforcement learning, probabilistic models
Expert Opinion: This work provides a relatively short and easy-to-understand introduction to reinforcement learning. Although it is rather old and therefore does not cover newer approaches to reinforcement learning, it covers the RL problem very well. I usually ask beginning students interested in reinforcement learning to read this paper together with the more recent "Reinforcement Learning in Robotics: A Survey" by Jens Kober, Andrew Bagnell, and Jan Peters, as well as deep learning approaches to reinforcement learning.

policy search for motor primitives in robotics

Author(s): Jens Kober, Jan Peters
Venue: Machine Learning Journal
Year Published: 2009
Keywords: policy gradients, reinforcement learning, learning from demonstration, probabilistic models
Expert Opinion: This work was published before the recent AI boom. It presents impressive imitation and reinforcement learning results that remain remarkable even now.

end-to-end training of deep visuomotor policies

Author(s): Sergey Levine, Chelsea Finn, Trevor Darrell, Pieter Abbeel
Venue: Journal of Machine Learning Research
Year Published: 2016
Keywords: manipulation, probabilistic models, planning, locomotion, learning from demonstration, reinforcement learning, neural networks, visual perception
Expert Opinion: This paper uses deep reinforcement learning to get a PR2 to pretty robustly hang a coat hanger on a clothes rack, insert a block into a shape sorting cube, fit the claw of a toy hammer under a nail, and screw on a bottle cap. There were no prior demonstrations used, the resulting network takes in raw camera images and outputs robot motor torques directly, and each task took less than 300 learning trials to train. Especially given how wobbly/inaccurate PR2 arms are, that is quite impressive.

maximum entropy inverse reinforcement learning

Author(s): Brian D. Ziebart, Andrew Maas, J. Andrew Bagnell, and Anind K. Dey
Venue: AAAI Conference on Artificial Intelligence
Year Published: 2008
Keywords: probabilistic models, learning from demonstration, reinforcement learning
Expert Opinion: This work is one of the first to connect probabilistic inference with robot policy learning. Maximum Entropy Inverse Reinforcement Learning poses the classical Inverse Reinforcement Learning problem, well studied for several years before this work, as maximizing the likelihood of an observed state distribution given an agent that is noisily optimal with respect to an unknown reward function. The inference method, model, and general principles not only inspired later IRL works (such as RelEnt-IRL, GP-IRL, and Guided Cost Learning), they have also been applied in human-robot interaction and general policy search algorithms.
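The core model, stated roughly (simplified notation): trajectories are exponentially more likely the higher their cumulative reward,

$$
P(\tau \mid \theta) = \frac{\exp(\theta^{\top} f_{\tau})}{Z(\theta)},
\qquad
\nabla_{\theta} \log \mathcal{L} = \tilde{f} - \sum_{\tau} P(\tau \mid \theta) \, f_{\tau},
$$

where $f_{\tau}$ are the feature counts of trajectory $\tau$ and $\tilde{f}$ the empirical feature counts of the demonstrations, so the likelihood gradient vanishes exactly when expected features match the demonstrations.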

gaussian processes for data-efficient learning in robotics and control

Author(s): Marc Peter Deisenroth, Dieter Fox, Carl Edward Rasmussen
Venue: IEEE Transactions on Pattern Analysis and Machine Intelligence
Year Published: 2017
Keywords: gaussians, dynamical systems, probabilistic models, reinforcement learning
Expert Opinion: This paper shows the power of model-based reinforcement learning for robot control. It nicely illustrates the power of Gaussian Processes to capture the uncertainty and demonstrates how to leverage it in a highly data-efficient reinforcement learning algorithm. Overall, PILCO (the algorithm described in this paper) might be the most data-efficient algorithm I know. Please note that conference versions of this paper were published at ICML (2011 - PILCO: A model-based and data-efficient approach to policy search) and RSS (2011 - Learning to Control a Low-Cost Manipulator using Data-Efficient Reinforcement Learning).
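As a toy illustration of the uncertainty-aware one-step prediction at the heart of the method (using scikit-learn's GP regressor as a stand-in and made-up data; PILCO itself also propagates the predictive distribution through time via moment matching):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
# Toy transitions: learn the state change from (state, action) pairs.
X = rng.uniform(-1.0, 1.0, size=(30, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.01 * rng.standard_normal(30)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(X, y)
# Predictive mean and standard deviation at a query (state, action):
mean, std = gp.predict(np.array([[0.3, -0.2]]), return_std=True)
```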

square root sam: simultaneous localization and mapping via square root information smoothing

Author(s): Frank Dellaert, Michael Kaess
Venue: International Journal of Robotics Research
Year Published: 2006
Keywords: manipulation, planning, mobile robots, state estimation, visual perception, probabilistic models
Expert Opinion: This paper, as well as the follow-up iSAM2 paper, focuses on treating localization and mapping problems as nonlinear least squares and then optimizing as efficiently as possible. This is a powerful technique that serves as the backbone for many SAM solvers and can be applied more generally to all sorts of inference problems in robotics. Techniques building on this work have been used in planning, manipulation, and control.
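A bare-bones Gauss-Newton loop for the nonlinear least-squares view described above (`residuals` and `jacobian` are placeholders for the stacked factor-graph terms; the paper's contribution is exploiting the sparse problem structure with square-root information factorization, which this dense sketch ignores):

```python
import numpy as np

def gauss_newton(x, residuals, jacobian, iters=10, tol=1e-9):
    """Minimize ||r(x)||^2 by repeated linearization: solve
    J dx = -r in the least-squares sense, then step."""
    for _ in range(iters):
        r, J = residuals(x), jacobian(x)
        dx, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x
```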

a bayesian view on motor control and planning

Author(s): Marc Toussaint, Christian Goerick
Venue: Studies in Computational Intelligence (SCI, volume 264)
Year Published: 2010
Keywords: planning, probabilistic models
Expert Opinion: This paper nicely introduces the relation between classical robot control algorithms and probabilistic inference. Not all of the introduced concepts were novel, but the paper contains nice examples and a good overview, and it helped me start to think about robot control in a new way.

planning and acting in partially observable stochastic domains

Author(s): Leslie Pack Kaelbling, Michael L. Littman, Anthony R. Cassandra
Venue: Artificial Intelligence, vol. 101
Year Published: 1998
Keywords: probabilistic models, planning, state estimation
Expert Opinion: This paper provides an easy-to-understand introduction to MDPs and POMDPs, which are the basis for understanding Reinforcement Learning and Bayesian Reinforcement Learning, two learning techniques commonly used to combine planning and learning in robotics. I usually ask beginning research students (honours / 1st-year MPhil/PhD) to read this paper.
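The discrete Bayes-filter belief update that the POMDP formulation rests on (a direct transcription of the standard equations; the array layout is my own convention):

```python
import numpy as np

def belief_update(b, a, o, T, O):
    """b'(s') ~ O[a][s', o] * sum_s T[a][s, s'] * b(s), normalized.

    b : belief over states, shape (S,)
    T : T[a][s, s'] = P(s' | s, a)
    O : O[a][s', o] = P(o | s', a)"""
    b_pred = b @ T[a]                # predict through the dynamics
    b_new = b_pred * O[a][:, o]      # weight by observation likelihood
    return b_new / b_new.sum()
```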

deep reinforcement learning in a handful of trials using probabilistic dynamics models

Author(s): Kurtland Chua, Roberto Calandra, Rowan McAllister, Sergey Levine
Venue: Neural Information Processing Systems Conference (NeurIPS)
Year Published: 2018
Keywords: reinforcement learning, dynamical systems, probabilistic models, optimal control
Expert Opinion: Model-based reinforcement learning had a reputation for not performing well compared to model-free methods on complicated problems. Conceptually, however, model-based methods have many advantages when it comes to data efficiency and transfer between tasks. This paper shows that, with ensemble models, state-of-the-art robot learning benchmarks (the Gym MuJoCo environments) can be solved with high performance in significantly fewer steps. This data efficiency is particularly important for real-robot applications.
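A compact sketch of the ensemble prediction the opinion highlights (each member model is assumed to output a mean and a variance; the combination rule below mirrors moment matching of the resulting mixture, while network architectures and training are omitted):

```python
import numpy as np

def ensemble_predict(models, x):
    """Combine an ensemble of probabilistic one-step models.
    Each model maps x -> (mean, var). Disagreement between member
    means captures epistemic uncertainty; the per-member variances
    capture aleatoric noise. Total variance is the mixture moment."""
    means = np.array([m(x)[0] for m in models])
    vars_ = np.array([m(x)[1] for m in models])
    mean = means.mean(axis=0)
    var = vars_.mean(axis=0) + means.var(axis=0)
    return mean, var
```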
