Found 14 results.




planning and acting in partially observable stochastic domains

Author(s): Leslie Pack Kaelbling, Michael L. Littman, Anthony R. Cassandra
Venue: Artificial Intelligence 101
Year Published: 1998
Keywords: probabilistic models, planning, state estimation
Expert Opinion: This paper provides an easy-to-understand introduction to MDPs and POMDPs, which are the basis for understanding Reinforcement Learning and Bayesian Reinforcement Learning, two learning techniques commonly used to combine planning and learning in robotics. I usually ask beginning research students (honours / 1st-year MPhil/PhD) to read this paper.
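The belief-state update at the heart of the POMDP machinery this paper introduces can be sketched in a few lines. Below is a minimal discrete Bayes-filter update; the two-state model and every probability in it are made-up numbers for illustration, not from the paper.

```python
def belief_update(belief, action, observation, T, O):
    """Bayes-filter update: b'(s') ∝ O[s'][o] * sum_s T[s][a][s'] * b(s)."""
    n = len(belief)
    # Predict: push the belief through the transition model.
    predicted = [sum(T[s][action][s2] * belief[s] for s in range(n)) for s2 in range(n)]
    # Update: reweight by the observation likelihood, then normalize.
    unnorm = [O[s2][observation] * predicted[s2] for s2 in range(n)]
    z = sum(unnorm)
    return [p / z for p in unnorm]

# Two states, one action, two observations (hypothetical numbers).
T = [  # T[s][a][s']
    [[0.9, 0.1]],
    [[0.2, 0.8]],
]
O = [  # O[s'][o]
    [0.8, 0.2],
    [0.3, 0.7],
]
b = belief_update([0.5, 0.5], action=0, observation=0, T=T, O=O)
```

Repeating this update after every action/observation pair is exactly the "state estimation" layer on which the paper's planning machinery sits.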

robot learning from demonstration

Author(s): Christopher G. Atkeson, Stefan Schaal
Venue: International Conference on Machine Learning
Year Published: 1997
Keywords: learning from demonstration, state estimation
Expert Opinion: Introduces some of the key ideas in learning for robotics.

stanley: the robot that won the darpa grand challenge

Author(s): Sebastian Thrun, Mike Montemerlo, Hendrik Dahlkamp, David Stavens, Andrei Aron, James Diebel, Philip Fong, John Gale, Morgan Halpenny, Gabriel Hoffmann, Kenny Lau, Celia Oakley, Mark Palatucci, Vaughan Pratt, and Pascal Stang
Venue: Journal of Robotic Systems
Year Published: 2006
Keywords: gaussians, state estimation
Expert Opinion: There would not be this much focus on robotics and learning if not for self-driving cars. Self-driving cars would not be a thing without Stanley.

a large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation

Author(s): Nikolaus Mayer, Eddy Ilg, Philip Hausser, Philipp Fischer, Daniel Cremers, Alexey Dosovitskiy, Thomas Brox
Venue: IEEE Conference on Computer Vision and Pattern Recognition
Year Published: 2016
Keywords: state estimation, neural networks, visual perception
Expert Opinion: Deep Learning has completely transformed the Robot Learning field in the last year. Whereas Computer Vision applications typically have access to the large volumes of data necessary to train DNNs, Robotics applications generally suffer from the problem of collecting enough data for learning tasks. A very useful approach is to generate synthetic data; the FlowNet dataset is one such example.

inverted autonomous helicopter flight via reinforcement learning

Author(s): Andrew Y. Ng, H. Jin Kim, Michael I. Jordan, and Shankar Sastry
Venue: International Symposium on Experimental Robotics
Year Published: 2003
Keywords: learning from demonstration, reinforcement learning, state estimation
Expert Opinion: Totally transformative.

iterative linearization methods for approximately optimal control and estimation of non-linear stochastic system

Author(s): W. Li, E. Todorov
Venue: International Journal of Control
Year Published: 2007
Keywords: planning, nonlinear systems, optimal control, dynamical systems, state estimation
Expert Opinion: This paper presents one of the most effective and fundamental optimal control frameworks for nonlinear systems. This framework (including DDP) and its extensions have been widely applied to motion planning and generation in complex robotic systems. The work is quite influential in the field of motion generation and control of robotic systems: after its publication, there have been many follow-up studies applying this kind of optimal control approach to robot motion control.

square root sam: simultaneous localization and mapping via square root information smoothing

Author(s): Frank Dellaert, Michael Kaess
Venue: International Journal of Robotics Research
Year Published: 2006
Keywords: manipulation, planning, mobile robots, state estimation, visual perception, probabilistic models
Expert Opinion: This paper, as well as the follow-up iSAM2 paper, focuses on treating localization and mapping problems as nonlinear least squares and then optimizing as efficiently as possible. This is a powerful technique that serves as the backbone for many SAM solvers and can be applied more generally to all sorts of inference problems in robotics. Techniques building on this work have been used in planning, manipulation, and control.
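The core idea the opinion describes, posing the problem as nonlinear least squares and iterating linearize-then-solve steps, can be sketched with a toy one-parameter Gauss-Newton loop. The model and data below are hypothetical; the paper's actual contribution is doing this at scale by exploiting sparsity via square-root information factorization.

```python
import math

def gauss_newton(theta, xs, ys, iters=10):
    """Fit y = exp(theta * x) by Gauss-Newton: linearize residuals, solve, repeat."""
    for _ in range(iters):
        # Residuals r_i and Jacobian entries J_i = d r_i / d theta.
        r = [math.exp(theta * x) - y for x, y in zip(xs, ys)]
        J = [x * math.exp(theta * x) for x in xs]
        # Normal-equations step: theta <- theta - (J^T J)^{-1} J^T r (scalar case).
        theta -= sum(j * ri for j, ri in zip(J, r)) / sum(j * j for j in J)
    return theta

xs = [0.0, 1.0, 2.0]
ys = [math.exp(0.5 * x) for x in xs]  # noiseless data generated with theta = 0.5
theta = gauss_newton(0.0, xs, ys)    # converges to 0.5
```

In SAM the unknowns are all robot poses and landmarks at once, so the normal equations become a large sparse system rather than a scalar division, but the iteration is the same.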

pilco: a model-based and data-efficient approach to policy search

Author(s): Marc Peter Deisenroth, Carl Edward Rasmussen
Venue: International Conference on Machine Learning
Year Published: 2011
Keywords: state estimation, reinforcement learning, probabilistic models, gaussians, dynamical systems, visual perception, policy gradients
Expert Opinion: One of the first papers to really take uncertainty seriously in the RL + robotics space. Probably the first paper to convince me that model-based RL is worthwhile to think about, even in hard-to-model robotics domains.

optimization-based iterative learning for precise quadrocopter trajectory tracking

Author(s): Angela Schoellig, Raffaello D'Andrea
Venue: Autonomous Robots Journal
Year Published: 2012
Keywords: state estimation, optimal control
Expert Opinion: The authors propose an iterative approach to improve the flight controller of a quadrocopter during repetitive task execution. While, at a high level, the paper has the same general setting as recent work in policy learning and robotics, it takes a very different approach that is grounded in control theory and state estimation. In my opinion, this paper is one of the best examples of "model-based" learning in robotics from both an algorithmic and a systems perspective. The task studied is dynamic, has non-trivial dynamic disturbances, and the proposed control technique is theoretically justified while being simple enough to analyze---also, it worked.
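The paper's controller is an optimization-based variant of iterative learning control (ILC). As a much simpler illustration of the underlying "improve over repetitions" idea, here is the classic first-order ILC update u_{k+1} = u_k + L * e_k on a made-up static plant with a trial-repeating disturbance; the actual paper uses a lifted dynamic model and solves an optimization problem per iteration.

```python
def run_trial(u, plant_gain=0.8, disturbance=None):
    """Execute one trial on a toy static plant: y(t) = g * u(t) + d(t)."""
    if disturbance is None:
        disturbance = [0.0] * len(u)
    return [plant_gain * ui + di for ui, di in zip(u, disturbance)]

reference = [0.0, 0.5, 1.0, 0.5, 0.0]
disturbance = [0.1, -0.05, 0.1, 0.0, 0.05]  # repeats identically every trial
u = [0.0] * len(reference)
L = 1.0  # learning gain (hypothetical; convergence needs |1 - L*g| < 1)
for trial in range(30):
    y = run_trial(u, disturbance=disturbance)
    e = [r - yi for r, yi in zip(reference, y)]      # tracking error this trial
    u = [ui + L * ei for ui, ei in zip(u, e)]        # ILC update between trials
max_err = max(abs(r - yi) for r, yi in zip(reference, run_trial(u, disturbance=disturbance)))
```

Because the disturbance repeats every trial, the error contracts by the factor |1 - L*g| per repetition and the learned input compensates for it exactly in the limit.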

particle filter networks with application to visual localization

Author(s): Peter Karkus, David Hsu, Wee Sun Lee
Venue: Proceedings of the 2nd Conference on Robot Learning
Year Published: 2018
Keywords: state estimation, neural networks, mobile robots
Expert Opinion: Makes clear how classical algorithmic ideas and end-to-end learning can be combined.
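The classical backbone that Particle Filter Networks makes differentiable is the bootstrap particle filter. A minimal 1-D toy version is sketched below; the motion and measurement models here are invented for illustration, whereas the paper learns those components end-to-end with neural networks.

```python
import math, random

def particle_filter_step(particles, weights, control, measurement,
                         motion_noise=0.5, meas_noise=1.0):
    """One predict/update/resample cycle of a bootstrap particle filter (1-D toy)."""
    # Predict: propagate each particle through a noisy motion model.
    particles = [p + control + random.gauss(0.0, motion_noise) for p in particles]
    # Update: reweight by a Gaussian measurement likelihood.
    weights = [w * math.exp(-0.5 * ((measurement - p) / meas_noise) ** 2)
               for p, w in zip(particles, weights)]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample: draw particles in proportion to their weights.
    particles = random.choices(particles, weights=weights, k=len(particles))
    return particles, [1.0 / len(particles)] * len(particles)

random.seed(0)
n = 1000
particles = [random.uniform(-10.0, 10.0) for _ in range(n)]
weights = [1.0 / n] * n
true_x = 0.0
for _ in range(20):
    true_x += 1.0                          # robot moves +1 each step
    z = true_x + random.gauss(0.0, 1.0)    # noisy position measurement
    particles, weights = particle_filter_step(particles, weights, 1.0, z)
estimate = sum(particles) / n              # posterior mean, close to true_x = 20
```

PF-net's contribution is replacing the hand-designed motion and observation models in this loop with trainable networks while keeping the algorithmic structure intact.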

a new approach to linear filtering and prediction problems

Author(s): R. E. Kalman
Venue: Transactions of the ASME–Journal of Basic Engineering
Year Published: 1960
Keywords: probabilistic models, optimal control, dynamical systems, state estimation
Expert Opinion: Important to point out that this is a Bayesian (probabilistic) approach, long before Bayesian approaches became popular in ML.
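The filter itself fits in a few lines for the scalar case. This sketch assumes a random-walk state model with made-up noise values, purely to show the Bayesian predict/update structure the opinion points to.

```python
def kalman_step(x, p, z, q=0.01, r=1.0):
    """One predict/update cycle of a 1-D Kalman filter (random-walk state model)."""
    # Predict: the state stays put; uncertainty grows by the process noise q.
    p = p + q
    # Update: blend prediction and measurement z via the Kalman gain k.
    k = p / (p + r)
    x = x + k * (z - x)
    p = (1 - k) * p
    return x, p

x, p = 0.0, 100.0  # vague prior on the state
for z in [1.2, 0.8, 1.1, 0.9, 1.0]:
    x, p = kalman_step(x, p, z)
# x settles near 1.0 and p shrinks as evidence accumulates
```

The gain k is exactly the posterior weighting a Bayesian update assigns to new evidence versus the prior, which is the point the opinion makes about this being a probabilistic approach avant la lettre.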

optimal control and estimation

Author(s): Robert Stengel
Venue: Book
Year Published: 1994
Keywords: optimal control, state estimation
Expert Opinion: Robotics Learning Practitioners must be aware of and understand Optimal Control. :)