Found 17 results.

grounding semantic categories in behavioral interactions: experiments with 100 objects

Author(s): Jivko Sinapov, Connor Schenck, Kerrick Staley, Vladimir Sukhoy, Alexander Stoytchev
Venue: Robotics and Autonomous Systems
Year Published: 2012
Keywords: visual perception, manipulation
Expert Opinion: Interactive perception and multimodal sensing are fundamental aspects of robotics and robot learning. The ability to execute actions to interact with the environment provides robots with a rich source of information, especially when combined with haptic, visual, and auditory feedback. Grounding object representations in the robot's actions provides the robot with representations that are not only well suited for future manipulation tasks, but that the robot can estimate through autonomous experimentation. The paper also touches on actively selecting actions to quickly reduce uncertainty.

robotics, vision and control - fundamental algorithms in matlab

Author(s): Peter Corke
Venue: Book
Year Published: 2015
Keywords: visual perception
Expert Opinion: I've encouraged countless students to read this book. It provides a broad overview of robotics, with inline code samples that instantly generate interactive demos using the author's robotics toolbox for Matlab. I personally dislike Matlab, but the quality of the author's toolbox and its illustrative power were worth the Matlab pain.

pilco: a model-based and data-efficient approach to policy search

Author(s): Marc Peter Deisenroth, Carl Edward Rasmussen
Venue: International Conference on Machine Learning
Year Published: 2011
Keywords: state estimation, reinforcement learning, probabilistic models, gaussians, dynamical systems, visual perception, policy gradients
Expert Opinion: The paper shows just how data-efficient model-based RL control methods can actually be. PILCO is an idea that, in various forms, is still around.

robotic grasping of novel objects using vision

Author(s): Ashutosh Saxena, Justin Driemeyer, Andrew Y. Ng
Venue: International Journal of Robotics Research
Year Published: 2008
Keywords: neural networks, dynamical systems, visual perception, learning from demonstration, manipulation, planning
Expert Opinion: One of the first papers to use general visual features for grasping.

domain randomization for transferring deep neural networks from simulation to the real world

Author(s): Josh Tobin, Rachel Fong, Alex Ray, Jonas Schneider, Wojciech Zaremba, Pieter Abbeel
Venue: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Year Published: 2017
Keywords: visual perception, dynamical systems, neural networks, reinforcement learning
Expert Opinion: The work focuses on one of the most important problems related to utilizing CNNs for robotics: transferring policies from simulation to the real world. Effective solutions are presented, together with promising results.

data-driven grasp synthesis - a survey

Author(s): Jeannette Bohg, Antonio Morales, Tamim Asfour, Danica Kragic
Venue: IEEE Transactions on Robotics (Volume 30, Issue 2)
Year Published: 2014
Keywords: survey, visual perception, manipulation
Expert Opinion: A must-read for anyone interested in grasping. Great survey on data-driven methods for grasping of both known and unknown objects, plus a review of, and connections to, 'traditional' analytical methods!

learning grasping points with shape context

Author(s): Jeannette Bohg, Danica Kragic
Venue: International Conference on Advanced Robotics
Year Published: 2009
Keywords: planning, manipulation, visual perception
Expert Opinion: This is one of the first works in the literature to utilize machine learning for the robotic manipulation problem. The proposed framework is still useful for designing similar robot learning solutions. The particular importance of this work lies in its use of a global representation of the target object (the goal) for manipulation planning.

deep residual learning for image recognition

Author(s): Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
Venue: IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Year Published: 2015
Keywords: neural networks, visual perception
Expert Opinion: Because it introduces a way to train substantially deeper networks and thus provides substantially better results.

affordances in psychology, neuroscience and robotics: a survey

Author(s): Lorenzo Jamone, Emre Ugur, Angelo Cangelosi, Luciano Fadiga, Alexandre Bernardino, Justus Piater and Jose Santos-Victor
Venue: IEEE Transactions on Cognitive and Developmental Systems
Year Published: 2018
Keywords: survey, visual perception, mobile robots, reinforcement learning
Expert Opinion: 'Affordance' is an important term for robot learning, but also one that tends to be overloaded and can lead to confusion. If an object allows an agent to perform an action, then the object is said to afford the action to that agent. Affordances can generally be learned autonomously and are thus a fundamental aspect of self-supervised learning for autonomous robots. The nuances of the term, however, are still widely discussed in robotics and other fields. As a result, one should be aware of the ambiguity and different perspectives regarding the term when talking about affordances. This survey paper discusses some of the nuanced interpretations of the term.

cognitive developmental robotics: a survey

Author(s): Minoru Asada, Koh Hosoda, Yasuo Kuniyoshi, Hiroshi Ishiguro, Toshio Inui, Yuichiro Yoshikawa, Masaki Ogino, and Chisato Yoshida
Venue: IEEE Transactions on Autonomous Mental Development
Year Published: 2009
Keywords: survey, humanoid robotics, cognitive sciences, visual perception
Expert Opinion: I really like the overview of cognitive robotics: where learning can be applied, and which parts are essential to learning and cognitive (artificial) systems.

end-to-end training of deep visuomotor policies

Author(s): Sergey Levine, Chelsea Finn, Trevor Darrell, Pieter Abbeel
Venue: Journal of Machine Learning Research
Year Published: 2016
Keywords: manipulation, probabilistic models, planning, locomotion, learning from demonstration, reinforcement learning, neural networks, visual perception
Expert Opinion: It introduced end-to-end training with impressive results going from pixels to torques on several interesting tasks.

imagenet classification with deep convolutional neural networks

Author(s): Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton
Venue: Advances in Neural Information Processing Systems (NIPS)
Year Published: 2012
Keywords: neural networks, visual perception
Expert Opinion: Because it significantly boosted perception, and deep learning for robot vision these days relies heavily on this work.

the cityscapes dataset for semantic urban scene understanding

Author(s): Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, Bernt Schiele
Venue: IEEE Conference on Computer Vision and Pattern Recognition
Year Published: 2016
Keywords: visual perception, neural networks, mobile robots
Expert Opinion: Deep Learning has completely transformed the Robot Learning field in recent years. Whereas Computer Vision applications typically have access to the large volumes of data necessary to train DNNs, Robotics applications generally suffer from the problem of collecting enough data for learning tasks. The Cityscapes dataset is one example of a great effort to provide relevant data for Robot (or autonomous vehicle) Learning.

a large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation

Author(s): Nikolaus Mayer, Eddy Ilg, Philip Hausser, Philipp Fischer, Daniel Cremers, Alexey Dosovitskiy, Thomas Brox
Venue: IEEE Conference on Computer Vision and Pattern Recognition
Year Published: 2015
Keywords: state estimation, neural networks, visual perception
Expert Opinion: Deep Learning has completely transformed the Robot Learning field in recent years. Whereas Computer Vision applications typically have access to the large volumes of data necessary to train DNNs, Robotics applications generally suffer from the problem of collecting enough data for learning tasks. A very useful approach is to generate synthetic data; the FlowNet dataset is such an example.

assessing grasp stability based on learning and haptic data

Author(s): Yasemin Bekiroglu, Janne Laaksonen, Jimmy Alison Jørgensen, Ville Kyrki and Danica Kragic
Venue: IEEE Transactions on Robotics
Year Published: 2011
Keywords: manipulation, visual perception, contact dynamics
Expert Opinion: "Learning to grasp" can actually imply a lot of different learning problems. We often think about grasp synthesis, i.e., the problem of determining where to place the hand to achieve a stable grasp (I strongly recommend reading 'Data-Driven Grasp Synthesis - A Survey' for more on this topic). This paper focuses on the important problem of using multiple sensor modalities to determine whether an executed grasp attempt resulted in a stable grasp. As robot learning researchers, it is important to consider how problems can be approached from different directions and how different information sources can be incorporated and change the problem. One should also think about robustness and consider how learning factors into monitoring skill executions for errors.

adaptive representation of dynamics during learning a motor task

Author(s): Reza Shadmehr and Ferdinando A. Mussa-Ivaldi
Venue: The Journal of Neuroscience
Year Published: 1994
Keywords: dynamical systems, visual perception, planning
Expert Opinion: The reason I picked these articles and books is that I think robot learning cannot be separated from the cognitive architecture supporting the learning processes. The first two references highlight the importance and role of embodiment (in humans and robots) and the fact that, in physical systems, part of the learning process is embedded in the morphology and materials.

square root sam: simultaneous localization and mapping via square root information smoothing

Author(s): Frank Dellaert, Michael Kaess
Venue: International Journal of Robotics Research
Year Published: 2006
Keywords: manipulation, planning, mobile robots, state estimation, visual perception, probabilistic models
Expert Opinion: This paper, as well as the follow-up iSAM2 paper, focuses on treating localization and mapping problems as nonlinear least squares and then optimizing as efficiently as possible. This is a powerful technique that serves as the backbone for many SAM solvers and can be applied more generally to all sorts of inference problems in robotics. Techniques building on this work have been used in planning, manipulation, and control.
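The least-squares view described above can be illustrated with a minimal, hypothetical example (not code from the paper): a toy 1-D smoothing problem with three scalar poses, a prior, and two odometry measurements, solved via QR factorization, whose upper-triangular R factor plays the role of the square-root information matrix.

```python
import numpy as np

# Toy 1-D SLAM smoothing problem (illustrative sketch only).
# Unknowns: scalar poses x0, x1, x2. Each measurement contributes one
# row to the linear(ized) system A x = b:
A = np.array([
    [ 1.0,  0.0, 0.0],   # prior:    x0      ~ 0
    [-1.0,  1.0, 0.0],   # odometry: x1 - x0 ~ 1
    [ 0.0, -1.0, 1.0],   # odometry: x2 - x1 ~ 1
])
b = np.array([0.0, 1.0, 1.0])

# QR factorization A = Q R turns the least-squares problem into the
# triangular system R x = Q^T b, solved by back-substitution; R is the
# square-root information matrix.
Q, R = np.linalg.qr(A)
x = np.linalg.solve(R, Q.T @ b)

print(x)  # smoothed trajectory, approximately [0, 1, 2]
```

In the real setting A is large and sparse, the system is relinearized around the current estimate, and variable ordering is chosen to keep R sparse, but the core step is exactly this factorization.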