Progress and Open Problems in Motion Planning and Navigation for Humanoids

Motivation and Objectives

Humanoid robots have become increasingly popular, as illustrated by the growing number of available platforms and the large body of high-quality publications on navigation and motion planning for humanoids. Recently, much progress has been made in 3D perception, efficient environment representation, fast collision checking, and motion planning for navigation and manipulation with humanoids, also under uncertainty and real-time constraints. These techniques work well in their individual application scenarios; however, no system currently exists that combines the individual approaches. Thus, we are still far from deploying a humanoid robot in the real world. The goal of this workshop is to identify gaps between the research directions and to discuss which aspects need to be considered when combining the different approaches, so as to enable humanoids to reliably act and navigate in real environments for extended periods of time. This objective is in line with some of the target scenarios of the new DARPA Robotics Challenge program, indicating the relevance of this research area.

The workshop will feature talks by leading researchers, live demonstrations, a poster session, and a panel discussion, with the aim of identifying gaps between the research foci, formulating future research directions, and possibly establishing new collaborations.

Schedule

Note that the abstracts of all invited presentations can be found below.

8:45   Maren Bennewitz / Olivier Stasse: Introduction
9:00   Koichi Nishiwaki (AIST, Japan): Autonomous navigation of a full-size humanoid over unknown rough terrains
9:30   Armin Hornung (University of Freiburg, Germany): Search-Based Footstep Planning
10:00  Coffee break
10:30  Daniel Maier (University of Freiburg, Germany): Depth-Camera-Based Navigation Techniques for Humanoid Robots
11:00  Jean-Paul Laumond (LAAS-CNRS, Toulouse, France): Is it better to walk with the head than with the feet?
11:30  Dmitry Berenson (Worcester Polytechnic Institute, USA): Manipulation Planning for Humanoids: Finding Constrained Goal Configurations and Planning Low-Cost Feasible Paths
12:00  Eiichi Yoshida (AIST, Japan): Reactive Humanoid Motion Planning and Control for Low-level Autonomy
12:30  Lunch
14:00  Interactive presentations of all workshops
14:00  Matthew Klingensmith, Thomas Galluzzo, Christopher Dellin, Moslem Kazemi, J. Andrew Bagnell, Nancy Pollard (CMU, USA): Interactive presentation: Closed-loop Servoing using Real-time Markerless Arm Tracking
15:30  Coffee break
16:00  Sebastien Dalibard (Aldebaran Robotics, France): Dynamic Walking and Whole-Body Motion Planning for Humanoid Robots: An Integrated Approach
16:30  Nicolas Perrin (IIT, Genova, Italy): Towards Continuous Approaches for Multi-Contact Planning
17:00  Chonhyon Park, Jia Pan, and Dinesh Manocha (Univ. of North Carolina at Chapel Hill, USA): Hierarchical Optimization-based Planning for High-DOF Robots
17:20  Yajia Zhang, Jingru Luo, and Kris Hauser (Indiana University Bloomington, USA): Planner-aided Design of Ladder Climbing Capabilities for a DARPA Robotics Challenge Humanoid
17:40  Oscar E. Ramos, Mauricio Garcia, Nicolas Mansard, Olivier Stasse, Jean-Bernard Hayet, and Philippe Soueres (LAAS-CNRS, France / Guanajuato, Mexico): Towards Reactive Vision-guided Walking on Rough Terrain: An Inverse-Dynamics Based Approach
18:00  All: Summary

Invited Speakers

We have an impressive list of invited speakers, whose presentation titles and abstracts are given below.

Presentation Titles and Abstracts

Abderrahmane Kheddar, CNRS LIRMM, Montpellier: Planning of Human-Humanoid Haptic Joint Actions: Lessons learned from a beam transportation case study (canceled!)

This talk reviews our recent achievements in programming a humanoid robot (HRP-2) to jointly transport a beam with a human partner. We show that good performance and interactive rates are obtained using a proactive behavior based on constant-velocity phase primitives, which we observed by monitoring human dyads performing the same task. We will then discuss our first trials in combining vision and force to achieve the task with height stabilization, and outline developments in the pipeline toward full accomplishment of human-humanoid physical interaction tasks.

Jean-Paul Laumond, LAAS-CNRS, Toulouse: Is it better to walk with the head than with the feet?

This talk will report on current research promoting a top-down approach to bipedal walking inspired by neurophysiological models, as opposed to bottom-up approaches based on ZMP preview control. The approach starts from passive walking studies. Based on the metastability notion introduced by Tedrake et al., we show how an actuated head contributes to walking stabilization in the face of terrain perturbations. This research is conducted in collaboration with M. Benallegue and A. Berthoz from LPPA, Paris.

Dmitry Berenson, Worcester Polytechnic Institute: Manipulation Planning for Humanoids: Finding Constrained Goal Configurations and Planning Low-Cost Feasible Paths (PDF of extended abstract)

This talk addresses two key problems in motion planning for humanoids: 1) finding a feasible goal configuration in cluttered environments that obeys humanoid constraints (e.g., balance, closed-chain kinematics), and 2) planning with soft constraints, which can enable a wide range of humanoid behavior. I will present our work in these two areas and discuss prospects for future exploration.

Koichi Nishiwaki, AIST: Autonomous navigation of a full-size humanoid over unknown rough terrains

Autonomous navigation of a humanoid over unknown rough terrain is presented as an integration of online terrain-shape-map generation, online footstep planning, and robust walking control. Terrain shape is measured with a scanning laser range sensor mounted on the robot's torso via a swinging mechanism. A terrain height map around the robot is generated from the sensor data, achieving an accuracy of a few centimeters. A sequence of footprints leading to the goal is planned by applying A* search to the foot-transition set. Inclination, roughness, and other properties are evaluated for each candidate stepping position to judge whether stepping on it is feasible and to assign the position's cost. The map region swept during the transition of the free-leg foot is also evaluated to judge the transition's feasibility and to assign its cost. Online planning is made possible by a planner that can evaluate more than 25,000 candidates within a single step, which is the usual cycle at which the plan is updated. Walking control consists of repetitive pattern generation at a 40-ms cycle and ground reaction force control at a 1-ms cycle. The pattern generator takes the actual robot motion, estimated mainly from the IMU, as the initial condition and generates a dynamically balanced walking pattern based on the ZMP criterion. The ground reaction force controller tries to realize the desired ground reaction force in the absolute coordinate system as computed from the generated pattern. Since the lower-level controller controls force rather than position, the actual motion diverges from the generated one; this is compensated by the repetitive online pattern generation. This framework makes the walking controller robust to errors between the actual and assumed terrain shape.
It is hard to judge automatically from shape information alone whether a stepping position is safe or permitted. We therefore developed an interface for giving a guide path to the system using mixed reality technology, and extended the footstep planner to search for a sequence of footprints to the goal that stays near the guide curve. Experiments with the full-size humanoid HRP-2 will be shown as a demonstration of the system, and future topics will be discussed.
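The A*-over-foot-transitions idea in the abstract can be illustrated with a small sketch. Everything below (the grid discretization, the transition set, the cost weights, and the 4-cm step-height limit) is a made-up assumption for illustration, not the speaker's implementation:

```python
import heapq

# Illustrative A* over a discrete foot-transition set on a terrain height map.
# Cells are (row, col) indices into the height map; each transition is a
# relative footprint displacement. All parameters are hypothetical.
TRANSITIONS = [(1, 0), (1, 1), (1, -1), (0, 1), (0, -1)]

def plan_footsteps(height_map, start, goal, max_step_height=0.04):
    """Return a footprint sequence from start to goal, or None if none exists."""
    rows, cols = len(height_map), len(height_map[0])

    def heuristic(cell):  # Manhattan distance, admissible for unit step costs
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(heuristic(start), 0.0, start, [start])]
    visited = set()
    while open_set:
        _, g, cell, path = heapq.heappop(open_set)
        if cell == goal:
            return path
        if cell in visited:
            continue
        visited.add(cell)
        for dr, dc in TRANSITIONS:
            nxt = (cell[0] + dr, cell[1] + dc)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols):
                continue
            # Feasibility test: reject stepping positions whose height change
            # exceeds what the walking controller could absorb.
            dh = abs(height_map[nxt[0]][nxt[1]] - height_map[cell[0]][cell[1]])
            if dh > max_step_height:
                continue
            cost = 1.0 + 10.0 * dh  # penalize rough transitions
            heapq.heappush(open_set,
                           (g + cost + heuristic(nxt), g + cost, nxt, path + [nxt]))
    return None
```

On flat terrain this returns a direct footprint sequence; a ridge of cells higher than the step-height limit makes the goal unreachable and the planner reports failure.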

Eiichi Yoshida, AIST: Reactive Humanoid Motion Planning and Control for Low-level Autonomy

This talk addresses a reactive motion planning and control framework with low-level autonomy that enables a humanoid robot to perform whole-body tasks in complex environments and to interact reactively with unknown objects or humans. We have developed an efficient method to plan or replan a feasible whole-body reaching path within a second when necessary, as well as a framework for reactive whole-body control assuming multi-modal skin sensors that integrate force, tactile, and proximity sensing. This low-level autonomy improves operability and adaptability for teleoperation and for direct interaction with humans or the environment. The reactive planning and control scheme allows the humanoid to react to unpredicted obstacles, to handle unknown objects, and to interact safely with humans through multiple contacts. We validate the proposed methods in scenarios such as a cluttered plant environment with moving objects and adaptive manipulation of unknown objects, before addressing future issues.

Sebastien Dalibard, Aldebaran Robotics: Dynamic Walking and Whole-Body Motion Planning for Humanoid Robots: An Integrated Approach (PDF of extended abstract)

In this abstract, we present an overview of recent results on dynamic walking and whole-body motion planning for humanoid robots. First, we review randomized algorithms for constrained motion planning and their application to humanoid whole-body motion planning. We then introduce humanoid small-space controllability, a theoretical property relying on dynamic walking. This property leads to a sound method that extends whole-body motion planning algorithms to combined whole-body and walk planning. We illustrate the method with examples on the HRP-2 platform.

Nicolas Perrin, IIT, Genova: Towards Continuous Approaches for Multi-Contact Planning

When a contact between a humanoid robot and the environment is established, the constraints of the equations of motion change abruptly. This discreteness of contacts gives a hybrid nature to humanoid robot motion. Often, the sequence of contacts and the continuous motion of the robot are treated separately: first, a sequence of contacts is determined, and only then is a continuous motion following these contacts planned. However, in some cases, contacts and the motion itself can be considered together and planned with a continuous approach. Several such approaches exist for the problem of planning walking motions, which is a subproblem of multi-contact planning. The first part of this talk will consist of a short review of four recent techniques that plan walking motions and deal with the footsteps, i.e. the contacts, in a special way. In the first, by Kanoun et al., planning footsteps is seen as an inverse kinematics problem. The second, by Dalibard et al., proposes a variant of small-time controllability to decompose the planning into two phases, the first being entirely continuous. In the third technique, by Herdt et al., variables describing the footsteps are integrated into an optimization problem that computes the robot motion. In the fourth, by Perrin et al., a new notion of collision is defined to transform the discrete problem of footstep planning into a continuous one. In the second part of the talk, we will discuss the limitations of these approaches and try to understand why it is difficult to extend them to more general problems of multi-contact planning, but also how they could pave the way towards new algorithms that better handle the specific hybrid nature of multi-contact planning.

Armin Hornung, University of Freiburg: Search-Based Footstep Planning (PDF of corresponding paper)

Efficient footstep planning for humanoid navigation through cluttered environments is still a challenging problem. Obstacles often create local minima in the search space, forcing heuristic planners such as A* to expand large areas, and planning longer footstep paths can take a long time. In this work, we introduce and discuss several solutions to these problems. For navigation, finding the optimal path initially is often not needed, since the path can be improved while walking. Anytime search-based planning based on anytime repairing A* or randomized A* therefore provides promising functionality: it yields efficient paths with provable suboptimality within short planning times. As opposed to completely randomized methods, anytime search-based planners generate paths that are goal-directed and guaranteed to be no more than a certain factor longer than the optimal solution. By adding new stepping capabilities and accounting for the whole body of the robot in the collision check, we extend the footstep planning approach to 3D, enabling a humanoid to step over clutter and climb onto obstacles. We thoroughly evaluated the performance of search-based planning in cluttered environments and for longer paths, and we furthermore provide solutions for efficiently planning long trajectories using an adaptive level-of-detail planning approach.
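The anytime idea described in the abstract, running weighted A* with a shrinking heuristic inflation factor epsilon and keeping the best path found so far, can be sketched as follows. This is a simplified illustration in the spirit of ARA* (a real ARA* reuses search state between iterations; this sketch searches from scratch each time), and the 4-connected obstacle grid stands in for a footstep search space as a made-up assumption:

```python
import heapq

def weighted_astar(grid, start, goal, epsilon):
    """Weighted A* on a 4-connected grid (0 = free, 1 = obstacle).

    With an admissible heuristic, the returned path is at most
    epsilon times longer than the optimal one.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan
    open_set = [(epsilon * h(start), 0, start, [start])]
    best_g = {start: 0}
    while open_set:
        _, g, cell, path = heapq.heappop(open_set)
        if cell == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                if g + 1 < best_g.get(nxt, float("inf")):
                    best_g[nxt] = g + 1
                    heapq.heappush(open_set,
                                   (g + 1 + epsilon * h(nxt), g + 1, nxt, path + [nxt]))
    return None

def anytime_plan(grid, start, goal, epsilons=(3.0, 2.0, 1.0)):
    """Yield (epsilon, best path so far) as the suboptimality bound tightens."""
    best = None
    for eps in epsilons:
        path = weighted_astar(grid, start, goal, eps)
        if path is not None and (best is None or len(path) < len(best)):
            best = path
        yield eps, best
```

Early iterations with large epsilon return a (possibly suboptimal) path quickly; later iterations tighten the bound, and at epsilon = 1.0 the path is optimal.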

Daniel Maier, University of Freiburg: Depth-Camera-Based Navigation Techniques for Humanoid Robots

Perception is a fundamental prerequisite for autonomous, collision-free navigation. It allows a robot to sense obstacles in its vicinity, estimate its pose from the sensor readings, and plan actions based on an environment representation maintained from the observations. In this talk, I will present navigation methods based on depth camera data. The presented techniques include 6D localization in map models, scan matching for reducing drift, construction of 3D and 2.5D obstacle maps, as well as path planning. All techniques have been evaluated and demonstrated on a Nao humanoid equipped with an ASUS Xtion Pro Live consumer-grade depth camera.
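A 2.5D obstacle map of the kind mentioned in the abstract can be sketched as a grid that stores, per cell, the maximum height observed from the depth camera's 3D points. The class name, resolution, and traversability threshold below are illustrative assumptions, not the talk's implementation:

```python
class HeightMap25D:
    """Minimal 2.5D height map: each (x, y) grid cell keeps its max observed z."""

    def __init__(self, resolution=0.05):
        self.resolution = resolution      # cell size in meters (assumed)
        self.cells = {}                   # (ix, iy) -> max height in meters

    def integrate(self, points):
        """Fuse 3D points (x, y, z), in the map frame, into the map."""
        for x, y, z in points:
            key = (int(x // self.resolution), int(y // self.resolution))
            if z > self.cells.get(key, float("-inf")):
                self.cells[key] = z

    def is_traversable(self, x, y, max_height=0.02):
        """A cell is traversable if nothing taller than max_height was seen there."""
        key = (int(x // self.resolution), int(y // self.resolution))
        return self.cells.get(key, 0.0) <= max_height
```

A path planner can then query `is_traversable` per cell; unseen cells are optimistically assumed free here, which is one possible design choice among several.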

Organizers

Maren Bennewitz, University of Freiburg, Germany
Olivier Stasse, LAAS-CNRS, France

Submission

We invite the submission of papers and extended abstracts. Authors of accepted contributions will present their work either in an oral presentation or in an interactive poster session. All accepted contributions will be published on the workshop webpage.

Furthermore, selected papers will be considered for publication in a special issue of the International Journal of Humanoid Robotics (IJHR). We therefore encourage you to submit new, original work.

Submissions must be formatted according to the ICRA paper format. Extended abstracts may be 2-4 pages long; papers up to 6 pages.
Please submit contributions by Email to: icra13humanoids@informatik.uni-freiburg.de

Submission: 31 March 2013 (NEW DEADLINE!)
Notification: 12 April 2013
Final version: 28 April 2013

Topics of interest include (but are not limited to):
- Navigation in complex indoor or outdoor environments
- Perception for humanoids, environment modeling
- Gait and step planning
- Whole-body motion planning
- Multi-contact planning
- Planning for manipulation
- 3D collision avoidance
- ...