Running the object pickup pipeline

Description: How to run the full tabletop object pickup pipeline: robot bring-up, arm navigation, stereo perception, tabletop object detection, interpolated IK, and the object pickup node.

Keywords: grasp, grasping, pickup, manipulation, tabletop

Tutorial Level: ADVANCED

Robot bring-up and setup

Nothing special here; just bring up the robot as you normally would with prX.launch.

Position the robot so that it is looking at the objects on the table. Make sure the objects are within arm's reach, but leave some space between the robot and the table so that the robot can go through its calibration routine.

IMPORTANT: the robot must be calibrated (laser <-> stereo <-> arm). Talk to Vijay if you are unsure which robots are calibrated.

Arm navigation and perception

Start object_grasping.launch from the arm_navigation_pr2 package:

roscd arm_navigation_pr2
roslaunch object_grasping.launch

This will start the planning pipeline: collision maps, FK and IK, planning, move_arm, etc. Note that the launch files called from here let you change various topics and parameters, such as the padding for the collision map.
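If you want to sanity-check that everything came up, you can list the running nodes and services and grep for the components you expect; the exact names depend on your launch files, so treat the patterns below as examples:

rosnode list | grep move_arm
rosservice list | grep -i ik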

This should also start the perception pipeline, using both laser and narrow stereo.

Narrow Stereo

Note that we are using narrow_stereo_textured so if you want to see points in rviz you need to be listening on /narrow_stereo_textured/points.
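To verify that the point cloud is actually streaming, you can check the publishing rate on that topic:

rostopic hz /narrow_stereo_textured/points

If no messages show up, check the projector settings described below.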

If the projector is not started, you can start it with reconfigure_gui:

roscd dynamic_reconfigure
scripts/reconfigure_gui

Select /camera_synchronizer_node from the drop-down list, then set projector_mode to ProjectorOn. Also make sure that narrow_stereo_trig_mode is set to WithProjector. You should be able to see a nice point cloud in rviz.
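If you prefer the command line, the dynparam script in dynamic_reconfigure can set the same parameters; assuming your version ships it and accepts the value names shown in the GUI, something like this should work:

rosrun dynamic_reconfigure dynparam set /camera_synchronizer_node projector_mode ProjectorOn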

You will need to see the table well for anything to work; if you can't see the table in stereo, put a sheet of paper (preferably A3 or larger) under your objects.

Tabletop object detector

This provides the object detection services. Start it with:

roscd tabletop_object_detector
roslaunch launch/tabletop_node.launch
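Once the node is running, you can check that its detection services and topics are up; the grep pattern below is just a guess at the naming, so adjust it to what you see:

rosservice list | grep -i tabletop
rostopic list | grep -i tabletop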

The tabletop_node has two modes of operation: it can run continuously and publish its detections on a topic, or it can perform detection on demand, returning the results from a service call. You can set the operation mode in the launch file.

The information in both cases comes out as model_database/DatabaseModelPose.msg:

int32 model_id
geometry_msgs/PoseStamped pose

The model_id is the unique identifier of the detected model in the model database, and the pose is its location. By default, tabletop_node publishes model poses in the same frame and with the same timestamp as the incoming point clouds.
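You can inspect the full message definition, including the nested pose fields, with rosmsg:

rosmsg show model_database/DatabaseModelPose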

The tabletop node can also publish lots of debug markers; by default, they are published on the /tabletop_detector_markers topic. You can enable or disable the kinds of markers it publishes by editing the launch file.
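To confirm that markers are coming out while detection is running:

rostopic hz /tabletop_detector_markers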

Note that for each cluster of points, the tabletop_node will fit a model, but it only publishes those fits that exceed a quality threshold. If you look at the model fit markers, the ones shown in blue are the good fits that get published or returned as results; the ones shown in red are known to be bad and are discarded.

Interpolated IK server

This generates a path from pre-grasp to grasp using interpolated IK. Start it with (the r argument selects the right arm; use l for the left):

roscd fingertip_reactive_grasp
src/fingertip_reactive_grasp/interpolated_ik_motion_planner.py r
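To check that the planner is up, grep the node and service lists for it; the exact names depend on the version of the code, so the pattern below is only a guess:

rosnode list | grep interpolated
rosservice list | grep -i interpolated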

Object pickup

The object pickup node ties all of the pieces above together. It has a new and improved console-based interface.

Start it with:

roscd object_pickup
roslaunch launch/object_pickup_node.launch
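As with the other components, you can verify that the node is running before using the console interface:

rosnode list | grep object_pickup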

Then follow the prompts in the console interface.

To try again, just start over from the beginning; there is no need to kill object_pickup and restart it.
