Note: This tutorial assumes that you have completed the previous tutorials: Quick start guide, Recording and playing back data with rosbag.
Please ask about problems and questions regarding this tutorial on answers.ros.org. Don't forget to include in your question the link to this page, the versions of your OS & ROS, and also add appropriate tags.
Recording and playing back Kinect data
Description: This tutorial shows how to efficiently record Kinect data into the ROS bag file format, and play it back into the same ROS processing graph.
Tutorial Level: BEGINNER
Motivating use case: We are writing algorithms that process point clouds, and wish to record some benchmark datasets for testing. We can use rosbag to record and play back ROS topics, but how can we best use it to capture the firehose of Kinect data?
(The approach in this tutorial works for any Kinect data, not only point clouds.)
Recording data
Which topics do I need?
Since our end goal is to process point clouds, the obvious solution is to record either the camera/depth/points (XYZ) or camera/depth_registered/points (XYZRGB) topic. Unfortunately, the point cloud messages (especially XYZRGB) are extremely memory-inefficient. Let's compare the sizes of the point cloud topics and of the raw image topics from which they are produced:
Topic                          | Bytes per pixel | Notes
camera/depth/image_raw         | 2               | uint16 depth in mm
camera/rgb/image_raw           | 1               | uint8 Bayer format
camera/depth/points            | 16              | 3 * sizeof(float) + 4 bytes padding
camera/depth_registered/points | 32              | additional 16 bytes (mostly padding) for RGB
For XYZRGB, the point cloud (32 bytes per pixel) is more than ten times the size of the raw data (3 bytes per pixel). Not only does this waste disk space, it also impairs our ability to record point clouds at all. Given an XYZRGB point cloud topic streaming 640x480 frames at 30 fps, and a typical hard disk with a sustained data transfer rate of 1030 Mbit/s, we can record at most about 13 point clouds per second to disk.
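Spelling out that upper bound using the numbers above (640x480 pixels, 32 bytes per point, 1030 Mbit/s ≈ 129 MB/s of disk bandwidth):
640 x 480 x 32 bytes ≈ 9.8 MB per point cloud
129 MB/s ÷ 9.8 MB per cloud ≈ 13 clouds per second
At 30 fps the camera produces point clouds more than twice as fast as the disk can absorb them.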
To save both bandwidth and disk space, and for flexibility in later processing, we will save the raw topics (and their associated camera parameters).
If you do not need color (XYZ-only point clouds), record these two topics (an example record command follows the list):
camera/depth/image_raw
camera/depth/camera_info
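For example, a depth-only benchmark could be captured with a command along these lines (the -O option simply names the output bag; kinect_depth is an arbitrary name used here for illustration):
rosbag record camera/depth/image_raw camera/depth/camera_info -O kinect_depth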
If you want XYZRGB point clouds, make sure that OpenNI registration is enabled and record:
camera/depth_registered/image_raw
camera/depth_registered/camera_info
camera/rgb/image_raw
camera/rgb/camera_info
The point clouds will be reconstructed on the fly during playback.
Registered or unregistered
If you have fully calibrated your Kinect (intrinsics and extrinsics), you may instead record:
camera/depth/image_raw
camera/depth/camera_info
camera/rgb/image_raw
camera/rgb/camera_info
/tf
This is more flexible, as you retain the original (unregistered) depth images. The depth-to-RGB registration performed by openni_launch can do as well as or better than OpenNI's built-in registration, but it does require extra calibration effort from the user.
/tf is needed because the relative transform between the depth and RGB cameras is queried through tf. If you record it, pass the publish_tf:=false argument (new in Fuerte) to openni_launch during playback, so that the launch file does not publish transforms that conflict with the ones in the bag.
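For example, on Fuerte the playback launch (described in the playback section below) would then look something like this, with both the driver and the live transforms suppressed:
roslaunch openni_launch openni.launch load_driver:=false publish_tf:=false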
Start the device
First, start your device using openni_launch. For this tutorial we'll enable OpenNI registration and record XYZRGB data:
roslaunch openni_launch openni.launch depth_registration:=true
Since we will record only the raw topics, you could alternatively start only the openni_camera driver (configured to use the camera namespace):
## Electric
ROS_NAMESPACE=camera rosrun openni_camera openni_node_unstable _depth_registration:=true _rgb_frame_id:=/camera_rgb_optical_frame _depth_frame_id:=/camera_rgb_optical_frame
## Fuerte
ROS_NAMESPACE=camera rosrun openni_camera openni_node _depth_registration:=true _rgb_frame_id:=/camera_rgb_optical_frame _depth_frame_id:=/camera_rgb_optical_frame
This is ideal for low-resource systems. For recording purposes, you need not have the relatively heavyweight openni_launch stack installed at all!
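Whichever way you start the driver, it is worth checking that the raw stream is actually being published before you start recording, for example:
rostopic hz camera/depth_registered/image_raw
A steady rate of roughly 30 Hz indicates the device is streaming.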
Record the bag
Record the XYZRGB topics listed earlier:
rosbag record camera/depth_registered/image_raw camera/depth_registered/camera_info camera/rgb/image_raw camera/rgb/camera_info --limit=60 -O kinect
This records ~2s of Kinect data to kinect.bag. See rosbag for all command-line options.
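As a sanity check, the expected bag size follows from the per-pixel numbers in the table above: 60 frames x 640 x 480 x (2 + 1) bytes ≈ 55.3 MB of image data, roughly matching the size reported by rosbag info below (camera_info messages and bag overhead account for the small difference).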
Checking the contents of your bag, you should see something like:
$ rosbag info kinect.bag
path:        kinect.bag
version:     2.0
duration:    2.0s
start:       Apr 25 2012 16:36:43.05 (1335397003.05)
end:         Apr 25 2012 16:36:45.01 (1335397005.01)
size:        52.9 MB
messages:    240
compression: none [60/60 chunks]
types:       sensor_msgs/CameraInfo [c9a58c1b0b154e0e6da7578cb991d214]
             sensor_msgs/Image      [060021388200f6f0f447d0fcd9c64743]
topics:      camera/depth_registered/camera_info   60 msgs : sensor_msgs/CameraInfo
             camera/depth_registered/image_raw     60 msgs : sensor_msgs/Image
             camera/rgb/camera_info                60 msgs : sensor_msgs/CameraInfo
             camera/rgb/image_raw                  60 msgs : sensor_msgs/Image
Playing back data
Close (Ctrl-C) the ROS OpenNI driver you started earlier. We will bring up openni_launch again, but without the device driver. The bag will provide the raw data streams instead.
First, make sure that the Master (and Parameter server) is already running. If not, use the following command:
roscore
Before starting any nodes, do the magic invocation:
rosparam set /use_sim_time true
See Clock for the gory details. This basically tells nodes on startup to use simulated time (ticked here by rosbag) instead of wall-clock time (as in a live system). It avoids confusing time-dependent components like tf, which otherwise would wonder why messages are arriving with timestamps far in the past.
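If you want to double-check that the parameter is set before launching anything else:
rosparam get /use_sim_time
This should print true.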
New in ROS Fuerte: Bring up the openni_launch processing graph, using the load_driver launch file argument to suppress loading the driver nodelet:
roslaunch openni_launch openni.launch load_driver:=false
In Electric, you can instead feed the driver an invalid device ID. This is a hack, and the driver will complain (a lot), but it works:
roslaunch openni_launch openni.launch device_id:=invalid
Open rviz and create a PointCloud2 display for /camera/depth_registered/points, with the fixed frame set to /camera_link. Or, if you don't need the visualization, just monitor the topic:
rostopic hz /camera/depth_registered/points
Finally, play back the recorded data through rosbag:
rosbag play --clock kinect.bag
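rosbag play accepts its usual options here as well; for instance, to loop the short benchmark bag continuously while you test:
rosbag play --clock --loop kinect.bag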