Contents
- Which data types does tf use?
- How do I visualize tf?
- Frames are missing in my tf tree. Why?
- Can I transform a Point Cloud?
- What is the meaning of this error message?
- Can I declare static transforms?
- Do transforms expire?
- Can I set different expiration limits for specific transforms?
- How does tf deal with interpolation and extrapolation?
- How can tf be used in a 3D-mapping framework?
- What is the tf threading model?
- How can I use Euler angles with tf data types?
- Why don't frame_ids use ROS namespaces?
- Why do I see negative average delay in tf_monitor?
Which data types does tf use?
tf can deal with data objects describing poses, vectors, points, etc. It is designed to support datatypes via a templated API. More information is available at tf2#Supported_Datatypes.
How do I visualize tf?
tf comes with a large set of visualization and debugging tools. You can find a list of the tools here. TODO port to tf2
Frames are missing in my tf tree. Why?
You expected some frames to be in your tf tree, but view_frames shows you they are not there. There are several different causes:
- No one is publishing these frames. To look at the raw transform data being published, type:
$ rostopic echo /tf
- The frames are published incorrectly. tf expects the frames to form a tree, which means that each frame can have only one parent. If you publish, for example, both the transforms:
- from "A (parent) to B (child)" and
- from "C (parent) to B (child)",
this means that B has two parents, A and C, and tf cannot deal with this.
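The tree constraint above can be sketched as a small check over (parent, child) pairs. This is plain Python for illustration, not a tf API; the frame names are invented:

```python
# Sketch: verify that a set of (parent, child) transforms forms a tree,
# i.e. no frame is given two different parents (tf rejects this).
def find_conflicts(transforms):
    """transforms: iterable of (parent, child) frame_id pairs."""
    parent_of = {}
    conflicts = []
    for parent, child in transforms:
        if child in parent_of and parent_of[child] != parent:
            # child already has a different parent -> not a tree
            conflicts.append((child, parent_of[child], parent))
        else:
            parent_of[child] = parent
    return conflicts

# "B" is published with two parents, "A" and "C" -- tf cannot represent this.
print(find_conflicts([("A", "B"), ("C", "B")]))  # [('B', 'A', 'C')]
```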
Can I transform a Point Cloud?
Yes. The support is in the tf package right now, but it will be moving to a point cloud package soon. You can use the generic transform plugin, with the tf2_sensor_msgs package providing the bridge. Alternatively, you can use any other point cloud datatype for which the conversion methods and the doTransform method have been implemented.
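For intuition, transforming a point cloud boils down to applying one rigid transform (rotation plus translation) to every point, which is what doTransform does for you. A dependency-free sketch, restricted to a rotation about z; the yaw angle, offsets, and frame names are invented:

```python
import math

# Sketch: apply one rigid transform (yaw rotation about z, then a
# translation) to every (x, y, z) point of a cloud.
def transform_cloud(points, yaw, translation):
    c, s = math.cos(yaw), math.sin(yaw)
    tx, ty, tz = translation
    return [(c * x - s * y + tx, s * x + c * y + ty, z + tz)
            for x, y, z in points]

# Hypothetical cloud in a sensor frame, moved into a base frame:
cloud_in_sensor = [(1.0, 0.0, 0.0), (0.0, 2.0, 0.5)]
cloud_in_base = transform_cloud(cloud_in_sensor,
                                yaw=math.pi / 2,
                                translation=(0.1, 0.0, 0.3))
```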
What is the meaning of this error message?
Take a look at the error explained page.
Can I declare static transforms?
Yes, support for static transforms was added in tf2. See the Migration Notes.
TODO link to static transform publisher documentation.
Do transforms expire?
Some code (such as amcl_player.cc) refers to an expiration time for transforms. How does the concept of transforms that expire interact with the transform library's attempts at interpolation/extrapolation? Is this documented anywhere?
The concept of expiring transforms is not actually supported. What amcl and a few others do is date their transforms in the future. This works around the problem that a chain of transforms is only as current as its oldest link: future-dating a transform allows a slowly updated link to be looked up in the interval between broadcasts. The technique is dangerous in general, because it causes tf to lag behind the actual state; tf will assume the information is correct until its future timestamp has passed. However, for transforms which change only very slowly this lag is not observable, which is why it is valid for amcl to use it, and it can safely be used for static transforms. I highly recommend being careful when future-dating any measurements. Another available technique is to allow extrapolation. Either technique can creep up on you and cause unusual, hard-to-trace behavior.
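The effect of future-dating can be sketched as a bound on lookup time: tf can interpolate up to each link's newest stamp but will not extrapolate past it, so a chain is usable only at times every link covers. Times below are in seconds and entirely invented:

```python
# Sketch: a lookup through a chain succeeds only if every link's stored
# data covers the requested time (tf interpolates but never extrapolates).
def can_lookup(time, newest_stamp_per_link):
    return all(time <= stamp for stamp in newest_stamp_per_link)

# amcl broadcasts its correction at t=100.0 but stamps it 100.5; the fast
# odometry link is current to 100.2, so a lookup at 100.2 still succeeds:
print(can_lookup(100.2, [100.5, 100.2]))  # True
# Without future dating, the correction is stamped 100.0 and the lookup fails:
print(can_lookup(100.2, [100.0, 100.2]))  # False
```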
Can I set different expiration limits for specific transforms?
After all, I would expect different transforms to update at rates that are potentially orders of magnitude different.
In the current implementation, this is simply controlled by how far you future-date your data for that transform. Remember, this causes the transform to lag by however much you future-date the values. I do not recommend it for any value which is expected to change at a rate faster than a slight drift correction (à la amcl_player).
How does tf deal with interpolation and extrapolation?
Our experimentation has shown that interpolation is fine, but extrapolation almost always ends up causing more problems than it solves. If you are having trouble with data arriving before its transforms are available, I suggest using the tf::MessageFilter class, which queues incoming data until transforms are available. Having tried allowing "just a little" extrapolation, we found that waiting for accurate data to be available is a much better approach.
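What interpolation between two stored samples looks like for the translation part of a transform (rotations are interpolated with slerp in practice). This simplified sketch refuses extrapolation the way tf raises an ExtrapolationException; all values are invented:

```python
# Sketch: linearly interpolate a translation between two stamped samples;
# refuse any time outside the stored interval (no extrapolation).
def interpolate(t, t0, p0, t1, p1):
    if not (t0 <= t <= t1):
        raise ValueError("extrapolation requested; wait for newer data instead")
    alpha = (t - t0) / (t1 - t0)
    return tuple(a + alpha * (b - a) for a, b in zip(p0, p1))

print(interpolate(1.5, 1.0, (0.0, 0.0, 0.0), 2.0, (1.0, 2.0, 0.0)))
# (0.5, 1.0, 0.0)
```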
How can tf be used in a 3D-mapping framework?
An example serves to illustrate this better. Given 4 sensing devices:
- Hokuyo -- hokuyo_node package
- SwissRanger SR4000 -- swissranger package
- Videre STOC -- stoc_driver package
- Thermal FLIR camera -- flir_driver package
The first 3 produce point clouds, and the last one images. After calibration, you get the rotation matrices needed to transform the SwissRanger point cloud into the STOC's left camera frame, the thermal image into the left STOC frame, and so on. While collecting data, you constantly need to apply these rotation matrices to the point clouds generated by some sensors, to annotate your 3D points with texture (RGB), thermal information, etc.
This is exactly where you want to use tf::TransformListener. The point cloud aggregator will be receiving messages over ROS from all of the sensors, in their respective data types. However, say the Hokuyo is mounted on the base, the Videre stereo camera is on the head, and the SwissRanger is on the left arm. The aggregating process has no idea where all these coordinate frames actually are in space. However, the mechanism control process knows where the head and arms are in relationship to the base, and the localization process knows where the base is in relationship to the world.
Now, if they are all publishing transforms using their own broadcaster instances, all the aggregator node needs to do is instantiate a TransformListener, and it will be able to relate the data from the Videre to the data from the Hokuyo and the SwissRanger. The TransformListener class extracts all that information from the ROS network and provides it automatically to the Transformer base class, which can then supply the aggregator with any transform it asks for. The goal is that the end user doesn't have to worry about collecting transforms: they are automatically cached in time and can provide interpolated or, if desired, extrapolated results.

What makes this easy for the developer is the use of frame_ids. Frame_ids are simply strings which uniquely identify coordinate frames. When the system is operating, a point cloud arriving from the Videre will be in the "videre_camera_frame"; to use it in whatever frame you want, simply transform it to that frame's frame_id.
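The frame_id mechanism can be sketched as a walk over a parent map: to relate two frames, compose each frame's chain of transforms up to the root and compare. This toy version handles translations only (identity rotations assumed), and the offsets are invented for illustration:

```python
# Sketch: child frame -> (parent frame, translation of child's origin in parent).
tree = {
    "videre_camera_frame": ("head", (0.0, 0.0, 0.1)),
    "head": ("base", (0.0, 0.0, 1.2)),
    "base_laser": ("base", (0.2, 0.0, 0.3)),
}

def offset_to_root(frame):
    """Accumulate translations walking from `frame` up to the tree root."""
    x = y = z = 0.0
    while frame in tree:
        frame, (dx, dy, dz) = tree[frame]
        x, y, z = x + dx, y + dy, z + dz
    return (x, y, z)

def offset(source, target):
    """Translation from `target` frame's origin to `source` frame's origin."""
    sx, sy, sz = offset_to_root(source)
    tx, ty, tz = offset_to_root(target)
    return (sx - tx, sy - ty, sz - tz)

print(offset("videre_camera_frame", "base_laser"))  # ≈ (-0.2, 0.0, 1.0)
```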
What is the tf threading model?
A tf2_ros::TransformListener continuously listens for incoming coordinate transforms from tf2_ros::TransformBroadcasters. Any blocking call would severely disrupt this flow of information to the listener. Therefore, a transform listener has the option to spin its own thread. This particularly helps with calls that take timeouts; otherwise the timeout would always elapse, since no new data would be received while blocked.
All tf2 public API calls use mutex locks to be thread-safe.
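A toy sketch of the idea (not the real tf2_ros::Buffer API): the listener thread fills a shared, lock-protected store and signals a condition variable, so a lookup with a timeout returns as soon as the data arrives rather than always burning the full timeout:

```python
import threading

# Toy stand-in for a transform buffer shared between a listener thread
# (which receives transforms) and a user thread (which looks them up).
class ToyBuffer:
    def __init__(self):
        self._transforms = {}
        self._cond = threading.Condition()  # mutex + signalling in one

    def set_transform(self, child, transform):
        # Called from the listener thread as transforms arrive.
        with self._cond:
            self._transforms[child] = transform
            self._cond.notify_all()

    def lookup(self, child, timeout):
        # Called from the user thread; waits until data arrives or timeout.
        with self._cond:
            self._cond.wait_for(lambda: child in self._transforms, timeout)
            return self._transforms.get(child)

buf = ToyBuffer()
# Simulate a transform arriving on another thread 50 ms from now:
threading.Timer(0.05, buf.set_transform, args=("base", (1.0, 0.0, 0.0))).start()
print(buf.lookup("base", timeout=1.0))  # (1.0, 0.0, 0.0)
```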
How can I use Euler angles with tf data types?
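tf stores rotations as quaternions; helpers such as tf.transformations.quaternion_from_euler and euler_from_quaternion (Python), or tf::Quaternion::setRPY (C++), convert to and from Euler angles. A dependency-free sketch of the fixed-axis roll/pitch/yaw to (x, y, z, w) conversion, for intuition only:

```python
import math

# Sketch of quaternion_from_euler: fixed-axis roll (x), pitch (y), yaw (z)
# angles in radians -> quaternion as an (x, y, z, w) tuple.
def quaternion_from_euler(roll, pitch, yaw):
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    return (sr * cp * cy - cr * sp * sy,
            cr * sp * cy + sr * cp * sy,
            cr * cp * sy - sr * sp * cy,
            cr * cp * cy + sr * sp * sy)

# A pure 90-degree yaw:
print(quaternion_from_euler(0.0, 0.0, math.pi / 2))  # ≈ (0.0, 0.0, 0.707, 0.707)
```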
Why don't frame_ids use ROS namespaces?
There is already a hierarchy defined by the tf graph in the child/parent relationships, vs. ROS names where the hierarchy is defined by the name.
Namespaces determine how communication channels are connected, whereas tf frame_ids describe how coordinate frames are physically connected. Thus, if you push a set of nodes relating to 2D navigation down into their own namespace, they will work there without collisions on topics or services. However, the base is still connected to the body in the same way despite the namespace push-down, so pushing frame_ids down does not make sense.
There are a few other considerations. tf frames are not necessarily connected to any specific node. For example, on the PR2 the base_laser frame is published by the robot_state_publisher based on the URDF, but the hokuyo_node publishes scans in the base_laser frame, and then rviz transforms the data into some arbitrary frame for viewing. If any of them disagree about the full name of "base_laser", this will not work, for the tree in tf will no longer be connected.
Why do I see negative average delay in tf_monitor?
Negative average delay comes from the transform being timestamped in the future when sent, or from network delays when running in simulation.
In simulation, time is sent over the /clock topic and may arrive after transforms sent from a different node with the same timestamp.
A few nodes stamp data in the future, most notably amcl. This is valid for amcl and other similar nodes publishing a small correction transform which is expected to remain approximately static and never have inherent velocity. The value is that amcl can update at a slow rate without preventing transforms through its correction from being used at a high rate.
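In other words, tf_monitor computes delay as receive time minus stamp, so future-dated stamps yield negative values. A sketch with invented times, in seconds:

```python
# Sketch: the per-sample delay tf_monitor averages is receive_time - stamp,
# so transforms stamped in the future produce negative delays.
def average_delay(samples):
    """samples: (receive_time, stamp) pairs as seconds."""
    return sum(rx - st for rx, st in samples) / len(samples)

# A node broadcasting at t=100.0 and t=101.0, stamping each 0.5 s ahead:
print(average_delay([(100.0, 100.5), (101.0, 101.5)]))  # -0.5
```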