The self_see_filter inherits from the FilterBase class and implements removal or marking of point cloud points that belong to the robot's body. There are two modes of operation. The first actually filters points out, discarding points in the cloud that are determined to lie in or near robot links, or that are shadowed by robot links. The second mode does not remove points that lie on or near robot links, but instead marks them using an annotation channel in the point cloud. The self_see_filter additionally allows a sensor frame to be set, enabling filtering of points that lie behind the robot's body from the sensor's point of view. The code in this package works for links defined as spheres, boxes, cylinders, or meshes.
- The ROS API is stable.
- The C++ API is stable.
The filter does not directly subscribe to or publish anything other than tf transforms, but it does use a number of parameters to control its internal operation. It will attempt to read the parameters from a node handle passed in to the constructor, so all parameters are drawn from the namespace <namespace> used to create the node handle.
~<namespace>/min_sensor_dist (double, default: 0.02)
- Points closer than this to the sensor frame are discarded
- The default padding to associate with links if none is specified in the link description
- The default scale to associate with links if none is specified in the link description
- If the annotate parameter is not empty, self_see_filter will not filter points out of point clouds; instead it will add an annotation channel, named by the annotate parameter, containing for each point a -1 if the point is inside a robot link, a 0 if the point is shadowed by a robot link, or a 1 otherwise.
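A downstream consumer reads these per-point values back out of the annotation channel. The sketch below illustrates the -1/0/1 semantics; the struct names here (Channel, Cloud, AnnotationCounts, countAnnotations) are hypothetical stand-ins for the real sensor_msgs types, which are not reproduced here.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical stand-ins for sensor_msgs::ChannelFloat32 and the channel
// list of sensor_msgs::PointCloud (the real message also carries the
// point coordinates, omitted here).
struct Channel { std::string name; std::vector<float> values; };
struct Cloud { std::vector<Channel> channels; };

// Tally of how the self filter classified each point.
struct AnnotationCounts { int inside = 0, shadow = 0, clear = 0; };

// Scan the channel whose name matches the "annotate" parameter and count
// points marked inside (-1), shadowed (0), or clear (1).
AnnotationCounts countAnnotations(const Cloud &cloud, const std::string &annotate) {
  AnnotationCounts c;
  for (const Channel &ch : cloud.channels) {
    if (ch.name != annotate) continue;
    for (float v : ch.values) {
      if (v < 0.0f) ++c.inside;        // -1: inside a robot link
      else if (v == 0.0f) ++c.shadow;  //  0: shadowed by a robot link
      else ++c.clear;                  //  1: neither
    }
  }
  return c;
}
```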
- self_see_links is an array of structures, one per link.
The entries for self_see_links are best explained in yaml format:
- name: base_link
  padding: .03
  scale: 1.0
The name (required) specifies the link to be considered in the self filter; the padding (optional) specifies the distance (in meters) around the link to include in collision filtering; and the scale (optional) specifies the multiplier applied to the link's geometry. If the padding or scale is not set, the defaults are used.
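A complete self_see_links parameter might look like the fragment below; the second link name is purely illustrative, and omitting padding and scale falls back to the defaults described above.

```
self_see_links:
  - name: base_link
    padding: .03
    scale: 1.0
  - name: arm_link   # hypothetical link; padding and scale omitted, so defaults apply
```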
Finally, C++ calls exist to set the sensor frame for shadow detection if desired.
To determine whether a point is inside, outside, or in shadow, the following computation is performed for every link we are testing against:
If sensor_frame is not specified:
- Check if the point is inside the scaled and/or padded link. If so, the point is inside; otherwise, the point is outside. There are no shadow points.
If sensor_frame is specified:
- Assume the point is outside.
- Check if the point is inside the unscaled and unpadded link, or too close to the sensor.
- If it is not, check whether it is behind the scaled and/or padded link (a shadow point) when looking along a ray starting at the sensor's origin.
- If the point is not a shadow point either, check whether it is inside the scaled and/or padded link. If the point is still not inside the link, it is considered outside.

The process is more complex when the sensor's location is known because points behind the arm must be treated as shadow points even if they fall within the padding/scaling of a link: the padding/scaling should only affect points that hit the top surface of links. (As an example use case, the collision_map that accounts for self occlusion uses this information; if the arm is right on top of a table, shadowing some points, those points should not be considered inside, even if they are within the padding of the end effector.)
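The steps above can be sketched for a single spherical link as follows. This is a minimal illustration, not the package's implementation: all names (Vec3, SphereLink, classify) are hypothetical, the real filter handles boxes, cylinders, and meshes from the robot model, and the small tolerance that keeps points right at the grown surface from being classified as shadow is an assumption of this sketch.

```cpp
#include <cassert>
#include <cmath>

enum PointClass { INSIDE = -1, SHADOW = 0, OUTSIDE = 1 };

struct Vec3 { double x, y, z; };

static double dot(const Vec3 &a, const Vec3 &b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 sub(const Vec3 &a, const Vec3 &b) { return Vec3{a.x-b.x, a.y-b.y, a.z-b.z}; }
static double norm(const Vec3 &a) { return std::sqrt(dot(a, a)); }

// Hypothetical spherical link with the scale/padding semantics described above.
struct SphereLink {
  Vec3 center;
  double radius;   // unscaled, unpadded radius
  double scale;    // multiplicative scale
  double padding;  // additive padding (meters)
  double grown() const { return radius * scale + padding; }
};

// Classify a point when the sensor frame is known.
PointClass classify(const Vec3 &p, const Vec3 &sensor,
                    const SphereLink &link, double min_sensor_dist) {
  const double eps = 1e-3;  // illustrative tolerance at the grown surface

  // Step 1: too close to the sensor, or inside the unscaled, unpadded link.
  if (norm(sub(p, sensor)) < min_sensor_dist) return INSIDE;  // discarded like an inside point
  if (norm(sub(p, link.center)) <= link.radius) return INSIDE;

  // Step 2: shadow test. Does the ray from the sensor toward the point
  // enter the grown (scaled + padded) link before reaching the point?
  const Vec3 dir = sub(p, sensor);
  const double len = norm(dir);
  const Vec3 u{dir.x / len, dir.y / len, dir.z / len};
  const Vec3 oc = sub(link.center, sensor);
  const double t = dot(oc, u);              // closest approach along the ray
  const double d2 = dot(oc, oc) - t * t;    // squared ray-to-center distance
  const double r = link.grown();
  if (d2 <= r * r) {
    const double t_enter = t - std::sqrt(r * r - d2);  // first intersection
    if (t_enter > 0.0 && t_enter + eps < len) return SHADOW;
  }

  // Step 3: inside the grown link (the padding/scaling region)?
  if (norm(sub(p, link.center)) <= r) return INSIDE;

  return OUTSIDE;
}
```

Note how a point directly behind the link is shadow even when it falls inside another link's padding region: the ray enters the grown surface well before reaching it, which is exactly the table-under-the-end-effector case discussed above.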