Only released in EOL distros:  

3d_interaction: blob3d | external_camera_localizer | projected_lightswitch_interface | projector_calibration | projector_interface | rcommander_interface | world_intersect

Package Summary

Provides a node for calibrating a projector to a camera with a known TF frame.

Nodes

grid

This node takes in a camera image and computes the homography between the detected grid and the idealized grid that it is projecting.
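
The wiki does not document how the node performs this computation, but the general technique can be sketched with OpenCV: detect the grid's corners in the camera image, pair them with the corresponding corners of the idealized grid being projected, and fit a homography between the two point sets. The sketch below is only an illustration under those assumptions (a chessboard-style grid, with ideal_corners taken from the pattern being drawn); it is not the node's actual source.

   import cv2
   import numpy as np

   def estimate_grid_homography(camera_image, ideal_corners, grid_size=(5, 7)):
       # camera_image:  grayscale image containing the projected grid
       # ideal_corners: Nx2 array of the grid corners in projector pixels
       #                (assumed to come from the pattern being drawn)
       found, corners = cv2.findChessboardCorners(camera_image, grid_size)
       if not found:
           return None
       camera_corners = corners.reshape(-1, 2)
       # maps camera image points to projector pixels, matching the
       # direction of the published homography
       H, _ = cv2.findHomography(camera_corners, ideal_corners.reshape(-1, 2))
       return H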

Subscribed Topics

image (sensor_msgs/Image)
  • A camera image containing the projected grid to be detected.

Published Topics

grid (sensor_msgs/Image)
  • The grid image that is being projected.
homography (projector_calibration/Homography)
  • The homography mapping points in the camera image to projector pixels (a subscriber sketch is given after the Parameters section).

Parameters

~grid_size (string)
  • Defaults to 5x7
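
To use the published calibration from another node, you can subscribe to the homography topic and convert the message into a 3x3 matrix. The field name values below is a guess for illustration only; check the actual message definition with rosmsg show projector_calibration/Homography before relying on it.

   import numpy as np
   import rospy
   from projector_calibration.msg import Homography

   H = None

   def homography_cb(msg):
       # 'values' is an assumed field name holding the 9 matrix entries
       # in row-major order -- verify against the real message definition
       global H
       H = np.array(msg.values, dtype=np.float64).reshape(3, 3)

   rospy.init_node('homography_listener')
   rospy.Subscriber('homography', Homography, homography_cb)
   rospy.spin()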

Using the Calibration

Given a homography H (provided by projector_calibration/grid) and a camera calibration K (from a sensor_msgs/CameraInfo), a 3D point p in the world can be projected into projector pixels as follows:

  1. Transform p into the same frame as the camera used to find H (call the new point p_img).

  2. Project p_img onto the camera's image plane (call the new point p_2d).

  3. Transform p_2d to projector pixels using H (call the result p_proj).

In Python, this looks something like:

   import image_geometry, tf, cv2
   import numpy as np

   # get camera_info_msg and H (a 3x3 numpy array) from somewhere
   # create a tf listener (tf_listener); p is a geometry_msgs/PointStamped

   model = image_geometry.PinholeCameraModel()
   model.fromCameraInfo(camera_info_msg)

   # 1. transform p into the camera's frame
   p_img = tf_listener.transformPoint(model.tf_frame, p)

   # 2. project p_img onto the camera's image plane
   p_2d = model.project3dToPixel((p_img.point.x, p_img.point.y, p_img.point.z))

   # 3. map the camera pixel to projector pixels using H
   p_proj = cv2.perspectiveTransform(np.array([[p_2d]], dtype=np.float64), H)
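
Note that cv2.perspectiveTransform expects a floating-point array of shape (1, N, 2) (or (N, 1, 2)) and returns an array of the same shape, so the projector pixel for the single input point above is p_proj[0, 0]. H must be a 3x3 numpy array; if it arrives as a flat list in the Homography message, reshape it first (as in the subscriber sketch above).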
