Train objects from bagfiles
Description: Learn how to train object models from bagfiles given by the 'solution in perception challenge' dataset
Keywords: object training, Care-O-bot
Tutorial Level: INTERMEDIATE
Train from bagfiles
Record Image Process
Run cob_object_detection
roslaunch cob_object_detection object_detection.launch
cob_object_detection expects the following topics to be present in the bagfiles:
color image <sensor_msgs::Image>
point cloud <sensor_msgs::PointCloud2>
object pose <cob_object_detection_msgs::PoseRT>
color image mask <sensor_msgs::Image>
camera info <sensor_msgs::CameraInfo>
Before you play the bagfile you have to assign an object name to the dataset, either by using dynamic reconfigure
rosrun dynamic_reconfigure reconfigure_gui
or by using a ROS service call
rosservice call /object_detection/train_object_rename_bagfile ["obj_01"]
Now you are ready to play the bagfiles. Use the "-r" option to lower the playback speed so that no topics are dropped.
rosbag play <file> -r 0.3
cob_object_detection synchronises the images and converts the training data to the cob_object_detection-specific file format.
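The synchronisation step can be illustrated with a minimal sketch (plain Python, not the actual cob_object_detection code): for each color image timestamp, the nearest point-cloud and pose timestamps within a tolerance are grouped into one training sample. Function name and tolerance are illustrative.

```python
def synchronise(image_stamps, cloud_stamps, pose_stamps, tolerance=0.05):
    """Group messages whose timestamps (in seconds) lie within `tolerance`.

    Simplified stand-in for the approximate-time matching that
    cob_object_detection performs on the bagfile topics.
    """
    samples = []
    for t_img in image_stamps:
        # Nearest neighbour in each of the other message streams.
        t_cloud = min(cloud_stamps, key=lambda t: abs(t - t_img))
        t_pose = min(pose_stamps, key=lambda t: abs(t - t_img))
        if abs(t_cloud - t_img) <= tolerance and abs(t_pose - t_img) <= tolerance:
            samples.append((t_img, t_cloud, t_pose))
    return samples

# The third image stamp has no point cloud close enough, so it is dropped.
print(synchronise([0.00, 0.10, 0.50], [0.01, 0.11], [0.02, 0.09]))
```

Playing the bagfile too fast drops messages, which is exactly what makes such nearest-timestamp matching fail; hence the "-r 0.3" above.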
Build models
After recording the data you can build an object model by calling
rosservice call /object_detection/train_object_bagfile ["<object name>"]
Detect from bagfiles
In order to test the detection of the trained models on the bagfiles, launch cob_object_detection (if not already running) and make sure that all objects are loaded.
roslaunch cob_object_detection object_detection.launch
Then play the bagfile.
rosbag play <file>
cob_object_detection will synchronise with the topics:
color image <sensor_msgs::Image>
point cloud <sensor_msgs::PointCloud2>
pose <geometry_msgs::PoseStamped>
camera info <sensor_msgs::CameraInfo>
You can compare the detection results with the object mapping from the bagfile. Each object is positioned in a predefined slot.
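The comparison against the predefined slots can be sketched as follows (plain Python; the slot coordinates, names, and tolerance are illustrative, not taken from the dataset): a detection counts as correct if its estimated position lies close to the slot assigned to that object.

```python
import math

# Hypothetical slot centres (x, y in metres) from the object mapping.
SLOTS = {
    "obj_01": (0.30, 0.10),
    "obj_02": (0.30, 0.40),
}

def matches_slot(label, detected_xy, tolerance=0.05):
    """Return True if a detection lies within `tolerance` of its slot centre."""
    sx, sy = SLOTS[label]
    dx, dy = detected_xy
    return math.hypot(dx - sx, dy - sy) <= tolerance

print(matches_slot("obj_01", (0.31, 0.11)))  # detection close to its slot
```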
Test Data
Test data is available at the 'solution in perception challenge' website, or directly at http://vault.willowgarage.com/wgdata1/vol1/solutions_in_perception/