"Robot-aware" scripts and methods for processing table-top scenes for object recognition/detection/segmentation, using the Kinect v1 (Xbox 360) and v2 (One) sensors in ROS.
Main Dependencies |
---|
[PCL](http://pointclouds.org) |
[OpenCV](http://www.opencv.org) |
When using Kinect v1, you need to install openni_launch and/or freenect.
When using Kinect v2, you need to install iai_kinect2 and my own Kinect2_receiver class, which processes pre-registered depth and color images published by kinect2_bridge.
"Robot-aware" table-top scene processing.
Extracts and streams default object features from the segmented point cloud.
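The repository's exact pipeline lives in the nodes launched below, but the standard PCL recipe for table-top scenes gives the idea: fit the dominant plane (the table) with RANSAC, remove it, and cluster the remaining points into object candidates. The sketch below is illustrative, not this repository's code; the `segmentTableTop` helper and all thresholds are assumptions.

```cpp
// Illustrative table-top segmentation (a standard PCL recipe, not
// necessarily this repository's exact code): remove the dominant plane
// (the table) with RANSAC, then cluster what remains into objects.
#include <vector>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/common/io.h>
#include <pcl/filters/extract_indices.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <pcl/segmentation/extract_clusters.h>

typedef pcl::PointCloud<pcl::PointXYZRGB> Cloud;

std::vector<Cloud::Ptr> segmentTableTop(const Cloud::Ptr& scene)
{
  // 1. Fit the dominant plane (the table) with RANSAC.
  pcl::SACSegmentation<pcl::PointXYZRGB> seg;
  seg.setModelType(pcl::SACMODEL_PLANE);
  seg.setMethodType(pcl::SAC_RANSAC);
  seg.setDistanceThreshold(0.01);  // 1 cm plane tolerance (assumed)
  pcl::ModelCoefficients coefficients;
  pcl::PointIndices::Ptr inliers(new pcl::PointIndices);
  seg.setInputCloud(scene);
  seg.segment(*inliers, coefficients);

  // 2. Drop the plane inliers, keeping only the points above the table.
  Cloud::Ptr objects(new Cloud);
  pcl::ExtractIndices<pcl::PointXYZRGB> extract;
  extract.setInputCloud(scene);
  extract.setIndices(inliers);
  extract.setNegative(true);  // keep everything that is NOT the plane
  extract.filter(*objects);

  // 3. Group the remaining points into per-object clusters.
  pcl::search::KdTree<pcl::PointXYZRGB>::Ptr tree(
      new pcl::search::KdTree<pcl::PointXYZRGB>);
  tree->setInputCloud(objects);
  std::vector<pcl::PointIndices> cluster_indices;
  pcl::EuclideanClusterExtraction<pcl::PointXYZRGB> ec;
  ec.setClusterTolerance(0.02);  // 2 cm between points of one object (assumed)
  ec.setMinClusterSize(100);
  ec.setSearchMethod(tree);
  ec.setInputCloud(objects);
  ec.extract(cluster_indices);

  // Copy each cluster into its own cloud (e.g. zucchini, cutting board).
  std::vector<Cloud::Ptr> clusters;
  for (size_t i = 0; i < cluster_indices.size(); ++i) {
    Cloud::Ptr c(new Cloud);
    pcl::copyPointCloud(*objects, cluster_indices[i], *c);
    clusters.push_back(c);
  }
  return clusters;
}
```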
Start up the Kinect2 driver:
$ roslaunch kinect2_bridge kinect2_bridge.launch
Generate the point cloud on the local machine from the depth/color images sent over the network:
$ rosrun kinect2_receiver publisher_node
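For reference, a minimal sketch of what a receiver like the Kinect2_receiver class mentioned above does: synchronize the registered color/depth images from kinect2_bridge and back-project them into an organized XYZRGB cloud. Topic names follow the iai_kinect2 convention; the hard-coded intrinsics are placeholders (the real node would read them from the camera_info topic).

```cpp
// Sketch of a kinect2_receiver-style node (illustrative, not the actual
// class): synchronize the registered color/depth images published by
// kinect2_bridge and back-project them into an organized XYZRGB cloud.
#include <limits>
#include <boost/bind.hpp>
#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <sensor_msgs/PointCloud2.h>
#include <message_filters/subscriber.h>
#include <message_filters/synchronizer.h>
#include <message_filters/sync_policies/approximate_time.h>
#include <cv_bridge/cv_bridge.h>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl_conversions/pcl_conversions.h>

ros::Publisher cloud_pub;
// Placeholder QHD intrinsics -- the real node reads these from camera_info.
const float fx = 525.0f, fy = 525.0f, cx = 479.75f, cy = 269.75f;

void imagesCallback(const sensor_msgs::ImageConstPtr& color,
                    const sensor_msgs::ImageConstPtr& depth)
{
  cv::Mat rgb = cv_bridge::toCvShare(color, "bgr8")->image;
  cv::Mat d   = cv_bridge::toCvShare(depth)->image;  // 16UC1, millimeters

  pcl::PointCloud<pcl::PointXYZRGB> cloud;
  cloud.width    = d.cols;
  cloud.height   = d.rows;
  cloud.is_dense = false;
  cloud.points.resize(cloud.width * cloud.height);

  for (int v = 0; v < d.rows; ++v) {
    for (int u = 0; u < d.cols; ++u) {
      pcl::PointXYZRGB& p = cloud(u, v);
      const uint16_t raw = d.at<uint16_t>(v, u);
      if (raw == 0) {  // no depth measurement at this pixel
        p.x = p.y = p.z = std::numeric_limits<float>::quiet_NaN();
        continue;
      }
      const float z = raw * 0.001f;  // mm -> m
      p.z = z;
      p.x = (u - cx) * z / fx;  // back-project with the pinhole model
      p.y = (v - cy) * z / fy;
      const cv::Vec3b& c = rgb.at<cv::Vec3b>(v, u);
      p.b = c[0]; p.g = c[1]; p.r = c[2];
    }
  }

  sensor_msgs::PointCloud2 msg;
  pcl::toROSMsg(cloud, msg);
  msg.header = depth->header;
  cloud_pub.publish(msg);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "kinect2_receiver_sketch");
  ros::NodeHandle nh;
  cloud_pub = nh.advertise<sensor_msgs::PointCloud2>("/kinect2/qhd/points", 1);

  message_filters::Subscriber<sensor_msgs::Image>
      color_sub(nh, "/kinect2/qhd/image_color_rect", 1),
      depth_sub(nh, "/kinect2/qhd/image_depth_rect", 1);
  typedef message_filters::sync_policies::ApproximateTime<
      sensor_msgs::Image, sensor_msgs::Image> SyncPolicy;
  message_filters::Synchronizer<SyncPolicy> sync(SyncPolicy(10),
                                                 color_sub, depth_sub);
  sync.registerCallback(boost::bind(&imagesCallback, _1, _2));

  ros::spin();
  return 0;
}
```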
Run the zucchini segmentation node (publishes the zucchini and cutting-board point clouds):
$ roslaunch process_table_scene bimanual_scene.launch
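How this node separates the zucchini from the cutting board is not documented here; one plausible scheme (purely illustrative, the actual node may well differ) is a hue/saturation gate over the above-table points:

```cpp
// Purely illustrative: one plausible way to split the above-table points
// into the zucchini (green) and the cutting board (everything else) with
// a hue/saturation gate. The actual node's method and thresholds may differ.
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <opencv2/imgproc/imgproc.hpp>

typedef pcl::PointCloud<pcl::PointXYZRGB> Cloud;

void splitByHue(const Cloud& in, Cloud& zucchini, Cloud& board)
{
  for (size_t i = 0; i < in.size(); ++i) {
    const pcl::PointXYZRGB& p = in.points[i];
    // Convert this point's color to HSV (OpenCV hue range is 0-179).
    // Per-point conversion is slow but keeps the sketch short.
    cv::Mat bgr(1, 1, CV_8UC3, cv::Scalar(p.b, p.g, p.r));
    cv::Mat hsv;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
    const cv::Vec3b c = hsv.at<cv::Vec3b>(0, 0);
    // Green hue band with some minimum saturation; thresholds are guesses.
    if (c[0] > 35 && c[0] < 85 && c[1] > 60)
      zucchini.push_back(p);
    else
      board.push_back(p);
  }
}
```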
Generate object features online (publishes the observed zucchini color features):
$ rosrun object_feature_generator feature_generator_node
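A minimal sketch of such a feature node, assuming the simplest possible color feature (the mean RGB over the segmented cloud) and illustrative topic names; the real feature_generator_node may publish a richer feature set:

```cpp
// Sketch of an online color-feature generator (assumed interface): it
// subscribes to the segmented zucchini cloud and publishes the mean RGB
// color as a std_msgs/Float64MultiArray. The real feature_generator_node
// may publish richer features; topic names here are illustrative.
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <std_msgs/Float64MultiArray.h>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl_conversions/pcl_conversions.h>

ros::Publisher feature_pub;

void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& msg)
{
  pcl::PointCloud<pcl::PointXYZRGB> cloud;
  pcl::fromROSMsg(*msg, cloud);
  if (cloud.empty()) return;

  // Average the per-point colors over the segmented object.
  double r = 0.0, g = 0.0, b = 0.0;
  for (size_t i = 0; i < cloud.size(); ++i) {
    r += cloud.points[i].r;
    g += cloud.points[i].g;
    b += cloud.points[i].b;
  }
  const double n = static_cast<double>(cloud.size());

  std_msgs::Float64MultiArray features;
  features.data.push_back(r / n);
  features.data.push_back(g / n);
  features.data.push_back(b / n);
  feature_pub.publish(features);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "feature_generator_sketch");
  ros::NodeHandle nh;
  feature_pub =
      nh.advertise<std_msgs::Float64MultiArray>("/zucchini/color_features", 1);
  ros::Subscriber sub = nh.subscribe("/zucchini/points", 1, cloudCallback);
  ros::spin();
  return 0;
}
```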