Project author: nbfigueroa

Project description:
"Robot-aware" scripts and methods for processing table-top scenes with Kinect v1 and v2.
Primary language: C++
Repository: git://github.com/nbfigueroa/kinect-process-scene.git
Created: 2016-06-12T19:40:05Z
Project community: https://github.com/nbfigueroa/kinect-process-scene

License:



kinect-process-scene

Collection of scripts and methods used to process table top scenes for object recognition/detection/segmentation using the Kinect sensors v1 (Xbox 360) and v2 (One) in ROS.

Main Dependencies
  • [PCL](http://pointclouds.org) library
  • [OpenCV](http://www.opencv.org)

When using Kinect v1, install openni_launch and/or freenect.

When using Kinect v2, install iai_kinect2.


kinect2_receiver

A custom Kinect2_receiver class that processes the already-registered depth and color images published by kinect2_bridge.
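The core of generating a point cloud from registered depth and color images is back-projection through the pinhole camera model. A minimal self-contained sketch of that step (the intrinsics `fx, fy, cx, cy` below are illustrative placeholders, not the calibration values iai_kinect2 provides):

```cpp
#include <cmath>
#include <cstdint>

// Hypothetical sketch of what a Kinect2 receiver does per pixel:
// back-project a registered depth pixel (u, v) into a 3D colored point.
struct PointXYZRGB {
    float x, y, z;
    std::uint8_t r, g, b;
};

PointXYZRGB backProject(int u, int v, std::uint16_t depth_mm,
                        std::uint8_t r, std::uint8_t g, std::uint8_t b,
                        float fx, float fy, float cx, float cy) {
    PointXYZRGB p;
    p.z = depth_mm / 1000.0f;      // Kinect depth is in millimeters
    p.x = (u - cx) * p.z / fx;     // pinhole model: X = (u - cx) * Z / fx
    p.y = (v - cy) * p.z / fy;     // pinhole model: Y = (v - cy) * Z / fy
    p.r = r; p.g = g; p.b = b;     // registered color maps 1:1 onto depth
    return p;
}
```

Because the images are already registered, the color at pixel (u, v) can simply be copied onto the back-projected point, which is what makes a combined XYZRGB cloud cheap to build.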

process_table_scene

“Robot-aware” table-top scene processing.

  • Filters points generated by robots/tools.
  • Segments table and desired object.
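Table segmentation in table-top pipelines is conventionally done by fitting the dominant plane with RANSAC (in PCL this would be `pcl::SACSegmentation` with `SACMODEL_PLANE`; the source does not say which method this node uses). A minimal self-contained sketch of the RANSAC idea, with all names illustrative:

```cpp
#include <cmath>
#include <cstdlib>
#include <vector>

// Illustrative RANSAC plane fit: repeatedly sample 3 points, build the
// plane through them, and keep the plane with the most inliers.
struct Vec3 { float x, y, z; };

static Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

// Returns indices of points within `tol` of the best plane found.
std::vector<size_t> ransacPlane(const std::vector<Vec3>& pts,
                                float tol, int iters, unsigned seed = 42) {
    std::srand(seed);
    std::vector<size_t> best;
    for (int it = 0; it < iters; ++it) {
        // Sample three distinct points defining a candidate plane.
        size_t i = std::rand() % pts.size();
        size_t j = std::rand() % pts.size();
        size_t k = std::rand() % pts.size();
        if (i == j || j == k || i == k) continue;
        Vec3 n = cross(sub(pts[j], pts[i]), sub(pts[k], pts[i]));
        float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        if (len < 1e-6f) continue;  // degenerate (collinear) sample
        n = {n.x / len, n.y / len, n.z / len};
        float d = -(n.x * pts[i].x + n.y * pts[i].y + n.z * pts[i].z);
        // Count inliers: points whose distance to the plane is below tol.
        std::vector<size_t> inliers;
        for (size_t p = 0; p < pts.size(); ++p) {
            float dist = std::fabs(n.x * pts[p].x + n.y * pts[p].y + n.z * pts[p].z + d);
            if (dist < tol) inliers.push_back(p);
        }
        if (inliers.size() > best.size()) best = inliers;
    }
    return best;
}
```

The inliers give the table; subtracting them (plus any robot/tool points filtered earlier) leaves candidate object clusters above the table.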

object_feature_generator

Extracts and streams default object features from the segmented point cloud:

  • mean & std of the R, G, B values (streamed as a wrench message)
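A sketch of that feature computation, assuming the six numbers are packed into the six fields of a wrench message (e.g. means in force.x/y/z and standard deviations in torque.x/y/z; the exact mapping is an assumption, not documented in the source):

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Per-channel mean and standard deviation over the segmented cloud's
// RGB values. Names are illustrative, not taken from the repository.
struct RGB { std::uint8_t r, g, b; };
struct ColorFeature { double mean[3]; double std[3]; };

ColorFeature colorFeature(const std::vector<RGB>& cloud) {
    ColorFeature f{{0, 0, 0}, {0, 0, 0}};
    const double n = static_cast<double>(cloud.size());
    // First pass: per-channel means.
    for (const RGB& p : cloud) {
        f.mean[0] += p.r; f.mean[1] += p.g; f.mean[2] += p.b;
    }
    for (int c = 0; c < 3; ++c) f.mean[c] /= n;
    // Second pass: per-channel population standard deviations.
    for (const RGB& p : cloud) {
        const double d[3] = {p.r - f.mean[0], p.g - f.mean[1], p.b - f.mean[2]};
        for (int c = 0; c < 3; ++c) f.std[c] += d[c] * d[c];
    }
    for (int c = 0; c < 3; ++c) f.std[c] = std::sqrt(f.std[c] / n);
    return f;
}
```

Reusing a wrench message this way is a pragmatic choice: it carries exactly six floats, so no custom message type is needed for the feature stream.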

Bimanual Peeling Zucchini Segmentor Instructions

On lasapc18

Start up the Kinect2 driver:

  1. $ roslaunch kinect2_bridge kinect2_bridge.launch

On lasa-beast

Generate the point cloud on the local machine from the depth/color images sent over the network:

  1. $ rosrun kinect2_receiver publisher_node

Run the zucchini segmentation node (publishes the zucchini and cutting-board point clouds):

  1. $ roslaunch process_table_scene bimanual_scene.launch

Generate object features online (publishes the observed zucchini color features):

  1. $ rosrun object_feature_generator feature_generator_node