Grasp2Vec: Learning Object Representations
from Self-Supervised Grasping
Eric Jang*,1, Coline Devin*,2,†, Vincent Vanhoucke1, and Sergey Levine1,2
*Both authors contributed equally
1Google Brain
2Department of Electrical Engineering and Computer Sciences, UC Berkeley
†Work done while author was interning at Google Brain
{ejang, vanhoucke, slevine}@google.com
coline@eecs.berkeley.edu
Abstract: Well structured visual representations can make robot learning faster and can improve
generalization. In this paper, we study how we can acquire effective object-centric representations
for robotic manipulation tasks without human labeling by using autonomous robot interaction with
the environment. Such representation learning methods can benefit from continuous refinement of
the representation as the robot collects more experience, allowing them to scale effectively without
human intervention. Our representation learning approach is based on object persistence: when a
robot removes an object from a scene, the representation of the scene should change according to
the features of the object that was removed.
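
The object-persistence idea above has a compact arithmetic reading: the difference between the scene
embedding before and after a grasp should align with the embedding of the grasped object. The sketch
below is a minimal, hypothetical illustration of that relation, not the paper's implementation; the
encoder functions, image shapes, and the fixed random projection standing in for a learned network
are placeholder assumptions.

import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

_rng = np.random.default_rng(0)
_W = _rng.standard_normal((16 * 16 * 3, 64)) / np.sqrt(16 * 16 * 3)

def scene_embed(image):
    # Stand-in for a learned scene encoder phi_s (a CNN in practice);
    # a fixed random projection keeps the example self-contained.
    return image.reshape(-1) @ _W

def object_embed(image):
    # Stand-in for a learned object encoder phi_o.
    return image.reshape(-1) @ _W

# Pre-grasp scene, post-grasp scene, and a close-up of the grasped object
# (random arrays standing in for camera images).
s_pre = _rng.random((16, 16, 3))
s_post = _rng.random((16, 16, 3))
o = _rng.random((16, 16, 3))

# Object persistence as embedding arithmetic:
# phi_s(s_pre) - phi_s(s_post) should be close to phi_o(o) after training.
delta = scene_embed(s_pre) - scene_embed(s_post)
print("similarity to grasped-object embedding:", cosine_similarity(delta, object_embed(o)))

# A training objective would push this similarity toward 1 over grasp episodes,
# e.g. by minimizing 1 - cosine_similarity(delta, object_embed(o)).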

