Grasp2Vec: Learning Object Representations
from Self-Supervised Grasping
Eric Jang*,1, Coline Devin*,2,†, Vincent Vanhoucke1, and Sergey Levine1,2
*Both authors contributed equally
1Google Brain
2Department of Electrical Engineering and Computer Sciences, UC Berkeley
†Work done while author was interning at Google Brain
{ejang, vanhoucke, slevine}@google.com
coline@eecs.berkeley.edu
Abstract: Well-structured visual representations can make robot learning faster and can improve generalization. In this paper, we study how we can acquire effective object-centric representations for robotic manipulation tasks without human labeling, by using autonomous robot interaction with the environment. Such representation learning methods can benefit from continuous refinement of the representation as the robot collects more experience, allowing them to scale effectively without human intervention. Our representation learning approach is based on object persistence: when a robot removes an object from a scene, the representation of the scene should change according to the features of the object that was removed.