The robotic claw mechanically moves downward into a bin filled with various home and office knick-knacks. It picks up a tape dispenser, scissors, a stapler, and various writing utensils, placing all the objects in a separate nearby bin.
For humans, grasping objects is a largely unthinking action. But for robots, object manipulation is contingent on advanced planning and analysis. Researchers from Google, however, are using deep learning to teach robots hand-eye coordination. Through trial and error, and constant feedback, the robots learned how to grasp novel objects in their environment.
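The core loop described here, a learned model that scores candidate gripper motions from the camera image and picks the most promising one, re-evaluating at every step, can be caricatured in a few lines of Python. This is an illustrative sketch only: the scoring heuristic, function names, and 2-D motion representation are assumptions for the example, not details of Google's actual system.

```python
import random

def predict_grasp_success(image, motion):
    """Hypothetical stand-in for the learned model that scores a
    candidate motion given the current camera image. Here the "image"
    is just a dict holding an object position, and the score is a toy
    heuristic that prefers motions landing near the object."""
    obj_x, obj_y = image["object_center"]
    dx, dy = motion
    return 1.0 / (1.0 + (dx - obj_x) ** 2 + (dy - obj_y) ** 2)

def choose_motion(image, n_candidates=16, rng=random):
    """Sample candidate gripper motions and pick the one the predictor
    scores highest -- the essence of servoing by constant feedback:
    this choice is re-made at every control step as the scene changes."""
    candidates = [(rng.uniform(-1, 1), rng.uniform(-1, 1))
                  for _ in range(n_candidates)]
    return max(candidates, key=lambda m: predict_grasp_success(image, m))
```

In the trial-and-error setup the article describes, the outcomes of executed grasps would serve as training labels to improve the predictor over time.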
“We collected about 800,000 grasp attempts over the course of two months, using between six and 14 robots at any given point in time, without any manual annotation or supervision,” the researchers wrote in a paper posted to arXiv.org. “The only human intervention into the data collection process was to replace the object in the bins in front of the robots and turn on the system.”
The robot consists of a lightweight arm with a two-finger gripper. Additionally, the system is outfitted with a monocular camera.
Interestingly, the researchers found that their robots utilized two different methods for grasping hard and soft objects. “Our system preferred to grasp softer objects by embedding the finger into the center of the object, while harder objects were grasped by placing the fingers on either side,” the researchers wrote.
The researchers reported that the robot had a failure rate between 10 percent and 20 percent when encountering new objects. When a grasp failed, the robot simply tried again.
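That retry behavior amounts to a simple attempt loop. A minimal sketch, where the `attempt_grasp` callable and the attempt budget are assumptions for illustration rather than details from the paper:

```python
def grasp_with_retry(attempt_grasp, max_attempts=3):
    """Retry a grasp until it succeeds or the attempt budget runs out.

    `attempt_grasp` is a hypothetical callable that executes one grasp
    and returns True on success. Returns the number of attempts used,
    or None if every attempt failed."""
    for attempt in range(1, max_attempts + 1):
        if attempt_grasp():
            return attempt
    return None
```

With a 10 to 20 percent per-attempt failure rate, even one retry drives the overall failure probability down substantially (e.g., 0.2 squared is 0.04 if attempts were independent).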
According to Popular Science, Google’s robot doesn’t perform as well as Cornell University’s DeGrasping robot project, which is similar and had a failure rate around 16 percent with hard objects.
In the future, the researchers hope to test the robot in various training setups to see how it works in other environments.