Robots See Like Humans in 3-D
Zygmunt Pizlo, Purdue professor in the Department of Psychological Sciences, stands between postdoctoral research assistants Tadamasa Sawada and Yunfeng Li as he adjusts the vision of a robot named Capek. The researchers will move around the dance floor in Pizlo’s laboratory while the robot “watches.” The goal is to simulate visual perception in the robot so it can “see” more like humans. Courtesy of Purdue Research Foundation
Zygmunt Pizlo and his research team glide across a parquet dance floor — not in some club for a night on the town, but in his Purdue University Visual Perception Lab as part of critical research for a technology that is ready to be licensed and commercialized.
They’re moving so a robot named Capek can “watch” them and conceptualize the research team’s actions as members move around objects like desks and chairs. The goal is to simulate visual perception in the robot so it can “see” more like humans.
“Enabling robots and other machines to see the world in 3-D like humans is one of the biggest challenges in robotics and artificial intelligence,” said Pizlo, a professor in the Purdue Department of Psychological Sciences. “Research in the field of robotic vision has typically focused on recording and analyzing 2-D images, but really it is about 3-D visual perception — being able to understand the 3-D scene in front of the robot so that it can decide what needs to be done with an object that is in its field of view. Should the robot walk around it? Pick it up?”
Pizlo has been working in the field of visual perception for 30 years.
“We believe there is a fundamental principle for human vision, and that is we rely on a priori knowledge about the physical environment, so we’re trying to program this knowledge of the physical environment into a robot’s artificial intelligence,” he said.
Pizlo said work in the lab is to develop a model of decision-making to mimic the human mind.
“We’re developing a technology that will allow robotic machines to map their environment using two cameras for eyes,” he said. “This process eliminates the need for additional range sensors currently used for robotic vision and reduces the time and complexity of robotic sight.”
Postdoctoral research assistants Tadamasa Sawada and Yunfeng Li are working with Pizlo. Sawada said humans effortlessly perform cognitive functions that are computationally difficult, and incorporating that ability into a robot is a challenge.
“We quickly and easily perceive the physical world — a 3-D shape and figure-ground organization,” Sawada said. “Figure-ground organization is key to seeing an object in 3-D instead of 2-D.”
Figure-ground perception is the tendency of a human’s visual system to simplify a scene or photo into a main object and cognitively “move” everything else into the background.
“The question is: how do we have a robot solve figure-ground organization like we do?” he said. “We do this by incorporating visual mechanisms and a priori knowledge about the physical environment into a robot.”
Conventional robotic vision technology uses multiple cameras with laser range finders and other sensors to detect objects around them. While the current systems allow for basic object recognition, they do not replicate the 3-D capabilities that are possible for humans, according to Pizlo.
“The key element in solving the puzzle of enabling 3-D vision for robots is to realize that our visual system uses a priori knowledge about the physical world. We are implementing this knowledge, as well as the visual mechanisms (algorithms), in a robot,” said Pizlo, who also is a professor of electrical and computer engineering. “The physical world is not completely random. Most natural objects are symmetrical, all objects have volume, gravity is always present, and ground surfaces are approximately horizontal. There is enough prior knowledge in the human visual system that we automatically see everything in 3-D. Combining a priori knowledge with visual information helps robots see the same way we do.”
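One common way to combine prior knowledge with image evidence is to score each candidate 3-D interpretation by how well it matches the image (likelihood) weighted by how probable it is in the natural world (prior), then pick the best. The toy example below is a hypothetical illustration of that idea under a symmetry-favoring prior; the candidate shapes and probability values are invented, not taken from Pizlo's model.

```python
# Hypothetical sketch: choosing a 3-D interpretation of a 2-D image by
# combining image evidence with a prior that favors symmetric objects.
# All candidates and numbers are invented for illustration.

candidates = [
    # A skewed box can project to the same 2-D image slightly better...
    {"shape": "skewed box",    "likelihood": 0.50, "prior": 0.10},
    # ...but symmetric shapes are far more common in the natural world.
    {"shape": "symmetric box", "likelihood": 0.45, "prior": 0.80},
]

# Pick the interpretation with the highest posterior score
# (likelihood * prior, up to a normalizing constant).
best = max(candidates, key=lambda c: c["likelihood"] * c["prior"])
print(best["shape"])  # → symmetric box
```

Even though the skewed interpretation fits the raw image marginally better, the symmetry prior tips the decision toward the regular shape, mirroring how human vision resolves the ambiguity of a single 2-D view.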
Enabling the human/robot connection will be a key factor in bringing robots into everyday life.
“Right now, robots are used in a number of ways, including manufacturing, space research, agriculture and even cleaning our floors, but they can’t bring us coffee in the morning,” he said. “Until they can see like us, they can’t truly interact with us. Once they can interact with us, they can begin doing all types of tasks, such as driving a car, helping surgeons in hospitals, assisting the elderly, providing sight for the blind, replacing people in high-risk situations like making repairs in a nuclear plant and, yes, bringing us coffee in the morning.”
Funding for Pizlo’s research came from the National Science Foundation, U.S. Department of Defense, Air Force Office of Scientific Research, U.S. Department of Energy and other sources.
Video: http://bit.ly/KmPGXy