When it comes to movement, humans hold an advantage over robots: by craning or turning the neck, a person can survey their surroundings and know exactly where they're placing their limbs.
At this week’s IEEE International Conference on Robotics and Automation in Stockholm, Sweden, researchers from Carnegie Mellon University’s Robotics Institute will present a method that helps robots keep track of the location of their own limbs.
The research is based on a technique called simultaneous localization and mapping (SLAM), which lets a robot combine information from sensors, such as cameras and lidar, to build a 3D map of its surroundings and ascertain its own location on that map.
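To see the two halves of that problem in miniature, consider a toy one-dimensional version. The sketch below is an illustrative assumption, not the team's code or any real SLAM library: a robot drives along a line with drifting odometry, updates its estimate of where a wall sits based on its estimated pose (mapping), and then corrects its pose from the wall estimate (localization).

```python
# Toy 1D "SLAM" sketch: simultaneously estimate the robot's pose and the
# position of one landmark (a wall), fusing both by simple averaging.
import random

random.seed(0)
true_wall = 10.0                 # ground-truth wall position (metres)
true_pose, est_pose = 0.0, 0.0   # actual vs. estimated robot position
wall_est, n_obs = None, 0

for step in range(20):
    # Move: odometry reports +0.5 m per step, but real motion is noisy.
    true_pose += 0.5 + random.gauss(0, 0.05)
    est_pose += 0.5
    # Sense: noisy range measurement to the wall.
    r = (true_wall - true_pose) + random.gauss(0, 0.02)
    # Mapping: where the wall must be, given our current pose estimate.
    obs = est_pose + r
    n_obs += 1
    wall_est = obs if wall_est is None else wall_est + (obs - wall_est) / n_obs
    # Localization: where we must be, given the wall estimate.
    est_pose = 0.5 * est_pose + 0.5 * (wall_est - r)

print(f"true pose {true_pose:.2f}, estimated pose {est_pose:.2f}")
print(f"true wall {true_wall:.2f}, estimated wall {wall_est:.2f}")
```

Real SLAM systems do the same dance in three dimensions with thousands of map points and probabilistic filters, but the core loop is the same: the map refines the pose estimate, and the pose estimate refines the map.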
They named their system Articulated Robot Motion for SLAM (ARM-SLAM); it uses a small depth camera attached to a Kinova Mico, a lightweight robotic arm.
By treating the arm itself as a sensor, the system uses the angles of the arm's joints to determine the camera's pose.
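That computation is forward kinematics: chaining one rigid-body transform per joint from the arm's base out to the camera. The sketch below is an illustrative assumption rather than the team's implementation, using a made-up three-joint planar arm with invented link lengths and angles.

```python
# Forward kinematics sketch: recover a camera's pose from joint angles
# for a hypothetical 3-joint planar arm (2D homogeneous transforms).
import numpy as np

def joint_transform(theta, link_length):
    """Rotate by the joint angle, then translate along the link
    to the next joint; returned as one 3x3 homogeneous matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, link_length * c],
                     [s,  c, link_length * s],
                     [0,  0, 1.0]])

def camera_pose(joint_angles, link_lengths):
    """Chain the per-joint transforms from the arm's base out to the
    camera on the last link; the product is the camera's pose."""
    T = np.eye(3)
    for theta, length in zip(joint_angles, link_lengths):
        T = T @ joint_transform(theta, length)
    return T

# Example: joints at 30, -15 and 45 degrees, links of 0.3 m each.
T = camera_pose(np.radians([30, -15, 45]), [0.3, 0.3, 0.3])
print("camera position (m):", T[:2, 2])
print("camera heading (deg):", np.degrees(np.arctan2(T[1, 0], T[0, 0])))
```

Because the joint encoders report these angles continuously and precisely, the pose estimate stays reliable even when the camera itself is moving too fast to track visually.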
“Automatically tracking the joint angles enables the system to produce a high-quality map even if the camera is moving very fast or if some of the sensor data is missing or misleading,” said researcher Matthew Klingensmith in a statement.
The team used its technology to build a 3D model of a bookshelf. According to Carnegie Mellon University, the team’s results were on par with or better than those of other mapping techniques.
The research received funding from Toyota, the U.S. Office of Naval Research, and the National Science Foundation.
“We still have much to do to improve this approach, but we believe it has huge potential for robot manipulation,” said researcher Siddhartha Srinivasa in a statement.