Researchers at MIT's Computer Science and Artificial Intelligence Laboratory developed a system to give robots a greater understanding of the random objects that fill our world and sometimes our closets. Similar technology could eventually help automate work in warehouses packing Amazon boxes, or assisting with cleaning in private homes, working quietly alongside Roombas.
"We want robots to learn by themselves how to very richly and visually understand lots of objects that are useful for lots of tasks," said Pete Florence, the co-lead author of the paper.
Robots already have a huge presence in warehouses and on assembly lines, but they usually can't learn and improvise. They can pick up and move objects, but only when those objects are in set positions and locations.
General assembly, such as packing random doodads into boxes at e-commerce warehouses, is the type of work humans do well. But with MIT's technology, a robotic arm could assist: it would know how to grab a mug by the handle or a shoe by its tongue. The system learns more about the objects over time and could eventually place items inside a box in a specific way.
To train the robotic arm, the researchers put groups of similar objects, such as shoes, hats, and mugs, in front of it. The robotic arm moved the products around, examining their features and learning what they had in common. After analyzing different shoes, it learned where the laces should be.
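The underlying idea in the paper is learning dense visual descriptors: every pixel of an object image gets a vector, and matching points on two objects (say, the lace area of two different shoes) end up with similar vectors. A minimal sketch of the matching step — all names, shapes, and the random "descriptors" here are illustrative, not from the article or the paper's actual code:

```python
import numpy as np

def match_descriptor(query_desc, target_desc_map):
    """Return the (row, col) pixel in the target image whose learned
    descriptor is closest (in L2 distance) to the query descriptor."""
    # target_desc_map has shape (H, W, D); query_desc has shape (D,)
    dists = np.linalg.norm(target_desc_map - query_desc, axis=-1)
    r, c = np.unravel_index(np.argmin(dists), dists.shape)
    return int(r), int(c)

# Toy example: stand-in random descriptor map instead of a trained network's output
rng = np.random.default_rng(0)
desc_map = rng.standard_normal((4, 4, 3))
query = desc_map[2, 1].copy()          # pretend this came from another image
print(match_descriptor(query, desc_map))  # recovers the matching pixel (2, 1)
```

In the real system a neural network produces the descriptor map, so a query taken from one shoe finds the corresponding spot on a different shoe — which is how the arm can locate "where the laces should be" on an object it has never seen.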
The test was done primarily on men's shoes, crowdsourced from staffers at CSAIL. But if they put a stack of stilettos in front of the robot, Florence believes it could develop a good visual representation of the high heels.
The research builds on previous advances in computer vision, deep learning and robotics, says Florence. The paper will be presented at the Conference on Robot Learning in Switzerland in October.
CNNMoney (San Francisco) First published September 10, 2018: 11:55 AM ET