DeepMind proposes new benchmark to improve robots’ object-stacking abilities

Stacking one object on top of another is a straightforward task for most people. But even the most sophisticated robots struggle to handle more than one such task at a time. Stacking requires a range of motor, perception, and analytical skills, including the ability to interact with different kinds of objects. The complexity involved has elevated this simple human task to a major challenge in robotics and spawned a cottage industry dedicated to developing new techniques and approaches.

A team of researchers at DeepMind believes that advancing the state of the art in robot stacking will require a new benchmark. In a paper to be presented at the Conference on Robot Learning (CoRL 2021), they introduce RGB-Stacking, which tasks a robot with learning how to grasp different objects and balance them on top of one another. While benchmarks for stacking tasks already exist in the literature, the researchers argue that what sets their work apart is the diversity of objects used and the evaluations performed to validate their findings. The results show that a combination of simulated and real-world data can be used to learn “multi-object manipulation,” suggesting a strong baseline for the open problem of generalizing to novel objects, the researchers wrote in the paper.

“To support other researchers, we’re open-sourcing a version of our simulated environment and releasing the designs for building our real-robot RGB-Stacking environment, along with the RGB-object models and information for 3D printing them,” the researchers said. “We are also open-sourcing a collection of libraries and tools used in our robotics research more widely.”

RGB-Stacking

With RGB-Stacking, the goal is to train a robotic arm via reinforcement learning to stack objects of varying shapes. Reinforcement learning is a machine learning technique that enables a system, in this case a robot, to learn by trial and error, using feedback from its own actions and experiences.
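
In rough terms, that trial-and-error idea can be pictured with a toy sketch like the one below; the action vector, reward function, and search strategy are stand-ins for illustration only, not DeepMind’s actual training setup.

    # Toy illustration of trial and error with feedback: try actions,
    # observe a reward, and keep whatever has worked best so far.
    # Nothing here reflects DeepMind's RGB-Stacking code.
    import numpy as np

    rng = np.random.default_rng(0)
    target_grasp = rng.uniform(-1.0, 1.0, size=4)   # a hidden "correct" action

    def reward(action):
        # Feedback from the environment: closer to the target means higher reward.
        return -float(np.linalg.norm(action - target_grasp))

    best_action, best_reward = None, float("-inf")
    for trial in range(500):
        action = rng.uniform(-1.0, 1.0, size=4)   # try an action (trial)
        r = reward(action)                        # observe the feedback (error)
        if r > best_reward:                       # keep what worked best so far
            best_action, best_reward = action, r

    print("best reward found:", round(best_reward, 3))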

RGB-Stacking places a gripper attached to a robot arm above a basket, with three objects in the basket: one red, one green, and one blue (hence the name RGB). The robot must stack the red object on top of the blue object within 20 seconds, while the green object serves as an obstacle and distraction.
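
Based purely on that description, a success check for a single episode might look something like the following sketch; the object fields, tolerances, and units are illustrative assumptions, not the benchmark’s actual evaluation code.

    # Hedged sketch of a success criterion: red resting on top of blue
    # within the 20-second limit, with the green distractor ignored.
    from dataclasses import dataclass

    @dataclass
    class ObjectState:
        x: float
        y: float
        z: float        # height of the object's base above the basket floor
        height: float   # object height in metres

    def is_stacked(red: ObjectState, blue: ObjectState,
                   elapsed_s: float, time_limit_s: float = 20.0,
                   xy_tol: float = 0.03) -> bool:
        """Red counts as stacked if it rests roughly on top of blue
        before the episode's time limit runs out."""
        within_time = elapsed_s <= time_limit_s
        aligned = abs(red.x - blue.x) < xy_tol and abs(red.y - blue.y) < xy_tol
        resting_on_top = abs(red.z - (blue.z + blue.height)) < 0.01
        return within_time and aligned and resting_on_top

    # Example: red sitting 5 cm up, directly over a 5 cm-tall blue object.
    blue = ObjectState(x=0.10, y=0.00, z=0.00, height=0.05)
    red = ObjectState(x=0.11, y=0.01, z=0.05, height=0.05)
    print(is_stacked(red, blue, elapsed_s=12.0))   # True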

According to the DeepMind researchers, the learning process ensures that a robot acquires generalized skills through training on multiple object sets. RGB-Stacking deliberately varies the grasp and stack affordances, the properties that define how a robot can grasp and stack each object, forcing the robot to exhibit behaviors that go beyond a simple pick-and-place strategy.

“Our RGB-Stacking benchmark includes two task versions with varying degrees of difficulty,” the researchers explained. “In ‘Skill Mastery,’ our goal is to train a single agent that is skilled at stacking a predefined set of five triplets. In ‘Skill Generalization,’ we use the same triplets for evaluation, but train the agent on a large set of training objects, a total of more than a million possible triplets. To test for generalization, these training objects exclude the family of objects from which the test triplets were chosen. In both versions, we decouple our learning pipeline into three phases.”
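
The split between the two versions can be pictured with a small, purely illustrative sketch; the object names, counts, and sampling logic below are assumptions chosen only so the combinatorics echo the “more than a million possible triplets” figure, and are not the benchmark’s released code.

    # Illustrative sketch of how the two task versions differ in which
    # object triplets the agent trains on. All names and counts are
    # placeholders, not the benchmark's actual object sets.
    import random

    # Five fixed evaluation triplets (placeholder names).
    TEST_TRIPLETS = [("red_1", "green_1", "blue_1"),
                     ("red_2", "green_2", "blue_2"),
                     ("red_3", "green_3", "blue_3"),
                     ("red_4", "green_4", "blue_4"),
                     ("red_5", "green_5", "blue_5")]

    # A larger training pool per colour that excludes the evaluation objects.
    # 110 objects per colour already gives 110**3 = 1,331,000 possible triplets.
    TRAIN_RED   = [f"red_{i}"   for i in range(10, 120)]
    TRAIN_GREEN = [f"green_{i}" for i in range(10, 120)]
    TRAIN_BLUE  = [f"blue_{i}"  for i in range(10, 120)]

    def sample_training_triplet(version: str):
        if version == "skill_mastery":
            # Train and evaluate on the same five predefined triplets.
            return random.choice(TEST_TRIPLETS)
        if version == "skill_generalization":
            # Train on combinations drawn from the large pool that
            # excludes the objects used in the evaluation triplets.
            return (random.choice(TRAIN_RED),
                    random.choice(TRAIN_GREEN),
                    random.choice(TRAIN_BLUE))
        raise ValueError(version)

    print(sample_training_triplet("skill_mastery"))
    print(sample_training_triplet("skill_generalization"))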

The researchers claim that their approach to RGB-Stacking results in “surprising” stacking strategies and “mastery” of stacking a subset of the objects. Still, they concede that they are only scratching the surface of what is possible and that the challenge of generalization remains unsolved.

“While researchers continue to work on solving the open challenge of true generalization in robotics, we hope that this new benchmark, together with the environment, designs, and tools we have released, contributes to new ideas and methods that can make manipulation even easier and robots more capable,” the researchers added.

As robots become more skilled at stacking and grasping objects, some experts believe this type of automation could drive the next U.S. manufacturing boom. In a recent survey by Google Cloud and The Harris Poll, two-thirds of manufacturers said their use of AI in day-to-day operations is increasing, with 74% saying they are aligned with the changing work environment. Manufacturers expect efficiency gains over the next five years attributable to digital transformation. Research by McKinsey and the World Economic Forum puts the value-creation potential of manufacturers implementing “Industry 4.0,” the automation of traditional industrial practices, at $3.7 trillion by 2025.
