Google DeepMind creates AI robot that beats humans at ping-pong

Published 12 Aug 2024


An artificial intelligence (AI)-powered robot that can compete in amateur-level table tennis is the latest achievement at Google DeepMind, according to a paper published on August 7.

The research is currently available on the arXiv preprint server. It details how Google’s AI lab developed the robot, its performance against players of different ability levels, and the reactions of the human athletes who faced it. As of this writing, the paper has yet to be peer-reviewed.

Scientific efforts in robotic design have recently focused on AI integration to equip robots with human-level capabilities. “Achieving human-level performance in terms of accuracy, speed, and generality still remains a grand challenge in many domains,” DeepMind researchers wrote in their study.

With this new system, the team took a step toward that goal, describing the result as a “milestone in robot learning and control.”

Building the AI ping-pong athlete

In building the AI-powered robot, the researchers first selected a robot arm. They chose the ABB IRB 1100, a model already used in industry, for its speed and precision and for its ability to slide side to side on a rail.

The robot arm was then trained in table tennis using a two-tiered approach. The first tier involved learning the game’s rules and how to execute each ping-pong move, including forehand serves and backhand spin.

This was accomplished by training the AI system on physics simulations and videos of humans playing the game, which required a relatively small dataset compared to other machine-learning methods.

To refine the robot’s skills, the researchers gathered information on its strengths, weaknesses, and limitations, which they then fed back to the AI. This iterative process provided the system with realistic evaluations of its abilities.

Meanwhile, the second tier focused on strategy. A higher-level algorithm was layered on top to select which skill or shot to use for each incoming ball. The system was also built to learn from its opponent’s play, allowing the robot to weigh the strengths and weaknesses of its contender.
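The two-tier design described above can be sketched in code. The following is an illustrative sketch only: every name, field, and the selection heuristic here are hypothetical stand-ins, since the real system learns its low-level skills and high-level strategy from data rather than using hand-written rules.

```python
# Hypothetical sketch of a two-tier (hierarchical) controller in the spirit
# of the design described in the article. Not DeepMind's actual code.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class BallState:
    """Simplified observation of the incoming ball (hypothetical fields)."""
    speed: float   # m/s
    height: float  # m above the table
    side: str      # "forehand" or "backhand"

# Tier 1: low-level skill policies. Each maps a ball state to a paddle action;
# here each one is a stub that just names the stroke it would execute.
def forehand_drive(ball: BallState) -> str:
    return "forehand_drive"

def backhand_push(ball: BallState) -> str:
    return "backhand_push"

def lob_return(ball: BallState) -> str:
    return "lob_return"

SKILLS: Dict[str, Callable[[BallState], str]] = {
    "forehand_drive": forehand_drive,
    "backhand_push": backhand_push,
    "lob_return": lob_return,
}

# Tier 2: a high-level controller that picks which skill to run for each
# incoming ball, using per-skill success estimates (a stand-in for the
# learned strategy layer and its self-evaluations).
def choose_skill(ball: BallState, success_rates: Dict[str, float]) -> str:
    # Restrict to skills applicable to this side of the paddle, then pick
    # the candidate with the highest estimated success rate.
    candidates: List[str]
    if ball.side == "forehand":
        candidates = ["forehand_drive", "lob_return"]
    else:
        candidates = ["backhand_push", "lob_return"]
    return max(candidates, key=lambda s: success_rates[s])

rates = {"forehand_drive": 0.7, "backhand_push": 0.5, "lob_return": 0.6}
ball = BallState(speed=4.0, height=0.3, side="backhand")
skill = choose_skill(ball, rates)
print(SKILLS[skill](ball))  # dispatches to the selected low-level skill
```

Updating the `success_rates` table between rallies mirrors the feedback loop the researchers describe: the robot’s measured strengths and weaknesses feed back into which shot the strategy layer favors.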

No match for pros

To assess its performance, the team tested the AI-powered robot in 29 matches against humans with varying skill levels, ranging from beginner to tournament-level table tennis players.

The robot won 13 of the 29 games, beating every beginner it faced and winning 55% of its matches against intermediate players. However, it failed to secure a single victory against advanced players.

According to the paper, this result demonstrates solidly amateur-level human performance for the robot arm.

The testing also revealed some systematic weaknesses. The robot arm particularly struggled when returning fast and high balls, reading spin, and hitting backhand shots. The researchers are already seeking ways to fix these issues and improve the system so that its moves can become less predictable.

They also mentioned that human players mostly enjoyed their matches against the AI-powered robot. “Across all skill groups and win rates, players agreed that playing with the robot was ‘fun’ and ‘engaging.’”

In addition to table tennis, Google DeepMind is confident that its innovation could be useful for a wide range of applications that require quick responses in dynamic physical environments.