Researchers uncover method to steal AI models with 99.91% accuracy

Published 19 Dec 2024

North Carolina State University (NC State) researchers have demonstrated a new method to steal artificial intelligence (AI) model hyperparameters with remarkable accuracy. Dubbed “TPUXtract,” the attack targets Google Edge Tensor Processing Units (TPUs) used in Pixel phones and raises serious questions about AI security. The researchers outlined this technique in their paper, “TPUXtract: An Exhaustive Hyperparameter Extraction Framework.”

How the attack works

The attack setup involved placing an electromagnetic (EM) sensor close to the target device while it was actively running an AI model. The sensor captured EM signals that varied with the TPU’s computations. These signals act as unique “signatures” that allowed the researchers to infer the hyperparameters of an AI model layer by layer. By comparing the captured signatures against a pre-existing database of model behaviors, the team recreated AI models with 99.91% accuracy.

“The electromagnetic data from the sensor essentially gives us a ‘signature’ of the AI processing behavior. Once we’ve reverse-engineered the first layer, that informs which 5,000 signatures we select to compare with the second layer,” Ashley Kurian, the study’s first author, explained. “And this process continues until we’ve reverse-engineered all of the layers and have effectively made a copy of the AI model.”
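To make the layer-by-layer idea concrete, here is a minimal Python sketch of matching each layer’s EM segment against a database of candidate signatures. It is an illustration of the general technique described above, not the researchers’ actual tooling; the names (correlate, extract_layers, template_db) and the similarity measure are assumptions.

```python
import numpy as np

def correlate(trace_a: np.ndarray, trace_b: np.ndarray) -> float:
    """Similarity between two EM traces (normalized cross-correlation); assumed metric."""
    a = (trace_a - trace_a.mean()) / (trace_a.std() + 1e-12)
    b = (trace_b - trace_b.mean()) / (trace_b.std() + 1e-12)
    n = min(len(a), len(b))
    return float(np.dot(a[:n], b[:n]) / n)

def extract_layers(em_trace_per_layer, template_db):
    """
    em_trace_per_layer: list of EM segments, one per executed layer.
    template_db: callable that, given the hyperparameters recovered so far,
                 returns candidate (hyperparams, reference_trace) pairs --
                 the "5,000 signatures" selected for the next layer in the quote above.
    """
    recovered = []
    for segment in em_trace_per_layer:
        # Candidates are conditioned on the layers already reverse-engineered.
        candidates = template_db(recovered)
        # Keep the hyperparameter set whose reference trace best matches this segment.
        best = max(candidates, key=lambda c: correlate(segment, c[1]))
        recovered.append(best[0])
    return recovered
```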

This guided, layer-by-layer approach is far more efficient than brute-force attacks, which would have to guess an entire model’s configuration at once. The study tested models such as MobileNet V3, ResNet-50, and Inception V3, which typically have dozens or hundreds of layers, running on devices like the Coral Dev Board equipped with an Edge TPU.
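Some rough, back-of-the-envelope arithmetic shows why that matters. The 5,000 per-layer candidates come from the quote above; the 100-layer depth is an assumed round number for a deep network, not a figure from the paper.

```python
# Illustrative arithmetic only (assumed values, not from the paper).
candidates_per_layer = 5_000   # per-layer candidate signatures, as in the quote above
layers = 100                   # assumed depth for a deep network such as Inception V3

whole_model_search = candidates_per_layer ** layers      # brute force over the whole model
layer_by_layer_search = candidates_per_layer * layers    # guided, one layer at a time

print(f"brute force: ~10^{len(str(whole_model_search)) - 1} combinations")
print(f"layer-by-layer: {layer_by_layer_search:,} comparisons")
```

Guessing every layer at once blows up combinatorially, while matching one layer at a time keeps the search to a few hundred thousand comparisons.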

Why it matters

Hyperparameters are the settings, or “secret recipes,” that shape an AI model’s output and performance. Stealing them could allow nefarious actors to clone expensive AI models. This undermines intellectual property rights and opens the door to further exploits.
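For a single convolutional layer, the kind of “settings” at stake looks roughly like the following sketch; the values are made up for illustration and are not taken from any specific model in the study.

```python
# Illustrative per-layer hyperparameters (made-up values).
layer_hyperparameters = {
    "layer_type": "conv2d",
    "filters": 64,           # number of output channels
    "kernel_size": (3, 3),   # spatial size of each filter
    "stride": (1, 1),
    "padding": "same",
    "activation": "relu",
}
# Recovering values like these for every layer lets an attacker rebuild the
# network's architecture, which is a large part of recreating the model.
```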

“AI models are valuable; we don’t want people to steal them,” said Aydin Aysu, an associate professor at NC State and co-author of the paper. When a model is leaked, it becomes vulnerable to attacks because third parties can study the model and identify any weaknesses, he added.

The research highlights vulnerabilities in devices like Google’s Edge TPU, which is widely used for running AI in edge computing scenarios. These scenarios range from smart home devices to autonomous vehicles, meaning the potential impact of such vulnerabilities is vast.

Countermeasures and industry response

The researchers disclosed their findings to Google, which has yet to comment publicly on the issue. They also emphasized the need for countermeasures, including memory encryption and improved shielding of hardware components to block EM signal leakage. The findings are set to appear in the IACR Transactions on Cryptographic Hardware and Embedded Systems in 2025.

“Now that we’ve defined and demonstrated this vulnerability, the next step is to develop and implement countermeasures to protect against it,” Aysu added.

Securing AI models is particularly important as AI developers invest billions of dollars in creating these models. Any security lapse could result in massive financial losses and compromised data. “The coverage and accuracy of our approach raise significant concerns about the vulnerability of commercial accelerators like the Edge TPU to model stealing in various real-world scenarios,” the authors stated in the paper.

While Google considers its response, this research serves as a wake-up call for the industry to prioritize AI security and protect the intellectual property embedded in these advanced systems.