Critical flaw in Apple Vision Pro reveals potential remote password theft via eye-tracking

Published 16 Sep 2024


Image from Apple

Computer scientists recently uncovered a significant security flaw in Apple’s Vision Pro mixed reality headset that allowed hackers to remotely reconstruct passwords and other sensitive information by analyzing eye-tracking data.

The vulnerability, named “GAZEploit,” was detailed in a study by researchers from the University of Florida, Texas Tech University, and the CertiK Skyfall Team.

The attack relies on the Vision Pro’s eye-tracking feature, which lets users type on a virtual keyboard by looking at keys and tapping their fingers. It did not require direct access to the headset; instead, it worked by analyzing the eye movements of a user’s 3D avatar during video calls or streaming sessions. Aside from passwords, GAZEploit could intercept messages and website addresses that users typed during video calls.

Hanqiu Wang, one of the lead researchers, explained, “Based on the direction of the eye movement, the hacker can determine which key the victim is now typing.” The study demonstrated that the attack could achieve a 77% accuracy rate in identifying letters in passwords within five guesses and a 92% accuracy rate in reconstructing messages.
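A rough illustration of the idea: once an attacker has estimated where on the virtual keyboard the avatar’s gaze lands, recovering the keystroke reduces to a nearest-key lookup. The layout coordinates and the `nearest_key` helper below are hypothetical and are not taken from the study.

```python
# Illustrative only: a toy nearest-key lookup, assuming gaze has already been
# mapped to 2D keyboard coordinates. The key positions are hypothetical and do
# not reflect the actual visionOS keyboard geometry or the GAZEploit code.

QWERTY_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

# Build a {key: (x, y)} map with unit spacing and a half-key offset per row.
KEY_POSITIONS = {
    ch: (col + 0.5 * row, float(row))
    for row, keys in enumerate(QWERTY_ROWS)
    for col, ch in enumerate(keys)
}

def nearest_key(gaze_x: float, gaze_y: float) -> str:
    """Return the key whose centre is closest to the estimated gaze point."""
    return min(
        KEY_POSITIONS,
        key=lambda ch: (KEY_POSITIONS[ch][0] - gaze_x) ** 2
        + (KEY_POSITIONS[ch][1] - gaze_y) ** 2,
    )

print(nearest_key(3.2, 0.1))  # -> 'r' (gaze near the fourth key of the top row)
```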

The vulnerability was disclosed to Apple in April, and the company issued a patch in July with the release of visionOS 1.3. The update suspends the use of 3D avatars when the virtual keyboard is active, effectively mitigating the risk of data leakage, with a description stating, “The issue was addressed by suspending Persona when the virtual keyboard is active.”

The GAZEploit attack involved two primary steps. First, the researchers identified when a user was typing by analyzing patterns in their eye movements. They noted that eye gaze became more concentrated and exhibited a periodic pattern during typing sessions, while the frequency of eye blinking decreased.
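The cues the researchers describe, tighter gaze concentration and a lower blink rate, can be pictured as simple per-window statistics over a gaze trace. The sketch below is only an illustration of those cues; the window length, data layout, and feature choices are assumptions, not the paper’s pipeline.

```python
# Illustrative only: computing the two cues described above (gaze concentration
# and blink rate) over sliding windows of a gaze trace. Window length, hop
# size, and array layout are assumptions for the sketch, not values from the
# GAZEploit study.
import numpy as np

def window_features(gaze_xy: np.ndarray, blinks: np.ndarray,
                    win: int = 90, hop: int = 30) -> np.ndarray:
    """gaze_xy: (T, 2) gaze estimates; blinks: (T,) 0/1 blink flags.
    Returns one row of [gaze dispersion, blink rate] per window."""
    feats = []
    for start in range(0, len(gaze_xy) - win + 1, hop):
        g = gaze_xy[start:start + win]
        b = blinks[start:start + win]
        dispersion = float(np.mean(np.var(g, axis=0)))  # low while typing
        blink_rate = float(np.mean(b))                  # drops while typing
        feats.append([dispersion, blink_rate])
    return np.array(feats)
```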

Using these patterns, they then trained a recurrent neural network (RNN) to distinguish typing sessions from other activities like watching videos or playing games. The RNN achieved high accuracy in identifying typing sessions and individual keystrokes.
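A minimal sketch of that kind of recurrent classifier is shown below, labelling sequences of per-window gaze features as typing or other activity. The layer sizes, feature dimension, and PyTorch framing are assumptions for illustration, not the architecture from the study.

```python
# Illustrative only: a small GRU classifier of the general kind described,
# labelling each sequence of gaze-feature windows as "typing" or "other".
# Layer sizes and the feature dimension are assumptions, not the researchers'
# model.
import torch
import torch.nn as nn

class TypingDetector(nn.Module):
    def __init__(self, n_features: int = 2, hidden: int = 32):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # logits: [other, typing]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, n_features) sequences of per-window features
        _, h = self.rnn(x)
        return self.head(h[-1])

model = TypingDetector()
dummy = torch.randn(4, 20, 2)   # 4 sequences of 20 feature windows each
logits = model(dummy)
print(logits.argmax(dim=1))     # predicted class per sequence
```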

The implications of this research extend beyond the Vision Pro headset. While there is no record of the attack being used against real-world users, it serves as a reminder of the risks that come with wearable technology.

As Wang and his colleagues noted, “These technologies … can inadvertently expose critical facial biometrics, including eye-tracking data, through video calls where the user’s virtual avatar mirrors their eye movements.”

As wearable technology becomes smaller, cheaper, and more deeply integrated into daily life, the amount of personal data these devices can collect grows. That includes not only eye-tracking data but also health information, location data, and other biometric measurements.

Apple’s quick fix reflects its commitment to user privacy. However, users must also be aware of the risks associated with these devices and take steps to protect themselves, such as avoiding gaze-based typing when entering sensitive information and reviewing their privacy settings.