DIY AI weapon raises alarm: Hobbyist creates voice-controlled gun using ChatGPT

Published 13 Jan 2025

Why trust Greenbot

We maintain a strict editorial policy dedicated to factual accuracy, relevance, and impartiality. Our content is written and edited by top industry professionals with first-hand experience, and undergoes thorough review by experienced editors to guarantee adherence to the highest standards of reporting and publishing.

A hobbyist engineer has built a gun powered by artificial intelligence (AI) that responds to voice commands, raising fresh concerns about the spread of autonomous weapons.

The developer, known online as STS 3D, built a robotic rifle system using ChatGPT to interpret voice instructions and control the weapon’s aim and firing mechanisms.

In videos shared online, the developer tells the system, “ChatGPT, we’re under attack from the front left and front right.” The rifle immediately swivels and fires blanks in those directions.

OpenAI, the company behind ChatGPT, quickly shut down the developer’s access to its Realtime API. “We proactively identified this violation of our policies and notified the developer to cease this activity,” OpenAI told Futurism, citing its rules against using its technology for weapons.

The incident shows how easily AI tools meant for everyday tasks can be repurposed into weapons. STS 3D, who operates independently of any military or defense organization, has not commented publicly on the backlash. Yet the project has drawn comparisons to dystopian scenarios seen in movies like Terminator.

“There’s at least three movies explaining why this is a bad idea,” one Reddit user responded to STS 3D’s videos.

This development comes as military contractors are already working on similar systems. Defense company Allen Control Systems (ACS) recently showed off its “Bullfrog” system, an AI-controlled machine gun that can target drones. Unlike STS 3D’s private project, the Bullfrog is being tested by the U.S. Defense Department.

Military experts and defense contractors argue that AI weapons could make warfare safer and more precise. Steve Simoni, CEO of ACS, says AI-guided weapons target threats with greater accuracy than human soldiers, and the Bullfrog demonstrates this capability by shooting down small drones with remarkable precision.

Austrian Foreign Minister Alexander Schallenberg calls the rise of AI weapons “this generation’s Oppenheimer moment,” comparing it to the creation of nuclear weapons. His comment points to growing international concern about lethal AI-powered weapons.

The United Nations has long called for limits on autonomous weapons. However, both military forces and private developers continue to explore these technologies. In December 2024, OpenAI partnered with defense contractor Anduril to develop AI systems for military use, having revised its usage policies beforehand to allow such weapons-related work.

Security experts warn that current rules may not be enough to control AI weapon development. While companies like OpenAI can block individual users, the basic tools for creating autonomous weapons are becoming more available. Open-source AI models and 3D printing technology mean determined individuals can build weapon systems outside normal oversight.

Looking ahead, lawmakers and AI companies face tough choices about controlling AI technology that could be used for both helpful and harmful purposes. The ability of a hobbyist to create an autonomous weapon system suggests current safeguards may need serious updating.