Anthropic warns of coming AI disaster, calls for regulation within the next 18 months

Published 2 Nov 2024


Just days before the U.S. presidential election, artificial intelligence (AI) safety company Anthropic has issued a serious warning, urging quick government action to prevent major AI risks. In a Thursday announcement, Anthropic shared urgent suggestions for focused government regulations, along with data showing fast improvements in AI abilities, which the company says make oversight essential.

Anthropic’s call to action highlights the rapid gains AI models have made in fields such as software development and cyber operations, and the risks those gains create. The company pointed to the progress of its Claude models, saying, “On the SWE-bench software engineering task, models have improved from being able to solve 1.96% of a test set of real-world coding problems…to 49%.”

This pace of progress, the company said, shortens the timeline for potential cybersecurity risks. Anthropic’s Frontier Red Team also found that current models can already assist with cyber attacks—a concerning trend likely to intensify as new models get better at handling complex tasks.

The company also presented evidence of improving AI capability in scientific areas. According to Anthropic, AI systems improved their scientific knowledge by nearly 18% from June to September of this year, as measured by the GPQA benchmark. On the benchmark’s most difficult questions, some models performed nearly as well as human experts.

Anthropic proposed a framework for governments, recommending regulation modeled on a Responsible Scaling Policy (RSP) like Anthropic’s own. “Judicious, narrowly-targeted regulation can allow us to get the best of both worlds: realizing the benefits of AI while mitigating the risks,” the blog explained. The company stressed the need for transparency, security standards, and simplicity in regulation, urging that these measures be both flexible and focused.

In its advice to other AI companies, Anthropic stressed the importance of making RSPs a central part of product development rather than an afterthought. Anthropic emphasized that RSPs should drive ongoing attention to potential threats, even while those threats remain abstract.

As time for action runs out, Anthropic ended with a call for teamwork. “It is critical over the next year that policymakers, the AI industry, safety advocates, civil society, and lawmakers work together to develop an effective regulatory framework that meets the conditions above,” Anthropic stated.