California Governor Gavin Newsom vetoed the state's artificial intelligence (AI) safety bill on Sunday, citing concerns that it would stifle innovation and drive businesses out of the state. The bill, which aimed to impose some of the first AI regulations in the United States, also exposed a tension in the tech industry's stance on regulation: companies publicly call for oversight yet resist state-level laws that could set a precedent across the U.S.
The proposed legislation, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), would have required rigorous safety testing for advanced AI systems costing more than $100 million to develop. It also called for a “kill switch” mechanism to shut down AI models in case of malfunction and introduced stringent oversight of cutting-edge AI models with capabilities beyond those of current technology.
Governor Newsom’s Concerns
Governor Newsom argued that the bill’s regulations were overly broad and that its proposed standards could apply even to low-risk AI technologies, regardless of their intended application.
“[The bill] does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data,” Newsom said in his veto message. He also emphasized that the legislation could drive companies out of California, a global hub for tech.
State Senator Scott Wiener, who authored the bill, disagreed with the governor’s decision, warning that California would now lack the necessary oversight for an “extremely powerful technology.” He criticized the reliance on voluntary commitments from tech companies, arguing that these seldom provide sufficient protection for the public.
“We cannot afford to wait for a major catastrophe to occur before taking action,” Wiener said.
Tech Industry’s Stance on AI Regulation
For years, tech giants have insisted that artificial intelligence needs regulation to prevent harm, yet those same companies were among the fiercest opponents of the proposed bill. Tech behemoths like Google, Meta, and OpenAI argued that the legislation would hinder AI development in California, particularly for open-source AI models that rely on publicly available code. Critics warned that the proposed rules could discourage innovation and push AI research and development to regions with less restrictive regulations.
Regardless, Newsom affirmed his commitment to AI regulation, announcing plans to collaborate with experts on developing “workable guardrails” for the technology. He also tasked state agencies with assessing the risks associated with AI, particularly in critical infrastructure sectors such as energy and water.
The bill exposed a contradiction at the heart of the tech industry’s stance on regulation, raising questions about what kind of oversight companies actually want. Many would prefer federal regulations they can help shape over stringent state laws that could set a precedent across the U.S. Tech firms are keen to avoid a patchwork of state rules that would complicate compliance and raise operational costs; federal regulation, by contrast, would be more uniform.
The debate around SB 1047 is just one part of the broader national discussion on AI governance, as efforts in Congress to introduce federal AI regulations have stalled. Some figures, like Tesla CEO Elon Musk, supported the bill, arguing that proactive regulation is essential to managing the risks posed by powerful AI systems.
With the veto, California, home to many of the world’s most advanced AI companies, will remain at the forefront of this evolving conversation.