California’s new artificial intelligence (AI) bill, Senate Bill 1047 (SB 1047), continues to raise concerns, with the latest coming from the “Godmother of AI,” Dr. Fei-Fei Li, who warned about its significant unintended consequences in US communities.
In an August 6 commentary posted on Fortune, the professor and co-director of Stanford’s Human-Centered AI Institute called the bill “well-meaning” but cautioned that it will “unnecessarily penalize developers, stifle our open-source community, and hamstring academic AI research, all while failing to address the very real issues it was authored to solve.”
SB 1047, or the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, was introduced by Senator Scott Wiener and passed the California Senate on May 21. It holds frontier AI developers legally accountable for the downstream use or modification of their models.
Under the bill, developers must implement safeguards before training to prevent their AI models from providing “hazardous capabilities.” Only models trained using more than 10^26 integer or floating-point operations (FLOPs) of compute, or costing at least $100 million to train, are covered by SB 1047.
Undue punishment for developers
The bill has since sparked fierce debate among the AI community, with its critics claiming that it would place an impossible burden on developers and stifle innovation.
The “Godmother of AI” shared similar sentiments, pointing out that it is impossible for each AI developer—particularly budding coders and entrepreneurs—to predict every possible use of their model.
In addition, the vague, overbroad, and impractical standards, requirements, and definitions in SB 1047 cause significant uncertainty for tech companies. This may force developers to pull back and act defensively, limiting progress in AI technology.
Crippling open-source development and academia
Further, the bill’s “kill switch” mandate, which requires that a covered model can be fully shut down at any time, shackles open-source development.
“If developers are concerned that the programs they download and build on will be deleted, they will be much more hesitant to write code and collaborate,” Li wrote.
This requirement affects not only private AI companies but also the public sector, which has long relied on open collaboration as the foundation for services such as GPS, MRIs, and the Internet itself.
Li highlighted that academia would suffer from the consequences of the restrictions on open-source development, saying that academic AI research cannot advance without collaboration and access to model data.
“Take computer science students, who study open-weight AI models. How will we train the next generation of AI leaders if our institutions don’t have access to the proper models and data? […] SB-1047 will deal a death knell to academic AI when we should be doubling down on public-sector AI investment,” she explained.
‘Moonshot mentality’
Li also criticized SB 1047 for failing to address the potential harms of AI advancement, including bias and deepfakes.
She added that instead of the “overly and arbitrarily restrictive” bill, California policymakers must adopt a “moonshot mentality” in creating AI policy that empowers open-source development, puts forward uniform and well-reasoned rules, and builds consumer confidence.
True to her moniker, she expressed her willingness to collaborate with Senator Wiener, writing, “Let us work together to craft AI legislation that will truly build the technology-enabled, human-centered society of tomorrow. Indeed, the future of AI depends on it.”