Impacts Analysis of California's SB 1047
Executive Summary
Background
Framework for evaluating AI supervisory processes:
- Is it certain? Does it have high precision and recall?
- Is it efficient? Is it simple, fast, low-cost, and comprehensible to a wide range of people?
- Is it adaptable? Can it handle unknown risks and can the process itself be adapted?
- Is it accountable? Does it encourage transparency and is it accountable to the public and scientific community?
- Does it minimize unintended harms and moral hazards?
Analysis of current 1047 proposal:
- While the proposal has good intent, it tries to solve a complex research problem with the legal liability system, which is ill-adapted to the task
- Key terms are uncertain, introducing moral hazard and the potential for regulatory abuse
- It may not even address the right research problems. Other important risks from AI are not covered, including threats from less advanced models
- Unclear how it interacts with scientific, open-source and consumer communities which already provide fast supervision with greater representation
- Concentrates power (even military power) in a small, minimally accountable Frontier Model Division which is a highly attractive target for regulatory capture
- Allocates power to scarce intermediaries - developers of specialized economic models of AI, law, and policy - for which no norms or competitive marketplace exists
- May incentivize geopolitical maneuvering for control of key regulatory positions
- Inflexible to change compared to open scientific processes like peer review and open letters, which have a long track record as supervisory tools for research questions
Suggestions:
- Fund:
- Competitive grant programs to reduce uncertainty over problems and solutions via research and standardization. The community currently underinvests in these, especially in analyzing the deployment of AI models.
- Advise:
- Provide key input to ongoing community processes to develop eval sets into official standards
- Provide key input to ongoing community processes to develop responsible disclosure processes for vulnerabilities
- Legislate:
- Mandate industry adoption of standards proposed by the community which mitigate urgent, near-term risks:
- Pass narrowly-scoped bills which mandate additional context for AI-generated content (e.g. for watermarking, political ads)
Related
- EFF Response
- Software & Information Industry Association Response
- AI and Accountability: Policymakers Risk Halting AI Innovation - Dean W. Ball in the Orange County Register
- Beware the 'Brussels effect' on AI
- Regulating Frontier Models in AI - Will Rinehart in The Dispatch
- r/LocalLLaMA Response (94 Comments)
- X/Twitter Response (37 Comments)
- Open Letter to Assembly Judiciary Committee Re: SB 1047 (110 AI founders & experts)
- stopsb1047.com
- Letter Re: Senate Bill 1047 - August 2nd, 2024
- Y Combinator Founder-Led Statement on SB 1047 (144 YC founders)
- Chamber of Progress Summary
- Business Software Alliance Letter
- Center for American Entrepreneurship Letter
- Words to Fear: I’m From the State Government, and I’m Here to Help with AI Risk (Cato Institute)
- SB 1047: 'safe & secure' really means 'see you in court' (Civil Justice Association of California)
- "It almost seems like science fiction to see an actual bill like this. I support thoughtfully considering safety implications with any AI, but I think this bill is likely naive about what is actually possible to do before a model is even trained (note it says, "before initiating training of that covered model"). I think one consequence might be that researchers feel unsafe trying out ideas because they could be held responsible for even trying to train them, which would push AI research outside California. I'm hoping there are better solutions than that."
...
"My solution would be don't use models for things they haven't been validated for. But don't criminalize the researcher who created the model."
- Kenneth O. Stanley, artificial intelligence researcher, former Professor of Computer Science, founder @ Maven, author of "Why Greatness Cannot Be Planned"
- "This bill is too broad and would likely have many unintended consequences. In particular, criminalizing model development is a step not to be taken lightly; I'm surprised to see it being seriously discussed in the California legislature. A more useful bill would be narrower in scope and focus on addressing specific near-term harms."
- Ethan Fast, Co-Founder, VCreate (Foundation models for T-cell receptors)
- "SB-1047 could reduce AI safety, through reducing transparency, collaboration, diversity, and resilience." - Jeremy Howard, co-founder @ Answer.ai and Fast.ai, Digital Fellow @ Stanford, former President and Chief Scientist @ Kaggle. (Full Comment)
- "California Bill 1047 is an attack on AI innovation." - Martin Casado, General Partner @ a16z (Full Comment)
- "Lots of work for lawyers and bureaucrats, and a hindrance to the builders. The burgeoning bureaucracy is the shoggoth on the face of America." - Steve Jurvetson, Co-founder of Future Ventures and DFJ (Full Comment)
- "We cannot let this bill pass." - Guillaume Verdon, Founder @ Extropic (Full Comment)
- "This is the most brazen attempt to hurt startups and open source yet." - Brian Chau, Executive Director @ Alliance for the Future (Full Comment)
- "about 12 months ago, the Center For AI Safety's "Statement on AI Risk" warned that AI could cause human extinction and stoked fears of AI taking over. This alarmed leaders in Washington. But many people in AI pointed out that this dystopian science-fiction scenario had little basis in reality." - Andrew Ng, computer science professor @ Stanford, founder @ DeepLearning.ai, AI Fund, Coursera, Landing.ai, co-founder and former head of Google Brain, former Chief Scientist at Baidu ([1], [2], [3])
- "Policy makers should not listen to fringe AI doomers." - Yann LeCun, ACM Turing Award Laureate, Professor at NYU. Chief AI Scientist at Meta. Researcher in AI, Machine Learning, Robotics. (Full Comment)
- "I'm deeply concerned about California's SB-1047, Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. While well intended, this bill will not solve what it is meant to and will deeply harm #AI academia, little tech and the open-source community." - Fei-Fei Li, co-director of the Stanford Institute for Human-Centered Artificial Intelligence, member of the UN Scientific Advisory Board, Professor of Computer Science @ Stanford, founder of ImageNet and AI4ALL (Full Comment)
- "Wrapping up new AI models in red tape effectively cements the biggest tech players as winners of the AI race" - Todd O'Boyle, Tech Policy Director @ Chamber of Progress ([1] [2])
- "...there's just absolutely no need for it. It looks to me like it's just going to empower people with more resources. My advice to government officials is that if they're so lax in enforcing antitrust laws, this will continue to do the opposite, making AI companies even more powerful." - Ben Recht, engineering and computer science professor @ UC Berkeley (Full Comment)
- "It would slow innovation, thwart advancements in safety and security, and undermine California's economic growth. The bill's technically infeasible requirements will chill innovation in the field of AI and lower access to the field's cutting edge, thereby directly contradicting the bill's stated purpose" - AI Alliance (Full Comment)
- "the bill encases ostensibly reasonable measures within a chillingly vague enforcement regime that risks jeopardizing America's global AI leadership outright." - Samuel Hammond, Senior Economist @ the Foundation for American Innovation (Full Comment)
- "...everyone in San Francisco should be alarmed about SB 1047, which is currently circulating through the California legislature, and could have significant unintended consequences for our City's economy. SB1047 has the potential to kill our nascent AI ecosystem in San Francisco and stop the one kernel of hope our fragile local technology economy has today."- Mark Farrell, former Interim Mayor of San Francisco and current Mayoral candidate (Full Comment)
- "The bill's good intentions are overshadowed by speculative regulations that will chill innovation before it begins." - Aaron Peskin, San Francisco Board of Supervisors President and current Mayoral candidate (Full Comment)
- "It's a totally misguided law. I think about when the printing press was first invented, the Ottoman Empire actually banned it; they made it illegal to have a printing press...At the same time in Europe, the printing press wasn't banned and they had the Renaissance...And you can see the Ottoman Empire isn't around anymore." - Greg Lin Tanaka, Palo Alto City Councilmember (Full Comment)
- "As students and faculty of the University of California-a globally-leading AI research institution-we believe that SB1047 is fundamentally wrongheaded. It attempts to solve questionably real problems using scientifically unfounded methods, and will be detrimental to educational growth, scientific innovation, and economic development in AI for the University of California, the state of California, the United States, and the world." - Open Letter Re: Academic Researcher Concerns Re: SB-1047 (43 CA academics, 21 non-CA academics + 6 industry)
- "This proposed legislation poses a significant threat to our ability to advance research by imposing burdensome and unrealistic regulations on AI development." - Caltech Personnel and Alumni: Opposition to SB 1047
- "As the representative from Silicon Valley, I have been pushing for thoughtful regulation around artificial intelligence to protect workers and address potential risks including misinformation, deepfakes, and an increase in wealth disparity. I agree wholeheartedly that there is a need for legislation and appreciate the intention behind SB 1047 but am concerned that the bill as currently written would be ineffective, punishing of individual entrepreneurs and small businesses, and hurt California's spirit of innovation." - Representative Ro Khanna (Full Comment)
- "By focusing on hypothetical risks rather than demonstrable risks, the efficacy of this legislation in addressing real societal harms - including those faced by Californians today - is called into question." - Representative Zoe Lofgren, Ranking Member, House Committee on Science, Space, and Technology (Full Comment)
- "It is somewhat unusual for us, as sitting Members of Congress, to provide views on state legislation. However, we have serious concerns about SB 1047...and felt compelled to make those concerns known to California state policymakers...Based on our experience, we are concerned that SB 1047 would create unnecessary risks for California's economy with very little public safety benefit, and because of this, if the bill were to pass the State Assembly in its current form, we would support you vetoing the measure." - Letter to Governor Newsom from Representatives Zoe Lofgren, Anna Eshoo, Ro Khanna, Scott Peters,Tony Cárdenas, Ami Bera, Nanette Barragán and Lou Correa (Full Comment)
- "While we want California to lead in AI in a way that protects consumers, data, intellectual property and more, SB 1047 is more harmful than helpful in that pursuit." - Nancy Pelosi, Speaker Emerita of the United States House of Representatives (Full Comment)