Require AI labs to publish safety evaluations before deploying frontier models
For (30)
- Miles Kodama, AI policy researcher: "Today, frontier AI developers have no legal obligation to disclose anything about their safety and security protocols to government, let alone to the public. When a company releases a new AI system more powerful than any system before, it is entirely [...]" (Unverified source, 2025)
- Teri Olle, Director, Economic Security California Action: "Including safety and transparency protections recommended by Gov. Newsom's AI commission in SB 53 is an opportunity for California to be on the right side of history and advance commonsense AI regulations while our national leaders dither. In additi[on] [...]" (Unverified source, 2025)
- Geoff Ralston, Founder, Safe AI Fund: "Artificial intelligence is one of the most powerful technologies ever developed, and it's advancing at breakneck speed. Even industry leaders have warned of its potential risks. Ensuring AI is developed safely should not be controversial—it should be [...]" (Unverified source, 2025)
- Economic Security Project Action, policy advocacy organization: "Building on the report's 'trust, but verify' approach, the amended bill requires the largest AI companies to publicly disclose their safety and security protocols and report the most critical safety incidents to the California Attorney General. [...]" (Unverified source, 2025)
- Steven Adler, AI safety researcher; Lawfare writer: "Before a new model's release, AI companies commonly (though not always) run safety tests, and release the results in a 'System Card.' The idea is to see if the model has any extreme abilities (like strong cyberhacking), and then to take an appropri[ate] [...]" (Unverified source, 2025)
- Mark Reddish, Center for AI Policy researcher: "Requiring companies to publish risk management frameworks does not mean disclosing confidential or proprietary information that could compromise business interests or national security." (Unverified source, 2025)
- Miranda Bogen, Director, AI Governance Lab (CDT)
- OpenAI, AI research organization: "The Safety Hub provides public access to safety evaluation results for our models. [...] evaluations of models under our Preparedness Framework prior to their deployment." (Unverified source, 2025)
- Chris Lehane, OpenAI chief global affairs officer: "We also publicly share safety evaluations through our Preparedness Framework, which sets out how we measure, monitor, and mitigate large-scale risks." (Unverified source, 2025)
- Thomas Woodside, Secure AI Project co-founder: "Those updates should include the results of evaluations for models that haven't been publicly deployed yet, since those models could also pose serious risks." (Unverified source, 2025)
- Dean W. Ball, AI policy writer: "There is much to recommend these laws over the Nevada and Illinois bills I discussed above. Unlike those laws, SB 53 and RAISE are technically sophisticated, reflecting a clear understanding (for the most part) about what it is possible for AI develo[pers] [...]" (Unverified source, 2025)
- Rebecca Bauer-Kahan, California State Assembly member: "That is fundamentally the basis of this bill: they will be defining their own safety protocols. They will be making those public, and then they will have to follow them." (Unverified source, 2025)
- Office of Governor Kathy Hochul, New York Governor's press office: "Governor Kathy Hochul today signed legislation to require AI frameworks for frontier AI models, setting a nation-leading standard for AI transparency and safety. The agreed-upon chapter amendments to the RAISE Act (S6953B/A6453B) require large AI de[velopers] [...]" (Unverified source, 2025)
- Office of Governor Gavin Newsom, California Governor's Office: "SB 53 establishes new requirements for frontier AI developers creating stronger: ✅ Transparency: Requires large frontier developers to publicly publish a framework on its website describing how the company has incorporated national standards, intern[ational] [...]" (Unverified source, 2025)
- New York State Senate, state legislative chamber: "The RAISE Act requires [...] frontier AI developers to write, implement, publish, and comply with plans [...] including how they assess the safety risks of their models." (Unverified source, 2025)
- Partnership on AI, multi-stakeholder AI governance nonprofit: "Disclose details such as testing methodologies, evaluation criteria, results, limitations, and gaps for any internal and external evaluations conducted prior to release." (Unverified source)
- Ben Brooks, Head of public policy, Stability AI: "If necessary, we could require frontier developers to obtain third-party evaluations prior to release, share their findings [...]." (Unverified source)
- Alex Bores, New York State Assemblymember: "[Y]ou design your own safety protocols ahead of time [...] if your models fail your own tests, we want to know about it." (Unverified source, 2025)
- Andrew Gounardes, New York State Senator: "It requires companies [...] to create and share [...] protocols before they deploy these models publicly. And then it requires [...] to complete an annual assessment [...]." (Unverified source, 2025)
- Encode AI, AI policy advocacy nonprofit: "They will require these companies to publish important information regarding their safety protocols and risk evaluations, and report major safety incidents to the Attorney General." (Unverified source, 2025)
- Dario Amodei, CEO of Anthropic: "He also noted the standard should require AI developers to adopt policies for testing models and publicly disclose them [...]" (Unverified source, 2026)
- Anthropic, AI safety research company: "Release public transparency reports summarizing their catastrophic risk assessments and the steps taken to fulfill their respective frameworks before deploying powerful new models." (Unverified source, 2025)
- Department for Science, Innovation and Technology, UK government science department: "Some of this information can be made available to the public by publishing a transparency report (such as a model card) and providing general overviews of model purpose and risk assessment evaluation results." (Unverified source)
- Peter Kyle, UK shadow technology secretary (Labour): "[Companies] have to release all of the test data and tell us what they are testing for, so we can see exactly what is happening and where this technology is taking us." (Unverified source, 2024)
- Richard Blumenthal, U.S. Senator from Connecticut: "AI companies ought to be required to test their systems, disclose known risks, and allow independent researcher access." (Unverified source, 2023)
- Sam Altman, CEO of OpenAI: "First, it is vital that AI companies–especially those working on the most powerful models–adhere to [...] testing prior to release and publication of evaluation results." (Unverified source, 2023)
- Andreas Stuhlmüller, CEO of Elicit: "SB 53's requirements for safety protocols and transparency reports are exactly what we need as AI becomes more powerful and widespread." (Unverified source, 2025)
- Jack Titus, AI policy writer at FAS: "For models with risk above the lowest level, both pre- and post-mitigation evaluation results and methods should be public, including any performed mitigations." (Unverified source, 2024)
- Office of Senator Scott Wiener, official office of the California State Senator: "[...] help California develop workable guardrails for deploying GenAI [...] Companies will be required to publish their safety and security protocols and risk evaluations." (Unverified source, 2025)
- California State Legislature, state legislative body: "Before, or concurrently with, deploying a new frontier model [...] publish on its internet website a transparency report [...] summaries of [...] assessments of catastrophic risks." (Unverified source, 2025)
Abstain (0)
Against (5)
- Neil Chilson, AI policy head, Abundance Institute: "Those compliance costs are merely the beginning. The bill, if passed, would feed California regulators truckloads of company information that they will use to design a compliance industrial complex." (Unverified source, 2025)
- Chamber of Progress, tech industry policy association: "[W]e still have lingering concerns about how the bill [...] fails to protect trade secrets necessary to maintain competitiveness." (Unverified source, 2025)
- Paul Lekas, SIIA public policy executive: "These will require companies to publish detailed information and reports that could expose trade secrets and other sensitive information." (Unverified source, 2025)
- Frontier Model Forum, frontier AI industry forum: "[I]f the evaluation results indicate that a particular model has an exploitable vulnerability that may lead to a significant increase in biorisks, this information should not be published." (Unverified source, 2025)
- Aden Hizkias, Policy Manager, Chamber of Progress: "Chamber of Progress [...] opposes SB 53 [...] Developers must formally publish detailed security protocols and transparency reports at or before the time of deployment." (Unverified source, 2025)