Require AI labs to publish safety evaluations before deploying frontier models
For (8)
- Department for Science, Innovation and Technology (UK government science department) votes For and says: "Some of this information can be made available to the public by publishing a transparency report (such as a model card) and providing general overviews of model purpose and risk assessment evaluation results." (Unverified source)
- Sam Altman (CEO at OpenAI) votes For and says: "First, it is vital that AI companies–especially those working on the most powerful models–adhere to [...] testing prior to release and publication of evaluation results." (Unverified source, 2023)
- Richard Blumenthal (U.S. Senator from Connecticut) votes For and says: "AI companies ought to be required to test their systems, disclose known risks, and allow independent researcher access." (Unverified source, 2023)
- Peter Kyle (UK shadow technology secretary, Labour) votes For and says: "[Companies] have to release all of the test data and tell us what they are testing for, so we can see exactly what is happening and where this technology is taking us." (Unverified source, 2024)
- Office of Senator Scott Wiener (official office of the CA State Senator) votes For and says: "help California develop workable guardrails for deploying GenAI [...] Companies will be required to publish their safety and security protocols and risk evaluations" (Unverified source, 2025)
- Jack Titus (AI policy writer at FAS) votes For and says: "For models with risk above the lowest level, both pre- and post-mitigation evaluation results and methods should be public, including any performed mitigations." (Unverified source, 2024)
- Andreas Stuhlmüller (CEO of Elicit) votes For and says: "SB53's requirements for safety protocols and transparency reports are exactly what we need as AI becomes more powerful and widespread." (Unverified source, 2025)
- California State Legislature (state legislative body) votes For and says: "Before, or concurrently with, deploying a new frontier model [...] publish on its internet website a transparency report [...] summaries of [...] assessments of catastrophic risks." (Unverified source, 2025)
Abstain (0)
Against (2)
- Frontier Model Forum (frontier AI industry forum) votes Against and says: "[I]f the evaluation results indicate that a particular model has an exploitable vulnerability that may lead to a significant increase in biorisks, this information should not be published." (Unverified source, 2025)
- Aden Hizkias (Policy Manager, Chamber of Progress) votes Against and says: "Chamber of Progress [...] opposes SB 53 [...] Developers must formally publish detailed security protocols and transparency reports at or before the time of deployment." (Unverified source, 2025)