Mandate third-party audits for major AI systems
For (46)
- Gary Marcus (Professor of Psychology and Neural Science) votes For and says: OpenAI has also said, and I agree, “it’s important that efforts like ours submit to independent audits before releasing new systems”, but to my knowledge they have not yet submitted to such audits. They have also said “at some point, it may be import... [Verified source (2023)]
- António Guterres (UN Secretary-General) votes For and says: I would be favorable to the idea that we could have an artificial intelligence agency ... inspired by what the international agency of atomic energy is today. [Verified source]
- Sam Altman (CEO at OpenAI) votes For and says: First, it is vital that AI companies–especially those working on the most powerful models–adhere to an appropriate set of safety requirements, including internal and external testing prior to release and publication of evaluation results. [Verified source]
- Max Tegmark (Physicist, AI Researcher) votes For and says: Black-box audits are insufficient for ensuring that an AI is safe and robust. At a minimum, we need white-box audits where the auditor can look inside the AI and learn about how it reasons. [Verified source]
- California State Legislature (State legislative body) votes For and says: (e) (1) Beginning January 1, 2026, a developer of a covered model shall annually retain a third-party auditor that conducts audits consistent with best practices for auditors to perform an independent audit of compliance with the requirements of this... [Unverified source (2024)]
- New York State Assembly (State legislative chamber) votes For and says: later, a large developer shall annually retain a third party to perform an independent audit of compliance with the requirements of this section. Such third party shall conduct audits consistent with best practices. (c) The third party shall produce... [Unverified source (2025)]
- Accountable Tech (Tech accountability advocacy nonprofit) votes For and says: Today, California Governor Gavin Newsom vetoed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047) — a landmark bill that would have implemented crucial guardrails around AI development, including requiring pre-de... [Unverified source (2024)]
- Jake Laperruque (Center for Democracy & Technology deputy director) votes For and says: Principles for responsible use of AI technologies should be applied broadly across development and deployment. In particular, Government use of AI should be: (1) Built upon proper training data; (2) Subject to independent testing and high performance... [Unverified source (2024)]
- Suresh Venkatasubramanian (Brown CS professor; former OSTP official) votes For and says: The truth is AI systems are not magic: AI is technology. Like any other piece of technology that has benefited us — drugs, cars, planes — AI needs guardrails so we can be protected from the worst failures, while still benefiting from the progress AI ... [Unverified source (2023)]
- Scott Wiener (California state senator) votes For and says: Requiring companies get third-party safety audits by 2028. [...] “As AI technology continues its rapid improvement, it has the potential to provide massive benefits to humanity. We can support that innovation without compromising safety, and SB 1047 a... [Unverified source (2024)]
- Arvind Narayanan (Princeton computer science professor) votes For and says: Let me start with the structural point. Right now, the state of evaluation in AI is like the auto industry before independent safety testing. It’s as if car makers were the only ones evaluating their own products—for crash safety, environmental impac... [Unverified source (2025)]
- Kartik Hosanagar (Wharton professor of technology decisions) votes For and says: Cybersecurity risk provides more than a warning; it also offers an answer. Cybersecurity audits have become a norm for companies today, and the responsibility and liability for cyber risk audits goes all the way up to the board of directors. I believ... [Unverified source (2019)]
- Partnership on AI (AI governance nonprofit, multi-stakeholder) votes For and says: PAI also welcomes the requirement for external evaluations of some GPAI models. Independent assessment of model capabilities and risks is crucial to ensure that evaluators have the broad range of expertise needed to do their job. It is also needed to... [Unverified source]
- Jack Clark (Anthropic cofounder and policy researcher) votes For and says: Frontier AI developers should commit to detailed and independently scrutinized scaling policies. Governments should work on national and international safety standards of AI training and deployment. Governments should require audits of AI systems dur... [Unverified source (2023)]
- New York State Senate (State legislative chamber, New York) votes For and says: 2025-S1169A (ACTIVE) - Summary: Regulates the development and use of certain artificial intelligence systems to prevent algorithmic discrimination; requires independent audits of high risk AI systems; provides for enforcement by the attorney general ... [Unverified source (2025)]
- Electronic Frontier Foundation (nonprofit that fights for your privacy and free speech online) votes For and says: We doubt these algorithmic tools are ready for prime time, and the state of California should not have embraced their use before establishing ways to scrutinize them for bias, fairness, and accuracy. They must also be transparent and open to regular... [Unverified source (2018)]
- Darrell M. West (Brookings technology policy scholar) votes For and says: Since AI remains in the early stages of testing, it is important to have external monitoring of model results. Large language models should be tested through third parties and results made available to the public. That will help people understand how... [Unverified source (2023)]
- Rishi Sunak (Former UK prime minister) votes For and says: Until now the only people testing the safety of new AI models have been the very companies developing it. We shouldn’t rely on them to mark their own homework, as many of them agree. Today we’ve reached a historic agreement, with governments and AI c... [Unverified source (2023)]
- Peter Kyle (UK shadow technology secretary, Labour) votes For and says: We will move from a voluntary code to a statutory code, so that those companies engaging in that kind of research and development have to release all of the test data and tell us what they are testing for, so we can see exactly what is happening and ... [Unverified source (2024)]
- Government of the United Kingdom (United Kingdom central government) votes For and says: Participating countries committed, depending on their circumstances, to the development of appropriate state-led evaluation and safety research while participating companies agreed that they would support the next iteration of their models to undergo... [Unverified source (2023)]
- Thomas P. DiNapoli (New York State Comptroller) votes For and says: Establishing good AI governance will require efforts on several fronts. We need a framework that sets clear boundaries for using AI and guidelines and training that help agencies understand and adhere to these standards. Systems must be tested, and n... [Unverified source (2025)]
- Oren Etzioni (AI2 founder; UW professor) votes For and says: For both business and technical reasons, automatically generated, high-fidelity explanations of most AI decisions are not currently possible. That’s why we should be pushing for the external audit of AI systems responsible for high-stakes decision ma... [Unverified source (2019)]
- Lawrence Lessig (Harvard Law professor) votes For and says: At its core, SB1047 does one small but incredibly important thing: It requires that those developing the most advanced AI models adopt and follow safety protocols—including shutdown protocols—to reduce any risk that their models are stolen or deploye... [Unverified source (2024)]
- AlgorithmWatch (Algorithmic accountability nonprofit) votes For and says: Alongside its provisions on improved transparency and data access for researchers, the Commission’s draft calls for two additional layers of oversight for very large platforms: Audits by independent auditors and auditing powers for regulators. We wel... [Unverified source (2020)]
- Richard Blumenthal (U.S. Senator from Connecticut) votes For and says: Sensible safeguards are not in opposition to innovation. Accountability is not a burden, far from it. They are the foundation of how we can move ahead while protecting public trust. They are how we can lead the world in technology and science, but als... [Unverified source (2023)]
- Rob Bonta (California attorney general) votes For and says: I’m pleased to join a bipartisan coalition of attorneys general to share with the NTIA our recommendations on this critical issue. In the letter, the Attorneys General: Call for independent standards for AI transparency, testing, assessments, and au... [Unverified source (2023)]
- Ed Markey (U.S. senator from Massachusetts) votes For and says: Whether on the Senate floor or around the dining room table, artificial intelligence is the hottest topic of the year. But these complex algorithms have a darker side as well — one that has real consequences for everyday people, especially marginaliz... [Unverified source (2024)]
- Alan B. Davidson (Former NTIA administrator, attorney) votes For and says: We set out to answer an important question: If we want responsible innovation and trustworthy AI, how do we hold AI systems — and the entities and individuals that develop, deploy, and use them — accountable? How do we ensure that they are doing what... [Unverified source (2024)]
- Brad Smith (Microsoft Vice Chair & President) votes For and says: Although developers like Microsoft are addressing these risks through rigorous testing and self-imposed standards, the risks involved are too important, and their scale and potential impacts at present too unknowable, to address them through self-regu... [Unverified source (2023)]
- Center for Research on Foundation Models (CRFM), a Stanford foundation models research center, votes For and says: 6. External auditing. Providers of high-impact foundation models shall provide sufficient model access, as determined by the AI Office, to approved independent third-party auditors. Note: The AI Office should create and maintain a certification proce... [Unverified source (2023)]
- Ada Lovelace Institute (UK AI policy research institute) votes For and says: regularly reviewing and updating guidance to keep pace with technology and strengthen their abilities to oversee new AI capabilities; setting procurement requirements to ensure that foundation models developed by private companies for the public sect... [Unverified source]
- Dewey Murdick (CSET executive director, AI governance) votes For and says: This approach likens AI models to spacecraft venturing into unexplored celestial territories. Think of each new AI model as a spaceship. Before it “launches,” developers need to provide the plans, clear the airspace for launch, etc. And, after launch... [Unverified source (2023)]
- Electronic Privacy Information Center (EPIC), a privacy and civil liberties NGO, votes For and says: NIST should recommend that Congress invest substantial resources in the development and implementation of third-party, independent audits and impact assessments for AI [Unverified source (2024)]
- Yoshua Bengio (AI Pioneer, Turing Award winner) votes For and says: This would require comprehensive evaluation of potential harm through independent audits [Unverified source (2023)]
- Michael Bennet (U.S. Senator from Colorado) votes For and says: AI systems that pose higher risk should undergo regular third-party audits. [Unverified source (2023)]
- Samuel Hammond (Senior Economist, FAI) votes For and says: LLM scaling laws also suggest the computing resources required to train large models are a reasonable proxy for model power and generality. Consistent with our argument for refining the definition of AI systems, the NTIA should thus consider defining... [Unverified source (2023)]
- Alex Bores (New York State Assemblymember) votes For and says: So the RAISE Act asks for companies to do four things. They have to have an SSP. They have to have that SSP audited by a third party, not the government. They actually have to choose their own third party that does that. They have to disclose critica... [Unverified source (2025)]
- Stuart J. Russell (AI Expert and Professor) votes For and says: This committee has discussed ideas such as third-party testing, a new national agency, and an international coordinating body, all of which I support. Here are some more ways to “move fast and fix things”: Eventually, we will develop forms of AI tha... [Unverified source (2023)]
- Consumer Reports (Consumer advocacy nonprofit) votes For and says: Require companies that make tools used in consequential decisions to undergo independent, third-party testing for bias, accuracy and more pre-deployment, and regularly after deployment. Clear and consistent standards should be developed for testing a... [Unverified source (2023)]
- Dario Amodei (Anthropic CEO; formerly at OpenAI) votes For and says: I think we absolutely have to make the testing mandatory, but we also need to be really careful about how we do it. [Unverified source (2024)]
- Bruce Schneier (Security technologist and author) votes For and says: We need to figure out new ways of auditing and reviewing to make sure the AI-type mistakes don't wreck our work. [Unverified source (2024)]
- Helen Toner (AI policy expert and TED speaker) votes For and says: And they should have to let in external AI auditors to scrutinize their work so that the companies aren't just grading their own homework. [Unverified source (2024)]
- Tristan Harris (Center for Humane Technology cofounder) votes For and says: I think in the future, we’re going to need to audit technology products in terms of how they actually stimulate our nervous system. [Unverified source (2019)]
- Center for AI Policy (CAIP), an AI policy nonprofit organization, votes For and says: CAIP's primary recommendation is to implement mandatory third-party national security audits for advanced AI systems. This measure would enable security agencies to understand potential threats better and verify the safety of leading AI technologies. [Verified source (2025)]
- John Hickenlooper (U.S. Senator from Colorado) votes For and says: But we can’t let an industry with so many unknowns and potential harms police itself. We need clear rules that we can rely on to prevent AI’s harms. And while those exact rules are still being discussed, what we do know is that in the long-term we c... [Verified source (2024)]
- National Telecommunications and Information Administration (NTIA), US Commerce Department telecom agency, votes For and says: Audits and other independent evaluations: Federal agencies should require independent audits and regulatory inspections of high-risk AI model classes and systems – such as those that present a high risk of harming rights or safety. [Unverified source (2024)]
Abstain (0)
Against (17)
- Kevin Frazier (Lawfare contributor) votes Against and says: Finally, it’s not clear how well some of the act’s provisions reflect the current state of the AI ecosystem. The act demands that labs hire an independent third party to complete an annual audit of their protocols to ensure compliance with the act. T... [Unverified source (2025)]
- Cato Institute (Libertarian think tank) votes Against and says: Mandated transparency may also not actually improve consumer education around their privacy choices. First, many consumers may grow fatigued and frustrated with the constant pop ups and consents as they do with current cookie pop ups. Second, a gover... [Unverified source (2024)]
- BSA | The Software Alliance (Software industry trade association) votes Against and says: Despite the substantial benefits of internal impact assessments, there is a growing chorus of voices in AI policy discussions advocating for mandatory third-party audits. Supporters argue that external safeguards are needed to promote meaningful tran... [Unverified source (2024)]
- Frontier Model Forum (Frontier AI industry forum) votes Against and says: Third-party assessments can be conducted on frontier models to confirm evaluations or claims on critical safety capabilities and mitigations. In appropriate contexts, these assessments may help to confirm or build confidence in safety claims, add rob... [Unverified source (2025)]
- TechNet (Tech industry trade association) votes Against and says: TechNet believes it is premature to mandate independent third-party auditing of AI systems. Mandating an independent audit before appropriate technical standards and conformity assessment requirements are established could open AI systems to national... [Unverified source (2025)]
- Aodhan Downey (CCIA state policy manager) votes Against and says: We agree with California’s leadership that children’s online safety is of the utmost importance, and our members prioritize advanced tools that reflect that priority. But SB 243 casts too wide a net, applying strict rules to everyday AI tools that we... [Unverified source (2025)]
- Cathy O'Neil (Data scientist and author) votes Against and says: Most of the people that come to us do so for a reason. They’ve been getting in trouble for biased algorithms and they want us to clear their name. But until this NYC law passed, we didn’t have much hope for companies coming to us unless they have to.... [Unverified source (2022)]
- Databricks (Data and AI software company) votes Against and says: The AI audit ecosystem is not mature enough to support mandatory third-party audits. [Unverified source (2024)]
- Shelley Moore Capito (U.S. Senator from West Virginia) votes Against and says: This commonsense bill will allow for a voluntary set of guidelines for AI, which will only help the development of systems that choose to adopt them. I look forward to getting this bill and our AI Research Innovation and Accountability Act passed out... [Unverified source (2024)]
- U.S. Chamber of Commerce (C_TEC), the largest U.S. business federation, votes Against and says: The Chamber recognizes the purpose of certification, audits, and assessments as a means for organizations to utilize repeatable and standard methodologies to help them improve the explainability of an algorithm. [...] Furthermore, there are strong ap... [Unverified source (2023)]
- Securities Industry and Financial Markets Association (SIFMA), a securities industry trade association, votes Against and says: AI audits should focus on risk assessment programs rather than individual AI applications. Auditing each application will result in a misallocation of resources, with too much effort spent on low-risk AI (e.g., spam filters, graphics generation for g... [Unverified source (2023)]
- Heather Curry (BSA senior state advocacy director) votes Against and says: “The Business Software Alliance is concerned by the New York State Senate’s passage of the New York AI Act. BSA is committed to helping policymakers develop legislation to address high-risk uses of AI, but this bill advanced through the New York Sena... [Unverified source (2025)]
- Daniel Castro (Director, Center for Data Innovation) votes Against and says: The third provision requires organizations to have third parties conduct annual audits of their algorithmic decision-making, including to look for disparate-impact risks, and create and retain an audit trail for at least five years that documents eac... [Unverified source (2022)]
- Adam Thierer (R Street Institute senior fellow) votes Against and says: Society cannot wait years or even months for regulators to eventually get around to formally signing off on mandated algorithmic audits or impact assessments [Unverified source (2023)]
- Aswin Prabhakar (Policy analyst, Data Innovation) votes Against and says: Under the current stipulations, these researchers would be obligated to maintain extensive documentation for their model. Further, the language in Article 28b(2(a)) seems to imply that providers of foundation models would be required to contract thir... [Unverified source (2023)]
- Meredith Whittaker (Signal President; AI policy advocate) votes Against and says: While auditing standards and transparency are necessary to answer fundamental questions, they will not address these harms. [Unverified source (2020)]
- European Union (Political and economic union) votes Against and says: 2. For high-risk AI systems referred to in points 2 to 8 of Annex III, providers shall follow the conformity assessment procedure based on internal control as referred to in Annex VI, which does not provide for the involvement of a notified body. ... [Unverified source (2025)]