Should third-party audits be mandatory for major AI systems?
Quotes (29):
- Gary Marcus, Professor of Psychology and Neural Science, agrees and says: OpenAI has also said, and I agree, “it’s important that efforts like ours submit to independent audits before releasing new systems”, but to my knowledge they have not yet submitted to such audits. They have also said “at some point, it may be important to get independent review before starting to train future systems”. But again, they have not submitted to any such advance reviews so far. We have to stop letting them set all the rules. AI is moving incredibly fast, with lots of potential — but also lots of risks. We obviously need government involved. We need the tech companies involved, big and small. But we also need independent scientists. Not just so that we scientists can have a voice, but so that we can participate, directly, in addressing the problems and evaluating solutions. And not just after products are released, but before. We need tight collaboration between independent scientists and governments in order to hold the companies’ feet to the fire. Allowing independent scientists access to these systems before they are widely released – as part of a clinical trial-like safety evaluation – is a vital first step. (2023) [source: verified]
- António Guterres, UN Secretary-General, agrees and says: I would be favorable to the idea that we could have an artificial intelligence agency ... inspired by what the international agency of atomic energy is today. [source: verified]
- Sam Altman, President of Y Combinator, investor at Reddit, Stripe, Change.org, Pinterest and many others, strongly agrees and says: First, it is vital that AI companies – especially those working on the most powerful models – adhere to an appropriate set of safety requirements, including internal and external testing prior to release and publication of evaluation results. [source: verified]
- Max Tegmark, physicist and AI researcher, strongly agrees and says: Black-box audits are insufficient for ensuring that an AI is safe and robust. At a minimum, we need white-box audits where the auditor can look inside the AI and learn about how it reasons. [source: verified]
- Brad Smith, Microsoft Vice Chair & President, strongly agrees and says: Although developers like Microsoft are addressing these risks through rigorous testing and self-imposed standards, the risks involved are too important, and their scale and potential impacts at present too unknowable, to address them through self-regulation alone. We therefore think it is appropriate for Congress to consider legislation that would impose a licensing regime onto developers of this discrete class of highly capable, frontier AI models, and we are pleased to see that the Blumenthal-Hawley regulatory framework seeks to establish such a regime. Although the details of this licensing regime again would benefit from further thought and discussion, and there are critical consequences and details to deeply consider, such as the impact to open source models and the importance of continuing to foster an innovative open source ecosystem, we think it should seek to serve three key goals: First and foremost, any licensing regime must ensure that the development and deployment of highly capable AI models achieve defined safety and security objectives. In concrete terms, this may require licensees of these models, among other things, to engage in the pre-deployment testing that the Blumenthal-Hawley regulatory framework proposes. We agree that highly capable models may need to undertake extensive prerelease testing by internal and external experts. In addition, a licensing regime may require developers of highly capable models to provide advance notification of large training runs; engage in comprehensive risk assessments focused on identifying dangerous or breakthrough capabilities; and implement multiple other checkpoints along the way. Second, it must establish a framework for close coordination and information sharing between licensees and regulators, to ensure that developments material to the achievement of safety and security objectives are shared and acted on in a timely fashion. The Blumenthal-Hawley framework provides that an independent oversight body not only conducts audits but also monitors technological developments, which may be best accomplished in partnership with licensees. (2023) [source: unverified]
- Dewey Murdick, CSET executive director (AI governance), strongly agrees and says: This approach likens AI models to spacecraft venturing into unexplored celestial territories. Think of each new AI model as a spaceship. Before it “launches,” developers need to provide the plans, clear the airspace for launch, etc. And, after launch, just like NASA keeps us posted on space missions, developers should regularly update us on how their AI models are doing and what they are discovering. Assessment: Require independent third party audits when specific risk criteria are met for new capabilities. This gives governments the authority to request deeper evaluation, especially for powerful proprietary models that are otherwise subject to IP protection. Risk criteria for triggering these assessments will need to be updated periodically based on model advances, risk mitigation techniques and experience. (2023) [source: unverified]
- Heather Curry, BSA senior state advocacy director, strongly disagrees and says: “The Business Software Alliance is concerned by the New York State Senate’s passage of the New York AI Act. BSA is committed to helping policymakers develop legislation to address high-risk uses of AI, but this bill advanced through the New York Senate in a rushed process that failed to take stakeholder feedback — favorable or unfavorable — into account,” BSA senior director of state advocacy Heather Curry said in a June 13 statement. “This legislation, which would go far beyond what has been enacted in California or the European Union, is not ready for serious consideration by the Assembly,” Curry said. “It conflates the roles of different actors along the AI value chain, holding companies legally responsible for actions that may be taken by others. It also establishes an extensive and unworkable third-party audit regime and fragmented enforcement through private lawsuits.” “In a similar vein,” she said, “while the RAISE Act is intended to address worthwhile considerations around AI safety for large frontier models, it relies on a vague and unworkable incident reporting scheme. The bill would also undermine safety protections for frontier models by requiring developers of those models to publish their safety protocols — creating a roadmap for bad actors to exploit.” (2025) [source: unverified]
- Daniel Castro, director of the Center for Data Innovation, disagrees and says: The third provision requires organizations to have third parties conduct annual audits of their algorithmic decision-making, including to look for disparate-impact risks, and create and retain an audit trail for at least five years that documents each type of algorithmic decision-making process, the data used in that process, the data used to train the algorithm, and any test results from evaluating the algorithm as well as the methodology used to test it. Organizations must also provide a detailed report of this information to the DC attorney general’s office. This provision places an enormous and burdensome auditing responsibility not only on those organizations using algorithms for decision-making, but also on service providers who may offer such functionality to others. Many of the auditing requirements would be inappropriate to require service providers to report since they will not necessarily have details about how a particular customer uses their service. Moreover, many businesses and service providers are struggling to comply with the algorithm auditing requirements in New York City, which only apply to AI systems used in hiring. The audit requirements in the proposed Act would apply to a much broader set of activities and present even more challenges. (2022) [source: unverified]
- Center for Research on Foundation Models (CRFM), Stanford foundation models research center, strongly agrees and says: 6. External auditing. Providers of high-impact foundation models shall provide sufficient model access, as determined by the AI Office, to approved independent third-party auditors. Note: The AI Office should create and maintain a certification process for determining if auditors are sufficiently independent and qualified to audit high-impact foundation models. (2023) [source: unverified]
- Ada Lovelace Institute, UK AI policy research institute, strongly agrees and says: regularly reviewing and updating guidance to keep pace with technology and strengthen their abilities to oversee new AI capabilities; setting procurement requirements to ensure that foundation models developed by private companies for the public sector uphold public standards; mandating independent third-party audits for all foundation models used in the public sector, whether developed in-house or externally procured. [source: unverified]
- Shelley Moore Capito, U.S. Senator from West Virginia, disagrees.
- Adam Thierer, R Street Institute senior fellow, strongly disagrees and says: Society cannot wait years or even months for regulators to eventually get around to formally signing off on mandated algorithmic audits or impact assessments. (2023) [source: unverified]
- Electronic Privacy Information Center (EPIC), privacy and civil liberties NGO, agrees and says: NIST should recommend that Congress invest substantial resources in the development and implementation of third-party, independent audits and impact assessments for AI. (2024) [source: unverified]
- Michael Bennet, U.S. Senator from Colorado, strongly agrees and says: AI systems that pose higher risk should undergo regular third-party audits. (2023) [source: unverified]
- Yoshua Bengio, AI pioneer and Turing Award winner, strongly agrees and says: This would require comprehensive evaluation of potential harm through independent audits. (2023) [source: unverified]
- Aswin Prabhakar, Data Innovation policy analyst, strongly disagrees and says: Under the current stipulations, these researchers would be obligated to maintain extensive documentation for their model. Further, the language in Article 28b(2(a)) seems to imply that providers of foundation models would be required to contract third-party specialists to audit their systems, which would be costly. However, unlike closed-source AI systems, open-source systems already permit independent experts to scrutinize them by the very nature that they are openly available, making this requirement an unnecessarily costly requirement for open-source AI. As it stands, the EU AI Act would stifle open-source development of AI models, which would, in turn, hinder innovation and competition within the AI industry. Many open-source AI innovations have found their way into several commercial applications. Indeed, more than 50,000 organizations use the open-source models on HuggingFace’s platform. The EU AI Act, in its current form, risks creating a regulatory environment that is not only burdensome and inappropriate for open-source AI developers but also counterproductive to the broader goals of fostering innovation, transparency, and competition in the AI sector. As the EU’s ongoing negotiations over the AI Act continue, particularly around the regulation of foundation models, policymakers need to adequately address these issues. (2023) [source: unverified]
- Consumer Reports, consumer advocacy nonprofit, strongly agrees and says: Require companies that make tools used in consequential decisions to undergo independent, third-party testing for bias, accuracy and more pre-deployment, and regularly after deployment. Clear and consistent standards should be developed for testing and third-party auditors should be regulated by relevant regulators or bodies. (2023) [source: unverified]
- Alex Bores, New York State Assemblymember, strongly agrees and says: So the RAISE Act asks for companies to do four things. They have to have an SSP. They have to have that SSP audited by a third party, not the government. They actually have to choose their own third party that does that. They have to disclose critical safety incidents, and we define that very specifically in the bill as to what would be something that would have to be disclosed. And they have to not terminate employees or contractors that raise critical risk. It applies only to large companies, so those that have spent $100 million or more in training frontier models. It exempts academia entirely. Obviously, that $100 million threshold, I think, exempts what any startup is currently building in terms of — again, that is just on compute and just on training. And it focuses on making sure that the companies are setting a baseline standard ahead of time through their SSP and not changing it after the fact. (2025) [source: unverified]
- Samuel Hammond, Senior Economist, FAI, strongly agrees and says: LLM scaling laws also suggest the computing resources required to train large models are a reasonable proxy for model power and generality. Consistent with our argument for refining the definition of AI systems, the NTIA should thus consider defining a special regulatory threshold based on the computing cost needed to match or surpass the performance of GPT-4 across a robust set of benchmarks. Theoretical insights and hardware improvements that reduce the computing resources needed to match or surpass GPT-4 would require the threshold to be periodically updated. In the meantime, the handful of companies capable of training models that exceed this threshold should have to disclose their intent to do so and submit to external audits that cover both model alignment and operational security. Compute thresholds will eventually cease to be a useful proxy for model capability as training costs continue to fall. Nevertheless, the next five years are likely to be an inflection point in the race to build TAI. The NTIA should thus not shy away from developing a framework that is planned to obsolesce but which may still be useful for triaging AI accountability initiatives in the near term. (2023) [source: unverified]
- Stuart J. Russell, AI expert and professor, agrees and says: This committee has discussed ideas such as third-party testing, a new national agency, and an international coordinating body, all of which I support. Here are some more ways to “move fast and fix things”: Eventually, we will develop forms of AI that are provably safe and beneficial, which can then be mandated. Until then, we need real regulation and a pervasive culture of safety. (2023) [source: unverified]
- Tristan Harris, Center for Humane Technology co-founder, agrees and says: I think in the future, we’re going to need to audit technology products in terms of how they actually stimulate our nervous system. (2019) [source: unverified]
- Bruce Schneier, security technologist and author, agrees and says: We need to figure out new ways of auditing and reviewing to make sure the AI-type mistakes don't wreck our work. (2024) [source: unverified]
- Meredith Whittaker, Signal President and AI policy advocate, disagrees and says: While auditing standards and transparency are necessary to answer fundamental questions, they will not address these harms. (2020) [source: unverified]
- Dario Amodei, Research Scientist at OpenAI, agrees and says: I think we absolutely have to make the testing mandatory, but we also need to be really careful about how we do it. (2024) [source: unverified]
- Helen Toner, AI policy expert and TED speaker, strongly agrees and says: And they should have to let in external AI auditors to scrutinize their work so that the companies aren't just grading their own homework. (2024) [source: unverified]
- Center for AI Policy (CAIP), AI policy nonprofit organization, strongly agrees and says: CAIP's primary recommendation is to implement mandatory third-party national security audits for advanced AI systems. This measure would enable security agencies to understand potential threats better and verify the safety of leading AI technologies. (2025) [source: verified]
- John Hickenlooper, U.S. Senator from Colorado, strongly agrees and says: But we can’t let an industry with so many unknowns and potential harms police itself. We need clear rules that we can rely on to prevent AI’s harms. And while those exact rules are still being discussed, what we do know is that in the long-term we cannot rely on self-reporting alone from AI companies on compliance. We should build trust, but verify. We need qualified third parties to effectively audit Generative AI systems and verify their claims of compliance with federal laws and regulations. How we get there starts with establishing criteria and a path to certification for third-party auditors. (2024) [source: verified]
- European Union, political and economic union, disagrees and says: 2. For high-risk AI systems referred to in points 2 to 8 of Annex III, providers shall follow the conformity assessment procedure based on internal control as referred to in Annex VI, which does not provide for the involvement of a notified body. 3. For high-risk AI systems covered by the Union harmonisation legislation listed in Section A of Annex I, the provider shall follow the relevant conformity assessment procedure as required under those legal acts. The requirements set out in Section 2 of this Chapter shall apply to those high-risk AI systems and shall be part of that assessment. Points 4.3., 4.4., 4.5. and the fifth paragraph of point 4.6 of Annex VII shall also apply. For the purposes of that assessment, notified bodies which have been notified under those legal acts shall be entitled to control the conformity of the high-risk AI systems with the requirements set out in Section 2, provided that the compliance of those notified bodies with requirements laid down in Article 31(4), (5), (10) and (11) has been assessed in the context of the notification procedure under those legal acts. (2025) [source: unverified]
- National Telecommunications and Information Administration (NTIA), US Commerce Department telecom agency, strongly agrees and says: Audits and other independent evaluations: Federal agencies should require independent audits and regulatory inspections of high-risk AI model classes and systems – such as those that present a high risk of harming rights or safety. (2024) [source: unverified]