
Should third-party audits be mandatory for major AI systems?

Results (62): Quotes (58)
  • agrees and says:
    OpenAI has also said, and I agree, “it’s important that efforts like ours submit to independent audits before releasing new systems”, but to my knowledge they have not yet submitted to such audits. They have also said “at some point, it may be important to get independent review before starting to train future systems”. But again, they have not submitted to any such advance reviews so far. We have to stop letting them set all the rules. AI is moving incredibly fast, with lots of potential — but also lots of risks. We obviously need government involved. We need the tech companies involved, big and small. But we also need independent scientists. Not just so that we scientists can have a voice, but so that we can participate, directly, in addressing the problems and evaluating solutions. And not just after products are released, but before. We need tight collaboration between independent scientists and governments—in order to hold the companies’ feet to the fire. Allowing independent scientists access to these systems before they are widely released – as part of a clinical trial-like safety evaluation - is a vital first step. (2023) source Verified
  • agrees and says:
    I would be favorable to the idea that we could have an artificial intelligence agency ... inspired by what the International Atomic Energy Agency is today. source Verified
  • strongly agrees and says:
    First, it is vital that AI companies–especially those working on the most powerful models–adhere to an appropriate set of safety requirements, including internal and external testing prior to release and publication of evaluation results. source Verified
  • strongly agrees and says:
    Black-box audits are insufficient for ensuring that an AI is safe and robust. At a minimum, we need white-box audits where the auditor can look inside the AI and learn about how it reasons. source Verified
  • agrees and says:
    Let me start with the structural point. Right now, the state of evaluation in AI is like the auto industry before independent safety testing. It’s as if car makers were the only ones evaluating their own products—for crash safety, environmental impact, and so on. Exactly. I think we need a robust, independent third-party evaluation system. We—and many others—have been trying to build that. So that’s one structural change that would help: changing how evaluations are done. (2025) source Unverified
  • agrees and says:
    The truth is AI systems are not magic: AI is technology. Like any other piece of technology that has benefited us — drugs, cars, planes — AI needs guardrails so we can be protected from the worst failures, while still benefiting from the progress AI offers. Congress should enshrine these ideas in legislation not just for government use of AI, but for private sector uses of AI that have people-facing impact. (2023) source Unverified
  • agrees and says:
    PAI also welcomes the requirement for external evaluations of some GPAI models. Independent assessment of model capabilities and risks is crucial to ensure that evaluators have the broad range of expertise needed to do their job. It is also needed to build wider trust that evaluation outcomes are objective. Independent evaluations are a critical plank of a vibrant AI assurance ecosystem. PAI launched a policy research project at the AI Action Summit in France earlier this year to address the core factors needed to build out an assurance ecosystem to create justified trust in AI models and systems. In future iterations of the Code, we would like to see more detailed guidance about external evaluations both pre- and post-deployment, including robust safe harbor provisions for evaluators. source Unverified
  • strongly agrees and says:
    Requiring companies get third-party safety audits by 2028.[...] “As AI technology continues its rapid improvement, it has the potential to provide massive benefits to humanity. We can support that innovation without compromising safety, and SB 1047 aims to do just that,” said Senator Wiener. (2024) source Unverified
  • agrees and says:
    Cybersecurity risk provides more than a warning; it also offers an answer. Cybersecurity audits have become a norm for companies today, and the responsibility and liability for cyber risk audits goes all the way up to the board of directors. I believe companies using AI models for socially or financially consequential decisions need similar audits as well, and I am not alone. The Algorithmic Accountability Act, proposed by Democratic lawmakers this past Spring, would, if passed, require that large companies formally evaluate their “high-risk automated decision systems” for accuracy and fairness. EU’s GDPR audit process, while mostly focused on regulating the processing of personal data by companies, also covers some aspects of AI such as a consumer’s right to explanation when companies use algorithms to make automated decisions. While the scope of the right to explanation is relatively narrow, the Information Commissioner’s Office (ICO) in the U.K. has recently invited comments for a proposed AI auditing framework that is much broader in scope. But forward-thinking companies should not wait for regulation. High-profile AI failures will reduce consumer trust and only serve to increase future regulatory burdens. These are best avoided through proactive measures today. (2019) source Unverified
  • strongly agrees and says:
    2025-S1169A (ACTIVE) - Summary: Regulates the development and use of certain artificial intelligence systems to prevent algorithmic discrimination; requires independent audits of high risk AI systems; provides for enforcement by the attorney general as well as a private right of action. (2025) source Unverified
  • disagrees and says:
    Despite the substantial benefits of internal impact assessments, there is a growing chorus of voices in AI policy discussions advocating for mandatory third-party audits. Supporters argue that external safeguards are needed to promote meaningful transparency and accountability of companies developing and using AI. Policymakers have acknowledged these concerns and introduced legislation with third-party audit requirements, including in Congress, California, and Canada. In some cases, companies may decide that investing the time, energy, and resources necessary to undergo a voluntary third-party AI audit that shows compliance with international standards will help meet important business goals or provide a commercial advantage. In the AI context, however, there are several reasons why mandated third-party audits may not be workable now: (2024) source Unverified
  • disagrees and says:
    Third-party assessments can be conducted on frontier models to confirm evaluations or claims on critical safety capabilities and mitigations. In appropriate contexts, these assessments may help to confirm or build confidence in safety claims, add robust methodological independence, and supplement expertise. This report outlines practices and approaches among Frontier Model Forum (FMF) firms for implementing, where appropriate, rigorous, secure, and fit-for-purpose third-party assessments. Third-party assessments can complement – but do not replace – internal safety processes and corporate governance mechanisms. Internal teams possess deep knowledge of their systems and development processes, and external assessors can offer methodological independence, specialized expertise, and fresh perspectives that may be able to identify issues or validate critical safety claims. These assessments become particularly valuable for models that are approaching, or have reached, enabling capability thresholds, or when implementing novel frontier mitigations. (2025) source Unverified
  • disagrees and says:
    Mandated transparency may also not actually improve consumer education around their privacy choices. First, many consumers may grow fatigued and frustrated with the constant pop ups and consents as they do with current cookie pop ups. Second, a government mandate — as opposed to an organic best practice — is more likely to communicate to consumers in ways that are not seamless with products and in terms they see fit for their audience by focusing on compliance instead. Finally, mandated transparency in AI is particularly tricky. Often, the ways data is used are of key value to the company. Government transparency mandates could require the disclosure of intellectual property or other competitively valuable information and concerningly set up platforms for pressure from the government to take or not take certain actions. Many companies will likely provide transparency in response to consumers’ expectations and demands. Those who do not may face industry or consumer pressure to do so. Such an approach allows for more flexibility than a regulatory mandate would and raises less concerns about the potential tradeoffs. (2024) source Unverified
  • agrees and says:
    Frontier AI developers should commit to detailed and independently scrutinized scaling policies. Governments should work on national and international safety standards of AI training and deployment. Governments should require audits of AI systems during training. Governments should monitor large compute clusters. Governments may want to establish a licensing system for powerful AI systems, and should empower regulators to pause the further development of an AI system, and should mandate access controls for such frontier AI systems. (2023) source Unverified
  • strongly agrees and says:
    We doubt these algorithmic tools are ready for prime time, and the state of California should not have embraced their use before establishing ways to scrutinize them for bias, fairness, and accuracy. They must also be transparent and open to regular independent audits and future correction. [...] The public must have access to the source code and the materials used to develop these tools, and the results of regular independent audits of the system, to ensure tools are not unfairly detaining innocent people or disproportionately affecting specific classes of people. (2018) source Unverified
  • disagrees and says:
    We agree with California’s leadership that children’s online safety is of the utmost importance, and our members prioritize advanced tools that reflect that priority. But SB 243 casts too wide a net, applying strict rules to everyday AI tools that were never intended to act like human companions. Requiring repeated notices, age verification, and audits would impose significant costs without providing meaningful new protections. (2025) source Unverified
  • disagrees and says:
    Most of the people that come to us do so for a reason. They’ve been getting in trouble for biased algorithms and they want us to clear their name. But until this NYC law passed, we didn’t have much hope for companies coming to us unless they have to. It’s really not necessary in many cases. Hiring, credit, insurance and housing are already highly regulated industries with a lot of existing anti-discrimination law. It’s not necessary to create new laws, it’s just necessary for the regulators in question to decide to enforce those laws. (2022) source Unverified
  • disagrees and says:
    TechNet believes it is premature to mandate independent third-party auditing of AI systems. Mandating an independent audit before appropriate technical standards and conformity assessment requirements are established could open AI systems to national security threats, trade secrets theft, and inaccurate audit reports. We believe AI auditing standards, ethics, or oversight rules must consider the use-case-specific auditing needs, calibrated to the risk of the specific use case, set to measurable benchmarks, and ensure safe and ethical practices to promote continued innovation while also protecting intellectual property, trade secrets, and security. Any transparency, explainability, or audit requirements imposed on AI systems must account for protecting personal information, and carefully balance the proprietary and trade secret protections regarding the AI system and the technical feasibility of implementing such requirements. It must also not jeopardize the safety systems of AI-driven services. (2025) source Unverified
  • agrees and says:
    Since AI remains in the early stages of testing, it is important to have external monitoring of model results. Large language models should be tested through third parties and results made available to the public. That will help people understand how AI is performing and which applications may be especially problematic. (2023) source Unverified
  • agrees and says:
    Participating countries committed, depending on their circumstances, to the development of appropriate state-led evaluation and safety research while participating companies agreed that they would support the next iteration of their models to undergo appropriate independent evaluation and testing. Multiple participants suggested that existing voluntary commitments would need to be put on a legal or regulatory footing in due course. It was suggested that there might be certain circumstances in which governments should apply the principle that models must be proven to be safe before they are deployed, with a presumption that they are otherwise dangerous. (2023) source Unverified
  • agrees and says:
    Until now the only people testing the safety of new AI models have been the very companies developing it. We shouldn’t rely on them to mark their own homework, as many of them agree. Today we’ve reached a historic agreement, with governments and AI companies working together to test the safety of their models before and after they are released. (2023) source Unverified
  • strongly agrees and says:
    We will move from a voluntary code to a statutory code, so that those companies engaging in that kind of research and development have to release all of the test data and tell us what they are testing for, so we can see exactly what is happening and where this technology is taking us.[...] The results of the tests would help the newly established UK AI Safety Institute reassure the public that independently, we are scrutinising what is happening in some of the real cutting-edge parts of … artificial intelligence. (2024) source Unverified
  • disagrees and says:
    This commonsense bill will allow for a voluntary set of guidelines for AI, which will only help the development of systems that choose to adopt them. I look forward to getting this bill and our AI Research Innovation and Accountability Act passed out of the Commerce Committee soon. (2024) source Unverified
  • strongly disagrees and says:
    The AI audit ecosystem is not mature enough to support mandatory third-party audits. (2024) source Unverified
  • strongly agrees and says:
    For both business and technical reasons, automatically generated, high-fidelity explanations of most AI decisions are not currently possible. That’s why we should be pushing for the external audit of AI systems responsible for high-stakes decision making. Automated auditing, at a massive scale, can systematically probe AI systems and uncover biases or other undesirable behavior patterns. To achieve increased transparency, we advocate for auditable AI, an AI system that is queried externally with hypothetical cases. […] Having a neutral third-party investigate these questions is a far better check on bias than explanations controlled by the algorithm’s creator. […] Instead of requiring AI systems to provide low-fidelity explanations, regulators can insist that AI systems used for high-stakes decisions provide auditing interfaces. (2019) source Unverified
  • agrees and says:
    Establishing good AI governance will require efforts on several fronts. We need a framework that sets clear boundaries for using AI and guidelines and training that help agencies understand and adhere to these standards. Systems must be tested, and not just by the vendor selling the technology. Testing and oversight have to be continuous as AI evolves. Regular and independent audits will verify that agencies are living up to standards, checking for vulnerabilities and using what they learn to drive improvements. My office will look at the AI systems used by state agencies to see if they are working and verify that vendors are playing by the rules. (2025) source Unverified
  • strongly agrees and says:
    At its core, SB1047 does one small but incredibly important thing: It requires that those developing the most advanced AI models adopt and follow safety protocols—including shutdown protocols—to reduce any risk that their models are stolen or deployed in a way that causes “critical harm.” The problem for tech companies is that the law builds in mechanisms to ensure that the protocols are sufficiently robust and actually enforced. The law would eventually require outside auditors to review the protocols, and from the start, it would protect whistleblowers within firms who come forward to show that protocols are not being followed. The law thus makes real what the companies say they are already doing. But if they’re already creating these safety protocols, why do we need a law to mandate it? First, because, as some within the industry assert directly, existing guidelines are often inadequate, and second, as whistleblowers have already revealed, some companies are not following the protocols that they have adopted. Opposition to SB1047 is thus designed to ensure that safety is optional—something they can promise but that they have no effective obligation to deliver. (2024) source Unverified
  • agrees and says:
    Sensible safeguards are not in opposition to innovation. Accountability is not a burden, far from it. They are the foundation of how we can move ahead while protecting public trust. They are how we can lead the world in technology and science, but also in promoting our democratic values. We can start with transparency. AI companies ought to be required to test their systems, disclose known risks, and allow independent researcher access. We can establish scorecards and nutrition labels to encourage competition based on safety and trustworthiness, limitations on use. (2023) source Unverified
  • agrees and says:
    Alongside its provisions on improved transparency and data access for researchers, the Commission’s draft calls for two additional layers of oversight for very large platforms: Audits by independent auditors and auditing powers for regulators. We welcome the Commission’s approach and appreciate its proposal for such a comprehensive auditing and enforcement regime. According to the proposal, large platforms will undergo mandatory audits by independent auditors that have “expertise in the area of risk management,” as well as “technical competence to audit algorithms.” These auditors should ensure that large platforms are in compliance with a long list of obligations, ranging from rules on recommender system transparency to its updated notice and action rules. (2020) source Unverified
  • agrees and says:
    I’m pleased to join a bipartisan coalition of attorneys general to share with the NTIA our recommendations on this critical issue. In the letter, the Attorneys General: Call for independent standards for AI transparency, testing, assessments, and audits, [...] and by ensuring that companies regularly commit to third-party auditing of its AI systems; [...] Highlight the need for AI legislation that both fosters innovation and protects consumers; and [...] Stress that state attorneys general should have concurrent enforcement authority in any Federal regulatory regime governing AI. (2023) source Unverified
  • strongly disagrees and says:
    The Chamber recognizes the purpose of certification, audits, and assessments as a means for organizations to utilize repeatable and standard methodologies to help them improve the explainability of an algorithm. [...] Furthermore, there are strong apprehensions regarding any mandatory third-party audit requirement. Audits and assessments can cost hundreds of thousands of dollars, which can be cost-prohibitive, especially for start-ups and small businesses that may not have the necessary capital and funding for such expenses. Such an inability to meet compliance costs may lead these organizations to merge with other companies or not enter the market, reducing the overall market size and potentially reducing innovation. (2023) source Unverified
  • strongly agrees and says:
    We set out to answer an important question: If we want responsible innovation and trustworthy AI, how do we hold AI systems — and the entities and individuals that develop, deploy, and use them — accountable? How do we ensure that they are doing what they say? For example, if an AI system claims to keep data private, or operate securely, or avoid biased outcomes – how do we ensure those claims are true? The Report calls for improved transparency into AI systems, independent evaluations, and consequences for imposing risks. One key recommendation: The government ought to require independent audits of the highest-risk AI systems – such as those that directly impact physical safety or health, for example. (2024) source Unverified
  • strongly agrees and says:
    Whether on the Senate floor or around the dining room table, artificial intelligence is the hottest topic of the year. But these complex algorithms have a darker side as well — one that has real consequences for everyday people, especially marginalized communities. I am introducing the Artificial Intelligence Civil Rights Act to ensure that the AI Age does not replicate and supercharge the bias and discrimination already prevalent in society today. Make no mistake: we can have an AI revolution in this country while also protecting the civil rights and liberties of everyday Americans, we can support innovation without supercharging bias and discrimination, and we can promote competition while safeguarding people’s rights. [...] I look forward to working with my colleagues to ensure that any AI regulation includes strong and enforceable civil rights protections. Requires developers and deployers of covered algorithms to complete independently audited pre-deployment evaluations and post-deployment impact assessments to identify, evaluate, and mitigate any potential biased use or discriminatory outcomes. (2024) source Unverified
  • disagrees and says:
    AI audits should focus on risk assessment programs rather than individual AI applications. Auditing each application will result in a misallocation of resources, with too much effort spent on low-risk AI (e.g., spam filters, graphics generation for games, inventory management, and cybersecurity monitoring) and not enough effort spent on high-risk AI (e.g., hiring, lending, insurance underwriting, and education admissions). Moreover, a cost-heavy universal audit requirement may result in AI usage centralizing among large firms and the exclusion of smaller firms and startups from participating, simply because the costs of auditing each AI application would be too great. Companies should use the same principles applied to AI applications that are developed in-house for identifying risks associated with third-party AI applications and mitigate those risks through diligence, audits, and contractual terms. (2023) source Unverified
  • strongly agrees and says:
    Although developers like Microsoft are addressing these risks through rigorous testing and self-imposed standards, the risks involved are too important, and their scale and potential impacts at present too unknowable, to address them through self-regulation alone. We therefore think it is appropriate for Congress to consider legislation that would impose a licensing regime onto developers of this discrete class of highly capable, frontier AI models, and we are pleased to see that the Blumenthal-Hawley regulatory framework seeks to establish such a regime. Although the details of this licensing regime again would benefit from further thought and discussion, and there are critical consequences and details to deeply consider, such as the impact to open source models and the importance of continuing to foster an innovative open source ecosystem, we think it should seek to serve three key goals: First and foremost, any licensing regime must ensure that the development and deployment of highly capable AI models achieve defined safety and security objectives. In concrete terms, this may require licensees of these models, among other things, to engage in the pre-deployment testing that the Blumenthal-Hawley regulatory framework proposes. We agree that highly capable models may need to undertake extensive prerelease testing by internal and external experts. In addition, a licensing regime may require developers of highly capable models to provide advance notification of large training runs; engage in comprehensive risk assessments focused on identifying dangerous or breakthrough capabilities; and implement multiple other checkpoints along the way. Second, it must establish a framework for close coordination and information sharing between licensees and regulators, to ensure that developments material to the achievement of safety and security objectives are shared and acted on in a timely fashion. The Blumenthal-Hawley framework provides that an independent oversight body not only conducts audits but also monitors technological developments, which may be best accomplished in partnership with licensees. (2023) source Unverified
  • strongly agrees and says:
    This approach likens AI models to spacecraft venturing into unexplored celestial territories. Think of each new AI model as a spaceship. Before it “launches,” developers need to provide the plans, clear the airspace for launch, etc. And, after launch, just like NASA keeps us posted on space missions, developers should regularly update us on how their AI models are doing and what they are discovering. Assessment: Require independent third party audits when specific risk criteria are met for new capabilities. This gives governments the authority to request deeper evaluation, especially for powerful proprietary models that are otherwise subject to IP protection. Risk criteria for triggering these assessments will need to be updated periodically based on model advances, risk mitigation techniques and experience. (2023) source Unverified
  • disagrees and says:
    The third provision requires organizations to have third parties conduct annual audits of their algorithmic decision-making, including to look for disparate-impact risks, and create and retain an audit trail for at least five years that documents each type of algorithmic decision-making process, the data used in that process, the data used to train the algorithm, and any test results from evaluating the algorithm as well as the methodology used to test it. Organizations must also provide a detailed report of this information to the DC attorney general’s office. This provision places an enormous and burdensome auditing responsibility not only on those organizations using algorithms for decision-making, but also on service providers who may offer such functionality to others. Many of the auditing requirements would be inappropriate to require service providers to report since they will not necessarily have details about how a particular customer uses their service. Moreover, many businesses and service providers are struggling to comply with the algorithm auditing requirements in New York City, which only apply to AI systems used in hiring. The audit requirements in the proposed Act would apply to a much broader set of activities and present even more challenges. (2022) source Unverified
  • strongly agrees and says:
    6. External auditing. Providers of high-impact foundation models shall provide sufficient model access, as determined by the AI Office, to approved independent third-party auditors. Note: The AI Office should create and maintain a certification process for determining if auditors are sufficiently independent and qualified to audit high-impact foundation models. (2023) source Unverified
  • strongly agrees and says:
    regularly reviewing and updating guidance to keep pace with technology and strengthen their abilities to oversee new AI capabilities; setting procurement requirements to ensure that foundation models developed by private companies for the public sector uphold public standards; mandating independent third-party audits for all foundation models used in the public sector, whether developed in-house or externally procured source Unverified
  • strongly disagrees and says:
    “The Business Software Alliance is concerned by the New York State Senate’s passage of the New York AI Act. BSA is committed to helping policymakers develop legislation to address high-risk uses of AI, but this bill advanced through the New York Senate in a rushed process that failed to take stakeholder feedback — favorable or unfavorable — into account,” BSA senior director of state advocacy Heather Curry said in a June 13 statement. “This legislation, which would go far beyond what has been enacted in California or the European Union, is not ready for serious consideration by the Assembly,” Curry said. “It conflates the roles of different actors along the AI value chain, holding companies legally responsible for actions that may be taken by others. It also establishes an extensive and unworkable third-party audit regime and fragmented enforcement through private lawsuits.” “In a similar vein,” she said, “while the RAISE Act is intended to address worthwhile considerations around AI safety for large frontier models, it relies on a vague and unworkable incident reporting scheme. The bill would also undermine safety protections for frontier models by requiring developers of those models to publish their safety protocols — creating a roadmap for bad actors to exploit.” (2025) source Unverified
  • agrees and says:
    NIST should recommend that Congress invest substantial resources in the development and implementation of third-party, independent audits and impact assessments for AI (2024) source Unverified
  • strongly agrees and says:
    AI systems that pose higher risk should undergo regular third-party audits. (2023) source Unverified
  • strongly agrees and says:
    This would require comprehensive evaluation of potential harm through independent audits (2023) source Unverified
  • strongly disagrees and says:
    Society cannot wait years or even months for regulators to eventually get around to formally signing off on mandated algorithmic audits or impact assessments (2023) source Unverified
  • strongly disagrees and says:
    Under the current stipulations, these researchers would be obligated to maintain extensive documentation for their model. Further, the language in Article 28b(2(a)) seems to imply that providers of foundation models would be required to contract third-party specialists to audit their systems, which would be costly. However, unlike closed-source AI systems, open-source systems already permit independent experts to scrutinize them by the very nature that they are openly available, making this requirement an unnecessarily costly requirement for open-source AI. As it stands, the EU AI Act would stifle open-source development of AI models, which would, in turn, hinder innovation and competition within the AI industry. Many open-source AI innovations have found their way into several commercial applications. Indeed, more than 50,000 organizations use the open-source models on HuggingFace’s platform. The EU AI Act, in its current form, risks creating a regulatory environment that is not only burdensome and inappropriate for open-source AI developers but also counterproductive to the broader goals of fostering innovation, transparency, and competition in the AI sector. As the EU’s ongoing negotiations over the AI Act continue, particularly around the regulation of foundation models, policymakers need to adequately address these issues. (2023) source Unverified
  • strongly agrees and says:
    Require companies that make tools used in consequential decisions to undergo independent, third-party testing for bias, accuracy and more pre-deployment, and regularly after deployment. Clear and consistent standards should be developed for testing and third-party auditors should be regulated by relevant regulators or bodies. (2023) source Unverified
  • strongly agrees and says:
    So the RAISE Act asks for companies to do four things. They have to have an SSP. They have to have that SSP audited by a third party, not the government. They actually have to choose their own third party that does that. They have to disclose critical safety incidents, and we define that very specifically in the bill as to what would be something that would have to be disclosed. And they have to not terminate employees or contractors that raise critical risk. It applies only to large companies, so those that have spent $100 million or more in training frontier models. It exempts academia entirely. Obviously, that $100 million threshold, I think, exempts what any startup is currently building in terms of—again, that is just on compute and just on training. And it focuses on making sure that the companies are setting a baseline standard ahead of time through their SSP and not changing it after the fact. (2025) source Unverified
  • strongly agrees and says:
    LLM scaling laws also suggest the computing resources required to train large models are a reasonable proxy for model power and generality. Consistent with our argument for refining the definition of AI systems, the NTIA should thus consider defining a special regulatory threshold based on the computing cost needed to match or surpass the performance of GPT-4 across a robust set of benchmarks. Theoretical insights and hardware improvements that reduce the computing resources needed to match or surpass GPT-4 would require the threshold to be periodically updated. In the meantime, the handful of companies capable of training models that exceed this threshold should have to disclose their intent to do so and submit to external audits that cover both model alignment and operational security. Compute thresholds will eventually cease to be a useful proxy for model capability as training costs continue to fall. Nevertheless, the next five years are likely to be an inflection point in the race to build TAI. The NTIA should thus not shy away from developing a framework that is planned to obsolesce but which may still be useful for triaging AI accountability initiatives in the near term. (2023) source Unverified
  • agrees and says:
    This committee has discussed ideas such as third-party testing, a new national agency, and an international coordinating body, all of which I support. Here are some more ways to “move fast and fix things”: Eventually, we will develop forms of AI that are provably safe and beneficial, which can then be mandated. Until then, we need real regulation and a pervasive culture of safety. (2023) source Unverified
  • agrees and says:
    I think in the future, we’re going to need to audit technology products in terms of how they actually stimulate our nervous system. (2019) source Unverified
  • agrees and says:
    We need to figure out new ways of auditing and reviewing to make sure the AI-type mistakes don't wreck our work. (2024) source Unverified
  • disagrees and says:
    While auditing standards and transparency are necessary to answer fundamental questions, they will not address these harms. (2020) source Unverified
  • agrees and says:
    I think we absolutely have to make the testing mandatory, but we also need to be really careful about how we do it. (2024) source Unverified
  • strongly agrees and says:
    And they should have to let in external AI auditors to scrutinize their work so that the companies aren't just grading their own homework. (2024) source Unverified
  • strongly agrees and says:
    CAIP's primary recommendation is to implement mandatory third-party national security audits for advanced AI systems. This measure would enable security agencies to understand potential threats better and verify the safety of leading AI technologies. (2025) source Verified
  • strongly agrees and says:
    But we can’t let an industry with so many unknowns and potential harms police itself. We need clear rules that we can rely on to prevent AI’s harms. And while those exact rules are still being discussed, what we do know is that in the long-term we cannot rely on self-reporting alone from AI companies on compliance. We should build trust, but verify. We need qualified third parties to effectively audit Generative AI systems and verify their claims of compliance with federal laws and regulations. How we get there starts with establishing criteria and a path to certification for third-party auditors. (2024) source Verified
  • strongly agrees and says:
    Audits and other independent evaluations: Federal agencies should require independent audits and regulatory inspections of high-risk AI model classes and systems – such as those that present a high risk of harming rights or safety. (2024) source Unverified
  • disagrees and says:
    2. For high-risk AI systems referred to in points 2 to 8 of Annex III, providers shall follow the conformity assessment procedure based on internal control as referred to in Annex VI, which does not provide for the involvement of a notified body. 3. For high-risk AI systems covered by the Union harmonisation legislation listed in Section A of Annex I, the provider shall follow the relevant conformity assessment procedure as required under those legal acts. The requirements set out in Section 2 of this Chapter shall apply to those high-risk AI systems and shall be part of that assessment. Points 4.3., 4.4., 4.5. and the fifth paragraph of point 4.6 of Annex VII shall also apply. For the purposes of that assessment, notified bodies which have been notified under those legal acts shall be entitled to control the conformity of the high-risk AI systems with the requirements set out in Section 2, provided that the compliance of those notified bodies with requirements laid down in Article 31(4), (5), (10) and (11) has been assessed in the context of the notification procedure under those legal acts. (2025) source Unverified