Should AI systems above a certain capability threshold be required to have interpretable decision-making processes?
Results (40):
Quotes (33)
-
Dario Amodei, CEO of Anthropic, disagrees and says: Many of the risks and worries associated with generative AI are ultimately consequences of this opacity, and would be much easier to address if the models were interpretable. [...] governments can use light-touch rules to encourage the development of interpretability research and its application to addressing problems with frontier AI models. Given how nascent and undeveloped the practice of ‘AI MRI’ is, it should be clear why it doesn’t make sense to regulate or mandate that companies conduct them, at least at this stage: it’s not even clear what a prospective law should ask companies to do. (source verified)
-
Gary Marcus, Professor of Psychology and Neural Science, agrees and says: Algorithmic transparency. When a driverless car has an accident, or a consumer’s loan application has been denied, we should be able to ask what’s gone wrong. The big trouble with the black box algorithms that are currently in vogue is that [nobody] knows exactly why an LLM or generative model produces what it does. Guidelines like the White House’s Blueprint for an AI Bill of Rights, UNESCO’s Recommendation on the Ethics of Artificial Intelligence, and the Center for AI and Digital Policy’s Universal Guidelines for AI all decry this lack of interpretability. The EU AI Act represents real progress in this regard, but so far in the United States, there is little legal requirement for algorithms to be disclosed or interpretable (except in narrow domains such as credit decisions). To their credit, Senator Ron Wyden (D-OR), Senator Cory Booker (D-NJ), and Representative Yvette Clarke (D-NY) introduced an Algorithmic Accountability Act in February 2022 (itself an update of an earlier proposal from 2019), but it has not become law. If we took interpretability seriously — as we should — we would wait until better technology was available. In the real world, in the United States, the quest for profits is basically shoving aside consumer needs and human rights. (source unverified)
-
MIT Lincoln Laboratory, MIT research lab for national security, disagrees and says: As autonomous systems and artificial intelligence (AI) become increasingly common in daily life, new methods are emerging to help humans check that these systems are behaving as expected. One method, called formal specifications, uses mathematical formulas that can be translated into natural-language expressions. Some researchers claim that this method can be used to spell out decisions an AI will make in a way that is interpretable to humans. MIT Lincoln Laboratory researchers wanted to check such claims of interpretability. Their findings point to the opposite: formal specifications do not seem to be interpretable by humans. In the team's study, participants were asked to check whether an AI agent's plan would succeed in a virtual game. Presented with the formal specification of the plan, the participants were correct less than half of the time. “The results are bad news for researchers who have been claiming that formal methods lent interpretability to systems. It might be true in some restricted and abstract sense, but not for anything close to practical system validation,” says Hosea Siu, a researcher in the Laboratory's AI Technology Group. (2023; source unverified)
-
UNESCO, UN agency for education, science, and culture, agrees and says: Governments should adopt a regulatory framework that sets out a procedure, particularly for public authorities, to carry out ethical impact assessments on AI systems to predict consequences, mitigate risks, avoid harmful consequences, facilitate citizen participation and address societal challenges. The assessment should also establish appropriate oversight mechanisms, including auditability, traceability and explainability, which enable the assessment of algorithms, data and design processes, as well as include external review of AI systems. Ethical impact assessments should be transparent and open to the public, where appropriate. Such assessments should also be multidisciplinary, multi-stakeholder, multicultural, pluralistic and inclusive. The public authorities should be required to monitor the AI systems implemented and/or deployed by those authorities by introducing appropriate mechanisms and tools. Member States should set clear requirements for AI system transparency and explainability so as to help ensure the trustworthiness of the full AI system life cycle. Such requirements should involve the design and implementation of impact mechanisms that take into consideration the nature of application domain, intended use, target audience and feasibility of each particular AI system. (2021; source unverified)
-
White House Office of Science and Technology Policy (OSTP), U.S. White House science office, strongly agrees and says: You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you. Designers, developers, and deployers of automated systems should provide generally accessible plain language documentation including clear descriptions of the overall system functioning and the role automation plays, notice that such systems are in use, the individual or organization responsible for the system, and explanations of outcomes that are clear, timely, and accessible. Such notice should be kept up-to-date and people impacted by the system should be notified of significant use case or key functionality changes. In order to guard against potential harms, the American public needs to know if an automated system is being used. Clear, brief, and understandable notice is a prerequisite for achieving the other protections in this framework. Likewise, the public is often unable to ascertain how or why an automated system has made a decision or contributed to a particular outcome. The decision-making processes of automated systems tend to be opaque, complex, and, therefore, unaccountable, whether by design or by omission. These factors can make explanations both more challenging and more important, and should not be used as a pretext to avoid explaining important decisions to the people impacted by those choices. In the context of automated systems, clear and valid explanations should be recognized as a baseline requirement. Tailored to the level of risk: an assessment should be done to determine the level of risk of the automated system. In settings where the consequences are high as determined by a risk assessment, or extensive oversight is expected (e.g., in criminal justice or some public sector settings), explanatory mechanisms should be built into the system design so that the system’s full behavior can be explained in advance (i.e., only fully transparent models should be used), rather than as an after-the-decision interpretation. In other settings, the extent of explanation provided should be tailored to the risk level. (2022; source unverified)
-
Brookings Institution, U.S. public policy think tank, disagrees and says: Explainable AI (XAI) is often offered as the answer to the black box problem and is broadly defined as “machine learning techniques that make it possible for human users to understand, appropriately trust, and effectively manage AI.” Around the world, explainability has been referenced as a guiding principle for AI development, including in Europe’s General Data Protection Regulation. Explainable AI has also been a major research focus of the Defense Advanced Research Projects Agency (DARPA) since 2016. However, after years of research and application, the XAI field has generally struggled to realize the goals of understandable, trustworthy, and controllable AI in practice. The end goal of explainability depends on the stakeholder and the domain. Explainability enables interactions between people and AI systems by providing information about how decisions and events come about, but developers, domain experts, users, and regulators all have different needs from the explanations of AI models. These differences are not only related to degrees of technical expertise and understanding, but also include domain-specific norms and decision-making mechanisms. For now, users and other external stakeholders are typically afforded little if any insight into the behind-the-scenes workings of the AI systems that impact their lives and opportunities. This asymmetry of knowledge about how an AI system works, and the power to do anything about it, is one of the key dilemmas at the heart of explainability. (2021; source unverified)
-
Consumer Financial Protection Bureau (CFPB), U.S. consumer finance regulator, agrees and says: Today, the Consumer Financial Protection Bureau (CFPB) confirmed that federal anti-discrimination law requires companies to explain to applicants the specific reasons for denying an application for credit or taking other adverse actions, even if the creditor is relying on credit models using complex algorithms. The CFPB published a Consumer Financial Protection Circular to remind the public, including those responsible for enforcing federal consumer financial protection law, of creditors’ adverse action notice requirements under the Equal Credit Opportunity Act (ECOA). “Companies are not absolved of their legal responsibilities when they let a black-box model make lending decisions,” said CFPB Director Rohit Chopra. “The law gives every applicant the right to a specific explanation if their application for credit was denied, and that right is not diminished simply because a company uses a complex algorithm that it doesn’t understand.” ECOA protects individuals and businesses against discrimination when seeking, applying for, and using credit. To help ensure a creditor does not discriminate, ECOA requires that a creditor provide a notice when it takes an adverse action against an applicant, which must contain the specific and accurate reasons for that adverse action. Creditors cannot lawfully use technologies in their decision-making processes if using them means that they are unable to provide these required explanations. (2022; source unverified)
-
Abraham C. Meltzer, California judge and legal scholar, strongly agrees and says: Black box AI systems "do not explain their predictions in a way that humans can understand." (2024; source unverified)
-
Roman V. Yampolskiy, AI safety researcher and Louisville professor, strongly disagrees and says: Advanced AIs would not be able to accurately explain some of their decisions. (2019; source unverified)
-
Alan F. T. Winfield, Professor of Robot Ethics at UWE Bristol, strongly agrees and says: Robots should be fitted with an "ethical black box" to keep track of their decisions and enable them to explain their actions when accidents happen. (2017; source unverified)
-
Frank Pasquale, law professor and algorithmic accountability expert, agrees and says: Some of the black boxes of reputation, search, and finance simply need to be pried open. (2015; source unverified)
-
Bruce Schneier, security technologist and author, disagrees and says: Forcing an AI to produce a human-understandable explanation is an additional constraint, and it could affect the quality of its decisions. (2023; source unverified)
-
Suresh Venkatasubramanian, Brown CS professor and former OSTP official, strongly agrees and says: Congress should demand that these systems be explainable. How that plays out will be a matter for innovation. (2023; source unverified)
-
James Broughel, economist focused on regulation, disagrees and says: We should strive to make AI systems interpretable where possible, but not at the cost of the benefits they deliver. (2024; source unverified)
-
U.S. Chamber of Commerce (C_TEC), largest U.S. business federation, disagrees and says: If the RMF defines and deploys these concepts beyond their current development, it will add unnecessary burdens which could stifle innovation. (2021; source unverified)
-
Information Commissioner's Office (ICO), UK data protection authority, agrees and says: Where an individual would expect an explanation from a human, they should instead expect an explanation from those accountable for an AI system. (2020; source unverified)
-
Michael Bennet, U.S. Senator from Colorado, agrees and says: AI systems should undergo regular public risk assessments to examine their safety, reliability, security, explainability, and efficacy. (2023; source unverified)
-
AI Now Institute, NYU institute on AI policy, strongly agrees and says: 1 — Core public agencies, such as those responsible for criminal justice, healthcare, welfare, and education (e.g., “high stakes” domains) should no longer use ‘black box’ AI and algorithmic systems. This includes the unreviewed or unvalidated use of pre-trained models, AI systems licensed from third party vendors, and algorithmic processes created in-house. The use of such systems by public agencies raises serious due process concerns, and at a minimum such systems should be available for public auditing, testing, and review, and subject to accountability standards. This would represent a significant shift: our recommendation reflects the major decisions that AI and related systems are already influencing, and the multiple studies providing evidence of bias in the last twelve months (as detailed in our report). Others are also moving in this direction, from the ruling in favor of teachers in Texas, to the current process underway in New York City this month, where the City Council is considering a bill to ensure transparency and testing of algorithmic decision making systems. (2017; source unverified)
-
Geoffrey Hinton, "Godfather of Deep Learning," strongly disagrees and says: I’m an expert on trying to get the technology to work, not an expert on social policy. One place where I do have technical expertise that’s relevant is [whether] regulators should insist that you can explain how your AI system works. I think that would be a complete disaster. People can’t explain how they work, for most of the things they do. When you hire somebody, the decision is based on all sorts of things you can quantify, and then all sorts of gut feelings. People have no idea how they do that. If you ask them to explain their decision, you are forcing them to make up a story. Neural nets have a similar problem. When you train a neural net, it will learn a billion numbers that represent the knowledge it has extracted from the training data. If you put in an image, out comes the right decision, say, whether this was a pedestrian or not. But if you ask “Why did it think that?” well if there were any simple rules for deciding whether an image contains a pedestrian or not, it would have been a solved problem ages ago. (2017; source unverified)
-
Lloyd J. Austin III, U.S. Secretary of Defense, agrees and says: So we have established core principles for Responsible AI. Our development, deployment, and use of AI must always be responsible, equitable, traceable, reliable, and governable. We’re going to use AI for clearly defined purposes. We’re not going to put up with unintended bias from AI. We’re going to watch out for unintended consequences. And we’re going to immediately adjust, improve, or even disable AI systems that aren’t behaving the way that we intend. (2021; source unverified)
-
Pedro Domingos, professor of computer science at UW and author of 'The Master Algorithm', strongly disagrees and says: But it’s potentially disastrous, because there’s often a tradeoff between accuracy and explainability. (2018; source unverified)
-
European Union, political and economic union, strongly agrees and says: ensure that the AI system’s output is interpretable by the provider and the user. (2023; source unverified)
-
European Parliament, EU legislative body, strongly agrees and says: Citizens will have a right to launch complaints about AI systems and receive explanations about decisions based on high-risk AI systems that impact their rights. (2023; source unverified)
-
Federal Trade Commission (FTC), U.S. consumer protection agency, agrees and says: If you are using AI to make decisions about consumers in any context, consider how you would explain your decision to your customer if asked. (2020; source unverified)
-
Christina Montgomery, IBM Chief Privacy & Trust Officer, agrees and says: AI should augment humans, not replace them; it should be explainable, and fair and private and secure. (2023; source unverified)
-
Peter Norvig, computer scientist and AI researcher, disagrees and says: You can ask a human, but, you know, what cognitive psychologists have discovered is that when you ask a human you’re not really getting at the decision process. They make a decision first, and then you ask, and then they generate an explanation and that may not be the true explanation. So we might end up being in the same place with machine learning where we train one system to get an answer and then we train another system to say – given the input of this first system, now it’s your job to generate an explanation. (source unverified)
-
Reporters Without Borders (RSF), press freedom nonprofit, agrees and says: RSF, which has already called for the AI Act to include measures to protect the right to information, is convinced that these models must be regulated. But some European countries, including France, are advocating self-regulation based on codes of conduct. This will not suffice because these codes of conduct are non-binding and rely solely on the goodwill of AI companies. Under the Paris Charter on AI and Journalism, AI systems must be fully audited to verify their compatibility with journalistic ethics before they can be used. The AI Act must therefore make it a requirement for foundation models to comply with standards of openness, explainability of operation and transparency of systems as well as with measures to protect the right to information. (2023; source unverified)
-
Chuck Schumer, U.S. Senate Majority Leader, agrees and says: Finally, explainability, one of the thorniest and most technically complicated issues we face, but perhaps the most important of all. Explainability is about transparency. When you ask an AI system a question and it gives you an answer, perhaps the answer you weren’t expecting, you want to know where the answer came from. You should be able to ask: Why did AI choose this answer over some other answer that also could have been a possibility? And it should be done in a simple way, so all users can understand how these systems come up with answers. But fortunately, the average person does not need to know the inner workings of these algorithms. But we do, we do need to require companies to develop a system where, in simple and understandable terms, users understand why the system produced a particular answer and where that answer came from. This is very complicated but very important work. And here we will need the ingenuity of the experts and companies to come up with a fair solution that Congress can use to break open AI’s black box. Innovation first, but with security, accountability, foundations and explainability. These are the principles that I believe will ensure that AI innovation is safe and responsible and has the appropriate guardrails. (2023; source unverified)
-
Anthropic, AI safety research company, agrees and says: The definition, criteria, and safety measures for each ASL level are described in detail in the main document, but at a high level, ASL-2 measures represent our current safety and security standards and overlap significantly with our recent White House commitments. ASL-3 measures include stricter standards that will require intense research and engineering effort to comply with in time, such as unusually strong security requirements and a commitment not to deploy ASL-3 models if they show any meaningful catastrophic misuse risk under adversarial testing by world-class red-teamers (this is in contrast to merely a commitment to perform red-teaming). Our ASL-4 measures aren’t yet written (our commitment is to write them before we reach ASL-3), but may require methods of assurance that are unsolved research problems today, such as using interpretability methods to demonstrate mechanistically that a model is unlikely to engage in certain catastrophic behaviors. We have designed the ASL system to strike a balance between effectively targeting catastrophic risk and incentivising beneficial applications and safety progress. On the one hand, the ASL system implicitly requires us to temporarily pause training of more powerful models if our AI scaling outstrips our ability to comply with the necessary safety procedures. But it does so in a way that directly incentivizes us to solve the necessary safety issues as a way to unlock further scaling, and allows us to use the most powerful models from the previous ASL level as a tool for developing safety features for the next level. (2023; source unverified)
-
Cynthia Rudin, Duke professor and interpretable ML advocate, strongly agrees and says: Black box machine learning models are currently being used for high stakes decision-making throughout society, causing problems throughout healthcare, criminal justice, and in other domains. People have hoped that creating methods for explaining these black box models will alleviate some of these problems, but trying to explain black box models, rather than creating models that are interpretable in the first place, is likely to perpetuate bad practices and can potentially cause catastrophic harm to society. There is a way forward -- it is to design models that are inherently interpretable. This manuscript clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare, and computer vision. (2018; source verified)
-
Max W. Shen, AI researcher on trust and interpretability, strongly disagrees and says: The problem of human trust in artificial intelligence is one of the most fundamental problems in applied machine learning. Our processes for evaluating AI trustworthiness have substantial ramifications for ML's impact on science, health, and humanity, yet confusion surrounds foundational concepts. What does it mean to trust an AI, and how do humans assess AI trustworthiness? What are the mechanisms for building trustworthy AI? And what is the role of interpretable ML in trust? Here, we draw from statistical learning theory and sociological lenses on human-automation trust to motivate an AI-as-tool framework, which distinguishes human-AI trust from human-AI-human trust. Evaluating an AI's contractual trustworthiness involves predicting future model behavior using behavior certificates (BCs) that aggregate behavioral evidence from diverse sources including empirical out-of-distribution and out-of-task evaluation and theoretical proofs linking model architecture to behavior. We clarify the role of interpretability in trust with a ladder of model access. Interpretability (level 3) is not necessary or even sufficient for trust, while the ability to run a black-box model at-will (level 2) is necessary and sufficient. While interpretability can offer benefits for trust, it can also incur costs. We clarify ways interpretability can contribute to trust, while questioning the perceived centrality of interpretability to trust in popular discourse. How can we empower people with tools to evaluate trust? Instead of trying to understand how a model works, we argue for understanding how a model behaves. Instead of opening up black boxes, we should create more behavior certificates that are more correct, relevant, and understandable. We discuss how to build trusted and trustworthy AI responsibly. (2022; source unverified)
-
Rohit Chopra, CFPB Director and U.S. consumer regulator, agrees and says: We’ve also taken action to protect the public from black box credit models – in some cases so complex that the financial firms that rely on them can’t even explain the results. Companies are required to tell you why you were denied for credit – and using a complex algorithm is not a defense against providing specific and accurate explanations. Developing methods to improve home valuation, lending, and marketing are not inherently bad. But when done in irresponsible ways, such as creating black box models or not carefully studying the data inputs for bias, these products and services pose real threats to consumers’ civil rights. It also threatens law-abiding nascent firms and entrepreneurs trying to compete with those who violate the law. (2024; source unverified)
-
Dragoș Tudorache, EU MEP and AI Act co-rapporteur, agrees and says: Then the second floor, you have the high-risk applications. And the commission is proposing in an annex, in fact – and I’ll explain why in an annex – identify several sectors where if you develop AI in those sectors, again, because the likelihood of touching upon the interests of individuals is very high, then they could qualify as high risk. And as a result of that, you’ll have to go through certain compliance requirements. And that goes into certain documentation, that it will have to have certain obligations of transparency, you’ll have to put it in a European-wide database, you have to explain the underlying elements of that algorithm to the user. So requirements that will make that high risk, not outside the law. So, it’s not bad AI. But again, because of its impact or potential impact on the rights of individuals, it will have to be better explained, it will have to be more transparent in the way it works, in the way the algorithm actually plays out its effect. Then you have a third smaller floor in the pyramid, which are not high risk, but they are applications that do require a certain level of transparency. For example, deep fakes. That’s the example that actually is being given as a use case by the commission in its proposal. Again, the requirements are lower than for the high risk, but still, it requires certain explainability, certain transparency to comply with. And then you have, as I said, the vast majority of applications that will go in the lower part that will not be regulated at all. (source unverified)