Require AI systems above a capability threshold to be interpretable
For (26)
- Gary Marcus (Professor of Psychology and Neural Science) votes For: "Algorithmic transparency. When a driverless car has an accident, or a consumer’s loan application has been denied, we should be able to ask what’s gone wrong. The big trouble with the black box algorithms that are currently in vogue is that [nobody] [...]" (unverified source)
- Rob Strayer (EVP Policy, Information Technology Industry Council) votes For: "Transparency is a key means by which to achieve that trust. [...] Transparency is an overarching concept [...]. Transparency requirements should be risk-based." (unverified source, 2023)
- Victoria Espinel (CEO, BSA Software Alliance) votes For: "My message to Congress is: Do not wait. Focus on high-risk AI uses. Companies should publicly certify that they have met these requirements." (unverified source, 2023)
- Ramayya Krishnan (Dean, Heinz College, Carnegie Mellon) votes For: "Congress should require standardized documentation. Congress should require a model validation report. [...] provide the required assurance prior to deployment." (unverified source, 2023)
- Sam Gregory (Executive Director, WITNESS) votes For: "Transparency [...] is a critical area. [...] Embed human rights standards and a rights-based approach in the response to AI. [...] Place firm responsibility on stakeholders [...]" (unverified source, 2023)
- White House Office of Science and Technology Policy (OSTP; U.S. White House science office) votes For: "You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you. Designers, developers, and deployers of automated systems should provide generally accessible plain language documentation [...]" (unverified source, 2022)
- Consumer Financial Protection Bureau (CFPB; U.S. consumer finance regulator) votes For: "Today, the Consumer Financial Protection Bureau (CFPB) confirmed that federal anti-discrimination law requires companies to explain to applicants the specific reasons for denying an application for credit or taking other adverse actions, even if the [...]" (unverified source, 2022)
- UNESCO (UN agency for education, science, culture) votes For: "Governments should adopt a regulatory framework that sets out a procedure, particularly for public authorities, to carry out ethical impact assessments on AI systems to predict consequences, mitigate risks, avoid harmful consequences, facilitate [...]" (unverified source, 2021)
- Frank Pasquale (law professor, algorithmic accountability expert) votes For: "Some of the black boxes of reputation, search, and finance simply need to be pried open." (unverified source, 2015)
- Alan F. T. Winfield (Professor of Robot Ethics, UWE Bristol) votes For: "Robots should be fitted with an 'ethical black box' to keep track of their decisions and enable them to explain their actions when accidents happen." (unverified source, 2017)
- Abraham C. Meltzer (California judge and legal scholar) votes For: Black box AI systems "do not explain their predictions in a way that humans can understand." (unverified source, 2024)
- Suresh Venkatasubramanian (Brown CS professor; former OSTP official) votes For: "Congress should demand that these systems be explainable. How that plays out will be a matter for innovation." (unverified source, 2023)
- Information Commissioner's Office (ICO; UK data protection authority) votes For: "Where an individual would expect an explanation from a human, they should instead expect an explanation from those accountable for an AI system." (unverified source, 2020)
- Michael Bennet (U.S. Senator from Colorado) votes For: "AI systems should undergo regular public risk assessments to examine their safety, reliability, security, explainability, and efficacy." (unverified source, 2023)
- Lloyd J. Austin III (U.S. Secretary of Defense) votes For: "So we have established core principles for Responsible AI. Our development, deployment, and use of AI must always be responsible, equitable, traceable, reliable, and governable. We’re going to use AI for clearly defined purposes. We’re not going to [...]" (unverified source, 2021)
- AI Now Institute (NYU institute on AI policy) votes For: "1 — Core public agencies, such as those responsible for criminal justice, healthcare, welfare, and education (e.g. 'high stakes' domains) should no longer use 'black box' AI and algorithmic systems. This includes the unreviewed or unvalidated use of [...]" (unverified source, 2017)
- European Parliament (EU legislative body) votes For: "Citizens will have a right to launch complaints about AI systems and receive explanations about decisions based on high-risk AI systems that impact their rights." (unverified source, 2023)
- Christina Montgomery (IBM Chief Privacy & Trust Officer) votes For: "AI should augment humans, not replace them, it should be explainable, and fair and private and secure." (unverified source, 2023)
- Federal Trade Commission (FTC; U.S. consumer protection agency) votes For: "If you are using AI to make decisions about consumers in any context, consider how you would explain your decision to your customer if asked." (unverified source, 2020)
- European Union (political and economic union) votes For: "ensure that the AI system’s output is interpretable by the provider and the user." (unverified source, 2023)
- Chuck Schumer (U.S. Senate Majority Leader) votes For: "Finally, explainability, one of the thorniest and most technically complicated issues we face, but perhaps the most important of all. Explainability is about transparency. When you ask an AI system a question and it gives you an answer, perhaps the [...]" (unverified source, 2023)
- Anthropic (AI safety research company) votes For: "The definition, criteria, and safety measures for each ASL level are described in detail in the main document, but at a high level, ASL-2 measures represent our current safety and security standards and overlap significantly with our recent White House [...]" (unverified source, 2023)
- Reporters Without Borders (RSF; press freedom nonprofit) votes For: "RSF, which has already called for the AI Act to include measures to protect the right to information, is convinced that these models must be regulated. But some European countries, including France, are advocating self-regulation based on codes [...]" (unverified source, 2023)
- Cynthia Rudin (Duke professor, interpretable ML advocate) votes For: "Black box machine learning models are currently being used for high stakes decision-making throughout society, causing problems throughout healthcare, criminal justice, and in other domains. People have hoped that creating methods for explaining these [...]" (verified source, 2018)
- Rohit Chopra (CFPB Director; U.S. consumer regulator) votes For: "We’ve also taken action to protect the public from black box credit models – in some cases so complex that the financial firms that rely on them can’t even explain the results. Companies are required to tell you why you were denied for credit – and [...]" (unverified source, 2024)
- Dragoș Tudorache (EU MEP; AI Act co-rapporteur) votes For: "Then the second floor, you have the high-risk applications. And the commission is proposing in an annex, in fact – and I’ll explain why in an annex – identify several sectors where if you develop AI in those sectors, again, because the likelihood of [...]" (unverified source)
Abstain (0)
Against (12)
- Dario Amodei (co-founder and CEO of Anthropic; formerly at OpenAI) votes Against: "Many of the risks and worries associated with generative AI are ultimately consequences of this opacity, and would be much easier to address if the models were interpretable. [...] governments can use light-touch rules to encourage the development [...]" (verified source)
- Scott Robbins (AI ethics researcher) votes Against: "[...] principles requiring that AI be explicable are misguided. We should be deciding which decisions require explanations. Automation is still an option; [...]" (unverified source, 2019)
- Brookings Institution (U.S. public policy think tank) votes Against: "Explainable AI (XAI) is often offered as the answer to the black box problem and is broadly defined as 'machine learning techniques that make it possible for human users to understand, appropriately trust, and effectively manage AI.' Around the world [...]" (unverified source, 2021)
- MIT Lincoln Laboratory (MIT research lab for national security) votes Against: "As autonomous systems and artificial intelligence (AI) become increasingly common in daily life, new methods are emerging to help humans check that these systems are behaving as expected. One method, called formal specifications, uses mathematical [...]" (unverified source, 2023)
- Roman V. Yampolskiy (AI safety researcher, Louisville professor) votes Against: "Advanced AIs would not be able to accurately explain some of their decisions." (unverified source, 2019)
- Bruce Schneier (security technologist and author) votes Against: "Forcing an AI to produce a human-understandable explanation is an additional constraint, and it could affect the quality of its decisions." (unverified source, 2023)
- U.S. Chamber of Commerce (C_TEC; largest U.S. business federation) votes Against: "If the RMF defines and deploys these concepts beyond their current development, as it will add unnecessary burdens which could stifle innovation." (unverified source, 2021)
- James Broughel (economist focused on regulation) votes Against: "We should strive to make AI systems interpretable where possible, but not at the cost of the benefits they deliver." (unverified source, 2024)
- Geoffrey Hinton ("Godfather of Deep Learning") votes Against: "I’m an expert on trying to get the technology to work, not an expert on social policy. One place where I do have technical expertise that’s relevant is [whether] regulators should insist that you can explain how your AI system works. I think that [...]" (unverified source, 2017)
- Pedro Domingos (Professor of computer science at UW; author of 'The Master Algorithm') votes Against: "But it’s potentially disastrous, because there’s often a tradeoff between accuracy and explainability." (unverified source, 2018)
- Peter Norvig (computer scientist, AI researcher) votes Against: "You can ask a human, but, you know, what cognitive psychologists have discovered is that when you ask a human you’re not really getting at the decision process. They make a decision first, and then you ask, and then they generate an explanation and [...]" (unverified source)
- Max W. Shen (AI researcher; trust and interpretability) votes Against: "The problem of human trust in artificial intelligence is one of the most fundamental problems in applied machine learning. Our processes for evaluating AI trustworthiness have substantial ramifications for ML's impact on science, health, and humanity [...]" (unverified source, 2022)