AI companies should be liable for harms caused by their deployed models
For (26)
- Cristian Trout (researcher on AI liability and insurance): "Abstract: As AI systems become more autonomous and capable, experts warn of them potentially causing catastrophic losses. [...] developers of frontier AI models should be assigned [...] liability for harms [...]. Mandatory insurance [...] is recommended [...]" (unverified source, 2024)
- Gary Marcus (professor of psychology and neural science): "Yes, I agree [...] we need a new liability framework. [...] AI systems can produce harm at large scale [...]. Developing a framework for making companies responsible for harms [...] indirectly [...]." (unverified source, 2023)
- Damon T. Hewitt (civil rights lawyer; nonprofit leader): "Developers [...] be liable if they aren’t. An algorithm is safe if [...] prevent [...] harm [...]. An algorithm is effective if it functions as expected [...]." (unverified source, 2023)
- Danielle Coffey (News Media Alliance president and CEO): "We can agree more the hypocrisy that there would be a contradiction. So, it's good to hear, then, they're saying that they're the creator of the content because they're saying that in courts over copyright right now. 'We're creating this new express [...]'" (unverified source, 2024)
- Gleb Tsipursky (behavioral scientist and author): "However, the problem with these proposals is that they require the coordination of numerous stakeholders from a wide variety of companies and government figures. Let me share a more modest proposal that’s much more in line with our existing methods of [...]" (unverified source, 2023)
- Future of Life Institute (nonprofit on existential risks): "The Roadmap emphasizes the need to 'hold AI developers and deployers accountable if their products or actions cause harm to consumers.' We agree that developers, deployers, and users should all be expected to behave responsibly in the creation, deployment [...]" (unverified source, 2024)
- Meetali Jain (Tech Justice Law Project founder): "A strong product liability law incentivizes companies to consider safety throughout the design and development process of AI products [...]" (unverified source, 2025)
- Marc J. Pfeiffer (researcher on digital product liability): no quote provided.
- Alix Fraser (Issue One vice president of advocacy): "liability is an essential tool for ensuring that Big Tech builds products that are safe for kids, our national security, and our democracy." (unverified source, 2025)
- Christina Montgomery (IBM chief privacy and trust officer): "Yes. In fact, IBM has been publicly advocating to condition liability on a reasonable care standard." (unverified source, 2023)
- Brad Smith (Microsoft vice chair and president): "this is our legal problem, not yours. If they use our system properly, we’re the ones who are liable, not them." (unverified source, 2024)
- Mark MacCarthy (tech policy scholar): "Meta should be liable if Meta AI, its chatbot, provides advice, guidance, or recommendations that would create liability if provided by a human." (unverified source, 2025)
- Rohit Chopra (CFPB director; U.S. consumer regulator): "Specifically, advertising and marketing that uses sophisticated analytic techniques, depending on how these practices are designed and implemented, could subject firms to legal liability." (unverified source, 2023)
- David Evan Harris (UC Berkeley public scholar; tech policy advisor): "Hold developers of AI systems legally liable for harms caused by their systems, including harms to individuals and harms to society." (unverified source, 2023)
- Margrethe Vestager (EU Commission executive vice-president): "This should make it easier for consumers to claim compensation for damages caused by such systems." (unverified source, 2022)
- Rob Eleveld (Transparency Coalition CEO): "To think that product liability protections for US consumers should not apply to AI products [...] is absurd. Of course those laws apply to AI products." (unverified source, 2025)
- Ursula Pachl (BEUC deputy director general): "It is essential that liability rules catch up with the fact we are increasingly surrounded by digital and AI-driven products and services like home assistants or insurance policies based on personalised pricing." (unverified source, 2022)
- Gabriel Weil (law professor, AI governance): "Thankfully, just such a policy is available: Make AI companies pay for the harm they cause." (unverified source, 2024)
- Dick Durbin (U.S. Senator; Senate Democratic Whip): "The AI LEAD Act would establish a federal cause of action against AI companies for harms caused by their systems." (unverified source, 2025)
- Didier Reynders (European Commissioner for Justice, 2019–2024): "Current liability rules are not equipped to handle claims for damage caused by AI-enabled products and services." (unverified source, 2022)
- Scott Wiener (California state senator): "If you develop a model *today* [...] and that model causes harm of any scale, someone can try to sue you [...] potentially recover damages." (unverified source, 2024)
- John Villasenor (UCLA professor; Brookings senior fellow): "If it turns out that [...] renders the product harmful, the company needs to bear responsibility for that as well." (unverified source, 2019)
- Sam Altman (CEO, OpenAI): "However, if liability frameworks are too lax, negative externalities may appear where a company benefits from lack of oversight and regulation at the expense [...]" (unverified source, 2023)
- Josh Hawley (U.S. Senator for Missouri): "When these new technologies harm innocent people, the companies must be held accountable." (unverified source, 2023)
- Richard Blumenthal (U.S. Senator from Connecticut): "and ensuring that AI companies be held liable when their products breach privacy, violate civil rights, endanger the public, [...]" (unverified source, 2023)
- Woodrow Hartzog (law professor, Boston University): "and it's no substitute for meaningful liability when AI systems harm the public." (unverified source, 2023)
Abstain (0)
Against (9)
- Fei-Fei Li (Stanford AI professor; HAI co-director): "SB-1047 holds liable [...] the original developer of that model. It is impossible [...] to predict every possible use. SB-1047 will force developers to pull back [...]." (unverified source, 2024)
- Matt Schruers (tech policy advocate; CCIA president): "This [...] legislation could [...] subjecting [...] to lawsuits [...]. The bill is overly broad [...] subject to litigation. [...] could subject online services to costly [...] lawsuits." (unverified source, 2023)
- Jeff Jarvis (journalism professor and media commentator): "I can go to the machine, and I can have it write a horrible poem, as the Senator said, about anyone that's bad. Is that my fault, or is that the machine's fault? I think that the question becomes: you're going to find newspapers and TV stations that [...]" (unverified source, 2024)
- Cynthia Lummis (U.S. Senator from Wyoming): "Wyoming values both innovation and accountability; the RISE Act creates predictable standards that encourage safer AI development while preserving professional autonomy. This legislation doesn’t create blanket immunity for AI – in fact, it requires [...]" (unverified source, 2025)
- Tyler Cowen (professor of economics, George Mason University; author of Average is Over): "placing full liability on AI providers for all their different kinds of output, and the consequences of those outputs, would probably bankrupt them." (unverified source, 2024)
- Jack Solowey (Cato Institute policy analyst): "Making AI providers liable for securities violations generally would produce inferior incentives. [...] Neither doctrine suggests universal AI provider liability for resulting harm is appropriate." (unverified source, 2024)
- Anjney Midha (a16z general partner): "The idea of imposing civil and criminal liability on model developers when downstream users do something bad is so misguided and such a dangerous precedent." (unverified source, 2024)
- Yann LeCun (computer scientist, AI researcher): "Making technology developers liable for bad uses of products built from their technology will simply stop technology development." (unverified source, 2024)
- Ari Cohn (free speech counsel, TechFreedom): "But exposing the tool makers or providers to liability for intentional bad acts that others set out to do is a cure worse than the disease." (unverified source, 2023)
r/ai-safety