Ban open source AI models capable of creating WMDs
For (27)
- Scott Alexander, author and psychiatrist, votes For and says: "Someday AIs really will be able to make nukes or pull off $500 million hacks. At that point, companies will have to certify that their model has been trained not to do this, and that it will stay trained. But if it were open-source, then anyone could..." (Verified source, 2024)
- Stuart J. Russell, AI expert and professor, votes For and says: "And a second point is about liability. And it's not completely clear where exactly the liability should lie. But to continue the nuclear analogy, if a corporation decided they wanted to sell a lot of enriched uranium in supermarkets, and someone deci..." (Verified source, 2023)
- Dario Amodei, Anthropic co-founder and CEO, formerly at OpenAI, votes For and says: "From a business perspective, the difference between open and closed is a little bit overblown. From a security perspective, the difference between open and closed models is, for some intents and purposes, overblown. The most important thing is how po..." (Verified source, 2025)
- Yoshua Bengio, AI pioneer and Turing Award winner, votes For and says: "I think it's really important because if we put something out there that is open source and can be dangerous – which is a tiny minority of all the code that is open source – essentially we're opening all the doors to bad actors [...] As these systems..." (Verified source, 2023)
- Steven Adler, AI safety researcher and Lawfare writer, votes For and says: "There is no permanent way to apply safety limitations to prevent users from obtaining help from the model with regard to bioweapons-related tasks." (Verified source, 2025)
- Kevin M. Esvelt, MIT biosecurity and gene drive researcher, votes For and says: "Any and all safeguards can and will be removed within days of a large model being open-sourced." (Verified source)
- Ilya Sutskever, AI researcher, co-founder and former chief scientist at OpenAI, votes For and says: "I fully expect that in a few years it's going to be completely obvious to everyone that open-sourcing AI is just not wise." (Verified source, 2023)
- Geoffrey Hinton, "Godfather of Deep Learning", votes For and says: "Let's open source nuclear weapons too to make them safer. The good guys (us) will always have bigger ones than the bad guys (them) so it should all be OK." (Verified source, 2023)
- Jonas B. Sandbrink, biosecurity researcher at the University of Oxford, votes For and says: "LLMs, such as GPT-4 and its successors, might provide dual-use information and thus remove some barriers encountered by historical biological weapons efforts. [...] BDTs may enable the creation of pandemic pathogens substantially worse than anything..." (Unverified source, 2023)
- Michael Jacob, Council on Strategic Risks fellow, votes For and says: "Via the Australia Group and the US Department of Commerce, the US Government should explicitly design export controls to limit open sourcing of the riskiest AI-enabled Biological Design Tools (BDTs). Since publishing a tool online can be considered..." (Unverified source, 2024)
- Demis Hassabis, Nobel laureate, AI researcher and CEO of DeepMind, votes For and says: "endowing rogue nations or terrorists with tools to synthesize a deadly virus. [...] keep the 'weights' of the most powerful models out of the public's hands." (Unverified source, 2025)
- Edouard Harris, Gladstone AI co-founder and CTO, votes For and says: "outlaw the open-sourcing of advanced AI model weights. [...] If you proliferate an open source model, even if it looks safe, it could still be dangerous." (Unverified source, 2024)
- Peter S. Park, MIT AI existential safety fellow, votes For and says: "Widely releasing the very advanced AI models of the future would be especially problematic, because preventing their misuse would be essentially impossible, he says, adding that they could enable rogue actors and nation-state adversaries to wage cybe..." (Unverified source, 2024)
- Gary Marcus, professor of psychology and neural science, votes For and says: "Some of the most recent models maybe can help people make biological weapons." (Unverified source, 2025)
- Alan Z. Rozenshtein, law professor and legal scholar, votes For and says: "SB 1047 would, for example, forbid the release of a frontier model that could be easily induced to output detailed instructions for making a bioweapon." (Unverified source, 2024)
- Leopold Aschenbrenner, AI investor and policy analyst, votes For and says: "On the current course, the leading Chinese AGI labs won't be in Beijing or Shanghai—they'll be in San Francisco and London. In a few years, it will be clear that the AGI secrets are the United States' most important national defense secrets—deserving..." (Unverified source, 2024)
- Eliezer Yudkowsky, AI researcher and writer, votes For and says: "But open sourcing, you know, that's just sheer catastrophe. The whole notion of open sourcing, this was always the wrong approach, the wrong ideal. There are places in the world where open source is a noble ideal and building stuff you don't understa..." (Unverified source, 2023)
- Vinod Khosla, venture capitalist and Khosla Ventures founder, votes For and says: "And @pmarca would you open source the manhattan project? This one is more serious for national security. We are in a tech economic war with China and AI that is a must win. This is exactly what patriotism is about, not slogans." (Unverified source, 2024)
- Richard Blumenthal, U.S. Senator from Connecticut, votes For and says: "On the issue of open source, you each raised the security and safety risk of AI models that are open source or are leaked to the public, the danger. There are some advantages to having open source, as well. It's a complicated issue. I appreciate that..." (Unverified source, 2023)
- Jason G. Matheny, RAND president and CEO, votes For and says: "Artificial intelligence is advancing so rapidly that many who have been part of its development are now among the most vocal about the need to regulate it. While AI will bring many benefits, it is also potentially dangerous; it could be used to creat..." (Unverified source, 2023)
- Lawrence Lessig, Harvard Law professor, votes For and says: "You basically have a bomb that you're making available for free, and you don't have any way to defuse it necessarily. It's just an obviously fallacious argument. We didn't do that with nuclear weapons: we didn't say 'the way to protect the world fro..." (Unverified source, 2024)
- OpenAI, AI research organization, votes For and says: "Our Preparedness Framework defines capability thresholds that could lead to severe risk, and risk-specific guidelines to sufficiently minimize the risk of severe harm. For biology, a model that meets a High capability threshold could provide meaningf..." (Unverified source, 2025)
- Anna G. Eshoo, U.S. Representative from California, votes For and says: "Content controls, a free content filter, monitoring of applications, and a code of conduct are several other steps industry and academia, with the coaxing of the Administration and policymakers, could take to encourage responsible science and guard a..." (Unverified source, 2022)
- Dame Wendy Hall, computer science professor, UK, votes For and says: "The thought of open source AGI being released before we have worked out how to regulate these very powerful AI systems is really very scary. In the wrong hands technology like this could do a great deal of harm. It is so irresponsible for a company t..." (Unverified source, 2024)
- Paul Scharre, CNAS executive and weapons expert, votes For and says: "Once the model weights are released, other less responsible actors can easily modify the model to strip away its safeguards." (Unverified source, 2023)
Abstain (4)
- U.S. Department of Commerce, Bureau of Industry and Security (BIS), U.S. export controls agency, abstains and says: "As the Office of the Director of National Intelligence has assessed, AI models have the potential to enable advanced military and intelligence applications; lower the barriers to entry for nonexperts to develop weapons of mass destruction (WMD); supp..." (Unverified source, 2025)
- Mark Zuckerberg, Meta CEO and Facebook co-founder, abstains and says: "We'll need to be rigorous about mitigating these risks and careful about what we choose to open source." (Unverified source, 2025)
- United Nations Office for Disarmament Affairs, United Nations disarmament office, abstains and says: "Finally, to avoid that open-source models are accessed and retrofitted for malicious purposes a potential solution is to create self-destruct codes" (Unverified source)
- Electronic Frontier Foundation, nonprofit defending privacy and free speech online, abstains and says: "Does this mean we should cease performing this sort of research and stop investigating automated cybersecurity systems? Absolutely not. EFF is a pro-innovation organization, and we certainly wouldn't ask DARPA or any other research group to stop inno..." (Unverified source, 2016)
Against (14)
- Yann LeCun, computer scientist and AI researcher, votes Against and says: "I see the danger of this concentration of power through proprietary AI systems as a much bigger danger than everything else. What works against this is people who think that for reasons of security, we should keep AI systems under lock and key becaus..." (Verified source, 2024)
- Andrew Ng, Stanford CS faculty, formerly of Baidu, founder of Coursera and Google Brain, votes Against and says: "To try to ban Americans from using open source, open weight, Chinese models or other open models would be handing a monopoly to certain American companies on a platter." (Verified source, 2025)
- Matt O'Brien, Associated Press technology journalist, votes Against and says: "The NTIA's report says 'current evidence is not sufficient' to warrant restrictions on AI models with 'widely available weights.' But it also says U.S. officials must continue to monitor potential dangers and 'take steps to ensure that the government..." (Unverified source, 2024)
- Tal Feldman, Lawfare contributor, votes Against and says: "Since computational infrastructure is largely open-access, decentralized, and global, regulatory chokepoints are limited. Export controls may delay access to high-performance computing, but they are unlikely to prevent the use of open-source models f..." (Unverified source, 2025)
- Marc Andreessen, general partner at a16z and co-founder of Netscape, votes Against and says: "there has to be full open source here. [...] if you're worried about AI-generated pathogens, [...] Let's do a Manhattan Project for biological defense." (Unverified source, 2023)
- Arthur Mensch, Mistral AI co-founder and CEO, votes Against and says: "But today, going, banning open source, preventing it from happening is really a way, well, to enforce regulatory capture, even though the actors that would benefit from it don't want it to happen. [...] If you actually ban small actors from doing thing..." (Unverified source, 2023)
- Stella Biderman, EleutherAI executive director and scientist, votes Against and says: "Encouraging companies to keep the details of their models secret is likely to lead to 'serious downstream consequences for transparency, public awareness, and science.' [...] Anyone in the world can read them and develop their own models, she says." (Unverified source, 2024)
- Center for Democracy & Technology, digital rights nonprofit organization, votes Against and says: "there is not yet enough evidence of novel risks from open foundation models to warrant new restrictions on their distribution." (Unverified source, 2024)
- Nick Clegg, Meta president of global affairs, votes Against and says: "My view is that the hype has somewhat run ahead of the technology. I think a lot of the existential warnings relate to models that don't currently exist, so-called super-intelligent, super-powerful AI models – the vision where AI develops an autonomy..." (Unverified source, 2023)
- Meta, social media and AI company, votes Against and says: "Open source AI has the potential to unlock unprecedented technological progress. It levels the playing field, giving people access to powerful and often expensive technology for free, which enables competition and innovation that produce tools that b..." (Unverified source, 2025)
- Mozilla, open-source web nonprofit, votes Against and says: "In their report, NTIA rightly notes that 'current evidence is not sufficient to definitively determine either that restrictions on such open-weight models are warranted or that restrictions will never be appropriate in the future.' Instead of recomme..." (Unverified source, 2024)
- Clément Delangue, Hugging Face co-founder and CEO, votes Against.
- Alex Engler, Brookings AI policy scholar, votes Against.
- National Telecommunications and Information Administration (NTIA), US Commerce Department telecom agency, votes Against and says: "refrain from restricting the availability of open model weights for currently available systems." (Unverified source, 2024)
Related proposals:
- Ban predictive policing
- Mandate disclosure of AI-generated political advertising
- Ban autonomous lethal weapons
- Restrict the CERN for AI initiative to EU member states
- Repeal the EU AI Act