Should we ban future open-source AI models that can be used to create weapons of mass destruction?
ai-safety
ai-deployment
ai-risk
public-interest-ai
ai-policy
ai-regulation
ai-ethics
existential-risk
ai
ai-governance
Results (42):
Quotes (33)
-
Scott Alexander, author and psychiatrist, strongly agrees and says: Someday AIs really will be able to make nukes or pull off $500 million hacks. At that point, companies will have to certify that their model has been trained not to do this, and that it will stay trained. But if it were open-source, then anyone could easily untrain it. So after models become capable of making nukes or super-Ebola, companies won’t be able to open-source them anymore without some as-yet-undiscovered technology to prevent end users from using these capabilities. Sounds . . . good? I don’t know if even the most committed anti-AI-safetyist wants a provably-super-dangerous model out in the wild. Still, what happens after that? No cutting-edge open-source AIs ever again? I don’t know. In whatever future year foundation models can make nukes and hack the power grid, maybe the CIA will have better AIs capable of preventing nuclear terrorism, and the power company will have better AIs capable of protecting their grid. The law seems to leave open the possibility that in this situation, the AIs wouldn’t technically be capable of doing these things, and could be open-sourced. (2024) source (Verified)
-
Stuart J. Russell, AI expert and professor, agrees and says: And a second point is about liability. And it's not completely clear where exactly the liability should lie. But to continue the nuclear analogy, if a corporation decided they wanted to sell a lot of enriched uranium in supermarkets, and someone decided to take that enriched uranium and buy several pounds of it and make a bomb, we say that some liability should reside with the company that decided to sell the enriched uranium. They could put advice on it saying, "Do not use more than," you know, "three ounces of this in one place," or something. But no one's going to say that that absolves them from liability. So, I think those two are really important. And the open source community has got to start thinking about whether they should be liable for putting stuff out there that is ripe for misuse. (2023) source (Verified)
-
Yann LeCun, computer scientist and AI researcher, strongly disagrees and says: I see the danger of this concentration of power through proprietary AI systems as a much bigger danger than everything else. What works against this is people who think that for reasons of security, we should keep AI systems under lock and key because it’s too dangerous to put it in the hands of everybody. That would lead to a very bad future in which all of our information diet is controlled by a small number of companies [via] proprietary systems. (2024) source (Verified)
-
Andrew Ng, Stanford CS faculty, founder of Coursera and Google Brain, formerly of Baidu, strongly disagrees and says: To try to ban Americans from using open source, open weight, Chinese models or other open models would be handing a monopoly to certain American companies on a platter. (2025) source (Verified)
-
Dario Amodei, Anthropic co-founder and CEO, formerly at OpenAI, agrees and says: From a business perspective, the difference between open and closed is a little bit overblown. From a security perspective, the difference between open and closed models is, for some intents and purposes, overblown. The most important thing is how powerful a model is. If a model is very powerful, then I don’t want it given to the Chinese by being stolen. I also don’t want it given to the Chinese by being released. If a model is not that powerful, then it’s not concerning either way. (2025) source (Verified)
-
Yoshua Bengio, AI pioneer and Turing Award winner, strongly agrees and says: I think it's really important because if we put something out there that is open source and can be dangerous – which is a tiny minority of all the code that is open source – essentially we're opening all the doors to bad actors [...] As these systems become more capable, bad actors don't need to have very strong expertise, whether it's in bioweapons or cyber security, in order to take advantage of systems like this. (2023) source (Verified)
-
Steven Adler, AI safety researcher and Lawfare writer, agrees and says: There is no permanent way to apply safety limitations to prevent users from obtaining help from the model with regard to bioweapons-related tasks. (2025) source (Verified)
-
Kevin M. Esvelt, MIT biosecurity and gene drive researcher, agrees and says: Any and all safeguards can and will be removed within days of a large model being open-sourced. source (Verified)
-
Ilya Sutskever, AI researcher, co-founder and former chief scientist at OpenAI, strongly agrees and says: I fully expect that in a few years it’s going to be completely obvious to everyone that open-sourcing AI is just not wise. (2023) source (Verified)
-
Geoffrey Hinton, godfather of deep learning, strongly agrees and says: Let's open source nuclear weapons too to make them safer. The good guys (us) will always have bigger ones than the bad guys (them) so it should all be OK. (2023) source (Verified)
-
Gary Marcus, professor of psychology and neural science, agrees and says: Some of the most recent models maybe can help people make biological weapons. (2025) source (Unverified)
-
United Nations Office for Disarmament Affairs, United Nations disarmament office, disagrees and says: Finally, to avoid that open-source models are accessed and retrofitted for malicious purposes, a potential solution is to create self-destruct codes. source (Unverified)
-
Center for Democracy & Technology, digital rights nonprofit organization, strongly disagrees and says: there is not yet enough evidence of novel risks from open foundation models to warrant new restrictions on their distribution. (2024) source (Unverified)
-
Alan Z. Rozenshtein, law professor and legal scholar, agrees and says: SB 1047 would, for example, forbid the release of a frontier model that could be easily induced to output detailed instructions for making a bioweapon. (2024) source (Unverified)
-
Electronic Frontier Foundation, a nonprofit that fights for privacy and free speech online, strongly disagrees and says: Does this mean we should cease performing this sort of research and stop investigating automated cybersecurity systems? Absolutely not. EFF is a pro-innovation organization, and we certainly wouldn’t ask DARPA or any other research group to stop innovating. Nor is it even really clear how you could stop such research if you wanted to; plenty of actors could do it if they wanted. Instead, we think the right thing, at least for now, is for researchers to proceed cautiously and be conscious of the risks. When thematically similar concerns have been raised in other fields, researchers spent some time reviewing their safety precautions and risk assessments, then resumed their work. That's the right approach for automated vulnerability detection, too. At the moment, autonomous computer security research is still the purview of a small community of extremely experienced and intelligent researchers. Until our civilization's cybersecurity systems aren't quite so fragile, we believe it is the moral and ethical responsibility of our community to think through the risks that come with the technology they develop, as well as how to mitigate those risks, before it falls into the wrong hands. (2016) source (Unverified)
-
Nick Clegg, Meta president of global affairs, disagrees and says: My view is that the hype has somewhat run ahead of the technology. I think a lot of the existential warnings relate to models that don’t currently exist, so-called super-intelligent, super-powerful AI models – the vision where AI develops an autonomy and agency on its own, where it can think for itself and reproduce itself. The models that we’re open-sourcing are far, far, far short of that. In fact, in many ways they’re quite stupid. (2023) source (Unverified)
-
Eliezer Yudkowsky, AI researcher and writer, strongly agrees and says: But open sourcing, you know, that's just sheer catastrophe. The whole notion of open sourcing, this was always the wrong approach, the wrong ideal. There are places in the world where open source is a noble ideal and building stuff you don't understand that is difficult to control, that where if you could align it, it would take time. You'd have to spend a bunch of time doing it. That is not a place for open source, because then you just have powerful things that just go straight out the gate without anybody having had the time to have them not kill everyone. (2023) source (Unverified)
-
Leopold Aschenbrenner, AI investor and policy analyst, strongly agrees and says: On the current course, the leading Chinese AGI labs won’t be in Beijing or Shanghai—they’ll be in San Francisco and London. In a few years, it will be clear that the AGI secrets are the United States’ most important national defense secrets—deserving treatment on par with B-21 bomber or Columbia-class submarine blueprints, let alone the proverbial “nuclear secrets”—but today, we are treating them the way we would random SaaS software. At this rate, we’re basically just handing superintelligence to the CCP. And this won’t just matter years in the future. Sure, who cares if GPT-4 weights are stolen—what really matters in terms of weight security is that we can secure the AGI weights down the line, so we have a few years, you might say. (Though if we’re building AGI in 2027, we really have to get moving!) But the AI labs are developing the algorithmic secrets—the key technical breakthroughs, the blueprints so to speak—for the AGI right now (in particular, the RL/self-play/synthetic data/etc “next paradigm” after LLMs to get past the data wall). AGI-level security for algorithmic secrets is necessary years before AGI-level security for weights. (2024) source (Unverified)
-
Vinod Khosla, venture capitalist and Khosla Ventures founder, agrees and says: And @pmarca would you open source the manhattan project? This one is more serious for national security. We are in a tech economic war with China and AI that is a must win. This is exactly what patriotism is about, not slogans. (2024) source (Unverified)
-
Richard Blumenthal, U.S. Senator from Connecticut, agrees and says: On the issue of open source, you each raised the security and safety risk of AI models that are open source or are leaked to the public, the danger. There are some advantages to having open source, as well. It's a complicated issue. I appreciate that open source can be an extraordinary resource. But even in the short time that we've had some AI tools and they've been available, they have been abused. For example, I'm aware that a group of people took Stable Diffusion and created a version for the express purpose of creating nonconsensual sexual material. So, on the one hand, access to AI data is a good thing for research, but on the other hand, the same open models can create risks, just because they are open. And I think the comparison is apt. You know, I've been reading the most recent biography of Robert Oppenheimer, and every time I think about AI, the specter of quantum physics, nuclear bombs, but also atomic energy, both peaceful and military purposes, is inescapable. (2023) source (Unverified)
-
Lawrence Lessig, Harvard Law professor, strongly agrees and says: You basically have a bomb that you're making available for free, and you don’t have any way to defuse it necessarily. It’s just an obviously fallacious argument. We didn’t do that with nuclear weapons: we didn’t say ‘the way to protect the world from nuclear annihilation is to give every country nuclear bombs.’ (2024) source (Unverified)
-
Tal Feldman, Lawfare contributor, strongly disagrees and says: Since computational infrastructure is largely open-access, decentralized, and global, regulatory chokepoints are limited. Export controls may delay access to high-performance computing, but they are unlikely to prevent the use of open-source models fine-tuned on public data. Access restrictions on commercial platforms can be circumvented by running models locally. Even if next-generation safety tools could detect a dangerous protein sequence, models can be modified or fine-tuned in private—especially by well-resourced actors. To be clear, model output alone is not enough. Developing a functional bioweapon still requires access to DNA synthesis services, laboratory infrastructure, and methods of delivery. But those barriers are far lower than they once were—and continue to fall. The threshold for misuse is no longer high. In this environment, prevention cannot be the United States’s only strategy. Open-source PLMs are already circulating globally, making it increasingly easy for malicious actors to create pathogens. What matters is how fast U.S. defense systems can respond—and whether the nation has the infrastructure in place to do so. As with cybersecurity, resilience—not containment—must become the cornerstone of national biosecurity policy. PLMs are both the cause of and solution to this risk. The same models that could be used to design pathogens are already helping scientists discover new drugs. Their ability to generate novel, functional proteins is precisely what makes them indispensable for rapid response. Shutting them down wouldn’t just slow biomedical progress—it would weaken U.S. defenses. If the U.S. can’t stop the technology, it must outrun its weaponization. In the age of generative biology, resilience is the only viable defense. (2025) source (Unverified)
-
Jason G. Matheny, RAND president and CEO, agrees and says: Artificial intelligence is advancing so rapidly that many who have been part of its development are now among the most vocal about the need to regulate it. While AI will bring many benefits, it is also potentially dangerous; it could be used to create cyber or bio weapons or to launch massive disinformation attacks. And if an AI is stolen or leaked even once, it could be impossible to prevent it from spreading throughout the world. These concerns are not hypothetical. Such a leak has, in fact, already occurred. In March, an AI model developed by Meta called LLaMA appeared online. LLaMA was not intended to be publicly accessible, but the model was shared with AI researchers, who then requested full access to further their own projects. At least two of them abused Meta’s trust and released the model online, and Meta has been unable to remove LLaMA from the internet. The model can still be accessed by anyone. (2023) source (Unverified)
-
Anna G. Eshoo, U.S. Representative from California, agrees and says: Content controls, a free content filter, monitoring of applications, and a code of conduct are several other steps industry and academia, with the coaxing of the Administration and policymakers, could take to encourage responsible science and guard against the misuse of AI-focused drug discovery. Finally, requiring the use of an API, with code and data available upon request, would greatly enhance security and control over how published models are utilized without adding much hindrance to accessibility. APIs can: (1) block queries that have potentially dual-use applications; (2) screen users, such as requiring an institutional affiliation; and (3) flag suspicious activity. I urge you to explore this and any other viable methods within your authorities to reduce the likelihood of open-source AI models being misused for bioweapons. AI has important applications in biotechnology, healthcare, and pharmaceuticals; however, we should remain vigilant against the potential harm dual-use applications represent for the national security, economic security, and public health of the United States, in the same way we would with physical resources such as molecules or biologics. To mitigate these risks, I urge the Administration to include the governance of dual-use, open-source AI models in its upcoming discussions at the BWC Review Conference and investigate methods of governance such as mandating the use of APIs. (2022) source (Unverified)
-
Dame Wendy Hall, computer science professor, UK, agrees and says: The thought of open source AGI being released before we have worked out how to regulate these very powerful AI systems is really very scary. In the wrong hands technology like this could do a great deal of harm. It is so irresponsible for a company to suggest it. (2024) source (Unverified)
-
OpenAI, AI research organization, strongly agrees and says: Our Preparedness Framework defines capability thresholds that could lead to severe risk, and risk-specific guidelines to sufficiently minimize the risk of severe harm. For biology, a model that meets a High capability threshold could provide meaningful assistance to novice actors with basic relevant training, enabling them to create biological or chemical threats. If a model reaches a High capability threshold, we won’t release it until we’re confident the risks have been sufficiently mitigated. Our Safety Advisory Group, a cross-functional team of internal leaders, partners with internal safety teams to administer the framework. For high-risk launches, they assess any remaining risks, evaluate the strength of our safeguards, and advise OpenAI leadership on whether it’s safe to move forward. Our Board’s Safety and Security Committee provides oversight of these decisions. This can mean delaying a release, limiting who can use the model, or turning off certain features, even if it disappoints users. For a High biology capability model, this would mean putting in place sufficient safeguards that would bar users from gaining expert capabilities given their potential for severe harm. (2025) source (Unverified)
-
Mozilla, open-source web nonprofit, strongly disagrees and says: In their report, NTIA rightly notes that “current evidence is not sufficient to definitively determine either that restrictions on such open-weight models are warranted or that restrictions will never be appropriate in the future.” Instead of recommending restrictions, NTIA suggests that the government “actively monitor…risks that could arise from dual-use foundation models with widely available model weights and take steps to ensure that the government is prepared to act if heightened risks emerge.” NTIA’s recommendations for collecting and evaluating relevant evidence include support for external research, increasing transparency, and bolstering federal government expert capabilities. We welcome this approach, and in our comments we called for governments to play a role in “promoting and funding research”; we agree that it will help us all better understand and navigate the AI landscape. (2024) source (Unverified)
-
Meta, social media and AI company, strongly disagrees and says: Open source AI has the potential to unlock unprecedented technological progress. It levels the playing field, giving people access to powerful and often expensive technology for free, which enables competition and innovation that produce tools that benefit individuals, society and the economy. Open sourcing AI is not optional; it is essential for cementing America’s position as a leader in technological innovation, economic growth and national security. Our Frontier AI Framework focuses on the most critical risks in the areas of cybersecurity threats and risks from chemical and biological weapons. By prioritizing these areas, we can work to protect national security while promoting innovation. Our framework outlines a number of processes we follow to anticipate and mitigate risk when developing frontier AI systems, for example: [...] (2025) source (Unverified)
-
Alex Engler, Brookings AI policy scholar, disagrees.
-
Clément Delangue, Hugging Face cofounder and CEO, strongly disagrees.
-
Paul Scharre, CNAS executive and weapons expert, agrees and says: Once the model weights are released, other less responsible actors can easily modify the model to strip away its safeguards. (2023) source (Unverified)
-
National Telecommunications and Information Administration (NTIA), US Commerce Department telecom agency, disagrees and says: refrain from restricting the availability of open model weights for currently available systems. (2024) source (Unverified)