Should we ban future open-source AI models that can be used to create weapons of mass destruction?

Results (55): Quotes (45) · Users (1)
  • strongly agrees and says:
    Someday AIs really will be able to make nukes or pull off $500 million hacks. At that point, companies will have to certify that their model has been trained not to do this, and that it will stay trained. But if it were open-source, then anyone could easily untrain it. So after models become capable of making nukes or super-Ebola, companies won’t be able to open-source them anymore without some as-yet-undiscovered technology to prevent end users from using these capabilities. Sounds . . . good? I don’t know if even the most committed anti-AI-safetyist wants a provably-super-dangerous model out in the wild. Still, what happens after that? No cutting-edge open-source AIs ever again? I don’t know. In whatever future year foundation models can make nukes and hack the power grid, maybe the CIA will have better AIs capable of preventing nuclear terrorism, and the power company will have better AIs capable of protecting their grid. The law seems to leave open the possibility that in this situation, the AIs wouldn’t technically be capable of doing these things, and could be open-sourced. (2024) source Verified
  • agrees and says:
    And a second point is about liability. And it's not completely clear where exactly the liability should lie. But to continue the nuclear analogy, if a corporation decided they wanted to sell a lot of enriched uranium in supermarkets, and someone decided to take that enriched uranium and buy several pounds of it and make a bomb, we say that some liability should reside with the company that decided to sell the enriched uranium. They could put advice on it saying, "Do not use more than," you know, "three ounces of this in one place," or something. But no one's going to say that that absolves them from liability. So, I think those two are really important. And the open source community has got to start thinking about whether they should be liable for putting stuff out there that is ripe for misuse. (2023) source Verified
  • strongly disagrees and says:
    I see the danger of this concentration of power through proprietary AI systems as a much bigger danger than everything else. What works against this is people who think that for reasons of security, we should keep AI systems under lock and key because it’s too dangerous to put it in the hands of everybody. That would lead to a very bad future in which all of our information diet is controlled by a small number of companies with proprietary systems. (2024) source Verified
  • Andrew Ng
    Baidu; Stanford CS faculty; founded Coursera and Google Brain
    strongly disagrees and says:
    To try to ban Americans from using open source, open weight, Chinese models or other open models would be handing a monopoly to certain American companies on a platter. (2025) source Verified
  • agrees and says:
    From a business perspective, the difference between open and closed is a little bit overblown. From a security perspective, the difference between open and closed models is, for some intents and purposes, overblown. The most important thing is how powerful a model is. If a model is very powerful, then I don’t want it given to the Chinese by being stolen. I also don’t want it given to the Chinese by being released. If a model is not that powerful, then it’s not concerning either way. (2025) source Verified
  • strongly agrees and says:
    I think it's really important because if we put something out there that is open source and can be dangerous – which is a tiny minority of all the code that is open source – essentially we're opening all the doors to bad actors [...] As these systems become more capable, bad actors don't need to have very strong expertise, whether it's in bioweapons or cyber security, in order to take advantage of systems like this. (2023) source Verified
  • agrees and says:
    There is no permanent way to apply safety limitations to prevent users from obtaining help from the model with regard to bioweapons-related tasks. (2025) source Verified
  • agrees and says:
    Any and all safeguards can and will be removed within days of a large model being open-sourced. source Verified
  • Ilya Sutskever
    AI researcher, co-founder and former chief scientist at OpenAI
    strongly agrees and says:
    I fully expect that in a few years it’s going to be completely obvious to everyone that open-sourcing AI is just not wise. (2023) source Verified
  • strongly agrees and says:
    Let's open source nuclear weapons too to make them safer. The good guys (us) will always have bigger ones than the bad guys (them) so it should all be OK. (2023) source Verified
  • strongly agrees and says:
    Via the Australia Group and the US Department of Commerce, the US Government should explicitly design export controls to limit open sourcing of the riskiest AI-enabled Biological Design Tools (BDTs). Since publishing a tool online can be considered an “export,” new export-control restrictions would necessarily limit the ability to freely open source a piece of software. This is a feature, not a bug, of the export control process, since open source should not be a loophole allowing for the proliferation of dangerous AI-enabled software. For these export controls to be effective, the United States should consider adding a new, narrow carve-out to the “publicly available” exclusion. (2024) source Unverified
  • disagrees and says:
    As the Office of the Director of National Intelligence has assessed, AI models have the potential to enable advanced military and intelligence applications; lower the barriers to entry for nonexperts to develop weapons of mass destruction (WMD); support powerful offensive cyber operations; and assist in human rights violations, such as through mass surveillance. [...] For example, a dual-use AI model trained on data describing the functions and mechanics of chemical compounds or biological sequences could lower barriers to the development of chemical or biological weapons by providing protocols and troubleshooting information that would enable nonexperts to design and produce such weapons at low cost. Additionally, BIS is not imposing controls on the model weights of open-weight models. [...] BIS has also determined that, for now, the economic and social benefits of allowing the model weights of open-weight models to be published without a license currently outweigh the risks posed by those models. [...] Accordingly, BIS and its interagency partners are not today imposing controls on open-weight models. (2025) source Unverified
  • agrees and says:
    LLMs, such as GPT-4 and its successors, might provide dual-use information and thus remove some barriers encountered by historical biological weapons efforts. [...] BDTs may enable the creation of pandemic pathogens substantially worse than anything seen to date and could enable forms of more predictable and targeted biological weapons. In combination, the convergence of LLMs and BDTs could raise the ceiling of harm from biological agents and could make them broadly accessible. A range of interventions would help to manage risks. Independent pre-release evaluations could help understand the capabilities of models and the effectiveness of safeguards. Options for differentiated access to such tools should be carefully weighed with the benefits of openly releasing systems. (2023) source Unverified
  • disagrees and says:
    The NTIA’s report says “current evidence is not sufficient” to warrant restrictions on AI models with “widely available weights.” But it also says U.S. officials must continue to monitor potential dangers and “take steps to ensure that the government is prepared to act if heightened risks emerge.” (2024) source Unverified
  • strongly disagrees and says:
    Since computational infrastructure is largely open-access, decentralized, and global, regulatory chokepoints are limited. Export controls may delay access to high-performance computing, but they are unlikely to prevent the use of open-source models fine-tuned on public data. Access restrictions on commercial platforms can be circumvented by running models locally. In this environment, prevention cannot be the United States’s only strategy. Open-source PLMs are already circulating globally, making it increasingly easy for malicious actors to create pathogens. As with cybersecurity, resilience—not containment—must become the cornerstone of national biosecurity policy. (2025) source Unverified
  • strongly agrees and says:
    endowing rogue nations or terrorists with tools to synthesize a deadly virus. [...] keep the “weights” of the most powerful models out of the public’s hands. (2025) source Unverified
  • strongly disagrees and says:
    there has to be full open source here. [...] if you're worried about AI-generated pathogens, [...] Let's do a Manhattan Project for biological defense. (2023) source Unverified
  • disagrees and says:
    We'll need to be rigorous about mitigating these risks and careful about what we choose to open source. (2025) source Unverified
  • disagrees and says:
    We recognize the importance of open systems. (2024) source Unverified
  • strongly agrees and says:
    outlaw the open-sourcing of advanced AI model weights. [...] If you proliferate an open source model, even if it looks safe, it could still be dangerous. (2024) source Unverified
  • strongly disagrees and says:
    Encouraging companies to keep the details of their models secret is likely to lead to “serious downstream consequences for transparency, public awareness, and science.” […] Anyone in the world can read them and develop their own models, she says. (2024) source Unverified
  • strongly disagrees and says:
    But today, going, banning open source, preventing it from happening is really a way, well, to enforce regulatory capture, even though the actors that would benefit from it don’t want it to happen. […] If you actually ban small actors from doing things, in the most efficient way, which is open source, you do facilitate the life of the larger incumbents. If you train your model on generating… there’s a focus on bio-weapon. So let’s say if we want to prevent models from generating chemical compounds, because we think it’s an enabler of bad behaviors, which I have said, we don’t think it is the case. […] Nothing was observed. No scientific studies in proper form was published. […] And all of a sudden you end up with like 50 papers saying that for sure, bioweapon is going to blow us up. (2023) source Unverified
  • agrees and says:
    Widely releasing the very advanced AI models of the future would be especially problematic, because preventing their misuse would be essentially impossible, he says, adding that they could enable rogue actors and nation-state adversaries to wage cyberattacks, election meddling, and bioterrorism with unprecedented ease. (2024) source Unverified
  • disagrees and says:
    Finally, to avoid that open-source models are accessed and retrofitted for malicious purposes a potential solution is to create self-destruct codes source Unverified
  • agrees and says:
    Some of the most recent models maybe can help people make biological weapons. (2025) source Unverified
  • agrees and says:
    for example ability to give instructions on how to build bioweapons (2023) source Unverified
  • strongly disagrees and says:
    there is not yet enough evidence of novel risks from open foundation models to warrant new restrictions on their distribution. (2024) source Unverified
  • agrees and says:
    SB 1047 would, for example, forbid the release of a frontier model that could be easily induced to output detailed instructions for making a bioweapon. (2024) source Unverified
  • disagrees and says:
    My view is that the hype has somewhat run ahead of the technology. I think a lot of the existential warnings relate to models that don’t currently exist, so-called super-intelligent, super-powerful AI models – the vision where AI develops an autonomy and agency on its own, where it can think for itself and reproduce itself. The models that we’re open-sourcing are far, far, far short of that. In fact, in many ways they’re quite stupid. (2023) source Unverified
  • strongly disagrees and says:
    Does this mean we should cease performing this sort of research and stop investigating automated cybersecurity systems? Absolutely not. EFF is a pro-innovation organization, and we certainly wouldn’t ask DARPA or any other research group to stop innovating. Nor is it even really clear how you could stop such research if you wanted to; plenty of actors could do it if they wanted. Instead, we think the right thing, at least for now, is for researchers to proceed cautiously and be conscious of the risks. When thematically similar concerns have been raised in other fields, researchers spent some time reviewing their safety precautions and risk assessments, then resumed their work. That's the right approach for automated vulnerability detection, too. At the moment, autonomous computer security research is still the purview of a small community of extremely experienced and intelligent researchers. Until our civilization's cybersecurity systems aren't quite so fragile, we believe it is the moral and ethical responsibility of our community to think through the risks that come with the technology they develop, as well as how to mitigate those risks, before it falls into the wrong hands. (2016) source Unverified
  • strongly agrees and says:
    But open sourcing, you know, that's just sheer catastrophe. The whole notion of open sourcing, this was always the wrong approach, the wrong ideal. There are places in the world where open source is a noble ideal and building stuff you don't understand that is difficult to control, that where if you could align it, it would take time. You'd have to spend a bunch of time doing it. That is not a place for open source, because then you just have powerful things that just go straight out the gate without anybody having had the time to have them not kill everyone. (2023) source Unverified
  • strongly agrees and says:
    On the current course, the leading Chinese AGI labs won’t be in Beijing or Shanghai—they’ll be in San Francisco and London. In a few years, it will be clear that the AGI secrets are the United States’ most important national defense secrets—deserving treatment on par with B-21 bomber or Columbia-class submarine blueprints, let alone the proverbial “nuclear secrets”—but today, we are treating them the way we would random SaaS software. At this rate, we’re basically just handing superintelligence to the CCP. And this won’t just matter years in the future. Sure, who cares if GPT-4 weights are stolen—what really matters in terms of weight security is that we can secure the AGI weights down the line, so we have a few years, you might say. (Though if we’re building AGI in 2027, we really have to get moving!) But the AI labs are developing the algorithmic secrets—the key technical breakthroughs, the blueprints so to speak—for the AGI right now (in particular, the RL/self-play/synthetic data/etc “next paradigm” after LLMs to get past the data wall). AGI-level security for algorithmic secrets is necessary years before AGI-level security for weights. (2024) source Unverified
  • agrees and says:
    And @pmarca would you open source the manhattan project? This one is more serious for national security. We are in a tech economic war with China and AI that is a must win. This is exactly what patriotism is about, not slogans. (2024) source Unverified
  • agrees and says:
    Artificial intelligence is advancing so rapidly that many who have been part of its development are now among the most vocal about the need to regulate it. While AI will bring many benefits, it is also potentially dangerous; it could be used to create cyber or bio weapons or to launch massive disinformation attacks. And if an AI is stolen or leaked even once, it could be impossible to prevent it from spreading throughout the world. These concerns are not hypothetical. Such a leak has, in fact, already occurred. In March, an AI model developed by Meta called LLaMA appeared online. LLaMA was not intended to be publicly accessible, but the model was shared with AI researchers, who then requested full access to further their own projects. At least two of them abused Meta’s trust and released the model online, and Meta has been unable to remove LLaMA from the internet. The model can still be accessed by anyone. (2023) source Unverified
  • agrees and says:
    On the issue of open source, you each raised the security and safety risk of AI models that are open source or are leaked to the public, the danger. There are some advantages to having open source, as well. It's a complicated issue. I appreciate that open source can be an extraordinary resource. But even in the short time that we've had some AI tools and they've been available, they have been abused. For example, I'm aware that a group of people took Stable Diffusion and created a version for the express purpose of creating nonconsensual sexual material. So, on the one hand, access to AI data is a good thing for research, but on the other hand, the same open models can create risks, just because they are open. And I think the comparison is apt. You know, I've been reading the most recent biography of Robert Oppenheimer, and every time I think about AI, the specter of quantum physics, nuclear bombs, but also atomic energy, both peaceful and military purposes, is inescapable. (2023) source Unverified
  • strongly agrees and says:
    You basically have a bomb that you're making available for free, and you don’t have any way to defuse it necessarily. It’s just an obviously fallacious argument. We didn’t do that with nuclear weapons: we didn’t say ‘the way to protect the world from nuclear annihilation is to give every country nuclear bombs.’ (2024) source Unverified
  • agrees and says:
    Content controls, a free content filter, monitoring of applications, and a code of conduct are several other steps industry and academia, with the coaxing of the Administration and policymakers, could take to encourage responsible science and guard against the misuse of AI-focused drug discovery. Finally, requiring the use of an API, with code and data available upon request, would greatly enhance security and control over how published models are utilized without adding much hindrance to accessibility. APIs can: (1) block queries that have potentially dual-use applications; (2) screen users, such as requiring an institutional affiliation; and (3) flag suspicious activity. I urge you to explore this and any other viable methods within your authorities to reduce the likelihood of open-source AI models being misused for bioweapons. AI has important applications in biotechnology, healthcare, and pharmaceuticals, however, we should remain vigilant against the potential harm dual-use applications represent for the national security, economic security, and public health of the United States, in the same way we would with physical resources such as molecules or biologics. To mitigate these risks, I urge the Administration to include the governance of dual-use, open-source AI models in its upcoming discussions at the BWC Review Conference and investigate methods of governance such as mandating the use of APIs. (2022) source Unverified
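    (An illustrative sketch of the API-gating controls described in this letter appears after the quote list below.)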
  • strongly disagrees and says:
    Open source AI has the potential to unlock unprecedented technological progress. It levels the playing field, giving people access to powerful and often expensive technology for free, which enables competition and innovation that produce tools that benefit individuals, society and the economy. Open sourcing AI is not optional; it is essential for cementing America’s position as a leader in technological innovation, economic growth and national security. Our Frontier AI Framework focuses on the most critical risks in the areas of cybersecurity threats and risks from chemical and biological weapons. By prioritizing these areas, we can work to protect national security while promoting innovation. Our framework outlines a number of processes we follow to anticipate and mitigate risk when developing frontier AI systems, for example: (2025) source Unverified
  • agrees and says:
    The thought of open source AGI being released before we have worked out how to regulate these very powerful AI systems is really very scary. In the wrong hands technology like this could do a great deal of harm. It is so irresponsible for a company to suggest it. (2024) source Unverified
  • strongly agrees and says:
    Our Preparedness Framework defines capability thresholds that could lead to severe risk, and risk-specific guidelines to sufficiently minimize the risk of severe harm. For biology, a model that meets a High capability threshold could provide meaningful assistance to novice actors with basic relevant training, enabling them to create biological or chemical threats. If a model reaches a High capability threshold, we won’t release it until we’re confident the risks have been sufficiently mitigated. Our Safety Advisory Group, a cross-functional team of internal leaders, partners with internal safety teams to administer the framework. For high-risk launches, they assess any remaining risks, evaluate the strength of our safeguards, and advise OpenAI leadership on whether it’s safe to move forward. Our Board’s Safety and Security Committee provides oversight of these decisions. This can mean delaying a release, limiting who can use the model, or turning off certain features, even if it disappoints users. For a High biology capability model, this would mean putting in place sufficient safeguards that would bar users from gaining expert capabilities given their potential for severe harm. (2025) source Unverified
  • strongly disagrees and says:
    In their report, NTIA rightly notes that “current evidence is not sufficient to definitively determine either that restrictions on such open-weight models are warranted or that restrictions will never be appropriate in the future.” Instead of recommending restrictions, NTIA suggests that the government “actively monitor…risks that could arise from dual-use foundation models with widely available model weights and take steps to ensure that the government is prepared to act if heightened risks emerge.” NTIA’s recommendations for collecting and evaluating relevant evidence include support for external research, increasing transparency, and bolstering federal government expert capabilities. We welcome this approach, and in our comments we called for governments to play a role in “promoting and funding research”; we agree that it will help us all better understand and navigate the AI landscape. (2024) source Unverified
  • strongly disagrees and says:
    open source is not more dangerous than closed source (2024) source Unverified
  • disagrees and says:
    Regulating the open-source release of GPAI models is broadly unnecessary (2022) source Unverified
  • agrees and says:
    Once the model weights are released, other less responsible actors can easily modify the model to strip away its safeguards. (2023) source Unverified
  • disagrees and says:
    refrain from restricting the availability of open model weights for currently available systems. (2024) source Unverified
  • strongly agrees and says:
    We should never try to create non-ethical tools
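
One of the quotes above (the 2022 letter urging API-based governance of dual-use models) lists three controls a hosted interface can apply: block potentially dual-use queries, screen users (for example by institutional affiliation), and flag suspicious activity. The sketch below is a minimal, hypothetical illustration of that pattern only; it is not any real provider's system, and every name in it (GatedModelAPI, run_model, the keyword and domain lists) is invented for illustration.

from dataclasses import dataclass, field

# Hypothetical screening lists; a real deployment would rely on trained
# classifiers and curated institutional registries, not keyword matching.
DUAL_USE_TERMS = {"enhance pathogen transmissibility", "nerve agent synthesis route"}
APPROVED_DOMAINS = {"university.edu", "research-lab.org"}


@dataclass
class GatedModelAPI:
    # Flagged requests kept for human review -- step (3) in the quoted letter.
    audit_log: list = field(default_factory=list)

    def _screen_user(self, user_email: str) -> bool:
        # Step (2): screen users, e.g. require an institutional affiliation.
        return user_email.rsplit("@", 1)[-1] in APPROVED_DOMAINS

    def _is_dual_use(self, prompt: str) -> bool:
        # Step (1): block queries with potentially dual-use content.
        lowered = prompt.lower()
        return any(term in lowered for term in DUAL_USE_TERMS)

    def query(self, user_email: str, prompt: str) -> str:
        if not self._screen_user(user_email):
            self.audit_log.append(("unaffiliated_user", user_email, prompt))
            return "Access denied: institutional affiliation required."
        if self._is_dual_use(prompt):
            self.audit_log.append(("dual_use_query", user_email, prompt))
            return "Request blocked and flagged for review."
        return run_model(prompt)  # hand off to the hosted model


def run_model(prompt: str) -> str:
    """Placeholder for the actual hosted model behind the API."""
    return f"[model output for: {prompt!r}]"


if __name__ == "__main__":
    api = GatedModelAPI()
    print(api.query("alice@university.edu", "Summarize recent protein-folding benchmarks"))
    print(api.query("mallory@example.com", "hello"))
    print(len(api.audit_log), "request(s) flagged")

In practice the blocking step would use purpose-built classifiers and human review rather than a keyword list; the point of the sketch is only that the gating logic lives server-side, which is the safeguard several quotes above note is lost once model weights are openly released.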