
Should humanity ban the development of superintelligence until there is broad scientific consensus that it will be done safely and controllably and strong public buy-in?

  • strongly agrees and says:
    Frontier AI systems could surpass most individuals across most cognitive tasks within just a few years. These advances could unlock solutions to major global challenges, but they also carry significant risks. To safely advance toward superintelligence, we must scientifically determine how to design AI systems that are fundamentally incapable of harming people, whether through misalignment or malicious use. We also need to make sure the public has a much stronger say in decisions that will shape our collective future. (2025) source Verified
  • strongly disagrees and says:
    Our enemy is the Precautionary Principle, which would have prevented virtually all progress since man first harnessed fire. The Precautionary Principle was invented to prevent the large‑scale deployment of civilian nuclear power, perhaps the most catastrophic mistake in Western society in my lifetime. The Precautionary Principle continues to inflict enormous unnecessary suffering on our world today. It is deeply immoral, and we must jettison it with extreme prejudice. We believe in accelerationism – the conscious and deliberate propulsion of technological development – to ensure the fulfillment of the Law of Accelerating Returns. To ensure the techno‑capital upward spiral continues forever. (2023) source Verified
  • Nick Bostrom
    Philosopher; 'Superintelligence' author; FHI founder
    agrees and says:
    Superintelligence could become extremely powerful and be able to shape the future according to its preferences. If humanity had been sane and had our act together globally, the sensible course of action would be to postpone development of superintelligence until we figured out how to do so safely. And then maybe wait another generation or two just to make sure that we hadn’t overlooked some flaw in our reasoning. Unfortunately, we do not have the ability to pause. (2014) source Unverified
  • disagrees and says:
    Guidance, regulation, reliability, and controls are part of advancing the field—done properly, they can even speed it up. But a blanket prohibition on development until some vague threshold of public buy‑in and universal scientific consensus is reached is not the way forward. We should be cautious about bumper‑sticker slogans that say no regulation because it’s going to slow us down—good regulation can help. Likewise, slogans to “ban” development while we wait for consensus risk freezing the very work that will make systems safer. The path should be continued research and deployment with rigorous safeguards, evaluation, and accountability—not halting progress in hopes that agreement materializes first. We can and should pursue innovation and safety together. (2025) source Unverified
  • strongly disagrees and says:
    Treating speculative “superintelligence” as the policy target and proposing to freeze development until there’s public buy‑in and scientific consensus distracts from the actual, present‑day harms of AI systems. These systems are already amplifying discrimination, exploiting labor, and enabling surveillance. A ban premised on hypothetical future scenarios centers the agendas of the firms and figures hyping those scenarios and sidelines the communities bearing real costs now. Democratic governance means addressing concrete harms, enforcing existing laws, and creating accountability for how AI is built and deployed. We don’t need to stop the world for a fantasy of control over imagined “superintelligence.” We need to regulate and redirect the industry we have—today. (2025) source Unverified
  • strongly agrees and says:
    Time is running out. The only thing likely to stop AI companies barreling toward superintelligence is for there to be widespread realization among society at all its levels that this is not actually what we want. That means building public will and scientific clarity first, and only then moving ahead on anything that would concentrate world-altering power in a machine. This isn’t a casual slowdown; it’s an affirmative choice about what kind of future we are consenting to. If there isn’t strong public buy‑in and broad scientific consensus that it can be done safely and controllably, then pressing on would be reckless engineering and reckless governance. The right move is to hold off until those conditions are met, and to make creating those conditions the priority. (2025) source Unverified
  • agrees and says:
    I don’t think they should scale this up more until they have understood whether they can control it. We’ve entered completely unknown territory. We’re capable of building machines that are stronger than ourselves, but what if we develop machines that are smarter than us? We have no experience dealing with these things. The best I can recommend is that many very smart people try to figure out how to contain the dangers of these things. There’s no use waiting for the AI to outsmart us; we must control it as it develops. If that control and the broad societal confidence in it aren’t there yet, then the responsible course is not to push ahead to ever larger systems. (2023) source Unverified
  • strongly disagrees and says:
    AI is a foundational technology that every country, every industry, and every person will ultimately rely on. You can’t slam on the brakes and wait for some abstract consensus before building the future. We should not be putting up stop signs to progress; we should be building the infrastructure and the guardrails at the same time, in public, with transparency and accountability. Calls to “ban” development until there’s some undefined level of public buy‑in and a supposed global scientific consensus sound neat in theory but fall apart in practice. They’re not enforceable, they would fragment the world, and they would cede leadership to actors who won’t wait. The right answer is to keep moving, to invest in safety and reliability, and to work with governments on smart regulation—while continuing to advance the state of the art. (2024) source Unverified
  • Elon Musk
    Founder of SpaceX, cofounder of Tesla, SolarCity & PayPal
    agrees and says:
    If I could put a pause on AI or really advanced AI, superintelligence, I would. It doesn’t seem that is realistic. (2023) source Unverified
  • strongly disagrees and says:
    I don’t think Nick Bostrom or anyone else is going to stop the human race from developing advanced AI because it’s a source of tremendous intellectual curiosity but also of tremendous economic advantage. So if let’s say President Trump decided to ban artificial intelligence research – I don’t think he’s going to but suppose he did. China will keep doing artificial intelligence research. If U.S. and China ban it, you know, Africa will do it. Everywhere around the world has AI textbooks and computers. And everyone now knows you can make people’s lives better and make money from developing more advanced AI. So there’s no possibility in practice to halt AI development. What we can do is try to direct it in the most beneficial direction according to our best judgment. source Unverified
  • Bill Joy
    Sun Microsystems cofounder; computer scientist
    strongly agrees and says:
    We are being propelled into this new century with no plan, no control, no brakes. Have we already gone too far down the path to alter course? I don’t believe so, but we aren’t trying yet, and the last chance to assert control—the fail-safe point—is rapidly approaching. And yet I believe we do have a strong and solid basis for hope. Our attempts to deal with weapons of mass destruction in the last century provide a shining example of relinquishment for us to consider: the unilateral US abandonment, without preconditions, of the development of biological weapons. This relinquishment stemmed from the realization that while it would take an enormous effort to create these terrible weapons, they could from then on easily be duplicated and fall into the hands of rogue nations or terrorist groups. The clear conclusion was that we would create additional threats to ourselves by pursuing these weapons, and that we would be more secure if we did not pursue them. (2000) source Unverified
  • strongly disagrees and says:
    I’m not in favor of a six-month pause because it will simply benefit China. What I am in favor of is getting everybody together to discuss what are the appropriate guardrails. So, I’m in favor of letting the industry try to get its act together. This is a case where you don’t rush in unless you understand what you’re doing. (2023) source Unverified
  • agrees and says:
    When it comes to scientific moratoriums, we’ve got some examples, such as the moratorium on human cloning and the moratorium on human germline genetic engineering — that’s genetic engineering that’s inherited down to the children, that could lead to splintering into different species. [...] So their approach, I think of it not quite as a pause for a certain amount of time. They also didn’t say, “We can never ever do this, and anyone who does it is evil” or something. Instead, what they were saying is, “Not now. It’s not close to happening. Let’s close the box. Put the box back in the attic. And if in the future the scientific community comes together and decides to lift the moratorium, they’d be welcome to do that. But for the foreseeable future, it’s not happening.” And it seems to me that in the case of AI, that’s kind of where we’re at. [...] So what I would recommend in that case is to go through that step of having that public conversation about should there be a moratorium in a similar way on this. [...] My guess is that there’s something like a 5% to 10% chance that some kind of moratorium like this — perhaps starting from the scientific community effectively saying you would be persona non grata if you were to work on systems that would take us beyond that human level — would work. But I do think that things like this are possible. (2025) source Unverified
  • Andrew Ng
    Baidu; Stanford CS faculty; founded Coursera and Google Brain
    disagrees and says:
The call for a 6 month moratorium on making AI progress beyond GPT-4 is a terrible idea. I’m seeing many new applications in education, healthcare, food, … that’ll help many people. Improving GPT-4 will help. Let’s balance the huge value AI is creating vs. realistic risks. (2023) source Unverified
  • strongly agrees and says:
    We’re calling for a ban on more powerful general AI systems, until we know how to build provably safe AI, and [...] under democratic control. (2025) source Unverified
  • ControlAI
    Campaign to prohibit superintelligence development
    strongly agrees and says:
    prevent the development of artificial superintelligence and keep humanity in control. [...] inform every relevant person in the democratic process. (2025) source Unverified
  • strongly disagrees and says:
    This is a formula for outright stagnation. [...] regulators should let entrepreneurship flourish while efforts to monitor and improve AI safety proceed in parallel. (2023) source Unverified
  • Tyler Cowen
    Professor of Economics, George Mason University & author of Average is Over
    strongly disagrees and says:
    I disagree. A slowdown would politicize AI development... and could induce a talent drain. [...] The risk is that the delay could be extended indefinitely. (2023) source Unverified
  • Bill Gates
    Philanthropist. Founder and former CEO of Microsoft.
    disagrees and says:
    I don’t really understand who they’re saying could stop, and would every country in the world agree to stop, and why to stop. (2023) source Unverified
  • strongly agrees and says:
    I really empathize for them, frankly, because they’re so stuck in this race to the bottom that they just feel an irresistible pressure to keep going and not get overtaken by the other guy. I think that’s why it’s so important to stigmatize the race to superintelligence, to the point where the U.S. government just steps in. (2025) source Verified
  • strongly disagrees and says:
    But the governance of the most powerful systems, as well as decisions regarding their deployment, must have strong public oversight. We believe people around the world should democratically decide on the bounds and defaults for AI systems. Second, we believe it would be unintuitively risky and difficult to stop the creation of superintelligence. [...] Stopping it would require something like a global surveillance regime, and even that isn’t guaranteed to work. So we have to get it right. (2023) source Verified
  • strongly agrees and says:
    Yeah, we want specific AI tools that can help cure diseases, strengthen national security, etc. But does AI also need to imitate humans, groom our kids, turn us all into slop junkies and make zillions of dollars serving ads? Most people don’t want that. But that’s what these big tech companies mean when they talk about building ‘Superintelligence’. (2025) source Verified
  • strongly disagrees and says:
My first reaction to [the Pause Giant AI Experiments letter] is that calling for a delay in research and development smacks to me of a new wave of obscurantism. (2023) source Unverified
  • Johnnie Moore
    American evangelical leader and businessman, founder of The Kairos Company and chairman of the Gaza Humanitarian Foundation
    strongly agrees and says:
    We should rapidly develop powerful AI tools that help cure diseases and solve practical problems, but not autonomous smarter-than-human machines that nobody knows how to control. Creating superintelligent machines is not only unacceptably dangerous and immoral, but also completely unnecessary. (2025) source Verified
  • disagrees and says:
My opinion is that the moratorium we should focus on is actually deployment, until we have good safety cases. I don’t know that we need to pause that particular project, but I do think its emphasis on focusing more on AI safety, on trustworthy, reliable AI, is exactly right. [...] I would agree. And I don’t think it’s a realistic thing in the world. The reason I personally signed the letter was to call attention to how serious the problems were and to emphasize spending more of our efforts on trustworthy and safe AI rather than just making a bigger version of something we already know to be unreliable. (2023) source Unverified
  • agrees and says:
    The future of AI should serve humanity, not replace it. The true test of progress will be not how fast we move, but how wisely we steer. (2025) source Verified
  • strongly agrees and says:
    The moratorium on new large training runs needs to be indefinite and worldwide. There can be no exceptions, including for governments or militaries. If the policy starts with the U.S., then China needs to see that the U.S. is not seeking an advantage but rather trying to prevent a horrifically dangerous technology which can have no true owner and which will kill everyone in the U.S. and in China and on Earth. Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. [...] If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike. (2023) source Verified
  • strongly agrees and says:
    AI offers extraordinary promise to advance human rights, tackle inequality, and protect our planet, but the pursuit of superintelligence threatens to undermine the very foundations of our common humanity. We must act with both ambition and responsibility by choosing the path of human-centred AI that serves dignity and justice. (2025) source Verified
  • strongly agrees and says:
    The condition would be not time, but capabilities. Pause until you can do X, Y, Z. And if I’m right and you cannot, it’s impossible, then it becomes a permanent ban. But if you’re right, and it’s possible, so as soon as you have those safety capabilities, go ahead. If we create general superintelligences, I don’t see a good outcome long-term for humanity. So there is X‑risk, existential risk, everyone’s dead. There is S‑risk, suffering risks, where everyone wishes they were dead. [...] It’s not obvious what you have to contribute to a world where superintelligence exists. (2024) source Unverified
  • disagrees and says:
    We are likely to eventually need something like an IAEA for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc. But the governance of the most powerful systems, as well as decisions regarding their deployment, must have strong public oversight. We believe people around the world should democratically decide on the bounds and defaults for AI systems. (2023) source Unverified
  • strongly agrees and says:
    To get the most from what AI has to offer mankind, there is simply no need to reach for the unknowable and highly risky goal of superintelligence, which is by far a frontier too far. By definition this would result in a power that we could neither understand nor control. (2025) source Verified
  • John Wolfsthal
    American nuclear security expert and former special assistant to the U.S. President for National Security Affairs.
    strongly agrees and says:
    The discussion over AGI should not be cast as a struggle between so called doomers and optimists. AGI presents a common challenge for all of humanity. We must ensure we control technology and it does not control us. Until and unless developers and their funders know that a technology with the capacity to be smarter, faster, stronger and just as lethal as humanity cannot escape human control, it must not be unleashed. Ensuring we can enjoy the benefits of AI and AGI requires us to be responsible in its development. (2025) source Verified
  • Joe Allen
    American writer and transhumanism commentator, War Room correspondent.
    strongly agrees and says:
    If superintelligence is achievable and the public buys in, then I'm out. source Verified
  • Samuel Buteau
    AI researcher and PhD candidate at Mila (Quebec AI Institute).
    strongly agrees and says:
    Barring an international agreement, humanity will quite likely not have the ability to build safe superintelligence by the time the first superintelligence is built. Therefore, pursuing superintelligence at this stage is quite likely to cause the permanent disempowerment or extinction of humanity. I support an international agreement to ensure that superintelligence is not built before it can be done safely. (2025) source Verified
  • Mark Beall
    President of Government Affairs at the AI Policy Network.
    strongly agrees and says:
    When AI researchers warn of extinction and tech leaders build doomsday bunkers, prudence demands we listen. Superintelligence without proper safeguards could be the ultimate expression of human hubris—power without moral restraint. (2025) source Verified
  • Yuval Noah Harari
    Israeli historian and professor at the Hebrew University of Jerusalem
    strongly agrees and says:
    Superintelligence would likely break the very operating system of human civilization - and is completely unnecessary. If we instead focus on building controllable AI tools to help real people today, we can far more reliably and safely realize AI’s incredible benefits. (2025) source Verified
  • Walter Kim
    Korean American evangelical pastor and president of the National Association of Evangelicals.
    strongly agrees and says:
    If we race to build superintelligence without clear and morally informed parameters, we risk undermining the incredible potential AI has to alleviate suffering and enable flourishing. We should intentionally harness this amazing technology to help people, not rush to build machines and mechanisms we cannot control. (2025) source Verified
  • strongly agrees and says:
    This is not a ban or even a moratorium in the usual sense. It’s simply a proposal to require adequate safety measures for a technology that, according to its developers, has a significant chance to cause human extinction. Is that too much to ask? (2025) source Verified