AGI could quickly lead to superintelligence
For (15)
- Spencer Greenberg (mathematician and entrepreneur in social science) votes For and says: "Example 1: self-play. AI is not just human level at chess, it far exceeds human level because of self-play. Example 2: aggregation of peak performance. No human can get all math Olympiad problems right; but an A.I. can be trained on the correct an[...]" (Unverified source)
- Tim Rocktäschel (AI researcher) votes For and says: "Once AI reaches human-level capabilities, we will be able to use it to improve itself in a self-referential way. I personally believe that if we can reach AGI, we will reach ASI shortly, maybe a few years after that." (Unverified source)
- Anthony Aguirre (physicist; Future of Life cofounder) votes For and says: "The development of full artificial general intelligence - what we will call here AI that is 'outside the Gates' - would be a fundamental shift in the nature of the world: by its very nature it means adding a new species of intelligence to Earth with [...]" (Unverified source, 2025)
- Yoshua Bengio (AI pioneer, Turing Award winner) votes For and says: "Moreover, frontier AI companies are seeking to develop AI with a specific skill that could very well unlock all others and turbocharge advances: AIs with the ability to advance research in AI. An AI system that would be as capable at AI research as t[...]" (Unverified source, 2024)
- Ben Goertzel (SingularityNET founder; AGI researcher) votes For and says: "My own view is once you get to human-level AGI, within a few years you could get a radically superhuman AGI - unless the AGI threatens to throttle its own development out of its own conservatism. I think once an AGI can introspect its own mind, then [...]" (Unverified source, 2024)
- Eric Schmidt (former Google CEO; tech investor) votes For and says: "So what happens when this thing starts to scale? Well, a lot. One way to say this is that within three to five years we'll have what is called general intelligence, AGI, which can be defined as a system that is as smart as the smartest mathematician, [...]" (Unverified source, 2025)
- Jared Kaplan (Anthropic chief scientist) votes For and says: "If you imagine you create this process where you have an AI that is smarter than you, or about as smart as you, it's [then] making an AI that's much smarter. It's going to enlist that AI help to make an AI smarter than that. It sounds like a kind of [...]" (Unverified source, 2025)
- Scott Alexander (author and psychiatrist) votes For and says: "So we've already gone from 'mere human intelligence' to 'human with all knowledge, photographic memory, lightning calculations, and solves problems a hundred times faster than anyone else.' This suggests that 'merely human level intelligence' isn't m[...]" (Unverified source, 2015)
- Leopold Aschenbrenner (AI investor; former OpenAI researcher) votes For and says: "AI progress won't stop at human-level. Hundreds of millions of AGIs could automate AI research, compressing a decade of algorithmic progress (5+ OOMs) into 1 year. We would rapidly go from human-level to vastly superhuman AI systems. The power - and th[...]" (Unverified source, 2024)
- Stephen Hawking (theoretical physicist, cosmologist, and author) votes For and says: "Once humans develop artificial intelligence it would take off on its own, [...] Humans [...] couldn't compete and would be superseded." (Unverified source, 2014)
- I. J. Good (British statistician; Turing collaborator) votes For and says: "an ultraintelligent machine could design even better machines; there would [...] be an 'intelligence explosion,' and the intelligence of man would be left far behind." (Unverified source, 1965)
- David J. Chalmers (philosopher of mind, consciousness and AI) votes For and says: "If there is AI, then there will be AI+ [...] Soon after we have produced a human-level AI, we will produce an even more intelligent AI." (Unverified source, 2010)
- Nick Bostrom (philosopher; 'Superintelligence' author; FHI founder) votes For and says: "once we have full AGI, super intelligence might be quite close on the heels of that." (Unverified source, 2025)
- Max Tegmark (physicist, AI researcher) votes For and says: "It might take two weeks or two days or two hours or two minutes. [...] It's very appropriate to call this an 'intelligence explosion'." (Unverified source, 2022)
- Eliezer Yudkowsky (AI researcher and writer) votes For and says: "From our perspective, an AI will either be so slow as to be bottlenecked, or so fast as to be FOOM. [...] 'AI go FOOM'." (Unverified source, 2008)
Abstain (1)
- Roman V. Yampolskiy (AI safety researcher, Louisville professor) abstains and says: "Until some company or scientist says 'Here's the proof! We can definitely have a safety mechanism that can scale to any level of intelligence,' I don't think we should be developing those general superintelligences. We can get most of the benefits w[...]" (Unverified source, 2024)
Against (10)
- Andrew Ng (Baidu; Stanford CS faculty; founded Coursera and Google Brain) votes Against and says: "There's also a lot of hype, that AI will create evil robots with super-intelligence. That's an unnecessary distraction. Those of us on the frontline shipping code, we're excited by AI, but we don't see a realistic path for our software to become sen[...]" (Unverified source, 2015)
- Rodney Brooks (roboticist; former MIT CSAIL director) votes Against and says: "My own opinion is that of course this is possible in principle. I would never have started working on Artificial Intelligence if I did not believe that. [...] Even if it is possible I personally think we are far, far further away from understanding [...]" (Unverified source)
- Dario Amodei (Anthropic CEO; former OpenAI researcher) votes Against and says: "Two 'extreme' positions both seem false to me. First, you might think that the world would be instantly transformed on the scale of seconds or days ('the Singularity'), as superior intelligence builds on itself and solves every possible scientific, e[...]" (Unverified source, 2024)
- Mustafa Suleyman (Microsoft AI CEO; author) votes Against and says: "It depends on your definition of AGI, right? AGI isn't the singularity. The singularity is an exponentially recursive self-improving system that very rapidly accelerates far beyond anything that might look like human intelligence. To me, AGI is a ge[...]" (Unverified source, 2024)
- Ramez Naam (science author and futurist) votes Against and says: "We can see this more directly. There are already entities with vastly greater than human intelligence working on the problem of augmenting their own intelligence. A great many, in fact. We call them corporations. And while we may have a variety of th[...]" (Unverified source, 2015)
- Tim Dettmers (machine learning researcher) votes Against and says: "The concept of superintelligence is built on a flawed premise. The idea is that once you have an intelligence that is as good or better than humans - in other words, AGI - then that intelligence can improve itself, leading to a runaway effect. This i[...]" (Unverified source, 2025)
- Sam Altman (CEO at OpenAI) votes Against and says: "But then there is a long continuation from what we call AGI to what we call Superintelligence." (Unverified source, 2024)
- Yann LeCun (computer scientist, AI researcher) votes Against and says: "There is no such thing as an intelligence explosion. There is no reason AI should become in control just because it is more capable." (Unverified source, 2025)
- Robin Hanson (economist; Overcoming Bias blogger; GMU) votes Against and says: "I don't think a sudden ('foom') takeover by a super intelligent computer is likely." (Unverified source)
- Paul Christiano (ARC founder; alignment researcher) votes Against and says: "I expect 'slow takeoff,' which we could operationalize as the economy doubling over some 4 year interval before it doubles over any 1 year interval." (Unverified source, 2018)