Could AGI quickly lead to superintelligence?
For (7)
- Spencer Greenberg (mathematician and entrepreneur in social science) votes For and says: "Example 1: self-play. AI is not just human level at chess; it far exceeds human level because of self-play. Example 2: aggregation of peak performance. No human can get all math Olympiad problems right, but an AI can be trained on the correct an..." (unverified source). A toy self-play sketch follows the For list below.
- David J. Chalmers (philosopher of mind, consciousness, and AI) votes For and says: "If there is AI, then there will be AI+ [...] Soon after we have produced a human-level AI, we will produce an even more intelligent AI." (unverified source, 2010)
- I. J. Good (British statistician and Turing collaborator) votes For and says: "an ultraintelligent machine could design even better machines; there would [...] be an 'intelligence explosion,' and the intelligence of man would be left far behind." (unverified source, 1965)
- Stephen Hawking (theoretical physicist, cosmologist, and author) votes For and says: "Once humans develop artificial intelligence it would take off on its own [...] Humans [...] couldn't compete and would be superseded." (unverified source, 2014)
- Nick Bostrom (philosopher, author of 'Superintelligence', and FHI founder) votes For and says: "once we have full AGI, superintelligence might be quite close on the heels of that." (unverified source, 2025)
- Eliezer Yudkowsky (AI researcher and writer) votes For and says: "From our perspective, an AI will either be so slow as to be bottlenecked, or so fast as to be FOOM. [...] 'AI go FOOM'." (unverified source, 2008)
- Max Tegmark (physicist and AI researcher) votes For and says: "It might take two weeks or two days or two hours or two minutes. [...] It’s very appropriate to call this an “intelligence explosion”." (unverified source, 2022)
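Greenberg's first example points at a general algorithmic pattern: an agent can exceed the level of any human data by playing against copies of itself. Below is a minimal, hedged sketch of one classic self-play method, regret matching, applied to rock-paper-scissors; the game choice, the `RegretMatcher` name, and the iteration count are this write-up's own illustration, not anything from the voters quoted above.

```python
import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

def payoff(a, b):
    """+1 if action a beats b, -1 if it loses, 0 on a tie."""
    if a == b:
        return 0
    return 1 if (a - b) % 3 == 1 else -1

class RegretMatcher:
    """One self-play learner: plays in proportion to accumulated positive regret."""

    def __init__(self):
        self.regret_sum = [0.0] * ACTIONS
        self.strategy_sum = [0.0] * ACTIONS

    def strategy(self):
        # Mix over actions in proportion to positive regret; uniform if none.
        positive = [max(r, 0.0) for r in self.regret_sum]
        total = sum(positive)
        strat = [p / total for p in positive] if total > 0 else [1.0 / ACTIONS] * ACTIONS
        for i in range(ACTIONS):
            self.strategy_sum[i] += strat[i]
        return strat

    def sample(self, strat):
        return random.choices(range(ACTIONS), weights=strat)[0]

    def update(self, my_action, opp_action):
        # Regret = how much better each alternative would have done this round.
        actual = payoff(my_action, opp_action)
        for alt in range(ACTIONS):
            self.regret_sum[alt] += payoff(alt, opp_action) - actual

    def average_strategy(self):
        total = sum(self.strategy_sum)
        return [s / total for s in self.strategy_sum]

# Two copies of the same learner train purely against each other.
p1, p2 = RegretMatcher(), RegretMatcher()
for _ in range(100_000):
    a1 = p1.sample(p1.strategy())
    a2 = p2.sample(p2.strategy())
    p1.update(a1, a2)
    p2.update(a2, a1)

print(p1.average_strategy())  # ~[0.333, 0.333, 0.333], the Nash equilibrium
```

In a zero-sum game like this, the time-averaged strategies of regret-matching self-play converge to a Nash equilibrium (here the uniform mix), with no human examples in the loop; scaled-up self-play of this general kind is what produced the superhuman chess engines Greenberg's quote refers to.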
Abstain (0)
Against (4)
- Sam Altman (CEO of OpenAI) votes Against and says: "But then there is a long continuation from what we call AGI to what we call Superintelligence." (unverified source, 2024)
- Yann LeCun (computer scientist and AI researcher) votes Against and says: "There is no such thing as an intelligence explosion. There is no reason AI should become in control just because it is more capable." (unverified source, 2025)
- Paul Christiano (ARC founder and alignment researcher) votes Against and says: "I expect “slow takeoff,” which we could operationalize as the economy doubling over some 4 year interval before it doubles over any 1 year interval." (unverified source, 2018). A worked growth-rate calculation follows this list.
- Robin Hanson (economist, Overcoming Bias blogger, GMU) votes Against and says: "I don’t think a sudden (“foom”) takeover by a super intelligent computer is likely." (unverified source)
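Christiano's criterion is quantitative, so a back-of-the-envelope calculation makes it concrete. This is this write-up's own arithmetic, not from the quoted post, and it assumes steady compound annual growth:

```python
# Annual growth rates implied by Christiano's doubling-time operationalization.
# Back-of-the-envelope only; assumes steady compound annual growth.

def annual_growth_rate(doubling_years: float) -> float:
    """Compound annual growth rate implied by a given doubling time."""
    return 2 ** (1 / doubling_years) - 1

for years in (4, 1):
    print(f"doubling every {years}y -> {annual_growth_rate(years):.1%} per year")

# Output:
# doubling every 4y -> 18.9% per year
# doubling every 1y -> 100.0% per year
```

Even the "slow" scenario of a 4-year doubling implies roughly 19% annual growth, several times the roughly 3% the world economy has managed in recent decades, so Christiano's "slow takeoff" is slow only relative to a 1-year doubling, not by historical standards.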