Eliezer Yudkowsky
Tags: ai (5), future (4), ai-safety (3), existential-risk (3), ai-ethics (2), ai-governance (2), ai-regulation (2), ai-risk (2), ai-policy (1), economics (1), ethics (1), research-policy (1)
- Eliezer Yudkowsky votes For and says: From our perspective, an AI will either be so slow as to be bottlenecked, or so fast as to be FOOM. [...] 'AI go FOOM'. Unverified source (2008)
- Eliezer Yudkowsky votes Against and says: Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe...” Unverified source (2023)
- Eliezer Yudkowsky votes Against and says: The key issue is not “human-competitive” intelligence (as the open letter puts it); it’s what happens after AI gets to smarter-than-human intelligence. Key thresholds there may not be obvious, we definitely can’t calculate in advance what happens whe... Unverified source (2023)