Eliezer Yudkowsky
ai (1)
ai-ethics (1)
ai-governance (1)
ai-regulation (1)
ai-safety (1)
existential-risk (1)
future (1)
research-policy (1)
Eliezer Yudkowsky votes Against and says:
Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe... (Unverified source, 2023)