Eliezer Yudkowsky
ai (5)
ai-safety (5)
ai-governance (4)
ai-regulation (4)
existential-risk (4)
future (4)
policy (3)
ai-ethics (2)
ai-policy (2)
ai-risk (2)
economics (2)
ethics (2)
cybersecurity (1)
defense (1)
digital-democracy (1)
Eliezer Yudkowsky votes For and says:
From our perspective, an AI will either be so slow as to be bottlenecked, or so fast as to be FOOM. [...] 'AI go FOOM'. Unverified source (2008)
Eliezer Yudkowsky votes For and says:
But open sourcing, you know, that's just sheer catastrophe. The whole notion of open sourcing, this was always the wrong approach, the wrong ideal. There are places in the world where open source is a noble ideal and building stuff you don't understa... Unverified source (2023)
Eliezer Yudkowsky votes Against and says:
Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe... Unverified source (2023)
Eliezer Yudkowsky votes Against and says:
The key issue is not “human-competitive” intelligence (as the open letter puts it); it’s what happens after AI gets to smarter-than-human intelligence. Key thresholds there may not be obvious, we definitely can’t calculate in advance what happens whe... Unverified source (2023)