Eliezer Yudkowsky
AI researcher and writer
ai-safety (5)
ai (4)
ai-governance (3)
ai-regulation (2)
economics (2)
existential-risk (2)
future (2)
policy (2)
ai-alignment (1)
cybersecurity (1)
privacy (1)
social-justice (1)
voting-systems (1)

- Implement a universal basic income

- AI poses an existential threat to humanity
Eliezer Yudkowsky votes For.

- Use electronic voting machines

- Could AGI quickly lead to superintelligence?
Eliezer Yudkowsky votes For and says:
From our perspective, an AI will either be so slow as to be bottlenecked, or so fast as to be FOOM. [...] ('AI go FOOM', 2008; unverified source)

- Ban superintelligence development until safety consensus is reached
Eliezer Yudkowsky votes For.

- Ban open source AI models capable of creating WMDs
Eliezer Yudkowsky votes For and says:
But open sourcing, you know, that's just sheer catastrophe. The whole notion of open sourcing, this was always the wrong approach, the wrong ideal. There are places in the world where open source is a noble ideal and building stuff you don't understand [...] (2023; unverified source)

- Build artificial general intelligence
Eliezer Yudkowsky votes Against and says:
Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe [...]” (2023; unverified source)

- AGI will create abundance
Eliezer Yudkowsky votes Against and says:
Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe [...]” (2023; unverified source)

- Participate in shaping the future of AI
Eliezer Yudkowsky votes Against and says:
The key issue is not “human-competitive” intelligence (as the open letter puts it); it’s what happens after AI gets to smarter-than-human intelligence. Key thresholds there may not be obvious, we definitely can’t calculate in advance what happens when [...] (2023; unverified source)