Eliezer Yudkowsky
AI researcher and writer
- AI poses an existential threat to humanity
Eliezer Yudkowsky votes For.
- Ban superintelligence development until safety consensus is reached
Eliezer Yudkowsky votes For.
- Participate in shaping the future of AI
Eliezer Yudkowsky votes Against and says:
“The key issue is not ‘human-competitive’ intelligence (as the open letter puts it); it’s what happens after AI gets to smarter-than-human intelligence. Key thresholds there may not be obvious, we definitely can’t calculate in advance what happens whe…” (Unverified source, 2023)