Eliezer Yudkowsky
AI researcher and writer
ai (2)
ai-ethics (2)
ai-governance (2)
ai-risk (2)
ai-safety (2)
existential-risk (2)
ai-alignment (1)
ai-deployment (1)
ai-policy (1)
ai-regulation (1)
future (1)
public-interest-ai (1)
-
Would competing ASIs be positive for humans?
Eliezer Yudkowsky strongly disagrees and says:
-
Should we ban future open-source AI models that can be used to create weapons of mass destruction?
Eliezer Yudkowsky strongly agrees and says:
But open sourcing, you know, that's just sheer catastrophe. The whole notion of open sourcing, this was always the wrong approach, the wrong ideal. There are places in the world where open source is a noble ideal; but building stuff you don't understand, that is difficult to control, where if you could align it, it would take time, you'd have to spend a bunch of time doing it: that is not a place for open source, because then you just have powerful things that go straight out the gate without anybody having had the time to have them not kill everyone. (2023)