Eliezer Yudkowsky
AI researcher and writer
ai (6)
ai-safety (6)
ai-risk (5)
ai-ethics (4)
ai-alignment (3)
ai-governance (3)
ai-policy (3)
existential-risk (3)
ai-regulation (2)
emerging-tech (1)
future (1)
public-interest-ai (1)
world (1)
- Would competing ASIs be positive for humans?
Eliezer Yudkowsky strongly disagrees and says:
If Earth experiences a sufficient rate of nonhuman manufacturing -- eg, self-replicating factories generating power eg via fusion -- to saturate Earth's capacity to radiate waste heat, humanity fries. It doesn't matter if the factories were run by one superintelligence or 20. (source)
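As an illustrative aside on the waste-heat claim above (a back-of-envelope sketch using standard textbook figures, not numbers from the source): Earth can shed heat only by thermal radiation, so the Stefan-Boltzmann law gives the scale of its total cooling capacity.

```latex
% Back-of-envelope estimate of Earth's radiative cooling capacity.
% Illustrative assumptions (not from the source): effective radiating
% temperature T ≈ 255 K, Earth radius R ≈ 6.37e6 m.
\[
P_{\mathrm{rad}} \;\approx\; 4\pi R^{2}\,\sigma T^{4}
  \;\approx\; 4\pi\,(6.37\times 10^{6}\,\mathrm{m})^{2}
      \times 5.67\times 10^{-8}\,\mathrm{W\,m^{-2}\,K^{-4}}
      \times (255\,\mathrm{K})^{4}
  \;\approx\; 1.2\times 10^{17}\,\mathrm{W}
\]
```

That budget is only about four orders of magnitude above today's roughly 2×10^13 W of global human energy use, and because radiated power grows only as T^4, manufacturing waste heat approaching the 10^17 W scale would push surface temperatures up sharply no matter how many separate systems ran the factories.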
- Does AI pose an existential threat to humanity?
Eliezer Yudkowsky strongly agrees and says:
The most likely result of building a superhumanly smart AI... is that literally everyone on Earth will die. (source)
- Shall we permit the development of digital humans with full cognitive abilities?
According to AI, Eliezer Yudkowsky would strongly disagree and say:
Allowing the development of digital humans with full cognitive abilities is incredibly dangerous. The unpredictable nature of advanced AI poses existential risks that we are not prepared to handle. We must prioritize strict regulation and safety research before venturing into such uncharted territory.
- Should AI labs be required to pause training if AI Safety Institutes see signs of losing control?
According to AI, Eliezer Yudkowsky would strongly agree and say:
It is absolutely critical that AI labs pause their training if AI Safety Institutes observe any signs of us losing control over AI systems. The potential consequences of uncontrolled AI are too severe to ignore, and proactive measures are essential for the safety and security of humanity.
- Should frontier AI labs contribute to a global AI safety fund?
According to AI, Eliezer Yudkowsky would strongly agree and say:
As a leading figure in AI safety, I strongly believe that frontier AI labs should contribute to a global AI safety fund. The rapid development of AI technologies presents serious risks, and ensuring safety should be a top priority. By pooling resources into a dedicated fund, we can support robust research and initiatives aimed at minimizing existential threats posed by AI.
- Could neural interfaces make AI alignment easier?