Eliezer Yudkowsky
AI researcher and writer
ai (4)
ai-alignment (4)
ai-safety (3)
ai-ethics (2)
ai-governance (2)
ai-risk (2)
emerging-tech (2)
future (2)
ai-policy (1)
ai-regulation (1)
existential-risk (1)
Would competing ASIs be positive for humans?
Eliezer Yudkowsky strongly disagrees and says:
If Earth experiences a sufficient rate of nonhuman manufacturing -- eg, self-replicating factories generating power eg via fusion -- to saturate Earth's capacity to radiate waste heat, humanity fries. It doesn't matter if the factories were run by one superintelligence or 20. (source)
Should AI labs be required to pause training if AI Safety Institutes see signs of losing control?
According to AI, Eliezer Yudkowsky would strongly agree and say:
It is absolutely critical that AI labs pause their training if AI Safety Institutes observe any signs of us losing control over AI systems. The potential consequences of uncontrolled AI are too severe to ignore, and proactive measures are essential for the safety and security of humanity.
Could neural interfaces make AI alignment easier?
Is expanding LLMs sufficient for achieving AGI?
According to AI, Eliezer Yudkowsky would disagree and say:
I don't believe expanding LLMs alone is sufficient for achieving AGI. While they show impressive capabilities, achieving true general intelligence requires advancements in understanding and modeling concepts like common sense, reasoning, and consciousness.