Eliezer Yudkowsky
AI researcher and writer
ai (5)
future (5)
ai-governance (3)
ai-policy (3)
ai-risk (2)
ai-safety (2)
existential-risk (2)
ai-alignment (1)
ai-ethics (1)
ai-regulation (1)
democracy (1)
economics (1)
innovation-policy (1)
policy (1)
public-interest-ai (1)
- AI poses an existential threat to humanity
Eliezer Yudkowsky votes For and says:
the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.” It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers. (2023) source Verified
- Could AGI quickly lead to superintelligence?
Eliezer Yudkowsky votes For and says:
From our perspective, an AI will either be so slow as to be bottlenecked, or so fast as to be FOOM. [...] 'AI go FOOM'. (2008) source Unverified
- AGI will create abundance
Eliezer Yudkowsky votes Against and says:
Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.” It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers. We are not prepared. We are not on course to be prepared in any reasonable time window. There is no plan. [...] If we actually do this, we are all going to die. (2023) source Unverified
- Participate in shaping the future of AI
Eliezer Yudkowsky votes Against and says:
The key issue is not “human-competitive” intelligence (as the open letter puts it); it’s what happens after AI gets to smarter-than-human intelligence. Key thresholds there may not be obvious, we definitely can’t calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing. Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. […] Without that precision and preparation, the most likely outcome is AI that does not do what we want, and does not care for us nor for sentient life in general. If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter. (2023) source Unverified
- Build artificial general intelligence
Eliezer Yudkowsky votes Against and says:
Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.” It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers. Some of my friends have recently reported to me that when people outside the AI industry hear about extinction risk from Artificial General Intelligence for the first time, their reaction is “maybe we should not build AGI, then.” [...] Shut it all down. We are not ready. We are not on track to be significantly readier in the foreseeable future. (2023) source Unverified