The key issue is not “human-competitive” intelligence (as the open letter puts it); it’s what happens after AI gets to smarter-than-human intelligence. Key thresholds there may not be obvious, we definitely can’t calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing. Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. […] Without that precision and preparation, the most likely outcome is AI that does not do what we want, and does not care for us nor for sentient life in general. If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.

Eliezer Yudkowsky, 2023 (source unverified)