Eliezer Yudkowsky

Info
AI researcher and writer
X: @ESYudkowsky · Wikipedia
Location: United States
  • Should we have a universal basic income?
    Eliezer Yudkowsky says:
    I'm skeptical that Universal Basic Income can get rid of grinding poverty, since somehow humanity's 100-fold productivity increase (since the days of agriculture) didn't eliminate poverty. source Verified
  • Should we use electronic voting machines?
    Eliezer Yudkowsky strongly disagrees and says:
    I can't recall hearing a single computer security researcher come out in favor of electronic voting machines. Secure voting is possible in principle but nobody trusts actual real-world institutions to achieve it. (2024) source Verified
  • Does AI pose an existential threat to humanity?
    Eliezer Yudkowsky strongly agrees and says:
    the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.” It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers. (2023) source Verified
  • Should we all participate in shaping the future of AI and the post-artificial general intelligence era?
    Eliezer Yudkowsky strongly disagrees and says:
    The key issue is not “human-competitive” intelligence (as the open letter puts it); it’s what happens after AI gets to smarter-than-human intelligence. Key thresholds there may not be obvious, we definitely can’t calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing. Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. […] Without that precision and preparation, the most likely outcome is AI that does not do what we want, and does not care for us nor for sentient life in general. If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter. (2023) source Unverified
  • Will AGI create abundance?
    Eliezer Yudkowsky strongly disagrees and says:
    Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.” It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers. We are not prepared. We are not on course to be prepared in any reasonable time window. There is no plan. [...] If we actually do this, we are all going to die. (2023) source Unverified
  • Could AGI quickly lead to superintelligence?
    Eliezer Yudkowsky strongly agrees and says:
    From our perspective, an AI will either be so slow as to be bottlenecked, or so fast as to be FOOM. [...] 'AI go FOOM'. (2008) source Unverified
  • Should humanity ban the development of superintelligence until there is a strong public buy-in and broad scientific consensus that it will be done safely and controllably?
    Eliezer Yudkowsky strongly agrees and says:
    The moratorium on new large training runs needs to be indefinite and worldwide. There can be no exceptions, including for governments or militaries. If the policy starts with the U.S., then China needs to see that the U.S. is not seeking an advantage but rather trying to prevent a horrifically dangerous technology which can have no true owner and which will kill everyone in the U.S. and in China and on Earth. Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. [...] If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike. (2023) source Verified
  • Should humanity build artificial general intelligence?
    Eliezer Yudkowsky strongly disagrees and says:
    Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.” It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers. Some of my friends have recently reported to me that when people outside the AI industry hear about extinction risk from Artificial General Intelligence for the first time, their reaction is “maybe we should not build AGI, then.” [...] Shut it all down. We are not ready. We are not on track to be significantly readier in the foreseeable future. (2023) source Unverified
  • Should we ban future open-source AI models that can be used to create weapons of mass destruction?
    Eliezer Yudkowsky strongly agrees and says:
    But open sourcing, you know, that's just sheer catastrophe. The whole notion of open sourcing, this was always the wrong approach, the wrong ideal. There are places in the world where open source is a noble ideal and building stuff you don't understand that is difficult to control, that where if you could align it, it would take time. You'd have to spend a bunch of time doing it. That is not a place for open source, because then you just have powerful things that just go straight out the gate without anybody having had the time to have them not kill everyone. (2023) source Unverified