
Does AI pose an existential threat to humanity?

  • strongly agrees and says:
    Developing strong AI would be the biggest event in human history, but we need to make sure it's not the last event in human history. (source: unverified)
  • Eliezer Yudkowsky
    Cofounder of the Machine Intelligence Research Institute; AI alignment researcher
    strongly agrees and says:
    the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.” It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers. (2023) (source: verified)
  • Geoffrey Hinton
    Turing Award winner; pioneer of deep learning
    strongly agrees and says:
    I used to think it was a long way off, but I now think it's serious and fairly close [...] The alarm bell I’m ringing has to do with the existential threat of them taking control. (source: unverified)
  • disagrees and says:
    There are a lot of kinds of dangers with AI. It can be used to produce disinformation and deepfakes. There are algorithmic biases, like the one I mentioned in the case of facial recognition. Hinton and others go further and say these systems could actually get out of control and destroy humanity. This claim is, to say the least, very unlikely and speculative. If we develop a superintelligent system, I don’t believe that it wouldn’t care about our values, like killing all humans is not right. Putting all the focus on this dramatic idea of existential threats to humanity only takes the focus away from things that are really important right now. (2024) (source: unverified)
  • Sam Altman
    CEO of OpenAI; former president of Y Combinator. Investor in Reddit, Stripe, Change.org, Pinterest, and many others
    strongly agrees and says:
    Development of superhuman machine intelligence (SMI) [1] is probably the greatest threat to the continued existence of humanity. There are other threats that I think are more certain to happen (for example, an engineered virus with a long incubation period and a high mortality rate) but are unlikely to destroy every human in the universe in the way that SMI could. Also, most of these other big threats are already widely feared.

    It is extremely hard to put a timeframe on when this will happen (more on this later), and it certainly feels to most people working in the field that it’s still many, many years away. But it’s also extremely hard to believe that it isn’t very likely that it will happen at some point.

    SMI does not have to be the inherently evil sci-fi version to kill us all. A more probable scenario is that it simply doesn’t care about us much either way, but in an effort to accomplish some other goal (most goals, if you think about them long enough, could make use of resources currently being used by humans) wipes us out. Certain goals, like self-preservation, could clearly benefit from no humans. We wash our hands not because we actively wish ill towards the bacteria and viruses on them, but because we don’t want them to get in the way of our plans. (2015) (source: unverified)
  • Bill Joy
    Sun Microsystems cofounder; computer scientist
    strongly agrees and says:
    How soon could such an intelligent robot be built? The coming advances in computing power seem to make it possible by 2030. And once an intelligent robot exists, it is only a small step to a robot species—to an intelligent robot that can make evolved copies of itself.

    A second dream of robotics is that we will gradually replace ourselves with our robotic technology, achieving near immortality by downloading our consciousnesses; it is this process that Danny Hillis thinks we will gradually get used to and that Ray Kurzweil elegantly details in The Age of Spiritual Machines. (We are beginning to see intimations of this in the implantation of computer devices into the human body, as illustrated on the cover of Wired 8.02.) But if we are downloaded into our technology, what are the chances that we will thereafter be ourselves or even human? It seems to me far more likely that a robotic existence would not be like a human one in any sense that we understand, that the robots would in no sense be our children, that on this path our humanity may well be lost. (2000) (source: unverified)
  • Yann LeCun
    Chief AI Scientist at Meta; Turing Award winner
    strongly disagrees and says:
    The point is, AI systems, as smart as they might be, will be subservient to us. We set their goals, and they don't have any intrinsic goal that we would build into them to dominate. It would be really stupid to build that. It would also be useless. Nobody would buy it anyway. (2024) (source: unverified)
  • Stephen Hawking
    Theoretical physicist and cosmologist
    strongly agrees and says:
    The primitive forms of artificial intelligence we already have, have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race. (2014) (source: unverified)