AI poses an existential threat to humanity
For (12)
- Stuart J. Russell (AI expert and professor) votes For: "Developing strong AI would be the biggest event in human history, but we need to make sure it's not the last event in human history." (Unverified source)
- Eliezer Yudkowsky (AI researcher and writer) votes For: "the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in 'maybe possibly some remote chance,' but as in 'that is the obvious thing that…'" (Verified source, 2023)
- Geoffrey Hinton (godfather of deep learning) votes For: "I used to think it was a long way off, but I now think it's serious and fairly close [...] The alarm bell I'm ringing has to do with the existential threat of them taking control." (Unverified source)
- António Guterres (UN Secretary-General) votes For: "they threaten to upend life as we know it: the climate crisis and the ungoverned expansion of artificial intelligence." (Unverified source, 2025)
- Pia Lauritzen (Danish philosopher and leadership adviser) votes For: "It poses an existential threat because it offers answers faster than humans can ask the questions that help them contemplate their existence." (Unverified source, 2025)
- Elon Musk (founder of SpaceX; cofounder of Tesla, SolarCity, and PayPal) votes For: "AI is a fundamental existential risk for human civilization, and I don't think people fully appreciate that." (Unverified source, 2017)
- Max Tegmark (physicist, AI researcher) votes For: "if we humans lose control over our society to machines that are much smarter than us, then things can go just as bad for us." (Unverified source, 2023)
- Nick Bostrom (philosopher; author of 'Superintelligence'; FHI founder) votes For: "the greatest existential risks over the coming decades or century arise from certain, anticipated technological breakthroughs that we might make, in particular machine superintelligence." (Unverified source, 2017)
- Jaan Tallinn (Skype co-founder and AI safety donor) votes For: "…anyone working at AI labs who thinks the risk of training the next-generation model 'blowing up the planet' is less than 1%." (Unverified source)
- Stephen Hawking (theoretical physicist, cosmologist, and author) votes For: "The primitive forms of artificial intelligence we already have, have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race." (Unverified source, 2014)
- Sam Altman (CEO at OpenAI) votes For: "Development of superhuman machine intelligence (SMI) [1] is probably the greatest threat to the continued existence of humanity. There are other threats that I think are more certain to happen (for example, an engineered virus with a long incubation…" (Unverified source, 2015)
- Bill Joy (Sun Microsystems cofounder; computer scientist) votes For: "How soon could such an intelligent robot be built? The coming advances in computing power seem to make it possible by 2030. And once an intelligent robot exists, it is only a small step to a robot species—to an intelligent robot that can make evolved…" (Unverified source, 2000)
Abstain (0)
Against (6)
- Edward Moore Geist (historian of science and nuclear policy) votes Against: "The risks of self-improving intelligent machines are grossly exaggerated and ought not serve as a distraction from the existential risks we already face." (Unverified source, 2015)
- Eva Hamrud (science writer covering AI and technology) votes Against: "Despite this, it currently seems to be unlikely to become an existential threat to humanity." (Unverified source, 2021)
- Andrew Ng (Baidu; Stanford CS faculty; founded Coursera and Google Brain) votes Against: "There's also a lot of hype, that AI will create evil robots with super-intelligence. That's an unnecessary distraction." (Unverified source, 2015)
- Gary Marcus (professor of psychology and neural science) votes Against.
- Melanie Mitchell (AI researcher; complexity scientist and author) votes Against: "There are a lot of kinds of dangers with AI. It can be used to produce disinformation and deepfakes. There are algorithmic biases, like the one I mentioned in the case of facial recognition. Hinton and others go further and say these systems could ac…" (Unverified source, 2024)
- Yann LeCun (computer scientist, AI researcher) votes Against: "The point is, AI systems, as smart as they might be, will be subservient to us. We set their goals, and they don't have any intrinsic goal that we would build into them to dominate. It would be really stupid to build that. It would also be useless. N…" (Unverified source, 2024)
Topic: ai-governance