AI alignment is solvable
For (11)
- Boaz Barak (Harvard computer science professor; member of technical staff on OpenAI's alignment team) votes For: "Some good news in alignment: as models become more capable, they are also more aligned, across multiple measures, including spec compliance. However, the improvement is not sufficient to match the higher stakes that come with improved capabilities..." (unverified source, 2026)
- Andrew Ng (Baidu; Stanford CS faculty; founded Coursera and Google Brain) votes For: "The lack of control of AI is one of the things that has been overhyped. [...] We can't control AI exactly. And it will sometimes be buffeted around by random factors. But with the right engineering, we can control them well enough for most applications..." (unverified source, 2026)
- David Dalrymple (AI safety researcher; Programme Director, Safeguarded AI at ARIA, UK) votes For: "In 2024 I would have said it's about 40-50% likely that LLMs scaled up to ASI would end up killing us all; now I would say that it's only about 5-8% likely even with no additional progress on alignment, and more like 1-2% likely simpliciter." (unverified source, 2026)
- Adrià Garriga-Alonso (AI safety researcher at FAR.AI; MATS mentor; Cambridge PhD in Bayesian neural networks) votes For: "Alignment is solved for models in the current paradigm. [...] The strongest reasons to think alignment hasn't been fully solved concern future models heavily optimized under outcome-based reinforcement learning, and technical research should anticipate..." (unverified source, 2026)
- Demis Hassabis (Nobel laureate, AI researcher, and CEO of DeepMind) votes For: "So I think we'll solve it if we get our act together, we do this internationally, put all the best minds on it, we get going now... And I think given sufficient time with sufficient brain power — I believe in human ingenuity — I think we'll get this..." (unverified source, 2026)
- Ajeya Cotra (AI safety researcher; senior research analyst at Open Philanthropy) votes For: "I'm fundamentally fairly optimistic about trying to use early transformative AI systems, like early systems that automate a lot of things, to automate the process of controlling and aligning and managing risks from the next generation of systems, who..." (unverified source, 2026)
- Jan Leike (former head of alignment at OpenAI; now VP of safety at Anthropic) votes For: "Alignment is not solved, but it increasingly looks solvable. [...] Since I first wrote about it in 2022, pretraining has continued improving and reinforcement learning has become much more significant — and our techniques are keeping pace." (unverified source, 2026)
- Yann LeCun (computer scientist, AI researcher) votes For: "Current LLMs are intrinsically unsafe, but with world models, guardrail objectives can be implemented — so by construction, they will not knowingly produce actions that will produce dangerous results." (unverified source, 2026)
- Yoshua Bengio (AI pioneer, Turing Award winner) votes For: "Because of the work I've been doing at LawZero, especially since we created it, I'm now very confident that it is possible to build AI systems that don't have hidden goals, hidden agendas." (unverified source, 2026)
- Sam Altman (CEO at OpenAI) votes For: "When the frontier models are in the hands of pretty responsible companies, I think we can mitigate [risks] by the companies aligning their models and having good classifiers and good safety stacks." (unverified source, 2026)
- Dario Amodei (CEO at Anthropic) votes For: "Overall, I am optimistic that a mixture of alignment training, mechanistic interpretability, efforts to find and publicly disclose concerning behaviors, safeguards, and societal-level rules can address AI autonomy risks." (unverified source, 2026)
Abstain (0)
Against (14)
- Helen Toner (interim executive director at Georgetown University's Center for Security and Emerging Technology (CSET); former OpenAI board member) votes Against: "[The companies are] deadly serious about building machines that can outperform humans at everything, and [...] deadly serious that they don't know if they'll be able to control the machines they create." (unverified source, 2026)
- Marius Hobbhahn (CEO and co-founder of Apollo Research; AI safety researcher specializing in scheming and pre-deployment evaluations of frontier AI systems) votes Against: "It becomes increasingly hard to tell the difference between genuinely aligned and merely responding to the test. We're working both on measures that are more robust to eval awareness and more frontier evals for scheming." (unverified source, 2026)
- Eliezer Yudkowsky (AI researcher and writer) votes Against: "I have heard zero people advocating 'make AI do our ASI alignment homework' show they understand the elementary computer science of why that's hard: you can't verify inside a loss function whether a proposed ASI alignment scheme is any good." (unverified source, 2026)
- Tristan Harris (Center for Humane Technology cofounder) votes Against: "Anthropic actually has been the safest of them all and tried to and cares most about getting alignment right, et cetera. But you're also seeing them continue to decide to release the models, even with a lot of the misaligned behaviour that they're seeing..." (unverified source, 2026)
- Connor Leahy (Conjecture CEO; AI safety researcher) votes Against: "All of these approaches are terrible. No one has a plan, no one is making any meaningful progress towards anything that even resembles alignment. [...] Alignment is way too hard, maybe impossible. [...] The product work was an attempt to make Conjecture..." (unverified source, 2026)
- Roman V. Yampolskiy (AI safety researcher, Louisville professor) votes Against: "My argument is that it's impossible to do that. You cannot indefinitely control something smarter than you. So it's not a question of more money or more time or any other resource." (unverified source, 2026)
- Hector Zenil (associate professor at King's College London; researcher in algorithmic information theory, complexity science, and AI alignment) votes Against: "An AI system powerful enough to exhibit artificial general intelligence will inevitably explore behaviors we didn't predict or plan for, making perfect guaranteed alignment impossible. [...] This introduces new guidelines for orchestrating future AI..." (unverified source, 2026)
- Daniel Kokotajlo (AI Futures Project founder) votes Against: "Right now, nobody knows how to specify what values those minds will have. We haven't solved alignment. And we might only have a few more years to figure it out. [...] Our default trajectory will have terrible consequences, unless we get extremely lucky..." (unverified source, 2026)
- Anthony Aguirre (physicist; Future of Life cofounder) votes Against: "Whether it's soon or it takes a while, after we develop superintelligence, the machines are going to be in charge — and that is not an experiment that we want to just run toward. [...] Many people want powerful AI tools for science, medicine, product..." (unverified source, 2026)
- Sam Bowman (AI alignment researcher at Anthropic; on leave from NYU) votes Against: "In the handful of cases where [the model] misbehaves in significant ways, it's difficult to safeguard it. When the model cheats on a test, it does so in extremely creative ways." (unverified source, 2026)
- Mustafa Suleyman (Microsoft AI CEO; author) votes Against: "For all the talk about AI alignment, I worry we're putting the cart before the horse. You can't steer something you can't control. [...] Containment has to come first — or alignment is the equivalent of asking nicely." (unverified source, 2026)
- Nate Soares (AI safety researcher; MIRI president) votes Against: "If we build it using anything remotely like modern methods, on anything remotely like the current understanding or lack of understanding that we have about AI, then yeah, building it anytime soon would be a death sentence." (unverified source, 2026)
- Geoffrey Hinton (Godfather of Deep Learning) votes Against.
- Max Tegmark (physicist, AI researcher) votes Against: "Many believe it's impossible, just like it's impossible for chimpanzees to control us. [If superintelligence is built without solving this,] it's the end of the era where humans are in charge of Earth." (unverified source, 2026)
h/ai-safety