AI alignment is solvable
For (6)
- Ajeya Cotra (AI safety researcher; senior research analyst at Open Philanthropy) votes For and says: "I'm fundamentally fairly optimistic about trying to use early transformative AI systems, like early systems that automate a lot of things, to automate the process of controlling and aligning and managing risks from the next generation of systems..." (unverified source, 2026)
- Jan Leike (former head of alignment at OpenAI; now VP of safety at Anthropic) votes For and says: "Alignment is not solved, but it increasingly looks solvable. [...] Since I first wrote about it in 2022, pretraining has continued improving and reinforcement learning has become much more significant — and our techniques are keeping pace." (unverified source, 2026)
- Yann LeCun (computer scientist, AI researcher) votes For and says: "Current LLMs are intrinsically unsafe, but with world models, guardrail objectives can be implemented — so by construction, they will not knowingly produce actions that will produce dangerous results." (unverified source, 2026)
- Yoshua Bengio (AI pioneer, Turing Award winner) votes For and says: "Because of the work I've been doing at LawZero, especially since we created it, I'm now very confident that it is possible to build AI systems that don't have hidden goals, hidden agendas." (unverified source, 2026)
- Sam Altman (CEO at OpenAI) votes For and says: "When the frontier models are in the hands of pretty responsible companies, I think we can mitigate [risks] by the companies aligning their models and having good classifiers and good safety stacks." (unverified source, 2026)
- Dario Amodei (CEO at Anthropic) votes For and says: "Overall, I am optimistic that a mixture of alignment training, mechanistic interpretability, efforts to find and publicly disclose concerning behaviors, safeguards, and societal-level rules can address AI autonomy risks." (unverified source, 2026)
Abstain (0)
Against (8)
- Hector Zenil (Associate Professor at King's College London; researcher in algorithmic information theory, complexity science, and AI alignment) votes Against and says: "An AI system powerful enough to exhibit artificial general intelligence will inevitably explore behaviors we didn't predict or plan for, making perfect guaranteed alignment impossible. [...] This introduces new guidelines for orchestrating future AI..." (unverified source, 2026)
- Daniel Kokotajlo (AI Futures Project founder) votes Against and says: "Right now, nobody knows how to specify what values those minds will have. We haven't solved alignment. And we might only have a few more years to figure it out. [...] Our default trajectory will have terrible consequences, unless we get extremely luc..." (unverified source, 2026)
- Anthony Aguirre (physicist; Future of Life Institute cofounder) votes Against and says: "Whether it's soon or it takes a while, after we develop superintelligence, the machines are going to be in charge — and that is not an experiment that we want to just run toward. [...] Many people want powerful AI tools for science, medicine, product..." (unverified source, 2026)
- Sam Bowman (AI alignment researcher at Anthropic; on leave from NYU) votes Against and says: "In the handful of cases where [the model] misbehaves in significant ways, it's difficult to safeguard it. When the model cheats on a test, it does so in extremely creative ways." (unverified source, 2026)
- Mustafa Suleyman (Microsoft AI CEO; author) votes Against and says: "For all the talk about AI alignment, I worry we're putting the cart before the horse. You can't steer something you can't control. [...] Containment has to come first — or alignment is the equivalent of asking nicely." (unverified source, 2026)
- Nate Soares (AI safety researcher; MIRI president) votes Against and says: "If we build it using anything remotely like modern methods, on anything remotely like the current understanding or lack of understanding that we have about AI, then yeah, building it anytime soon would be a death sentence." (unverified source, 2026)
- Geoffrey Hinton (Godfather of Deep Learning) votes Against.
- Max Tegmark (physicist, AI researcher) votes Against and says: "Many believe it's impossible, just like it's impossible for chimpanzees to control us. [If superintelligence is built without solving this,] it's the end of the era where humans are in charge of Earth." (unverified source, 2026)
h/ai-safety