YouCongress Polls
Expert and Citizen Preferences on AI Governance
Sourced quotes and a participation layer keep expert and citizen preferences in plain view.
-
For (17): Rishi Bommasani, Society Lead at Stanford's Center for Research on Foundation Models; lead author of the Foundation Model Transparency Index and the California Report on Frontier AI Policy:
"The law will bring much-needed disclosure to the AI industry. [...] You can write whatever law in theory, but the practical impact of it is heavily shaped by how you implement it, how you enforce it, and how the company is engaged with it." (Unverified source, 2026)

Against (1): David Sacks, White House AI and crypto czar:
"If you have to report to 50 different state regulators at 50 different times with 50 different definitions, it's extremely onerous. And it's going to slow down innovation, and it's going to hinder our progress in the AI race." (Unverified source, 2025)
-
Nations should negotiate a binding international treaty on AI safety, similar to nuclear non-proliferation
40 opinions
For (35): David Lammy, UK Deputy Prime Minister and Secretary of State for Foreign, Commonwealth and Development Affairs:
"This summit is an important moment in determining how we can work together with our international partners to unlock the full benefits and potential of AI, while baking in robust and fair safety standards that protect us all." (Unverified source, 2026)

Against (5): Yann LeCun, computer scientist and AI researcher:
"The question that people are debating is whether it makes sense to regulate research and development of AI. And I don't think it does." (Unverified source, 2023)

-
AI alignment is solvable
21 opinions
For (9): David Dalrymple, AI safety researcher; Programme Director, Safeguarded AI at ARIA (UK):
"In 2024 I would have said it's about 40-50% likely that LLMs scaled up to ASI would end up killing us all; now I would say that it's only about 5-8% likely even with no additional progress on alignment, and more like 1-2% likely simpliciter." (Unverified source, 2026)

Against (12): Eliezer Yudkowsky, AI researcher and writer:
"I have heard zero people advocating 'make AI do our ASI alignment homework' show they understand the elementary computer science of why that's hard: you can't verify inside a loss function whether a proposed ASI alignment scheme is any good." (Unverified source, 2026)

-
For (45): Geoffrey Hinton, Godfather of Deep Learning:
"If you ever went out with a car that had no brake, boy, you are in trouble if you go down a hill. But you're in even more trouble if there's no steering wheel." (Unverified source, 2026)

Against (6): Eric Schmidt, former Google CEO and tech investor:
"A [central] problem with regulating frontier AI models is that a new feature emerges in these systems that is not tested, testable. [...] As long as you have this new emergent power, you have deep reasoning, deep capabilities, and they will make mist..." (Unverified source, 2026)
-
States should retain the right to set stricter AI safety standards than the federal government
26 opinions
For (21): Ed Markey, U.S. senator from Massachusetts:
"President Trump is continuing to repay Big Tech's campaign donations by proposing to block states from protecting their communities from AI-related harms. [...] [We need the States' Right to Regulate AI Act to] put power back into the hands of people..." (Unverified source, 2026)

Against (5): Daniel Castro, Director, Center for Data Innovation:
"The United States cannot remain competitive if developers, businesses, and users face fifty different legal regimes governing a general-purpose technology. [...] Congress should take this recommendation seriously and establish a light-touch national ..." (Unverified source, 2026)

-
Governments should fund open-source AI safety tools and red-teaming infrastructure as public goods.
10 opinions
For (10): Jade Leung, Chief Technology Officer of the UK AI Security Institute; Prime Minister's AI Adviser; previously led the Governance team at OpenAI.

-
For (17): Donald J. Trump, U.S. President (2025–present):
"[Could AI undermine confidence in the banking system?] Yeah, probably. But it could also be the kind of technology that allows greatness in the banking system, makes it better and safer and more secure. [...] [Should the government have safeguards on..." (Unverified source, 2026)

Abstain (1): Sam Altman, CEO at OpenAI:
"There's not one big magic red button that blows up the data centre, which I think some people sort of assume exists." (Verified source, 2026)

Against (1): Eran Kahana, AI and cybersecurity lawyer; Fellow at Stanford CodeX; Adjunct Professor, University of Minnesota Law School:
"It needs only an optimization objective that treats shutdown as one more obstacle between the current state and the goal." (Unverified source, 2026)
-
For (45): Marc Benioff, CEO and co-founder of Salesforce:
"This year, you really saw something pretty horrific, which is these AI models became suicide coaches. [...] Tech companies love Section 230, which basically says they're not responsible. So if this large language model coaches this child into suicide..." (Unverified source, 2026)

Against (9): Adam Thierer, R Street Institute senior fellow:
"Competition, innovation, and speech options will be limited as a result of these moves." (Unverified source, 2026)
-
For (26): Max Tegmark, physicist and AI researcher:
"There are some digital products (say CSAM and AI letting terrorists make bioweapons) that I oppose regardless of whether they are open-source or not, but I'm overall supportive of open source, and you can easily verify that my MIT research group defa..." (Unverified source, 2026)

Abstain (6): Moritz Hanke, biosecurity researcher at Johns Hopkins Center for Health Security; expert on AI-enabled biological threats:
"In a time dominated by open-weight biological AI models developed across the globe, limiting access to sensitive pathogen data to legitimate researchers might be one of the most promising avenues for risk reduction." (Unverified source, 2026)

Against (14): Tal Feldman, Lawfare contributor:
"Since computational infrastructure is largely open-access, decentralized, and global, regulatory chokepoints are limited. Export controls may delay access to high-performance computing, but they are unlikely to prevent the use of open-source models f..." (Verified source, 2025)
-
AGI could quickly lead to superintelligence
29 opinions
For (19): Elon Musk, founder of SpaceX; cofounder of Tesla, SolarCity & PayPal:
"We might have AI that is smarter than any human by end of this year, and no later than next year. And probably 2030 or 2031 -- 5 years from now -- AI will be smarter than all of humanity collectively. [...] I said years ago that humans are just the '..." (Unverified source, 2026)

Abstain (1): Roman V. Yampolskiy, AI safety researcher; Louisville professor:
"Until some company or scientist says ‘Here’s the proof! We can definitely have a safety mechanism that can scale to any level of intelligence,’ I don’t think we should be developing those general superintelligences. We can get most of the benefits w..." (Unverified source, 2024)

Against (9): Carl Benedikt Frey, Associate Professor of AI & Work at the Oxford Internet Institute; Director of the Future of Work Program at the Oxford Martin School; author of The Technology Trap and How Progress Ends:
"For the last-invention story to hold, people would have to become unnecessary even as partners or supervisors to AIs. [...] We would need a world where practical know-how is fully transferable through digital channels and where responsibility can be ..." (Unverified source, 2026)

-
For (30): Connor Leahy, Conjecture CEO; AI safety researcher:
"The primary prerequisite to even considering starting to work on a safe ASI plan is to have a global ASI ban and powerful enforcement already in place. Unsafe ASI is vastly easier to build than controlled ASI, and is on the same tech path. [...] The ..." (Unverified source, 2026)

Against (17): Nick Bostrom, philosopher; author of Superintelligence; FHI founder:
"Developing superintelligence is not like playing Russian roulette; it is more like undergoing risky surgery for a condition that will otherwise prove fatal. [...] Models incorporating safety progress, temporal discounting, quality-of-life differentia..." (Unverified source, 2026)
-
For (30): Julius Adebayo, CEO and co-founder of Guide Labs; MIT PhD in Computer Science (interpretability and algorithmic fairness); former Google Brain resident:
"The way we're currently training models is super primitive, and so democratizing inherent interpretability is actually going to be a long-term good thing for our role within the human race. As we're going after these models that are going to be super..." (Unverified source, 2026)

Against (12): Scott Robbins, AI ethics researcher:
"[...] principles requiring that AI be explicable are misguided. We should be deciding which decisions require explanations. Automation is still an option..." (Unverified source, 2019)
-
For (10): Dame Wendy Hall, computer science professor, UK:
"The thought of open source AGI being released before we have worked out how to regulate these very powerful AI systems is really very scary." (Unverified source, 2024)

Abstain (2): Stanford Center for Research on Foundation Models, Stanford research center on foundation model AI:
"These studies, on their own, are insufficient evidence to demonstrate increased marginal societal risk from open foundation models." (Unverified source, 2024)

Against (21): Rob Thomas, Senior Vice President of IBM Software and Chief Commercial Officer:
"Open source does not eliminate risk. It changes how risk is managed. It allows more researchers, developers, and defenders to examine systems, test assumptions, surface weaknesses, and harden code under real-world conditions. [...] If frontier models..." (Unverified source, 2026)
-
AI poses an existential threat to humanity
60 opinions
For (37): Helen Toner, AI policy expert and TED speaker:
"I think it is very important that the U.S. AI sector remains ahead of the Chinese AI sector, but if that's at the expense of AI overrunning the entire planet, then that hasn't benefitted us." (Unverified source, 2026)

Against (23): Milton Mueller, Professor at Georgia Tech School of Public Policy; internet governance scholar and co-founder of the Internet Governance Project:
"Computer scientists often aren't good judges of the social and political implications of technology. They are so focused on the AI's mechanisms and are overwhelmed by its success, but they are not very good at placing it into a social and historical ..." (Unverified source, 2026)

-
Require human-in-the-loop oversight for agentic AI systems acting in high-stakes domains
41 opinions
For (31): Surya Kant, 53rd Chief Justice of India (since November 2025), Supreme Court of India:
"The final stage of the judicial process, pronouncement of judgments, must remain firmly in human hands." (Unverified source, 2026)

Against (10): Uri Maoz, cognitive and computational neuroscientist; Associate Professor at Chapman University with appointments at UCLA and Caltech:
"The immediate danger is not that machines will act without human oversight; it is that human overseers have no idea what the machines are actually 'thinking.' State-of-the-art AI systems are essentially 'black boxes.' We know the inputs and outputs, ..." (Unverified source, 2026)