YouCongress Polls
Expert and Citizen Preferences on AI Governance
Sourced quotes and a participation layer keep expert and citizen preferences in plain view. For each statement, participants can either vote directly or choose a list of delegates and adopt the majority position of those delegates; a direct vote always takes precedence.
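The delegation rule above is a simple resolution procedure. Below is a minimal sketch of how such a rule could be computed; the function name, vote labels, and tie handling are illustrative assumptions, not YouCongress's actual implementation.

# Hypothetical sketch of a "vote as the majority of your delegates" rule.
# Labels and tie-breaking are assumptions, not the site's real behavior.
from collections import Counter
from typing import Optional

def resolve_vote(direct_vote: Optional[str], delegate_votes: list[Optional[str]]) -> Optional[str]:
    """Return 'for', 'against', or 'abstain'; None if no position can be derived."""
    if direct_vote is not None:
        return direct_vote                      # a direct vote always overrides delegation
    cast = [v for v in delegate_votes if v]     # ignore delegates with no position
    if not cast:
        return None
    counts = Counter(cast)
    top, top_n = counts.most_common(1)[0]
    tied = [v for v, n in counts.items() if n == top_n]
    return top if len(tied) == 1 else None      # a tie leaves the voter without a position

# Example: no direct vote, two of three delegates vote "for" -> resolved as "for".
assert resolve_vote(None, ["for", "for", "against"]) == "for"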
-
States should retain the right to set stricter AI safety standards than the federal government
22 opinions
For (17): Doris Matsui, U.S. Representative for California's 7th Congressional District, votes For and says: "Republicans keep trying to strip states of the ability to enact commonsense AI safeguards, at a time when there are no meaningful federal protections in place. President Trump's executive order is illegal coercion: it threatens states with costly laws..." Unverified source (2026)
Against (5): Daniel Castro, Director, Center for Data Innovation, votes Against and says: "The United States cannot remain competitive if developers, businesses, and users face fifty different legal regimes governing a general-purpose technology. [...] Congress should take this recommendation seriously and establish a light-touch national..." Unverified source (2026)
-
Require human-in-the-loop oversight for agentic AI systems acting in high-stakes domains
37 opinions
For (28): Virginia Dignum, Professor of Responsible Artificial Intelligence at Umeå University and AI ethics expert, votes For and says: "The central risk is not that machines will outthink us, but that humans will abdicate responsibility. By framing AI as an autonomous decision-maker, especially in high-stakes domains, we risk normalizing the idea that accountability can be delegated..." Unverified source (2026)
Against (9): Mikey Dickerson, founding administrator of the U.S. Digital Service and crisis engineer at Layer Aleph, votes Against and says: "A 'human in the loop' whose sole function is to approve a machine's actions is not a safeguard but a design failure. [...] In high-stakes environments, the illusion of human oversight is worse than no oversight at all. It creates confidence without c..." Unverified source (2026)
-
Nations should negotiate a binding international treaty on AI safety, similar to nuclear non-proliferation
36 opinions
For (30): Yoshua Bengio, AI pioneer and Turing Award winner, votes For and says: "It is in the rational interest of various countries to make sure we end up with an international agreement." Unverified source (2026)
Against (6): Yuval Noah Harari, Israeli historian and professor at the Hebrew University of Jerusalem, votes Against and says: "With nuclear weapons, [...] they can't [...] in secret, but with developing new types of AI, it's much easier [...] So it's not enough to have an agreement." Unverified source (2018)
-
For (27): Neil deGrasse Tyson, astrophysicist, author, and science communicator, votes For and says: "That branch of AI is lethal. We've got to do something about that. Nobody should build it. And everyone needs to agree to that by treaty. Treaties are not perfect, but they are the best we have as humans." Unverified source (2026)
Against (16): Dario Amodei, CEO at Anthropic, votes Against and says: "The idea of stopping or even substantially slowing the technology is fundamentally untenable. If one company does not build it, others will do so nearly as fast. [...] Even if all Western companies stopped their work on AI, authoritarian countries wo..." Unverified source (2026)
-
For (26): Max Tegmark, physicist and AI researcher, votes For and says: "There are some digital products (say CSAM and AI letting terrorists make bioweapons) that I oppose regardless of whether they are open-source or not, but I'm overall supportive of open source, and you can easily verify that my MIT research group defa..." Unverified source (2026)
Abstain (5): Jonas B. Sandbrink, biosecurity researcher, University of Oxford, abstains and says: "LLMs, such as GPT-4 and its successors, might provide dual-use information and thus remove some barriers encountered by historical biological weapons efforts. [...] BDTs may enable the creation of pandemic pathogens substantially worse than anything..." Verified source (2023)
Against (14): Tal Feldman, Lawfare contributor, votes Against and says: "Since computational infrastructure is largely open-access, decentralized, and global, regulatory chokepoints are limited. Export controls may delay access to high-performance computing, but they are unlikely to prevent the use of open-source models f..." Verified source (2025)
-
AI poses an existential threat to humanity
56 opinions
For (34): Stuart J. Russell, AI expert and professor, votes For and says: "For governments to allow private entities to essentially play Russian roulette with every human being on earth is, in my view, a total dereliction of duty." Unverified source (2026)
Against (22): Jensen Huang, Nvidia cofounder and CEO, votes Against and says: "When 90% of the messaging is all around the end of the world and the pessimism... we're scaring people from making the investments in AI. It's not helpful. It's not helpful to people. It's not helpful to the industry. It's not helpful to society. It's..." Unverified source (2026)
-
For (39): Chris Painter, Policy Director at METR (Model Evaluation & Threat Research), who works on frontier AI safety evaluations and third-party risk assessment, votes For and says: "I like the emphasis on transparent risk reporting and publicly verifiable safety roadmaps [but am] concerned that moving away from binary thresholds might enable a 'frog-boiling' effect, where danger slowly ramps up without a single moment that sets..." Unverified source (2026)
Against (5): Neil Chilson, AI policy head, Abundance Institute, votes Against and says: "Those compliance costs are merely the beginning. The bill, if passed, would feed California regulators truckloads of company information that they will use to design a compliance industrial complex." Unverified source (2025)
-
For (14): Elham Tabassi, Director of the AI and Emerging Technology Initiative at the Brookings Institution, former Chief AI Advisor at NIST and CTO of the U.S. AI Safety Institute, votes For and says: "Most risk management or governance looks at pre-deployment. But the majority of incidents we have to worry about cannot be reliably predicted before deployment. [...] We need standardised definitions of incidents and accidents." Unverified source (2026)
Against (1): David Sacks, White House AI and crypto czar, votes Against and says: "If you have to report to 50 different state regulators at 50 different times with 50 different definitions, it's extremely onerous. And it's going to slow down innovation, and it's going to hinder our progress in the AI race." Unverified source (2025)
-
For (28): Yoshua Bengio, AI pioneer and Turing Award winner, votes For and says: "I'm now very confident that it is possible to build AI systems that don't have hidden goals, hidden agendas. [...] A Scientist AI would be trained to give truthful answers based on transparent, probabilistic reasoning." Unverified source (2026)
Against (12): Scott Robbins, AI ethics researcher, votes Against and says: "[...] principles requiring that AI be explicable are misguided. We should be deciding which decisions require explanations. Automation is still an option..." Unverified source (2019)
-
For (37): Dario Amodei, CEO at Anthropic, votes For and says: "It is somewhat awkward to say this as the CEO of an AI company, but I think the next tier of risk is actually AI companies themselves. [...] AI companies control massive data centers, train the most advanced models, and possess unmatched expertise in..." Unverified source (2026)
-
For (32): Centre for Future Generations, a European think tank on emerging technologies, votes For and says: "Policy and safeguards must evolve just as swiftly [...] requiring safety assurances for powerful open releases before the window for meaningful intervention closes." Unverified source
Abstain (1): Gavin Newsom, Governor of California, abstains and says: "I do not believe this is the best approach to protecting the public from real threats posed by the technology." Unverified source (2024)
Against (11): Gerald Kierce, AI governance researcher, votes Against and says: "Open source models are technically exempt [...] based on the assumption that the system is sufficiently well documented to convey its potential risks." Unverified source (2024)
-
For (5): Future of Life Institute, a nonprofit on existential risks, votes For and says: "We commend the BIS rule's inclusion of a provision which requires the reporting [...] computing power greater than 10^26 integer or floating-point operations." Unverified source (2024)
Abstain (5): Mayer Brown, an international law firm, abstains and says: "According to the proposed rule, a covered US person will be required to provide quarterly reports to BIS if the US person 'engages in, or plans, within six months, to engage in applicable activities.' 'Applicable activities' include: 'Conducting a...'" Unverified source (2024)
-
AGI could quickly lead to superintelligence
26 opinions
For (16): Tim Rocktäschel, AI researcher, votes For and says: "Once AI reaches human-level capabilities, we will be able to use it to improve itself in a self-referential way. I personally believe that if we can reach AGI, we will reach ASI shortly, maybe a few years after that." Unverified source
Abstain (1): Roman V. Yampolskiy, AI safety researcher and Louisville professor, abstains and says: "Until some company or scientist says 'Here's the proof! We can definitely have a safety mechanism that can scale to any level of intelligence,' I don't think we should be developing those general superintelligences. We can get most of the benefits w..." Unverified source (2024)
Against (9): Rodney Brooks, roboticist and former MIT CSAIL director, votes Against and says: "My own opinion is that of course this is possible in principle. I would never have started working on Artificial Intelligence if I did not believe that. [...] Even if it is possible I personally think we are far, far further away from understanding..." Unverified source
-
For (17): Coalition for a Baruch Plan for AI (CBPAI), an AI governance advocacy coalition, votes For and says: "We will refer to such a lab as the Global Public Benefit AI Lab (or 'Lab'). The Lab will be an open, public-private, democratically-governed consortium, aimed at achieving and sustaining a solid global leadership or co-leadership in human-controllab..." Unverified source
Abstain (2): Akash Wasil, Lawfare contributing writer, abstains and says: "To mitigate the 'race to God-like AI,' Ian Hogarth, chair of the U.K. AI Safety Institute, proposed an 'Island model,' in which a joint international lab performs research on superintelligence in a highly secure facility. An essential part of this pro..." Unverified source (2024)
Against (7): Tom Davidson, AI governance researcher, votes Against and says: "The prototypical situation I'm imagining here is that there's one AI project which is somewhat ahead of the others, and maybe it goes through an intelligence explosion, by which I mean AI can autom..." Unverified source
-
For (10): Dame Wendy Hall, computer science professor, UK, votes For and says: "The thought of open source AGI being released before we have worked out how to regulate these very powerful AI systems is really very scary." Unverified source (2024)
Abstain (2): Stanford Center for Research on Foundation Models, a Stanford research center on foundation model AI, abstains and says: "These studies, on their own, are insufficient evidence to demonstrate increased marginal societal risk from open foundation models." Unverified source (2024)
Against (20): Brittany Kaiser, Cambridge Analytica whistleblower and data rights activist, votes Against and says: "You can see exactly what is being done, because all the code is published so it can be publicly audited." Unverified source (2025)