YouCongress Polls
Demis Hassabis (Nobel laureate, AI Researcher and CEO of DeepMind) votes For and says: endowing rogue nations or terrorists with tools to synthesize a deadly virus. [...] keep the “weights” of the most powerful models out of the public’s hands. (Unverified source, 2025)
Ban autonomous lethal weapons
58 opinions
Yoshua Bengio (AI Pioneer, Turing Award winner) votes For and says: This risk should further motivate us to redesign the global political system in a way that would completely eradicate wars and thus obviate the need for military organizations and military weapons. [...] It goes without saying that lethal autonomous... (Unverified source, 2023)
Mandate third-party audits for major AI systems
63 opinions
California State Legislature (State legislative body) votes For and says: (e) (1) Beginning January 1, 2026, a developer of a covered model shall annually retain a third-party auditor that conducts audits consistent with best practices for auditors to perform an independent audit of compliance with the requirements of this... (Unverified source, 2024)
Nick Bostrom (Philosopher; 'Superintelligence' author; FHI founder) votes For and says: Superintelligence could become extremely powerful and be able to shape the future according to its preferences. If humanity had been sane and had our act together globally, the sensible course of action would be to postpone development of superintel... (Unverified source, 2014)
Ban predictive policing
27 opinions
Corey O’Connor (Pittsburgh City Council member) votes For and says: These are things we are seeing -- a trend across the country that the technology is not up to speed enough and people are getting arrested that should not be. [...] You use crime data and you use mathematics to basically say where to put police. It'... (Unverified source, 2020)
Gillian Hadfield (Legal scholar and AI governance researcher; Bloomberg Distinguished Professor of AI Alignment and Governance at Johns Hopkins University; Canada CIFAR AI Chair, Vector Institute) votes For and says: AI labs need to walk the walk when it comes to safety. But they’re spending far less on safety than they spend on creating more capable AI systems. Spending one-third on ensuring safety and ethical use should be the minimum. (Verified source, 2024)
Yoshua Bengio (AI Pioneer, Turing Award winner) votes For and says: Ideally this would stay in the hands of neutral international organizations (think of a combination of IAEA and CERN for AI) that develop safe and beneficial AI [...] safe superhuman AI (Verified source, 2023)
Yann LeCun (Computer scientist, AI researcher) votes Against and says: Calls for a global A.I. regulator modelled on the IAEA are misguided. Nuclear technology is a narrow, slow‑moving domain with obvious materials to track and a small set of state actors; A.I. is a broad, fast‑moving field with millions of researchers... (Unverified source, 2023)
Sam Gregory (Executive Director, WITNESS) votes For and says: Transparency [...] is a critical area. [...] Embed human rights standards and a rights-based approach in the response to AI. [...] Place firm responsibility on stakeholders [...]. (Unverified source, 2023)
Demis Hassabis (Nobel laureate, AI Researcher and CEO of DeepMind) votes For and says: We must take the risks of AI as seriously as other major global challenges, like climate change. It took the international community too long to coordinate an effective global response to this, and we’re living with the consequences of that now. We c... (Unverified source, 2023)
Electoral Commission (United Kingdom), the UK's independent elections regulator, votes For and says: We do not regulate the content of campaign material. However, we encourage all campaigners to carry out their role of influencing voters in a responsible and transparent manner. Some campaigners may use generative artificial intelligence (AI) to cre... (Unverified source, 2024)
Build artificial general intelligence
49 opinions
Nick Bostrom (Philosopher; 'Superintelligence' author; FHI founder) votes For and says: I think broadly speaking with AI, … rather than coming up with a detailed plan and blueprint in advance, we’ll have to kind of feel our way through this and make adjustments as we go along as new opportunities come into view. […] I think ultimately t... (Unverified source, 2025)
Repeal the EU AI Act
36 opinions
Brando Benifei (Italian MEP, AI Act co-rapporteur) votes Against and says: We finally have the world’s first binding law on artificial intelligence, to reduce risks, create opportunities, combat discrimination, and bring transparency. Thanks to Parliament, unacceptable AI practices will be banned in Europe and the rights of... (Unverified source, 2024)
Scott Alexander (Author and psychiatrist) votes For and says: Someday AIs really will be able to make nukes or pull off $500 million hacks. At that point, companies will have to certify that their model has been trained not to do this, and that it will stay trained. But if it were open-source, then anyone could... (Verified source, 2024)
Participate in shaping the future of AI
33 opinions
Eliezer Yudkowsky (AI researcher and writer) votes Against and says: The key issue is not “human-competitive” intelligence (as the open letter puts it); it’s what happens after AI gets to smarter-than-human intelligence. Key thresholds there may not be obvious, we definitely can’t calculate in advance what happens whe... (Unverified source, 2023)