YouCongress Polls
AI might become conscious
35 opinions
- Max Tegmark (Physicist, AI Researcher) votes For: "Our consciousness has to do with information being processed in certain ways and there's no reason whatsoever that machines can't when they do that." Unverified source (2023)
Nations should negotiate a binding international treaty on AI safety, similar to nuclear non-proliferation
30 opinions
- Yuval Noah Harari (Israeli historian and professor at the Hebrew University of Jerusalem) votes Against: "With nuclear weapons, [...] they can't [...] in secret, but with developing new types of AI, it's much easier [...] So it's not enough to have an agreement." Unverified source (2018)
- Fei-Fei Li (Stanford AI professor; HAI co-director) votes Against: "SB-1047 holds liable [...] the original developer of that model. It is impossible [...] to predict every possible use. SB-1047 will force developers to pull back [...]." Unverified source (2024)
- Dario Amodei (Research Scientist at OpenAI) votes For: "This means that in 2026-2027 we could end up in one of two starkly different worlds. In the US, multiple companies will definitely have the required millions of chips (at the cost of tens of billions of dollars). The question is whether China will al[...]" Unverified source (2025)
- Dario Amodei (Research Scientist at OpenAI) votes For: "He also noted the standard should require AI developers to adopt policies for testing models and publicly disclose them [...]" Unverified source (2026)
- Dario Amodei (Research Scientist at OpenAI) votes For: "artificial intelligence companies, including his own, should be subject to mandatory testing requirements to ensure their technologies are safe for the public before release." Unverified source (2024)
AI poses an existential threat to humanity
56 opinions
- Mustafa Suleyman (Microsoft AI CEO; author) votes Against: "I just think that the existential-risk stuff has been [...] a completely bonkers distraction. There's like 101 more practical issues [...] from privacy to bias [...]." Unverified source (2023)
- Sam Altman (CEO at OpenAI) votes For: "We talk about the IAEA as a model where the world has said 'OK, very dangerous technology, let's all put some guard rails.'" Unverified source (2023)
- Scott Alexander (Author and psychiatrist) votes For: "Someday AIs really will be able to make nukes or pull off $500 million hacks. At that point, companies will have to certify that their model has been trained not to do this, and that it will stay trained. But if it were open-source, then anyone could [...]" Verified source (2024)
- Tim Berners-Lee (Inventor of the World Wide Web) votes Against: "That's why we need a CERN-like not-for-profit body driving forward international AI research." Unverified source (2025)
- Nick Bostrom (Philosopher; 'Superintelligence' author; FHI founder) votes For: "Superintelligence could become extremely powerful and be able to shape the future according to its preferences. If humanity had been sane and had our act together globally, the sensible course of action would be to postpone development of superintel[...]" Unverified source (2014)
- Yann LeCun (Computer scientist, AI researcher) votes Against: "Calls for a global A.I. regulator modelled on the IAEA are misguided. Nuclear technology is a narrow, slow-moving domain with obvious materials to track and a small set of state actors; A.I. is a broad, fast-moving field with millions of researchers [...]" Unverified source (2023)
Ban autonomous lethal weapons
58 opinions
- Yoshua Bengio (AI Pioneer, Turing Award winner) votes For: "This risk should further motivate us to redesign the global political system in a way that would completely eradicate wars and thus obviate the need for military organizations and military weapons. [...] It goes without saying that lethal autonomous [...]" Unverified source (2023)
- Demis Hassabis (Nobel laureate, AI Researcher and CEO of DeepMind) votes For: "endowing rogue nations or terrorists with tools to synthesize a deadly virus. [...] keep the 'weights' of the most powerful models out of the public's hands." Unverified source (2025)
Mandate third-party audits for major AI systems
63 opinions
- Yoshua Bengio (AI Pioneer, Turing Award winner) votes For: "This would require comprehensive evaluation of potential harm through independent audits." Unverified source (2023)