Delegate
Choose a list of delegates and vote as their majority does, unless you vote directly.
policy (17)
ai-governance (16)
ai-regulation (15)
regulations (11)
ai-policy (10)
ai (10)
ai-safety (10)
ai-ethics (6)
ai-risk (5)
existential-risk (5)
transparency (4)
agi (3)
public-interest-ai (3)
international-relations (3)
impact-on-labor (3)
Dario Amodei votes For and says:
I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries. Anthropic has therefore worked proactively to deploy our models to the Department of War and the i... Unverified source (2026)
Dario Amodei votes For and says:
It is mortgaging our future as a country to sell these chips to China. Unverified source (2025)
Dario Amodei votes For and says:
I don’t know exactly when it’ll come, I don’t know if it’ll be 2027. I think it’s plausible it could be longer than that. I don’t think it will be a whole bunch longer than that when AI systems are better than humans at almost everything. Better than... Unverified source (2025)
Dario Amodei votes For and says:
It is somewhat awkward to say this as the CEO of an AI company, but I think the next tier of risk is actually AI companies themselves. [...] AI companies control massive data centers, train the most advanced models, and possess unmatched expertise in... Unverified source (2026)
Dario Amodei votes For and says:
I think we absolutely have to make the testing mandatory, but we also need to be really careful about how we do it. Unverified source (2024)
Dario Amodei votes For and says:
If AI creates huge total wealth, a lot of that will, by default, go to the AI companies and less to ordinary people. So, you know, it's definitely not in my economic interest to say that, but I think this is something we should consider and I think i... Unverified source (2025)
Dario Amodei votes For and says:
We don't know if the models are conscious. We are not even sure that we know what it would mean for a model to be conscious or whether a model can be conscious. [...] But we're open to the idea that it could be. Unverified source (2026)
Dario Amodei votes For and says:
He also noted the standard should require AI developers to adopt policies for testing models and publicly disclose them [...] Unverified source (2026)
Dario Amodei votes For and says:
[...] artificial intelligence companies, including his own, should be subject to mandatory testing requirements to ensure their technologies are safe for the public before release. Unverified source (2024)
Dario Amodei votes For and says:
The right place to start is with transparency legislation, which essentially tries to require that every frontier AI company engage in [safety and transparency] practices. California's SB 53 and New York's RAISE Act are examples of this kind of legis... Unverified source (2026)
Dario Amodei votes For and says:
We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values. AI-driven mass surveillance presents serious, novel risks to our... Unverified source (2026)
Dario Amodei votes For and says:
We need both a way to frequently monitor these emerging risks, and a protocol for responding appropriately when they occur. Autonomous AI systems that escape human control pose a significant threat to society. Unverified source (2023)
Dario Amodei votes For and says:
This means that in 2026-2027 we could end up in one of two starkly different worlds. In the US, multiple companies will definitely have the required millions of chips (at the cost of tens of billions of dollars). The question is whether China will al... Unverified source (2025)
Dario Amodei votes For and says:
There's a 25% chance that the future of AI will go “really, really badly.” Unverified source (2025)
Dario Amodei votes Against and says:
New technologies often bring labor market shocks, and in the past humans have always recovered from them, but I am concerned that this is because these previous shocks affected only a small fraction of the full possible range of human abilities, leav... Unverified source (2026)
Dario Amodei votes Against and says:
Two “extreme” positions both seem false to me. First, you might think that the world would be instantly transformed on the scale of seconds or days (“the Singularity”), as superior intelligence builds on itself and solves every possible scientific, e... Unverified source (2024)
Dario Amodei votes Against and says:
The idea of stopping or even substantially slowing the technology is fundamentally untenable. If one company does not build it, others will do so nearly as fast. [...] Even if all Western companies stopped their work on AI, authoritarian countries wo... Unverified source (2026)