ai-governance (16)
policy (15)
ai-regulation (13)
regulations (11)
ai-safety (9)
ai-policy (8)
ai-ethics (5)
ai (5)
transparency (4)
international-relations (3)
law (3)
ai-risk (3)
existential-risk (3)
research-policy (3)
ethics (3)
Dario Amodei votes For and says:
I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries. Anthropic has therefore worked proactively to deploy our models to the Department of War and the i... Unverified source (2026)
Dario Amodei votes For and says:
It is mortgaging our future as a country to sell these chips to China. Unverified source (2025)
Dario Amodei votes For and says:
It is somewhat awkward to say this as the CEO of an AI company, but I think the next tier of risk is actually AI companies themselves. [...] AI companies control massive data centers, train the most advanced models, and possess unmatched expertise in... Unverified source (2026)
Dario Amodei votes For and says:
I think we absolutely have to make the testing mandatory, but we also need to be really careful about how we do it. Unverified source (2024)
Dario Amodei votes For and says:
He also noted the standard should require AI developers to adopt policies for testing models and publicly disclose them [...] Unverified source (2026)
Dario Amodei votes For and says:
We need both a way to frequently monitor these emerging risks, and a protocol for responding appropriately when they occur. Autonomous AI systems that escape human control pose a significant threat to society. Unverified source (2023)
Dario Amodei votes For and says:
This means that in 2026-2027 we could end up in one of two starkly different worlds. In the US, multiple companies will definitely have the required millions of chips (at the cost of tens of billions of dollars). The question is whether China will al... Unverified source (2025)
Dario Amodei votes For and says:
The right place to start is with transparency legislation, which essentially tries to require that every frontier AI company engage in [safety and transparency] practices. California's SB 53 and New York's RAISE Act are examples of this kind of legis... Unverified source (2026)
Dario Amodei votes For and says:
We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values. AI-driven mass surveillance presents serious, novel risks to our... Unverified source (2026)
Dario Amodei votes For and says:
[A]rtificial intelligence companies, including his own, should be subject to mandatory testing requirements to ensure their technologies are safe for the public before release. Unverified source (2024)
Dario Amodei votes Against and says:
The idea of stopping or even substantially slowing the technology is fundamentally untenable. If one company does not build it, others will do so nearly as fast. [...] Even if all Western companies stopped their work on AI, authoritarian countries wo... Unverified source (2026)