ai-governance (5)
ai-regulation (5)
ai-safety (5)
policy (4)
regulations (4)
ai-policy (3)
ai-risk (3)
research-policy (3)
transparency (3)
ai-ethics (2)
ethics-in-research (2)
ai (1)
data-privacy (1)
existential-risk (1)
international-relations (1)
Anthropic votes For and says:
"If you decide to turn off the model training setting, we will not use any new chats [...] for future model training." Unverified source (2025)
Anthropic votes For and says:
"Importantly, the law balances [...] whistleblower protections—[...]. [...] Protecting whistleblowers: It should be an explicit violation of law [...] punish employees who raise concerns [...]." Unverified source (2025)
Anthropic votes For and says:
"we propose that AI developers pre-register large training runs [...] within their home country’s government [...] plausible bar for now would be 10^26 FLOP or higher." Unverified source (2023)
Anthropic votes For and says:
"The definition, criteria, and safety measures for each ASL level are described in detail in the main document, but at a high level, ASL-2 measures represent our current safety and security standards and overlap significantly with our recent White Hou..." Unverified source (2023)
Anthropic votes For and says:
"Release public transparency reports summarizing their catastrophic risk assessments and the steps taken to fulfill their respective frameworks before deploying powerful new models." Unverified source (2025)
Anthropic votes For and says: