Tags: ai (3), ai-governance (2), ai-policy (2), ai-regulation (2), research-policy (2), agi (1), ai-ethics (1), ai-risk (1), ai-safety (1), cern-for-ai (1), existential-risk (1), innovation-policy (1), international-relations (1), nuclear (1), open-source (1)
- OpenAI votes For and says: "Second, we are likely to eventually need something like an IAEA for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, …" (unverified source, 2023)
- OpenAI votes For and says: "Our Preparedness Framework defines capability thresholds that could lead to severe risk, and risk-specific guidelines to sufficiently minimize the risk of severe harm. For biology, a model that meets a High capability threshold could provide meaningful …" (unverified source, 2025)
- OpenAI votes For and says: "OpenAI's mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial …" (unverified source, 2018)