Delegate
Choose a list of delegates; your vote follows their majority unless you vote directly.
Tags: ai-governance (3), ai-safety (3), existential-risk (3), policy (3), agi (2), ai-policy (2), ai-regulation (2), ai-risk (2), regulations (2), ai (1), ai-ethics (1), cern-for-ai (1), ethics (1), future (1), nuclear (1)
OpenAI votes For and says:
"First, we need some degree of coordination among the leading development efforts to ensure that the development of superintelligence occurs in a manner that allows us to both maintain safety and help smooth integration of these systems with society. …" (unverified source, 2023)
OpenAI votes For and says:
"Our Preparedness Framework defines capability thresholds that could lead to severe risk, and risk-specific guidelines to sufficiently minimize the risk of severe harm. For biology, a model that meets a High capability threshold could provide meaningful …" (unverified source, 2025)
OpenAI votes Against and says:
"We are likely to eventually need something like an IAEA for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require …" (unverified source, 2023)