Delegate: choose a list of delegates; your vote follows their majority unless you vote directly.
ai-safety (8)
ai-governance (7)
policy (7)
ai-regulation (6)
ai-policy (5)
regulations (5)
ai-ethics (4)
existential-risk (3)
agi (2)
ai (2)
ai-risk (2)
future (2)
research-policy (2)
cybersecurity (1)
ethics (1)
Sam Altman votes For and says:
We talk about the IAEA as a model where the world has said "OK, very dangerous technology, let's all put some guard rails." Unverified source (2023)
Sam Altman votes For and says:
But then there is a long continuation from what we call AGI to what we call Superintelligence. Unverified source (2024)
Sam Altman votes For and says:
We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. Unverified source (2026)
Sam Altman votes For and says:
In the blueprint, we talk, for instance, about incident reporting that's modeled a little bit after how the aviation industry does things whenever there's kind of a near miss or any incidents, however minor, that kind of gets reported to a database s... Unverified source (2026)
Sam Altman votes For and says:
First, it is vital that AI companies–especially those working on the most powerful models–adhere to [...] testing prior to release and publication of evaluation results. Unverified source (2023)
Sam Altman votes For and says:
Development of superhuman machine intelligence (SMI) [1] is probably the greatest threat to the continued existence of humanity. There are other threats that I think are more certain to happen (for example, an engineered virus with a long incubation ... Unverified source (2015)
Sam Altman votes Against and says: