Sam Altman
Tags: ai-safety (5), ai (2), ai-governance (2), existential-risk (2), ai-alignment (1), ai-regulation (1), cern-for-ai (1), research-policy (1)
Create a global institute for AI, similar to CERN
Sam Altman votes For and says:
We talk about the IAEA as a model where the world has said 'OK, very dangerous technology, let's all put some guard rails.' Unverified source (2023)
AI poses an existential threat to humanity
Sam Altman votes For and says:
Development of superhuman machine intelligence (SMI) [1] is probably the greatest threat to the continued existence of humanity. There are other threats that I think are more certain to happen (for example, an engineered virus with a long incubation ...) Unverified source (2015)
Build artificial general intelligence
Sam Altman votes For and says:
OpenAI is not a normal company and never will be. Our mission is to ensure that artificial general intelligence (AGI) benefits all of humanity. When we started OpenAI, we did not have a detailed sense for how we were going to accomplish our mission. ... Unverified source (2025)
Could AGI quickly lead to superintelligence?
Sam Altman votes Against and says:
But then there is a long continuation from what we call AGI to what we call Superintelligence. Unverified source (2024)
Ban superintelligence development until safety consensus is reached
Sam Altman votes Against and says: