Sam Altman
ai (4)
ai-governance (4)
ai-policy (4)
ai-regulation (3)
ai-safety (3)
ai-ethics (2)
ai-risk (2)
future (2)
public-interest-ai (2)
digital-democracy (1)
existential-risk (1)
international-relations (1)
transparency (1)
trust-in-ai (1)
- Should third-party audits be mandatory for major AI systems?

- Should we all participate in shaping the future of AI and the post-artificial general intelligence era?
  Sam Altman strongly agrees and says: "People need to have agency, the ability to influence this. They need, we need to sort of jointly be architects of the future." (source unverified)

- Should humanity build artificial general intelligence?
  Sam Altman strongly agrees and says: "OpenAI is not a normal company and never will be. Our mission is to ensure that artificial general intelligence (AGI) benefits all of humanity. When we started OpenAI, we did not have a detailed sense for how we were going to accomplish our mission. We now see a way for AGI to directly empower everyone as the most capable tool in human history. If we can do this, we believe people will build incredible things for each other and continue to drive society and quality of life forward. [...] We believe this is the best path forward—AGI should enable all of humanity to benefit each other. It is time for us to evolve our structure. [...] We want to deliver beneficial AGI." (2025, source unverified)

- Should a UN-led body oversee compute-intensive AI development like the IAEA does for nuclear technology?
  Sam Altman agrees and says: "We talk about the IAEA as a model where the world has said 'OK, very dangerous technology, let's all put some guard rails.'" (2023, source unverified)