Sam Altman
Tags: ai (3), ai-safety (3), existential-risk (3), ai-ethics (2), ai-governance (2), ai-regulation (2), ai-risk (2), ai-policy (1), democracy (1), future (1)
Does AI pose an existential threat to humanity?
Sam Altman strongly agrees and says:
Development of superhuman machine intelligence (SMI) [1] is probably the greatest threat to the continued existence of humanity. There are other threats that I think are more certain to happen (for example, an engineered virus with a long incubation period and a high mortality rate) but are unlikely to destroy every human in the universe in the way that SMI could. Also, most of these other big threats are already widely feared.

It is extremely hard to put a timeframe on when this will happen (more on this later), and it certainly feels to most people working in the field that it’s still many, many years away. But it’s also extremely hard to believe that it isn’t very likely that it will happen at some point.

SMI does not have to be the inherently evil sci-fi version to kill us all. A more probable scenario is that it simply doesn’t care about us much either way, but in an effort to accomplish some other goal (most goals, if you think about them long enough, could make use of resources currently being used by humans) wipes us out. Certain goals, like self-preservation, could clearly benefit from no humans. We wash our hands not because we actively wish ill towards the bacteria and viruses on them, but because we don’t want them to get in the way of our plans.

(2015) source Unverified
Should humanity ban the development of superintelligence until there is broad scientific consensus that it will be done safely and controllably and strong public buy-in?
Sam Altman strongly disagrees and says:
But the governance of the most powerful systems, as well as decisions regarding their deployment, must have strong public oversight. We believe people around the world should democratically decide on the bounds and defaults for AI systems. Second, we believe it would be unintuitively risky and difficult to stop the creation of superintelligence. [...] Stopping it would require something like a global surveillance regime, and even that isn’t guaranteed to work. So we have to get it right. (2023) source Verified
Should humanity build artificial general intelligence?
Sam Altman strongly agrees and says:
OpenAI is not a normal company and never will be. Our mission is to ensure that artificial general intelligence (AGI) benefits all of humanity. When we started OpenAI, we did not have a detailed sense for how we were going to accomplish our mission. We now see a way for AGI to directly empower everyone as the most capable tool in human history. If we can do this, we believe people will build incredible things for each other and continue to drive society and quality of life forward. [...] We believe this is the best path forward—AGI should enable all of humanity to benefit each other. It is time for us to evolve our structure. [...] We want to deliver beneficial AGI. (2025) source Unverified