Sam Altman
ai (3)
ai-ethics (3)
ai-governance (2)
ai-policy (2)
ai-risk (2)
ai-safety (2)
existential-risk (2)
future (2)
ai-regulation (1)
digital-democracy (1)
public-interest-ai (1)
Should we all participate in shaping the future of AI and the post-artificial general intelligence era?
Sam Altman strongly agrees and says:
"People need to have agency, the ability to influence this. They need, we need to sort of jointly be architects of the future." (source unverified)
Does AI pose an existential threat to humanity?
Sam Altman strongly agrees and says:
"Development of superhuman machine intelligence (SMI) [1] is probably the greatest threat to the continued existence of humanity. There are other threats that I think are more certain to happen (for example, an engineered virus with a long incubation period and a high mortality rate) but are unlikely to destroy every human in the universe in the way that SMI could. Also, most of these other big threats are already widely feared.

It is extremely hard to put a timeframe on when this will happen (more on this later), and it certainly feels to most people working in the field that it's still many, many years away. But it's also extremely hard to believe that it isn't very likely that it will happen at some point.

SMI does not have to be the inherently evil sci-fi version to kill us all. A more probable scenario is that it simply doesn't care about us much either way, but in an effort to accomplish some other goal (most goals, if you think about them long enough, could make use of resources currently being used by humans) wipes us out. Certain goals, like self-preservation, could clearly benefit from no humans. We wash our hands not because we actively wish ill towards the bacteria and viruses on them, but because we don't want them to get in the way of our plans." (2015; source unverified)
Should humanity build artificial general intelligence?
Sam Altman strongly agrees and says:
"OpenAI is not a normal company and never will be. Our mission is to ensure that artificial general intelligence (AGI) benefits all of humanity. When we started OpenAI, we did not have a detailed sense for how we were going to accomplish our mission. We now see a way for AGI to directly empower everyone as the most capable tool in human history. If we can do this, we believe people will build incredible things for each other and continue to drive society and quality of life forward. [...] We believe this is the best path forward — AGI should enable all of humanity to benefit each other. It is time for us to evolve our structure. [...] We want to deliver beneficial AGI." (2025; source unverified)