Max Tegmark
Physicist, AI Researcher
ai (3)
ai-safety (3)
existential-risk (3)
ai-governance (2)
ai-regulation (2)
ai-risk (2)
future (2)
ai-alignment (1)
ai-ethics (1)
ai-policy (1)
democracy (1)
- Could AGI quickly lead to superintelligence?
  Max Tegmark strongly agrees and says:
  "It might take two weeks or two days or two hours or two minutes. [...] It's very appropriate to call this an 'intelligence explosion'." (2022) source (Unverified)
- Should humanity ban the development of superintelligence until there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in?
  Max Tegmark strongly agrees and says:
  "I really empathize for them, frankly, because they're so stuck in this race to the bottom that they just feel an irresistible pressure to keep going and not get overtaken by the other guy. I think that's why it's so important to stigmatize the race to superintelligence, to the point where the U.S. government just steps in." (2025) source (Verified)
- Should humanity build artificial general intelligence?
  Max Tegmark strongly disagrees and says:
  "I actually do not think it's a foregone conclusion that we will build AGI [artificial general intelligence] machines that can outsmart us all. Not because we can't ... but rather we might just not want to. So why are [we] obsessing about trying to build some kind of digital god that's going to replace us, if we instead can build all these super powerful AI tools that augment us and empower us?" (2025) source (Unverified)