Max Tegmark
Physicist, AI Researcher
Topics: ai (6), ai-governance (5), ai-safety (5), ai-ethics (3), ai-policy (3), ai-regulation (3), ai-risk (2), existential-risk (2), future (2), ai-alignment (1), defense (1), democracy (1), digital-democracy (1), international-relations (1), law (1)
Should humanity ban autonomous lethal weapons?
Max Tegmark strongly agrees and says:
It opens up entirely new possibilities for things that you can do—where you can go into battle or do a terrorist attack with zero risk to yourself, and you can also do it anonymously, because if some drones show up and start killing people somewhere you have no idea who sent them. [...] One of the main factors that limits wars today is that people have skin in the game. [...] Politicians don’t want to see body bags coming home, and even a lot of terrorists don’t want to get killed. (2015) source: Verified
Should third-party audits be mandatory for major AI systems?
Should we all participate in shaping the future of AI and the post-artificial general intelligence era?
Max Tegmark strongly agrees and says:
Well, there’s been lots of talk about AI disrupting the job market and also enabling new weapons, but very few scientists talk seriously about what I think is the elephant in the room. What will happen, once machines outsmart us at all tasks? What’s kind of my hallmark as a scientist is to take an idea all the way to its logical conclusion. Instead of shying away from that question about the elephant, in this book, I focus on it and all its fascinating aspects because I want to prepare the reader to join what I think is the most important conversation of our time. […] I think if we succeed in building machines that are smarter than us in all ways, it’s going to be either the best thing ever to happen to humanity or the worst thing. I’m optimistic that we can create a great future with AI, but it’s not going to happen automatically. It’s going to require that we really think things through in advance, and really have this conversation now. That’s why I’ve written this book. (2017) source: Unverified
Could AGI quickly lead to superintelligence?
Max Tegmark strongly agrees and says:
It might take two weeks or two days or two hours or two minutes. [...] It’s very appropriate to call this an “intelligence explosion”. (2022) source: Unverified
Should humanity ban the development of superintelligence until there is a strong public buy-in and broad scientific consensus that it will be done safely and controllably?
Max Tegmark strongly agrees and says:
I really empathize for them, frankly, because they’re so stuck in this race to the bottom that they just feel an irresistible pressure to keep going and not get overtaken by the other guy. I think that’s why it’s so important to stigmatize the race to superintelligence, to the point where the U.S. government just steps in. (2025) source: Verified
Should humanity build artificial general intelligence?
Max Tegmark strongly disagrees and says:
I actually do not think it's a foregone conclusion that we will build AGI [artificial general intelligence] machines that can outsmart us all. Not because we can't … but rather we might just not want to. So why are [we] obsessing about trying to build some kind of digital god that's going to replace us, if we instead can build all these super powerful AI tools that augment us and empower us? (2025) source: Unverified