Yoshua Bengio
AI Pioneer, Turing Award winner
Should humanity ban the development of superintelligence until there is broad scientific consensus that it will be done safely and controllably and strong public buy-in?
Yoshua Bengio strongly agrees and says:
Frontier AI systems could surpass most individuals across most cognitive tasks within just a few years. These advances could unlock solutions to major global challenges, but they also carry significant risks. To safely advance toward superintelligence, we must scientifically determine how to design AI systems that are fundamentally incapable of harming people, whether through misalignment or malicious use. We also need to make sure the public has a much stronger say in decisions that will shape our collective future. (2025)
Should we ban future open-source AI models that can be used to create weapons of mass destruction?
Yoshua Bengio strongly agrees and says:
I think it's really important because if we put something out there that is open source and can be dangerous – which is a tiny minority of all the code that is open source – essentially we're opening all the doors to bad actors [...] As these systems become more capable, bad actors don't need to have very strong expertise, whether it's in bioweapons or cyber security, in order to take advantage of systems like this. (2023)