Yoshua Bengio
AI Pioneer, Turing Award winner
Topics: ai (3), ai-ethics (3), ai-governance (3), ai-risk (3), ai-safety (3), ai-policy (2), ai-regulation (2), ai-deployment (1), defense (1), existential-risk (1), international-relations (1), law (1), public-interest-ai (1)
Should we ban future open-source AI models that can be used to create weapons of mass destruction?
Yoshua Bengio strongly agrees and says:
"I think it's really important because if we put something out there that is open source and can be dangerous – which is a tiny minority of all the code that is open source – essentially we're opening all the doors to bad actors [...] As these systems become more capable, bad actors don't need to have very strong expertise, whether it's in bioweapons or cyber security, in order to take advantage of systems like this." (2023) source [Verified]
Should humanity ban autonomous lethal weapons?
Yoshua Bengio strongly agrees and says:
"This risk should further motivate us to redesign the global political system in a way that would completely eradicate wars and thus obviate the need for military organizations and military weapons. [...] It goes without saying that lethal autonomous weapons (also known as killer robots) are absolutely to be banned (since from day 1 the AI system has autonomy and the ability to kill). Weapons are tools that are designed to harm or kill humans and their use and existence should also be minimized because they could become instrumentalized by rogue AIs. Instead, preference should be given to other means of policing (consider preventive policing and social work and the fact that very few policemen are allowed to carry firearms in many countries)." (2023) source [Unverified]
Is open-source AI potentially more dangerous than closed-source AI?
Yoshua Bengio strongly agrees and says:
"[W]e need to increase governance and we should consider limiting access to the large-scale generalist AI systems that could be weaponized, which would mean that the code and neural net parameters would not be shared in open-source and some of the important engineering tricks to make them work would not be shared either. Ideally this would stay in the hands of neutral international organizations (think of a combination of IAEA and CERN for AI) that develop safe and beneficial AI systems that could also help us fight rogue AIs." source [Verified]