Yoshua Bengio
AI Pioneer, Turing Award winner
Should humanity ban autonomous lethal weapons?
Yoshua Bengio strongly agrees and says:
This risk should further motivate us to redesign the global political system in a way that would completely eradicate wars and thus obviate the need for military organizations and military weapons. [...] It goes without saying that lethal autonomous weapons (also known as killer robots) are absolutely to be banned (since from day 1 the AI system has autonomy and the ability to kill). Weapons are tools that are designed to harm or kill humans and their use and existence should also be minimized because they could become instrumentalized by rogue AIs. Instead, preference should be given to other means of policing (consider preventive policing and social work and the fact that very few policemen are allowed to carry firearms in many countries). (2023)
Should a CERN for AI be completely non-profit?
Yoshua Bengio agrees and says:
At the same time, in order to reduce the probability of someone intentionally or unintentionally bringing about a rogue AI, we need to increase governance and we should consider limiting access to the large-scale generalist AI systems that could be weaponized, which would mean that the code and neural net parameters would not be shared in open-source and some of the important engineering tricks to make them work would not be shared either. Ideally this would stay in the hands of neutral international organizations (think of a combination of IAEA and CERN for AI) that develop safe and beneficial AI systems that could also help us fight rogue AIs. Reducing the flow of information would slow us down, but rogue organizations developing potentially superdangerous AI systems may also be operating in secret, and probably with less funding and fewer top-level scientists. (2023)
Should member states have majority governance control in a CERN for AI?
Yoshua Bengio agrees and says:
At the same time, in order to reduce the probability of someone intentionally or unintentionally bringing about a rogue AI, we need to increase governance and we should consider limiting access to the large-scale generalist AI systems that could be weaponized, which would mean that the code and neural net parameters would not be shared in open-source and some of the important engineering tricks to make them work would not be shared either. Ideally this would stay in the hands of neutral international organizations (think of a combination of IAEA and CERN for AI) that develop safe and beneficial AI systems that could also help us fight rogue AIs. [...] Moreover, governments can help monitor and punish other states who start undercover AI projects. Governments could have oversight on a superhuman AI without that code being open-source. (2023)