Yoshua Bengio

AI Pioneer, Turing Award winner
  • Should we create a global institute for AI, similar to CERN?
    Yoshua Bengio strongly agrees and says:
    In order to reduce the probability of someone intentionally or unintentionally bringing about a rogue AI, we need to increase governance and we should consider limiting access to the large-scale generalist AI systems that could be weaponized, which would mean that the code and neural net parameters would not be shared in open-source and some of the important engineering tricks to make them work would not be shared either. Ideally this would stay in the hands of neutral international organizations (think of a combination of IAEA and CERN for AI) that develop safe and beneficial AI systems that could also help us fight rogue AIs. source Verified
  • Should humanity ban the development of superintelligence until there is broad scientific consensus that it will be done safely and controllably and strong public buy-in?
    Yoshua Bengio strongly agrees and says:
    Frontier AI systems could surpass most individuals across most cognitive tasks within just a few years. These advances could unlock solutions to major global challenges, but they also carry significant risks. To safely advance toward superintelligence, we must scientifically determine how to design AI systems that are fundamentally incapable of harming people, whether through misalignment or malicious use. We also need to make sure the public has a much stronger say in decisions that will shape our collective future. (2025) source Verified
  • Should we ban future open-source AI models that can be used to create weapons of mass destruction?
    Yoshua Bengio strongly agrees and says:
    I think it's really important because if we put something out there that is open source and can be dangerous – which is a tiny minority of all the code that is open source – essentially we're opening all the doors to bad actors [...] As these systems become more capable, bad actors don't need to have very strong expertise, whether it's in bioweapons or cyber security, in order to take advantage of systems like this. (2023) source Verified
  • Should third-party audits be mandatory for major AI systems?
    Yoshua Bengio strongly agrees and says:
    This would require comprehensive evaluation of potential harm through independent audits (2023) source Unverified
  • Should humanity ban autonomous lethal weapons?
    Yoshua Bengio strongly agrees and says:
    This risk should further motivate us to redesign the global political system in a way that would completely eradicate wars and thus obviate the need for military organizations and military weapons. [...] It goes without saying that lethal autonomous weapons (also known as killer robots) are absolutely to be banned (since from day 1 the AI system has autonomy and the ability to kill). Weapons are tools that are designed to harm or kill humans and their use and existence should also be minimized because they could become instrumentalized by rogue AIs. Instead, preference should be given to other means of policing (consider preventive policing and social work and the fact that very few policemen are allowed to carry firearms in many countries). (2023) source Unverified
  • Is open-source AI potentially more dangerous than closed-source AI?
    Yoshua Bengio strongly agrees and says:
    we need to increase governance and we should consider limiting access to the large-scale generalist AI systems that could be weaponized, which would mean that the code and neural net parameters would not be shared in open-source and some of the important engineering tricks to make them work would not be shared either. Ideally this would stay in the hands of neutral international organizations (think of a combination of IAEA and CERN for AI) that develop safe and beneficial AI systems that could also help us fight rogue AIs. source Verified
  • Should member states have majority governance control in a CERN for AI?
    Yoshua Bengio says:
    At the same time, in order to reduce the probability of someone intentionally or unintentionally bringing about a rogue AI, we need to increase governance and we should consider limiting access to the large-scale generalist AI systems that could be weaponized, which would mean that the code and neural net parameters would not be shared in open-source and some of the important engineering tricks to make them work would not be shared either. Ideally this would stay in the hands of neutral international organizations (think of a combination of IAEA and CERN for AI) that develop safe and beneficial AI systems that could also help us fight rogue AIs. [...] Moreover, governments can help monitor and punish other states who start undercover AI projects. Governments could have oversight on a superhuman AI without that code being open-source. (2023) source Unverified
  • Should a CERN for AI be completely non-profit?
    Yoshua Bengio says:
    At the same time, in order to reduce the probability of someone intentionally or unintentionally bringing about a rogue AI, we need to increase governance and we should consider limiting access to the large-scale generalist AI systems that could be weaponized, which would mean that the code and neural net parameters would not be shared in open-source and some of the important engineering tricks to make them work would not be shared either. Ideally this would stay in the hands of neutral international organizations (think of a combination of IAEA and CERN for AI) that develop safe and beneficial AI systems that could also help us fight rogue AIs. Reducing the flow of information would slow us down, but rogue organizations developing potentially superdangerous AI systems may also be operating in secret, and probably with less funding and fewer top-level scientists. (2023) source Unverified