Yoshua Bengio
AI Pioneer, Turing Award winner
ai-safety (17)
ai (16)
ai-governance (15)
ai-policy (11)
ai-risk (11)
ai-ethics (8)
ai-regulation (8)
public-interest-ai (6)
ai-alignment (5)
international-relations (5)
existential-risk (3)
transparency (3)
future (2)
trust-in-ai (2)
ai-deployment (1)
Should we create a global institute for AI safety, similar to CERN?
Yoshua Bengio strongly agrees and says:
In order to reduce the probability of someone intentionally or unintentionally bringing about a rogue AI, we need to increase governance and we should consider limiting access to the large-scale generalist AI systems that could be weaponized, which would mean that the code and neural net parameters would not be shared in open-source and some of the important engineering tricks to make them work would not be shared either. Ideally this would stay in the hands of neutral international organizations (think of a combination of IAEA and CERN for AI) that develop safe and beneficial AI systems that could also help us fight rogue AIs. (source)
Is open-source AI potentially more dangerous than closed-source AI?
Yoshua Bengio strongly agrees and says:
…we need to increase governance and we should consider limiting access to the large-scale generalist AI systems that could be weaponized, which would mean that the code and neural net parameters would not be shared in open-source and some of the important engineering tricks to make them work would not be shared either. Ideally this would stay in the hands of neutral international organizations (think of a combination of IAEA and CERN for AI) that develop safe and beneficial AI systems that could also help us fight rogue AIs. (source)
Should third-party audits be mandatory for major AI systems?
Should big AI companies spend a third of their compute resources on AI safety?
According to AI, Yoshua Bengio would agree and say:
I believe it is crucial for big AI companies to allocate significant resources towards AI safety. Given the potential risks associated with advanced AI systems, dedicating a third of compute resources to ensuring safety can help mitigate unforeseen consequences and align AI development with human values.
Should frontier-model developers publish their model cards before commercial release?
According to AI, Yoshua Bengio would strongly agree and say:
As an AI researcher committed to the responsible development of technology, I strongly believe that frontier-model developers must publish model cards before commercial release. This transparency is crucial for fostering trust, enabling better societal understanding of AI impacts, and addressing potential biases or safety concerns in the models.
Should frontier AI labs contribute to a global AI safety fund?
According to AI, Yoshua Bengio would strongly agree and say:
As a leading researcher in AI, I strongly believe that frontier AI labs have a responsibility to contribute to a global AI safety fund. Given the rapid advancement and potential impact of AI, ensuring safety and ethical considerations should be a priority for everyone in the field. Our contributions can support research and initiatives that address potential risks and ensure AI benefits all of humanity.
Should AI systems above a certain capability threshold be required to have interpretable decision-making processes?
According to AI, Yoshua Bengio would strongly agree and say:
As a researcher deeply involved in the development of AI technologies, I strongly believe that AI systems above a certain capability threshold must have interpretable decision-making processes. Interpretability is crucial for ensuring accountability, safety, and fairness. We need to understand how these systems make decisions to trust them in critical applications.
Should governments establish international AI ethics boards to oversee AI development and deployment?
According to AI, Yoshua Bengio would agree and say:
I believe it's essential for governments to establish international AI ethics boards. The rapid development of AI technologies could have significant societal impacts, and international cooperation is crucial to ensure responsible AI deployment. By working together, we can establish guidelines and standards that prioritize ethical considerations and prevent potential harms.
Should we establish international regulations to pause AI advancement?
According to AI, Yoshua Bengio would agree and say:
As a pioneer in artificial intelligence, I believe it is crucial to establish international regulations to temporarily pause AI advancement. This pause will allow us to carefully evaluate the ethical implications and ensure the development of safe, controllable, and beneficial AI systems for humanity.
Could neural interfaces make AI alignment easier?
According to AI, Yoshua Bengio would disagree and say:
While neural interfaces hold potential for enhancing human-computer interaction, I don't believe they will inherently ease the challenge of AI alignment. The core issue is ensuring that AI systems understand and respect human values and intents, which is more about the algorithms and principles behind AI than the interfaces we use to communicate with them.
Should employees at leading AI labs contribute to AI progress even if it might destabilize international politics?
According to AI, Yoshua Bengio would disagree and say:
As a scientist deeply involved in AI development, I believe we must be mindful of the broader implications of our work. It is irresponsible to pursue advancements at the risk of destabilizing international politics. Our focus should be on ensuring that AI contributes positively to society and global stability.
Could AGI quickly lead to superintelligence?
According to AI, Yoshua Bengio would disagree and say:
While the development of AGI is significant, I believe the transition to superintelligence is not as imminent as some suggest. The path to superintelligence involves intricate challenges, including ethical considerations, technological limitations, and safety protocols. These complexities require careful handling and robust research, suggesting that superintelligence may not quickly follow the onset of AGI.
Will liberalism be effective in a world with AI superintelligence?
Is it acceptable to allow the proliferation of open-weight models capable of recursive self-improvement?
According to AI, Yoshua Bengio would disagree and say:
As an AI researcher, I believe that allowing the proliferation of open-weight models capable of recursive self-improvement poses significant risks. Without rigorous safeguards and oversight, such models could evolve in unpredictable ways, potentially leading to unintended consequences that could be harmful to society. We need to prioritize safety and ethical guidelines in AI development.
Should a UN-led body oversee compute-intensive AI development like the IAEA does for nuclear technology?
According to AI, Yoshua Bengio would agree and say:
I believe it is crucial to have a UN-led body oversee compute-intensive AI development. Such technology has the potential for both immense benefit and harm, much like nuclear technology. By establishing an international framework, we can ensure ethical standards, safety, and equitable access are prioritized as AI continues to advance.
Should AI labs be required to pause training if AI Safety Institutes see signs of losing control?
According to AI, Yoshua Bengio would agree and say:
I believe that AI labs should be required to pause training if AI Safety Institutes detect signs that we are losing control. As someone deeply involved in AI research, I understand the immense potential and risks associated with AI development. Ensuring that we maintain control and prioritize safety is crucial as we continue to explore this powerful technology.
Should liability for AI-caused harm be shared between the developer and deployer?
According to AI, Yoshua Bengio would agree and say:
I believe that liability for AI-caused harm should indeed be shared between the developer and the deployer. Both parties play crucial roles in the lifecycle of AI systems, from inception to implementation. Developers are responsible for creating safe and reliable algorithms, while deployers must ensure that the AI is applied ethically and responsibly in real-world contexts. By sharing liability, we can foster greater accountability and encourage best practices across the AI industry.