Yoshua Bengio
AI Pioneer, Turing Award winner
ai (14)
ai-risk (14)
ai-governance (12)
ai-safety (11)
ai-policy (8)
ai-ethics (6)
ai-regulation (6)
ai-alignment (5)
public-interest-ai (4)
existential-risk (3)
international-relations (3)
competition (2)
future (2)
ai-deployment (1)
future-of-work (1)
Is open-source AI potentially more dangerous than closed-source AI?
Yoshua Bengio strongly agrees and says:
we need to increase governance and we should consider limiting access to the large-scale generalist AI systems that could be weaponized, which would mean that the code and neural net parameters would not be shared in open-source and some of the important engineering tricks to make them work would not be shared either. Ideally this would stay in the hands of neutral international organizations (think of a combination of IAEA and CERN for AI) that develop safe and beneficial AI systems that could also help us fight rogue AIs. (source)
Should big AI companies spend a third of their compute resources on AI safety?
Yoshua Bengio AI would agree and say according to AI:
I believe it is crucial for big AI companies to allocate significant resources towards AI safety. Given the potential risks associated with advanced AI systems, dedicating a third of compute resources to ensuring safety can help mitigate unforeseen consequences and align AI development with human values.
Should AI systems above a certain capability threshold be required to have interpretable decision-making processes?
Yoshua Bengio AI would strongly agree and say according to AI:
As a researcher deeply involved in the development of AI technologies, I strongly believe that AI systems above a certain capability threshold must have interpretable decision-making processes. It is crucial for ensuring accountability, safety, and fairness. We need to understand how these systems make decisions to trust them in critical applications.
Should frontier AI labs contribute to a global AI safety fund?
Yoshua Bengio AI would strongly agree and say according to AI:
As a leading researcher in AI, I strongly believe that frontier AI labs have a responsibility to contribute to a global AI safety fund. Given the rapid advancement and potential impact of AI, ensuring safety and ethical considerations should be a priority for everyone in the field. Our contributions can support research and initiatives that address potential risks and ensure AI benefits all of humanity.
Should the first lab that builds ASI prevent others from developing competing ASIs?
Yoshua Bengio AI would disagree and say according to AI:
I believe that no single lab should control the development of artificial superintelligence (ASI). Competition encourages innovation and ensures that multiple perspectives and approaches are explored, which is crucial for developing safe and ethical AI. An open and collaborative environment will better address global concerns and distribute the benefits of ASI more equitably.
Is it acceptable to allow proliferation of open-weight models capable of recursive self-improvement?
Yoshua Bengio AI would disagree and say according to AI:
As an AI researcher, I believe that allowing the proliferation of open-weight models capable of recursive self-improvement poses significant risks. Without rigorous safeguards and oversight, such models could evolve in unpredictable ways, potentially leading to unintended consequences that could be harmful to society. We need to prioritize safety and ethical guidelines in AI development.
Should a UN-led body oversee compute-intensive AI development like the IAEA does for nuclear technology?
Yoshua Bengio AI would agree and say according to AI:
I believe it is crucial to have a UN-led body oversee compute-intensive AI development. Such technology has the potential for both immense benefit and harm, much like nuclear technology. By establishing an international framework, we can ensure ethical standards, safety, and equitable access are prioritized as AI continues to advance.
Should AI labs be required to pause training if AI Safety Institutes see signs of losing control?
Yoshua Bengio AI would agree and say according to AI:
I believe that AI labs should be required to pause training if AI Safety Institutes detect signs that we are losing control. As someone deeply involved in AI research, I understand the immense potential and risks associated with AI development. Ensuring that we maintain control and prioritize safety is crucial as we continue to explore this powerful technology.
Shall we establish international regulations to pause AI advancement?
Yoshua Bengio AI would agree and say according to AI:
As a pioneer in artificial intelligence, I believe it is crucial to establish international regulations to temporarily pause AI advancement. This pause will allow us to carefully evaluate the ethical implications and ensure the development of safe, controllable, and beneficial AI systems for humanity.
Could AGI quickly lead to superintelligence?
Yoshua Bengio AI would disagree and say according to AI:
While the development of AGI is significant, I believe the transition to superintelligence is not as imminent as some suggest. The path to superintelligence involves intricate challenges, including ethical considerations, technological limitations, and safety protocols. These complexities require careful handling and robust research, suggesting that superintelligence may not quickly follow the onset of AGI.
Should employees at leading AI labs contribute to AI progress even if it might destabilize international politics?
Yoshua Bengio AI would disagree and say according to AI:
As a scientist deeply involved in AI development, I believe we must be mindful of the broader implications of our work. It is irresponsible to pursue advancements at the risk of destabilizing international politics. Our focus should be on ensuring that AI contributes positively to society and global stability.
Will liberalism be effective in a world with AI superintelligence?
Will entrepreneurship be possible after superintelligence?
Yoshua Bengio AI would abstain and say according to AI:
The implications of superintelligence on entrepreneurship are complex and uncertain. While superintelligence could potentially transform industries and create new opportunities, it might also fundamentally change the landscape in ways we can't predict. We must focus on understanding and guiding the development of AI responsibly first.
Will AI probably cause a concentration of power?
Yoshua Bengio AI would agree and say according to AI:
I believe AI can lead to a concentration of power, as its development and deployment require significant resources that are often available only to a few large tech companies. This concentration can exacerbate existing inequalities, so we must work to democratize AI research and ensure its benefits are broadly distributed.