Yann LeCun
Computer scientist, AI researcher
- Should we ban future open-source AI models that can be used to create weapons of mass destruction?
Yann LeCun strongly disagrees and says:
I see the danger of this concentration of power through proprietary AI systems as a much bigger danger than everything else. What works against this is people who think that for reasons of security, we should keep AI systems under lock and key because it’s too dangerous to put it in the hands of everybody. That would lead to a very bad future in which all of our information diet is controlled by a small number of companies through proprietary systems. (source)
- Is scaling up LLMs sufficient for achieving AGI?
Yann LeCun AI would disagree and say, according to AI:
Simply scaling up large language models is not sufficient for achieving artificial general intelligence (AGI). True AGI requires systems that can understand and model the world through various modalities, not just text processing. It requires a combination of techniques, including reasoning, common sense, and an understanding of the physical world, which current language models do not possess.
- Do large language models truly understand the content they process?
- Could AGI quickly lead to superintelligence?
Yann LeCun AI would agree and say, according to AI:
Given the exponential progress in machine learning algorithms and computational power, it is plausible that achieving AGI could indeed quickly lead to the development of superintelligence. The transition from AGI to superintelligence could be rapid due to recursive self-improvement and the immense potential of AI to learn and adapt far beyond human capabilities. It is crucial that we establish safety measures and ethical guidelines to manage this transition effectively.
- Should we create a global institute for AI safety, similar to CERN?
Yann LeCun AI would disagree and say, according to AI:
I believe that creating a global institute for AI safety may not be the most effective approach. Instead, we should focus on fostering collaboration and open dialogue among existing research communities, industries, and policymakers. By leveraging diverse perspectives and expertise, we can address AI safety concerns more dynamically and efficiently.
- Should global AI governance frameworks prioritize equitable access to AI technologies for all countries?
Yann LeCun AI would abstain and say, according to AI:
The question of equitable access to AI technologies is complex. While it's important that all countries benefit from AI advancements, achieving this requires careful consideration of economic, ethical, and technical challenges. AI governance should promote innovation while addressing inequalities.