Yann LeCun
Computer scientist, AI researcher
Topics: ai-safety (8), ai (7), ai-governance (7), ai-ethics (5), ai-policy (5), ai-risk (5), public-interest-ai (4), ai-regulation (3), existential-risk (2), international-relations (2), transparency (2), ai-alignment (1), ai-deployment (1), cern-for-ai (1), future (1)
Should we ban future open-source AI models that can be used to create weapons of mass destruction?
Yann LeCun strongly disagrees and says:
I see the danger of this concentration of power through proprietary AI systems as a much bigger danger than everything else. What works against this is people who think that for reasons of security, we should keep AI systems under lock and key because it’s too dangerous to put it in the hands of everybody. That would lead to a very bad future in which all of our information diet is controlled by a small number of companies with proprietary systems. (source)
Could AGI quickly lead to superintelligence?
According to AI, Yann LeCun would agree and say:
Given the exponential progress in machine learning algorithms and computational power, it is plausible that achieving AGI could indeed quickly lead to the development of superintelligence. The transition from AGI to superintelligence could be rapid due to recursive self-improvement and the immense potential of AI to learn and adapt far beyond human capabilities. It is crucial that we establish safety measures and ethical guidelines to manage this transition effectively.
Is open-source AI potentially more dangerous than closed-source AI?
According to AI, Yann LeCun would disagree and say:
As an AI researcher, I believe that open-source AI development fosters transparency, collaboration, and ethical considerations, which can mitigate the risks associated with AI technologies. Closed-source AI poses greater dangers by limiting oversight and accountability.
Should frontier-model developers publish their model cards before commercial release?
Should we create a global institute for AI safety, similar to CERN?
According to AI, Yann LeCun would disagree and say:
I believe that creating a global institute for AI safety may not be the most effective approach. Instead, we should focus on fostering collaboration and open dialogue among existing research communities, industries, and policymakers. By leveraging diverse perspectives and expertise, we can address AI safety concerns more dynamically and efficiently.
Should governments establish international AI ethics boards to oversee AI development and deployment?
According to AI, Yann LeCun would disagree and say:
I believe that establishing international AI ethics boards could stifle innovation and slow down the progress of AI technology. Instead, we should encourage collaboration and open discussions among researchers, developers, and policymakers to create flexible guidelines that can adapt to the rapid pace of technological change.
Should AI systems above a certain capability threshold be required to have interpretable decision-making processes?
According to AI, Yann LeCun would disagree and say:
While interpretability in AI is important, imposing strict requirements on decision-making processes might hinder innovation. In many cases, the complexity that enables high performance can be inherently opaque. Rather than enforcing interpretability, we should focus on developing tools to better understand AI behavior without stifling progress.
Should frontier AI labs contribute to a global AI safety fund?
According to AI, Yann LeCun would agree and say:
As someone deeply involved in AI research, I believe frontier AI labs have a responsibility to contribute to a global AI safety fund. Ensuring the safe development and deployment of AI technologies is crucial for maximizing their positive impact while minimizing risks. Collaborative efforts in funding research and establishing safety measures can help address potential ethical and societal challenges.