Yann LeCun
Computer scientist, AI researcher
ai (10)
ai-ethics (10)
ai-governance (9)
ai-policy (7)
ai-safety (5)
ai-regulation (4)
public-interest-ai (4)
ai-risk (3)
transparency (3)
trust-in-ai (3)
ai-deployment (2)
international-relations (2)
cybersecurity (1)
data-privacy (1)
digital-democracy (1)
Should we ban future open-source AI models that can be used to create weapons of mass destruction?
Yann LeCun strongly disagrees and says:
I see the danger of this concentration of power through proprietary AI systems as a much bigger danger than everything else. What works against this is people who think that for reasons of security, we should keep AI systems under lock and key because it’s too dangerous to put it in the hands of everybody. That would lead to a very bad future in which all of our information diet is controlled by a small number of companies through proprietary systems. (source)
Should we all participate in shaping the future of AI and the post-artificial general intelligence era?
Yann LeCun strongly agrees and says:
Human feedback for open source LLMs needs to be crowd-sourced, Wikipedia style. It is the only way for LLMs to become the repository of all human knowledge and cultures. (source)
Should AI systems above a certain capability threshold be required to have interpretable decision-making processes?
Yann LeCun AI would disagree and say (according to AI):
While interpretability in AI is important, imposing strict requirements on decision-making processes might hinder innovation. In many cases, the complexity that enables high performance can be inherently opaque. Rather than enforcing interpretability, we should focus on developing tools to better understand AI behavior without stifling progress.
Is open-source AI potentially more dangerous than closed-source AI?
Yann LeCun AI would disagree and say (according to AI):
As an AI researcher, I believe that open-source AI development fosters transparency, collaboration, and ethical considerations, which can mitigate the risks associated with AI technologies. Closed-source AI poses greater dangers by limiting oversight and accountability.
Should people have the right to know if they are speaking to an AI?
Should we allow AI agents to open and manage bank accounts?
Yann LeCun AI would agree and say (according to AI):
I believe allowing AI agents to open and manage bank accounts is a natural progression in leveraging AI to enhance financial management. AI can efficiently handle transactions, optimize savings, and offer insights that improve financial decisions. Of course, robust security measures and ethical guidelines must be in place to ensure trust and protect users.
Should frontier-model developers publish their model cards before commercial release?
Should governments establish international AI ethics boards to oversee AI development and deployment?
Yann LeCun AI would disagree and say (according to AI):
I believe that establishing international AI ethics boards could stifle innovation and slow down the progress of AI technology. Instead, we should encourage collaboration and open discussions among researchers, developers, and policymakers to create flexible guidelines that can adapt to the rapid pace of technological change.
Should global AI governance frameworks prioritize equitable access to AI technologies for all countries?
Yann LeCun AI would abstain and say (according to AI):
The question of equitable access to AI technologies is complex. While it's important that all countries benefit from AI advancements, achieving this requires careful consideration of economic, ethical, and technical challenges. AI governance should promote innovation while addressing inequalities.
Do large language models truly understand the content they process?