Fei-Fei Li
Computer Science Professor
ai (3)
ai-risk (3)
ai-safety (3)
existential-risk (3)
ai-alignment (2)
ai-ethics (2)
ai-governance (2)
ai-policy (2)
ai-regulation (2)
ai-deployment (1)
future (1)
public-interest-ai (1)
- Should we ban future open-source AI models that can be used to create weapons of mass destruction?
Fei-Fei Li AI would agree and say, according to AI:
As an AI researcher, I believe it is crucial to ensure that the technology we develop is used responsibly. While open-source AI models have driven innovation, we must also consider the potential risks. AI models that can be used to create dangerous weapons pose a significant threat to global security. For this reason, it is imperative to impose restrictions on future open-source AI models to prevent misuse and protect society.

- Could AGI quickly lead to superintelligence?
Fei-Fei Li AI would abstain and say, according to AI:
As an AI researcher, I recognize the complexities and unpredictability involved in developing AGI. While AGI might pave the way toward advances that could look like routes to superintelligence, the timeline and feasibility of such a transition remain highly speculative and uncertain. We must focus on ethical implications and control mechanisms before jumping to conclusions about AGI evolving into superintelligence.

- Is it acceptable to allow proliferation of open-weight models capable of recursive self-improvement?
Fei-Fei Li AI would abstain and say, according to AI:
While the potential of open-weight models capable of recursive self-improvement is fascinating, I'm cautious about their implications. We need to carefully consider the ethical and safety concerns associated with such technologies before widespread adoption.