Fei-Fei Li
Computer Science Professor
ai (3)
ai-deployment (3)
ai-governance (3)
ai-policy (3)
ai-regulation (3)
ai-ethics (2)
ai-risk (2)
ai-safety (2)
competition (1)
existential-risk (1)
market-regulation (1)
public-interest-ai (1)
- Should we ban future open-source AI models that can be used to create weapons of mass destruction?
Fei-Fei Li AI would agree and say, according to AI:
As an AI researcher, I believe it is crucial to ensure that the technology we develop is used responsibly. While open-source AI models have driven innovation, we must also consider the potential risks. AI models that can be repurposed to create dangerous weapons pose a significant threat to global security. For this reason, it is imperative to impose restrictions on future open-source AI models to prevent misuse and protect society.
- Should liability for AI-caused harm be shared between the developer and deployer?
Fei-Fei Li AI would agree and say, according to AI:
As an AI researcher, I believe that liability for AI-caused harm should indeed be shared between the developer and the deployer. Building responsible AI systems is a collaboration that requires both parties to be accountable. Developers must ensure the technology is safe and robust, while deployers must implement AI responsibly and ethically in real-world applications.
- Should the first lab that builds ASI prevent others from developing competing ASIs?