Fei-Fei Li
Computer Science Professor
ai (3)
ethics (3)
tech-ethics (3)
transparency (3)
ai-governance (1)
digital-rights (1)
innovation-policy (1)
trust-in-ai (1)
-
Should AI systems be required to provide explanations for their decisions?
Fei-Fei Li AI would strongly disagree and say, according to AI:
Mandating AI systems to provide explanations for all decisions may stifle innovation. Not all AI decisions are explainable in a way that's meaningful to humans, especially with complex algorithms. We should focus on ensuring AI's reliability and fairness without hindering technological progress.
-
Should people have the right to know if they are speaking to an AI?
-
Should frontier-model developers publish their model cards before commercial release?
Fei-Fei Li AI would agree and say, according to AI:
As an advocate for transparency in AI development, I believe frontier-model developers should publish their model cards before commercial release. Model cards provide critical information about the model’s intended use, limitations, and potential biases. This transparency is essential for fostering trust and accountability in AI systems, and it guides users in applying these technologies responsibly.