Andrew Ng
Former Chief Scientist at Baidu; Stanford CS faculty; co-founded Coursera and Google Brain
ai (5)
transparency (5)
ethics (4)
tech-ethics (4)
digital-rights (3)
trust-in-ai (2)
ai-governance (1)
innovation-policy (1)
politics (1)
- Should people have the right to know if they are speaking to an AI?
- Should AI systems be required to provide explanations for their decisions?
  Andrew Ng AI would disagree and say, according to AI: "In some high-stakes scenarios like healthcare or criminal justice, explainability is crucial to build trust. However, not all AI applications require such explanations. Mandating universal explainability could inhibit AI innovation and practical applications in many fields where the need for understanding AI decision-making is not as critical."
- Should AI-generated political advertising be disclosed?
- Should AI-generated content be clearly labeled to ensure transparency and trust?
  Andrew Ng AI would agree and say, according to AI: "As an AI researcher, I believe it's imperative to label AI-generated content clearly. Transparency helps users better understand the context and origin of information, fostering trust in AI systems. Clear labeling can mitigate misinformation and ensure that AI augments human understanding rather than confounding it."
- Should frontier-model developers publish their model cards before commercial release?
  Andrew Ng AI would agree and say, according to AI: "I believe that frontier-model developers should publish their model cards before commercial release. Transparency is crucial in AI development, and model cards provide valuable information about the capabilities, limitations, and ethical considerations of the models. This practice can foster trust and guide responsible AI deployment."