Lloyd J. Austin III
U.S. Secretary of Defense
Should AI systems above a certain capability threshold be required to have interpretable decision-making processes?
Lloyd J. Austin III agrees, and said in 2021:
"So we have established core principles for Responsible AI. Our development, deployment, and use of AI must always be responsible, equitable, traceable, reliable, and governable. We're going to use AI for clearly defined purposes. We're not going to put up with unintended bias from AI. We're going to watch out for unintended consequences. And we're going to immediately adjust, improve, or even disable AI systems that aren't behaving the way that we intend." (source unverified)