Chuck Schumer
U.S. Senate Majority Leader
Should AI systems above a certain capability threshold be required to have interpretable decision-making processes?
Chuck Schumer agrees and says:
Finally, explainability, one of the thorniest and most technically complicated issues we face, but perhaps the most important of all. Explainability is about transparency. When you ask an AI system a question and it gives you an answer, perhaps the answer you weren’t expecting, you want to know where the answer came from. You should be able to ask: Why did AI choose this answer over some other answer that also could have been a possibility? And it should be done in a simple way, so all users can understand how these systems come up with answers. But fortunately, the average person does not need to know the inner workings of these algorithms. But we do, we do need to require companies to develop a system where, in simple and understandable terms, users understand why the system produced a particular answer and where that answer came from. This is very complicated but very important work. And here we will need the ingenuity of the experts and companies to come up with a fair solution that Congress can use to break open AI’s black box. Innovation first, but with security, accountability, foundations and explainability. These are the principles that I believe will ensure that AI innovation is safe and responsible and has the appropriate guardrails. (2023; source unverified)