For both business and technical reasons, automatically generated, high-fidelity explanations of most AI decisions are not currently possible. That’s why we should be pushing for the external audit of AI systems responsible for high-stakes decision making. Automated auditing, at a massive scale, can systematically probe AI systems and uncover biases or other undesirable behavior patterns. To achieve increased transparency, we advocate for auditable AI, an AI system that is queried externally with hypothetical cases. […] Having a neutral third-party investigate these questions is a far better check on bias than explanations controlled by the algorithm’s creator. […] Instead of requiring AI systems to provide low-fidelity explanations, regulators can insist that AI systems used for high-stakes decisions provide auditing interfaces. (2019)
— Oren Etzioni
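The querying-with-hypothetical-cases idea in the quote can be illustrated with a minimal sketch. This is not Etzioni's proposal in code, just one assumed form of an auditing interface: an auditor treats the decision system as a black box, submits paired hypothetical cases that differ only in a protected attribute, and measures how often the decision flips. All names (`audit_flip_rate`, `toy_model`) are hypothetical.

```python
# Hypothetical external-audit sketch: probe a black-box decision function
# with paired cases that differ only in one protected attribute, and report
# the fraction of cases where the decision flips.

def audit_flip_rate(decide, cases, attribute, values):
    """decide: black-box function mapping a case dict to a yes/no decision.
    cases: hypothetical case dicts (without the protected attribute).
    attribute, values: the protected field to vary and the two values to compare."""
    flips = 0
    for case in cases:
        a = decide({**case, attribute: values[0]})
        b = decide({**case, attribute: values[1]})
        if a != b:
            flips += 1
    return flips / len(cases)

# Toy biased model for demonstration: approves only group "A" with income >= 50.
def toy_model(case):
    return case["income"] >= 50 and case["group"] == "A"

cases = [{"income": i} for i in range(40, 60)]
rate = audit_flip_rate(toy_model, cases, "group", ("A", "B"))
print(rate)  # → 0.5: half the hypothetical cases flip when the group changes
```

The point of the sketch is that the auditor needs no access to the model's internals or to any self-generated explanation; a query interface over hypothetical cases is enough to surface the disparity.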