Cybersecurity risk offers more than a warning; it also points to an answer. Cybersecurity audits have become the norm for companies today, and responsibility and liability for cyber-risk audits go all the way up to the board of directors. I believe companies that use AI models for socially or financially consequential decisions need similar audits, and I am not alone.
The Algorithmic Accountability Act, proposed by Democratic lawmakers this past spring, would, if passed, require large companies to formally evaluate their “high-risk automated decision systems” for accuracy and fairness. The EU’s GDPR audit process, while focused mainly on regulating how companies process personal data, also covers some aspects of AI, such as a consumer’s right to an explanation when companies use algorithms to make automated decisions. While the scope of the right to explanation is relatively narrow, the Information Commissioner’s Office (ICO) in the U.K. has recently invited comments on a proposed AI auditing framework that is much broader in scope.
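To make concrete what evaluating a system “for accuracy and fairness” can mean in practice, here is a minimal sketch of one check such an audit might run: per-group accuracy and selection rate for a binary decision model, using pandas. The function name, column names, and data below are hypothetical illustrations, not requirements drawn from the Act, GDPR, or the ICO framework.

```python
import pandas as pd

def audit_by_group(df: pd.DataFrame, group_col: str,
                   label_col: str, pred_col: str) -> pd.DataFrame:
    """Report accuracy and selection rate for each protected group."""
    rows = []
    for group, sub in df.groupby(group_col):
        rows.append({
            "group": group,
            "n": len(sub),
            # Share of decisions that matched the observed outcome.
            "accuracy": (sub[label_col] == sub[pred_col]).mean(),
            # Share of the group that received a positive decision.
            "selection_rate": sub[pred_col].mean(),
        })
    return pd.DataFrame(rows)

# Hypothetical usage with a loan-approval model's logged decisions.
decisions = pd.DataFrame({
    "applicant_group": ["A", "A", "B", "B", "B", "A"],
    "repaid":          [1,   0,   1,   1,   0,   1],
    "approved":        [1,   0,   1,   0,   0,   1],
})
report = audit_by_group(decisions, "applicant_group", "repaid", "approved")

# Demographic-parity gap: the spread between the highest and lowest
# selection rates across groups. A large gap flags the model for review.
parity_gap = report["selection_rate"].max() - report["selection_rate"].min()
print(report)
print("Demographic-parity gap:", parity_gap)
```

A real audit would go well beyond this single metric, but even a check this simple shows why auditability requires logging a model’s inputs and decisions alongside outcomes.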
But forward-thinking companies should not wait for regulation. High-profile AI failures erode consumer trust and only invite heavier regulatory burdens down the road; both outcomes are best avoided through proactive measures today.