Algorithmic transparency. When a driverless car has an accident, or a consumer’s loan application has been denied, we should be able to ask what’s gone wrong. The big trouble with the black box algorithms that are currently in vogue is that nobody knows exactly why an LLM or generative model produces what it does. Guidelines like the White House’s Blueprint for an AI Bill of Rights, UNESCO’s Recommendation on the Ethics of Artificial Intelligence, and the Center for AI and Digital Policy’s Universal Guidelines for AI all decry this lack of interpretability. The EU AI Act represents real progress in this regard, but so far in the United States, there is little legal requirement for algorithms to be disclosed or interpretable (except in narrow domains such as credit decisions).
To their credit, Senator Ron Wyden (D-OR), Senator Cory Booker (D-NJ), and Representative Yvette Clarke (D-NY) introduced an Algorithmic Accountability Act in February 2022 (itself an update of an earlier proposal from 2019), but it has not become law. If we took interpretability seriously, as we should, we would wait until better technology was available before relying on such black-box systems. In the real world, in the United States, the quest for profits is basically shoving aside consumer needs and human rights.
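To make the stakes concrete, here is a minimal, purely illustrative sketch (not from the original piece; the feature names and weights are hypothetical): with a transparent linear scoring model, a denied loan applicant can be shown exactly which factors drove the decision, a breakdown a black-box model cannot provide.

```python
# Illustrative only: a transparent linear score for a loan decision.
# Feature names, weights, and values are hypothetical.

FEATURES = {"income_to_debt_ratio": 2.1, "years_of_credit_history": 7, "missed_payments": 3}
WEIGHTS = {"income_to_debt_ratio": 0.8, "years_of_credit_history": 0.1, "missed_payments": -1.2}
BIAS = -0.5

def explain_decision(features, weights, bias):
    """Return the decision plus each feature's signed contribution to it."""
    contributions = {name: weights[name] * value for name, value in features.items()}
    score = bias + sum(contributions.values())
    decision = "approved" if score >= 0 else "denied"
    return decision, contributions

decision, contributions = explain_decision(FEATURES, WEIGHTS, BIAS)
print(f"Loan {decision}")
for name, contribution in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"  {name}: {contribution:+.2f}")
# With a black-box model, no analogous breakdown exists: the applicant
# (and the regulator) can see only the final approved/denied output.
```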