Gary Marcus
Professor Emeritus of Psychology and Neural Science
ai (3)
ai-governance (3)
ai-policy (3)
ai-regulation (3)
ai-risk (3)
ai-safety (3)
ai-ethics (2)
public-interest-ai (2)
ai-deployment (1)
existential-risk (1)
international-relations (1)
transparency (1)
trust-in-ai (1)
Should AI systems above a certain capability threshold be required to have interpretable decision-making processes?
Gary Marcus agrees and says:
Algorithmic transparency. When a driverless car has an accident, or a consumer’s loan application has been denied, we should be able to ask what’s gone wrong. The big trouble with the black box algorithms that are currently in vogue is that [nobody] knows exactly why an LLM or generative model produces what it does. Guidelines like the White House’s Blueprint for an AI Bill of Rights, UNESCO’s Recommendation on the Ethics of Artificial Intelligence, and the Center for AI and Digital Policy’s Universal Guidelines for AI all decry this lack of interpretability. The EU AI Act represents real progress in this regard, but so far in the United States, there is little legal requirement for algorithms to be disclosed or interpretable (except in narrow domains such as credit decisions). To their credit, Senator Ron Wyden (D-OR), Senator Cory Booker (D-NJ), and Representative Yvette Clarke (D-NY) introduced an Algorithmic Accountability Act in February 2022 (itself an update of an earlier proposal from 2019), but it has not become law. If we took interpretability seriously — as we should — we would wait until better technology was available. In the real world, in the United States, the quest for profits is basically shoving aside consumer needs and human rights. source Unverified
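A minimal sketch of what an auditable decision process can look like, for contrast with the black-box models criticized in the quote above. This example is not from Marcus: the loan features, thresholds, and synthetic data are all invented for illustration, and a scikit-learn decision tree stands in for any model whose rules can be printed and inspected.

```python
# Hypothetical sketch: a loan-approval model whose full decision process
# can be printed and audited. Feature names, thresholds, and data are
# invented for illustration; nothing here comes from the quoted source.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Synthetic applicants: income ($k), debt-to-income ratio, credit score.
X = np.column_stack([
    rng.uniform(20, 150, 500),
    rng.uniform(0.0, 0.8, 500),
    rng.uniform(300, 850, 500),
])
# Invented approval rule, used only to generate labels for the sketch.
y = ((X[:, 2] > 600) & (X[:, 1] < 0.45)).astype(int)

model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Unlike a black-box model, the entire rule set is inspectable:
print(export_text(model, feature_names=["income_k", "dti", "credit_score"]))

# A denied applicant can be shown exactly which conditions failed.
applicant = np.array([[45.0, 0.60, 580.0]])
print("approved" if model.predict(applicant)[0] == 1 else "denied")
```

The point of the sketch is that export_text yields the complete rule set, so a denial can be traced to specific conditions; that is the kind of explanation current LLMs and generative models cannot provide.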
Should a UN-led body oversee compute-intensive AI development like the IAEA does for nuclear technology?
Gary Marcus agrees and says:
I think the UN, UNESCO, places like that have been thinking about this for a long time. (2023) source Unverified
Should we ban future open-source AI models that can be used to create weapons of mass destruction?
Gary Marcus agrees and says:
Some of the most recent models maybe can help people make biological weapons. (2025) source Unverified