Gary Marcus
Professor of Psychology and Neural Science
ai-governance (7)
ai-policy (6)
ai-safety (6)
ai (5)
ai-regulation (5)
ai-risk (3)
cern-for-ai (3)
international-relations (3)
public-interest-ai (3)
ai-ethics (2)
existential-risk (2)
research-policy (2)
science-funding (2)
transparency (2)
trust-in-ai (2)
- Should we create a global institute for AI, similar to CERN?
Gary Marcus strongly agrees and says:
In other times in history when we have faced uncertainty and powerful new things that may be both good and bad, that are dual use, we have made new organizations, as we have, for example, around nuclear power. We need to come together to build a global organization, something like an international agency for AI that is global, non-profit and neutral. source Verified
- Should third-party audits be mandatory for major AI systems?
Gary Marcus agrees and says:
OpenAI has also said, and I agree, “it’s important that efforts like ours submit to independent audits before releasing new systems”, but to my knowledge they have not yet submitted to such audits. They have also said “at some point, it may be important to get independent review before starting to train future systems”. But again, they have not submitted to any such advance reviews so far. We have to stop letting them set all the rules. AI is moving incredibly fast, with lots of potential — but also lots of risks. We obviously need government involved. We need the tech companies involved, big and small. But we also need independent scientists. Not just so that we scientists can have a voice, but so that we can participate, directly, in addressing the problems and evaluating solutions. And not just after products are released, but before. We need tight collaboration between independent scientists and governments—in order to hold the companies’ feet to the fire. Allowing independent scientists access to these systems before they are widely released – as part of a clinical trial-like safety evaluation – is a vital first step. (2023) source Verified
- Should AI systems above a certain capability threshold be required to have interpretable decision-making processes?
Gary Marcus agrees and says:
Algorithmic transparency. When a driverless car has an accident, or a consumer’s loan application has been denied, we should be able to ask what’s gone wrong. The big trouble with the black box algorithms that are currently in vogue is that [nobody] knows exactly why an LLM or generative model produces what it does. Guidelines like the White House’s Blueprint for an AI Bill of Rights, UNESCO’s Recommendation on the Ethics of Artificial Intelligence, and the Center for AI and Digital Policy’s Universal Guidelines for AI all decry this lack of interpretability. The EU AI Act represents real progress in this regard, but so far in the United States, there is little legal requirement for algorithms to be disclosed or interpretable (except in narrow domains such as credit decisions). To their credit, Senator Ron Wyden (D-OR), Senator Cory Booker (D-NJ), and Representative Yvette Clarke (D-NY) introduced an Algorithmic Accountability Act in February 2022 (itself an update of an earlier proposal from 2019), but it has not become law. If we took interpretability seriously — as we should — we would wait until better technology was available. In the real world, in the United States, the quest for profits is basically shoving aside consumer needs and human rights. source Unverified
- Should a CERN for AI aim to build safe superintelligence?
Gary Marcus disagrees and says:
I think if you asked those questions you would say, well, what society most needs is automated scientific discovery that can help us actually understand the brain to cure neural disorders, to actually understand cancer to cure cancer, and so forth. If that were the thing we were most trying to solve in AI, I think we would say, let’s not leave it all in the hands of these companies. Let’s have an international consortium kind of like we had for CERN, the Large Hadron Collider. That’s seven billion dollars. What if you had $7 billion that was carefully orchestrated towards a common goal? You could imagine society taking that approach. It’s not going to happen right now given the current political climate. (2017) source Unverified
- Should a UN-led body oversee compute-intensive AI development like the IAEA does for nuclear technology?
Gary Marcus agrees and says:
I think the UN, UNESCO, places like that have been thinking about this for a long time. (2023) source Unverified
- Should a CERN for AI be completely non-profit?
Gary Marcus agrees and says:
I have talked about having something like a CERN [European Organization for Nuclear Research] for AI, which might focus on AI safety. In some industries, we know how to make reliable [products], usually only in narrow domains. One example is bridges: You can't guarantee that a bridge will never fall down, but you can say that, unless there’s an earthquake of a certain magnitude that only happens once every century, we're confident the bridge will still stand. Our bridges don't fall down often anymore. But for AI, we can’t do that at all as an engineering practice—it’s like alchemy. There’s no guarantee that any of it works. So, you could imagine an international consortium trying to either fix the current systems, which I think, in historical perspective, will seem mediocre, or build something better that does offer those guarantees. Many of the big technologies that we have around, from the internet to spaceships, were government-funded in the past; it's a myth that in America innovation only comes from the free market. source Unverified
- Should we ban future open-source AI models that can be used to create weapons of mass destruction?
Gary Marcus agrees and says:
Some of the most recent models maybe can help people make biological weapons. (2025) source Unverified