Gary Marcus
Professor of Psychology and Neural Science
ai (6)
ai-governance (6)
ai-policy (5)
ai-regulation (5)
ai-safety (5)
ai-ethics (3)
ai-risk (3)
public-interest-ai (3)
existential-risk (2)
transparency (2)
trust-in-ai (2)
ai-deployment (1)
democracy (1)
digital-democracy (1)
future (1)
Should AI systems above a certain capability threshold be required to have interpretable decision-making processes?
Gary Marcus agrees and says:
Algorithmic transparency. When a driverless car has an accident, or a consumer’s loan application has been denied, we should be able to ask what’s gone wrong. The big trouble with the black box algorithms that are currently in vogue is that [nobody] knows exactly why an LLM or generative model produces what it does. Guidelines like the White House’s Blueprint for an AI Bill of Rights, UNESCO’s Recommendation on the Ethics of Artificial Intelligence, and the Center for AI and Digital Policy’s Universal Guidelines for AI all decry this lack of interpretability. The EU AI Act represents real progress in this regard, but so far in the United States, there is little legal requirement for algorithms to be disclosed or interpretable (except in narrow domains such as credit decisions). To their credit, Senator Ron Wyden (D-OR), Senator Cory Booker (D-NJ), and Representative Yvette Clarke (D-NY) introduced an Algorithmic Accountability Act in February 2022 (itself an update of an earlier proposal from 2019), but it has not become law. If we took interpretability seriously — as we should — we would wait until better technology was available. In the real world, in the United States, the quest for profits is basically shoving aside consumer needs and human rights. source Unverified
Should third-party audits be mandatory for major AI systems?
Gary Marcus agrees and says:
OpenAI has also said, and I agree, “it’s important that efforts like ours submit to independent audits before releasing new systems”, but to my knowledge they have not yet submitted to such audits. They have also said “at some point, it may be important to get independent review before starting to train future systems”. But again, they have not submitted to any such advance reviews so far. We have to stop letting them set all the rules. AI is moving incredibly fast, with lots of potential — but also lots of risks. We obviously need government involved. We need the tech companies involved, big and small. But we also need independent scientists. Not just so that we scientists can have a voice, but so that we can participate, directly, in addressing the problems and evaluating solutions. And not just after products are released, but before. We need tight collaboration between independent scientists and governments—in order to hold the companies’ feet to the fire. Allowing independent scientists access to these systems before they are widely released – as part of a clinical trial-like safety evaluation – is a vital first step. (2023) source Verified
Should we all participate in shaping the future of AI and the post-artificial general intelligence era?
Gary Marcus agrees and says:
I worry about it. Sometimes, I think we’re going to wind up building better AI at some point no matter what I say and that we should prepare for what we’re going to do about it. I think that the concerns with over-empowered, mediocre AI are pretty serious and need to be dealt with no matter what. I signed that letter on a pause. I don’t expect that it’s going to happen. But I think that we as a society should be considering these things. I think we should be considering them even in conjunction with our competitors. But the geopolitical reality is probably that people will not. We have to prepare for that contingency as well. Sooner or later, we will get to artificial general intelligence and we should be figuring out what we’re going to do when we get there. (2023) source Unverified
Should a UN-led body oversee compute-intensive AI development like the IAEA does for nuclear technology?
Gary Marcus agrees and says:
I think the UN, UNESCO, places like that have been thinking about this for a long time. (2023) source Unverified
Should humanity ban the development of superintelligence until there is strong public buy-in and broad scientific consensus that it will be done safely and controllably?
Gary Marcus disagrees and says:
My opinion is that the moratorium that we should focus on is actually deployment until we have good safety cases. I don't know that we need to pause that particular project, but I do think its emphasis on focusing more on AI safety, on trustworthy, reliable AI is exactly right. [...] I would agree. And I don't think it's a realistic thing in the world. The reason I personally signed the letter was to call attention to how serious the problems were and to emphasize spending more of our efforts on trustworthy and safe AI rather than just making a bigger version of something we already know to be unreliable. (2023) source Unverified
Should we ban future open-source AI models that can be used to create weapons of mass destruction?
Gary Marcus agrees and says:
Some of the most recent models maybe can help people make biological weapons. (2025) source Unverified