Gary Marcus

Info
Professor of Psychology and Neural Science
X: @GaryMarcus · Wikipedia
Location: United States
  • Should AI systems above a certain capability threshold be required to have interpretable decision-making processes?
    Algorithmic transparency. When a driverless car has an accident, or a consumer’s loan application has been denied, we should be able to ask what’s gone wrong. The big trouble with the black box algorithms that are currently in vogue is that [nobody] knows exactly why an LLM or generative model produces what it does. Guidelines like the White House’s Blueprint for an AI Bill of Rights, UNESCO’s Recommendation on the Ethics of Artificial Intelligence, and the Center for AI and Digital Policy’s Universal Guidelines for AI all decry this lack of interpretability. The EU AI Act represents real progress in this regard, but so far in the United States, there is little legal requirement for algorithms to be disclosed or interpretable (except in narrow domains such as credit decisions). To their credit, Senator Ron Wyden (D-OR), Senator Cory Booker (D-NJ), and Representative Yvette Clarke (D-NY) introduced an Algorithmic Accountability Act in February 2022 (itself an update of an earlier proposal from 2019), but it has not become law. If we took interpretability seriously — as we should — we would wait until better technology was available. In the real world, in the United States, the quest for profits is basically shoving aside consumer needs and human rights. source Unverified
  • Should third-party audits be mandatory for major AI systems?
    OpenAI has also said, and I agree, “it’s important that efforts like ours submit to independent audits before releasing new systems”, but to my knowledge they have not yet submitted to such audits. They have also said “at some point, it may be important to get independent review before starting to train future systems”. But again, they have not submitted to any such advance reviews so far. We have to stop letting them set all the rules. AI is moving incredibly fast, with lots of potential — but also lots of risks. We obviously need government involved. We need the tech companies involved, big and small. But we also need independent scientists. Not just so that we scientists can have a voice, but so that we can participate, directly, in addressing the problems and evaluating solutions. And not just after products are released, but before. We need tight collaboration between independent scientists and governments—in order to hold the companies’ feet to the fire. Allowing independent scientists access to these systems before they are widely released – as part of a clinical trial-like safety evaluation – is a vital first step. (2023) source Verified
  • Should a CERN for AI aim to build safe superintelligence?
    I think if you asked those questions you would say, well what society most needs is automated scientific discovery that can help us actually understand the brain to cure neural disorders, to actually understand cancer to cure cancer, and so forth. If that were the thing we were most trying to solve in AI, I think we would say, let’s not leave it all in the hands of these companies. Let’s have an international consortium kind of like we had for CERN, the Large Hadron Collider. That’s seven billion dollars. What if you had $7 billion that was carefully orchestrated towards a common goal? You could imagine society taking that approach. It’s not going to happen right now given the current political climate. (2017) source Unverified
  • Should a UN-led body oversee compute-intensive AI development like the IAEA does for nuclear technology?
    I think the UN, UNESCO, places like that have been thinking about this for a long time. (2023) source Unverified
  • Should humanity ban the development of superintelligence until there is broad scientific consensus that it will be done safely and controllably and strong public buy-in?
    My opinion is that the moratorium we should focus on is actually on deployment, until we have good safety cases. I don't know that we need to pause that particular project, but I do think its emphasis on focusing more on AI safety, on trustworthy, reliable AI is exactly right. [...] I would agree. And I don't think it's a realistic thing in the world. The reason I personally signed the letter was to call attention to how serious the problems were and to emphasize spending more of our efforts on trustworthy and safe AI rather than just making a bigger version of something we already know to be unreliable. (2023) source Unverified
  • Should member states have majority governance control in a CERN for AI?
    I have talked about having something like a CERN [European Organization for Nuclear Research] for AI, which might focus on AI safety. In some industries, we know how to make reliable [products], usually only in narrow domains. One example is bridges: You can’t guarantee that a bridge will never fall down, but you can say that, unless there’s an earthquake of a certain magnitude that only happens once every century, we’re confident the bridge will still stand. Our bridges don’t fall down often anymore. But for AI, we can’t do that at all as an engineering practice—it’s like alchemy. There’s no guarantee that any of it works. So, you could imagine an international consortium trying to either fix the current systems, which I think, in historical perspective, will seem mediocre, or build something better that does offer those guarantees. Many of the big technologies that we have around, from the internet to spaceships, were government-funded in the past; it’s a myth that in America innovation only comes from the free market. (2024) source Unverified
  • Should we ban future open-source AI models that can be used to create weapons of mass destruction?
    Some of the most recent models maybe can help people make biological weapons. (2025) source Unverified