Gary Marcus
Professor of Psychology and Neural Science
Should a CERN for AI aim to build safe superintelligence?
Gary Marcus disagrees and says:
I think if you asked those questions you would say, well, what society most needs is automated scientific discovery that can help us actually understand the brain to cure neural disorders, to actually understand cancer to cure cancer, and so forth. If that were the thing we were most trying to solve in AI, I think we would say, let's not leave it all in the hands of these companies. Let's have an international consortium kind of like we had for CERN, the Large Hadron Collider. That's seven billion dollars. What if you had $7 billion that was carefully orchestrated toward a common goal? You could imagine society taking that approach. It's not going to happen right now given the current political climate. (2017) source Unverified
Should a UN-led body oversee compute-intensive AI development like the IAEA does for nuclear technology?
Gary Marcus agrees and says:
I think the UN, UNESCO, places like that have been thinking about this for a long time. (2023) source Unverified
Should a CERN for AI be completely non-profit?
Gary Marcus agrees and says:
I have talked about having something like a CERN [European Organization for Nuclear Research] for AI, which might focus on AI safety. In some industries, we know how to make reliable [products], usually only in narrow domains. One example is bridges: You can't guarantee that a bridge will never fall down, but you can say that, unless there's an earthquake of a certain magnitude that only happens once every century, we're confident the bridge will still stand. Our bridges don't fall down often anymore. But for AI, we can't do that at all as an engineering practice—it's like alchemy. There's no guarantee that any of it works. So, you could imagine an international consortium trying to either fix the current systems, which I think, in historical perspective, will seem mediocre, or build something better that does offer those guarantees. Many of the big technologies that we have around, from the internet to spaceships, were government-funded in the past; it's a myth that in America innovation only comes from the free market. source Unverified