Gary Marcus
Professor of Psychology and Neural Science
ai (3)
ai-governance (3)
ai-regulation (3)
ai-safety (3)
existential-risk (3)
ai-policy (2)
ai-deployment (1)
ai-ethics (1)
ai-risk (1)
cern-for-ai (1)
democracy (1)
ethics (1)
eu (1)
international-relations (1)
public-interest-ai (1)
Should we ban future open-source AI models that can be used to create weapons of mass destruction?
Gary Marcus agrees and says:
"Some of the most recent models maybe can help people make biological weapons." (2025, source unverified)
Should a CERN for AI aim to build safe superintelligence?
Gary Marcus disagrees and says:
"I think if you asked those questions you would say, well, what society most needs is automated scientific discovery that can help us actually understand the brain to cure neural disorders, to actually understand cancer to cure cancer, and so forth. If that were the thing we were most trying to solve in AI, I think we would say, let's not leave it all in the hands of these companies. Let's have an international consortium, kind of like we had for CERN, the Large Hadron Collider. That's seven billion dollars. What if you had $7 billion that was carefully orchestrated toward a common goal? You could imagine society taking that approach. It's not going to happen right now given the current political climate." (2017, source unverified)
Should humanity ban the development of superintelligence until there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in?
Gary Marcus disagrees and says:
"My opinion is that the moratorium we should focus on is actually deployment, until we have good safety cases. I don't know that we need to pause that particular project, but I do think its emphasis on focusing more on AI safety, on trustworthy, reliable AI, is exactly right. [...] I would agree. And I don't think it's a realistic thing in the world. The reason I personally signed the letter was to call attention to how serious the problems were and to emphasize spending more of our efforts on trustworthy and safe AI rather than just making a bigger version of something we already know to be unreliable." (2023, source unverified)