Gary Marcus
Professor of Psychology and Neural Science
ai-governance (8)
ai-safety (5)
ai-regulation (4)
cern-for-ai (4)
policy (3)
research-policy (3)
ai (1)
ai-policy (1)
existential-risk (1)
future (1)
international-relations (1)
transparency (1)
- Require AI systems above a capability threshold to be interpretable
  Gary Marcus votes For and says: "Algorithmic transparency. When a driverless car has an accident, or a consumer’s loan application has been denied, we should be able to ask what’s gone wrong. The big trouble with the black box algorithms that are currently in vogue is that [nobody]..." (Unverified source)
- Create a global institute for AI, similar to CERN
  Gary Marcus votes For.
- Mandate third-party audits for major AI systems
  Gary Marcus votes For.
- Ban open source AI models capable of creating WMDs
  Gary Marcus votes For and says: "Some of the most recent models maybe can help people make biological weapons." (Unverified source, 2025)
- Participate in shaping the future of AI
  Gary Marcus votes For and says: "I worry about it. Sometimes, I think we’re going to wind up building better AI at some point no matter what I say and that we should prepare for what we’re going to do about it. I think that the concerns with over-empowered, mediocre AI are pretty se..." (Unverified source, 2023)
- Establish a UN-led body to oversee compute-intensive AI
  Gary Marcus votes For and says: "I think the UN, UNESCO, places like that have been thinking about this for a long time." (Unverified source, 2023)
- Grant member states majority governance control in the CERN for AI
  Gary Marcus votes For and says: "I have talked about having something like a CERN [European Organization for Nuclear Research] for AI, which might focus on AI safety. In some industries, we know how to make reliable [products], usually only in narrow domains. One example is bridges:..." (Unverified source, 2024)
- The CERN for AI should be completely non-profit
  Gary Marcus votes For and says: "I have talked about having something like a CERN [European Organization for Nuclear Research] for AI, which might focus on AI safety. In some industries, we know how to make reliable [products], usually only in narrow domains. One example is bridges:..." (Unverified source)
- Mandate the CERN for AI to build safe superintelligence
  Gary Marcus votes Against and says: "I think if you asked those questions you would say, well what society most needs is automated scientific discovery that can help us actually understand the brain to cure neural disorders, to actually understand cancer to cure cancer, and so forth. If..." (Unverified source, 2017)
- AI poses an existential threat to humanity
  Gary Marcus votes Against.
- Ban superintelligence development until safety consensus is reached
  Gary Marcus votes Against and says: "My opinion is that the moratorium that we should focus on is actually deployment until we have good safety cases. I don't know that we need to pause that particular project, but I do think its emphasis on focusing more on AI safety, on trustworthy, r..." (Unverified source, 2023)