Gary Marcus
Professor of Psychology and Neural Science
Tags: ai-safety (5), ai-governance (3), ai-regulation (2), cern-for-ai (2), existential-risk (1), policy (1), research-policy (1)
Create a global institute for AI, similar to CERN
Gary Marcus votes For.
Ban open source AI models capable of creating WMDs
Gary Marcus votes For and says:
"Some of the most recent models maybe can help people make biological weapons." (Unverified source, 2025)
AI poses an existential threat to humanity
Gary Marcus votes Against.
Mandate the CERN for AI to build safe superintelligence
Gary Marcus votes Against and says:
"I think if you asked those questions you would say, well, what society most needs is automated scientific discovery that can help us actually understand the brain to cure neural disorders, to actually understand cancer to cure cancer, and so forth. If..." (Unverified source, 2017)
Ban superintelligence development until safety consensus is reached
Gary Marcus votes Against and says:
"My opinion is that the moratorium that we should focus on is actually deployment until we have good safety cases. I don't know that we need to pause that particular project, but I do think its emphasis on focusing more on AI safety, on trustworthy, r..." (Unverified source, 2023)