Gary Marcus
Professor of Psychology and Neural Science
policy (9)
ai-governance (8)
ai-regulation (7)
ai-safety (6)
regulations (6)
ai-policy (4)
existential-risk (4)
ai (3)
ai-ethics (3)
ethics (3)
international-relations (3)
research-policy (3)
ai-risk (2)
cern-for-ai (2)
future (2)
Gary Marcus votes For and says:
Some of the most recent models maybe can help people make biological weapons. Unverified source (2025)
Gary Marcus votes For and says:
Yes, I agree [...] we need a new liability framework. [...] AI systems can produce harm at large scale [...]. Developing a framework for making companies responsible for harms [...] indirectly [...]. Unverified source (2023)
Gary Marcus votes For and says:
Ultimately, we may need something like CERN: global, international and neutral, but focused on AI safety rather than high-energy physics. Unverified source (2023)
Gary Marcus votes For and says:
I have talked about having something like a CERN [European Organization for Nuclear Research] for AI, which might focus on AI safety. In some industries, we know how to make reliable [products], usually only in narrow domains. One example is bridges: [...] Unverified source
Gary Marcus votes For and says:
I think the UN, UNESCO, places like that have been thinking about this for a long time. Unverified source (2023)
Gary Marcus votes Against and says:
I think if you asked those questions you would say, well what society most needs is automated scientific discovery that can help us actually understand the brain to cure neural disorders, to actually understand cancer to cure cancer, and so forth. If [...] Unverified source (2017)
Gary Marcus votes Against and says:
My opinion is that the moratorium that we should focus on is actually deployment until we have good safety cases. I don't know that we need to pause that particular project, but I do think its emphasis on focusing more on AI safety, on trustworthy, r[...] Unverified source (2023)
Gary Marcus votes Against and says: