Gary Marcus
Professor of Psychology and Neural Science
ai-regulation (9)
ai-governance (8)
policy (7)
ai-safety (6)
ai-policy (5)
regulations (5)
ai-ethics (4)
ai (3)
existential-risk (3)
international-relations (3)
ai-risk (2)
future (2)
research-policy (2)
transparency (2)
cern-for-ai (1)
- Gary Marcus votes For and says: "Algorithmic transparency. When a driverless car has an accident, or a consumer's loan application has been denied, we should be able to ask what's gone wrong. The big trouble with the black box algorithms that are currently in vogue is that [nobody] ..." (Unverified source)
- Gary Marcus votes For and says: "Some of the most recent models maybe can help people make biological weapons." (Unverified source, 2025)
- Gary Marcus votes For and says: "I think the UN, UNESCO, places like that have been thinking about this for a long time." (Unverified source, 2023)
- Gary Marcus votes For and says: "Yes, I agree [...] we need a new liability framework. [...] AI systems can produce harm at large scale [...]. Developing a framework for making companies responsible for harms [...] indirectly [...]." (Unverified source, 2023)
- Gary Marcus votes For and says: "Ultimately, we may need something like CERN: global, international and neutral, but focused on AI safety rather than high energy physics." (Unverified source, 2023)
- Gary Marcus votes Against and says: "My opinion is that the moratorium that we should focus on is actually deployment until we have good safety cases. I don't know that we need to pause that particular project, but I do think its emphasis on focusing more on AI safety, on trustworthy, r..." (Unverified source, 2023)