Delegate
Choose a list of delegates; your vote follows their majority unless you vote directly.
existential-risk (4)
ai-governance (3)
ai-regulation (3)
ai-risk (3)
ai-safety (3)
policy (3)
agi (2)
ai (2)
ai-ethics (2)
ai-policy (2)
regulations (2)
cern-for-ai (1)
ethics (1)
future (1)
nuclear (1)
Demis Hassabis votes For and says:
"...endowing rogue nations or terrorists with tools to synthesize a deadly virus. [...] keep the 'weights' of the most powerful models out of the public's hands." (Unverified source, 2025)

Demis Hassabis votes For and says:
"I would advocate for a kind of CERN for AGI, and by that, I mean a kind of international research-focused high-end collaboration on the frontiers of AGI development to try and make that as safe as possible. You would also have to pair it with a kind..." (Unverified source, 2025)

Demis Hassabis votes For and says:
"Maybe it would be good to have a slightly slower pace, so that we can get this right societally. [...] Asked whether he would advocate for a pause in AI development if every company and country joined in, Hassabis responded: 'I think so.' He added th..." (Unverified source, 2026)

Demis Hassabis votes For and says:
"Scott Pelley: But is self-awareness a goal of yours? Demis Hassabis: Not explicitly. But it may happen implicitly. These systems might acquire some feeling of self-awareness. That is possible. I think it's important for these systems to understand y..." (Unverified source, 2025)