ai-risk (4)
ai-governance (3)
ai-regulation (3)
ai-safety (3)
existential-risk (3)
policy (3)
regulations (3)
ai (2)
ai-ethics (2)
research-policy (2)
agi (1)
ai-policy (1)
cern-for-ai (1)
ethics (1)
nuclear (1)
- Demis Hassabis votes For and says: "endowing rogue nations or terrorists with tools to synthesize a deadly virus. [...] keep the “weights” of the most powerful models out of the public's hands." Unverified source (2025)
- Demis Hassabis votes For and says: "It may be a good thing that [AI development is] not as fast. There's a whole bunch of other things that we need to think through with this technology. [...] We don't have a lot of time to sort out before we get to [Artificial General Intelligence]." Unverified source (2026)
- Demis Hassabis votes For and says: "I would advocate for a kind of CERN for AGI, and by that, I mean a kind of international research-focused high-end collaboration on the frontiers of AGI development to try and make that as safe as possible. You would also have to pair it with a kind..." Unverified source (2025)
- Demis Hassabis votes For and says: "Scott Pelley: But is self-awareness a goal of yours? Demis Hassabis: Not explicitly. But it may happen implicitly. These systems might acquire some feeling of self-awareness. That is possible. I think it's important for these systems to understand y..." Unverified source (2025)