Nick Clegg
Meta President of Global Affairs
ai (2)
ai-governance (2)
ai-policy (2)
ai-regulation (2)
ai-risk (2)
ai-safety (2)
public-interest-ai (2)
ai-deployment (1)
ai-ethics (1)
existential-risk (1)
international-relations (1)
Should a UN-led body oversee compute-intensive AI development like the IAEA does for nuclear technology?
Nick Clegg agrees and says:
The fundamental idea is, how should we as a world react if and when AI develops a degree of autonomy or agency? [...] Once we do that, we do cross a Rubicon. If that happens, by the way, there's debate among experts; some say in the next 18 months, some say not within 80 years. But once you cross that Rubicon, you're in a very different world. The large language models we've released are very primitive compared to that vision of the future. [...] But if it does emerge, I do think, whether it's the IAEA or some other regulatory model, you're in a completely different ballgame. (2023)
Should we ban future open-source AI models that can be used to create weapons of mass destruction?
Nick Clegg disagrees and says:
My view is that the hype has somewhat run ahead of the technology. I think a lot of the existential warnings relate to models that don't currently exist, so-called super-intelligent, super-powerful AI models – the vision where AI develops an autonomy and agency on its own, where it can think for itself and reproduce itself. The models that we're open-sourcing are far, far, far short of that. In fact, in many ways they're quite stupid. (2023)