Connor Leahy
Conjecture CEO; AI safety researcher
ai (1)
ai-governance (1)
ai-policy (1)
ai-regulation (1)
ai-safety (1)
cern-for-ai (1)
ethics (1)
eu (1)
existential-risk (1)
international-relations (1)
research-policy (1)
science-funding (1)
Should a CERN for AI aim to build safe superintelligence?
Connor Leahy strongly agrees and says:
The A.I. alignment field, the question of: if we have superhuman intelligence, if we have superintelligence, if we have god-like A.I., how do we make that go well, is a very, very important — and very importantly, this is a scientific problem. A scientific problem, an engineering problem that we have to understand. And also, a political problem to a large degree. There is this model — and I just want to know also what your company is doing — you know, the CERN model, the biggest particle physics lab in the world operates not necessarily on a profit model, but it's intergovernmental, to do their experiments and research in sort of an island, not in the public until they have developed the right things. I would absolutely love this. I think this would be fantastic. I would love if governments, especially intergovernmental bodies, could get — come together and control A.I. and AGI research in particular. [...] But the type of superintelligence research, which is exactly what these large companies currently are doing, [...] there is currently more regulation on selling a sandwich to the public than there is to building potentially god-like intelligence by private companies. (2023) source Unverified