
Should a CERN for AI aim to build safe superintelligence?

Results (17): Quotes (16) · Users (1)
  • agrees and says:
    A thought experiment for regulating AI in two distinct regimes is what I call The Island. In this scenario, experts trying to build God-like AGI systems do so in a highly secure facility: an air-gapped enclosure with the best security humans can build. All other attempts to build God-like AI would become illegal; only when such AI were provably safe could they be commercialised “off-island”. This may sound like Jurassic Park, but there is a real-world precedent for removing the profit motive from potentially dangerous research and putting it in the hands of an intergovernmental organisation. This is how Cern, which operates the largest particle physics laboratory in the world, has worked for almost 70 years. [...] I would support significant regulation by governments and a practical plan to transform these companies into a Cern-like organisation. (2024) source Unverified
  • agrees and says:
    Haydn Belfield, a researcher at the University of Cambridge’s Centre for the Study of Existential Risk and the Leverhulme Centre for the Future of Intelligence, proposes two reinforcing institutions: an International AI Agency (IAIA) and CERN for AI. The IAIA would primarily serve as a monitoring and verification body, enforced by chip import restrictions: only countries that sign a verifiable commitment to certain safe compute practices would be permitted to accumulate large amounts of compute. Meanwhile, a “CERN for AI” is an international scientific cooperative megaproject on AI which would centralise frontier model training runs in one facility. [...] As an example of reinforcement, frontier foundation models would be shared out of the CERN for AI, under the supervision of the IAIA. (2024) source Unverified
  • Nick Bostrom
    Philosopher; 'Superintelligence' author; FHI founder
    strongly agrees and says:
    The ethical issues related to the possible future creation of machines with general intellectual capabilities far outstripping those of humans are quite distinct from any ethical problems arising in current automation and information systems. Such superintelligence would not be just another technological development; it would be the most important invention ever made, and would lead to explosive progress in all scientific and technological fields, as the superintelligence would conduct research with superhuman efficiency. [...] Suppose we develop superintelligence safely and ethically, and that we make good use of the almost magical powers this technology would unlock. We would transition into an era in which human labor becomes obsolete—a "post-instrumental" condition in which human efforts are not needed for any practical purpose. (2003) source Unverified
  • agrees and says:
    Artificial Intelligence is transforming our world, and Europe must take bold steps to ensure its leadership in this critical field. CAIRNE envisions a “CERN for AI”—a pan-European initiative that will create a distributed network of AI research centers, with a central hub serving as a focal point for collaboration, innovation, and ethical AI development. [...] Creating a CERN for AI will require strong commitment and investment from European governments, industry leaders, and research institutions. Key steps include: Establishing an independent AI research organization; building state-of-the-art infrastructure; developing European AI standards; and attracting and retaining top AI talent. (2025) source Unverified
  • agrees and says:
    Organizations similar to CERN (European Organization for Nuclear Research) but focused on AI should be established to provide researchers worldwide with broad and equitable access to computing resources, enable dataset construction, and support multi-way learning among researchers from different countries. [...] While recognizing that advanced technologies raise national security concerns, it emphasizes that academic institutions and governments have a responsibility to consider benefits for humanity and the environment globally, necessitating cooperation for peace and global security. Based on this, it proposes establishing a "CERN for AI," which would provide researchers worldwide with broad and equitable access to computing resources, enable dataset construction, and support multi-way learning among researchers from Global North and Global South countries. (2025) source Unverified
  • disagrees and says:
    AI could theoretically replace humans, but it is unlikely due to societal resistance. Humans would remain in control, effectively becoming the 'boss' of superintelligent AI systems. [...] He downplayed fears of a doomsday scenario caused by AI, labeling them as sci-fi clichés, and argued that current AI advancements are not close to achieving superintelligence. He suggested that to mitigate misuse and unreliability in AI, the focus should be on creating better AI systems with common sense and reasoning capabilities. (2025) source Unverified
  • agrees and says:
    CERN for AI should be understood not just as an ambitious tech project, but a crucial step to secure Europe’s economic future, safeguard its security and geopolitical standing, and steer the trajectory of AI development towards more trustworthy and ethically aligned systems. (2024) source Unverified
  • agrees and says:
    If we can find [meaning and purpose], then we can solve the problems of A.I. and other things. But it's through building trust, searching for truth, and making sure that what we discover is for the service of us. Very much like the CERN model that you talked about earlier on. (2023) source Unverified
  • strongly agrees and says:
The A.I. alignment field, the question of if we have superhuman intelligence, if we have superintelligence, if we have god-like A.I., how do we make that go well is a very, very important — and very importantly, this is a scientific problem. A scientific problem, an engineering problem that we have to understand. And also, a political problem to a large degree. There is this model — and I just want to know also what your company is doing — you know, the CERN model, the biggest particle physics lab in the world operates not necessarily on a profit model, but it's intergovernmental to do their experiments and research in sort of an island, not in the public until they have developed the right things. I would absolutely love this. I think this would be fantastic. I would love if governments, especially intergovernmental bodies, could get — come together and control A.I. and AGI research in particular. [...] But the type of superintelligence research, which is exactly what these large companies currently are doing, [...] there is currently more regulation on selling a sandwich to the public than there is to building potentially god-like intelligence by private companies. (2023) source Unverified
  • strongly disagrees and says:
    I did not sign that letter for four reasons. First, it is useless, because it arrives late. [...] The second reason is that its foundations look more like what I consider science fiction: it relies on a generic description of artificial intelligence dominating the world, a scenario we do not need to worry about. There are serious problems, but in the letter they are presented rhetorically, without scientific basis. When this new wave of AI technologies hit the headlines and became a popular topic, people immediately started saying: “Oh, we need state research laboratories, a big European lab, maybe a CERN for AI.” I have two considerations here and both are critical. [...] But a CERN for AI does not seem reasonable to me. In conclusion: it is probably useless, and too late. In any case, I do not support it. It would be very expensive and a waste of resources. (2023) source Unverified
  • disagrees and says:
    Proponents of a CERN-like body for AI have called for its creation as a way to build safer AI systems, enable more international coordination in AI development, and reduce dependencies on private industry labs for the development of safe and ethical AI systems. Rather than creating its own AI systems, some argue, a CERN-like institution could focus specifically on research into AI safety. Some advocates, such as computer scientist Gary Marcus, also argue that the CERN model could help advance AI safety research beyond the capacity of any one firm or nation. The new institution could bring together top talent under a mission grounded in principles of scientific openness, adherence to a pluralist view of human values (such as the collective goals of the UN’s 2030 Agenda for Sustainable Development), and responsible innovation. (2024) source Unverified
  • disagrees and says:
One successful European model in this area is CERN, which is known globally for its cutting-edge, world-leading research in particle physics. We need something like this for AI, to bring together a critical mass of experts who can then work together in an outstanding environment to focus on socially and economically important applications. An AI industry would then also accumulate around such a large research institution, comparable to Silicon Valley. We need something like this in Europe to make our AI research globally competitive. (2023) source Unverified
  • strongly agrees and says:
    I would advocate for a kind of CERN for AGI, and by that, I mean a kind of international research-focused high-end collaboration on the frontiers of AGI development to try and make that as safe as possible. You would also have to pair it with a kind of an institute like IAEA, to monitor unsafe projects. So a kind of like UN umbrella, something that is fit for purpose for that, a technical UN. (2025) source Unverified
  • agrees and says:
    First, we need some degree of coordination among the leading development efforts to ensure that the development of superintelligence occurs in a manner that allows us to both maintain safety and help smooth integration of these systems with society. There are many ways this could be implemented; major governments around the world could set up a project that many current efforts become part of, or we could collectively agree (with the backing power of a new organization like the one suggested below) that the rate of growth in AI capability at the frontier is limited to a certain rate per year. And of course, individual companies should be held to an extremely high standard of acting responsibly. Second, we are likely to eventually need something like an IAEA for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc. Tracking compute and energy usage could go a long way, and give us some hope this idea could actually be implementable. As a first step, companies could voluntarily agree to begin implementing elements of what such an agency might one day require, and as a second, individual countries could implement it. It would be important that such an agency focus on reducing existential risk and not issues that should be left to individual countries, such as defining what an AI should be allowed to say. Third, we need the technical capability to make a superintelligence safe. This is an open research question that we and others are putting a lot of effort into. (2023) source Unverified
  • disagrees and says:
    I think if you asked those questions you would say, well what society most needs is automated scientific discovery that can help us actually understand the brain to cure neural disorders, to actually understand cancer to cure cancer, and so forth. If that were the thing we were most trying to solve in AI, I think we would say, let’s not leave it all in the hands of these companies. Let’s have an international consortium kind of like we had for CERN, the large hadron collider. That’s seven billion dollars. What if you had $7 billion dollars that was carefully orchestrated towards a common goal. You could imagine society taking that approach. It’s not going to happen right now given the current political climate. (2017) source Unverified
  • disagrees and says:
    I spent a lot of years hoping that the collaboration would occur, and there are many people in our industry who think that the arrival and development of this new intelligence is so important, it should be done in a multinational way. It should be done in the equivalent of CERN, which is the great physics laboratory, which is global in Switzerland. The political tensions and the stress over values is so great. There’s just no scenario. There’s just — I want to say it again, there’s just no scenario where you can do that. (2024) source Verified
  • strongly disagrees and says:
A superintelligence is, by definition, beyond what we can understand, so we can never know whether it is safe. It is better to build narrow AI systems than an AGI superintelligence.