In my (currently) preferred version of a CERN for AI, an initially small but steadily growing coalition of companies and countries would:

* Collaborate on designing and building highly secure chips and datacenters;
* Collaborate on accelerating AI safety research and engineering, and agree on a plan for safely scaling AI well beyond human levels of intelligence while preserving alignment with human values;
* Safely scale AI well beyond human levels of intelligence while preserving alignment with human values;
* Distribute (distilled versions of) this intelligence around the world.

In practice, it won't be quite this linear, which I'll return to later, but this sequence of bullets conveys the gist of the idea.

[...]

This suggests that the most dangerous AI should not be developed by a private company making decisions based on profit, or by a government pursuing its own national interests, but instead through some sort of cooperative arrangement among all those companies and governments, and this collaboration should be accountable to all of humanity rather than to shareholders or to the citizens of just one country.

The first practical motivation is simply that we don't yet know how to do all of this safely (keeping AI aligned with human values as it gets more intelligent) and securely (making sure it doesn't get stolen and then misused in catastrophic ways), and the people good at these things are scattered across many organizations and countries. The best cybersecurity engineer and the best safety research engineer each currently help only one project, and it may not even be the one with the most advanced capabilities. This suggests the value of combining expertise across the various currently competing initiatives.