Lawfare contributing writer
To mitigate the “race to God-like AI,” Ian Hogarth—chair of the U.K. AI Safety Institute—proposed an “Island model,” in which a joint international lab performs research on superintelligence in a highly secure facility.
An essential part of this proposal is that certain kinds of research on advanced AI would occur only within a secure international facility. To reduce the risks of an AI race, research on artificial general intelligence (AGI) or artificial superintelligence would be prohibited outside of the island, whether that takes the form of a "CERN for AI" or a joint international AGI project.
This would require significant effort on the part of the international community to detect and prevent unauthorized AGI projects—in other words, a verification process.