Long timelines and cost overruns often plague ambitious big science collaborations, and physics breakthroughs have required enormous hardware investments sustained over years. CERN’s Large Hadron Collider, for example, drew on more than 10,000 scientists and engineers from hundreds of universities and laboratories for its design and construction over a decade. While today’s computer clusters for AI research do not yet demand workforces of that size, constructing data centres and network infrastructure at scale for a new institute would still take time, investment, and reliable access to the specialized AI chips that are currently undersupplied. That said, the modular nature of graphics processing units (GPUs) and servers could allow AI infrastructure to be scaled up far faster than was feasible in previous science megaprojects. The challenges of AI safety also differ from those of particle physics, so addressing them may require more dynamic, distributed initiatives. Care would need to be taken to involve diverse stakeholders and to balance capabilities against controls. Inflated expectations for AI governance via a CERN-like model could backfire if proponents are not realistic about such an organization’s inherent limitations.