At the same time, to reduce the probability of someone intentionally or unintentionally bringing about a rogue AI, we need stronger governance, and we should consider limiting access to the large-scale generalist AI systems that could be weaponized. That would mean not sharing the code and neural-net parameters in open source, nor some of the important engineering tricks needed to make them work. Ideally, these systems would stay in the hands of neutral international organizations (think of a combination of the IAEA and CERN for AI) that develop safe and beneficial AI systems and could also help us fight rogue AIs. [...] Moreover, governments can help monitor and punish other states that start undercover AI projects. Governments could have oversight of a superhuman AI without that code being open source. (2023)
— Yoshua Bengio