Lawrence Lessig, Harvard Law professor (2024):
At its core, SB1047 does one small but incredibly important thing: It requires that those developing the most advanced AI models adopt and follow safety protocols—including shutdown protocols—to reduce any risk that their models are stolen or deployed in a way that causes “critical harm.”
The problem for tech companies is that the law builds in mechanisms to ensure that the protocols are sufficiently robust and actually enforced. The law would eventually require outside auditors to review the protocols, and from the start, it would protect whistleblowers within firms who come forward to show that protocols are not being followed. The law thus makes real what the companies say they are already doing.
But if they’re already creating these safety protocols, why do we need a law to mandate them? First, because, as some within the industry assert directly, existing guidelines are often inadequate; and second, because, as whistleblowers have already revealed, some companies are not following the protocols they have adopted. Opposition to SB1047 is thus designed to keep safety optional: something the companies can promise but have no effective obligation to deliver.