Comment by Dean W. Ball

There is much to recommend these laws over the Nevada and Illinois bills I discussed above. Unlike those laws, SB 53 and RAISE are technically sophisticated, reflecting a clear understanding (for the most part) of what it is possible for AI developers to do. Here is what SB 53 does:

1. Requires developers of the largest AI models to publish a “safety and security protocol” describing the developer’s process for measuring, evaluating, and mitigating catastrophic risks (risks in which a single incident results in the death of more than 50 people or more than $1 billion in property damage) and dangerous capabilities (expert-level bioweapon or cyberattack advice or execution; engaging in murder, assault, extortion, theft, and the like; and evading developer control).

2. Requires developers to report “critical safety incidents” to the California Attorney General. These include theft of model weights (assuming a closed-source model), loss of control over a foundation model resulting in injury or death, any materialization of a catastrophic risk (as defined above), model deception of developers (when the developer is not conducting experiments to elicit model deception), or any time a model first crosses a dangerous capability threshold as defined by its developer.