Putting liability on users feels the most incentive-compatible. While the link between how a model is developed and how it ends up being used is often unclear, the user decides exactly how the AI is used. Liability on users creates strong pressure to do AI in what I consider the right way: focus on building mecha suits for the human mind, not on creating new forms of self-sustaining intelligent life. The former stays responsive to user intent, and so would not cause catastrophic actions unless the user wanted them to. The latter carries the greatest risk of the classic "AI going rogue" scenario. — Vitalik Buterin (2025)