Comment by Sam Gregory

Users should have control over how their personal data, or data generated while interacting with a service, is used to train AI systems. This is consistent with a rights-based approach to regulating AI, one that emphasizes rights such as privacy. As I noted in my written statement, a rights-based approach to AI regulation, rather than an exclusively risk-based one, is fundamental to reflecting the human rights impact of AI. Legislative efforts that protect users from having their data collected to train AI models would also align with existing data protection laws in the U.S. A rights-based approach that centers U.S. Constitutional rights and fundamental human rights means protecting these rights "by default." Accordingly, users should be given the choice to indicate whether they want their data included in training, rather than having that data collected as a built-in feature of the system, even where an opt-out is available. Government testing of "nudging" techniques likewise suggests that, to protect consumers' rights, it is preferable in this situation to require users to opt in.