Comment by Future of Life Institute

The Roadmap emphasizes the need to “hold AI developers and deployers accountable if their products or actions cause harm to consumers.” We agree that developers, deployers, and users should all be expected to behave responsibly in the creation, deployment, and use of AI systems, and we emphasize that imposing liability on developers is particularly critical to encourage early design choices that prioritize the safety and wellbeing of the public. The Roadmap also correctly points out that “the rapid evolution of technology and the varying degrees of autonomy in AI products present difficulties in assigning legal liability to AI companies and their users.” Under current law, it is unclear who is responsible when an AI system causes harm, particularly given the complexity of the AI supply chain. Shielding the developers and operators of advanced AI systems from liability for harms resulting from their products would provide little incentive for responsible design that minimizes risk, and, as we have seen with social media, could produce wildly misaligned incentives to the detriment of the American public.

[...]

Congress should clarify that Section 230 of the Communications Decency Act does not shield AI providers from liability for harms resulting from their systems, even if the output was generated in response to a prompt provided by a user.