Center for Democracy & Technology deputy director
Principles for responsible use of AI technologies should be applied broadly across development and deployment. In particular, Government use of AI should be:

(1) Built upon proper training data;
(2) Subject to independent testing and high performance standards;
(3) Deployed only within the bounds of the technology's designed function;
(4) Used exclusively by trained staff and corroborated by human review;
(5) Subject to internal governance mechanisms that define and promote responsible use;
(6) Bound by safeguards to protect human rights and Constitutional values; and
(7) Regulated by institutional mechanisms for ensuring transparency and oversight.

Independent Testing.--Requiring independent testing and high performance standards is a key safeguard against adoption of low-quality systems. Such measures are important because poor algorithm design or flawed training data are not always readily apparent, and AI technologies are frequently being applied to new situations and circumstances. Testing should be conducted by independent experts, with a transparent methodology that allows for peer review and improvement.