Former NTIA administrator, attorney
We set out to answer an important question: If we want responsible innovation and trustworthy AI, how do we hold AI systems — and the entities and individuals that develop, deploy, and use them — accountable? How do we ensure that they are doing what they say? For example, if an AI system claims to keep data private, or operate securely, or avoid biased outcomes – how do we ensure those claims are true?
The Report calls for improved transparency into AI systems, independent evaluations, and consequences for imposing risks. One key recommendation: the government ought to require independent audits of the highest-risk AI systems, such as those that directly affect physical safety or health.
(2024)
Alan B. Davidson