AI Now Institute, an NYU institute on AI policy (2017)
1. Core public agencies, such as those responsible for criminal justice, healthcare, welfare, and education (i.e., “high stakes” domains), should no longer use ‘black box’ AI and algorithmic systems. This includes the unreviewed or unvalidated use of pre-trained models, AI systems licensed from third-party vendors, and algorithmic processes created in-house. The use of such systems by public agencies raises serious due process concerns; at a minimum, such systems should be available for public auditing, testing, and review, and subject to accountability standards.
This would represent a significant shift: our recommendation reflects the major decisions that AI and related systems are already influencing, and the multiple studies over the last twelve months documenting bias in such systems (as detailed in our report). Others are also moving in this direction, from the ruling in favor of teachers in Texas to the process underway in New York City this month, where the City Council is considering a bill to ensure transparency and testing of algorithmic decision-making systems.