Should we ban predictive policing?
Quotes (27)
- Electronic Frontier Foundation (a nonprofit that fights for your privacy and free speech online) strongly agrees and says: Shout it from the rooftops: math cannot predict crime. But it can further criminalize neighborhoods already disproportionately over-represented in police data due to constant surveillance.
- Open Rights Group (digital rights advocacy group) strongly agrees and says: Many AI systems have been proven to magnify discrimination and inequality. In particular, so-called ‘predictive policing’ and biometric surveillance systems are disproportionately used to target marginalised groups including racialised, working class and migrant communities. These systems criminalise people and infringe human rights, including the fundamental right to be presumed innocent.
- Sarah Chander (digital rights policy adviser) strongly agrees and says: I can’t profess to have researched every particular high-risk use in detail, but many of them should be banned. Predictive policing is a really good example. Many people believe that if you de-bias predictive policing systems, they will no longer profile and lead to the over-policing of racialized and poor communities. I disagree. Because such systems are steeped in a broader context of racial inequality and class inequality, there is no way you can make a technical tweak or slightly improve the dataset such that discriminatory results will not ensue from the use of the system. And this leads me to believe that it should be banned. This is one of the areas where the bias debate can be a little bit obscuring. (2021)
- Corey O’Connor (Pittsburgh City Council member) agrees and says: These are things we are seeing -- a trend across the country that the technology is not up to speed enough and people are getting arrested that should not be. [...] You use crime data and you use mathematics to basically say where to put police. It's not fair and because you want to have some common sense in process. (2020)
- ARTICLE 19 (global free-expression NGO) strongly agrees and says: Notwithstanding the limited reach of the ban on predictive policing — which excludes event and location-based predictions — the guidelines must clarify that predicting ‘risk of committing a criminal offence’ includes all systems that purport to predict a wide range of behaviours that are criminalised and have criminal law and administrative consequences. As such, the guidelines should specify that systems making predictions about the likelihood of being registered in a police system [...] are within the scope of the prohibition. [...] In these cases, such systems must be covered by the ban as they amount to criminal risk assessments, and systems such as risk assessments included in ETIAS shall also be banned. The current ban on non targeted scraping of facial images leaves room for problematic loopholes. The guidelines must clarify that any derogation from the ban must be in line with the case law of the Court of Justice of the EU, and that any face scraped from the internet or CCTV footage must have a link to the commission of a crime. (2025)
- Lindsey Barrett (law scholar and attorney) strongly disagrees and says: This article will argue that the use of predictive policing algorithms at the border should not be barred outright, as the government should permit potentially beneficial uses of the technology to develop. However, use of these algorithms should be carefully limited by statute to prevent the wholesale trammeling of privacy and civil liberties. This article argues that the use of predictive policing programs should be limited, but not banned, in the border context. [...] Predictive policing cannot and should not be implemented without rigorous safeguards, but neither should its potential to improve law enforcement methods go wholly ignored. (2017)
- David O’Connor (CFR Net Politics guest blogger) disagrees and says: That’s not to say that police departments shouldn’t use software to analyze their data. [...] Further development of the technology is inevitable, so local governments and police departments should develop appropriate standards and practices. [...] Person-based algorithmic forecasts should never be accepted as meeting the reasonable suspicion requirement for detaining an individual, and only data specialists should have access to the software to reduce the chances of abuse. (2017)
- Ricky Burgess (Pittsburgh City Council member) strongly agrees and says: I'm in favor of banning predictive policing. [...] This bill does not ban predictive policing. (2020)
- Dragoș Tudorache (MEP and EU AI Act co-rapporteur) strongly agrees and says: We don't want mass surveillance, we don't want social scoring, we don't want predictive policing in the European Union, full stop. (2023)
- Liberty (UK civil liberties organization) strongly agrees and says: Ban predictive policing algorithms.
- William J. Bratton (former NYPD and LAPD police chief) strongly disagrees and says: We have a number of predictive-policing initiatives underway [...] It is 21st-century policing – it’s great. (2016)
- Brando Benifei (Italian MEP and AI Act co-rapporteur) strongly agrees and says: The committee bans discriminatory biometric categorisation, predictive policing, emotive recognition and the mass scraping of images. (2023)
- Justin Cummings (former Santa Cruz mayor) agrees and says: Understanding how predictive policing and facial recognition can be disproportionately biased against people of color, we officially banned the use of these technologies in the city of Santa Cruz. [...] "help eliminate racism in policing". (2020)
- CAIR San Francisco Bay Area (civil rights organization) strongly agrees and says: We thank the Oakland City Council for their unanimous vote to protect our dignity and privacy through banning predictive policing and biometric surveillance technology. (2021)
- Andy Mills (Santa Cruz police chief) agrees and says: Predictive policing has been shown over time to put officers in conflict with communities rather than working with the communities. [...] "until such time that it can be peer reviewed and scientifically proven." (2020)
- European Parliament (EU legislative body) agrees and says: To respect privacy and human dignity, MEPs ask for a permanent ban on the automated recognition of individuals in public spaces, noting that citizens should only be monitored when suspected of a crime. Parliament calls for the use of private facial recognition databases (like the Clearview AI system, which is already in use) and predictive policing based on behavioural data to be forbidden. MEPs also want to ban social scoring systems, which try to rate the trustworthiness of citizens based on their behaviour or personality. (2021)
- Electronic Privacy Information Center (EPIC), a privacy and civil liberties NGO, strongly agrees and says: DOJ and DHS Should Not Employ Predictive Policing Technologies Because They Are Untested, Riddled with Bias, and Rife with Systemic Issues. EPIC strongly urges DOJ and DHS to reconsider their funding and use of predictive policing technology due to the severe and systemic risks associated with the tools. Predictive policing technology does not indicate commission of a crime, it is merely a tool that increases situational awareness and alerts law enforcement to potential threats, so it should not be the sole basis of an arrest. Predictive policing technologies should not constitute probable cause, and search warrants based solely on outputs from these technologies should not be approved.
- Madison Cutler (policy brief author, Ford School) strongly agrees and says: Predictive policing programs utilize algorithms to allocate law enforcement resources to areas and persons identified as having a higher risk for crime, despite a lack of evidence showing the efficacy of these determinations. Commercially-produced and in-house predictive policing software varies greatly in design and is not subject to standardization or government accountability. Predictive policing programs rely on biased historical data, threaten the civil rights of community members, and do not provide for transparency or accountability regarding their practices. For these reasons, predictive policing tools should not be used in any law enforcement agency. If agencies are using these tools, they must be subject to frequent evaluation and practice transparency regarding data. (2024)
- The People’s Plan NYC (NYC multi-group policy platform) strongly agrees and says: Problem: Predictive policing is software that is sold to the public as a way for police to “predict crime.” In practice, this software automates racial profiling, harvesting data from sources like the gang databases, which is 99 percent Black and Latinx and people of color, according to The Appeal, baking in the bias from past abuses like stop-and-frisk. In Oakland, predictive policing led to the predictable outcome, focusing drug enforcement in poor, minority communities. These predictive policing tools are also completely opaque and ineffective in actually combating the socioeconomic and social factors that contribute to violence and harm against New Yorkers, some of which is perpetrated by police. Recommendation: The City must ban the acquisition and use of surveillance technologies and predictive policing software, especially for the enforcement of suspected drug offenses. New York City and State must ban predictive policing software including the NYPD’s use of Patternizr. This software condemns those with past criminal legal system involvement to be viewed as perpetual suspects, which continues cycles of criminalization against Black and Latinx New Yorkers and youth who are more likely to be racially profiled by these systems in the first place. (2021)
- Hannah Sassaman (policy director, Movement Alliance Project) strongly agrees and says: But instead of actually predicting and reducing crime and violence, these algorithms promote systems of over-policing and mass incarceration, perpetuating racism and increasing tensions between police and communities. Designers claim that predictive policing can save money through 'smart' targeting of police resources, but algorithms meant to foresee where crime will occur only justified massive and often violent deployment to neighborhoods already suffering from poverty and disinvestment. Ultimately, these algorithms didn’t reduce the money taxpayers spend on the cops. These promising signs underscore the importance of breaking with algorithmic decisionmaking, whether through 'predictive policing' or other algorithms used in the criminal legal system. As our local governments return to even emptier coffers and major municipal budget pressures, we should quickly abolish these models across all criminal legal system contexts. We have a narrow window to ban this form of 'e-carceration' and to keep new tech out of the hands of police, instead of following the path the country took after the last economic crash. (2020)
- Petar Vitanov (Bulgarian MEP, LIBE rapporteur) strongly agrees and says: Fundamental rights are unconditional. For the first time ever, we are calling for a moratorium on the deployment of facial recognition systems for law enforcement purposes, as the technology has proven to be ineffective and often leads to discriminatory results. We are clearly opposed to predictive policing based on the use of AI as well as any processing of biometric data that leads to mass surveillance. This is a huge win for all European citizens. (2021)
- Fair Trials (global criminal justice watchdog NGO) strongly agrees and says: Today, Fair Trials has launched an interactive online tool designed to show how the police and other criminal justice authorities are using predictive systems to profile people and areas as criminal, even before alleged crimes have occurred. Our research has shown that more and more police forces and criminal justice authorities across Europe are using automated and data-driven systems, including artificial intelligence (AI), to profile people and try and ‘predict’ their ‘risk’ of committing a crime in the future, as well as profile areas to ‘predict’ whether crime will occur there in future. There is growing opposition to predictive policing and justice systems across Europe, with many organisations and some Members of the European Parliament (MEPs) supporting a ban. Fair Trials is calling on MEPs to ban predictive systems when they vote on the Artificial Intelligence Act in the upcoming months. We hope that our example predictive policing and justice tool will raise awareness of the discriminatory outcomes generated by these systems. Using information about someone’s school attendance, family circumstances, ethnicity and finances to decide if they could be a criminal is fundamentally discriminatory. The only way to protect people and their rights across Europe is to ban these systems. (2023)
- Griff Ferris (Fair Trials legal and policy officer) strongly agrees and says: Age-old discrimination is being hard-wired into new age technologies in the form of predictive and profiling AI systems used by law enforcement and criminal justice authorities. Seeking to predict people’s future behaviour and punish them for it is completely incompatible with the fundamental right to be presumed innocent until proven guilty. The only way to protect people from these harms and other fundamental rights infringements is to prohibit their use. (2022)
- Matt Cagle (ACLU tech and civil liberties attorney) strongly agrees and says: As Santa Cruz rightly recognized, predictive policing and facial recognition are dangerous, racially biased technologies that should never be used by our government. [...] Lawmakers across the country have a responsibility to step up and dismantle surveillance systems that have long been used to repress activism, target communities of color, and invade people's private lives. (2020)
- Amnesty International UK (human rights organization) strongly agrees and says: We call on the UK Government to prohibit the use of automated and ‘predictive’ policing systems in England and Wales. Almost three-quarters of UK police forces are using data-based and data-driven systems to attempt to predict, profile and assess the risk of crime or criminalised behaviour occurring in the future. These so-called predictive policing tools violate people’s rights and disproportionately impact Black and racialised communities. We urge the UK Government to do the right thing and prohibit use of these technologies. In moving towards this, we call for there to be transparency in how these systems are being used, and for the creation of meaningful routes to challenge policing decisions made using them. Governments across the UK must prohibit the use of these technologies. Right now, they can demand transparency on how these systems are being used. People and communities subjected to these systems must have the right to know about them and have meaningful routes to challenge policing decisions made using them. These discriminatory and racist systems must be banned. (2025)
- AlgorithmWatch (algorithmic accountability nonprofit) strongly agrees and says: AI systems used by law enforcement often have technological or commercial barriers that prevent effective and meaningful scrutiny, transparency, and accountability. It is crucial that individuals affected by these systems’ decisions are aware of their use and have clear and effective routes to challenge it. The signatories call for a full prohibition of predictive and profiling AI systems in law enforcement and criminal justice in the Artificial Intelligence Act. Such systems amount to an unacceptable risk and therefore must be included as a ‘prohibited AI practice’ in Article 5 of the AIA. (2022)
- Birgit Sippel (German MEP, civil liberties advocate) strongly agrees and says: It is crucial to acknowledge that elements of structural injustice are intensified by AI systems and we must therefore ban the use of predictive systems in law enforcement and criminal justice once and for all. Any AI or automated systems that are deployed by law enforcement and criminal justice authorities to make behavioural predictions on individuals or groups to identify areas and people likely to commit a crime based on historical data, past behaviour or an affiliation to a particular group will inevitably perpetuate and amplify existing discrimination. This will particularly impact people belonging to certain ethnicities or communities due to bias in AI systems. (2023)