Philosophical Reflections on Predictive Policing and the Nature of Bias

  • Predictive policing algorithms, like LAPD’s PredPol, inherit and amplify societal biases, reinforcing racial and socioeconomic injustices through flawed historical data.
  • AI in law enforcement perpetuates discriminatory policing by over-surveilling marginalized communities, creating a self-fulfilling cycle of increased arrests and criminalization.
  • The myth of algorithmic neutrality is dangerous, as machines reflect the biases of their creators and lack the moral framework to contextualize data ethically.
  • Ethical AI development requires incorporating principles of care and social awareness to prevent reinforcing systemic discrimination.
  • If predictive policing is inherently biased and harmful, it should be abandoned in favor of technologies that promote justice and equity.

We created AI in our image. And the mirror is a bit ugly. As we stare into the void of our own reflection, a racist and sexist phantom looks back at us.

The ghost in the machine.

You've probably heard the phrase from the critically acclaimed anime, but it's also a concept coined by British philosopher Gilbert Ryle (1900-1976) as a critique of dualism. He rejected the Cartesian notion of his day that mind and body are distinct, a mistake he saw as treating the mind like a thing. Ryle instead asserted that the mind is a function of what we do.

In light of Ryle’s philosophy, the machine isn’t merely haunted by its own cold, objective judgement. Rather, its mind is a projection of our own, multiplied in magnitude by big data, and the definition of that mind is a consequence of what it does.

The emergence of artificial intelligence has folks both marveling at its capabilities and questioning what distinguishes its thought process from the mental clockwork of its human creators. Pseudo-intellectuals have their own conventions to discuss the conscious potential of AI, but I tend to find that the most pragmatic examples playing out in real life tell the fuller story. Like AI in the police force, for example. And you'd be surprised how long this tech has been paraded with sirens. The LAPD has been using PredPol since 2011 to predict crime and surveil suspects before they do anything wrong. Except tools like PredPol show us that they're not above exacerbating the social injustices that police already create on their own.

Predictive policing models reveal that artificial intelligence isn't just a tool, but something that's imbued with the specters of human history, values or lack thereof, and injustices. Molded in our bias, the machines have inherited our collective bigotry. And we as a society have yet to reckon with whether that's something we should allow in our streets.

Photo by Stewart Munro / Unsplash

I.

Predictive policing algorithms forecast criminal risk in two ways: location-based and person-based. Location-based indicators depend on areas where crimes have been committed in the past. When those areas flare up in the forecast, police respond by adding surveillance and patrols to the perimeter and staying vigilant for offenses.

Person-based indicators depend on the criminal history of individuals and their likelihood to reoffend. On paper this all seems like a liberation for law enforcement: turn to cold, hard, objective data and anticipate problems before they happen.
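
To make the location-based approach concrete, here's a minimal sketch of how a grid-based hotspot score could be computed from past incident reports. This is an illustrative toy, not PredPol's actual model; the grid size, decay half-life, and data shape are all assumptions.

```python
from collections import defaultdict
from datetime import datetime

# Toy location-based hotspot score: each grid cell accumulates
# exponentially decayed counts of past incident reports, and the
# highest-scoring cells get flagged for extra patrols.
CELL_SIZE = 0.005        # degrees of latitude/longitude per cell (assumed)
HALF_LIFE_DAYS = 30.0    # how quickly old incidents stop mattering (assumed)

def cell_of(lat, lon):
    return (round(lat / CELL_SIZE), round(lon / CELL_SIZE))

def hotspot_scores(incidents, now):
    """incidents: list of (lat, lon, datetime) taken from historical reports."""
    scores = defaultdict(float)
    for lat, lon, when in incidents:
        age_days = (now - when).total_seconds() / 86400.0
        scores[cell_of(lat, lon)] += 0.5 ** (age_days / HALF_LIFE_DAYS)
    return scores

def top_cells(incidents, now, k=10):
    scores = hotspot_scores(incidents, now)
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Whatever neighborhoods were reported on most in the past dominate the
# "forecast", regardless of how that historical data was produced.
history = [(34.0522, -118.2437, datetime(2025, 1, 5)),
           (34.0525, -118.2440, datetime(2025, 1, 20)),
           (34.1000, -118.3000, datetime(2024, 6, 1))]
print(top_cells(history, now=datetime(2025, 2, 1), k=2))
```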

Except there's nothing objective about the data they're being fed. Historical law enforcement data reveals over-policing of marginalized neighborhoods and of particular racial and gender demographics. Crime-forecasting algorithms consume this biased data and regurgitate a perpetuation of unethical policing practices: garbage in, ghosts out.

Algorithmic bias can manifest in a variety of ways. For example, a predictive policing platform might indicate that grand theft auto is more likely between 8:00 and 9:00 AM without distinguishing whether that's when the crime occurred or simply when the owner noticed their car was missing.
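
As a toy illustration of that reporting-time problem (the records below are invented), bucketing thefts by the hour they were reported produces a "morning peak" even when every theft actually happened overnight:

```python
from collections import Counter

# Hypothetical records: a car stolen overnight is often only reported
# when the owner leaves for work. A model bucketing thefts by *report*
# hour "learns" a morning peak that says nothing about when to patrol.
reports = [
    {"occurred_hour": 2,    "reported_hour": 8},
    {"occurred_hour": 3,    "reported_hour": 9},
    {"occurred_hour": None, "reported_hour": 8},   # owner has no idea when
]

naive_peak = Counter(r["reported_hour"] for r in reports)
print(naive_peak.most_common(1))   # [(8, 2)] -> flags 8-9 AM, not overnight
```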

We're also feeding these algorithms racially skewed data. ABC News discovered in 2018 that there are 800 jurisdictions where black folks are 5 times more likely to be arrested than their white counterparts, and 250 jurisdictions where they are 10 times more likely to be arrested. Forecasts built on that data lead to increased police presence in marginalized neighborhoods, creating a self-fulfilling prophecy: arrest reports in those areas rise through the hyperstition of racially biased data in the machine and police who are willing to confirm the bias. Hyperstition is a term coined by Nick Land, philosophical father of accelerationism, to describe self-fulfilling ideas that make themselves real through collective belief and action, particularly in cyberspace.
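
The feedback loop is easy to reproduce in a toy simulation. The sketch below assumes two neighborhoods with identical underlying offense rates and an arrest record skewed by past over-policing; the numbers are invented, but the dynamic is the self-fulfilling prophecy described above: patrols follow the record, and the record grows wherever the patrols go.

```python
import random

random.seed(0)

# Two neighborhoods with the SAME underlying offense rate, but A starts
# with a larger arrest record because it was historically over-policed.
# Each year the "forecast" sends most patrols wherever the record is
# biggest, and more patrols mean more of the (equal) offending gets
# observed and recorded there.
TRUE_OFFENSE_RATE = 0.3              # identical in both neighborhoods
recorded = {"A": 30, "B": 10}        # biased historical arrest counts

for year in range(10):
    hotspot = max(recorded, key=recorded.get)
    patrols = {hood: 80 if hood == hotspot else 20 for hood in recorded}
    for hood, n_patrols in patrols.items():
        observed = sum(1 for _ in range(n_patrols)
                       if random.random() < TRUE_OFFENSE_RATE)
        recorded[hood] += observed

print(recorded)   # A's record balloons, "confirming" the original bias,
                  # even though offending was identical in both places
```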

Biased crime forecasts also make the software dangerous for police, giving them a false sense of security in places where dangerous crime may be developing because the predictive policing AI told them they're in a low-risk area. The AI could falsely forecast that white neighborhoods are safer even though the "rates of drug use are essentially the same across Oakland neighborhoods," according to a simulation study drawing on the U.S. National Survey on Drug Use and Health. Yet predictive policing models directed police to dispatch and monitor more heavily around black and latino neighborhoods in Oakland.

Suspects, criminals, and police aren't the only ones adversely affected by this biased crime-seer tech. The majority of people in affected neighborhoods aren't guilty of any crimes, yet they bear the brunt of police presence even though they have no control over where crime occurs or over the biased machine sending scourges of police around their homes.

That presence ironically induces a feeling of unsafety, which matters under consequentialism, the ethical theory that the morality of an action is judged by its consequences. Innocent people who see a swarm of police cars in their neighborhood take it as an indicator of a dangerous disturbance in the area, making them feel unsafe. They have no idea that law enforcement has no report justifying the gathering in front of their homes beyond a biased predictive machine forecast.

Photo by Daniel Tran / Unsplash

II.

The myth of algorithmic neutrality and objectivity is dangerous for citizens and police alike. Algorithms come with the biases of their creators out of the box. Whether by design or through "self-learning" on human-controlled processes, machines internalize the bigotry we feed them without the moral compass to recognize which bits of information need further context or sensitivity. Debunking the myth of machine objectivism helps us see why "cold hard data" leaves a vacuum for bigotry to fill, and why care needs to be programmed into AI to give it much-needed context around the data.

Bianca Prietl, Professor for Gender Studies with a Focus on Digitalization at the University of Basel, invites us to ask ourselves questions around the principle of care when developing these technoscientific futures. What do we care about collectively as a society? What do we wish to care for? And which technologies can we develop and deploy in pursuit of those aims?

By training machines on social issues and power structures we can better prepare our algorithms to face datasets like decontextualized police records. The philosophy of ethics and the complexity of real social issues can't be reconciled by a cold, undiscerning computer, so empathy and perspective need to be dialed in for the protection of not just homeowners but police as well. And if predictive policing is incompatible with the principle of care, then maybe we need to throw it out to make space for tech that isn't.
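
Prietl's questions don't prescribe an implementation, but here is one minimal sketch of what a care-informed guardrail might look like in code: before a patrol plan ships, audit how predicted patrol hours are spread across neighborhoods relative to their populations, and block deployment when the skew passes a chosen threshold. The function names, threshold, and numbers here are hypothetical.

```python
# Illustrative names and threshold; any real care criterion would be
# chosen with the affected communities, not hard-coded by an engineer.
def patrol_disparity(patrol_hours, population):
    """Per-capita patrol exposure per neighborhood, normalized to the mean."""
    per_capita = {n: patrol_hours[n] / population[n] for n in patrol_hours}
    mean = sum(per_capita.values()) / len(per_capita)
    return {n: round(v / mean, 2) for n, v in per_capita.items()}

def audit_patrol_plan(patrol_hours, population, max_ratio=1.5):
    disparity = patrol_disparity(patrol_hours, population)
    flagged = {n: r for n, r in disparity.items() if r > max_ratio}
    if flagged:
        raise ValueError(f"Patrol plan exceeds exposure threshold: {flagged}")
    return disparity

# Neighborhood A gets far more patrol hours per resident, so the plan
# is blocked before any cars are dispatched.
try:
    print(audit_patrol_plan({"A": 120, "B": 40}, {"A": 10_000, "B": 20_000}))
except ValueError as err:
    print(err)
```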

Photo by Luke Jones / Unsplash

III.

By acknowledging the ghost in the machine rather than viewing it as a neutral, fact-based judge, we're better prepared to meet the challenges at the intersection of tech and society. The machine we've created in our image carries the flaws of our humanity: biases, bigotry, and all. Careful considerations have to be made in its development for us to justify deploying it in spaces that have real, lifelong consequences for everyday minorities.

And as AI begins to make more decisions in society, it's more imperative now than ever to anticipate how these problems can compound so we can remedy them before they cause irreversible social damage. We as humans should be stewards of futuristic tech and predictively police the ghost in the machine before our biases are projected onto big data.


by Derek Guzman

Independent journalist in tech, art, and philosophy
