
Police in Oklahoma are using algorithms to predict your behavior

Sabah Khalaf
8 minute read

If Oklahoma police pulled you over in recent years, it may not be because you’re a bad driver. Or maybe you’ve noticed more police cars in your neighborhood. That may not be because more crimes are occurring. Police in your community could be using big data tools to identify “hot spots” and predict crime before it occurs.

“Predictive policing,” also called “intelligence-led policing,” is one of the trendiest technologies in law enforcement today. Authorities say it’s revolutionizing public safety.

A growing chorus of critics, including some who once supported predictive policing, now worry that computer algorithms designed to predict future crimes are being built on decades of junk data and entrenched human biases.

Law enforcement agencies across the nation, including in Oklahoma, are contracting with private companies to analyze sprawling government databases stuffed not only with details about serious offenders but also with countless bits of information about everyday people.

Powerful analytics software then generates lists of “high-risk” people and neighborhoods. Law enforcement and the courts have excitedly embraced these innovations, but do they actually improve public safety?

From patrols to passwords

Police once gathered intelligence by working sources and talking to residents, business owners, and local church leaders. Today, community patrols are being replaced by computer-driven research, vast databases, and powerful surveillance devices.

Police in Tulsa launched citywide crime sweeps in 2017 to “curb violent crime” by targeting dozens of people with arrests and warrants. Despite the large dragnet, the operation seized only four pounds each of marijuana and methamphetamine.

No matter. Tulsa’s Police Chief told the media the sweeps were the first of many such planned operations and “part of his department’s focus on intelligence-led policing,” according to the Tulsa World.

Oklahoma City police created a similar initiative in 2013 promising to “take back” neighborhoods perceived as lost to crime. They issued citations for properties deemed blighted and boosted police patrols in a 4.4-square-mile area of the city. Authorities said “intelligence-led policing tactics” were fueling the efforts and revealing hot spots for increased attention.

The Tulsa County Sheriff’s Office purchased advanced software in 2016 from a Texas company whose products “kind of predict what’s going to happen based on known, historical data,” according to its CEO. The Claremore Police Department purchased similar software that year.

The towns of Bethany, Edmond, Moore, Norman, and Midwest City swap data with one another and participate in Oklahoma City’s predictive policing database. The veteran Police Chief of Midwest City calls his department’s move toward intelligence-led policing a high mark of his career.

Research studies of predictive policing, however, are beginning to reveal ugly truths about Big Data and artificial intelligence. The Justice Department’s own research shows that black people are twice as likely to be arrested as white people in the United States. Because machines learn what we teach them, coding errors and human biases get baked into prediction algorithms, which then perpetuate, rather than alleviate, implicit police and court biases.

When machines learn bias

Unconscious biases exist in us all. But biases become dangerous when accompanied by the government powers of arrest, imprisonment, and deadly force.

At one time, many people agreed that Big Data held the promise of improving public safety by reducing reliance on human judgment. Residents of disadvantaged communities would get a fairer shot, and the courts and police could spend less time determining where true threats existed.

Predictive policing typically takes one of two forms. Both rely on troves of data that are easier than ever to store and analyze.

  • Police agencies target neighborhoods with increased enforcement scrutiny after analyzing innumerable data points as varied as chatter on social media, education and income levels, proximity to liquor stores, 911 service calls, and arrest records.
  • Courts and prosecutors use dozens of variables to automatically compute risk scores for offenders and defendants. These scores then inform bond amounts set by judges, as well as sentencing and parole decisions. (A simplified sketch of this kind of scoring follows the list.)
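
Neither vendors nor courts publish the internals of these scoring systems, so the sketch below is purely illustrative: the feature names, weights, and scale are invented, not taken from any real product. It shows only the basic mechanics generally attributed to such tools, a weighted sum of personal and place-based data points converted into a single "risk" number.

```python
# Toy illustration of a risk-scoring tool of the kind described above.
# The feature names and weights are invented for illustration; real vendor
# models are proprietary and typically draw on dozens of variables.

def risk_score(defendant: dict) -> float:
    """Return a 0-10 'risk' score from a weighted sum of case attributes."""
    weights = {
        "prior_arrests": 0.8,        # each prior arrest raises the score
        "age_under_25": 1.5,         # youth treated as a risk factor
        "failed_to_appear": 2.0,     # missed court dates weigh heavily
        "neighborhood_calls": 0.02,  # 911 call volume near the home address
    }
    raw = sum(weight * defendant.get(name, 0) for name, weight in weights.items())
    return min(round(raw, 1), 10.0)

# Two hypothetical defendants facing the same charge, but with different records
# and different home addresses, end up with very different scores.
print(risk_score({"prior_arrests": 3, "age_under_25": 1, "neighborhood_calls": 40}))  # 4.7
print(risk_score({"prior_arrests": 0, "age_under_25": 0, "neighborhood_calls": 5}))   # 0.1
```

Note the last feature in the sketch: place-based data such as 911 call volume, the same kind of information used for hot-spot mapping, can quietly shape an individual's score even though it says nothing about that person's own conduct.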

Examples of predictive policing tools gone awry are beginning to accumulate:

Police across California networked for years to create a shared gang database for tracking high-risk people and areas. An audit later said the process for entering names was “haphazard at best.” An untold number of people were added who had no clear association with gangs.

Reporters in Florida spent months investigating a predictive policing initiative there. Residents of designated “hot spot” areas were written tickets for overgrown yards and missing mailbox numbers. A teen was visited by police nearly two dozen times after he stole a bike. A woman was fined $2,500 for having chickens in her backyard.

A system in Broward County, Florida, predicted that hundreds of people arrested in a single-year period would go on to commit violent crimes. Only a fraction did so. The system was also twice as likely to falsely label black defendants as future criminals as it was white defendants.
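
The disparity described in the Broward County analysis is a gap in false positive rates: among people who never went on to commit another crime, the share who were nonetheless labeled high-risk. The counts below are invented purely to show how that metric is computed, not to restate the study's figures.

```python
# The false positive rate behind findings like the Broward County analysis:
# of the people who did NOT reoffend, what fraction were still flagged as
# high-risk? All counts here are hypothetical.

def false_positive_rate(flagged: int, total_non_reoffenders: int) -> float:
    """Share of non-reoffenders who were wrongly labeled high-risk."""
    return flagged / total_non_reoffenders

# Hypothetical defendants who did NOT reoffend during the follow-up period:
fpr_group_a = false_positive_rate(flagged=180, total_non_reoffenders=400)
fpr_group_b = false_positive_rate(flagged=80, total_non_reoffenders=400)

print(f"Group A false positive rate: {fpr_group_a:.0%}")  # 45%
print(f"Group B false positive rate: {fpr_group_b:.0%}")  # 20%
```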

Cities that at one time were among the biggest boosters of predictive policing are now restricting it or abandoning it altogether. Santa Cruz, California, was one of the first cities in the nation 10 years ago to experiment with predictive policing. In June of this year, it became the first American city to ban its use.

Bad data in, bad data out

With time, scholars and researchers have looked more closely at predictive policing in action.

Quarreling neighbors, scorned lovers, and overzealous community watchers call 911 for myriad reasons. Data captured from every 911 incident isn’t evidence that a community needs more police attention, nor are more arrests evidence that crime is rising.

The increased law enforcement attention creates a feedback loop: more patrols generate more recorded incidents, which the “smart” police tools read as more crime, even as those tools fail to consider the socioeconomic conditions and inequality in the areas they flag.
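
A toy simulation (all numbers invented) makes the loop concrete. The two neighborhoods below experience exactly the same number of real incidents each period, but the one with more historical records draws more patrols, and the extra patrols record more incidents, so the recorded gap keeps widening even though nothing about the underlying behavior differs.

```python
# Minimal sketch of the feedback loop described above. Both neighborhoods have
# the SAME true incident rate; only their historical records differ.

recorded = {"Northside": 120, "Southside": 80}  # historical incident records
TRUE_INCIDENTS = 100          # actual incidents per period in BOTH neighborhoods
DETECTION_PER_PATROL = 0.02   # fraction of incidents recorded per patrol unit
TOTAL_PATROLS = 40

for period in range(1, 6):
    total_recorded = sum(recorded.values())
    for hood in recorded:
        # Patrols are allocated in proportion to past recorded incidents...
        patrols = TOTAL_PATROLS * recorded[hood] / total_recorded
        # ...and more patrols mean more of the same true incidents get recorded.
        recorded[hood] += TRUE_INCIDENTS * min(1.0, DETECTION_PER_PATROL * patrols)
    print(period, {hood: round(count) for hood, count in recorded.items()})
# The gap in recorded incidents grows every period, and the next round of
# "predictions" treats that growing gap as evidence of where crime is.
```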

Even innocent coding mistakes and data entry errors can lead to fateful and unintended outcomes that harm more than help communities. A local police agency in Oklahoma for years mislabeled simple domestic disturbance calls as much more serious “domestic violence incidents.” This led the jurisdiction to erroneously believe it had a severe domestic violence problem.

Former U.S. Attorney General Eric Holder even warned that predictive risk scores may be reinforcing, rather than relieving, biases in America’s courts. A group of 1,500 mathematicians this year signed an open letter urging their colleagues to cut ties with predictive policing programs.

Math can still solve problems

Although the tools are imperfect today, experts say better-calibrated prediction tools could still prove fairer than human judgment alone.

A data-driven tool launched by New Jersey in 2016 has cut by 50 percent the number of no- and low-risk people needlessly awaiting court hearings behind bars. A similar tool in Kentucky is showing signs of promise.

A machine-learning tool used in Pennsylvania to help courts make parole decisions likewise proved able in studies to distinguish between high- and low-risk individuals.

Despite wider adoption, it remains to be seen just how big a role predictive policing will play in the future.