The Future of AI and Predictive Policing

February 28, 2017 by Chantelle Dubois

Are concepts from science fiction beginning to materialize in real life? As neural networks are increasingly used to capture and process data, we look at the benefits and dangers of future AI systems used in policing and security.

A 2016 report from Stanford titled "Artificial Intelligence and Life in 2030" identifies public safety and security as one of the domains in which AI will become much more prevalent in the next few decades.

Of course, aside from the technical and analytical questions about how cognitive computing technology can be used for security purposes, there's a wealth of legal and ethical questions to answer about the use of AI, machine learning, and big data analysis in preventing crime.

For fans of the Philip K. Dick short story "The Minority Report" (or the Tom Cruise movie based on it), the concept of predictive policing may already be familiar. In the universe presented in the story, technology is used (in conjunction with psychics, mind you) to predict a crime before it happens, enabling law enforcement to stop a would-be perpetrator.

Technology is already being used in predictive policing, although not quite to the degree that Tom Cruise’s character uses it.

Predictive Policing in Today's World

A thorough report compiled by RAND, a nonprofit public policy research organization, identifies four methods currently used to predict crimes:

  • forecasting places and times that crimes will occur
  • predicting which individuals will commit a crime
  • predicting the profile of a would-be perpetrator
  • predicting individuals or demographics most likely to become victims of crime

To make these predictions, a wealth of information is collected, such as historical crime data, 911 call records, economic data, and geographic information. Statistical models are then built from this data; a minimal sketch of the idea follows below.
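As an illustration only (this is not any vendor's actual algorithm), here is a minimal sketch of place-based forecasting using kernel density estimation over hypothetical incident coordinates: past incidents are smoothed into a risk surface, and each cell of a map grid receives a relative score.

```python
# A hedged sketch of place-based crime forecasting on made-up data.
# Commercial tools (PredPol, HunchLab) use proprietary models; this only
# illustrates the general statistical idea the RAND report describes.
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical historical incidents as (x, y) map coordinates.
incidents = np.array([
    [1.2, 3.4], [1.3, 3.6], [1.1, 3.5],   # a tight cluster
    [4.8, 0.9], [5.0, 1.1],               # a second cluster
    [2.5, 2.5],                           # an isolated incident
])

# Kernel density estimation smooths past incidents into a risk surface.
kde = gaussian_kde(incidents.T)

# Score every cell of a coarse grid covering the map.
xs, ys = np.meshgrid(np.linspace(0, 6, 7), np.linspace(0, 4, 5))
grid = np.vstack([xs.ravel(), ys.ravel()])
risk = kde(grid).reshape(xs.shape)

# Normalized scores are relative likelihoods, never certainties.
print(np.round(risk / risk.max(), 2))
```

Real deployments fold in time of day, crime type, and other covariates, but the output is the same in kind: a relative risk surface, not a verdict that a crime will occur.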

An example of PredPol's mapping system. Image courtesy of PredPol.

What is important to know is that, unlike in popular movies, predictive policing does not predict the occurrence of a crime with 100% certainty; it only estimates the likelihood that one will occur. Ultimately, it is a tool, and law enforcement must decide for themselves how to act on it most effectively.

There are a few different types of software currently being used by police departments across the USA. PredPol—used by the LAPD and the Atlanta Police Department—uses place, time, and type of crime to create hot spot maps that police can use to decide nightly patrol routes.
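PredPol's underlying model is proprietary, so as a stand-in, here is a hedged sketch of the final step of that workflow: given per-cell risk scores (random placeholders here), rank the grid cells and flag the top few as the night's hot spots.

```python
# A hedged sketch of turning a risk surface into a nightly patrol list.
# The risk scores are random placeholders, not real predictions.
import numpy as np

rng = np.random.default_rng(0)
risk = rng.random((5, 7))    # hypothetical per-cell scores for a 5x7 map grid

k = 3                        # number of hot spots to highlight tonight
top = np.argsort(risk, axis=None)[::-1][:k]   # indices of the k riskiest cells
for idx in top:
    row, col = np.unravel_index(idx, risk.shape)
    print(f"patrol cell ({row}, {col}), predicted risk {risk[row, col]:.2f}")
```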

Another example, HunchLab, focuses more on social and behavioral analysis to create predictions. It's currently used by the NYPD, Miami Police Department, and the Philadelphia Police Department.

An example of HunchLab's interface. Image courtesy of HunchLab.

Police departments have reported that predictive policing technology has assisted in reducing crime rates in certain areas and enabled them to use limited resources more effectively.

The Risks of Predictive Policing Going Too Far

One of the regularly touted benefits of predictive policing technology is that it can remove human bias from judgments about the conditions and locations of crimes by relying only on statistical modeling. However, this freedom from bias is only possible if the input data is itself free of bias.
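A toy example with entirely made-up numbers shows how this can happen: if one neighborhood has historically been patrolled more heavily, more of its incidents get recorded, and a model trained on recorded counts will rate it as riskier even when the true crime rates are identical.

```python
# A hedged, toy illustration of "bias in, bias out" with invented numbers.
true_rate = 10                      # actual incidents per month in BOTH areas
recording = {"A": 1.0, "B": 0.5}    # fraction of incidents ever recorded,
                                    # reflecting historical patrol intensity

recorded = {area: true_rate * f for area, f in recording.items()}
total = sum(recorded.values())
predicted_risk = {area: round(n / total, 2) for area, n in recorded.items()}

print(predicted_risk)   # {'A': 0.67, 'B': 0.33} despite identical true rates
```

Because the higher-scoring area then attracts even more patrols, the skew in the recorded data can compound over time.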

Additionally, there are questions about whether police deployed to an area flagged as at-risk will be more likely to treat otherwise innocent situations or individuals as criminal. Police are also using this technology to make decisions even though it is relatively new and the full impact of this type of decision-making is not yet known.

There are also many concerns over how predictive policing technology can rationalize or perpetuate the profiling of potential criminals, since police may feel more justified in deciding that a certain demographic is more likely to commit crimes because their computer modeling says so.

On the last point, the concern about predictive policing violating civil rights is so great that 17 organizations signed a joint statement (PDF) asserting that such technology “is profoundly flawed: it is systemically biased against communities of color and allows unconscionable abuses of police power.”

Many organizations came together to put forth the joint statement. Image courtesy of CivilRightsDocs.info (PDF).

Among other things, the statement argues that there is not enough transparency in how the predictions are made and that a computer prediction should not be treated as evidence allowing police to stop and investigate someone.

The Internet and social media have also opened new doors for predictive policing. While a handful of individuals manage to out themselves by posting about their crimes online, it has also been reported that police use social media networks to collect data and observe social connections in an effort to gather evidence on suspected criminals. It is now also easy to collect and analyze massive amounts of data from social media sources, something that was not possible just a few years ago.

The 2015 Data & Civil Rights Conference covered this topic in great detail in a publication called Social Media Surveillance and Law Enforcement. Yet there are still no real guidelines for how law enforcement should collect and analyze social media data in a responsible and ethical manner.

The only way to keep your online information safe from surveillance is to put as little unnecessary information online as possible, understand the privacy settings for everything you post, and know who is in your social media networks and who will see what you share. You should also be aware of each platform's policies for how your information and data are shared.

As neural networks gain popularity (and research funding), these questions and concerns will only become more urgent. This is an area where the tech sector raises difficult issues: How should we weigh civil liberties against public safety? How can data gathering be used ethically? Neural networks and cognitive computing are prompting such questions more and more often, and the trend is far more likely to grow than wane.

Feature image courtesy of HunchLab.