Published on March 20th, 2018 | by Emergent Enterprise
Predictive Policing and the Bias of Artificial Intelligence
Predictive policing. At face value it sounds like the stuff of science fiction. But predictive policing is not only part of law enforcement strategy in places like Chicago, it is being practiced daily. It’s not as intriguing as using “precogs” to predict crime before it happens, but it is just as mind-boggling. Artificial intelligence is being consulted to help identify citizens with a high potential to become criminals. How? When a person is arrested in Chicago, they are assigned a risk score of 0 to 500. Different factors then add to that score: their arrest history, where they live, who they associate with and any gang affiliation. This number even appears on a police dashboard when an officer checks on or confronts an individual. Essentially, the data is fed to an algorithm that produces a number “predicting” the potential for problems. Chicago is even going as far as meeting with individuals who have a high “heat number” before they get into trouble, in the hopes of averting a criminal future for that person. Advocates of the system point to lower crime numbers in the areas where predictive policing is in place.
There are opponents of the system, of course, especially over concerns about bias and stereotyping being built into the algorithms. A much-cited report from propublica.org contends that predictive risk scoring is biased against Black defendants; there is an equally compelling report from The Marshall Project and FiveThirtyEight. The fact is that artificial intelligence programs and applications are created by humans, and those humans have biases.
All AI will have bias, no matter the organization’s goal. In fact, the bias may even be intentional, as in a chatbot built to recommend purchases. But if the bias leads an organization or company to make poor decisions or treat people unfairly, that is obviously a problem.
How can you fight against bias in your artificial intelligence?
- Check the validity of your data. You should know and trust your sources. AI is only as good as the data fed into it, and that data must then be used fairly: your training of the AI should not be based on statistical generalizations.
- Include a wide scope of data. The more accurate data the better; the AI will learn over time and should improve as it continually interacts. Insufficient data can cause the algorithm to make illogical, poor decisions and lead to larger problems like bias.
- Monitor and evaluate the intelligence constantly. AI has a tendency to feed on itself: an anomaly can become a persistent error if the algorithm believes it is consistently making the right choice when, in fact, it is not. Any AI effort must go forward knowing that mistakes will be made. It’s important to act on those mistakes quickly and update the data.
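The monitoring step above can be sketched in code. One concrete audit, and the disparity the ProPublica analysis focused on, is to compare false-positive rates across demographic groups: how often each group is flagged high-risk yet does not reoffend. The record format and sample data below are made-up assumptions for illustration.

```python
# Minimal fairness audit sketch: false-positive rate per group.
# A record is (group, predicted_high_risk, actually_reoffended).
from collections import defaultdict

def false_positive_rates(records):
    """Among people who did NOT reoffend, how often was each group flagged?"""
    flagged = defaultdict(int)       # flagged high-risk but did not reoffend
    non_reoffenders = defaultdict(int)
    for group, predicted, actual in records:
        if not actual:
            non_reoffenders[group] += 1
            if predicted:
                flagged[group] += 1
    return {g: flagged[g] / n for g, n in non_reoffenders.items() if n}

# Hypothetical audit data, not real outcomes.
audit = [
    ("A", True, False), ("A", False, False), ("A", False, False), ("A", True, True),
    ("B", True, False), ("B", True, False), ("B", False, False), ("B", False, True),
]
rates = false_positive_rates(audit)
print(rates)  # in this toy data, group B's innocents are flagged twice as often
```

Running an audit like this on a schedule, rather than once at deployment, is what keeps an anomaly from hardening into a persistent error.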
Artificial intelligence is becoming more prevalent in business, healthcare and other areas like public service. Chicago is learning from its experiences with predictive policing and seeking to improve as it continues to fight crime. A case could be made that if one person is prevented from committing a crime, the program is worth it. But that goal cannot come at the expense of others who might be unjustly targeted as “future criminals.” Throwing out the technology is not the answer, but using it ethically and responsibly has to be a priority.