Crime prediction: algorithms and false notes

According to researchers, crimes can be predicted through complex calculations based on historical data. Yet the reliability – and usefulness – of these algorithms is regularly called into question.

Last June, researchers from the University of Chicago published an article in Nature Human Behaviour presenting an algorithm capable, according to them, of predicting future crimes with very high reliability, but also of highlighting systemic biases in the policing of crime. They tested the model’s accuracy in eight major US cities by dividing each metropolis into a multitude of zones of roughly two city blocks (about 300 m across).

The research team started with Chicago, analyzing historical data on violent crimes and burglaries, as well as their fluctuations over time, to predict future events. The result: according to their findings, the algorithm can predict crimes in Chicago roughly a week in advance with approximately 90% accuracy. They then applied the methodology, with roughly the same success rate, to seven other American cities: Atlanta, Austin, Detroit, Los Angeles, Philadelphia, Portland and San Francisco. The principle of the predictive algorithm nonetheless remains highly controversial.
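For readers curious about the mechanics, here is a minimal sketch in Python of the general tile-based setup – synthetic data and a deliberately simple smoothing rule, not the authors’ actual model: divide the city into ~300 m tiles, count weekly incidents per tile, and flag the tiles with the highest forecast risk.

```python
# Minimal sketch of a tile-based crime forecast, using synthetic data
# and a simple smoothing rule; NOT the University of Chicago model,
# only an illustration of the general setup it describes.
import numpy as np

rng = np.random.default_rng(0)

TILE_M = 300            # tile size in metres (about two city blocks)
CITY_M = 9_000          # hypothetical 9 km x 9 km study area
WEEKS = 104             # two years of weekly incident counts

n = CITY_M // TILE_M    # tiles per side, giving an n x n grid

# Synthetic history: most tiles quiet, a few persistently active.
base_rate = rng.gamma(shape=0.5, scale=1.0, size=(n, n))
history = rng.poisson(base_rate, size=(WEEKS, n, n))

# Forecast next week per tile with an exponentially weighted average,
# a stand-in for the paper's far more elaborate event-sequence model.
alpha = 0.3
forecast = history[0].astype(float)
for week in history[1:]:
    forecast = alpha * week + (1 - alpha) * forecast

# Flag the top 5% of tiles as predicted "at risk" zones for next week.
threshold = np.quantile(forecast, 0.95)
at_risk = np.argwhere(forecast >= threshold)
print(f"{len(at_risk)} of {n * n} tiles flagged as at-risk")
```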

The quest for the perfect algorithm since the 1980s

Long before Steven Spielberg’s film Minority Report, the authorities were already looking for ways to anticipate crimes and, if possible, prevent them. Since the 1980s, with the development of computing, algorithms have entered the fray. At the time, British criminologist Ken Pease observed that “most burglaries are repeated on a small number of victims”. Building on this principle, researchers have sought to use data from past offenses to predict the areas most at risk, also called hotspots.

“The benefit of AI would be to support [law enforcement] in predicting the possibility of homicides in cities.”

Admittedly, algorithms can identify areas where crime has been high in the past and where it may therefore remain high tomorrow. The problem is that the same data also reflect significant systemic biases on the part of the authorities. Models that seek to predict crime before it occurs have repeatedly been found to lack accuracy and to reinforce those biases. A prominent example is the algorithm of the company PredPol, used for a time by the Los Angeles Police Department (LAPD) and widely called into question. A study by the Stop LAPD Spying Coalition concluded: “When the police target an area, they generate more criminal reports and arrests there. The resulting crime data leads the algorithm to redirect police to the same area.”
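The dynamic the coalition describes can be illustrated with a toy simulation – entirely made-up numbers, not their actual analysis. Even when every zone has the same underlying crime rate, routing patrols toward past records concentrates future records in the same places:

```python
# Toy simulation of the feedback loop described above, with entirely
# hypothetical numbers: recorded crime depends on both true crime and
# patrol presence, and patrols are reallocated toward past records.
import numpy as np

rng = np.random.default_rng(1)

ZONES = 10
true_rate = np.full(ZONES, 5.0)   # identical underlying crime everywhere
patrols = np.full(ZONES, 1.0)     # start with uniform patrolling
records = np.zeros(ZONES)

for _ in range(50):
    # More patrols in a zone means more of its crimes get recorded.
    exposure = patrols / patrols.sum() * ZONES
    records += rng.poisson(true_rate * exposure)
    # "Predictive" reallocation: send patrols where the records are.
    patrols = 1e-3 + records / records.sum()

# Despite identical true rates, records pile up in whichever zones
# happened to draw more attention early on: the loop amplifies noise.
print("recorded incidents per zone:", records.astype(int))
```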

PredPol was designed to predict, using artificial intelligence and years of compiled data, where crimes would occur up to 12 hours in advance. The problem? The software merely reinforced the preconceptions of police forces. Worse, it ultimately proved to have no measurable impact on crime reduction.

Racial biases and… the impossibility of predicting the future

Even when they offer relatively reliable predictions, these algorithms tend above all to exacerbate racial disparities and to justify recurring police interventions in certain areas – and therefore to multiply arrests there. Besides, the “hot” neighborhoods are already known to the police; no algorithm is needed to reveal them.

The result is a vicious circle that, in the United States, disproportionately targets African Americans and Latinos. It came to light when a secret Chicago police list was revealed to the general public. From 2012 to 2016, the authorities maintained a list of “at risk” individuals, aiming to identify potential criminals on the basis of arrest records. Examination of the file established that it included no fewer than 56% of the city’s Black men aged 20 to 29 – of whom only 3.6% were actually involved in violent crime (murder or use of a firearm). In other words, more than 96% of the men flagged had no such involvement.

“We have known since Aristotle that, formally, and still today, uncertainty cannot be modelled.”

Xavier Raufer

criminologist

The algorithm developed by the University of Chicago scientists strives precisely to counter these systemic biases. To that end, the artificial intelligence targets not potential suspects but “at risk” places – locations where crimes are likely to occur. Professor Ishanu Chattopadhyay, one of the authors of the article published in Nature Human Behaviour, specifies: “Law enforcement resources are not infinite, so the benefit of AI would be to support them in predicting the possibility of homicides in cities.” The data has been made public so that other researchers can verify the team’s reasoning and conclusions. But even if these new algorithms correct certain errors, the question of their fundamental usefulness remains.

The abandonment of predictive algorithms?

In France, criminologist Xavier Raufer rejects the very principle of predictive algorithms. “These stories are pure and simple scams. We have known since Aristotle that, formally, and still today, uncertainty cannot be modelled. It’s basic physics,” he assures us. “If knowing the past could predict the future, everyone would win the lottery…” Predicting a crime at a specific address, on a given day and at a given time, is impossible. Predicting that an individual will reoffend because he has committed an offense in the past is just as uncertain, even if statistics can indicate trends.

Faced with mounting criticism, the Los Angeles police announced in April 2020 that it would stop using PredPol. Before Los Angeles, other jurisdictions (Palo Alto and Mountain View in California, Hagerstown in Maryland, Kent in England, etc.) had already terminated their contracts with PredPol for lack of convincing results. While the authorities do not reject artificial intelligence or algorithms outright, they are still waiting for these tools to prove genuinely useful day to day and to show that they help reduce crime. Some of the cities that abandoned PredPol have since opted for another strategy: tackling, upstream, the socio-economic difficulties of “at risk” neighborhoods – and thereby reducing the causes of crime.
