ABC ran a story about Santa Cruz, CA using software to help identify areas where crime may take place (see Santa Cruz Police Using Computer Program to Predict, Prevent Crime). This kind of prediction is very different from what futurists do because it is constrained in both time and place. Those constraints are why it works. They are also the reason that short-term predictions of stock market moves usually succeed, and the reason they sometimes fail spectacularly.
The time constraints mean these predictions are aimed at preventing crime immediately, not over weeks and months. These are daily deployment tools that help pinpoint where something is likely to happen in the next few hours. They look for patterns of burglaries or drug dealing and suggest likely next moves by the perpetrator. It would be interesting to see whether the predictions would be any less accurate if the eight years of data in the model were reduced to six months. My sense is they would not, because endemic crime presents as an even pattern over time. Reinforcing that a particular area is relatively crime-heavy does nothing to help prevent the particular crimes that may occur in the near term. The only time long-term data would be useful is in looking for anomalous crimes, such as those of serial killers or other sociopaths whose patterns may include periods of dormancy followed by recurrence.
Place constraints mean the models don’t need to look at wide ranges of data, just data about local areas. This may still be a lot of data, but the constraint makes it meaningful and actionable. Crime data for all of California offers interesting statistics, and it may highlight hotspot cities, but it doesn’t identify the neighborhoods where local law enforcement can take action. Placed in a local jurisdiction, the software becomes a useful tool for daily deployment.
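To make the time and place constraints concrete, here is a deliberately minimal sketch of the idea: count incidents per local grid cell within a recent time window and rank the cells. The grid cells, window length, and incident log are all hypothetical; the actual Santa Cruz software models repeat and near-repeat patterns in far more sophisticated ways than raw counts.

```python
from collections import Counter
from datetime import datetime, timedelta

def hotspot_ranking(incidents, now, window_days=180, top_n=3):
    """Rank local grid cells by incident count in a recent window.

    incidents: list of (timestamp, cell_id) tuples.
    The time constraint is the rolling window; the place
    constraint is that cells cover one jurisdiction, not a state.
    """
    cutoff = now - timedelta(days=window_days)
    counts = Counter(cell for ts, cell in incidents if ts >= cutoff)
    return [cell for cell, _ in counts.most_common(top_n)]

# Synthetic incident log over a few hypothetical grid cells.
now = datetime(2011, 7, 1)
log = [
    (datetime(2011, 6, 20), "cell_A"),
    (datetime(2011, 6, 25), "cell_A"),
    (datetime(2011, 6, 28), "cell_B"),
    (datetime(2009, 1, 1), "cell_C"),  # old incident, falls outside the window
]
print(hotspot_ranking(log, now))  # → ['cell_A', 'cell_B']
```

Note that shrinking `window_days` barely changes the ranking when crime is endemic and evenly distributed over time, which is exactly the intuition about reducing eight years of data to six months.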
Unfortunately, the constraints that make this software useful over the short term reinforce the fragility that has always plagued artificial intelligence: the edges of the problem. These tools typically take crime data as input, not broad social and economic data. They cannot, for instance, predict areas that may eventually become high-crime areas because of demographic or economic shifts. If these limitations are understood, the larger shifts are not a problem for short-term use, even over the long term, because crime occurs where it occurs, and if it shifts location, daily deployments will shift with it. What the software doesn’t do is help anticipate, over the long term, which areas may need new field stations, better local recruiting, and investment in preventative measures. Those insights require a more strategic approach than these tools offer.
Some work in the UK suggests (see the links below) that combining city design data with crime data may help discover flaws in design that lead to crime, but again, this is place-constrained. By looking at current crime against a backdrop of design, researchers can discover the kinds of environments that facilitate crime and thus identify similar areas that may be undiscovered crime hubs, or, hopefully, stop the construction of crime-facilitating buildings and neighborhoods by feeding that data into review and permit processes.
Pattern recognition is a powerful tool, but those using it need to understand its limitations, as well as its benefits, in order to employ it effectively.
You can find additional information on crime prevention and software in these New Scientist articles, which go all the way back to 1993: