Watching the Watchers: Using AI To Empower The People

Our recent paper in Nature Human Behavior (https://www.nature.com/articles/s41562-022-01372-0) presents a new learning framework to precisely forecast crime in the urban environment, and simultaneously demonstrates how such precise predictors may be used to reveal signatures of enforcement biases.

While AI enables pervasive surveillance, done right it can also be a way to audit the state, leading to a freer, fairer, and more equal society.

Socially disadvantaged communities have often raised legitimate concerns about being over-policed and under-protected. Now, the rise of AI algorithms driving a myriad of “predictive policing” attempts has threatened to exacerbate the problem. The use of automated algorithms in policing does not do away with inequity; biases can be introduced through how such machines are trained. Indeed, automating away human decision-making might bake in systemic biases under the guise of objectivity. The black-box nature of state-of-the-art AI algorithms, which do not consider the underlying social mechanics of crime, fosters little confidence that such schemes can ultimately thwart crime in any meaningful manner. To make matters worse, AI algorithms are demonstrably an effective force-multiplier for the state, manifesting an ever more intrusive control and surveillance apparatus to monitor all aspects of our lives.

While the emergence of powerful predictive tools raises concerns regarding the unprecedented power they place in the hands of overzealous states in the name of civilian protection, a new approach demonstrates how sophisticated algorithms can also empower us to audit enforcement biases and hold states accountable in ways previously inconceivable.

The issue of how enforcement interacts with, modulates, and reinforces crime has rarely been addressed in the context of precise event predictions. In a recent study in Nature Human Behavior (NHB), the authors take a new look at these issues. They demonstrate that predictive tools can enable surveillance of the state by tracing systemic biases in urban enforcement. Thus AI technology does not have to be the all-seeing, all-knowing enforcer of absolute state control; done right, it can empower the community, realizing a freer, fairer, and more equitable society.

Unique to this attempt is its open nature: the data is public, the algorithm is open source, and consequently the results may be replicated by anyone with access to a moderately powerful computing setup. This might be seen as a step towards the democratization of AI: there are no hidden inputs, no data annotations known only to the “authorities”, and no one sitting down and keying in parameters that would then be used to define and identify bad or “at risk” behaviors. An approach that eliminates the human inputs that might let in implicit biases is arguably fairer and, perhaps more importantly, has the perception of fairness.

Technically, this means there is no need to manually choose “features”. Circumventing feature selection is an intriguing new addition to the AI/ML toolbox, but more importantly, it makes a crucial difference in the context of fair enforcement. The Chicago Police Department’s recent foray into predictive policing provides a pertinent example.

The Chicago Police Department implemented a tool in 2012, seeking to forecast the likely victims or perpetrators of gun crime using a “formula” developed by academic researchers. This “formula” generated a list of people (the “Strategic Subject List”), relying on factors such as an individual’s age at their most recent arrest and their arrest history.

The list was not public. 

It turned out that the system had targeted people who were never charged with a serious crime, a fact revealed only after a lengthy legal battle to access the list.

Magic formulae used to generate secret lists do not produce good outcomes — who knew? 

The program was finally shut down in 2019, following a critical review by the RAND Corporation.

This is a teachable moment of sorts: trying to hand-pick the “good features” of a complex problem is a recipe for failure. It is unlikely that the authors of the “formula” had anything but good intentions. But in practice there were far too many subtleties to account for manually: the program was too dependent on arrest records, and the Inspector General’s office commented that it had “effectively punished individuals for crimes for which they had not been convicted.” The solution offered by the NHB article is to eliminate the manual pre-selection of features on which the predictions depend.

Having no secrets makes for better, and fairer, predictors.

To realize this goal, the authors reworked the prediction framework from scratch, formalizing a new approach for learning stochastic phenomena, one that does not need fixed features to be keyed in. Instead of simple neurons networked together, the “Granger Net” assembles units that are locally trained and, while more complex, are generative and can directly learn non-trivial aspects of stochastic processes. Ultimately this leads to highly precise predictions of individual crimes: precise both in space (within about two city blocks) and in time (within a day or two), made sufficiently far in advance (about a week) for appropriate actions or interventions to be taken.
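The details of the Granger Net are in the paper and its open-source code. As a rough illustration of the underlying idea, predicting events on a fine spatio-temporal grid directly from raw event histories without hand-picked features, here is a minimal, hypothetical sketch. The grid size, the synthetic data, and the use of a plain logistic model per tile are all assumptions made for the example; the actual Granger Net is considerably more sophisticated.

```python
# Toy sketch: per-tile event prediction from the raw histories of all tiles.
# Everything here (grid, data, model choice) is illustrative, not the paper's method.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_tiles, n_days = 25, 730   # e.g. a 5x5 grid of roughly two-block tiles, two years of data
lag, horizon = 14, 7        # use the past two weeks to predict about a week ahead

# Synthetic binary event series, one row per tile (stand-in for a public incident log).
events = (rng.random((n_tiles, n_days)) < 0.15).astype(int)

def make_dataset(events, tile, lag, horizon):
    """Inputs are the lagged histories of *all* tiles; the target is an event in
    `tile` exactly `horizon` days later. No hand-picked features beyond raw counts."""
    X, y = [], []
    for t in range(lag, events.shape[1] - horizon):
        X.append(events[:, t - lag:t].ravel())
        y.append(events[tile, t + horizon])
    return np.array(X), np.array(y)

# One locally trained predictive unit per tile (a Granger-style dependence structure).
models = []
for tile in range(n_tiles):
    X, y = make_dataset(events, tile, lag, horizon)
    models.append(LogisticRegression(max_iter=1000).fit(X, y))

# Forecast: probability of an event in each tile about a week from "today".
latest = events[:, -lag:].ravel().reshape(1, -1)
forecast = [m.predict_proba(latest)[0, 1] for m in models]
print(np.round(forecast, 2))
```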

But being more precise in forecasting events is not the main point here.

This new framework also doubles as a high-fidelity simulator: we can evaluate the effects of different potential enforcement policies, assess the impact of current policies in different neighborhoods, and investigate how perturbations in crime rates, such as an increase in property crimes, affect violence. This provides a framework for identifying biases and for inferring policy changes that can minimize their effects.
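Continuing the toy sketch above (again, an illustration of the general idea, not the authors' pipeline), the fitted per-tile models can be probed counterfactually: perturb the recent history of a hypothetical neighborhood and compare the resulting forecasts with the unperturbed baseline. The tile indices and the nature of the perturbation below are assumptions made for the example.

```python
# Reuses `models`, `events`, `latest`, and `lag` from the sketch above.
baseline = np.array([m.predict_proba(latest)[0, 1] for m in models])

perturbed_events = events.copy()
perturbed_events[0:5, -lag:] = 1          # saturate recent activity in tiles 0-4
perturbed_latest = perturbed_events[:, -lag:].ravel().reshape(1, -1)
response = np.array([m.predict_proba(perturbed_latest)[0, 1] for m in models])

# Tiles whose forecasts shift most are, in this toy setting, the places where the
# learned dynamics couple the perturbed neighborhood to the rest of the grid.
print(np.round(response - baseline, 2))
```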

Unfortunately, this new tool, while a step in the right direction, does not eliminate the possibility of bias in AI predictions. The data the algorithm trains on can itself be biased. Indeed, over-policing a neighborhood will ramp up the number of law enforcement contacts, which will spuriously inflate the apparent crime rate. The NHB article aimed to partially counter this effect by only considering events that are either not officer-initiated or are serious violent crimes. However, washing the data clean of any and all biases is difficult, and remains an open challenge.
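As a concrete, hypothetical illustration of that filtering step, assuming a public incident table with a primary_type column and a boolean officer_initiated flag (the column names and crime categories are assumptions for the example, not the actual dataset schema):

```python
import pandas as pd

# Hypothetical public incident log; column names are assumed for illustration.
incidents = pd.read_csv("incidents.csv")

serious_violent = {"HOMICIDE", "ROBBERY", "ASSAULT", "BATTERY"}
keep = (~incidents["officer_initiated"]) | incidents["primary_type"].isin(serious_violent)

# Train only on events that are not officer-initiated, or are serious violent crimes.
training_data = incidents[keep]
```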

The inescapable reality is that AI is here to stay, and its impact on our daily lives is only going to increase. We must ensure that this technological revolution does not lead us to dystopia. Democratizing this incredible power, so that it works for the people and not against them, is key. This approach is a small step in that direction.

The NHB study was partially supported by the Neubauer Collegium for Culture and Society, and by the United States Defense Advanced Research Projects Agency (DARPA) as part of an AI initiative. The opinions expressed in this article are those of the author alone, and do not necessarily represent official positions of the University of Chicago, the sponsors, or any other entity. The sponsors were not involved in designing the study, and no endorsements of any kind should be assumed.
