The emerging field of interpretable artificial intelligence can solve some of the most pressing problems of today’s algorithms.

Artificial intelligence (AI) has progressively entered our daily lives, with applications ranging from personalizing the advertising we see on the Internet to determining whether our last credit card purchase is suspected of fraud. In the vast majority of cases, these decisions are delegated to so-called “black box algorithms”: those in which it is not possible to state explicitly the rule used to reach the decision. We know what the machine answers in each case, but never why. As AI expands its scope, this “blind” way of proceeding is creating increasingly serious problems.


The problem first came to public attention with the case of COMPAS, a black box used in the United States to predict whether prisoners would end up reoffending, predictions that were then used to help decide whether they would be granted parole. When the news organization ProPublica analyzed the system closely, it became apparent that the algorithm was somehow “racist”: African-American prisoners were assigned, regardless of their histories, worse predictions than white prisoners, on the assumption that they would reoffend more often. To understand this problem, known as “algorithmic bias,” we need to dig deeper into what AI and machine learning methods really are.

The first problem with these tools is their very name. However useful their contributions may be, it is essential to recognize that there is no intelligent subject in them, and no one who learns, since they are based on purely mechanical techniques. Machine learning is nothing more – and nothing less – than a process of identifying patterns in input data and applying them to new cases. The machine is given data in the form of examples of solved problems, and it then generalizes to new ones using the patterns it has identified.

In most of the methods used today, these patterns are never made explicit, which is why we speak of black boxes. These include, among other techniques, the so-called artificial neural networks. The technique known as deep learning adds a further source of confusion, since it is in essence identical to machine learning, only quantitatively greater: more elements and greater computing needs. There is nothing of depth or understanding about it. A neural network with few neurons is machine learning; with many, deep learning.
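The point can be made concrete with a short sketch (a generic scikit-learn example, not taken from any of the systems discussed here): the code that trains a small network and the code that trains a much larger one are essentially the same; only the number of neurons and layers changes.

```python
# Same technique, different size: a "shallow" and a "deep" neural network
# trained with identical code on toy data standing in for solved examples.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

small_net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
deep_net = MLPClassifier(hidden_layer_sizes=(128, 128, 128), max_iter=2000, random_state=0)

for name, net in [("few neurons", small_net), ("many neurons", deep_net)]:
    net.fit(X, y)                                         # identify patterns in the examples
    print(name, "- training accuracy:", net.score(X, y))  # apply them back to the data
```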

The problem in the COMPAS case arose because, in the training data (which took the form of prisoner histories, including information on whether or not each prisoner ended up reoffending), African Americans who reoffended were overrepresented. This led the algorithm to conclude that African Americans reoffend at a higher rate, and it incorporated that conclusion into its predictions. The bias against black people is introduced by the algorithm itself while working exactly as designed, hence the name “algorithmic bias.”
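To see the mechanism in code, here is a minimal sketch with synthetic data (invented for illustration; it has nothing to do with the real case files). A model shown a training sample in which reoffenders from one group are overrepresented ends up assigning that group higher risk even for identical histories:

```python
# Minimal sketch with synthetic data: the sampling is biased on purpose so that
# reoffenders from group 1 are overrepresented in the training set.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
prior_offenses = rng.integers(0, 10, n)      # the only legitimate signal
group = rng.integers(0, 2, n)                # 0 or 1; should be irrelevant
# True reoffense risk depends only on prior offenses...
reoffend = (rng.random(n) < 0.1 + 0.05 * prior_offenses).astype(int)
# ...but the training sample keeps every group-1 reoffender and drops many
# group-1 non-reoffenders, so group and reoffense get confused.
keep = (group == 0) | (reoffend == 1) | (rng.random(n) < 0.4)
X = np.column_stack([prior_offenses, group])[keep]
y = reoffend[keep]

model = LogisticRegression().fit(X, y)
same_history = np.array([[3, 0], [3, 1]])       # identical records, different group
print(model.predict_proba(same_history)[:, 1])  # group 1 receives a higher risk score
```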

Another recent example emerged on a social network that flagged photos of non-white women as inappropriate content far more frequently. The algorithm had apparently been trained with advertising photographs as examples of valid content and with pornography as examples of disallowed content. The pornography database was much more racially diverse than the advertising database, which led the algorithm to generalize more readily that a photo of a non-white woman corresponded to inappropriate content.

In black box algorithms it is only possible to detect bias a posteriori, after carrying out specific analyses. In the COMPAS case, for example, we would have to submit similar profiles of prisoners of different races and then compare the responses. But if we did not realize that race was a problem when building our database, why would it occur to us to run such tests? And what if the bias harms people not by race but by some more complex combination of traits, such as young, poor African Americans?
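A test of this kind can be sketched in a few lines. The code below is a generic illustration, not the audit ProPublica actually ran: it assumes we already have some trained classifier, a table of profiles and the name of the sensitive attribute we want to probe.

```python
# Generic a-posteriori audit: score each profile twice, changing only the
# sensitive attribute, and compare the predictions. All names are assumptions.
import pandas as pd

def paired_audit(model, profiles: pd.DataFrame, attribute: str, value_a, value_b) -> float:
    """Average change in predicted risk when only `attribute` is switched."""
    as_a = profiles.assign(**{attribute: value_a})
    as_b = profiles.assign(**{attribute: value_b})
    risk_a = model.predict_proba(as_a)[:, 1]
    risk_b = model.predict_proba(as_b)[:, 1]
    return float((risk_b - risk_a).mean())

# Hypothetical usage (column names invented):
# gap = paired_audit(black_box, profiles, "race", "white", "african_american")
# A gap far from zero signals bias with respect to that attribute.
```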

To this we must add the risk of overfitting, which also derives from generalizing from examples. Imagine that we train the COMPAS algorithm with a relatively small number of histories and that, by pure chance, the convicts whose photos have a slight reflection in the upper-left corner are exactly the repeat offenders. If, by mere coincidence in the data, that pattern existed, the machine would detect it and base its predictions on it. Judging by its hit rate, the algorithm would seem to anticipate recidivism with great precision. However, it would fail miserably when applied to new histories, since the patterns it has extracted make no sense; that is, they do not correspond to any known or verifiable phenomenon.
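Overfitting is easy to reproduce with synthetic data. In the sketch below (a generic scikit-learn example, not the recidivism data), a flexible model is trained on a handful of cases described by hundreds of meaningless features: it scores almost perfectly on the examples it has seen and no better than chance on new ones.

```python
# Minimal overfitting demo: few examples, many irrelevant features, random labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.random((30, 200))            # 30 "histories", 200 meaningless features
y_train = rng.integers(0, 2, 30)           # labels assigned at random
X_test = rng.random((1000, 200))
y_test = rng.integers(0, 2, 1000)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("accuracy on the training histories:", model.score(X_train, y_train))  # close to 1.0
print("accuracy on new histories:", model.score(X_test, y_test))             # close to 0.5
```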

Abandoning a decision as important as granting parole to an automatic system can lead to injustices that, to make matters worse, remain hidden in the opaque bowels of the black box. These injustices, we must not forget, are the responsibility of the human beings who rely on such a machine to make such decisions.

A relevant decision should never simply be delegated to a machine. Rather, we should see AI only as an aid. Valuable help, but only that: an additional piece of information that can support a decision, or that increases the speed with which we can make it.

If AI were just another technique to help humans make decisions, the fact that machine learning does not learn and that black boxes do not understand should not be a problem. However, there is an additional difficulty that often goes unnoticed: it is not only that machines do not understand, but that they do not let us understand either.

At this point it is key to distinguish between understanding the algorithm and understanding the decision it generates. We can have full access to the algorithm’s code and to the steps it follows to solve a problem. Yet from its inner complexity a total darkness soon emerges. Although its mechanisms are simple, the result is “complex” in the sense in which the complexity sciences use the word: a complex system is one in which a few simple rules give rise to phenomena that cannot be deduced from those same rules. We know how the algorithm works, but we are not able to predict or explain its results.

All this could lead us to discouragement: to assume that either we renounce our need to understand and agree to leave our destiny in the hands of the black boxes, or we become neo-Luddites closed to technological progress and lose the opportunities that AI – despite the misguided name – offers us. The good news is that this is not the case. There is a third way: to ask the AI for explanations. There are technically viable alternatives to black box algorithms, and they will be developed at the rate that society demands them. In fact, there are two main ways to “open” the black box.

The first is the use of “surrogate models”: simplified versions of the black box that have the potential to be understood by people. However, the surrogate model is distinct from the black box and can therefore lead to different decisions, so understanding the surrogate’s decisions does not mean having understood those of the black box. In addition, the techniques for building surrogate models are themselves tremendously complex.
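As an illustration of the idea (a generic scikit-learn sketch, not one of the systems mentioned in this article), a shallow decision tree can be fitted to the predictions of a black box and then read directly; how faithfully it imitates the black box is precisely the caveat raised above.

```python
# Surrogate-model sketch: a shallow decision tree is trained to imitate the
# predictions of a black box (here a random forest on a standard dataset).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

black_box = RandomForestClassifier(random_state=0).fit(X, y)
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))          # learn to imitate the black box

print("agreement with the black box:", surrogate.score(X, black_box.predict(X)))
print(export_text(surrogate, feature_names=list(X.columns)))  # readable rules
```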

So why not use a simple model that we can understand directly? This is precisely what interpretable AI aims to do. The alternatives with the best potential for interpretation include score-based models, in which we go through a series of criteria and assign points according to the answers, as in a questionnaire, adding them up at the end. We also have logistic regressions, very useful for binary problems (those whose answer is “yes” or “no”), in which a weighted sum of the variables is transformed by a function that squeezes the result into the range between 0 and 1, so that it can represent the probability of an event. Decision trees are also extremely useful: several chained questions lead us to the final result.
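Two of these families can be shown in a few lines of code (again a generic scikit-learn sketch on a standard dataset, not the recidivism data): a logistic regression whose individual weights can be read one by one, and a shallow decision tree printed as a chain of questions.

```python
# Interpretable models: readable weights (logistic regression) and readable
# chained questions (a shallow decision tree), fitted directly to the data.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# Logistic regression: a weighted sum of the variables squashed into [0, 1].
logit = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X, y)
for name, weight in zip(X.columns, logit[-1].coef_[0]):
    print(f"{name}: {weight:+.2f}")             # each weight can be inspected on its own

# Decision tree: a few chained yes/no questions leading to the prediction.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))
```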

Often, developing these simple models requires laborious work by an interdisciplinary team: not only is a machine trained (which in the usual algorithms is done mechanically), but the team interacts repeatedly with the model until a solution is found that is not only statistically acceptable but also makes sense to the expert.

In some cases, developing interpretable models is technically difficult, if not impossible, as with image or voice processing. However, these are not the AI applications with the greatest impact on our lives. With structured data, such as criminal or medical records, it is often possible to develop interpretable models that work almost as well as black boxes. Moreover, it should be borne in mind that, when black boxes seem to make almost perfect predictions, the ghosts of algorithmic bias and overfitting often hide behind the results.

For example, in 2019 Cynthia Rudin of Duke University developed an alternative to the COMPAS black box. The system consists of a simple scoring scheme over transparent variables such as age, total number of crimes and, especially, violent crimes. The model has predictive power very similar to that of the black box, but it is completely transparent and free of algorithmic bias.
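In spirit, such a model looks like the sketch below. The criteria echo the variables mentioned above, but the point values and cut-offs are invented here for illustration; they are not the ones published by Rudin and her colleagues.

```python
# Score-based model with invented, purely illustrative point values.
def recidivism_score(age: int, prior_crimes: int, violent_crimes: int) -> int:
    """Add up a few transparent points; a higher total means higher estimated risk."""
    score = 0
    if age < 25:
        score += 2        # hypothetical points for young age
    if prior_crimes >= 3:
        score += 2        # hypothetical points for the total number of crimes
    if violent_crimes >= 1:
        score += 3        # hypothetical points for violent crimes
    return score

# Every point can be read, questioned and, if necessary, corrected by a person.
print(recidivism_score(age=22, prior_crimes=4, violent_crimes=0))  # -> 4
```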

Interpretable models offer us something we must begin to demand of AI: that only a model we can trust is valid; and that we can only trust something we understand and that, in addition, makes sense when set against previous experience and common sense. We have the right — and it is our responsibility — to ask for explanations.

This new stage of AI requires a paradigm shift: from the delegation of decisions to decision-support tools; from black boxes to transparent models; from the automatically generated model to the work of a multidisciplinary team. Only from this perspective can we overcome the confusion we face and reorient the advances of this field so that they serve the human being in freedom and responsibility.
Topic: Sara Lumbreras. Translation: AMIT MM. Text: investigacionyciencia.
