The tech world has awoken to the issue of black boxes. As the development of deep neural networks moves forward rapidly, the evolution of AI’s explainability and transparency must keep up. In our latest expert Q&A, Jani Turunen, Solita’s AI Lead and contributor to our recent report The Impact of AI, takes a peek inside the black box problem. Is it as scary as it sounds?
Why and how does the black box problem occur?
The black box problem is most prevalent in the use of artificial neural networks, and less so with simpler learning algorithms.
Artificial neural networks can be imagined as a bunch of nerve cells communicating with each other, each processing incoming signals and passing them on. The more artificial cells, or units, a network has, the more computing power it has.
If we put a bunch of these units inside a box, feed it input and start teaching the box with certain expectations about the output, we do not really know what happens inside the box or what kind of representations the units learn from the data.
Although it is possible to extract how the units communicate with each other inside the box, that information means very little to us.
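The point above can be made concrete with a minimal sketch, assuming a toy two-layer network with made-up random weights (not any real model): we can read out every number inside the box, yet the raw weights say little about what each unit has learned to represent.

```python
import math
import random

random.seed(0)

# A tiny "box": layers of units with randomly initialised weights.
def make_layer(n_in, n_out):
    return [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]

def forward(layer, inputs):
    # Each unit sums its weighted incoming signals, applies a
    # non-linearity, and passes the result on to the next layer.
    return [math.tanh(sum(w * x for w, x in zip(weights, inputs)))
            for weights in layer]

hidden = make_layer(3, 4)   # 4 hidden units, each seeing 3 inputs
output = make_layer(4, 1)   # 1 output unit

signal = forward(hidden, [0.5, -0.2, 0.9])
result = forward(output, signal)

# We can inspect every weight inside the box...
print(hidden[0])
# ...but the raw numbers tell us almost nothing about what the
# network "knows" or why it produced this particular output.
print(result)
```

Even in this tiny example, interpreting the weights by eye is hopeless; a production network has millions of them.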
Are there some concrete ways to tackle the black box problem?
Yes, luckily in the last year or two we have seen technological developments that allow us to tap into black boxes, for example SHAP (SHapley Additive exPlanations). Techniques like these make it possible for us to understand which parts of the input data matter more in the decision-making process than others. This does not mean the entire black box problem has gone away, but I certainly welcome these sorts of developments.
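The idea behind SHAP can be sketched without the library itself: Shapley values attribute a prediction to each input feature by averaging that feature's marginal contribution over all subsets of the other features. The model and baseline below are hypothetical toy choices for illustration; real SHAP implementations approximate this computation, since exact enumeration is exponential in the number of features.

```python
from itertools import combinations
from math import factorial

# Hypothetical toy model: a linear function of three features.
def model(x):
    return 3.0 * x[0] + 1.0 * x[1] + 0.0 * x[2]

def shapley_values(model, x, baseline):
    """Exact Shapley values by enumerating every feature subset.
    'Missing' features are replaced by a baseline value -- one of
    several conventions used in SHAP-style explanations."""
    n = len(x)
    values = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for size in range(n):
            for subset in combinations(others, size):
                # Classic Shapley weight: |S|! (n-|S|-1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if (j in subset or j == i) else baseline[j]
                          for j in range(n)]
                without_i = [x[j] if j in subset else baseline[j]
                             for j in range(n)]
                phi += weight * (model(with_i) - model(without_i))
        values.append(phi)
    return values

phi = shapley_values(model, x=[1.0, 2.0, 5.0], baseline=[0.0, 0.0, 0.0])
# Per-feature contributions; by construction they sum to
# model(x) - model(baseline), so the explanation is "additive".
print(phi)
```

For this linear toy model the attributions simply recover coefficient times input, which is exactly the behaviour one would hope an explanation method reproduces on an interpretable model.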
This has been a big topic in international AI forums and certification processes. Companies and organisations in many fields of business are currently eager to build deep learning solutions and put them into production. But in many cases there is significant friction, as it is imperative to have AI transparency and explainability in place for solutions to gain, for example, regulatory acceptance.
For us all to learn to accept AI as an integral part of our daily lives, we need to invest in AI literacy, transparency and explainability, using both voluntary and regulatory means.
Is the black box problem something that AI developers, companies and people should be worried about?
Certainly. If we talk about automated systems, like an automated social security system, I think it is important for people to understand why a given decision was made and whether it was made by a human or a machine. People should be empowered to say, “I want that decision to be made by a human”. I also think that although AI as a technology is not new, as a society we are only taking our first steps in using AI technologies at large. Letting AI solutions act on their own, making automated decisions on important human-related matters, would not yet put everyone at ease.
Humans have an important role to play in interpreting AI systems’ decisions, but as President Kersti Kaljulaid of Estonia recently said, we cannot expect engineers to perpetually explain machines’ decisions; machines need to be able to explain themselves. This is an interesting statement worthy of serious consideration.