Who is monitoring the monitors? How can we hope to penetrate the mystery that is consciousness? Pii Telakivi, researcher in philosophy of mind at the University of Helsinki and contributor to our recent report The Impact of AI, ponders the pitfalls of bias in the age of automated decision-making.
Do you think it is ethically acceptable if we do not know the basis of AI systems’ decisions, or should it always be fully transparent?
The vast majority of the population does not understand how computers work. Or how the internet works. The thing is, we do not even understand how the human mind works. We have no idea how consciousness is realised from the physical substrate. In this sense, AI is no exception.
I do not think it is a problem that laypeople do not understand the technical details of AI. But everybody should understand what purposes it is used for.
So yes, I think it can be problematic. Awareness of it should be raised in the same way as awareness of the risks of giving too much personal information to companies such as Facebook has been raised – although I am not sure the latter warning has been taken very seriously either.
In recent years, we have become very accustomed to, reliant on, even addicted to many technical devices such as smartphones. They have become part of ourselves. We would lose many normal everyday functions without them. The same thing has happened and is happening with AI-based technologies. Everybody should understand what role they play, at least in his or her own life.
What kind of public discussion about AI ethics and concrete actions would you wish to see in the future?
It is very important to spell out the real, actual risks. The sci-fi scenarios are not helping in any way.
The most important thing is perhaps to make it clear that AI ethics is always decided by humans.
As has already become clear, it is not easy to track who bears the responsibility. It is difficult already, and as AI systems develop, it will become even harder.
This should be discussed in public – for example, politicians could state their take on this. Of course, politicians should also listen to what researchers say about this matter, and not let their views be swayed by short-term political trophies.
AI systems can become – intentionally or unintentionally – biased, and therefore be used for discriminatory purposes, for example in healthcare, insurance, loan decisions and so on. That is why they should be monitored, and their algorithms and the data they use checked repeatedly.