AI can be harnessed for the common good. It offers immense advantages to society as a whole. But there’s a paradox in this promise. The use of AI solutions carries an inherent risk of infringing upon individual freedom. Can automated decision-making ever be relied upon to make ethical choices? Will the implementation of AI differ from one society to another?
We spoke to Mianna Meskus, sociologist of biomedicine, and contributor to our recent report The Impact of AI, about navigating the moral maze of AI.
How do you see AI models impacting individual freedom?
It is true that AI-based solutions can be used to govern people and their behaviour, as we have seen happening in China. There will be cultural differences in the ways AI solutions are implemented and in the ways AI-based products are designed. For example, products developed in the US will be different from those built in Europe, since they are based on different data sets, algorithms and manufacturing processes.
We will see different kinds of justifications and policy-level strategies for how AI applications are used. That might bring AI ethics into international political forums like the EU and UN, because we need some kind of shared guidelines on the ways AI models can be applied, for example, to surveillance. But that does not take away from the fact that individual freedom is seen very differently in different cultures and political environments.
How will AI affect the way individuals make decisions?
I personally do not like to be given too many personalised recommendations. I want to go and explore the world – not just go to the first restaurant that my app suggests to me. I do not think we can go back to non-personalised customer experiences, but I do think that, in time, people will start to question the options and recommendations they are given.
AI is something of a double-edged sword: on the one hand, it can be used to give you more options, so you think it is broadening your personal freedom. But then again, the calculations are based on certain data properties and assumptions, which means you might not end up venturing into new experiences and choices.
If we think that learning in the human condition happens through experience, it means we need trial and error. If the number of human errors decreases due to AI-based services, it will change the way we learn and how often we go down these trial-and-error paths. So yes, I am concerned about the future that AI will bring us: I do not want my children to always follow the predefined path and always choose the personalised, recommended options based on profiling. I want them to wander about and have their own adventures, and also to gather information from multiple sources and come to their own conclusions.
What are the effects of AI on cultures in different parts of the world?
I do think there is a danger that AI solutions will make citizens' preferences, values and actions more homogeneous. If we think about the companies that are most capable of making the best AI solutions at the moment, it looks like it is the big global companies with access to massive amounts of data. They are the leaders in designing these products and algorithms. Using vast amounts of data to train an AI solution and applying that solution internationally means that these companies take part in narrowing down the plurality of life in some ways. We need to be very aware of these questions and think about which parts of our lives we want to assign to automated decision-making.
Should there be regulations to guide AI’s decision-making process?
I think we cannot have technological development without some kind of regulation in the end. It is very important to keep regulators in the process and engaged in the development of AI technology. I do think that legislation makes the accountabilities of different stakeholders visible and more concrete. In the current situation, where we do not have shared regulation, things are actually quite wild, and that is not good for companies, citizens or politicians either.