Machines are already deciding for us – but based on which values?
Technology, strategy and design company Solita today published a report on the ethics of artificial intelligence (AI). With the report, Solita wants to spark more conversation around the topic: in designing AI systems, human values, human rights and democracy need to be at the forefront. In the future, machines will make decisions for us – but first, we need to decide which values those automated decisions should be based on.
For the report, Solita interviewed notable AI and ethics experts and influencers in Finland: Jani Turunen, AI Lead at Solita; Antti Rannisto, Design Ethnographer at Solita; Anni Ojajärvi, Design Ethnographer at Solita; Henrik Rydenfelt, Docent of Philosophy and Communications at University of Helsinki and Postdoctoral Researcher at University of Oulu; Pii Telakivi, Researcher of Philosophy at University of Helsinki; Sonal Makhija, Anthropologist of Law, Lecturer and Project Researcher at University of Turku; Mianna Meskus, Associate Professor of Sociology at University of Tampere; and Osmo Soininvaara, prominent influencer who has also been the Chairman of the Artificial Intelligence Division in the Transforming Working Life Committee at the Ministry of Economic Affairs and Employment of Finland.
“Throughout history, each introduction of a new technology has included an ethical dimension. With AI, however, the discussion on ethics is more necessary than ever. As decision-making becomes more automated and machine-driven, we need to be fully aware of the values and ethics behind each decision. Through AI, we can improve people’s lives, but first, we need to understand the consequences of automated decision-making. The time for conversation is now,” said Jani Turunen, AI Lead, Solita.
Three ethical dilemmas
Solita’s report points out three ethical dilemmas related to AI: individual freedom versus the common good, the so-called black box problem, and AI’s potential for discrimination.
One central question is: Should AI prioritize individual freedom over the common good – or vice versa? The answer may vary depending on the society. A Scandinavian view of what constitutes a good society and what the common good means can be quite different from popular definitions in the U.S. or China. In a recent survey, the largest Finnish newspaper Helsingin Sanomat and Solita asked whether AI algorithms should be allowed to choose a slower route for an individual driver in order to make overall traffic flow better. 67 per cent of the respondents said yes, while 23 per cent opposed decision-making that overrides individual freedom. The percentages might look very different in another country or culture.
AI systems are becoming more complex and, as a result, it is getting more difficult to understand the basis on which their decisions or analyses are made. This is the so-called black box problem. Can we trust artificial intelligence? Do we need better transparency into automated decision-making?
“If we do not understand the reasoning behind a decision, we cannot adequately evaluate how ethical it is. We constantly test our choices and actions through criticism and justification in our social groups, families, workplaces, and society in general – and for this we need transparency and explainability,” said Antti Rannisto from Solita.
Transparency in the use of AI decreases the risk of biased and discriminatory behaviour. Numerous examples have come up in recent years where a machine has made a biased decision, discriminating against certain groups of people. It all boils down to the data the AI system relies on, and what kind of data is used to teach the algorithm to make decisions.
Four key points for conversation
In the report, Solita proposes four action points to trigger conversation on the ethics of artificial intelligence:
- All relevant stakeholders need to be involved in the discussions. AI ethics is not only about technology; it is a conversation involving some of the most profound human questions, and it concerns us all.
- We should move forward with shared responsibility. Organisations and companies need to make sure they are utilising AI in an ethically sustainable way. Shared standards are needed.
- Efforts are needed to increase AI literacy. Since AI applications are already part of our daily lives, people should be educated about their effects.
- Transparency is needed in the ethics of AI decision-making. It is important to address and disclose what decisions are made by AI and how they are made.
Solita is a digital transformation company driven by data and human insight. We create culture, services and tech solutions that help us reinvent businesses and society for the better. Our services range from strategic consulting to service design, digital development, data, AI & analytics and managed cloud services. Established in 1996, Solita employs 750 digital business specialists in Finland, Sweden, Estonia and Germany.