
Authority is increasingly expressed algorithmically

Antti Rannisto, Insight Lead, Solita

Published 04 May 2021

Reading time 6 min

“Who knows? Who decides? Who decides who decides?”

Artificial intelligence has become so mundane we hardly notice it – nor are we supposed to. When you take a picture with your phone, search or translate with Google, or use voice control on your devices, there’s an embedded AI system assisting you. AI algorithms control your social media: the news, the advertisements, and the selection of posts highlighted in your feed. Algorithms recommend whom to connect with, whom to date, what to watch next, how to sleep, eat and exercise. Your loan application, even your job application, might be processed by an AI system.

AI is deeply embedded in our everyday lives, guiding our engagement with the surrounding world. We should talk more about how authority is embedded in these systems and how they affect us.

Statistical models reinforce stereotypes

A recent wave of awareness was triggered by Google Translate’s gendered choices. Translating from a gender-neutral language into one with gendered pronouns requires deciding which pronoun to use. Finnish, for example, has only the gender-neutral pronoun “hän”, so an English translation must choose between “he” and “she”. The selection is based on statistics: if headaches are more often associated with women, the result is “she has a headache”. But of course men also have headaches – it’s just that the texts the system learns from associate headaches with women more often than with men.

In this way, a statistical majority turns into representational totality in the translation results: “He is a leader”, “she is a cleaner”, “he takes care of things”, “she takes care of the children”, and so on. The AI-powered translator is a social conservative.
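To make this majority-to-totality logic concrete, here is a minimal sketch in Python – the phrases and counts below are invented for illustration, and this is not how Google Translate is actually implemented:

```python
# Hypothetical counts of how often each pronoun co-occurs with a phrase
# in the training data (invented numbers, for illustration only).
PRONOUN_COUNTS = {
    "has a headache": {"she": 7_000, "he": 3_000},
    "is a leader": {"she": 2_000, "he": 8_000},
}

def pick_pronoun(phrase: str) -> str:
    """Return the pronoun most often seen with the phrase in the data."""
    counts = PRONOUN_COUNTS[phrase]
    # argmax: a 70/30 split in the data becomes a 100/0 split in the output
    return max(counts, key=counts.get)

for phrase in PRONOUN_COUNTS:
    print(pick_pronoun(phrase), phrase)
# Prints, every single time:
#   she has a headache
#   he is a leader
```

However narrow the majority in the data, picking the most frequent option erases the minority from the output entirely.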

Such interactions affect our intuitions. And the presence of such technologies in our lives is rapidly increasing.

Who gets to decide what is acceptable in AI development?

Recently the futurist Risto Linturi took two GPT-3-powered AIs called “Muskie” and “Saara” to the Finnish Parliament to meet its Committee for the Future. Think about it: a hearing at the Finnish Parliament, interviewing two AIs as expert guests on matters of the future.

What does it mean if these AI applications are treated and interviewed as “expert guests” in Parliament – or even experimented with for such use, as seems to be the case here? Interacted with, listened to, influenced by.

“Samantha” is another of Linturi’s GPT-3-based AI “assistants”. Samantha can discuss issues such as creativity and reflect on her own use as a teaching assistant. She is already used in leadership training – what if she were assisting teachers at school? She would tirelessly crank out information, provide reflections, and answer students’ questions.

Let’s imagine, for example, a student who wants to learn about programming. Would Samantha state explicitly – or imply simply by using gendered pronouns – that programmers are typically men? Or would she say that the first programmers were mostly women, and that although the ICT industry is currently male-dominated, programming is an equally good career choice for women?

Professor Teemu Roos’ recent chat with Samantha provides a much-needed reality check. Only a few lines in, Samantha’s answers veer down some very murky paths and eventually escalate into full-blown paranoia and violent populism.

It is inevitable that Samantha and similar AI applications will be biased, starting from the training data used to teach them. But biases are often very difficult to define and address, and drawing the line between acceptable and unacceptable ones is not always easy.

Think about it globally: which cultural foundations and values should we choose for these AI applications, which are often scaled globally? Social acceptability would look very different based on Hinduism, Islam, or Christianity. And there is great variety in our moral reasoning not only across societies but also within them; as Jonathan Haidt’s research points out, no single shared moral foundation is to be found.

So who gets to be the judge? Whose opinion was consulted – and whose mattered most – when Samantha, Muskie, and Saara were built? Who decided that the translator should follow the majority-to-totality logic in its gendered translations? A thoroughly ideological choice of great social relevance was likely treated as just another technical question of functions and statistics.

Exploring power structures instead of one common good

The questions we have just presented are part of AI ethics. The term ethics might intuitively raise expectations of objective knowledge about the ‘good’ – knowledge just waiting to be defined, embedded in code, and implemented in machines. But there are no simple answers to these questions. What there is, rather, is power: the power to define, execute, and influence.

Timeless and objective ethical truths are the approach of dictators. An ethical dictatorship, in which some authority is appointed to solve all these issues for us, is impossible for several reasons:

There is no single objective truth, and no single vantage point from which to form ethical judgments. Interests, politics, and power are better framings than a single ethical authority.

No single actor can hold enough information about everything that needs to be considered when making ethical judgments about impacts on living beings and their environments. A systemic approach is needed – as democratic systems, for example, aim to provide.

The world is in constant flux regarding interests, values, their alignment, and how ethical principles are interpreted in real contexts and cases. Ethical judgments are therefore inevitably social and processual.

Framed this way, it is easier to see why diversity and social deliberation around AI systems matter: only as an ongoing process can we work out the variety of ‘goods’ and ‘bads’ and everything in between.

“The method of democracy is to bring conflicts out into the open where their special claims can be seen and appraised, where they can be discussed and judged,” wrote the philosopher John Dewey almost a century ago, confronting the technocrats of his day.

Instead of individualistic principles and good intentions – like Google’s motto “Don’t be evil” – and instead of expecting objective ethical truths, what we need now is much more social reflection and democratic deliberation about the data- and AI-powered social milieu we are rapidly building around us.

AI will change our world and our thinking. Different actors have different interests in influencing the development of AI – possibly even to the point where money or abuses of political power can buy influence. Unless we are aware of this, we may find ourselves unknowingly giving away rights and granting others the opportunity to strongly shape our thoughts and actions.

For us at Solita, the social sustainability of AI is a growing concern, and thus we are doing research and development around these issues. One crucial, multidisciplinary context for this work is the project AIGA.

What do you think? How is AI currently changing our societies, interactions, and mental models? What should we prepare for in the future?

Please take part in this discussion with the people around you and, for example, on social media – social deliberation is key.

P.S. The title of this text is a quote from Frank Pasquale’s The Black Box Society, and the quote at the beginning is from Shoshana Zuboff’s book The Age of Surveillance Capitalism. You should read them both.

Also published on AIGA’s blog.

About the writers

Antti Rannisto is a sociologist and ethnographer in Solita’s Design & Strategy team. For the past 10+ years, he has worked with applied social science in service and product design, organisational change, brands, and communications. Much of Antti’s current work revolves around the social implications of new technologies, helping organisations balance and better position human and non-human agency in sociotechnical systems.

Manu Setälä is an experienced leader in the field of research and innovation. He has wide experience in areas such as information and communication technology, the platform economy, UX, and teaching. In his current role as Solita’s Head of Research, he helps connect the dots between academic insights and industry applications of new technologies.