16 Jan 2019 | Blog

Machines as social agents

The history of humankind is a history not only of humans but also of their tools. Tools have been co-agents of our history ever since the emergence of our species, starting with the early stone tools of Olduvai Gorge in Eastern Africa some two and a half million years ago. More recently, this co-agency has manifested itself in the revolutions generated by the printing press, the steam engine, electricity, computers, and the Internet.

Our current zeitgeist is that of the Fourth Industrial Revolution, in which we see the rise of cyber-physical systems and new kinds of non-biological intelligence. Here the agentic capacity of technology is becoming something to be taken quite literally as machines start making decisions once made only by biological agents.

This underscores the need for ethical, societal and political scrutiny of systems that show signs of non-biological intelligence and agency.

But are these considerations topical now because of the intelligence of these systems or rather the lack of it?

Human intelligence is creatively adaptive and driven by embodied meaning

The discourse on artificial intelligence can get pretty wild and jazzed up. Our fantasies, hopes and fears are tempered by the reminder of just how rudimentary our most intelligent machines still are compared with everyday human smarts. Despite their superhuman computational power, machines fail at simple human tasks where recognizing contextual meaning is key to knowing what the smart thing to do is in a given situation. No machine would survive the complexity of a night out with a bunch of Englishmen and their taste for irony and banter.

Very quickly our efforts to build non-biological general intelligence hit this “barrier of meaning”, as Melanie Mitchell, a Professor of Computer Science at Portland State University, recently wrote in her New York Times Opinion piece.

What is meaning then? There’s a whole field of science devoted to answering this question, namely semiotics, and nearly everything done in the human and social sciences touches on the topic in one way or another. Meaning drives human action in a very complex but also very concrete sense. The objects, subjects, situations and phenomena we face in our daily conduct are loaded with meaning for us, and a change of context often introduces a change in meaning. Just think about the different meanings – and behavioral consequences – of a mundane urinal in the context of a toilet versus on display at a museum of modern art. With the latter we of course refer to Marcel Duchamp’s ready-made classic Fountain from 1917.

Marcel Duchamp’s “Fountain” (1917)

One way to define meaning is this: meaning is the capacity of things we encounter to function as signs that represent or bring about something more than just themselves – generating thoughts, feelings and actions. These meanings, or interpretative effects, of the things we encounter are amazingly rich in their subtleties, mutable and context-bound. We not only recognize instantly the contextual meaning of Duchamp’s Fountain as a piece of art, but also interpret and reinterpret its meaning again and again vis-à-vis the tradition of art. We also form different individual interpretations of it, which then interact and influence each other through debates, critiques, books on art history and more casual human exchanges around the piece. This results in a flux of contesting and changing meanings affecting how we see and treat this basically very mundane object.

We usually interpret meanings and their alterations instantly and effortlessly. Our interpretations come in forms that can be consciously cognitive (e.g. being conscious of the meaning of the word ‘cat’) but are not necessarily, or even usually, so. Most commonly meaning functions in embodied form, as implicit dispositions guiding action, with no need for conscious awareness or representation. This results in what we sometimes call ‘common sense.’ If a person holding a knife approaches you in a dark alley/in a kitchen/at a hospital bedside/on stage in a theater play, you ‘just know’ how to react and how not to – and often you don’t even ‘know’ in any cognitive sense but skillfully skip straight to the right kind of action.

Our interactions with the world and other living creatures are guided by this kind of embodied and intuitive understanding of the meaning of the things and situations we face. Importantly, when the world around us (the environment of our action) changes, as it constantly does to some extent, we have the quite wondrous ability to learn and adjust our action in creative ways. This human creativity uses analogies and conceptual metaphors to export our understanding from one domain and apply it to something new and unforeseen.

Currently, and probably for the foreseeable future, machines are nowhere near human capabilities for this kind of learning, creativity and generalization.

Artificial intelligence and outdated philosophy

This embodied, intuitive common sense and creative being-in-the-world has been a favorite subject of philosophers in the phenomenological and pragmatist traditions. According to them, it is here, rather than in our explicit cognitive-logical skills, that we find the foundation of our intelligent behavior.

A vocal proponent of this kind of thinking – and a sharp skeptic of the general AI project – was the philosopher Hubert Dreyfus, who passed away in 2017. Working in the phenomenological tradition of Martin Heidegger and Maurice Merleau-Ponty, Dreyfus liked to sarcastically point out how people fussing about the intelligence of machines had actually gotten the wonders of human intelligence all wrong, treating it as a purely cognitive, abstract computational capacity when it should be understood as an embodied, emotional and dynamic relationship with the world around us. The technological types had adopted an outdated philosophy of mind. That philosophy had its climax in the 17th-century rationalism of René Descartes, with its dualistic mind-body split (the ethereal conscious mind being the sole domain of our intelligence, the profane body its enemy and distractor), and this Cartesian conceptual baggage is something the current multidisciplinary study of the mind is still trying to get rid of. This aspiration got its famous manifesto in the neuroscientist Antonio Damasio’s 1994 book Descartes’ Error: Emotion, Reason, and the Human Brain, where Damasio shows that there is no intelligence without emotion.

In a recent interview Damasio stated his view that “human intelligence can’t be transferred to machines” due to our inability to build emotions and feelings into them. Be that as it may, it is safe to say that the intelligence of machines is currently something very different from the intelligence of humans – and measured against human intelligence, in many respects machines are just daft.

And still, we are nowhere near understanding the wonders of the human mind and our intelligent behavior. So how could we even think about transferring anything like it into our technologies? Even if we do not try to replicate intelligence of the human kind, and focus instead on the strengths of machine intelligence, we should stay very conscious of the differences between the systems that drive the behavior of humans and machines respectively. Understanding these foundational differences should guide our decisions on what kind of autonomy and agency we assign to machines – where and when we should let them decide and guide their own operations.

Stupid machines, hidden agents

In her aforementioned article, Melanie Mitchell quotes the AI researcher Pedro Domingos’ incisive conclusion about where we stand now: “People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.”

Forget superintelligent machines taking over: it is stupid machines with too much power that we should be cautious about.

Today, most machine learning systems we interact with are quite simple in terms of the utility they serve: they recommend products via email, they suggest movies you might like on your favourite streaming service, and so on. However, some of these systems may shape your experience in unforeseen ways: a machine may efficiently optimize someone else’s utility, directly or indirectly affecting your life in undesirable ways and changing our society as a side effect.

Machine learning systems are built using data. Usually more data means better results, but what does better mean in this context? In machine learning an algorithm optimizes its results for a given task. For example, a human resources application with artificial intelligence could optimize for finding the best candidates for a given open position. For that purpose, the machine learning algorithm is fit to a mass of CV and job data from past recruitments, both successful and less so. The algorithm effectively becomes a representation of the past as seen through its training data; the only thing the algorithm “knows” of this world is the data it has been trained on. This trained representation can then be used to score future candidates against open jobs.
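To make the idea concrete, here is a minimal sketch in Python using scikit-learn. The feature names, data and model choice are entirely hypothetical – the point is only that fitting compresses past hiring decisions into model parameters, and scoring a new candidate asks how much they resemble the people who were hired before.

```python
# A minimal, hypothetical sketch of the HR scoring idea: the fitted
# model is nothing more than a representation of past hiring data.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical historical data: one row per past applicant,
# 'hired' records the outcome of the past recruitment process.
history = pd.DataFrame({
    "years_experience": [2, 7, 4, 10, 1, 6],
    "degree_level":     [1, 2, 1, 3, 1, 2],  # e.g. 1=BSc, 2=MSc, 3=PhD
    "hired":            [0, 1, 0, 1, 0, 1],
})

X = history[["years_experience", "degree_level"]]
y = history["hired"]

# Fitting compresses past hiring decisions into model parameters;
# the model "knows" nothing of the world beyond these rows.
model = LogisticRegression().fit(X, y)

# Scoring a new candidate: how closely do they resemble past hires?
candidate = pd.DataFrame({"years_experience": [5], "degree_level": [2]})
print(model.predict_proba(candidate)[0, 1])  # estimated "hire" score
```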

In many countries it is illegal to discriminate by gender, religion, ethnicity and so on. An AI tool like the one described above cannot take such legislation into account when scoring applicants for open positions – remember, it knows nothing beyond its training data. If in the past men were hired for managerial positions at a higher rate than women – and this is represented in the algorithm’s training data – the AI application will continue the trend through its scoring and predictions. Not because it “wants” to be evil, or to break the law, but simply because of the mathematics at play.
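A toy simulation makes this mechanism visible. The data below is synthetic and the setup deliberately crude: skill is distributed identically across genders, but the simulated historical hiring process favored men. A model fit to that history then scores an identically skilled man and woman differently – no malice required, just mathematics.

```python
# A synthetic illustration (not any real system) of how a historical
# hiring skew is reproduced by a model fit to that history.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
gender = rng.integers(0, 2, n)   # 0 = female, 1 = male
skill = rng.normal(size=n)       # identically distributed by design

# Biased past: equally skilled men were hired more often than women.
p_hire = 1 / (1 + np.exp(-(skill + 1.5 * gender - 1.0)))
hired = rng.random(n) < p_hire

model = LogisticRegression().fit(np.column_stack([skill, gender]), hired)

# The same skill level, two different scores.
print(model.predict_proba([[0.0, 0]])[0, 1])  # woman: lower "hire" score
print(model.predict_proba([[0.0, 1]])[0, 1])  # man: higher "hire" score
```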

Naturally, organizations building such applications make efforts to stay within regulatory bounds and thus actively fight these kinds of discrimination. However, this may be difficult, or even impossible, as Amazon’s retirement of its internal HR AI tool last year shows. Even with Amazon’s vast engineering resources and scientific know-how, rooting out discrimination in its own AI system proved an insurmountable task.

These kinds of examples raise the obvious political question: to whom is the optimization advantageous? To the highly talented female manager looking for a job who just does not make the algorithmic cut due to bias built into the system? Hardly.

Amazon, as a leading player in the field of artificial intelligence, has stringent ethical checks and balances in place, and this particular example was nipped in the bud before it could cause harm. But what about all the models with biases unknown even to their developers, let alone end users, hidden from plain sight, built and deployed by organizations with less talent and fewer resources? Is this the beginning of an era of (socially and morally) stupid hidden agents?

If the human capacity for socially responsible agency is based (a) on our ability to holistically grasp the multiplicity of meanings of the things, situations and actions we face and perform, and (b) on being socially contestable and accountable for our actions to our partners, families, social groups and societies, then we need to ask: what kind of autonomous agency should we – and should we not – assign to machines to which neither of these applies?

Human reality modified by machine learning

Whether or not machine learning is producing intelligence of the human variety, the use of machine learning models is becoming ever more pervasive in our society and everyday life. Each industrial revolution has profoundly changed humanity, taking us a step further from our natural roots. Current developments are no exception, but what perhaps distinguishes this revolution from previous ones is how the economic and societal landscape has changed:

  1. there is a new structure of global connectivity between people and electronic services
  2. connectivity is instant
  3. connectivity is affordable

This landscape provides fertile ground for online products and services that are no longer paid for with money but with personal data. If data is the new oil and AI is the new electricity, then at this point in time the latter is being fed with the former, and end-user experience is optimized by stupid machines in ways that maximise value to someone – not necessarily the end user or society.

The longer we live in this cycle of optimization, hooked to addictive online products and services, the more these services shape our understanding of reality. We take segment-of-one optimized views of the world for granted. We are more than happy to accept personalized web search results. We do not miss content we never knew existed. Dan McQuillan suggests that when machine learning makes decisions without giving reasons, it modifies our very idea of reason, changing what is knowable and what is understood as real. Living and growing up in this environment will have an unforeseen impact on who we are as humans.

Our work on AI ethics

There is a discrepancy between ambition and understanding when it comes to deploying AI in business. Over the past few years companies have been unable to avoid the magical promises made by AI marketeers, which has led many buyers to feel they need a tick in the “we are using AI at our organisation” box. For this reason we tend to kick-start data-science-oriented customer engagements with AI training, enabling a more meaningful discussion on the use of AI. The training lets the discussion be about creating added value with math, not magic: what the modelling should optimize, using what data, what the client’s ethics are, and how well they understand the intended and possible unintended consequences of deploying AI solutions.

This dialogue enables a fruitful analysis of what data is ethically sound to use for modelling. Does the data require obfuscation for reasons of privacy? Who can process and see the data? It also lets both parties understand that even when the most obviously discriminatory data is omitted, data leakage remains a real and tricky problem. For example, removing gender from the training data does not automatically ensure a gender-bias-free AI system. Modelling on consumer purchase data is tricky, as different genders have very different and distinctly identifiable shopping patterns. It is easy to imagine many such cases where removing discriminatory data from modelling becomes altogether difficult. What should we do then – stop modelling?
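The leakage problem can be sketched in a few lines. In the synthetic example below the gender column has been dropped entirely, yet a classifier can still recover gender from hypothetical purchase features that correlate with it – which means any downstream model built on those features can still discriminate by gender indirectly.

```python
# A synthetic sketch of data leakage: gender is removed, but proxy
# features still encode it. Feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 2000
gender = rng.integers(0, 2, n)

# Hypothetical purchase features that correlate with gender.
cosmetics_spend = rng.normal(loc=2.0 * (1 - gender), scale=1.0)
hardware_spend = rng.normal(loc=2.0 * gender, scale=1.0)
X_no_gender = np.column_stack([cosmetics_spend, hardware_spend])

# If gender is predictable from the remaining features, dropping the
# column did not remove the information, only hid it.
leak = cross_val_score(LogisticRegression(), X_no_gender, gender, cv=5)
print(leak.mean())  # accuracy well above 0.5: gender leaks via proxies
```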

For these purposes we at Solita and Palmu have set up an AI Ethics panel with people from backgrounds ranging from technical expertise to social sciences and design, which helps us identify and debate important and contentious AI-related issues and projects. We cannot, and do not want to, stop technological development, but we agree that the time to talk about the ethical issues around the use of AI is now. Tomorrow may be too late.

AI in a pluralistic society

As part of our AI ethics endeavors, this year at Slush we brought a soundproof, closed-off cube into the middle of the hectic venue, where people could step inside to think about and discuss these issues in small groups. We showed glimpses of possible futures where artificial intelligence has been tasked with making decisions on behalf of humans – decisions that from a human point of view have instantly recognizable ethical aspects. Would you let an AI delay a self-driving car if that made the overall flow of traffic faster for everyone else? Would you let an AI prioritize a millionaire investor for cancer treatment over an unemployed person? Would you let an AI make recruiting decisions based on estimates of candidates’ short- and long-term profitability? After laying out the scenarios, people had to choose a side: either they agreed with the AI’s decisions and were willing to give machines more power, or they disagreed, wanting to keep these decisions in human hands.

During the two days, our experts on machine learning, data science, design and human insight facilitated discussions with people from all over the world. A general concern of the people we met was that not enough is currently done to raise awareness of the complexities of AI-related ethical and societal issues. If treated only in a technological framework, human and cultural biases easily slip into our AI applications. While fallible, human reasoning is also amazingly holistic by nature: guided by intuition, emotions and values, it can recognize ethical relevance in a blink, while machines with all their processing power continuously fail to do so.

However, oversimplification is to be avoided here as well. We need to dodge moral relativism while recognizing the diversity and the political nature of our moral considerations. If the multiplicity of different cultural and human values is not recognized, our discourse starts resembling that of dictators and totalitarian regimes. As one participant at our Slush cube noted: “If you pose the question as there was one ethical solution – then that’s called dictatorship.”

The work towards ethical AI solutions and regulation should be seen not as progress towards a fixed solution but as a social process – one resembling the workings of pluralistic democracies, where contesting ideologies take part in the quest for shared value, common ground and compromise.

The authors are Antti Rannisto and Janni Turunen. Rannisto is a sociologist and ethnographer, Turunen is an old-school hacker and IT know-it-all working as Solita’s AI Lead.

The authors want to thank Mikki Mustonen, Anni Ojajärvi and the whole AI Ethics group at Solita for insightful comments and discussions around the topic!

Want to immerse yourself more in the topic? Check out the interviews with our recent guests: the techno-sociologist Zeynep Tufekci on the societal impacts of ML, here, and the philosopher of technology Alix Rubsaam on interrelated historical representations of humans and machines, here.

To take part in the conversation comment on Twitter: @antti_rannisto, @randommman