18 Jan 2019

3 Reasons Why We Need To Debate AI Rules Right Now

As AI systems become more and more prevalent, I’m reminded of an exchange in Ernest Hemingway’s The Sun Also Rises: “How did you go bankrupt?” “Two ways. Gradually and then suddenly.” For me, that sums up why we need to raise awareness of the use of AI right now, and why I’m convinced businesses and societies have to actively engage in a debate on the rules and potential regulations. In this blog post, I want to discuss three types of new challenges AI brings us.

Last month we challenged startup leaders, investors and other technology decision makers to debate potential futures with us at Slush, one of the largest tech conferences in Europe. We placed a soundproof, closed-off cube in the middle of the hectic venue, where people could step inside to think and discuss these issues in small groups.

Although our initiative was extremely well received and started a lot of promising conversations, there were also critics who argued that “AI is just a tool” and not a topic that merits exploration beyond a narrow engineering mindset. While I agree that different AIs are different tools, we have to remember the idea attributed to professor Marshall McLuhan: “we shape our tools, and thereafter our tools shape us” (the quote can’t be directly attributed to McLuhan; it was actually written by his friend Father John Culkin, a professor, in an article about McLuhan).

To take one analogy: the ethics of warfare were forever altered by the invention of nuclear weapons, but the debate couldn’t be left solely in the hands of the generals. The hydrogen bomb is a tool, but also a source of major ethical considerations.

Although AI systems are still simple and stupid (and partly for just that reason), the introduction of more and more machine intelligence into the decision-making processes that pervade our everyday lives means we are already well on our way towards a new business environment rife with unsolved ethical dilemmas.

It’s crucial to also frame the impact of AI systems from an ethical angle.

Right now, the use of machine learning systems to optimize processes can lead to significant marginal improvements, and if we look at this simply as a business problem, we might choose to ignore that the use of the same systems may have discriminatory consequences. Through the work of professor Ann Tenbrunsel at the University of Notre Dame, we already know that the same person reaches different conclusions depending on whether they are asked to look at a problem as a business challenge or as an ethical dilemma.

We still lack a universal framework for classifying the possible ethical implications, but through the discussions at Slush and afterwards, I’ve started to consider at least these three different frames.

1. The Ability to Know

What happens when we are able to gain an understanding we didn’t have before? Does the new knowledge force us to act, since inaction is also a choice? In the coming years, AIs will increase our ability to measure and quantify things – and we need to choose what we do with that knowledge.

At Slush, we used the example of a self-driving car routing an individual onto a slower route to maximize the overall flow of traffic. Setting aside the question of collective vs. individual benefit, the question I find most interesting in this dilemma is what we should do with the new knowledge created through AI systems. We are now able “to know” much more than before: we have long understood that congestion slows commutes for everyone, but now we can understand (and potentially control, through navigation systems) how an individual driver affects the situation. Should we apply this knowledge?
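To make the trade-off concrete, here is a minimal sketch – a textbook Pigou-style congestion example with invented numbers, not a model of any real navigation system – showing how routing some drivers onto a slower road can lower average travel time for everyone:

```python
# Toy congestion example (invented numbers): a fast road whose travel time
# grows with usage, and a side road that is slow but never congested.

def average_travel_time(share_on_fast_road):
    x = share_on_fast_road
    fast_road_time = 1 + x        # congestion: more users, slower trip (hours)
    side_road_time = 2            # constant, uncongested (hours)
    return x * fast_road_time + (1 - x) * side_road_time

# Selfish equilibrium: drivers take the fast road as long as it beats the
# side road, so usage settles at 100% and everyone spends 2 hours.
print(f"everyone selfish: average = {average_travel_time(1.0):.2f} h")

# Socially optimal split, found by brute force over possible splits:
# routing half the drivers onto the slower side road cuts the average.
best = min((x / 100 for x in range(101)), key=average_travel_time)
print(f"optimal split:    average = {average_travel_time(best):.2f} h "
      f"(with {best:.0%} on the fast road)")
```

Note that every driver sent to the side road could still shave time by defecting to the fast road – which is exactly why the optimal split only holds if a routing system enforces it, and why the dilemma is ethical rather than merely technical.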

Of course, from the business angle this generates a lot of interesting opportunities.

We are getting ever better at understanding which customers are likely to be profitable in the future and how to nudge them to purchase even more.

But what happens, for example, when insurance companies understand their clients so deeply that your exercise habits or a late night out affect your premiums? Or when a workplace analytics solution can calculate the profitability of individual employees?

China’s dystopian Social Credit System is of course one ultimate result of the increased “ability to know”. Creating a form of mass surveillance that manages rewards and punishments for citizens based on their economic and personal behaviour is now possible, and the system is planned to be fully implemented in 2020. What do we want to do with our increased “ability to know”, and how do we create rules and regulations that protect our individual freedoms in the future?

2. Recursive Problems

The typical case of algorithms producing discriminatory results is well known: it has been observed, for example, when AI systems are used in recruiting or in the criminal justice system. When the learning material consists of past decisions made by biased humans, the same negative attitudes towards female applicants or African-American offenders are carried over into the systems that make automated decisions, even if the data contains no direct references to gender or ethnicity.
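To see how this can happen without any explicit gender or ethnicity data, here is a minimal, purely illustrative sketch (synthetic data and scikit-learn; every number is invented): a hiring model trained on biased historical decisions, with the protected attribute removed, still learns to discriminate through a correlated proxy feature.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (e.g. gender), never shown to the model.
protected = rng.integers(0, 2, n)

# A seemingly neutral feature that happens to correlate with the
# protected attribute (a proxy).
proxy = protected + rng.normal(0, 0.5, n)

# Genuinely job-relevant skill, independent of the protected attribute.
skill = rng.normal(0, 1, n)

# Historical hiring decisions by biased humans: skill matters, but
# members of group 1 were systematically less likely to be hired.
hired = (skill - 1.5 * protected + rng.normal(0, 0.5, n)) > 0

# Train only on the "neutral" features - the protected attribute is dropped.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

# The bias survives: at identical skill levels, predicted hiring
# probability still differs between groups, via the proxy feature.
test_skill = np.zeros(100)
for group in (0, 1):
    test_proxy = group + rng.normal(0, 0.5, 100)
    p = model.predict_proba(np.column_stack([test_skill, test_proxy]))[:, 1]
    print(f"group {group}: mean predicted hire probability = {p.mean():.2f}")
```

Removing the protected attribute is not enough: the model simply reconstructs it from whatever correlates with it.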

The challenge goes a level deeper when we consider that the automated systems also cause changes in the behaviour of the people they address.

This is how self-reinforcing loops get started. Consider what might happen if we allowed facial recognition systems to flag individuals for stop-and-search by police on the streets. If the system discriminated against a minority, that minority would get stopped by the police more often, and the experience of being discriminated against might further alienate its members from society. That alienation might in turn increase anti-social behaviour, causing the facial recognition system to discriminate even more. The result is a recursive, self-reinforcing loop.
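A toy simulation makes the dynamic visible. All the numbers and the “alienation” mechanism below are invented for illustration; the point is only how a small initial bias compounds once the system’s output feeds back into its own input:

```python
# Toy model (all numbers invented) of the self-reinforcing loop: a biased
# flagging system -> more stops -> more alienation -> a stronger "signal"
# for the system to react to next round.

def simulate(flagging_bias=0.05, feedback=0.5, steps=10):
    # The behavioural "signal" the system observes per group; equal at start.
    signal = {"majority": 1.0, "minority": 1.0}
    for step in range(steps):
        # The system flags the minority more often than its signal warrants.
        stop_rate = {
            "majority": signal["majority"],
            "minority": signal["minority"] * (1 + flagging_bias),
        }
        # Experienced over-policing feeds alienation, which (in this toy
        # model) raises the signal the system picks up on next round.
        gap = stop_rate["minority"] - stop_rate["majority"]
        signal["minority"] += feedback * gap
        ratio = stop_rate["minority"] / stop_rate["majority"]
        print(f"step {step}: minority stopped {ratio:.2f}x as often")

simulate()
```

Run it and the disparity grows every round, even though the built-in bias never changes – the loop does the rest.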

This is happening right now. We’ve already seen YouTube’s content recommendation algorithm generate a feedback loop that is a worrying example of businesses ignoring ethical implications. The recommendation engine promotes the material you are most likely to spend the longest time viewing, in order to maximise views of advertising. In his excellent article, Columbia Journalism Review’s chief digital writer Mathew Ingram outlines how this can start to radicalize viewers of controversial content, as the algorithm picks more and more content that confirms their biases to keep them watching, instead of offering differing viewpoints. At the same time, the loop incentivises the creation of ever more extreme content, potentially polarizing societies even further, all to maximise the sale of YouTube’s ad space.
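As a caricature of this mechanism – invented numbers, certainly not YouTube’s actual algorithm – here is a greedy recommender that always maximises predicted watch time, slowly drifting a viewer towards more extreme content:

```python
# Caricature of greedy watch-time optimisation (invented numbers): the
# recommender always picks the item with the highest predicted watch time,
# and each viewing nudges the viewer's taste towards what was watched.

def predicted_watch_time(item_extremeness, user_taste):
    # Toy assumption: people watch longest the content that is slightly
    # more extreme than what they are already used to.
    return 1.0 - abs(item_extremeness - (user_taste + 0.05))

def recommend(user_taste, catalogue):
    return max(catalogue, key=lambda item: predicted_watch_time(item, user_taste))

catalogue = [i / 100 for i in range(101)]  # extremeness scale from 0.0 to 1.0
taste = 0.10                               # the viewer starts near mainstream
for session in range(10):
    pick = recommend(taste, catalogue)
    taste = 0.8 * taste + 0.2 * pick       # watching shifts the viewer's taste
    print(f"session {session}: recommended extremeness = {pick:.2f}")
```

No single step in the loop is malicious – each recommendation is locally “optimal” – yet the trajectory only points one way.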

3. The Unpredictable

Finally, although I’ve been a cheerleader for the disruptive innovations that Silicon Valley startups have brought us, we’ve started to see a series of unpredictable negative consequences.

The weaponisation of social media by Russia to promote Donald Trump and Brexit has thrown western democracies into chaos by turning the platforms we once used for sharing cute cat pictures into propaganda machines.

The same algorithms that help Facebook sell more targeted advertising can be used to pinpoint the most vulnerable members of society and alter their voting behaviour, as techno-sociologist Zeynep Tufekci argued during her visit to Solita last year.

Similarly, the negative impact of Airbnb on life in cities like Amsterdam and Barcelona, or the various crises that have plagued Uber, show how unintended and unpredictable the consequences of technological innovation can be.

To counter the unpredictable, I believe we need two things. First, a basic “AI literacy”, i.e. an understanding of what is real and what is snake oil. Second, powered by that literacy, a constructive debate – it is the only way to make sure we consider not only the business benefits of emerging technologies, but also look at them through the lens of ethical and societal considerations.

Interested in learning more? Read the excellent blog post by my colleagues Antti Rannisto and Jani Turunen about machines as social agents, which dives much deeper into the work we do at Solita around AI and ethics. And pre-order our report on AI and ethics!

Many thanks to my colleagues at Solita’s AI ethics forum who contributed their views and comments, especially @ollilind, @randommman and @anttirannisto!