I currently work with machine learning in the context of customer experiences. My team builds applications in which algorithms help determine what an individual customer wants or needs next. The applications learn from customer behaviour, and the recommendations they offer improve continuously, both during use and as customer profiles are compared with one another. This made me stop and think about the moral responsibility of these activities.
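The core idea of learning from behaviour and comparing customer profiles can be sketched as simple user-based collaborative filtering. Everything below (the customer names, items and engagement scores) is invented for illustration; this is a minimal sketch, not our actual system.

```python
from collections import defaultdict

# Toy interaction data: customer -> {item: engagement score}.
# All names and scores are made up for illustration.
interactions = {
    "alice": {"shoes": 5, "socks": 4, "hat": 1},
    "bob":   {"shoes": 4, "socks": 5, "scarf": 4},
    "carol": {"hat": 5, "scarf": 2},
}

def similarity(a, b):
    """Cosine similarity over the items both customers have interacted with."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    num = sum(a[i] * b[i] for i in common)
    den = (sum(v * v for v in a.values()) ** 0.5) * \
          (sum(v * v for v in b.values()) ** 0.5)
    return num / den

def recommend(user, data, top_n=2):
    """Score items the user has not yet seen, weighted by how similar
    each other customer's profile is to this user's profile."""
    scores = defaultdict(float)
    for other, ratings in data.items():
        if other == user:
            continue
        sim = similarity(data[user], ratings)
        for item, rating in ratings.items():
            if item not in data[user]:
                scores[item] += sim * rating
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("alice", interactions))  # → ['scarf']
```

As more interactions accumulate, the similarity weights shift and the recommendations change, which is the "constantly improving" behaviour described above, and also the root of the responsibility question: the objective function decides what improving means.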
Facebook has long been at the centre of the debate about algorithms that prioritise content for users. Facebook has the power to change the way people think and thus, in practice, the entire world.
This is a responsibility that Facebook probably neither wants nor is prepared for. At Facebook, algorithms are as a rule written to maximise user numbers and the time spent in the service, and thus to increase advertising revenue. As a by-product, Facebook influences which news we read and whose lives we follow, and this inevitably affects our thoughts, whether we want it to or not.
It is interesting, to say the least, that on one of the biggest media platforms in the history of mankind, the roles of editor-in-chief and moral guardian have been handed to advertising algorithms.
YouTube – the greatest radicaliser of our time
I read an article about a researcher testing YouTube's algorithms. The researcher created fresh accounts and watched videos on various topics to see which videos YouTube auto-plays after a video ends. The researcher summarised the tests by stating that YouTube is the greatest radicaliser of our time.
For example, if you watch immigration-related videos, the algorithm keeps looking for more radical videos to hold your interest. If you start with a video on immigrant employment, you will soon end up watching content produced by the extreme right. Once again it is about maximising advertising revenue through machine learning, but the outcome is that extreme phenomena receive more attention than ever before. https://mobile.nytimes.com/2018/03/10/opinion/sunday/youtube-politics-radical.html?referer=
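The ratchet the researcher describes can be caricatured in a few lines. The toy model below is entirely invented (the catalogue, the watch-time model and the 0.2 step are assumptions, not YouTube's actual system): if predicted watch time is the only objective, and viewers are assumed to watch longest what is slightly more provocative than what they are used to, a greedy "up next" policy drifts steadily towards the extremes.

```python
# Toy model of an engagement-maximising "up next" choice.
# Everything here is invented purely to illustrate the drift effect.

catalogue = [0.1, 0.3, 0.5, 0.7, 0.9]  # video "extremity" on some topic

def predicted_watch_time(viewer, video):
    """Assumed behaviour: people watch longest what is slightly
    more provocative than their current baseline (here, +0.2)."""
    return 1.0 - abs(video - (viewer + 0.2))

def autoplay_path(start, steps=4):
    """Greedily pick the video that maximises predicted watch time,
    updating the viewer's baseline after each video watched."""
    viewer, path = start, [start]
    for _ in range(steps):
        nxt = max(catalogue, key=lambda v: predicted_watch_time(viewer, v))
        path.append(nxt)
        viewer = nxt
    return path

print(autoplay_path(0.1))  # → [0.1, 0.3, 0.5, 0.7, 0.9]
```

Nothing in the objective pushes back towards moderation, so the path only ever escalates. That is the point of the argument: the drift is not malice, just an engagement metric optimised faithfully.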
While Facebook and Google carry out morally questionable activities unintentionally, there are also several examples of intentionally immoral activities.
Cambridge Analytica and a conspiracy theory
One of the current hot topics is Cambridge Analytica and the harvesting of 50 million Facebook profiles to manipulate people’s voting behaviour. Something else has also come up: https://www.hs.fi/talous/art-2000005610811.html?ref=rss
The same organisation has had a remarkable influence on Trump’s election and Brexit both financially and operationally.
It is worth asking why the players behind Cambridge Analytica, who also own one of the world's biggest hedge fund firms, Renaissance Technologies, want Trump as president of the United States and the UK out of the EU. Do they want these things so much that they are willing to put their money and reputation on the line? The Renaissance funds are worth 84 billion dollars, with an average annual return of approximately 30 percent. All investments are made by machines using predictive analytics.
Here comes the conspiracy theory of the day. Hedge funds live off volatility, and Renaissance's own investment portfolio is too big for a market without sufficient volatility. If there is enough volatility, the algorithms bring in money. From this perspective, Trump's presidency and the implementation of Brexit are justified investments, even if the risks are high.
The question is: what has to happen next for there to be enough volatility in the markets? A major military conflict? If so, the next question is:
How scary would it be if there were an organisation with 84 billion dollars, the world's best algorithm writers and the morals of Beelzebub?
I am an optimist and believe it is inevitable that companies will come to optimise the value experienced by customers instead of short-sighted sales maximisation, especially in Europe, where the GDPR protects consumers. Using machine learning to improve customer experiences, rather than advertising, will also lead to better outcomes. However, I am sure we will still encounter similar moral issues, and everyone who writes algorithms should take responsibility for their effects.