No wonder this happened, as it turned out that Facebook had harvested data from hundreds of millions of people across the world to profile and target them with misinformation and political messages, without their knowledge or consent.
The damage may be irreparable. It may well be that we’ll end up coining the expression “a Facebook moment” to describe what happens when a company loses its key asset in the digital age: user trust.
The more we use and rely on data to improve our operations, the more our businesses depend on our users allowing us to gather and analyse the information they provide.
This is also a huge opportunity for traditional companies that are still in the early stages of their data journeys. As our Principal Consultant Lasse Girs argues so poignantly in his blog post: “While ‘old organizations’ have a long way to go in transforming their cultural habits to become data-driven, they should also maintain their integrity and ensure their data-drive is not in conflict with their values. For example, organizations that value their customers over everything else should have no ethical dilemmas in deciding what customer data to collect and not collect. These ethical choices can make them stand out in this current era of digital and data confusion, and can be a significant competitive edge over organizations for whom data is an asset and customers are (only) data suppliers.”
Helping our current and future customers navigate these challenges is one reason why we at Solita wanted to research and publish a report on the ethical challenges of using AI in business and society.
We are keen to lead the conversation around the topic, since the technology industry – and all other industries – can ill afford another Facebook moment. We have already seen, for example, a case in Finland where a financial institution, Svea, misused data to determine eligibility for credit. It is in everybody’s interest that there won’t be more.
In this discussion we first need to understand that AI (which we define broadly to encompass computer programs realised through the use of machine learning and other related data operations) is not “just a tool”, as some have naively argued. The use of automated decision making will have a broad impact on the way we live, and either we as business people and technologists self-regulate, or we risk losing customer trust and ending up with broad, stifling regulation.
To quote a popular science fiction classic, Jurassic Park: “Yeah, but your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”
My colleague, our Head of AI Jani Turunen, sums up in the foreword of the report how we at Solita approach these challenges with a broader than “just engineering” mindset: “We do a lot of work helping our customers gain value from the use of artificial intelligence, and we have realised that when human data comes into play, we immediately face difficult questions around automating decision making. If we help our customers build an AI-based medical device, how does its use change the familiar scenario of a doctor telling a patient how things are with her health? How do patients feel about getting advice from a machine? Is the advice more or less trustworthy when a machine gives it? Can the advice be better coming from a machine? What does better advice mean in this context, better for whom, and what exactly does the AI algorithm optimise?”
We hope the report helps you start considering the opportunities that AI technologies offer your business and society at large, while also helping you avoid a Facebook moment. If you want to continue the discussion with us, please reach out and let’s set up a meeting!