There are no second chances with automated decision-making. You can’t plead your case or crack a joke with an AI system the way you could with a human decision-maker. Add the risk that AI systems will mirror existing human biases, and the future starts to sound quite scary. But this is not a foregone conclusion. The choices we make today will shape that future. This is why the debate around AI ethics is one of the most crucial issues facing humanity.
We spoke with Henrik Rydenfelt, a writer on AI ethics with a background in philosophy and communications and a contributor to our recent report The Impact of AI, about the dangers of digitalising discrimination.
Do you think it is acceptable for AI systems to make discriminatory decisions?
Well, the obvious answer to this would be no. It is not acceptable. The only case where I could see this being morally reasonable is if the algorithm practised so-called ‘positive discrimination’, where the goal is to even out existing differences in society. For example, AI could offer bonuses more often to women in a company where there are wage differences between genders.
How can we protect ourselves from discrimination by AI models?
This comes down to data. What kind of data do we use to train the algorithm? Data is basically the oil that makes the machine run.
The more data we feed the system, the more accurate and informed its results can be.
However, we know that there is some data we should not feed to the algorithm – data that might lead to discrimination.
This raises a serious question: will companies building these algorithms aim for the most reliable results in the short term and use as much data as possible, or will they intentionally restrict the data and make their AI systems less intelligent but more ethical?
One example of this is the case where a bank’s AI-based credit scoring system was found guilty of discrimination because it used factors like the applicant’s native language, hometown and gender to decide whether to grant a loan. Now, if banks could not ask for the home address on loan applications, that might already help. Restricting the data that AI systems are allowed to use is one concrete way to avoid discrimination.
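As a minimal sketch of what such a restriction can look like in practice (the dataset, column names and model choice here are purely illustrative assumptions, not details from the case above), dropping protected attributes before training a credit model might look like this:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical loan-application data; the file and column names are
# illustrative assumptions, not taken from any real system.
applications = pd.read_csv("loan_applications.csv")

# Protected attributes the model is not allowed to see.
PROTECTED = ["native_language", "hometown", "gender"]
TARGET = "loan_repaid"

X = applications.drop(columns=PROTECTED + [TARGET])
y = applications[TARGET]

# One-hot encode any remaining categorical columns.
X = pd.get_dummies(X)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```

Note that dropping the columns is only a first step: as the home-address example suggests, other features can act as proxies for the protected attributes, so the restriction has to cover correlated data as well.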
Who should oversee the ethics of AI?
Overseeing the ethics of AI and preventing discrimination is currently very difficult. Right now, the responsibility lies largely with the designers of the applications – but also with the end users themselves. The debate about this is still very much open.
The problem is often that companies own their algorithms and do not want to open up their proprietary information.
If a social media platform were to change its algorithm to favour content produced by white people, for example by showing 10% more of that content in people’s news feeds, how would we ever know?
Hypothetically, some researchers could study the algorithm and its behaviour. But collecting that vast amount of data and arriving at a reliable conclusion would be practically impossible. It is possible that these kinds of inequalities already exist in applications we all use, and we just do not know it.
How will AI change decision making?
For example, if a person has made major changes for the better in his or her life, no machine can yet understand that. If this person applies for a loan, the AI application assessing their credit rating will only understand the past behaviour it sees in the data. If the person has a bad history of repaying loans and an unsteady work history, the AI application would probably not grant the loan.
A human decision-maker, by contrast, might perceive this and see the genuine change. That is the thing: AI can be a boring bureaucrat who does not give second chances – humans do.
Henrik Rydenfelt is a Docent of Philosophy and Communications (University of Helsinki), a Postdoctoral Researcher (University of Oulu) and Chairman of the Council of Ethics for Communications in Finland.