
Four strategies for building trust in your AI systems

Anna Metsäranta, Head of Sustainable AI, Solita

Published 04 Jan 2023

Reading time 9 min

Artificial intelligence continues to grow in popularity, with organisations making sizable investments in improving their business and serving customers better with the help of AI. At the same time, there is increasing awareness of the negative social and environmental impacts of AI. Can we trust that the AI systems making decisions about us do good and do no harm? Trust is a crucial factor in determining the success of an AI system. It is essential for organisations not only to address the logical reasons for trusting AI, but also to understand how people come to experience trust. In this blog post, we explore four ways in which organisations can build and foster trust in their AI systems, ultimately ensuring that their investments in AI are worthwhile.

Laying the foundations of trustworthy AI

Considerable work is being done in both academia and industry to improve AI trustworthiness, creating mechanisms that help ensure AI systems are designed and built for mutually beneficial impact and that they work as designed. The more critical the potential impact of an AI solution, the more important it is to ensure it works as intended.

There are several common principles that build towards AI trustworthiness, for example:

  • Transparency: informing relevant stakeholders of the solution’s purpose, its functioning principles, the way it is used, and its intended benefits and beneficiaries
  • Explainability: the ability to determine which factors contributed to the system’s output
  • Fairness: the ability to demonstrate that the system’s outputs are just and not biased in unwanted ways
  • Robustness: the consistency of the system’s outputs across varying situations
  • Privacy: processing sensitive information securely and lawfully
  • Accountability: the system has an assigned owner who is responsible for its functioning principles

These and other principles of responsible AI use can be put into practice through an AI governance and operating model, and by ensuring that the special nature of AI systems is taken into account in risk and quality management processes. Numerous trustworthy AI policies and frameworks are available in the public domain, for example the comprehensive AI governance model created by the AIGA consortium in close collaboration between academia and industry, which addresses the requirements of the EU AI Act proposal. Other ways of building AI trustworthiness include certificates, standards, and internal and external audits.

While these mechanisms create the necessary foundations for ‘objectively’ defined technological trustworthiness, are they enough? How do people empirically and subjectively come to experience trust? It may not suffice to provide rational justifications of an AI system’s correct workings if other factors make people distrust its outputs. In the remainder of this blog, we walk you through four mechanisms for addressing experienced trust in AI systems.

Fostering experienced trust

The aforementioned mechanisms of objective technological trustworthiness are geared towards assuring experts that trustworthy AI best practices have been followed when designing and building an AI system, and when monitoring that it works as intended. However, most people do not have deep enough expertise in data and algorithms to form a view on whether these mechanisms are sufficient and appropriate. Their views on AI systems and their potential impacts are influenced by factors beyond such rational justifications. So how can organisations foster trust with the users of their AI systems and with those affected by their use?

The feeling of trust is intuitive and shaped by first-hand experience. Answering questions about how trust is constructed, or which factors lead to mistrust, requires expertise in the social and behavioural sciences. Understanding how trust is built in different cultures, communities, and contexts requires human-centred thinking, interdisciplinary collaboration, and empirical work. Below we give four examples of what this could mean in practice.

#1: Leverage institutional trust

Humans are social animals, wired to instinctively trust other humans in their close in-group communities. In democratic societies like Finland, citizens typically also trust institutions that they perceive to serve the common good, such as the police, the tax authority, the social welfare authority, and even retail cooperatives. We expect such organisations to operate in a trustworthy, fair, and responsible manner. Our trust in these institutions is built up in important ways by societal structures, such as law and regulation, and by the presence of impartial supervisory bodies such as the Financial Supervisory Authority.

While we instinctively trust other humans and institutions that serve the common good, such social instincts do not necessarily extend to AI solutions that make decisions about us but whose functioning principles are difficult to grasp or hidden from us altogether. How, then, do we relate to AI solutions used by trustworthy institutions?

In qualitative user studies we have found that people’s trust in the institution outweighs any lack of awareness or understanding of how its AI systems work. If a person trusts the institution, they will intuitively trust its AI solutions, or may not even stop to consider the AI solutions as separate objects of trust. Another factor that reinforces trust in a trustworthy institution’s AI systems is the perceived opportunity to contest an AI system’s outputs and have any mistakes rectified by a fellow human.

Trust in the institution can be leveraged by ensuring that the organisation’s AI solutions are closely connected to its mission, reflecting and serving its values. For example, one of OP Financial Group’s missions is to increase people’s financial skills. OP offers customers a personal finance management service called My financial balance, which utilises AI to create transparency into the user’s income and spending in order to support their own financial planning. A qualitative study of current and potential users of My financial balance found that users considered OP a trustworthy institution and believed it to use AI responsibly. Trust in the institution resulted in trust in the AI solution. We will return to OP and My financial balance later in this article.

#2: Obtain objective testimonials

Trust in an institution, or its own assurances of the trustworthiness of its AI solutions, may not always suffice, particularly for solutions that affect people’s health, safety, or privacy. Not everyone can or wants to study the functioning principles of AI solutions in sufficient depth to form their own opinion of whether they can be trusted. Responsible AI standards and certificates help to build trust through objective evaluation, as do public opinions expressed by impartial external experts.

Psychologically, it is very effective to give an external evaluation a face. It is impactful when a respected, well-known expert (and, by association, their institution) publicly shares their assessment of an AI solution, especially if their evaluations are perceived to be driven by systematic suspicion, as is the case with white hat hackers, for example. The opinions of these experts cannot be bought: if the solution were later found to be harmful, it would also damage the experts’ own reputations.

The credibility of recognised experts was a significant factor in creating trust in the Finnish Institute for Health and Welfare’s national COVID-19 tracking application, Koronavilkku. Technology expert Sami Köykkä was frequently seen publicly explaining the functioning principles of the application. Importantly, these approachable explanations often took the form of public dialogue with citizens and were corroborated by white hat hackers, cybersecurity experts, public officials, and journalists, all testifying to Koronavilkku’s reliability, security, and privacy.

Case: Koronavilkku, one of the most popular mobile apps in Finland

#3: Design with users, for the users

Trust is encouraged by a feeling of ownership, and people tend to feel ownership of things they have helped design and develop. This can be achieved by co-designing AI solutions with the intended users: when users participate in the iterative design and development of an AI solution, trust is created gradually by answering trust-related needs well before the solution is deployed. Another significant benefit of co-creation with users is that it reduces friction in the user experience, making the solution more intuitive and smooth to use. The smoothness of technology has been found to encourage users’ trust. These factors increase the likelihood of the co-created solution being used as intended, thus creating the expected mutual benefits.

At its best, the design and development of trustworthy AI solutions is human-centric and fuelled by broad interdisciplinary collaboration. Users must be viewed and understood both as individuals and as part of social and cultural worlds, because the construction of trust is a multifaceted, social, and context-dependent process. Involving social scientists in the interdisciplinary team enables organisations to create purposeful and trustworthy AI solutions.

The importance of human-centricity increases the more stressful and critical the context of use is. For example, a public official’s situational awareness system should not place a cognitive burden on its user in critical decision-making situations, but rather provide information in a way that supports the human’s ability to make decisions under stress. When a solution serves the user’s real needs in a context-appropriate way, it is easier to trust its outputs.

As an example of impactful co-creation with end users, HSL (Helsinki Region Transport Authority) made extensive use of participatory design methods in revamping the user interface of its journey planner from 2019 to 2021. The result is one of Finland’s most popular and widely used mobile apps.

Case: More convenient routes

#4: Communicate honestly and meaningfully

Trust is built through interaction. The more openly, actively, and honestly an organisation communicates about its AI solutions, including their limitations and challenges, the more trust it can gain with users. Active communication is especially important for AI solutions whose user community is so large that involving a significant portion of the users through co-creation is impossible.

Communication should be tailored to the target audience’s needs. What are their specific wishes or concerns, what is their level of expertise, and how do they relate to the solution? Explaining the mathematical principles of algorithms may not be the right approach if the target audience consists primarily of laypersons. Whatever the audience and their needs, the functioning of the solution should be explained in terms the audience can absorb, with concrete examples. A responsible organisation also communicates potential challenges and aspects that the audience may not know to ask about. Besides providing sufficient, meaningful, and understandable information, it is advisable to offer a channel through which users can report problems or ask questions about the solution. The more critical the solution, the more important it is to ensure contestability.

OP Financial Group was mentioned earlier in this article as a trusted institution whose AI solutions enjoy inherent trust. OP wants to be a frontrunner in responsible AI and has taken important steps to increase meaningful transparency into its AI solutions. OP recently published a report for consumers that explains the functioning principles and the algorithms used in the OP My financial balance personal finance management service. Importantly, the report also sheds light on the social and organisational aspects of the service by explaining how the AI technology is used and developed within the organisation. The report serves as a notable benchmark for other organisations.

Case: Meaningful transparency of consumer-facing AI

Building trust in your AI solutions is a worthy endeavour

Verifying the correct technical functioning of an AI solution is one crucial element in building users’ trust. Having in place an effective AI governance and operating model helps to increase trust in an organisation’s ability to assess the risks of a solution, verify its impacts, and mitigate any potential negative effects. However, metrics and governance models are not enough to foster intuitive trust in a solution, especially if the system is critical from the users’ perspective.

Making the effort to understand what affects the emergence of user trust enables organisations to build and maintain trust systematically. The methods covered in this article include leveraging institutional trust, inviting impartial assessments from external experts, applying participatory design methods, and communicating openly about both the AI solutions themselves and the organisation’s processes for building and maintaining trustworthy AI.

Investing in building and maintaining trust pays itself back through higher acceptance of AI solutions. The trust of users and of the people affected by these solutions is necessary for reaping the expected rewards. It is also responsible use of your technological power.
