Artificial Intelligence now affects every aspect of our lives: the workplace, the way we study, even what we choose to stream on TV. This technology is everywhere, but how much do we trust it, and why should we?

It is no surprise that the origins of trust lie in the human ability to determine whether someone or something is telling them the truth. Our progress as a species has been heavily influenced by our ability, and desire, to cooperate with others: from hunting for food to landing on the moon, we work in teams, and the more we collaborate, the better we generally perform. To do that successfully, we must trust others to do what we expect of them, and that clearly cannot be guaranteed. So, what makes you trust someone or something? Ultimately, it is a balance of risk versus reward. Given the variety in human nature, everyone approaches this differently: some enjoy the excitement of uncertainty, while others are more risk-averse. Trust is not a clear-cut concept; it is complex and far from black and white.

From this starting point, if one is designing an AI system with which a user will collaborate, how can that user be reassured so that they build trust in the system and in any advice it provides?

This series of three articles will assess some of the key factors that impact our ability to trust:

  • First, the basics: the underlying features that create trust.
  • Then, the emerging issues in digital technology that can affect that trust over time.
  • Finally, how we should design systems to address our basic human concerns.

As we move into the era of digital technology, fears over cyberspace are increasing, and most organisations are trying to reassure customers about their digital trust credentials. Many articles have been written on this topic, and most focus on the functional aspects of a system, such as cyber security and governance, which are generally procedural in nature. Sadly, few address the more fundamental issues of human belief and how the design of products should evolve to address these often subconscious concerns.

A starting point for this process has to be a clear recognition that artificial intelligence systems do not possess sentience, even if we may desire or expect our digital assistants to act that way. So, although such systems cannot lie, it has been widely shown that they cannot be relied upon to tell the truth. With this in mind, as designers we must consider how to protect our customers, whilst asking how useful a tool is when it is frequently creative with the facts. In essence, we must carefully address the challenge of advising users of the reliability of a response, whilst ensuring that the system retains its utility.
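To make that challenge concrete, here is a minimal sketch in Python of one way a system might carry a reliability caveat alongside its answer. Everything in it is hypothetical: generate_response stands in for whatever model back end produces an answer and a confidence estimate, and the thresholds are arbitrary illustrations, not recommended values.

    # A minimal sketch of surfacing response reliability to the user.
    # generate_response is a hypothetical stand-in for any AI back end
    # that can return both an answer and a confidence estimate (0.0-1.0).

    from dataclasses import dataclass

    @dataclass
    class Response:
        text: str
        confidence: float  # model's self-estimated reliability, 0.0-1.0

    def generate_response(prompt: str) -> Response:
        # Placeholder: a real system would call a model and derive a
        # confidence estimate (e.g. from log-probabilities or self-checks).
        return Response(text="Paris is the capital of France.", confidence=0.97)

    def present(response: Response) -> str:
        # Translate the raw score into plain-language guidance, so the
        # caveat sits beside the answer rather than hidden in a log file.
        if response.confidence >= 0.9:
            caveat = "High confidence."
        elif response.confidence >= 0.6:
            caveat = "Moderate confidence: please verify key facts."
        else:
            caveat = "Low confidence: treat this as a starting point only."
        return f"{response.text}  [{caveat}]"

    print(present(generate_response("What is the capital of France?")))

The design point is simply that the caveat travels with the answer, so users see the system's own uncertainty at the moment they decide whether to act on it.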

From a psychological perspective, much of our interaction with others depends on our expectations of them. Skill, reliability and openness are some of the underlying attributes we look for when evaluating people with whom we intend to collaborate. If we view an AI assistant in that light, these are the features we will be searching for. To build trust, AI developers must ensure that their customers clearly understand the capability of a system, since for some users expectations may far exceed capability, as is often the case with human co-workers. People need to interact with the tools they are using and learn their traits and shortcomings in order to understand the limits of what can be achieved; otherwise those tools will be readily discarded.

Culture is another critical factor in the creation of trust: it can shape how a customer weighs the potential benefits of a technology against the perceptions of their social peer group or management structure. There is no silver bullet here, but the more support and transparency one can provide in the product, the easier it will be to establish trust. Building in additional layers of ‘on-boarding’ support may increase software development costs, but it will create enhanced reputational trust and provide a wider client base in the long run.

These fundamental features of trust therefore need to form the basis of the approach to designing AI systems, but that is not the end of the story. Our next article will focus on some of the emerging concerns that can impact our trust in digital technologies and what we must do as developers and users to realise their vast potential, whilst protecting ourselves.

In this series, Ian Risk, Non-Executive Director of Intellium AI, is joined by psychologist Charlotte Preen to assess the importance of the human approach to establishing trust in this emerging technology. This article was produced using solely human endeavours; you will have to trust us on that one.