In an age of disinformation and fake news, there is naturally a rising concern that generative artificial intelligence will only make things worse. As with any new technology, there will be those who use it for its intended purpose and those who choose to exploit it. With this in mind, designers tasked with bringing AI systems into use must be as cognisant of their responsibility to guard against misuse as they are of the benefits. The digital world has for years been plagued by issues of cyber security, fraud and malware, but the scale, speed and frequency at which certain actors employing AI can achieve their deceptions is now a primary concern for all stakeholders. These new malpractices are a particular worry given the human tendency to place trust in technology rather than seek the truth. The old adage of ‘believe nothing of what you hear and only half of what you see’ without verifying the source has never been more true, but it is getting harder to see the wood for the trees. This final article in our trilogy focuses on what we, as developers, need to do so that users realise the potential of AI and avoid its pitfalls.

For the designer, the ultimate goal is to ensure the user establishes a trusted relationship with a new product. A utopian outcome would be a completely intuitive interface: the user engages seamlessly, their expectations are exceeded, and trust is created. Unfortunately, when the hype surrounding AI is so great, notions of what can be achieved may become unrealistic. Expectations must therefore be carefully managed.

At Intellium AI, our ethos is first to understand where customers are on their ‘journey of discovery’ and, where possible, to tailor solutions to meet their needs. Potential users are rarely at the same waypoint, so building something completely generic is unlikely to achieve positive results for either party. We firmly believe that it is a combination of approaches that makes the difference when it comes to design. These can be grouped into three types: the fundamentals, best practices and trust-centric activities.

From a business perspective, there are certain essentials that must be in place for customers to establish trust in a company: legal registrations, transparent accounting, reputable owners and the like. In a highly competitive market, first impressions really do count, and this process can happen in a fraction of a second. To ignore such basic elements would immediately place a business at risk of non-compliance or complete rejection. In the digital arena, other factors, specifically data protection and digital risk management, become more prominent. Organisations must therefore offer transparency around data usage and security policies to establish legitimacy and provide reassurance that sensitive information will be protected. Third-party certifications, accreditations and prominent customer reviews help to establish credible authority over these concerns; they present regulatory and social evidence of trust in an organisation and its brand. Details of after-sales support and ethical practices continue the theme of confirming that the fundamentals are in place even before a purchase is made.

Adhering to best practice means considering how a product fits into the life of the customer, what other products they use and how accustomed they are to certain features. The more familiar the features, the more rapidly adoption can occur. Aspects such as colour coding, menu design and navigation mechanisms can make a user interface highly intuitive, especially when aligned to common practice in other mainstream products. User feedback on the achievement of goals or tasks reaffirms that the user is operating appropriately. Intertwining this approach with safety reminders before irreversible actions can prevent issues such as data loss, easing the understanding of quite complex tasks without placing inexperienced users at risk.
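A minimal sketch of how such a safeguard might look in practice. This is purely illustrative; the function name, dataset example and message wording are assumptions, not anything described in the article.

```python
# Illustrative sketch: require explicit confirmation before an
# irreversible action, rather than acting immediately.

def delete_dataset(name: str, confirm: bool = False) -> str:
    """Delete a dataset only when the caller explicitly confirms."""
    if not confirm:
        # Surface a safety reminder instead of performing the action.
        return (f"'{name}' is about to be permanently deleted. "
                "Re-run with confirm=True to proceed.")
    # Confirmation received: perform the irreversible step.
    return f"'{name}' deleted."
```

The point of the pattern is that the default path is always the safe one; the destructive path requires a deliberate, informed choice by the user.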

These initial steps help to build trust through the periods of software acquisition and training, but as the user gains experience and exercises the capabilities of the system, their exposure to risk increases. It is here that trust-centric practices come to the fore.

Onboarding tutorials and tooltips integrated into the software form the backbone of these practices, and should reinforce both the intended uses and the limitations of the system to prevent overtrust from developing. The more transparent the vendor is, the more rapidly credibility and trust will be established, in exactly the same manner as with a human counterpart. If a co-worker says they lack the ability to complete a task, you cannot blame them if they fail; likewise, if an AI system states it has low confidence in a particular result, you should proceed with caution. As others engage with the output of a system, openness about the data used to train algorithms allows the wider user community to judge whether the datasets are adequate or inappropriately biased. Process transparency is critical to making the system explainable, so that users can determine how outcomes were generated, reinforced by confidence scores that help build a picture of reliability. Some research has, however, shown that in certain circumstances providing low confidence scores can have a negative impact on trust, but in reality one has to play the long game. These measures, backed up with data citations, help establish the validity of any outcomes. Finally, giving advanced users the ability to customise or personalise displays, or the level of support provided, builds a sense of ownership, achievement and pride in the capabilities they have realised.
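To make the confidence-score idea concrete, here is a minimal sketch of surfacing a score alongside each output and flagging low-confidence results so the user knows to proceed with caution. The threshold value and message wording are assumptions for illustration only.

```python
# Illustrative sketch: attach a confidence score to each result and
# warn the user when it falls below an assumed threshold.

LOW_CONFIDENCE_THRESHOLD = 0.6  # assumed cut-off, tuned per application

def present_result(label: str, confidence: float) -> str:
    """Format a model output with its confidence, warning when it is low."""
    message = f"{label} (confidence: {confidence:.0%})"
    if confidence < LOW_CONFIDENCE_THRESHOLD:
        # Be transparent about uncertainty rather than hiding it.
        message += " - low confidence, verify before acting"
    return message
```

Exposing the score like this trades a little short-term reassurance for long-term trust: the system admits uncertainty in the same way a candid colleague would.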

The degree to which trust can be established and, more importantly, retained is not tangible; it is fluid. As the digital community learns to push the boundaries of these systems, we must continually adapt our support, because the ways in which they are used may go far beyond what the designer envisaged. A trusted partnership is needed to collectively chart a course to the desired destination, and establishing it has to be our goal.

In conclusion, we must reflect on one key observation concerning AI: we are all part of a very large trial, potentially the largest global experiment ever conducted. We are still learning about this technology, and it is learning about us. Yet it is mainstream, marketed as the next big thing, without its impact being fully understood or regulated. We must proceed with caution: trust is a precious commodity that must be earned and never taken for granted. This multi-faceted, dynamic human attribute is deep-rooted in our emotions and is the key to the success or failure of a product. For Intellium AI it forms the core of everything we do, built on the principles of knowledge, transparency, diversity, integrity and reliability.