XAI key for data understanding
Andreas Bartsch, Head of Service Delivery at PBT Group
Even though the concept of explainable artificial intelligence (XAI) may already be familiar to data scientists, it is becoming increasingly important in a world driven by algorithms. From a business perspective, XAI is fundamental to establishing trust in the processes behind machine learning (ML) and in how AI comes to its conclusions.
XAI is a set of methods and techniques that help people understand and interpret predictions made by ML models. Think of it as having AI translated in a way that makes sense to humans. By tracing the algorithmic decision process, it provides an explanation suitable for human consumption, and this is what greatly improves the user experience.
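One widely used technique in this family is permutation importance: disturb one feature at a time and measure how much the model's predictions move. The sketch below is purely illustrative; the toy model, feature values, and the deterministic rotate-instead-of-shuffle simplification are assumptions for this example, not anything from a specific XAI library.

```python
def model(x):
    """A toy 'black box' scoring model over two features."""
    return 3.0 * x[0] + 0.1 * x[1]

def permutation_importance(model, rows):
    """Estimate each feature's importance by disturbing its column.

    A feature whose values can be scrambled with little effect on the
    output contributes little to the model's decisions. Here we rotate
    the column by one position as a simple, deterministic stand-in for
    random shuffling.
    """
    baseline = [model(r) for r in rows]
    n = len(rows)
    importances = []
    for j in range(len(rows[0])):
        disturbed = [list(r) for r in rows]
        for i in range(n):
            disturbed[i][j] = rows[(i + 1) % n][j]
        drift = sum(abs(model(r) - b) for r, b in zip(disturbed, baseline))
        importances.append(drift / n)
    return importances

rows = [[1.0, 5.0], [2.0, 1.0], [3.0, 4.0], [4.0, 2.0]]
imp = permutation_importance(model, rows)
print(imp)  # the first feature dominates, matching its larger weight
```

Even this crude measure surfaces the kind of insight the article describes: the human operator can see which inputs actually drive the output, without needing to read the model's internals.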
Making sense of logic
The resultant cooperation between the algorithm and the user creates trust that can help influence future behaviour. If the recipient understands how the algorithm arrives at its results, they can build on it further. By understanding the logic, the human operator can identify potential weak points from a data science perspective.
Coming to grips with the what, why, and how brings with it an understanding of the function the algorithm must provide. One aspect of this is the comprehensibility of AI. This centres on the knowledge a user gets from the process. The knock-on effect is providing meaning through the interpretability of the algorithms and then, ultimately, introducing a layer of transparency to AI. Therefore, XAI delivers the understandability critical to being able to relate to the processes used.
XAI in action
From a use case perspective, industries like healthcare, aerial navigation, and the military will find profound value in XAI. These industries deal with life and death scenarios where making wrong decisions is not an option. Understanding how an algorithm comes to its conclusion, and whether it can be trusted, forms the building blocks for improving the process over time.
XAI can best be illustrated through dynamic graphs and textual descriptions. These allow users to readily chart the 'thinking' path taken and see how the algorithm came to its conclusion. Think of it as writing a Maths test in school. While the answer is important, the focus is on the steps required to get there.
The same applies to XAI.
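A textual description of that kind can be as simple as breaking a prediction down into per-feature contributions. The sketch below assumes a hypothetical linear credit-scoring model; the weights, bias, and feature names are invented for illustration only.

```python
# Hypothetical weights and bias for a toy linear scoring model.
WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
BIAS = 1.0

def explain(features):
    """Return a score plus a human-readable contribution trace."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    score = BIAS + sum(contributions.values())
    lines = [f"Prediction score: {score:.2f}"]
    # List the biggest influences first, 'showing the working'.
    for name, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        direction = "raised" if c >= 0 else "lowered"
        lines.append(f"- {name} {direction} the score by {abs(c):.2f}")
    return score, "\n".join(lines)

score, text = explain({"income": 4.0, "debt": 2.0, "years_employed": 3.0})
print(text)
```

The output reads like the worked steps of the Maths test above: not just the final score, but which factors pushed it up or down, and by how much.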
Basis of trust
Being able to explain and trust AI-generated insights is critical for any industry. If a company cannot trust and rely on the output of AI and ML, then it cannot become more productive, let alone improve its decision-making process.
XAI therefore allows for human judgment and improvement, as the ultimate accountability and auditability still reside with people. It is the human operator who is ultimately responsible for decisions taken. XAI is about achieving understanding of, and trust in, an algorithm on the part of the human recipient of its output. This can be compared to aspects of organisational leadership, where employee 'buy-in' is achieved by simply eXplAIning the supporting logic for the decision-making process. After all, if someone understands the thinking behind a decision, they are more likely to accept it.
In all this, data quality is critical. If the data does not adhere to a certain standard, it can become a massive problem down the line. The adage of garbage in, garbage out certainly applies, especially in industries like healthcare, where mistakes cost lives. XAI will help guide the path forward in understanding data analysis and the algorithmic processes associated with AI and ML.