ACACIA: A guide to using explainable AI
The R&D Digital Innovation team has developed a guide to using explainable AI (XAI) with black-box models trained on tabular data, benchmarking six libraries and more than 20 explainability methods. They have also proposed an explainability workflow.
Black boxes
As a business, we’re leveraging the power of Artificial Intelligence (AI) to build powerful algorithms that can process huge volumes of complicated data, automate repetitive workflows, support decision-making, and extract insights that can unlock new value. However, as these algorithms become more complex, they can turn into black boxes whose results are difficult for us to explain.
A practitioner’s guide
The practitioner’s guide shows how to use explainable AI (XAI) methods with black box algorithms that process tabular data. XAI tools are important because they bring transparency and explainability to otherwise opaque processes. The team compared XAI frameworks spanning six Python libraries and more than 20 explainability methods, highlighting the advantages and limitations of each. They also propose a logical process for interpreting models: start with a global view of the model’s behaviour, then drill down to explain specific examples, as sketched below.
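To make the global-then-local workflow concrete, here is a minimal Python sketch using SHAP, one widely used XAI library. The article does not name the benchmarked libraries or give an example, so the library, model and dataset below are illustrative assumptions rather than the ACACIA guide’s own material.

# Illustrative sketch only: SHAP, scikit-learn model and dataset are assumptions,
# not taken from the ACACIA guide.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Train a "black box" model on tabular data.
X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Step 1 - global view: which features drive predictions across the whole test set?
explainer = shap.Explainer(model, X_train)
shap_values = explainer(X_test)
shap.plots.beeswarm(shap_values)      # global feature-importance summary

# Step 2 - local view: explain one specific prediction in detail.
shap.plots.waterfall(shap_values[0])

The same two-step pattern (global summary first, then per-example explanations) applies whichever explainability library is used.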
Next steps
This study was funded by EDF R&D under the ACACIA project, and the results have been positive. In future, it would be valuable to extend the practical guide to other data formats, such as time series, image and text data.
Find out more: rdoperations@edfenergy.com