When to Explain? Model Agnostic Explanation Using a Case-based Approach and Counterfactuals
Abstract
Explainable Artificial Intelligence (XAI) systems have gained importance with the increasing demand for understanding why and how an artificial intelligence system makes decisions. Counterfactual explanations, one of the rising trends of XAI, benefit from human counterfactual thinking mechanisms and aim to follow a similar way of reasoning. In this paper, we create an eXplainable Case-Based Reasoning system using counterfactual samples with a model-agnostic approach. While the CBR methodology allows us to use past experiences to create new explanations, counterfactuals help increase understandability. The main idea of this paper is to generate an explanation only when necessary. The proposed method is sample-centric; thus, an adaptive explanation area is calculated for each data point in the dataset. We detect whether any existing counterfactual of a sample is present to increase the coverage of the system, and we create explanation cases from the detected sample-counterfactual pairs. If a query case falls within an explanation area, at least one explanation case is triggered, and a two-phase explanation is created using a text template and a bi-directional bar graph. In this work, we show (1) how explanation cases are created, (2) how the nature of a dataset influences the explanation area, (3) how understandable explanations are created, and (4) how the proposed method works on open datasets.
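The abstract only outlines the pipeline, so the sketch below is one possible reading of it rather than the authors' implementation. The ExplanationCase structure, the half-distance radius, the nearest-unlike-neighbour pairing, and the explain template are all assumptions made for illustration.

```python
import numpy as np

# Illustrative sketch only: the paper's exact definitions of the
# explanation area and the counterfactual detection step are not given
# in the abstract, so the choices below are assumptions.

class ExplanationCase:
    """A sample paired with a detected counterfactual of a different class."""
    def __init__(self, sample, counterfactual):
        self.sample = np.asarray(sample, dtype=float)
        self.counterfactual = np.asarray(counterfactual, dtype=float)
        # Assumed adaptive radius: half the distance to the counterfactual,
        # so the explanation area shrinks near the decision boundary.
        self.radius = 0.5 * np.linalg.norm(self.sample - self.counterfactual)

def build_explanation_cases(X, y):
    """Pair each sample with its nearest neighbour of a different class
    (a stand-in for 'detecting an existing counterfactual')."""
    X = np.asarray(X, dtype=float)
    cases = []
    for i, x in enumerate(X):
        others = [j for j in range(len(X)) if y[j] != y[i]]
        if not others:
            continue
        j = min(others, key=lambda k: np.linalg.norm(X[k] - x))
        cases.append(ExplanationCase(x, X[j]))
    return cases

def triggered_cases(query, cases):
    """Return every explanation case whose area contains the query;
    an empty list means no explanation is generated for this query."""
    q = np.asarray(query, dtype=float)
    return [c for c in cases if np.linalg.norm(q - c.sample) <= c.radius]

def explain(case, feature_names):
    """First phase of the assumed two-phase output: fill a text template
    with the signed feature changes between sample and counterfactual
    (a bi-directional bar graph would plot the same deltas)."""
    deltas = case.counterfactual - case.sample
    parts = [f"{name} would need to change by {d:+.2f}"
             for name, d in zip(feature_names, deltas) if abs(d) > 1e-9]
    return "The outcome would differ if " + "; ".join(parts) + "."
```

Under these assumptions the explanation area adapts per sample because each radius is tied to the distance to the paired counterfactual, so samples near the decision boundary receive small areas; the paper's actual definition may differ.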
Published: 2023-03-09
How to Cite
B. Bayrak and K. Bach, “When to Explain? Model Agnostic Explanation Using a Case-based Approach and Counterfactuals”, NIKT, no. 1, Mar. 2023.