
The Local to Global problem

Artificial Intelligence is making its way into our lives at a growing pace. Complex and unintelligible models are seeping into our decision-making processes in a wide range of applications, from product recommendation to personal health, from security to credit scoring. In sensitive domains such as these, transparency and understanding are essential and are directly encoded into law [1]. Rather than blindly relying on "black box" systems, we strive for accurate models with a sensible decision-making policy, not bigoted models with suspect biases [2].

Explanations come in two forms: a local one, in which a rationale for a single black box decision is provided, and a global one, in which a set of rationales covering any black box decision is provided. The former is precise, to the point, multi-faceted, and often provides a recourse to the decision, that is, a clear way for the user of the system to actively change the decision. The latter, on the other hand, is general, less precise, and often inaccessible to the user demanding explanations or to the auditor who wishes to peer into the black box.

To overcome these issues, we define the Local to Global explainability problem as the problem of inferring global, general explanations of a black box model from a set of local, specific ones [3]. The Local to Global problem relies on three assumptions:

  1. logical explainability, that is, explanations are provided in a logical form and can be reasoned upon.

  2. local explainability, that is, regardless of the complexity of the black box model, a simple explanation can locally approximate its behavior.

  3. composability, that is, we can compose local explanations by leveraging their logical form.

We tackle this problem in the tabular domain and focus on decision rules as logical explanations.
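To make this concrete, the sketch below shows one way a decision rule over tabular data could be represented as a logical explanation: a conjunction of feature conditions (the premises) paired with the black box decision it explains. The `Condition` and `Rule` classes, the feature names, and the thresholds are illustrative assumptions for exposition, not the representation used in the cited papers.

```python
from dataclasses import dataclass

# Illustrative sketch of a decision rule as a logical explanation on tabular
# data. These classes are assumptions made for exposition, not the
# representation used in the cited papers.

@dataclass(frozen=True)
class Condition:
    feature: str      # e.g. "age"
    op: str           # one of "<=", ">"
    threshold: float  # e.g. 30.0

    def holds(self, record: dict) -> bool:
        value = record[self.feature]
        return value <= self.threshold if self.op == "<=" else value > self.threshold

@dataclass(frozen=True)
class Rule:
    premises: frozenset  # set of Condition objects, read as a conjunction (AND)
    outcome: str         # the black box decision the rule explains, e.g. "deny"

    def covers(self, record: dict) -> bool:
        # A record is covered when every premise is satisfied.
        return all(c.holds(record) for c in self.premises)

# Example local explanation: "IF age <= 30 AND income <= 40000 THEN deny"
rule = Rule(
    premises=frozenset({
        Condition("age", "<=", 30.0),
        Condition("income", "<=", 40000.0),
    }),
    outcome="deny",
)
print(rule.covers({"age": 25, "income": 35000.0}))  # True
print(rule.covers({"age": 25, "income": 80000.0}))  # False
```

Because rules in this form are logical objects, they can be compared, scored, and composed, which is exactly what the assumptions of logical explainability and composability require.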

RRS [4], short for "Rule Relevance Score", scores local decision rules according to a weighted schema, balancing explanation accuracy, complexity, and generality.
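As a hedged illustration of how such a weighted schema might be put together, the sketch below (reusing the `Rule` sketch above) scores a rule by combining its agreement with the black box on the records it covers, the fraction of the dataset it covers, and a penalty for long premises. The weights and formulas are assumptions for exposition; the actual scoring is defined in the RRS paper [4].

```python
# Hedged sketch of a rule-relevance style score weighing accuracy, generality,
# and complexity. Weights and formulas are illustrative assumptions, not the
# exact RRS schema from [4].

def rule_relevance_score(rule, records, black_box_labels,
                         w_acc=0.5, w_gen=0.3, w_cmp=0.2, max_len=10):
    covered = [i for i, rec in enumerate(records) if rule.covers(rec)]
    if not covered:
        return 0.0
    # Accuracy: how often the rule agrees with the black box on covered records.
    accuracy = sum(black_box_labels[i] == rule.outcome for i in covered) / len(covered)
    # Generality: fraction of the dataset the rule covers.
    generality = len(covered) / len(records)
    # Complexity: shorter rules (fewer premises) score higher.
    simplicity = 1.0 - min(len(rule.premises), max_len) / max_len
    return w_acc * accuracy + w_gen * generality + w_cmp * simplicity
```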

Another promising result in this direction is GLocalX, short for "Global through Local Explanations", an algorithm that iteratively transforms local explanations, generalizing them at each iteration step.
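The loop below is only a simplified sketch of the local-to-global idea, not the GLocalX algorithm itself: it repeatedly tries to merge two same-outcome rules into a more general one (here, by keeping only their shared premises) and accepts a merge only when fidelity to the black box labels does not drop. The function names and the merge heuristic are assumptions made for illustration; GLocalX performs a hierarchical, fidelity-guided merge of logical rules, and this loop only conveys the flavour.

```python
# Simplified local-to-global sketch: iteratively generalize local rules by
# merging them, keeping a merge only if fidelity to the black box is preserved.
# This is an illustrative assumption, not the GLocalX algorithm.

def fidelity(rules, records, labels):
    # Fraction of records on which the majority vote of covering rules
    # agrees with the black box label.
    hits = 0
    for rec, lab in zip(records, labels):
        votes = [r.outcome for r in rules if r.covers(rec)]
        hits += bool(votes) and max(set(votes), key=votes.count) == lab
    return hits / len(records)

def merge(a, b):
    # Generalize by intersecting premises, dropping conditions the rules
    # do not share (the merged rule covers at least as many records).
    return Rule(premises=a.premises & b.premises, outcome=a.outcome)

def local_to_global(rules, records, labels):
    rules = list(rules)
    improved = True
    while improved and len(rules) > 1:
        improved = False
        base = fidelity(rules, records, labels)
        for i in range(len(rules)):
            for j in range(i + 1, len(rules)):
                if rules[i].outcome != rules[j].outcome:
                    continue
                candidate = rules[:i] + rules[i + 1:j] + rules[j + 1:]
                candidate.append(merge(rules[i], rules[j]))
                if fidelity(candidate, records, labels) >= base:
                    rules, improved = candidate, True
                    break
            if improved:
                break
    return rules
```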

With these initial steps, we hope to kickstart research in other more complex domains, such as sequential, textual, and image data.

Figure: a hierarchical graph extracted from the GLocalX algorithm.


Written by Francesco Bodria


References:

[1]: [A right to explanation, The Alan Turing Institute](https://www.turing.ac.uk/research/impact-stories/a-right-to-explanation)

[2]: [Machine Bias: There's software used across the country to predict future criminals. And it's biased against blacks, by Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner, ProPublica, May 23, 2016](https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing)

[3]: [Meaningful Explanations of Black Box AI Decision Systems](https://ojs.aaai.org//index.php/AAAI/article/view/5050)

[4]: [Global Explanations with Local Scoring](https://link.springer.com/chapter/10.1007/978-3-030-43823-4_14)