SoBigData Event

ECML PKDD Joint International Workshop on Advances in Interpretable Machine Learning and Artificial Intelligence & eXplainable Knowledge Discovery in Data Mining

In the past decade, we have witnessed the increasing deployment of powerful automated decision-making systems in settings ranging from control of safety-critical systems to face detection on mobile phone cameras. Although remarkably powerful at solving complex tasks, these systems are typically opaque: they provide no mechanism for understanding or exploring their behavior or the reasons underlying their decisions.

This opacity can raise legal, ethical and practical concerns, which have led to initiatives and recommendations calling for greater scrutiny in the deployment of automated decision-making systems. These include the “ACM Statement on Algorithmic Transparency and Accountability”, the “European Recommendations on Machine-Learned Automated Decision Making”, and the EU's General Data Protection Regulation (GDPR). The latter suggests, in one of its clauses, that individuals should be able to obtain explanations of decisions proposed by automated processing, and to challenge those decisions. At the same time, recent studies have shown that models learned from data can be attacked so that they intentionally produce wrong decisions on deliberately crafted adversarial inputs. Substantial challenges remain open and must be addressed before automated decision making can be deployed accountably and the resulting systems trusted. All this calls for joint efforts across the technical, legal, sociological and ethical domains.
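
To make the adversarial-attack concern concrete, the following is a minimal sketch of a fast-gradient-sign-style perturbation against a hand-rolled logistic-regression classifier. The toy data, the perturbation budget, and the classifier itself are illustrative assumptions chosen for brevity, not a method endorsed by the workshop.

```python
# Minimal sketch of an adversarial perturbation (fast gradient sign method)
# against a simple logistic-regression classifier. All data and parameter
# values are synthetic illustrations.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: two Gaussian blobs.
X = np.vstack([rng.normal(-1.0, 1.0, (100, 2)), rng.normal(1.0, 1.0, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train logistic regression with plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y) / len(y))
    b -= 0.5 * np.mean(p - y)

# FGSM: perturb an input in the direction that increases the loss.
# For logistic regression with cross-entropy, d(loss)/dx = (p - y) * w.
x = X[0]                        # a class-0 training point
eps = 0.8                       # perturbation budget (illustrative value)
grad_x = (sigmoid(x @ w + b) - 0.0) * w   # gradient of the loss w.r.t. x
x_adv = x + eps * np.sign(grad_x)

print("original prediction:   ", sigmoid(x @ w + b))      # close to 0
print("adversarial prediction:", sigmoid(x_adv @ w + b))  # pushed toward 1
```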

The purpose of AIMLAI-XKDD (Advances in Interpretable Machine Learning and Artificial Intelligence & eXplainable Knowledge Discovery in Data Mining) is to encourage principled research that advances explainable, transparent, ethical and fair data mining, machine learning and artificial intelligence. AIMLAI-XKDD is organized in two parts: a tutorial that introduces the audience to the topic, and a workshop that discusses recent advances in the field. The tutorial will provide a broad overview of the state of the art and the major applications of explainable and transparent approaches, and will highlight the main open challenges. The workshop seeks top-quality submissions addressing important open issues in explainable and interpretable data mining and machine learning. Papers should present research results in any of the topics of interest for the workshop, as well as application experiences, tools and promising preliminary ideas. AIMLAI-XKDD invites contributions from researchers in academia and industry working on these challenges primarily from a technical point of view, but also from a legal, ethical or sociological perspective. Besides the central topic of interpretable algorithms and explanation methods, we also welcome submissions that answer research questions such as "how can interpretability and explainability be measured and evaluated?" and "how can humans be integrated into the machine learning pipeline for interpretability purposes?".
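
As one illustration of the kind of explanation method in scope, here is a minimal sketch of post-hoc explanation via a global surrogate: a shallow decision tree trained to mimic a black-box model's predictions. The scikit-learn dataset, models and hyper-parameters are illustrative choices only, not a prescribed approach.

```python
# Minimal sketch of a global surrogate explanation: approximate an opaque
# model with an interpretable, depth-limited decision tree.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# The "black box": an ensemble whose internals are hard to inspect directly.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Global surrogate: train a shallow tree on the black box's *outputs*,
# so the tree approximates the model rather than the raw labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity to black box: {fidelity:.2%}")
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

A surrogate like this trades some fidelity for a human-readable decision structure; how to measure that trade-off is precisely one of the evaluation questions the workshop raises.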

Topics of interest include, but are not limited to:

  • Interpretable and Explainable ML and AI
    • Interpretability Modules (also known as post-hoc interpretability)
    • Explainable Recommendation Models
    • Technical Aspects of Algorithms for Explanation
    • Explaining Black-box Decision Systems
    • Adversarial Attack-based Models
    • Software Engineering Aspects of Interpretable ML & AI
    • Monitoring and Understanding System Behavior
  • Transparency and Ethics in AI, ML, and Data Mining
    • Transparent Data Mining
    • Fairness Checking for Explanation Methods
    • Explanation for Privacy Risk
    • Ethical and Legal Aspects in AI and ML
    • Privacy-preserving Explanations
    • Transparent Classification Approaches
    • Anonymity and Information Hiding Problems in Comprehensible Models
    • Social Good Applications
    • Case Study Analysis
    • Privacy Risk Assessment
    • Privacy-by-design Approaches for Human Data
    • Statistical Aspects, Bias Detection and Causal Inference
  • Methodology and Formalization of Interpretability
    • Formal Measures of Interpretability
    • Relation between Interpretability and the Complexity of Models
    • How to Evaluate Interpretability
  • User-Centric Interpretability
    • Semantic Interpretability: How to Add Semantics to Explanations
    • Human-in-the-loop to Construct and/or Evaluate Interpretable Models
    • Integration of ML Algorithms, Infovis and Man-Machine Interfaces
    • Experiments on Simulated and Real Decision Systems

Submissions with an interdisciplinary orientation are particularly welcome, e.g., work at the boundary between ML, AI, infovis, man-machine interfaces, psychology, etc. Research driven by application cases where interpretability matters is also of interest, e.g., medical applications, decision systems in law and administration, Industry 4.0, etc.