C6. Interpretable and explainable ML

ML models are now used for a very wide range of tasks, but for many of them accurate prediction alone is not enough: the model also needs to be interpretable or explainable. According to NIST, interpretation refers to the ability to contextualize a model's output in a manner that relates it to the system's designed functional purpose and to the goals, values, and preferences of end users, while explanation refers to the ability to accurately describe the mechanism or implementation that led to an algorithm's output, often so that the algorithm can be improved in some way. The Interpretable and Explainable AI session welcomes both theoretical and application-oriented work in this broad setting.
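To make the distinction concrete, the following is a minimal, purely illustrative sketch in Python, assuming scikit-learn and the Iris dataset (neither of which is part of this session description). A shallow decision tree is interpretable by design, since its decision rules can be read directly and related to what an end user cares about, whereas permutation importance is a post-hoc explanation describing which inputs a black-box model's outputs actually depend on.

    # Minimal sketch contrasting an interpretable-by-design model with a
    # post-hoc explanation of a black-box model (assumes scikit-learn).
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = load_iris(return_X_y=True)
    feature_names = load_iris().feature_names

    # Interpretable by design: a shallow tree whose rules can be read directly.
    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
    print(export_text(tree, feature_names=feature_names))

    # Post-hoc explanation: permutation importance describes which features
    # drive the black-box model's predictions.
    forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    result = permutation_importance(forest, X, y, n_repeats=10, random_state=0)
    for name, imp in zip(feature_names, result.importances_mean):
        print(f"{name}: {imp:.3f}")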

Key topics: Definition and study of interpretability, Human-in-the-loop approaches, Causality-based approaches, Exemplar-based reasoning
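As an illustration of the last topic, exemplar-based reasoning justifies a prediction by pointing to the training examples most similar to the query. The sketch below is one possible illustration only, again assuming scikit-learn; the dataset, distance metric, and neighbor count are arbitrary choices, not part of the session description.

    # Minimal exemplar-based reasoning sketch (assumes scikit-learn):
    # explain a prediction by showing the most similar training examples.
    from sklearn.datasets import load_iris
    from sklearn.neighbors import NearestNeighbors

    X, y = load_iris(return_X_y=True)
    X_train, y_train, query = X[:-1], y[:-1], X[-1:]

    nn = NearestNeighbors(n_neighbors=3).fit(X_train)
    distances, indices = nn.kneighbors(query)

    # The retrieved exemplars and their labels serve as the explanation:
    # "this input looks like these known examples, which belong to class c".
    for dist, idx in zip(distances[0], indices[0]):
        print(f"exemplar #{idx}, label {y_train[idx]}, distance {dist:.3f}")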