Markus Löcher
Mon 25 Oct 2021, 15:00 - 16:00
online (Zoom)

If you have a question about this talk, please contact: Zohreh Kaheh (zkaheh)

From unbiased MDI Feature Importance to Explainable AI for Trees

We give a unifying view of various recent attempts to (i) improve the interpretability of tree-based models and (ii) debias the default variable-importance measure in random forests, Gini importance (mean decrease in impurity, MDI). In particular, we demonstrate a common thread among the out-of-bag-based bias-correction methods and their connection to a local explanation for trees. In addition, we point out a bias caused by the inclusion of in-bag data in the newly developed SHAP values.
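
As a flavor of the cardinality bias at issue, here is a minimal sketch (not the speaker's code; it assumes scikit-learn and a synthetic dataset) in which a pure-noise continuous feature picks up a noticeable share of MDI simply because it offers many split points, while permutation importance on held-out data ranks it near zero:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    rng = np.random.RandomState(0)
    X, y = make_classification(n_samples=1000, n_features=5,
                               n_informative=3, random_state=0)
    # Append a continuous noise column: high cardinality, unrelated to y.
    X = np.hstack([X, rng.normal(size=(X.shape[0], 1))])

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

    # MDI (Gini importance) is computed from in-bag impurity decreases
    # and tends to inflate the last (noise) feature.
    print("MDI (Gini):  ", np.round(rf.feature_importances_, 3))
    # Permutation importance on held-out data does not share this bias.
    pi = permutation_importance(rf, X_te, y_te, n_repeats=10, random_state=0)
    print("Permutation: ", np.round(pi.importances_mean, 3))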

(If time permits, we will report recent empirical results comparing SHAP with the much simpler conditional feature contributions proposed by Saabas.)
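
For readers unfamiliar with the latter, a minimal sketch (assuming the third-party treeinterpreter package, which implements Saabas's conditional feature contributions, together with the shap package; neither is the speaker's code) contrasting the two local attributions on a single observation:

    import numpy as np
    import shap
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor
    from treeinterpreter import treeinterpreter as ti

    X, y = load_diabetes(return_X_y=True)
    rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

    x = X[:1]  # one observation to explain

    # Saabas: prediction = bias (root-node mean) + sum of per-feature
    # contributions accumulated along the decision path.
    pred, bias, contrib = ti.predict(rf, x)
    print("Saabas contributions:", np.round(contrib[0], 2))

    # TreeSHAP: prediction = expected value + sum of SHAP values.
    sv = shap.TreeExplainer(rf).shap_values(x)
    print("SHAP values:         ", np.round(sv[0], 2))

Both attributions decompose the prediction additively into per-feature terms; they differ in how credit for interactions along a split path is shared among the participating features.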