First, I (Keith) want to share something I was taught in MBA school – all new (and old but still promoted) technologies exaggerate their benefits, are overly dismissive of difficulties, underestimate the true costs, and fail to anticipate how older (less promoted) technologies can adapt to offer similar or even better benefits, with fewer difficulties or lower costs.
Now, I have recently become aware of work by Cynthia Rudin (Duke) that argues upgraded versions of easy-to-interpret machine learning (ML) technologies (e.g. CART) can offer predictive performance similar to newer ML (e.g. deep neural nets), with the added benefit of interpretability. But I am also trying to keep in mind, or even anticipate, how newer ML (e.g. deep neural nets) can adapt to (re-)match this.
Never say never.
The abstract from Learning customized and optimized lists of rules with mathematical programming by Cynthia Rudin and Seyda Ertekin may suffice to provide a good enough sense of this post.
We introduce a mathematical programming approach to building rule lists, which are a type of interpretable, nonlinear, and logical machine learning classifier involving IF-THEN rules. Unlike traditional decision tree algorithms like CART and C5.0, this method does not use greedy splitting and pruning. Instead, it aims to fully optimize a combination of accuracy and sparsity, obeying user-defined constraints. This method is useful for producing non-black-box predictive models, and has the benefit of a clear user-defined tradeoff between training accuracy and sparsity. The flexible framework of mathematical programming allows users to create customized models with a provable guarantee of optimality.
For those with less background in ML, think of regression trees or decision trees (CART) on numerical steroids.
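To make the IF-THEN idea in the abstract concrete, here is a toy sketch of what a fitted rule list looks like – an entirely made-up credit-risk example of mine, not the authors' method or data. The paper's contribution is in how the rules and their ordering are chosen (fully optimized for accuracy and sparsity via mathematical programming, rather than by greedy splitting and pruning); the fitted model itself is just this kind of ordered rule set.

```python
# A toy rule list (my own invented example, not Rudin and Ertekin's method or data):
# an ordered set of IF-THEN rules checked top to bottom, with a default
# prediction when no rule fires.

def rule_list_predict(applicant):
    # Hypothetical credit-risk rules; features and thresholds are made up.
    if applicant["prior_defaults"] >= 1:
        return "high risk"
    elif applicant["income"] < 20000 and applicant["debt_ratio"] > 0.5:
        return "high risk"
    elif applicant["years_employed"] >= 5:
        return "low risk"
    else:
        return "low risk"  # default rule

print(rule_list_predict(
    {"prior_defaults": 0, "income": 15000, "debt_ratio": 0.7, "years_employed": 2}
))  # -> high risk
```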
For those with more background in predictive modelling, this may be the quickest way to get a sense of what is at stake (and the challenges). Start at 17:00 and it’s done by 28:00 – so about 11 minutes.
My 9-line summary notes of Rudin’s talk (link above): Please stop doing “Explainable” ML [for high-stakes decisions].
Explainable ML – using a black box and explaining it afterwards.
Interpretable ML – using a model that is not a black box.
Advantages of interpretable ML are mainly for high-stakes decisions.
The accuracy/interpretability tradeoff is a myth – in particular, for problems with good data representations, all ML methods perform about the same.
[This does leave many application areas where it is not a myth and explainable or even unexplainable ML will have accuracy advantages.]
Explainable ML is flawed: there are two models, the black box model and an understudy model that is explainable and predicts similarly but not identically (agreeing exactly only x% of the time). And sometimes the explanations do not make sense. (A toy sketch of this two-model setup follows these notes.)
There were two issues that Rudin identified in the linked talk that might be worth thinking carefully about:
1. Given a choice between an interpretable model and a black box model – many end users actually prefer the black box.
2. Applied statisticians have always cared about interpretability but ML has not? “Statisticians can fix this. It won’t happen in ML.”
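Here is the promised minimal sketch of that two-model setup – a black box plus an explainable understudy trained to mimic it. The particular choices (scikit-learn, a random forest as the black box, a depth-limited decision tree as the understudy, synthetic data) are mine for illustration only, not anything from Rudin’s talk; the point is just that the understudy’s agreement with the black box (the x% above) is typically less than 100%.

```python
# Sketch of "explainable ML": a black box model plus an explainable
# understudy fit to mimic its predictions. Models and data are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# The "black box" model.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
bb_pred = black_box.predict(X)

# The explainable understudy, trained on the black box's predictions (not on y).
understudy = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, bb_pred)

# Fidelity: the fraction of cases where the two models agree.
fidelity = np.mean(understudy.predict(X) == bb_pred)
print(f"Understudy agrees with black box on {fidelity:.1%} of cases")
```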
Now some reasons for 1. are discussed here: To better enable others to avoid being misled when trying to learn from observations, I promise not to be transparent, open, sincere nor honest?
As for ML not caring about interpretability, there was a lot of work done in AI in the 1980s trying to discern whether human learning was interpretable or just explainable – Ericsson, K. A., & Simon, H. A. (1993). Protocol analysis: Verbal reports as data (Rev. ed.). Cambridge, MA: The MIT Press.
Recalling the “never say never”, some might be interested in Deep Learning for Case-Based Reasoning through Prototypes: A Neural Network that Explains Its Predictions.