“Please, explain.” Interpretability of black-box machine learning models

In February 2019, the Polish government added an amendment to the banking law that gives customers the right to receive an explanation in case of a negative credit decision. It is one of the direct consequences of implementing GDPR in the EU. It means that if the decision process was automatic, a bank needs to be able to explain why the loan was not granted. In October 2018, world headlines reported on an <a href="https://www.theguardian.com/technology/2018/oct/10/amazon-hiring-ai-gender-bias-recruiting-engine">Amazon AI recruiting tool</a> that favored men. Amazon's model was trained on biased data skewed towards male candidates, and it built rules that penalized résumés that included the word "women's".

<h2>Consequences of not understanding models' predictions</h2>

What the two examples above have in common is that both the models used in the banking industry and the one built by Amazon are very complex tools, so-called black-box classifiers, that don't offer straightforward, human-interpretable decision rules.

Financial institutions will have to invest in model interpretability research if they want to continue using ML-based solutions, and they probably will, because such algorithms are more accurate in predicting credit risk. Amazon, on the other hand, could have saved a lot of money and bad press if its model had been properly validated and understood.

<h2>Why now? Trends in data modeling</h2>

Machine learning has stayed at the top of Gartner's Hype Cycle since 2014, replaced by deep learning (a form of ML) in 2018, which suggests that adoption has not reached its peak yet.

<img class="wp-image-1819 size-full" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b7d6e578149a2ea79e4cf0_abbc38ce_pasted-image-0.webp" alt="gartner-hype-cycle-for-emerging-technologies-2018" width="1440" height="1218" />

Source: <a href="https://www.gartner.com/smarterwithgartner/5-trends-emerge-in-gartner-hype-cycle-for-emerging-technologies-2018/">https://www.gartner.com/smarterwithgartner/5-trends-emerge-in-gartner-hype-cycle-for-emerging-technologies-2018/</a>

Machine learning growth is predicted to accelerate further. Based on a <a href="http://www.univa.com/resources/univa-machine-learning-survey.php">report</a> by Univa, 96% of companies are expected to use ML in production within the next two years. The reasons behind this are widespread data collection, the availability of vast computational resources and an active open-source community. The growth in ML adoption is accompanied by an increase in ML-interpretability research, driven by regulations such as GDPR and the EU's "right to explanation", concerns about safety (medicine, autonomous vehicles), reproducibility and bias, and end users' expectations (debugging the model to improve it, or learning something new about the studied subject).

<img class="wp-image-1820 size-full" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b022459ba690fd79983289_pasted-image-0-1.webp" alt="papers on black boxes chart" width="836" height="535" />

Source: http://people.csail.mit.edu/beenkim/papers/BeenK_FinaleDV_ICML2017_tutorial.pdf

<h2>Black-box algorithms interpretability possibilities</h2>

As data scientists, we should be able to provide end users with an explanation of how a model works. However, this does not necessarily mean understanding every piece of the model or generating a set of decision rules.
There are also cases where this is not required: <ul><li style="font-weight: 400;">the problem is well studied,</li><li style="font-weight: 400;">the model's results have no serious consequences,</li><li style="font-weight: 400;">understanding the model by the end user could pose a risk of gaming the system.</li></ul>

If we look at the results of <a href="https://www.kaggle.com/sudhirnl7/data-science-survey-2018">Kaggle's Machine Learning and Data Science Survey</a> from 2018, around 60% of respondents think they could explain most machine learning models (some models were still hard for them to explain). The most common approach to understanding ML models is analyzing model features by looking at feature importance and feature correlations.

<b>Feature importance analysis</b> offers a good first insight into what the model is learning and which factors might be important. However, this technique can be unreliable if features are correlated, and it provides good insights only if the model's variables are themselves interpretable. For many <a href="https://towardsdatascience.com/boosting-algorithm-gbm-97737c63daa3">GBM</a> libraries it's fairly easy to generate <a href="https://www.r-bloggers.com/variable-importance-plot-and-variable-selection/">feature importance plots</a>.

In the case of <b>deep learning</b>, the situation is much more complicated. When using neural networks you could look at the weights, as they contain information about the input, but that information is compressed. What's more, you can only really analyze the connections at the first layer, since deeper layers are too complicated.

No wonder that when the <a href="https://arxiv.org/abs/1602.04938"><b>LIME</b> (Local Interpretable Model-Agnostic Explanations)</a> paper was presented in 2016, it had a huge impact. The idea behind LIME is to locally approximate a black-box model with an easier-to-understand white-box model constructed on interpretable input data. It has shown great results in providing <a href="https://www.oreilly.com/learning/introduction-to-local-interpretable-model-agnostic-explanations-lime">interpretations for image classification</a> and <a href="https://christophm.github.io/interpretable-ml-book/lime.html#lime-for-text">text</a>. However, for tabular data it's difficult to find interpretable features, and their local interpretation might be misleading. LIME is implemented in Python (<a href="https://github.com/marcotcr/lime">lime</a> and <a href="https://github.com/datascienceinc/Skater">Skater</a>) and R (<a href="https://cran.r-project.org/web/packages/lime/index.html">lime package</a>, <a href="https://cran.r-project.org/web/packages/iml/index.html">iml package</a> and <a href="https://cloud.r-project.org/web/packages/live/index.html">live package</a>) and is very easy to use.

Another promising idea is <a href="https://arxiv.org/abs/1705.07874">SHAP (Shapley Additive Explanations)</a>. It is based on game theory: the features are treated as players, subsets of features form coalitions, and Shapley values tell us how to fairly distribute the "payout" (the prediction) among the features. This technique distributes the effects fairly, is easy to use and offers visually compelling implementations.

The <a href="https://github.com/pbiecek/DALEX"><b>DALEX</b> package</a> (Descriptive Machine Learning Explanations), available in R, offers a set of tools that help to understand how complex models work. Using DALEX you can create a model explainer and inspect it visually, e.g. with break-down plots.
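To make the feature-importance approach above concrete, here is a minimal Python sketch, assuming a scikit-learn gradient boosting model trained on one of the library's built-in datasets as a stand-in for any tabular black-box model:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

# Train a simple GBM on a built-in dataset (stand-in for any tabular black-box model)
data = load_breast_cancer()
model = GradientBoostingClassifier(random_state=42).fit(data.data, data.target)

# Impurity-based importances learned by the trees; show the top 10 features
order = np.argsort(model.feature_importances_)[::-1][:10]
plt.barh(np.array(data.feature_names)[order][::-1],
         model.feature_importances_[order][::-1])
plt.xlabel("Feature importance")
plt.title("Top 10 features of the GBM")
plt.tight_layout()
plt.show()
```

Keep the caveat above in mind: if two features are strongly correlated, the importance can be split between them arbitrarily, so the plot is a starting point for investigation rather than a verdict.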
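Shapley values can be computed efficiently for tree ensembles. Below is a minimal sketch using the Python shap library on the same kind of model; exact return shapes and plot APIs differ slightly between shap versions, so treat it as an illustration rather than a reference:

```python
import pandas as pd
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

data = load_breast_cancer()
X = pd.DataFrame(data.data, columns=data.feature_names)
model = GradientBoostingClassifier(random_state=42).fit(X, data.target)

# TreeExplainer computes Shapley values efficiently for tree-based models
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global view: how strongly (and in which direction) each feature moves predictions
shap.summary_plot(shap_values, X)

# Local view: how each feature pushes one prediction away from the average prediction
shap.force_plot(explainer.expected_value, shap_values[0, :], X.iloc[0, :],
                matplotlib=True)
```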
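The article links to the R version of DALEX; there is also a Python port (the dalex package) built around the same idea of wrapping a fitted model in an explainer. A minimal sketch of a break-down plot for a single prediction, assuming the port's Explainer / predict_parts interface (check the current dalex documentation, as the API may have evolved):

```python
import pandas as pd
import dalex as dx
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
X = pd.DataFrame(data.data, columns=data.feature_names)
model = RandomForestClassifier(n_estimators=300, random_state=42).fit(X, data.target)

# Wrap the fitted model in an explainer; DALEX only needs the model, the data and the target
explainer = dx.Explainer(model, X, data.target, label="random forest")

# Break-down plot: how individual features contribute to a single prediction
breakdown = explainer.predict_parts(X.iloc[[0]], type="break_down")
breakdown.plot()
```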
You might also be interested in <a href="https://github.com/ModelOriented/DrWhy/blob/master/README.md">DrWhy.Ai</a>, which is developed by the same group of researchers as DALEX.

<h2>Practical use cases</h2>

<h3><strong>Detecting objects in pictures</strong></h3>

<b>Image recognition</b> is already widely used: in autonomous cars to detect whether cars, traffic lights, etc. are in the picture, in wildlife conservation to detect whether a certain animal is in the picture, or in insurance to detect flooding of crops.

We will use the "Husky vs Wolf" example from the original LIME paper to illustrate the importance of model interpretation. The classifier's task was to identify whether there was a wolf in the picture, and it falsely classified a Siberian Husky as a wolf. Thanks to LIME, the researchers were able to identify which areas of the pictures were important to the model. It turned out that if a picture contained snow, it was classified as a wolf.

<img class="size-full wp-image-1821" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b0232640e2bdeffdb1cc1f_pasted-image-0-2.webp" alt="" width="320" height="159" />

Source: LIME paper

The algorithm was using the background of the picture and completely ignoring the animal's characteristics, such as its eyes, which it should have been looking at instead. Thanks to this discovery it was possible to fix the model and extend the training examples to prevent the reasoning "snow = wolf".

<h3><strong>Classification as a decision support system</strong></h3>

The Intensive Care Unit of Amsterdam UMC <a href="https://medium.com/@Pacmedhealth/ai-for-health-care-tackling-the-issue-of-interpretability-868be42aaf50">wants to predict the probabilities of a patient's readmission and/or mortality at the moment of discharge</a>. The goal is to help doctors pick the right moment to move a patient out of the ICU. If doctors understand what the model is doing, they are more likely to use its recommendations when making the final judgement.

To demonstrate how such a model can be interpreted using LIME, we can look at an example from <a href="https://www.researchgate.net/publication/309551203_Machine_Learning_Model_Interpretability_for_Precision_Medicine">another study</a> that aims at early prediction of mortality in the ICU. A Random Forest model (a black-box model) is used to predict mortality status, and the lime package is used to locally explain the prediction score for every patient.

<img class="size-full wp-image-1822" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b022bf790b9e1ad8903164_unnamed.webp" alt="black box precision" width="512" height="237" />

Source: https://www.researchgate.net/publication/309551203_Machine_Learning_Model_Interpretability_for_Precision_Medicine

The patient in the selected example has a high death probability (78%). The model features that contribute to mortality are a higher count of atrial fibrillation episodes and a higher lactate level, which is consistent with current medical understanding.
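The study's data is not public, but the same pattern (a Random Forest classifier plus a local LIME explanation per patient) can be sketched with the Python lime package on any tabular dataset. In the sketch below a built-in scikit-learn dataset stands in for the ICU records, so the features and class names are placeholders:

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in tabular data; in the study these would be per-patient ICU measurements
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    discretize_continuous=True,
)

# Locally approximate the forest around one "patient" with a weighted linear model
explanation = explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5)
print(explanation.as_list())  # (feature condition, local weight) pairs
```

Running this for every patient yields per-patient explanations like the one shown in the figure above.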
<h2>Humans and machines - a perfect match</h2>

To succeed in building interpretable AI, we need to combine data science knowledge, algorithms and the expertise of end users. Data science work doesn't finish once the model is created. It's an iterative, usually long process with feedback loops provided by experts, making sure the outcome is solid and understandable by humans. We strongly believe that by combining human expertise and machine performance we can reach the best conclusions: improving machine results and overcoming human gut-feel bias.
