AAAI 2021 

Bayes-TrEx: A Bayesian Sampling Approach to Model Transparency by Example 
Serena Booth^{*,1}  Yilun Zhou^{*,1}  Ankit Shah^{1}  Julie Shah^{1} 
^{*}Equal Contribution 

Left: Given a Corgi/Bread classifier, we generate prediction level sets, or sets of examples that trigger a target prediction confidence (e.g., p(Corgi) = p(Bread) = 0.5). Perturbing an arbitrary image to trigger the target confidence is one way of finding such examples, as shown in (A). However, such examples give little insight into typical model behavior because they are unrealistic and unlikely. For more insight, Bayes-TrEx explicitly considers a data distribution (gray shading on the right-hand plots) and finds in-distribution examples in a particular level set (e.g., likely Corgi (B), likely Bread (D), or ambiguous between Corgi and Bread (C)).
Top right: The classifier level set of p(Corgi) = p(Bread) = 0.5 overlaid on the data distribution. Example (A) is unlikely to be sampled by Bayes-TrEx due to near-zero density under the distribution, while example (C) is likely to be sampled. Bottom right: Sampling directly from the true posterior is infeasible, so we relax the formulation by “widening” the level set. By using different data distributions and confidences, Bayes-TrEx can uncover examples that invoke various model behaviors to improve model transparency. 
Abstract 
Post-hoc explanation methods are gaining popularity for interpreting, understanding, and debugging neural networks. Most analyses using such methods explain decisions in response to inputs drawn from the test set. However, the test set may have few examples that trigger some model behaviors, such as high-confidence failures or ambiguous classifications. To address these challenges, we introduce a flexible model inspection framework: Bayes-TrEx. Given a data distribution, Bayes-TrEx finds in-distribution examples with a specified prediction confidence. We demonstrate several use cases of Bayes-TrEx, including revealing highly confident (mis)classifications, visualizing class boundaries via ambiguous examples, understanding novel-class extrapolation behavior, and exposing neural network overconfidence. We use Bayes-TrEx to study classifiers trained on CLEVR, MNIST, and Fashion-MNIST, and we show that this framework enables more flexible holistic model analysis than just inspecting the test set. Code is available at https://github.com/serenabooth/BayesTrEx.
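To make the core idea concrete, the sketch below illustrates the relaxed level-set sampling described in the figure caption on a toy 2D problem. All specifics here are illustrative assumptions, not the paper's implementation: the data density is a two-component Gaussian mixture, the "classifier" is a hand-set logistic model, the level set p = 0.5 is "widened" with a Gaussian penalty of bandwidth `sigma`, and samples are drawn with a plain Metropolis-Hastings loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2D setup (illustrative): data density is a mixture of two unit
# Gaussians (the "Corgi" and "Bread" clusters at x = -2 and x = +2);
# the classifier is a logistic model along the first coordinate.
def log_data_density(x):
    d1 = -0.5 * np.sum((x - np.array([-2.0, 0.0])) ** 2)
    d2 = -0.5 * np.sum((x - np.array([2.0, 0.0])) ** 2)
    return np.logaddexp(d1, d2)  # unnormalized log p(x)

def classifier_confidence(x):
    return 1.0 / (1.0 + np.exp(-3.0 * x[0]))  # p(class 1 | x)

# Relaxed level-set posterior: requiring f(x) == target exactly defines a
# measure-zero set, so instead penalize squared deviation from the target
# confidence, which "widens" the level set as in the caption.
def log_posterior(x, target=0.5, sigma=0.05):
    dev = classifier_confidence(x) - target
    return log_data_density(x) - dev ** 2 / (2.0 * sigma ** 2)

def metropolis_hastings(n_steps=20000, step=0.3, target=0.5):
    x = np.zeros(2)
    lp = log_posterior(x, target)
    samples = []
    for _ in range(n_steps):
        prop = x + step * rng.normal(size=2)
        lp_prop = log_posterior(prop, target)
        if np.log(rng.random()) < lp_prop - lp:  # MH accept/reject
            x, lp = prop, lp_prop
        samples.append(x.copy())
    return np.array(samples)

samples = metropolis_hastings()
burned = samples[5000:]  # discard burn-in
# Samples concentrate near the decision boundary (confidence ~ 0.5)
# while staying plausible under the data density.
mean_conf = float(np.mean([classifier_confidence(s) for s in burned]))
```

In this sketch, shrinking `sigma` tightens the samples around the exact level set at the cost of slower mixing; targets near 0 or 1 would instead recover high-confidence in-distribution examples, mirroring panels (B) and (D).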


@inproceedings{booth21:bayestrex, 
  title     = {Bayes-TrEx: A Bayesian Sampling Approach to Model Transparency by Example}, 
  author    = {Booth, Serena and Zhou, Yilun and Shah, Ankit and Shah, Julie}, 
  booktitle = {Proceedings of the AAAI Conference on Artificial Intelligence}, 
  year      = {2021} 
} 