pcbi.1010979.g002.tif (1.32 MB)

Opening the black box: Visual explanations of mini-CNN and Imagene.

Posted on 2023-11-27, 19:08, authored by Ryan M. Cecil and Lauren A. Sugden

A: Trained parameter values for mini-CNN. The 2x1 kernel detects differences between consecutive rows. The dense weights map shows the linear weights applied to the output of the convolution and ReLU layer (depicted in panel B). The black band at the top of the weights indicates that the model is more likely to predict the image as neutral if there is variation among the top rows. B: Example of a pre-processed image and its corresponding output after the convolution layer and ReLU activation. C: Visualization of Imagene with SHAP explanations. From left to right: examples of processed neutral and sweep images, SHAP values for the two image examples, and average SHAP values across 1000 neutral and sweep images. A negative SHAP value (blue) indicates that the pixel of interest contributes toward a prediction of neutral, while a positive SHAP value (red) indicates that the pixel of interest contributes toward a prediction of sweep. Like the black band in the dense weights map in panel A, the large SHAP values in the top region of the average SHAP images indicate that Imagene focuses on the top block of the image to make its prediction.
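To make the architecture in panels A and B concrete, below is a minimal sketch (not the authors' released code) of a single-kernel mini-CNN with a 2x1 convolution, ReLU activation, and a dense output layer, followed by a SHAP attribution step like the one visualized in panel C. The image dimensions, the placeholder data, and the choice of shap.GradientExplainer are illustrative assumptions.

```python
# Illustrative sketch only: a mini-CNN (2x1 kernel -> ReLU -> dense) and
# SHAP attributions; sizes, data, and explainer choice are assumptions.
import numpy as np
import tensorflow as tf
import shap

ROWS, COLS = 128, 128  # assumed size of the pre-processed alignment images

# 2x1 kernel compares consecutive rows, as described for mini-CNN in panel A
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(ROWS, COLS, 1)),
    tf.keras.layers.Conv2D(filters=1, kernel_size=(2, 1), activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # neutral (0) vs. sweep (1)
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Placeholder arrays standing in for processed neutral/sweep images
X = np.random.rand(32, ROWS, COLS, 1).astype("float32")
y = np.random.randint(0, 2, size=(32, 1))
model.fit(X, y, epochs=1, verbose=0)

# Per-pixel SHAP values: negative pushes toward neutral, positive toward sweep
background = X[:16]
explainer = shap.GradientExplainer(model, background)
shap_values = explainer.shap_values(X[16:20])
```

Averaging such SHAP values over many neutral and sweep images, as in the rightmost column of panel C, indicates which image regions the network relies on for its prediction.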
