pcbi.1008946.s001.pdf (368.47 kB)

Supporting information.

Posted on 29.11.2021 by Niksa Praljak, Shamreen Iram, Utku Goreke, Gundeep Singh, Ailis Hill, Umut A. Gurkan, Michael Hinczewski

The supporting information contains figures and tables detailing performance comparisons of our approach with alternative Phase I and Phase II network architectures.

Fig A: Comparative performance of the Phase I network against another recent segmentation model, HR-net. Training and validation histories of the performance metrics are shown for both networks, with our encoder-decoder in the top row and HR-net in the bottom row. The solid curve is the 5-fold mean of each metric, and the same-colored light band denotes the spread of that metric over the folds. Training history is shown in red and validation in blue (purple indicates overlap). Architecture details for both the encoder-decoder and HR-net are available at https://github.com/hincz-lab/DeepLearning-SCDBiochip/blob/master/Demonstrate_architectures.ipynb.

Fig B: Phase II architecture. A schematic outline of our chosen Phase II network (ResNet-50) and of two other networks used for performance comparison: a vanilla network and Xception. We appended a global average pooling layer followed by a fully connected layer so that the ImageNet-pretrained backbones (ResNet-50 and Xception) could be fine-tuned on our sickle red blood cell classification task. The tensor shapes shown in the schematic correspond to the input features, intermediate features, and output probability vector.

Fig C: Comparative performance of the Phase II network against two other models, a vanilla network and Xception. Training and validation histories of the performance metrics are shown for the three networks. The solid curve is the mean training history over 5 folds, and the same-colored light band denotes the spread of that metric over the folds. Training history is shown in red and validation in blue (purple indicates overlap).

Table A: Final metric values (averaged over 5 folds) reached by each Phase II network at the end of training, i.e. the 30th, 50th, and 30th epochs for ResNet-50, the vanilla network, and Xception, respectively. Uncertainties indicate the spread (standard deviation) of each metric around its mean over the 5 folds. The best metric value across all networks is shown in bold, for both training and validation. While Xception does marginally better than ResNet-50 in training, it overfits more, validating our final choice of ResNet-50 as the Phase II network based on overall performance.

Table B: Comparison of overall evaluation metrics for various pipeline configurations (Phase I + Phase II) on the sample set of 19 whole-channel images. R² values are shown for the machine-learning vs. manual count comparison in each case, similar to Fig 8 of the main text. Table legend: ED: encoder-decoder; CE Jaccard: cross-entropy Jaccard loss; All: total sRBC counts; Def: deformable sRBC counts; NDef: non-deformable sRBC counts; Proc. time: total processing time for all 19 channels.
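As a concrete illustration of the Phase II fine-tuning setup described for Fig B, the sketch below attaches a global average pooling layer and a fully connected classification head to an ImageNet-pretrained ResNet-50 backbone in TensorFlow/Keras. This is a minimal sketch under assumed settings: the input size, number of classes, optimizer, and loss are illustrative placeholders and are not taken from the released code (see the linked notebook for the actual architectures).

# Minimal sketch of a Phase II classification model (assumptions noted above).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 3  # assumed placeholder, not taken from the paper's code

def build_phase2_model(input_shape=(224, 224, 3)):
    # ImageNet-pretrained ResNet-50 backbone without its original classifier head.
    backbone = tf.keras.applications.ResNet50(
        include_top=False, weights="imagenet", input_shape=input_shape
    )
    # Global average pooling collapses the spatial feature maps to a single
    # feature vector; a fully connected softmax layer maps it to class probabilities.
    x = layers.GlobalAveragePooling2D()(backbone.output)
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = models.Model(inputs=backbone.input, outputs=outputs)
    # Optimizer and loss are illustrative choices for fine-tuning the whole network.
    model.compile(
        optimizer="adam",
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

The same head (global average pooling plus a fully connected layer) would be appended to an Xception backbone by swapping tf.keras.applications.Xception for ResNet50 in the sketch above.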

(PDF)
