pcbi.1007828.s007.tif (466.58 kB)

S7 Fig -

posted on 2020-04-28, 17:41 authored by Karren Dai Yang, Karthik Damodaran, Saradha Venkatachalapathy, Ali C. Soylemezoglu, G. V. Shivashankar, Caroline Uhler

(a-c) Training the variational autoencoder on various breast cell lines; 64 random images out of 1220 total are held out for validation, and the remaining images are used to train the autoencoder. (a) Training and test loss curves of the variational autoencoder plotted over 1000 epochs. (b) Nuclear images generated by sampling random vectors in the latent space and mapping them back to the image space. These random samples resemble real nuclei, suggesting that the variational autoencoder learns the image manifold. (c) Input and reconstructed images from different cell lines, illustrating that the latent space captures the main visual features of the original images. (d-f) Hyperparameter tuning for the variational autoencoder on breast cell lines. (d-e) Training and test loss curves, respectively, with high, mid-level, and no regularization. (f, top row) Reconstruction results for each model. Models with no or mid-level regularization reconstruct input images well, while models with high regularization do not. (f, bottom row) Sampling results for each model. Models with no regularization do not generate random samples as well as models with mid-level regularization do, suggesting that the model with mid-level regularization best captures the manifold of nuclear images.

(TIF)
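The caption describes the standard variational-autoencoder setup: a held-out validation split (64 of 1220 images) and a loss that combines a reconstruction term with a KL regularization term whose weight is the hyperparameter tuned in panels (d-f). The following is a minimal NumPy sketch of these two ingredients, not the authors' implementation; the latent dimension, batch size, and the regularization weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Validation split mirroring the caption: 64 of 1220 images held out.
n_total, n_val = 1220, 64
idx = rng.permutation(n_total)
val_idx, train_idx = idx[:n_val], idx[n_val:]

# Closed-form KL divergence between the encoder's diagonal Gaussian
# q(z|x) = N(mu, diag(sigma^2)) and the standard normal prior N(0, I):
#   KL = 0.5 * sum(sigma^2 + mu^2 - 1 - log sigma^2)
def kl_to_standard_normal(mu, log_var):
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

# Toy encoder outputs for a batch of 4 images (latent dimension 8 assumed).
mu = rng.normal(size=(4, 8))
log_var = rng.normal(scale=0.1, size=(4, 8))

# Per-image reconstruction error (stand-in for the decoder's loss term).
recon_err = rng.uniform(1.0, 2.0, size=4)

# beta controls regularization strength, as in panels (d-f):
# beta = 0 gives no regularization (a plain autoencoder objective),
# while a large beta over-regularizes and degrades reconstructions.
for beta in (0.0, 1.0, 10.0):
    loss = np.mean(recon_err + beta * kl_to_standard_normal(mu, log_var))
    print(f"beta={beta}: mean loss={loss:.3f}")
```

Sampling as in panel (b) then amounts to drawing z ~ N(0, I) and passing it through the trained decoder; the KL term is what pulls the latent distribution toward that prior so such samples land on the learned manifold.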
