Here, we provide the real photo datasets of materials and the trained generative models used in our paper:
Liao, C., Sawayama, M., & Xiao, B. (2024). Probing the Link Between Vision and Language in Material Perception Using Psychophysics and Unsupervised Learning. PLoS Computational Biology.
Real photo datasets:
Soap dataset: 8085 photos. Please refer to our previously published TID dataset for these images.
Rock dataset: rock_dataset.zip (3180 photos)
Squishy toy dataset: squishy_toy_dataset.zip (1900 photos)
StyleGAN2-ADA models (a loading sketch follows the list):
Soap Model: StyleGAN_models/StyleGAN_G_soap.pkl
Rock Model: StyleGAN_models/StyleGAN_G_rock.pkl
Squishy Toy Model: StyleGAN_models/StyleGAN_G_toy.pkl
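Below is a minimal sketch of how one of these generator pickles might be loaded and sampled. It assumes the models were trained with the official NVlabs stylegan2-ada-pytorch codebase and that that repository (which provides the dnnlib and torch_utils modules the pickles reference) is on the Python path; the pickle path matches the soap model listed above, while the dictionary key, latent sampling, and output file name are illustrative assumptions rather than a confirmed recipe.

    # Minimal sketch: load a trained generator pickle and sample one image.
    # Assumes the NVlabs stylegan2-ada-pytorch repo is importable, since the
    # pickle references its dnnlib / torch_utils modules.
    import pickle

    import PIL.Image
    import torch

    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

    with open('StyleGAN_models/StyleGAN_G_soap.pkl', 'rb') as f:
        # The 'G_ema' key follows the NVlabs snapshot format; if the pickle
        # stores the generator object directly, drop the indexing.
        G = pickle.load(f)['G_ema'].to(device)

    z = torch.randn([1, G.z_dim], device=device)  # random latent code
    c = None                                      # no class labels (unconditional model)
    img = G(z, c)                                 # NCHW float tensor, values roughly in [-1, 1]

    # Convert to 8-bit RGB and save.
    img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8)
    PIL.Image.fromarray(img[0].cpu().numpy(), 'RGB').save('soap_sample.png')

The rock and squishy toy generators would load the same way; only the pickle path changes.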