SynSpeech Dataset (Large Version Part 2)
The SynSpeech Dataset (Large Version Part 2) is an English-language synthetic speech dataset designed for benchmarking disentangled speech representation learning methods. Created using OpenVoice and LibriSpeech-100, it includes 249 unique speakers, each with 500 distinct sentences spoken in four styles: “default,” “friendly,” “sad,” and “whispering,” generated at a 16 kHz sampling rate.
Due to file size limitations, the dataset has been split into two nearly equal halves. This second half contains data for 113 of the 249 speakers, along with metadata detailing speaker information, gender, speaking style, text, and file paths. The synspeech_Large_Metadata.csv file provides metadata for both halves; both parts of the archive must be extracted into the same parent directory for full functionality.
Data is organized by speaker ID, making this dataset ideal for applications in representation learning, speaker and content factorization, and TTS synthesis.
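As a minimal sketch of how such metadata might be consumed, the snippet below builds a tiny stand-in table and filters it by speaker and style. The column names (`speaker_id`, `gender`, `style`, `text`, `file_path`) and the path layout are assumptions for illustration, not documented fields of synspeech_Large_Metadata.csv.

```python
import pandas as pd

# Hypothetical rows standing in for synspeech_Large_Metadata.csv;
# column names and paths are illustrative assumptions.
rows = [
    {"speaker_id": "19", "gender": "F", "style": "default",
     "text": "Hello world.", "file_path": "19/default/0001.wav"},
    {"speaker_id": "19", "gender": "F", "style": "whispering",
     "text": "Hello world.", "file_path": "19/whispering/0001.wav"},
    {"speaker_id": "26", "gender": "M", "style": "sad",
     "text": "Good morning.", "file_path": "26/sad/0001.wav"},
]
metadata = pd.DataFrame(rows)

# Select all "whispering" utterances for speaker 19, as one would
# when pairing style variants of the same content for factorization.
subset = metadata[(metadata["speaker_id"] == "19")
                  & (metadata["style"] == "whispering")]
print(subset["file_path"].tolist())  # → ['19/whispering/0001.wav']
```

Because every speaker utters the same sentences in all four styles, this kind of filtering lets you assemble matched pairs that differ only in speaking style, which is the core requirement for disentanglement benchmarks.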