
SynSpeech Dataset (Large Version Part 1)

Version 2 2024-11-07, 11:41
Version 1 2024-11-07, 11:37
Dataset posted on 2024-11-07, 11:41, authored by Yusuf Brima

The SynSpeech Dataset (Large Version Part 1) is an English-language synthetic speech dataset designed for benchmarking disentangled speech representation learning methods. Created using OpenVoice and LibriSpeech-100, it includes 249 unique speakers, each with 500 distinct sentences spoken in four styles: “default,” “friendly,” “sad,” and “whispering,” recorded at a 16kHz sampling rate.

Due to file size limitations, the dataset has been split into two nearly equal halves. This first half contains data for 136 of the 249 speakers, along with metadata detailing speaker information, gender, speaking style, text, and file paths. The synspeech_Large_Metadata.csv file provides metadata for both halves, and both parts of the archive must be extracted and placed within the same parent directory for full functionality.

Data is organized by speaker ID, making this dataset ideal for applications in representation learning, speaker and content factorization, and TTS synthesis.
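As a minimal sketch of how the metadata might be consumed, the snippet below groups utterances by speaker ID and filters by speaking style using Python's standard csv module. The column names (speaker_id, gender, style, text, file_path) are hypothetical stand-ins for the fields described above; the actual header of synspeech_Large_Metadata.csv may differ, and the inline sample merely mimics the described layout.

```python
import csv
import io

# Hypothetical sample mirroring the metadata fields described above
# (speaker information, gender, speaking style, text, file path);
# the real column names in synspeech_Large_Metadata.csv may differ.
sample = """speaker_id,gender,style,text,file_path
19,F,default,Hello world,speaker_19/default/0001.wav
19,F,whispering,Hello world,speaker_19/whispering/0001.wav
26,M,sad,Good morning,speaker_26/sad/0001.wav
"""

rows = list(csv.DictReader(io.StringIO(sample)))

# Group utterance paths by speaker ID, matching the on-disk organization.
by_speaker = {}
for row in rows:
    by_speaker.setdefault(row["speaker_id"], []).append(row["file_path"])

# Select a single speaking style, e.g. for style-factorization experiments.
whispering = [r for r in rows if r["style"] == "whispering"]

print(len(by_speaker), len(whispering))  # → 2 1
```

For the real dataset, one would replace the in-memory sample with `open("synspeech_Large_Metadata.csv")` after extracting both archive parts into the same parent directory, so that every file_path in the metadata resolves.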
