The SynSpeech Dataset (Medium Version) is an English-language synthetic speech dataset created with OpenVoice and LibriSpeech-100 for benchmarking disentangled speech representation learning methods. It includes 50 unique speakers, each with 500 distinct sentences spoken in 4 styles (default, friendly, sad, whispering) at a 16 kHz sampling rate. Data is organized by speaker ID, with a `synspeech_Medium_Metadata.csv` file detailing speaker ID, gender, speaking style, text, and file paths. This dataset is well suited to representation learning, speaker and content factorization, and text-to-speech (TTS) synthesis.
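
A minimal sketch of working with the metadata layout described above, using only the Python standard library. The column names (`speaker_id`, `gender`, `style`, `text`, `file_path`) and the sample rows here are illustrative assumptions, not the actual contents of `synspeech_Medium_Metadata.csv`; substitute the real header names from the CSV when loading the dataset.

```python
import csv
import io

# Hypothetical rows mimicking the described metadata layout
# (speaker ID, gender, speaking style, text, file path);
# column names are assumptions, not the dataset's actual header.
sample_csv = """speaker_id,gender,style,text,file_path
spk001,female,default,Hello world.,spk001/default/0001.wav
spk001,female,whispering,Hello world.,spk001/whispering/0001.wav
spk002,male,sad,Good morning.,spk002/sad/0001.wav
"""

rows = list(csv.DictReader(io.StringIO(sample_csv)))

# Group file paths by speaking style, e.g. to build per-style
# subsets for a disentanglement benchmark.
by_style = {}
for row in rows:
    by_style.setdefault(row["style"], []).append(row["file_path"])

print(sorted(by_style))  # → ['default', 'sad', 'whispering']
```

For the real dataset, the same pattern applies with `open("synspeech_Medium_Metadata.csv")` in place of the in-memory sample.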