Adal Abilbekov, Saida Mussakhojayeva, Rustem Yeshpanov, and Huseyin Atakan Varol. 2024. KazEmoTTS: A Dataset for Kazakh Emotional Text-to-Speech Synthesis. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 9626–9632, Torino, Italia. ELRA and ICCL. https://doi.org/10.48550/arXiv.2404.01033
BibTeX:
@inproceedings{abilbekov-etal-2024-kazemotts-dataset,
title = "{K}az{E}mo{TTS}: A Dataset for {K}azakh Emotional Text-to-Speech Synthesis",
author = "Abilbekov, Adal and
Mussakhojayeva, Saida and
Yeshpanov, Rustem and
Varol, Huseyin Atakan",
editor = "Calzolari, Nicoletta and
Kan, Min-Yen and
Hoste, Veronique and
Lenci, Alessandro and
Sakti, Sakriani and
Xue, Nianwen",
booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.lrec-main.841",
pages = "9626--9632",
abstract = "This study focuses on the creation of the KazEmoTTS dataset, designed for emotional Kazakh text-to-speech (TTS) applications. KazEmoTTS is a collection of 54,760 audio-text pairs, with a total duration of 74.85 hours, featuring 34.23 hours delivered by a female narrator and 40.62 hours by two male narrators. The emotions considered include {``}neutral{''}, {``}angry{''}, {``}happy{''}, {``}sad{''}, {``}scared{''}, and {``}surprised{''}. We also developed a TTS model trained on the KazEmoTTS dataset. Objective and subjective evaluations were employed to assess the quality of the synthesized speech, yielding an MCD score within the range of 6.02 to 7.67, alongside a MOS that spanned from 3.51 to 3.57. To facilitate reproducibility and inspire further research, we have made our code, pre-trained model, and dataset accessible in our GitHub repository.",
}