This is the first study of multilingual end-to-end (E2E) automatic speech recognition (ASR) for three languages used in Kazakhstan: Kazakh, Russian, and English. Kazakhstan is a multinational country where Kazakh is the official state language, whereas Russian and English serve as the languages of interethnic and international communication. Accordingly, we present a single joint E2E ASR model that simultaneously recognizes Kazakh, Russian, and English. We believe this work will further speech processing research and advance speech-enabled technology in Kazakhstan and its neighboring countries.
Besides conducting the first detailed study of multilingual E2E ASR for Kazakh, Russian, and English, other contributions of this work are:
If you use our dataset for commercial purposes, please add this statement to your product or service:
Our product uses ISSAI Multilingual (Kazakh, Russian, English) Speech Corpus (https://doi.org/10.48342/0qzd-fk83), which is available under a Creative Commons Attribution 4.0 International License.
If you use our dataset for research, please cite it as:
Mussakhojayeva, S., Khassanov, Y., & Varol, H. A. (2021). A Study of Multilingual End-to-End Speech Recognition for Kazakh, Russian, and English. arXiv preprint arXiv:2108.01280.
Here is a demo of the automatic speech recognition system built using the ISSAI multilingual speech corpus. Please click the “RECORD” button and speak until the countdown reaches zero. The recognized output will be displayed above the “RECORD” button after about 10 seconds. Please note that some browsers do not support the audio recording feature.
Instructions for using the multilingual ASR demo:
Some browser versions do not support audio recording technology. If this is the case for you, please consider using an up-to-date browser on a desktop device.
Dataset statistics for the Kazakh, Russian, and English languages. Utterance and word counts are in thousands (k) or millions (M), and durations are in hours (hr). The overall ‘Total’ statistics are obtained by combining the training, validation, and test sets across all the languages.
| # | Language | Set | Source | Duration | Utterances | Words |
|---|----------|-----|--------|----------|------------|-------|
| 1 | Kazakh | train | KSC | 318.4 hr | 147.2k | 1.6M |
| | | test-B (books) | OpenSTT | 3.6 hr | 3.7k | 28.1k |
| | | test-Y (YouTube) | | 3.4 hr | 3.9k | 31.2k |
| | | valid | CV | 7.4 hr | 4.3k | 43.9k |
| | | test-SF (YouTube) | SpeakingFaces | 7.7 hr | 6.8k | 37.7k |
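Statistics like those above (hours, utterance counts, word counts per language and split) are straightforward to recompute from utterance-level metadata. The sketch below assumes a hypothetical tab-separated manifest with `language`, `split`, `duration` (seconds), and `text` columns; this format is an illustration, not the corpus's actual layout.

```python
import csv
import io

# Hypothetical manifest: one row per utterance. The real corpus may
# ship its metadata in a different format.
SAMPLE_TSV = """\
language\tsplit\tduration\ttext
Kazakh\ttrain\t4.2\tbirinshi mysal soilem
Kazakh\ttrain\t3.1\tekinshi soilem
Russian\ttest-B\t5.0\tpervoe predlozhenie
"""

def corpus_stats(tsv_text):
    """Aggregate (hours, utterances, words) per (language, split) pair."""
    stats = {}
    for row in csv.DictReader(io.StringIO(tsv_text), delimiter="\t"):
        key = (row["language"], row["split"])
        hours, utts, words = stats.get(key, (0.0, 0, 0))
        stats[key] = (
            hours + float(row["duration"]) / 3600.0,  # seconds -> hours
            utts + 1,                                 # utterance count
            words + len(row["text"].split()),         # whitespace tokens
        )
    return stats

stats = corpus_stats(SAMPLE_TSV)
print(stats[("Kazakh", "train")])
```

Summing these per-split tuples across all languages reproduces the overall ‘Total’ row of the table.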
Abdrakhmanova, M., Kuzdeuov, A., Jarju, S., Khassanov, Y., Lewis, M., Varol, H.A.: SpeakingFaces: A large-scale multimodal dataset of voice commands with visual and thermal video streams. Sensors 21(10) (2021).
Slizhikova, A., Veysov, A., Nurtdinova, D., Voronin, D.: Russian open speech to text dataset. https://github.com/snakers4/open_stt Accessed: 2021-01-15.
Khassanov, Y., Mussakhojayeva, S., Mirzakhmetov, A., Adiyev, A., Nurpeiissov, M., Varol, H.A.: A crowdsourced open-source Kazakh speech corpus and initial speech recognition baseline. In: Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume. pp. 697–706. Association for Computational Linguistics (2021).
Ardila, R., Branson, M., Davis, K., Kohler, M., Meyer, J., Henretty, M., Morais, R., Saunders, L., Tyers, F.M., Weber, G.: Common Voice: A massively-multilingual speech corpus. In: LREC. pp. 4218–4222. ELRA (2020).