A Study of Multimodal Person Verification Using Audio-Visual-Thermal Data


In this paper, we study an approach to multimodal person verification using audio, visual, and thermal modalities. The combination of audio and visual modalities has already been shown to be effective for robust person verification. From this perspective, we investigate the impact of further increasing the number of modalities by supplementing thermal images. In particular, we implemented unimodal, bimodal, and trimodal verification systems using state-of-the-art deep learning architectures and compared their performance under clean and noisy conditions. We also compared two popular fusion approaches based on simple score averaging and a soft attention mechanism. Experiments conducted on the SpeakingFaces dataset demonstrate the superiority of the trimodal verification system over both unimodal and bimodal systems. To enable reproducibility and facilitate research into multimodal person verification, we make our code, pretrained models, and preprocessed dataset freely available in our GitHub repository.
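The two fusion approaches compared in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: `average_fusion` averages per-modality similarity scores (score-level fusion), while `soft_attention_fusion` weights per-modality embeddings with a softmax over attention logits; the projection vector `attn_w` stands in for a learned attention parameter and is an assumption of this sketch.

```python
import numpy as np

def average_fusion(scores):
    """Score-level fusion: average the per-modality similarity scores."""
    return float(np.mean(scores))

def soft_attention_fusion(embeddings, attn_w):
    """Embedding-level fusion via soft attention (illustrative sketch).

    Each modality embedding receives an attention logit through a
    projection vector `attn_w` (a stand-in for a learned parameter);
    the fused embedding is the softmax-weighted sum of the inputs.
    """
    embeddings = np.stack(embeddings)      # (num_modalities, dim)
    logits = embeddings @ attn_w           # one logit per modality
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()               # softmax over modalities
    return weights @ embeddings            # fused embedding, shape (dim,)
```

With a zero projection vector the attention weights are uniform, so the soft-attention fusion reduces to a plain average of the embeddings; a trained `attn_w` lets the model down-weight an unreliable modality (e.g. visual input in poor lighting).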

Index Terms— Person verification, multimodal, audio- visual-thermal, data augmentation, fusion


Madina Abdrakhmanova, Saniya Abushakimova, Yerbolat Khassanov, Huseyin Atakan Varol