Noise-robust speech recognition system based on multimodal audio-visual approach using different deep learning classification techniques

Other Title(s)

Design of a noise-robust speech recognition system based on audio and visual modalities of speech with different deep learning techniques

Times cited in Arcif:
2

Joint Authors

al-Maghribi, Islam Id Ali Muhammad
Judi, Amr Muhammad Rifat
Faruq, Hisham Muhammad

Source

The Egyptian Journal of Language Engineering

Issue

Vol. 7, Issue 1 (30 Apr. 2020), pp.27-42, 16 p.

Publisher

Egyptian Society of Language Engineering

Publication Date

2020-04-30

Country of Publication

Egypt

No. of Pages

16

Main Subjects

Information Technology and Computer Science

Topics

Abstract EN

This paper extends earlier work on designing a speech recognition system based on the Hidden Markov Model (HMM) classification technique, using the visual modality in addition to the audio modality [1].

The accuracy of traditional HMM-based Automatic Speech Recognition (ASR) is improved by implementing either an RNN-based or a CNN-based approach.

This research intends to deliver two contributions. The first contribution is the methodology for choosing the visual features: different visual feature extraction methods such as the Discrete Cosine Transform (DCT), blocked DCT, and Histograms of Oriented Gradients combined with Local Binary Patterns (HOG+LBP) are compared, and different dimensionality reduction techniques such as Principal Component Analysis (PCA), auto-encoders, Linear Discriminant Analysis (LDA), and t-distributed Stochastic Neighbor Embedding (t-SNE) are applied to find the most effective feature vector size.
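
As an illustration of such a pipeline, the minimal sketch below computes 2-D DCT features on a cropped mouth region and reduces them with PCA. It is not the paper's exact configuration: the ROI size, number of retained coefficients, and PCA dimensionality are assumed values, and NumPy, SciPy, and scikit-learn are assumed available.

```python
import numpy as np
from scipy.fftpack import dct
from sklearn.decomposition import PCA

def dct_features(mouth_roi, n_coeffs=64):
    """2-D DCT of a grayscale mouth ROI, keeping the low-frequency (top-left) coefficients."""
    d = dct(dct(mouth_roi, axis=0, norm='ortho'), axis=1, norm='ortho')
    k = int(np.sqrt(n_coeffs))
    return d[:k, :k].flatten()

# frames: one grayscale mouth crop per video frame (dummy data here, assumed 32x32)
frames = [np.random.rand(32, 32) for _ in range(75)]
raw = np.stack([dct_features(f) for f in frames])   # (n_frames, 64)

# Reduce the per-frame visual vector; the target dimension is an illustrative choice.
pca = PCA(n_components=20)
visual_feats = pca.fit_transform(raw)               # (n_frames, 20)
```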

The obtained visual features are then early-integrated with the audio features obtained using Mel Frequency Cepstral Coefficients (MFCCs), and the combined audio-visual feature vector is fed to the classification process.
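
A minimal sketch of this early-integration step, assuming librosa for MFCC extraction: the dummy visual features stand in for the PCA-reduced vectors of the previous sketch, and aligning the two streams by simple truncation is an illustrative choice rather than the paper's method.

```python
import numpy as np
import librosa

# Audio features: 13 MFCCs per frame (dummy 1 s signal; in practice the utterance audio).
sr = 16000
y = np.random.randn(sr)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T   # (n_audio_frames, 13)

# Stand-in for the PCA-reduced visual features of the previous sketch.
visual_feats = np.random.rand(75, 20)                  # (n_video_frames, 20)

# Early integration: align the two frame sequences, then concatenate per frame
# into a single audio-visual feature vector.
n = min(len(mfcc), len(visual_feats))
av_feats = np.concatenate([mfcc[:n], visual_feats[:n]], axis=1)   # (n, 13 + 20)
```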

The second contribution is the methodology for developing the classification process using deep learning, comparing different Deep Neural Network (DNN) architectures such as Bidirectional Long Short-Term Memory (BiLSTM) and Convolutional Neural Network (CNN) with the traditional HMM.
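
A minimal sketch of a BiLSTM sequence classifier over the fused audio-visual vectors, written in PyTorch; the layer sizes, input dimension, and number of classes are assumed for illustration and are not taken from the paper.

```python
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    """BiLSTM over per-frame audio-visual vectors, followed by a linear output layer."""
    def __init__(self, input_dim=33, hidden_dim=128, n_classes=26):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden_dim, n_classes)

    def forward(self, x):                  # x: (batch, n_frames, input_dim)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1, :])      # classify from the final time step

model = BiLSTMClassifier()
logits = model(torch.randn(4, 75, 33))     # e.g. 4 utterances of 75 fused frames each
```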

The proposed model is evaluated on two multi-speaker AV-ASR datasets, AVletters and GRID, under different SNR conditions.
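
For context, evaluating noise robustness at a given SNR typically means mixing noise into the clean audio at a controlled level. The sketch below adds white Gaussian noise at a target SNR in dB; this is an assumed setup, not necessarily the paper's exact noise conditions.

```python
import numpy as np

def add_noise(clean, snr_db):
    """Mix white Gaussian noise into a clean signal at a target SNR (dB)."""
    noise = np.random.randn(len(clean))
    p_clean = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_clean / (p_noise * 10 ** (snr_db / 10)))
    return clean + scale * noise

noisy = add_noise(np.random.randn(16000), snr_db=10)   # dummy 1 s clip mixed at 10 dB SNR
```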

Speaker-independent experiments are performed on the AVletters dataset and speaker-dependent experiments on the GRID dataset.

American Psychological Association (APA)

al-Maghribi, Islam Id Ali Muhammad, Judi, Amr Muhammad Rifat, & Faruq, Hisham Muhammad. 2020. Noise-robust speech recognition system based on multimodal audio-visual approach using different deep learning classification techniques. The Egyptian Journal of Language Engineering, Vol. 7, no. 1, pp. 27-42.
https://search.emarefa.net/detail/BIM-1012038

Modern Language Association (MLA)

al-Maghribi, Islam Id Ali Muhammad, et al. Noise-robust speech recognition system based on multimodal audio-visual approach using different deep learning classification techniques. The Egyptian Journal of Language Engineering, Vol. 7, no. 1 (Apr. 2020), pp. 27-42.
https://search.emarefa.net/detail/BIM-1012038

American Medical Association (AMA)

al-Maghribi, Islam Id Ali Muhammad, Judi, Amr Muhammad Rifat, Faruq, Hisham Muhammad. Noise-robust speech recognition system based on multimodal audio-visual approach using different deep learning classification techniques. The Egyptian Journal of Language Engineering. 2020. Vol. 7, no. 1, pp. 27-42.
https://search.emarefa.net/detail/BIM-1012038

Data Type

Journal Articles

Language

English

Notes

-

Record ID

BIM-1012038