LIP READING SENTENCES USING DEEP LEARNING WITH ONLY VISUAL CUES

In this paper, a neural network-based lip reading system is proposed. The system is lexicon-free and uses purely visual cues. With only a limited number of visemes as classes to recognise, the system is designed to lip read sentences covering a wide range of vocabulary and to recognise words that may not be included in system training. The system has been tested on the challenging BBC Lip Reading Sentences 2 (LRS2) benchmark dataset. Compared with state-of-the-art works in lip reading sentences, the system achieves a significantly improved performance, with a 15% lower word error rate.
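The key to this lexicon-free design is that many phonemes collapse onto the same visual lip shape (viseme), so a small, fixed viseme inventory can describe any word, including words never seen in training. A minimal sketch below illustrates this; the phoneme-to-viseme mapping is purely hypothetical for illustration and is not the paper's actual mapping:

```python
# Illustrative sketch: a small viseme inventory covers arbitrary words.
# This phoneme-to-viseme table is hypothetical, not the paper's mapping.
VISEME = {
    "p": "V1", "b": "V1", "m": "V1",              # bilabials look identical on the lips
    "f": "V2", "v": "V2",                          # labiodentals
    "t": "V3", "d": "V3", "s": "V3", "z": "V3",    # alveolars
    "k": "V4", "g": "V4",                          # velars
    "ae": "V5", "aa": "V5",                        # open vowels
    "iy": "V6",                                    # close front vowel
}

def word_to_visemes(phonemes):
    """Map a word's phoneme sequence to its viseme sequence."""
    return [VISEME[p] for p in phonemes]

# "pat" and "bat" are homophenes: visually indistinguishable on the lips.
print(word_to_visemes(["p", "ae", "t"]))  # ['V1', 'V5', 'V3']
print(word_to_visemes(["b", "ae", "t"]))  # ['V1', 'V5', 'V3']
```

This ambiguity is also the central difficulty: a viseme sequence can map to several candidate words, which is why a separate viseme-to-word conversion stage is needed.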

In addition, experiments with videos of varying illumination have shown that the proposed model is robust to varying levels of lighting. The main contributions of this paper are: 1) the classification of visemes in continuous speech using a specially designed transformer with a unique topology; 2) the use of visemes as a classification schema for lip reading sentences; and 3) the conversion of visemes to words using perplexity analysis. All the contributions serve to enhance the accuracy of lip reading sentences. The paper also provides an essential survey of the research area.
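The third contribution, perplexity analysis, resolves the ambiguity of homophenes (words sharing a viseme sequence) by scoring candidate word sequences with a language model and choosing the least surprising one. The sketch below shows the idea with a toy bigram model; the probabilities and the bigram form are illustrative assumptions, not the paper's actual language model:

```python
import math

# Hypothetical toy bigram probabilities; a real system would use a trained LM.
BIGRAM = {
    ("the", "bat"): 0.05,
    ("the", "pat"): 0.001,
    ("bat", "flew"): 0.02,
    ("pat", "flew"): 0.0001,
}
DEFAULT = 1e-6  # floor probability for unseen bigrams

def perplexity(words):
    """Perplexity of a word sequence under the toy bigram model."""
    log_prob = 0.0
    for prev, cur in zip(words, words[1:]):
        log_prob += math.log(BIGRAM.get((prev, cur), DEFAULT))
    n = max(len(words) - 1, 1)
    return math.exp(-log_prob / n)

# "bat" and "pat" share the same viseme sequence; the language model's
# perplexity breaks the tie in favour of the more plausible sentence.
candidates = [["the", "bat", "flew"], ["the", "pat", "flew"]]
best = min(candidates, key=perplexity)
print(best)  # ['the', 'bat', 'flew']
```

Lower perplexity means the sequence is more predictable under the language model, so minimising it selects the candidate that best fits ordinary language use.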
