Diffusion-based Unsupervised Audio-visual Speech Enhancement
Preprint, working paper. Year: 2024

Abstract

This paper proposes a new unsupervised audio-visual speech enhancement (AVSE) approach that combines a diffusion-based audio-visual speech generative model with a non-negative matrix factorization (NMF) noise model. First, the diffusion model is pre-trained on clean speech, conditioned on the corresponding video data, to model the generative distribution of clean speech. This pre-trained model is then paired with the NMF-based noise model to iteratively estimate clean speech. Specifically, a diffusion-based posterior sampling approach is implemented within the reverse diffusion process: after each iteration, a speech estimate is obtained and used to update the noise parameters. Experimental results confirm that the proposed AVSE approach not only outperforms its audio-only counterpart but also generalizes better than a recent supervised generative AVSE method. Additionally, the new inference algorithm offers a better balance between inference speed and performance than the previous diffusion-based method.
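The alternating scheme described above — a speech-estimation step inside the reverse diffusion process, followed by an update of the NMF noise parameters — can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: `posterior_step` is a hypothetical stand-in (a simple Wiener-style gain) for the reverse step guided by the pre-trained audio-visual diffusion model, and the NMF update uses standard Itakura-Saito multiplicative rules, a common choice for power spectrograms.

```python
import numpy as np

rng = np.random.default_rng(0)

def is_nmf_update(V, W, H, eps=1e-8):
    """One multiplicative update of W and H under the Itakura-Saito
    divergence, a standard NMF model for noise power spectrograms."""
    WH = W @ H + eps
    W = W * (((V / WH**2) @ H.T) / ((1.0 / WH) @ H.T + eps))
    WH = W @ H + eps
    H = H * ((W.T @ (V / WH**2)) / (W.T @ (1.0 / WH) + eps))
    return W, H

def posterior_step(noisy_power, noise_var, eps=1e-8):
    """Hypothetical stand-in for the diffusion posterior sampling step:
    a Wiener-style gain toward the observation. The actual method
    replaces this with a reverse step of the pre-trained audio-visual
    diffusion model."""
    gain = np.maximum(noisy_power - noise_var, 0.0) / (noisy_power + eps)
    return gain * noisy_power

# Toy noisy-speech power spectrogram: F frequency bins x N frames.
F, N, K = 16, 20, 4
noisy_power = rng.random((F, N)) + 0.1

# NMF noise model: noise variance approximated by W @ H (rank K).
W = rng.random((F, K)) + 0.1
H = rng.random((K, N)) + 0.1

# Alternate between a speech-estimation step and a noise-parameter
# update, mirroring the iterative scheme described in the abstract.
for _ in range(30):
    speech_power = posterior_step(noisy_power, W @ H)
    residual = np.maximum(noisy_power - speech_power, 1e-8)
    W, H = is_nmf_update(residual, W, H)
```

The key design point this sketch captures is that the noise model is refit from the residual after every speech estimate, so the two models are co-estimated rather than fixed in advance.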


Dates and versions

hal-04718254 , version 1 (03-10-2024)

Cite

Jean-Eudes Ayilo, Mostafa Sadeghi, Romain Serizel, Xavier Alameda-Pineda. Diffusion-based Unsupervised Audio-visual Speech Enhancement. 2024. ⟨hal-04718254⟩