SPECTRON: TARGET SPEAKER EXTRACTION USING CONDITIONAL TRANSFORMER WITH ADVERSARIAL REFINEMENT

Tathagata Bandyopadhyay

Visual Computing Lab, Technical University of Munich
Code | Paper | Slides**
** The slides were prepared earlier and therefore do not cover the Adversarial Refinement (multi-scale discriminator, MSD) component.

Abstract

Recently, attention-based transformers have become a de facto standard in many deep learning applications, including natural language processing, computer vision, and signal processing. In this paper, we propose a transformer-based end-to-end model to extract a target speaker's speech from a monaural multi-speaker mixed audio signal. Unlike existing speaker extraction methods, we introduce two additional objectives to impose speaker embedding consistency and waveform encoder invertibility, and we jointly train both the Speaker Encoder and the Speech Separator to better capture the speaker-conditional embedding. Furthermore, we leverage a multi-scale discriminator to refine the perceptual quality of the extracted speech. Our experiments show that using a dual-path transformer in the separator backbone together with the proposed training paradigm improves over the CNN baseline by 3.12 dB. Finally, we compare our approach with recent state-of-the-art methods and show that our model outperforms them by 4.1 dB on average, without introducing any additional data dependency.
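To make the training objectives in the abstract concrete, below is a minimal PyTorch-style sketch of how the pieces could fit together: a speaker encoder that conditions the separator, plus the speaker-embedding-consistency and waveform-encoder-invertibility losses. This is an illustrative assumption, not the released implementation; all module sizes, the additive conditioning, the L1/cosine loss forms, and the omission of the adversarial MSD term are placeholders.

```python
# Hypothetical sketch of the conditioning and auxiliary losses described in
# the abstract. Shapes, layer sizes, and loss choices are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class WaveEncoder(nn.Module):
    """Learnable 1-D conv front end; invertibility is encouraged by a
    reconstruction loss through the paired decoder (see loss_inv below)."""
    def __init__(self, dim=64, kernel=16, stride=8):
        super().__init__()
        self.conv = nn.Conv1d(1, dim, kernel, stride=stride)

    def forward(self, wav):                          # wav: (B, T)
        return F.relu(self.conv(wav.unsqueeze(1)))   # (B, dim, T')


class WaveDecoder(nn.Module):
    def __init__(self, dim=64, kernel=16, stride=8):
        super().__init__()
        self.deconv = nn.ConvTranspose1d(dim, 1, kernel, stride=stride)

    def forward(self, feats):                        # (B, dim, T')
        return self.deconv(feats).squeeze(1)         # (B, ~T)


class SpeakerEncoder(nn.Module):
    """Maps a reference utterance to a fixed-size speaker embedding."""
    def __init__(self, dim=64, emb=128):
        super().__init__()
        self.enc = WaveEncoder(dim)
        self.proj = nn.Linear(dim, emb)

    def forward(self, ref_wav):
        feats = self.enc(ref_wav)                    # (B, dim, T')
        return self.proj(feats.mean(dim=-1))         # (B, emb)


class Separator(nn.Module):
    """Stand-in for the dual-path transformer backbone: predicts a mask on
    the mixture features, conditioned on the speaker embedding."""
    def __init__(self, dim=64, emb=128):
        super().__init__()
        self.cond = nn.Linear(emb, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.mask = nn.Linear(dim, dim)

    def forward(self, mix_feats, spk_emb):           # (B, dim, T'), (B, emb)
        x = mix_feats.transpose(1, 2)                # (B, T', dim)
        x = x + self.cond(spk_emb).unsqueeze(1)      # additive speaker conditioning
        x = self.backbone(x)
        m = torch.sigmoid(self.mask(x)).transpose(1, 2)
        return mix_feats * m                         # masked target features


# One illustrative training step on dummy tensors.
enc, dec = WaveEncoder(), WaveDecoder()
spk_enc, sep = SpeakerEncoder(), Separator()
mix, ref, clean = torch.randn(2, 16000), torch.randn(2, 16000), torch.randn(2, 16000)

spk_emb = spk_enc(ref)
est = dec(sep(enc(mix), spk_emb))                    # extracted target speech

T = min(est.shape[-1], clean.shape[-1])
loss_sep = F.l1_loss(est[..., :T], clean[..., :T])   # separation loss (stand-in for SI-SDR)
loss_consist = 1 - F.cosine_similarity(spk_enc(est[..., :T]), spk_emb).mean()  # embedding consistency
recon = dec(enc(clean))
T2 = min(recon.shape[-1], clean.shape[-1])
loss_inv = F.l1_loss(recon[..., :T2], clean[..., :T2])  # encoder invertibility
loss = loss_sep + loss_consist + loss_inv            # adversarial MSD term omitted in this sketch
```

In this sketch the consistency term pushes the speaker embedding of the extracted speech toward the reference embedding, and the invertibility term asks the decoder to reconstruct a clean waveform from its own encoding; the adversarial refinement with the multi-scale discriminator would add a GAN loss on top of these.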

Qualitative Results


Spectron vs. VoiceFilter: audio samples for the mixed audio input, the reference audio used for the speaker embedding, the VoiceFilter output, the Spectron output, and the clean (ground-truth) audio.

Spectron vs. X-TaSNet: audio samples for the mixed audio input, the reference audio used for the speaker embedding, the X-TaSNet output, the Spectron output, and the clean (ground-truth) audio.

Spectron in the wild: audio samples for the mixed audio input, the reference audio used for the speaker embedding, and the Spectron output.