Peer reviewed · Open access
  • Adversarial attack and defe...
    Jati, Arindam; Hsu, Chin-Cheng; Pal, Monisankha; Peri, Raghuveer; AbdAlmageed, Wael; Narayanan, Shrikanth

    Computer Speech & Language, July 2021, Volume 68
    Journal Article

    • Expository study on adversarial attacks and possible countermeasures for deep speaker recognition systems.
    • White-box attacks: FGSM, PGD, Carlini & Wagner.
    • Defensive countermeasures: adversarial training, adversarial Lipschitz regularization.
    • Several ablation studies, e.g., varying the strength of the attack, measuring signal-to-noise ratio and perceptibility, effect of noise augmentation, transferability analysis.
    • Strongest attacks: PGD and Carlini & Wagner; most imperceptible adversarial samples: Carlini & Wagner; best defense: PGD-based adversarial training.

    Robust speaker recognition, including in the presence of malicious attacks, is becoming increasingly important, especially due to the proliferation of smart speakers and personal agents that act on an individual's voice commands to perform diverse and even sensitive tasks. Adversarial attacks are a recently revived line of work shown to be effective at breaking deep neural network-based classifiers, forcing them to change their posterior distributions by perturbing the input samples only by a very small amount. Although significant progress in this realm has been made in the computer vision domain, advances within speaker recognition are still limited. We present an expository paper that applies several adversarial attacks to a deep speaker recognition system, employs strong defense methods as countermeasures, and reports a comprehensive set of ablation studies to better understand the problem. The experiments show that speaker recognition systems are vulnerable to adversarial attacks, and the strongest attacks can reduce the accuracy of the system from 94% to even 0%. The study also compares the performance of the employed defense methods in detail, and finds adversarial training based on Projected Gradient Descent (PGD) to be the best defense method in our setting. We hope that the experiments presented in this paper provide baselines that can be useful for the research community interested in further studying the adversarial robustness of speaker recognition systems.
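
    The abstract names PGD as both one of the strongest attacks and the basis of the most effective defense. As an illustration only (not the authors' code), below is a minimal PyTorch sketch of a PGD attack on a generic speaker classifier and of one PGD-based adversarial training step; the model, input tensors, and hyperparameter values (eps, alpha, steps) are placeholder assumptions, not values taken from the paper.

    import torch
    import torch.nn.functional as F

    def pgd_attack(model, waveform, labels, eps=0.002, alpha=0.0005, steps=10):
        """Projected Gradient Descent: iteratively perturb the input within an
        L-infinity ball of radius eps so as to maximize the classification loss."""
        adv = waveform.clone().detach()
        # Random start inside the epsilon ball (a common PGD variant).
        adv = adv + torch.empty_like(adv).uniform_(-eps, eps)
        for _ in range(steps):
            adv.requires_grad_(True)
            loss = F.cross_entropy(model(adv), labels)
            grad = torch.autograd.grad(loss, adv)[0]
            with torch.no_grad():
                adv = adv + alpha * grad.sign()                      # ascend the loss
                adv = waveform + (adv - waveform).clamp(-eps, eps)   # project back into the ball
                adv = adv.clamp(-1.0, 1.0)                           # keep a valid audio range
        return adv.detach()

    def adversarial_training_step(model, optimizer, waveform, labels, **pgd_kwargs):
        """One step of PGD-based adversarial training: generate adversarial examples
        from the current model and train on them (the defense the paper finds best)."""
        model.eval()
        adv = pgd_attack(model, waveform, labels, **pgd_kwargs)
        model.train()
        optimizer.zero_grad()
        loss = F.cross_entropy(model(adv), labels)
        loss.backward()
        optimizer.step()
        return loss.item()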