Adversarial Training Techniques in Deep Learning: Analyzing Adversarial Training Techniques to Enhance the Robustness of Deep Learning Models Against Adversarial Attacks

Authors

  • Prof. Wei Chen, Associate Professor of Computational Intelligence, Tsinghua University, Beijing, China
  • Gopalakrishnan Arjunan, AI/ML Engineer at Accenture, Bangalore, India

Keywords:

Adversarial Training, Deep Learning, Adversarial Attacks, Robustness, Neural Networks, Gradient Descent, Defense Mechanisms, Transferability, Attack Strategies, Model Interpretability

Abstract

Adversarial attacks pose a significant threat to the reliability of deep learning models. Adversarial training has emerged as a promising approach to enhance the robustness of these models. This paper provides a comprehensive analysis of adversarial training techniques in deep learning, aiming to understand their effectiveness in improving model robustness against adversarial attacks. We discuss the fundamental concepts of adversarial attacks and adversarial training, review key adversarial training methods, and analyze their impact on model performance and robustness. Additionally, we highlight challenges and future research directions in this area.
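To make the core idea of adversarial training concrete, the sketch below illustrates one standard formulation the abstract alludes to: crafting perturbations with the fast gradient sign method (FGSM) and minimizing the loss on the perturbed inputs. This is a minimal NumPy illustration on a logistic-regression model, not the specific methods analyzed in the paper; the function names, step sizes, and epoch counts are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """FGSM: step each input in the sign of the input-gradient of the loss."""
    p = sigmoid(x @ w + b)
    # For binary cross-entropy, dL/dx per sample is (p - y) * w
    grad_x = np.outer(p - y, w)
    return x + eps * np.sign(grad_x)

def adversarial_train(x, y, eps=0.1, lr=0.1, epochs=200, seed=0):
    """Adversarial training loop: attack (inner step), then update on the
    adversarial examples (outer step). All hyperparameters are illustrative."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.01, size=x.shape[1])
    b = 0.0
    for _ in range(epochs):
        x_adv = fgsm_perturb(x, y, w, b, eps)      # inner step: craft attacks
        p = sigmoid(x_adv @ w + b)
        w -= lr * x_adv.T @ (p - y) / len(y)       # outer step: descend on
        b -= lr * np.mean(p - y)                   # the adversarial loss
    return w, b
```

Training on worst-case perturbed inputs rather than clean ones is the common thread across the adversarial training methods the paper surveys; deep-learning variants replace the analytic gradient here with backpropagation through the network.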

Published

28-04-2022

How to Cite

[1] “Adversarial Training Techniques in Deep Learning: Analyzing Adversarial Training Techniques to Enhance the Robustness of Deep Learning Models Against Adversarial Attacks”, Adv. in Deep Learning Techniques, vol. 2, no. 1, pp. 15–26, Apr. 2022, Accessed: Mar. 20, 2026. [Online]. Available: https://www.thesciencebrigade.org/adlt/article/view/111
