On-line Access: 2024-01-18
Received: 2023-07-12
Revision Accepted: 2023-10-08
Wei LIN, Lichuan LIAO. Towards sustainable adversarial training with successive perturbation generation[J]. Frontiers of Information Technology & Electronic Engineering, 2024. https://doi.org/10.1631/FITEE.2300474
@article{title="Towards sustainable adversarial training with successive perturbation generation",
author="Wei LIN, Lichuan LIAO",
journal="Frontiers of Information Technology & Electronic Engineering",
year="2024",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.2300474"
}
%0 Journal Article
%T Towards sustainable adversarial training with successive perturbation generation
%A Wei LIN
%A Lichuan LIAO
%J Frontiers of Information Technology & Electronic Engineering
%@ 2095-9184
%D 2024
%I Zhejiang University Press & Springer
%DOI 10.1631/FITEE.2300474
TY - JOUR
T1 - Towards sustainable adversarial training with successive perturbation generation
A1 - Wei LIN
A1 - Lichuan LIAO
JO - Frontiers of Information Technology & Electronic Engineering
SN - 2095-9184
Y1 - 2024
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.2300474
ER -
Abstract: Adversarial training with online-generated adversarial examples has achieved promising performance in defending against adversarial attacks and improving the robustness of convolutional neural network (CNN) models. However, most existing adversarial training methods are dedicated to finding strong adversarial examples that force the model to learn the adversarial data distribution, which inevitably imposes a large computational overhead and degrades generalization performance on clean data. In this paper, we show that progressively increasing the adversarial strength of adversarial examples across training epochs can effectively improve model robustness, and that appropriate model shifting can preserve the generalization performance of models at negligible computational cost. To this end, we propose a successive perturbation generation scheme for adversarial training (SPGAT), which progressively strengthens adversarial examples by adding perturbations to the adversarial examples transferred from the previous epoch, and which shifts models across epochs to improve the efficiency of adversarial training. The proposed SPGAT is both efficient and effective; e.g., its computation time is 900 min, versus 4100 min for standard adversarial training, while its performance boost exceeds 7% in adversarial accuracy and 3% in clean accuracy. We extensively evaluate SPGAT on various datasets, including the small-scale MNIST, middle-scale CIFAR-10, and large-scale CIFAR-100. The experimental results show that our method is more efficient while performing favorably against state-of-the-art methods.
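The core idea in the abstract — strengthening each example's perturbation incrementally across epochs rather than regenerating it from scratch — can be sketched as follows. This is a minimal NumPy illustration on a toy linear model, not the paper's actual implementation: the hinge-style loss, the step size `alpha`, and the budget `eps` are assumptions made for the sketch, which simply carries each example's perturbation forward and adds one FGSM-like step per epoch, clipped to the epsilon ball.

```python
import numpy as np

def input_grad(w, x, y):
    # Gradient of a hinge loss max(0, 1 - y * w.x) w.r.t. the input x
    # for a linear scorer f(x) = w.x; zero when the margin is satisfied.
    margin = 1.0 - y * np.dot(w, x)
    return -y * w if margin > 0 else np.zeros_like(x)

def successive_perturbation(w, X, Y, epochs=3, alpha=0.1, eps=0.3):
    """Successive perturbation generation (sketch): perturbations from the
    previous epoch are reused and strengthened by one sign-gradient step,
    instead of being regenerated from scratch every epoch."""
    deltas = np.zeros_like(X)  # per-example perturbations carried across epochs
    for _ in range(epochs):
        for i in range(len(X)):
            g = input_grad(w, X[i] + deltas[i], Y[i])
            # One incremental step, projected back onto the eps-ball.
            deltas[i] = np.clip(deltas[i] + alpha * np.sign(g), -eps, eps)
    return deltas
```

In a full training loop the model weights `w` would also be updated each epoch, so the carried perturbations adapt to the shifting model; here they are frozen only to keep the sketch self-contained.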