CLC number: TP311; TP183

On-line Access: 2024-08-27

Received: 2023-10-17

Revision Accepted: 2024-05-08

Crosschecked: 2023-04-26


 ORCID:

Zhen LIANG

https://orcid.org/0000-0002-1171-7061

Wanwei LIU

https://orcid.org/0000-0002-2315-1704


Frontiers of Information Technology & Electronic Engineering  2023 Vol.24 No.10 P.1375-1389

https://doi.org/10.1631/FITEE.2300059


Towards robust neural networks via a global and monotonically decreasing robustness training strategy


Author(s):  Zhen LIANG, Taoran WU, Wanwei LIU, Bai XUE, Wenjing YANG, Ji WANG, Zhengbin PANG

Affiliation(s):  1. Institute for Quantum Information & State Key Laboratory of High Performance Computing, National University of Defense Technology, Changsha 410073, China; 2. State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China; 3. School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing 100190, China; 4. College of Computer Science and Technology, National University of Defense Technology, Changsha 410073, China; 5. Laboratory of Software Engineering for Complex Systems, National University of Defense Technology, Changsha 410073, China

Corresponding email(s):   liangzhen@nudt.edu.cn, wwliu@nudt.edu.cn

Key Words:  Robust neural networks, Training method, Drawdown risk, Global robustness training, Monotonically decreasing robustness



Zhen LIANG, Taoran WU, Wanwei LIU, Bai XUE, Wenjing YANG, Ji WANG, Zhengbin PANG. Towards robust neural networks via a global and monotonically decreasing robustness training strategy[J]. Frontiers of Information Technology & Electronic Engineering, 2023, 24(10): 1375-1389.

@article{liang2023towards,
title="Towards robust neural networks via a global and monotonically decreasing robustness training strategy",
author="Zhen LIANG, Taoran WU, Wanwei LIU, Bai XUE, Wenjing YANG, Ji WANG, Zhengbin PANG",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="24",
number="10",
pages="1375-1389",
year="2023",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.2300059"
}

%0 Journal Article
%T Towards robust neural networks via a global and monotonically decreasing robustness training strategy
%A Zhen LIANG
%A Taoran WU
%A Wanwei LIU
%A Bai XUE
%A Wenjing YANG
%A Ji WANG
%A Zhengbin PANG
%J Frontiers of Information Technology & Electronic Engineering
%V 24
%N 10
%P 1375-1389
%@ 2095-9184
%D 2023
%I Zhejiang University Press & Springer
%R 10.1631/FITEE.2300059

TY - JOUR
T1 - Towards robust neural networks via a global and monotonically decreasing robustness training strategy
A1 - Zhen LIANG
A1 - Taoran WU
A1 - Wanwei LIU
A1 - Bai XUE
A1 - Wenjing YANG
A1 - Ji WANG
A1 - Zhengbin PANG
JO - Frontiers of Information Technology & Electronic Engineering
VL - 24
IS - 10
SP - 1375
EP - 1389
SN - 2095-9184
Y1 - 2023
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.2300059
ER -


Abstract: 
The robustness of deep neural networks (DNNs) has raised great concern in the academic and industrial communities, especially in safety-critical domains. Instead of verifying whether a robustness property holds for a given neural network, this paper focuses on training neural networks that are robust with respect to given perturbations. The state-of-the-art training methods, interval bound propagation (IBP) and CROWN-IBP, perform well under small perturbations, but their performance declines significantly under large perturbations; we term this phenomenon "drawdown risk." Specifically, drawdown risk refers to the fact that IBP-family training methods cannot deliver the expected robust neural networks under larger perturbations as reliably as they do under smaller ones. To alleviate this risk, we propose a global and monotonically decreasing robustness training strategy that takes multiple perturbations into account during each training epoch (global robustness training) and combines the corresponding robustness losses with monotonically decreasing weights (monotonically decreasing robustness training). Experiments demonstrate that the presented strategy maintains performance under small perturbations while greatly alleviating the drawdown risk under large perturbations. Notably, our training method also achieves higher model accuracy than the original training methods, meaning that the presented strategy gives more balanced consideration to robustness and accuracy.

Training robust neural networks via a global and monotonically decreasing robustness strategy

Zhen LIANG1, Taoran WU2,3, Wanwei LIU4,5, Bai XUE2, Wenjing YANG1, Ji WANG1, Zhengbin PANG4
1 Institute for Quantum Information & State Key Laboratory of High Performance Computing, National University of Defense Technology, Changsha 410073, China
2 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
3 School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing 100190, China
4 College of Computer Science and Technology, National University of Defense Technology, Changsha 410073, China
5 Laboratory of Software Engineering for Complex Systems, National University of Defense Technology, Changsha 410073, China
Abstract: The robustness of deep neural networks has drawn great attention from academia and industry, especially in safety-critical domains. Rather than verifying whether a robustness property holds for a neural network, this paper focuses on training robust neural networks under given perturbations. The representative existing training methods, interval bound propagation (IBP) and CROWN-IBP, perform well under smaller perturbations but degrade significantly under larger ones, a phenomenon we call drawdown risk. Specifically, drawdown risk refers to the phenomenon that, compared with smaller perturbation cases, IBP-family training methods cannot provide the expected robust neural networks under larger perturbations. To alleviate this risk, we propose a global, monotonically decreasing robustness training strategy, which considers multiple perturbations in each training epoch (global robustness training) and combines the corresponding robustness losses with monotonically decreasing weights (monotonically decreasing robustness training). Experiments show that the proposed strategy preserves the original algorithms' performance under smaller perturbations and greatly mitigates the drawdown risk under larger perturbations. Notably, compared with the original training methods, the proposed strategy retains more model accuracy, meaning that it balances robustness and accuracy more evenly.

Key words: Robust neural networks; Training method; Drawdown risk; Global robustness training; Monotonically decreasing robustness


