
Frontiers of Information Technology & Electronic Engineering

ISSN 2095-9184 (print), ISSN 2095-9230 (online)

Towards robust neural networks via a global and monotonically decreasing robustness training strategy

Abstract: The robustness of deep neural networks (DNNs) has raised great concern in the academic and industrial communities, especially in safety-critical domains. Instead of verifying whether a robustness property holds for a given neural network, this paper focuses on training neural networks that are robust with respect to given perturbations. The state-of-the-art training methods, interval bound propagation (IBP) and CROWN-IBP, perform well under small perturbations, but their performance declines significantly under large perturbations, a phenomenon this paper terms "drawdown risk": IBP-family training methods fail to produce, under larger perturbations, robust neural networks of the quality they deliver under smaller ones. To alleviate this drawdown risk, we propose a global and monotonically decreasing robustness training strategy that takes multiple perturbations into account during each training epoch (global robustness training) and combines the corresponding robustness losses with monotonically decreasing weights (monotonically decreasing robustness training). Experiments demonstrate that the proposed strategy maintains performance under small perturbations while alleviating the drawdown risk under large perturbations to a great extent. It is also noteworthy that our method achieves higher model accuracy than the original training methods, meaning that the proposed strategy gives more balanced consideration to robustness and accuracy.

Key words: Robust neural networks; Training method; Drawdown risk; Global robustness training; Monotonically decreasing robustness
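The core idea from the abstract, combining the robustness losses for several perturbation radii with monotonically decreasing weights, can be sketched in a few lines of Python. This is a minimal illustrative sketch, not the authors' implementation: the function names, the weight normalization, and the toy stand-in for an IBP/CROWN-IBP verified loss are all assumptions.

```python
def combined_robust_loss(robust_loss_fn, epsilons, weights):
    """Weighted combination of per-perturbation robustness losses.

    epsilons: perturbation radii considered in one training epoch
              ("global robustness training").
    weights:  monotonically decreasing, so larger radii contribute less
              ("monotonically decreasing robustness training").
    Normalizing by the weight sum is an illustrative choice.
    """
    assert len(epsilons) == len(weights)
    assert all(w1 >= w2 for w1, w2 in zip(weights, weights[1:])), \
        "weights must be monotonically decreasing"
    total = sum(w * robust_loss_fn(eps) for eps, w in zip(epsilons, weights))
    return total / sum(weights)

# Toy stand-in for a verified robust loss: it grows with the radius eps,
# mimicking the fact that larger perturbations are harder to certify.
toy_loss = lambda eps: eps ** 2

eps_schedule = [0.1, 0.2, 0.3, 0.4]   # multiple perturbations per epoch
w_schedule = [0.4, 0.3, 0.2, 0.1]     # monotonically decreasing weights

print(combined_robust_loss(toy_loss, eps_schedule, w_schedule))
```

In an actual training loop, `robust_loss_fn` would be the IBP or CROWN-IBP verified loss evaluated at each radius, and the combined value would be backpropagated once per batch.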

Chinese Summary (translated): A robust neural network training method based on a global and monotonically decreasing robustness strategy

Zhen Liang1, Taoran Wu2,3, Wanwei Liu4,5, Bai Xue2, Wenjing Yang1, Ji Wang1, Zhengbin Pang4
1 Institute for Quantum Information & State Key Laboratory of High Performance Computing, National University of Defense Technology, Changsha 410073, China
2 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
3 School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing 100190, China
4 College of Computer Science, National University of Defense Technology, Changsha 410073, China
5 Laboratory of Software Engineering for Complex Systems, National University of Defense Technology, Changsha 410073, China
Abstract: The robustness of deep neural networks has drawn great attention in academia and industry, especially in safety-critical domains. Rather than verifying whether a neural network's robustness property holds, this paper focuses on training robust neural networks under given perturbations. The representative existing training methods, interval bound propagation (IBP) and CROWN-IBP, perform well under small perturbations, but their performance declines significantly under large perturbations, which this paper terms drawdown risk. Specifically, drawdown risk refers to the phenomenon that, compared with the small-perturbation case, IBP-family training methods cannot provide the expected robust neural networks under larger perturbations. To alleviate this drawdown risk, we propose a global, monotonically decreasing training strategy for robust neural networks, which considers multiple perturbations in each training epoch (global robustness training) and combines the corresponding robustness losses with monotonically decreasing weights (monotonically decreasing robustness training). Experiments show that the proposed strategy preserves the performance of the original algorithms under small perturbations, and the drawdown risk under large perturbations is alleviated to a great extent. Notably, compared with the original training methods, the proposed strategy retains more model accuracy, which means it gives more balanced consideration to model robustness and accuracy.

Key words: Robust neural networks; Training method; Drawdown risk; Global robustness training; Monotonically decreasing robustness





DOI: 10.1631/FITEE.2300059
CLC number: TP311; TP183
On-line Access: 2023-10-27
Received: 2023-02-01
Revision Accepted: 2023-10-27
Crosschecked: 2023-04-26

Journal of Zhejiang University-SCIENCE, 38 Zheda Road, Hangzhou 310027, China
Tel: +86-571-87952276; Fax: +86-571-87952331; E-mail: jzus@zju.edu.cn
Copyright © 2000~ Journal of Zhejiang University-SCIENCE