CLC number: TP391

On-line Access: 2025-05-06

Received: 2023-12-25

Revision Accepted: 2024-04-07

Crosschecked: 2025-05-06

ORCID:

Yahong HAN: https://orcid.org/0000-0003-2768-1398

Shuai ZHAO: https://orcid.org/0000-0002-8745-9433

Frontiers of Information Technology & Electronic Engineering  2025 Vol.26 No.4 P.510-533

http://doi.org/10.1631/FITEE.2300867


A comprehensive survey of physical adversarial vulnerabilities in autonomous driving systems


Author(s):  Shuai ZHAO, Boyuan ZHANG, Yucheng SHI, Yang ZHAI, Yahong HAN, Qinghua HU

Affiliation(s):  College of Intelligence and Computing, Tianjin University, Tianjin 300072, China; Tianjin Key Laboratory of Machine Learning, Tianjin 300072, China; 中汽智联技术有限公司, Tianjin 300000, China

Corresponding email(s):   yahong@tju.edu.cn

Key Words:  Physical adversarial attacks, Physical adversarial defenses, Artificial intelligence safety, Deep learning, Autonomous driving system, Data fusion, Adversarial vulnerability


Shuai ZHAO, Boyuan ZHANG, Yucheng SHI, Yang ZHAI, Yahong HAN, Qinghua HU. A comprehensive survey of physical adversarial vulnerabilities in autonomous driving systems[J]. Frontiers of Information Technology & Electronic Engineering, 2025, 26(4): 510-533.

@article{zhao2025comprehensive,
title="A comprehensive survey of physical adversarial vulnerabilities in autonomous driving systems",
author="Shuai ZHAO, Boyuan ZHANG, Yucheng SHI, Yang ZHAI, Yahong HAN, Qinghua HU",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="26",
number="4",
pages="510-533",
year="2025",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.2300867"
}

%0 Journal Article
%T A comprehensive survey of physical adversarial vulnerabilities in autonomous driving systems
%A Shuai ZHAO
%A Boyuan ZHANG
%A Yucheng SHI
%A Yang ZHAI
%A Yahong HAN
%A Qinghua HU
%J Frontiers of Information Technology & Electronic Engineering
%V 26
%N 4
%P 510-533
%@ 2095-9184
%D 2025
%I Zhejiang University Press & Springer
%R 10.1631/FITEE.2300867

TY - JOUR
T1 - A comprehensive survey of physical adversarial vulnerabilities in autonomous driving systems
A1 - Shuai ZHAO
A1 - Boyuan ZHANG
A1 - Yucheng SHI
A1 - Yang ZHAI
A1 - Yahong HAN
A1 - Qinghua HU
JO - Frontiers of Information Technology & Electronic Engineering
VL - 26
IS - 4
SP - 510
EP - 533
SN - 2095-9184
Y1 - 2025
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.2300867
ER -


Abstract: 
Autonomous driving systems (ADSs) have attracted wide attention in the machine learning community. With the help of deep neural networks (DNNs), ADSs have shown both satisfactory performance under significant environmental uncertainty and the ability to compensate for system failures without external intervention. However, the vulnerability of ADSs has raised concerns, since DNNs have been proven vulnerable to adversarial attacks. In this paper, we present a comprehensive survey of current physical adversarial vulnerabilities in ADSs. We first divide physical adversarial attack and defense methods by their deployment restrictions into three scenarios: real-world, simulator-based, and digital-world. Then, we consider the adversarial vulnerabilities of the various sensors in ADSs and separate the attacks into camera-based, light detection and ranging (LiDAR) based, and multi-sensor fusion based attacks. Subsequently, we divide the attack tasks by traffic element. For physical defenses, we establish a taxonomy covering input image preprocessing, adversarial example detection, and model enhancement of the DNN models, to achieve full coverage of adversarial defenses. Based on this survey, we finally discuss the challenges in this research field and provide an outlook on future directions.
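
As context for the camera-based attacks mentioned above, the sketch below shows the core optimization loop that many physical patch attacks share: a patch is trained with Expectation over Transformation (EOT) so that it keeps fooling a model under random placement and lighting. This is a minimal illustration, not any specific method from the survey; the ResNet-18 victim model, patch size, transformation ranges, and target label are all illustrative assumptions.

import torch
import torch.nn.functional as F
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"
# Stand-in victim model; the surveyed attacks target detectors and classifiers
# alike, but a classifier keeps the sketch short.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).to(device).eval()

patch = torch.rand(3, 50, 50, device=device, requires_grad=True)  # trainable patch
optimizer = torch.optim.Adam([patch], lr=0.01)
target = torch.tensor([0], device=device)            # hypothetical target label
scene = torch.rand(1, 3, 224, 224, device=device)    # stand-in for a road scene

def apply_patch(image, patch):
    # Paste the patch at a random position with random brightness; averaging
    # the loss over such random draws is the EOT idea that helps the patch
    # survive printing, viewpoint, and lighting changes in the physical world.
    img = image.clone()
    brightness = 0.8 + 0.4 * torch.rand(1, device=device)
    p = torch.clamp(patch * brightness, 0, 1)
    x = torch.randint(0, image.shape[-1] - 50, (1,)).item()
    y = torch.randint(0, image.shape[-2] - 50, (1,)).item()
    img[:, :, y:y + 50, x:x + 50] = p
    return img

for step in range(200):
    optimizer.zero_grad()
    # Expectation over transformations: average the loss over several draws.
    loss = sum(F.cross_entropy(model(apply_patch(scene, patch)), target)
               for _ in range(4)) / 4
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        patch.clamp_(0, 1)  # keep the patch in valid (printable) RGB range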

A comprehensive survey of physical adversarial vulnerabilities in autonomous driving systems

Shuai ZHAO1,2,3, Boyuan ZHANG1,2, Yucheng SHI1,2, Yang ZHAI1,2,3, Yahong HAN1,2, Qinghua HU1,2
1 College of Intelligence and Computing, Tianjin University, Tianjin 300072, China
2 Tianjin Key Laboratory of Machine Learning, Tianjin 300072, China
3 中汽智联技术有限公司, Tianjin 300000, China
Abstract: Autonomous driving systems (ADSs) have attracted wide attention in the field of machine learning. With the help of deep neural networks (DNNs), these systems not only deliver satisfactory performance under significant environmental uncertainty but can also correct system failures without external intervention. However, because DNNs are susceptible to adversarial examples, the vulnerability of ADSs has become a research focus. This paper surveys in detail the physical adversarial vulnerabilities of current ADSs. First, physical adversarial attack and defense methods are divided by deployment restriction into three categories: real world, simulated world, and digital world. The adversarial attacks on the different sensors of ADSs are analyzed and divided into camera-based, LiDAR-based, and multi-sensor fusion based attacks. Attack tasks are classified by traffic element. For physical defenses, a comprehensive defense system for DNN models is built on image preprocessing, adversarial detection, and model-enhancement defenses. Finally, the challenges facing this research field are discussed and future directions are outlined.

Key words: Physical adversarial attacks; Physical adversarial defenses; Artificial intelligence safety; Deep learning; Autonomous driving system; Data fusion; Adversarial vulnerability
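
The input-preprocessing branch of the defense taxonomy named above can likewise be illustrated with a minimal sketch: re-encode and lightly smooth each frame before it reaches the model so that pixel-level adversarial detail is destroyed. The JPEG quality factor and blur kernel size below are illustrative assumptions, not values taken from the surveyed defenses.

import io
import torch
import torchvision.transforms.functional as TF
from PIL import Image

def preprocess_defense(frame: torch.Tensor, jpeg_quality: int = 75) -> torch.Tensor:
    # frame: (3, H, W) float tensor in [0, 1]; returns a purified tensor.
    # 1) JPEG re-encoding discards much of the high-frequency perturbation.
    buf = io.BytesIO()
    TF.to_pil_image(frame).save(buf, format="JPEG", quality=jpeg_quality)
    buf.seek(0)
    purified = TF.to_tensor(Image.open(buf))
    # 2) A light Gaussian blur suppresses residual adversarial noise.
    return TF.gaussian_blur(purified, kernel_size=3)

# Usage: feed the purified frame, not the raw camera frame, to the model:
#     logits = model(preprocess_defense(frame).unsqueeze(0))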
