
CLC number: TP18;U495

On-line Access: 2023-01-21

Received: 2022-04-03

Revision Accepted: 2023-01-21

Crosschecked: 2022-08-10

Frontiers of Information Technology & Electronic Engineering 

Accepted manuscript available online (unedited version)

Stochastic pedestrian avoidance for autonomous vehicles using hybrid reinforcement learning

Author(s):  Huiqian LI, Jin HUANG, Zhong CAO, Diange YANG, Zhihua ZHONG

Affiliation(s):  School of Vehicle and Mobility, Tsinghua University, Beijing 100084, China

Corresponding email(s):  lihq20@mails.tsinghua.edu.cn, huangjin@tsinghua.edu.cn, caoc15@mails.tsinghua.edu.cn, ydg@tsinghua.edu.cn

Key Words:  Pedestrian; Hybrid reinforcement learning; Autonomous vehicles; Decision-making


Huiqian LI, Jin HUANG, Zhong CAO, Diange YANG, Zhihua ZHONG. Stochastic pedestrian avoidance for autonomous vehicles using hybrid reinforcement learning[J]. Frontiers of Information Technology & Electronic Engineering, in press. https://doi.org/10.1631/FITEE.2200128

@article{title="Stochastic pedestrian avoidance for autonomous vehicles using hybrid reinforcement learning",
author="Huiqian LI, Jin HUANG, Zhong CAO, Diange YANG, Zhihua ZHONG",
journal="Frontiers of Information Technology & Electronic Engineering",
year="in press",
publisher="Zhejiang University Press & Springer"
}

%0 Journal Article
%T Stochastic pedestrian avoidance for autonomous vehicles using hybrid reinforcement learning
%A Huiqian LI
%A Jin HUANG
%A Zhong CAO
%A Diange YANG
%A Zhihua ZHONG
%J Frontiers of Information Technology & Electronic Engineering
%P 131-140
%@ 2095-9184
%D in press
%I Zhejiang University Press & Springer

T1 - Stochastic pedestrian avoidance for autonomous vehicles using hybrid reinforcement learning
A1 - Huiqian LI
A1 - Jin HUANG
A1 - Zhong CAO
A1 - Diange YANG
A1 - Zhihua ZHONG
J0 - Frontiers of Information Technology & Electronic Engineering
SP - 131
EP - 140
SN - 2095-9184
Y1 - in press
PB - Zhejiang University Press & Springer
ER -

Ensuring pedestrian safety is essential and challenging when autonomous vehicles are involved. Classical pedestrian avoidance strategies cannot handle uncertainty, while learning-based methods lack performance guarantees. In this paper we propose a hybrid reinforcement learning (HRL) approach that allows autonomous vehicles to interact safely with pedestrians whose behavior is uncertain. The method integrates a rule-based strategy with a reinforcement learning strategy. The confidence of each strategy is evaluated using data recorded during training, and an activation function then selects the policy with the higher confidence. In this way, the final policy is guaranteed to perform no worse than the rule-based policy. To demonstrate the effectiveness of the proposed method, we validate it in simulation using an accelerated testing technique to generate stochastic pedestrians. The results show that the method increases the pedestrian-avoidance success rate to 98.8%, compared with 94.4% for the baseline method.
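The confidence-based switching described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the policy functions, the `state` fields, and the confidence values are all hypothetical stand-ins, and the margin parameter is an assumption used to bias selection toward the rule-based fallback.

```python
def rule_based_action(state):
    """Hypothetical rule-based policy: brake hard when a pedestrian
    is close, otherwise accelerate gently. Returns an acceleration
    command in m/s^2."""
    if state["pedestrian_distance"] < 10.0:
        return -1.0
    return 0.5


def rl_action(state):
    """Stand-in for a learned (RL) policy. A real implementation
    would query a trained network; here we return a fixed command."""
    return 0.3


def confidence_select(state, conf_rule, conf_rl, margin=0.05):
    """Activation rule: use the RL policy only when its estimated
    confidence exceeds the rule-based confidence by a margin;
    otherwise fall back to the rule-based policy, so the hybrid
    policy never performs worse than the rule-based baseline."""
    if conf_rl > conf_rule + margin:
        return rl_action(state), "rl"
    return rule_based_action(state), "rule"


# Example: a nearby pedestrian and a low-confidence RL policy
# trigger the rule-based fallback (hard braking).
state = {"pedestrian_distance": 8.0}
action, source = confidence_select(state, conf_rule=0.9, conf_rl=0.6)
```

In the paper the confidence estimates come from data recorded during training; any monotone estimate (e.g. a calibrated value-error bound) could be substituted in this sketch.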




Journal of Zhejiang University-SCIENCE, 38 Zheda Road, Hangzhou 310027, China
Tel: +86-571-87952783; E-mail: cjzhang@zju.edu.cn
Copyright © 2000 - 2024 Journal of Zhejiang University-SCIENCE