
CLC number: TP242
On-line Access: 2026-01-08
Received: 2025-05-29
Revision Accepted: 2025-10-26
Crosschecked: 2026-01-08
Yinan YANG, Zhiye WANG, Xuan KONG, Peng ZHI, Dapeng ZHANG, Rui ZHOU, Qingguo ZHOU. E2MN: human-inspired end-to-end mapless navigation with oscillation suppression and short-term memory[J]. Frontiers of Information Technology & Electronic Engineering, 2025, 26(11): 2254-2281.
Abstract: Robotic navigation in unknown environments is challenging because high-definition maps are unavailable, and building maps in real time demands substantial computational resources. Nevertheless, raw sensor data can provide sufficient environmental context for navigation. This paper presents an interpretable, mapless navigation method that uses only two-dimensional (2D) light detection and ranging (LiDAR) and mimics the strategies humans use to escape from dead ends. Unlike traditional planners that depend on global paths, or vision- and learning-based methods that require large amounts of data and powerful hardware, our approach is lightweight, robust, and needs no prior map. It effectively suppresses oscillations and recovers autonomously from local-minimum traps. Experiments across diverse environments and routes, including ablation studies and comparisons with existing frameworks, show that the proposed method achieves map-like performance without a map: it reduces the average path length by 50.51% relative to the classical mapless Bug2 algorithm while increasing it by only 17.57% relative to map-based navigation.
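
To make the two mechanisms named in the title concrete, the sketch below shows one plausible way a reactive 2D-LiDAR controller could use a short-term memory of recent headings to detect left/right dithering and commit to one side (oscillation suppression). This is an illustrative sketch under stated assumptions, not the authors' E2MN implementation: the class name MaplessNavigator, the clearance threshold, the flip-counting rule, and the rotate-in-place stand-in for dead-end escape are all hypothetical.

# Minimal sketch of a reactive 2D-LiDAR navigator with oscillation
# suppression via short-term memory. Illustrative only; NOT the
# authors' E2MN method. All thresholds and names are assumptions.
import math
from collections import deque

import numpy as np


class MaplessNavigator:
    def __init__(self, n_beams=360, safe_dist=0.6, memory_len=10):
        # Beam bearings over a full 360-degree scan, robot frame.
        self.angles = np.linspace(-math.pi, math.pi, n_beams, endpoint=False)
        self.safe_dist = safe_dist                       # clearance (m), assumed
        self.heading_memory = deque(maxlen=memory_len)   # short-term memory
        self.committed_side = None                       # +1 left / -1 right

    def steer(self, scan, goal_bearing):
        """Pick a heading (rad) from one LiDAR scan and the goal bearing."""
        scan = np.asarray(scan, dtype=float)
        free = scan > self.safe_dist          # beams with enough clearance
        if not free.any():                    # fully blocked: crude stand-in
            return math.pi / 2 * (self.committed_side or 1.0)
        # Among free beams, take the bearing closest to the goal
        # (angle wrap-around ignored for brevity).
        candidates = self.angles[free]
        heading = candidates[np.argmin(np.abs(candidates - goal_bearing))]
        heading = self._suppress_oscillation(heading)
        self.heading_memory.append(heading)
        return heading

    def _suppress_oscillation(self, heading):
        """If recent headings flip sign repeatedly, commit to one side,
        like a human who stops dithering and picks a direction."""
        if len(self.heading_memory) < self.heading_memory.maxlen:
            return heading
        signs = np.sign(list(self.heading_memory))
        flips = np.count_nonzero(np.diff(signs))
        if flips >= len(signs) // 2:          # oscillating: lock a side
            if self.committed_side is None:
                self.committed_side = 1.0 if heading >= 0 else -1.0
            return self.committed_side * abs(heading)
        self.committed_side = None            # stable again: release lock
        return heading


if __name__ == "__main__":
    nav = MaplessNavigator()
    open_scan = np.full(360, 5.0)                    # synthetic open space
    print(nav.steer(open_scan, math.radians(30)))    # ~0.52 rad, toward goal

In use, steer() would be called once per incoming scan; the side lock is released as soon as the recent headings stabilize, so the suppression mechanism does not interfere with ordinary goal-seeking.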