
CLC number: TP391

On-line Access: 2024-08-27

Received: 2023-10-17

Revision Accepted: 2024-05-08

Crosschecked: 2024-07-24


 ORCID:

Liwen LIU

https://orcid.org/0000-0003-1867-3046

Ben FEI

https://orcid.org/0000-0002-3219-9996


Frontiers of Information Technology & Electronic Engineering  2024 Vol.25 No.7 P.938-950

http://doi.org/10.1631/FITEE.2300388


GeeNet: robust and fast point cloud completion for ground elevation estimation towards autonomous vehicles


Author(s):  Liwen LIU, Weidong YANG, Ben FEI

Affiliation(s):  Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University, Shanghai 200433, China; Zhuhai Fudan Innovation Institute, Zhuhai 519000, China

Corresponding email(s):   21210240022@m.fudan.edu.cn, bfei21@m.fudan.edu.cn

Key Words:  Point cloud completion, Ground elevation estimation, Real-time, Autonomous vehicles


Liwen LIU, Weidong YANG, Ben FEI. GeeNet: robust and fast point cloud completion for ground elevation estimation towards autonomous vehicles[J]. Frontiers of Information Technology & Electronic Engineering, 2024, 25(7): 938-950.

@article{title="GeeNet: robust and fast point cloud completion for ground elevation estimation towards autonomous vehicles",
author="Liwen LIU, Weidong YANG, Ben FEI",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="25",
number="7",
pages="938-950",
year="2024",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.2300388"
}



Abstract: 
Ground elevation estimation is vital for numerous applications in autonomous vehicles and intelligent robotics, including three-dimensional object detection, navigable space detection, point cloud matching for localization, and registration for mapping. However, most existing works treat the ground as a plane without height information, which leads to inaccurate manipulation in these applications. In this work, we propose GeeNet, a novel end-to-end, lightweight method that completes the ground in nearly real time and simultaneously estimates the ground elevation in a grid-based representation. GeeNet mixes two- and three-dimensional convolutions to keep the architecture lightweight while regressing ground elevation information for each cell of the grid. For the first time, GeeNet achieves ground elevation estimation from semantic scene completion. We validate GeeNet on the SemanticKITTI and SemanticPOSS datasets, demonstrating its qualitative and quantitative performance on ground elevation estimation and semantic scene completion of point clouds. Moreover, GeeNet's cross-dataset generalization capability is experimentally proven. GeeNet achieves state-of-the-art performance in point cloud completion and ground elevation estimation, with a runtime of 0.88 ms.
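The abstract's grid-based formulation, regressing one elevation value per 2D cell from a 3D point cloud, can be illustrated without the learned network. The sketch below is an illustrative toy, not the authors' code: it voxelizes points into a 3D occupancy grid and reads off a per-cell elevation as the center height of the lowest occupied voxel in each column, whereas GeeNet learns this mapping with mixed 2D/3D convolutions. All grid sizes, coordinates, and the lowest-voxel heuristic are assumptions made for illustration.

```python
import numpy as np

def voxelize(points, grid=(8, 4, 4), cell=1.0):
    """Scatter (x, y, z) points into a binary occupancy grid shaped (D, H, W)."""
    occ = np.zeros(grid, dtype=np.float32)
    idx = np.floor(points / cell).astype(int)
    bounds = np.array(grid)[::-1]              # per-column limits (W, H, D)
    keep = np.all((idx >= 0) & (idx < bounds), axis=1)
    xs, ys, zs = idx[keep].T
    occ[zs, ys, xs] = 1.0
    return occ

def ground_elevation(occ, cell=1.0):
    """One elevation per 2D cell: center height of the lowest occupied voxel in
    that column, NaN where the column is empty. GeeNet regresses this quantity
    with a learned network rather than this fixed heuristic."""
    depth = occ.shape[0]
    zcoord = np.arange(depth).reshape(depth, 1, 1)
    lowest = np.where(occ > 0, zcoord, depth).min(axis=0)   # sentinel = depth
    return np.where(lowest < depth, (lowest + 0.5) * cell, np.nan)

# Toy cloud (metres); coordinates and grid size are made up for illustration.
pts = np.array([[0.2, 0.3, 1.2],
                [0.4, 0.1, 3.7],
                [2.5, 0.6, 0.4]])
occ = voxelize(pts)
elev = ground_elevation(occ)   # elev[0, 0] == 1.5, elev[0, 2] == 0.5
```

The output is exactly the paper's target representation: a dense 2D grid of heights, with empty columns left for the completion stage to fill.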

Liwen LIU 1, Weidong YANG 1,2, Ben FEI 1
1 Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University, Shanghai 200433, China
2 Zhuhai Fudan Innovation Institute, Zhuhai 519000, China




Journal of Zhejiang University-SCIENCE, 38 Zheda Road, Hangzhou 310027, China
Tel: +86-571-87952783; E-mail: cjzhang@zju.edu.cn
Copyright © 2000 - 2024 Journal of Zhejiang University-SCIENCE