
Frontiers of Information Technology & Electronic Engineering

ISSN 2095-9184 (print), ISSN 2095-9230 (online)

A vision-centered multi-sensor fusing approach to self-localization and obstacle perception for robotic cars

Abstract: Most state-of-the-art robotic cars' perception systems differ considerably from the way a human driver understands traffic environments. First, humans assimilate information from the traffic scene mainly through visual perception, while machine perception of traffic environments must fuse information from several different kinds of sensors to meet safety-critical requirements. Second, an experienced human driver copes well with dynamic traffic environments, in which machine perception can easily produce noisy results, whereas a robotic car requires nearly 100% correct perception for autonomous driving. In this paper, we propose a vision-centered multi-sensor fusion framework for traffic environment perception in autonomous driving, which fuses camera, LIDAR, and GIS information consistently via both geometric and semantic constraints for efficient self-localization and obstacle perception. We also discuss robust machine vision algorithms that have been successfully integrated with the framework, addressing multiple levels of machine vision techniques, from collecting training data, efficiently processing sensor data, and extracting low-level features to higher-level object and environment mapping. The proposed framework has been tested extensively in real urban scenes with our self-developed robotic cars for eight years, and the empirical results validate its robustness and efficiency.
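The abstract describes fusing camera and LIDAR data via geometric constraints. As an illustrative sketch only (not the paper's actual method, whose calibration and fusion details are not given here), the core geometric step of such a fusion is projecting 3D LIDAR points into the camera image with known extrinsics (R, t) and intrinsics (K); all numeric values below are made-up demo parameters:

```python
# Illustrative sketch (not from the paper): geometric camera-LIDAR fusion by
# projecting 3D LIDAR points into the image plane with a pinhole camera model.
# K (intrinsics), R, t (extrinsics) are assumed, made-up calibration values.
import numpy as np

def project_lidar_to_image(points, K, R, t):
    """Project Nx3 LIDAR points (sensor frame) into pixel coordinates.

    Points behind the camera (z <= 0 in the camera frame) are dropped.
    """
    cam = points @ R.T + t            # LIDAR frame -> camera frame
    cam = cam[cam[:, 2] > 0]          # keep only points in front of the camera
    uv = cam @ K.T                    # apply intrinsic matrix
    return uv[:, :2] / uv[:, 2:3]     # perspective division -> (u, v) pixels

if __name__ == "__main__":
    K = np.array([[500.0, 0.0, 320.0],
                  [0.0, 500.0, 240.0],
                  [0.0, 0.0, 1.0]])   # focal length 500 px, center (320, 240)
    R, t = np.eye(3), np.zeros(3)     # identity extrinsics for the demo
    pts = np.array([[0.0, 0.0, 10.0],   # straight ahead -> image center
                    [1.0, 0.0, 10.0],   # 1 m to the right at 10 m depth
                    [0.0, 0.0, -5.0]])  # behind the camera -> dropped
    print(project_lidar_to_image(pts, K, R, t))
    # [[320. 240.]
    #  [370. 240.]]
```

Once LIDAR points carry pixel coordinates, image-level semantics (e.g., an obstacle mask) can be attached to them, which is one common way to combine geometric and semantic constraints.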

Key words: Visual perception, Self-localization, Mapping, Motion planning, Robotic car

Chinese Summary (translated): A vision-centered multi-sensor fusion approach to self-localization and obstacle perception for autonomous cars

Summary: Human drivers and autonomous driving systems understand traffic environments in markedly different ways. First, humans understand traffic scenes mainly through vision, whereas machine perception must fuse information from several heterogeneous sensors to guarantee driving safety. Second, an experienced driver adapts easily to various dynamic traffic environments, but existing machine perception systems often output noisy perception results, while autonomous driving demands nearly 100% accurate perception. This paper proposes a vision-centered multi-sensor fusion framework for the traffic environment perception of autonomous cars, which fuses information from cameras, LIDAR, and geographic information systems (GIS) via geometric and semantic constraints to provide accurate self-localization and robust obstacle perception. It further discusses the robust vision algorithms that have been successfully integrated into this framework, covering multiple levels from training data collection, sensor data processing, and low-level feature extraction to obstacle recognition and environment map construction. The proposed framework has been deployed on our self-developed autonomous cars and field-tested in various real urban environments for eight years; the experimental results validate the robustness and efficiency of the vision-centered multi-sensor fusion perception framework.

Key words (translated): visual perception; self-localization; mapping; motion planning; autonomous car



DOI: 10.1631/FITEE.1601873
CLC number: TP181
Downloaded: 4146 (full text), 2246 (summary)
Clicked: 8265
Cited: 3
On-line Access: 2024-08-27
Received: 2023-10-17
Revision Accepted: 2024-05-08
Crosschecked: 2017-01-10

Journal of Zhejiang University-SCIENCE, 38 Zheda Road, Hangzhou 310027, China
Tel: +86-571-87952276; Fax: +86-571-87952331; E-mail: jzus@zju.edu.cn
Copyright © 2000~ Journal of Zhejiang University-SCIENCE