
Frontiers of Information Technology & Electronic Engineering

ISSN 2095-9184 (print), ISSN 2095-9230 (online)

A vision-centered multi-sensor fusing approach to self-localization and obstacle perception for robotic cars

Abstract: Most state-of-the-art robotic cars' perception systems differ markedly from the way a human driver understands traffic environments. First, humans assimilate information from the traffic scene mainly through visual perception, while machine perception of traffic environments must fuse information from several different kinds of sensors to meet safety-critical requirements. Second, a robotic car requires nearly 100% correct perception results for autonomous driving, whereas an experienced human driver copes well with dynamic traffic environments in which machine perception can easily produce noisy results. In this paper, we propose a vision-centered multi-sensor fusion framework for traffic-environment perception in autonomous driving, which consistently fuses camera, LIDAR, and GIS information via both geometric and semantic constraints for efficient self-localization and obstacle perception. We also discuss robust machine vision algorithms that have been successfully integrated with the framework, addressing multiple levels of machine vision techniques: collecting training data, efficiently processing sensor data, extracting low-level features, and higher-level object and environment mapping. The proposed framework has been tested extensively on our self-developed robotic cars in real urban scenes for eight years, and the empirical results validate its robustness and efficiency.
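The abstract describes fusing camera and LIDAR information via geometric constraints. A minimal sketch of one such geometric step, not the paper's actual implementation, is projecting LIDAR points into the camera image so that depth can be associated with pixels; the intrinsics `K` and extrinsics `(R, t)` below are hypothetical calibration values assumed for illustration.

```python
# Illustrative sketch (not the paper's method): projecting LIDAR points into
# the image plane, the basic geometric constraint used in camera-LIDAR fusion.
# K, R, and t are assumed to come from an offline calibration step.
import numpy as np

def project_lidar_to_image(points_lidar, K, R, t):
    """Project Nx3 LIDAR points (metres) into pixel coordinates.

    Returns (uv, depth, mask): pixel coordinates, camera-frame depth, and a
    boolean mask selecting the points that lie in front of the camera.
    """
    pts_cam = points_lidar @ R.T + t      # LIDAR frame -> camera frame
    mask = pts_cam[:, 2] > 0.1            # discard points behind the camera
    pts = pts_cam[mask]
    uvw = pts @ K.T                       # pinhole projection
    uv = uvw[:, :2] / uvw[:, 2:3]         # normalize by depth
    return uv, pts[:, 2], mask

if __name__ == "__main__":
    # Toy calibration: identity rotation, camera mounted 0.2 m above the LIDAR.
    K = np.array([[700.0, 0.0, 320.0],
                  [0.0, 700.0, 240.0],
                  [0.0, 0.0, 1.0]])
    R = np.eye(3)
    t = np.array([0.0, -0.2, 0.0])
    pts = np.array([[0.0, 0.0, 10.0],     # straight ahead, 10 m away
                    [1.0, 0.0, 10.0],     # 1 m to the right
                    [0.0, 0.0, -5.0]])    # behind the camera, filtered out
    uv, depth, mask = project_lidar_to_image(pts, K, R, t)
    print(uv.round(1), depth, mask)
```

In a full fusion pipeline, the resulting pixel–depth pairs would then be checked against semantic constraints (e.g., whether the projected depth is consistent with a detected obstacle in the image) before being used for localization or obstacle perception.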

Key words: Visual perception, Self-localization, Mapping, Motion planning, Robotic car




DOI: 10.1631/FITEE.1601873
CLC number: TP181
Full text downloaded: 3648
Summary downloaded: 1865
Clicked: 6683
Cited: 3
On-line Access: 2017-01-20
Received: 2016-12-29
Revision Accepted: 2017-01-08
Crosschecked: 2017-01-10

Journal of Zhejiang University-SCIENCE, 38 Zheda Road, Hangzhou 310027, China
Tel: +86-571-87952276; Fax: +86-571-87952331; E-mail: jzus@zju.edu.cn
Copyright © 2000~ Journal of Zhejiang University-SCIENCE