CLC number: TP317.4


Crosschecked: 2014-02-19


Journal of Zhejiang University SCIENCE C 2014 Vol.15 No.3 P.174-186

http://doi.org/10.1631/jzus.C1300194


K-nearest neighborhood based integration of time-of-flight cameras and passive stereo for high-accuracy depth maps


Author(s):  Li-wei Liu, Yang Li, Ming Zhang, Liang-hao Wang, Dong-xiao Li

Affiliation(s):  Institute of Information and Communication Engineering, Zhejiang University, Hangzhou 310027, China

Corresponding email(s):   lllw19870907@zju.edu.cn, lychina@zju.edu.cn, wanglianghao@zju.edu.cn

Key Words:  Depth map, Passive stereo, Time-of-flight camera, Fusion


Li-wei Liu, Yang Li, Ming Zhang, Liang-hao Wang, Dong-xiao Li. K-nearest neighborhood based integration of time-of-flight cameras and passive stereo for high-accuracy depth maps[J]. Journal of Zhejiang University Science C, 2014, 15(3): 174-186.

@article{title="K-nearest neighborhood based integration of time-of-flight cameras and passive stereo for high-accuracy depth maps",
author="Li-wei Liu, Yang Li, Ming Zhang, Liang-hao Wang, Dong-xiao Li",
journal="Journal of Zhejiang University Science C",
volume="15",
number="3",
pages="174-186",
year="2014",
publisher="Zhejiang University Press & Springer",
doi="10.1631/jzus.C1300194"
}

%0 Journal Article
%T K-nearest neighborhood based integration of time-of-flight cameras and passive stereo for high-accuracy depth maps
%A Li-wei Liu
%A Yang Li
%A Ming Zhang
%A Liang-hao Wang
%A Dong-xiao Li
%J Journal of Zhejiang University SCIENCE C
%V 15
%N 3
%P 174-186
%@ 1869-1951
%D 2014
%I Zhejiang University Press & Springer
%R 10.1631/jzus.C1300194

TY - JOUR
T1 - K-nearest neighborhood based integration of time-of-flight cameras and passive stereo for high-accuracy depth maps
A1 - Li-wei Liu
A1 - Yang Li
A1 - Ming Zhang
A1 - Liang-hao Wang
A1 - Dong-xiao Li
JO - Journal of Zhejiang University Science C
VL - 15
IS - 3
SP - 174
EP - 186
SN - 1869-1951
Y1 - 2014
PB - Zhejiang University Press & Springer
DO - 10.1631/jzus.C1300194
ER -


Abstract: 
Both time-of-flight (ToF) cameras and passive stereo can provide depth information for captured real scenes, but each has innate limitations. Because the two are intrinsically complementary, it is desirable to appropriately leverage all the information they provide. Although some fusion methods have been presented recently, they fail to consider ToF reliability detection or ToF-based improvement of passive stereo. This study therefore proposes an approach that integrates ToF cameras and passive stereo to obtain high-accuracy depth maps. The main contributions are: (1) an energy cost function is devised to use data from ToF cameras to boost the stereo matching of passive stereo; (2) a fusion method combines the depth information from both ToF cameras and passive stereo to obtain high-accuracy depth maps. Experiments show that the proposed approach achieves improved results with high accuracy and robustness.
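As a rough illustration of contribution (1), the following Python/NumPy sketch shows one way a ToF depth prior, weighted by an amplitude-derived confidence, could be folded into a per-pixel stereo matching cost volume. It is not the authors' implementation; the function name, the simple absolute-difference photometric term, and the weights lam and sigma are assumptions made for illustration only.

import numpy as np

def tof_guided_stereo_cost(left, right, tof_disp, tof_amp, max_disp=64,
                           lam=0.3, sigma=2.0):
    # Build an (H, W, max_disp) cost volume: an absolute-difference photometric
    # term plus a ToF prior term weighted by an amplitude-based confidence map.
    # lam and sigma are illustrative weights, not values from the paper.
    h, w = left.shape
    conf = tof_amp / (tof_amp.max() + 1e-8)          # crude confidence in [0, 1]
    cost = np.full((h, w, max_disp), np.inf)
    for d in range(max_disp):
        photo = np.abs(left[:, d:] - right[:, :w - d])               # matching cost
        prior = conf[:, d:] * ((tof_disp[:, d:] - d) / sigma) ** 2   # ToF prior
        cost[:, d:, d] = photo + lam * prior
    return cost

# Winner-takes-all disparity from the guided cost volume:
# disparity = np.argmin(cost, axis=2)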

Fusion of ToF depth cameras and passive stereo for depth acquisition based on K-nearest neighborhood search

Research objective: Scene depth acquisition is a key technology in current 3D display research. There are two mainstream approaches: passive stereo matching based on binocular images, and ToF depth camera systems based on active-light ranging. Each acquires depth with its own strengths and weaknesses. This paper analyzes both approaches and fuses their results to produce higher-quality scene depth.
Innovations: The regions where the ToF depth camera is reliable are used to guide the stereo matching process, improving the matching result. A new cost-optimization depth fusion algorithm is also proposed, which merges the ToF camera measurements and the stereo-matching depth into a more accurate depth map.
Method: The algorithm has two parts (see Fig. 1 for the pipeline). First, an energy function is built from the depth measurements and the corresponding amplitude map provided by the ToF depth camera; combined with the K-nearest neighborhood algorithm, this energy function guides the original stereo matching process. Then, the refined stereo matching result is combined with the ToF depth map to construct a cost function, and the depth that minimizes this cost is selected as the final fused depth.
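As a rough illustration of the second step (the cost-based fusion), the sketch below selects, for each pixel, the depth hypothesis that minimizes a confidence-weighted distance to both the refined stereo depth and the ToF measurement. This is a minimal Python/NumPy sketch under assumed weights and an assumed hypothesis grid, not the paper's cost function.

import numpy as np

def fuse_depths(stereo_depth, tof_depth, stereo_conf, tof_conf,
                depth_candidates, sigma_s=0.05, sigma_t=0.03):
    # For each pixel, pick the depth hypothesis that minimizes a confidence-
    # weighted sum of squared distances to the stereo estimate and the ToF
    # measurement. The weighting scheme and candidate grid are assumptions.
    h, w = stereo_depth.shape
    best_cost = np.full((h, w), np.inf)
    fused = np.zeros((h, w))
    for z in depth_candidates:
        cost = (stereo_conf * ((z - stereo_depth) / sigma_s) ** 2
                + tof_conf * ((z - tof_depth) / sigma_t) ** 2)
        better = cost < best_cost
        fused[better] = z
        best_cost[better] = cost[better]
    return fused

# Example usage with a coarse hypothesis grid (hypothetical depth range in meters):
# fused = fuse_depths(d_stereo, d_tof, conf_stereo, conf_tof,
#                     depth_candidates=np.linspace(0.3, 7.5, 200))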
Conclusions: Experimental results show that the depth maps obtained by the proposed algorithm outperform those obtained by either the active or the passive method alone, and also outperform a global-optimization based depth fusion algorithm.

Keywords: Depth map; Passive stereo; ToF depth camera; Fusion


