CLC number: TP317.4
On-line Access: 2024-08-27
Received: 2023-10-17
Revision Accepted: 2024-05-08
Crosschecked: 2014-02-19
Li-wei Liu, Yang Li, Ming Zhang, Liang-hao Wang, Dong-xiao Li. K-nearest neighborhood based integration of time-of-flight cameras and passive stereo for high-accuracy depth maps[J]. Journal of Zhejiang University Science C, 2014, 15(3): 174-186.
@article{Liu2014ToFStereo,
title="K-nearest neighborhood based integration of time-of-flight cameras and passive stereo for high-accuracy depth maps",
author="Li-wei Liu, Yang Li, Ming Zhang, Liang-hao Wang, Dong-xiao Li",
journal="Journal of Zhejiang University Science C",
volume="15",
number="3",
pages="174-186",
year="2014",
publisher="Zhejiang University Press & Springer",
doi="10.1631/jzus.C1300194"
}
%0 Journal Article
%T K-nearest neighborhood based integration of time-of-flight cameras and passive stereo for high-accuracy depth maps
%A Li-wei Liu
%A Yang Li
%A Ming Zhang
%A Liang-hao Wang
%A Dong-xiao Li
%J Journal of Zhejiang University SCIENCE C
%V 15
%N 3
%P 174-186
%@ 1869-1951
%D 2014
%I Zhejiang University Press & Springer
%DOI 10.1631/jzus.C1300194
TY - JOUR
T1 - K-nearest neighborhood based integration of time-of-flight cameras and passive stereo for high-accuracy depth maps
A1 - Li-wei Liu
A1 - Yang Li
A1 - Ming Zhang
A1 - Liang-hao Wang
A1 - Dong-xiao Li
JO - Journal of Zhejiang University Science C
VL - 15
IS - 3
SP - 174
EP - 186
SN - 1869-1951
Y1 - 2014
PB - Zhejiang University Press & Springer
DO - 10.1631/jzus.C1300194
ER -
Abstract: Time-of-flight (ToF) cameras and passive stereo can both recover depth information from captured real scenes, but each has innate limitations, and the two modalities are intrinsically complementary for certain tasks. It is therefore desirable to leverage all the information that ToF cameras and passive stereo jointly provide. Although several fusion methods have been proposed recently, they do not consider ToF reliability detection or ToF-based improvement of passive stereo. This study proposes an approach that integrates ToF cameras and passive stereo to obtain high-accuracy depth maps. The main contributions are: (1) an energy cost function is devised that uses data from ToF cameras to boost the stereo matching of passive stereo; (2) a fusion method combines the depth information from both ToF cameras and passive stereo to obtain high-accuracy depth maps. Experiments show that the proposed approach achieves improved results with high accuracy and robustness.
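The two contributions in the abstract can be illustrated with a minimal sketch: a stereo cost volume augmented by a ToF-derived prior (weighted by a per-pixel ToF reliability map), followed by a confidence-weighted fusion of the two disparity estimates. This is not the paper's actual formulation; the function names, the quadratic prior, and the weight `lam` are illustrative assumptions.

```python
import numpy as np

def tof_augmented_cost(stereo_cost, tof_disp, tof_conf, lam=0.5):
    """Add a ToF-derived prior to a stereo matching cost volume.

    stereo_cost : (H, W, D) photometric matching cost per disparity level.
    tof_disp    : (H, W) disparity predicted from ToF depth (assumed
                  already warped into the stereo camera's geometry).
    tof_conf    : (H, W) ToF reliability in [0, 1] (e.g. derived from
                  the sensor's amplitude image).
    lam         : weight of the ToF prior (hypothetical value).
    """
    H, W, D = stereo_cost.shape
    d = np.arange(D).reshape(1, 1, D)
    # Quadratic penalty for disparities far from the ToF estimate,
    # scaled down wherever the ToF measurement is unreliable.
    prior = tof_conf[..., None] * (d - tof_disp[..., None]) ** 2
    return stereo_cost + lam * prior

def fuse_disparities(stereo_disp, stereo_conf, tof_disp, tof_conf):
    """Confidence-weighted average of the two disparity estimates."""
    w = stereo_conf + tof_conf + 1e-8  # avoid division by zero
    return (stereo_conf * stereo_disp + tof_conf * tof_disp) / w
```

With a uniform stereo cost and a confident ToF prior, a winner-take-all `argmin` over the augmented volume snaps to the ToF disparity; where ToF confidence drops to zero, the stereo cost is left untouched.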