CLC number: TP391.4
On-line Access: 2024-08-27
Received: 2023-10-17
Revision Accepted: 2024-05-08
Crosschecked: 2017-09-06
Hao Zhu, Qing Wang, Jingyi Yu. Light field imaging: models, calibrations, reconstructions, and applications[J]. Frontiers of Information Technology & Electronic Engineering, 2017, 18(9): 1236-1249.
@article{Zhu2017LightField,
title="Light field imaging: models, calibrations, reconstructions, and applications",
author="Hao Zhu, Qing Wang, Jingyi Yu",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="18",
number="9",
pages="1236-1249",
year="2017",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.1601727"
}
Abstract: Light field imaging is an emerging technology in computational photography. Based on innovative designs of the imaging model and the optical path, light field cameras record not only the spatial intensity of three-dimensional (3D) objects but also the angular information of the physical world, which provides new ways to address various problems in computer vision, such as 3D reconstruction, saliency detection, and object recognition. In this paper, three key aspects of light field cameras, i.e., model, calibration, and reconstruction, are reviewed extensively. Furthermore, light field based applications in informatics, physics, medicine, and biology are presented. Finally, open issues in light field imaging and long-term application prospects in other natural sciences are discussed.
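The digital refocusing idea the survey reviews (e.g., Ng's Fourier slice photography) can be sketched as a shift-and-add over the angular samples of a 4D light field: each sub-aperture view is translated in proportion to its angular offset and the views are averaged. The array layout, the `refocus` helper, and the integer-pixel shifts below are illustrative assumptions for a toy example, not code from the paper.

```python
import numpy as np

def refocus(lf, alpha):
    """Shift-and-add refocusing of a 4D light field.

    lf    : array of shape (U, V, S, T) with angular dims (U, V)
            and spatial dims (S, T); a toy stand-in for camera data.
    alpha : relative depth of the synthetic focal plane
            (alpha = 1 reproduces the original focus).
    """
    U, V, S, T = lf.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Shift each sub-aperture view in proportion to its
            # angular offset from the center, then accumulate.
            du = (1.0 - 1.0 / alpha) * (u - cu)
            dv = (1.0 - 1.0 / alpha) * (v - cv)
            out += np.roll(lf[u, v],
                           (int(round(du)), int(round(dv))),
                           axis=(0, 1))
    return out / (U * V)

# Toy light field: identical constant views refocus to the same image.
lf = np.ones((3, 3, 8, 8))
img = refocus(lf, alpha=2.0)
print(img.shape)  # (8, 8)
```

Real implementations use sub-pixel (interpolated or Fourier-domain) shifts rather than the integer `np.roll` used here for brevity.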