CLC number: O438

On-line Access: 2024-08-27

Received: 2023-10-17

Revision Accepted: 2024-05-08

Crosschecked: 2021-05-18

 ORCID:

Qiming QI

https://orcid.org/0000-0001-9141-4767

Hongqi FAN

https://orcid.org/0000-0002-9990-9163

Frontiers of Information Technology & Electronic Engineering  2022 Vol.23 No.6 P.823-844

http://doi.org/10.1631/FITEE.2100058


Multi-aperture optical imaging systems and their mathematical light field acquisition models


Author(s):  Qiming QI, Ruigang FU, Zhengzheng SHAO, Ping WANG, Hongqi FAN

Affiliation(s):  National Key Laboratory of Science and Technology on ATR, College of Electronic Science and Technology, National University of Defense Technology, Changsha 410073, China

Corresponding email(s):   qiqiming19@163.com, fanhongqi@nudt.edu.cn

Key Words:  Multi-aperture optical imaging system, Artificial compound eye, Light field camera, Camera array, Light field acquisition model



Qiming QI, Ruigang FU, Zhengzheng SHAO, Ping WANG, Hongqi FAN. Multi-aperture optical imaging systems and their mathematical light field acquisition models[J]. Frontiers of Information Technology & Electronic Engineering, 2022, 23(6): 823-844.

@article{Qi2022multiaperture,
title="Multi-aperture optical imaging systems and their mathematical light field acquisition models",
author="Qiming QI and Ruigang FU and Zhengzheng SHAO and Ping WANG and Hongqi FAN",
journal="Frontiers of Information Technology \& Electronic Engineering",
volume="23",
number="6",
pages="823-844",
year="2022",
publisher="Zhejiang University Press \& Springer",
doi="10.1631/FITEE.2100058"
}

%0 Journal Article
%T Multi-aperture optical imaging systems and their mathematical light field acquisition models
%A Qiming QI
%A Ruigang FU
%A Zhengzheng SHAO
%A Ping WANG
%A Hongqi FAN
%J Frontiers of Information Technology & Electronic Engineering
%V 23
%N 6
%P 823-844
%@ 2095-9184
%D 2022
%I Zhejiang University Press & Springer
%R 10.1631/FITEE.2100058

TY - JOUR
T1 - Multi-aperture optical imaging systems and their mathematical light field acquisition models
A1 - Qiming QI
A1 - Ruigang FU
A1 - Zhengzheng SHAO
A1 - Ping WANG
A1 - Hongqi FAN
JO - Frontiers of Information Technology & Electronic Engineering
VL - 23
IS - 6
SP - 823
EP - 844
SN - 2095-9184
Y1 - 2022
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.2100058
ER -


Abstract: 
Inspired by the compound eyes of insects, many multi-aperture optical imaging systems have been proposed to improve imaging quality, e.g., to yield a high-resolution image or an image with a large field-of-view. Previous research has reviewed existing multi-aperture optical imaging systems, but few papers emphasize the light field acquisition model, which is essential for bridging the gap between configuration design and application. In this paper, we review typical multi-aperture optical imaging systems (i.e., artificial compound eyes, light field cameras, and camera arrays) and then summarize general mathematical light field acquisition models for the different configurations. These models provide methods for calculating the key indexes of a specific multi-aperture optical imaging system, such as the field-of-view and the sub-image overlap ratio, and they serve as mathematical tools that simplify the quantitative design and evaluation of imaging systems.
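
To make the abstract's notion of "key indexes" concrete, the short sketch below computes the field-of-view and the sub-image overlap ratio for two neighbouring sub-apertures modelled as ideal parallel pinhole cameras viewing a fronto-parallel plane. It is a minimal illustration under these assumptions, not the paper's acquisition models; all function names and numbers are hypothetical.

import math

def full_fov_deg(sensor_width_mm: float, focal_length_mm: float) -> float:
    """Full angular field-of-view along one axis of an ideal pinhole camera."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

def overlap_ratio(sensor_width_mm: float, focal_length_mm: float,
                  baseline_mm: float, depth_mm: float) -> float:
    """Fraction of one sub-image that overlaps its neighbour's, assuming
    parallel optical axes and a fronto-parallel scene plane at depth_mm."""
    # Width of the scene strip imaged by one sub-aperture at that depth.
    footprint_mm = depth_mm * sensor_width_mm / focal_length_mm
    return max(0.0, (footprint_mm - baseline_mm) / footprint_mm)

if __name__ == "__main__":
    # Illustrative numbers only (not taken from the paper).
    print(f"FoV     = {full_fov_deg(4.8, 6.0):.1f} deg")             # about 43.6 deg
    print(f"Overlap = {overlap_ratio(4.8, 6.0, 50.0, 2000.0):.1%}")  # about 96.9%

Relations of this kind let a designer choose, for example, the baseline between sub-apertures so that adjacent sub-images keep enough overlap for stitching or depth estimation.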

Multi-aperture optical imaging systems and their mathematical light field acquisition models (Chinese abstract)

Qiming QI, Ruigang FU, Zhengzheng SHAO, Ping WANG, Hongqi FAN
National Key Laboratory of Science and Technology on ATR, College of Electronic Science and Technology, National University of Defense Technology, Changsha 410073, China
Abstract: Inspired by the compound eyes of insects, researchers have proposed many multi-aperture optical imaging systems to improve optical imaging quality, e.g., to obtain high-resolution images or images with a large field-of-view. The mathematical light field acquisition model links the configuration design of a multi-aperture optical imaging system to its applications, yet it has received relatively little attention. This paper systematically reviews typical multi-aperture optical imaging systems (artificial compound eyes, light field cameras, and camera arrays) and summarizes general mathematical light field acquisition models for the different configurations. The listed models can be used to calculate the key indexes of a specific multi-aperture optical imaging system, such as the field-of-view and the sub-image overlap ratio, and they also serve as mathematical tools that help researchers carry out quantitative design and evaluation of imaging systems.

Key words: Multi-aperture optical imaging system; artificial compound eye; light field camera; camera array; light field acquisition model


