CLC number: TV149.2

On-line Access: 2014-01-03

Received: 2013-09-24

Revision Accepted: 2013-12-03

Crosschecked: 2013-12-20

Journal of Zhejiang University SCIENCE A 2014 Vol.15 No.1 P.68-82

http://doi.org/10.1631/jzus.A1300317


A three-dimensional topographic survey based on two-dimensional image information*


Author(s):  Xiao-long Song1, Yu-chuan Bai1,2, Chao Ying3

Affiliation(s):  1. State Key Laboratory of Hydraulic Engineering Simulation and Safety, Tianjin University, Tianjin 300072, China

Corresponding email(s):   sxlong2013@foxmail.com

Key Words:  Riverbed topographic survey, Radial distortion calibration, Projection transformation, Grey information transformation



Xiao-long Song, Yu-chuan Bai, Chao Ying. A three-dimensional topographic survey based on two-dimensional image information[J]. Journal of Zhejiang University Science A, 2014, 15(1): 68-82.



Abstract: 
A riverbed topographic survey is one of the most important tasks for river model experiments. To improve measurement efficiency and solve the riverbed interference problem in traditional methods, this study discussed two measurement methods that use digital image-processing technology to obtain topographic information. A new and improved approach for calibrating camera radial distortion, which comes from originally distorted images captured by our camera, was proposed to enhance the accuracy of image measurement. Based on perspective projection transformation, we described a 3D reconstruction method based upon multiple images, which is characterized by using an approximated maximum likelihood estimation method (AMLE) considering the first-order error propagation of the residual error to compute transformation parameters. Moreover, a theoretical derivation of 3D topography according to grey information from a single image was carried out. With the diffuse illumination model, assuming that the ideal grey value and topographic elevation value are positively correlated, we derived a novel closed formula to explain the relationship of 3D topographic elevation, grey value, grey gradient, and the solar direction vector. Experimental results showed that our two methods both have some positive advantages even if they are not perfect.

A three-dimensional topographic survey based on two-dimensional image information

Objective: To study riverbed topographic surveying in river model experiments using digital image measurement techniques.
Novel method: Taking high-quality correction of radial image distortion as the basis, three-dimensional topographic information is extracted along two lines of research: mapping transformation between multiple images, and grey-information transformation of a single image.
Approach: An approximated maximum likelihood estimation method that considers the first-order error propagation of the residuals is used to solve the mapping transformation parameters between image and topographic coordinates; with a diffuse-reflection illumination model and appropriate assumptions, a relationship is derived among the topographic elevation, image grey value, grey gradient, and sun direction.
Conclusions: 1. Building on Devernay's line-based image distortion correction, extracting image edges with a higher-precision Canny edge detection operator and solving for the distortion coefficients with an optimization method that combines the golden section method and quadratic interpolation gives better correction results, higher-precision distortion coefficients, and faster computation. 2. When solving topographic elevation coordinates through the perspective projection relationship between image and ground, distributing the control points uniformly, and as many as possible, within the control region brings the calculated values closer to the measured ones and meets the precision requirements of river evolution experiments. 3. Through the derived relationship, the topographic elevation can in principle be computed from the image grey value, grey gradient, and sun direction vector.

Keywords: Riverbed topographic survey; Digital image measurement; Radial distortion calibration; Projection transformation; Grey information transformation


1.  Introduction

 The riverbed topographic survey is one of the most complex tasks in river model experiments. Traditional topographic surveying tools, such as electronic levels, electronic transits, and total-station instruments, are difficult to apply to a riverbed survey. At present, the resistive topography meter is the device generally used. It has good adaptability, can measure many kinds of topography, and is suitable for different sediment concentrations and sandy conditions. However, its low measurement efficiency and the interference it causes to the riverbed are major shortcomings.

 To meet the needs of dynamic measurement, a wide range of topographic measurement methods has evolved. Terrestrial laser scanning, aerial light detection and ranging (LiDAR), multibeam sonar, real-time kinematic (RTK) GPS, and total station surveys (Heritage and Hetherington, 2007; Alho et al., 2009; Notebaert et al., 2009; Brasington, 2010; Höfle and Rutzinger, 2011; Hohenthal et al., 2011) are continuously being developed to satisfy increasing research interest and engineering applications in the geomorphic sciences. However, such methods are better suited to large-scale river models, or even actual rivers, and they demand considerable software and hardware resources and, above all, time. In recent years, with the development of methods allowing the accurate calibration of non-metric cameras and the increasingly reliable automation of the process, photogrammetry is becoming accessible to a wide user base (Butler et al., 2001; Chandler et al., 2002; Lohry and Zhang, 2012; Feng et al., 2013; Zwick et al., 2013). Using an online structure-from-motion (SfM) program, Fonstad et al. (2013) created high-resolution digital elevation models of a river environment from ordinary photographs produced by a workflow that takes advantage of free and open-source software. Bouratsis et al. (2013) used a pair of commercial cameras to record the evolution of the bed, and employed a computational approach consisting of a set of computer-vision and image-processing algorithms to analyze the videos and reconstruct the instantaneous 3D surface of the bed. James and Robson (2012) integrated SfM and multiview-stereo (MVS) algorithms to obtain topographic measurements in a method requiring little expertise and enabling automated processing. Astruc et al. (2012) studied a stereoscopic technique to measure sand-bed elevation in the swash zone at the wave time-scale; the method is non-intrusive, achieves a height accuracy of the order of a sand grain size, and has a temporal resolution that allows the wave cycle to be captured. In addition, Lu et al. (2008) carried out related research on thematic mapper imagery of plain and highland terrains.

 Image data are inherently 2D. However, many clues can be found in single or multiple images through 3D reconstruction of the image scene. Typical reconstruction methods under sunlight conditions include the shape-from-shading method (Zhang et al., 1999), the shape-from-texture method (Forsyth and Ponce, 2002), and the manual interaction method (Shashua, 1997). There are also methods based on multiple images, such as the stereo vision method, the motion image sequence method, and the photometric stereo method (Li, 1991; Zhang Y.J., 2000; Pollefeys and Gool, 2002). It is also worth noting that 3D reconstruction is applied well beyond topographic surveying, with significant applications in other fields. Gomez et al. (2013) developed a new method for reconstructing a 3D+t velocity field from multiple 3D+t color Doppler images. Miks et al. (2013) focused on scanning beam deflectometry for surface reconstruction, proposing and analyzing a new mathematical method for reconstructing the 3D shape of specular surfaces from measurements of the surface gradient. Yoo (2013) proposed a novel depth extraction method for 3D objects using the windowing technique in computational integral imaging with a lenslet array. Moreover, image analysis techniques have also been used to retrieve water surface elevation fields, spatially and temporally, from charge-coupled device (CCD) images and image sequences (Benetazzo, 2006; Gallego et al., 2011). To a certain extent these techniques resemble the topographic survey: they use binocular stereogrammetry to recover topographic information from a sequence of synchronous, overlapping video images.


2.  Camera distortion calibration

 Image non-linear distortion calibration is an essential step in image surveying and 3D reconstruction. The geometric model of the CCD camera determines the relationship between the 3D geometric information of object surfaces and the corresponding image points. Because the structure of an optical lens differs from the ideal linear pinhole imaging model, non-linear geometric distortion is generated in the images captured by a CCD camera, and it seriously affects the accuracy of 3D information gained through 2D images.

2.1.  Mathematical model

 In the process of design and manufacture, a lens inevitably produces aberrations in images. The general effects can be expressed by mathematical models: radial distortion, decentering distortion, and thin prism distortion. The lens distortion can usually be expressed as

x_d = x_u + δ_x(x_u, y_u),  y_d = y_u + δ_y(x_u, y_u),

where x_d and y_d are the original distorted image coordinates, and x_u and y_u are the ideal undistorted image coordinates. δ_x and δ_y are the distortions in the x and y directions, respectively, and can be described by

δ_x = x_u(k_1 r^2 + k_2 r^4) + [p_1(3x_u^2 + y_u^2) + 2p_2 x_u y_u] + s_1 r^2,
δ_y = y_u(k_1 r^2 + k_2 r^4) + [p_2(x_u^2 + 3y_u^2) + 2p_1 x_u y_u] + s_2 r^2,  r^2 = x_u^2 + y_u^2,

where the first term is the radial distortion, the second term is the decentering distortion, and the third term is the thin prism distortion; k_1, k_2, p_1, p_2, s_1, and s_2 are the non-linear distortion parameters (Weng et al., 1992; Ahmed and Farag, 2005; Wang et al., 2008).

 Previous studies (Zhang Z.Y., 2000; Jiang et al., 2001; Perš and Kovacic, 2002; Miks and Novak, 2012) indicated that lens radial distortion is much more severe than the other aberrations, so tangential distortion and prism distortion need not be considered. Introducing too many non-linear parameters does not improve the accuracy of the solution but, rather, causes instability in it.

 The polynomial model (PM) most commonly used to describe radial distortion can be written as

r_u = r_d(1 + k_1 r_d^2 + k_2 r_d^4 + …),

where r_d and r_u are the respective distances from the distorted point (x_d, y_d) and the undistorted point (x_u, y_u) to the distortion center P; k_i (i=1, 2, …) is the radial distortion parameter. Devernay and Faugeras (1995) showed that the first-order radial symmetric distortion parameter, k_1, can achieve some accuracy. To achieve a higher accuracy, we use the first- and second-order distortion parameters, k_1 and k_2, to measure the distortion degree of distorted images. Since the image center is a good approximation of the center of distortion (Tordoff and Murray, 2000), we simply take the center of an image as the distortion center P(x_c, y_c). After reformulation, the undistorted coordinates are then given in terms of the distorted coordinates by the correction formula

x_u = x_c + (x_d − x_c)(1 + k_1 r_d^2 + k_2 r_d^4),
y_u = y_c + (y_d − y_c)(1 + k_1 r_d^2 + k_2 r_d^4),

where r_d^2 = (x_d − x_c)^2 + (y_d − y_c)^2.
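As an illustration, the following minimal Python sketch applies this two-parameter radial correction to pixel coordinates; the function name, the example parameter values, and the image size are hypothetical, not taken from the paper.

```python
import numpy as np

def undistort_points(pts_d, center, k1, k2):
    """Map distorted pixel coordinates to undistorted ones with the
    two-parameter radial model x_u = x_c + (x_d - x_c)(1 + k1*r^2 + k2*r^4)."""
    d = pts_d - center                         # offsets from the distortion center
    r2 = np.sum(d**2, axis=1, keepdims=True)   # squared radial distance r_d^2
    scale = 1.0 + k1 * r2 + k2 * r2**2         # radial correction factor
    return center + d * scale

# Example: correct two points of a 640x400 image (hypothetical parameters).
center = np.array([320.0, 200.0])
pts = np.array([[10.0, 15.0], [600.0, 380.0]])
print(undistort_points(pts, center, k1=7e-7, k2=5e-13))
```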

2.2.  Calibration process

 To find the distortion parameters, we use the straight-line method of Devernay and Faugeras (1995) and make some new improvements. The method rests on the fundamental property (Devernay and Faugeras, 2001) that the projection of every straight line in space onto a pinhole camera is a straight line. Consequently, if we can find a transformation of a radially distorted image under which every straight line in space appears as a straight line in the transformed image, we can estimate the distortion parameters of the image. Using this property, an iterative process is employed to estimate the distortion parameters k_1 and k_2.

2.2.1.  Edge detection

 An image edge has two properties: direction and magnitude. Pixel values change gently along the edge direction, but dramatically in the direction perpendicular to the edge. Firstly, a Canny edge detector with a new arithmetic (Mohammed et al., 2013) was used to obtain the magnitude and orientation of the edge, and the data were then processed with a Gaussian low-pass filter. Secondly, non-maxima suppression and hysteresis thresholding were used to locate the edge precisely with sub-pixel accuracy. This method improves the resolution of the detected edge, protects low-intensity edges, and enhances noise immunity.
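The modified Canny arithmetic of Mohammed et al. (2013) is not reproduced here; the sketch below only illustrates the standard ingredients named above (Gaussian smoothing, gradient magnitude and orientation, and non-maxima suppression plus hysteresis via OpenCV's built-in Canny), with a hypothetical input file.

```python
import cv2
import numpy as np

img = cv2.imread("distorted.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
smoothed = cv2.GaussianBlur(img, (5, 5), 1.0)            # Gaussian low-pass filter

# Gradient magnitude and orientation: the two edge properties.
gx = cv2.Sobel(smoothed, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(smoothed, cv2.CV_64F, 0, 1, ksize=3)
magnitude = np.hypot(gx, gy)
orientation = np.arctan2(gy, gx)

# Canny applies non-maxima suppression and hysteresis thresholding internally.
edges = cv2.Canny(smoothed, 50, 150)
```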

2.2.2.  Extract distorted line segments

 After edge detection, we need to extract distorted line segments that are most probably straight lines in 3D space. Since some segments may be broken by the edge detector, we join broken segments together when the distance between edge ends is less than a threshold of 0.5 pixels. We also impose a minimum threshold of 2 pixels on the segment length, on the grounds that shorter segments may contain more noise than useful information about the distortion.

2.2.3.  Measure distortion error

 To obtain the distortion parameters, the curvature of a distorted line segment is calculated to measure by how much it is distorted. We use the points on a distorted segment to form a straight line by least-squares approximation. In general, the straight line joining the end points (x_1, y_1) and (x_2, y_2) can be parameterized by

(y_2 − y_1)x − (x_2 − x_1)y + x_2 y_1 − x_1 y_2 = 0.

For any point (x, y) of a line segment, we use

d = |(y_2 − y_1)x − (x_2 − x_1)y + x_2 y_1 − x_1 y_2| / √((y_2 − y_1)^2 + (x_2 − x_1)^2)

as the distance from the point to the straight line, and the distortion error is then the sum of the squares of all such distances. As a result, the distortion error is zero when a distorted segment is a straight line; the larger the curvature of the distorted segment, the larger the distortion error.
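A minimal sketch of this straightness measure, assuming each extracted segment is given as an ordered array of edge points and using the chord through its end points as the reference line:

```python
import numpy as np

def distortion_error(seg):
    """Sum of squared point-to-line distances for one extracted segment.
    seg is an (n, 2) array of edge points; the line joins its end points."""
    (x1, y1), (x2, y2) = seg[0], seg[-1]
    a, b, c = y2 - y1, -(x2 - x1), x2 * y1 - x1 * y2  # line a*x + b*y + c = 0
    d = (a * seg[:, 0] + b * seg[:, 1] + c) / np.hypot(a, b)
    return np.sum(d**2)   # zero when the segment is perfectly straight
```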

2.2.4.  Optimization method

 We combined the golden section search method (GSSM) and the quadratic interpolation method (QIM) (Gong and Wang, 2009) to calculate the optimal distortion parameters. QIM is accurate but not very efficient, so we first narrow the extreme-value range with GSSM and then use QIM to obtain a solution that meets our accuracy requirements.

 The whole optimization process is as follows. Firstly, we take the golden section of the range as the initial point and iterate by GSSM until there is little difference between the f(x) values obtained from two adjacent iterations. A single iteration by QIM is then done. If there is little difference between the f(x) values obtained from the two methods, another QIM iteration is done. If, again, there is little difference between the f(x) values from two adjacent QIM iterations, and the difference between the independent variables is very small, the iteration is continued until the accuracy requirement is met; otherwise, we switch back to GSSM. We set the precision at 1×10^−10.

 Minimizing over all parameters while the data still contain many outliers risks moving further from the optimal parameters. For this reason, we first optimize on k_1 only until it gives a stable solution. Then k_2 is added, and finally a full optimization over both distortion parameters is performed.
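A rough sketch of the staged one-dimensional search, reusing the hypothetical helpers sketched above; SciPy's golden-section and Brent (parabolic interpolation) routines stand in for the GSSM/QIM combination described here, whose exact switching logic is not reproduced:

```python
from scipy.optimize import minimize_scalar

def total_distortion_error(k1, segments, center):
    """Objective: undistort all segments with parameter k1 (k2 fixed at 0
    in this first stage) and accumulate their straightness errors."""
    return sum(distortion_error(undistort_points(seg, center, k1, 0.0))
               for seg in segments)

# Stage 1: golden-section search narrows the bracket for k1.
res = minimize_scalar(lambda k: total_distortion_error(k, segments, center),
                      bracket=(0.0, 1e-6), method='golden',
                      options={'xtol': 1e-10})
# Stage 2 (refinement by quadratic interpolation) could reuse method='brent',
# which combines golden-section steps with parabolic interpolation; k2 would
# then be added and both parameters optimized jointly.
k1_opt = res.x
```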

2.3.  Calibration experimental result

 To verify the calibration effectiveness, we conducted a calibration experiment on two images. The original image coordinates are integers; after calibration, however, the image coordinates are non-integer, and new grey values at the integer coordinates need to be estimated. For this we use the cubic spline interpolation method for grey-level interpolation.
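A sketch of the grey-level resampling step using SciPy's cubic spline interpolation (map_coordinates with order=3); the inverse mapping below is approximated by dividing by the forward radial factor, which is our simplification, not necessarily the paper's procedure:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def resample_calibrated(img, center, k1, k2):
    """Build the calibrated image: for every integer target pixel, find the
    corresponding (non-integer) source location and interpolate its grey
    value with a cubic spline (order=3)."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    dx, dy = xx - center[0], yy - center[1]
    r2 = dx**2 + dy**2
    scale = 1.0 + k1 * r2 + k2 * r2**2
    # Approximate inverse of the radial map; a fixed-point or Newton
    # inversion would be used for higher accuracy.
    src_x = center[0] + dx / scale
    src_y = center[1] + dy / scale
    return map_coordinates(img, [src_y, src_x], order=3, mode='nearest')
```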

 The image edge profiles in Fig. 1 show that both images exhibit barrel distortion. The experimental results using Devernay and Faugeras (1995)'s method are shown in Fig. 2: the computing times are 65.6 s and 32.9 s, the distortion parameters are k_1=7.9415×10^−7 and k_1=8.6914×10^−7, and the residual errors are 24 and 1.8, respectively. The experimental results using the proposed method are shown in Fig. 3: the computing times are 55.3 s and 30.2 s, the distortion parameters are k_1=6.9464×10^−7, k_2=4.6261×10^−12 and k_1=7.9267×10^−7, k_2=6.2357×10^−13, and the residual errors are 1 and 0.8, respectively. The results look very reasonable and indicate that the proposed method is as effective as Devernay and Faugeras (1995)'s, while improving the precision and computational efficiency of the distortion parameters; we can therefore use the calibrated image information for a more precise image survey.



Fig.1
Two distorted images used for distortion calibration
(a) From our CCD camera; (b) From http://www.21tx.com/dc/2007/11/05/11645.html (accessed May 2013)



Fig.2
Calibrated images using the parameters computed by Devernay and Faugeras (1995)’s original method
(a) Calibrated results on Fig. 1a; (b) Calibrated results on Fig. 1b



Fig.3
Calibrated images using the parameters computed by the proposed method
(a) Calibrated results on Fig. 1a; (b) Calibrated results on Fig. 1b


3.  Mapping transformation between 2D image and 3D space coordinates

3.1.  Perspective transformation

 Since calibrated images can be viewed as ground central projections, a perspective projection transformation relationship exists between the image and the ground. In photogrammetry, the commonly used mathematical models include the direct linear transformation (DLT) (Chen et al., 1994) and the space resection method (Wang, 2007). The space resection method needs fewer control points, but places higher requirements on the accuracy of the initial values in the iterative process. We therefore use the DLT to calculate the transformation parameters.

 The DLT is an analytical expression that directly describes the perspective relationship between the image and the ground. It contains 11 unknown parameters (L_1 to L_11) and can compensate for linear image error. Let (u, v) be the image coordinates and (X, Y, Z) the corresponding 3D space coordinates; the direct linear relationship can then be expressed as

u = (L_1 X + L_2 Y + L_3 Z + L_4) / (L_9 X + L_10 Y + L_11 Z + 1),
v = (L_5 X + L_6 Y + L_7 Z + L_8) / (L_9 X + L_10 Y + L_11 Z + 1).   (10)

 Given n (n≥6) control points, these 11 parameters are usually calculated by the method of least squares. Rearranging Eq. (10), each control point contributes two linear equations, which can be stacked into the system A l = b, where l = [L_1, L_2, …, L_11]^T, each point adds to A the two rows

[X  Y  Z  1  0  0  0  0  −uX  −uY  −uZ],
[0  0  0  0  X  Y  Z  1  −vX  −vY  −vZ],

and adds u and v to b. The traditional solution is l = (A^T A)^−1 A^T b.
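The traditional solution can be summarized in a few lines; the following sketch stacks the two rearranged equations per control point and solves them by ordinary least squares (function and variable names are ours):

```python
import numpy as np

def dlt_parameters(xyz, uv):
    """Traditional least-squares DLT: solve A*l = b for the 11 parameters,
    given n >= 6 control points (xyz: n x 3, uv: n x 2)."""
    rows, rhs = [], []
    for (X, Y, Z), (u, v) in zip(xyz, uv):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z])
        rhs.append(u)
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z])
        rhs.append(v)
    A, b = np.asarray(rows, float), np.asarray(rhs, float)
    l, *_ = np.linalg.lstsq(A, b, rcond=None)   # L1 ... L11
    return l
```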

 The minimum sum of squared residuals is the objective function in the traditional method. However, the statistical properties of the image coordinates and the corresponding 3D space coordinates are ignored. The precision of the solution is therefore limited, and it can only be used for low-precision photogrammetry. In this study, we use an approximated maximum likelihood estimation method (AMLE) that considers the first-order error propagation of the residual error (Zhou and Deng, 2011) to compute the transformation parameters. A simpler iterative algorithm is also applied.

3.2.  Transformation parameter solution based on AMLE method

 Since each point brings two equations, let i denote the point number and write l = [L_1, L_2, …, L_11, 1]^T and

A_i = [ X_i  Y_i  Z_i  1  0  0  0  0  −u_iX_i  −u_iY_i  −u_iZ_i  −u_i ;
        0  0  0  0  X_i  Y_i  Z_i  1  −v_iX_i  −v_iY_i  −v_iZ_i  −v_i ],

with rows A_{1,i} and A_{2,i}; then we have the governing equation A_i l = 0. L_i = [X_i  Y_i  Z_i  u_i  v_i]^T is the observed value. Supposing that the observations are independent of each other, the observation covariance matrix is cov[L_i] = diag(σ_{X_i}^2, σ_{Y_i}^2, σ_{Z_i}^2, s^2σ_{u_i}^2, s^2σ_{v_i}^2), where s is the equivalent error factor determined by the camera photography scale. According to the error propagation law, the variance of A_{1,i} can be expressed as cov[A_{1,i}] = J_{1,i} cov[L_i] J_{1,i}^T, with J_{1,i} = ∂A_{1,i}/∂L_i (and similarly for A_{2,i}). Because errors in spatial coordinates in photogrammetry are usually expressed in millimeters and image measurement coordinates in pixels, the pixel error needs to be transformed to millimeters.

 This is a semi-linear errors-in-variables (EIV) model parameter estimation problem (Muhlich and Mester, 2001). Traditionally, the problem is expressed as

min_l Σ_i (A_i l)^T (A_i l),  subject to ||l|| = 1.

If the residual weights are ignored, i.e., cov[A_i] is taken as an identity matrix, the AMLE problem reduces to a general least squares problem (Yu et al., 1990), and the estimated parameter vector is the eigenvector corresponding to the smallest eigenvalue of the matrix A^T A. Considering the relative complexity of the various methods (Kanatani, 1996; Chojnacki et al., 2000; Matei and Meer, 2000; Muhlich and Mester, 2001), we use an iterative weighted total least squares (WTLS) method. Its precision can be 10% higher than that of the traditional method, which matters when the observation precision of the control points differs greatly.

 The modified expression is

min_l Σ_i (A_i l)^T (cov[A_i l])^−1 (A_i l),

which leads to the eigenproblem

S l = λ_min l,  S = Σ_i A_i^T (cov[A_i l])^−1 A_i,   (15)

where cov[A_i l] is evaluated, through cov[A_{1,i}] and cov[A_{2,i}], at the current estimate of l.

 The parameter solution of the problem is the eigenvector corresponding to the smallest eigenvalue of the matrix S under singular value decomposition, while S itself depends on the parameters. The detailed iterative process is as follows (a minimal numerical sketch is given after the list):

  1. Let the solution calculated by the traditional method be the initial value;

  2. Use the initial value to calculate S according to Eq. (15);

  3. Carry out a singular value decomposition on S ; the solution is the eigenvector corresponding to the smallest eigenvalue;

  4. Compare the solution with the initial value using the Euclidean 2-norm; if the difference is less than the threshold of 1×10^−5, the calculation stops; otherwise, take the solution as the new value and repeat from step 2.
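A minimal numerical sketch of this iteration, assuming the per-row covariance matrices have already been obtained by error propagation; the exact weighting in Zhou and Deng (2011) may differ in detail:

```python
import numpy as np

def wtls_dlt(A_rows, covs, l0, tol=1e-5, max_iter=50):
    """Iteratively reweighted total least squares for the homogeneous DLT
    system.  A_rows: list of 12-vectors a_i with a_i . l = 0; covs: per-row
    12x12 covariance matrices; l0: initial 12-vector from the traditional
    solution (last entry 1)."""
    l = l0 / np.linalg.norm(l0)
    for _ in range(max_iter):
        S = np.zeros((12, 12))
        for a, C in zip(A_rows, covs):
            w = 1.0 / max(l @ C @ l, 1e-15)   # weight = inverse residual variance
            S += w * np.outer(a, a)
        _, _, Vt = np.linalg.svd(S)
        l_new = Vt[-1]                         # eigenvector of smallest eigenvalue
        l_new *= np.sign(l_new @ l)            # resolve the sign ambiguity
        if np.linalg.norm(l_new - l) < tol:    # Euclidean 2-norm stop criterion
            return l_new
        l = l_new
    return l
```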

 We can get all 11 parameters of Eq. (10) by this method. If we want the 3D coordinates of some point, we can take pictures of the same place from different angles with two different CCD cameras, use the method to calculate the 11 parameters of each image, and then obtain the 3D coordinates through the coupled equations. Rearranging Eq. (10) for each image i gives

(L_1^i − L_9^i u_i)X + (L_2^i − L_10^i u_i)Y + (L_3^i − L_11^i u_i)Z = u_i − L_4^i,
(L_5^i − L_9^i v_i)X + (L_6^i − L_10^i v_i)Y + (L_7^i − L_11^i v_i)Z = v_i − L_8^i,   (16)

where (u_i, v_i) are the image coordinates of the objective point in the ith image, and L_j^i (j=1, 2, …, 11) are the solved perspective transformation parameters for the ith image.

 From Eq. (16), it is obvious that the 3D coordinates can easily be obtained from pairs of image coordinates by solving the stacked equations. Additionally, a higher measurement precision can be obtained from three or more cameras. If we capture suitable images from two or more positions and have at least six non-coplanar control points, a single camera can also achieve a similar effect.
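A sketch of this intersection step, stacking Eq. (16) for two (or more) images and solving in the least-squares sense:

```python
import numpy as np

def triangulate(L_list, uv_list):
    """Recover (X, Y, Z) from >= 2 images via Eq. (16).  L_list holds the
    11 DLT parameters of each image, uv_list the matching image point."""
    rows, rhs = [], []
    for L, (u, v) in zip(L_list, uv_list):
        L1, L2, L3, L4, L5, L6, L7, L8, L9, L10, L11 = L
        rows.append([L1 - L9*u, L2 - L10*u, L3 - L11*u]); rhs.append(u - L4)
        rows.append([L5 - L9*v, L6 - L10*v, L7 - L11*v]); rhs.append(v - L8)
    xyz, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    return xyz
```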

3.3.  Experimental results

 To validate the method, two river model experimental images were employed (Fig. 4). They were captured at different positions by a single camera (MV-1394, Microvision). The images are 640×400 pixels. The river model is the one built for the experimental study of bed topography evolution in alluvial meandering rivers (Xu and Bai, 2013). The experiment was carried out in a small-scale water basin of 4 m×1.5 m×0.3 m (length×width×depth), filled with sand to a thickness of 10 cm. The images display a stable curved reach shaped by the flow; the area of this reach is about 2 m^2. The images underwent distortion calibration and skew correction (Omar et al., 2012). The control points are the hotspots of the images.



Fig.4
Two images captured at different positions
(a) From angle 1; (b) From angle 2

 Fig. 5 shows the measured topographic contour (h), obtained with a fiber-optic fluid level measuring instrument (IH type). The topographic contour calculated by the proposed method is shown in Fig. 6; a bilinear interpolation algorithm was applied to interpolate the scattered data. Comparing the two figures, the topographic change tendencies are basically identical: they match almost perfectly in the control region, with some errors just outside that region.



Fig.5
Measured topographic contour



Fig.6
Calculated topographic contour

 We selected 10 control points from the control region, namely (x_i, y_i) (i=1, 2, …, 10). Let h_i be the interpolated measured elevation at the ith point, h_1i the elevation calculated using 10 control points, h_2i the elevation calculated using eight control points, and h_3i the elevation calculated using six control points. The comparison of elevation coordinates obtained by the different methods is shown in Table 1.



Table 1

Comparison of elevation coordinates from different data processing methods
Control point   x_i (m)   y_i (m)   h_i (cm)   h_1i (cm)   h_2i (cm)   h_3i (cm)
1 4.716 1.802 6.574 6.309 3.692 8.420
2 4.423 1.731 7.875 7.566 5.607 9.249
3 4.841 1.713 6.663 5.993 3.685 9.799
4 4.699 1.832 6.715 5.666 2.293 9.175
5 5.310 1.458 5.245 4.837 5.631 7.017
6 5.098 1.638 7.171 7.076 6.018 7.590
7 5.394 1.634 5.471 5.951 4.983 8.243
8 5.603 1.467 6.503 6.171 6.682 8.890
9 5.594 1.577 5.004 5.776 5.244 7.695
10 5.255 1.452 5.352 4.820 5.753 6.992

 From Table 1 we can observe that, with more control points, the calculated values are closer to the experimental values. The bias between measured and calculated data, Σ_{i=1}^{10}(h_ji − h_i)^2 (j=1, 2, 3), is 3.1101, 43.8393, and 47.8101, respectively. Considering that h_i comes from interpolation of the measured data and differs little from the exact actual value, h_1i can meet the precision requirements of the river model experiment. However, as the maximum differences for the three cases are 1.049, 4.422, and 3.316, h_2i and h_3i are obviously not practical. In the actual experiment, many factors influence the precision of the computation, such as the computation precision of the perspective transformation parameters, the number and position of the control points, and the accuracy of the objective point central coordinates. Zhou and Deng (2011) compared the classical least squares method and the AMLE method in an experiment that superimposed Gaussian noise on the image coordinates, and demonstrated that when the points in the calculation have unequal accuracy, AMLE gives better results; in that case, the noise visible in Fig. 6 could be reduced further. In addition, further optimization and improvement are necessary, including improved error correction and compensation mechanisms.
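The reported bias for h_1i can be checked directly from the data in Table 1:

```python
import numpy as np

h  = np.array([6.574, 7.875, 6.663, 6.715, 5.245, 7.171, 5.471, 6.503, 5.004, 5.352])
h1 = np.array([6.309, 7.566, 5.993, 5.666, 4.837, 7.076, 5.951, 6.171, 5.776, 4.820])
print(np.sum((h1 - h)**2))   # ~3.11, matching the reported bias for h_1i
```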


4.  Transformation between 2D image grey value and 3D space coordinates

4.1.  Theoretical assumption and mathematical model

 The grey feature is one of the most important characteristics of a 2D image. The image grey value obtained from a CCD image sensor is influenced by various conditions. The main influencing factors include the illumination, the atmospheric environment (determined by atmospheric properties: atmospheric attenuation, the scattering of suspended particles in the atmosphere, etc.), the object surface (determined by the object temperature, the surface characterization, the observation direction, the angle of incidence, etc.), and the optical imaging system (He et al., 2008).

 Assuming an ideal optical lens, an ideal optical imaging system, a scene region without a radiation source, and that the illumination of the region is the same as the illumination from the atmosphere, then the image grey value is only related to the position of each point and the climate conditions (i.e., time).

 The grey value is proportional to the illumination of the corresponding object surface. Assuming no outside influences, at a given moment the ideal grey value and the topographic elevation value have a positive correlation, described as

I_P^0 = f(h_P),   (17)

where I_P^0 is the ideal grey value of point P, h_P is the corresponding elevation coordinate, and f is a positive correlation function.

 To study the effect of object surface reflection, an illumination model is usually needed. Existing illumination models comprise global and local models. Global illumination models use ray-tracing technology and achieve a strong sense of reality, but require heavy calculation. Local illumination models mainly include the diffuse-reflection Lambertian model and the Phong model, which adds specular reflection (He et al., 2008). In this study, we use the diffuse-reflection illumination model. Let S_P be the diffuse reflection grey coefficient,

S_P = C_P K_e cos α,   (18)

where C_P is the reflection coefficient of a specific wavelength of light at point P, α is the incidence angle, and K_e is the diffuse reflection attenuation coefficient.

 Then the actual image grey value can be expressed as

I_P = k S_P I_P^0,   (19)

where k is the attenuation coefficient accounting for climatic and indoor factors. Substituting Eqs. (17) and (18) into Eq. (19), we have

I_P = k C_P K_e cos α f(h_P),   (20)

where cos α can be derived from the normalized inner product of the object surface normal vector [−p, −q, 1] and the light source (sun) direction vector [−p_s, −q_s, 1]:

cos α = (p p_s + q q_s + 1) / (√(p^2 + q^2 + 1) √(p_s^2 + q_s^2 + 1)),   (21)

where p = ∂h/∂x and q = ∂h/∂y represent the gradients in the x and y directions, respectively. In this study, we use h = h(x, y) to represent the object surface equation.
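For illustration, Eqs. (20) and (21) translate directly into code; the default coefficients and the identity choice for f below are placeholders, not values from the paper:

```python
import numpy as np

def cos_incidence(p, q, ps, qs):
    """Eq. (21): cosine of the incidence angle from the surface gradients
    (p, q) and the sun direction vector [-ps, -qs, 1]."""
    return (p*ps + q*qs + 1.0) / (np.sqrt(p**2 + q**2 + 1.0) *
                                  np.sqrt(ps**2 + qs**2 + 1.0))

def grey_value(h, p, q, ps, qs, k=1.0, C=1.0, Ke=1.0, f=lambda h: h):
    """Eq. (20): grey value predicted by the diffuse illumination model."""
    return k * C * Ke * cos_incidence(p, q, ps, qs) * f(h)
```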

4.2.  Sun direction vector calculation model

 The sun direction vector [−p_s, −q_s, 1] is determined by the position of the sun:

p_s = cos θ / tan φ,  q_s = sin θ / tan φ,   (22)

where θ is the sun azimuth angle, and φ is the sun elevation angle.

 The sun elevation angle changes with the solar declination and the local time (Wang and Tang, 2001). We use δ to represent the solar declination and ϕ to represent the latitude of the observation location, both defined so that north latitude is positive and south latitude is negative. The solar hour angle is expressed by Ω. The calculation formula for the sun elevation angle is then

sin φ = sin δ sin ϕ + cos δ cos ϕ cos Ω.   (23)

 The sun azimuth angle can be calculated by

cos θ = (sin φ sin ϕ − sin δ) / (cos φ cos ϕ).   (24)

 For the above equations, the solar declination and solar hour angle can be looked up in "the ground meteorological observation specification" (Yu et al., 2003), but to avoid that time-consuming operation we used a real-time solution method to calculate them.

 The solar hour angle (Zhang et al., 2010) is

Ω = 15°(TT − 12),   (25)

where TT is the real (true) solar time,

TT = S + M/60 + LC + E_q/60,   (26)

where S and M are the respective hour value and minute value of the observation time, LC is the longitude modified value, and E_q is the equation of time (in minutes):

LC = (D_L − 120°)/15°,   (27)

where D_L is the degree value of the observed longitude, and

E_q = 0.0028 − 1.9857 sin θ_t + 9.9059 sin 2θ_t − 7.0924 cos θ_t − 0.6882 cos 2θ_t,   (28)

where

θ_t = 2πt/365.2422,   (29)

and t is made up of two parts:

t = N − N_0,  N_0 = 79.6764 + 0.2422(Y − 1985) − floor[(Y − 1985)/4],   (30)

where N is the day number according to coordinated universal time, Y is the year, and floor is the INT function.

 The solar declination (Zhu and Hu, 2012) is

δ = 0.3723 + 23.2567 sin θ_t + 0.1149 sin 2θ_t − 0.1712 sin 3θ_t − 0.758 cos θ_t + 0.3656 cos 2θ_t + 0.0201 cos 3θ_t.   (31)

 Thus, if we know the longitude, latitude, and time of image acquisition, we can calculate the solar hour angle and solar declination according to Eqs. (25) and (31), and then derive the sun direction vector according to Eqs. (22)–(24).
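A sketch combining Eqs. (23)-(31) as reconstructed above into a single solar-position routine; the coefficient values follow the cited formulas (Wang and Tang, 2001; Zhang et al., 2010; Zhu and Hu, 2012) as we have reconstructed them, and the 120°E reference meridian assumes Beijing time:

```python
import math

def solar_position(Y, N, S, M, lon_deg, lat_deg):
    """Sun elevation and azimuth (degrees) from year Y, day number N,
    observation time S:M, longitude, and latitude."""
    N0 = 79.6764 + 0.2422*(Y - 1985) - math.floor((Y - 1985)/4)  # Eq. (30)
    th = 2.0*math.pi*(N - N0)/365.2422                            # Eq. (29)
    Eq = (0.0028 - 1.9857*math.sin(th) + 9.9059*math.sin(2*th)
          - 7.0924*math.cos(th) - 0.6882*math.cos(2*th))          # Eq. (28), min
    TT = S + M/60.0 + (lon_deg - 120.0)/15.0 + Eq/60.0            # Eqs. (26)-(27)
    omega = math.radians(15.0*(TT - 12.0))                        # Eq. (25)
    delta = math.radians(0.3723 + 23.2567*math.sin(th) + 0.1149*math.sin(2*th)
            - 0.1712*math.sin(3*th) - 0.758*math.cos(th)
            + 0.3656*math.cos(2*th) + 0.0201*math.cos(3*th))      # Eq. (31)
    phi = math.radians(lat_deg)
    sin_elev = (math.sin(delta)*math.sin(phi)
                + math.cos(delta)*math.cos(phi)*math.cos(omega))  # Eq. (23)
    elev = math.asin(sin_elev)
    cos_az = ((sin_elev*math.sin(phi) - math.sin(delta))
              / (math.cos(elev)*math.cos(phi)))                   # Eq. (24)
    az = math.acos(max(-1.0, min(1.0, cos_az)))
    return math.degrees(elev), math.degrees(az)
```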

4.3.  Formula reformulation and derivation

 In the specific experiment, the sun direction vector changes over time. Let the image acquisition time be T; the sun direction vector is then [−p_s(T), −q_s(T), 1]. For any image point (x, y), according to Eqs. (20) and (21), we have

I(x, y) = k C K_e f(h) (p p_s(T) + q q_s(T) + 1) / (√(p^2 + q^2 + 1) √(p_s^2(T) + q_s^2(T) + 1)).   (32)

 We can make the following reformulation: by the chain rule, p = G_x/I_h and q = G_y/I_h, and substituting these into Eq. (32), we have

I = k C K_e f(h) (G_x p_s + G_y q_s + I_h) / (√(G_x^2 + G_y^2 + I_h^2) √(p_s^2 + q_s^2 + 1)),   (33)

where I_h = ∂I/∂h is the partial derivative of I with respect to h, and G_x and G_y are the grey gradients in the x and y directions, respectively, which can be expressed as

G_x = ∂I/∂x = I_h p,  G_y = ∂I/∂y = I_h q.   (34)

Assuming f is a linear function, expressed as f(x) = Dx + E, according to Eq. (33) we can obtain

I_h = k C K_e D (G_x p_s + G_y q_s + I_h) / (√(G_x^2 + G_y^2 + I_h^2) √(p_s^2 + q_s^2 + 1)).   (35)

Let kCD = F and E = 0, so that I = I_h h. Thus, squaring Eq. (35) and reformulating, we obtain a fourth-order equation in I_h:

(p_s^2 + q_s^2 + 1) I_h^2 (G_x^2 + G_y^2 + I_h^2) = (F K_e)^2 (G_x p_s + G_y q_s + I_h)^2.   (36)

Solving Eq. (36) by the quartic root formula, we obtain the closed-form solutions of the fourth-order equation (Eq. (37)).

 According to Eqs. (33), (35), and (37), we can obtain the elevation as

h = I / I_h,   (38)

where I_h is the admissible root of Eq. (36), expressed through the characteristic quantity r that indicates the relationship between the grey gradient and the sun direction vector. Eq. (38) explains the relationship between the 3D topographic elevation coordinate, the grey value, the grey gradient, and the sun direction vector. Obtaining a precise analytical expression of Eq. (38) would obviously be of great significance.
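Although the closed-form roots of Eq. (37) are lengthy, the quartic can also be solved numerically; the sketch below forms the expanded polynomial of Eq. (36) and keeps an admissible positive root (the root-selection rule is our assumption):

```python
import numpy as np

def solve_Ih(Gx, Gy, ps, qs, F_eff):
    """Numerical counterpart of Eqs. (36)-(38): solve the quartic in I_h from
    I_h*sqrt(Gx^2+Gy^2+I_h^2)*sqrt(ps^2+qs^2+1) = F_eff*(Gx*ps + Gy*qs + I_h),
    where F_eff = F*K_e = k*C*D*K_e."""
    m = ps**2 + qs**2 + 1.0
    g2 = Gx**2 + Gy**2
    b = Gx*ps + Gy*qs
    # Expanded: m*I^4 + (m*g2 - F^2)*I^2 - 2*F^2*b*I - F^2*b^2 = 0.
    coeffs = [m, 0.0, m*g2 - F_eff**2, -2.0*F_eff**2*b, -(F_eff*b)**2]
    roots = np.roots(coeffs)
    real = roots[np.isclose(roots.imag, 0.0)].real
    good = [r for r in real if r > 0 and (b + r) > 0]  # unsquared-sign check
    return max(good) if good else None

# The elevation then follows from Eq. (38) as h = I / I_h (with E = 0).
```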

4.4.  Tentative fitting experiment

 We conducted a tentative experiment based on the above derivation. The sample image is shown in Fig. 7 (the same curved reach as in Fig. 4). From Google Earth we obtained the longitude and latitude coordinates of the experimental site: 39°06′37″N, 117°10′03″E. With the image acquisition time of 4:55 pm on June 1, 2013, we obtain p_s = −0.8782 and q_s = 0.1042. After many tentative simplifications (reasonable or perhaps unreasonable), we obtained the fitted expression of Eq. (39) from part of the measured data.



Fig.7
Experimental sample image captured by a CCD camera

 The topographic contour calculated by the fitted Eq. (39) is shown in Fig. 8. Compared with Figs. 5 and 6, only the basic tendency of the topographic change can be observed, and there is considerable numerical error. Clearly, the representativeness of the tentatively fitted Eq. (39) is far from sufficient, and this fitting experiment cannot be considered very successful. Later in the study, we made extensive attempts to obtain a more precise expression of Eq. (38); unfortunately, we have not obtained a very satisfactory result. In summary, the main reasons are: (1) we used a local illumination model that only considers diffuse reflection and does not consider hidden surface removal; (2) we assumed in the derivation that the unknown positive correlation function f is linear, which might not be the case; (3) the image grey value and grey gradient of the corresponding 3D space point obtained through the projection transformation were erroneous; (4) the solved sun azimuth angle and sun elevation angle were erroneous; (5) there were some highly reflective discrete noise points in the actual image, whose impact is difficult to eliminate; and (6) the various reflection and attenuation coefficients were difficult to determine, owing to the complexity of the object materials, environment, and climate. In addition, an equation such as Eq. (39) is fitted only to specific image data, i.e., Fig. 7 in this study. Our future work will focus on how to obtain a more precise and more generalized formula.



Fig.8
Fitted topographic contour calculated by Eq. (39)


5.  Conclusions

 In this study, we present two 3D topography reconstruction methods, and make attempts to apply them to a river model experiment. Conclusions can be drawn as follows.

  1. Owing to the difference between the structure of an optical lens and the ideal linear pinhole imaging model, non-linear geometric distortion is generated in the images captured by a CCD camera. To obtain more precise 3D topographic information from 2D image information, we used a distortion calibration method based on Devernay and Faugeras (1995)'s straight-line method with some significant improvements, such as a Canny edge detector with a new arithmetic, supported by Matlab (2013a), to obtain the edge magnitude and orientation, and a combined golden section search and quadratic interpolation method to calculate the optimal distortion parameters. The experimental results show that our method is as effective as Devernay and Faugeras (1995)'s method, and improves the precision and computational efficiency of the distortion parameters to some extent.

  2. The perspective projection relationship between the image and the ground is one of the most important relationships between a 2D image and 3D space. We viewed the calibrated images as ground central projections, and used the direct linear transformation to calculate the transformation parameters. To improve the parameter precision beyond traditional methods, we used the AMLE, considering the first-order error propagation of the residual error, to compute the transformation parameters, and solved it with an iterative weighted total least squares method. We draw the following conclusions: control points should be distributed as uniformly as possible within the control region, and the more control points, the better; by these means, the calculated values approach the experimental values and can meet the precision requirements of a river model experiment. The experimental results show that the method may have some practical value.

  3. The grey feature is one of the most important characteristics of a 2D image. This study assumes that the image grey value is related only to the actual spatial position of each point and the climate conditions (i.e., time); under that condition, without any external influences, at a given moment there is a positive correlation between the ideal grey value and the topographic elevation value, and the actual grey value is a deformation of the ideal grey value under the effect of various grey factors. We derived a closed formula relating the topographic elevation to the actual image grey value, grey gradient, and solar direction vector, with some intermediate variables. However, for various complicated reasons, the fitted formula obtained under limited conditions was not very satisfactory, despite extensive attempts. Future work in this field must focus on obtaining a more precise and more generalized formula.



* Project supported by the State Key Laboratory of Hydraulic Engineering Simulation and Safety, and the National Natural Science Foundation of China (Nos. 51279124, 51021004, and 50979066)


References

[1] Ahmed, M., Farag, A., 2005. Nonmetric calibration of camera lens distortion: differential methods and robust estimation. IEEE Transactions on Image Processing, 14(8):1215-1230. 


[2] Alho, P., Kukko, A., Hyyppä, H., 2009. Application of boat-based laser scanning for river survey. Earth Surface Processes and Landforms, 34(13):1831-1838. 


[3] Astruc, D., Cazin, S., Cid, E., 2012. A stereoscopic method for rapid monitoring of the spatio-temporal evolution of the sand-bed elevation in the swash zone. Coastal Engineering, 60:11-20. 


[4] Benetazzo, A., 2006. Measurements of short water waves using stereo matched image sequences. Coastal Engineering, 53(12):1013-1032. 


[5] Bouratsis, P., Diplas, P., Dancey, C.L., 2013. High-resolution 3-D monitoring of evolving sediment beds. Water Resources Research, 49(2):977-992. 


[6] Brasington, J., 2010. From grain to floodplain: hyperscale models of braided rivers. Journal of Hydraulic Research, 48(4):52-53. 

[7] Butler, J.B., Lane, S.N., Chandler, J.H., 2001. Characterization of the structure of river-bed gravels using two-dimensional fractal analysis. Mathematical Geology, 33(3):301-330. 


[8] Chandler, J., Ashmore, P., Paola, C., 2002. Monitoring river-channel change using terrestrial oblique digital imagery and automated digital photogrammetry. Annals of the Association of American Geographers, 92(4):631-644. 


[9] Chen, L., Armstrong, C.W., Raftopoulos, D.D., 1994. An investigation on the accuracy of three-dimensional space reconstruction using the direct linear transformation technique. Journal of Biomechanics, 27(4):493-500. 


[10] Chojnacki, W., Brooks, M.J., van den Hengel, A., 2000. On the fitting of surfaces to data with covariances. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11):1294-1303. 


[11] Devernay, F., Faugeras, O., 1995. Automatic calibration and removal of distortion from scenes of structured environments. Proceedings of SPIE, Investigative and Trial Image Processing, San Diego, CA, USA, p.62-72. 


[12] Devernay, F., Faugeras, O., 2001. Straight lines have to be straight. Machine Vision and Applications, 13(1):14-24. 


[13] Feng, S.J., Chen, Q., Zuo, C., 2013. Automatic identification and removal of outliers for high-speed fringe projection profilometry. Optical Engineering, 52(1):013605


[14] Fonstad, M.A., Dietrich, J.T., Courville, B.C., 2013. Topographic structure from motion: a new development in photogrammetric measurement. Earth Surface Processes and Landforms, 38(4):421-430. 


[15] Forsyth, D.A., Ponce, J., 2002. Computer Vision: a Modern Approach. Prentice Hall, Inc., New Jersey, USA.

[16] Gallego, G., Yezzi, A., Fedele, F., 2011. A variational stereo method for the three-dimensional reconstruction of ocean waves. IEEE Transactions on Geoscience and Remote Sensing, 49(11):4445-4457. 


[17] Gomez, A., Pushparajah, K., Simpson, J.M., 2013. A sensitivity analysis on 3D velocity reconstruction from multiple registered echo Doppler views. Medical Image Analysis, 17(6):616-631. 


[18] Gong, C., Wang, Z.L., 2009. Proficient in Matlab Optimization Calculation (in Chinese). Publishing House of Electronics Industry, Beijing, China.

[19] He, J., Wang, X.S., Li, Q.H., 2008. Optics imaging model based on DEM. Computer Engineering (in Chinese), 34(1):50-52. 

[20] Heritage, G., Hetherington, D., 2007. Towards a protocol for laser scanning in fluvial geomorphology. Earth Surface Processes and Landforms, 32(1):66-74. 


[21] Höfle, B., Rutzinger, M., 2011. Topographic airborne LiDAR in geomorphology: a technological perspective. Zeitschrift für Geomorphologie, Supplementary Issues, 55(2):1-29. 


[22] Hohenthal, J., Alho, P., Hyyppä, J., 2011. Laser scanning applications in fluvial studies. Progress in Physical Geography, 35(6):782-809. 


[23] James, M.R., Robson, S., 2012. Straightforward reconstruction of 3D surfaces and topography with a camera: accuracy and geoscience application. Journal of Geophysical Research: Earth Surface, 117:F03017


[24] Jiang, D.Z., Yu, Q., Wang, B.Y., 2001. Research and overview of imaging nonlinear distortion in computer vision. Computer Engineering (in Chinese), 27(12):108-110. 

[25] Kanatani, K., 1996. Statistical Optimization for Geometric Computation: Theory and Practice. Elsevier Science, Amsterdam.

[26] Li, J., 1991. Computer Vision Theory and Practice (in Chinese). Shanghai Jiao Tong University Press, Shanghai, China.

[27] Lohry, W., Zhang, S., 2012. Fourier transform profilometry using a binary area modulation technique. Optical Engineering, 51(11):113602


[28] Lu, S.L., Shen, X.H., Zou, L.J., 2008. An integrated classification method for thematic mapper imagery of plain and highland terrains. Journal of Zhejiang University-SCIENCE A, 9(6):858-866. 


[29] Matei, B., Meer, P., 2000. A general method for errors-in-variables problems in computer vision. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, p.18-25. 


[30] Miks, A., Novak, J., 2012. Dependence of camera lens induced radial distortion and circle of confusion on object position. Optics & Laser Technology, 44(4):1043-1049. 


[31] Miks, A., Novak, J., Novak, P., 2013. Method for reconstruction of shape of specular surfaces using scanning beam deflectometry. Optics and Lasers in Engineering, 51(7):867-872. 


[32] Mohammed, T.S., Al-Azzo, W.F., Al Mashani, K.M., 2013. Image processing development and implementation: a software simulation using Matlab®. Proceedings of the 6th International Conference on Information Technology.

[33] Muhlich, M., Mester, R., 2001. Subspace methods and equilibration in computer vision. Proceedings of the Scandinavian Conference on Image Analysis, Bergen, Norway, p.415-422.

[34] Notebaert, B., Verstraeten, G., Govers, G., 2009. Qualitative and quantitative applications of LiDAR imagery in fluvial geomorphology. Earth Surface Processes and Landforms, 34(2):217-231. 


[35] Omar, K., Ramli, A.R., Mahmod, R., 2012. Skew detection and correction of Jawi images using gradient direction. Jurnal Teknologi, 37:117-126. 


[36] Perš, J., Kovačič, S., 2002. Nonparametric, model-based radial lens distortion correction using tilted camera assumption. Proceedings of the Computer Vision Winter Workshop, Bad Aussee, Austria, p.286-295.

[37] Pollefeys, M., Gool, L.V., 2002. From images to 3D models. Communications of the ACM, 45(7):50-55. 


[38] Shashua, A., 1997. On photometric issues in 3D visual recognition from a single 2D image. International Journal of Computer Vision, 21(1-2):99-122. 


[39] Tordoff, B., Murray, D.W., 2000. Violating rotating camera geometry: the effect of radial distortion on self-calibration. Proceedings of the 15th IEEE International Conference on Pattern Recognition, Barcelona, Spain, p.423-427. 


[40] Wang, B., Tang, J., 2001. Comparison of the different methods for solar position calculation. Acta Energiae Solaris Sinica (in Chinese), 22(4):413-417. 

[41] Wang, J.H., Shi, F.H., Zhang, J., 2008. A new calibration model of camera lens distortion. Pattern Recognition, 41(2):607-615. 


[42] Wang, Z., 2007. Photogrammetry Principle (in Chinese). Wuhan University Press, Wuhan, China.

[43] Weng, J., Cohen, P., Herniou, M., 1992. Camera calibration with distortion models and accuracy evaluation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(10):965-980. 


[44] Xu, D., Bai, Y.C., 2013. Experimental study on the bed topography evolution in alluvial meandering rivers with various sinuousnesses. Journal of Hydro-environment Research, 7(2):92-102. 


[45] Yoo, H., 2013. Depth extraction for 3D objects via windowing technique in computational integral imaging with a lenslet array. Optics and Lasers in Engineering, 51(7):912-915. 


[46] Yu, W., Zhou, S., Wang, W., 2003. The Ground Meteorological Observation Specification (in Chinese). China Meteorological Press, Beijing, China.

[47] Yu, Z., Ce, H., Yu, Z., 1990. Measuring Adjustment Principle (in Chinese). Wuhan Technical University of Surveying and Mapping Press, Wuhan, China.

[48] Zhang, C., Lv, D.H., Xu, C.J., 2010. Computation for solar real-time position and its application in illuminant direction of image. Electronic Measurement Technology (in Chinese), 33(11):87-93. 

[49] Zhang, R., Tsai, P.S., Cryer, J.E., 1999. Shape-from-shading: a survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(8):690-706. 


[50] Zhang, Y.J., 2000. Image Understanding and Computer Vision (in Chinese). Tsinghua University Press, Beijing, China.

[51] Zhang, Z.Y., 2000. A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11):1330-1334. 


[52] Zhou, Y., Deng, C., 2011. Approximated maximum likelihood estimation approach for direct linear transformation. Geotechnical Investigation & Surveying (in Chinese), 39(12):50-54. 

[53] Zhu, S.H., Hu, H.B., 2012. Analysis of the effect on optical equipment caused by solar position in target flight measure. Proceedings of SPIE, Optoelectronic Imaging and Multimedia Technology II, 85581V.


[54] Zwick, S., Heist, S., Franzl, Y., 2013. Wide-band phase-shifting fringe projector on the basis of a tailored free-form mirror. Optical Engineering, 52(2):023001


