Optical Engineering 49(10), 103601 (October 2010)
Three-dimensional digital image correlation system for deformation measurement in experimental mechanics
Zheng-Zong Tang
Jin Liang
Zhen-Zhong Xiao
Cheng Guo
Hao Hu
Xi’an Jiaotong University
School of Mechanical Engineering, Number 28 Xianning West Road, Xi'an, Shaanxi, 710049, China
E-mail:

Abstract. A three-dimensional (3-D) digital image correlation system for deformation measurement in experimental mechanics has been developed. The key technologies applied in the system are discussed in detail, including stereo camera calibration, digital image correlation, 3-D reconstruction, and 3-D displacement/strain computation. A stereo camera self-calibration algorithm based on photogrammetry is proposed. In the algorithm, the interior and exterior orientation parameters of the stereo cameras and the 3-D coordinates of the calibration target points are estimated together using the bundle adjustment technique, so the 3-D coordinates of the calibration target points are not needed in advance to obtain a reliable camera calibration result. An efficient image correlation scheme with high precision is developed using the iterative least-squares nonlinear optimization algorithm, and a method based on a seed point is proposed to provide reliable initial values for the nonlinear optimization. After the 3-D coordinates of the object points are calculated using the triangulation method, the 3-D displacement/strain field can then be obtained from them. After calibration, the system accuracy for static profile, displacement, and strain measurement is evaluated through a series of experiments. The experimental results confirm that the proposed system is accurate and reliable for deformation measurement in experimental mechanics. © 2010 Society of Photo-Optical Instrumentation Engineers. [DOI: 10.1117/1.3491204]
Subject terms: stereo vision; digital image correlation; self-calibration; photogrammetry; seed point.
Paper 100213R received Mar. 17, 2010; revised manuscript received Jul. 2, 2010; accepted for publication Aug. 11, 2010; published online Oct. 22, 2010.
1 Introduction
Full-field deformation measurement (displacement/strain) under various loading conditions is a key task in experimental mechanics. The digital image correlation method, which was originally developed by Sutton et al. in the 1980s,1,2 is widely used3–7 for full-field deformation measurement due to its advantages of simple equipment, high precision, and noncontact measurement. Two-dimensional (2-D) digital image correlation,8 which is used with a single camera, can measure only in-plane displacement/strain fields on planar objects. To overcome this drawback of 2-D digital image correlation, Luo et al.9 proposed a 3-D digital image correlation technique, which combines digital image correlation with stereo vision and can measure the 3-D displacement field and surface strain field of a 3-D object.
It can be seen from the principle of 3-D digital image correlation that the two key technologies are stereo camera calibration and digital image correlation. Much work on camera calibration has been done. Luo et al.10 used a precision moving object to calibrate the cameras, which is quite laborious and time-consuming. A popular and practical algorithm was developed by Tsai11 using the radial alignment constraint, but in this method, initial camera parameters are required and only radial lens distortion is considered. Zhang12 proposed a flexible technique for camera calibration by viewing a plane from several arbitrary views, in which the calibration target is assumed to be an ideal plane and the manufacturing errors of the target are ignored. For digital image correlation, the Newton-Raphson (N-R) method13 is the most commonly used method. Compared to the N-R method, the iterative least-squares (ILS) algorithm14 is more concise and easier to implement, and it is used in our algorithm. Both methods are nonlinear optimization algorithms, and finding reliable initial values for them efficiently is a key issue.
Nowadays, there are some commercial 3-D digital image correlation systems on the market, such as the ARAMIS system (GOM Company, Braunschweig, Germany) and the VIC-3D system (Correlated Solutions, Columbia, South Carolina). However, these systems are usually too expensive for many research institutes to afford, especially in China, so the development of a low-cost 3-D digital image correlation system is still needed. Recently, a 3-D digital image correlation system (XJTUDIC) has been developed at Xi'an Jiaotong University, China. The XJTUDIC deformation measurement system is described in detail in this paper. Much attention has been paid to high-precision camera calibration and the digital image correlation method. A stereo camera self-calibration algorithm based on photogrammetry is proposed, in which a 10-parameter lens distortion model is adopted. Using the proposed method, the stereo cameras can be calibrated with high precision without any accurate calibration target. High-precision image correlation is realized using the ILS algorithm, and to solve the problem of calculating initial values for the nonlinear optimization, a method based on a seed point is developed that provides reliable initial values. After calibration, three experiments are carried out to validate the XJTUDIC system, and the experimental results show that the XJTUDIC system can satisfy the requirements of deformation measurement in experimental mechanics.

Fig. 1 Hardware construction of the XJTUDIC system.
2 System Description
2.1 Hardware Components
Figure 1 shows the hardware components of the XJTUDIC system developed at Xi'an Jiaotong University, which consists of the following parts: (1) two CMOS cameras (1280×960 pixels, 8 bits) for image acquisition, (2) two high-frequency LED lights for illumination, (3) a control box for controlling the cameras and LED lights, (4) a tripod for support, and (5) a computer on which the software runs.
2.2 XJTUDIC Software
The software of the XJTUDIC system was developed using the C++ programming language. Figure 2 shows the software interface, which has the following screen elements: (1) toolbar, (2) menu bar, (3) OpenGL 3-D view for result display (3-D points/displacement/strain), (4) project tree window for image list display, (5) control panel for camera and light control, (6) 2-D view for the left camera image display, (7) 2-D view for the right camera image display, (8) curve display window, and (9) status bar.

Fig. 2 Software interface.

Fig. 3 System workflow.
2.3 System Workflow
Figure 3 shows the workflow of the XJTUDIC system, which consists of the following main steps:
1. Spray the specimen with a stochastic speckle pattern if the specimen surface does not have enough features.
2. Calibrate the stereo cameras before the first use or whenever the relative position of the stereo cameras has changed.
3. Capture the images during the deformation, e.g., during a tensile test.
4. Select the calculation area in the left image of the first stage, and the software will divide the calculation area into subsets (where one subset represents a point). Then, all the other images are processed using the digital image correlation method, and the corresponding points in all the stages are obtained.
5. Reconstruct the 3-D coordinates of all the points using the triangulation method.
6. Calculate the 3-D displacement/strain using the 3-D coordinates of the points.
7. Display the displacement/strain field in the OpenGL 3-D view.

3 Stereo Camera Self-Calibration
Stereo vision is a technique for building a 3-D description of a scene from two different viewpoints.15 As shown in Fig. 4, O_wX_wY_wZ_w is the world coordinate system, O_1X_1Y_1Z_1 is the left camera coordinate system, O_2X_2Y_2Z_2 is the right camera coordinate system, and OXY is the image coordinate system. P_1(x_1, y_1) and P_2(x_2, y_2) are the two corresponding image points of an object point P(x_w, y_w, z_w) in the two cameras. The 3-D coordinates of the object point P can be obtained by the triangulation16 method if (1) the stereo cameras are calibrated and (2) the image coordinates of P_1 and P_2 are known. A real stereo vision setup is shown in Fig. 5.
Fig. 4 Stereo vision model.

The mathematical model of self-calibration based on photogrammetry is the well-known colinearity equations,17 which represent the transformation between image 2-D space and object 3-D space:

$$x - x_0 + dx = -f\,\frac{a_1(X - X_s) + b_1(Y - Y_s) + c_1(Z - Z_s)}{a_3(X - X_s) + b_3(Y - Y_s) + c_3(Z - Z_s)},$$

$$y - y_0 + dy = -f\,\frac{a_2(X - X_s) + b_2(Y - Y_s) + c_2(Z - Z_s)}{a_3(X - X_s) + b_3(Y - Y_s) + c_3(Z - Z_s)}, \qquad (1)$$
where (X, Y, Z) are the world coordinates of the object point, (X_s, Y_s, Z_s) are the coordinates of the perspective center, (x, y) are the measured image coordinates, (x_0, y_0) are the coordinates of the camera principal point, (dx, dy) accounts for the lens distortions, f is the principal distance, and

$$R = \begin{bmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{bmatrix}$$

is the rotation matrix between the world coordinate system and the camera coordinate system.
Fig. 5 Construction of stereo vision.

The calibration terms in Eq. (1) include the interior orientation parameters (the coordinates of the principal point x_0, y_0 and the principal distance f) and the lens distortion parameters. In order to improve the calibration precision, a more complete lens distortion model18 is adopted:
$$dx = A_1 x r^2 + A_2 x r^4 + A_3 x r^6 + B_1(r^2 + 2x^2) + 2 B_2 x y + C_1 x + C_2 y,$$

$$dy = A_1 y r^2 + A_2 y r^4 + A_3 y r^6 + B_2(r^2 + 2y^2) + 2 B_1 x y, \qquad (2)$$
where A_1, A_2, A_3 are the radial distortion coefficients, B_1, B_2 are the tangential distortion coefficients, C_1, C_2 are the thin prism distortion coefficients, and r is the radial distance from the principal point (r^2 = x^2 + y^2). So there are altogether 10 parameters (x_0, y_0, f, A_1, A_2, A_3, B_1, B_2, C_1, C_2) used in the self-calibration algorithm.
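For concreteness, the following C++ fragment sketches Eqs. (1) and (2): evaluating the 10-parameter distortion model and projecting a world point through the colinearity equations. The struct and function names (Camera, Distortion, Project) are illustrative and not taken from the XJTUDIC code; the sign convention follows Eq. (1), under which the measured coordinates satisfy x = x_ideal − dx.

```cpp
#include <array>

// Illustrative container for one camera; members mirror the symbols of Eqs. (1) and (2).
struct Camera {
    double f;              // principal distance (pixels)
    double x0, y0;         // principal point (pixels)
    double A1, A2, A3;     // radial distortion coefficients
    double B1, B2;         // tangential distortion coefficients
    double C1, C2;         // thin prism distortion coefficients
    double R[3][3];        // rotation matrix (rows a, b, c of Eq. (1))
    double Xs, Ys, Zs;     // perspective center in world coordinates
};

// Lens distortion of Eq. (2): corrections (dx, dy) at image position (x, y),
// where (x, y) is measured relative to the principal point.
inline void Distortion(const Camera& c, double x, double y,
                       double& dx, double& dy) {
    const double r2 = x * x + y * y;
    const double r4 = r2 * r2, r6 = r4 * r2;
    dx = c.A1 * x * r2 + c.A2 * x * r4 + c.A3 * x * r6
       + c.B1 * (r2 + 2 * x * x) + 2 * c.B2 * x * y
       + c.C1 * x + c.C2 * y;
    dy = c.A1 * y * r2 + c.A2 * y * r4 + c.A3 * y * r6
       + c.B2 * (r2 + 2 * y * y) + 2 * c.B1 * x * y;
}

// Colinearity equations, Eq. (1): ideal (distortion-free) image coordinates of a
// world point (X, Y, Z); the measured coordinates are x = x_ideal - dx, y = y_ideal - dy.
inline std::array<double, 2> Project(const Camera& c,
                                     double X, double Y, double Z) {
    const double dX = X - c.Xs, dY = Y - c.Ys, dZ = Z - c.Zs;
    const double u = c.R[0][0] * dX + c.R[0][1] * dY + c.R[0][2] * dZ;
    const double v = c.R[1][0] * dX + c.R[1][1] * dY + c.R[1][2] * dZ;
    const double w = c.R[2][0] * dX + c.R[2][1] * dY + c.R[2][2] * dZ;
    return { c.x0 - c.f * u / w, c.y0 - c.f * v / w };
}
```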
A planar target with 17 coded points and 208 uncoded points is employed, as shown in Fig. 6. In traditional methods, calibration targets must be manufactured to very high precision to achieve accurate calibration results, which is quite time-consuming and expensive. In this study, a more accurate and flexible calibration method based on photogrammetry is proposed, with which a reliable calibration result can be achieved without any accurate calibration target. This means that the accurate positions of all the points on the calibration target are not needed in advance; all that is needed is an accurate distance between two diagonal coded points as a scale.
The structure of Eq. (1) allows the direct formulation of the primary observed values (image coordinates) as functions of all unknown parameters (3-D coordinates of object points, interior and exterior orientation parameters, and lens distortion parameters). All the unknowns can be iteratively determined using the image coordinates as observations. The observation equations are obtained through the linearization of Eq. (1):

$$V = A X_1 + B X_2 + C X_3 - L, \qquad (3)$$

where V is the vector of reprojection residuals; X_1, X_2, and X_3 are the corrections to the interior orientation parameters (including the distortion parameters), the exterior orientation parameters, and the 3-D coordinates of the object points, respectively; A, B, and C are the corresponding matrices of partial derivatives; and L is the vector of observed image coordinates minus the values computed from the current parameter estimates.
Fig. 6 Calibration target with coded points and uncoded points.
For Eq. (3), if the interior orientation parameters and the 3-D coordinates of at least three object points are already known, the exterior orientation parameters of a single image can be obtained by space resection. Similarly, if the interior and exterior orientation parameters are already known, the 3-D coordinates of an object point can be computed via space intersection. If the interior and exterior orientation parameters, along with the 3-D coordinates of the object points, are refined simultaneously, the procedure is called bundle adjustment.19 The camera self-calibration algorithm based on photogrammetry is a combination of space resection, space intersection, and bundle adjustment.
The calibration algorithm consists of the following six steps:
1. Place the calibration target 360 mm from the measurement device, and capture eight pairs of images at different locations by moving the calibration target.
2. Determine the image coordinates of both the coded points and the uncoded points in the eight groups of images. The Canny operator is first used to detect the edges of the circle points. Then, the subpixel edge is obtained using the gradient of adjacent pixels. Last, a least-squares fitting algorithm is adopted to locate the center coordinates of the circle points. In addition, for the coded points, the ID is recognized.
3. Calculate the relative orientation using the coplanarity equation for the first two images, and reconstruct the 3-D coordinates of the coded points.
4. Compute the exterior orientation parameters of the other images using space resection, and reconstruct the 3-D coordinates of all the uncoded points using space intersection.
5. Optimize all the interior and exterior orientation parameters of the two cameras and the 3-D coordinates of the object points iteratively using the bundle adjustment method.
6. Compute the rotation matrix R and translation matrix T between the two camera coordinate systems using the calculated exterior orientation parameters.

4 Digital Image Correlation Method
4.1 Mathematical Model
Fig. 7 Basic principle of digital image correlation.

The digital image correlation method uses the random speckle pattern to match corresponding points precisely between two images. As shown in Fig. 7, the left image is the reference image, and the right image is the deformed image. In the reference image, a square reference subset of (2M+1)×(2M+1) pixels centered at point (x, y) is selected. The matching procedure is to find the corresponding subset, centered at point (x′, y′) in the deformed image, that has the maximum similarity with the reference subset. Then the two center points (x, y) and (x′, y′) are a pair of corresponding points in the two images. Obviously, the relative relationship of gray levels in the reference image does not change in the deformed image, so any point (x_i, y_i) in the reference subset can be mapped to a point (x′_i, y′_i) in the deformed image according to a mapping function. The first-order mapping function is used in our algorithm, which allows translation, rotation, shear, normal strains, and their combinations of the subset:
$$x'_i = x_0 + \Delta x + u + u_x \Delta x + u_y \Delta y,$$

$$y'_i = y_0 + \Delta y + v + v_x \Delta x + v_y \Delta y, \qquad (4)$$
where Δx, Δy are the distances from the subset center to point (x_i, y_i); u and v are the displacement components of the reference subset center in the x and y directions; and u_x, u_y, v_x, v_y are the first-order displacement gradients of the reference subset.
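As a minimal illustration, the first-order shape function of Eq. (4) can be written as a small C++ helper; ShapeParams and MapPoint are illustrative names, not the XJTUDIC interfaces.

```cpp
// Parameters of the first-order shape function, Eq. (4).
struct ShapeParams {
    double u = 0, ux = 0, uy = 0;   // displacement and gradients in x
    double v = 0, vx = 0, vy = 0;   // displacement and gradients in y
};

// Map a reference-subset pixel at offset (dx, dy) from the subset center
// (x0, y0) into the deformed image according to Eq. (4).
inline void MapPoint(const ShapeParams& p, double x0, double y0,
                     double dx, double dy, double& xp, double& yp) {
    xp = x0 + dx + p.u + p.ux * dx + p.uy * dy;
    yp = y0 + dy + p.v + p.vx * dx + p.vy * dy;
}
```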
The gray values of points (x_i, y_i) and (x′_i, y′_i) are f(x_i, y_i) and g(x′_i, y′_i), respectively. They are theoretically identical. In practice, however, they are not equal because of illumination changes and random noise, so the relationship between them can be expressed as

$$f(x_i, y_i) - e(x_i, y_i) = r_0 + r_1\, g(x'_i, y'_i), \qquad (5)$$

where e(x_i, y_i) stands for the noise component, and r_0, r_1 are used to compensate for the gray-value difference caused by illumination variation. It has to be noted that an interpolation scheme is needed in the implementation because the coordinates of points in the deformed image are not integer pixels; a bicubic spline interpolation scheme20 is adopted in our algorithm.
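The paper adopts a bicubic spline scheme;20 purely as an illustrative stand-in, the fragment below interpolates gray values at sub-pixel positions with the common cubic convolution (Keys) kernel, which serves the same purpose in this sketch. The function names and the raw row-major image layout are assumptions.

```cpp
#include <cmath>
#include <vector>

// Cubic convolution kernel (Keys, a = -0.5); not the bicubic spline of Ref. 20,
// only an illustrative interpolant for sub-pixel gray values.
inline double CubicKernel(double t) {
    const double a = -0.5, x = std::fabs(t);
    if (x <= 1.0) return ((a + 2.0) * x - (a + 3.0)) * x * x + 1.0;
    if (x < 2.0)  return ((a * x - 5.0 * a) * x + 8.0 * a) * x - 4.0 * a;
    return 0.0;
}

// Interpolate the deformed image g (8-bit, row-major, given width) at the
// non-integer position (x, y) from its 4x4 integer-pixel neighborhood.
// Bounds checks are omitted for brevity.
inline double InterpolateGray(const std::vector<unsigned char>& g, int width,
                              double x, double y) {
    const int ix = static_cast<int>(std::floor(x));
    const int iy = static_cast<int>(std::floor(y));
    double value = 0.0;
    for (int m = -1; m <= 2; ++m)
        for (int n = -1; n <= 2; ++n)
            value += g[(iy + m) * width + (ix + n)]
                   * CubicKernel(x - (ix + n)) * CubicKernel(y - (iy + m));
    return value;
}
```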
Assume that there are n pixels in the reference subset and that the image pixels are corrupted by independent and identically distributed noise. The corresponding subset in the deformed image that has the maximum similarity with the reference subset can then be obtained by minimizing the following function:

$$C_{SSD}(p) = \sum_{i=1}^{n} \left[ f(x_i, y_i) - r_0 - r_1\, g(x'_i, y'_i) \right]^2, \qquad (6)$$
where p = [u, u_x, u_y, v, v_x, v_y, r_0, r_1] represents the vector of correlation parameters. This is a nonlinear minimization problem, which can be solved by using the ILS algorithm. To solve it, initial values of the correlation parameters must be provided in advance. In the traditional method, the initial values of u and v are obtained by a coarse pixel-by-pixel search, and the initial values of the remaining correlation parameters are set as follows:

$$u_x = u_y = v_x = v_y = r_0 = 0, \quad r_1 = 1. \qquad (7)$$
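A minimal sketch of the traditional coarse search follows: the reference subset is slid pixel by pixel over a search window in the deformed image, and the integer offset with the smallest sum of squared differences is taken as the initial (u, v). The Image wrapper, the search range, and the absence of bounds checks are simplifying assumptions.

```cpp
#include <limits>
#include <vector>

// Minimal 8-bit gray image wrapper (illustrative, not the XJTUDIC class).
struct Image {
    int width = 0, height = 0;
    std::vector<unsigned char> data;                       // row-major
    double at(int x, int y) const { return data[y * width + x]; }
};

// Integer-pixel coarse search for the initial displacement (u, v) of the
// (2M+1)x(2M+1) reference subset centered at (cx, cy); all other correlation
// parameters are initialized according to Eq. (7).
inline void CoarseSearch(const Image& ref, const Image& def,
                         int cx, int cy, int M, int range,
                         int& bestU, int& bestV) {
    double bestSSD = std::numeric_limits<double>::max();
    bestU = bestV = 0;
    for (int v = -range; v <= range; ++v)
        for (int u = -range; u <= range; ++u) {
            double ssd = 0.0;
            for (int dy = -M; dy <= M; ++dy)
                for (int dx = -M; dx <= M; ++dx) {
                    const double d = ref.at(cx + dx, cy + dy)
                                   - def.at(cx + u + dx, cy + v + dy);
                    ssd += d * d;
                }
            if (ssd < bestSSD) { bestSSD = ssd; bestU = u; bestV = v; }
        }
}
```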
4.2 Calculation of Initial Values of Correlation Parameters Based on a Seed Point
As mentioned earlier, initial values of the correlation parameters are needed in the ILS algorithm. Inaccurate initial values may decrease the calculation speed or even lead to a wrong convergence result. As can be seen in Fig. 8(a), usually a calculation area has to be specified and then divided into evenly spaced subsets (green rectangles) in the reference image. In the traditional method, for each subset to be matched, a coarse pixel-by-pixel search is used to get the initial values of the correlation parameters, which is quite time-consuming and unstable. Moreover, only the initial values of u and v are considered in this method, which may fail to work, especially in large-deformation situations.

Fig. 8 Digital image correlation procedure based on a seed point: (a) the seed point is matched, and (b) all points are matched.

A method based on a seed point is proposed to calculate the initial values of the correlation parameters. As can be seen in Fig. 8(a), after the calculation area is specified and divided into subsets, one subset is chosen as the seed point (red rectangle). The traditional method is adopted to match the selected seed point first. Considering the continuity of deformation, the seed point is then used to calculate the initial values of the correlation parameters for its four neighbor points (left, right, up, and down). An estimate of the location of a neighbor point in the deformed image can be obtained according to Eq. (4), which directly yields the initial values of u and v; the initial values of the remaining correlation parameters are set equal to those of the seed point. Then, the ILS algorithm is used to refine the correlation parameters of the neighbor points. Once the four neighbor points are matched successfully, they act as seed points for their own neighbor points. The process repeats until all the points are matched, as can be seen in Fig. 8(b). Using this method, not only is the computing time much reduced, but the precision of the initial values is also improved.
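The following C++ sketch shows this propagation order: starting from the matched seed subset, initial guesses spread breadth-first to the four neighbors over the grid of subsets. The ILS refinement itself is abstracted behind a caller-supplied refine function, and the Params type stands for the correlation parameter vector; both are illustrative assumptions rather than the XJTUDIC interfaces.

```cpp
#include <queue>
#include <utility>
#include <vector>

// Breadth-first seed propagation over a rows x cols grid of subsets.
// refine(row, col, initialGuess) stands for the ILS optimization of one subset
// and returns its converged correlation parameters.
template <typename Params, typename Refine>
void PropagateFromSeed(int rows, int cols, int seedRow, int seedCol,
                       const Params& seedResult, Refine refine,
                       std::vector<std::vector<Params>>& result) {
    std::vector<std::vector<bool>> done(rows, std::vector<bool>(cols, false));
    result.assign(rows, std::vector<Params>(cols));
    result[seedRow][seedCol] = seedResult;
    done[seedRow][seedCol] = true;

    std::queue<std::pair<int, int>> frontier;
    frontier.push({seedRow, seedCol});
    const int dr[4] = {-1, 1, 0, 0};
    const int dc[4] = {0, 0, -1, 1};
    while (!frontier.empty()) {
        const auto [r, c] = frontier.front();
        frontier.pop();
        for (int k = 0; k < 4; ++k) {
            const int nr = r + dr[k], nc = c + dc[k];
            if (nr < 0 || nr >= rows || nc < 0 || nc >= cols || done[nr][nc])
                continue;
            // The already-matched neighbor supplies the initial guess.
            result[nr][nc] = refine(nr, nc, result[r][c]);
            done[nr][nc] = true;
            frontier.push({nr, nc});
        }
    }
}
```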
5 3-D Reconstruction and 3-D Displacement/Strain Calculation
5.1 3-D Reconstruction
Three-dimensional reconstruction involves all the stages in the deformation process, and each stage has two images captured by the stereo cameras. Figure 9 shows the whole matching process for all the images. First, the calculation area is specified and divided into subsets in the left image of the reference stage (stage 1). Then, all the images are processed according to the following rules: the left image of each stage is matched with the left image of the reference stage, and the right image of each stage is matched with the left image of the same stage. After all the images are matched, the 3-D coordinates of all the points in each stage can be obtained through the triangulation method, using the calibration parameters of the stereo cameras and the corresponding image points in the left and right images.

Fig. 9 Matching process of 3-D reconstruction.
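As one standard way to realize the triangulation step, the sketch below intersects the two viewing rays by the midpoint method, using the Eigen library for the small linear algebra. It assumes that the image coordinates have already been reduced to the principal point and corrected for distortion with Eq. (2); the names and the exact triangulation variant are assumptions, not necessarily the XJTUDIC implementation.

```cpp
#include <Eigen/Dense>

// One calibrated camera: perspective center C and rotation R of Eq. (1).
struct CalibratedCamera {
    Eigen::Matrix3d R;      // world -> camera rotation (rows a, b, c of Eq. (1))
    Eigen::Vector3d C;      // perspective center (Xs, Ys, Zs)
    double f;               // principal distance
};

// World-space viewing ray through image point (x, y); the direction follows the
// photogrammetric convention of Eq. (1), (u, v, w) ~ (-x, -y, f).
inline Eigen::Vector3d RayDirection(const CalibratedCamera& cam,
                                    double x, double y) {
    return cam.R.transpose() * Eigen::Vector3d(-x, -y, cam.f);
}

// Midpoint triangulation: the 3-D point closest to both viewing rays.
inline Eigen::Vector3d Triangulate(const CalibratedCamera& left, double xl, double yl,
                                   const CalibratedCamera& right, double xr, double yr) {
    const Eigen::Vector3d d1 = RayDirection(left, xl, yl);
    const Eigen::Vector3d d2 = RayDirection(right, xr, yr);
    const Eigen::Vector3d r = right.C - left.C;

    // Solve the 2x2 normal equations for the ray parameters t1, t2.
    Eigen::Matrix2d A;
    A << d1.dot(d1), -d1.dot(d2),
         d1.dot(d2), -d2.dot(d2);
    const Eigen::Vector2d b(d1.dot(r), d2.dot(r));
    const Eigen::Vector2d t = A.inverse() * b;   // fine for a well-posed 2x2 system

    const Eigen::Vector3d p1 = left.C + t(0) * d1;
    const Eigen::Vector3d p2 = right.C + t(1) * d2;
    return 0.5 * (p1 + p2);                       // midpoint of closest approach
}
```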
5.2 3-D Displacement/Strain Calculation
Once the 3-D reconstruction of all the stages is finished, the 3-D displacement of any point in a stage can be obtained directly by comparing its 3-D coordinates in the current stage and the reference stage.
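This step is a direct coordinate difference; a trivial Eigen-based sketch (with illustrative names) is:

```cpp
#include <Eigen/Dense>
#include <vector>

// 3-D displacement of every measured point: current-stage coordinates minus
// reference-stage coordinates, in the same world coordinate system.
inline std::vector<Eigen::Vector3d> Displacements(
    const std::vector<Eigen::Vector3d>& reference,
    const std::vector<Eigen::Vector3d>& current) {
    std::vector<Eigen::Vector3d> d(reference.size());
    for (std::size_t i = 0; i < reference.size(); ++i)
        d[i] = current[i] - reference[i];          // magnitude: d[i].norm()
    return d;
}
```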
The calculation of strain is relatively complex.21 As can be seen in Fig. 10, the 3-D coordinates of the eight neighbor points are used to calculate the strain at point P. The detailed calculation steps are as follows:

Fig. 10 Strain calculated using 3-D points.

1. In the reference stage, calculate a tangential plane using the neighbor points of point P. Project the neighbor points onto the tangential plane to get a set of 2-D points (P_r) in an arbitrary 2-D coordinate system OXY.

2. Repeat exactly the same process in the current stage to get a set of 2-D points (P_c) in an arbitrary 2-D coordinate system O′X′Y′.

3. Calculate the deformation gradient tensor F (a 2×2 matrix) using the two sets P_r and P_c, which are related as follows:

$$P_c = u + F\, P_r, \qquad (8)$$

where u stands for the rigid-body translation between P_r and P_c. To solve for F, a standard least-squares algorithm can be adopted. The deformation gradient tensor F = RU can be split into the rotation matrix R and the stretch tensor U, and the strain at point P in the current stage can then be obtained from U directly.
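A hedged Eigen-based sketch of step 3 is given below: Eq. (8) is solved for F and u by linear least squares, F is split into R and U by an SVD-based polar decomposition, and a strain measure is formed. Using the Green-Lagrange strain E = (1/2)(FᵀF − I) here is an illustrative choice; the function names are likewise assumptions rather than the XJTUDIC interfaces.

```cpp
#include <Eigen/Dense>
#include <vector>

// Solve Eq. (8), Pc_i = u + F * Pr_i, for the 2x2 deformation gradient F (and
// translation u) by linear least squares over the neighbor points of Fig. 10.
inline Eigen::Matrix2d DeformationGradient(const std::vector<Eigen::Vector2d>& Pr,
                                           const std::vector<Eigen::Vector2d>& Pc) {
    // Unknowns: [F11 F12 F21 F22 u1 u2]; each point contributes two equations.
    const int n = static_cast<int>(Pr.size());
    Eigen::MatrixXd A = Eigen::MatrixXd::Zero(2 * n, 6);
    Eigen::VectorXd b(2 * n);
    for (int i = 0; i < n; ++i) {
        A(2 * i,     0) = Pr[i].x();  A(2 * i,     1) = Pr[i].y();  A(2 * i,     4) = 1.0;
        A(2 * i + 1, 2) = Pr[i].x();  A(2 * i + 1, 3) = Pr[i].y();  A(2 * i + 1, 5) = 1.0;
        b(2 * i)     = Pc[i].x();
        b(2 * i + 1) = Pc[i].y();
    }
    const Eigen::VectorXd x = A.colPivHouseholderQr().solve(b);
    Eigen::Matrix2d F;
    F << x(0), x(1), x(2), x(3);
    return F;
}

// Polar decomposition F = R * U via SVD, plus the Green-Lagrange strain
// E = 0.5 * (F^T F - I) as one possible strain measure derived from F.
inline void StrainFromF(const Eigen::Matrix2d& F,
                        Eigen::Matrix2d& Rrot, Eigen::Matrix2d& U,
                        Eigen::Matrix2d& E) {
    Eigen::JacobiSVD<Eigen::Matrix2d> svd(F, Eigen::ComputeFullU | Eigen::ComputeFullV);
    Rrot = svd.matrixU() * svd.matrixV().transpose();
    U = svd.matrixV() * svd.singularValues().asDiagonal() * svd.matrixV().transpose();
    E = 0.5 * (F.transpose() * F - Eigen::Matrix2d::Identity());
}
```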
6 Experiment Results and Analysis
6.1 Camera Calibration Experiment
The stereo cameras are calibrated using the method described in Sec. 3. Figure 11 shows a pair of images used in the calibration procedure. The calibrated interior orientation and lens distortion parameters of the two cameras are listed in Table 1.

Fig. 11 Pair of images used in the calibration.
In addition, the rotation matrix R and translation matrix T between the two camera coordinate systems are obtained as follows:

$$R = \begin{bmatrix} 9.404\times10^{-1} & -1.100\times10^{-2} & -3.398\times10^{-1} \\ 1.100\times10^{-2} & 9.999\times10^{-1} & -4.428\times10^{-3} \\ 3.398\times10^{-1} & 7.732\times10^{-4} & 9.405\times10^{-1} \end{bmatrix},$$

$$T = \begin{bmatrix} -144.746 & -1.647 & -22.531 \end{bmatrix}^T.$$
Table 1 Interior orientation and lens distortion parameters of the two cameras.
Parameter     Left camera        Right camera
f /pixel      9.003530e+003      8.744157e+003
x0 /pixel     5.350642e+001      -1.341878e+001
y0 /pixel     3.413115e+001      -9.775422e+000
A1            -2.088668e-010     -1.081924e-009
A2            6.218764e-015      1.668371e-014
A3            -6.682331e-021     -2.002049e-020
B1            2.135709e-007      3.216488e-007
B2            2.475766e-007      -2.423612e-007
E1            -1.850259e-004     -1.384058e-004
E2            -3.266422e-004     2.839439e-005
Fig. 12 Standard cylinder (a) and measurement result (b).

Fig. 13 3-D points and their displacement vectors in the OpenGL view.

Fig. 14 Curve of calculated displacement magnitude.

Fig. 15 Specimen: (a) specimen size and (b) steel specimen.