Routines for Relative Pose of Two Calibrated
Cameras from 5 Points
Bill Triggs
INRIA Rhône-Alpes,
655 avenue de l’Europe, 38330 Montbonnot, France.
www.inrialpes.fr/movi/people/Triggs
Bill.Triggs@inrialpes.fr
July 23, 2000
1 Introduction
This report describes a library of C routines for finding the relative pose of two calibrated perspective cameras given the images of five unknown 3D points. The relative pose is the translational and rotational displacement between the two camera frames, also called camera motion and relative orientation.
As images contain no record of the overall spatial scale factor, the scale of the inter-camera translation can not be recovered. So the relative pose problem has 5 degrees of freedom: three angles giving the angular orientation of the second camera in the frame of the first, and two angles giving the direction of the inter-camera translation. According to the usual coplanarity (epipolar) constraint, each pair of corresponding image points gives one constraint on these degrees of freedom, so in principle 5 point pairs are enough for a solution. There are a number of ways to parametrize the problem algebraically and solve the resulting system of polynomial constraints to obtain the solution. The method below uses a quaternion based formulation and multiresultants and eigendecomposition for the solution. There are a great many previous references on this problem, although relatively few give algorithms suitable for numerical implementation. For a sample and further references, see [2,10,11,6,4,3,1,5,9,7].
A well-formulated polynomial system for the 5 point problem generically has 20 possible algebraic solutions [1,4,5]. However many of these are often complex, and of the real ones at least half have negative point depths, corresponding to invisible points behind the camera. In fact, the 20 solutions fall into 10 “twisted pairs”: solution pairs differing only by an additional rotation of the second camera by 180° about the axis joining the two cameras. The two members of a twisted pair always give opposite relative signs for the two depths of any 3D point, so they can not both be physically realizable. Hence, the 5 point relative pose problem has at most 10 real, physically feasible solutions. More often it has between one and five such solutions (see, e.g., [10]), but in rare cases 10 feasible solutions are indeed possible [1,4,3].
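For reference, the twisted-pair mate of a solution can be written down explicitly (a standard formula, not spelled out in this report): if (R, t) is one member of the pair, the other is obtained by composing R with a 180° rotation about the translation direction,

(R, t)  →  (R_{t,π} R, t),      R_{t,π} = 2 t̂ t̂⊤ − I,      t̂ = t/‖t‖.

Since [t]_× R_{t,π} = −[t]_×, both members satisfy exactly the same coplanarity constraints; only the signs of the reconstructed point depths distinguish them.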
The relative pose problem has various singularities [7,11], both for special point configurations and for special motion types. We will not go into detail on these. However, note that for an inter-camera translation of zero, even though the inter-camera rotation can still be found accurately, the corresponding translation direction is undefined and the point-camera distances can not be recovered as there is no baseline for triangulation. In fact, pure camera rotation is a singular case in all formulations of the general 5 point problem of which we are aware, and has to be handled as a special case.
2 Code Organization
The main entry point to the library is the driver routine relorient5(). This checks for zero translation using relorient_rot(), then calls the main multiresultant based solver relorient5m(). LAPACK is used for the linear algebra, namely LU decomposition, SVD and nonsymmetric eigensystems. Various small utility routines are also provided for converting quaternions to and from 3×3 rotation matrices. For usage of the routines and further information, see the comments at the start of each file.
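As an illustration of the quaternion conventions used below (scalar component written last, rotation acting as R x = q x q̄), here is a minimal sketch of a quaternion-to-rotation-matrix conversion. It is our own example, not the library's actual utility routine, and the function name is hypothetical.

/* Sketch: convert a unit quaternion q = (q1,q2,q3,q0), scalar q0 stored
   last, to the corresponding 3x3 rotation matrix, using the standard
   formula for the rotation R(q) x = q x qbar of a pure-vector quaternion. */
static void quat_to_rot(const double q[4], double R[3][3])
{
    double x = q[0], y = q[1], z = q[2], w = q[3];
    R[0][0] = 1 - 2*(y*y + z*z);  R[0][1] = 2*(x*y - z*w);      R[0][2] = 2*(x*z + y*w);
    R[1][0] = 2*(x*y + z*w);      R[1][1] = 1 - 2*(x*x + z*z);  R[1][2] = 2*(y*z - x*w);
    R[2][0] = 2*(x*z - y*w);      R[2][1] = 2*(y*z + x*w);      R[2][2] = 1 - 2*(x*x + y*y);
}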
A test program is provided in test
1 K = ( f 0 u0 ; 0 f v0 ; 0 0 1 ) for a perspective camera with focal length f and principal point (u0, v0).
We assume that the image points have been pre-normalized by multiplying them by K⁻¹ (with K as in footnote 1), so that the effective perspective matrix becomes (R|t). Such normalized ‘points’ represent 3D vectors expressed in camera frame coordinates, pointing outwards along the optical ray of the corresponding image point. For convenience, we assume that the 3-vectors have been normalized to unit vectors. So by “image point”, we really mean a unit-norm 3D direction vector giving the outwards direction of the corresponding optical ray in the camera frame.
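As a concrete sketch of this normalization (our own illustrative helper, not a routine from the library), a pixel (u, v) is mapped to a unit-norm optical ray direction using the K of footnote 1:

#include <math.h>

/* Map a pixel (u,v) to the unit-norm outward ray direction in the camera
   frame, for focal length f and principal point (u0,v0): apply K^{-1} to
   (u,v,1), giving ((u-u0)/f, (v-v0)/f, 1), then normalize. */
static void pixel_to_ray(double u, double v, double f, double u0, double v0,
                         double ray[3])
{
    double x = (u - u0) / f, y = (v - v0) / f, z = 1.0;
    double n = sqrt(x*x + y*y + z*z);
    ray[0] = x / n;  ray[1] = y / n;  ray[2] = z / n;
}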
We use 3D coordinates centred on the frame of the first camera, so the two camera matrices are (I|0) and (R(q)|t). We need to recover R or q, and t up to scale. Let x_i, y_i, i = 1,...,5, be the normalized image points (optical ray direction vectors) in respectively the first and the second cameras. Each image point/direction vector has an associated 3D depth (point-camera distance), called λ_i for x_i and µ_i for y_i. The corresponding 3D point is just λ_i x_i in the first camera frame and µ_i y_i in the second, so by elementary 3D geometry:

µ_i y_i = (R|t) (λ_i x_i, 1)⊤ = R (λ_i x_i) + t        (1)

Among other things, this is the basis of intersection/triangulation for finding the point depths λ_i, µ_i. As the three terms of (1) must be coplanar, we get the well-known coplanarity or epipolar constraint:
⟨y_i, R x_i, t⟩ = 0        (2)
where ⟨a, b, c⟩ is the 3-vector triple product a·(b∧c) = det(a, b, c). The coplanarity constraint (2) gives one scalar constraint on (R|t) per point pair. However note that the constraint vanishes identically for t → 0, so any relative pose method based on it is likely to fail in this case.
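As a small illustration (our own helper, with an illustrative name), the residual of constraint (2) for one point pair can be evaluated directly as a triple product; it vanishes exactly when (R, t) is consistent with that correspondence:

/* Coplanarity residual <y, R x, t> = det(y, Rx, t) of equation (2)
   for one correspondence (x in the first camera, y in the second). */
static double coplanarity_residual(const double y[3], const double R[3][3],
                                   const double x[3], const double t[3])
{
    double Rx[3], c[3];
    int i;
    for (i = 0; i < 3; i++)
        Rx[i] = R[i][0]*x[0] + R[i][1]*x[1] + R[i][2]*x[2];
    /* c = Rx ^ t, then residual = y . c = det(y, Rx, t) */
    c[0] = Rx[1]*t[2] - Rx[2]*t[1];
    c[1] = Rx[2]*t[0] - Rx[0]*t[2];
    c[2] = Rx[0]*t[1] - Rx[1]*t[0];
    return y[0]*c[0] + y[1]*c[1] + y[2]*c[2];
}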
We will use a quaternion based form of the coplanarity constraint for our relative pose method. If you are not familiar with quaternion algebra, you will have to take the rest of this section on trust. Quaternions are a way of encoding 3D rotations, algebraically convenient in the sense that only 4 numbers (a 4-vector q containing a 3-vector q and a scalar q0) with one nonlinear constraint (‖q‖² = 1) are needed, rather than the 9 components of a 3×3 rotation matrix R subject to the 6 quadratic constraints R R⊤ = I. Quaternions have a bilinear product that encodes rotation composition, and a conjugation operation q̄ that encodes rotation inversion. Treating a 3-vector x as a quaternion with zero scalar component, the rotation acts as R x = q x q̄ (where juxtaposition denotes quaternion multiplication), and the triple product of three 3-vectors is the scalar part of their quaternion product ⟨a, b, c⟩ = (a b c)_0.
Putting these elements together, we can write the coplanarity constraint (2) as a bilinear constraint in two quaternion (4-vector) unknowns q and p ≡ q̄ t:

0 = ⟨y_i, R x_i, t⟩ = (y_i q x_i q̄ t)_0 = (y_i q x_i p)_0 = q⊤ B(y_i, x_i) p        (3)

where the 4×4 matrix B turns out to be:
B(y_i, x_i) = ( x_i y_i⊤ + y_i x_i⊤ − (y_i·x_i) I     y_i ∧ x_i )
              (          −(y_i ∧ x_i)⊤                −y_i·x_i  )        (4)
(Here, ‘∧’ is the cross product, and the scalar component of the quaternions is written last.) We get one of these bilinear equations for each point pair. Also, owing to the form of p = q̄ t, we have the additional bilinear constraint

(p q)_0 = (q̄ t q)_0 = 0        (5)

This gives a total of 5+1 = 6 bilinear equations on the 4+4 = 8 components of q, p. As q, p are defined only up to scale they have just 6 degrees of freedom between them, and the polynomial system turns out to be (generically) well-constrained, with 20 roots.
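For concreteness, here is a minimal sketch (illustrative helper name, not the library's code) of assembling the 4×4 matrix B(y, x) of equation (4), with the scalar quaternion component stored last, so that constraint (3) for the pair reads q⊤ B(y, x) p = 0:

/* Assemble the 4x4 matrix B(y,x) of equation (4); scalar component last. */
static void build_B(const double y[3], const double x[3], double B[4][4])
{
    double dot = y[0]*x[0] + y[1]*x[1] + y[2]*x[2];
    double cx[3] = { y[1]*x[2] - y[2]*x[1],      /* y ^ x */
                     y[2]*x[0] - y[0]*x[2],
                     y[0]*x[1] - y[1]*x[0] };
    int i, j;
    for (i = 0; i < 3; i++)
        for (j = 0; j < 3; j++)                  /* x y' + y x' - (y.x) I */
            B[i][j] = x[i]*y[j] + y[i]*x[j] - (i == j ? dot : 0.0);
    for (i = 0; i < 3; i++) {
        B[i][3] =  cx[i];                        /* top-right:   y ^ x     */
        B[3][i] = -cx[i];                        /* bottom-left: -(y ^ x)' */
    }
    B[3][3] = -dot;                              /* bottom-right: -y . x   */
}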
3.2 Sparse Multiresultant Polynomial Solver
3.2.1 General Approach
Of the many ways to solve the above system of 6 polynomials, we will use a multiresultant approach. We can not describe this in any detail here. See [2] for a description and references, and [8] for a general tutorial on methods of solving polynomial systems using matrices. In our case, the method builds a large (60×60) but fairly sparse matrix from the polynomial system using multiresultant techniques, and uses linear algebra to reduce this to a 20×20 nonsymmetric matrix whose eigenvalues and eigenvectors encode the 20 roots of the system.
To get a general idea of the approach, note that any polynomial is a sum of monomials in its unknowns, multiplied by scalar coefficients. If we choose a set of monomials, we can represent any polynomial on them as a row vector whose entries are the coefficients and whose columns are labelled by the corresponding monomials. This allows us to use linear algebra to manipulate systems of polynomials. In fact, each row and column of the 60×60 and 20×20 matrices that we build corresponds to a specific monomial in the unknown variables q and p. The real art of the method lies in finding suitable sets of row and column monomials, where “suitable” means that the resulting matrices are both nonsingular and relatively small. Everything else follows almost inevitably from this choice. The choice requires some advanced theory in principle, and brute force search in practice.
Suppose that we restrict attention to polynomials on a given monomial set A. In linear algebra language, to evaluate the polynomial at a point (set of variable values), we dot-product the polynomial's row vector with a column vector containing the corresponding monomials evaluated at the point. If the point is a root of the polynomial, the dot product (polynomial value) must vanish. So the root's monomial vector is orthogonal to the polynomial's row vector. If we can generate a series of independent polynomials with the same root, these will give a series of linear constraints on the monomial vector. With luck, these will eventually suffice to restrict the monomial vector to a 1D subspace, and hence give it uniquely up to scale. Given this, it is a trivial manipulation to read off the corresponding variable values at the root from the up-to-scale monomial vector. If there are several roots, their monomial vectors all lie in the orthogonal complement of the constraints. As different monomial vectors are linearly independent, we can only hope to constrain the monomial vector to a subspace of dimension equal
to at least the number of independent roots, but it turns out that eigendecomposition methods can be used to extract the roots from this residual subspace.
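As a toy illustration of this row-vector picture (an example of ours, not taken from the report): the polynomial p(x, y) = 2x² + 3xy − 1, written over the monomial set A = {x², xy, y², x, y, 1}, becomes the row vector (2, 3, 0, 0, 0, −1). Evaluating p at a point (x*, y*) is the dot product of this row with the point's monomial vector m(x*, y*) = (x*², x*y*, y*², x*, y*, 1)⊤, so (x*, y*) is a root exactly when m(x*, y*) is orthogonal to the coefficient row.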
To create a series of independent polynomials with the same root, we work as follows. Given a set A of column-label monomials and an input polynomial p, we can form the set of all multiples of p by arbitrary monomials q, such that the monomials of the polynomial q p are all contained in A. This corresponds to forming the set of row vectors q p whose nonzero entries lie entirely within the columns labelled by A. If p has a root at some point, q p must as well, so all of these rows will be orthogonal to the root's monomial vector. If we are interested in the simultaneous roots of a system of several polynomials, we can form the set of admissible row vectors for each polynomial separately, and stack them together into a big “multiresultant” matrix to get further constraints on the root monomials.
If the system is generic and has only a single isolated root, it turns out that this construction eventually succeeds in isolating the 1D subspace spanned by the root's monomial vector. All that is needed for this is a sufficiently large (and otherwise suitable) column monomial set A. There exist a number of theoretical multiresultant construction methods that give sufficient sets for A under various kinds of genericity assumptions on the input polynomials. We will not go into these, because the details are complicated and in any case they seldom give minimal sets for A. A practical multiresultant method can usually do better by some kind of combinatorial search over the possible monomial sets A, which is exactly how the monomial sets given below were generated.
In our case there are multiple roots so we can not use the above construction as it stands. However, by treating one of the variables in the problem as if it were a constant (i.e. a part of the coefficients, not the monomials), we can run through the same process to get a multiresultant matrix whose entries are no longer scalars, but rather polynomials in the “hidden” (treated-as-constant) variable. Roots of the system are still null-vectors of this matrix, provided that the hidden variable is given its correct value for the root. So we can find roots by looking for values of the hidden variable for which the multiresultant matrix is singular (has a nontrivial null space, corresponding to the root's monomial vector). If the matrix is actually linear in the hidden variable (which ours is), this requirement already has the form of a so-called generalized eigenproblem for which standard linear algebra methods exist. If not, it can still be converted into an eigenproblem (e.g. by using companion matrices), but we will not need this here.
3.2.2 Details of the 5 Point Method
In the implementation of the 5 point relative pose method, the multiresultant matrix is constructed by taking the following 10 multiples of each of the 5+1 = 6 input polynomials (3) and (5):
[1, q1, q2, q3, q1², q1q2, q1q3, q2², q2q3, q3²]
These multiples give a 60×60 multiresultant matrix with columns labelled respectively by the following three lists of 10, 30 and 20 monomials:

[p1q1³, p1q1²q2, p1q1²q3, p1q1q2², p1q1q2q3,
 p1q1q3², p1q1², p1q1q2, p1q1q3, p1q1]        (6)

[p1q2³, p1q2²q3, p1q2q3², p1q3³, p2q1³, p2q1²q2, p2q1²q3, p2q1q2²,
 p2q1q2q3, p2q1q3², p2q2³, p2q2²q3, p2q2q3², p2q3³, p1q2²,
 p1q2q3, p1q3², p2q1², p2q1q2, p2q1q3, p2q2², p2q2q3,
 p2q3², p1q2, p1q3, p2q1, p2q2, p2q3, p1, p2]        (7)

[q1³, q1²q2, q1²q3, q1q2², q1q2q3, q1q3², q2³, q2²q3, q2q3², q3³,
 q1², q1q2, q1q3, q2², q2q3, q3², q1, q2, q3, 1]        (8)
Note that in the above monomials, we have normalized p, q to p0 = 1, q0 = 1. (Equivalently, the above monomials could each be homogenized separately in p0 for p and q0 for q). The component p3 does not appear above, because it has been treated as a constant and “hidden” in the polynomial coefficients. This means that the entries of the 60×60 multiresultant matrix are linear polynomials in p3 (because (3) and (5) are linear in p3), with coefficients given by the above B(y_i, x_i) matrices (4) for (3), and constant coefficients B = diag(−1, −1, −1, 1) for (5).
The ordering of the above column monomials was chosen so that: (i) the first 10 monomials give a nonsingular leading 10×10 submatrix with constant coefficients on the 10 rows from (5); (ii) only the last 20 columns contain non-constant nonzero linear terms in p3. These properties are used in three steps as follows.
First, in the implementation, we have already eliminated the first 10 columns using the constant 10×10 submatrix from the (5) rows. This reduces the problem to a 50×50 one involving only coplanarity equations (3), which decreases the matrix decomposition work required for this stage by about 40% without any significant increase in complexity.
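In block form (a sketch of the algebra only, not necessarily the exact code path), this first step is a Schur complement: writing the 60×60 matrix with the 10 constant rows from (5) on top and the 50 rows from (3) below,

( C  D )
( E  F )   →   M = F − E C⁻¹ D   (50×50),

where C is the nonsingular leading 10×10 block of (i). Since p3 enters only through the last 20 columns of F, the reduced matrix M keeps the form used in the next step.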
Second, we build the reduced 50×50 multiresultant matrix M from the B matrices, as a 50×50 constant matrix M0 and a 50×20 one M1 with

M = M0 + p3 (0_{50×30} | M1)

This is already in the form of a generalized eigensystem in p3:

(M0 + p3 (0_{50×30} | M1)) x = 0
so it could be solved directly using LAPACK's dgegv() or dggev(). However, many of the columns do not involve p3, so there would be many unwanted roots at infinity. Instead, we extract the 20×20 submatrix A containing the last 20 rows of M0⁻¹ M1, and solve the standard nonsymmetric eigensystem (A + λI) x = 0, where λ = 1/p3.
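A minimal sketch of this last step in C with LAPACK (our own illustration; the function, array names and p3-recovery bookkeeping are ours, not the library's code; M0 and M1 are assumed column-major as LAPACK requires):

/* Form X = M0^{-1} M1 with dgesv (overwrites M0 with its LU factors and
   M1 with X), take the last 20 rows of X as the 20x20 matrix A, then get
   the eigenvalues/eigenvectors of A with dgeev.  Per the text above,
   (A + lambda I) x = 0 with lambda = 1/p3, so an eigenvalue mu of A gives
   p3 = -1/mu, and the eigenvector is the root's monomial vector. */
extern void dgesv_(int *n, int *nrhs, double *a, int *lda, int *ipiv,
                   double *b, int *ldb, int *info);
extern void dgeev_(char *jobvl, char *jobvr, int *n, double *a, int *lda,
                   double *wr, double *wi, double *vl, int *ldvl,
                   double *vr, int *ldvr, double *work, int *lwork, int *info);

static int solve_hidden_variable(double M0[50*50], double M1[50*20],
                                 double p3_re[20], double p3_im[20],
                                 double evec[20*20])
{
    double A[20*20], wr[20], wi[20], vl_dummy[1], work[20*64];
    int n = 50, nrhs = 20, n20 = 20, ldvl = 1, lwork = 20*64;
    int ipiv[50], info, i, j;
    char no_vl = 'N', want_vr = 'V';

    dgesv_(&n, &nrhs, M0, &n, ipiv, M1, &n, &info);   /* X = M0^{-1} M1 */
    if (info) return info;

    for (j = 0; j < 20; j++)                 /* A = last 20 rows of X   */
        for (i = 0; i < 20; i++)             /* (column-major indexing) */
            A[j*20 + i] = M1[j*50 + 30 + i];

    dgeev_(&no_vl, &want_vr, &n20, A, &n20, wr, wi,
           vl_dummy, &ldvl, evec, &n20, work, &lwork, &info);
    if (info) return info;

    for (i = 0; i < 20; i++) {               /* p3 = -1/mu (complex-safe) */
        double m2 = wr[i]*wr[i] + wi[i]*wi[i];
        p3_re[i] = m2 > 0 ? -wr[i]/m2 : 0.0;
        p3_im[i] = m2 > 0 ?  wi[i]/m2 : 0.0;
    }
    return 0;
}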