How Many Known Points To Calculate Camera Extrinsic Matrix

Let me give some context. Consider the following picture (from https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html):

[Figure: the OpenCV pinhole camera model, showing the camera frame (Xc, Yc, Zc) and the image coordinates (u, v)]

The camera has an "attached" rigid reference frame (Xc, Yc, Zc). The intrinsic calibration that you successfully performed allows you to convert a point (Xc, Yc, Zc) into its projection on the image (u, v), and a point (u, v) in the image into a ray in (Xc, Yc, Zc) space (you can only get it up to a scaling factor).

In practice, you want to locate the camera in an external "world" reference frame, let's call it (X, Y, Z). So there is a rigid transformation, represented by a rotation matrix, R, and a translation vector, T, such that:

          |Xc|    |X|
          |Yc| = R|Y| + T
          |Zc|    |Z|

That's the extrinsic calibration (which can also be written as a 4x4 matrix; that's what you call the extrinsic matrix).
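
Put in code, the 4x4 form can be assembled like this (a minimal sketch with placeholder R and T values, just to show the layout):

import numpy as np

# Placeholder rotation and translation -- replace with your actual R and T.
R = np.eye(3)
T = np.array([[0.0], [0.0], [1.0]])

# The 4x4 extrinsic matrix [[R, T], [0 0 0 1]] maps homogeneous
# world coordinates to camera coordinates.
extrinsic = np.eye(4)
extrinsic[:3, :3] = R
extrinsic[:3, 3] = T.ravel()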

Now, the answer. To obtain R and T, you can do the following:

  1. Fix your world reference frame; for instance, the ground can be the (x, y) plane. Choose an origin for it.

  2. Set some points with known coordinates in this reference frame, for instance, points in a square grid on the floor.

  3. Take a picture and get the corresponding 2D image coordinates.

  4. Use solvePnP to obtain the rotation and translation, with the following parameters:

    • objectPoints: the 3D points in the world reference frame.
    • imagePoints: the corresponding 2D points in the image, in the same order as objectPoints.
    • cameraMatrix: the intrinsic matrix you already have.
    • distCoeffs: the distortion coefficients you already have.
    • rvec, tvec: these will be the outputs.
    • useExtrinsicGuess: false
    • flags: you can use CV_ITERATIVE
  5. Finally, get R from rvec with the Rodrigues function (see the sketch after this list).

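Put together, steps 4 and 5 look roughly like this. This is a sketch with made-up point coordinates and intrinsics; note that the modern Python bindings name the flag cv2.SOLVEPNP_ITERATIVE:

import numpy as np
import cv2

# 3D points in the world frame (e.g. grid corners on the floor) and their
# matching 2D pixel coordinates, in the same order. Placeholder values.
object_points = np.array([[0, 0, 0],
                          [1, 0, 0],
                          [1, 1, 0],
                          [0, 1, 0]], dtype=np.float64)
image_points = np.array([[322.0, 248.0],
                         [411.0, 249.0],
                         [409.0, 338.0],
                         [321.0, 337.0]], dtype=np.float64)

# Intrinsics from your intrinsic calibration (placeholder values here).
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs,
                              useExtrinsicGuess=False,
                              flags=cv2.SOLVEPNP_ITERATIVE)

R, _ = cv2.Rodrigues(rvec)  # 3x3 rotation matrix from the rotation vector
T = tvec                    # translation, so that Xc = R @ Xworld + T
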
You will need at least 3 non-collinear points with corresponding 3D-2D coordinates for solvePnP to work (link), but more is better. To get good quality points, you could print a big chessboard pattern, put it flat on the floor, and use it as a grid, as in the sketch below. What's important is that the pattern is not too small in the image (the larger it is, the more stable your calibration will be).
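
For instance, if you use a printed chessboard on the floor, getting the 2D points could look like this (a hedged sketch; the file name and the 7x6 inner-corner pattern size are assumptions):

import cv2

# Load the picture of the pattern lying flat on the floor (assumed file name).
img = cv2.imread('floor.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Detect the inner corners of a 7x6 chessboard pattern.
found, corners = cv2.findChessboardCorners(gray, (7, 6))
if found:
    # Refine to sub-pixel accuracy for a more stable pose estimate.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)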

And, very important: for the intrinsic calibration, you used a chessboard pattern with squares of a certain size, but you told the algorithm (which does a kind of solvePnP for each pattern) that the size of each square is 1. This is not explicit, but it is done in line 10 of the sample code, where the grid is built with coordinates 0, 1, 2, ...:

objp[:,:2] = np.mgrid[0:7,0:6].T.reshape(-1,2)

And the scale of the world for the extrinsic calibration must match this, so you have several possibilities:

  1. Use the same scale, for example by using the same grid or by measuring the coordinates of your "world" plane in the same scale. In this case, your world won't be at the right scale.

  2. Recommended: redo the intrinsic calibration with the right scale, something like:

    objp[:,:2] = (size_of_a_square*np.mgrid[0:7,0:6]).T.reshape(-1,2)

    Where size_of_a_square is the real size of a square.

  3. (I haven't done this, but it is theoretically possible; do it if you can't do 2.) Reuse the intrinsic calibration by scaling fx and fy. This is possible because the camera sees everything up to a scale factor, and the declared size of a square only changes fx and fy (and the T in the pose for each square, but that's another story). If the actual size of a square is L, then replace fx and fy with Lfx and Lfy before calling solvePnP.
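
A tiny sketch of option 3 (camera_matrix stands for the intrinsic matrix you already have; the square size here is a placeholder):

import numpy as np

# Intrinsics obtained with unit-sized squares (placeholder values).
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])

L = 0.025  # real side length of one square, e.g. 25 mm (placeholder)

scaled = camera_matrix.copy()
scaled[0, 0] *= L  # fx -> L*fx
scaled[1, 1] *= L  # fy -> L*fy
# Pass `scaled` instead of camera_matrix when calling solvePnP.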

Source: https://stackoverflow.com/questions/55220229/extrinsic-matrix-computation-with-opencv
