First let me open by saying projector-camera calibration is NOT EASY. But it's technically not complicated either.
It is, however, an amalgamation of optimizations that accumulate error with each step, so that the end product can be not far from a random guess.
So the 3D reconstructions I was able to get from my calibrated pro-cam pair were just a distorted mess of points.
Nevertheless, here come the deets.
I based my method on something I saw randomly on YouTube: https://www.youtube.com/watch?v=pCq7u2TvlxU&t=1s
And the code they provide actually checks out: https://github.com/kikko/ofxCvCameraProjectorCalibration/blob/master/src/ofxCvCameraProjectorCalibration.cpp and is not all that different from mine, although I only saw their code after I wrote mine.
There’s also the 2009 SIGGRAPH course by Doug Lanman: http://mesh.brown.edu/byo3d/index.html
First off, we need a calibration board that has a ChArUco board pasted on it.
One half of it will be used for the camera and the other half for the projector to project on.
As you can see, we're being scrappy and simply used a cut-up corrugated-cardboard box. It's just something to reflect the projector's pattern.
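If you need to generate the printable ChArUco target yourself, OpenCV's aruco module will render one. Here's a minimal sketch; the dictionary, board dimensions, and square/marker sizes are assumptions, so pick whatever matches your print:

import cv2

# a minimal sketch -- dictionary and board/square/marker sizes are assumptions
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
cb = cv2.aruco.CharucoBoard_create(7, 5, 0.04, 0.03, dictionary)  # 7x5 squares, sizes in meters
board_img = cb.draw((1400, 1000))  # render at print resolution
cv2.imwrite("charuco_board.png", board_img)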
Then we display a circles-grid pattern image with the projector, and record a video with the camera while moving the board around.
Save the video to a file.
The worst thing about computer vision work with a camera is working with a live feed. Always save your data to a file and process it offline. That way you can do it at 4am, too. Works out when you have kids.
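For reference, the projected pattern is just a symmetric grid of circles. A minimal sketch to generate one; the grid size, spacing, and projector resolution here are assumptions (dark circles on white, since the default blob detector behind findCirclesGrid looks for dark blobs):

import cv2
import numpy as np

# a minimal sketch -- grid size, spacing, and projector resolution are assumptions
circles_grid_size = (4, 5)        # columns x rows, must match findCirclesGrid later
w_proj, h_proj = 1280, 800        # projector resolution
spacing, offset = 80, 400         # pixels between circle centers, grid origin
pattern = np.full((h_proj, w_proj), 255, np.uint8)   # white background
for i in range(circles_grid_size[1]):
    for j in range(circles_grid_size[0]):
        center = (offset + j * spacing, offset + i * spacing)
        cv2.circle(pattern, center, 20, 0, cv2.FILLED)  # dark circle
cv2.imwrite("circles_pattern.png", pattern)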
The first step is to calibrate the camera, easily done with ArUco, which is now baked into OpenCV 3.3+ as a contrib module.
# --------- detect ChArUco board -----------
corners, ids, rejected = cv2.aruco.detectMarkers(frame, cb.dictionary)
corners, ids, rejected, recovered = cv2.aruco.refineDetectedMarkers(frame, cb, corners, ids, rejected, cameraMatrix=K, distCoeffs=dist_coef)
if corners is None or len(corners) == 0:
    continue
ret, charucoCorners, charucoIds = cv2.aruco.interpolateCornersCharuco(corners, ids, frame, cb)
charucoCornersAccum += [charucoCorners]
charucoIdsAccum += [charucoIds]

if number_charuco_views == 40:
    print("calibrate camera")
    print("camera calib mat before\n%s"%K)
    # calibrate camera
    ret, K, dist_coef, rvecs, tvecs = cv2.aruco.calibrateCameraCharuco(charucoCornersAccum, charucoIdsAccum, cb, (w, h), K, dist_coef, flags=cv2.CALIB_USE_INTRINSIC_GUESS)
    print("camera calib mat after\n%s"%K)
    print("camera dist_coef %s"%dist_coef.T)
    print("calibration reproj err %s"%ret)
And that looks like this:
Sometimes it misses the board, but it's far more robust than the regular chessboard…
After 40 frames we calibrate the camera.
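One piece the snippets don't show: the per-frame board pose (rvec, tvec) that the circle processing below depends on. Once the camera intrinsics are known, it comes straight out of the ChArUco detection, something like this sketch:

# estimate the board pose in camera coordinates for the current frame
ret, rvec, tvec = cv2.aruco.estimatePoseCharucoBoard(charucoCorners, charucoIds, cb, K, dist_coef, None, None)
if not ret:
    continue  # skip frames where the pose can't be recovered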
Moving on to the projector: we detect the circles in the camera image and ray-plane intersect them to get their 3D positions, using the ChArUco board transform.
# --------- detect circles -----------
ret, circles = cv2.findCirclesGrid(gray, circles_grid_size, flags=cv2.CALIB_CB_SYMMETRIC_GRID)
if not ret:
    continue  # grid not found in this frame
img = cv2.drawChessboardCorners(img, circles_grid_size, circles, ret)

# ray-plane intersection: circle-center to chessboard-plane
circles3D = intersectCirclesRaysToBoard(circles, rvec, tvec, K, dist_coef)

# re-project on camera for verification
circles3D_reprojected, _ = cv2.projectPoints(circles3D, (0,0,0), (0,0,0), K, dist_coef)
for c in circles3D_reprojected:
    cv2.circle(img, tuple(c.astype(np.int32)[0]), 3, (255,255,0), cv2.FILLED)
It looks like so:
Here’s the ray-plane intersect:
def intersectCirclesRaysToBoard(circles, rvec, t, K, dist_coef):
    circles_normalized = cv2.convertPointsToHomogeneous(cv2.undistortPoints(circles, K, dist_coef))
    if not rvec.size:
        return None
    R, _ = cv2.Rodrigues(rvec)

    # https://stackoverflow.com/questions/5666222/3d-line-plane-intersection
    plane_normal = R[:,2]  # board z-axis in camera coordinates (third *column* of R) is the plane normal
    plane_point = t.T      # t is a point on the plane
    epsilon = 1e-06

    circles_3d = np.zeros((0,3), dtype=np.float32)

    for p in circles_normalized:
        ray_direction = p / np.linalg.norm(p)
        ray_point = p  # any point on the ray works; the normalized point itself is one

        ndotu = plane_normal.dot(ray_direction.T)
        if abs(ndotu) < epsilon:
            print("no intersection or line is within plane")
            continue

        w = ray_point - plane_point
        si = -plane_normal.dot(w.T) / ndotu
        Psi = w + si * ray_direction + plane_point
        circles_3d = np.append(circles_3d, Psi, axis=0)

    return circles_3d
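For the record, this is textbook ray-plane intersection: a ray through the camera origin in direction d (the normalized circle center) meets the board plane, given by its normal n and a point q on it, at

X = p0 + s·d, with s = n·(q − p0) / (n·d)

where p0 is any point on the ray; the code uses the normalized point itself and folds the signs into si.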
Once we have 3D points from the ray-plane intersection, we have what we need for stereo calibration: 2D camera points, 2D projector points and 3D points.
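A note on what the calibration call below takes as given: projCirclePoints holds the circle centers at the pixel coordinates we drew them at in the projector image (the projector "sees" its own pattern perfectly), cameraCirclePoints the findCirclesGrid detections, and objectPointsAccum the intersected 3D points, one entry per view. A sketch of assembling them per view; names and spacing are assumptions, and findCirclesGrid must return the circles in the same row-major order the pattern was drawn in:

# known 2D circle centers on the projector image plane (same coordinates used to draw the pattern)
circle_centers = np.array([[offset + j * spacing, offset + i * spacing]
                           for i in range(circles_grid_size[1])
                           for j in range(circles_grid_size[0])], np.float32)

objectPointsAccum += [circles3D.astype(np.float32)]  # 3D points from the ray-plane intersection
cameraCirclePoints += [circles]                      # 2D detections in the camera image
projCirclePoints += [circle_centers]                 # 2D points in the projector image

With enough views accumulated, the projector and stereo calibrations look like this: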
# calibrate projector
print("calibrate projector")
print("proj calib mat before\n%s"%K_proj)
ret, K_proj, dist_coef_proj, rvecs, tvecs = cv2.calibrateCamera(objectPointsAccum, projCirclePoints, (w_proj, h_proj), K_proj, dist_coef_proj, flags=cv2.CALIB_USE_INTRINSIC_GUESS)
print("proj calib mat after\n%s"%K_proj)
print("proj dist_coef %s"%dist_coef_proj.T)
print("calibration reproj err %s"%ret)

print("stereo calibration")
ret, K, dist_coef, K_proj, dist_coef_proj, proj_R, proj_T, _, _ = cv2.stereoCalibrate(
        objectPointsAccum,
        cameraCirclePoints,
        projCirclePoints,
        K,
        dist_coef,
        K_proj,
        dist_coef_proj,
        (w,h),
        flags=cv2.CALIB_USE_INTRINSIC_GUESS)
proj_rvec, _ = cv2.Rodrigues(proj_R)

print("R \n%s"%proj_R)
print("T %s"%proj_T.T)
print("proj calib mat after\n%s"%K_proj)
print("proj dist_coef %s"%dist_coef_proj.T)
print("cam calib mat after\n%s"%K)
print("cam dist_coef %s"%dist_coef.T)
print("reproj err %f"%ret)
Here’s what we get:
On the right we see the reprojection of the 3D points onto the projector image plane (cyan dots). See how they sometimes misalign because the calibration isn't perfect.
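That verification view is just another projectPoints call, this time through the projector's intrinsics and the stereo extrinsics we just recovered; a minimal sketch:

# re-project the board-plane 3D points onto the projector image plane (the cyan dots)
circles3D_proj_reproj, _ = cv2.projectPoints(circles3D, proj_rvec, proj_T, K_proj, dist_coef_proj)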
Now we can do some fun stuff like 3D scanning with binary patterns:
Which results in something like this:
That’s what I got!
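For the curious, the scanning boils down to: decode the binary patterns to find, for each camera pixel, the projector pixel that lit it, then triangulate each correspondence with the calibrated pair. A minimal sketch of the triangulation step, assuming cam_pts and proj_pts are matched 2xN arrays of (undistorted) pixel coordinates; those names are mine, not from the code above:

import numpy as np
import cv2

# projection matrices: camera at the world origin, projector posed by the stereo extrinsics
P_cam  = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_proj = K_proj @ np.hstack([proj_R, proj_T.reshape(3, 1)])

points4D = cv2.triangulatePoints(P_cam, P_proj, cam_pts, proj_pts)  # homogeneous, 4xN
points3D = (points4D[:3] / points4D[3]).T                           # Nx3 Euclidean points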
Enjoy
Roy