Structure from Multiple Image Views. Figure by Snavely et al.

Project 3: Camera Calibration and Fundamental Matrix Estimation with RANSAC
CS 6476: Computer Vision

Brief

Overview

The goal of this project is to introduce you to camera and scene geometry. Specifically, we will estimate the camera projection matrix, which maps 3D world coordinates to image coordinates, as well as the fundamental matrix, which relates points in one image to epipolar lines in the other. The camera projection matrix and the fundamental matrix can each be estimated using point correspondences. To estimate the projection matrix (camera calibration), the input is corresponding 3D and 2D points. To estimate the fundamental matrix, the input is corresponding 2D points across two images. You will start out by estimating the projection matrix and the fundamental matrix for a scene with ground truth correspondences. Then you'll move on to estimating the fundamental matrix using point correspondences from ORB, which is an alternative to SIFT.

Remember these challenging images of Gaudi's Episcopal Palace from Project 2? By using RANSAC to find the fundamental matrix with the most inliers, we can filter away spurious matches and achieve near-perfect point-to-point matching as shown below:

Episcopal Gaudi

Setup

Please reinstall your conda environment as we have made changes to the list of packages installed.

  1. Install Miniconda. It doesn't matter whether you use the 2.7 or 3.6 version because we will create our own environment anyway.
  2. Create a conda environment using the appropriate command. On Windows, open the installed "Conda prompt" to run this command. On MacOS or Linux, you can just use a terminal window to run the command. Modify the command based on your OS (linux, mac, or win).
    conda env create -f environment_<OS>.yml
  3. This should create an environment named cs6476p3. Activate it using the following Windows command:
    activate cs6476p3
    or the following MacOS/Linux command:
    source activate cs6476p3
  4. Run the notebook using:
    jupyter notebook ./code/proj3.ipynb
  5. Generate the submission once you've finished the project using:
    python zip_submission.py

Details and Starter Code

For this project, you need to implement three major parts:

Part I: Camera Projection Matrix

The goal is to compute the projection matrix that goes from world 3D coordinates to 2D image coordinates. Recall that using homogeneous coordinates, the equation for moving from 3D world to 2D camera coordinates is:

$$\begin{pmatrix} su \\ sv \\ s \end{pmatrix} = \begin{pmatrix} m_{11} & m_{12} & m_{13} & m_{14} \\ m_{21} & m_{22} & m_{23} & m_{24} \\ m_{31} & m_{32} & m_{33} & m_{34} \end{pmatrix} \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}$$

Another way of writing this equation is:

$$u = \frac{m_{11}X + m_{12}Y + m_{13}Z + m_{14}}{m_{31}X + m_{32}Y + m_{33}Z + m_{34}} \qquad v = \frac{m_{21}X + m_{22}Y + m_{23}Z + m_{24}}{m_{31}X + m_{32}Y + m_{33}Z + m_{34}}$$

Multiplying both sides of each equation by the denominator gives two equations that are linear in the entries of M for every 3D-to-2D point correspondence.

At this point, you're almost able to set up your linear regression to find the elements of the matrix M. There's only one problem: the matrix M is only defined up to a scale, so these equations have many different possible solutions. In particular, M = all zeros is a solution, which is not very helpful in our context. The way around this is to first fix a scale and then do the regression. There are two options for doing this: (1) you can fix the last element (m_{34}) to 1 and then find the remaining coefficients, or (2) you can use the singular value decomposition to directly solve the constrained optimization problem:

$$\min_{m} \|Am\| \quad \text{subject to} \quad \|m\| = 1$$

where m is the vector of the twelve entries of M and A is the matrix of the linear system built from the point correspondences.

To make sure that your code is correct, we are going to give you a set of “normalized points” in the files ./pts2d-norm-pic_a.txt and ./pts3d-norm.txt. If you solve for M using all the points you should get a matrix that is a scaled equivalent of the following (the negative of this matrix is an equally valid scaled equivalent):

$$M = \begin{pmatrix} -0.4583 & 0.2947 & 0.0139 & -0.0040 \\ 0.0509 & 0.0546 & 0.5410 & 0.0524 \\ -0.1090 & -0.1784 & 0.0443 & -0.5968 \end{pmatrix}$$

For example, this matrix will take the last normalized 3D point, which is < 1.2323, 1.4421, 0.4506, 1.0 >, and will project it to the < u, v > of < 0.1419, −0.4518 >, where we converted the homogeneous 2D point < su, sv, s > to its inhomogeneous version (the transformed pixel coordinate in the image) by dividing by s.
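
You can verify this projection in a couple of lines (a quick sketch using NumPy and the matrix above, independent of any starter-code API):

    import numpy as np

    # A scaled equivalent of the M given above.
    M = np.array([[-0.4583,  0.2947,  0.0139, -0.0040],
                  [ 0.0509,  0.0546,  0.5410,  0.0524],
                  [-0.1090, -0.1784,  0.0443, -0.5968]])

    p = M @ np.array([1.2323, 1.4421, 0.4506, 1.0])  # homogeneous < su, sv, s >
    u, v = p[:2] / p[2]                              # divide by s -> (0.1419, -0.4518)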

The first task for you is to write the least squares regression to solve for M given the corresponding normalized points. The starter code will load 20 corresponding normalized 2D and 3D points. You have to write the code to set up the linear system of equations, solve for the unknown entries of M, and reshape it into the estimated projection matrix. To validate that you've found a reasonable projection matrix, we've provided evaluation code which computes the total "residual" between the projected 2D location of each 3D point and the actual location of that point in the 2D image. The residual is just the Euclidean distance (square root of the sum of squared differences in u and v). This should be very small.
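
If you take the SVD route, the setup might look something like the following minimal sketch. It assumes points2d is an N x 2 array and points3d is an N x 3 array; the function name is a hypothetical stand-in rather than the exact signature the starter code expects:

    import numpy as np

    def calculate_projection_matrix_svd(points2d, points3d):
        """Estimate M from N corresponding 2D (N x 2) and 3D (N x 3) points."""
        n = points3d.shape[0]
        A = np.zeros((2 * n, 12))
        for i in range(n):
            X, Y, Z = points3d[i]
            u, v = points2d[i]
            # Each correspondence contributes two rows of the homogeneous system.
            A[2 * i]     = [X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u]
            A[2 * i + 1] = [0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v]
        # The minimizer of ||A m|| subject to ||m|| = 1 is the right singular
        # vector associated with the smallest singular value of A.
        _, _, Vt = np.linalg.svd(A)
        return Vt[-1].reshape(3, 4)

The residual can then be computed by projecting each 3D point with the estimated M, dividing by the third homogeneous coordinate, and summing the Euclidean distances to the given 2D locations.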

Once you have an accurate projection matrix M, it is possible to tease it apart into the more familiar and more useful matrix K of intrinsic parameters and matrix [R | T] of extrinsic parameters. For this project we will only ask you to estimate one particular extrinsic parameter: the camera center in world coordinates. Let us define M as being made up of a 3x3 matrix we'll call Q and a 4th column we'll call m_4:

$$M = \begin{pmatrix} Q & m_4 \end{pmatrix}$$

From class we said that the center of the camera C could be found by:

$$C = -Q^{-1} m_4$$

To debug your code: if you use the normalized 3D points to get the M given above, you would get a camera center of:

$$C = \langle -1.5125, -2.3515, 0.2826 \rangle$$

We've also provided a visualization which will show the estimated 3D location of the camera with respect to the normalized 3D point coordinates.
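
In code, this is a direct transcription of the formula above (a two-line sketch; np.linalg.inv is listed under Useful Functions below):

    import numpy as np

    def calculate_camera_center(M):
        """Recover the camera center C = -Q^{-1} m_4 from M = [Q | m_4]."""
        Q, m4 = M[:, :3], M[:, 3]
        return -np.linalg.inv(Q) @ m4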

Part II: Fundamental Matrix Estimation

The next part of this project is estimating the mapping of points in one image to lines in another by means of the fundamental matrix. This will require you to use similar methods to those in Part I. We will make use of the corresponding point locations listed in pts2d-pic_a.txt and pts2d-pic_b.txt. Recall that the definition of the fundamental matrix is:

$$\begin{pmatrix} u' & v' & 1 \end{pmatrix} \begin{pmatrix} f_{11} & f_{12} & f_{13} \\ f_{21} & f_{22} & f_{23} \\ f_{31} & f_{32} & f_{33} \end{pmatrix} \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = 0$$

for corresponding points (u, v) and (u', v') in the two images.

Note: the fundamental matrix is sometimes defined as the transpose of the above matrix with the left and right image points swapped. Both are valid fundamental matrices, but the visualization functions in the starter code assume you use the above form.

And another way of writing this matrix equation is:

$$\begin{pmatrix} u' & v' & 1 \end{pmatrix} \begin{pmatrix} f_{11}u + f_{12}v + f_{13} \\ f_{21}u + f_{22}v + f_{23} \\ f_{31}u + f_{32}v + f_{33} \end{pmatrix} = 0$$

Which is the same as:

$$f_{11}uu' + f_{12}vu' + f_{13}u' + f_{21}uv' + f_{22}vv' + f_{23}v' + f_{31}u + f_{32}v + f_{33} = 0$$

Starting to see the regression equations? Given corresponding points you get one equation per point pair. With 8 or more points you can solve this (why 8?). As in Part I, there's an issue here: the matrix is only defined up to scale, and the degenerate all-zero solution satisfies these equations. So you need to solve using the same method you used in Part I of first fixing the scale and then solving the regression.

The least squares estimate of F is full rank; however, the fundamental matrix is a rank 2 matrix. As such we must reduce its rank. In order to do this we can decompose F using the singular value decomposition into $F = U \Sigma V^T$. We can then estimate a rank 2 matrix by setting the smallest singular value in $\Sigma$ to zero, generating $\Sigma_2$. The rank 2 fundamental matrix is then easily calculated as $F = U \Sigma_2 V^T$. You can check your fundamental matrix estimation by plotting the epipolar lines using the plotting function provided in the starter code.
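
Putting this together, a minimal sketch of the eight-point estimation might look like the following. It assumes points_a and points_b are N x 2 arrays of corresponding locations, with the image b point playing the role of (u', v') in the definition above; treat it as a starting point, not the definitive implementation:

    import numpy as np

    def estimate_fundamental_matrix(points_a, points_b):
        """Estimate a rank 2 fundamental matrix from N >= 8 correspondences."""
        n = points_a.shape[0]
        A = np.zeros((n, 9))
        for i in range(n):
            u, v = points_a[i]      # (u, v) in image a
            up, vp = points_b[i]    # (u', v') in image b
            # One row of the linear system per point pair, matching the
            # expanded equation above with f ordered row by row.
            A[i] = [up * u, up * v, up, vp * u, vp * v, vp, u, v, 1]
        # Fix the scale with ||f|| = 1 and take the smallest right singular vector.
        _, _, Vt = np.linalg.svd(A)
        F = Vt[-1].reshape(3, 3)
        # Reduce to rank 2 by zeroing the smallest singular value.
        U, S, Vt = np.linalg.svd(F)
        S[2] = 0
        return U @ np.diag(S) @ Vt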

Coordinate normalization

As discussed in lecture, your estimate of the fundamental matrix can be improved by normalizing the coordinates before computing the fundamental matrix. It is suggested you perform the normalization through linear transformations, as described below, to make the mean of the points zero and the average magnitude about 1.0 or some other small number (the square root of 2 is also recommended).

$$T = \begin{pmatrix} s & 0 & 0 \\ 0 & s & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & -c_u \\ 0 & 1 & -c_v \\ 0 & 0 & 1 \end{pmatrix}$$

The transform matrix T is the product of the scale and offset matrices, where c_u and c_v are the mean coordinates. To compute a scale s you could estimate the standard deviation after subtracting the means, and then take s to be the reciprocal of whatever estimate of the scale you are using. Or you could find the maximum absolute value. Or you could scale the coordinates such that the average squared distance from the origin (after centering) is 2. You could use one scale matrix based on the statistics of the coordinates from both images, or you could do it per image.

However you scale your coordinates, you will need to use the scaling matrices to adjust your fundamental matrix so that it can operate on the original pixel coordinates, as follows:

$$F_{orig} = T_b^T \, F_{norm} \, T_a$$

where T_a and T_b are the transforms for the points from images a and b, and F_norm is the matrix estimated from the normalized points.

You won't actually notice much difference in Part II because the input correspondences are nearly perfect. In Part III (which calls estimate_fundamental_matrix()) you are more likely to notice an improvement. Alternatively, you could implement the scaling based on the distribution of all feature coordinates and not just the handful passed into estimate_fundamental_matrix(). In your writeup you should highlight one before-and-after case where normalization improved your results.
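
One possible normalization, following the square-root-of-2 suggestion above (the helper name is hypothetical, and the other scale estimates described above are equally valid):

    import numpy as np

    def normalize_points(points):
        """Zero-mean the N x 2 points, then scale so the mean distance from
        the origin is sqrt(2). Returns the normalized points and T."""
        c_u, c_v = points.mean(axis=0)
        centered = points - [c_u, c_v]
        s = np.sqrt(2) / np.mean(np.linalg.norm(centered, axis=1))
        # T is the product of the scale and offset matrices described above.
        T = np.array([[s, 0, 0], [0, s, 0], [0, 0, 1]]) @ \
            np.array([[1, 0, -c_u], [0, 1, -c_v], [0, 0, 1]])
        return centered * s, T

After estimating F_norm from the normalized points of both images, de-normalize it with F_orig = T_b.T @ F_norm @ T_a before returning.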

Part III: Fundamental Matrix with RANSAC

For two photographs of a scene it's unlikely that you'd have perfect point correspondences with which to do the regression for the fundamental matrix. So, next you are going to compute the fundamental matrix from unreliable point correspondences computed with ORB. As discussed in class, least squares regression alone is not appropriate in this scenario due to the presence of multiple outliers. In order to estimate the fundamental matrix from this noisy data you'll need to use RANSAC in conjunction with your fundamental matrix estimation.

You'll use these putative point correspondences and RANSAC to find the "best" fundamental matrix. You will iteratively choose some number of point correspondences (8, 9, or some other small number), solve for the fundamental matrix using the function you wrote for Part II, and then count the number of inliers. Inliers in this context are point correspondences that "agree" with the estimated fundamental matrix. In order to count how many inliers a fundamental matrix has, you'll need a distance metric based on the fundamental matrix. (Hint: for a point correspondence (x', x), what properties does the fundamental matrix have?) You'll need to pick a threshold between inliers and outliers, and your results are very sensitive to this threshold, so explore a range of values: you don't want to be too permissive about what you consider an inlier, nor do you want to be too conservative. Return the fundamental matrix with the most inliers.

Recall from lecture the expected number of iterations of RANSAC needed to find the "right" solution in the presence of outliers. For example, if half of your input correspondences are wrong, then you have a 0.5^8 ≈ 0.39% chance of randomly picking 8 correct correspondences when estimating the fundamental matrix. Hopefully the correct fundamental matrix will have more inliers than one created from spurious matches, but to even find it you should probably run thousands of iterations of RANSAC.
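
Under those assumptions, a minimal RANSAC loop might look like the sketch below. The inlier metric shown is the simple algebraic error |x_b^T F x_a|; the point-to-epipolar-line distance hinted at above is a reasonable alternative, and the threshold and iteration count are placeholders to tune, not recommendations:

    import numpy as np

    def ransac_fundamental_matrix(matches_a, matches_b,
                                  num_iters=10000, threshold=0.005):
        """matches_a, matches_b: N x 2 arrays of putative ORB correspondences."""
        n = len(matches_a)
        xa = np.hstack([matches_a, np.ones((n, 1))])  # homogeneous, image a
        xb = np.hstack([matches_b, np.ones((n, 1))])  # homogeneous, image b
        best_F, best_inliers = None, np.zeros(n, dtype=bool)
        for _ in range(num_iters):
            # Sample a minimal set (choice with replace=False avoids duplicates).
            sample = np.random.choice(n, size=8, replace=False)
            F = estimate_fundamental_matrix(matches_a[sample], matches_b[sample])
            # Algebraic error |x_b^T F x_a| for every putative correspondence.
            error = np.abs(np.sum(xb * (F @ xa.T).T, axis=1))
            inliers = error < threshold
            if inliers.sum() > best_inliers.sum():
                best_F, best_inliers = F, inliers
        return best_F, best_inliers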

For many real images, the fundamental matrix that is estimated will be "wrong" (as in, it implies a relationship between the cameras that must be false, e.g. an epipole in the center of one image when the cameras actually have a large translation parallel to the image plane). The estimated fundamental matrix can be wrong because the points are coplanar or because the cameras are not actually pinhole cameras and have lens distortions. Still, these "incorrect" fundamental matrices tend to do a good job of removing incorrect ORB matches (and, unfortunately, many correct ones).

Evaluation and Visualization

For part I we've told you the expected output (matrix M and camera center C). These are numerical estimates so we won't be checking for exact numbers, just approximately correct locations.

For part II you'll be evaluated based on your estimate of the fundamental matrix. You can test how good your estimate of the fundamental matrix is by drawing the epipolar lines on one image which correspond to a point in the other image. You should see all of the epipolar lines crossing through the corresponding point in the other image, like this:

Epipolar Lines

We provide you with a function in the starter code that draws an epipolar line in an image given the fundamental matrix, and a point from the other image.
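
For reference, the underlying computation is short: with the convention x_b^T F x_a = 0 used above, the epipolar line in image b for a point x_a in image a has coefficients l = F x_a, i.e. a line a*u + b*v + c = 0. A sketch with matplotlib (assuming the line is not vertical):

    import numpy as np
    import matplotlib.pyplot as plt

    def draw_epipolar_line(F, point_a, image_b_width):
        """Plot, over image b, the epipolar line of a point from image a."""
        a, b, c = F @ np.append(point_a, 1.0)  # line coefficients a*u + b*v + c = 0
        u = np.array([0.0, image_b_width])     # endpoints at the image borders
        v = -(a * u + c) / b                   # assumes b != 0 (non-vertical line)
        plt.plot(u, v)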

Part III is the most open-ended part of this assignment. Your goal should be to demonstrate that you can estimate a fundamental matrix for a real image pair and use it to reject spurious keypoint matches. We may check the fundamental matrix you estimate, and you should have a visualization of the epipolar lines in your writeup. The Gaudi image pair shown above is fairly difficult, and you might not be able to find a reasonable fundamental matrix without doing the normalization in the fundamental matrix estimation. Notre Dame is also difficult because the keypoint matches are mostly planar and thus the fundamental matrix is not well constrained.

We do not include the keypoint accuracy evaluation used in Project 2. You should be able to get nearly perfect accuracy (very few outliers) among the keypoints you designate as inliers.

Data

You'll be provided with 2D and 3D ground truth point correspondences for the base image pair (pic_a.jpg and pic_b.jpg), as well as other images which will not have any ground truth correspondences.

Useful Functions

np.linalg.svd(). This function returns the singular value decomposition of a matrix. Useful for solving the linear systems of equations you build and reducing the rank of the fundamental matrix.

np.linalg.inv(). This function returns the inverse of a matrix.

np.random.randint(). Lets you pick integers from a range. Useful for RANSAC.

Banned Functions

You may not use the SciPy constrained least squares function scipy.optimize.lsq_linear(), or any similar function. You may also not use anyone else's code that estimates the fundamental matrix or performs RANSAC for you.

Write up

For this project, and all other projects, you must do a project report in HTML. In the report you will describe your algorithm and any decisions you made to write your algorithm a particular way. Discuss any extra credit you did and show what effect it had on the results (e.g. performance with and without each extra credit component).

Make sure to include the following in your write-up:

Extra credit

We don't have good suggestions for additional extra credit yet. If you have ideas come talk to us!

Web-Publishing Results

All the results for each project will be put on the course website so that the students can see each other's results. In class we will highlight the best projects as determined by the professor and TAs. If you do not want your results published to the web, you can choose to opt out.

Handing in

This is very important, as you will lose points if you do not follow instructions: every time you do not follow instructions, you will lose 5 points. The folder you hand in must contain the following:

Hand in your project as a zip file through Canvas.

Rubric

Credits

This project is based on a project from Aaron Bobick's offering of 4495 and has been expanded and edited by Cusuh Ham, Samarth Brahmbhatt, Henry Hu, Grady Williams, and James Hays.