Creates a photo with an ultra wide-angle view.
Two images are taken from the same position, before and after a camera rotation. How do we stitch these two images into one? The goal of this project is to create a photo with a panoramic view.
The corresponding features are regarded as landmarks, and we could use these features to estimate camera poses. Here, we adopt SIFT (Scale-Invariant Feature Transform) to compare the corresponding features between the two photos. The SIFT algorithm finds distinctive key-points in the images. In the image below, two key-points connected by a line are corresponding features. These connections show the overlap between the two images. Most of the corresponding features lie between the small tree and the right-hand side of the building.
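As a minimal sketch of this matching step, the snippet below detects SIFT key-points and draws the matched pairs with OpenCV. It assumes the OpenCV 4.x API (cv::SIFT::create(); older versions keep SIFT in xfeatures2d), and the file names left.jpg / right.jpg and the 0.75 ratio-test threshold are illustrative choices, not values from the project.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    // Hypothetical input file names for the two photos taken before/after the rotation.
    cv::Mat img1 = cv::imread("left.jpg",  cv::IMREAD_GRAYSCALE);
    cv::Mat img2 = cv::imread("right.jpg", cv::IMREAD_GRAYSCALE);

    // Detect key-points and compute SIFT descriptors in both images.
    cv::Ptr<cv::SIFT> sift = cv::SIFT::create();
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat desc1, desc2;
    sift->detectAndCompute(img1, cv::noArray(), kp1, desc1);
    sift->detectAndCompute(img2, cv::noArray(), kp2, desc2);

    // Match descriptors and keep candidates that pass Lowe's ratio test.
    cv::BFMatcher matcher(cv::NORM_L2);
    std::vector<std::vector<cv::DMatch>> knn;
    matcher.knnMatch(desc1, desc2, knn, 2);
    std::vector<cv::DMatch> good;
    for (const auto& m : knn)
        if (m.size() == 2 && m[0].distance < 0.75f * m[1].distance)
            good.push_back(m[0]);

    // Draw the corresponding features as lines between the two images.
    cv::Mat vis;
    cv::drawMatches(img1, kp1, img2, kp2, good, vis);
    cv::imwrite("matches.jpg", vis);
    return 0;
}
```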
When the camera rotates, the relationship between the first image and the second image is a projective transformation. The homography matrix (a 3x3 matrix) is a linear mapping from points in the first image to points in the second image. We use the coordinates of the corresponding features to compute the homography matrix; this method is called the Direct Linear Transformation (DLT) algorithm. I implemented the DLT algorithm in C++, and you can refer to my source code for more details. In OpenCV, you can simply call the function findHomography() to compute the homography matrix.
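The following sketch shows how the matched coordinates can be fed to findHomography(). The helper function name estimateHomography and the RANSAC reprojection threshold of 3.0 pixels are assumptions for illustration; the key-point and match containers are the ones produced in the matching sketch above.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Estimate the 3x3 homography from matched key-points. RANSAC rejects
// mismatched pairs so a few bad correspondences do not ruin the estimate.
cv::Mat estimateHomography(const std::vector<cv::KeyPoint>& kp1,
                           const std::vector<cv::KeyPoint>& kp2,
                           const std::vector<cv::DMatch>& matches) {
    std::vector<cv::Point2f> pts1, pts2;
    for (const auto& m : matches) {
        pts1.push_back(kp1[m.queryIdx].pt);   // point in the first image
        pts2.push_back(kp2[m.trainIdx].pt);   // corresponding point in the second image
    }
    // H maps points of the second image into the first image's coordinate frame.
    return cv::findHomography(pts2, pts1, cv::RANSAC, 3.0);
}
```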
After finding the homography matrix, we can map all the points in the second image into the first image. We use the OpenCV function warpPerspective() to perform this mapping. Finally, we create a panoramic photo as shown in the image below.
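A minimal sketch of this warping step is below. The canvas size (sum of the two image widths) and the simple overlay of the first image are assumptions to keep the example short; a production stitcher would compute the exact output bounds and blend the seam.

```cpp
#include <opencv2/opencv.hpp>

// Warp the second image into the first image's frame with the homography H,
// then paste the first image on top to produce the panorama.
cv::Mat stitch(const cv::Mat& img1, const cv::Mat& img2, const cv::Mat& H) {
    cv::Size canvas(img1.cols + img2.cols, img1.rows);          // rough output size
    cv::Mat panorama;
    cv::warpPerspective(img2, panorama, H, canvas);             // map img2 via H
    img1.copyTo(panorama(cv::Rect(0, 0, img1.cols, img1.rows))); // overlay img1 unchanged
    return panorama;
}
```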