Project 1: Image Processing
Jackie Dai, CS180 Fall 2024
Background
Sergei Mikhailovich Prokudin-Gorskii traveled the Russian empire taking color photographs. In a world without color printing, he produced his color photos by taking three exposures of the same scene through three different color filters: a red filter, a green filter, and a blue filter. The three exposures were recorded in rapid succession onto a single glass plate; projecting them back through matching filters reproduced the scene in full color. In this project, we use modern image processing to imitate this process, aligning and stacking the three color channels of an image to produce a colorized output.
Approach
We are given scanned plates with the three color channels stacked in a single image. First, I sliced each scan into three separate channels (R, G, B). Next, I aligned and stacked the three channels to produce a fully colorized image. The trick is the alignment, and there are many approaches to it.
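As a concrete illustration, here is a minimal sketch of the splitting and naive stacking steps. It assumes the scan stacks the three exposures vertically in blue, green, red order (as in the Prokudin-Gorskii collection); the file name is a placeholder.

```python
import numpy as np
import skimage as sk
import skimage.io as skio

# Read a scanned plate and convert to floats for later scoring.
# "cathedral.jpg" is a placeholder file name.
im = sk.img_as_float(skio.imread("cathedral.jpg"))

# Assume the plate stacks blue, green, red from top to bottom.
height = im.shape[0] // 3
b = im[:height]
g = im[height:2 * height]
r = im[2 * height:3 * height]

# Naive stack with no alignment, in RGB order for display.
color_naive = np.dstack([r, g, b])
```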
Here is an example of stacking the three color channels WITHOUT any alignment.
The result appears fuzzy due to the misalignment of the color channels.
Single-scale Alignment
Alignment starts with searching through a window of possible shifts in the x and y directions and scoring the match between the displaced channel and the anchor channel (G). This method brute-forces the search by looping over a [-15, 15] pixel window in each direction, scoring each displacement with a similarity metric, and keeping the displacement with the best score. This works fine for lower-resolution images; however, it becomes an exhaustively slow process for higher-resolution images that require a larger search window.
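Below is a minimal sketch of this single-scale search, assuming the channels are same-shaped float arrays and that the scoring function returns a value where lower is better; the function names are illustrative rather than the project's exact code.

```python
import numpy as np

def align_single_scale(channel, anchor, score_fn, window=15):
    """Brute-force search over a +/- `window` pixel displacement range."""
    best_score = np.inf
    best_shift = (0, 0)
    for dy in range(-window, window + 1):
        for dx in range(-window, window + 1):
            # Circularly shift the channel and score it against the anchor.
            shifted = np.roll(channel, (dy, dx), axis=(0, 1))
            score = score_fn(shifted, anchor)
            if score < best_score:
                best_score, best_shift = score, (dy, dx)
    return best_shift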
I tried three different scoring metrics to find the one that produced the most satisfying image:
- Sum of squared differences (SSD)
- Normalized cross-correlation (NCC)
- Structural similarity index (SSIM)
Ultimately, I found SSD to produce the most satisfying images, so that is what I used for the rest of the experiments; sketches of SSD and NCC are shown below.
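For reference, here are sketches of the first two metrics, written so that lower is better in both cases (NCC is negated for that reason); these are the standard formulations, not necessarily the exact code used in the project.

```python
import numpy as np

def ssd(a, b):
    # Sum of squared differences: lower means a closer match.
    return np.sum((a - b) ** 2)

def ncc(a, b):
    # Normalized cross-correlation on zero-mean, unit-norm images,
    # negated so that lower is better like SSD.
    az = (a - a.mean()) / np.linalg.norm(a - a.mean())
    bz = (b - b.mean()) / np.linalg.norm(b - b.mean())
    return -np.sum(az * bz)
```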
Next, I implemented a crop function and trimmed each channel's borders by a factor of 0.1 before performing the alignments. This keeps the visual artifacts along the plate borders from skewing the scoring and from showing up in the displaced output.
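A minimal sketch of the crop, assuming a factor of 0.1 means trimming 10% from every side; the function name is illustrative.

```python
def crop_borders(channel, factor=0.1):
    # Remove `factor` of the height and width from each border.
    h, w = channel.shape
    dh, dw = int(h * factor), int(w * factor)
    return channel[dh:h - dh, dw:w - dw]
```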
The Problem
When I moved on to the larger images in the dataset, the single-scale implementation was not going to cut it. There were far too many pixels to check, and the images required too large a search window. church.tif took 3 minutes to run and produced unsatisfying results.
Pyramid Search
One solution to the long processing times is an image pyramid. This method repeatedly downscales the image by a factor of 2 and performs the naive alignment algorithm at the lowest resolution to find the best displacement vector. That displacement is then used to align the image at the next higher-resolution level, and the process repeats recursively until we reach the original image size. The search window shrinks at each recursive level as the resolution increases, which saves computation. The alignment vectors found at each resolution are passed up the recursive stack and accumulated into the final alignment. Additionally, each vector passed up the stack is scaled by a factor of 2 to account for the downscaling of the level it came from.
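Here is a sketch of the recursive pyramid search, building on the single-scale search and SSD metric sketched above; the base-case size and the refinement window are illustrative choices, not necessarily the exact values used in the project.

```python
import numpy as np
import skimage.transform as sktr

def align_pyramid(channel, anchor, window=15, min_size=400):
    # Base case: the image is small enough for the brute-force search.
    if max(channel.shape) <= min_size:
        return align_single_scale(channel, anchor, ssd, window)

    # Recurse on half-resolution copies to get a coarse estimate.
    small_channel = sktr.rescale(channel, 0.5, anti_aliasing=True)
    small_anchor = sktr.rescale(anchor, 0.5, anti_aliasing=True)
    coarse_dy, coarse_dx = align_pyramid(small_channel, small_anchor,
                                         window, min_size)

    # Scale the coarse estimate back up by 2, apply it, then refine
    # with a small search window at this resolution.
    dy, dx = 2 * coarse_dy, 2 * coarse_dx
    shifted = np.roll(channel, (dy, dx), axis=(0, 1))
    fine_dy, fine_dx = align_single_scale(shifted, anchor, ssd, window=2)
    return dy + fine_dy, dx + fine_dx
```

With the channels cropped as above, a call like `align_pyramid(r, g)` returns the (dy, dx) shift to apply to the red channel before stacking.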
Here are the results: