This module gives the highlights of the projective flow algorithm we used for image registration.

Overview

Superresolution is based on the idea that images with slight shifts can be aligned and combined into a single, higher-resolution image. While aligning images with respect to one another may not seem too complex at first, a number of technical details muddy the waters. In addition, superresolution requires registration results that are accurate at the subpixel level; an error of even one or two pixels may sound acceptable, but it would lead to a poor-quality image after combination.

Motivation

Let's first look at the impact of registration before delving into the details of the algorithm we chose. Consider the following two test images:

Registration test images

Two slightly shifted images (Source: http://lcavwww.epfl.ch/software/superresolution/superresolution_dataset1.tar.gz)

While these two images may appear to be the same, they are actually shifted slightly with respect to each other. We could try to combine the images as they are, but we need to register them first to achieve a better result. The two difference graphs below illustrate the impact of registration: the first shows the difference before registration, and the second the difference after.

Difference graphs

Before registration
After registration

Even though it is hard to see a visible difference when viewing the two images separately, the difference graphs above show that registration can still detect differences between the images and account for them.
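
For illustration only, a difference graph like the ones above can be produced by subtracting the two frames pixel by pixel. The short Python sketch below shows the idea; the file names are placeholders and the use of NumPy and PIL is our own assumption, not part of the original project code.

    # Sketch: per-pixel difference of two frames (before registration).
    # File names are placeholders for the two test images shown above.
    import numpy as np
    from PIL import Image

    def difference_image(path_a, path_b):
        """Return the absolute per-pixel difference of two same-sized images."""
        a = np.asarray(Image.open(path_a).convert("L"), dtype=np.float64)
        b = np.asarray(Image.open(path_b).convert("L"), dtype=np.float64)
        return np.abs(a - b)

    diff_before = difference_image("frame1.png", "frame2.png")
    print("mean absolute difference:", diff_before.mean())

The "after registration" graph is produced the same way, except that one frame is first warped by the registration parameters described in the next section.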

Algorithm

While there are a number of different techniques that can be used to register images, many of them are feature-based. They attempt to track the same set of points as they move from image to image. This can work well, but only if the points are detected accurately each time. Instead, we chose to use a featureless algorithm, which avoids feature points by using the flow of all pixels in the image. It improves upon the optical flow concept discussed in a previous module by allowing for changes in translation, rotation, scale, pan, and tilt between images. The algorithm is detailed below.

  • Calculate the vertical, horizontal, and time derivatives between the two images. This is the same process described in the Optical Flow module.
  • From these spatiotemporal derivatives, estimate an approximate model (q) of the projective parameters. There are several models that can be used, such as bilinear and pseudo-projective. The system used to estimate the bilinear model is shown below.

    Bilinear approximation model

    System of equations that relates the spatiotemporal derivatives to the bilinear approximation model (Source: 1)
  • Using the four corners of the image, calculate their new coordinates from the approximate model. In the formulas for the bilinear model below, u_m + x and v_m + y denote the new x and y coordinates, respectively.

    Bilinear coordinate formulas

    Relates the old and new coordinates via the approximate (q) parameters (Source: 1)
  • These old and new coordinates now completely determine the projective parameters in the exact model (p).
  • Apply these new parameters (p) to one of the images and iterate until the difference is negligible. A rough code sketch of these steps follows this list.
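
To make the steps above more concrete, the following Python sketch shows one way the core estimation could be implemented. It is only an illustration under our own assumptions (NumPy/SciPy, grayscale floating-point images, Sobel-style derivatives, and a simple least-squares solve); it is not the code used in this project, and the variable and function names are hypothetical.

    # Rough sketch of one pass of the projective ("bilinear") flow estimate.
    # Assumptions: img1 and img2 are grayscale float arrays of the same size.
    import numpy as np
    from scipy import ndimage

    def bilinear_flow_parameters(img1, img2):
        """Estimate the eight approximate (q) parameters of the bilinear model."""
        # Step 1: spatiotemporal derivatives, as in the Optical Flow module.
        Ex = ndimage.sobel(img1, axis=1) / 8.0   # horizontal derivative
        Ey = ndimage.sobel(img1, axis=0) / 8.0   # vertical derivative
        Et = img2 - img1                         # time derivative

        h, w = img1.shape
        y, x = np.mgrid[0:h, 0:w].astype(np.float64)

        # Step 2: the bilinear model says
        #   new_x = u_m + x = q1*x*y + q2*x + q3*y + q4
        #   new_y = v_m + y = q5*x*y + q6*x + q7*y + q8
        # and brightness constancy (u_m*Ex + v_m*Ey + Et ~= 0) gives one
        # linear equation per pixel in the eight unknowns q.
        A = np.stack([Ex*x*y, Ex*x, Ex*y, Ex,
                      Ey*x*y, Ey*x, Ey*y, Ey], axis=-1).reshape(-1, 8)
        b = (x*Ex + y*Ey - Et).ravel()
        q, *_ = np.linalg.lstsq(A, b, rcond=None)
        return q

    def exact_parameters_from_corners(src, dst):
        """Steps 3-4: recover the exact projective parameters (p) from the
        four corner correspondences produced by the approximate model."""
        M, rhs = [], []
        for (xs, ys), (xd, yd) in zip(src, dst):
            M.append([xs, ys, 1, 0, 0, 0, -xd*xs, -xd*ys]); rhs.append(xd)
            M.append([0, 0, 0, xs, ys, 1, -yd*xs, -yd*ys]); rhs.append(yd)
        p = np.linalg.solve(np.asarray(M, float), np.asarray(rhs, float))
        return np.append(p, 1.0).reshape(3, 3)   # 3x3 projective matrix

    # Hypothetical usage for a w-by-h image pair:
    # q = bilinear_flow_parameters(img1, img2)
    # corners = [(0, 0), (w-1, 0), (0, h-1), (w-1, h-1)]
    # moved = [(q[0]*cx*cy + q[1]*cx + q[2]*cy + q[3],
    #           q[4]*cx*cy + q[5]*cx + q[6]*cy + q[7]) for cx, cy in corners]
    # p = exact_parameters_from_corners(corners, moved)

For step 5, one image would then be warped by the recovered 3x3 matrix (for example with an inverse-mapping warp built on scipy.ndimage.map_coordinates) and the whole procedure repeated until the parameters stop changing.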

Improving accuracy

To get better results, we can create a multi-resolution pyramid for each image first. This means that we generate several levels of increasingly blurry images. Starting at the blurriest level, we apply several iterations of the algorithm as described above. Then, we move up to a less blurry level and repeat, but we carry over the result from the previous level and use that as our starting point.
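
A minimal sketch of this coarse-to-fine idea is given below. It assumes a register(img1, img2, p_init) routine along the lines of the estimation sketched earlier; the number of levels and the blur amounts are arbitrary choices for illustration, not values from the original project.

    # Sketch of a multi-resolution (coarse-to-fine) registration scheme.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def build_pyramid(img, levels=3, sigma=2.0):
        """Return progressively blurrier copies of img, blurriest first,
        ending with the original image."""
        return [gaussian_filter(img, sigma * k) for k in range(levels, 0, -1)] + [img]

    def coarse_to_fine(img1, img2, register):
        """Run the registration at each level, carrying the parameters from
        the blurrier level down as the starting point for the next one."""
        p = np.eye(3)  # start from the identity transform
        for a, b in zip(build_pyramid(img1), build_pyramid(img2)):
            p = register(a, b, p)  # a few iterations of projective flow
        return p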

Source:  OpenStax, Elec 301 projects fall 2006. OpenStax CNX. Sep 27, 2007 Download for free at http://cnx.org/content/col10462/1.2