Our approach for detecting the speed of a known object.

Simulating compressed sensing

Because compressed sensing cameras are not yet available, we used a MATLAB routine written by Ilan Goodman to simulate CS measurements from a standard pixel image file [1]. Only the compressed sensing measurements are passed into our suite of calculation programs, which run exactly as if the CS measurements came from a hardware-implemented CS camera.

To implement compressed sensing on an image (matrix) according to the definition, a random matrix the same size as the image is generated. The projection (inner product) of the image onto this random basis matrix gives a single compressed sensing measurement. This is repeated with different (fixed) random matrices until the desired compressed sensing resolution is achieved. Because this is computationally intensive, a different approach is used in practice to simulate our data: first, every pixel of the image is randomly mapped to a different location to randomize the image; then the DCT (discrete cosine transform) is taken on the randomized image. This process of randomization and projection is equivalent to projection onto a random basis [1].
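The short Python sketch below illustrates both versions of this simulation. It is not the MATLAB routine cited above; the numpy/scipy calls, the image size, and the number of measurements kept are our own illustrative choices.

    import numpy as np
    from scipy.fft import dct

    rng = np.random.default_rng(0)          # fixed seed -> a fixed "random" basis
    image = rng.random((32, 32))            # stand-in for a pixel image
    x = image.ravel()                       # treat the image as one long vector
    n = x.size
    m = 64                                  # number of CS measurements to keep

    # Direct definition: each measurement is the inner product of the image
    # with one fixed random basis matrix (here, one row of Phi).
    Phi = rng.standard_normal((m, n))
    y_direct = Phi @ x

    # Faster simulation used in practice: scramble the pixel locations with a
    # fixed random permutation, take the DCT, and keep the first m coefficients.
    perm = rng.permutation(n)
    y_fast = dct(x[perm], norm='ortho')[:m]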

Random, on average

We exploited two key facts about compressed sensing on a random basis to calculate speed:

1) The average value of the elements of the random basis used is 1.

2) On a given image, a fixed random basis yields the same projections every time it is used.

While seemingly trivial, these basic facts allow us to determine velocity quite accurately based on a few observations in the pixel domain.
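The second fact is easy to illustrate. In the Python sketch below (a toy example of our own; the basis size and distribution are arbitrary, and the first fact depends on how the actual routine normalizes its basis, so it is not reproduced here), a fixed random basis gives identical projections every time it is applied to the same image:

    import numpy as np

    rng = np.random.default_rng(42)
    Phi = rng.standard_normal((64, 1024))          # one fixed random basis, reused for every frame
    frame = np.random.default_rng(1).random(1024)  # a flattened stand-in image

    # The same fixed basis applied to the same image always yields the same
    # projections, so measurements from different frames are directly comparable.
    assert np.array_equal(Phi @ frame, Phi @ frame)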

Consider the following moving rectangles:

Rectangles moving horizontally at different speeds

Two rectangles moving to the right with constant speeds

Now consider the difference between subsequent frames showing the motion of the rectangles:

Moving rectangles: the difference between subsequent frames

In each subsequent frame, areas where the current and previous rectangles overlap remain the same, while new area is added in the direction of motion and old area is lost opposite the direction of motion.

Since the red rectangle is moving faster, there is less overlap between subsequent frames, and more area is both added to and subtracted from the image. Because the difference between subsequent images is greater for the red rectangle, we expect the difference between consecutive compressed sensing projections onto the same basis element to be greater as well. Taking a simple difference between consecutive CS measurements should therefore yield a measure of the change between frames.
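A minimal Python sketch of this idea follows; the rectangle generator, basis size, and speeds are purely illustrative assumptions, not the project's actual test data.

    import numpy as np

    rng = np.random.default_rng(0)
    Phi = rng.standard_normal((64, 64 * 64))   # fixed random measurement basis

    def rectangle_frame(left, width=8, size=64):
        """Flattened binary image of a rectangle whose left edge is at `left`."""
        frame = np.zeros((size, size))
        frame[20:40, left:left + width] = 1.0
        return frame.ravel()

    def measurement_change(speed, t):
        """Size of the difference between consecutive CS measurement vectors."""
        y_prev = Phi @ rectangle_frame(speed * t)
        y_curr = Phi @ rectangle_frame(speed * (t + 1))
        return np.linalg.norm(y_curr - y_prev)

    # The faster rectangle overlaps itself less between frames, so the change
    # in its CS measurements is larger.
    print(measurement_change(speed=1, t=3), measurement_change(speed=4, t=3))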

This basic intuition can also be supported rigorously. The difference between frames can be thought of as an image itself, with a positive region at the leading edge of motion and a negative region at the trailing edge. Since the CS measurement process is linear and time invariant for a fixed basis, the difference between the projections of subsequent frames is the same as the projection of the difference image [2]. If the background behind the moving objects has zero value, then the CS projection values of the difference image depend solely on the difference between the original frames. The larger the non-zero area of the difference image, the larger its inner products with the CS basis elements are expected to be, so a positive relationship exists between speed and the frame difference measured from the compressed sensing data.
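This linearity is easy to check numerically; a short sketch with arbitrary stand-in frames (f1, f2) and the same kind of fixed basis as above:

    import numpy as np

    rng = np.random.default_rng(0)
    Phi = rng.standard_normal((64, 1024))   # fixed measurement basis
    f1 = rng.random(1024)                   # previous frame (flattened)
    f2 = rng.random(1024)                   # current frame (flattened)

    # The projection of the difference image equals the difference of the
    # projections, up to floating-point round-off.
    assert np.allclose(Phi @ (f2 - f1), (Phi @ f2) - (Phi @ f1))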

These calculations yield a ratio of the change between subsequent frames along the direction of motion to the total intensity in the frame. The same shape moving in a different direction, or with a different orientation, will produce different results. For more complicated shapes the amount of change is not linear with speed, and we expect the measurement of change to be more complicated, but still deterministic.
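One way such a ratio could be formed purely from the CS measurements is sketched below. It assumes a basis whose elements average to 1 (fact 1 above), so that the mean measurement approximates the total frame intensity; the estimator itself is our own illustration, not the project's exact calculation.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 64 * 64, 128
    Phi = 1.0 + rng.standard_normal((m, n))   # basis elements average to 1 (fact 1)

    def change_ratio(y_prev, y_curr):
        """Change between frames relative to total frame intensity,
        both estimated from the CS measurements alone."""
        change = np.mean(np.abs(y_curr - y_prev))   # average projected frame difference
        total = np.mean(y_curr)                     # ~ sum of pixel values, since the basis mean is 1
        return change / total

For the moving rectangles above, such a ratio would grow with speed until consecutive frames no longer overlap.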

Difference images for other objects

Different objects, or objects shaped differently with respect to the direction of motion, produce differing overlap areas between subsequent frames.

Resolution limit

Calculating velocity in this way is limited to sampling rates at which consecutive frames overlap. If the object is moving so quickly that there is no overlap, it is unclear how far it has moved: a speed at which consecutive frames are just barely disjoint gives results similar to those of a much faster speed. For example, a rectangle ten pixels wide that jumps fifteen pixels between frames produces the same difference area as one that jumps fifty pixels.

[1] I. N. Goodman and D. H. Johnson. Look at This: Goal-Directed Imaging with Selective Attention. Poster, 2005 Rice Affiliates Day, Rice University, 2005.

[2] I. N. Goodman. Personal conversation, 9 December 2005.

Source: OpenStax, Intelligent motion detection using compressed sensing. OpenStax CNX. Dec 23, 2005. Download for free at http://cnx.org/content/col10311/1.3