Analyzing and Enhancing Images Using MATLAB

Removing Noise from Images

Understanding Sources of Noise in Digital Images

Digital images are prone to a variety of types of noise. Noise is the result of errors in the image acquisition process that result in pixel values that do not reflect the true intensities of the real scene. There are several ways that noise can be introduced into an image, depending on how the image is created. For example:

  • If the image is scanned from a photograph made on film, the film grain is a source of noise. Noise can also be the result of damage to the film, or be introduced by the scanner itself.

  • If the image is acquired directly in a digital format, the mechanism for gathering the data (such as a CCD detector) can introduce noise.

  • Electronic transmission of image data can introduce noise.

To simulate the effects of some of the problems listed above, the toolbox provides the imnoise function, which you can use to add various types of noise to an image. The examples in this section use this function.
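For instance, assuming a grayscale image I is already in the workspace, calls along these lines add the different noise types (the parameter values shown are purely illustrative):

J1 = imnoise(I,'gaussian',0,0.01);     % zero-mean Gaussian noise, variance 0.01
J2 = imnoise(I,'salt & pepper',0.05);  % salt & pepper noise on about 5% of pixels
J3 = imnoise(I,'speckle',0.04);        % multiplicative speckle noise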

 

Removing Noise By Linear Filtering

You can use linear filtering to remove certain types of noise. Certain filters, such as averaging or Gaussian filters, are appropriate for this purpose. For example, an averaging filter is useful for removing grain noise from a photograph. Because each pixel gets set to the average of the pixels in its neighborhood, local variations caused by grain are reduced.

See Designing and Implementing Linear Filters in the Spatial Domain for more information about linear filtering using imfilter.
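As a minimal sketch, you can create an averaging filter with fspecial and apply it with imfilter (the image and filter size here are illustrative):

I = imread('eight.tif');
h = fspecial('average',[3 3]);   % 3-by-3 averaging kernel
I2 = imfilter(I,h);              % smooth the image with the kernel
figure, imshow(I2)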

Removing Noise By Median Filtering

Median filtering is similar to using an averaging filter, in that each output pixel is set to an average of the pixel values in the neighborhood of the corresponding input pixel. However, with median filtering, the value of an output pixel is determined by the median of the neighborhood pixels, rather than the mean. The median is much less sensitive than the mean to extreme values (called outliers). Median filtering is therefore better able to remove these outliers without reducing the sharpness of the image. The medfilt2 function implements median filtering.

Note   Median filtering is a specific case of order-statistic filtering, also known as rank filtering. For information about order-statistic filtering, see the reference page for the ordfilt2 function.
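For example, a 3-by-3 median filter can be written as an order-statistic filter, because the median is the 5th-smallest of the nine neighborhood values. A sketch, assuming a grayscale image A, equivalent in effect to medfilt2(A,[3 3]):

B = ordfilt2(A,5,ones(3,3));   % pick the 5th-smallest value in each 3-by-3 neighborhood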

The following example compares using an averaging filter and medfilt2 to remove salt and pepper noise. This type of noise consists of random pixels being set to black or white (the extremes of the data range). In both cases, the size of the neighborhood used for filtering is 3-by-3.

  1. Read in the image and display it.

    I = imread('eight.tif');  imshow(I)

  2. Add noise to it.

    J = imnoise(I,'salt & pepper',0.02);  figure, imshow(J)

  3. Filter the noisy image with an averaging filter and display the results.

    K = filter2(fspecial('average',3),J)/255;  figure, imshow(K)

  4. Now use a median filter to filter the noisy image and display the results. Notice that medfilt2 does a better job of removing noise, with less blurring of edges.

    L = medfilt2(J,[3 3]);  figure, imshow(L)

 

Removing Noise By Adaptive Filtering

The wiener2 function applies a Wiener filter (a type of linear filter) to an image adaptively, tailoring itself to the local image variance. Where the variance is large, wiener2 performs little smoothing. Where the variance is small, wiener2 performs more smoothing.

This approach often produces better results than linear filtering. The adaptive filter is more selective than a comparable linear filter, preserving edges and other high-frequency parts of an image. In addition, there are no design tasks; the wiener2 function handles all preliminary computations and implements the filter for an input image. wiener2, however, does require more computation time than linear filtering.

wiener2 works best when the noise is constant-power ("white") additive noise, such as Gaussian noise. The example below applies wiener2 to an image of Saturn that has had Gaussian noise added.

  1. Read in an image. Because the image is a truecolor image, the example converts it to grayscale.

    RGB = imread('saturn.png');  I = rgb2gray(RGB);
  2. The example then adds Gaussian noise to the image and displays it. Because the image is quite large, the figure shows only a portion of the image.

    J = imnoise(I,'gaussian',0,0.025);  imshow(J)

    Portion of the Image with Added Gaussian Noise

  3. Remove the noise using the wiener2 function. Again, the figure shows only a portion of the image.

    K = wiener2(J,[5 5]);  figure, imshow(K)

    Portion of the Image with Noise Removed by Wiener Filter

 

Adjusting Pixel Intensity Values

Understanding Intensity Adjustment

Image enhancement techniques are used to improve an image, where "improve" is sometimes defined objectively (e.g., increase the signal-to-noise ratio), and sometimes subjectively (e.g., make certain features easier to see by modifying the colors or intensities).

Intensity adjustment is an image enhancement technique that maps an image's intensity values to a new range. To illustrate, this figure shows a low-contrast image with its histogram. Notice in the histogram of the image how all the values gather in the center of the range.

I = imread('pout.tif');  imshow(I)
figure, imhist(I,64)

If you remap the data values to fill the entire intensity range [0, 255], you can increase the contrast of the image.

The functions described in this section apply primarily to grayscale images. However, some of these functions can be applied to color images as well. For information about how these functions work with color images, see the reference pages for the individual functions.

Adjusting Intensity Values to a Specified Range

You can adjust the intensity values in an image using the imadjust function, where you specify the range of intensity values in the output image.

For example, this code increases the contrast in a low-contrast grayscale image by remapping the data values to fill the entire intensity range [0, 255].

I = imread('pout.tif');  J = imadjust(I);  imshow(J)
figure, imhist(J,64)

This figure displays the adjusted image and its histogram. Notice the increased contrast in the image, and that the histogram now fills the entire range.

Adjusted Image and Its Histogram

Specifying the Adjustment Limits

You can optionally specify the range of the input values and the output values using imadjust. You specify these ranges in two vectors that you pass to imadjust as arguments. The first vector specifies the low- and high-intensity values that you want to map. The second vector specifies the scale over which you want to map them.

    Note   Note that you must specify the intensities as values between 0 and 1 regardless of the class of I. If I is uint8, the values you supply are multiplied by 255 to determine the actual values to use; if I is uint16, the values are multiplied by 65535. To learn about an alternative way to set these limits automatically, see Setting the Adjustment Limits Automatically.

For example, you can decrease the contrast of an image by narrowing the range of the data. In the example below, the man's coat is too dark to reveal any detail. imadjust maps the range [0,51] in the uint8 input image to [128,255] in the output image. This brightens the image considerably, and also widens the dynamic range of the dark portions of the original image, making it much easier to see the details in the coat. Note, however, that because all values above 51 in the original image are mapped to 255 (white) in the adjusted image, the adjusted image appears washed out.

I = imread('cameraman.tif');  J = imadjust(I,[0 0.2],[0.5 1]);
imshow(I)
figure, imshow(J)

Image After Remapping and Widening the Dynamic Range

Setting the Adjustment Limits Automatically

To use imadjust, you must typically perform two steps:

  1. View the histogram of the image to determine the intensity value limits.

  2. Specify these limits as fractions between 0.0 and 1.0 so that you can pass them to imadjust in the [low_in high_in] vector.

For a more convenient way to specify these limits, use the stretchlim function. (The imadjust function uses stretchlim for its simplest syntax, imadjust(I).)

This function calculates the histogram of the image and determines the adjustment limits automatically. The stretchlim function returns these values as fractions in a vector that you can pass as the [low_in high_in] argument to imadjust; for example:

I = imread('rice.png');  J = imadjust(I,stretchlim(I),[0 1]);

By default, stretchlim uses the intensity values that represent the bottom 1% (0.01) and the top 1% (0.99) of the range as the adjustment limits. By trimming the extremes at both ends of the intensity range, stretchlim makes more room in the adjusted dynamic range for the remaining intensities. But you can specify other range limits as an argument to stretchlim. See the stretchlim reference page for more information.
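For example, a sketch that saturates 2% of the data at both ends instead of the default 1% (the tolerance values are illustrative):

I = imread('rice.png');
low_high = stretchlim(I,[0.02 0.98]);   % clip the bottom and top 2%
J = imadjust(I,low_high,[]);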

Gamma Correction

imadjust maps low to bottom, and high to top. By default, the values between low and high are mapped linearly to values between bottom and top. For example, the value halfway between low and high corresponds to the value halfway between bottom and top.

imadjust can accept an additional argument that specifies the gamma correction factor. Depending on the value of gamma, the mapping between values in the input and output images might be nonlinear. For example, the value halfway between low and high might map to a value either greater than or less than the value halfway between bottom and top.

Gamma can be any value between 0 and infinity. If gamma is 1 (the default), the mapping is linear. If gamma is less than 1, the mapping is weighted toward higher (brighter) output values. If gamma is greater than 1, the mapping is weighted toward lower (darker) output values.
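As a sketch of this relationship (the formula below is the standard gamma mapping and an assumption here, not quoted from the reference page), the mapping inside the unclipped range behaves like out = ((in - low)/(high - low))^gamma, rescaled between bottom and top:

gamma = 0.5;
in = 0.5;          % normalized input halfway between low and high
out = in^gamma     % about 0.7071, brighter than the linear value of 0.5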

The figure below illustrates this relationship. The three transformation curves show how values are mapped when gamma is less than, equal to, and greater than 1. (In each graph, the x-axis represents the intensity values in the input image, and the y-axis represents the intensity values in the output image.)

Plots Showing Three Different Gamma Correction Settings

The example below illustrates gamma correction. Notice that in the call to imadjust, the data ranges of the input and output images are specified as empty matrices. When you specify an empty matrix, imadjust uses the default range of [0,1]. In the example, both ranges are left empty; this means that gamma correction is applied without any other adjustment of the data.

[X,map] = imread('forest.tif');  I = ind2gray(X,map);
J = imadjust(I,[],[],0.5);
imshow(I)
figure, imshow(J)

Image Before and After Applying Gamma Correction

Adjusting Intensity Values Using Histogram Equalization

The process of adjusting intensity values can be done automatically by the histeq function. histeq performs histogram equalization, which involves transforming the intensity values so that the histogram of the output image approximately matches a specified histogram. (By default, histeq tries to match a flat histogram with 64 bins, but you can specify a different histogram instead; see the reference page for histeq.)
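For instance, a sketch of supplying a custom target histogram (the ramp shape and bin count are illustrative):

I = imread('pout.tif');
hgram = 1:64;          % a ramp-shaped target histogram with 64 bins
J = histeq(I,hgram);   % output histogram approximately matches hgram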

This example illustrates using histeq to adjust a grayscale image. The original image has low contrast, with most values in the middle of the intensity range. histeq produces an output image having values evenly distributed throughout the range.

I = imread('pout.tif');  J = histeq(I);  imshow(J)
figure, imhist(J,64)

Image After Histogram Equalization with Its Histogram

histeq can return a 1-by-256 vector that shows, for each possible input value, the resulting output value. (The values in this vector are in the range [0,1], regardless of the class of the input image.) You can plot this data to get the transformation curve. For example:

I = imread('pout.tif'); [J,T] = histeq(I); figure,plot((0:255)/255,T);

Notice how this curve reflects the histograms in the previous figure, with the input values mostly between 0.3 and 0.6, while the output values are distributed evenly between 0 and 1.

 

Adjusting Intensity Values Using Contrast-Limited Adaptive Histogram Equalization

As an alternative to using histeq, you can perform contrast-limited adaptive histogram equalization (CLAHE) using the adapthisteq function. While histeq works on the entire image, adapthisteq operates on small regions in the image, called tiles. Each tile's contrast is enhanced, so that the histogram of the output region approximately matches a specified histogram. After performing the equalization, adapthisteq combines neighboring tiles using bilinear interpolation to eliminate artificially induced boundaries.

To avoid amplifying any noise that might be present in the image, you can use the adapthisteq function's optional parameters to limit the contrast, especially in homogeneous areas.
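For example, a sketch with illustrative parameter choices, assuming a grayscale image I (see the adapthisteq reference page for the full parameter list):

J = adapthisteq(I,'NumTiles',[8 8],'ClipLimit',0.005);   % lower clip limit, less noise amplification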

To illustrate, this example uses adapthisteq to adjust the contrast in a grayscale image. The original image has low contrast, with most values in the middle of the intensity range. adapthisteq produces an output image having values evenly distributed throughout the range.

I = imread('pout.tif');  J = adapthisteq(I);  imshow(J)
figure, imhist(J,64)

Image After CLAHE Equalization with Its Histogram

 

Enhancing Color Separation Using Decorrelation Stretching

Decorrelation stretching enhances the color separation of an image with significant band-to-band correlation. The exaggerated colors improve visual interpretation and make feature discrimination easier. You apply decorrelation stretching with the decorrstretch function. See Adding a Linear Contrast Stretch for information about adding an optional linear contrast stretch to the decorrelation stretch.

The number of color bands, NBANDS, in the image is usually three. But you can apply decorrelation stretching regardless of the number of color bands.

The original color values of the image are mapped to a new set of color values with a wider range. The color intensities of each pixel are transformed into the color eigenspace of the NBANDS-by-NBANDS covariance or correlation matrix, stretched to equalize the band variances, then transformed back to the original color bands.

To define the bandwise statistics, you can use the entire original image or, with the subset option, any selected subset of it. See the decorrstretch reference page.

Simple Decorrelation Stretching

You can apply decorrelation and stretching operations on the library of images available in the imdemos directory. The library includes a LANDSAT image of the Little Colorado River. In this example, you perform a simple decorrelation stretch on this image:

  1. The image has seven bands, but just read in the three visible colors:

    A = multibandread('littlecoriver.lan', [512, 512, 7], ...
                      'uint8=>uint8', 128, 'bil', 'ieee-le', ...
                      {'Band','Direct',[3 2 1]});
  2. Then perform the decorrelation stretch:

    B = decorrstretch(A);
  3. Now view the results:
    imshow(A)
    figure, imshow(B)

Compare the two images. The original has a strong violet (red-bluish) tint, while the transformed image has a somewhat expanded color range.

Little Colorado River Before (left) and After (right) Decorrelation Stretch

A color band scatterplot of the images shows how the bands are decorrelated and equalized:

rA = A(:,:,1);  gA = A(:,:,2);  bA = A(:,:,3);
figure, plot3(rA(:),gA(:),bA(:),'.')
grid on
xlabel('Red (Band 3)')
ylabel('Green (Band 2)')
zlabel('Blue (Band 1)')

rB = B(:,:,1);  gB = B(:,:,2);  bB = B(:,:,3);
figure, plot3(rB(:),gB(:),bB(:),'.')
grid on
xlabel('Red (Band 3)')
ylabel('Green (Band 2)')
zlabel('Blue (Band 1)')

Color Scatterplot Before (left) and After (right) Decorrelation Stretch

Adding a Linear Contrast Stretch

Now try the same transformation, but with a linear contrast stretch applied after the decorrelation stretch:

imshow(A)
C = decorrstretch(A,'Tol',0.01);
figure, imshow(C)

Compare the transformed image to the original.

Little Colorado River After Decorrelation Stretch Followed by Linear Contrast Stretch

Adding the linear contrast stretch enhances the resulting image by further expanding the color range. In this case, the transformed color range is mapped within each band to a normalized interval between 0.01 and 0.99, saturating 2%.

See the stretchlim function reference page for more about Tol. Without the Tol option, decorrstretch applies no linear contrast stretch.

    Note   You can apply a linear contrast stretch as a separate operation after performing a decorrelation stretch, using stretchlim and imadjust. This alternative, however, often gives inferior results for uint8 and uint16 images, because the pixel values must be clamped to [0 255] (or [0 65535]). The Tol option in decorrstretch circumvents this limitation.
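For comparison, a sketch of that alternative, assuming the decorrelated image B from the earlier example (the tolerance value is illustrative):

C2 = imadjust(B,stretchlim(B,0.01),[]);   % separate per-band linear stretch
figure, imshow(C2)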

 

Analyzing the Texture of an Image

Understanding Texture Analysis

The toolbox supports a set of functions that you can use for texture analysis. Texture analysis refers to the characterization of regions in an image by their texture content. Texture analysis attempts to quantify intuitive qualities described by terms such as rough, smooth, silky, or bumpy as a function of the spatial variation in pixel intensities. In this sense, the roughness or bumpiness refers to variations in the intensity values, or gray levels.

Texture analysis is used in a variety of applications, including remote sensing, automated inspection, and medical image processing. Texture analysis can be used to find the texture boundaries, called texture segmentation. Texture analysis can be helpful when objects in an image are more characterized by their texture than by intensity, and traditional thresholding techniques cannot be used effectively.

Using Texture Filter Functions

The toolbox includes several texture analysis functions that filter an image using standard statistical measures, listed in the following table.

Function Description
rangefilt Calculates the local range of an image.
stdfilt Calculates the local standard deviation of an image.
entropyfilt Calculates the local entropy of a grayscale image. Entropy is a statistical measure of randomness.

These statistics can characterize the texture of an image because they provide information about the local variability of the intensity values of pixels in an image. For example, in areas with smooth texture, the range of values in the neighborhood around a pixel will be a small value; in areas of rough texture, the range will be larger. Similarly, calculating the standard deviation of pixels in a neighborhood can indicate the degree of variability of pixel values in that region.

The following sections provide additional information about the texture functions.

Understanding the Texture Filter Functions

The functions all operate in a similar way: they define a neighborhood around the pixel of interest, calculate the statistic for that neighborhood, and use that value as the value of the pixel of interest in the output image.

This example shows how the rangefilt function operates on a simple array.

A = [1 2 3 4 5; 6 7 8 9 10; 11 12 13 14 15; 16 17 18 19 20]

A =
     1     2     3     4     5
     6     7     8     9    10
    11    12    13    14    15
    16    17    18    19    20

B = rangefilt(A)

B =
     6     7     7     7     6
    11    12    12    12    11
    11    12    12    12    11
     6     7     7     7     6

The following figure shows how the value of element B(2,4) was calculated from A(2,4). By default, the rangefilt function uses a 3-by-3 neighborhood, but you can specify neighborhoods of different shapes and sizes.

Determining Pixel Values in Range Filtered Output Image

 

The stdfilt and entropyfilt functions operate similarly, defining a neighborhood around the pixel of interest and calculating the statistic for the neighborhood to determine the pixel value in the output image. The stdfilt function calculates the standard deviation of all the values in the neighborhood.

The entropyfilt function calculates the entropy of the neighborhood and assigns that value to the output pixel. Note that, by default, the entropyfilt function defines a 9-by-9 neighborhood around the pixel of interest. To calculate the entropy of an entire image, use the entropy function.
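A minimal sketch of the three texture filters and the entropy function applied to the same image (the image name is illustrative):

I = imread('circuit.tif');
R = rangefilt(I);         % local range, default 3-by-3 neighborhood
S = stdfilt(I,ones(3));   % local standard deviation, 3-by-3 neighborhood
E = entropyfilt(I);       % local entropy, default 9-by-9 neighborhood
e = entropy(I)            % scalar entropy of the entire image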

Example: Using the Texture Functions

The following example illustrates how the texture filter functions can detect regions of texture in an image. In the figure, the background is smooth; there is very little variation in the gray-level values. In the foreground, the surface contours of the coins exhibit more texture. In this image, foreground pixels have more variability and thus higher range values. Range filtering makes the edges and contours of the coins more visible.

To see an example of using filtering functions, view the Texture Segmentation Using Texture Filters demo.

  1. Read in the image and display it.

    I = imread('eight.tif');  imshow(I)

  2. Filter the image with the rangefilt function and display the results. Note how range filtering highlights the edges and surface contours of the coins.

    K = rangefilt(I);  figure, imshow(K)

Using a Gray-Level Co-Occurrence Matrix (GLCM)

A statistical method of examining texture that considers the spatial relationship of pixels is the gray-level co-occurrence matrix (GLCM), also known as the gray-level spatial dependence matrix. The GLCM functions characterize the texture of an image by calculating how often pairs of pixels with specific values and in a specified spatial relationship occur in an image, creating a GLCM, and then extracting statistical measures from this matrix. The texture filter functions, described in Using Texture Filter Functions, cannot provide information about shape, that is, the spatial relationships of pixels in an image.

Creating a Gray-Level Co-Occurrence Matrix

To create a GLCM, use the graycomatrix function. The graycomatrix function creates a gray-level co-occurrence matrix (GLCM) by calculating how often a pixel with the intensity (gray-level) value i occurs in a specific spatial relationship to a pixel with the value j. By default, the spatial relationship is defined as the pixel of interest and the pixel to its immediate right (horizontally adjacent), but you can specify other spatial relationships between the two pixels. Each element (i,j) in the resultant glcm is simply the sum of the number of times that the pixel with value i occurred in the specified spatial relationship to a pixel with value j in the input image.

The number of gray levels in the image determines the size of the GLCM. By default, graycomatrix uses scaling to reduce the number of intensity values in an image to eight, but you can use the NumLevels and the GrayLimits parameters to control this scaling of gray levels. See the graycomatrix reference page for more information.

The gray-level co-occurrence matrix can reveal certain properties about the spatial distribution of the gray levels in the texture image. For example, if most of the entries in the GLCM are concentrated along the diagonal, the texture is coarse with respect to the specified offset. You can also derive several statistical measures from the GLCM. See Deriving Statistics from a GLCM for more information.

To illustrate, the following figure shows how graycomatrix calculates the first three values in a GLCM. In the output GLCM, element (1,1) contains the value 1 because there is only one instance in the input image where two horizontally adjacent pixels have the values 1 and 1, respectively. glcm(1,2) contains the value 2 because there are two instances where two horizontally adjacent pixels have the values 1 and 2. Element (1,3) in the GLCM has the value 0 because there are no instances of two horizontally adjacent pixels with the values 1 and 3. graycomatrix continues processing the input image, scanning the image for other pixel pairs (i,j) and recording the sums in the corresponding elements of the GLCM.

Process Used to Create the GLCM
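A minimal sketch of this process on a small sample matrix (the values are illustrative):

I = [1 1 5 6 8; 2 3 5 7 1; 4 5 7 1 2; 8 5 1 2 5];
glcm = graycomatrix(I,'GrayLimits',[])   % [] uses the data's own min and max as limits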

Specifying the Offsets

By default, the graycomatrix function creates a single GLCM, with the spatial relationship, or offset, defined as two horizontally adjacent pixels. However, a single GLCM might not be enough to describe the textural features of the input image. For example, a single horizontal offset might not be sensitive to texture with a vertical orientation. For this reason, graycomatrix can create multiple GLCMs for a single input image.

To create multiple GLCMs, specify an array of offsets to the graycomatrix function. These offsets define pixel relationships of varying direction and distance. For example, you can define an array of offsets that specify four directions (horizontal, vertical, and two diagonals) and four distances. In this case, the input image is represented by 16 GLCMs. When you calculate statistics from these GLCMs, you can take the average.

You specify these offsets as a p-by-2 array of integers. Each row in the array is a two-element vector, [row_offset, col_offset], that specifies one offset. row_offset is the number of rows between the pixel of interest and its neighbor. col_offset is the number of columns between the pixel of interest and its neighbor. This example creates an offset array that specifies four directions and four distances for each direction. For more information about specifying offsets, see the graycomatrix reference page.

offsets = [ 0 1;  0 2;  0 3;  0 4; ...
           -1 1; -2 2; -3 3; -4 4; ...
           -1 0; -2 0; -3 0; -4 0; ...
           -1 -1; -2 -2; -3 -3; -4 -4];

The figure illustrates the spatial relationships of pixels that are defined by this array of offsets, where D represents the distance from the pixel of interest.
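Given such an offsets array, a sketch of computing one GLCM per offset and averaging a statistic across all of them, assuming a grayscale image I (graycoprops is described in the next section):

glcms = graycomatrix(I,'Offset',offsets);    % one GLCM per row of offsets
stats = graycoprops(glcms,'Contrast');
avgContrast = mean([stats.Contrast]);        % average over all 16 offsets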

Deriving Statistics from a GLCM

After you create the GLCMs, you can derive several statistics from them using the graycoprops function. These statistics provide information about the texture of an image. The following table lists the statistics you can derive. You specify the statistics you want when you call the graycoprops function.

Statistic Description
Contrast Measures the local variations in the gray-level co-occurrence matrix.
Correlation Measures the joint probability occurrence of the specified pixel pairs.
Energy Provides the sum of squared elements in the GLCM. Also known as uniformity or the angular second moment.
Homogeneity Measures the closeness of the distribution of elements in the GLCM to the GLCM diagonal.

Example: Plotting the Correlation

This example shows how to create a set of GLCMs and derive statistics from them. It also illustrates how the statistics returned by graycoprops relate directly to the original input image.

  1. Read in an image and display it. The example converts the truecolor image to grayscale and then rotates it 90°.

    circuitBoard = rot90(rgb2gray(imread('board.tif')));  imshow(circuitBoard)

  2. Define offsets of varying direction and distance. Because the image contains objects of a variety of shapes and sizes that are arranged in horizontal and vertical directions, the example specifies a set of horizontal offsets that only vary in distance.

    offsets0 = [zeros(40,1) (1:40)'];
  3. Create the GLCMs. Call the graycomatrix function specifying the offsets.

    glcms = graycomatrix(circuitBoard,'Offset',offsets0);
  4. Derive statistics from the GLCMs using the graycoprops function. The example calculates the contrast and correlation.

    stats = graycoprops(glcms,'Contrast Correlation');
  5. Plot correlation as a function of offset.

    figure, plot([stats.Correlation]);
    title('Texture Correlation as a function of offset');
    xlabel('Horizontal Offset')
    ylabel('Correlation')

    The plot contains peaks at offsets 7, 15, 23, and 30. If you examine the input image closely, you can see that certain vertical elements in the image have a periodic pattern that repeats every seven pixels. The following figure shows the upper left corner of the image and points out where this pattern occurs.

Analyzing Images

The toolbox also includes functions that return information about the texture of an image. See Analyzing the Texture of an Image for more information.

Detecting Edges Using the edge Function

In an image, an edge is a curve that follows a path of rapid change in image intensity. Edges are often associated with the boundaries of objects in a scene. Edge detection is used to identify the edges in an image.

To find edges, you can use the edge function. This function looks for places in the image where the intensity changes rapidly, using one of these two criteria:

  • Places where the first derivative of the intensity is larger in magnitude than some threshold

  • Places where the second derivative of the intensity has a zero crossing

edge provides a number of derivative estimators, each of which implements one of the definitions above. For some of these estimators, you can specify whether the operation should be sensitive to horizontal edges, vertical edges, or both. edge returns a binary image containing 1's where edges are found and 0's elsewhere.

The most powerful edge-detection method that edge provides is the Canny method. The Canny method differs from the other edge-detection methods in that it uses two different thresholds (to detect strong and weak edges), and includes the weak edges in the output only if they are connected to strong edges. This method is therefore less likely than the others to be fooled by noise, and more likely to detect true weak edges.
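For example, a sketch of controlling the detectors directly; the threshold values and direction are illustrative, and passing [] lets edge choose a threshold automatically:

BW1 = edge(I,'sobel',[],'horizontal');   % Sobel, sensitive to horizontal edges only
BW2 = edge(I,'canny',[0.04 0.10]);       % Canny with explicit low and high thresholds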

The following example illustrates the power of the Canny edge detector by showing the results of applying the Sobel and Canny edge detectors to the same image:

  1. Read image and display it.

    I = imread('coins.png');  imshow(I)

  2. Apply the Sobel and Canny edge detectors to the image and display them.

    BW1 = edge(I,'sobel');  BW2 = edge(I,'canny');
    imshow(BW1)
    figure, imshow(BW2)

 

Detecting Corners Using the corner Function

Corners are the most reliable feature you can use to find the correspondence between images. The following diagram shows three pixels—one inside the object, one on the edge of the object, and one on the corner. If a pixel is inside an object, its surroundings (solid square) correspond to the surroundings of its neighbor (dotted square). This is true for neighboring pixels in all directions. If a pixel is on the edge of an object, its surroundings differ from the surroundings of its neighbors in one direction, but correspond to the surroundings of its neighbors in the other (perpendicular) direction. A corner pixel has surroundings different from all of its neighbors in all directions.

The corner function identifies corners in an image. Two methods are available—the Harris corner detection method (the default) and Shi and Tomasi's minimum eigenvalue method. Both methods use algorithms that depend on the eigenvalues of the summation of the squared difference matrix (SSD). The eigenvalues of an SSD matrix represent the differences between the surroundings of a pixel and the surroundings of its neighbors. The larger the difference between the surroundings of a pixel and those of its neighbors, the larger the eigenvalues. The larger the eigenvalues, the more likely that a pixel appears at a corner.
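For example, a sketch of selecting the detection method explicitly, assuming a grayscale image I:

C1 = corner(I);                        % Harris method (the default)
C2 = corner(I,'MinimumEigenvalue');    % Shi and Tomasi's minimum eigenvalue method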

The following example demonstrates how to locate corners with the corner function and adjust your results by refining the maximum number of desired corners.

  1. Create a checkerboard image, and find the corners.

    I = checkerboard(40,2,2);  C = corner(I);  
  2. Display the corners when the maximum number of desired corners is the default setting of 200.

    subplot(1,2,1);
    imshow(I);
    hold on
    plot(C(:,1), C(:,2), '.', 'Color', 'g')
    title('Maximum Corners = 200')
    hold off
  3. Display the corners when the maximum number of desired corners is 3.

    corners_max_specified = corner(I,3);
    subplot(1,2,2);
    imshow(I);
    hold on
    plot(corners_max_specified(:,1), corners_max_specified(:,2), ...
        '.', 'Color', 'g')
    title('Maximum Corners = 3')
    hold off

Tracing Object Boundaries in an Image

The toolbox includes two functions you can use to find the boundaries of objects in a binary image:

The bwtraceboundary function returns the row and column coordinates of all the pixels on the border of an object in an image. You must specify the location of a border pixel on the object as the starting point for the trace.

The bwboundaries function returns the row and column coordinates of border pixels of all the objects in an image.

For both functions, the nonzero pixels in the binary image belong to an object, and pixels with the value 0 (zero) constitute the background.

The following example uses bwtraceboundary to trace the border of an object in a binary image. Then, using bwboundaries, the example traces the borders of all the objects in the image:

  1. Read image and display it.

    I = imread('coins.png');  imshow(I)

  2. Convert the image to a binary image. bwtraceboundary and bwboundaries only work with binary images.

    BW = im2bw(I);  imshow(BW)

  3. Determine the row and column coordinates of a pixel on the border of the object you want to trace. bwtraceboundary uses this point as the starting location for the boundary tracing.

    dim = size(BW)
    col = round(dim(2)/2)-90;
    row = min(find(BW(:,col)))
  4. Call bwtraceboundary to trace the boundary from the specified point. As required arguments, you must specify a binary image, the row and column coordinates of the starting point, and the direction of the first step. The example specifies north ('N'). For information about this parameter, see Choosing the First Step and Direction for Boundary Tracing.
    boundary = bwtraceboundary(BW,[row, col],'N');
  5. Display the original grayscale image and use the coordinates returned by bwtraceboundary to plot the border on the image.

    imshow(I)
    hold on;
    plot(boundary(:,2),boundary(:,1),'g','LineWidth',3);

  6. To trace the boundaries of all the coins in the image, use the bwboundaries function. By default, bwboundaries finds the boundaries of all objects in an image, including objects inside other objects. In the binary image used in this example, some of the coins contain black areas that bwboundaries interprets as separate objects. To ensure that bwboundaries only traces the coins, use imfill to fill the area inside each coin.

    BW_filled = imfill(BW,'holes');
    boundaries = bwboundaries(BW_filled);

    bwboundaries returns a cell array, where each cell contains the row/column coordinates for an object in the image.

  7. Plot the borders of all the coins on the original grayscale image using the coordinates returned by bwboundaries.

    for k=1:10
       b = boundaries{k};
       plot(b(:,2),b(:,1),'g','LineWidth',3);
    end

Choosing the First Step and Direction for Boundary Tracing

For certain objects, you must take care when selecting the border pixel to use as the starting point and the direction to choose for the first step parameter (north, south, and so on).

For example, if an object contains a hole and you select a pixel on a thin part of the object as the starting pixel, you can trace the outside border of the object or the inside border of the hole, depending on the direction you choose for the first step. For filled objects, the direction you select for the first step parameter is not as important.

To illustrate, this figure shows the pixels traced when the starting pixel is on a thin part of the object and the first step is set to north and south. The connectivity is set to 8 (the default).

Impact of First Step and Direction Parameters on Boundary Tracing

 

Detecting Lines Using the Hough Transform

This section describes how to use the Hough transform functions to detect lines in an image. The following table lists the Hough transform functions in the order you use them to perform this task.

Function Description
hough

The hough function implements the Standard Hough Transform (SHT). The Hough transform is designed to detect lines, using the parametric representation of a line:

rho = x*cos(theta) + y*sin(theta)

The variable rho is the distance from the origin to the line along a vector perpendicular to the line. theta is the angle between the x-axis and this vector. The hough function generates a parameter space matrix whose rows and columns correspond to these rho and theta values, respectively.

houghpeaks

After you compute the Hough transform, you can use the houghpeaks function to find peak values in the parameter space. These peaks represent potential lines in the input image.

houghlines

After you identify the peaks in the Hough transform, you can use the houghlines function to find the endpoints of the line segments corresponding to peaks in the Hough transform. This function automatically fills in small gaps in the line segments.

The following example shows how to use these functions to detect lines in an image.

  1. Read an image into the MATLAB workspace.

    I  = imread('circuit.tif');
  2. For this example, rotate and crop the image using the imrotate function.

    rotI = imrotate(I,33,'crop');  fig1 = imshow(rotI);

  3. Find the edges in the image using the edge function.

    BW = edge(rotI,'canny');  figure, imshow(BW);

  4. Compute the Hough transform of the image using the hough function.

    [H,theta,rho] = hough(BW);
  5. Display the transform using the imshow function.

    figure, imshow(imadjust(mat2gray(H)),[],'XData',theta,'YData',rho,...
        'InitialMagnification','fit');
    xlabel('\theta (degrees)'), ylabel('\rho');
    axis on, axis normal, hold on;
    colormap(hot)

  6. Find the peaks in the Hough transform matrix, H, using the houghpeaks function.

    P = houghpeaks(H,5,'threshold',ceil(0.3*max(H(:))));
  7. Superimpose a plot on the image of the transform that identifies the peaks.
    x = theta(P(:,2));  y = rho(P(:,1));  plot(x,y,'s','color','black');

  8. Find lines in the image using the houghlines function.

    lines = houghlines(BW,theta,rho,P,'FillGap',5,'MinLength',7);
  9. Create a plot that superimposes the lines on the original image.

    figure, imshow(rotI), hold on
    max_len = 0;
    for k = 1:length(lines)
       xy = [lines(k).point1; lines(k).point2];
       plot(xy(:,1),xy(:,2),'LineWidth',2,'Color','green');

       % Plot beginnings and ends of lines
       plot(xy(1,1),xy(1,2),'x','LineWidth',2,'Color','yellow');
       plot(xy(2,1),xy(2,2),'x','LineWidth',2,'Color','red');

       % Determine the endpoints of the longest line segment
       len = norm(lines(k).point1 - lines(k).point2);
       if ( len > max_len)
          max_len = len;
          xy_long = xy;
       end
    end

    % Highlight the longest line segment
    plot(xy_long(:,1),xy_long(:,2),'LineWidth',2,'Color','red');

 

Analyzing Image Homogeneity Using Quadtree Decomposition

Quadtree decomposition is an analysis technique that involves subdividing an image into blocks that are more homogeneous than the image itself. This technique reveals information about the structure of the image. It is also useful as the first step in adaptive compression algorithms.

You can perform quadtree decomposition using the qtdecomp function. This function works by dividing a square image into four equal-sized square blocks, and then testing each block to see if it meets some criterion of homogeneity (e.g., if all the pixels in the block are within a specific dynamic range). If a block meets the criterion, it is not divided any further. If it does not meet the criterion, it is subdivided again into four blocks, and the test criterion is applied to those blocks. This process is repeated iteratively until each block meets the criterion. The result might have blocks of several different sizes.

Example: Performing Quadtree Decomposition

To illustrate, this example performs quadtree decomposition on a 512-by-512 grayscale image.

  1. Read in the grayscale image.

    I = imread('liftingbody.png');
  2. Specify the test criteria used to determine the homogeneity of each block in the decomposition. For example, the criterion might be this threshold calculation.

    max(block(:)) - min(block(:)) <= 0.2

    You can also supply qtdecomp with a function (rather than a threshold value) for deciding whether to split blocks; for example, you might base the decision on the variance of the block. See the reference page for qtdecomp for more information.

  3. Perform this quadtree decomposition by calling the qtdecomp function, specifying the image and the threshold value as arguments.

    S = qtdecomp(I,0.27);

    You specify the threshold as a value between 0 and 1, regardless of the class of I. If I is uint8, qtdecomp multiplies the threshold value by 255 to determine the actual threshold to use. If I is uint16, qtdecomp multiplies the threshold value by 65535.

qtdecomp first divides the image into four 256-by-256 blocks and applies the test criterion to each block. If a block does not meet the criterion, qtdecomp subdivides it and applies the test criterion to each block. qtdecomp continues to subdivide blocks until all blocks meet the criterion. Blocks can be as small as 1-by-1, unless you specify otherwise.

qtdecomp returns S as a sparse matrix, the same size as I. The nonzero elements of S represent the upper left corners of the blocks; the value of each nonzero element indicates the block size.
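To work with the decomposition, a sketch using the qtgetblk function (the block size of 8 is illustrative):

numBlocks = length(find(S == 8));   % how many 8-by-8 blocks were produced
[vals,r,c] = qtgetblk(I,S,8);       % the 8-by-8 blocks and their upper left corners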

The following figure shows the original image and a representation of its quadtree decomposition. (To see how this representation was created, see the example on the qtdecomp reference page.) Each black square represents a homogeneous block, and the white lines represent the boundaries between blocks. Notice how the blocks are smaller in areas corresponding to large changes in intensity in the image.

Image and a Representation of Its Quadtree Decomposition

 

Getting Information about Image Pixel Values and Image Statistics

Getting Image Pixel Values Using impixel

To determine the values of one or more pixels in an image and return the values in a variable, use the impixel function. You can specify the pixels by passing their coordinates as input arguments or you can select the pixels interactively using a mouse. impixel returns the value of specified pixels in a variable in the MATLAB workspace.
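You can also call impixel non-interactively by passing coordinates; a sketch with illustrative coordinates:

RGB = imread('peppers.png');
c = [12 146 410];                 % column coordinates (illustrative)
r = [104 156 129];                % row coordinates (illustrative)
pixel_values = impixel(RGB,c,r)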

This example illustrates how to use impixel to get pixel values.

  1. Display an image.

    imshow canoe.tif
  2. Call impixel. When called with no input arguments, impixel associates itself with the image in the current axes.
    pixel_values = impixel
  3. Select the points you want to examine in the image by clicking the mouse. impixel places a star at each point you select.

  4. When you are finished selecting points, press Return. impixel returns the pixel values in an n-by-3 array, where n is the number of points you selected. The stars used to indicate selected points disappear from the image.

    pixel_values =

        0.1294    0.1294    0.1294
        0.5176         0         0
        0.7765    0.6118    0.4196

 

Creating an Intensity Profile of an Image Using improfile

The intensity profile of an image is the set of intensity values taken from regularly spaced points along a line segment or multiline path in an image. For points that do not fall on the center of a pixel, the intensity values are interpolated.

To create an intensity profile, use the improfile function. This function calculates and plots the intensity values along a line segment or a multiline path in an image. You define the line segments either by specifying their coordinates as input arguments or interactively with a mouse. (By default, improfile uses nearest-neighbor interpolation, but you can specify a different method. For more information, see Specifying the Interpolation Method.) improfile works best with grayscale and truecolor images.

For a single line segment, improfile plots the intensity values in a two-dimensional view. For a multiline path, improfile plots the intensity values in a three-dimensional view.

If you call improfile with no arguments, the cursor changes to crosshairs when it is over the image. You can then specify line segments by clicking the endpoints; improfile draws a line between each two consecutive points you select. When you finish specifying the path, press Return. improfile displays the plot in a new figure.
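You can also call improfile non-interactively by passing endpoint coordinates; a sketch (the coordinates are illustrative):

I = imread('pout.tif');
c = improfile(I,[20 200],[30 30]);   % intensities along the line from (20,30) to (200,30)
figure, plot(c)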

In this example, you call improfile and specify a single line with the mouse. In this figure, the line is shown in red, and is drawn from top to bottom.

I = fitsread('solarspectra.fts');  imshow(I,[]);  improfile

improfile displays a plot of the data along the line. Notice the peaks and valleys and how they correspond to the light and dark bands in the image.

Plot Produced by improfile

The example below shows how improfile works with an RGB image. Use imshow to display the image in a figure window. Call improfile without any arguments and trace a line segment in the image interactively. In the figure, the black line indicates a line segment drawn from top to bottom. Double-click to end the line segment.

imshow peppers.png
improfile

RGB Image with Line Segment Drawn with improfile

The improfile function displays a plot of the intensity values along the line segment. The plot includes separate lines for the red, green, and blue intensities. In the plot, notice how low the blue values are at the beginning of the plot where the line traverses the orange pepper.

Plot of Intensity Values Along a Line Segment in an RGB Image

 

Displaying a Contour Plot of Image Data

You can use the toolbox function imcontour to display a contour plot of the data in a grayscale image. A contour is a path in an image along which the image intensity values are equal to a constant. This function is similar to the contour function in MATLAB, but it automatically sets up the axes so their orientation and aspect ratio match the image.

This example displays a grayscale image of grains of rice and a contour plot of the image data:

  1. Read a grayscale image and display it.

    I = imread('rice.png');  imshow(I)

  2. Display a contour plot of the grayscale image.

    figure, imcontour(I,3)

You can use the clabel function to label the levels of the contours. See the description of clabel in the MATLAB Function Reference for details.

 

Creating an Image Histogram Using imhist

An image histogram is a chart that shows the distribution of intensities in an indexed or grayscale image. You can use the information in a histogram to choose an appropriate enhancement operation. For example, if an image histogram shows that the range of intensity values is small, you can use an intensity adjustment function to spread the values across a wider range.

To create an image histogram, use the imhist function. This function creates a histogram plot by making n equally spaced bins, each representing a range of data values. It then calculates the number of pixels within each range.

The following example displays an image of grains of rice and a histogram based on 64 bins. The histogram shows a peak at around 100, corresponding to the dark gray background in the image. For information about how to modify an image by changing the distribution of its histogram, see Adjusting Intensity Values to a Specified Range.

  1. Read image and display it.

    I = imread('rice.png');  imshow(I)

  2. Display histogram of image.

    figure, imhist(I)

 

Getting Summary Statistics About an Image

You can compute standard statistics of an image using the mean2, std2, and corr2 functions. mean2 and std2 compute the mean and standard deviation of the elements of a matrix. corr2 computes the correlation coefficient between two matrices of the same size.

These functions are two-dimensional versions of the mean, std, and corrcoef functions described in the MATLAB Function Reference.
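A minimal sketch (the image choice is illustrative):

I = imread('rice.png');
m = mean2(I)               % mean of all pixel values
s = std2(I)                % standard deviation of all pixel values
r = corr2(I,medfilt2(I))   % correlation between I and a median-filtered copy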

Computing Properties for Image Regions

You can use the regionprops function to compute properties for image regions. For example, regionprops can measure such properties as the area, center of mass, and bounding box for a region you specify. See the reference page for regionprops for more information.
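A sketch of measuring regions in a binary image (the preprocessing shown is illustrative):

BW = im2bw(imread('coins.png'));
BW = imfill(BW,'holes');          % fill holes so each coin is a single region
L = bwlabel(BW);                  % label the connected components
stats = regionprops(L,'Area','Centroid','BoundingBox');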

Source: http://www.mathworks.com