Quick ERDAS Tutorial

1 Introduction

A remotely sensed image is made up of a rectangular matrix of individual pixels, where each pixel represents a specific ground area. The value of each pixel represents the magnitude of upwelling electromagnetic energy from that ground area (Mather, 1999). Energy occurs at different wavelengths, and by measuring the energy in different bands or spectra of wavelengths coming from the same ground area, ground features can be distinguished from each other, as they show characteristic responses in the different bands recorded; for example, water is different from soil or vegetation.

The values for pixels in one waveband typically range from 0 to 255, this being the range of values that can be represented by one byte of computer storage. The detectors on the sensing devices are therefore set so that the energy is recorded digitally within that range. For any one image, the majority of the actual data values recorded may be concentrated within only a small part of the 0-255 range.

A consequence of this is that small variations in pixel values, which may be of interest and which might distinguish between features, may be difficult to detect visually (Jones, 1998). Viewing remotely sensed images is therefore concerned with manipulating the pixel values in such a way that features can be clearly discriminated when the image is displayed. The amount of information that can be retrieved from an image thus lies not in the cell values per se, but in the way these values can be displayed.

2 Viewing and investigating features

Spectral profile

To distinguish different features in a TM image, the figure below (left) is of help, as it shows how different features typically display their reflectance values in the different bands of the TM sensor.

Comments on findings in “peak-tm84”: Firstly, the peak in band 5 is not consistent with the generalised reflectance, which should be lower in band 5 than in band 4. Secondly, the peat bog should, if it were wet, show a much lower reflectance in bands 5 and 7. Even so, note the lower overall reflectance of the peat bog compared to the other features.

Image Info

Image info yields valuable information about the image and the data contained in each band, such as:

Information about the raster itself: rows and columns, data type etc.
Statistical information: mean, max, mode etc.
Image coordinates and pixel size

Image info as requested in practical 1:

Lond-pan94: #rows=6012, #columns=7000, meanDN=44.966, data type= Unsigned 8bit

Leic-tm94-geo:
#rows=836, #columns=872, meanDN in band 4 = 191.660, pixel size=35m

Leic-tm84-geo:
maxDN in band 4 = 201, ULx=442045.0, ULy=32110.0, LRx=471025.0, LRy=290870.0

Flevoland:
Data type= Unsigned 16bit, ULx=0, ULy=0, LRx=487.0, LRy=-401.0

AVHRR-4: Data type= Unsigned 8bit

Spatial profile

Profile of band 4 in leic-tm92, across the Cemetery, University campus and Victoria Park, showing a low level of reflectance (less vegetation) in the built-up areas and campus, and high reflectance across Victoria Park.

Surface profile of band 4 in the area near the motorway junction. An assumption that can be made is that the fields along the motorway are barren or dry (or carry crops with little reflectance), whereas the motorway is clearly flanked by trees and vegetation.

Blend, Swipe and Flicker

When two images have been georeferenced, they can be laid on top of each other. Blend blends the two displays into each other, Swipe rolls like a curtain over the top layer, hiding or revealing the bottom layer, and Flicker switches between the images. These functions can be operated manually using buttons/scroll bars or automatically at a set rate.
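The idea behind Blend can be sketched as a weighted average of two co-registered layers. The following is a minimal illustration only, not ERDAS code; "top", "bottom" and "alpha" are hypothetical names, and alpha plays the role of the scroll bar position.

    import numpy as np

    def blend(top, bottom, alpha=0.5):
        # weighted mix of the two layers; alpha = 1 shows only the top layer
        return alpha * top.astype(float) + (1.0 - alpha) * bottom.astype(float)

    # illustrative use with two random 8-bit "images" of the same shape
    top = np.random.randint(0, 256, (100, 100), dtype=np.uint8)
    bottom = np.random.randint(0, 256, (100, 100), dtype=np.uint8)
    mixed = blend(top, bottom, 0.5).astype(np.uint8)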

3 Grey scale decorrelation, edge enhancement

Grey scale enhancement

One of the most basic enhancement techniques is the contrast stretch. If the screen is set to display 256 grey scale colours, band 3 in leic-tm92 will appear dark, as most values are clustered around the mean value of 29, with the overall values ranging from approximately 21 to 55, taking up only 34 different values, or 14% of the full value range (0-255) that could be utilised.

Using stretch will distribute the pixel values according to the algorithm used so that the whole value range is utilised. In doing so the algorithm leaves the pixel value itself unchanged; the value is instead assigned a display value in a lookup table (LUT). As an example, the pixel value at (276, -230) is 23 in the image; the LUT value is 23 without stretch and 47 with stretch applied.
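As a rough sketch of the principle, a simple min-max linear stretch can be expressed as a lookup table. The 21-55 input range is taken from the text above, but the stretch algorithm here is a plain min-max stretch, not necessarily the one ERDAS applies, so the display value it assigns to DN 23 will differ from the 47 reported above.

    import numpy as np

    lo, hi = 21, 55                         # observed DN range in band 3
    lut = np.zeros(256, dtype=np.uint8)     # display value for every possible DN
    dn = np.arange(lo, hi + 1)
    lut[lo:hi + 1] = np.round((dn - lo) / (hi - lo) * 255).astype(np.uint8)
    lut[hi + 1:] = 255                      # clip values above the observed range

    # the pixel value itself is unchanged; only its display value comes from the LUT
    print(lut[23])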

Decorrelation stretch

The effect of a decorrelation stretch is clearly visible in the following images: greater colour differentiation, which is also evident in the histograms, where the pixel values are grouped and distinctly separated from each other.
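A decorrelation stretch is commonly implemented by rotating the bands to principal components, equalising the variance of each component, and rotating back. The sketch below follows that generic recipe on a hypothetical 3-band array "img" (rows x cols x 3); it is an assumption-laden illustration, not ERDAS's exact algorithm.

    import numpy as np

    def decorrelation_stretch(img):
        # flatten to (pixels, bands) and centre the data
        bands = img.reshape(-1, img.shape[-1]).astype(float)
        mean = bands.mean(axis=0)
        cov = np.cov(bands - mean, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(cov)                  # principal axes
        # scale each component to unit variance, then rotate back
        scale = np.diag(1.0 / np.sqrt(np.maximum(eigvals, 1e-12)))
        transform = eigvecs @ scale @ eigvecs.T
        stretched = (bands - mean) @ transform + mean
        return stretched.reshape(img.shape)

In practice the result is rescaled to the 0-255 display range afterwards, exactly as with an ordinary contrast stretch.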

Histogram – Without decorrelation stretch

Histogram – With decorrelation stretch

Image – Without decorrelation stretch

Image – With decorrelation stretch

Edge enhancement

The “Crisp” function passes an edge-detecting or “high-pass” filter across the image. An edge is a sharp change in grey scale value at a particular pixel, and such changes may be interpreted as features of the built environment, roads or field boundaries. In the combined image below, the right part shows more distinguishable features than the left part, which has not been enhanced. As can be seen in the upper right corner of each picture, differences in neighbouring pixel values make an impact on the crisp picture, whereas the area is more or less consistently blurry green in the original image.
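The following is a minimal sketch of the idea behind such a filter: a 3x3 sharpening kernel convolved with a single band. The kernel shown is a generic high-pass kernel, not necessarily the one ERDAS's Crisp function uses.

    import numpy as np
    from scipy import ndimage

    # generic sharpening kernel: weights sum to 1, so flat areas are preserved
    kernel = np.array([[-1, -1, -1],
                       [-1,  9, -1],
                       [-1, -1, -1]], dtype=float)

    def crisp(band):
        # convolve the band with the high-pass kernel and clip to 8-bit range
        out = ndimage.convolve(band.astype(float), kernel, mode='nearest')
        return np.clip(out, 0, 255).astype(np.uint8)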

4 Band ratios

Band ratioing means dividing the pixels in one band by the corresponding pixels in a second band. The reason for this is twofold. One is that differences between the spectral reflectance curves of surface types can be brought out. The second is that although illumination, and consequently radiance, may vary across an image, the ratio between an illuminated and a shadowed area of the same surface type will be the same. Thus, ratioing aids image interpretation, particularly the near-infrared/red (NIR/R) band ratio.

From the general spectral reflectance the following observation can be made:

Vegetation – NIR/R >>> 1, Water – NIR/R < 1, Soil – NIR/R > 1.

NIR/R images can serve as a crude classifier of images, and indicate vegetated areas in particular. This ratio has therefore been developed into a range of different vegetation indices.
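A minimal sketch of the TM4/TM3 ratio, assuming two hypothetical 2-D arrays "tm4" (near-infrared) and "tm3" (red); a small constant avoids division by zero and the result is rescaled to 0-255 for display.

    import numpy as np

    def nir_red_ratio(tm4, tm3, eps=1e-6):
        ratio = tm4.astype(float) / (tm3.astype(float) + eps)
        # linear rescale of the ratio to the 8-bit display range
        scaled = (ratio - ratio.min()) / (ratio.max() - ratio.min()) * 255
        return ratio, scaled.astype(np.uint8)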

Original image

NIR/R (TM4/TM3), with values ranging from 0 to 8, soil to vigorous vegetation, vegetated areas shown in white.

NIR/R, stretched, with values ranging from 16 to 254, giving a smoother transition between surface types.


6 Geometric and radiometric corrections

Bad line replacement

Part of image with missing scan line

Regarding the position of the missing scan line: to find the correct row number, it must be considered that the image peak-tm84 has 512 rows and 512 columns according to its image info, with coordinates upper left 1/1 (y/x) and lower right 512/-510 (y/x).

Since the missing scan line is at x = –200, this is row number 201.
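A simple bad line replacement substitutes the missing row with the average of the rows immediately above and below it. The sketch below assumes the band is a numpy array with zero-based rows, so image row 201 is index 200; it illustrates the principle rather than ERDAS's own routine.

    import numpy as np

    def replace_bad_line(band, row):
        # replace the bad scan line with the mean of its two neighbouring lines
        fixed = band.astype(float).copy()
        fixed[row] = (fixed[row - 1] + fixed[row + 1]) / 2.0
        return fixed.astype(band.dtype)

    # e.g. repaired = replace_bad_line(peak_tm84_band4, 200)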

 

Sensor de-striping

Destriped image (left) and original image (right).

The destriped image shows a small improvement compared to the original image.

Mather (1999) describes two ways of destriping: a linear method and histogram matching. In the former, the mean and standard deviation of the lines recorded by each of the 16 detectors (i.e. every 16th line) are forced to be equal to the mean and standard deviation of all the pixels in the image. In the latter, a target histogram is constructed, which is the cumulative histogram of the entire image; the cumulative histogram of each detector (again, every 16th line) is compared with it, and an output pixel value is calculated. ERDAS uses a convolution method, which passes an averaging matrix across the image, which may account for stripes still being apparent in the image.
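The linear method can be sketched as follows for a single band; the detector count of 16 matches TM, and the function simply rescales every 16th line so its mean and standard deviation match those of the whole image. This is an illustration of the linear approach described above, not the convolution method ERDAS applies.

    import numpy as np

    def destripe_linear(band, n_detectors=16):
        band = band.astype(float)
        target_mean, target_std = band.mean(), band.std()
        out = band.copy()
        for d in range(n_detectors):
            rows = out[d::n_detectors]                 # lines from detector d
            m, s = rows.mean(), rows.std()
            if s > 0:
                # force this detector's lines to the image-wide mean and std
                out[d::n_detectors] = (rows - m) / s * target_std + target_mean
        return out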


Atmospheric path luminance correction

Original image with default 50/50% and with 100/70% brightness/contrast

The pixel values in the bands used for display (1-3) take up only about the first 40 of the possible 256 values, which is why the image seems dark. The brightness tool shifts the colours towards white, while the contrast tool stretches the histogram for better colour differentiation. Thus, the dark bluish pixel values are replaced by bright tones in the right image, speckled with red tones.

Corrected image with default 50/50% and with 100/70% brightness/contrast

After correcting for luminance, all pixel values start more or less from 0, which is why the default image is almost totally black compared to the original image.

Due to the effect of path radiance, the histogram initially does not have its lowest value at zero. This is why the 100/70% brightness/contrast image has a bright bluish cast, since the path radiance is highest in band 1, which is chosen as the blue colour component. After subtracting the minimum DN, the corrected image appears as a normalised true colour approximation (a minimal sketch of the dark pixel subtraction follows the table). Bands 5 and 7 normally need not be corrected (Mather, 1999), which is also apparent in the table below.

The table shows the minimum and maximum values before (B) and after (A) the dark pixel subtraction.

Band MinB MinA MaxB MaxA
1 58 1 113 43
2 19 0 63 31
3 16 0 82 46
4 12 0 123 100
5 5 0 171 132
6 96 96 180 160
7 1 2 99 67
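A minimal sketch of dark pixel (minimum DN) subtraction: the minimum value of each band is subtracted from every pixel in that band. "stack" is a hypothetical 3-D array of shape (bands, rows, cols); the variable names are illustrative only.

    import numpy as np

    def dark_pixel_subtraction(stack):
        # per-band minima, taken over all pixels in each band
        minima = stack.reshape(stack.shape[0], -1).min(axis=1)
        # subtract each band's minimum from that band
        corrected = stack.astype(int) - minima[:, None, None]
        return corrected, minima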

Geometric correction

– Using Ground Control Point (GCP)

The GCPs should be spread evenly over the image, covering the whole image, and be placed as far as possible towards the corners of the image, to give the best coverage for calculating the transformation.

This is not the case in the provided data, as no corners are used, and there is a slight over-representation of GCPs in the lower half of the image. Unfortunately, adding GCPs from the given map is also of limited help, as the map covers a much smaller area than the actual image. Clustering GCPs in the centre of the image will not give accuracy to the transformation of coordinates in the extremities of the image.

Adding GCPs from a map is easy and straightforward. However, care should be taken when choosing the correct position of a GCP, as a scanned map may add to the RMS error, because roads tend to be drawn wider than they actually are, or even slightly displaced; University Road, for instance, is drawn approximately 30m wide. In addition, the pixel size of the map is 3m, whereas the image itself operates with 30m, which is another source of error in the correct placement of GCPs.
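The first-order transform that the GCPs are used to estimate is an affine mapping from image coordinates to map coordinates, fitted by least squares; the RMS error then measures how well the GCPs fit that transform. The sketch below is generic and assumes hypothetical (n, 2) coordinate arrays, not the practical's actual GCPs.

    import numpy as np

    def fit_affine(img_xy, map_xy):
        # img_xy, map_xy: matching (n, 2) coordinate arrays, n >= 3
        A = np.hstack([img_xy, np.ones((len(img_xy), 1))])    # columns [x, y, 1]
        coeffs, *_ = np.linalg.lstsq(A, map_xy, rcond=None)   # 3x2 affine coefficients
        residuals = A @ coeffs - map_xy
        rms = np.sqrt((residuals ** 2).sum(axis=1).mean())    # overall RMS error
        return coeffs, rms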

– Image-to-image transform
Image-to-image transform is especially useful where geo-referencing is not necessary for the analysis of multiple images of the same area. Again, the same rule applies as above, namely spreading the GCPs as much as possible. For the reasons already mentioned, areas that are not covered by both images and that do not contain GCPs should not be considered as “matching” and should therefore not be used in comparison studies.

5 Principal Components vs Tasselled Cap

Original image (top), Principal Component Analysis (middle) and Tasselled Cap Transform (bottom)

In the true colour image, with bands 1, 2 and 3, there seems to be little difference between principal component analysis (PCA) and tasselled cap transform (TCT). Even though the colours differ, the same pattern can be discerned.

 

The resulting images from PCA and TCT are similar, but not directly comparable.

TCT defines three functions or axes a priori: “wetness”, “greenness” and “brightness”.

TCT is particularly useful in defining soils (wetness) and vegetation (greenness) from Landsat MSS or TM data.

In PCA, the axes are dependent on statistical relationships among the spectral bands of the particular image being analysed.

Thus, inter-image comparison is more difficult when using PCA compared to TCT, because the rotation of the principal axes is dependent on the image that is used.

PCA will identify the number of dimensions in a data set: If all bands were highly correlated, the data would practically speaking have only one dimension.

In PCA, the axis with the most variability (or the least correlation) is PC1 (principal component 1). Thus, this image contains most of the information that can be inferred from the image.
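A minimal PCA sketch for a hypothetical band stack of shape (bands, rows, cols): the principal components are the eigenvectors of the band covariance matrix, ordered by decreasing variance, so PC1 carries the most variance. This shows the standard technique in general, not ERDAS's specific implementation.

    import numpy as np

    def principal_components(stack):
        bands, rows, cols = stack.shape
        X = stack.reshape(bands, -1).astype(float).T          # (pixels, bands)
        X -= X.mean(axis=0)                                   # centre each band
        cov = np.cov(X, rowvar=False)                         # band covariance matrix
        eigvals, eigvecs = np.linalg.eigh(cov)
        order = np.argsort(eigvals)[::-1]                     # largest variance first
        pcs = (X @ eigvecs[:, order]).T.reshape(bands, rows, cols)
        return pcs, eigvals[order]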

Principal component 1

Axes with most variability in the data are located in the first principal component (PC1). Thus, this component contains most of the information that can be inferred from the data and shows many distinguishable features.

PC1 and PC2

PC2 reveals other, hidden information, e.g. a clear separation of water bodies from their surroundings, something that is not so obvious in PC1.

TCT band 5 and band 3

Band 5 in TCT emphasises wetness, so water bodies will stand out from the background. From the TCT coefficients in Mather (1999), green areas in band 3 will be dark, whereas wet areas will appear bright.

7 Image classification

Unsupervised classification

When performing an unsupervised classification it is necessary to find the right number of classes. Too many, and the image will not differ noticeably from the original; too few, and the classification will be too coarse.

 

Original image

Unsupervised classification, 10 classes

Unsupervised classification, 6 classes

The difference between 6 and 10 unsupervised classes is the merging of urban with residential areas, as well as of agricultural fields. The table below summarises the convergence for every iteration, depending on the number of classes; the convergence describes the proportion of pixels that stay in the same cluster from one iteration to the next (see the sketch after the table).

Iteration 6 classes 10 classes 16 classes
1 0.000 0.000 0.000
2 0.745 0.556 0.399
3 0.809 0.737 0.644
4 0.951 0.903 0.871
5 - 0.934 0.926
6 - 0.955 0.946
7 - - 0.955
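The iterative clustering behind unsupervised classification can be sketched as follows with plain k-means (a simplification of the ISODATA-style clustering used here): pixels are reassigned to the nearest cluster mean each iteration, and convergence is the fraction of pixels whose cluster did not change since the previous iteration. Variable names and the threshold are illustrative assumptions.

    import numpy as np

    def kmeans(pixels, n_classes, max_iter=10, threshold=0.95, seed=0):
        # pixels: (n, bands) array of multispectral values
        pixels = pixels.astype(float)
        rng = np.random.default_rng(seed)
        means = pixels[rng.choice(len(pixels), n_classes, replace=False)]
        labels = np.full(len(pixels), -1)
        for iteration in range(1, max_iter + 1):
            dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
            new_labels = dists.argmin(axis=1)
            # fraction of pixels that kept their cluster from the last iteration
            convergence = float((new_labels == labels).mean())
            print(f"iteration {iteration}: convergence {convergence:.3f}")
            labels = new_labels
            if convergence >= threshold:
                break
            for k in range(n_classes):
                if np.any(labels == k):
                    means[k] = pixels[labels == k].mean(axis=0)
        return labels, means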

The classes were determined by referring to the Ordnance Survey Landranger map of Leicester. Still, not all classes are consistent; e.g. water tends to appear in many places. Forest, which one would expect to stand out in summer, when this picture was taken, does not have its own class.

Class Type
1 Heavily urbanised, roads, water
2 Wetlands, water, roads, rail
3 Agriculture
4 Railways, water
5 Residential, minor roads
6 Grass, i.e. along railways or motorways
7 Agriculture, hedges between fields
8 Agriculture
9 Grassland, dry
10 Park area

Dry grassland was inferred from the area around the racecourse. Agricultural areas were named so because of their typical pattern. Parkland areas were consistent with park areas on the map.

Looking at the separability table, some classes are clearly separable, others not so clearly. Class 1 (city centre) and class 5 (residential) show the lowest average separability. The same goes for class 2 (roads) and class 4 (railways). What is a bit puzzling is the fact that small water bodies like rivers have no distinct class, as they seem to mingle with other classes, especially roads and railways. This is due to their low reflectance, which makes it difficult to distinguish between them.

Supervised classification

There are different approaches to supervised classification.

Supervised classification requires a priori knowledge of the number of classes, as well as knowledge concerning statistical aspects of the classes. All methods start with establishing training samples, which are areas that are assumed or verified to be of a particular type. The classification algorithm will then “sort” the pixels in the image accordingly.

Minimum distance

Maximum Likelihood

Maximum Likelihood/ Parallelepiped

The Minimum Distance algorithm allocates each cell by its minimum Euclidean distance to the respective centroid for that group of pixels, which is similar to Thiessen polygons. The Maximum Likelihood classifier applies the rule that the geometrical shape of a set of pixels belonging to a class can often be described by an ellipsoid; consequently, pixels are grouped according to their position in the influence zone of the class ellipsoid. The Parallelepiped method uses a rectangle whose boundaries are the lowest and highest pixel values for the class in each band.
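A minimal sketch of the minimum distance rule: each pixel is assigned to the class whose training-sample mean is closest in Euclidean distance. "pixels" is a hypothetical (n, bands) array and "class_means" a (classes, bands) array derived from the training samples.

    import numpy as np

    def minimum_distance(pixels, class_means):
        # Euclidean distance from every pixel to every class mean
        dists = np.linalg.norm(pixels[:, None, :].astype(float)
                               - class_means[None, :, :].astype(float), axis=2)
        return dists.argmin(axis=1)    # index of the nearest class mean per pixel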

The Maximum Likelihood classification above has identified more water bodies and delineated dense urban areas better than the Minimum Distance. Still, both methods display apparent errors. Choosing better training areas can improve this, but not necessarily so. The incorrect classification of water bodies in the Parallelepiped is due to the method’s inherent inaccuracy, as the minimum and maximum values in a class are seldom representative of that class.

Comparing the unsupervised and supervised classifications above is difficult, because their signature files do not show the same classes. In an image with high separability, unsupervised classification may be used, whereas low separability will need the aid of supervision.

Source: http://idrisigis.wordpress.com/