Digital image analysis is usually conducted using raster data structures, in which each image is treated as an array of values. This offers advantages for the manipulation of pixel values by an image processing system, as it is easy to find and locate pixels and their values. Disadvantages become apparent when one needs to represent the array of pixels as discrete patches or regions. Vector data structures, by contrast, use polygonal patches and their boundaries as the fundamental units for analysis and manipulation, but the vector format is not appropriate for digital analysis of remotely sensed data.
Image Resolution
Resolution can be defined as "the ability of an imaging system to record fine detail in a distinguishable manner". A working knowledge of resolution is essential for understanding both the practical and conceptual details of remote sensing. Along with the actual positioning of spectral bands, the resolutions of a sensor are of paramount importance in determining the suitability of remotely sensed data for a given application. The major characteristics of imaging remote sensing instruments operating in the visible and infrared spectral regions are described in the following terms:
Spectral Resolution refers to the width of the spectral bands. Different materials on the earth's surface exhibit different spectral reflectances and emissivities, and these spectral characteristics define the spectral positions and spectral sensitivities needed to distinguish materials. There is a tradeoff between spectral resolution and signal to noise. The use of well-chosen and sufficiently numerous spectral bands is therefore a necessity if different targets are to be successfully identified on remotely sensed images.
Radiometric Resolution or radiometric sensitivity refers to the number of digital levels used to express the data collected by the sensor. It is commonly expressed as the number of bits (binary digits) needed to store the maximum level. For example, Landsat TM data are quantised to 256 levels (equivalent to 8 bits). Here too there is a tradeoff between radiometric resolution and signal to noise: there is no point in having a step size smaller than the noise level in the data. A low-quality instrument with a high noise level would therefore necessarily have a lower radiometric resolution than a high-quality, high signal-to-noise-ratio instrument. Higher radiometric resolution may also conflict with data storage and transmission rates.
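As a minimal illustration of the relationship between bit depth and quantisation levels (the bit depths chosen here are arbitrary examples), the number of levels follows directly from the number of bits:

```python
# Number of digital levels available for a given radiometric resolution (bit depth).
for bits in (6, 8, 10):
    levels = 2 ** bits  # e.g. 8 bits -> 256 levels, as with Landsat TM
    print(f"{bits:>2} bits -> {levels} levels")
```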
Spatial Resolution of an imaging system is defined through various criteria: the geometric properties of the imaging system, the ability to distinguish between point targets, the ability to measure the periodicity of repetitive targets, and the ability to measure the spectral properties of small targets.
The most commonly quoted quantity is the instantaneous field of view (IFOV), which is the angle subtended by the geometrical projection of a single detector element onto the Earth's surface. It may also be given as the distance D measured along the ground, in which case it is clearly dependent on sensor height, from the relation D = hβ, where h is the height and β is the angular IFOV in radians. An alternative measure of the IFOV is based on the point spread function (PSF), e.g. the width of the PSF at half its maximum value.
A problem with IFOV definition, however, is that it is a purely geometric definition and does not take into account spectral properties of the target. The effective resolution element (ERE) has been defined as "the size of an area for which a single radiance value can be assigned with reasonable assurance that the response is within 5% of the value representing the actual relative radiance". Being based on actual image data, this quantity may be more useful in some situations than the IFOV.
Other methods of defining the spatial resolving power of a sensor are based on the ability of the device to distinguish between specified targets. One of these concerns the ratio of the modulation of the image to that of the real target. Modulation, M, is defined as:
M = (Emax - Emin) / (Emax + Emin)
Where Emax and Emin are the maximum and minimum radiance values recorded over the image.
Temporal resolution refers to the frequency with which images of a given geographic location can be acquired. Satellites not only offer the best chances of frequent data coverage but also of regular coverage. The temporal resolution is determined by orbital characteristics and swath width, the width of the imaged area. Swath width is given by 2h tan(FOV/2), where h is the altitude of the sensor and FOV is the angular field of view of the sensor.
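As an illustration of these relations, the short sketch below uses assumed values for the altitude, angular IFOV and field of view to evaluate the ground-projected IFOV D = hβ and the swath width 2h tan(FOV/2), together with the modulation formula given earlier:

```python
import math

h = 705e3               # assumed sensor altitude in metres (roughly Landsat-like)
beta = 42.5e-6          # assumed angular IFOV in radians
fov = math.radians(15)  # assumed total angular field of view

D = h * beta                        # ground-projected IFOV: D = h * beta
swath = 2 * h * math.tan(fov / 2)   # swath width: 2 h tan(FOV / 2)
print(f"Ground IFOV: {D:.1f} m, swath width: {swath / 1000:.1f} km")

# Modulation M = (Emax - Emin) / (Emax + Emin) for assumed radiance extremes.
e_max, e_min = 200.0, 120.0
print("Modulation:", (e_max - e_min) / (e_max + e_min))
```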
How to Improve Your Image?
Analysis of remotely sensed data is done using various image processing techniques and methods that include:
Elements of Image Interpretation

| Primary Elements | Black and White Tone, Color, Stereoscopic Parallax |
| Spatial Arrangement of Tone & Color | Size, Shape, Texture, Pattern |
| Based on Analysis of Primary Elements | Height, Shadow |
| Contextual Elements | Site, Association |
Digital Image Processing is a collection of techniques for the manipulation of digital images by computers. The raw data received from the imaging sensors on satellite platforms contain flaws and deficiencies. To overcome these and restore the data to a usable form, the data must undergo several processing steps. These steps vary from image to image, depending on the image format, the initial condition of the image, the information of interest and the composition of the image scene. Digital image processing involves three general steps: pre-processing, image enhancement and information extraction.
Image enhancement techniques often drastically alter the original numeric data; for this reason they are normally used only for visual (manual) interpretation and not for further numeric analysis. Common enhancements include image reduction, image rectification, image magnification, transect extraction, contrast adjustments, band ratioing, spatial filtering, Fourier transformations, principal component analysis and texture transformation.
Information Extraction is the last step toward the final output of the image analysis. After pre-processing and image enhancement, the remotely sensed data is subjected to quantitative analysis to assign individual pixels to specific classes. Classification uses pixels of known identity to classify the remainder of the image, which consists of pixels of unknown identity. After classification is complete, it is necessary to evaluate its accuracy by comparing the categories on the classified image with areas of known identity on the ground. The final result of the analysis consists of maps (or images), data and a report. Together these three components provide the user with full information concerning the source data, the method of analysis, the outcome and its reliability.
Pre-Processing of the Remotely Sensed Images
When remotely sensed data are received from the imaging sensors on satellite platforms, they contain flaws and deficiencies. Pre-processing refers to those operations that are preliminary to the main analysis. Pre-processing includes a wide range of operations, from the very simple to extremes of abstractness and complexity. These are categorised as follows:
The following methods outline the basis of the cosmetic operations used to remove such defects:
Line-Dropouts
A line dropout occurs when a string of adjacent pixels in a scan line contains spurious DN values. This can happen when a detector malfunctions permanently or temporarily, for instance when it is overloaded by a sudden high radiance, creating a line or partial line of data with meaningless DN values. Line dropouts are usually corrected either by replacing the defective line with a duplicate of the preceding or subsequent line, or by taking the average of the two. If the spurious pixel at sample x, line y has a value DNx,y, then the algorithms are simply:
DNx,y = DNx,y-1
DNx,y = (DNx,y-1 + DNx,y+1)/2
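A minimal numpy sketch of these two replacement rules, assuming the band is held as a two-dimensional array of DN values and the index of the defective line is known:

```python
import numpy as np

def fix_line_dropout(image, bad_line, average=True):
    """Replace a dropped scan line by a duplicate of the preceding line,
    or by the average of the preceding and following lines."""
    img = image.astype(float).copy()
    if average and 0 < bad_line < img.shape[0] - 1:
        img[bad_line] = (img[bad_line - 1] + img[bad_line + 1]) / 2.0
    else:
        img[bad_line] = img[bad_line - 1]  # duplicate the preceding line
    return img.astype(image.dtype)

# Example: line 2 of a small test image is a dropout (all zeros).
test = np.array([[10, 12, 11],
                 [11, 13, 12],
                 [ 0,  0,  0],
                 [12, 14, 13]], dtype=np.uint8)
print(fix_line_dropout(test, bad_line=2))
```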
De-Striping
Banding or striping occurs if one or more detectors go out of adjustment in a given band. The systematic horizontal banding pattern seen on images produced by electro-mechanical scanners such as Landsat's MSS and TM results in repeated patterns of lines with consistently high or low DN. Two reasons can thus be put forward in favour of applying a 'de-striping' correction:
The two different methods of de-striping are as follows:
The first method entails the construction of a histogram for each detector of the problem band, i.e. histograms generated from the lines recorded by each of the six detectors: these histograms are calculated for lines 1, 7, 13, ..., lines 2, 8, 14, ..., and so on. The mean and standard deviation are then calculated for each of the six histograms. Assuming the proportions of pixels representing different soils, water, vegetation, cloud, etc. are the same for each detector, the means and standard deviations of the six histograms should be the same. Striped detectors, however, are characterised by distinct histograms. De-striping then requires equalisation of the means and standard deviations of the six detectors by forcing them to equal selected values - usually the mean and standard deviation of the whole image.
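A sketch of this first method, assuming a six-detector scanner and a band held as a numpy array, in which each detector's lines are forced to the mean and standard deviation of the whole image by a gain/offset adjustment:

```python
import numpy as np

def destripe(band, n_detectors=6):
    """Equalise the mean and standard deviation of the lines recorded by each
    detector to those of the whole image."""
    img = band.astype(float)
    out = img.copy()
    target_mean, target_std = img.mean(), img.std()
    for d in range(n_detectors):
        lines = img[d::n_detectors]  # lines d, d + 6, d + 12, ...
        m, s = lines.mean(), lines.std()
        if s > 0:
            out[d::n_detectors] = (lines - m) * (target_std / s) + target_mean
    return out
```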
The process of histogram matching is also utilised before mosaicking image data of adjacent scenes (recorded at different times) so as to accommodate differences in illumination levels, angles, etc. A further application is resolution merging, in which a low spatial resolution image is sharpened by merging it with a high spatial resolution image.
The second method is non-linear, in the sense that the relationship between the radiance rin (received at the detector) and rout (output by the sensor) is not describable in terms of a single linear segment.
Random Noise
Odd pixels with spurious DN crop up frequently in images; if they are particularly distracting, they can be suppressed by spatial filtering. By definition, these defects can be identified by their marked difference in DN from adjacent pixels in the affected band. Noisy pixels can be replaced by substituting the average DN of the neighbourhood. Moving windows of 3 x 3 or 5 x 5 pixels are typically used in such procedures.
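A simple moving-window median filter of the kind described might be sketched as follows (plain numpy; for simplicity a narrow border of the image is left unfiltered):

```python
import numpy as np

def median_filter(band, size=3):
    """Replace each pixel by the median DN of its size x size neighbourhood."""
    pad = size // 2
    img = band.astype(float)
    out = img.copy()
    for i in range(pad, img.shape[0] - pad):
        for j in range(pad, img.shape[1] - pad):
            out[i, j] = np.median(img[i - pad:i + pad + 1, j - pad:j + pad + 1])
    return out
```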
Geometric Corrections
Raw digital images often contain serious geometric distortions that arise from earth curvature, platform motion, relief displacement and non-linearities in the scanning motion. The distortions involved are of two types:
Distortion Evaluated from Ground Control
Caused during the spacecraft scan of the ground.
Systematic Distortions
Geometric systematic distortions are those effects that are constant and can be predicted in advance. These are of two types:
Scan Skew
It is caused by the forward motion of the spacecraft during the time of each mirror sweep. In this case the ground swath scanned is not normal to the ground track (Fig. 8).
Radiative transfer theory is used to make quantitative calculations of the difference between the radiance received at the satellite and the radiance leaving the earth's surface.
Radiation travelling in a certain direction is specified by the angle φ between that direction and the vertical axis z, and a differential equation is set up for a small horizontal element of the transmitting medium (the atmosphere) with thickness dz. The resulting differential equation is called the radiative transfer equation. The equation differs for different wavelengths of electromagnetic radiation because of the different relative importance of the various physical processes at each wavelength.
Need for Atmospheric Correction
When an image is to be utilised, it is frequently necessary to make corrections in brightness and geometry so that the image can be interpreted accurately, and some applications require such corrections before the image can be evaluated. The various reasons for which correction should be done are:
The amount of atmospheric correction depends upon
Spectral Enhancement Techniques
Density Slicing
Density Slicing is the mapping of a range of contiguous grey levels of a single band image to a point in the RGB color cube. The DNs of a given band are "sliced" into distinct classes. For example, for band 4 of a TM 8 bit image, we might divide the 0-255 continuous range into discrete intervals of 0-63, 64-127, 128-191 and 192-255. These four classes are displayed as four different grey levels. This kind of density slicing is often used in displaying temperature maps.
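A minimal numpy sketch of density slicing using the four TM band 4 intervals given above (np.digitize assigns each DN to its slice):

```python
import numpy as np

def density_slice(band, boundaries=(64, 128, 192)):
    """Map contiguous ranges of DN to discrete class numbers:
    0-63 -> 0, 64-127 -> 1, 128-191 -> 2, 192-255 -> 3."""
    return np.digitize(band, bins=boundaries)

band4 = np.array([[5, 70, 130], [200, 63, 255]], dtype=np.uint8)
print(density_slice(band4))
```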
Contrast Stretching
The operating, or dynamic, ranges of remote sensors are designed with a variety of eventual data applications in mind. Landsat TM images, for example, may be used to study deserts, ice sheets, oceans, forests, etc., requiring relatively low-gain sensors to cope with the widely varying radiances upwelling from dark, bright, hot and cold targets. Consequently, for any particular area being imaged it is unlikely that the full dynamic range of the sensor, or the full radiometric range of a band, will be used, and the corresponding image is dull and lacking in contrast or overly bright. By remapping the DN distribution to the full display capabilities of an image processing system, however, we can recover a much more informative image. Contrast stretching can be divided into three categories:
Linear Contrast Stretch
This technique involves the translation of the image pixel values from the observed range, DNmin to DNmax, to the full range of the display device (generally 0-255, the range of values representable on an 8-bit display). It can be applied to a single-band, grey-scale image, where the image data are mapped to the display via all three colour LUTs.
It is not necessary to stretch between DNmin and DNmax - the inflection points for a linear contrast stretch may be taken from the 5th and 95th percentiles, or from ±2 standard deviations from the mean of the histogram, or chosen to cover the class of land cover of interest (e.g. water at the expense of land, or vice versa). It is also straightforward to have more than two inflection points in a linear stretch, yielding a piecewise linear stretch.
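A sketch of a linear stretch with numpy, taking the inflection points (as one of the options above) from the 5th and 95th percentiles of the histogram:

```python
import numpy as np

def linear_stretch(band, low_pct=5, high_pct=95):
    """Linearly remap the DN range between two percentiles onto the 0-255
    range of an 8-bit display; values outside the range are clipped."""
    lo, hi = np.percentile(band, [low_pct, high_pct])
    if hi <= lo:
        return band.astype(np.uint8)  # degenerate histogram, nothing to stretch
    stretched = (band.astype(float) - lo) / (hi - lo) * 255.0
    return np.clip(stretched, 0, 255).astype(np.uint8)
```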
Histogram Equalisation
The underlying principle of histogram equalisation is straightforward: it is assumed that each level in the displayed image should contain an approximately equal number of pixel values, so that the histogram of the displayed values is almost uniform (though not all 256 classes are necessarily occupied). The objective of histogram equalisation is to spread the range of pixel values present in the input image over the full range of the display device.
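A compact numpy sketch of histogram equalisation for an 8-bit single-band image, using the cumulative histogram as the mapping to the display range:

```python
import numpy as np

def equalise(band):
    """Histogram-equalise an 8-bit band: the cumulative distribution of the DN
    values is rescaled to 0-255 and used as a look-up table."""
    hist, _ = np.histogram(band.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min()) * 255.0
    return cdf[band].astype(np.uint8)
```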
Gaussian Stretch
This method of contrast enhancement is based upon the histogram of the pixel values. It is called a Gaussian stretch because it involves fitting the observed histogram to a normal or Gaussian histogram, defined as follows:
F(x) = (a/π)^0.5 exp(-ax^2)
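One way to sketch such a stretch is to match the cumulative histogram of the image to the cumulative distribution of a Gaussian target; the target mean and standard deviation below are assumptions, and this is only one possible implementation of the idea:

```python
import numpy as np

def gaussian_stretch(band, mean=127.5, std=40.0):
    """Remap the DN of an 8-bit band so that the output histogram
    approximates a Gaussian with the given mean and standard deviation."""
    hist, _ = np.histogram(band.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum() / hist.sum()              # empirical CDF of the image
    levels = np.arange(256)
    target = np.exp(-0.5 * ((levels - mean) / std) ** 2)
    target_cdf = target.cumsum() / target.sum()   # CDF of the Gaussian target
    mapping = np.interp(cdf, target_cdf, levels)  # match the two CDFs
    return mapping[band].astype(np.uint8)
```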
Multi-Spectral Enhancement Techniques
Image Arithmetic Operations
The operations of addition, subtraction, multiplication and division are performed on two or more co-registered images of the same geographical area. These techniques are applied to images from separate spectral bands of a single multispectral data set, or to individual bands from image data sets collected at different dates. More complicated algebra is sometimes encountered in the derivation of sea-surface temperature from multispectral thermal infrared data (the so-called split-window and multichannel techniques).
Addition of images is generally carried out to produce an output image with a dynamic range equal to that of the input images.
Band Subtraction is sometimes carried out on co-registered scenes of the same area acquired at different times, for change detection.
Multiplication of images normally involves the use of a single 'real' image and a binary image made up of ones and zeros.
Band Ratioing, or division of images, is probably the most common arithmetic operation and is widely applied in geological, ecological and agricultural applications of remote sensing. Ratio images are enhancements resulting from the division of the DN values of one spectral band by the corresponding DN values of another band. One motivation for this is to iron out differences in scene illumination due to cloud or topographic shadow. Ratio images also bring out spectral variation between different target materials. Multiple ratio images can be used to drive the red, green and blue monitor guns for colour images. Interpretation of ratio images must take into account that they are "intensity blind", i.e., dissimilar materials with different absolute reflectances but similar relative reflectances in the two or more bands used will look the same in the output image.
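A minimal sketch of a band ratio with numpy (a small constant is added to the denominator to avoid division by zero; the near-infrared/red pair is just an illustrative choice):

```python
import numpy as np

def band_ratio(band_a, band_b, eps=1e-6):
    """Divide the DN values of one band by the corresponding DN of another band."""
    return band_a.astype(float) / (band_b.astype(float) + eps)

# Example: a near-infrared / red ratio, often used to highlight vegetation.
nir = np.array([[120, 180], [60, 200]], dtype=np.uint8)
red = np.array([[ 60,  60], [50,  40]], dtype=np.uint8)
print(band_ratio(nir, red))
```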
Principal Component Analysis
Spectrally adjacent bands in a multispectral remotely sensed image are often highly correlated. Multiband visible/near-infrared images of vegetated areas show negative correlations between the near-infrared and visible red bands and positive correlations among the visible bands, because the spectral characteristics of vegetation are such that as the vigour or greenness of the vegetation increases, the red reflectance diminishes and the near-infrared reflectance increases. The presence of correlations among the bands of a multispectral image thus implies that there is redundancy in the data, and Principal Component Analysis aims at removing this redundancy.
Principal Components Analysis (PCA) is related to another statistical technique called factor analysis and can be used to transform a set of image bands such that the new bands (called principal components) are uncorrelated with one another and are ordered in terms of the amount of image variation they explain. The components are thus a statistical abstraction of the variability inherent in the original band set.
To transform the original data onto the new principal component axes, transformation coefficients (eigenvalues and eigenvectors) are obtained and applied in a linear fashion to the original pixel values. This linear transformation is derived from the covariance matrix of the original data set. The transformation coefficients describe the lengths and directions of the principal axes. Such transformations are generally applied either as an enhancement operation or prior to classification of the data. In the context of PCA, information means variance, or scatter about the mean. Multispectral data generally have an effective dimensionality that is less than the number of spectral bands. The purpose of PCA is to define this dimensionality and to fix the coefficients that specify the set of axes which point in the directions of greatest variability. The bands of a PCA are often more interpretable than the source data.
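A sketch of this transformation with numpy, assuming the multispectral image is held as an array of shape (bands, rows, cols); the eigenvectors of the band covariance matrix define the principal component axes:

```python
import numpy as np

def principal_components(image):
    """Project a (bands, rows, cols) image onto its principal component axes,
    ordered by decreasing explained variance (eigenvalue)."""
    bands, rows, cols = image.shape
    data = image.reshape(bands, -1).astype(float)  # one row of pixel values per band
    data -= data.mean(axis=1, keepdims=True)       # centre each band
    cov = np.cov(data)                             # band-by-band covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]              # largest variance first
    pcs = eigvecs[:, order].T @ data               # linear transform of the pixel vectors
    return pcs.reshape(bands, rows, cols), eigvals[order]
```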
Decorrelation Stretch
Principal Components can be stretched and transformed back into RGB colours - a process known as decorrelation stretching.
If the data are transformed into principal component space and stretched within this space, then the three bands making up the RGB colour composite are stretched along axes that are at right angles to each other. In RGB space the three colour components are likely to be correlated, so the effects of stretching are not independent for each colour. The result of a decorrelation stretch is generally an improvement in the range of intensities and saturations for each colour, with the hue remaining unaltered. A decorrelation stretch, like principal component analysis, can be based on the covariance matrix or the correlation matrix. The result of the decorrelation stretch is also a function of the nature of the image to which it is applied. The method seems to work best on images of semi-arid areas, and least well where the area covered by the image includes both land and sea.
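A compact sketch of the idea: the three bands are transformed to principal components, each component is rescaled to a common standard deviation, and the result is transformed back to the original band space (the target standard deviation is an assumption):

```python
import numpy as np

def decorrelation_stretch(rgb, target_std=50.0):
    """Stretch a (3, rows, cols) image in principal component space and
    transform the stretched components back to the original bands."""
    bands, rows, cols = rgb.shape
    data = rgb.reshape(bands, -1).astype(float)
    mean = data.mean(axis=1, keepdims=True)
    eigvals, eigvecs = np.linalg.eigh(np.cov(data - mean))
    pcs = eigvecs.T @ (data - mean)                # forward transform to PC space
    pcs *= target_std / np.sqrt(np.maximum(eigvals[:, None], 1e-12))  # equalise spread
    out = eigvecs @ pcs + mean                     # back-transform to band space
    return np.clip(out, 0, 255).reshape(bands, rows, cols).astype(np.uint8)
```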
Canonical Components
PCA is appropriate when little prior information about the scene is available. Canonical component analysis, also referred to as multiple discriminant analysis, may be appropriate when information about particular features of interest is available. Canonical component axes are located to maximize the separability of different user-defined feature types.
Hue, Saturation and Intensity (HIS) Transform
Hues generated by mixing red, green and blue light are characterised by coordinates on the red, green and blue axes of the colour cube. In the hue-saturation-intensity hexcone model, hue is the dominant wavelength of the perceived colour, represented by angular position around the top of a hexcone; saturation, or purity, is given by the distance from the central, vertical axis of the hexcone; and intensity, or value, is represented by the distance above the apex of the hexcone. Hue is what we perceive as colour. Saturation is the degree of purity of the colour and may be considered to be the amount of white mixed in with the colour. It is sometimes useful to convert from RGB colour cube coordinates to HIS hexcone coordinates and vice versa.
The hue, saturation and intensity transform is useful in two ways: first as a method of image enhancement, and secondly as a means of combining co-registered images from different sources. The advantage of the HIS system is that it is a more precise representation of human colour vision than the RGB system. This transformation has been quite useful for geological applications.
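As a sketch of the conversion for a single pixel, the function below uses one common set of RGB-to-HSI formulas (intensity as the mean of R, G and B, and the arccos form for hue); other formulations of the hexcone model differ in detail:

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert one RGB pixel (components in 0-1) to hue (degrees),
    saturation and intensity."""
    intensity = (r + g + b) / 3.0
    saturation = 0.0 if intensity == 0 else 1.0 - min(r, g, b) / intensity
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    hue = 0.0 if den == 0 else math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    if b > g:               # hue lies in the 180-360 degree half of the hexcone
        hue = 360.0 - hue
    return hue, saturation, intensity

print(rgb_to_hsi(0.8, 0.4, 0.2))  # an orange-ish pixel
```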
Fourier Transformation
The Fourier Transform operates on a single-band image. Its purpose is to break down the image into its scale components, which are defined to be sinusoidal waves with varying amplitudes, frequencies and directions. The coordinates of the two-dimensional space are expressed in terms of frequency (cycles per basic interval). The function of the Fourier Transform is to convert a single-band image from its spatial-domain representation to the equivalent frequency-domain representation, and vice versa.
The idea underlying the Fourier Transform is that the grey-scale values forming a single-band image can be viewed as a three-dimensional intensity surface, with the rows and columns defining two axes and the grey-level value at each pixel giving the third (z) dimension. The Fourier Transform thus provides details of the frequencies, amplitudes and directions of the scale components that make up this surface.
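A minimal numpy sketch of the forward and inverse transforms of a single-band image (np.fft.fftshift simply moves the low frequencies to the centre of the spectrum for display):

```python
import numpy as np

def amplitude_spectrum(band):
    """Return the shifted amplitude spectrum of a single-band image,
    with the low frequencies at the centre."""
    return np.abs(np.fft.fftshift(np.fft.fft2(band.astype(float))))

def to_spatial_domain(spectrum_shifted):
    """Reconstruct the spatial-domain image from a shifted (complex) spectrum."""
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum_shifted)))
```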
Spatial Processing
Spatial Filtering
Spatial Filtering can be described as selectively emphasizing or suppressing information at different spatial scales over an image. Filtering techniques can be implemented through the Fourier transform in the frequency domain or in the spatial domain by convolution.
Convolution Filters
One family of filtering methods is based upon the transformation of the image into its scale or spatial frequency components using the Fourier transform. Spatial-domain filters, or convolution filters, are generally classed as either high-pass (sharpening) or low-pass (smoothing) filters.
Low-Pass (Smoothing) Filters
Low-pass filters reveal the underlying two-dimensional waveform with a long wavelength or low frequency, i.e. the broad image contrast, at the expense of the higher spatial frequencies. Low-frequency information allows the identification of the background pattern and produces an output image in which the detail has been smoothed or removed from the original.
A two-dimensional moving-average filter is defined in terms of its dimensions, which must be odd, positive and integral but not necessarily equal, and its coefficients. The output DN is found by summing the products of the corresponding convolution kernel and image elements, often divided by the number of kernel elements.
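A sketch of such a filter written as an explicit convolution with a kernel of coefficients (for the symmetric kernels used here the cross-correlation form below is identical to a true convolution):

```python
import numpy as np

def convolve(band, kernel):
    """Apply an odd-sized kernel of coefficients to an image: the output DN is
    the sum of products of kernel and image elements (edges left unchanged)."""
    kr, kc = kernel.shape[0] // 2, kernel.shape[1] // 2
    img = band.astype(float)
    out = img.copy()
    for i in range(kr, img.shape[0] - kr):
        for j in range(kc, img.shape[1] - kc):
            out[i, j] = np.sum(img[i - kr:i + kr + 1, j - kc:j + kc + 1] * kernel)
    return out

# 3 x 3 moving-average kernel: equal coefficients that sum to one.
box_kernel = np.ones((3, 3)) / 9.0
```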
A similar smoothing effect is given by a median filter, for which the moving window takes the place of a convolution kernel of PSF weights. Choosing the median value from the moving window does a better job of suppressing noise while preserving edges than the mean filter.
Adaptive filters have kernel coefficients calculated for each window position based on the mean and variance of the original DN in the underlying image.
High-Pass (Sharpening) Filters
Simply subtracting the low-frequency image resulting from a low-pass filter from the original image can enhance high spatial frequencies. High-frequency information allows us either to isolate or to amplify the local detail. If the high-frequency detail is amplified by adding back to the image some multiple of the high-frequency component extracted by the filter, then the result is a sharper, de-blurred image.
High-pass convolution filters can be designed by representing a PSF with a positive centre weight and negative surrounding weights. A typical 3x3 Laplacian filter has a kernel with a high central value, 0 at each corner, and -1 at the centre of each edge. Such filters can be biased in certain directions for the enhancement of edges.
High-pass filtering can also be performed simply on the basis of the mathematical concept of derivatives, i.e., gradients in DN throughout the image. Since images are not continuous functions, calculus is dispensed with and derivatives are instead estimated from the differences in DN of adjacent pixels in the x, y or diagonal directions. Directional first differencing aims at emphasising edges in the image.
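Sketches of both approaches with numpy: a Laplacian high-pass component with the kernel weights described above (added back to the image with an assumed weight), and a simple first difference in the x direction:

```python
import numpy as np

def laplacian_high_pass(band):
    """High-frequency component from a 3x3 Laplacian kernel
    (+4 at the centre, -1 at each edge centre, 0 at the corners)."""
    img = band.astype(float)
    out = np.zeros_like(img)
    out[1:-1, 1:-1] = (4 * img[1:-1, 1:-1]
                       - img[:-2, 1:-1] - img[2:, 1:-1]
                       - img[1:-1, :-2] - img[1:-1, 2:])
    return out

def sharpen(band, weight=1.0):
    """Add a multiple of the high-frequency component back to the original image."""
    return np.clip(band.astype(float) + weight * laplacian_high_pass(band), 0, 255)

def first_difference_x(band):
    """Directional first differencing: DN differences between adjacent pixels in x."""
    img = band.astype(float)
    return img[:, 1:] - img[:, :-1]
```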
Frequency Domain Filters
The Fourier transform of an image, as expressed by its amplitude spectrum, is a breakdown of the image into its frequency or scale components. Filtering of these components uses frequency-domain filters, which operate on the amplitude spectrum of an image and remove, attenuate or amplify the amplitudes in specified wavebands. The frequency domain can be represented as a two-dimensional scatter plot known as a Fourier spectrum, in which lower frequencies fall at the centre and progressively higher frequencies are plotted outward.
Filtering in the frequency domain consists of three steps: forward transformation of the image into its amplitude spectrum, modification of selected amplitudes by the filter, and inverse transformation back into the spatial domain.
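A sketch of these three steps for a simple low-pass filter in the frequency domain (the cut-off radius is an arbitrary assumption):

```python
import numpy as np

def frequency_low_pass(band, cutoff=30):
    """Low-pass filter a single-band image in the frequency domain."""
    rows, cols = band.shape
    spectrum = np.fft.fftshift(np.fft.fft2(band.astype(float)))  # step 1: forward transform
    y, x = np.ogrid[:rows, :cols]
    dist = np.sqrt((y - rows / 2) ** 2 + (x - cols / 2) ** 2)
    spectrum[dist > cutoff] = 0                                  # step 2: remove high frequencies
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))     # step 3: inverse transform
```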
Fundamentally, spectral classification forms the basis for objectively mapping areas of the image that have similar spectral reflectance/emissivity characteristics. Depending on the type of information required, spectral classes may be associated with identified features in the image (supervised classification) or may be chosen statistically (unsupervised classification). Classification has also been seen as a means of compressing image data by reducing the large range of DN in several spectral bands to a few classes in a single image. Classification reduces this large spectral space into relatively few regions and obviously results in a loss of numerical information from the original image. There is no theoretical limit to the dimensionality used for the classification, though obviously the more bands involved, the more computationally intensive the process becomes. It is often wise to remove redundant bands before classification.
Classification generally comprises four steps:
Unsupervised Classification
This system of classification does not utilise training data as the basis of classification. The classifier involves algorithms that examine the unknown pixels in the image and aggregate them into a number of classes based on the natural groupings or clusters present in the image. The classes that result from this type of classification are spectral classes. Unsupervised classification is the identification, labelling and mapping of these natural classes. This method is usually used when little information about the data is available before classification.
There are several mathematical strategies to represent the clusters of data in spectral space.
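One of the simplest such strategies is k-means clustering. The sketch below (plain numpy, with an assumed number of classes and iterations) aggregates pixel vectors into spectral classes by repeatedly assigning each pixel to the nearest class mean and updating the means:

```python
import numpy as np

def kmeans_classify(image, n_classes=5, n_iter=20, seed=0):
    """Unsupervised classification of a (bands, rows, cols) image by k-means."""
    bands, rows, cols = image.shape
    pixels = image.reshape(bands, -1).T.astype(float)  # one row per pixel vector
    rng = np.random.default_rng(seed)
    means = pixels[rng.choice(len(pixels), n_classes, replace=False)]
    for _ in range(n_iter):
        # Distance of every pixel vector to every class mean in spectral space.
        dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for k in range(n_classes):
            if np.any(labels == k):
                means[k] = pixels[labels == k].mean(axis=0)
    return labels.reshape(rows, cols)
```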
Source: http://www.gisdevelopment.net/