Image Classification

Now, at last, we reach the finale of this Tutorial section, in which we demonstrate two of the common methods for identifying and classifying features in images: Unsupervised and Supervised Classification. Closely related to classification is the approach called Pattern Recognition. You may wish to read, at the outset, the helpful Internet site that reviews classification procedures, included in the Remote Sensing Core Curriculum online tutorial cited before.

Before starting, it is well to review several basic principles, considered earlier in the Introduction, with the aid of this diagram:

In the upper left are plotted spectral signatures for three general classes: Vegetation, Soil, and Water. The relative spectral responses (reflectances in this spectral interval), expressed in some intensity unit, e.g., reflected energy (the ratio of reflected to incident radiation) in appropriate units or as a percent (the ratio times 100), have been sampled at three wavelengths. (The response values are normally converted [either at the time of acquisition on the ground or onboard aircraft or spacecraft] to a digital format - the DNs, or Digital Numbers, cited before, commonly subdivided into units from 0 to 255 [2^8].)

For this specific signature set, the values at any two of these wavelengths are plotted on the upper right. It is evident that there is considerable separation of the resulting value points in this two-dimensional diagram. In reality, when each class is considered in terms of geographic distribution and/or specific individual types (such as soybeans versus wheat in the Vegetation category), as well as other factors, there will usually be some variation in a particular set of its characteristic DNs at one or both of the chosen wavelengths being sampled. The result is a spread of points in the two-dimensional diagram (known as a scatter diagram), as seen in the lower left. For any two classes this scattering of value points may or may not overlap; in the case shown, which treats three types of vegetation (crops), they don't. The collection of plotted values (points) associated with each class is known as a cluster. It is possible, using statistics that calculate means, standard deviations, and certain probability functions, to draw boundaries between clusters, such that every point plotted in the spectral response space on each side of a boundary will automatically belong to the class or type within that space. This is shown in the lower right diagram, along with a single point "w", an unknown object or pixel (at some specific location) whose identity is being sought. In this example, "w" plots just inside the soybean space.

Thus, the principle of classification (by computer image processing) boils down to this: any individual pixel, or spatially grouped set of pixels, representing some feature, class, or material is characterized by a (generally small) range of DNs for each band monitored by the remote sensor. The DN values (determined by the radiance averaged over each spectral interval) are treated as clustered sets of data in 2-, 3-, or higher-dimensional plotting space. These are analyzed statistically to determine their degree of uniqueness in this spectral response space, and some mathematical function(s) is/are chosen to discriminate the resulting clusters.
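
The "w" example above can be sketched in a few lines of code. This is a minimal illustration using hypothetical cluster means and DN values (not from any real sensor): each cluster is reduced to its mean DN vector, and the unknown pixel is assigned to the nearest mean in the two-band spectral response space.

```python
import math

# Hypothetical mean DNs (band 1, band 2) for three crop clusters --
# illustrative values only, not taken from any real scene.
cluster_means = {
    "soybeans": (45.0, 110.0),
    "wheat":    (60.0,  95.0),
    "corn":     (50.0, 130.0),
}

def classify_pixel(dn, means):
    """Assign a pixel's DN vector to the cluster with the nearest mean
    (Euclidean distance in spectral response space)."""
    return min(means, key=lambda c: math.dist(dn, means[c]))

# The unknown point "w" from the diagram: it plots nearest the soybean cluster.
w = (47.0, 112.0)
print(classify_pixel(w, cluster_means))  # soybeans
```

A real classifier would use the clusters' standard deviations and probability functions to place the boundaries, not just the means; the minimum-distance rule is the simplest member of that family.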

Two methods of classification are commonly used: Unsupervised and Supervised. The logic or steps involved can be grasped from these flow diagrams:

In unsupervised classification, each individual pixel is compared to each discrete cluster to see which one it is closest to. A map of all pixels in the image, each classified according to the cluster it most likely belongs to, is produced (in black and white or, more commonly, in colors assigned to each cluster). This map must then be interpreted by the user as to what the color patterns may mean in terms of the classes, etc., actually present in the real-world scene; this requires some knowledge of the scene's feature/class/material content from general experience or personal familiarity with the area imaged.
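
The clustering step itself can be sketched with a toy k-means loop. This is not the algorithm any particular package uses, just a minimal pure-Python illustration on made-up two-band DNs: pixels are repeatedly assigned to the nearest cluster mean, and each mean is then recomputed from its members.

```python
import math
import random

def kmeans(pixels, k, iters=20, seed=0):
    """Tiny k-means sketch: pixels are DN tuples (one value per band).
    Returns (means, labels) after a fixed number of iterations."""
    rng = random.Random(seed)
    means = rng.sample(pixels, k)  # initial cluster means
    for _ in range(iters):
        # Assignment step: each pixel joins the nearest mean.
        labels = [min(range(k), key=lambda j: math.dist(p, means[j]))
                  for p in pixels]
        # Update step: recompute each mean from its members.
        for j in range(k):
            members = [p for p, lab in zip(pixels, labels) if lab == j]
            if members:
                means[j] = tuple(sum(band) / len(members)
                                 for band in zip(*members))
    return means, labels

# Two obvious groupings in a toy two-band "scene" (hypothetical DNs).
pixels = [(20, 30), (22, 28), (19, 33), (200, 180), (205, 175), (198, 182)]
means, labels = kmeans(pixels, k=2)
# The three low-DN pixels share one label; the three high-DN pixels the other.
```

No class names appear anywhere: the output is just cluster labels, which is exactly why the analyst must interpret the resulting map afterwards.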

In a supervised classification the interpreter knows beforehand what classes are present and where each occurs in one to perhaps many locations within the scene. These are located on the image, areas containing examples of each class are circumscribed (making them training sites), and a statistical analysis is performed on the multiband data for each such class. Instead of clusters, one then has class groupings with appropriate discriminant functions that distinguish each. (It is possible that more than one class will have similar spectral values, but that is unlikely when more than 3 bands are used, because different classes/materials seldom have similar responses over a wide range of wavelengths.) All pixels in the image lying outside the training sites are then compared with the class discriminants derived from the training sites, each being assigned to the class it is closest to. This produces a map of the established classes (with a few pixels usually remaining unknown), which can be reasonably accurate, although some classes present may not have been set up, and some pixels may be misclassified.
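
These steps can be sketched as a minimum-distance-to-means classifier. Everything here is illustrative: the class names, the training-site DNs, and the distance threshold are all invented for the example, and a real system would typically use class covariances as well (e.g., a maximum-likelihood discriminant) rather than means alone.

```python
import math

# Hypothetical training-site DNs (two bands) for three known classes.
training = {
    "water":      [(15, 8), (17, 9), (14, 7)],
    "vegetation": [(40, 90), (43, 95), (38, 88)],
    "soil":       [(80, 60), (85, 63), (78, 58)],
}

# "Training" here is just computing a mean DN vector per class.
class_means = {
    name: tuple(sum(band) / len(samples) for band in zip(*samples))
    for name, samples in training.items()
}

def classify(pixel, means, threshold=50.0):
    """Minimum-distance rule: assign the nearest class mean, or
    'unknown' if the pixel is far from every class."""
    best = min(means, key=lambda c: math.dist(pixel, means[c]))
    return best if math.dist(pixel, means[best]) <= threshold else "unknown"

print(classify((16, 8), class_means))     # water
print(classify((41, 91), class_means))    # vegetation
print(classify((200, 200), class_means))  # unknown (far from all classes)
```

The threshold is what leaves "a few pixels remaining unknown" on the output map, as described above.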

Both modes of classification will be considered in more detail and examples given here and on the next 4 pages.

#### Unsupervised Classification

In an unsupervised classification, the objective is to group multiband spectral response patterns into clusters that are statistically separable. Thus, a small range of digital numbers (DNs) for, say, 3 bands can establish one cluster that is set apart from a specified range combination for another cluster (and so forth). Separation will depend on the parameters we choose to differentiate. We can visualize this process with the aid of this diagram, taken from Sabins, "Remote Sensing: Principles and Interpretation," 2nd Edition, for four classes: A = Agriculture; D = Desert; M = Mountains; W = Water.

From F.F. Sabins, Jr., "Remote Sensing: Principles and Interpretation." 2nd Ed., © 1987. Reproduced by permission of W.H. Freeman & Co., New York City.

We can modify these clusters, so that their total number can vary arbitrarily when four or more bands are involved. (This multidimensional situation is, of course, not readily displayed in a 3-D plot.) When we do the separations through a computer program, each pixel in an image is assigned to one of the clusters as being most similar to it in DN combination value. Generally, within an area of the image, multiple pixels in the same cluster correspond to some (initially unknown) ground feature or class, so that patterns of gray levels result in a new image depicting the spatial distribution of the clusters. These levels can then be assigned colors to produce a cluster map. The trick then becomes one of trying to relate the different clusters to meaningful ground categories. We do this either by being adequately familiar with the major classes expected in the scene or, where feasible, by visiting the scene (ground truthing) and visually correlating map patterns to their ground counterparts. Since the classes are not selected beforehand, the method is called Unsupervised Classification.
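
Turning gray-level cluster labels into a cluster map is a simple lookup. In this sketch the label grid, the palette, and the tentative class interpretations are all hypothetical; the colors carry no meaning until the analyst ties each cluster to a ground category.

```python
# A tiny 3x4 "image" of cluster labels (0-3), as an unsupervised
# classifier might output; values are illustrative only.
labels = [
    [0, 0, 1, 1],
    [0, 2, 2, 1],
    [3, 3, 2, 1],
]

# Arbitrary display colors (R, G, B) per cluster; the comments are the
# analyst's later guesses, not anything the algorithm knows.
palette = {
    0: (0, 0, 255),      # water?
    1: (0, 160, 0),      # vegetation?
    2: (180, 140, 90),   # soil?
    3: (128, 128, 128),  # shadow / unknown?
}

cluster_map = [[palette[v] for v in row] for row in labels]
```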

The IDRISI image processing program employs a simplified approach to Unsupervised Classification. Input data consist of the DN values of the registered pixels for the 3 bands used to make any of the color composites. Algorithms calculate the cluster values from these bands. The program automatically determines the maximum number of clusters based on the parameters selected in the processing. This typically produces so many clusters that the resulting classified image becomes too cluttered and, thus, more difficult to interpret in terms of assigned classes. To improve interpretability, we first tested a simplified output and thereafter limited the number of classes displayed to 15 (reduced from 28 in the final cluster tabulation).
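
One plausible way to limit the displayed classes (not necessarily how IDRISI does it internally) is to keep only the most populous clusters and lump the rest into a single "other" class. The label counts below are invented for illustration.

```python
from collections import Counter

# Hypothetical flat list of per-pixel cluster labels: a few large clusters
# and several tiny ones that would clutter the display.
labels = [0] * 50 + [1] * 30 + [2] * 8 + [3] * 5 + [4] * 2 + [5] * 1

def limit_clusters(labels, keep, other=-1):
    """Keep only the `keep` most populous clusters; relabel the rest as a
    single 'other' class so the cluster map stays interpretable."""
    top = {c for c, _ in Counter(labels).most_common(keep)}
    return [c if c in top else other for c in labels]

display = limit_clusters(labels, keep=3)
# Clusters 0, 1, and 2 survive; the small clusters 3-5 collapse into -1.
```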

The first Unsupervised Classification operates on the color composite made from Bands 2, 3, and 4. Examine the resulting image when just 6 clusters are specified.

The light buff colors associate with the marine waters but are also found in the mountains where shadows are evident in the individual band and color composite images. Red occurs where there is some heavy vegetation. Dark olive is found almost exclusively in the ocean against the beach. The orange, green, and blue colors have less discrete associations.

We next display a more sophisticated version, again using Bands 2, 3, and 4, in which 15 clusters are set up; a different color scheme is chosen.

Try to make some sense of the color patterns as indicators of the ground classes you know from previous paragraphs. A conclusion you may reach is that some of the patterns do well in singling out certain features in parts of the Morro Bay subscene. But many individual areas represented by clusters do not appear to correlate well with what you thought was there. What is happening, unfortunately, is a rather artificial subdivision of spectral responses from small segments of the surface. In some instances we see simply the effect of slight variations in surface orientation that change the reflectances, or perhaps the influence of what the Overview called "mixed pixels". When we try another combination, Bands 4, 7, and 1, the new classification has most of the same problems as the first, although sediment variation in the ocean is better discriminated. One reason neither 15-cluster classification grabs one's attention is that the colors automatically assigned to the clusters are not as distinctly different as might be optimum; some are similar shades.

The subject of "mixed pixels" is intimately embedded in any discussion of classification methodology, and particularly of the accuracy of a classified map. For reasons of context, discussion of the mixed pixel concept is deferred until page 13-2; after reading it, you should understand its relevance to accuracy assessment (that page also expands upon the idea behind training sites).

There is another aspect of unsupervised classification that points to its limited value. Arbitrarily changing classification parameters can lead to images that don't closely resemble each other and that are hard to judge as to whether their patterns validly represent actual ground classes or features. They are thus subjective! If 20 or 10 spectral clusters had been selected for the above examples instead of 15, the color display would have looked quite different. Part of this may be due to color assignments, but the patterns also vary considerably in size and shape. Some features, such as the ocean, remain constant, but others, such as those associated with the hills, can change notably. Again, if 6 reflected-light bands were used instead of three, still different patterns would emerge. The question of how many bands and how many clusters to assign to the classification is not readily answered. One way to evaluate unsupervised classifications, though often impractical to execute, is to visit the scene with several different classifications in hand and judge which seems to best describe the observables.

The writer's personal bias is that Unsupervised Classification is too much of a generalization and that the clusters only roughly match some of the actual classes. Its value is mainly as a guide to the spectral content of a scene to aid in making a preliminary interpretation prior to conducting the much more powerful supervised classification procedures.

Source: http://rst.gsfc.nasa.gov