
Updating Geographical Information Using High-Resolution Remote Sensing Data

The 1:25000 vector maps of the Enschede municipality, the Netherlands, were revised using IKONOS remote sensing data while being reviewed in a GIS environment. The analysis showed that high-resolution data offer significant advantages over conventional techniques of field survey, since vector data can be updated and databases maintained with considerably less expenditure of time and effort.
Research Objectives:


  • Use of various image processing techniques (image enhancement, image synthesis, etc.) to distinguish different categories of objects and to increase image interpretability.
  • Integration of remote sensing data with GIS data covering the same area, with joint visualization to detect changes.
  • Correction of the source data to account for the changes detected.

PREPROCESSING

The first stage of processing is generally called preprocessing, since it precedes all other stages of image treatment. The amount of preprocessing varies with the type of camera and the quality of the digital data. The kinds of preprocessing are system correction, radiometric correction, and geometric correction. This research dealt with neither sensor correction nor the correction of errors induced by spacecraft deviation and the Earth's rotation, since all these corrections had already been performed by the image supplier.
It is generally known that objects in different image segments have different spectral characteristics owing to dissimilar imaging conditions (spacecraft viewing angle during imaging, Sun position, weather conditions). All these parameters are taken into account when correcting all bands of a multispectral image. The following equation shows the mathematical relationship between the different parts of the radiometric correction:

O = (T^S + T·D·I)·R^I + H·I

where O is the reflectance detected by the sensor; S is the solar angle; D is light; I is the sensor indicator; T is irradiance; R is the reflection from the object/surface; and H is cloudiness.

Correction of Semitransparent Cloudiness

From the above equation it follows that cloud conditions have an additive effect, whereas the solar angle has a multiplicative one. This is why an image has lower digital numbers (pixel values) than it should. The correction approximates the cloudiness value, and the missing values are restored for each pixel of the image. The lower the cloud transparency in a particular image, the more correction is required. The equation for semitransparent cloudiness correction is as follows:

DN = Vw + Vh

where the observed value DN is the sum of the presumed lower value under cloud-free conditions (Vw) and the distortion caused by cloudiness (Vh). Hence the cloudiness coefficient equals the pixel's original value DN minus the estimated cloud-free lower value Vw (Eq. 2).
In this case the cloudiness correction value for spectral band 1 is 18; it is derived from Eq. 2 by estimating the cloud-free DN value (here equal to 2) under the assumption of no atmosphere. The correction value was subtracted from every pixel of spectral band 1, and the same was done for spectral band 2. The lowest values in spectral bands 3 and 4 were close to zero and therefore did not require any correction.
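A minimal sketch of this dark-object subtraction, assuming a simple per-band additive haze model (the function name and the toy array are illustrative, not from the article):

```python
import numpy as np

def dark_object_subtraction(band, expected_dark_value=0):
    """Estimate haze as the gap between the darkest observed pixel and
    the value a dark object should have under a clear atmosphere, then
    subtract that offset from every pixel (clipping at zero)."""
    haze = int(band.min()) - expected_dark_value
    if haze <= 0:
        return band.copy()  # e.g. bands 3 and 4 above needed no correction
    return np.clip(band.astype(np.int64) - haze, 0, None).astype(band.dtype)

# Toy band whose darkest pixel reads 20 while a clear-sky dark object
# should read 2, giving the haze offset of 18 quoted for band 1.
band1 = np.array([[20, 50], [120, 300]], dtype=np.uint16)
corrected = dark_object_subtraction(band1, expected_dark_value=2)
```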


Solar Angle Correction

Because this research uses a single image taken at one moment in time, we use an „absolute“ analysis rather than a comparative one. In an „absolute“ analysis the solar angle correction is obtained by dividing the original DN value by the sine of the solar elevation (this angle is known from the metadata delivered with the image by the distributor). Thus DN’ = DN / sin(L), i.e.
output pixel value = source pixel value / sin(solar elevation) (Eq. 3), where the solar elevation is measured in degrees.
The following source data are delivered together with the image for solar angle correction:



Image               Solar elevation    Value
Spectral band 1     41.34205           0.66055



All images that underwent semitransparent cloudiness correction should then go through solar angle correction using Eq. 3.
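Eq. 3 can be sketched as follows (the function name is illustrative; the elevation value comes from the metadata table above):

```python
import math
import numpy as np

def solar_elevation_correction(band, elevation_deg):
    """Eq. 3: divide every pixel value by the sine of the solar
    elevation angle delivered in the image metadata."""
    return band / math.sin(math.radians(elevation_deg))

# Solar elevation for spectral band 1 from the table above.
band = np.array([[100.0, 200.0]])
corrected = solar_elevation_correction(band, 41.34205)
```

Dividing by sin(41.34205°) ≈ 0.66055 brightens every pixel by a factor of roughly 1.51, compensating for the oblique illumination.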

Solar Irradiance Correction

The radiation that reaches the optoelectronic system on board the satellite from a given element (pixel) of the Earth’s surface depends on two factors:

1. Interference from adjacent elements of the Earth’s surface (adjacent pixels in the image). It manifests itself strongly in low- and medium-resolution data, but for high-resolution data this interference is negligible. Thus, our image requires no correction in this respect.

2. The effect of atmospheric elements and particles (smoke, dust, suspended matter, and light cloudiness in the form of haze). Because the image is cloud-free, the possible effect of atmospheric particles is small enough to be neglected.

Geometric Correction

There are two principal approaches to geometric correction. We used one of them, namely correction by ground control points (GCPs). The numerical values (ground coordinates) of the GCPs in the image were calculated, and an accuracy of 0.73 pixel was attained at 1 m spatial resolution (IKONOS panchromatic image). Geolocation was followed by resampling, in which each value of the source image was recalculated and shifted to a new position in the corrected image. This paper used bilinear interpolation (interpolation of the geometric surface using two linear functions) because it performs better than the nearest-neighbor method (in which each output element is taken from the single nearest source pixel) and introduces fewer distortions than cubic convolution (an interpolation method based on a weighted mean of the 16 neighboring pixels in a 4×4 window around the processed point of the source image).
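Single-point bilinear resampling can be sketched as follows, assuming the usual 4-neighbor distance weighting (names are illustrative):

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Bilinear interpolation: the output value is a distance-weighted
    mean of the 4 pixels surrounding (x, y) -- versus 1 pixel for
    nearest-neighbor and 16 for cubic convolution."""
    x0 = min(int(np.floor(x)), img.shape[1] - 2)  # clamp so x0+1 stays in bounds
    y0 = min(int(np.floor(y)), img.shape[0] - 2)
    dx, dy = x - x0, y - y0
    top = img[y0, x0] * (1 - dx) + img[y0, x0 + 1] * dx
    bottom = img[y0 + 1, x0] * (1 - dx) + img[y0 + 1, x0 + 1] * dx
    return top * (1 - dy) + bottom * dy

img = np.array([[0.0, 10.0], [20.0, 30.0]])
center = bilinear_sample(img, 0.5, 0.5)  # mean of all four pixels
```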

SECOND STAGE OF IMAGE PROCESSING AND ITS CLASSIFICATION

To enhance the image and increase its interpretability, a number of techniques were used, namely image enhancement (contrast stretching and false-color compositing) and filtering.

Image Characteristics Improvement

The IKONOS data is delivered with a dynamic range of 11 bits/pixel (2048 gray levels), whereas the image processing software operates in an 8-bit range (256 gray levels), from black at 0 to white at 255. Therefore, when a consumer receives an image, the scene appears almost completely black. To enhance the contrast and make use of the full range of display values, the image should be rescaled.
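The rescaling can be sketched as a linear mapping from the 11-bit range onto 8 bits (a minimal illustration; the function name is ours):

```python
import numpy as np

def rescale_11bit_to_8bit(band):
    """Map the 11-bit IKONOS range [0, 2047] linearly onto the 8-bit
    display range [0, 255] so the scene no longer appears black."""
    return np.round(band.astype(np.float64) * 255.0 / 2047.0).astype(np.uint8)

band = np.array([0, 1023, 2047], dtype=np.uint16)
out = rescale_11bit_to_8bit(band)  # black, mid-gray, white
```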

Contrast Stretching

Satellite images, particularly in the optical region, are often confined to a small portion of the dynamic range; this is especially true of images taken under poor lighting conditions. To improve the visual characteristics of an image, it should be transformed until it occupies the entire dynamic range. This method is called contrast stretching (Fig. 2). It extends the dynamic range to its maximum and thus enhances the contrast of the image as a whole.

When stretching a histogram, two methods of contrast improvement exist (apart from stepwise contrast adjustment): linear histogram expansion and histogram equalization. Of these we chose histogram equalization, since the gray-level distribution at the output is proportional to the occurrence frequency of the source digital numbers (pixels) in the image being corrected.
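A minimal histogram equalization for an 8-bit image, assuming the classic cumulative-histogram transfer function (names are illustrative):

```python
import numpy as np

def equalize_histogram(img):
    """Histogram equalization: spread output gray levels in proportion
    to how often the input levels occur, via the cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # Scale the cumulative distribution onto the full [0, 255] range.
    lut = np.clip(np.round((cdf - cdf_min) * 255.0 / (cdf[-1] - cdf_min)),
                  0, 255).astype(np.uint8)
    return lut[img]

img = np.array([[50, 50], [50, 200]], dtype=np.uint8)
out = equalize_histogram(img)
```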

Color Image Composition

Color images on a display monitor generally result from the combination of three spectral bands, usually acquired simultaneously. These bands are assigned to the red, green, and blue channels of the display. Images may also be formed in pseudocolors by transforming an original RGB color image and assigning the IR band to one of the channels.

Selection of Bands for Color Image Composition

The bands for color composites are not selected at random. Suitable bands may be chosen using statistical methods, one of which selects the bands with the greatest variance. This method, proposed by Chavez et al. in 1982, is called the optimum index factor: the higher its value, the better the quality of the color composite. Before displaying the image on the monitor, we calculated the optimum index factor for the different combinations of three bands; for this purpose a covariance matrix of the bands was constructed.
The values of the optimum index factor were:
For the first case (bands 1, 3 and 4) – 129.22
For the second case (bands 1, 2 and 4) – 111.95
For the third case (bands 1, 2 and 3) – 75.62
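Values of this kind can be computed with the standard Chavez et al. definition of the factor (sum of band standard deviations over sum of absolute pairwise correlations); we assume that is the formula meant, and the toy bands below are illustrative:

```python
import numpy as np

def optimum_index_factor(bands):
    """OIF for a 3-band combination: the sum of the band standard
    deviations divided by the sum of the absolute pairwise correlation
    coefficients. A higher OIF indicates more independent information
    in the composite."""
    flat = [b.ravel().astype(np.float64) for b in bands]
    total_std = sum(np.std(b) for b in flat)
    corr = np.corrcoef(flat)
    total_corr = abs(corr[0, 1]) + abs(corr[0, 2]) + abs(corr[1, 2])
    return total_std / total_corr

# Fully correlated toy bands: correlations are all +/-1.
b1 = np.array([0.0, 1.0, 2.0, 3.0])
oif = optimum_index_factor([b1, 2.0 * b1, 3.0 - b1])
```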
Since the image covers a mainly built-up area, it contains many linear objects such as road boundary lines. The linear structures could not be distinguished from buildings and other objects in the resulting color composite. Because many details could not be visualized clearly enough, the result was found to be inadequate, and processing was continued using filters (an image enhancement method based on selective filters).


Use of Filters

A close look at the image, enlarged to a sufficiently large scale, reveals noise in some segments. This is typical of areas with non-uniform detail and results from pixel mixing. Therefore, before applying selective filters, we applied a maximum-value filter.
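A maximum-value filter of this kind can be sketched in pure numpy as follows (a 3×3 window is assumed; the article does not state the window size):

```python
import numpy as np

def maximum_filter_3x3(img):
    """Replace every pixel by the maximum of its 3x3 neighborhood
    (edges padded by replication), suppressing dark pixel-mixing
    noise before any selective filtering."""
    padded = np.pad(img, 1, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + 3, j:j + 3].max()
    return out

img = np.array([[1, 2], [3, 4]])
out = maximum_filter_3x3(img)  # every 3x3 window sees the bright pixel 4
```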

Classification

In principle, classification techniques rely on the spectral characteristics of objects to distinguish significant classes among the different types of the Earth’s surface. The enhanced images are transformed into thematic layers. The choice of thematic layers depends on the analysis task as well as on the spectral characteristics and spatial resolution of the analyzed image. The task of this paper was updating a dataset at scale 1:25000. The following thematic layers were chosen for classification:
built-up territories, areas planted with trees, open grounds, water bodies, and road lines.
It should be noted that the thematic layer „Open Grounds“ covers waste grounds, areas under crops, and meadows, all contributing to a single thematic layer. Similarly, industrial zones and facilities, sports facilities, housing developments, and other man-made objects belong to the thematic class „Built-up Territories“.


Selection of Reference Samples

Given the high spatial resolution of the original image and the fact that it is multispectral, many more thematic layers and object classes could be distinguished. We made field observations to collect reference samples that extend the thematic layers given above, discriminating, for example, not only open grounds but also fallow land, areas under crops, meadows, wetlands, etc. We distinguished more than 30 reference samples in the field for further classification.

Generation of Attributive Cartographic Information

For classification of a multispectral image, it is necessary to specify which spectral bands, and in which combination, will be used for efficient extraction of attributive information.
Since the optimum index factor is largest for the combination of bands 3, 4, and 1, this combination was used for extracting attributive cartographic information. (This combination applies to the present case; other types of analysis used other band combinations.)


Definition of Reference Values

Because all three dimensions cannot be displayed on a two-dimensional monitor, we presented the image as three scatter diagrams. When assigning some pixels to a class, large deviations appeared, caused by false information on geographical position contained in some pixels, so assigning these pixels to a particular class of attributive information (thematic layer) was sometimes rather difficult. The number of attributive classes was therefore increased, on the understanding that after classification some objects and elements would be merged into one class again during GIS analysis. Roads were broken down into three classes: highways, roads, and bikeways; open ground into meadows, areas under crops, etc. Areas where roads and buildings could not be discriminated unambiguously were marked with an „X“, i.e. undetermined.

Selection of Classification Algorithm

Of the various classification algorithms, we rejected the box classifier because, despite its simplicity, not all values assigned to a class are grouped compactly enough to fall exactly into the box analyzed, which may result in loss of accuracy. The classifier based on the minimum Euclidean distance compared with the mean error was also rejected, since it runs the risk of assigning to a class a pixel that deviates strongly from the mean. This can be prevented by fixing the distance threshold, but its value may matter to the consumer.
The third kind of classification, maximum likelihood classification, is the most suitable, since it is based on probability concepts: when assigning a pixel to a class, parameters such as the cluster (pixel block) mean and the covariance are taken into account. It is this algorithm that was used for classification.
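A minimal sketch of such a maximum likelihood classifier, assuming per-class Gaussian statistics (mean vector and covariance matrix) estimated from the reference samples; the class names and numbers are illustrative:

```python
import numpy as np

def ml_classify(pixel, class_stats):
    """Assign the pixel to the class whose multivariate Gaussian
    (given the class mean and covariance) yields the highest
    log-likelihood; the constant term is dropped."""
    best, best_ll = None, -np.inf
    for name, (mean, cov) in class_stats.items():
        diff = pixel - mean
        ll = -0.5 * (np.log(np.linalg.det(cov)) + diff @ np.linalg.inv(cov) @ diff)
        if ll > best_ll:
            best, best_ll = name, ll
    return best

# Hypothetical two-band statistics for two reference classes.
stats = {
    "water":    (np.array([10.0, 12.0]), np.eye(2)),
    "built-up": (np.array([80.0, 90.0]), 4.0 * np.eye(2)),
}
label = ml_classify(np.array([78.0, 88.0]), stats)
```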


Classification Process

After all the abovementioned stages had been completed, classification of the combined spectral bands was performed using the defined reference values and criteria and the algorithms described above.
It quickly became obvious that the results were poor. Most buildings were assigned to the class of roads or highways; some buildings (with black roofs) had reflectance so close to that of water that they were classified as water bodies, as were some forests. The results of the first classification gave the impression that classification based on the multispectral bands in their original form could not yield a good result (Fig. 8). We therefore applied arithmetic and other methods to see whether a better product could be obtained: an image that has undergone such processing should be more interpretable, and the results should therefore improve.


Application of Other Image Processing Techniques

Because the classification result was poor, other image processing techniques were applied to check whether a better classification could be obtained.
Of the variety of techniques, three were chosen: an arithmetic algorithm, image fusion to improve particular characteristics and parameters, and principal component analysis.

Arithmetic Algorithm

To solve the problem of spectral band mixing, which caused errors in identifying urban buildings and the road network, an attempt was made to make greater use of band 4, since it contains more data on vegetation. To do this, the contrast of some image segments was increased, after which the image was inverted. However, the result failed to eliminate the mixing of the road network and urban buildings, so the idea of using inversion and contrast increase for this purpose was abandoned.

Image Fusion

Image fusion is the generation of a new image by combining two or more different images with a certain algorithm. The point of this technique is to improve accuracy and the reliability of interpretation by reducing uncertainty. To achieve the most interpretable image, three approaches were used. The first involved sequentially adding separate bands to the panchromatic band; the second, computing the mean of the spectral bands and superimposing it on the panchromatic image; and the third, transforming the RGB image into three separate images having the maximal characteristics of hue, saturation, and intensity respectively.
After all three approaches were tested it was found that the classification results failed to be improved (degradation in automatic recognition of object classes was observed).


Principal Component Analysis

The analysis generated new images from the estimated correlation between the four initial bands; these images are decorrelated versions of the initial bands, ordered by decreasing variance. The first three components, carrying the highest share of the variance, were combined into one image; however, the result of automatic recognition of object classes proved to be the worst of all.
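A sketch of such a principal component transform, assuming the usual eigen-decomposition of the band covariance matrix (names are illustrative):

```python
import numpy as np

def principal_components(bands):
    """Decorrelate a band stack: project the mean-centered bands onto
    the eigenvectors of their covariance matrix, ordering the resulting
    component images by decreasing variance."""
    X = np.stack([b.ravel().astype(np.float64) for b in bands])  # bands x pixels
    X = X - X.mean(axis=1, keepdims=True)
    eigvals, eigvecs = np.linalg.eigh(np.cov(X))
    order = np.argsort(eigvals)[::-1]  # largest variance first
    return eigvecs[:, order].T @ X, eigvals[order]

# Two perfectly correlated toy bands: all variance lands in component 1.
b1 = np.array([0.0, 1.0, 2.0, 3.0])
components, variances = principal_components([b1, 2.0 * b1])
```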

Reasons for Nonuse of Automatic Recognition of Object Classes

As the above experiments showed, every attempt at automatic discrimination of object classes failed because of the small differences in spectral characteristics and reflectance between separate objects, which resulted in ambiguous assignment of objects to classes. Our test site was mainly a built-up territory with a minimum of vegetation, water bodies, etc. House roofs, roads, and some grounds have essentially identical spectral characteristics and are strongly correlated with one another, which gave rise to significant errors.
Finally, we rejected automatic recognition of object classes and performed a manual visual classification instead. The results proved most satisfactory. This is due to the peculiarities of human perception and thinking, specifically associative reasoning, i.e. tools that are inaccessible to automatic recognition and identification.


Integration of Vector Data into Raster Image and Updating

The non-updated vector data was integrated with the raster image, and the information was then updated using the classes and separate objects distinguished through visual interpretation. Updating relied on analysis of the objects and their neighbors, texture, form, position, etc. The whole procedure was as follows: the generalized vector data was displayed in the GIS environment with the image registered to the GIS data; any inadequate objects or separate elements were then re-vectorized and assigned attributive data. If an element did not fall into the proper class, the required alterations were made and the element or object was assigned new attributive data. This process was repeated until all objects and elements in the different classes had been updated (see Figures 13-15).

Conclusion

Remote sensing data of high spatial and spectral resolution is a unique source of information.
High-resolution satellite remote sensing data is a very efficient means of updating cartographic information in terms of saving time and funds. The distinguishing of separate objects and their classes depends on the nature of the terrain; for urban areas, automatic assignment of image elements to classes is difficult, but visual interpretation is very efficient.
Special emphasis should be placed on errors resulting from vector and raster data integration. Highly rugged or mountainous terrain also requires integration with DTM data.
Because the available test data covered mainly urban areas with no significant topographic variation, we excluded topography from consideration. Mountainous topography introduces large distortions into the real position of some image elements; digital terrain models should therefore be constructed and elevation characteristics taken into account.


Document URL: http://www.ntsomz.ru/dzz_info/articles_dzz/geoinf_refr
Copyright © NTs OMZ