Updating Geographical Information Using High-Resolution Remote Sensing Data / November 8, 2004 / The 1:25,000 vector maps of Enschede Municipality, the Netherlands, were revised using IKONOS remote sensing data while being reviewed in a GIS environment. The analysis showed that high-resolution data offer significant advantages over conventional field-survey techniques, allowing vector data to be updated and databases to be maintained with considerably less time and effort.
The first stage of processing is generally called preprocessing, since it precedes all other stages of image processing. The amount of preprocessing varies with the sensor type and the quality of the digital data. Preprocessing includes system correction, radiometric correction, and geometric correction. This research dealt with neither sensor correction nor the correction of errors induced by spacecraft deviation and the Earth's rotation, since these corrections had already been performed by the image supplier.
O = S · (D · I · T · R) + H (Eq. 1, reconstructed in general form from the surrounding discussion, which states that cloudiness acts additively and the solar angle multiplicatively), where O is the reflectance detected by the sensor; S is the solar angle; D is light; I is the sensor indicator; T is irradiance; R is the reflection from the object/surface; and H is cloudiness.
From the above equation it follows that cloud conditions have an additive effect, whereas the solar angle has a multiplicative one. This is why the digital numbers (DN values) of the image pixels differ from what they should be. The correction approximates the cloudiness value and restores the missing values for each pixel of the image. The lower the cloud transparency values in a particular image, the more correction is required. The equation for semitransparent cloudiness correction is as follows:
DN = V_w + V_h, where V_w is the presumed lower value under cloud-free conditions and V_h is the distortion caused by cloudiness. Hence the cloudiness correction coefficient is the pixel's base DN value minus the lower assessed cloud-free value: V_h = DN − V_w (Eq. 2).
In this case the cloudiness correction value for spectral band 1 is 18, derived from Eq. 2 by assessing the DN value expected in the absence of atmosphere (here equal to 2). The correction value was subtracted from every pixel of spectral band 1; the same was done for spectral band 2. The lowest values in spectral bands 3 and 4 were close to zero and therefore required no correction.
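The per-band correction described above is a dark-object subtraction: the haze offset is the band's minimum DN minus the DN a truly dark object should have. A minimal numpy sketch (the function name and toy values are illustrative, not from the paper's image):

```python
import numpy as np

def dark_object_subtraction(band, clear_sky_min=0):
    """Haze correction per Eq. 2: the haze offset V_h is the band's
    minimum DN minus the DN expected under cloud-free conditions (V_w);
    the offset is then subtracted from every pixel."""
    v_h = int(band.min()) - clear_sky_min          # additive haze offset
    corrected = band.astype(np.int32) - v_h        # subtract from every pixel
    return np.clip(corrected, 0, None)             # DNs cannot be negative

# Toy band whose darkest pixel reads 20 while a truly dark object
# should read 2, giving the offset 18 mentioned in the text:
band1 = np.array([[20, 55], [38, 120]])
print(dark_object_subtraction(band1, clear_sky_min=2))
```

With these toy values the offset is 20 − 2 = 18, matching the band 1 correction quoted above.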
Because this research uses a single image taken at one point in time, we use an "absolute" analysis rather than a comparative one. In an "absolute" analysis, the solar angle correction is obtained by dividing the base DN value by the sine of the solar elevation angle (known from the metadata delivered with the image by the distributor): DN' = DN / sin(L).
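The DN' = DN / sin(L) normalization can be sketched directly (function name assumed; the elevation angle would come from the image metadata):

```python
import math
import numpy as np

def sun_elevation_correction(dn, sun_elevation_deg):
    """Illumination normalization: DN' = DN / sin(L), where L is the
    solar elevation angle supplied with the image by the distributor."""
    return dn / math.sin(math.radians(sun_elevation_deg))

dn = np.array([50.0, 100.0])
print(sun_elevation_correction(dn, 30.0))   # sin(30 deg) = 0.5, so DNs double
```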
Radiation travelling from a given element of the earth's surface to the optoelectronic system on board the satellite depends on two factors:
1. Interference from adjacent elements of the earth's surface (adjacent pixels in the image). This effect is strong in low- and medium-resolution data, but for high-resolution data it is negligible, so our image requires no correction in this respect.
2. The effect of atmospheric particles (smoke, dust, suspended matter, and light cloudiness in the form of haze). Because our image is cloud-free, any residual effect of atmospheric particles is negligible and may be ignored.
There are two principal approaches to geometric correction; we used correction by ground control points (GCPs). The ground coordinates of the GCPs in the image were calculated, attaining an accuracy of 0.73 pixel at 1 m spatial resolution (IKONOS panchromatic image). Geolocation was followed by resampling, in which each value of the source image was recalculated and shifted to a new position in the corrected image. This paper used bilinear interpolation (interpolation of the geometric surface using two linear functions), because it performs better than the nearest-neighbor method (in which each output pixel is taken from the single nearest source pixel) and introduces less distortion than cubic convolution (an interpolation based on the weighted mean of the 16 neighboring pixels in a 4×4 window around the processed point of the source image).
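Bilinear interpolation as described above can be sketched in a few lines of numpy (a minimal single-point sampler, not the paper's production resampler):

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Sample img at fractional coordinates (x, y) by bilinear
    interpolation: a weighted mean of the 4 surrounding pixels
    (vs. 1 pixel for nearest-neighbor, 16 for cubic convolution)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, img.shape[1] - 1)
    y1 = min(y0 + 1, img.shape[0] - 1)
    fx, fy = x - x0, y - y0
    top = img[y0, x0] * (1 - fx) + img[y0, x1] * fx      # interpolate along x
    bottom = img[y1, x0] * (1 - fx) + img[y1, x1] * fx
    return top * (1 - fy) + bottom * fy                  # then along y

img = np.array([[0.0, 10.0], [20.0, 30.0]])
print(bilinear_sample(img, 0.5, 0.5))   # centre of the 4 pixels: 15.0
```

A full resampling pass would apply this sampler at every output pixel's back-projected source coordinate.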
To enhance the image and increase its interpretability, a number of techniques were used: improvement of image characteristics (contrast stretching and false-color composition) and filtering.
IKONOS data are delivered with a dynamic range of 11 bits/pixel (2048 gray levels), whereas the image-processing software operates in an 8-bit range (256 gray levels), from black at 0 to white at 255. Therefore, when a consumer receives an image, the scene appears almost completely black. To enhance contrast and make use of the available digital numbers, the image must be rescaled.
Satellite images, particularly in the optical region, are often confined to a small portion of the dynamic range; this is especially true of images taken under poor lighting conditions. To improve the visual characteristics of an image, it should be transformed until it occupies the entire dynamic range. This method, called contrast stretching (Fig. 2), extends the dynamic range to its maximum and thereby enhances the contrast of the image as a whole.
When stretching a histogram, two methods of contrast improvement exist (apart from stepwise contrast adjustment): linear histogram expansion and histogram equalization. We chose histogram equalization, since the distribution of gray levels at the output is proportional to the occurrence frequency of the source digital numbers (pixels) in the image being corrected.
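Histogram equalization of an 11-bit band onto the 8-bit display range can be sketched as follows (a minimal numpy version; the synthetic band stands in for the real IKONOS data):

```python
import numpy as np

def equalize(band, in_levels=2048, out_levels=256):
    """Histogram equalization: map an 11-bit band (2048 levels) onto
    the 8-bit display range so that output gray levels are spread in
    proportion to how often input DNs occur."""
    hist = np.bincount(band.ravel(), minlength=in_levels)
    cdf = hist.cumsum() / band.size                      # cumulative distribution
    lut = np.round(cdf * (out_levels - 1)).astype(np.uint8)
    return lut[band]                                     # apply lookup table

rng = np.random.default_rng(0)
band = rng.integers(0, 300, size=(64, 64))   # dark scene: only 300 of 2048 levels used
out = equalize(band)
print(out.min(), out.max())                  # stretched toward the full 0-255 range
```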
Color images on a display monitor generally result from the combination of three spectral bands, usually acquired simultaneously and corresponding to the red, green, and blue regions of the optical range. Images may also be formed in false colors by transforming the original RGB image and assigning the IR band.
The bands for color composites are not selected at random. Suitable bands may be chosen on the basis of statistical methods, one of which selects the bands with the greatest variance. This method, proposed by Chavez et al. in 1982, is called the Optimum Index Factor: the higher the variance within the bands and the lower the correlation between them, the better the quality of the color composite. Before displaying the image on the monitor, we calculated the Optimum Index Factor for the different combinations (each of three bands) and constructed the variance/correlation matrix given below.
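The Optimum Index Factor for a three-band combination is conventionally computed as the sum of the bands' standard deviations divided by the sum of the absolute pairwise correlation coefficients. A sketch with synthetic bands (the data here are random stand-ins, not the IKONOS bands):

```python
import numpy as np
from itertools import combinations

def oif(bands):
    """Optimum Index Factor (Chavez et al., 1982) for a 3-band combo:
    sum of standard deviations / sum of pairwise |correlation|.
    Higher OIF = more variance and less redundancy."""
    stds = [b.std() for b in bands]
    corrs = [abs(np.corrcoef(a.ravel(), b.ravel())[0, 1])
             for a, b in combinations(bands, 2)]
    return sum(stds) / sum(corrs)

rng = np.random.default_rng(1)
b1 = rng.normal(100, 30, (32, 32))
b2 = b1 * 0.9 + rng.normal(0, 5, (32, 32))   # strongly correlated with b1
b3 = rng.normal(100, 30, (32, 32))           # independent band
b4 = rng.normal(100, 10, (32, 32))           # independent, lower variance
print(oif([b1, b2, b3]) < oif([b1, b3, b4])) # the less redundant combo scores higher
```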
A close look at the image, enlarged to a sufficiently large scale, reveals noise in some segments. This is typical of areas with non-uniform detail and results from pixel mixing. Therefore, before applying selective filters, we applied maximum-value filters.
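A maximum-value filter replaces each pixel with the largest value in its neighborhood, which suppresses isolated dark noise pixels. A minimal 3×3 version in pure numpy (function name assumed):

```python
import numpy as np

def max_filter3(img):
    """3x3 maximum-value filter: each pixel is replaced by the largest
    value in its 3x3 neighborhood, suppressing isolated dark noise."""
    padded = np.pad(img, 1, mode="edge")                            # replicate borders
    windows = np.lib.stride_tricks.sliding_window_view(padded, (3, 3))
    return windows.max(axis=(2, 3))                                 # max per window

noisy = np.array([[1, 5], [3, 2]])
print(max_filter3(noisy))
```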
In principle, classification techniques rely on the spectral characteristics of objects to distinguish significant classes among the different types of the earth's surface. The enhanced images are transformed into thematic layers. The choice of thematic layers depends on the goals of the analysis as well as on the spectral characteristics and spatial resolution of the analyzed image. The goal of this work was to update a dataset at scale 1:25,000. The following thematic layers were chosen for classification:
Given the high spatial resolution of the original image and the fact that it is multispectral, many more thematic layers and object classes could be distinguished. We made field observations to collect reference samples that extend the thematic layers listed above, succeeding in discriminating, for example, not only open ground but also fallow land, cropland, meadows, wetlands, etc. In all, we collected more than 30 reference samples in the field for further classification.
For classification of a multispectral image, it is necessary to specify which spectral bands, and in which combination, will be used for efficient extraction of attributive information.
Because all three measurements cannot be displayed on a two-dimensional monitor at once, we presented the image as three scatter diagrams. When assigning pixels to classes, large deviations appeared, caused by false geographic information contained in some pixels, which sometimes made it difficult to assign those pixels to a particular class of attributive information (thematic layer). The number of attributive cartographic classes was therefore increased, bearing in mind that after classification some objects and elements would be merged back into a single class during GIS analysis. Roads were broken down into three classes (highways, roads, and bikeways), open ground into meadows, cropland, etc. Areas where roads and buildings could not be discriminated unambiguously were marked "X", i.e. not determined.
Of the various classification algorithms, we rejected the box classifier because, despite its simplicity, not all values assigned to a class are grouped compactly enough to fall exactly into the box being analyzed, which can cost accuracy. The classifier using the minimum Euclidean distance compared against the mean error was also rejected, since it risks assigning to a class pixels that deviate strongly from the mean. This can be prevented by fixing a maximum distance, but the choice of that value may matter to the consumer.
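The minimum-distance classifier discussed above, including the fixed-distance threshold that leaves outlying pixels unassigned, can be sketched as follows (function name and toy class means are illustrative):

```python
import numpy as np

def min_distance_classify(pixels, class_means, max_dist=None):
    """Minimum-Euclidean-distance classifier: each pixel takes the class
    of the nearest mean. With max_dist set, pixels farther than the
    threshold from every mean stay unassigned (-1), avoiding the
    large-deviation problem noted in the text."""
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    labels = d.argmin(axis=1)                    # nearest class mean
    if max_dist is not None:
        labels[d.min(axis=1) > max_dist] = -1    # "X": not determined
    return labels

means = np.array([[10.0, 10.0], [50.0, 50.0]])   # two toy class means
pix = np.array([[12.0, 9.0], [48.0, 52.0], [200.0, 5.0]])
print(min_distance_classify(pix, means, max_dist=30.0))
```

The third toy pixel lies far from both means and is left unassigned, illustrating why the threshold choice matters.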
After all the above stages had been completed, classification of the combined spectral bands was performed using the defined reference values and criteria and the algorithms described above.
Because the classification result proved poor, other image-processing techniques were tried to make sure the best possible classification result had been obtained.
To address the problem of spectral band mixing, which caused errors in identifying urban buildings and the road network, we attempted to make greater use of band 4, since it contains more vegetation data. The contrast of some image segments was increased and the image was then inverted. However, this did not eliminate the mixing of the road network and urban buildings, so inversion and contrast enhancement were set aside for this purpose.
Image fusion is the generation of a new image by combining two or more different images with a certain algorithm. Its purpose is to improve accuracy and the reliability of interpretation by reducing uncertainty. To obtain the most interpretable image, three approaches were used. The first involved sequentially adding individual bands to the panchromatic band. The second obtained the mean spectral characteristics of all spectral bands and superimposed them on the panchromatic image. The third transformed the RGB image into three separate images, each carrying the maximal characteristics of hue, saturation, and intensity respectively.
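One common fusion algorithm of the kind described, though not necessarily the exact one used in the paper, is the Brovey transform: each multispectral band is rescaled so the per-pixel intensity matches the sharper panchromatic band. A minimal sketch with toy data:

```python
import numpy as np

def brovey_fuse(rgb, pan, eps=1e-6):
    """Brovey-style fusion: rescale each band by the ratio of the
    panchromatic value to the mean multispectral intensity, so the
    fused image inherits the panchromatic band's spatial detail."""
    intensity = rgb.mean(axis=0) + eps           # per-pixel mean of the bands
    return rgb * (pan / intensity)[None, :, :]   # broadcast ratio over bands

# Toy 2x2 scene: constant bands of 30, 60, 90 and a pan band of 120
rgb = np.ones((3, 2, 2)) * np.array([30.0, 60.0, 90.0])[:, None, None]
pan = np.full((2, 2), 120.0)
fused = brovey_fuse(rgb, pan)
```

Here the mean intensity is 60, so every band is scaled by 2, preserving the band ratios while matching the panchromatic brightness.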
The basis of this analysis was the generation of new images from the estimated correlation between the four initial bands; the new images were constructed so as to decrease the degree of correlation between them. The first three images, those carrying the largest share of the variance, were combined into one image; however, the result of automatic recognition of object classes proved to be the worst.
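Generating decorrelated images ordered by decreasing variance from correlated bands is what a principal component transform does; the analysis above appears to follow this scheme. A compact numpy sketch with synthetic correlated bands (the data and function name are illustrative):

```python
import numpy as np

def principal_components(bands):
    """Transform N correlated band images into N mutually uncorrelated
    component images, ordered by decreasing variance."""
    n, h, w = bands.shape
    X = bands.reshape(n, -1)
    X = X - X.mean(axis=1, keepdims=True)        # center each band
    vals, vecs = np.linalg.eigh(np.cov(X))       # eigendecomposition (ascending)
    order = np.argsort(vals)[::-1]               # largest variance first
    comps = vecs[:, order].T @ X                 # project onto eigenvectors
    return comps.reshape(n, h, w)

rng = np.random.default_rng(2)
base = rng.normal(0, 1, (16, 16))
# Four strongly correlated synthetic bands: shared signal plus small noise
bands = np.stack([base + rng.normal(0, 0.1, (16, 16)) for _ in range(4)])
pcs = principal_components(bands)
# pcs[0] carries most of the variance; later components are nearly noise
```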
As the above experiments showed, every attempt to discriminate object classes automatically failed because of the small differences in spectral characteristics and reflectance between individual objects, which led to ambiguous assignment of objects to classes. Our test site was mainly built-up territory with a minimum of vegetation, water bodies, etc. House roofs, roads, and some grounds have essentially identical spectral characteristics and are strongly correlated with one another, which gave rise to substantial errors.
The non-updated vector data were integrated with the raster image, and the information was then updated using the classes and individual objects distinguished by visual interpretation. Updating relied on analysis of objects and their neighbors, texture, shape, position, etc. The entire procedure was as follows: the generalized vector data were displayed in the GIS environment with the image registered to the GIS data; any objects or elements found to be inadequate were vectorized and assigned attributive data. If an element fell into the wrong class, the required alterations were made and the element or object was assigned new attributive data. This process continued until all objects and elements in the different classes had been updated (see Figures 13-15).
Remote sensing data of high spatial and spectral resolution is a unique source of information.
© Official Site of Research Center for Earth Operative Monitoring (NTS OMZ). Where any materials on this site are republished or copied, the source of the material must be identified.
127490, Moscow, Decabristov st., b.51, h.25