Article

An Object-Based Approach for Mapping Shrub and Tree Cover on Grassland Habitats by Use of LiDAR and CIR Orthoimages

1 Forest and Landscape, Faculty of Life Sciences, University of Copenhagen, Rolighedsvej 23, DK-1958 Frederiksberg C, Denmark
2 Department of Remote Sensing and Photogrammetry, Finnish Geodetic Institute, P.O. Box 15, FI-02431 Masala, Finland
* Author to whom correspondence should be addressed.
Remote Sens. 2013, 5(2), 558-583; https://doi.org/10.3390/rs5020558
Submission received: 25 November 2012 / Revised: 10 January 2013 / Accepted: 15 January 2013 / Published: 28 January 2013

Abstract

Due to the abandonment of former agricultural management practices such as mowing and grazing, an increasing amount of grassland is no longer being managed. This has resulted in increasing shrub encroachment, which poses a threat to a number of species. Monitoring is an important means of acquiring information about the condition of the grasslands. Though the use of traditional remote sensing is an effective means of mapping and monitoring land cover, the mapping of small shrubs and trees based only on spectral information is challenged by the fact that shrubs and trees often spectrally resemble grassland and thus cannot be safely distinguished and classified. With the aid of LiDAR-derived information, such as elevation, the classification of spectrally similar objects can be improved. In this study, we applied high point density LiDAR data and colour-infrared orthoimages for the classification of shrubs and trees in a study area in Denmark. The classification result was compared to a classification based only on colour-infrared orthoimages. The overall accuracy increased significantly with the use of LiDAR and, for shrubs and trees specifically, producer’s accuracy increased from 81.2% to 93.7%, and user’s accuracy from 52.9% to 89.7%. Object-based image analysis was applied in combination with a CART classifier. The potential of using the applied approach for mapping and monitoring of large areas is discussed.

1. Introduction

Due to abandonment, here defined as the complete withdrawal of agricultural management [1], the absence of management practices such as grazing and mowing has led to increased shrub encroachment on grasslands in many parts of Europe, e.g., the Mediterranean Basin [2], the Alps [3] and northern and eastern parts of Europe [4–6]. As grassland habitats are key areas for maintaining biodiversity in agricultural landscapes [7], changes in vegetation, such as shrub encroachment, can be critical to species that are dependent on managed grasslands. It has been estimated that 50% of all species in Europe, including several endemic and threatened species, depend on extensively managed habitats such as grasslands [8]. In Denmark, shrub encroachment is a well-known problem and is thought to be the most severe threat to the remaining extensive grasslands [9], which today represent only 400,000 ha or 9.5% of the Danish land area according to the Danish Basemap [10]. Furthermore, these remaining grasslands are very fragmented due to the fact that 60% of the Danish land area is cultivated, while 10% is built-up, thus leaving little space for semi-natural habitats [11].

Due to the importance of these grasslands for biodiversity, the identification of abandoned grassland is of great interest from a conservation perspective. However, abandonment is very difficult to estimate and map due to a lack of data (statistical data as well as land cover/land use data) on abandoned land [1,12]. Other means of mapping, for instance by use of remotely sensed data, are challenged because the initial changes in land cover following the change in land use (abandonment) are primarily related to the encroachment of other grass species and different kinds of herbaceous vegetation, which can be very difficult to classify and even to discern visually from remotely sensed imagery. Hence, an abandoned grassland may not reveal itself until shrubs start to encroach, and, depending on the local abiotic and biotic factors, this can happen quickly, i.e., within a few years, or it may take 25 years or more. Despite this potentially long time span between abandonment and shrub encroachment, the “sudden” appearance of shrubs on grassland remains a good indicator of abandonment.

Studies which specifically focus on mapping shrub encroachment based on remotely sensed imagery and semi-automatic classification are relatively few, with most of them concentrating on the classification of shrub encroachment on arid or desert grassland, e.g., [13–17], or similar homogeneous landscapes, e.g., Hantson et al. [18], who mapped woody species in a dune landscape. However, successful classification of shrubs and trees in non-arid/non-desert landscapes is also known, e.g., Campos et al. [19]. Nevertheless, one of the main challenges of shrub encroachment mapping based only on spectral information is that small shrubs and trees can be very difficult to distinguish from other vegetation types which show a similar spectral response (e.g., grassland) [20,21], unless these land cover types stand out very clearly from each other, as in the case of shrubby vegetation in a desert. However, with the aid of airborne LiDAR (Light Detection and Ranging) derived data, valuable height information can be added to the classification, making it possible to easily distinguish, e.g., shrubs from grassland, and houses from other impermeable surfaces. The addition of LiDAR has been shown to significantly improve classification results [22–25]. Land cover classification based only on LiDAR has also shown promising results, e.g., [26–30], also in relation to shrub encroachment mapping [31]. However, despite the good results, the lack of spectral information, which is useful for separating different types of land cover, remains a shortcoming of LiDAR-based land cover classification.

For land cover classification where LiDAR data are applied, the derived height information, and in some cases also intensity information, is often combined with imagery. Object-based image analysis (OBIA) [32] is today a frequently applied approach for the combined classification of LiDAR data and imagery in urban landscapes [33–35], as well as open land landscapes [36,37]. For the actual classification, commonly used approaches include: Support Vector Machines (SVM); nearest neighbour (NN); decision trees—also known as classification trees or CART (Classification and Regression Trees)—in addition to rule sets based on expert knowledge. The use of classification trees, which do not require assumptions regarding the distribution of the data [38], has become increasingly popular due to their high level of automation, flexibility and ability to handle large numbers of input attributes. The method can be used for creating classification rules automatically when training samples with various attributes (features) are available [39–41].

Though LiDAR data exist for the whole of Denmark, very few LiDAR studies have been conducted in a Danish context, and none have explored the potential of mapping shrub and tree cover using LiDAR combined with imagery. Furthermore, from a mapping and monitoring perspective, where aerial photographs are collected frequently in many countries for orthoimage production, including colour-infrared (CIR) images, and where LiDAR data still remain a limited resource, it is relevant to explore how large the difference is between classifying shrubs and trees using only images and using an approach based on LiDAR and image data. Therefore, in this study, we investigate the potential of mapping shrub and tree cover in a study area of 14 km² located in a rural landscape in the northern part of Denmark by use of aerial CIR orthoimages and height information derived from a high point density LiDAR dataset. To assess the effect of adding LiDAR data for the classification of shrubs and trees, we compare a classification based only on CIR images with a classification based on CIR and LiDAR-derived data. CIR images are acquired every year in Denmark and are thus easily available for the application; they are therefore included in both classification experiments. An object-based image analysis approach, using eCognition® [42], is used in combination with a classification tree approach for finding the most suitable features for classification. The methodology, the results and the experiences are discussed, as well as the potential of applying the method in mapping and monitoring of large areas.

2. Study Area and Data

2.1. Study Area

The study area is located in the northwestern part of Denmark and covers 14 km² (Figure 1). It represents one of five areas which were laser scanned during 2010 as part of a large archaeological laser-scanning project ordered by Holstebro Museum, which aimed to locate archaeological remnants using a detailed LiDAR-derived DTM (Digital Terrain Model). The chosen study area consists of agricultural land with crops and a few plantations, intermixed with hedgerows, small ponds and semi-natural habitats, including abandoned grasslands that have now been invaded by shrubs, primarily willow (Salix sp.) (Figure 1). The grasslands, with and without shrubs, are primarily located in the central part of the study area. Compared to the other four areas comprising the archaeological laser-scanning project, this particular area was chosen due to the presence and amount of semi-natural habitats with—as well as without—shrubby vegetation. The other four areas mostly consist of either cropped agricultural land (with very few semi-natural habitats) or agricultural land with large plantations.

2.2. LiDAR Data and CIR Orthoimages

The data sets applied in this study include high point density laser-scanner data and derived products (DTM, DSM and nDSM), and CIR orthoimages.

The laser-scanner data were acquired on the 28th and 29th of October 2010 by BLOMinfo A/S (now NIRAS BLOMinfo A/S) and were recorded using a TopEye system S/N 700 mounted on a helicopter flying at an altitude of 300 m. Due to the time of year when the laser scanning took place, some foliage was still present. Flightline overlap was used to achieve a high point density of the area, resulting in an overall average point density of 38 points/m². However, to ease the processing of the LiDAR data, which is very demanding on memory, we loaded only every third point, which reduced the overall average point density to 13 points/m². This point density was considered sufficient for detecting shrubs and trees. The point cloud data were delivered as a pre-classified LAS 1.2 file including first, last and intermediate returns. Processing and classification of the raw LiDAR data are described in Section 3.1.
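
The thinning step can be reproduced outside the vendor software; below is a minimal sketch using the open-source laspy library (version 2.x assumed; the file names are illustrative, not those used in the study):

```python
import numpy as np
import laspy

# Read the pre-classified LAS 1.2 point cloud (file name is illustrative).
las = laspy.read("study_area.las")

# Keep every third point, thinning ~38 points/m2 to ~13 points/m2.
keep = np.arange(len(las.points)) % 3 == 0
thinned = laspy.LasData(las.header)
thinned.points = las.points[keep]
thinned.write("study_area_thinned.las")
print(f"kept {keep.sum()} of {len(keep)} points")
```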

The CIR images, consisting of the three bands near-infrared, red and green, were recorded between May and June 2010 by the private vendor COWI, using an UltraCam Xp from Vexcel Imaging GmbH. The images had been radiometrically corrected and orthorectified and were delivered in tiles of 2 × 2 km with a ground resolution of 0.16 m. Post-processing of the images consisted of mosaicking the tiles into one large image, which was then divided into four new tiles (Figure 2). The tiling was done to better control the processing of the data and to separate a training area (tile 4) for classification purposes (described in more detail in the methods section). Furthermore, the pixel resolution was changed from 0.16 to 0.50 m. We assessed that a pixel size as small as 0.16 m was not necessary for classifying shrubs and trees, and such a small pixel size would also require more time for the computational processing of the data. On the other hand, since the application should also be able to capture small shrubs with a diameter of only a few metres, the pixel size should not be too large. Hence, we decided to use a pixel size of 0.50 m.
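
The exact mosaicking and resampling tools are not reported; as a rough sketch, both steps could be reproduced with rasterio (version 1.2 or later assumed for the resampling argument; file names are illustrative):

```python
import rasterio
from rasterio.merge import merge
from rasterio.enums import Resampling

# The 2 x 2 km CIR tiles delivered by the vendor (names are illustrative).
tile_paths = ["cir_tile_a.tif", "cir_tile_b.tif", "cir_tile_c.tif", "cir_tile_d.tif"]
sources = [rasterio.open(p) for p in tile_paths]

# Mosaic the tiles and resample from 0.16 m to 0.50 m in one step.
mosaic, transform = merge(sources, res=0.5, resampling=Resampling.average)

profile = sources[0].profile
profile.update(height=mosaic.shape[1], width=mosaic.shape[2], transform=transform)
with rasterio.open("cir_mosaic_050m.tif", "w", **profile) as dst:
    dst.write(mosaic)
```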

3. Methods

The method applied for classification consists of a segmentation step followed by a classification step using features derived from a CART analysis. Two classifications are performed: one which combines LiDAR-derived data and CIR orthoimages (with the initial calculation of an nDSM), and one which is solely based on CIR orthoimages. Finally, an accuracy assessment of the two classifications is performed. An overview of the applied approach is presented in Figure 3.

3.1. Calculation of an nDSM Based on LiDAR Points

For the processing and classification of the raw LiDAR points, TerraScan software [43] was used. The main purpose of the processing and classification was to obtain: (a) a class consisting of ground points for the creation of a DTM (Digital Terrain Model) and (b) a class consisting of ground points as well as points above ground for the creation of a DSM (Digital Surface Model). The main objective of creating the DTM and DSM was to derive a normalized DSM (nDSM), which could later be used for classification. For the creation of an nDSM, a correct DTM is crucial, hence a correct classification of the ground points is important. All returns—first, last and intermediate—were used during classification. The ground classification was based on a progressive densification filtering algorithm [44] developed by Axelsson [45]. The classification routine iteratively builds a triangulated surface model and works its way upwards, as long as it finds new points which match the iteration parameters.
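
TerraScan's implementation is proprietary and the parameter values used are not reported here; the following is a heavily simplified sketch of the progressive TIN densification idea (distance criterion only, angle criteria omitted, all thresholds assumed):

```python
import numpy as np
from scipy.spatial import Delaunay

def densify_ground(points, cell=20.0, max_dist=1.5, n_iter=10):
    """Simplified progressive TIN densification (after Axelsson [45]).

    points : (N, 3) array of x, y, z returns. Seeds the terrain model with
    the lowest return per cell x cell m block, then iteratively accepts
    points within max_dist of the current triangulated surface. The angle
    criteria and all parameter values are simplifying assumptions, not
    the TerraScan settings used in the study.
    """
    xy = points[:, :2]
    ground = np.zeros(len(points), dtype=bool)

    # Seed: lowest return in each coarse grid cell.
    _, cell_id = np.unique(np.floor(xy / cell).astype(int), axis=0, return_inverse=True)
    order = np.lexsort((points[:, 2], cell_id))        # sort by cell, then height
    first = np.ones(len(points), dtype=bool)
    first[1:] = cell_id[order][1:] != cell_id[order][:-1]
    ground[order[first]] = True

    for _ in range(n_iter):
        tri = Delaunay(xy[ground])
        gz = points[ground, 2]
        cand = np.flatnonzero(~ground)
        simplex = tri.find_simplex(xy[cand])
        inside = simplex >= 0
        cand, simplex = cand[inside], simplex[inside]

        # Surface height at each candidate via barycentric interpolation.
        T = tri.transform[simplex]                     # (M, 3, 2)
        b = np.einsum('mij,mj->mi', T[:, :2], xy[cand] - T[:, 2])
        bary = np.c_[b, 1.0 - b.sum(axis=1)]
        z_surf = (gz[tri.simplices[simplex]] * bary).sum(axis=1)

        accept = np.abs(points[cand, 2] - z_surf) < max_dist
        if not accept.any():
            break
        ground[cand[accept]] = True
    return ground
```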

Based on the ground classification, a classification routine was subsequently run which classified points from 1.8 m above ground up to a maximum threshold height of 40 m. The threshold of 1.8 m was applied to avoid “noise” in terms of tall herbaceous vegetation, cars, cows, etc. The threshold value of 40 m was set based on the knowledge that there were no tall buildings in the study area and on the fact that trees in Denmark do not normally grow higher than about 40 m (the tallest tree in Denmark is 51 m). The ground point class and aboveground point class were then combined to create a DSM. The DTM and DSM points were exported in a gridded format with a cell size of 0.5 m to fit the image resolution. Subsequently, an nDSM was created by subtracting the DTM from the DSM (Figure 4).
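
The final subtraction is straightforward; a minimal sketch with rasterio, assuming the gridded DTM and DSM have been exported as GeoTIFFs (file names are illustrative):

```python
import numpy as np
import rasterio

# Read the 0.5 m gridded DTM and DSM exported from the point classification.
with rasterio.open("dtm_05m.tif") as src:
    dtm = src.read(1).astype("float32")
    profile = src.profile
with rasterio.open("dsm_05m.tif") as src:
    dsm = src.read(1).astype("float32")

# nDSM = DSM - DTM; clip small negative interpolation artefacts to zero.
ndsm = np.clip(dsm - dtm, 0, None)

profile.update(dtype="float32", count=1)
with rasterio.open("ndsm_05m.tif", "w", **profile) as dst:
    dst.write(ndsm, 1)
```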

3.2. Segmentation and Classification Methods

For the segmentation and classification of the LiDAR-derived nDSM and CIR orthoimages, the object-based image analysis software eCognition® Developer 8.7 [42] was used. The CART function (classification and regression tree) embedded in eCognition was used for the classification. As the primary focus of this study is the classification of shrubs and trees, only three classes were defined: shrubs and trees (including forest), built-up (different kinds of buildings), and ground (including vegetated as well as non-vegetated surfaces and water).

First, a multiresolution segmentation algorithm [46] was used to delineate homogeneous segments. In the classification based on the nDSM and CIR orthoimages, the segmentation was based on the nDSM in order to distinguish tall objects from ground objects using the height information. The parameters used for the segmentation step are presented in Table 1. The chosen settings for segmentation of the nDSM were based on initial testing of the different segmentation parameters and subsequent evaluation of the results in terms of what resulted in the most meaningful objects. The chosen segmentation scale was also tested by running the ESP (Estimation of Scale Parameter) tool which can be used for finding the most suitable segmentation scale [47]. The ESP tool builds on the idea of local variance (LV) of object heterogeneity within a scene. It iteratively generates image-objects at multiple scale levels in a bottom-up approach and calculates the LV for each scale. Variation in heterogeneity is explored by evaluating the LV plotted against the corresponding scale. The thresholds in rates of change of LV (ROC-LV) indicate the scale levels at which the image can be segmented in the most appropriate manner, relative to the data properties at the scene level [47]. Based on the result, a diagram can be created where the peaks of the curve indicate appropriate scale levels for segmentation. Among the suggested scale levels (diagram peaks), the scale level of 10 appeared as a distinct peak. Peaks at scale levels 16 and 30 were also tested but turned out to be less successful in delineating the borders of the shrubs and trees correctly. The shape and compactness settings were selected based on testing. After the segmentation, all the nDSM segments above an average height of 0.5 m were assigned to a temporary class named “high objects”, while the remaining segments (below an average height of 0.5 m) were assigned to the class ground. The reason for choosing the threshold height value of 0.5 m was that not all the nDSM segments were delineated correctly, hence some small trees and shrubs were segmented with adjacent ground included, thus resulting in some segments having a low average height value. An example showing the result of the segmentation can be seen in Figure 5.
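
eCognition's multiresolution segmentation is proprietary and cannot be reproduced exactly; as an illustrative open-source analogue, the sketch below segments the nDSM with scikit-image's Felzenszwalb algorithm and applies the 0.5 m mean-height rule described above. The segmentation parameters are assumptions, not translations of the settings in Table 1:

```python
import numpy as np
import rasterio
from scipy import ndimage
from skimage.segmentation import felzenszwalb

with rasterio.open("ndsm_05m.tif") as src:    # nDSM from the previous step
    ndsm = src.read(1).astype("float32")

# Illustrative stand-in for multiresolution segmentation of the nDSM.
segments = felzenszwalb(ndsm, scale=10, sigma=0.5, min_size=20)

# Segments with a mean height >= 0.5 m form the temporary class
# "high objects"; the remaining segments are assigned to "ground".
seg_ids = np.unique(segments)
mean_height = ndimage.mean(ndsm, labels=segments, index=seg_ids)
high_objects = np.isin(segments, seg_ids[mean_height >= 0.5])
```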

For the classification which was based only on the CIR orthoimages, the scale setting as well as the shape and compactness settings were changed (Table 1). For the purpose of finding the best segmentation scale, the ESP tool was used. The suggested scale levels were tested and the best segmentation result was found by using a scale of 92. Shape and compactness settings were selected based on testing. An example showing the result of the segmentation can be seen in Figure 6.

3.3. Classification Tree and Training Data

To select segments for the classification tree to train on, a total of 114 reference points, manually placed and classified on the basis of the CIR orthoimages and RGB orthoimages recorded in spring 2010 (0.125 m resolution), were located inside tile number 4. The 114 points (Figure 7) were divided into the three classes (with the number of points in brackets): built-up (27), shrubs and trees (28), and ground, which was subdivided into the following subclasses: vegetated ground (19), non-vegetated ground (17), and water (23). The point dataset was used in eCognition as a thematic layer to select training segments from the CIR orthoimages. All segments with a point located inside were selected as training data. However, for the analysis based on the nDSM and CIR orthoimages, only the elevated training data (built-up, and shrubs and trees) were applied, as the objects with an average elevation of less than 0.5 m had already been assigned to the class ground based on the nDSM segmentation. For the classification based only on the CIR orthoimages, all training data were applied. Two examples showing the result of the segmentation and the selection of segments for training are shown in Figures 5 and 6.

The CART function in eCognition was then used to build a model (classification tree) based on the attribute information attached to the training segments. A separate CART analysis was run for each of the classifications based on the CIR orthoimages as input data. The features used for the CART analysis were the same in both analyses and were chosen to cover a broad selection of features which could be relevant to the classification, e.g., mean values, standard deviations, ratios, various texture and geometry attributes and two custom-made features: SUM/NIR and NDVI. The SUM/NIR feature was included because initial tests of different customized features had shown that it was effective for separating buildings from shrubs and trees. Hence, it was included as a supplement to NDVI, which is known to be effective for separating non-vegetated areas (e.g., built-up areas) from vegetated areas [34,48]. An overview of the selected features is presented in Table 2.
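
The two custom features are defined on the per-segment band means (SUM/NIR as the sum of the band means divided by the NIR mean, cf. Section 4.1; NDVI on the NIR and red bands). A minimal per-segment sketch, assuming NDVI is likewise computed on band means:

```python
import numpy as np
from scipy import ndimage

def custom_features(segments, nir, red, green):
    """Per-segment SUM/NIR and NDVI, following the definitions in the text.

    SUM/NIR = (mean_NIR + mean_red + mean_green) / mean_NIR
    NDVI    = (mean_NIR - mean_red) / (mean_NIR + mean_red)  # on band means (assumed)
    """
    ids = np.unique(segments)
    m_nir = ndimage.mean(nir, labels=segments, index=ids)
    m_red = ndimage.mean(red, labels=segments, index=ids)
    m_grn = ndimage.mean(green, labels=segments, index=ids)
    sum_nir = (m_nir + m_red + m_grn) / m_nir
    ndvi = (m_nir - m_red) / (m_nir + m_red)
    return dict(zip(ids, zip(sum_nir, ndvi)))
```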

The features selected for the CART analysis were then used to find the best split between the different classes representing the training segments. The CART analysis in eCognition is based on algorithms described by Breiman et al. [49–51]. This includes the application of Gini’s diversity index as a splitting rule [49]. Gini’s diversity index is defined as:

$$\mathrm{Impurity}(t) = \sum_{i \neq j} p(i \mid t)\, p(j \mid t)$$
where t is the node in the tree, and $p(i \mid t)$ is the proportion of cases $x_n \in t$ belonging to class i (x is a measurement vector, i.e., a vector of features for a training segment in our case). The splitting rule attempts to find the largest homogeneous category within the dataset (the one which reduces the node impurity the most) and to isolate it from the remainder of the data. Subsequent nodes are then segregated in the same manner until further divisions are not possible, or the tree reaches a predetermined maximum depth. In this study, the maximum depth was set to 10. Furthermore, the minimum number of samples (segments) for producing a split was set to 10. Finally, the number of cross-validation folds was also set to 10. To explore the effect of the different input features on the classification, some tests were also performed by varying the number and combination of features applied. Based on the result of the classification tree analysis, threshold values for separating the classes could subsequently be implemented as a rule set in eCognition.
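
This configuration maps directly onto scikit-learn's decision tree, which can serve as an open-source stand-in for the eCognition/OpenCV CART step; the training matrix below is a random placeholder, and the feature names are illustrative:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier, export_text

# Placeholder training data: one row per training segment, one column per
# feature; labels 0 = ground, 1 = shrubs and trees, 2 = built-up.
feature_names = ["mean_nir", "mean_red", "ratio_red", "glcm_homogeneity",
                 "ndvi", "sum_nir"]
X = np.random.rand(114, len(feature_names))
y = np.random.randint(0, 3, size=114)

# Gini splitting, maximum depth 10, minimum 10 samples per split and
# 10-fold cross-validation, as parameterized in the study.
cart = DecisionTreeClassifier(criterion="gini", max_depth=10, min_samples_split=10)
print("CV accuracy:", cross_val_score(cart, X, y, cv=10).mean())

# The fitted splits can be read off and re-implemented as if-then rules,
# mirroring how the thresholds were transferred into the eCognition rule set.
cart.fit(X, y)
print(export_text(cart, feature_names=feature_names))
```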

For the classification based only on the CIR orthoimages, a final classification procedure was applied at the end of the rule set, in addition to the rules derived from the classification tree analysis. The segmentation had produced some long, slim objects which mainly consisted of ground elements (e.g., road objects), and several of these had been misclassified into the two classes shrubs and trees and built-up. To reclassify them to the class ground, a procedure was run using a feature in eCognition which calculates the roundness (similarity to an ellipse). This feature effectively identified the long, slim objects, which were then reclassified.
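
eCognition's roundness feature is not available in open-source form; a rough analogue using the major/minor axis ratio of each object's fitted ellipse is sketched below. Both the elongation measure and the threshold are assumptions, not the feature and value used in the study:

```python
from skimage.measure import label, regionprops

def reclassify_slim_objects(class_map, target_class, ground_class, max_elongation=4.0):
    """Send long, slim objects of target_class back to ground_class.

    Elongation (major/minor axis ratio of the fitted ellipse) stands in
    for eCognition's roundness feature; max_elongation is an assumed,
    untuned threshold.
    """
    out = class_map.copy()
    for region in regionprops(label(class_map == target_class)):
        if region.minor_axis_length == 0:
            continue
        if region.major_axis_length / region.minor_axis_length > max_elongation:
            out[region.coords[:, 0], region.coords[:, 1]] = ground_class
    return out
```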

3.4. Thematic Accuracy Assessment

Congalton and Green [52] suggest that objects should be used as sampling units for the thematic accuracy assessment of object-based classification results. A few approaches have been applied, e.g., [53]; however, so far, no specific method has been adopted by the scientific community [54]. Furthermore, due to the two different input datasets and segmentation scales applied in this study, objects as sampling units were not straightforward to use. Instead, we applied a pixel-based accuracy assessment based on digitized reference data.

To acquire a representative selection of reference data, we divided the study area into 185 cells of 260 m × 240 m and selected 20 of them randomly, representing 10.8% of the area (Figure 8). The land cover in each of the 20 cells was then manually classified and digitized according to the three classes—shrubs and trees, built-up and ground—by use of high resolution RGB orthoimages (0.125 m resolution) recorded in spring 2010. Subsequently, the digitized land cover was converted to raster format with a cell size of 0.5 m. The number of pixels selected for reference, compared to the number of pixels classified for each of the three classes, is presented in Tables 3 and 4 for each of the two classifications. The accuracy calculations consisted of producer’s accuracy, user’s accuracy and overall accuracy [52]. The producer’s accuracy is the probability that a reference sample (a photo-interpreted land cover class in this case) is correctly classified (i.e., a measure of omission error). In contrast, the user’s accuracy is the probability that a sample from the classified land cover class actually matches the reference data (i.e., a measure of commission error).
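
All three measures follow directly from the confusion matrix; a minimal sketch, with rows as reference labels and columns as classified labels (class codes 0 to 2 assumed):

```python
import numpy as np

def accuracy_report(reference, classified, n_classes=3):
    """Producer's, user's and overall accuracy from two label rasters.

    reference, classified : integer arrays of equal shape holding class
    codes 0..n_classes-1 (here, the pixels of the 20 reference cells).
    """
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(cm, (reference.ravel(), classified.ravel()), 1)
    producers = np.diag(cm) / cm.sum(axis=1)   # 1 - omission error
    users = np.diag(cm) / cm.sum(axis=0)       # 1 - commission error
    overall = np.diag(cm).sum() / cm.sum()
    return cm, producers, users, overall
```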

4. Results

4.1. Classification Tree

The results of the classification tree analyses are presented in Figures 9 and 10. The classification tree for the CIR orthoimages, where all training data were applied, resulted in four features: GLCM homogeneity (a texture measure), GLDV contrast (a texture measure), radius of largest enclosed ellipse (describes how similar a segment is to an ellipse) and ratio red band (mean value in the red band divided by the sum of the mean values in all bands). For a detailed description of the features see [50]. The classification tree based only on the elevated training data resulted in one feature: SUM/NIR (the sum of the mean values in all bands divided by the mean value in the NIR band). These features and the matching threshold values were applied for classification as “if-then” rules in eCognition.

4.2. Accuracy Assessment Results

The results of the thematic accuracy assessments of the two classifications are presented in Tables 5 and 6, while the classification results are presented as maps in Figures 11 and 12. As the tables show, the overall classification result improved significantly when the nDSM was included, from 82.0% to 96.7%. Looking specifically at the successful classification of shrubs and trees and built-up, the classification result was significantly improved as well: producer’s accuracy for shrubs and trees increased from 81.2% to 93.7%, and for built-up from 26.7% to 92.6%. The improvements are also evident from the maps (Figures 11 and 12), where one of the main problems of the CIR orthoimage-based classification is the misclassification of ground objects (especially agricultural fields) into shrubs and trees and built-up. These misclassifications are also apparent from the confusion matrix, resulting in low producer’s and user’s accuracies for built-up (26.7% and 11.5%, respectively) and a low user’s accuracy for shrubs and trees when only CIR orthoimages were used. This result indicates that despite a relatively high producer’s accuracy for shrubs and trees, the classification based on the CIR orthoimages alone did not work well. In Figures 13–16, two examples are presented at large scale, showing the differences in classification quality.

5. Discussion

5.1. Classification Results

The improved classification results obtained by adding LiDAR-derived data, here in the form of an nDSM, are in line with other studies where similar results have been achieved, e.g., [25,41]. As for the correct classification of shrubs and trees, and distinguishing them from built-up and ground, the study clearly documents the effect of adding LiDAR by achieving classification results (producer’s and user’s accuracy, respectively) of 93.7% and 89.7% with LiDAR included, and 81.2% and 52.9% without LiDAR. These overall results resemble those obtained in a similar study by Matikainen and Karila [41], in which very high user’s and producer’s accuracies were achieved for similar classes (trees and buildings) in a residential area with a similar setup (LiDAR combined with multispectral orthoimages and use of a classification tree). A classification based only on the multispectral orthoimages was also tested separately in their study, with comparable segmentation settings applied; they reached an overall accuracy of 74.3%, compared to the 82.0% obtained in our research. Though the producer’s accuracy for shrubs and trees in this study reached an acceptably high level when LiDAR data were not used, the user’s accuracy revealed that the applied combination of features for classification did not work satisfactorily, as too many pixels belonging to the ground and built-up classes were included in the shrub and tree class.

The poor classification result for shrubs and trees using only CIR orthoimages is in line with the findings of other studies. The main problem is that shrubs often show a spectral response similar to, e.g., grassland, which makes them difficult to distinguish [20,21]. However, good classification results using OBIA in a similar landscape type have also been obtained, as exemplified by Campos et al. [19], who reached an overall accuracy of 90% (and producer’s and user’s accuracies of 95% and 88%, respectively, for trees). In their study, a Quickbird image and a nearest neighbour (NN) classification approach were applied. Such high accuracy assessment results are, however, rare, and as Platt and Rapoza [55] also state in their evaluation study of OBIA for land cover classification: “the object-oriented paradigm is no magic bullet.”

In this study, a CART-based approach was applied to derive useful features and threshold values for classification. A large number of features could be tested in a short amount of time in order to derive the most effective features for classification, and the result of the classification tree analysis could easily be implemented subsequently as a series of if-then rules in eCognition. The features derived from the CART analysis were very different between the two classifications, illustrating the effect of using different data, a different number of classes, and different segmentation scales. Despite the many features included in the CART analysis, only one feature (SUM/NIR) was returned to distinguish shrubs and trees effectively from built-up in the classification based on the nDSM and CIR orthoimages. For the classification based only on the CIR orthoimages, the CART analysis yielded four features in total to separate the three classes. Two of these were texture features (GLCM homogeneity and GLDV contrast), which is in line with findings from other studies where texture features have been applied successfully for the classification of vegetation, especially trees and shrubs; see, for example, [48]. Though the classification result based only on the CIR orthoimages was improved by finding the appropriate segmentation scale, an additional expert knowledge-based rule had to be added to achieve a more satisfactory classification result for shrubs and trees. Applying more expert knowledge-based rules would likely have improved the classification result further: for instance, adding a threshold object size for buildings would prevent large ground segments (agricultural fields) from being misclassified as built-up, which is one of the reasons why the accuracy assessment result for built-up turned out rather poor. Hence, CART as a stand-alone tool is evidently not sufficient in this case, and a CART-based classification approach using only CIR orthoimages would probably benefit from being combined with more sophisticated rule-set development, including the application of several object levels (a hierarchical classification approach).

Although the best segmentation scale was selected as the basis for classification, some of the CIR orthoimage-based training segments showed a variation in land cover with, e.g., some shrubs included in grassland segments due to the similar spectral response of grassland and shrubs. This may have affected the CART classifier and, as a consequence, have contributed to the less accurate classification result based on the CIR orthoimages.

For both classifications, different combinations of features selected as input for the CART analysis were tested to explore their effect on classifying the objects (not part of the results section). Here, we found that for the classification of shrubs and trees (and buildings) based on the nDSM and CIR orthoimages, the features Ratio NIR and NDVI yielded classification results very close to the result obtained by applying SUM/NIR (verified only by visual inspection, though). The similar result achieved by using Ratio NIR is not surprising, as both Ratio NIR and SUM/NIR compare the reflectance in the NIR band with the reflectance in all bands. As regards the result based on the NDVI feature, this is in accordance with results from other studies where NDVI has been shown to work well for separating vegetated areas from non-vegetated areas; see, for example, [34,48].

Neither SUM/NIR, NDVI nor Ratio NIR seemed to play any role in the classification based only on the CIR orthoimages. This is probably caused by the similarity of the spectral reflectance of grassland and of shrubs and trees, which makes these three features less applicable for classifying such objects in this case.

5.2. Application of Accuracy Assessment Methods

The thematic accuracy assessment was conducted using a reference sample of digitized land cover, which was subsequently converted to pixels. Initially, an object-based accuracy assessment approach was tested, as objects are suggested by Congalton and Green [52] as sampling units for thematic accuracy assessment of object-based classifications. However, the object-based accuracy assessment approach turned out to show some clear weaknesses compared to the pixel-based approach. First of all, the different input datasets and scales used for segmentation yielded very different object populations to sample from (many more objects in the segmentation based on the nDSM than in the segmentation based on the CIR orthoimages). This problem was avoided in the pixel-based approach, as the total number of pixels was the same in both classifications (the same pixel resolution was applied), and though the number of pixels in each of the three classes was not evenly distributed between the two classifications, the differences were only minor. Secondly, some of the segments which had been segmented based on the CIR orthoimages showed land cover variation within the segment (e.g., mixed shrub and grassland, or mixed buildings and other impervious surfaces), which made them difficult to interpret in terms of whether they should be categorized as correctly classified or as misclassified. Because the issue of mixed land cover within a 0.5 m pixel is limited, this problem was reduced by using pixels. In light of these experiences, the applied study design needs to be considered carefully before using objects as sampling units for thematic accuracy assessment of different classifications. In a future study, it would be interesting to test the applicability of the combined thematic and spatial accuracy assessment approach suggested by Hernando et al. [53], which is based on object fate analysis.

5.3. Large Area Mapping and Monitoring Perspectives

Though the study was conducted in a mixed type of landscape which did not exclusively consist of grasslands, shrubs and trees, but also contained other landscape elements, it clearly shows the potential of combining LiDAR-derived data with CIR orthoimages for the effective classification of shrubs and trees, including their effective separation from grassland. Hence, for monitoring purposes, e.g., the abandonment of species-rich grasslands and the subsequent undesired spread of shrubs, the fusion of LiDAR-derived data with aerial CIR orthoimages has great potential. The capability of classifying small shrubs and trees, however, depends on the LiDAR data, especially the point density. A low average point density (one point per square metre or less) means that small shrubs and trees risk being missed by the laser pulses and thus going undetected. However, studies have shown that as few as 2 points/m² can be sufficient for the detection of individual trees [44]. In this study, an average point density of 13 points/m² was used, which worked well for the mapping of even small shrubs and trees (cf. Figures 13 and 15). In a future study, it would be interesting to test the effect on classification accuracy of reducing the point density, in order to find the minimum density required for successful classification of shrubs and trees.

A threshold height value of 1.8 m was applied during the classification process of the LiDAR points. This threshold height value could be lowered in order to include smaller trees and shrubs. However, this would risk taller herbaceous vegetation being included, which would probably reduce the classification accuracy due to tall weeds being misclassified as shrubs and trees, as they are difficult to distinguish based on the spectral information. Nevertheless, from a monitoring perspective, focusing on the management of grasslands, signs of abandonment and, hence, lack of management, are of key interest. Therefore, the presence of tall vegetation (either in the form of weeds or shrubs) is of general interest and thus a lower threshold height value for the creation of the nDSM could be considered.

Finally, regional or even national LiDAR data are required in order to apply a large area monitoring approach which includes LiDAR. Furthermore, the LiDAR data need to be updated regularly, preferably once every five or ten years, in order to make it possible to monitor changes (e.g., the appearance of shrubs). Acquiring this amount of data is expensive and time consuming. However, as more applications of LiDAR emerge, laser scanning of large areas will become increasingly common, thereby making the use of LiDAR for classification and monitoring purposes more realistic. Several countries in Europe already have LiDAR data on a national scale, including Denmark, where there are plans to update the national LiDAR data set every fifth year. In our study, the classification of high objects into shrubs and trees and built-up was based on imagery even when LiDAR data were used. This approach was selected because CIR orthoimages are acquired every year in Denmark and are, furthermore, known to be effective for distinguishing vegetation from built-up areas. However, LiDAR data also have high potential for classification, and features such as LiDAR texture could be tested in future applications.

In the monitoring of large areas, the use of CIR images is an advantage when it comes to the availability of imagery, since CIR images are generally more easily available than other frequently used remotely sensed data such as Quickbird and IKONOS, at least in Denmark, where CIR images are recorded every year. On the downside, aerial imagery such as CIR images often shows more inter-scene variation due to the circumstances under which it is recorded, and the scenes are smaller compared to satellite images. This can complicate the transferability of the developed rules, as a set of classification rules applied to two different images/scenes will not necessarily yield the same classification result [20]. Landscape variation, which is to be expected when monitoring large areas, also affects the transferability of rules. Hence, robust rules need to be developed to make them applicable to larger areas without having to spend too much time on revising them. This topic of transferability and robustness has recently been discussed in several studies related to OBIA [56–58]. The application of fuzzy rules could be a way of dealing with the transferability and robustness issue, as discussed in Hofmann et al. [57]. The use of fuzzy rules would therefore be interesting to explore in a future study, for instance, in the context of applying the methods from this study to larger areas in Denmark.

6. Conclusion

Classification of shrubs and trees on grassland habitats normally presents a challenge due to their spectral similarity to grassland. This study has documented that the classification accuracy based on object-based image analysis of CIR orthoimages improves significantly when LiDAR-derived data, here in the form of an nDSM, are included as ancillary information. The overall accuracy increased from 82.0% to 96.7%, and for shrubs and trees specifically, producer’s accuracy increased from 81.2% to 93.7% and user’s accuracy from 52.9% to 89.7%. Furthermore, the study documents that the use of CART is a quick and effective means of deriving features with accompanying threshold values, which can easily be implemented subsequently for classification.

Also, the CART analysis revealed that from a large number of available features, only one feature (SUM/NIR) was required for separating shrubs and trees effectively from buildings. For future grassland and shrub encroachment mapping and monitoring in Denmark, these results are promising, and the applied approach has the potential of being implemented for the processing of large areas. However, LiDAR data for large areas would be required, and high point density LiDAR data as applied in this study are expensive to collect. Therefore, future studies should focus on testing the minimum required point density for the classification of shrubs and trees. Furthermore, in the context of mapping and monitoring large areas, where issues of robustness and transferability of rules are faced, the development of fuzzy rules would also be relevant to explore in the future.

Acknowledgments

The authors would like to thank Holstebro Museum for their kind permission to use the LiDAR data. Thanks also to Jacob Møller Sørensen from NIRAS BlomInfo A/S for technical software support, and to Juha Hyyppä and Xiaowei Yu from the Finnish Geodetic Institute in Masala. The Academy of Finland is acknowledged for its financial support in the form of the project “Towards Improved Characterization of Map Objects.”

References

1. Keenleyside, C.; Tucker, G.M. Farmland Abandonment in the EU: An Assessment of Trends and Prospects; Report prepared for WWF; Institute for European Environmental Policy: London, UK, 2010.
2. Maestre, F.T.; Bowker, M.A.; Puche, M.D.; Belén Hinojosa, M.; Martínez, I.; García-Palacios, P.; Castillo, A.P.; Soliveres, S.; Luzuriaga, A.L.; Sánchez, A.M.; et al. Shrub encroachment can reverse desertification in semi-arid Mediterranean grasslands. Ecol. Lett. 2009, 12, 930–941.
3. Anthelme, F.; Villaret, J.; Brun, J. Shrub encroachment in the Alps gives rise to the convergence of sub-alpine communities on a regional scale. J. Veg. Sci. 2007, 18, 355–362.
4. Schmidt, A.M.; Piórkowski, H.; Bartoszuk, H. Remote Sensing Techniques and Geographic Information Systems for Wetland Conservation and Management: Monitoring Scrub Encroachment in Biebrza National Park; Alterra-rapport 174; Alterra, Green World Research: Wageningen, The Netherlands, 2000.
5. Cousins, S.A.O. Analysis of land-cover transitions based on 17th and 18th century cadastral maps and aerial photographs. Landscape Ecol. 2001, 16, 41–54.
6. Sickel, H.; Ihse, M.; Norderhaug, A.; Sickel, M.A.K. How to monitor semi-natural key habitats in relation to grazing preferences of cattle in mountain summer farming areas. Landscape Urban Plan. 2004, 67, 67–77.
7. Pykälä, J.; Luoto, M.; Heikkinen, R.K.; Kontula, T. Plant species richness and persistence of rare plants in abandoned semi-natural grasslands in northern Europe. Basic Appl. Ecol. 2005, 6, 25–30.
8. Stoate, C.; Báldi, A.; Beja, P.; Boatman, N.D.; Herzon, I.; van Doorn, A.; de Snoo, G.R.; Rakosy, L.; Ramwell, C. Ecological impacts of early 21st century agricultural change in Europe—A review. J. Environ. Manage. 2009, 91, 22–46.
9. Fredshavn, J.R.; Ejrnæs, R.; Damgaard, C.; Nielsen, K.E.; Nygaard, B. Terrestriske Habitatnaturtyper 2004–2010; Videnskabelig Rapport nr. 7, NOVANA; Aarhus Universitet, DCE—Nationalt Center for Miljø og Energi: Aarhus, Denmark, 2011; p. 168.
10. Levin, G.; Jepsen, M.R.; Blemmer, M. Basemap. Technical Documentation of a Model for Elaboration of a Land-Use and Land-Cover Map for Denmark; Aarhus University, DCE—Danish Centre for Environment and Energy: Aarhus, Denmark, 2012; p. 47.
11. Levin, G.; Normander, B. Arealanvendelse i Danmark Siden Slutningen af 1800-Tallet (In Danish with English summary); Faglig rapport fra DMU nr. 682; Danmarks Miljøundersøgelser, Aarhus Universitet: Aarhus, Denmark, 2008; p. 46.
12. Baldock, D.; Beaufoy, G.; Brouwer, F.; Godeschalk, F. Farming at the Margins. Abandonment or Redeployment of Agricultural Land in Europe; Institute for European Environmental Policy (IEEP), London, and Agricultural Economics Research Institute (LEI-KLO): The Hague, The Netherlands, 1996; p. 202.
13. Chopping, M.; Lihong, S.; Rango, A.; Martonchik, J.V.; Peters, D.P.C.; Laliberte, A. Remote sensing of woody shrub cover in desert grasslands using MISR with a geometric-optical canopy reflectance model. Remote Sens. Environ. 2008, 112, 19–34.
14. Laliberte, A.; Rango, A.; Havstad, K.; Paris, J.; Beck, R.; Mcneely, R.; Gonzalez, A. Object-oriented image analysis for mapping shrub encroachment from 1937 to 2003 in southern New Mexico. Remote Sens. Environ. 2004, 93, 198–210.
15. Laliberte, A.S.; Fredrickson, E.L.; Rango, A. Combining decision trees with hierarchical object-oriented image analysis for mapping arid rangelands. Photogramm. Eng. Remote Sensing 2007, 73, 197–207.
16. Mcglynn, I.; Okin, G. Characterization of shrub distribution using high spatial resolution remote sensing: Ecosystem implications for a former Chihuahuan Desert grassland. Remote Sens. Environ. 2006, 101, 554–566.
17. Smith, A.M.S.; Strand, E.K. Production of vegetation spatial-structure maps by per-object analysis of juniper encroachment in multitemporal aerial photographs. Can. J. Remote Sens. 2008, 34, 1–18.
18. Hantson, W.; Kooistra, L.; Slim, P.A. Alien and Invasive Woody Species in the Dunes of the Wadden Sea Island of Vlieland: A Remote Sensing Approach; Alterra-report 2101; Alterra: Wageningen, The Netherlands, 2010; p. 54.
19. Campos, N.; Lawrence, R.; McGlynn, B.; Gardner, K. Effects of LiDAR-Quickbird fusion on object-oriented classification of mountain resort development. J. Appl. Remote Sens. 2010, 4, 1–14.
20. Rokitnicki-Wojcik, D.; Wei, A.; Chow-Fraser, P. Transferability of object-based rule sets for mapping coastal high marsh habitat among different regions in Georgian Bay, Canada. Wetlands Ecol. Manage. 2011, 19, 223–236.
21. Dillabaugh, K.A.; King, D.J. Riparian marshland composition and biomass mapping using Ikonos imagery. Can. J. Remote Sens. 2008, 34, 143–158.
22. Lee, D.S.; Shan, J. Combining Lidar elevation data and IKONOS multispectral imagery for coastal classification mapping. Mar. Geodesy 2003, 26, 117–127.
23. Bork, E.; Su, J. Integrating LIDAR data and multispectral imagery for enhanced classification of rangeland vegetation: A meta analysis. Remote Sens. Environ. 2007, 111, 11–24.
24. Geerling, G.; Labrador-Garcia, M.; Clevers, J.G.P.; Ragas, A.M.; Smits, A.J. Classification of floodplain vegetation by data fusion of spectral (CASI) and LiDAR data. Int. J. Remote Sens. 2007, 28, 4263–4284.
25. Kempeneers, P.; Deronde, B.; Provoost, S.; Houthuys, R. Synergy of airborne digital camera and lidar data to map coastal dune vegetation. J. Coastal Res. 2009, 53, 73–82.
26. Song, J.H.; Han, S.H.; Yu, K.; Kim, Y.I. Assessing the possibility of land-cover classification using Lidar intensity data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2002, 34, 259–262.
27. Brennan, R.; Webster, T.L. Object-oriented land cover classification of Lidar-derived surfaces. Can. J. Remote Sens. 2006, 32, 162–172.
28. Antonarakis, A.S.; Richards, K.S.; Brasington, J. Object-based land cover classification using airborne LiDAR. Remote Sens. Environ. 2008, 112, 2988–2998.
29. Yoon, J.S.; Shin, J.I.; Lee, K.S. Land cover characteristics of airborne LiDAR intensity data: A case study. IEEE Geosci. Remote Sens. Lett. 2008, 5, 801–805.
30. Korpela, I.; Koskinen, M.; Vasander, H.; Holopainen, M.; Minkkinen, K. Airborne small-footprint discrete-return LiDAR data in the assessment of boreal mire surface patterns, vegetation, and habitats. Forest Ecol. Manage. 2009, 258, 1549–1566.
31. Hambidge, C.; Petchey, S. Mapping the Extent of Scrub Encroachment on Heathland Using LiDAR Data. In Proceedings of the International Conference of Spatial Ecology and Conservation, Birmingham, UK, 5–7 September 2011.
32. Blaschke, T. Object based image analysis for remote sensing. ISPRS J. Photogramm. 2010, 65, 2–16.
33. Hodgson, M.; Jensen, J.; Tullis, J. Synergistic use of Lidar and color aerial photography for mapping urban parcel imperviousness. Photogramm. Eng. Remote Sensing 2003, 69, 973–980.
34. Zhou, W.; Troy, A. An object-oriented approach for analysing and characterizing urban landscape at the parcel level. Int. J. Remote Sens. 2008, 29, 3119–3135.
35. Hartfield, K.A.; Landau, K.I.; van Leeuwen, W.J.D. Fusion of high resolution aerial multispectral and LiDAR data: Land cover in the context of urban mosquito habitat. Remote Sens. 2011, 3, 2364–2383.
36. Arroyo, L.A.; Johansen, K.; Armston, J.; Phinn, S. Integration of LiDAR and QuickBird imagery for mapping riparian biophysical parameters and land cover types in Australian tropical savannas. Forest Ecol. Manage. 2010, 259, 598–606.
37. Sasaki, T.; Imanishi, J.; Ioki, K.; Morimoto, Y.; Kitada, K. Object-based classification of land cover and tree species by integrating airborne LiDAR and high spatial resolution imagery data. Landscape Ecol. Eng. 2011, 8, 157–171.
38. Friedl, M.; Brodley, C. Decision tree classification of land cover from remotely sensed data. Remote Sens. Environ. 1997, 61, 399–409.
39. Lawrence, R.; Wright, A. Rule-based classification systems using classification and regression tree (CART) analysis. Photogramm. Eng. Remote Sensing 2001, 67, 1137–1142.
40. Thomas, N.; Hendrix, C.; Congalton, R. A comparison of urban mapping methods using high-resolution digital imagery. Photogramm. Eng. Remote Sensing 2003, 69, 963–972.
41. Matikainen, L.; Karila, K. Segment-based land cover mapping of a suburban area—Comparison of high-resolution remotely sensed datasets using classification trees and test field points. Remote Sens. 2011, 3, 1777–1804.
42. Trimble eCognition. Available online: http://www.ecognition.com/ (accessed on 6 August 2012).
43. Terrasolid TerraScan. Available online: www.terrasolid.fi (accessed on 7 August 2012).
44. Shan, J.; Toth, C.K. Topographic Laser Ranging and Scanning—Principles and Processing; CRC Press: Boca Raton, FL, USA, 2008.
45. Axelsson, P. Processing of laser scanner data—Algorithms and applications. ISPRS J. Photogramm. 1999, 54, 138–147.
46. Baatz, M.; Schäpe, A. Multiresolution Segmentation: An Optimization Approach for High Quality Multi-Scale Image Segmentation. In Angewandte Geographische Informationsverarbeitung XII; Strobl, J., Ed.; Herbert Wichmann Verlag: Heidelberg, Germany, 2000; pp. 12–23.
47. Dragut, L.; Tiede, D.; Levick, S.R. ESP: A tool to estimate scale parameter for multiresolution image segmentation of remotely sensed data. Int. J. Geogr. Inf. Sci. 2010, 24, 859–871.
48. Carleer, A.P.; Wolff, E. Urban land cover multi-level region-based classification of VHR data by selecting relevant features. Int. J. Remote Sens. 2006, 27, 1035–1051.
49. Breiman, L.; Friedman, J.; Olshen, R.; Stone, C. Classification and Regression Trees; Wadsworth International Group: Belmont, CA, USA, 1984; p. 358.
50. Trimble. eCognition Developer 8.7.2—Reference Book; Definiens AG: Munich, Germany, 2012; p. 440.
51. OpenCV Open Source Computer Vision. Available online: http://opencv.willowgarage.com/documentation/cpp/ml_decision_trees.html#paper_Breiman84 (accessed on 24 September 2012).
52. Congalton, R.G.; Green, K. Assessing the Accuracy of Remotely Sensed Data—Principles and Practices, 2nd ed.; CRC Press: Boca Raton, FL, USA, 2009; p. 183.
53. Hernando, A.; Tiede, D.; Albrecht, F.; Lang, S. Spatial and thematic assessment of object-based forest stand delineation using an OFA-matrix. Int. J. Appl. Earth Obs. Geoinf. 2012, 19, 214–225.
54. Radoux, J.; Bogaert, P.; Fasbender, D.; Defourny, P. Thematic accuracy assessment of geographic object-based image classification. Int. J. Geogr. Inf. Sci. 2011, 25, 895–911.
55. Platt, R.; Rapoza, L. An evaluation of an object-oriented paradigm for land use/land cover classification. Prof. Geogr. 2008, 60, 87–100.
56. Tiede, D.; Lang, S.; Hölbling, D.; Füreder, P. Transferability of OBIA Rulesets for IDP Camp Analysis in Darfur. In Proceedings of GEOBIA 2010—Geographic Object-Based Image Analysis, Ghent, Belgium, 29 June–2 July 2010.
57. Hofmann, P.; Blaschke, T.; Strobl, J. Quantifying the robustness of fuzzy rule sets in object-based image analysis. Int. J. Remote Sens. 2011, 32, 7359–7381.
58. Salehi, B.; Zhang, Y.; Zhong, M.; Dey, V. Object-based classification of urban areas using VHR imagery and height points ancillary data. Remote Sens. 2012, 4, 2256–2276.
Figure 1. The study area is located in the northwestern part of Denmark and covers 14 km². The two photographs to the right illustrate typical examples of shrub-encroached grassland from the study area. The applied coordinate system is the ETRS 1989 and the map projection is UTM zone 32.
Figure 2. The study area divided into subareas (tiles).
Figure 3. Overview of the methodology.
Figure 4. nDSM of the study area with a zoomed-in extract showing shrubs, trees and two buildings.
Figure 5. The excerpt shows the segmentation result on the CIR orthoimages. The nDSM was applied for segmentation in this case using a scale of 10. Objects selected for training (only elevated training data applied) are also shown.
Figure 6. The excerpt shows the segmentation result based on the CIR orthoimages using a scale of 92. Objects selected for training (all training data applied) are also shown.
Figure 7. Tile 4 and the location of points used for selecting training data (segments) applied as input for the CART analysis. For the CART analysis related to the classification based on the nDSM and the CIR orthoimages, only the elevated training data were used (shrubs and trees and buildings), whereas all training data were applied in the CART analysis related to the classification based only on the CIR orthoimages.
Figure 8. The grid consisting of 260 × 240 m cells of which 20 were selected randomly for reference data sampling. Tile 4 is used for training and is therefore not included.
Figure 9. Result of the classification tree analysis used for classification of the CIR orthoimages with all the training data applied.
Figure 10. Result of the classification tree analysis used for classification of the CIR orthoimages, where the nDSM had initially been applied to classify ground objects (height < 0.5 m).
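The classification tree analyses in Figures 9 and 10 can be reproduced with any CART-style decision tree. The fragment below is a hedged sketch, not the authors' actual workflow: the feature matrix, labels, and values are illustrative stand-ins for the exported object features, and scikit-learn's DecisionTreeClassifier serves as the CART implementation.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Placeholder training data standing in for exported object features:
    # columns = [SUM/NIR, NDVI], one row per image object (illustrative values).
    feature_names = ["SUM_div_NIR", "NDVI"]
    X = np.array([[1.6, 0.55], [1.7, 0.60], [2.9, 0.10], [3.1, 0.05],
                  [2.0, 0.35], [1.5, 0.65], [3.0, 0.02], [2.1, 0.30]])
    y = ["shrubs/trees", "shrubs/trees", "built-up", "built-up",
         "ground", "shrubs/trees", "built-up", "ground"]

    tree = DecisionTreeClassifier(
        criterion="gini",     # CART-style impurity measure
        max_depth=3,          # keep the tree small and interpretable
        random_state=0,
    )
    tree.fit(X, y)

    # Inspect which features the tree actually selected (cf. Figures 9 and 10).
    print(export_text(tree, feature_names=feature_names))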
Figure 11. Result of the classification based on the nDSM and CIR orthoimages and the feature SUM/NIR derived from the CART analysis. The squares outlined in black mark the examples shown in Figures 13–16.
Figure 12. Result of the classification based on the CIR orthoimages and the four features derived from the CART analysis. The squares outlined in black mark the examples shown in Figures 13–16.
Figure 13. This example (1) shows how the combination of the nDSM, the CIR orthoimages, and the SUM/NIR feature has effectively classified and separated shrubs and trees, as well as buildings.
Figure 14. This example (1) shows the result of the classification based on the CIR orthoimages and the four features derived from the CART analysis. Compared to Figure 13, many ground objects have been misclassified as shrubs and trees, and only one building has been correctly classified.
Figure 15. This example (2) shows that the nDSM combined with the CIR orthoimages and the SUM/NIR feature has effectively classified and outlined the shrubby vegetation.
Figure 16. This example (2) shows the result of the classification based on the CIR orthoimages and the four features derived from the CART analysis. As in Figure 14, many ground objects have been misclassified as shrubs and trees.
Table 1. Overview of the multiresolution settings used for segmentation of the nDSM and the CIR orthoimages.

Classification | Segmentation settings
nDSM and CIR orthoimages | Level 1; segmented on the nDSM with scale 10, shape 0.1, compactness 0.9
Only CIR orthoimages | Level 1; segmented on the CIR orthoimages with scale 92, shape 0.5, compactness 0.5
Table 2. Overview of the selected features which were used in the two CART analyses. Only CIR orthoimages were used as input data.

Features | Description
Customized attributes | SUM/NIR: SUM (NIR + red + green) divided by NIR. NDVI: (NIR − red)/(NIR + red).
Mean values | The mean value represents the mean brightness of an image object within a single band.
Maximum difference | Minimum mean value of an object subtracted from its maximum value; the means of all bands belonging to an object are compared with each other, and the result is divided by the brightness.
Standard deviations | The standard deviation of all pixels which form an image object within a band.
Brightness | Sum of mean values in all bands divided by the number of bands.
Ratios | The amount that a given band contributes to the total brightness.
Texture after Haralick | GLCM (Grey Level Co-occurrence Matrix): describes how different combinations of pixel values occur within an object. Included GLCM features (using the mean of all layers): GLCM Homogeneity, GLCM Contrast, GLCM Dissimilarity, GLCM Entropy, GLCM Angular 2nd moment. GLDV (Grey Level Difference Vector): the sum of the diagonals of the grey level co-occurrence matrix. Included GLDV features (using the mean of all layers): GLDV Angular 2nd moment, GLDV Entropy, GLDV Mean, GLDV Contrast.
Geometry, extent | Area, Border length, Length, Length/Width, Width.
Geometry, shape | Asymmetry, Border index, Compactness, Density, Elliptic fit, Radius of largest enclosed ellipse, Radius of smallest enclosing ellipse, Rectangular fit, Roundness, Shape index.
Geometry, shape based on polygons | Area (excluding inner polygons), Area (including inner polygons), Average length of edges, Compactness, Length of longest edge, Number of edges, Number of inner objects, Perimeter, Standard deviation of length of edges.
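The two customized attributes in Table 2 are plain band arithmetic and easy to recompute outside the object-based software. The snippet below is a small illustration using made-up per-object mean band values; the variable names and numbers are assumptions, not data from the study.

    import numpy as np

    # Per-object mean band values from the CIR orthoimages (illustrative numbers).
    nir   = np.array([120.0, 180.0,  90.0])
    red   = np.array([ 60.0,  40.0,  85.0])
    green = np.array([ 55.0,  50.0,  80.0])

    # Customized attributes from Table 2:
    sum_div_nir = (nir + red + green) / nir      # SUM/NIR
    ndvi        = (nir - red) / (nir + red)      # NDVI

    # Brightness as defined in Table 2: mean of the band means.
    brightness = (nir + red + green) / 3.0

    print(sum_div_nir, ndvi, brightness)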
Table 3. The number of classified pixels and reference pixels in each of the three classes for the classification where both the nDSM and the CIR orthoimages were used.

Classes | Classified pixels | Reference sample pixels | Reference pixels as percent of classified pixels
Shrubs and trees | 9,669,595 | 928,742 | 9.6%
Built-up | 272,559 | 20,147 | 7.4%
Ground | 36,137,846 | 4,043,111 | 11.2%
Total | 46,080,000 | 4,992,000 | 10.8%
Table 4. The number of classified pixels and reference pixels in each of the three classes for the classification where only the CIR orthoimages were used.

Classes | Classified pixels | Reference sample pixels | Reference pixels as percent of classified pixels
Shrubs and trees | 13,446,880 | 928,742 | 6.9%
Built-up | 816,828 | 20,147 | 2.5%
Ground | 31,816,292 | 4,043,111 | 12.7%
Total | 46,080,000 | 4,992,000 | 10.8%
Table 5. Confusion matrix and accuracy estimates for the classification based on the nDSM and CIR orthoimages and the feature SUM/NIR derived from the CART analysis. Rows give the classification result; columns give the reference data.

Classification | Shrubs and trees | Built-up | Ground | Sum | User's accuracy
Shrubs and trees | 870,520 | 339 | 99,786 | 970,645 | 89.7%
Built-up | 511 | 18,662 | 5,738 | 24,911 | 74.9%
Ground | 57,711 | 1,146 | 3,937,587 | 3,996,444 | 98.5%
Sum | 928,742 | 20,147 | 4,043,111 | 4,992,000 |
Producer's accuracy | 93.7% | 92.6% | 97.4% | |

Overall accuracy: 4,826,769/4,992,000 = 96.7%
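The accuracy estimates in Tables 5 and 6 follow directly from the confusion matrix: user's accuracy divides each diagonal entry by its row sum, producer's accuracy divides it by its column sum, and overall accuracy divides the diagonal total by all reference pixels. As a quick check, the following snippet reproduces the Table 5 figures from the raw counts.

    import numpy as np

    # Confusion matrix from Table 5; rows = classification, columns = reference.
    # Class order: shrubs and trees, built-up, ground.
    cm = np.array([[870_520,    339,    99_786],
                   [    511, 18_662,     5_738],
                   [ 57_711,  1_146, 3_937_587]])

    users     = np.diag(cm) / cm.sum(axis=1)  # per classified class (row sums)
    producers = np.diag(cm) / cm.sum(axis=0)  # per reference class (column sums)
    overall   = np.diag(cm).sum() / cm.sum()

    print(users)      # ~ [0.897, 0.749, 0.985]
    print(producers)  # ~ [0.937, 0.926, 0.974]
    print(overall)    # ~ 0.967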
Table 6. Confusion matrix and accuracy estimates for the classification based only on the CIR orthoimages and the four features derived from the CART analysis. Rows give the classification result; columns give the reference data.

Classification | Shrubs and trees | Built-up | Ground | Sum | User's accuracy
Shrubs and trees | 754,455 | 2,932 | 668,531 | 1,425,918 | 52.9%
Built-up | 763 | 5,386 | 40,708 | 46,857 | 11.5%
Ground | 173,524 | 11,829 | 3,333,872 | 3,519,225 | 94.7%
Sum | 928,742 | 20,147 | 4,043,111 | 4,992,000 |
Producer's accuracy | 81.2% | 26.7% | 82.5% | |

Overall accuracy: 4,093,713/4,992,000 = 82.0%
