Open Access
18 May 2018
Crop classification from Sentinel-2-derived vegetation indices using ensemble learning
Rei Sonobe, Yuki Yamaya, Hiroshi Tani, Xiufeng Wang, Nobuyuki Kobayashi, Kan-ichiro Mochizuki
Abstract
The identification and mapping of crops are important for estimating potential harvests as well as for agricultural field management. Optical remote sensing is one of the most attractive options because it offers vegetation indices and some data are distributed free of charge. In particular, Sentinel-2A, which is equipped with a multispectral sensor (MSI) with blue, green, red, and near-infrared-1 bands at 10 m; red edge 1 to 3, near-infrared-2, and shortwave infrared 1 and 2 bands at 20 m; and three atmospheric bands (bands 1, 9, and 10) at 60 m, supports a variety of vegetation indices for assessing vegetation status. However, sufficient consideration has not been given to the potential of vegetation indices calculated from MSI data. Thus, 82 published indices were calculated and their importance was evaluated for classifying crop types. Two of the most common classification algorithms, random forests (RF) and support vector machine (SVM), were applied to conduct cropland classification from MSI data. Of the two, SVM was superior, with overall accuracies of 89.3% to 92.0%. Additionally, super learning (stacking) was applied for further improvement and contributed higher overall accuracies (90.2% to 92.2%), with significant differences confirmed between its results and those of SVM and RF. Our results showed that vegetation indices made the greatest contributions to identifying specific crop types.

1.

Introduction

From a land-planning perspective, cropland diversity is vital, and crop cover maps provide information for estimating potential harvests and for agricultural field management. To document field properties, such as cultivated crops and locations, some local governments in Japan have been using manual methods.1 However, more efficient techniques are required to reduce the high expense of these methods. Thus, satellite-data-based cropland mapping has gained attention. Some spectral indices, which are combinations of spectral measurements at different wavelengths, have been used to evaluate phenology or quantify biophysical parameters.2–5 They have also made crop maps more accurate in previous studies,6 improving the ability of optical remote sensing data to monitor agricultural fields. The opportunities to obtain optical remote sensing data have improved since the launch of the Sentinel-2A satellite on June 23, 2015. It collects multispectral data in 13 bands covering the visible to shortwave infrared (SWIR) wavelength regions. Sentinel-2B, which has the same specifications, was launched on March 7, 2017, and creates further opportunities for monitoring agricultural fields.
Various spectral indices can be extracted from these data, including indices based on SWIR, which are influenced by plant constituents such as pigments, leaf water content, and biochemicals.7,8 Vegetation indices (VIs) derived from reflectance data acquired by optical sensors have been widely used to assess variations in the physiological states and biophysical properties of vegetation.9–11 Specifically, the normalized difference vegetation index (NDVI),12 soil-adjusted vegetation index (SAVI),13 and enhanced vegetation index (EVI)14 have been used for monitoring vegetation systems or ecological responses to environmental change.15 Multispectral sensor (MSI) data have been used for identifying crop types,16–18 plastic-covered greenhouses,19 and water bodies,20 and some previous studies have shown the potential of VIs calculated from MSI data. However, a vast number of VIs can be calculated from MSI data, and most of them have been ignored in previous studies. In this study, 82 published indices and the original reflectance data were evaluated for classifying six crop types, beans, beetroot, grass, maize, potato, and winter wheat, which are the dominant crops on the western Tokachi plain, Hokkaido, Japan.
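NDVI, SAVI, and EVI are simple arithmetic combinations of band reflectances. As an illustrative sketch (not code from this study), they can be computed from per-field surface reflectance as follows; the reflectance values in the example are hypothetical:

```python
def ndvi(nir, red):
    """Normalized difference vegetation index (Tucker, 1979)."""
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    """Soil-adjusted vegetation index (Huete, 1988) with soil factor L."""
    return (1.0 + L) * (nir - red) / (nir + red + L)

def evi(nir, red, blue):
    """Enhanced vegetation index (Huete et al., 2002), MODIS coefficients."""
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

# Hypothetical mean reflectances of one field in Sentinel-2 bands 8, 4, and 2
nir, red, blue = 0.45, 0.08, 0.04
print(round(ndvi(nir, red), 3), round(savi(nir, red), 3), round(evi(nir, red, blue), 3))
```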

In addition to the quality of remote sensing data, classification algorithms are important for improving the classification accuracy of crop maps. Random forests (RF), a widely used machine learning algorithm consisting of an ensemble of decision trees, has been extremely successful for classification and regression.21 It has been applied for generating land cover maps22,23 and, in previous studies using MSI data, reached overall accuracies of around 65% (tree species identification),17 76% (crop type identification),17 and 90% (greenhouse detection).19

Some studies have shown that the support vector machine (SVM) performs better than RF for this purpose, and it has been widely applied for crop classification.22,24–26 Its robustness to outliers has been demonstrated, and SVM is an excellent classifier when the number of input features is large.27

The super learner (SL) methodology,25 also called stacking, is an ensemble learning method in which a user-supplied library of algorithms is combined through a convex weighted combination, with the weights chosen to minimize the cross-validated empirical risk. Therefore, SL could be expected to classify crop types more accurately than the single use of RF or SVM, both of which were considered in this study. An ensemble approach based on SL was thus applied to improve classification accuracy.

Within this framework, the main objectives of the present study were to evaluate the potential of Sentinel-2 data for crop-type classification and the potential of ensemble learning based on RF and SVM.

2.

Materials and Methods

2.1.

Study Area

The study area was located in the western part of the Tokachi plain, Hokkaido, Japan (Fig. 1; 142°42′51″ to 143°08′47″ E, 42°43′20″ to 43°07′24″ N). The main cultivated crop types are beans, beetroot, grass, maize, potato, and winter wheat. From May to October, average monthly temperatures ranged from 8.3°C to 21.8°C and monthly precipitation from 12.0 to 94.5 mm.

Fig. 1

Study area and the distribution of croplands (background map shows Sentinel-2A data obtained on August 11, 2016, R: band 4, G: band 3, and B: band 2).

JARS_12_2_026019_f001.png

Field location and attribute data, such as crop types, were based on manual surveys and provided by Tokachi Nosai (Obihiro, Hokkaido) as a polygon shapefile. A total of 12,639 fields [2265 beans fields, 1548 beetroot fields, 2110 grasslands (timothy and orchard grass), 1000 maize fields, 2452 potato fields, and 3264 winter wheat fields] were observed. The fields ranged from 0.05 to 18.21 ha, with an average of 2.54 ha. Grasslands were located on the outskirts of the built-up area.

2.2.

Remote Sensing Data

The data acquired from Sentinel-2 MSI contained blue, green, red, and near-infrared-1 bands at 10 m; red edge 1 to 3, near-infrared-2, and SWIR 1 and 2 at 20 m; and three atmospheric bands (band 1, band 9, and band 10) at 60 m. In this study, the three atmospheric bands were removed, because they were dedicated to atmospheric corrections and cloud screening.28

Although Sentinel-2A imagery was gathered seven times from May to September 2016, all images of the whole site were covered with clouds except for the one acquired on August 11. The level 1C data acquired on August 11, 2016, were downloaded from EarthExplorer.29 All bands were converted to 10-m resolution with a cubic convolution resampling method, and the average reflectance values of each band were calculated for each field using the field polygons to compensate for spatial variability and to avoid problems related to uncertainty in georeferencing.
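The per-field averaging step can be sketched as follows, assuming the field polygons have already been rasterized to a label image; the arrays below are synthetic stand-ins for real Sentinel-2 rasters:

```python
import numpy as np

def field_means(band, labels):
    """Mean reflectance of each field, given a band image and a label image
    in which 0 is background and positive integers are field IDs."""
    ids = np.unique(labels)
    return {int(k): float(band[labels == k].mean()) for k in ids if k != 0}

# Synthetic 2 x 3 band and label images standing in for real rasters
band = np.array([[0.2, 0.2, 0.5],
                 [0.2, 0.5, 0.5]])
labels = np.array([[1, 1, 2],
                   [1, 2, 2]])
print({k: round(v, 3) for k, v in field_means(band, labels).items()})  # {1: 0.2, 2: 0.5}
```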

Some vegetation indices, such as NDVI, have been used for improving classification accuracies in previous studies.16,22,30,31 In this study, 82 published vegetation indices for evaluating various vegetation properties were calculated (Table 1).

Table 1

Vegetation indices calculated from Sentinel-2 MSI data.

Abbreviation | Index | Formula
AFRI1.6 [32] | Aerosol free vegetation index 1.6 | (Band8a - 0.66*Band11)/(Band8a + 0.66*Band11)
AFRI2.1 [32] | Aerosol free vegetation index 2.1 | (Band8a - 0.5*Band12)/(Band8a + 0.5*Band12)
ARI [33] | Anthocyanin reflectance index | 1/Band3 - 1/Band5
ARVI [34] | Atmospherically resistant vegetation index | {Band8 - [Band4 - γ*(Band2 - Band4)]}/{Band8 + [Band4 - γ*(Band2 - Band4)]}; γ is a weighting function that depends on aerosol type; a value of 1 was used for γ in this study
ARVI2 [34] | Atmospherically resistant vegetation index 2 | -0.18 + 1.17*(Band8 - Band4)/(Band8 + Band4)
ATSAVI [35] | Adjusted transformed soil-adjusted vegetation index | a*(Band8 - a*Band4 - b)/[a*Band8 + Band4 - a*b + X*(1 + a^2)]; a = 1.22, b = 0.03, X = 0.08
AVI [36] | Ashburn vegetation index | 2*Band8a - Band4
BNDVI [37] | Blue-normalized difference vegetation index | (Band8 - Band2)/(Band8 + Band2)
BRI [38] | Browning reflectance index | (1/Band3 - 1/Band5)/Band6
BWDRVI [39] | Blue-wide dynamic range vegetation index | (0.1*Band7 - Band2)/(0.1*Band7 + Band2)
CARI [40] | Chlorophyll absorption ratio index | Band5*|a*Band4 + Band4 + b|/[Band4*(a^2 + 1)^0.5]; a = (Band5 - Band3)/150, b = Band3 - 550*a
CCCI [41] | Canopy chlorophyll content index | [(Band8 - Band5)/(Band8 + Band5)]/[(Band8 - Band4)/(Band8 + Band4)]
CRI550 [42] | Carotenoid reflectance index 550 | 1/Band2 - 1/Band3
CRI700 [42] | Carotenoid reflectance index 700 | 1/Band2 - 1/Band5
CVI [43] | Chlorophyll vegetation index | Band8*Band4/(Band3)^2
Datt1 [44] | Vegetation index proposed by Datt 1 | (Band8 - Band5)/(Band8 - Band4)
Datt2 [45] | Vegetation index proposed by Datt 2 | Band4/(Band3*Band5)
Datt3 [45] | Vegetation index proposed by Datt 3 | Band8a/(Band3*Band5)
DVI [46] | Differenced vegetation index | 2.4*Band8 - Band4
EPIcar [45] | Eucalyptus pigment index for carotenoid | 0.0049*[Band4/(Band3*Band5)]^0.7488
EPIChla [45] | Eucalyptus pigment index for chlorophyll a | 0.0161*[Band4/(Band3*Band5)]^0.7784
EPIChlab [45] | Eucalyptus pigment index for chlorophyll a+b | 0.0236*[Band4/(Band3*Band5)]^0.7954
EPIChlb [45] | Eucalyptus pigment index for chlorophyll b | 0.0337*(Band4/Band3)^1.8695
EVI [14] | Enhanced vegetation index | 2.5*(Band8 - Band4)/(Band8 + 6*Band4 - 7.5*Band2 + 1)
EVI2 [47] | Enhanced vegetation index 2 | 2.4*(Band8 - Band4)/(Band8 + Band4 + 1)
EVI2.2 [48] | Enhanced vegetation index 2.2 | 2.5*(Band8 - Band4)/(Band8 + 2.4*Band4 + 1)
GARI [49] | Green atmospherically resistant vegetation index | {Band8 - [Band3 - (Band2 - Band4)]}/{Band8 + [Band3 - (Band2 - Band4)]}
GBNDVI [50] | Green-blue normalized difference vegetation index | [Band8 - (Band3 + Band2)]/[Band8 + (Band3 + Band2)]
GDVI [51] | Green difference vegetation index | Band8 - Band3
GEMI [52] | Global environment monitoring index | n*(1 - 0.25*n) - (Band4 - 0.125)/(1 - Band4); n = [2*(Band5^2 - Band4^2) + 1.5*Band5 + 0.5*Band4]/(Band5 + Band4 + 0.5)
GLI [53] | Green leaf index | (2*Band3 - Band5 - Band2)/(2*Band3 + Band5 + Band2)
GNDVI [49] | Green normalized difference vegetation index | (Band8 - Band3)/(Band8 + Band3)
GNDVI2 [49] | Green normalized difference vegetation index 2 | (Band7 - Band3)/(Band7 + Band3)
GOSAVI [54] | Green optimized soil-adjusted vegetation index | (Band8 - Band3)/(Band8 + Band3 + 0.16)
GRNDVI [55] | Green-red normalized difference vegetation index | [Band8 - (Band3 + Band5)]/[Band8 + (Band3 + Band5)]
GVMI [56] | Global vegetation moisture index | [(Band8 + 0.1) - (Band12 + 0.02)]/[(Band8 + 0.1) + (Band12 + 0.02)]
Hue [57] | Hue | atan{(2*Band5 - Band3 - Band2)/[3^0.5*(Band3 - Band2)]}
IPVI [58] | Infrared percentage vegetation index | [Band8/(Band8 + Band5)]/2*[(Band5 - Band3)/(Band5 + Band3) + 1]
LCI [44] | Leaf chlorophyll index | (Band8 - Band5)/(Band8 + Band4)
Maccioni [59] | Vegetation index proposed by Maccioni | (Band7 - Band5)/(Band7 - Band4)
MCARI [60] | Modified chlorophyll absorption in reflectance index | [(Band5 - Band4) - 0.2*(Band5 - Band3)]*(Band5/Band4)
MCARI/MTVI2 [61] | MCARI/MTVI2 | MCARI/MTVI2
MCARI/OSAVI [62] | MCARI/OSAVI | MCARI/OSAVI
MCARI1 [62] | Modified chlorophyll absorption in reflectance index 1 | 1.2*[2.5*(Band8 - Band4) - 1.3*(Band8 - Band3)]
MCARI2 [62] | Modified chlorophyll absorption in reflectance index 2 | 1.5*[2.5*(Band8 - Band4) - 1.3*(Band8 - Band3)]/sqrt[(2*Band8 + 1)^2 - (6*Band8 - 5*sqrt(Band4)) - 0.5]
MGVI [63] | Green vegetation index proposed by Misra | -0.386*Band3 - 0.530*Band4 + 0.535*Band6 + 0.532*Band8
mNDVI [64] | Modified normalized difference vegetation index | (Band8 - Band4)/(Band8 + Band4 - 2*Band2)
MNSI [63] | Non such index proposed by Misra | 0.404*Band3 + 0.039*Band4 - 0.505*Band6 + 0.762*Band8
MSAVI [65] | Modified soil-adjusted vegetation index | {2*Band8 + 1 - sqrt[(2*Band8 + 1)^2 - 8*(Band8 - Band5)]}/2
MSAVI2 [65] | Modified soil-adjusted vegetation index 2 | {2*Band8 + 1 - sqrt[(2*Band8 + 1)^2 - 8*(Band8 - Band4)]}/2
MSBI [63] | Soil brightness index proposed by Misra | 0.406*Band3 + 0.600*Band4 + 0.645*Band6 + 0.243*Band8
MSR670 [66] | Modified simple ratio 670/800 | (Band8/Band4 - 1)/sqrt(Band8/Band4 + 1)
MSRNir/Red [67] | Modified simple ratio NIR/red | (Band8/Band5 - 1)/sqrt(Band8/Band5 + 1)
MTVI2 [62] | Modified triangular vegetation index 2 | 1.5*[1.2*(Band8 - Band3) - 2.5*(Band4 - Band3)]/sqrt[(2*Band8 + 1)^2 - (6*Band8 - 5*sqrt(Band4)) - 0.5]
NBR [68] | Normalized difference NIR/SWIR normalized burn ratio | (Band8 - Band12)/(Band8 + Band12)
ND774/677 [69] | Normalized difference 774/677 | (Band7 - Band4)/(Band7 + Band4)
NDII [70] | Normalized difference infrared index | (Band8 - Band11)/(Band8 + Band11)
NDRE [71] | Normalized difference red edge | (Band7 - Band5)/(Band7 + Band5)
NDSI [72] | Normalized difference salinity index | (Band11 - Band12)/(Band11 + Band12)
NDVI [12] | Normalized difference vegetation index | (Band8 - Band4)/(Band8 + Band4)
NDVI2 [51] | Normalized difference vegetation index 2 | (Band12 - Band8)/(Band12 + Band8)
NGRDI [69] | Normalized green red difference index | (Band3 - Band5)/(Band3 + Band5)
OSAVI [54,73] | Optimized soil-adjusted vegetation index | 1.16*(Band8 - Band4)/(Band8 + Band4 + 0.16)
PNDVI [55] | Pan normalized difference vegetation index | [Band8 - (Band3 + Band5 + Band2)]/[Band8 + (Band3 + Band5 + Band2)]
PVR [74] | Photosynthetic vigor ratio | (Band3 - Band4)/(Band3 + Band4)
RBNDVI [55] | Red-blue normalized difference vegetation index | [Band8 - (Band4 + Band2)]/[Band8 + (Band4 + Band2)]
RDVI [75] | Renormalized difference vegetation index | (Band8 - Band4)/sqrt(Band8 + Band4)
REIP [76] | Red-edge inflection point | 700 + 40*{[(Band4 + Band7)/2 - Band5]/(Band6 - Band5)}
Rre [77] | Reflectance at the inflexion point | (Band4 + Band7)/2
SAVI [13] | Soil adjusted vegetation index | 1.5*(Band8 - Band4)/(Band8 + Band4 + 0.5)
SBL [46] | Soil background line | Band8 - 2.4*Band4
SIPI [78] | Structure intensive pigment index | (Band8 - Band2)/(Band8 - Band4)
SIWSI [79] | Shortwave infrared water stress index | (Band8a - Band11)/(Band8a + Band11)
SLAVI [80] | Specific leaf area vegetation index | Band8/(Band4 + Band12)
TCARI [60] | Transformed chlorophyll absorption ratio | 3*[(Band5 - Band4) - 0.2*(Band5 - Band3)*(Band5/Band4)]
TCARI/OSAVI [73] | TCARI/OSAVI | TCARI/OSAVI
TCI [43,81] | Triangular chlorophyll index | 1.2*(Band5 - Band3) - 1.5*(Band4 - Band3)*sqrt(Band5/Band4)
TVI [82] | Transformed vegetation index | sqrt(NDVI + 0.5)
VARI700 [83] | Visible atmospherically resistant index 700 | (Band5 - 1.7*Band4 + 0.7*Band2)/(Band5 + 2.3*Band4 - 1.3*Band2)
VARIgreen [83] | Visible atmospherically resistant index green | (Band3 - Band4)/(Band3 + Band4 - Band2)
VI700 [84] | Vegetation index 700 | (Band5 - Band4)/(Band5 + Band4)
WDRVI [85] | Wide dynamic range vegetation index | (0.1*Band8 - Band4)/(0.1*Band8 + Band4)

2.3.

Classification Algorithm

All samples were divided into three groups using a stratified random sampling approach: training data (50%) for developing classification models, validation data (25%) for hyperparameter tuning, and test data (25%) for evaluating classification accuracy.86 Table 2 shows the number of fields of each crop type.

Table 2

Crop type and number of fields.

Crop type | Training data | Validation data | Test data
Beans | 1132 | 566 | 567
Beetroot | 774 | 387 | 387
Grassland | 1055 | 527 | 528
Maize | 500 | 250 | 250
Potato | 1226 | 613 | 613
Wheat | 1632 | 816 | 816
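A minimal sketch of the 50/25/25 stratified split using scikit-learn's train_test_split applied twice; the feature matrix and crop labels are synthetic stand-ins for the per-field data:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for the per-field feature matrix and crop labels
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))
y = np.repeat(np.arange(4), 100)  # four hypothetical crop classes

# 50% training, then the remainder split evenly into validation and test
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.5, stratify=y, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=0.5, stratify=y_rest, random_state=0)

print(len(X_train), len(X_val), len(X_test))  # 200 100 100
```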

SVM partitions data using maximum separation margins,87 and the "kernel trick" has frequently been applied instead of attempting to fit a nonlinear model.30 In this study, the Gaussian radial basis function kernel, which is the kernel most often used for classification purposes,30 was adopted, and two parameters were tuned to control the flexibility of the classifier: the regularization parameter C and the kernel bandwidth γ. If the C value is too large, there is a high penalty for nonseparable points, and the model may store many support vectors and overfit; if it is too small, there may be underfitting. C controls the trade-off between errors of the SVM on the training data and margin maximization (C = ∞ leads to a hard-margin SVM). The γ value defines how far the influence of a single training example reaches, with low values meaning "far" and high values meaning "close."
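A minimal sketch of an RBF-kernel SVM in scikit-learn on synthetic data; the C and gamma values below are arbitrary placeholders, not the tuned values from this study:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic multi-class data standing in for the per-field features
X, y = make_classification(n_samples=300, n_features=10, n_informative=6,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# RBF-kernel SVM; C and gamma here are arbitrary, untuned values
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma=0.1))
clf.fit(X_tr, y_tr)
print(clf.score(X_te, y_te) > 0.5)  # True
```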

RF is an ensemble learning technique composed of multiple decision trees built from random bootstrapped samples of the training data.88 The output is determined by a majority vote over the decision trees. There are two user-defined hyperparameters: the number of trees (ntree) and the number of variables used to split the nodes (mtry). If ntree is made larger, the generalization error converges, and overtraining is not a problem. On the other hand, reducing mtry makes each individual decision tree weaker.
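In scikit-learn terms, ntree corresponds to n_estimators and mtry to max_features; a minimal sketch on synthetic data with arbitrary (untuned) hyperparameter values:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic multi-class data standing in for the per-field features
X, y = make_classification(n_samples=300, n_features=10, n_informative=6,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# ntree -> n_estimators, mtry -> max_features; values here are arbitrary
rf = RandomForestClassifier(n_estimators=500, max_features=3, random_state=0)
rf.fit(X_tr, y_tr)
print(rf.score(X_te, y_te) > 0.5)  # True
```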

The best combinations of these hyperparameters were determined using Gaussian-process-based Bayesian optimization,89 which has been widely applied for hyperparameter tuning of machine learning algorithms.1

Ensemble machine learning methods have been used to obtain better predictive performance than single learning algorithms, and the SL methodology has been proposed for this purpose.90 In this method, the given algorithms are combined through a convex weighted combination that minimizes the cross-validated error. First, classification models based on RF and SVM were trained as the base algorithms using the training data. Next, 10-fold cross validation was performed on each, and the cross-validated predictions were obtained. With N the number of rows in the training data, the cross-validated predictions were combined into an N-by-two matrix, the "level-one" data, from which the meta-learning model was generated. To predict the test data, the predictions from the base learners were fed into the meta-learning model to generate the ensemble prediction. Data-based sensitivity analysis (DSA),91 which treats the fitted models as pure black boxes by querying them with sensitivity samples and recording their responses, was applied to assess the sensitivity of the classification models.
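The stacking procedure described above can be sketched as follows; the data are synthetic, and the logistic-regression meta-learner is an illustrative choice, not necessarily the meta-learner used in the study:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.svm import SVC

# Synthetic stand-in data; default split gives 300 training / 100 test rows
X, y = make_classification(n_samples=400, n_features=10, n_informative=6,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base = [RandomForestClassifier(n_estimators=200, random_state=0),
        SVC(kernel="rbf", random_state=0)]

# 10-fold cross-validated predictions of the two base learners form the
# N-by-two "level-one" data for the meta-learner
level_one = np.column_stack(
    [cross_val_predict(m, X_tr, y_tr, cv=10) for m in base])
meta = LogisticRegression(max_iter=1000).fit(level_one, y_tr)

# Refit the base learners on all training data, then feed their test
# predictions into the meta-model to obtain the ensemble prediction
for m in base:
    m.fit(X_tr, y_tr)
ensemble_pred = meta.predict(np.column_stack([m.predict(X_te) for m in base]))
print(level_one.shape, ensemble_pred.shape)  # (300, 2) (100,)
```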

2.4.

Accuracy Assessment

Classification accuracies were evaluated based on the simple measures of quantity disagreement (QD) and allocation disagreement (AD).92 They provide an effective summary of confusion matrices.93

The proportion of fields classified as crop i whose actual class is crop j (P_ij) is expressed as

Eq. (1)

P_ij = W_i * (n_ij / n_i+),

where W_i is the proportion of fields classified as crop i, n_ij is the number of fields classified as crop i whose actual class is crop j, and n_i+ is the row total of the confusion matrix. In this case, AD and QD are calculated using the following:

Eq. (2)

AD_i = 2*min(p_i+, p_+i) - 2*p_ii,

Eq. (3)

AD = (1/2) * Σ_{i=1}^{Nc} AD_i,

Eq. (4)

QD_i = |p_i+ - p_+i|,

Eq. (5)

QD = (1/2) * Σ_{i=1}^{Nc} QD_i,
where Nc is the number of classes (six in this study), p_i+ and p_+i are the row and column totals of the confusion matrix, and AD_i and QD_i are the allocation disagreement and quantity disagreement of crop i, respectively. The total disagreement can then be evaluated as the sum of QD and AD.92
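Eqs. (2) to (5) can be computed directly from a confusion matrix. A small sketch with a toy two-class matrix; in this example the total disagreement, AD + QD, equals the 15% of fields off the diagonal:

```python
import numpy as np

def ad_qd(conf):
    """Allocation disagreement (AD) and quantity disagreement (QD) from a
    confusion matrix with rows = classified and columns = reference."""
    p = conf / conf.sum()                        # proportions p_ij
    p_row, p_col = p.sum(axis=1), p.sum(axis=0)  # p_i+ and p_+i
    ad_i = 2 * np.minimum(p_row, p_col) - 2 * np.diag(p)
    qd_i = np.abs(p_row - p_col)
    return ad_i.sum() / 2, qd_i.sum() / 2

# Toy two-class matrix: 15% of fields are off the diagonal
conf = np.array([[40, 10],
                 [5, 45]])
ad, qd = ad_qd(conf)
print(round(ad, 3), round(qd, 3))  # 0.1 0.05
```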

In addition, three indicators, overall accuracy [OA, Eq. (6)], producer's accuracy [PA, Eq. (7)], and user's accuracy [UA, Eq. (8)], were calculated because they have been widely applied for assessing classification accuracy:

Eq. (6)

OA = Σ_{i=1}^{Nc} p_ii / N,

Eq. (7)

PA = p_ii / R_i,

Eq. (8)

UA = p_ii / C_i,
where N is the number of fields, and R_i and C_i are the total number of fields of crop i in the reference data and in the classification results, respectively. McNemar's test94 has been used to judge whether the differences between two given classification results are significant,95 and it was also applied in this study.
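Eqs. (6) to (8), together with one common form of the McNemar statistic (the chi-square with continuity correction; the variant used in the study is not specified), can be sketched as:

```python
import numpy as np

def oa_pa_ua(conf):
    """Overall, producer's, and user's accuracy from a confusion matrix
    with rows = classified and columns = reference."""
    diag = np.diag(conf)
    oa = diag.sum() / conf.sum()
    pa = diag / conf.sum(axis=0)   # correct counts / reference totals R_i
    ua = diag / conf.sum(axis=1)   # correct counts / classified totals C_i
    return oa, pa, ua

def mcnemar_chi2(n01, n10):
    """McNemar's chi-square with continuity correction, from the counts of
    samples that only one of the two classifiers got right."""
    return (abs(n01 - n10) - 1) ** 2 / (n01 + n10)

conf = np.array([[40, 10],
                 [5, 45]])
oa, pa, ua = oa_pa_ua(conf)
print(round(oa, 2), round(mcnemar_chi2(30, 10), 3))  # 0.85 9.025
```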

3.

Results and Discussion

3.1.

Classification Accuracy

Crop classification maps are shown in Fig. 2; the maximum, minimum, and average accuracies of the 10 repetitions and the confusion matrices obtained by merging all repetitions are shown in Tables 3 and 4. Average OAs were 89.0% for RF, 90.6% for SVM, and 91.6% for the ensemble machine learning method, and the mean PAs and mean UAs derived using the machine learning algorithms were >0.8, except the mean UA of RF for maize (0.797). All machine learning algorithms performed well in classifying croplands. In particular, good accuracies were confirmed for the PAs and UAs for wheat (>93.8%) and beet (>89.9%). The chi-square values based on McNemar's tests were 12.02 to 40.60, 27.78 to 62.43, and 17.00 to 51.60 for RF-SVM, RF-SL, and SVM-SL, respectively. As a result, significant differences were confirmed among the results of the three machine learning algorithms (p<0.05).

Fig. 2

Crop classification map generated by (a) RF, (b) SVM, and (c) SL.

JARS_12_2_026019_f002.png

Table 3

Classification accuracies of each algorithm.

(Each cell gives minimum / maximum / mean±std over the 10 repetitions, in %.)

 | RF | SVM | SL
PA
Beans | 80.6 / 86.4 / 83.4±1.6 | 81.1 / 90.5 / 86.2±2.2 | 84.7 / 90.3 / 87.6±1.4
Beet | 89.9 / 94.8 / 93.0±1.3 | 91.0 / 96.4 / 94.5±1.5 | 93.8 / 96.1 / 95.1±0.6
Grassland | 84.3 / 88.3 / 86.0±1.2 | 86.7 / 93.8 / 89.4±2.5 | 89.8 / 94.3 / 92.1±1.4
Maize | 78.8 / 84.8 / 80.8±1.7 | 78.8 / 87.6 / 83.0±3.1 | 81.2 / 87.6 / 84.6±1.8
Potato | 82.9 / 89.7 / 87.0±1.8 | 83.5 / 89.9 / 87.6±1.9 | 84.0 / 89.7 / 88.1±1.6
Wheat | 96.4 / 97.9 / 97.0±0.5 | 96.3 / 97.5 / 97.1±0.4 | 95.7 / 97.5 / 97.0±0.7
UA
Beans | 84.9 / 88.6 / 86.8±1.1 | 82.0 / 91.4 / 86.4±2.9 | 83.4 / 90.3 / 88.6±2.0
Beet | 94.5 / 96.9 / 95.6±0.8 | 94.3 / 97.3 / 95.7±0.9 | 95.1 / 97.1 / 96.0±0.6
Grassland | 88.0 / 93.3 / 91.0±1.4 | 89.9 / 96.6 / 94.0±2.3 | 93.8 / 97.7 / 95.7±1.1
Maize | 77.8 / 82.0 / 79.7±1.3 | 78.4 / 87.3 / 81.9±2.2 | 81.4 / 85.2 / 83.6±1.4
Potato | 78.5 / 83.1 / 81.5±1.2 | 82.1 / 87.8 / 85.2±1.9 | 83.0 / 86.8 / 85.4±1.1
Wheat | 93.8 / 96.1 / 95.0±0.7 | 94.5 / 97.2 / 95.9±0.8 | 95.1 / 97.2 / 96.2±0.6
OA | 88.5 / 89.4 / 89.0±0.2 | 89.3 / 92.0 / 90.6±0.9 | 90.2 / 92.2 / 91.6±0.6
κ | 85.9 / 87.0 / 86.5±0.3 | 86.8 / 90.2 / 88.4±1.1 | 88.0 / 90.5 / 89.6±0.8
AD | 8.0 / 9.9 / 9.0±0.6 | 6.5 / 9.7 / 7.9±1.0 | 6.5 / 8.8 / 7.3±0.7
QD | 1.3 / 2.8 / 2.0±0.5 | 0.7 / 2.5 / 1.5±0.6 | 0.6 / 2.3 / 1.2±0.5

Table 4

Confusion matrices for (a) RF, (b) SVM, and (c) SL.

Classified data \ Reference data | Beans | Beetroot | Grasslands | Maize | Potato | Wheat

(a) RF
Beans | 4726 | 59 | 247 | 100 | 287 | 26
Beet | 48 | 3599 | 23 | 28 | 65 | 1
Grasslands | 172 | 65 | 4543 | 52 | 116 | 43
Maize | 139 | 21 | 128 | 2019 | 177 | 48
Potato | 503 | 119 | 230 | 235 | 5332 | 123
Wheat | 82 | 7 | 109 | 66 | 153 | 7919

(b) SVM
Beans | 4888 | 77 | 212 | 119 | 333 | 34
Beet | 61 | 3659 | 17 | 22 | 63 | 2
Grasslands | 110 | 34 | 4720 | 40 | 70 | 49
Maize | 112 | 14 | 130 | 2076 | 166 | 40
Potato | 429 | 79 | 121 | 189 | 5368 | 115
Wheat | 70 | 7 | 80 | 54 | 130 | 7920

(c) SL
Beans | 4965 | 82 | 105 | 83 | 333 | 42
Beet | 61 | 3680 | 11 | 17 | 61 | 3
Grasslands | 59 | 17 | 4861 | 37 | 52 | 53
Maize | 85 | 8 | 121 | 2114 | 169 | 32
Potato | 426 | 77 | 113 | 200 | 5403 | 112
Wheat | 74 | 6 | 69 | 49 | 112 | 7918

Classification results by SL had the best OA and AD + QD (8.5%), and SVM had a slightly better PA for wheat (97.1%). In contrast, identifying maize fields was difficult due to the similarity of their reflectance to that of other crops. Grass cultivation involves less intensive management, so many weeds were mixed with the timothy and orchard grass in grasslands. As a result, the variation in reflectance features was larger than for other crop types, causing misclassifications of relatively large fields.

Figure 3 shows the relationship between field area and misclassified fields for each algorithm after 10 repetitions (i.e., the total number is 10 times that of the test data). More than 75% of the misclassified fields were <200 a in area for all algorithms, and 95.1% (RF), 95.5% (SVM), and 96.1% (SL) of misclassified fields were below 450 a. Applying stacking made the model more robust for classifying smaller fields, and the number of misclassified croplands smaller than 50 a decreased (813 fields) compared with the results of RF (909 fields) and SVM (855 fields). Stacking was especially useful for identifying beans fields. It was not effective for identifying small grasslands, as grass cultivation involves less intensive management and many weeds were present in the grasslands. However, stacking was useful for identifying grasslands larger than 500 a, which had a certain homogeneity with Dactylis glomerata or Phleum pratense in the MSI image.

Fig. 3

Relationship between field area and misclassified fields (a) RF, (b) SVM, and (c) SL.

JARS_12_2_026019_f003.png

3.2.

Sensitive Factor Analysis

Reflectance values obtained from Sentinel-2A are shown in Fig. 4. Differences in reflectance were particularly clear between wheat and beans because the wheat harvest had finished by August 11 and the reflectance of wheat fields was similar to that of bare soil. Beetroot had the steepest gradient between bands 5 and 6, and some differences in the reflectance values at band 11 were confirmed between maize and potato. Differences in the reflectance patterns between grass and beans were not clear.

Fig. 4

Mean reflectance spectra and standard deviations of each crop.

JARS_12_2_026019_f004.png

To clarify which variables contributed to identifying each crop type, DSA was conducted for each algorithm and their importance values were calculated.

For identifying beans fields, Datt3 (6.0%, 6.6%, and 6.3% for RF, SVM, and SL, respectively) and REIP (6.4%, 8.2%, and 7.3%) played important roles in all three algorithms. Some variables (the reflectance values at bands 2 and 3, AFRI2.1, CVI, and NDSI) possessed importance values of >5.0% in the RF-based model, whereas no variables except Datt3 and REIP had importance values of >5.0% for SVM and SL. Even though the importance values of GEMI, Maccioni, and MNSI in SVM were <5.0%, they were more than five times those in RF. AFRI1.6 and SIWSI were useful for identifying beetroot fields; AFRI1.6 occupied 11.1%, 6.8%, and 9.0% and SIWSI occupied 10.6%, 7.1%, and 8.9% of the importance for RF, SVM, and SL, respectively. GEMI and NDSI also had importance values of >10% for RF but <5% for the others. In contrast, REIP was useful in SVM, occupying 9.1% of the importance. AFRI1.6, REIP, and MNSI were effective for identifying grassland in all algorithms, whereas SIWSI played an important role (7.8%) for RF and the reflectance at band 6 played an important role (8.2%) for SVM. For identifying maize fields, no variable had an importance value of >5.0% in every algorithm: the importance value of REIP was 25.3% for SVM (2.9% for RF), and CRI550, CRI700, and MSBI reached 9.1%, 12.9%, and 5.6% in RF, respectively (2.4%, 2.2%, and 3.6% in SVM). REIP played the greatest role for identifying potato fields in all algorithms (12.8%, 6.9%, and 9.9% for RF, SVM, and SL, respectively). The importance values of CCCI and CVI were also high in RF (9.9%), but those in SVM were <3.0%. In contrast, Maccioni had an importance of 6.9% in SVM but 1.4% in RF. REIP also played a great role for identifying wheat fields in SVM, but only 1.2% of the importance was confirmed in RF, whereas AVI occupied 15.1% in RF (1.2% in SVM). However, the original reflectance values possessed importance values of <1.0%.

In this season, the photosynthetic activities of the crop types differed: maize is a C4 plant; beans and beetroot were in their growing seasons; grassland was past its second harvest; potato growth had been inhibited by chemicals for easier harvesting; and wheat fields had already been harvested. In addition to indices related to chlorophyll content, the additional use of shortwave infrared data contributed to the estimation of photosynthetic pigments, water, nitrogen, cellulose, lignin, phenols, and leaf mass per area (e.g., NDSI). As a result, vegetation indices had a greater influence on the classification results than the original reflectance. However, the algorithms differed in which vegetation indices were most important. The importance values in SL were near the averages of those in RF and SVM. Thus, the differences in importance between RF and SVM were exploited when stacking was applied, and this moderation contributed to identifying croplands with higher accuracy.

4.

Conclusions and Future Work

Cropland classifications were conducted using a single image from Sentinel-2 MSI, and the suitability and accuracy of vegetation indices calculated from the original Sentinel-2 MSI reflectance data were assessed.

Of the two algorithms applied (RF and SVM), SVM was superior, with OAs of 89.3% to 92.0%. Furthermore, stacking contributed higher OAs (90.2% to 92.2%), and significant differences from the results of SVM were confirmed. Based on DSA, the vegetation indices calculated from the original Sentinel-2 MSI reflectance were useful for identifying specific crop types. Although the vegetation indices that played the largest roles differed between RF and SVM, stacking helped to moderate and reduce the importance of specific variables, which might prevent overfitting. Stacking should be utilized for monitoring agricultural fields to improve classification accuracy.

The field was used as the basic unit of classification, and some problems related to field borders remain to be resolved. We plan to evaluate the potential of geographic object-based image analysis in conjunction with MSI data to address this question in future work.

Disclosures

No potential conflicts of interest are reported by the authors.

Acknowledgments

The authors would like to thank Tokachi Nosai for providing the field data.

References

1. 

R. Sonobe et al., “Assessing the suitability of data from Sentinel-1A and 2A for crop classification,” GISci. Remote Sens., 54 918 –938 (2017). https://doi.org/10.1080/15481603.2017.1351149 Google Scholar

2. 

R. Sonobe et al., “Estimating leaf carotenoid contents of shade grown tea using hyperspectral indices and PROSPECT-D inversion,” Int. J. Remote Sens., 39 1306 –1320 (2018). https://doi.org/10.1080/01431161.2017.1407050 Google Scholar

3. 

C. Rankine et al., “Comparing MODIS and near-surface vegetation indexes for monitoring tropical dry forest phenology along a successional gradient using optical phenology towers,” Environ. Res. Lett., 12 105007 (2017). https://doi.org/10.1088/1748-9326/aa838c Google Scholar

4. 

S. S. Liu et al., “Regional-scale winter wheat phenology monitoring using multisensor spatio-temporal fusion in a South Central China growing area,” J. Appl. Remote Sens., 10 046029 (2016). https://doi.org/10.1117/1.JRS.10.046029 Google Scholar

5. 

J. Vithanage, S. N. Miller and K. Driese, “Land cover characterization for a watershed in Kenya using MODIS data and Fourier algorithms,” J. Appl. Remote Sens., 10 045015 (2016). https://doi.org/10.1117/1.JRS.10.045015 Google Scholar

6. 

R. Sonobe et al., “Evaluating metrics derived from Landsat 8 OLI imagery to map crop cover,” Geocarto Int., (2018). https://doi.org/10.1080/10106049.2018.1425739 Google Scholar

7. 

G. P. Asner, “Biophysical and biochemical sources of variability in canopy reflectance,” Remote Sens. Environ., 64 234 –253 (1998). https://doi.org/10.1016/S0034-4257(98)00014-5 Google Scholar

8. 

M. A. Pena, R. Liao and A. Brenning, “Using spectrotemporal indices to improve the fruit-tree crop classification accuracy,” ISPRS J. Photogramm. Remote Sens., 128 158 –169 (2017). https://doi.org/10.1016/j.isprsjprs.2017.03.019 IRSEE9 0924-2716 Google Scholar

9. 

D. Bankestad and T. Wik, “Growth tracking of basil by proximal remote sensing of chlorophyll fluorescence in growth chamber and greenhouse environments,” Comput. Electron. Agric., 128 77 –86 (2016). https://doi.org/10.1016/j.compag.2016.08.004 CEAGE6 0168-1699 Google Scholar

10. 

Z. Wang et al., “Spatiotemporal variations of forest phenology in the Qinling Mountains and its response to a critical temperature of 10 degrees C,” J. Appl. Remote Sens., 12 022202 (2018). https://doi.org/10.1117/1.JRS.12.022202 Google Scholar

11. 

M. Morin et al., “Agreement analysis and spatial sensitivity of multispectral and hyperspectral sensors in detecting vegetation stress at management scales,” J. Appl. Remote Sens., 11 046025 (2017). https://doi.org/10.1117/1.JRS.11.046025 Google Scholar

12. 

C. J. Tucker, “Red and photographic infrared linear combinations for monitoring vegetation,” Remote Sens. Environ., 8 127 –150 (1979). https://doi.org/10.1016/0034-4257(79)90013-0 Google Scholar

13. 

A. R. Huete, “A soil-adjusted vegetation index (SAVI),” Remote Sens. Environ., 25 295 –309 (1988). https://doi.org/10.1016/0034-4257(88)90106-X Google Scholar

14. 

A. Huete et al., “Overview of the radiometric and biophysical performance of the MODIS vegetation indices,” Remote Sens. Environ., 83 195 –213 (2002). https://doi.org/10.1016/S0034-4257(02)00096-2 Google Scholar

15. 

C. E. Holden and C. E. Woodcock, “An analysis of Landsat 7 and Landsat 8 underflight data and the implications for time series investigations,” Remote Sens. Environ., 185 16 –36 (2016). https://doi.org/10.1016/j.rse.2016.02.052 Google Scholar

16. 

M. Belgiu and O. Csillik, “Sentinel-2 cropland mapping using pixel-based and object-based time-weighted dynamic time warping analysis,” Remote Sens. Environ., 204 509 –523 (2018). https://doi.org/10.1016/j.rse.2017.10.005 Google Scholar

17. 

M. Immitzer, F. Vuolo and C. Atzberger, “First experience with Sentinel-2 data for crop and tree species classifications in Central Europe,” Remote Sens., 8 166 (2016). https://doi.org/10.3390/rs8030166 Google Scholar

18. 

Y. Palchowdhuri et al., “Classification of multi-temporal spectral indices for crop type mapping: a case study in Coalville, UK,” J. Agric. Sci., 156 24 –36 (2018). https://doi.org/10.1017/S0021859617000879 Google Scholar

19. 

A. Novelli et al., “Performance evaluation of object based greenhouse detection from Sentinel-2 MSI and Landsat 8 OLI data: a case study from Almería (Spain),” Int. J. Appl. Earth Obs. Geoinf., 52 403 –411 (2016). https://doi.org/10.1016/j.jag.2016.07.011 Google Scholar

20. 

Y. Du et al., “Water Bodies’ mapping from Sentinel-2 imagery with modified normalized difference water index at 10-m spatial resolution produced by sharpening the SWIR Band,” Remote Sens., 8 354 (2016). https://doi.org/10.3390/rs8040354 Google Scholar

21. 

G. Biau and E. Scornet, “A random forest guided tour,” Test, 25 197 –227 (2016). https://doi.org/10.1007/s11749-016-0481-7 TESTDF Google Scholar

22. 

S. Ferrant et al., “Detection of irrigated crops from Sentinel-1 and Sentinel-2 data to estimate seasonal groundwater use in South India,” Remote Sens., 9 1119 (2017). https://doi.org/10.3390/rs9111119 Google Scholar

23. 

A. O. Onojeghuo et al., “Mapping paddy rice fields by applying machine learning algorithms to multi-temporal Sentinel-1A and Landsat data,” Int. J. Remote Sens., 39 1042 –1067 (2018). https://doi.org/10.1080/01431161.2017.1395969 Google Scholar

24. 

R. Sonobe et al., “Discrimination of crop types with TerraSAR-X-derived information,” Phys. Chem. Earth, Parts A/B/C, 83–84 2 –13 (2015). https://doi.org/10.1016/j.pce.2014.11.001 Google Scholar

25. 

J. K. Gilbertson and A. van Niekerk, “Value of dimensionality reduction for crop differentiation with multi-temporal imagery and machine learning,” Comput. Electron. Agric., 142 50 –58 (2017). https://doi.org/10.1016/j.compag.2017.08.024 CEAGE6 0168-1699 Google Scholar

26. 

R. Sonobe et al., “Random forest classification of crop type using multi-temporal TerraSAR-X dual-polarimetric data,” Remote Sens. Lett., 5 157 –164 (2014). https://doi.org/10.1080/2150704X.2014.889863 Google Scholar

27. 

G. Camps-Valls et al., “Robust support vector method for hyperspectral data classification and knowledge discovery,” IEEE Trans. Geosci. Remote Sens., 42 1530 –1542 (2004). https://doi.org/10.1109/TGRS.2004.827262 Google Scholar

28. 

M. Drusch et al., “Sentinel-2: ESA’s optical high-resolution mission for GMES operational services,” Remote Sens. Environ., 120 25 –36 (2012). https://doi.org/10.1016/j.rse.2011.11.026 Google Scholar

29. 

U.S. Geological Survey, “EarthExplorer,” https://earthexplorer.usgs.gov/ (accessed December 2016). Google Scholar

30. 

A. Chatziantoniou, G. P. Petropoulos and E. Psomiadis, “Co-orbital Sentinel 1 and 2 for LULC mapping with emphasis on Wetlands in a Mediterranean setting based on machine learning,” Remote Sens., 9 1259 (2017). https://doi.org/10.3390/rs9121259 Google Scholar

31. 

E. M. D. Silveira et al., “Assessment of geostatistical features for object-based image classification of contrasted landscape vegetation cover,” J. Appl. Remote Sens., 11 036004 (2017). https://doi.org/10.1117/1.JRS.11.036004 Google Scholar

32. 

A. Karnieli et al., “AFRI—aerosol free vegetation index,” Remote Sens. Environ., 77 10 –21 (2001). https://doi.org/10.1016/S0034-4257(01)00190-0 Google Scholar

33. 

A. A. Gitelson, O. B. Chivkunova and M. N. Merzlyak, “Nondestructive estimation of anthocyanins and chlorophylls in anthocyanic leaves,” Am. J. Bot., 96 1861 –1868 (2009). https://doi.org/10.3732/ajb.0800395 Google Scholar

34. 

Y. J. Kaufman and D. Tanre, “Atmospherically resistant vegetation index (ARVI) for EOS-MODIS,” IEEE Trans. Geosci. Remote Sens., 30 261 –270 (1992). https://doi.org/10.1109/36.134076 Google Scholar

35. 

F. Baret and G. Guyot, “Potentials and limits of vegetation indices for LAI and APAR assessment,” Remote Sens. Environ., 35 161 –173 (1991). https://doi.org/10.1016/0034-4257(91)90009-U Google Scholar

36. 

P. Ashburn, “The vegetative index number and crop identification,” in The LACIE Symp. Proc. of the Technical Session, 843 –855 (1978). Google Scholar

37. 

C. G. Yang, J. H. Everitt and J. M. Bradford, “Airborne hyperspectral imagery and linear spectral unmixing for mapping variation in crop yield,” Precis. Agric., 8 279 –296 (2007). https://doi.org/10.1007/s11119-007-9045-x Google Scholar

38. 

O. B. Chivkunova et al., “Reflectance spectral features and detection of superficial scald-induced browning in storing apple fruit,” J. Russ. Phytopathol. Soc., 2 73 –77 (2001). Google Scholar

39. 

D. W. Hancock and C. T. Dougherty, “Relationships between blue- and red-based vegetation indices and leaf area and yield of alfalfa,” Crop Sci., 47 2547 –2556 (2007). https://doi.org/10.2135/cropsci2007.01.0031 CRPSAY Google Scholar

40. 

M. S. Kim et al., “The use of high spectral resolution bands for estimating absorbed photosynthetically active radiation (APAR),” in The 6th Int. Symp. on Physical Measurements and Signatures in Remote Sensing, (1994). Google Scholar

41. 

D. M. El-Shikha et al., “Remote sensing of cotton nitrogen status using the Canopy Chlorophyll Content Index (CCCI),” Trans. ASABE, 51 73 –82 (2008). https://doi.org/10.13031/2013.24228 Google Scholar

42. 

A. A. Gitelson, M. N. Merzlyak and O. B. Chivkunova, “Optical properties and nondestructive estimation of anthocyanin content in plant leaves,” Photochem. Photobiol., 74 38 –45 (2001). https://doi.org/10.1562/0031-8655(2001)074<0038:OPANEO>2.0.CO;2 Google Scholar

43. 

E. R. Hunt et al., “Remote sensing leaf chlorophyll content using a visible band index,” Agron. J., 103 1090 –1099 (2011). https://doi.org/10.2134/agronj2010.0395 AGJOAT 0002-1962 Google Scholar

44. 

B. Datt, “Remote sensing of water content in eucalyptus leaves,” Aust. J. Bot., 47 909 –923 (1999). https://doi.org/10.1071/BT98042 Google Scholar

45. 

B. Datt, “Remote sensing of chlorophyll a, chlorophyll b, chlorophyll a+b, and total carotenoid content in eucalyptus leaves,” Remote Sens. Environ., 66 111 –121 (1998). https://doi.org/10.1016/S0034-4257(98)00046-7 Google Scholar

46. 

A. J. Richardson and C. L. Wiegand, “Distinguishing vegetation from soil background information,” Photogramm. Eng. Remote Sens., 43 1541 –1552 (1977). Google Scholar

47. 

T. Miura et al., “Inter-comparison of ASTER and MODIS surface reflectance and vegetation index products for synergistic applications to natural resource monitoring,” Sensors, 8 2480 –2499 (2008). https://doi.org/10.3390/s8042480 SNSRES 0746-9462 Google Scholar

48. 

Z. Y. Jiang et al., “Development of a two-band enhanced vegetation index without a blue band,” Remote Sens. Environ., 112 3833 –3845 (2008). https://doi.org/10.1016/j.rse.2008.06.006 Google Scholar

49. 

A. A. Gitelson, Y. J. Kaufman and M. N. Merzlyak, “Use of a green channel in remote sensing of global vegetation from EOS-MODIS,” Remote Sens. Environ., 58 289 –298 (1996). https://doi.org/10.1016/S0034-4257(96)00072-7 Google Scholar

50. 

F. M. Wang, J. F. Huang and L. Chen, “Development of a vegetation index for estimation of leaf area index based on simulation modeling,” J. Plant Nutr., 33 328 –338 (2010). https://doi.org/10.1080/01904160903470380 JPNUDS 1532-4087 Google Scholar

51. 

C. J. Tucker, “Monitoring corn and soybean crop development with hand-held radiometer spectral data,” Remote Sens. Environ., 8 237 –248 (1979). https://doi.org/10.1016/0034-4257(79)90004-X Google Scholar

52. 

B. Pinty and M. M. Verstraete, “GEMI: a non-linear index to monitor global vegetation from satellites,” Vegetatio, 101 15 –20 (1992). https://doi.org/10.1007/BF00031911 VGTOA4 Google Scholar

53. 

N. Gobron et al., “Advanced vegetation indices optimized for up-coming sensors: design, performance, and applications,” IEEE Trans. Geosci. Remote Sens., 38 2489 –2505 (2000). https://doi.org/10.1109/36.885197 IGRSD2 0196-2892 Google Scholar

54. 

G. Rondeaux, M. Steven and F. Baret, “Optimization of soil-adjusted vegetation indices,” Remote Sens. Environ., 55 95 –107 (1996). https://doi.org/10.1016/0034-4257(95)00186-7 Google Scholar

55. 

F.-M. Wang et al., “New vegetation index and its application in estimating leaf area index of rice,” Rice Sci., 14 195 –203 (2007). https://doi.org/10.1016/S1672-6308(07)60027-4 Google Scholar

56. 

E. P. Glenn, P. L. Nagler and A. R. Huete, “Vegetation index methods for estimating evapotranspiration by remote sensing,” Surv. Geophys., 31 531 –555 (2010). https://doi.org/10.1007/s10712-010-9102-2 SUGEEC 0169-3298 Google Scholar

57. 

R. Escadafal, A. Belghith and H. Ben-Moussa, “Indices spectraux pour la dégradation des milieux naturels en Tunisie aride,” in 6e Symp. Int. sur les mesures physiques et signatures en télédétection, 253 –259 (1994). Google Scholar

58. 

R. E. Crippen, “Calculating the vegetation index faster,” Remote Sens. Environ., 34 71 –73 (1990). https://doi.org/10.1016/0034-4257(90)90085-Z Google Scholar

59. 

A. Maccioni, G. Agati and P. Mazzinghi, “New vegetation indices for remote measurement of chlorophylls based on leaf directional reflectance spectra,” J. Photochem. Photobiol. B, 61 52 –61 (2001). https://doi.org/10.1016/S1011-1344(01)00145-2 JPPBEG 1011-1344 Google Scholar

60. 

C. S. T. Daughtry et al., “Estimating corn leaf chlorophyll concentration from leaf and canopy reflectance,” Remote Sens. Environ., 74 229 –239 (2000). https://doi.org/10.1016/S0034-4257(00)00113-9 Google Scholar

61. 

J. U. H. Eitel et al., “Using in-situ measurements to evaluate the new RapidEye (TM) satellite series for prediction of wheat nitrogen status,” Int. J. Remote Sens., 28 4183 –4190 (2007). https://doi.org/10.1080/01431160701422213 Google Scholar

62. 

D. Haboudane et al., “Hyperspectral vegetation indices and novel algorithms for predicting green LAI of crop canopies: modeling and validation in the context of precision agriculture,” Remote Sens. Environ., 90 337 –352 (2004). https://doi.org/10.1016/j.rse.2003.12.013 RSEEA7 0034-4257 Google Scholar

63. 

P. N. Misra, S. G. Wheeler and R. E. Oliver, “Kauth-Thomas brightness and greenness axes,” 23 –46 (1977). Google Scholar

64. 

R. Main et al., “An investigation into robust spectral indices for leaf chlorophyll estimation,” ISPRS J. Photogramm. Remote Sens., 66 751 –761 (2011). https://doi.org/10.1016/j.isprsjprs.2011.08.001 IRSEE9 0924-2716 Google Scholar

65. 

J. Qi et al., “A modified soil adjusted vegetation index,” Remote Sens. Environ., 48 119 –126 (1994). https://doi.org/10.1016/0034-4257(94)90134-1 Google Scholar

66. 

J. M. Chen, “Evaluation of vegetation indices and a modified simple ratio for boreal applications,” Can. J. Remote Sens., 22 229 –242 (1996). https://doi.org/10.1080/07038992.1996.10855178 Google Scholar

67. 

J. M. Chen and J. Cihlar, “Retrieving leaf area index of boreal conifer forests using Landsat TM images,” Remote Sens. Environ., 55 153 –162 (1996). https://doi.org/10.1016/0034-4257(95)00195-6 Google Scholar

68. 

C. Key and N. Benson, “Landscape assessment: ground measure of severity, the composite burn index; and remote sensing of severity, the normalized burn ratio,” FIREMON: Fire Effects Monitoring and Inventory System, 1 –51 Rocky Mountain Research Station, USDA Forest Service, Fort Collins, Colorado (2005). Google Scholar

69. 

P. J. Zarco-Tejada et al., “Scaling-up and model inversion methods with narrowband optical indices for chlorophyll content estimation in closed forest canopies with hyperspectral data,” IEEE Trans. Geosci. Remote Sens., 39 1491 –1507 (2001). https://doi.org/10.1109/36.934080 IGRSD2 0196-2892 Google Scholar

70. 

M. A. Hardisky, V. Klemas and R. M. Smart, “The influences of soil salinity, growth form, and leaf moisture on the spectral reflectance of Spartina alterniflora canopies,” Photogramm. Eng. Remote Sens., 49 77 –84 (1983). Google Scholar

71. 

E. M. Barnes et al., “Coincident detection of crop water stress, nitrogen status and canopy density using ground based multispectral data,” in Proc. of Fifth Int. Conf. on Precision Agriculture and Other Resource Management ASA-CSSA-SSSA, (2000). Google Scholar

72. 

A. Dehni and M. Lounis, “Remote sensing techniques for salt affected soil mapping: application to the Oran Region of Algeria,” Procedia Eng., 33 188 –198 (2012). https://doi.org/10.1016/j.proeng.2012.01.1193 Google Scholar

73. 

D. Haboudane et al., “Integrated narrow-band vegetation indices for prediction of crop chlorophyll content for application to precision agriculture,” Remote Sens. Environ., 81 416 –426 (2002). https://doi.org/10.1016/S0034-4257(02)00018-4 Google Scholar

74. 

G. Metternicht, “Vegetation indices derived from high-resolution airborne videography for precision crop management,” Int. J. Remote Sens., 24 2855 –2877 (2003). https://doi.org/10.1080/01431160210163074 Google Scholar

75. 

N. H. Broge and E. Leblanc, “Comparing prediction power and stability of broadband and hyperspectral vegetation indices for estimation of green leaf area index and canopy chlorophyll density,” Remote Sens. Environ., 76 156 –172 (2001). https://doi.org/10.1016/S0034-4257(00)00197-8 Google Scholar

76. 

I. Herrmann et al., “LAI assessment of wheat and potato crops by VENμS and Sentinel-2 bands,” Remote Sens. Environ., 115 2141 –2151 (2011). https://doi.org/10.1016/j.rse.2011.04.018 Google Scholar

77. 

J. Clevers et al., “Derivation of the red edge index using the MERIS standard band setting,” Int. J. Remote Sens., 23 3169 –3184 (2002). https://doi.org/10.1080/01431160110104647 Google Scholar

78. 

J. Penuelas, F. Baret and I. Filella, “Semi-empirical indices to assess Carotenoids/Chlorophyll-a ratio from leaf spectral reflectance,” Photosynthetica, 31 221 –230 (1995). PHSYB5 0300-3604 Google Scholar

79. 

R. Fensholt and I. Sandholt, “Derivation of a shortwave infrared water stress index from MODIS near- and shortwave infrared data in a semiarid environment,” Remote Sens. Environ., 87 111 –121 (2003). https://doi.org/10.1016/j.rse.2003.07.002 Google Scholar

80. 

L. Lymburner, P. J. Beggs and C. R. Jacobson, “Estimation of canopy-average surface-specific leaf area using Landsat TM data,” Photogramm. Eng. Remote Sens., 66 183 –191 (2000). Google Scholar

81. 

D. Haboudane et al., “Remote estimation of crop chlorophyll content using spectral indices derived from hyperspectral data,” IEEE Trans. Geosci. Remote Sens., 46 423 –437 (2008). https://doi.org/10.1109/TGRS.2007.904836 IGRSD2 0196-2892 Google Scholar

82. 

J. W. Rouse et al., “Monitoring vegetation systems in the great plains with ERTS,” 309 –317 NASA, Washington, D.C. (1974). Google Scholar

83. 

A. A. Gitelson et al., “Non-destructive and remote sensing techniques for estimation of vegetation status,” in Third European Conf. on Precision Agriculture, 301 –306 (2001). Google Scholar

84. 

A. A. Gitelson et al., “Novel algorithms for remote estimation of vegetation fraction,” Remote Sens. Environ., 80 76 –87 (2002). https://doi.org/10.1016/S0034-4257(01)00289-9 Google Scholar

85. 

A. A. Gitelson, “Wide dynamic range vegetation index for remote quantification of biophysical characteristics of vegetation,” J. Plant Physiol., 161 165 –173 (2004). https://doi.org/10.1078/0176-1617-01176 Google Scholar

86. 

T. Hastie, R. Tibshirani and J. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd ed., Springer-Verlag, New York (2009). Google Scholar

87. 

C. Cortes and V. Vapnik, “Support-vector networks,” Mach. Learn., 20 273 –297 (1995). https://doi.org/10.1007/BF00994018 Google Scholar

88. 

L. Breiman, “Random forests,” Mach. Learn., 45 5 –32 (2001). https://doi.org/10.1023/A:1010933404324 Google Scholar

89. 

J. Bergstra and Y. Bengio, “Random search for hyper-parameter optimization,” J. Mach. Learn. Res., 13 281 –305 (2012). Google Scholar

90. 

M. J. van der Laan, E. C. Polley and A. E. Hubbard, “Super learner,” Stat. Appl. Genet. Mol. Biol., 6 1 –23 (2007). https://doi.org/10.2202/1544-6115.1309 Google Scholar

91. 

P. Cortez and M. J. Embrechts, “Using sensitivity analysis and visualization techniques to open black box data mining models,” Inf. Sci., 225 1 –17 (2013). https://doi.org/10.1016/j.ins.2012.10.039 Google Scholar

92. 

R. Pontius and M. Millones, “Death to Kappa: birth of quantity disagreement and allocation disagreement for accuracy assessment,” Int. J. Remote Sens., 32 4407 –4429 (2011). https://doi.org/10.1080/01431161.2011.552923 Google Scholar

93. 

M. Story and R. Congalton, “Accuracy assessment: a user’s perspective,” Photogramm. Eng. Remote Sens., 52 397 –399 (1986). Google Scholar

94. 

Q. McNemar, “Note on the sampling error of the difference between correlated proportions or percentages,” Psychometrika, 12 153 –157 (1947). https://doi.org/10.1007/BF02295996 0033-3123 Google Scholar

95. 

G. M. Foody, “Thematic map comparison: evaluating the statistical significance of differences in classification accuracy,” Photogramm. Eng. Remote Sens., 70 627 –633 (2004). https://doi.org/10.14358/PERS.70.5.627 Google Scholar

Biographies for the authors are not available.

© 2018 Society of Photo-Optical Instrumentation Engineers (SPIE) 1931-3195/2018/$25.00
Rei Sonobe, Yuki Yamaya, Hiroshi Tani, Xiufeng Wang, Nobuyuki Kobayashi, and Kan-ichiro Mochizuki "Crop classification from Sentinel-2-derived vegetation indices using ensemble learning," Journal of Applied Remote Sensing 12(2), 026019 (18 May 2018). https://doi.org/10.1117/1.JRS.12.026019
Received: 12 February 2018; Accepted: 7 May 2018; Published: 18 May 2018
KEYWORDS: Vegetation; Reflectivity; Multispectral imaging; Data modeling; Agriculture; Machine learning; Short wave infrared radiation
