Classification

Pixel based classification

The classification in the application framework provides a supervised pixel-wise classification chain, based on learning from multiple images and using one machine learning method among SVM, Bayes, KNN, Random Forests, Artificial Neural Networks and others (see the application help of TrainImagesClassifier for details about all the available classifiers). It supports huge images through streaming and multi-threading. The classification chain performs a training step using the intensities of each pixel as features. Please note that all the input images must have the same number of bands to be comparable.

Statistics estimation

In order to make these features comparable across the training images, the first step consists in estimating the input image statistics. These statistics will be used to center and reduce the intensities of the samples selected from the user-provided vector data (mean of 0 and standard deviation of 1). To do so, the ComputeImagesStatistics tool can be used:

otbcli_ComputeImagesStatistics -il  im1.tif im2.tif im3.tif
                               -out images_statistics.xml

This tool computes the mean of each band and its standard deviation (based on the pooled variance of each band), and exports them to an XML file. This features statistics XML file will be an input of the following tools.
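
Concretely, for a band $b$ with estimated mean $\mu_b$ and standard deviation $\sigma_b$, each sample intensity $x_b$ is centered and reduced as

$$\tilde{x}_b = \frac{x_b - \mu_b}{\sigma_b}$$

so that the normalized features have zero mean and unit standard deviation.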

Building the training data set

As the chain is supervised, we first need to build a training set with positive examples of the different objects of interest. These examples are polygons, which must be saved in an OGR vector format supported by GDAL (ESRI shapefile for example).

Please note that the positive examples in the vector data should have a ``Class`` field with a label value higher than 1, consistent across the images.

You can generate the vector data set with the GIS software of your choice and save it in an OGR vector format supported by GDAL (ESRI shapefile for example). OTB should be able to transform the vector data into the image coordinate system.
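
If the vector data is not in the image coordinate system, it can be reprojected beforehand with GDAL's ogr2ogr tool; the file names and the EPSG code below are only examples:

ogr2ogr -t_srs EPSG:32631 -f "ESRI Shapefile" vd1_reprojected.shp vd1.shp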

Performing the learning scheme

Once images statistics have been estimated, the learning scheme is the following:

  1. For each input image:
    1. Read the region of interest (ROI) inside the shapefile,
    2. Generate training and validation data within the ROI,
    3. Add them respectively to the training samples set and the validation samples set.
  2. Increase the size of the training samples set and balance it by generating new noisy samples from the previous ones,
  3. Perform the learning with this training set
  4. Estimate performances of the classifier on the validation samples set (confusion matrix, precision, recall and F-Score).

Let us consider an SVM classification. These steps can be performed with the TrainImagesClassifier command-line application as follows:

otbcli_TrainImagesClassifier -io.il      im1.tif im2.tif im3.tif
                             -io.vd      vd1.shp vd2.shp vd3.shp
                             -io.imstat  images_statistics.xml
                             -classifier svm (classifier_for_the_training)
                             -io.out     model.svm

Additional groups of parameters are also available (see application help for more details):

  • -elev Handling of elevation (DEM or average elevation)
  • -sample Group of parameters for sampling
  • -classifier Classifiers to use for the training, and their corresponding groups of parameters
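
For instance, the training/validation ratio and the classifier-specific parameters can be set on the command line. The parameter keys below (sample.vtr, classifier.svm.k, classifier.svm.c) are given as an illustration and should be checked against the application help of your OTB version:

otbcli_TrainImagesClassifier -io.il            im1.tif im2.tif im3.tif
                             -io.vd            vd1.shp vd2.shp vd3.shp
                             -io.imstat        images_statistics.xml
                             -sample.vtr       0.5
                             -classifier       svm
                             -classifier.svm.k rbf
                             -classifier.svm.c 1
                             -io.out           model.svm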

Using the classification model

Once the classifier has been trained, one can apply the model to classify the pixels of a new image into the learned classes using the ImageClassifier application:

otbcli_ImageClassifier -in     image.tif
                       -imstat images_statistics.xml
                       -model  model.svm
                       -out    labeled_image.tif

You can set an input mask to limit the classification to the areas where the mask value is greater than 0.
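
For instance, assuming a validity mask mask.tif (see the -mask parameter in the application help):

otbcli_ImageClassifier -in     image.tif
                       -mask   mask.tif
                       -imstat images_statistics.xml
                       -model  model.svm
                       -out    labeled_image.tif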

Validating the classification model

The performance of the model generated by the TrainImagesClassifier application is directly estimated by the application itself, which displays the precision, recall and F-score of each class, and can generate the global confusion matrix as an output *.CSV file.

With the ComputeConfusionMatrix application, it is also possible to estimate the performance of a model from a classification map generated with the ImageClassifier application. This labeled image is compared to positive reference samples (either represented as a raster labeled image or as vector data containing the reference classes). It also computes the confusion matrix and the precision, recall and F-score of each class, based on the ConfusionMatrixCalculator class.

otbcli_ComputeConfusionMatrix -in                labeled_image.tif
                              -ref               vector
                              -ref.vector.in     vectordata.shp
                              -ref.vector.field  Class (name_of_label_field)
                              -out               confusion_matrix.csv
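
If the reference samples are available as a labeled raster instead, the raster reference mode can be used; reference_labels.tif is a hypothetical file name:

otbcli_ComputeConfusionMatrix -in             labeled_image.tif
                              -ref            raster
                              -ref.raster.in  reference_labels.tif
                              -out            confusion_matrix.csv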

Fancy classification results

Color mapping can be used to apply color transformations on the final gray-level label image. It produces an RGB classification map by re-mapping the image values to be suitable for display purposes. One can use the ColorMapping application. This tool will replace each label with an 8-bit RGB color specified in a mapping file. The mapping file should look like this :

# Lines beginning with a # are ignored
1 255 0 0

In the previous example, 1 is the label and 255 0 0 is an RGB color (this one will be rendered as red). To use the mapping tool, enter the following :

otbcli_ColorMapping -in                labeled_image.tif
                    -method            custom
                    -method.custom.lut lut_mapping_file.txt
                    -out               RGB_color_image.tif

Other look-up tables (LUT) are available : standard continuous LUT, optimal LUT, and LUT computed over a support image.
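
For the four-class example of the next section, a complete custom mapping file could look as follows; the label values and colors are illustrative:

# water (blue)
1 0 0 255
# roads (gray)
2 128 128 128
# vegetation (green)
3 0 255 0
# buildings with red roofs (red)
4 255 0 0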

Example

We consider 4 classes: water, roads, vegetation and buildings with red roofs. The data is available in the OTB-Data repository, and this image was produced with the commands inside this file.

../_images/classification_chain_inputimage.jpg
../_images/classification_chain_fancyclassif_fusion.jpg
../_images/classification_chain_fancyclassif.jpg

Figure 2: From left to right: Original image, fusion (in the Monteverdi viewer) of the original image with the fancy classification, and the fancy color classification obtained from the labeled image.

Fusion of classification maps

After having processed several classifications of the same input image but from different models or methods (SVM, KNN, Random Forests, ...), it is possible to fuse these classification maps with the FusionOfClassifications application, which uses either majority voting or the Dempster-Shafer framework to handle the fusion. The fusion generates a single, more robust and precise classification map which combines the information extracted from the input list of labeled images.

The FusionOfClassifications application has the following input parameters :

  • -il list of input labeled classification images to fuse
  • -out the output labeled image resulting from the fusion of the input classification images
  • -method the fusion method (either by majority voting or by Dempster Shafer)
  • -nodatalabel label for the no data class (default value = 0)
  • -undecidedlabel label for the undecided class (default value = 0)

The input pixels with the nodata class label are simply ignored by the fusion process. Moreover, the output pixels for which the fusion process does not result in a unique class label, are set to the undecided value.

Majority voting for the fusion of classifications

In the Majority Voting method implemented in the FusionOfClassifications application, the value of each output pixel is equal to the most frequent class label of the same pixel in the input classification maps. However, it may happen that the most frequent class label is not unique for some pixels. In that case, the undecided label is assigned to those output pixels.

The application can be used like this:

otbcli_FusionOfClassifications  -il             cmap1.tif cmap2.tif cmap3.tif
                                -method         majorityvoting
                                -nodatalabel    0
                                -undecidedlabel 10
                                -out            MVFusedClassificationMap.tif

Let us consider 6 independent classification maps of the same input image (cf. left image in Figure 2) generated from 6 different SVM models. Figure 3 represents them after a color mapping with the same LUT. Thus, 4 classes (water: blue, roads: gray, vegetation: green, buildings with red roofs: red) are observable on each of them.

../_images/QB_1_ortho_C1_CM.png
../_images/QB_1_ortho_C2_CM.png
../_images/QB_1_ortho_C3_CM.png
../_images/QB_1_ortho_C4_CM.png
../_images/QB_1_ortho_C5_CM.png
../_images/QB_1_ortho_C6_CM.png

Figure 3: Six fancy colored classified images to be fused, generated from 6 different SVM models.

As an example of the FusionOfClassifications application by majority voting, the fusion of the six input classification maps represented in Figure 3 leads to the classification map illustrated on the right in Figure 4. Thus, it appears that this fusion highlights the most relevant classes among the six different input classifications. The white parts of the fused image correspond to the undecided class label, i.e. to pixels for which there is no unique majority vote.

../_images/classification_chain_inputimage.jpg
../_images/QB_1_ortho_MV_C123456_CM.png

Figure 4: From left to right: Original image, and fancy colored classified image obtained by a majority voting fusion of the 6 classification maps represented in Figure 3 (water: blue, roads: gray, vegetation: green, buildings with red roofs: red, undecided: white).

Dempster Shafer framework for the fusion of classifications

The FusionOfClassifications application handles another fusion method: the Dempster-Shafer framework. In Dempster-Shafer theory, the performance of each classifier producing one of the classification maps to fuse is evaluated with the help of the so-called belief function of each class label, which measures the degree of belief that the corresponding label is correctly assigned to a pixel. For each classifier, and for each class label, these belief functions are estimated from another parameter called the mass of belief of each class label, which measures the confidence that the user can have in each classifier according to the resulting labels.

In the Dempster-Shafer framework for the fusion of classification maps, the fused class label for each pixel is the one with the maximal belief function. If several class labels maximize the belief function, the output fused pixel is set to the undecided value.
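
As a sketch in standard Dempster-Shafer notation (not tied to the exact OTB implementation): the masses of belief $m_j$ provided by the classifiers are combined with Dempster's rule, the belief of a set of labels $A$ is the sum of the masses of its subsets, and the fused label at pixel $p$ maximizes the belief:

$$(m_1 \oplus m_2)(A) = \frac{\sum_{B \cap C = A} m_1(B)\, m_2(C)}{1 - \sum_{B \cap C = \emptyset} m_1(B)\, m_2(C)}, \qquad \mathrm{Bel}(A) = \sum_{B \subseteq A} m(B), \qquad \hat{L}(p) = \arg\max_{A_i} \mathrm{Bel}(A_i)$$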

In order to estimate the confidence level in each classification map, each of them should be confronted with a ground truth. For this purpose, the masses of belief of the class labels resulting from a classifier are estimated from its confusion matrix, which is itself exported as a *.CSV file with the help of the ComputeConfusionMatrix application. Thus, using the Dempster-Shafer method to fuse classification maps requires an additional input list of such *.CSV files corresponding to their respective confusion matrices.

The application can be used like this:

otbcli_FusionOfClassifications  -il             cmap1.tif cmap2.tif cmap3.tif
                                -method         dempstershafer
                                -method.dempstershafer.cmfl
                                                cmat1.csv cmat2.csv cmat3.csv
                                -nodatalabel    0
                                -undecidedlabel 10
                                -out            DSFusedClassificationMap.tif
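
The *.CSV files passed to -method.dempstershafer.cmfl can be produced beforehand with the ComputeConfusionMatrix application, for instance with a small shell loop (file names follow the example above):

for i in 1 2 3; do
  otbcli_ComputeConfusionMatrix -in cmap$i.tif -ref vector \
                                -ref.vector.in vectordata.shp \
                                -ref.vector.field Class \
                                -out cmat$i.csv
done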

As an example of the FusionOfClassifications application by Dempster-Shafer, the fusion of the six input classification maps represented in Figure 3 leads to the classification map illustrated on the right in Figure 5. Thus, it appears that this fusion gives access to a more precise and robust classification map, thanks to the confidence level estimated for each classifier.

../_images/classification_chain_inputimage.jpg
../_images/QB_1_ortho_DS_V_P_C123456_CM.png

Figure 5: From left to right: Original image, and fancy colored classified image obtained by a Dempster-Shafer fusion of the 6 classification maps represented in Figure 3 (water: blue, roads: gray, vegetation: green, buildings with red roofs: red, undecided: white).

Recommendations to properly use the fusion of classification maps

In order to properly use the FusionOfClassifications application, some points should be considered. First, the input classification maps (list_of_input_images) and the OutputFusedClassificationImage are single-band labeled images, which means that the value of each pixel corresponds to the class label it belongs to; the labels in each classification map must represent the same classes. Secondly, the undecided label value must be different from the existing labels in the input images, in order to avoid any ambiguity in the interpretation of the OutputFusedClassificationImage.

Majority voting based classification map regularization

The resulting classification maps can be regularized in order to smooth irregular classes. Such a regularization process improves classification results by producing more homogeneous areas, which are easier to handle.

Majority voting for the classification map regularization

The ClassificationMapRegularization application performs a regularization of a labeled input image based on the Majority Voting method within a ball-shaped neighborhood. For each center pixel, Majority Voting takes the most representative value among all the pixels identified by the structuring element, and then sets the output center pixel to this majority label value. The ball-shaped neighborhood is defined by its radius, expressed in pixels.

Handling ambiguity and not classified pixels in the majority voting based regularization

Since the Majority Voting regularization may lead to a non-unique majority label in the neighborhood, it is important to define the behaviour of the filter in such a case. For this purpose, a Boolean parameter (called ip.suvbool) is used in the ClassificationMapRegularization application to choose whether pixels with more than one majority class are set to Undecided (true) or keep their Original labels (false = default value).

Moreover, it may happen that pixels in the input image do not belong to any of the considered classes. Such pixels are assumed to belong to the NoData class, whose label is specified as an input parameter of the regularization. Those NoData input pixels are invariant and keep their NoData label in the output regularized image.

The ClassificationMapRegularization application has the following input parameters :

  • -io.in labeled input image resulting from a previous classification process
  • -io.out output labeled image corresponding to the regularization of the input image
  • -ip.radius integer corresponding to the radius of the ball shaped structuring element (default value = 1 pixel)
  • -ip.suvbool boolean parameter used to choose whether pixels with more than one majority class are set to Undecided (true), or to their Original labels (false = default value). Please note that the Undecided value must be different from existing labels in the input image
  • -ip.nodatalabel label for the NoData class. Such input pixels keep their NoData label in the output image (default value = 0)
  • -ip.undecidedlabel label for the Undecided class (default value = 0).

The application can be used like this:

otbcli_ClassificationMapRegularization  -io.in              labeled_image.tif
                                        -ip.radius          3
                                        -ip.suvbool         true
                                        -ip.nodatalabel     10
                                        -ip.undecidedlabel  7
                                        -io.out             regularized.tif

Recommendations to properly use the majority voting based regularization

In order to properly use the ClassificationMapRegularization application, some points should be considered. First, both the InputLabeledImage and the OutputLabeledImage are single-band labeled images, which means that the value of each pixel corresponds to the class label it belongs to. The InputLabeledImage is commonly an image generated with a classification algorithm such as the SVM classification. Note that the InputLabeledImage and the OutputLabeledImage are not necessarily of the same datatype. Secondly, if ip.suvbool == true, the Undecided label value must be different from the existing labels in the input labeled image in order to avoid any ambiguity in the interpretation of the regularized OutputLabeledImage. Finally, the structuring element radius must be at least 1 pixel, which is its default value. Both the NoData and Undecided labels have a default value of 0.

Example

Following the color mapping presented earlier, Figure 6 shows a regularization of a classification map composed of 4 classes: water, roads, vegetation and buildings with red roofs. The radius of the ball-shaped structuring element is equal to 3 pixels, which corresponds to a ball included in a 7 x 7 pixel square. Pixels with more than one majority class keep their original labels.

../_images/classification_chain_inputimage.jpg
../_images/classification_chain_fancyclassif_CMR_input.png
../_images/classification_chain_fancyclassif_CMR_3.png

Figure 6: From left to right: Original image, fancy colored classified image and regularized classification map with a radius equal to 3 pixels.

Regression

The machine learning models in OpenCV and LibSVM also support a regression mode : they can be used to predict a numeric value (i.e. not a class index) from an input predictor. The workflow is the same as for classification. First, the regression model is trained, then it can be used to predict output values. The applications to do that are TrainRegression and PredictRegression.

The input data set for training must have the following structure :

  • n components for the input predictors
  • one component for the corresponding output value

The TrainRegression application supports 2 input formats :

  • An image list : each image should have components matching the structure detailed earlier (n feature components + 1 output value)
  • A CSV file : the first n columns are the feature components and the last one is the output value (see the sketch below)
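
For instance, a minimal CSV training set with two predictors (n = 2) and one output value per row could look like this; the values are illustrative:

120.5,37.2,0.85
98.1,42.7,0.61
133.0,29.4,0.92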

If you have separate images for predictors and output values, you can concatenate them with the ConcatenateImages application.

otbcli_ConcatenateImages  -il features.tif  output_value.tif
                          -out training_set.tif

Statistics estimation

As in classification, a statistics estimation step can be performed before training. It normalizes the dynamics of the input predictors to a standard one : zero mean, unit standard deviation. The main difference with the classification case is that in regression, the dynamics of the output values can also be reduced.

The statistics file format is identical to the output file of the ComputeImagesStatistics application, for instance :

<?xml version="1.0" ?>
<FeatureStatistics>
    <Statistic name="mean">
        <StatisticVector value="198.796" />
        <StatisticVector value="283.117" />
        <StatisticVector value="169.878" />
        <StatisticVector value="376.514" />
    </Statistic>
    <Statistic name="stddev">
        <StatisticVector value="22.6234" />
        <StatisticVector value="41.4086" />
        <StatisticVector value="40.6766" />
        <StatisticVector value="110.956" />
    </Statistic>
</FeatureStatistics>

In the TrainRegression application, normalization of input predictors and output values is optional. There are 3 options :

  • No statistic file : normalization disabled
  • Statistic file with n components : normalization enabled for input predictors only
  • Statistic file with n+1 components : normalization enabled for input predictors and output values

If you use an image list as the training set, you can run the ComputeImagesStatistics application. It will produce a statistics file suitable for input and output normalization (third option).

otbcli_ComputeImagesStatistics  -il   training_set.tif
                                -out  stats.xml

Training

Initially, the machine learning models in OTB only supported classification. But since they come from external libraries (OpenCV and LibSVM), where the regression mode was already implemented, their integration in OTB has been improved in order to allow the usage of regression. As a consequence, the machine learning models have nearly the same set of parameters in classification and regression modes. The following models support the regression mode:

  • Decision Trees
  • Gradient Boosted Trees
  • Neural Network
  • Random Forests
  • K-Nearest Neighbors

The behaviour of the TrainRegression application is very similar to TrainImagesClassifier. From the input data set, a portion of the samples is used for training, whereas the other part is used for validation. The user may also set the model to train and its parameters. Once the training is done, the model is stored in an output file.

otbcli_TrainRegression  -io.il                training_set.tif
                        -io.imstat            stats.xml
                        -io.out               model.txt
                        -sample.vtr           0.5
                        -classifier           knn
                        -classifier.knn.k     5
                        -classifier.knn.rule  median

Prediction

Once the model is trained, it can be used in the PredictRegression application to perform predictions on an entire image containing the input predictors (i.e. an image with only the n feature components). If the model was trained with normalization, the same statistics file must be used for prediction. The behaviour of PredictRegression with respect to the statistics file is identical to TrainRegression :

  • no statistic file : normalization off
  • n components : input only
  • n+1 components : input and output

The model to use is read from file (the one produced during training).

otbcli_PredictRegression  -in     features_bis.tif
                          -model  model.txt
                          -imstat stats.xml
                          -out    prediction.tif

Samples selection

Since release 5.4, new functionalities related to the handling of the vectors from the training data set (see also the Building the training data set section) were added to OTB.

The first improvement was provided by the PolygonClassStatistics application. This application processes a set of training geometries and outputs statistics about the sample distribution in the input geometries (in the form of an XML file) :

  • number of samples per class
  • number of samples per geometry

Supported geometries are polygons, lines and points; depending on the geometry type, this application behaves differently :

  • polygon : select pixels whose center is inside the polygon
  • lines : select pixels intersecting the line
  • points : select closest pixel to the provided point

The application also takes a support image as input, but the values of its pixels are not used. The purpose is rather to define the image grid that will later provide the samples. The user can also provide a raster mask, which will be used to discard pixel positions, as shown in the second example below.

A simple use of the application PolygonClassStatistics could be as follows :

otbcli_PolygonClassStatistics  -in     support_image.tif
                               -vec    variousTrainingVectors.sqlite
                               -field  class
                               -out    polygonStat.xml
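
If a raster mask of valid positions is available, it can be passed to the application as well; mask.tif is a hypothetical file, and the -mask parameter should be checked against the application help:

otbcli_PolygonClassStatistics  -in     support_image.tif
                               -mask   mask.tif
                               -vec    variousTrainingVectors.sqlite
                               -field  class
                               -out    polygonStat.xml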