The SVM classification framework in the applications provides a supervised pixel-wise classification chain based on learning from multiple images. It supports large images through streaming and multi-threading. The classification chain performs an SVM training step using the intensities of each pixel as features. Please note that all the input images must have the same number of bands to be comparable.
In order to make these features comparable across images, the first step is to estimate the statistics of the input images. These statistics will be used to center and reduce the intensities (mean of 0 and standard deviation of 1) of the samples based on the vector data produced by the user. To do so, the ComputeImagesStatistics tool can be used:
This tool computes the mean of each band and its standard deviation, based on the pooled variance of each band, and exports them to an XML file. This features-statistics XML file will be an input of the following tools.
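As a rough illustration of what this normalization amounts to, here is a minimal NumPy sketch (the function names and the in-memory representation are hypothetical; the actual tool reads image files and writes the statistics to an XML file):

```python
import numpy as np

def compute_band_statistics(images):
    """Per-band mean and standard deviation pooled over all images.

    `images` is a list of arrays of shape (n_pixels, n_bands); every
    image must have the same number of bands.
    """
    stacked = np.concatenate(images, axis=0)  # pool the pixels of all images
    return stacked.mean(axis=0), stacked.std(axis=0)

def center_reduce(image, mean, std):
    """Shift and scale the intensities to mean 0 and standard deviation 1."""
    return (image - mean) / std
```

After this step, each band of the pooled sample set has zero mean and unit standard deviation, so intensities coming from different images can be fed to the same classifier.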
As the chain is supervised, the first step is to build a training set with positive examples of the different objects of interest. This can be done with the Monteverdi Vectorization module (Fig. 3.11). These polygons must be saved in an OGR vector format supported by GDAL, such as the ESRI Shapefile format.
This operation must be repeated for each image used as input of the training step.
Please note that the positive examples in the vector data must have a “Class” field with a label value higher than 1 that is consistent across all images.
You can generate the vector data set with the Quantum GIS software, for example, and save it in an OGR vector format supported by GDAL (ESRI Shapefile, for example). The OTB applications should be able to transform the vector data into the image coordinate system.
Once the image statistics have been estimated, the learning scheme is the following:
These steps can be performed with the TrainImagesClassifier command-line application, as follows:
Optional groups of parameters are also available (see the application help for more details):
It is also possible to estimate the performance of the SVM model on a separate validation sample set and another image with the ValidateImagesClassifier application. It computes the global confusion matrix as well as the precision, recall and F-score of each class, based on the ConfusionMatrixCalculator class.
You can save these results with the -out option followed by the output filename.
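The per-class scores derived from the confusion matrix can be sketched as follows (an illustrative NumPy snippet assuming the usual convention of reference classes in rows and produced classes in columns, not the actual ConfusionMatrixCalculator code):

```python
import numpy as np

def confusion_matrix(reference, produced, n_classes):
    """Count (reference, produced) label pairs; rows are reference classes."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for r, p in zip(reference, produced):
        cm[r, p] += 1
    return cm

def per_class_scores(cm):
    """Precision, recall and F-score of each class from a confusion matrix."""
    tp = np.diag(cm).astype(float)
    precision = tp / cm.sum(axis=0)   # column sums: pixels produced per class
    recall = tp / cm.sum(axis=1)      # row sums: reference pixels per class
    fscore = 2 * precision * recall / (precision + recall)
    return precision, recall, fscore
```

For simplicity this sketch does not guard against classes that are absent from the reference or the produced labels, which would lead to divisions by zero.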
Once the classifier has been trained, one can apply the model to classify the pixels of a new image into the defined classes using the ImageClassifier application:
You can set an input mask to limit the classification to the areas where the mask value is greater than 0.
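The effect of such a mask can be sketched as follows (a hypothetical NumPy illustration: `predict` stands in for the trained SVM model, and pixels where the mask is 0 keep the label 0):

```python
import numpy as np

def classify_with_mask(pixels, mask, predict):
    """Classify only the pixels whose mask value is greater than 0.

    `pixels` has shape (n_pixels, n_bands); `predict` maps feature rows
    to class labels (a stand-in for the trained model).
    """
    labels = np.zeros(len(pixels), dtype=np.uint8)  # 0 = unclassified
    valid = mask > 0
    labels[valid] = predict(pixels[valid])
    return labels
```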
Color mapping can be used to apply color transformations to the final graylevel label image. It produces an RGB classification map by re-mapping the image values to make them suitable for display purposes. One can use the ColorMapping application. This tool replaces each label with an 8-bit RGB color specified in a mapping file. The mapping file should look like this:
In the previous example, 1 is the label and 255 0 0 is the RGB color that will be assigned to it (rendered as red). To use the mapping tool, enter the following:
Other look-up tables (LUT) are available: a standard continuous LUT, an optimal LUT, and a LUT computed over a support image.
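The label-to-color substitution performed with a custom mapping file can be sketched like this (an illustrative NumPy snippet, not the ColorMapping implementation; labels absent from the mapping are left black):

```python
import numpy as np

def apply_color_map(labels, mapping):
    """Replace each label with its 8-bit RGB color, e.g. {1: (255, 0, 0)}."""
    rgb = np.zeros(labels.shape + (3,), dtype=np.uint8)  # default: black
    for label, color in mapping.items():
        rgb[labels == label] = color
    return rgb
```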
We take 4 classes: water, roads, vegetation and buildings with red roofs. The data is available in the OTB-Data repository, and this image was produced with the commands contained in this file.