Performs a classification of the input image according to a model file.
This application performs an image classification based on a model file produced by the TrainImagesClassifier application. Pixels of the output image will contain the class labels decided by the classifier (maximal class label = 65535). The input pixels can be optionally centered and reduced according to the statistics file produced by the ComputeImagesStatistics application. An optional input mask can be provided, in which case only input image pixels whose corresponding mask value is greater than 0 will be classified. By default, the remaining pixels will be given the label 0 in the output image.
This application produces several output images and supports “multi-writing”: instead of computing and writing each image independently, the streamed image blocks are written synchronously for each output. The output images are computed strip by strip, using the available RAM to determine the strip size; a user-defined streaming mode can also be specified through the streaming extended filename options (type, mode and value). Multi-writing can be disabled with the extended filename option &multiwrite=false, in which case the output images are written one by one. Note that multi-writing is not supported for MPI writers.
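As a sketch of the options above, the following invocation disables multi-writing and requests strip streaming with a fixed number of splits. The option names follow the OTB extended-filenames convention; the exact keys should be checked against your OTB version.

```shell
# Hedged example: extended filename options are appended to the output
# filename after "?&". Here we disable multi-writing and ask for
# stripped streaming cut into 10 splits.
otbcli_ImageClassifier -in QB_1_ortho.tif \
                       -model clsvmModelQB1.svm \
                       -out "clLabeledImageQB1.tif?&multiwrite=false&streaming:type=stripped&streaming:sizemode=nbsplits&streaming:sizevalue=10"
```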
-in image Mandatory
The input image to classify.
-mask image
The mask restricts the classification of the input image to the area where mask pixel values are greater than 0.
-model filename [dtype] Mandatory
A model file (produced by the TrainImagesClassifier application; maximal class label = 65535).
-imstat filename [dtype]
An XML file containing mean and standard deviation to center and reduce samples before classification (produced by ComputeImagesStatistics application).
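The normalization applied by -imstat can be sketched as follows: each band value is centered by its mean and reduced by its standard deviation. This is an illustrative pure-Python sketch, not OTB code; the function name and the example statistics are hypothetical.

```python
# Illustrative sketch (not OTB code): per-band standardization that
# -imstat applies before classification, using the mean and standard
# deviation stored in the statistics XML file.
def center_reduce(pixel, mean, stddev):
    """Center and reduce one multi-band pixel: (x - mean) / stddev per band."""
    return [(x - m) / s for x, m, s in zip(pixel, mean, stddev)]

# Example: a 3-band pixel with hypothetical band statistics.
pixel = [120.0, 80.0, 200.0]
mean = [100.0, 100.0, 100.0]
stddev = [20.0, 40.0, 50.0]
print(center_reduce(pixel, mean, stddev))  # [1.0, -0.5, 2.0]
```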
Label mask value
-nodatalabel int Default value: 0
By default, masked-out pixels are assigned the label 0 in the output image. Another value can be chosen for this label, but be careful not to use a value already assigned to a class (max. 65535).
-out image [dtype] Mandatory
Output image containing class labels
-confmap image [dtype]
Confidence map of the produced classification. The confidence index depends on the model:
- LibSVM: difference between the two highest probabilities (needs a model with probability estimates, so that classes probabilities can be computed for each sample)
- Boost: sum of votes
- DecisionTree: (not supported)
- KNearestNeighbors: number of neighbors with the same label
- NeuralNetwork: difference between the two highest responses
- NormalBayes: (not supported)
- RandomForest: Confidence (proportion of votes for the majority class). Margin (normalized difference of the votes of the 2 majority classes) is not available for now.
- SVM: distance to margin (only works for 2-class models)
-probamap image [dtype]
Probability of each class for each pixel: the output image has one band per class in the model. This is currently only implemented for the Shark Random Forest classifier.
Available RAM (MB)
-ram int Default value: 256
Available memory for processing (in MB).
Number of classes in the model
-nbclasses int Default value: 20
The number of classes is required to set the number of bands of the output probability map.
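The relation between the probability map and the labeled output can be sketched as follows: the map has one band per class, and for a pixel the predicted class corresponds to the band with the highest probability. This is an illustrative pure-Python sketch, not OTB code; the class labels are hypothetical.

```python
# Illustrative sketch (not OTB code): the probability map has nbclasses
# bands; the predicted label for a pixel is the one whose band holds the
# maximum probability.
def predict_from_probamap(probas, labels):
    """Return the label whose band holds the maximum probability."""
    best_band = max(range(len(probas)), key=lambda i: probas[i])
    return labels[best_band]

labels = [1, 2, 3, 4]            # hypothetical class labels from the model
probas = [0.1, 0.6, 0.2, 0.1]    # one value per band for this pixel
print(predict_from_probamap(probas, labels))  # 2
```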
From the command-line:
otbcli_ImageClassifier -in QB_1_ortho.tif -imstat EstimateImageStatisticsQB1.xml -model clsvmModelQB1.svm -out clLabeledImageQB1.tif
From Python:
import otbApplication

app = otbApplication.Registry.CreateApplication("ImageClassifier")
app.SetParameterString("in", "QB_1_ortho.tif")
app.SetParameterString("imstat", "EstimateImageStatisticsQB1.xml")
app.SetParameterString("model", "clsvmModelQB1.svm")
app.SetParameterString("out", "clLabeledImageQB1.tif")
app.ExecuteAndWriteOutput()
The input image must have the same type, order and number of bands as the images used to produce the statistics file and the model file. If a statistics file was used during training by TrainImagesClassifier, the same statistics file must be used for classification. If an input mask is used, its size must match the input image size.
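One such consistency check, the band count of the statistics file against the input image, can be sketched in pure Python. This is not OTB code; the XML layout below mimics a ComputeImagesStatistics output and is an assumption, as is the hypothetical image band count.

```python
# Illustrative sketch (not OTB code): verify that the statistics file
# describes as many bands as the image to classify. The XML layout is
# an assumed approximation of a ComputeImagesStatistics output.
import xml.etree.ElementTree as ET

stats_xml = """<FeatureStatistics>
  <Statistic name="mean">
    <StatisticVector value="100.0" />
    <StatisticVector value="100.0" />
    <StatisticVector value="100.0" />
  </Statistic>
</FeatureStatistics>"""

def stats_band_count(xml_text):
    """Count the mean entries, i.e. the number of bands the statistics describe."""
    root = ET.fromstring(xml_text)
    mean = root.find("Statistic[@name='mean']")
    return len(mean.findall("StatisticVector"))

image_band_count = 3  # hypothetical: band count of the image to classify
assert stats_band_count(stats_xml) == image_band_count
print("statistics file matches the image band count")
```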