3.4.2 Fusion of classification maps

Once several classifications of the same input image have been produced with different models or methods (SVM, KNN, Random Forest, ...), these classification maps can be fused with the FusionOfClassifications application, which relies on either majority voting or the Dempster-Shafer framework. The fusion generates a single, more robust and precise classification map which combines the information extracted from the input list of labeled images.

The FusionOfClassifications application has the following input parameters:

- -il: the list of input classification maps (single-band labeled images) to fuse;
- -method: the fusion method, either majorityvoting or dempstershafer;
- -method.dempstershafer.cmfl: for the Dempster-Shafer method only, the list of *.CSV confusion matrix files associated with the input maps;
- -nodatalabel: the label of the nodata class in the input classification maps;
- -undecidedlabel: the label assigned to output pixels for which the fusion does not yield a unique class;
- -out: the output fused classification map.

Input pixels with the nodata class label are simply ignored by the fusion process. Moreover, output pixels for which the fusion process does not result in a unique class label are set to the undecided value.

Majority voting for the fusion of classifications

In the majority voting method implemented in the FusionOfClassifications application, the value of each output pixel is the most frequent class label of the corresponding pixel across the input classification maps. However, it may happen that the most frequent class label is not unique for a given pixel; in that case, the undecided label is assigned to the output pixel.

The application can be used like this:

otbcli_FusionOfClassifications  -il             cmap1.tif cmap2.tif cmap3.tif \
                                -method         majorityvoting \
                                -nodatalabel    0 \
                                -undecidedlabel 10 \
                                -out            MVFusedClassificationMap.tif
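
For readers who want the voting rule spelled out, the following NumPy sketch reproduces the per-pixel logic described above: nodata votes are discarded, the most frequent remaining label wins, and ties fall back to the undecided label. It is a minimal, unoptimized illustration under the assumption that the maps fit in memory as 2-D arrays; the function name and the fallback to nodata when all votes are nodata are our own choices, not OTB's implementation.

import numpy as np

def majority_vote_fusion(label_maps, nodata_label=0, undecided_label=10):
    """Per-pixel majority vote over a list of 2-D label arrays (sketch only)."""
    stack = np.stack(label_maps)                 # shape: (n_maps, rows, cols)
    rows, cols = stack.shape[1:]
    fused = np.empty((rows, cols), dtype=stack.dtype)
    for r in range(rows):
        for c in range(cols):
            votes = stack[:, r, c]
            votes = votes[votes != nodata_label]     # nodata votes are ignored
            if votes.size == 0:
                fused[r, c] = nodata_label           # assumption: no valid vote -> nodata
                continue
            labels, counts = np.unique(votes, return_counts=True)
            winners = labels[counts == counts.max()]
            # a unique majority gives the fused label, otherwise undecided
            fused[r, c] = winners[0] if winners.size == 1 else undecided_label
    return fused

For example, calling majority_vote_fusion([m1, m2, m3]) on three small label arrays mimics the behaviour described above for the majorityvoting mode on in-memory data.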

Let us consider six independent classification maps of the same input image (cf. left image in Fig. 3.17), generated from six different SVM models. Fig. 3.18 shows them after color mapping with the same LUT; four classes (water: blue, roads: gray, vegetation: green, buildings with red roofs: red) are visible on each of them.



Figure 3.18: Six fancy colored classified images to be fused, generated from 6 different SVM models.


As an example of the FusionOfClassifications application with majority voting, the fusion of the six input classification maps shown in Fig. 3.18 leads to the classification map illustrated on the right in Fig. 3.19. This fusion highlights the most relevant classes among the six input classifications. The white parts of the fused image correspond to the undecided class label, i.e. to pixels for which there is no unique majority vote.



Figure 3.19: From left to right: Original image, and fancy colored classified image obtained by a majority voting fusion of the 6 classification maps represented in Fig. 3.18 (water: blue, roads: gray, vegetation: green, buildings with red roofs: red, undecided: white).


Dempster-Shafer framework for the fusion of classifications

The FusionOfClassifications application handles another fusion method: the Dempster-Shafer framework. In Dempster-Shafer theory, the performance of each classifier producing the classification maps to fuse is evaluated with the help of the so-called belief function of each class label, which measures the degree of belief that the corresponding label is correctly assigned to a pixel. For each classifier and for each class label, these belief functions are estimated from another parameter called the mass of belief of each class label, which measures the confidence that the user can have in each classifier according to the labels it produces.

In the Dempster-Shafer framework for the fusion of classification maps, the fused class label of each pixel is the one with the maximal belief function. If several class labels maximize the belief function for a pixel, the output fused pixel is set to the undecided value.

In order to estimate the confidence level of each classification map, each of them has to be confronted with a ground truth. For this purpose, the masses of belief of the class labels produced by a classifier are estimated from its confusion matrix, which is itself exported as a *.CSV file with the help of the ComputeConfusionMatrix application. Thus, using the Dempster-Shafer method to fuse classification maps requires an additional input: the list of such *.CSV files containing the confusion matrices of the respective input maps.

The application can be used like this:

otbcli_FusionOfClassifications  -il             cmap1.tif cmap2.tif cmap3.tif \
                                -method         dempstershafer \
                                -method.dempstershafer.cmfl \
                                                cmat1.csv cmat2.csv cmat3.csv \
                                -nodatalabel    0 \
                                -undecidedlabel 10 \
                                -out            DSFusedClassificationMap.tif
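
To make the Dempster-Shafer mechanics more concrete, the sketch below implements one simple reading of the scheme in NumPy: each classifier assigns a mass of belief to the singleton containing its voted label (here estimated as the per-class precision read from its confusion matrix, which is only one possible choice since the guide merely states that masses are estimated from the confusion matrix) and the remaining mass to total ignorance; the masses are then combined with Dempster's rule and the label with the highest combined belief is kept, ties being mapped to the undecided label. This is an assumption-laden illustration, not the exact algorithm implemented in OTB, and the parsing of the label ordering from the *.CSV file is left out (label_index is assumed to map each class label to its row/column index in the matrix).

import numpy as np

def masses_from_confusion(conf, label_index):
    # Per-class precision used as the mass of belief of each label for one
    # classifier (an assumption; rows = reference labels, columns = produced labels).
    col_sums = conf.sum(axis=0)
    with np.errstate(divide="ignore", invalid="ignore"):
        precision = np.where(col_sums > 0, np.diag(conf) / col_sums, 0.0)
    return {label: float(precision[i]) for label, i in label_index.items()}

def dempster_shafer_pixel(votes, masses, undecided_label=10):
    # votes[i]: label voted by classifier i (nodata votes already removed),
    # masses[i]: its mass of belief for that label; the rest goes to ignorance.
    if len(votes) == 0:                              # all inputs were nodata
        return undecided_label
    belief = {}
    for a in set(votes):
        # Dempster's rule with focal elements {vote_i} and the whole frame:
        # the intersection is {a} iff every classifier voting elsewhere falls
        # back to ignorance and at least one supporter of a does not.
        others = np.prod([1.0 - m for v, m in zip(votes, masses) if v != a])
        supporters = np.prod([1.0 - m for v, m in zip(votes, masses) if v == a])
        belief[a] = others * (1.0 - supporters)      # unnormalized, same argmax
    best = max(belief.values())
    winners = [a for a, b in belief.items() if b == best]
    return winners[0] if len(winners) == 1 else undecided_label

At the image level, one would loop over pixels exactly as in the majority voting sketch, dropping nodata votes before calling dempster_shafer_pixel with the masses of the voted labels.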

As an example of the FusionOfClassifications application with the Dempster-Shafer method, the fusion of the six input classification maps shown in Fig. 3.18 leads to the classification map illustrated on the right in Fig. 3.20. This fusion produces a more precise and robust classification map by taking into account the confidence level of each classifier.



Figure 3.20: From left to right: Original image, and fancy colored classified image obtained by a Dempster-Shafer fusion of the 6 classification maps represented in Fig. 3.18 (water: blue, roads: gray, vegetation: green, buildings with red roofs: red, undecided: white).


Recommendations to properly use the fusion of classification maps

In order to use the FusionOfClassifications application properly, a few points should be considered. First, the list_of_input_images and the OutputFusedClassificationImage are single-band labeled images, which means that the value of each pixel corresponds to the class label it belongs to, and a given label must represent the same class in every input classification map. Secondly, the undecided label value must be different from the existing labels in the input images, in order to avoid any ambiguity in the interpretation of the OutputFusedClassificationImage.
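
As a quick safeguard, a short script along the following lines can check both recommendations before running the fusion: that every input is a single-band image and that the chosen undecided label never appears in it; the label sets are also printed so that their correspondence across maps can be verified by the user. This is a convenience sketch assuming the maps can be read with GDAL's Python bindings; it is not part of OTB.

import numpy as np
from osgeo import gdal

def check_fusion_inputs(paths, nodatalabel=0, undecidedlabel=10):
    # Sanity checks mirroring the recommendations above (sketch only).
    for path in paths:
        ds = gdal.Open(path)
        if ds is None:
            raise IOError(f"cannot open {path}")
        if ds.RasterCount != 1:
            raise ValueError(f"{path} is not a single-band labeled image")
        labels = np.unique(ds.GetRasterBand(1).ReadAsArray())
        if undecidedlabel in labels:
            raise ValueError(f"undecided label {undecidedlabel} already used in {path}")
        print(path, "classes:", [int(l) for l in labels if l != nodatalabel])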