Remote sensing is not just a matter of taking pictures, but also – mostly – a matter of measuring physical values. In order to properly deal with physical magnitudes, the numerical values provided by the sensors have to be calibrated. After that, several indices with physical meaning can be computed.
With multispectral sensors, several indices can be computed by combining spectral bands to reveal features, such as vegetation or water, that are not obvious in any single band.
A vegetation index is a quantitative measure of biomass or vegetative vigor, usually formed from a combination of several spectral bands whose values are added, divided or multiplied to yield a single value indicating the amount or vigor of vegetation.
Numerous indices are available in OTB and are listed in tables 13.1 to 13.4 with their references.
NDVI   | Normalized Difference Vegetation Index [119]
RVI    | Ratio Vegetation Index [103]
PVI    | Perpendicular Vegetation Index [116, 144]
SAVI   | Soil Adjusted Vegetation Index [64]
TSAVI  | Transformed Soil Adjusted Vegetation Index [9, 8]
MSAVI  | Modified Soil Adjusted Vegetation Index [112]
MSAVI2 | Modified Soil Adjusted Vegetation Index 2 [112]
GEMI   | Global Environment Monitoring Index [108]
WDVI   | Weighted Difference Vegetation Index [26, 27]
AVI    | Angular Vegetation Index [110]
ARVI   | Atmospherically Resistant Vegetation Index [78]
TSARVI | Transformed Soil Atmospherically Resistant Vegetation Index [78]
EVI    | Enhanced Vegetation Index [65, 75]
IPVI   | Infrared Percentage Vegetation Index [31]
TNDVI  | Transformed NDVI [35]
SRWI   | Simple Ratio Water Index [150]
NDWI   | Normalized Difference Water Index [20]
NDWI2  | Normalized Difference Water Index 2 [94]
MNDWI  | Modified Normalized Difference Water Index [146]
NDPI   | Normalized Difference Pond Index [82]
NDTI   | Normalized Difference Turbidity Index [82]
SA     | Spectral Angle
The use of the different indices is very similar, and only a few examples are given in the next sections.
NDVI was one of the most successful of many attempts to simply and quickly identify vegetated areas and their condition, and it remains the most well-known and used index to detect live green plant canopies in multispectral remote sensing data. Once the feasibility to detect vegetation had been demonstrated, users tended to also use the NDVI to quantify the photosynthetic capacity of plant canopies. This, however, can be a rather more complex undertaking if not done properly.
The source code for this example can be found in the file
Examples/Radiometry/NDVIRAndNIRVegetationIndexImageFilter.cxx.
The following example illustrates the use of the otb::RAndNIRIndexImageFilter with the Normalized Difference Vegetation Index (NDVI). NDVI is computed from the difference between the radiances reflected from the surface and transmitted through the atmosphere in the NIR channel, denoted LNIR, and in the red channel, denoted Lr:
	NDVI = (LNIR − Lr) / (LNIR + Lr)		(13.1)
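As an illustration only, the per-pixel computation of the NDVI can be sketched as a standalone C++ function, independent of the OTB classes used below (the zero-denominator guard is our own convention, not necessarily the one used by the OTB functor):

```cpp
#include <cassert>
#include <cmath>

// NDVI: normalized difference of the NIR and red radiances.
// A small epsilon guards against division by zero on dark pixels.
double NDVI(double lNIR, double lR)
{
  const double sum = lNIR + lR;
  if (std::abs(sum) < 1e-12)
    return 0.0;  // convention chosen here for a null denominator
  return (lNIR - lR) / sum;
}
```

The result lies in [−1, 1]; dense green vegetation yields values close to 1, while water and bare soil yield low or negative values.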
The following classes provide similar functionality:
With the otb::RAndNIRIndexImageFilter class, the filter inputs are two one-channel images: one image represents the NIR channel, the other the red channel.
Let’s look at the minimal code required to use this algorithm. First, the following header defining the otb::RAndNIRIndexImageFilter class must be included.
The image types are now defined using the pixel types and the dimension. Input and output images are defined as otb::Image .
The NDVI (Normalized Difference Vegetation Index) is instantiated using the image pixel types as template parameters. It is implemented as a functor class which will be passed as a parameter to an otb::RAndNIRIndexImageFilter .
The otb::RAndNIRIndexImageFilter type is instantiated using the images types and the NDVI functor as template parameters.
Now the input images are set and a name is given to the output image.
We set the processing pipeline: filter inputs are linked to the reader output and the filter output is linked to the writer input.
Invocation of the Update() method on the writer triggers the execution of the pipeline. It is recommended to place Update() calls in a try/catch block in case errors occur and exceptions are thrown.
Let’s now run this example using as input the images NDVI_3.hdr and NDVI_4.hdr (images kindly provided free of charge by SISA and CNES) from the directory Examples/Data.
The source code for this example can be found in the file
Examples/Radiometry/ARVIMultiChannelRAndBAndNIRVegetationIndexImageFilter.cxx.
The following example illustrates the use of the otb::MultiChannelRAndBAndNIRIndexImageFilter with the Atmospherically Resistant Vegetation Index (ARVI) otb::Functor::ARVI. ARVI is an improved version of the NDVI that is more robust to atmospheric effects. In addition to the red and NIR channels (used in the NDVI), the ARVI takes advantage of the presence of the blue channel to perform a self-correction of the atmospheric effect on the red channel. For this, it uses the difference in radiance between the blue and the red channels to correct the radiance in the red channel. Let’s define ρNIR∗, ρr∗ and ρb∗, the normalized radiances (that is to say, the radiances normalized to reflectance units) of the NIR, red and blue channels respectively. ρrb∗ is defined as
	ρrb∗ = ρr∗ − γ (ρb∗ − ρr∗)		(13.2)
The ARVI expression is
	ARVI = (ρNIR∗ − ρrb∗) / (ρNIR∗ + ρrb∗)		(13.3)
Since the normalization cancels in the ratio, the formula can be written directly in terms of radiances:

	ARVI = (LNIR − Lrb) / (LNIR + Lrb), with Lrb = Lr − γ (Lb − Lr)		(13.4)
For more details, refer to the work of Kaufman and Tanré [78].
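As a sketch, the ARVI computation of equations (13.2) and (13.3) can be written as a plain C++ function (an illustration under our own conventions, not the OTB functor; the zero-denominator guard is an assumption):

```cpp
#include <cassert>
#include <cmath>

// ARVI: the red channel is first self-corrected with the blue channel,
// rho_rb = rho_r - gamma * (rho_b - rho_r), then used in an NDVI-like ratio.
double ARVI(double rhoNIR, double rhoR, double rhoB, double gamma = 0.5)
{
  const double rhoRB = rhoR - gamma * (rhoB - rhoR);
  const double sum = rhoNIR + rhoRB;
  if (std::abs(sum) < 1e-12)
    return 0.0;  // convention for a null denominator
  return (rhoNIR - rhoRB) / sum;
}
```

With γ = 0 the blue channel is ignored and the expression reduces to the NDVI computed on normalized radiances.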
The following classes provide similar functionality:
With the otb::MultiChannelRAndBAndNIRIndexImageFilter class, the input has to be a multi-channel image and the user has to specify the channel indices of the red, blue and NIR channels.
Let’s look at the minimal code required to use this algorithm. First, the following header defining the otb::MultiChannelRAndBAndNIRIndexImageFilter class must be included.
The image types are now defined using pixel types and dimension. The input image is defined as an otb::VectorImage , the output is a otb::Image .
The ARVI (Atmospherically Resistant Vegetation Index) is instantiated using the image pixel types as template parameters. Note that other functors which operate on the red, blue and NIR channels, such as EVI and TSARVI, can also be used.
The otb::MultiChannelRAndBAndNIRIndexImageFilter type is defined using the image types and the ARVI functor as template parameters. We then instantiate the filter itself.
Now the input image is set and a name is given to the output image.
The indices of the three bands used (red, blue and NIR) are declared.
The γ parameter is set. The otb::MultiChannelRAndBAndNIRIndexImageFilter class sets the default value of γ to 0.5. This parameter is used to reduce the atmospheric effect on a global scale.
The filter input is linked to the reader output and the filter output is linked to the writer input.
The invocation of the Update() method on the writer triggers the execution of the pipeline. It is recommended to place Update() calls in a try/catch block in case errors occur and exceptions are thrown.
Let’s now run this example with γ = 0.6, using as input the image IndexVegetation.hd (image kindly provided free of charge by SISA and CNES) from the directory Examples/Data.
The source code for this example can be found in the file
Examples/Radiometry/AVIMultiChannelRAndGAndNIRVegetationIndexImageFilter.cxx.
The following example illustrates the use of the otb::MultiChannelRAndGAndNIRIndexImageFilter with the Angular Vegetation Index (AVI). The equation for the Angular Vegetation Index involves the green, red and near-infrared bands. λ1, λ2 and λ3 are the mid-band wavelengths of the green, red and NIR bands, and tan⁻¹ is the arctangent function.
The AVI expression is
	A1 = tan⁻¹( ((λ3 − λ2)/λ2) / (NIR − R) )		(13.5)

	A2 = tan⁻¹( ((λ2 − λ1)/λ2) / (G − R) )		(13.6)

	AVI = A1 + A2		(13.7)
For more details, refer to the work of Plummer [110].
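Assuming a two-arctangent form of the AVI built from the wavelength ratios and the NIR−R and G−R slopes (our interpretation for illustration, not a confirmed transcription of the OTB functor), the per-pixel computation could look like:

```cpp
#include <cassert>
#include <cmath>

// AVI sketch: two angular terms, one for the red-to-NIR slope and one
// for the green-to-red slope, scaled by the mid-band wavelength ratios.
// Default wavelengths (nm): green 560, red 660, NIR 830.
double AVI(double nir, double r, double g,
           double lambda1 = 560.0, double lambda2 = 660.0,
           double lambda3 = 830.0)
{
  const double eps = 1e-12;
  // Clamp near-zero slopes to avoid division by zero (an assumption).
  const double dNIR = std::abs(nir - r) < eps ? eps : (nir - r);
  const double dG = std::abs(g - r) < eps ? eps : (g - r);
  const double term1 = std::atan(((lambda3 - lambda2) / lambda2) / dNIR);
  const double term2 = std::atan(((lambda2 - lambda1) / lambda2) / dG);
  return term1 + term2;
}
```

Since each arctangent lies in (−π/2, π/2), the result is bounded by π in absolute value.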
With the otb::MultiChannelRAndGAndNIRIndexImageFilter class the input has to be a multi channel image and the user has to specify the channel index of the red, green and NIR channel.
Let’s look at the minimal code required to use this algorithm. First, the following header defining the otb::MultiChannelRAndGAndNIRIndexImageFilter class must be included.
The image types are now defined using pixel types and dimension. The input image is defined as an otb::VectorImage , the output is a otb::Image .
The AVI (Angular Vegetation Index) is instantiated using the image pixel types as template parameters.
The otb::MultiChannelRAndGAndNIRIndexImageFilter type is defined using the image types and the AVI functor as template parameters. We then instantiate the filter itself.
Now the input image is set and a name is given to the output image.
The indices of the three bands used (red, green and NIR) are declared.
The λ parameters of the red, green and NIR bands are set. The otb::MultiChannelRAndGAndNIRIndexImageFilter class sets their default values to 660, 560 and 830 nm respectively.
The filter input is linked to the reader output and the filter output is linked to the writer input.
The invocation of the Update() method on the writer triggers the execution of the pipeline. It is recommended to place Update() calls in a try/catch block in case errors occur and exceptions are thrown.
Let’s now run this example using as input the image verySmallFSATSW.tif provided in the directory Examples/Data.
The source code for this example can be found in the file
Examples/Radiometry/AtmosphericCorrectionSequencement.cxx.
The following example illustrates the application of atmospheric corrections to an optical multispectral image similar to Pleiades. These corrections are made in four steps:
  - digital number to luminance conversion;
  - luminance to top-of-atmosphere (TOA) reflectance conversion;
  - TOA reflectance to surface reflectance conversion;
  - correction of the adjacency (neighborhood) effects.
The manipulation of each class used for the different steps and the link with the 6S radiometry library will be explained. In particular, the API modifications that have been made in version 4.2, and the reasons behind them, will be detailed.
Let’s look at the minimal code required to use this algorithm. First, the following header defining the otb::AtmosphericCorrectionSequencement class must be included. Four header files are required, one for each correction step: digital number to luminance, luminance to reflectance, reflectance to atmospherically corrected (surface) reflectance, and the neighborhood correction.
In version 4.2, the class SurfaceAdjacencyEffect6SCorrectionSchemeFilter has been renamed into otb::SurfaceAdjacencyEffectCorrectionSchemeFilter , but it still does the same thing.
This chain uses the 6S radiative transfer code to compute radiative terms (for instance upward and downward transmittance). The inputs needed are separated into two categories:
  - the atmospheric parameters, which describe the atmosphere and the aerosols;
  - the acquisition parameters, which are tied to the image and its metadata.
The class otb::RadiometryCorrectionParametersToAtmosphericRadiativeTerms computes the radiative terms using these two types of parameters. It contains a single static method that calls the 6S library. The header also includes the classes to manipulate correction parameters and radiative terms.
Image types are now defined using pixel types and dimension. The input image is defined as an otb::VectorImage , the output image is a otb::VectorImage . To simplify, input and output image types are the same one.
The GenerateOutputInformation() reader method is called to retrieve the number of components per pixel of the image. It is recommended to place GenerateOutputInformation() calls in a try/catch block in case errors occur and exceptions are thrown.
The otb::ImageToLuminanceImageFilter type is defined and instantiated. This class uses a functor applied to each component of each pixel (X^k) whose formula is:
	L_TOA^k = X^k / α_k + β_k		(13.8)

where:
  - X^k is the pixel value (digital number) in channel k;
  - α_k and β_k are the absolute calibration gain and bias for channel k.
Here, α and β are read from an ASCII file given in input, stored in a vector and passed to the class.
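The calibration of equation (13.8) is a simple affine transform per channel; as a minimal sketch (not the OTB filter, which applies it component-wise over the whole image):

```cpp
#include <cassert>
#include <cmath>

// Digital number to top-of-atmosphere luminance for one channel:
// divide by the absolute calibration gain alpha, then add the bias beta.
double DigitalNumberToLuminance(double x, double alpha, double beta)
{
  return x / alpha + beta;
}
```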
The otb::LuminanceToReflectanceImageFilter type is defined and instantiated. This class uses a functor applied to each component of each pixel of the luminance filter output (L_TOA^k):
	ρ_TOA^k = (π · L_TOA^k) / (E_S^k · cos(θ_S) · d/d0)		(13.9)

where:
  - E_S^k is the solar illumination in channel k;
  - θ_S is the zenithal solar angle;
  - d/d0 is the ratio between the current Earth-Sun distance and the mean Earth-Sun distance.
The solar illumination is read from an ASCII file given as input, stored in a vector and given to the class. The day, the month and the zenithal solar angle are inputs and can be directly given to the class.
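As a per-channel sketch of the luminance-to-reflectance conversion (an illustration of the formula, not the OTB filter; the angle is taken in radians and dRatio stands for the Earth-Sun distance correction d/d0):

```cpp
#include <cassert>
#include <cmath>

// TOA luminance to unitless TOA reflectance for one channel.
double LuminanceToReflectance(double lTOA, double solarIllumination,
                              double thetaS, double dRatio = 1.0)
{
  const double pi = 3.14159265358979323846;
  return (pi * lTOA) / (solarIllumination * std::cos(thetaS) * dRatio);
}
```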
At this step of the chain, radiative information is needed to compute the contribution of the atmosphere (such as atmosphere transmittance and reflectance). This information will be computed from the different correction parameters stored in otb::AtmosphericCorrectionParameters and otb::ImageMetadataCorrectionParameters instances. These containers will be given to the static function Compute of the otb::RadiometryCorrectionParametersToAtmosphericRadiativeTerms class, which will call a 6S routine that computes the needed radiometric information and stores it in an otb::AtmosphericRadiativeTerms class instance. For this, the otb::RadiometryCorrectionParametersToAtmosphericRadiativeTerms , otb::AtmosphericCorrectionParameters , otb::ImageMetadataCorrectionParameters and otb::AtmosphericRadiativeTerms types are defined and instantiated.
The otb::ImageMetadataCorrectionParameters class stores several parameters that are generally present in the image metadata, such as the solar and viewing angles, the acquisition day and month, and the spectral sensitivity of the sensor bands.
When this container is not set in the ReflectanceToSurfaceReflectance filter, it is automatically filled using the image metadata. The following lines show that it is also possible to set the values manually.
The otb::AtmosphericCorrectionParameters class stores the physical parameters of the atmosphere that are not impacted by the viewing angles of the image, such as the atmospheric pressure, the water vapor and ozone amounts, and the aerosol model and optical thickness.
Once those parameters are loaded, they are used by the 6S library to compute the needed radiometric information. The RadiometryCorrectionParametersToAtmosphericRadiativeTerms class provides a static function to perform this step.
The output is stored inside an instance of the otb::AtmosphericRadiativeTerms class. For each channel of the image, this class contains the radiative terms, such as the intrinsic atmospheric reflectance, the spherical albedo of the atmosphere, and the transmittances and gaseous transmissions.
Atmospheric corrections can now start. First, an instance of otb::ReflectanceToSurfaceReflectanceImageFilter is created.
The aim of the atmospheric correction is to retrieve the surface reflectance (for each pixel of the input image) from the TOA reflectance and from simulations of the atmospheric radiative functions corresponding to the geometrical conditions of the observation and to the atmospheric components. The process is applied to each pixel of the image, band by band, with the following formula:
	ρ_unif^S = A / (1 + S · A)		(13.10)

where

	A = (ρ_TOA − ρ_atm) / (T(μ_S) · T(μ_V) · t_g)		(13.11)

with:
  - ρ_TOA, the measured top-of-atmosphere reflectance;
  - ρ_atm, the intrinsic atmospheric reflectance;
  - T(μ_S) and T(μ_V), the downward and upward total transmittances along the solar and viewing directions;
  - t_g, the total gaseous transmission;
  - S, the spherical albedo of the atmosphere.
All those parameters are contained in the AtmosphericRadiativeTerms container.
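Under the homogeneous-ground assumption, the inversion can be sketched per pixel and per band as follows (an illustration using the notation of the text; in OTB these terms are read from the AtmosphericRadiativeTerms container):

```cpp
#include <cassert>
#include <cmath>

// Surface reflectance inversion: remove the intrinsic atmospheric
// reflectance, divide by the transmittances, then undo the coupling
// with the spherical albedo of the atmosphere.
double InvertSurfaceReflectance(double rhoTOA, double rhoAtm,
                                double transmittances,  // T(mu_S) * T(mu_V)
                                double gaseousTransmission,
                                double sphericalAlbedo)
{
  const double a = (rhoTOA - rhoAtm) / (transmittances * gaseousTransmission);
  return a / (1.0 + sphericalAlbedo * a);
}
```

With a null spherical albedo the correction reduces to a simple subtraction and rescaling of the TOA reflectance.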
Next (and last) step is the neighborhood correction, for which the SurfaceAdjacencyEffectCorrectionSchemeFilter class is used. The previous surface reflectance inversion is performed under the assumption of a homogeneous ground environment, and this step corrects the adjacency effect on the radiometry of pixels. The method is based on the decomposition of the observed signal into the sum of the target pixel's own contribution and the contributions of neighboring pixels, weighted by their distance to the target pixel. A simplified relation may be:
	ρ_unif^S = ρ^S + <ρ^S> · t_d(μ_V) / exp(−δ/μ_V)		(13.12)

with <ρ^S>, the environment contribution, a distance-weighted average of the reflectances of the neighboring pixels:

	<ρ^S>(x, y) = Σ Σ f(r(x′, y′)) · ρ^S(x′, y′)		(13.13)

where t_d(μ_V) is the upward diffuse transmittance, exp(−δ/μ_V) the upward direct transmittance, r(x′, y′) the distance between the neighboring pixel (x′, y′) and the target pixel, and f the distance weighting function.
The neighborhood consideration window size is given by the window radius.
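To illustrate the environment term only, the following sketch computes a distance-weighted average of reflectances over a square window of the given radius. The 1/(1+r) weighting is purely illustrative: in OTB the weights derive from the radiative transfer terms and the pixel spacing.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Distance-weighted mean of the neighborhood of pixel (x, y), using a
// (2*radius+1)^2 window clipped to the image bounds.
double NeighborhoodMean(const std::vector<std::vector<double> >& rho,
                        int x, int y, int radius)
{
  double sum = 0.0;
  double weightSum = 0.0;
  const int height = static_cast<int>(rho.size());
  const int width = static_cast<int>(rho[0].size());
  for (int j = -radius; j <= radius; ++j)
  {
    for (int i = -radius; i <= radius; ++i)
    {
      const int yy = y + j;
      const int xx = x + i;
      if (yy < 0 || yy >= height || xx < 0 || xx >= width)
        continue;  // skip neighbors outside the image
      const double r = std::sqrt(static_cast<double>(i * i + j * j));
      const double weight = 1.0 / (1.0 + r);  // illustrative weighting
      sum += weight * rho[yy][xx];
      weightSum += weight;
    }
  }
  return sum / weightSum;
}
```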
An instance of otb::SurfaceAdjacencyEffectCorrectionSchemeFilter is created. This class has an interface quite similar to otb::ReflectanceToSurfaceReflectance . They both need radiative terms ( otb::AtmosphericRadiativeTerms ), so it is possible to compute them outside the filter and set them directly in the filter. The other solution is to give as input the two parameter containers ("atmospheric" and "acquisition" parameters); the filter will then compute the radiative terms internally. If the "acquisition" correction parameters are not present, the filter will try to get them from the image metadata.
Four inputs are needed to compute the neighborhood contribution:
At this step, each filter of the chain is instantiated and has its input parameters set. A name can be given to the output image, and each filter is linked to the next one to create the final processing chain.
The invocation of the Update() method on the writer triggers the execution of the pipeline. It is recommended to place this call in a try/catch block in case errors occur and exceptions are thrown.