Multimodal Classification of Remote Sensing Images: A Review and Future Directions
- 7 August 2015
- journal article
- review article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in Proceedings of the IEEE
- Vol. 103 (9), 1560-1584
- https://doi.org/10.1109/jproc.2015.2449668
Abstract
Earth observation through remote sensing images allows the accurate characterization and identification of materials on the surface from space and airborne platforms. Multiple and heterogeneous image sources can be available for the same geographical region: multispectral, hyperspectral, radar, multitemporal, and multiangular images can today be acquired over a given scene. These sources can be combined/fused to improve classification of the materials on the surface. Even though these systems are generally accurate, the field is about to face new challenges: the upcoming constellations of satellite sensors will acquire large amounts of images of different spatial, spectral, angular, and temporal resolutions. In this scenario, multimodal image fusion stands out as the appropriate framework to address these problems. In this paper, we provide a taxonomical view of the field and review the current methodologies for multimodal classification of remote sensing images. We also highlight the most recent advances, which exploit synergies with machine learning and signal processing: sparse methods, kernel-based fusion, Markov modeling, and manifold alignment. Then, we illustrate the different approaches in seven challenging remote sensing applications: 1) multiresolution fusion for multispectral image classification; 2) image downscaling as a form of multitemporal image fusion and multidimensional interpolation among sensors of different spatial, spectral, and temporal resolutions; 3) multiangular image classification; 4) multisensor image fusion exploiting physically based feature extraction; 5) multitemporal image classification of land covers in incomplete, inconsistent, and vague image sources; 6) spatiospectral multisensor fusion of optical and radar images for change detection; and 7) cross-sensor adaptation of classifiers. The adoption of these techniques in operational settings will help to monitor our planet from space in the very near future.
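The abstract names kernel-based fusion among the reviewed methodologies. A common instance in this literature is the composite (weighted-sum) kernel, where modality-specific kernels are combined before classification. The sketch below is illustrative only, not the paper's implementation: the feature arrays, the RBF choice, and the weight `mu` are assumptions for the example.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel between row-sample matrices X and Y."""
    # Pairwise squared Euclidean distances via broadcasting.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def composite_kernel(Xa, Xb, Ya, Yb, mu=0.5, gamma=1.0):
    """Weighted sum of two modality-specific kernels (composite kernel fusion).

    Xa/Ya: samples in modality A (e.g. spectral features),
    Xb/Yb: the same samples in modality B (e.g. spatial or radar features),
    mu:    fusion weight in [0, 1] balancing the two modalities.
    """
    return mu * rbf_kernel(Xa, Ya, gamma) + (1.0 - mu) * rbf_kernel(Xb, Yb, gamma)
```

The resulting matrix is a valid kernel (a convex combination of positive semidefinite kernels), so it can be fed to any kernel classifier that accepts a precomputed Gram matrix, such as an SVM.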
Funding Information
- Generalitat Valenciana
- Swiss National Science Foundation (PP00P2_150593)
- Spanish Ministry of Economy and Competitiveness (MINECO)
- Italian Space Agency