Challenges and Opportunities of Multimodality and Data Fusion in Remote Sensing
Open Access
- 13 August 2015
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in Proceedings of the IEEE
- Vol. 103 (9), 1585-1601
- https://doi.org/10.1109/jproc.2015.2462751
Abstract
Remote sensing is one of the most common ways to extract relevant information about the Earth and our environment. Remote sensing acquisitions can be performed by both active (synthetic aperture radar, LiDAR) and passive (optical and thermal range, multispectral and hyperspectral) devices. Depending on the sensor, a variety of information about the Earth's surface can be obtained: the acquired data can describe the structure (optical, synthetic aperture radar), elevation (LiDAR), and material content (multispectral and hyperspectral) of the objects in the image. Considered together, their complementarity can be helpful for characterizing land use (urban analysis, precision agriculture), detecting damage (e.g., in natural disasters such as floods, hurricanes, earthquakes, and oil spills at sea), and giving insights into the potential exploitation of resources (oil fields, minerals). In addition, repeated acquisitions of a scene at different times allow one to monitor natural resources and environmental variables (vegetation phenology, snow cover), anthropogenic effects (urban sprawl, deforestation), and climate change (desertification, coastal erosion), among others. In this paper, we sketch the current opportunities and challenges related to the exploitation of multimodal data for Earth observation. This is done by leveraging the outcomes of the Data Fusion Contests organized by the IEEE Geoscience and Remote Sensing Society since 2006. We report on the outcomes of these contests, presenting the multimodal data sets made available to the community each year, the targeted applications, and an analysis of the submitted methods and results: How was multimodality considered and integrated in the processing chain? What improvements and new opportunities did the fusion offer? What objectives were addressed, and what solutions were reported? And, from this, what will be the next challenges?
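The abstract's central idea, that modalities such as hyperspectral imagery (material content) and LiDAR (elevation) are complementary and can be integrated in a classification chain, can be illustrated with a minimal feature-level fusion sketch. This is not a method from the paper: the data are synthetic, the two classes ("ground" vs. "building") and the nearest-centroid classifier are hypothetical stand-ins chosen only to show per-pixel feature stacking.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-pixel features for two hypothetical classes:
# 0 = ground, 1 = building (illustrative only).
n = 200
labels = rng.integers(0, 2, n)
# "Hyperspectral": 4 spectral bands, class-dependent mean.
hyper = rng.normal(labels[:, None].astype(float), 1.0, (n, 4))
# "LiDAR": 1 elevation value; buildings are taller on average.
lidar = rng.normal(3.0 * labels[:, None].astype(float), 1.0, (n, 1))

def nearest_centroid(train_X, train_y, test_X):
    """Assign each test row to the class with the closest training centroid."""
    centroids = np.stack([train_X[train_y == c].mean(axis=0) for c in (0, 1)])
    d = np.linalg.norm(test_X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# Feature-level fusion: concatenate the per-pixel feature vectors
# from both modalities before classification.
fused = np.hstack([hyper, lidar])

split = n // 2
accs = {}
for name, X in [("hyperspectral only", hyper), ("fused", fused)]:
    pred = nearest_centroid(X[:split], labels[:split], X[split:])
    accs[name] = (pred == labels[split:]).mean()
    print(f"{name}: accuracy {accs[name]:.2f}")
```

On this toy data the stacked representation typically classifies better than either modality alone, which is the intuition behind the complementarity argument; the contests surveyed in the paper explore far richer fusion strategies (feature-, decision-, and kernel-level) on real data.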
Funding Information
- European Project (ERC-2012-AdG-320684-CHESS)