Towards Multimodal Data Retrieval in Remote Sensing

Abstract
The world around us is multimodal in nature: we see scenes, hear voices, watch videos, and savor flavors. Recently, multimodal applications, which deal with multiple modalities at once, have become a topic of broad and current interest in the computer vision literature, with image-text retrieval (matching) receiving particular attention. Yet most existing remote sensing image retrieval approaches still rely on unimodal image-to-image matching. In this paper, we aim to draw the attention of the remote sensing community to the emerging direction of multimodal data retrieval (matching), particularly image-text matching. This direction is important because grasping the relation between visual and textual content, and bridging the semantic gap between these different modalities, is central to human-like intelligence, and the tremendous progress in deep learning techniques makes it increasingly tractable. To this end, we highlight the three main challenges that face researchers in this line of work: multimodal representation, similarity measurement, and dataset availability.