Sphere Loss: Learning Discriminative Features for Scene Classification in a Hyperspherical Feature Space

Abstract
The power of features considerably influences the classification performance of remote sensing scene classification (RSSC). Recently, deep convolutional neural networks (DCNNs) have been used to extract powerful scene features. Nevertheless, confusion and overlap still occur in the feature space, leading to inaccurate RSSC. To alleviate this problem, we propose a novel deep metric learning loss function, termed the sphere loss, to enhance the discrimination of feature representations. Inspired by two representative loss functions (i.e., the angular loss and the center loss), the proposed sphere loss learns a unique cluster center for each remote sensing scene class. Because the cluster centers and features are restricted by an introduced geometrical constraint, the intraclass distances among features decrease while the interclass distances increase. Moreover, we introduce a spatial constraint, i.e., a uniformity coefficient on the different cluster centers, which drives the centers toward a uniform distribution and further maximizes the interclass distances between features. Extensive analysis and experiments on three commonly used RSSC data sets consistently show that, compared with state-of-the-art methods, the proposed sphere loss effectively learns discriminative feature representations and significantly improves RSSC performance.
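The abstract does not give the exact formulation, but the following is a minimal sketch of how a sphere-style loss of this kind could be implemented, under stated assumptions: features and one learnable center per scene class are projected onto the unit hypersphere, an intraclass term pulls each feature toward its own class center, and a spreading term (standing in for the uniformity coefficient) penalizes similarity between distinct centers. The class name, the weighting parameter, and the specific terms are illustrative assumptions, not the paper's definition.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SphereLoss(nn.Module):
    """Hypothetical sketch of a sphere-style metric learning loss:
    per-class centers on the unit hypersphere, an intraclass pull term,
    and a uniformity-style penalty that spreads the centers apart.
    Not the paper's exact formulation."""

    def __init__(self, num_classes: int, feat_dim: int, lambda_uniform: float = 0.1):
        super().__init__()
        # One learnable cluster center per scene class (assumption).
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.lambda_uniform = lambda_uniform

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Geometrical constraint: project features and centers onto the unit hypersphere.
        feats = F.normalize(features, dim=1)
        centers = F.normalize(self.centers, dim=1)

        # Intraclass term: pull each feature toward its own class center
        # (1 - cosine similarity is a monotone proxy for angular distance).
        cos_to_own = (feats * centers[labels]).sum(dim=1)
        intra = (1.0 - cos_to_own).mean()

        # Spreading term: penalize the mean similarity between distinct centers
        # so that they distribute over the sphere and interclass distances grow.
        n = centers.size(0)
        sim = centers @ centers.t()
        off_diag = sim[~torch.eye(n, dtype=torch.bool, device=sim.device)]
        uniform = off_diag.mean()

        return intra + self.lambda_uniform * uniform

# Example usage with assumed shapes: 64 features of dimension 512, 30 scene classes.
loss_fn = SphereLoss(num_classes=30, feat_dim=512)
loss = loss_fn(torch.randn(64, 512), torch.randint(0, 30, (64,)))
```

In such a sketch the intraclass term shrinks distances to the assigned center while the spreading term plays the role of the uniformity coefficient described above; the relative weight `lambda_uniform` is an assumed hyperparameter.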
Funding Information
  • Chang Jiang Scholars Program (T2012122)
  • Hundred Leading Talent Project of Beijing Science and Technology (Z141101001514005)
