Parameter-Free Loss for Class-Imbalanced Deep Learning in Image Classification

Abstract
Current state-of-the-art loss functions for class-imbalanced deep learning require exhaustive hyperparameter tuning to reach high model performance, resulting in low training efficiency and impracticality for non-expert users. To address this issue, a parameter-free loss (PF-loss) function is proposed that works for both binary and multiclass imbalanced deep learning in image classification tasks. PF-loss offers three advantages: 1) training time is significantly reduced because NO hyperparameter tuning is needed; 2) it dynamically pays more attention to minority classes (rather than to outliers, as existing loss functions do) with NO hyperparameters in the loss function; and 3) it achieves higher accuracy because it adapts to the changing data distribution in each mini-batch instead of relying on the fixed hyperparameters of existing methods during training, which is especially beneficial when the data are highly skewed. Experimental results on several classical image datasets with different imbalance ratios (IR, up to 200) show that PF-loss reduces training time to as little as 1/148 of that of the compared state-of-the-art losses while achieving comparable or even higher accuracy in terms of both the G-mean and the area under the receiver operating characteristic (ROC) curve (AUC), especially when the data are highly skewed.
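The abstract does not give PF-loss's exact formulation, but the general idea it describes, reweighting the loss from the class frequencies of each mini-batch so that no tunable hyperparameter is needed, can be sketched as follows. This is a minimal illustrative scheme (inverse-frequency weighted cross-entropy computed per batch), not the authors' actual PF-loss; the function name and weighting rule are assumptions.

```python
import numpy as np

def batch_reweighted_ce(logits, labels, num_classes):
    """Illustrative parameter-free reweighted cross-entropy.

    Class weights are recomputed from the label frequencies of the
    CURRENT mini-batch (hypothetical scheme standing in for PF-loss),
    so the weighting adapts to each batch with no hyperparameters.
    """
    # Numerically stable softmax over the class dimension.
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)

    # Per-batch class counts -> inverse-frequency ("balanced") weights.
    counts = np.bincount(labels, minlength=num_classes).astype(float)
    weights = np.where(
        counts > 0,
        counts.sum() / (num_classes * np.maximum(counts, 1.0)),
        0.0,
    )

    # Weighted negative log-likelihood, averaged over the batch.
    nll = -np.log(probs[np.arange(len(labels)), labels] + 1e-12)
    return float((weights[labels] * nll).mean())
```

Because the weights are derived from batch statistics alone, minority-class samples automatically receive larger weights in every batch, which mirrors the "adapts to the changes of data distribution in each mini-batch" behavior the abstract claims.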
Funding Information
  • National Natural Science Foundation of China (62006160)
  • Educational Commission of Guangdong Province (2020KQNCX062)
  • Shenzhen Fundamental Research Program (20200813102946001)
  • Science and Technology Development Fund, Macau (0112/2020/A, 004/2019/AFJ)
  • National Natural Science Foundation of China (81771922, 62071309, 61801305, 81971585, 61871274)