Representation Learning With Dual Autoencoder for Multi-Label Classification
Open Access
- Published 12 July 2021
- Research article
- Published by the Institute of Electrical and Electronics Engineers (IEEE) in IEEE Access
- Vol. 9, pp. 98939-98947
- https://doi.org/10.1109/access.2021.3096194
Abstract
Multi-label classification deals with the problem that an object may be associated with one or more labels, a task made more difficult by the complex nature of multi-label data. A crucial problem in multi-label classification is learning more robust, higher-level feature representations, which can remove unhelpful feature attributes from the input space prior to training. In recent years, deep learning methods based on autoencoders have achieved excellent performance in multi-label classification, owing to their powerful representation learning ability and fast convergence. However, most existing autoencoder-based methods rely on a single autoencoder model, which limits multi-label feature representation learning and fails to measure similarities between data spaces. To address this problem, in this paper we propose a novel representation learning method with a dual autoencoder for multi-label classification. Compared with existing autoencoder-based methods, the proposed method can capture different characteristics and more abstract features from data through the serial connection of two different types of autoencoders. More specifically, first, a sparse autoencoder based on Reconstruction Independent Component Analysis (RICA) is trained on patches from all training and test data to learn robust global feature representations. Second, taking the output of RICA as input, a stacked autoencoder with manifold regularization (SAMR) is introduced to improve the quality of the learned multi-label features. Comprehensive experiments on several real-world datasets demonstrate the effectiveness of the proposed approach compared with several competing state-of-the-art methods.
Funding Information
- National Natural Science Foundation of China (61906060)
- Open Project Program of Key Laboratory of Huizhou Architecture in Anhui Province (HPJZ-2020-02)
- Open Project Program of Joint International Research Laboratory of Agriculture and Agri-Product Safety, the Ministry of Education of China, Yangzhou University (JILAR-KF202104)
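The two-stage pipeline described in the abstract can be sketched in code. The following is a minimal illustrative sketch, not the authors' implementation: it uses tied-weight linear autoencoders, a smooth-L1 sparsity penalty as a stand-in for the RICA objective, and a Gaussian-kernel graph Laplacian as the manifold regularizer; all hyperparameters (layer sizes, `lam`, `gamma`, learning rates) are assumptions for demonstration on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for image/feature "patches": 64-dim inputs, 200 samples (columns).
X = rng.standard_normal((64, 200))
X -= X.mean(axis=0)  # center each patch, as is common before RICA-style training

def rica_train(X, n_hidden=32, lam=0.1, eps=1e-2, lr=5e-3, steps=300):
    """Stage 1 (sketch of RICA): tied-weight sparse autoencoder minimizing
    ||W.T @ W @ X - X||_F^2 + lam * sum(sqrt((W @ X)**2 + eps))  by gradient descent."""
    W = 0.1 * rng.standard_normal((n_hidden, X.shape[0]))
    for _ in range(steps):
        H = W @ X
        R = W.T @ H - X                                 # reconstruction residual
        grad = 2 * (H @ R.T + W @ R @ X.T)              # gradient of reconstruction term
        grad += lam * (H / np.sqrt(H**2 + eps)) @ X.T   # gradient of smooth-L1 sparsity
        W -= lr * grad / X.shape[1]
    return W

def samr_train(H1, n_hidden=16, gamma=0.05, lr=5e-3, steps=300):
    """Stage 2 (sketch of SAMR): autoencoder layer with a graph-Laplacian
    manifold penalty gamma * tr(H2 @ L @ H2.T), which encourages similar
    stage-1 features to receive similar codes."""
    # Gaussian-kernel similarity graph over stage-1 features.
    d2 = ((H1[:, :, None] - H1[:, None, :])**2).sum(axis=0)
    S = np.exp(-d2 / d2.mean())
    L = np.diag(S.sum(axis=1)) - S                      # unnormalized graph Laplacian
    W = 0.1 * rng.standard_normal((n_hidden, H1.shape[0]))
    for _ in range(steps):
        H2 = W @ H1
        R = W.T @ H2 - H1
        grad = 2 * (H2 @ R.T + W @ R @ H1.T)            # reconstruction gradient
        grad += 2 * gamma * W @ H1 @ L @ H1.T           # manifold-smoothness gradient
        W -= lr * grad / H1.shape[1]
    return W

# Serial connection of the two autoencoders: RICA features feed SAMR.
W1 = rica_train(X)
H1 = W1 @ X          # stage-1 global feature representation
W2 = samr_train(H1)
H2 = W2 @ H1         # final representation, to be fed to a multi-label classifier
```

The serial connection is the key structural point: stage 2 never sees the raw input, only the stage-1 codes, so the two autoencoders can specialize (global sparse features vs. manifold-smoothed features) rather than one model doing both jobs.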