Supervised deep learning with auxiliary networks
- 24 August 2014
- conference paper
- Published by Association for Computing Machinery (ACM) in Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining
Abstract
Deep learning has demonstrated its potential for learning latent feature representations. Recent years have witnessed increasing enthusiasm for regularizing deep neural networks by incorporating various kinds of side information, such as user-provided labels or pairwise constraints. However, the effectiveness and parameter sensitivity of such algorithms have been major obstacles to putting them into practice. The major contribution of our work is a novel supervised deep learning algorithm distinguished by two unique traits. First, it regularizes the network construction using similarity or dissimilarity constraints between data pairs, rather than sample-specific annotations. This kind of side information is more flexible and greatly mitigates the workload of annotators. Second, unlike prior works, our proposed algorithm decouples the supervision information from the intrinsic data structure. We design two heterogeneous networks, each of which encodes either the supervision or the unsupervised data structure. Specifically, we term the supervision-oriented network the "auxiliary network", since it principally facilitates the parameter learning of the other network and is removed when handling out-of-sample data. The two networks are complementary to each other and are bridged by enforcing the correlation of their parameters. We name the proposed algorithm SUpervision-Guided AutoencodeR (SUGAR). Compared with prior works on unsupervised deep networks and supervised learning, SUGAR better balances numerical tractability and the flexible utilization of supervision information. Classification results on MNIST digits and eight benchmark datasets demonstrate that SUGAR effectively improves performance through the auxiliary network, on both shallow and deep architectures. In particular, when multiple SUGARs are stacked, performance is significantly boosted.
On the selected benchmarks, our models achieve up to 11.35% relative accuracy improvement over state-of-the-art models.
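The core idea described in the abstract (a main autoencoder and a supervision-driven auxiliary network, bridged by a penalty that correlates their parameters) can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact formulation: the linear networks, the contrastive hinge loss for dissimilar pairs, and all hyperparameter values are assumptions introduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian clusters in 5-D.
n_per, dim, h = 20, 5, 3
X = np.vstack([rng.normal(0.0, 0.5, (n_per, dim)),
               rng.normal(2.0, 0.5, (n_per, dim))])
y = np.repeat([0, 1], n_per)
n = len(X)

# Pairwise side information: +1 for similar pairs, -1 for dissimilar.
# (Here derived from labels; in SUGAR it would come from annotators.)
idx = rng.choice(n, (100, 2))
pairs = [(i, j, 1 if y[i] == y[j] else -1) for i, j in idx if i != j]

W = rng.normal(0, 0.1, (h, dim))   # encoder of the main autoencoder
D = rng.normal(0, 0.1, (h, dim))   # decoder of the main autoencoder
V = rng.normal(0, 0.1, (h, dim))   # auxiliary (supervision) network
lam, margin, lr = 0.1, 2.0, 0.05   # coupling strength, hinge margin, step

def recon_error(W, D):
    """Mean squared reconstruction error of the linear autoencoder."""
    return np.mean((X @ W.T @ D - X) ** 2)

err0 = recon_error(W, D)
for _ in range(300):
    # Main network: reconstruction loss plus coupling toward V.
    R = X @ W.T @ D - X                       # residual, n x dim
    gW = 2 * D @ R.T @ X / n + 2 * lam * (W - V)
    gD = 2 * W @ X.T @ R / n
    # Auxiliary network: pairwise contrastive loss plus coupling toward W.
    gV = 2 * lam * (V - W)
    for i, j, s in pairs:
        d = X[i] - X[j]
        e = V @ d                             # embedded pair difference
        dist = np.linalg.norm(e)
        if s == 1:                            # pull similar pairs together
            gV += 2 * np.outer(e, d) / len(pairs)
        elif dist < margin:                   # push dissimilar pairs apart
            gV += 2 * (dist - margin) / (dist + 1e-12) \
                  * np.outer(e, d) / len(pairs)
    W -= lr * gW
    D -= lr * gD
    V -= lr * gV
```

After training, the auxiliary network `V` is discarded; only `W` and `D` are kept for out-of-sample data, which mirrors the decoupling the abstract describes.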
Funding Information
- Ministry of Science and Technology of the People's Republic of China (2014CB340304)