A fast learning algorithm for multi-layer extreme learning machine

Abstract
Extreme learning machine (ELM) is an efficient training algorithm originally proposed for single-hidden layer feedforward networks (SLFNs), in which the input weights are randomly chosen and need not be fine-tuned. In this paper, we present a new stacked architecture for ELM to further improve its learning accuracy while maintaining its advantage in training speed. By exploiting the hidden information of the ELM random feature space, a recovery-based training model is developed and incorporated into the proposed ELM stack architecture. Experimental results on the MNIST handwritten digit dataset demonstrate that the proposed algorithm achieves better accuracy and much faster convergence than state-of-the-art ELM and deep learning methods.
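To make the abstract's premise concrete, the following is a minimal sketch of the basic single-hidden-layer ELM it builds on: input weights and biases are drawn at random and left untouched, and only the output weights are solved in closed form via the Moore-Penrose pseudoinverse. This is an illustrative NumPy implementation of the standard ELM, not the stacked architecture proposed in the paper; the function names and the sigmoid activation are our own choices.

```python
import numpy as np

def elm_train(X, T, n_hidden, seed=None):
    """Train a basic single-hidden-layer ELM.

    X: (n_samples, n_features) inputs; T: (n_samples, n_outputs) targets.
    Input weights W and biases b are random and never fine-tuned;
    only the output weights beta are computed, in closed form.
    """
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    # Randomly chosen hidden-layer parameters (kept fixed).
    W = rng.standard_normal((n_features, n_hidden))
    b = rng.standard_normal(n_hidden)
    # Hidden-layer output matrix H (sigmoid activation assumed here).
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    # Output weights as the least-squares solution beta = H^+ T.
    beta = np.linalg.pinv(H) @ T
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Forward pass through the trained ELM."""
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

Because training reduces to one random projection and one pseudoinverse, there is no iterative weight tuning, which is the speed advantage the abstract refers to.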