Faster Domain Adaptation Networks

Abstract
It is widely acknowledged that the success of deep learning is built on large-scale training data and tremendous computing power. However, data and computing power are not always available in many real-world applications. In this paper, we address the machine learning problem in which training data are scarce and computing power is limited. Specifically, we investigate domain adaptation, which transfers knowledge from a labeled source domain to an unlabeled target domain, so that little training data is required from the target domain. At the same time, we consider settings in which the running environment is constrained, e.g., edge computing, where the end device has very limited resources. Technically, we present the Faster Domain Adaptation (FDA) protocol and further report two paradigms of FDA: early stopping and amid skipping. The former accelerates domain adaptation through multiple early exit points; the latter speeds up adaptation by judiciously skipping several intermediate (amid) neural network blocks. Extensive experiments on standard benchmarks verify that our method achieves comparable or even better accuracy while consuming far fewer computing resources. To the best of our knowledge, very few works in the community have investigated accelerating knowledge adaptation.
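The following is a minimal sketch, not the authors' released code, of how the two FDA paradigms described above could be realized in a network's forward pass: per-block classifier heads provide early exit points (early stopping), and selected intermediate blocks can be bypassed (amid skipping). The module layout, the confidence threshold, and the names FDABackbone, exit_threshold, and skip_blocks are illustrative assumptions only.

```python
import torch
import torch.nn as nn


class FDABackbone(nn.Module):
    """Illustrative backbone with early exits and skippable amid blocks (assumed design)."""

    def __init__(self, num_classes: int = 10, exit_threshold: float = 0.9):
        super().__init__()
        # Stack of feature blocks; intermediate ones may be bypassed ("amid skipping").
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Linear(128, 128), nn.ReLU()) for _ in range(4)]
        )
        # One lightweight classifier head per block serves as an early exit point.
        self.exits = nn.ModuleList([nn.Linear(128, num_classes) for _ in range(4)])
        self.exit_threshold = exit_threshold

    def forward(self, x: torch.Tensor, skip_blocks: frozenset = frozenset()):
        pred = None
        for i, (block, head) in enumerate(zip(self.blocks, self.exits)):
            if i in skip_blocks:  # amid skipping: bypass this intermediate block
                continue
            x = block(x)
            probs = torch.softmax(head(x), dim=-1)
            confidence, pred = probs.max(dim=-1)
            # Early stopping: return from the first exit that is confident enough.
            if confidence.min() >= self.exit_threshold:
                return pred
        assert pred is not None, "at least one block must be executed"
        return pred  # fall through to the last executed exit


# Usage example: skip the third block at inference to save computation.
model = FDABackbone()
features = torch.randn(8, 128)
predictions = model(features, skip_blocks=frozenset({2}))
```

In this sketch, easy target-domain samples leave the network at an early exit, while skipping blocks reduces the cost of every forward pass; the actual FDA exit criteria and skipping policy are specified in the paper itself.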
Funding Information
  • National Natural Science Foundation of China (61806039, 62073059, 61832001)
  • Sichuan Science and Technology Program (2020YFG0080)
