Integrating Prior Knowledge into Deep Learning

Abstract
Deep learning makes it possible to develop feature representations and train classification models in a fully integrated way. However, training deep networks is hard, and they improve over shallow architectures only when a large amount of training data is available. Injecting prior knowledge into the learner is a principled way to reduce the amount of training data required, since the learner no longer needs to induce that knowledge from the data itself. In this paper we propose a general and principled way to integrate prior knowledge into the training of deep networks. Semantic Based Regularization (SBR) serves as the underlying framework for representing the prior knowledge, which is expressed as a collection of first-order logic (FOL) clauses, where each task to be learned corresponds to a predicate in the knowledge base. The knowledge base correlates the tasks to be learned and is translated into a set of constraints that are integrated into the learning process via backpropagation. The experimental results show how integrating the prior knowledge boosts the accuracy of a state-of-the-art deep network on an image classification task.
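The abstract itself contains no code; the snippet below is a minimal sketch of the core idea, assuming a multi-label setup where each network output is read as the fuzzy truth value of a predicate. A single FOL clause, cat(x) → animal(x), is translated into a differentiable penalty using the product t-norm (one common choice in SBR-style approaches) and added to the supervised loss, so the constraint gradient flows through backpropagation. All names (TinyNet, sbr_penalty, lambda_sbr), the example rule, and the t-norm choice are illustrative assumptions, not taken from the paper.

```python
# Sketch of translating one FOL clause into a differentiable regularizer,
# in the spirit of Semantic Based Regularization. Names and the example
# rule are illustrative assumptions, not from the paper.

import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """Multi-label classifier: one sigmoid output per predicate/class."""
    def __init__(self, in_dim: int, n_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        # Outputs in [0, 1] are read as fuzzy truth values of predicates.
        return torch.sigmoid(self.net(x))

def sbr_penalty(p_cat, p_animal):
    """Penalty for the clause: forall x, cat(x) -> animal(x).

    Under the product t-norm the clause is violated to the degree
    cat(x) * (1 - animal(x)); averaging over the batch yields a
    differentiable constraint loss.
    """
    return (p_cat * (1.0 - p_animal)).mean()

# Illustrative training step mixing supervision with the logic constraint.
model = TinyNet(in_dim=32, n_classes=2)       # class 0: cat, class 1: animal
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
bce = nn.BCELoss()
lambda_sbr = 0.5                              # constraint weight (assumed)

x = torch.randn(16, 32)                       # toy input batch
y = torch.randint(0, 2, (16, 2)).float()      # toy multi-label targets

optimizer.zero_grad()
pred = model(x)
loss = bce(pred, y) + lambda_sbr * sbr_penalty(pred[:, 0], pred[:, 1])
loss.backward()                               # constraint gradient via backprop
optimizer.step()
```

Because the penalty depends only on the network's outputs, it can also be evaluated on unlabeled examples, which is how constraint-based regularization of this kind can reduce the amount of labeled data needed.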
