Sum-Product Networks: A New Deep Architecture
- 1 November 2011
- Conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
Abstract
The key limiting factor in graphical model inference and learning is the complexity of the partition function. We thus ask the question: what are the most general conditions under which the partition function is tractable? The answer leads to a new kind of deep architecture, which we call sum-product networks (SPNs) and present in this abstract. The key idea of SPNs is to compactly represent the partition function by introducing multiple layers of hidden variables. An SPN is a rooted directed acyclic graph with variables as leaves, sums and products as internal nodes, and weighted edges.
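The structure described above can be illustrated with a minimal sketch: leaf indicators over binary variables, product nodes that multiply their children, and sum nodes that take weighted mixtures. The class names and the toy network below are illustrative assumptions, not from the paper; the sketch also shows the tractability property the abstract motivates, since setting all indicators to 1 (here, leaving a variable out of the assignment) computes the partition function in a single bottom-up pass.

```python
# Minimal sketch of SPN evaluation over two binary variables X1, X2.
# Node classes and the example network are hypothetical, for illustration only.

class Leaf:
    """Indicator leaf: 1.0 if its variable matches its value.
    A variable absent from the assignment is marginalized out (returns 1.0)."""
    def __init__(self, var, value):
        self.var, self.value = var, value
    def eval(self, assignment):
        if self.var not in assignment:
            return 1.0  # marginalize: indicator set to 1
        return 1.0 if assignment[self.var] == self.value else 0.0

class Product:
    """Product node: multiplies the values of its children."""
    def __init__(self, children):
        self.children = children
    def eval(self, assignment):
        result = 1.0
        for child in self.children:
            result *= child.eval(assignment)
        return result

class Sum:
    """Sum node: weighted sum of its children's values."""
    def __init__(self, weighted_children):
        self.weighted_children = weighted_children  # list of (weight, child)
    def eval(self, assignment):
        return sum(w * c.eval(assignment) for w, c in self.weighted_children)

# Leaves: indicators for X1 = 1/0 and X2 = 1/0.
x1, nx1 = Leaf("X1", 1), Leaf("X1", 0)
x2, nx2 = Leaf("X2", 1), Leaf("X2", 0)

# Root: a mixture of two product nodes (weights sum to 1).
root = Sum([
    (0.7, Product([x1, x2])),
    (0.3, Product([nx1, nx2])),
])

print(root.eval({"X1": 1, "X2": 1}))  # unnormalized probability of (1, 1): 0.7
print(root.eval({}))                  # partition function in one pass: 1.0
```

Because every query, including the partition function, reduces to one bottom-up traversal, evaluation is linear in the number of edges of the DAG.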