Separate Compilation of Bayesian Networks for Efficient Exact Inference

Abstract
Compiling Bayesian Networks (BNs) into Zero-Suppressed BDDs (ZDDs) to perform efficient exact inference has attracted much attention. Exact inference on a ZDD takes time linear in the size of the ZDD, and caching techniques further accelerate it. However, as a BN grows, compilation becomes unacceptable in both time consumption and ZDD size, which hinders practical applications of BNs. In this paper, we improve the conventional ZDD-based method by partitioning a BN and compiling its parts separately. For a given BN, serial-pattern d-separation sets are found and used to partition the network into conditionally independent components. Compiling these components into separate ZDDs is more efficient than generating one giant ZDD for the whole network. However, partitioning a BN into too many components can incur considerable time consumption, which grows exponentially with the number of vertices in the serial-pattern d-separations. To trade off off-line time consumption (finding d-separations and compiling ZDDs) against on-line time consumption (inference using ZDDs), the d-separations used for partitioning are restricted to single vertices and are found with Tarjan's vertex-cut algorithm, which runs in time linear in the number of BN vertices. Experiments show that one-vertex d-separations exist in most BNs, and that partitioning BNs with them speeds up both compilation and inference substantially compared with the conventional ZDD-based method. To validate partitioning with one-vertex d-separations, we also conduct experiments with two-vertex d-separations and comparative experiments against jointree algorithms.
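The one-vertex cuts mentioned above are articulation points of the network's underlying undirected graph, which Tarjan's DFS-based algorithm finds in time linear in vertices plus edges. Below is a minimal sketch of that algorithm; the function name, the edge-list input format, and the use of the plain undirected skeleton (rather than any particular moralization of the BN) are illustrative assumptions, not details taken from the paper.

```python
from collections import defaultdict

def articulation_points(edges):
    """Return the articulation points (one-vertex cuts) of an
    undirected graph given as a list of (u, v) edges, using
    Tarjan's single-pass DFS with discovery times and low-links."""
    graph = defaultdict(list)
    for u, v in edges:
        graph[u].append(v)
        graph[v].append(u)

    disc, low = {}, {}   # discovery time and low-link per vertex
    cuts = set()
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in graph[u]:
            if v == parent:
                continue
            if v in disc:
                # Back edge: u can reach an ancestor directly.
                low[u] = min(low[u], disc[v])
            else:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                # Non-root u is a cut vertex if some child subtree
                # cannot reach strictly above u.
                if parent is not None and low[v] >= disc[u]:
                    cuts.add(u)
        # The DFS root is a cut vertex iff it has >= 2 DFS children.
        if parent is None and children >= 2:
            cuts.add(u)

    for node in list(graph):
        if node not in disc:
            dfs(node, None)
    return cuts
```

For example, in the chain a - b - c the only articulation point is b: removing it splits the graph into two components, each of which could then be compiled into its own ZDD.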