Abstract
In this talk, we propose an information-theoretic approach to designing functional representations that extract the hidden common structure shared by a set of random variables. The main idea is to measure the common information between the random variables by Watanabe's total correlation, and then to find the hidden attributes such that, conditioned on these attributes, the common information between the random variables is reduced the most. We show that these hidden attributes can be characterized by an exponential family specified by the eigen-decomposition of a pairwise joint distribution matrix. We then adopt the log-likelihood functions for estimating these hidden attributes as the desired functional representations of the random variables, and show that these representations are informative for describing the common structure. Moreover, we design both a multivariate alternating conditional expectation (MACE) algorithm to compute the proposed functional representations for discrete data, and a novel neural network training scheme for continuous or high-dimensional data. Finally, the performance of our algorithms is validated by numerical simulations on the MNIST handwritten digit recognition task.
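For reference, a minimal statement of the quantity the abstract builds on (the definition is standard; the conditional form below is one natural reading of "reduced the most given these hidden attributes"): Watanabe's total correlation of random variables $X_1, \dots, X_n$ is
\[
C(X_1, \dots, X_n) = \sum_{i=1}^{n} H(X_i) - H(X_1, \dots, X_n) = D\Bigl( P_{X_1 \cdots X_n} \,\Big\|\, \prod_{i=1}^{n} P_{X_i} \Bigr),
\]
and the hidden attribute $U$ is then chosen to maximize the reduction $C(X_1, \dots, X_n) - C(X_1, \dots, X_n \mid U)$.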