Abstract
An extensive synthesis is provided of the concepts, measures and techniques of Information Theory (IT). After an axiomatic description of the basic definitions of “information functions”, “entropy” (or uncertainty) and the maximum entropy principle, the paper demonstrates the power of IT as both an interpretive and a technically productive tool. It is argued that this power and universality are primarily due to the common need for (i) measures of distance and discrimination and (ii) appropriate partitioning-aggregation properties. IT offers a very suggestive unification of a bewildering and seemingly arbitrary set of approaches that have evolved in different disciplines. Applications relevant to economics, finance, industrial organization, marketing, statistical inference and model selection, political science and communication are discussed or indicated. A main focus of the discussion is the generative power of IT measures in statistical examinations of unknown distributions and random phenomena. Measures of concentration and inequality, aggregation functions and index numbers, tests of nested and non-nested hypotheses, and measures of volatility, mobility and divergence are presented. Extending the author's previous work, the estimation of unknown regression functions, densities and score functions based on the maximum entropy principle is examined. Some empirical examples are cited.
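For reference, the entropy measure and extremum principle named above can be sketched as follows; the notation is an illustrative summary rather than the paper's own formulation. For a discrete distribution $p = (p_1, \ldots, p_n)$, Shannon's entropy is
$$H(p) = -\sum_{i=1}^{n} p_i \log p_i ,$$
and the maximum entropy principle selects, among all distributions satisfying given moment constraints, the one maximizing $H$:
$$\hat{p} = \arg\max_{p} H(p) \quad \text{subject to} \quad \sum_{i=1}^{n} p_i = 1, \qquad \sum_{i=1}^{n} p_i \, g_k(x_i) = \mu_k, \quad k = 1, \ldots, K,$$
where the $g_k$ and $\mu_k$ denote hypothetical constraint functions and target moments, not quantities specified in the abstract.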