Convergence of rule-of-thumb learning rules in social networks

Abstract
We study the problem of dynamic learning by a social network of agents. Each agent receives a signal about an underlying state and communicates with a subset of agents (his neighbors) in each period. The network is connected. In contrast to the majority of existing learning models, we focus on the case where the underlying state is time-varying. We consider the following class of rule-of-thumb learning rules: in each period, each agent constructs his posterior as a weighted average of his prior, his signal, and the information he receives from his neighbors. The weights given to signals can vary over time, and the weights given to neighbors can vary across agents. We distinguish between two subclasses: (1) constant-weight rules and (2) diminishing-weight rules, where the latter drive the weights given to signals to zero asymptotically. Our main results characterize the asymptotic behavior of beliefs. We show that the general class of rules leads to unbiased estimates of the underlying state. When the variance of the innovations to the underlying state tends to zero asymptotically, we show that diminishing-weight rules ensure convergence in the mean-square sense. In contrast, when the underlying state has persistent innovations, constant-weight rules allow us to derive explicit bounds on the mean-square error between an agent's belief and the underlying state as a function of the learning rule and the signal structure.
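To make the class of rules concrete, the following is a minimal simulation sketch of a diminishing-weight rule: each agent's new belief is a convex combination of neighbor beliefs (including his own prior) and his fresh signal, with the signal weight decaying over time. All specifics here (the ring network, the weight sequence gamma_t = 1/t, Gaussian signals, and a fixed underlying state) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative parameters (assumptions, not from the paper)
rng = np.random.default_rng(0)
n = 5          # number of agents
T = 2000       # number of periods
theta = 1.0    # underlying state, held fixed here for simplicity
sigma = 0.5    # signal noise standard deviation

# Row-stochastic neighbor-weight matrix A: A[i, j] is the weight agent i
# places on agent j's belief. A ring network is assumed for illustration.
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = 0.5
    A[i, (i - 1) % n] = 0.25
    A[i, (i + 1) % n] = 0.25

x = rng.normal(0.0, 1.0, size=n)  # initial beliefs

for t in range(1, T + 1):
    s = theta + rng.normal(0.0, sigma, size=n)  # private signals
    gamma = 1.0 / t  # diminishing signal weight (an assumed schedule)
    # Posterior = weighted average of prior, neighbors' beliefs, and signal:
    # the neighbor/prior part gets weight (1 - gamma), the signal gets gamma.
    x = (1.0 - gamma) * (A @ x) + gamma * s

print("final beliefs:", x)                       # all close to theta
print("max error:", np.abs(x - theta).max())
```

With the signal weight decaying as 1/t and a fixed state, the beliefs in this sketch settle near theta, consistent with the mean-square convergence the abstract describes for diminishing-weight rules; replacing gamma with a constant would instead leave a persistent noise-driven error, in line with the bounds described for constant-weight rules.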
