Abstract
Suppose that $f$ is a function from $\mathbb{R}^k$ to $\mathbb{R}^k$ and for some $\theta$, $f(\theta) = 0$. Initially $f$ is unknown, but for any $x$ in $\mathbb{R}^k$ we can observe a random vector $Y(x)$ with expectation $f(x)$. The unknown $\theta$ can be estimated recursively by Blum's (1954) multivariate version of the Robbins-Monro procedure. Blum's procedure requires the rather restrictive assumption that the infimum of the inner product $(x - \theta)^t f(x)$ over any compact set not containing $\theta$ be positive. Thus at each $x$, $f(x)$ gives information about the direction towards $\theta$. Blum's recursion is $X_{n+1} = X_n - a_n Y_n$, where the conditional expectation of $Y_n$ given $X_1, \cdots, X_n$ is $f(X_n)$ and $a_n > 0$. Unlike Blum's method, the procedure introduced in this paper does not necessarily attempt to move in a direction that decreases $\|X_n - \theta\|$, at least not during the initial stage of the procedure. Rather, except for random fluctuations, it moves in a direction which decreases $\|f\|^2$, and it may follow a circuitous route to $\theta$. Consequently, it does not require that $(x - \theta)^t f(x)$ have constant sign. This new procedure is somewhat similar to the multivariate Kiefer-Wolfowitz procedure applied to $\|f\|^2$, but unlike the latter it converges to $\theta$ at rate $n^{-1/2}$. Deterministic root finding methods are briefly discussed. The method of this paper is a stochastic analog of the Newton-Raphson and Gauss-Newton techniques.
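As a minimal illustration of Blum's recursion $X_{n+1} = X_n - a_n Y_n$ (not the new procedure of this paper), the sketch below runs the scheme in Python for the simple case $f(x) = x - \theta$ with additive Gaussian noise and step sizes $a_n = 1/n$; the choice of $f$, the noise scale, and all names here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.array([1.0, -2.0])  # hypothetical root, unknown to the procedure

def f(x):
    # toy regression function with root at theta; here
    # (x - theta)' f(x) = ||x - theta||^2 > 0 away from theta,
    # so Blum's positivity condition holds
    return x - theta

def Y(x):
    # noisy observation with conditional expectation f(x)
    return f(x) + rng.normal(scale=0.5, size=2)

x = np.zeros(2)  # starting point X_1
for n in range(1, 20001):
    a_n = 1.0 / n  # gains with sum a_n = infinity, sum a_n^2 < infinity
    x = x - a_n * Y(x)
# x is now close to theta; the error is of order n^{-1/2}
```

For this linear $f$, the recursion with $a_n = 1/n$ reduces to averaging the noisy observations of $\theta$, which makes the $n^{-1/2}$ convergence rate transparent.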