Abstract
The signal model presently considered is composed of a linear combination of basis signals chosen to reflect the basic nature believed to characterize the data being modeled. The basis signals depend on a set of real parameters selected so that the signal model best approximates the data in a least-square-error (LSE) sense. In the nonlinear programming algorithms presented for computing the optimum parameter selection, emphasis is placed on computational efficiency. The development is formulated in a vector-space setting and uses such fundamental vector-space concepts as inner products, range- and null-space matrices, orthogonal vectors, and the generalized Gram-Schmidt orthogonalization procedure. A running set of representative signal-processing examples is presented to illustrate the theoretical concepts as well as to point out the utility of LSE modeling. These examples include the modeling of empirical data as a sum of complex exponentials and sinusoids, linear prediction, linear recursive identification, and direction finding.
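To make the setup concrete, the following is a minimal sketch (not from the paper) of LSE modeling with nonlinearly parameterized basis signals: the data are fit as a sum of two real exponentials, where for any fixed choice of the nonlinear parameters (the decay rates) the optimal linear coefficients follow from an ordinary least-squares solve, and the rates themselves are found here by a simple grid search. The function name `lse_fit`, the synthetic data, and the grid-search strategy are all illustrative assumptions.

```python
import numpy as np

def lse_fit(t, y, rates):
    """For fixed decay rates, return the LSE coefficients and squared error.

    Illustrative sketch: the basis-signal matrix B has one column per
    candidate rate, and the optimal linear coefficients come from an
    ordinary least-squares solve.
    """
    B = np.exp(-np.outer(t, rates))            # basis signals exp(-a_k * t)
    c, *_ = np.linalg.lstsq(B, y, rcond=None)  # LSE linear coefficients
    r = y - B @ c                              # residual vector
    return c, float(r @ r)                     # coefficients, squared error

# Synthetic data from known parameters (assumed for illustration).
t = np.linspace(0.0, 5.0, 200)
y = 2.0 * np.exp(-0.5 * t) + 1.0 * np.exp(-2.0 * t)

# Coarse grid search over the nonlinear parameters (pairs of rates);
# a real implementation would use a nonlinear programming method instead.
grid = np.linspace(0.1, 3.0, 30)
err, a1, a2 = min((lse_fit(t, y, np.array([r1, r2]))[1], r1, r2)
                  for r1 in grid for r2 in grid if r1 < r2)
```

Because the model is linear in the coefficients once the rates are fixed, the search effectively runs over the nonlinear parameters only, which is the structure the paper's algorithms exploit for efficiency.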