
Interpolation vs Least Squares Approximation

Thomas J. Kennedy


1 Least Squares Approximation

We spent quite a bit of time discussing Least Squares Approximation. In total we discussed the following methods…

  1. $\left[X^T X \mid X^T Y\right]$

  2. $[A \mid b]$

  3. $\dfrac{\partial}{\partial c_k} \displaystyle\int_R \left(f - \hat{\varphi}\right)^2\,dx$

  4. Presolved $[A \mid b]$ for a line

While we discussed four methods for deriving $\hat{\varphi}$… all four methods were really different notations for the same technique. Note that our focus here is on Least Squares Approximation as applied to discrete data.

Least Squares Approximation is a single method.
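As a concrete sketch of the first notation, the normal equations $\left[X^T X \mid X^T Y\right]$ can be assembled and solved directly. The data points below are hypothetical, chosen only to illustrate fitting a line $\hat{\varphi}(x) = c_0 + c_1 x$:

```python
import numpy as np

# Hypothetical sample data: five (x, y) points that roughly follow a line.
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ys = np.array([1.1, 1.9, 3.2, 3.9, 5.1])

# Design matrix X for phi-hat(x) = c0 + c1 * x.
X = np.column_stack([np.ones_like(xs), xs])

# Normal equations: (X^T X) c = X^T Y, i.e., the augmented system [X^T X | X^T Y].
XTX = X.T @ X
XTY = X.T @ ys
c = np.linalg.solve(XTX, XTY)

print(c)  # [c0, c1] -- intercept and slope of the least squares line
```

Whichever of the four notations is used, the system being solved is this same one.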

2 Why Interpolation?

Least Squares Approximation captures the behavior of a set of data. Interpolation also captures the behavior of a set of data. However,

Interpolation refers to a collection of techniques… with a common invariant.

The error at every input point must be zero. Given a polynomial $p(x)$, the polynomial must be equal to $f(x)$ at every input point $(x_0, x_1, \ldots, x_n)$. This invariant is often written as

$$\forall x_k \quad p(x_k) = f(x_k)$$

This notation is read as "for all $x_k$, $p(x_k)$ must be equal to $f(x_k)$."
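The invariant can be checked numerically. As a sketch (with made-up input points, not data from these notes): $n+1$ points determine a unique polynomial of degree at most $n$, and that polynomial matches $f$ exactly at every node.

```python
import numpy as np

# Hypothetical input points (x_k, f(x_k)); three points determine a
# polynomial of degree at most 2.
xs = np.array([0.0, 1.0, 2.0])
fs = np.array([1.0, 3.0, 7.0])

# Fit a polynomial of degree (number of points - 1): this is interpolation,
# not least squares, because the system is square and solved exactly.
coeffs = np.polyfit(xs, fs, deg=len(xs) - 1)
p = np.poly1d(coeffs)

# The interpolation invariant: p(x_k) == f(x_k) at every input point.
for xk, fk in zip(xs, fs):
    assert abs(p(xk) - fk) < 1e-9
```

Contrast this with least squares, where the residuals at the input points are merely minimized and are usually nonzero.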

3 Let Us Try that Again… In Plain English

Least Squares Approximation can be thought of as…

We have a memory of pieces of a picture and want to capture the idea inspired by those pieces.

Interpolation can be thought of as…

We have pieces of a picture and want to fill in the blanks… while preserving the original pieces and trying to figure out what should be around each piece.