
Least Squares Examples - sin(x)

Thomas J. Kennedy


Do you remember f(x) = \sin(x)? I want to use more than 3 points to approximate it over 0 \leq x \leq \pi.

1 Getting Started

We must use the Ac=b method:

\left[\begin{array}{ccc} \int\limits_{x=0}^{\pi} \pi_{0} \pi_{0} dx & \int\limits_{x=0}^{\pi} \pi_{0} \pi_{1} dx & \int\limits_{x=0}^{\pi} \pi_{0} \pi_{2} dx\\ \int\limits_{x=0}^{\pi} \pi_{1} \pi_{0} dx & \int\limits_{x=0}^{\pi} \pi_{1} \pi_{1} dx & \int\limits_{x=0}^{\pi} \pi_{1} \pi_{2} dx\\ \int\limits_{x=0}^{\pi} \pi_{2} \pi_{0} dx & \int\limits_{x=0}^{\pi} \pi_{2} \pi_{1} dx & \int\limits_{x=0}^{\pi} \pi_{2} \pi_{2} dx\\ \end{array}\right] \left[\begin{array}{c} c_0 \\ c_1 \\ c_2 \\ \end{array}\right] = \left[\begin{array}{c} \int\limits_{x=0}^{\pi} \pi_0 f dx\\ \int\limits_{x=0}^{\pi} \pi_1 f dx\\ \int\limits_{x=0}^{\pi} \pi_2 f dx\\ \end{array}\right]

Note how A is a 3-by-3 matrix. We have three constants to compute (i.e., c_0, c_1, c_2).

\hat{\varphi} = c_0 \pi_0 + c_1 \pi_1 + c_2 \pi_2

Since we need a polynomial of degree 2, our basis functions are:

\pi_0 = 1 \qquad \pi_1 = x \qquad \pi_2 = x^{2}

which leads us to

\hat{\varphi} = c_0 + c_1 x + c_2 x^{2}

2 Some Calculus

We have twelve integrals to solve. Let us start with the nine (9) from A.

\begin{array}{lclcl} \pi_{0} \pi_{0} = 1 \cdot 1 &\rightarrow& \int\limits_{0}^{\pi} dx &=& \left. x \right|_{0}^{\pi} = \pi \\ \pi_{0} \pi_{1} = 1 \cdot x &\rightarrow& \int\limits_{0}^{\pi} x\, dx &=& \left. \frac{1}{2}x^{2} \right|_{0}^{\pi} = \frac{\pi^{2}}{2} \\ \pi_{0} \pi_{2} = 1 \cdot x^{2} &\rightarrow& \int\limits_{0}^{\pi} x^{2}\, dx &=& \left. \frac{1}{3}x^{3} \right|_{0}^{\pi} = \frac{\pi^{3}}{3} \\ \pi_{1} \pi_{0} = \pi_{0} \pi_{1} &\rightarrow& \ldots &=& \frac{\pi^{2}}{2} \\ \pi_{1} \pi_{1} = x \cdot x &\rightarrow& \int\limits_{0}^{\pi} x^{2}\, dx &=& \ldots = \frac{\pi^{3}}{3} \\ \pi_{1} \pi_{2} = x \cdot x^{2} &\rightarrow& \int\limits_{0}^{\pi} x^{3}\, dx &=& \left. \frac{1}{4}x^{4} \right|_{0}^{\pi} = \frac{\pi^{4}}{4} \\ \pi_{2} \pi_{0} = \pi_{0} \pi_{2} &\rightarrow& \ldots &=& \frac{\pi^{3}}{3} \\ \pi_{2} \pi_{1} = \pi_{1} \pi_{2} &\rightarrow& \ldots &=& \frac{\pi^{4}}{4} \\ \pi_{2} \pi_{2} = x^{2} \cdot x^{2} &\rightarrow& \int\limits_{0}^{\pi} x^{4}\, dx &=& \left. \frac{1}{5}x^{5} \right|_{0}^{\pi} = \frac{\pi^{5}}{5} \\ \end{array}
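Every one of the nine entries reduces to \int_0^\pi x^n\, dx = \frac{\pi^{n+1}}{n+1}. As a quick sanity check (my own sketch, not part of the original derivation), we can confirm each closed form numerically with a hand-rolled composite Simpson's rule:

```python
import math

def simpson(f, a, b, n=10_000):
    """Composite Simpson's rule on [a, b]; n must be even."""
    h = (b - a) / n
    total = f(a) + f(b)
    total += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    total += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return total * h / 3

# Every entry of A is the integral of x**(i + j) over [0, pi],
# which has the closed form pi**(i + j + 1) / (i + j + 1).
for i in range(3):
    for j in range(3):
        p = i + j
        exact = math.pi ** (p + 1) / (p + 1)
        approx = simpson(lambda x: x ** p, 0.0, math.pi)
        assert math.isclose(exact, approx, rel_tol=1e-9), (i, j)
```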

Let us plug the results into Ac=b.

\left[\begin{array}{ccc} \pi & \frac{1}{2}\pi^{2} & \frac{1}{3}\pi^{3}\\ \frac{1}{2}\pi^{2} & \frac{1}{3}\pi^{3} & \frac{1}{4}\pi^{4}\\ \frac{1}{3}\pi^{3} & \frac{1}{4}\pi^{4} & \frac{1}{5}\pi^{5}\\ \end{array}\right] \left[\begin{array}{c} c_0 \\ c_1 \\ c_2 \\ \end{array}\right] = \left[\begin{array}{c} \int\limits_{x=0}^{\pi} \pi_0 f dx\\ \int\limits_{x=0}^{\pi} \pi_1 f dx\\ \int\limits_{x=0}^{\pi} \pi_2 f dx\\ \end{array}\right]

2.1 Integration by Parts

We are left with the three (3) integrals from b.

\begin{array}{lcl} \pi_{0} f = 1 \cdot \sin(x) &\rightarrow& \int\limits_{0}^{\pi} \sin(x)\, dx \\ \pi_{1} f = x \sin(x) &\rightarrow& \int\limits_{0}^{\pi} x \sin(x)\, dx \\ \pi_{2} f = x^{2} \sin(x) &\rightarrow& \int\limits_{0}^{\pi} x^{2} \sin(x)\, dx \\ \end{array}

The first integral can be evaluated directly from a known antiderivative:

\int\limits_{0}^{\pi} \sin(x)\, dx = \left. -\cos(x) \right|_{x=0}^{\pi} = -\left(-1 - 1\right) = -\left(-2\right) = 2

The remaining two integrals require integration by parts. We will use the DI Method.

\int\limits_{0}^{\pi} x \sin(x)\, dx

[Figure: DI-method steps for x \sin(x)]

\int\limits_{0}^{\pi} x \sin(x)\, dx = \left. \left(-x\cos(x) + \sin(x)\right) \right|_{x=0}^{\pi} = \left(-\pi(-1) + 0\right) - \left(-0(1) + 0\right) = \pi


\int\limits_{0}^{\pi} x^{2} \sin(x)\, dx

[Figure: DI-method steps for x^{2} \sin(x)]

\begin{array}{lcl} \int\limits_{0}^{\pi} x^{2} \sin(x)\, dx &=& \left. \left(-x^{2}\cos(x) + 2x\sin(x) + 2\cos(x)\right) \right|_{x=0}^{\pi}\\ &=& \left. \left(2x\sin(x) - \left(x^{2} - 2\right)\cos(x)\right) \right|_{x=0}^{\pi}\\ &=& \left(2\pi(0) - \left(\pi^{2} - 2\right)(-1)\right) - \left(0 - \left(0 - 2\right)(1)\right)\\ &=& \left(\pi^{2} - 2\right) + \left(-2\right)\\ &=& \pi^{2} - 4\\ \end{array}
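All three right-hand-side integrals can be double-checked numerically. The sketch below uses a hand-rolled composite Simpson's rule (my own helper, not from the text) and should reproduce 2, \pi, and \pi^{2} - 4:

```python
import math

def simpson(f, a, b, n=10_000):
    """Composite Simpson's rule on [a, b]; n must be even."""
    h = (b - a) / n
    total = f(a) + f(b)
    total += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    total += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return total * h / 3

b0 = simpson(math.sin, 0.0, math.pi)                       # expect 2
b1 = simpson(lambda x: x * math.sin(x), 0.0, math.pi)      # expect pi
b2 = simpson(lambda x: x * x * math.sin(x), 0.0, math.pi)  # expect pi**2 - 4
```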

3 Using the Twelve Integrals

We can now update b.

\left[\begin{array}{ccc} \pi & \frac{1}{2}\pi^{2} & \frac{1}{3}\pi^{3}\\ \frac{1}{2}\pi^{2} & \frac{1}{3}\pi^{3} & \frac{1}{4}\pi^{4}\\ \frac{1}{3}\pi^{3} & \frac{1}{4}\pi^{4} & \frac{1}{5}\pi^{5}\\ \end{array}\right] \left[\begin{array}{c} c_0 \\ c_1 \\ c_2 \\ \end{array}\right] = \left[\begin{array}{c} 2\\ \pi\\ \pi^{2} - 4\\ \end{array}\right]

It is time to construct the augmented A|b matrix!

\left[\begin{array}{ccc|c} \pi & \frac{1}{2}\pi^{2} & \frac{1}{3}\pi^{3} & 2\\ \frac{1}{2}\pi^{2} & \frac{1}{3}\pi^{3} & \frac{1}{4}\pi^{4} & \pi\\ \frac{1}{3}\pi^{3} & \frac{1}{4}\pi^{4} & \frac{1}{5}\pi^{5} & \pi^{2} - 4\\ \end{array}\right]

3.1 Solving A|b

Our first step is to scale each row so that its leading entry is one (i.e., divide row 1 by \pi, row 2 by \frac{1}{2}\pi^{2}, and row 3 by \frac{1}{3}\pi^{3})…

…which results in…

\left[\begin{array}{ccc|c} 1 & \frac{1}{2}\pi & \frac{1}{3}\pi^{2} & \frac{2}{\pi}\\ 1 & \frac{2}{3}\pi & \frac{1}{2}\pi^{2} & \frac{2}{\pi}\\ 1 & \frac{3}{4}\pi & \frac{3}{5}\pi^{2} & \frac{3}{\pi} - \frac{12}{\pi^{3}}\\ \end{array}\right]

The next step is elimination:

\left[\begin{array}{ccc|c} 1 & \frac{1}{2}\pi & \frac{1}{3}\pi^{2} & \frac{2}{\pi}\\ 0 & \frac{1}{6}\pi & \frac{1}{6}\pi^{2} & 0\\ 0 & \frac{1}{4}\pi & \frac{4}{15}\pi^{2} & \frac{1}{\pi} - \frac{12}{\pi^{3}}\\ \end{array}\right]

3.1.1 Second Iteration

Our first step is to scale rows 2 and 3 so that their leading entries are one (i.e., multiply row 2 by \frac{6}{\pi} and row 3 by \frac{4}{\pi})…

…which results in…

\left[\begin{array}{ccc|c} 1 & \frac{1}{2}\pi & \frac{1}{3}\pi^{2} & \frac{2}{\pi}\\ 0 & 1 & \pi & 0\\ 0 & 1 & \frac{16}{15}\pi & \frac{4}{\pi^{2}} - \frac{48}{\pi^{4}}\\ \end{array}\right]

The next step is (once again) elimination:

\left[\begin{array}{ccc|c} 1 & \frac{1}{2}\pi & \frac{1}{3}\pi^{2} & \frac{2}{\pi}\\ 0 & 1 & \pi & 0\\ 0 & 0 & \frac{1}{15}\pi & \frac{4}{\pi^{2}} - \frac{48}{\pi^{4}}\\ \end{array}\right]

3.1.2 Third Iteration

Our first (and only) step is to scale row 3 by \frac{15}{\pi}… which results in…

\left[\begin{array}{ccc|c} 1 & \frac{1}{2}\pi & \frac{1}{3}\pi^{2} & \frac{2}{\pi}\\ 0 & 1 & \pi & 0\\ 0 & 0 & 1 & \frac{60}{\pi^{3}} - \frac{720}{\pi^{5}}\\ \end{array}\right]

3.1.3 Time to Backsolve

Instead of backsolving within the matrix… let us take the augmented column (b) and label each entry with a subscript:

\begin{array}{lcl} b_0 &=& \frac{2}{\pi}\\ b_1 &=& 0\\ b_2 &=& \frac{60}{\pi^{3}} - \frac{720}{\pi^{5}}\\ \end{array}

That leads us to:

\begin{array}{lcl} b_0 - \frac{1}{3}\pi^{2} b_2 &=& \frac{2}{\pi} - \frac{1}{3}\pi^{2}\left(\frac{60}{\pi^{3}} - \frac{720}{\pi^{5}}\right)\\ &=& \frac{2}{\pi} - \frac{20}{\pi} + \frac{240}{\pi^{3}}\\ &=& -\frac{18}{\pi} + \frac{240}{\pi^{3}}\\ \end{array}

and

\begin{array}{lcl} b_1 - \pi b_2 &=& 0 - \pi\left(\frac{60}{\pi^{3}} - \frac{720}{\pi^{5}}\right)\\ &=& -\frac{60}{\pi^{2}} + \frac{720}{\pi^{4}}\\ \end{array}

Let us update the augmented matrix.

\left[\begin{array}{ccc|c} 1 & \frac{1}{2}\pi & 0 & -\frac{18}{\pi} + \frac{240}{\pi^{3}}\\ 0 & 1 & 0 & -\frac{60}{\pi^{2}} + \frac{720}{\pi^{4}}\\ 0 & 0 & 1 & \frac{60}{\pi^{3}} - \frac{720}{\pi^{5}}\\ \end{array}\right]

Now… for one last step…

\begin{array}{lcl} b_0 - \frac{1}{2}\pi b_1 &=& -\frac{18}{\pi} + \frac{240}{\pi^{3}} - \frac{1}{2}\pi\left(-\frac{60}{\pi^{2}} + \frac{720}{\pi^{4}}\right)\\ &=& -\frac{18}{\pi} + \frac{240}{\pi^{3}} + \frac{30}{\pi} - \frac{360}{\pi^{3}}\\ &=& \frac{12}{\pi} - \frac{120}{\pi^{3}}\\ \end{array}

4 The Approximation Function

After all that Calculus and Linear Algebra… we have… our approximation function!

\hat{\varphi} = \left(\frac{12}{\pi} - \frac{120}{\pi^{3}}\right) + \left(-\frac{60}{\pi^{2}} + \frac{720}{\pi^{4}}\right) x + \left(\frac{60}{\pi^{3}} - \frac{720}{\pi^{5}}\right) x^{2}
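The whole elimination can also be delegated to a few lines of code. The sketch below builds the 3-by-3 system from Section 2 and solves it with plain Gaussian elimination (`solve` is my own throwaway helper, not a library routine); the result should match the hand-derived coefficients:

```python
import math

pi = math.pi

# Normal-equations system assembled in Sections 1-3.
A = [[pi,          pi ** 2 / 2, pi ** 3 / 3],
     [pi ** 2 / 2, pi ** 3 / 3, pi ** 4 / 4],
     [pi ** 3 / 3, pi ** 4 / 4, pi ** 5 / 5]]
b = [2.0, pi, pi ** 2 - 4]

def solve(A, b):
    """Gaussian elimination with partial pivoting on plain lists."""
    n = len(b)
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]                  # pivot swap
        for r in range(k + 1, n):
            factor = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= factor * M[k][c]      # eliminate below the pivot
    x = [0.0] * n
    for k in reversed(range(n)):                 # backsolve
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

c = solve(A, b)
# Closed-form coefficients from the hand derivation above:
expected = [12 / pi - 120 / pi ** 3,
            -60 / pi ** 2 + 720 / pi ** 4,
            60 / pi ** 3 - 720 / pi ** 5]
```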

4.1 Remove the Constant Term?

Suppose that after all the derivation we questioned whether we needed c_0 (i.e., considered dropping the c_0 term).

We know that \hat{\varphi} is the best possible approximation function out of every possible degree-two polynomial of the form

p_2 = c_0 \pi_0 + c_1 \pi_1 + c_2 \pi_2

Therefore, we know that

\begin{array}{lcl} ||f - \hat{\varphi}|| &<& ||f - \left(\hat{\varphi} - c_0\right)||\\ ||f - \left(c_0 + c_1 x + c_2 x^{2}\right)|| &<& ||f - \left(c_1 x + c_2 x^{2}\right)||\\ \end{array}

4.2 The Wrong Approach

We might be inclined to frame dropping c_0 as

If c_0 is small, we should condition on it. If we have a condition number that is less than or equal to 1…

However, this only works if c_0 \neq 0. For the sake of discussion… let us continue this incorrect analysis.

Let us start with notation. The approximation function \hat{\varphi} needs to be replaced with \varphi_n. If we are dropping c_0, we no longer have the best possible approximation!

Let us condition on c_0… which leads us to \varphi_n(c_0).

\left(\text{cond}_{1} \varphi_n\right)(c_0) = \left|\frac{c_0 \varphi^{\prime}}{\varphi}\right|

We need to compute \varphi^{\prime} = \frac{\partial \varphi}{\partial c_0}:

\begin{array}{lcl} \varphi^{\prime} &=& \frac{\partial}{\partial c_0}\left(c_0 + c_1 x + c_2 x^{2}\right)\\ &=& \frac{\partial}{\partial c_0}\left(c_0\right) + \frac{\partial}{\partial c_0}\left(c_1 x\right) + \frac{\partial}{\partial c_0}\left(c_2 x^{2}\right)\\ &=& 1 + 0 + 0\\ &=& 1\\ \end{array}

We can now use \varphi and \varphi^{\prime}.

\left(\text{cond}_{1} \varphi_n\right)(c_0) = \left|\frac{c_0 \varphi^{\prime}}{\varphi}\right| = \left|\frac{c_0 (1)}{c_0 + c_1 x + c_2 x^{2}}\right|

We could argue that…

\left(\text{cond}_{1} \varphi_n\right)(c_0) = \left|\frac{c_0 (1)}{c_0 + c_1 x + c_2 x^{2}}\right| \leq \left|\frac{c_0}{c_0}\right| \leq 1

the problem is well conditioned on c_0.

The next step would be to compute c_0:

\frac{12}{\pi} - \frac{120}{\pi^3} \approx -0.05

If we erroneously consider -0.05 to be “small enough”… that would lead to:

\text{abs error} \approx \left|\hat{\varphi} - \left(\hat{\varphi} - 0.05 \right)\right| = 0.05

and

\text{rel error} \approx \left|\frac{\hat{\varphi} - \left(\hat{\varphi} - 0.05 \right)}{\hat{\varphi}}\right| = \left|\frac{\hat{\varphi} - \hat{\varphi} + 0.05 }{\hat{\varphi}}\right| = \left|\frac{0.05}{\hat{\varphi}}\right|

for x \in [0, \pi]

4.3 The Correct Question

However, none of that analysis is justified. We are really interested in minimizing arithmetic error propagation. How many arithmetic operations are there? Can we reduce the number of arithmetic operations?

Let us count the number of arithmetic operations.

\hat{\varphi} = \left(\frac{12}{\pi} - \frac{120}{\pi^3} \right) + \left(-\frac{60}{\pi^2} + \frac{720}{\pi^4} \right) x + \left(\frac{60}{\pi^{3}} - \frac{720}{\pi^{5}}\right) x^2


\hat{\varphi} = \left(\frac{12}{\pi} - \frac{120}{\pi^3} \right) + \left(\left(-\frac{60}{\pi^2} + \frac{720}{\pi^4} \right) + \left(\frac{60}{\pi^{3}} - \frac{720}{\pi^{5}}\right)x \right)x


or, if we treat \frac{1}{\pi} as \frac{(\ldots)}{\pi} (i.e., divide by \pi):

\hat{\varphi} = \frac{1}{\pi} \left[\left(12 - \frac{120}{\pi^2} \right) + \left(\left(-\frac{60}{\pi} + \frac{720}{\pi^3} \right) + \left(\frac{60}{\pi^{2}} - \frac{720}{\pi^{4}}\right)x \right)x\right]
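To convince ourselves that the three rewrites are the same function (differing only in operation count), here is a small sketch that evaluates all of them; `phi_naive`, `phi_horner`, and `phi_factored` are illustrative names of my own choosing:

```python
import math

pi = math.pi
c0 = 12 / pi - 120 / pi ** 3
c1 = -60 / pi ** 2 + 720 / pi ** 4
c2 = 60 / pi ** 3 - 720 / pi ** 5

def phi_naive(x):
    """Direct evaluation of the first form."""
    return c0 + c1 * x + c2 * x ** 2

def phi_horner(x):
    """Nested (Horner) form: two multiplies and two adds."""
    return c0 + (c1 + c2 * x) * x

def phi_factored(x):
    """Horner form with 1/pi factored out of every coefficient."""
    inner = (-60 / pi + 720 / pi ** 3) + (60 / pi ** 2 - 720 / pi ** 4) * x
    return ((12 - 120 / pi ** 2) + inner * x) / pi
```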

5 Changing the Basis Functions

If we truly intend to forgo c_0, we would change our approximation function from:

 

\hat{\varphi} = c_0 \pi_0 + c_1 \pi_1 + c_2 \pi_2

to

\hat{\varphi} = c_1^{\prime} \pi_1 + c_2^{\prime} \pi_2

Note that \pi_1 = x and \pi_2 = x^2 in both functions.

We would then have a slightly different Ac = b

\left[\begin{array}{cc} \int\limits_{x=0}^{\pi} \pi_{1} \pi_{1}dx & \int\limits_{x=0}^{\pi} \pi_{1} \pi_{2} dx\\ \int\limits_{x=0}^{\pi} \pi_{2} \pi_{1}dx & \int\limits_{x=0}^{\pi} \pi_{2} \pi_{2} dx\\ \end{array}\right] \left[\begin{array}{c} c_1^{\prime} \\ c_2^{\prime} \\ \end{array}\right] = \left[\begin{array}{c} \int\limits_{x=0}^{\pi} \pi_1 f dx\\ \int\limits_{x=0}^{\pi} \pi_2 f dx\\ \end{array}\right]

Since… \pi_1 and \pi_2 are the same as in the original basis… we can reuse a few of the previous integrals!

\begin{array}{lclcl} \pi_{1} \pi_{1} &\rightarrow& \int\limits_{0}^{\pi} x^2 dx &=& \frac{\pi^3}{3} \\ \pi_{1} \pi_{2} &\rightarrow& \int\limits_{0}^{\pi} x^3 dx &=& \frac{\pi^4}{4} \\ \pi_{2} \pi_{1} &\rightarrow& \ldots &=& \frac{\pi^4}{4} \\ \pi_{2} \pi_{2} &\rightarrow& \int\limits_{0}^{\pi} x^4 dx &=& \frac{\pi^5}{5} \\ \end{array}

and

\begin{array}{lcl} \int\limits_{0}^{\pi} x \sin(x) dx &=& \pi \\ \int\limits_{0}^{\pi} x^{2} \sin(x) dx &=& \pi^{2} - 4 \\ \end{array}

Which leads to a 2-by-2 matrix (plus the augmented column):

\left[\begin{array}{cc|c} \frac{1}{3} \pi^{3} & \frac{1}{4} \pi^{4} & \pi \\ \frac{1}{4} \pi^{4} & \frac{1}{5} \pi^{5} & \pi^2 - 4 \\ \end{array}\right]

5.1 Solving the New Augmented Matrix

Note that we will use one-based indexing for this matrix.

Our first step is to:

\left[\begin{array}{cc|c} 1 & \frac{3}{4} \pi & \frac{3}{\pi^2} \\ 1 & \frac{4}{5} \pi & \frac{4}{\pi^2} - \frac{16}{\pi^4} \\ \end{array}\right]

The next step is elimination.

\left[\begin{array}{cc|c} 1 & \frac{3}{4} \pi & \frac{3}{\pi^2} \\ 0 & \frac{1}{20} \pi & \frac{1}{\pi^2} - \frac{16}{\pi^4} \\ \end{array}\right]

The last step (before backsolving) is to scale row 2 by \frac{20}{\pi}

\left[\begin{array}{cc|c} 1 & \frac{3}{4} \pi & \frac{3}{\pi^2} \\ 0 & 1 & \frac{20}{\pi^3} - \frac{320}{\pi^5} \\ \end{array}\right]

Now to backsolve…

\begin{array}{lcl} b_{1} - \frac{3}{4}\pi b_{2} &=& \frac{3}{\pi^2} - \frac{3}{4}\pi \left(\frac{20}{\pi^3} - \frac{320}{\pi^5}\right)\\ &=& \frac{3}{\pi^2} - \frac{15}{\pi^2} + \frac{240}{\pi^4}\\ &=& -\frac{12}{\pi^2} + \frac{240}{\pi^4}\\ \end{array}

That leaves us with…

\left[\begin{array}{cc|c} 1 & 0 & -\frac{12}{\pi^2} + \frac{240}{\pi^4} \\ 0 & 1 & \frac{20}{\pi^3} - \frac{320}{\pi^5} \\ \end{array}\right]

5.2 The Alternate Approximation Function

The alternate approximation function is…

\hat{\varphi}^{\prime} = \left(-\frac{12}{\pi^2} + \frac{240}{\pi^4}\right) x + \left(\frac{20}{\pi^3} - \frac{320}{\pi^5}\right) x^2
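For a 2-by-2 system, Cramer's rule makes a convenient cross-check. The sketch below (my own, not part of the derivation) should reproduce c_1^{\prime} and c_2^{\prime}:

```python
import math

pi = math.pi

# 2-by-2 normal equations for the basis {x, x**2} (Section 5).
a11, a12 = pi ** 3 / 3, pi ** 4 / 4
a21, a22 = pi ** 4 / 4, pi ** 5 / 5
rhs1, rhs2 = pi, pi ** 2 - 4

det = a11 * a22 - a12 * a21
c1p = (rhs1 * a22 - rhs2 * a12) / det  # Cramer's rule: first column replaced by b
c2p = (a11 * rhs2 - a21 * rhs1) / det  # Cramer's rule: second column replaced by b
```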

5.3 Counting… Again

Let us count the number of arithmetic operations.

\hat{\varphi}^{\prime} = \left(-\frac{12}{\pi^2} + \frac{240}{\pi^4}\right) x + \left(\frac{20}{\pi^3} - \frac{320}{\pi^5}\right) x^2


\hat{\varphi}^{\prime} = \frac{x}{\pi^2}\left(\left(-12 + \frac{240}{\pi^2}\right) + \left(\frac{20}{\pi} - \frac{320}{\pi^3}\right) x\right)

6 Which Approximation Function?

Which approximation function is the best? That can be answered by evaluating…

\begin{array}{lclcl} ||f - \hat{\varphi}|| &<& ||f - \left(\hat{\varphi} - c_0 \pi_0 \right)|| &\stackrel{\text{?}}{=}& ||f - \hat{\varphi}^{\prime}||\\ ||f - \left(c_0 \pi_0 + c_1 \pi_1 + c_2 \pi_2 \right)|| &<& ||f - \left(c_1 \pi_1 + c_2 \pi_2 \right)|| &\stackrel{\text{?}}{=}& ||f - \left(c_1^{\prime} \pi_1 + c_2^{\prime} \pi_2 \right)||\\ ||f - \left(c_0 + c_1 x + c_2 x^2 \right)|| &<& ||f - \left(c_1 x + c_2 x^2 \right)|| &\stackrel{\text{?}}{=}& ||f - \left(c_1^{\prime} x + c_2^{\prime} x^2\right)||\\ \end{array}

But… I will leave that as an exercise for the reader.
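For readers who want a numerical starting point for that exercise, the sketch below approximates each L^{2} norm over [0, \pi] with Simpson's rule (the helpers are my own); drawing the conclusion from the printed values is still left to you:

```python
import math

pi = math.pi

def simpson(f, a, b, n=10_000):
    """Composite Simpson's rule on [a, b]; n must be even."""
    h = (b - a) / n
    total = f(a) + f(b)
    total += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    total += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return total * h / 3

def l2_norm(g):
    """Continuous L2 norm of g over [0, pi]."""
    return math.sqrt(simpson(lambda x: g(x) ** 2, 0.0, pi))

# Coefficients derived in Sections 3 and 5.
c0 = 12 / pi - 120 / pi ** 3
c1 = -60 / pi ** 2 + 720 / pi ** 4
c2 = 60 / pi ** 3 - 720 / pi ** 5
c1p = -12 / pi ** 2 + 240 / pi ** 4
c2p = 20 / pi ** 3 - 320 / pi ** 5

err_full    = l2_norm(lambda x: math.sin(x) - (c0 + c1 * x + c2 * x ** 2))
err_dropped = l2_norm(lambda x: math.sin(x) - (c1 * x + c2 * x ** 2))
err_refit   = l2_norm(lambda x: math.sin(x) - (c1p * x + c2p * x ** 2))

print(err_full, err_dropped, err_refit)
```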