Newton's Method - Algorithm 3.2

Thomas J. Kennedy

Contents:

1 Input
2 Formal Pseudocode
3 Adapting the Pseudocode
4 Complexity Analysis
4.1 Spatial
4.2 Temporal

1 Input

We are given, as input:

$$ x_0, x_1, x_2, \ldots, x_n $$

and

$$ f_0, f_1, f_2, \ldots, f_n $$

where $f_k = f(x_k)$.

This (similar to Algorithm 3.1) allows us to write the first two columns of the divided difference table.

$$ \begin{array}{cc} x_0 & f[x_0] \\ x_1 & f[x_1] \\ x_2 & f[x_2] \\ \vdots & \vdots \\ x_n & f[x_n] \\ \end{array} $$
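
As a purely illustrative example (this data is not part of the original input), suppose $f(x) = x^2$ is sampled at $x = 0, 1, 2$. The first two columns are then

$$ \begin{array}{cc} x_k & f[x_k] \\ \hline 0 & 0 \\ 1 & 1 \\ 2 & 4 \\ \end{array} $$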

However, there is a hidden assumption that the individual reading the code knows to write the setup code for the $d_i$ values.

Example 1: Setup

$$ \begin{array}{l} \text{for } i = 0, 1, 2, \ldots, n \text{ do} & \\ \phantom{4}\phantom{4} d_i := f(x_i) & \\ \text{end for}\\ \end{array} $$
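
In Python (the first language adapted below), this setup amounts to copying the function values into the $d$ array. Here is a minimal sketch; the helper name setup_d_values and its parameters are illustrative rather than part of the original pseudocode:

def setup_d_values(f, x_values):
    # Illustrative sketch: copy f(x_i) into d_i for i = 0, 1, ..., n
    return [f(x) for x in x_values]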

2 Formal Pseudocode

The actual pseudocode is almost identical to Algorithm 3.1.

$$ \begin{array}{l} \text{for } k = 1, 2, 3, \ldots, n \text{ do} & \\ \phantom{4}\phantom{4} \text{for } i = 0,1 , \ldots, (n-k) \text{ do} & \\ \phantom{4}\phantom{4}\phantom{4}\phantom{4} d_{i} = \frac{ d_{i+1} - d_{i} }{x_{i+k} - x_{i}} & \\ \phantom{4}\phantom{4} \text{end for}\\ \text{end for}\\ \end{array} $$
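
To make the in-place updates concrete, take the purely illustrative data from above ($f(x) = x^2$ sampled at $x = 0, 1, 2$, so $n = 2$ and the $d_i$ start out as $0, 1, 4$). Each pass of the outer loop overwrites the entries as follows:

$$ \begin{array}{lccc} & d_0 & d_1 & d_2 \\ \hline \text{initially} & 0 & 1 & 4 \\ \text{after } k = 1 & 1 & 3 & 4 \\ \text{after } k = 2 & 1 & 3 & 4 \\ \end{array} $$

After the final pass, $d_i$ holds the divided difference $f[x_i, x_{i+1}, \ldots, x_n]$; in particular, $d_0$ contains the highest-order divided difference.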

3 Adapting the Pseudocode

We are now ready to (as before) adapt this pseudocode to our language of choice. Let us start with Python.

Example 2: Python
def alg_32(x_values, d_values):
    n = len(x_values) - 1

    for k in range(1, n + 1):
        for i in range(0, n - k + 1):
            # Overwrite d_values[i] with the next higher-order divided difference
            d_values[i] = (d_values[i + 1] - d_values[i]) / (x_values[i + k] - x_values[i])
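
As a quick sanity check, the function can be exercised with the illustrative data from earlier ($f(x) = x^2$ at $x = 0, 1, 2$); this usage sketch assumes the list-based adaptation above:

x_values = [0.0, 1.0, 2.0]
d_values = [0.0, 1.0, 4.0]  # f(x) = x**2 evaluated at each x_k (illustrative data)

alg_32(x_values, d_values)

print(d_values)  # [1.0, 3.0, 4.0] -- d_values[0] now holds f[x_0, x_1, x_2]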

Rust is a bit nicer since the ..= operator allows us to include the upper bound when writing a loop.

Example 3: Rust
fn alg_32(x_values: &[f64], d_values: &mut [f64]) {
    let n = x_values.len() - 1;

    for k in 1..=n {
        for i in 0..=(n - k) {
            // Overwrite d_values[i] with the next higher-order divided difference
            d_values[i] = (d_values[i + 1] - d_values[i]) / (x_values[i + k] - x_values[i]);
        }
    }
}

C++ and Java are essentially identical (assuming arrays are used in both).

Example 4: C++
void alg_32(const double* x_values, double* d_values, const int n)
{
    for (int k = 1; k <= n; ++k) {
        for (int i = 0; i <= n - k; ++i) {
            // Overwrite d_values[i] with the next higher-order divided difference
            d_values[i] = (d_values[i + 1] - d_values[i]) / (x_values[i + k] - x_values[i]);
        }
    }
}

Example 5: Java
void alg_32(final double[] x_values, final double[] d_values)
{
    final int n = x_values.length - 1;

    for (int k = 1; k <= n; ++k) {
        for (int i = 0; i <= n - k; ++i) {
            // Overwrite d_values[i] with the next higher-order divided difference
            d_values[i] = (d_values[i + 1] - d_values[i]) / (x_values[i + k] - x_values[i]);
        }
    }
}

4 Complexity Analysis

Of course… no discussion of algorithms would be complete without discussing performance. In this case… we are interested in tail behavior (i.e., performance for a large number of points).

Note that complexity analysis is only part of the picture. Considerations such as cache performance, compiler optimizations, data dependencies, and vectorization (e.g., SIMD) cannot be ignored.

4.1 Spatial

If we examine the data used by the algorithm (i.e., input, intermediary values, and output)…

However, this time everything except the top entry in each row is eventually overwritten. We end up with

$$ n + n + n $$

which simplifies to

$$ O(3n) $$

which then simplifies to

$$ O(n) $$

…even if we decided to use a fourth array to avoid dealing with array index trickery.

4.2 Temporal

Analyzing temporal complexity requires looking at the two loops as one unit…

$$ \begin{array}{ccc} k & \text{Inner Loop} & \text{Total Iterations} \\ \hline 1 & 0, 1, 2, \ldots, (n-1) & n \\ 2 & 0, 1, 2, \ldots, (n-2) & n - 1 \\ 3 & 0, 1, 2, \ldots, (n-3) & n - 2 \\ \vdots & \vdots & \vdots \\ n-1 & 0, \ldots, n-(n-1) & 2 \\ n & 0 & 1 \\ \end{array} $$

which leads to… $O(n^2)$ by way of $\frac{n(n+1)}{2}$. This is the same process and result as in Algorithm 3.1.
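
For completeness, the Total Iterations column sums to an arithmetic series:

$$ \sum_{k=1}^{n} (n - k + 1) = n + (n - 1) + \cdots + 2 + 1 = \frac{n(n+1)}{2} $$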