Arithmetic Error

Thomas J. Kennedy

Contents:

1 Multiplication
2 Division
3 Addition
4 Subtraction

How does finite precision impact arithmetic operations? For this discussion we will assume that every value is stored with a small relative error introduced by chopping it to a finite number of digits.

There are four (4) basic arithmetic operations: addition, subtraction, multiplication, and division.

As you work through these notes, think of $\epsilon$ as relative error, $\epsilon = \frac{|x - x^{*}|}{|x|}$.
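These notes do not include any code, but the definition is easy to check numerically. The sketch below is my own illustration (not part of the original notes): it uses NumPy's `float32` as a stand-in for a finite-precision representation (IEEE rounding rather than true chopping) and treats the double-precision value as "exact"; the helper name `relative_error` is hypothetical.

```python
import numpy as np

def relative_error(exact: float, approx: float) -> float:
    """epsilon = |x - x*| / |x| for a nonzero exact value x."""
    return abs(exact - approx) / abs(exact)

x = 1.0 / 3.0                  # treat the double-precision value as "exact"
x_star = float(np.float32(x))  # x*, the finite-precision (single-precision) copy

print(relative_error(x, x_star))  # on the order of 1e-8 for float32
```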

Believe it or not, the errors for multiplication and division are less tedious to derive than the errors for addition and subtraction!

1 Multiplication

Suppose we were multiplying two numbers:

$$ x * y = z $$

To denote finite precision, we need to use the star/chop notation:

$$ x^{*} * y^{*} = z^{*} $$

where $x^{*} = x(1 + \epsilon_{x})$ and $y^{*} = y(1 + \epsilon_{y})$ are the stored (finite-precision) values, and $z^{*}$ is the computed result.

Let us start by expanding all the terms (and cleaning everything up).

$$\begin{array}{rcl} z^{*} &=& x^{*} * y^{*} \\ &=& x^{*} y^{*} \\ &=& x (1 + \epsilon_{x}) y (1 + \epsilon_{y}) \\ &=& xy (1 + \epsilon_{x})(1 + \epsilon_{y}) \\ &=& xy (1 + \epsilon_{x} + \epsilon_{y} + \epsilon_{x}\epsilon_{y}) \\ \end{array}$$

We can drop the last epsilon term, $\epsilon_{x}\epsilon_{y}$, because it is a higher-order term. Any epsilon term of degree two or higher (i.e., raised to a power of 2 or higher), including any product of two or more epsilon terms, is treated as zero. For example, if $\epsilon_{x} \approx \epsilon_{y} \approx 10^{-7}$, then $\epsilon_{x}\epsilon_{y} \approx 10^{-14}$, which is negligible by comparison.

After dropping that last epsilon term…

$$\begin{array}{rcl} z^{*} &=& xy (1 + \epsilon_{x} + \epsilon_{y} + \epsilon_{x}\epsilon_{y}) \\ &\approx& xy (1 + \epsilon_{x} + \epsilon_{y}) \\ \end{array}$$

We know how to compute the absolute error.

$$\begin{array}{rcl} | z^{*} - z | &=& | xy (1 + \epsilon_{x} + \epsilon_{y}) - xy | \\ &=& | xy + xy (\epsilon_{x} + \epsilon_{y}) - xy | \\ &=& | xy - xy + xy (\epsilon_{x} + \epsilon_{y}) | \\ &=& | xy (\epsilon_{x} + \epsilon_{y}) | \\ \end{array}$$

Of course, that only tells us part of the story. Think about what happens in three cases: the product $xy$ is much larger than one, close to one, or much smaller than one.

In which case (or cases) is that absolute error, on its own, acceptable? The same $\epsilon_{x}$ and $\epsilon_{y}$ lead to very different absolute errors depending on the magnitude of the product. We need the full picture (i.e., relative error).

The numerator is, by definition, the absolute error (which lets us reuse our previous result).

$$\begin{array}{rcl} \left|\frac{z^{*} - z}{z}\right| &=& \left| \frac{xy (\epsilon_{x} + \epsilon_{y})}{xy} \right| \\ &=& | \epsilon_{x} + \epsilon_{y} | \\ \end{array}$$

The error for multiplication depends on the errors in representing the numbers, not on the numbers themselves.
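As a quick numerical sanity check (my own sketch, not part of the original notes): store two values as `float32`, but carry out the multiplication itself in double precision so that only the operands' representation errors matter, which is exactly what the derivation above models. The specific values are chosen only for illustration.

```python
import numpy as np

x, y = 1.0 / 3.0, 2.0 / 7.0          # "exact" values (double precision)
x_star = float(np.float32(x))        # finite-precision representations,
y_star = float(np.float32(y))        # widened back to double

eps_x = (x_star - x) / x             # signed representation errors
eps_y = (y_star - y) / y

# The product itself is computed in double precision, so the observed
# relative error comes (almost) entirely from eps_x and eps_y.
observed  = abs(x_star * y_star - x * y) / abs(x * y)
predicted = abs(eps_x + eps_y)

print(observed, predicted)           # the two values should agree closely
```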

Before moving on to the next section, think about repeated multiplication. Can you show that the error for repeated multiplication is $|\epsilon_{1} + \epsilon_{2} + \ldots + \epsilon_{n}|$?
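If you want to check your answer numerically before (or after) deriving it, here is a hedged sketch along the same lines: the products are again carried out in double precision, so only the representation errors of the $n$ stored factors contribute. The random values and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)            # arbitrary seed, for repeatability
values = rng.uniform(0.1, 10.0, size=8)    # n = 8 "exact" factors (double precision)

stored = values.astype(np.float32).astype(np.float64)  # finite-precision copies
eps    = (stored - values) / values                    # signed representation errors

observed  = abs(np.prod(stored) - np.prod(values)) / abs(np.prod(values))
predicted = abs(eps.sum())                 # |eps_1 + eps_2 + ... + eps_n|

print(observed, predicted)                 # the two values should agree closely
```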

2 Division

We can tackle division using the same tricks from the multiplication examples.

Let us start with absolute error.

$$\begin{array}{rcl} \left| \frac{x^{*}}{y^{*}} - \frac{x}{y} \right| &=& \left| \frac{x(1 + \epsilon_{x})}{y (1 + \epsilon_{y})} - \frac{x}{y} \right| \\ &=& \left| \frac{x}{y} \left( \frac{1 + \epsilon_{x}}{1 + \epsilon_{y}} - 1 \right) \right| \end{array}$$

Relative error will end up in a much “nicer” form.

$$\begin{array}{rcl} \left| \frac{\frac{x^{*}}{y^{*}} - \frac{x}{y}}{\frac{x}{y}} \right| &=& \left| \left(\frac{x^{*}}{y^{*}} - \frac{x}{y} \right) \frac{y}{x} \right| \\ &=& \left| \frac{x}{y} \left( \frac{1 + \epsilon_{x}}{1 + \epsilon_{y}} - 1 \right) \frac{y}{x} \right| \\ &=& \left| \frac{1 + \epsilon_{x}}{1 + \epsilon_{y}} - 1 \right| \\ \end{array}$$

We are not quite done yet. Let us focus on $\frac{1 + \epsilon_{x}}{1 + \epsilon_{y}}$. We can use a standard math trick: multiply the numerator and denominator by $1 - \epsilon_{y}$ (the conjugate of the denominator).

$$\begin{array}{rcl} \frac{1 + \epsilon_{x}}{1 + \epsilon_{y}} \cdot \frac{1 - \epsilon_{y}}{1 - \epsilon_{y}} &=& \frac{1 + \epsilon_{x} - \epsilon_{y} - \epsilon_{x}\epsilon_{y}}{1 - \epsilon_{y}^{2}} \\ &\approx& \frac{1 + \epsilon_{x} - \epsilon_{y}}{1} \\ &\approx& 1 + \epsilon_{x} - \epsilon_{y} \\ \end{array}$$

Now, we can continue the relative error derivation.

$$\begin{array}{rcl} \left| \frac{\frac{x^{*}}{y^{*}} - \frac{x}{y}}{\frac{x}{y}} \right| &=& \left| \frac{1 + \epsilon_{x}}{1 + \epsilon_{y}} - 1 \right| \\ &\approx& \left| 1 + \epsilon_{x} - \epsilon_{y} - 1 \right | \\ &\approx& \left| \epsilon_{x} - \epsilon_{y} \right | \\ \end{array}$$

The relative error is very similar to the relative error for multiplication; the only difference is the sign in front of $\epsilon_{y}$.
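The same kind of numerical check works here (again, my own sketch, with the quotient computed in double precision so that only the representation errors of $x$ and $y$ contribute).

```python
import numpy as np

x, y = 1.0 / 3.0, 2.0 / 7.0          # "exact" values (double precision)
x_star = float(np.float32(x))        # finite-precision representations
y_star = float(np.float32(y))

eps_x = (x_star - x) / x             # signed representation errors
eps_y = (y_star - y) / y

observed  = abs(x_star / y_star - x / y) / abs(x / y)
predicted = abs(eps_x - eps_y)       # the result derived above

print(observed, predicted)           # the two values should agree closely
```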

How about repeated division? Give it a try (as a practice problem).

3 Addition

It feels odd to tackle addition after multiplication. Let us jump straight to relative error this time around.

$$\begin{array}{rcl} \left| \frac{(x^{*} + y^{*}) - (x + y)}{(x+y)} \right| &=& \left| \frac{(x(1 + \epsilon_{x}) + y (1 + \epsilon_{y})) - (x + y)}{(x+y)} \right| \\ &=& \left| \frac{(x + x\epsilon_{x} + y + y \epsilon_{y}) - (x + y)}{(x+y)} \right| \\ &=& \left| \frac{(x + y) + (x\epsilon_{x} + y \epsilon_{y}) - (x + y)}{(x+y)} \right| \\ &=& \left| \frac{x\epsilon_{x} + y \epsilon_{y}}{(x+y)} \right| \\ &=& \left| \frac{x}{x+y} \epsilon_{x} + \frac{y}{x+y} \epsilon_{y} \right| \\ \end{array}$$

Unlike multiplication, the error depends on $x$ and $y$ themselves, not just on $\epsilon_{x}$ and $\epsilon_{y}$. How would you continue this evaluation?
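As a starting point, here is a matching sketch for addition (same assumptions as before: `float32` representations, with the sum itself computed in double precision). It also illustrates that, for same-sign operands, the weights $\frac{x}{x+y}$ and $\frac{y}{x+y}$ lie between 0 and 1, so the sum's relative error cannot exceed the larger of the two representation errors.

```python
import numpy as np

x, y = 1.0 / 3.0, 2.0 / 7.0          # "exact" values, both positive
x_star = float(np.float32(x))        # finite-precision representations
y_star = float(np.float32(y))

eps_x = (x_star - x) / x             # signed representation errors
eps_y = (y_star - y) / y

observed  = abs((x_star + y_star) - (x + y)) / abs(x + y)
predicted = abs(x / (x + y) * eps_x + y / (x + y) * eps_y)

# Because x and y are both positive, the weights are between 0 and 1,
# so predicted (and observed) is at most max(|eps_x|, |eps_y|).
print(observed, predicted, max(abs(eps_x), abs(eps_y)))
```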

4 Subtraction

Our subtraction derivation will mirror addition… with a few flipped signs.

$$\begin{array}{rcl} \left| \frac{(x^{*} - y^{*}) - (x - y)}{(x-y)} \right| &=& \left| \frac{(x(1 + \epsilon_{x}) - y (1 + \epsilon_{y})) - (x - y)}{(x-y)} \right| \\ &=& \left| \frac{(x + x\epsilon_{x} - y - y \epsilon_{y}) - (x - y)}{(x-y)} \right| \\ &=& \left| \frac{(x - y) + x\epsilon_{x} - y \epsilon_{y} - (x - y)}{(x-y)} \right| \\ &=& \left| \frac{(x - y) - (x - y) + x\epsilon_{x} - y \epsilon_{y}}{(x-y)} \right| \\ &=& \left| \frac{x\epsilon_{x} - y \epsilon_{y}}{(x-y)} \right| \\ &=& \left| \frac{x}{x-y} \epsilon_{x} - \frac{y}{x-y} \epsilon_{y} \right| \\ \end{array}$$

Once again, the error depends on $x$ and $y$ themselves. How would you continue this evaluation? Pay particular attention to what happens when $x$ and $y$ are nearly equal.
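One way to continue that evaluation is to look at what happens when the denominator $x - y$ is small. The sketch below (same assumptions as the earlier ones, with values chosen only for illustration) shows the relative error of the difference growing far beyond the roughly $10^{-8}$ representation errors of the operands, a phenomenon usually called catastrophic cancellation.

```python
import numpy as np

# Two nearby values: x - y is tiny compared to x and y themselves.
x = 1.0 / 3.0
y = 1.0 / 3.0 - 1.0e-4

x_star = float(np.float32(x))        # finite-precision representations
y_star = float(np.float32(y))

eps_x = (x_star - x) / x             # signed representation errors (~1e-8)
eps_y = (y_star - y) / y

observed  = abs((x_star - y_star) - (x - y)) / abs(x - y)
predicted = abs(x / (x - y) * eps_x - y / (x - y) * eps_y)

# The weights x/(x - y) and y/(x - y) are in the thousands here, so the
# relative error of the difference can be several orders of magnitude
# larger than eps_x and eps_y: catastrophic cancellation.
print(eps_x, eps_y, observed, predicted)
```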