Computer simulations are valuable tools for understanding the behaviour of complicated natural systems. They make predictions and help us form hypotheses, without the need for expensive or impossible experiments. Running these models on a computer, however, means accepting a small amount of imprecision in our results. This imprecision comes from sources such as rounding errors in floating-point arithmetic, truncation errors from approximate numerical methods like Euler integration, and even errors in input data from measuring instruments.

Although individually small, these errors accumulate over time and often influence the overall behaviour of the software, sometimes with dramatic effect. In a famous example, unchecked rounding error caused havoc at the Vancouver Stock Exchange (Fig. 1). According to the Wall Street Journal and the Toronto Star, the gradual accumulation of rounding errors between January 1982 and November 1983 caused the exchange's index to lose around 25 points per month. Another, deadlier rounding error disaster involved the failure of a Patriot surface-to-air missile defense system (Fig. 2), which allowed a Scud missile to pass through intact, resulting in the deaths of 28 military personnel.

Such occurrences have spurred a lot of interest in analysing how various error sources propagate through computer programs. In our case, we need to know how these errors change the behaviour of our simulations, so that we can determine how reliable their predictions are, and whether our variables even lie within acceptable ranges. So how do we bound these small cumulative errors in computer simulations? This post explores two methods: interval arithmetic and affine arithmetic. Time to put your maths hat on.

Method One: Interval Arithmetic

One way to bound simulation error, called interval arithmetic (IA), involves replacing the simulation’s state variables with interval-type variables. Such an interval is represented by two real values, a lower and an upper bound, within which the exact arithmetic solution of the computation must reside:

$x \in \hat x = [\underline x, \overline x]$

In higher-dimensional systems, these n intervals form an n-dimensional bounding box, representing the region of all possible simulation states. We must also modify the arithmetic operations to handle this new operand format. The basic addition, subtraction, multiplication and division operations are listed below:

$\hat x + \hat y = [\underline x + \underline y, \overline x + \overline y] \\ \hat x - \hat y = [\underline x - \overline y, \overline x - \underline y] \\ \hat x \times \hat y = \left [min(\underline x \underline y, \underline x \overline y, \overline x \underline y, \overline x \overline y), max(\underline x \underline y, \underline x \overline y, \overline x \underline y, \overline x \overline y) \right] \\ \hat x \div \hat y = \left [min({\underline x \over \underline y}, {\underline x \over \overline y}, {\overline x \over \underline y}, {\overline x \over \overline y}), max({\underline x \over \underline y}, {\underline x \over \overline y}, {\overline x \over \underline y}, {\overline x \over \overline y}) \right], 0 \notin \hat y$

As an example of the interval subtraction operation given above, consider the two intervals:

$\hat x = [4.2, 5.1] \\ \hat y = [8.1, 11.2]$

For subtraction, to ensure that the lower bound of the result is the lowest possible value and the upper bound the highest, we must subtract the upper bound of y from the lower bound of x, and vice versa. We have:

$\hat x - \hat y = [(4.2 - 11.2), (5.1 - 8.1)] = [-7, -3]$
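The four basic operations above can be sketched in a few lines of code. This is a minimal illustration using plain Python floats; the `Interval` class and all names here are invented for this post, and a real implementation would also apply the directed rounding discussed below.

```python
# Minimal interval arithmetic sketch (illustrative only; no directed rounding).

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Lower bounds add to the lowest sum, upper bounds to the highest.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # Subtract the upper bound of the other interval from our lower
        # bound, and vice versa, as in the worked example above.
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __truediv__(self, other):
        if other.lo <= 0.0 <= other.hi:
            raise ZeroDivisionError("divisor interval contains zero")
        quotients = [self.lo / other.lo, self.lo / other.hi,
                     self.hi / other.lo, self.hi / other.hi]
        return Interval(min(quotients), max(quotients))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# The worked subtraction example from the text:
x = Interval(4.2, 5.1)
y = Interval(8.1, 11.2)
print(x - y)  # approximately [-7, -3]
```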

Other interval operators can be defined, as long as they preserve the invariant that the lower and upper bounds of a result are always respectively the lowest and highest value that can possibly be returned by the operation. IA can account for floating-point rounding errors on a computer by rounding the upper bound of a resulting interval towards positive infinity, and the lower bound towards negative infinity. Other error sources can be folded into the intervals in the same way. If this is done for every arithmetic operation, we can keep track of the total error in a computation. However, IA becomes a poor choice for long chained computations, such as the numerical integration used in dynamical systems simulations.

The Dependency Problem and The Wrapping Effect

The problem is that the error bounds become too conservative when the operands of the arithmetic operations are correlated. This is known as the dependency problem. A common example of this effect is given below: an interval x minus itself should give the interval [0, 0] in real arithmetic, but fails to, because x is perfectly correlated with itself:

$\hat x = [1, 2] \\ \hat x - \hat x = [(1 - 2), (2 - 1)] = [-1, 1]$
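This failure is easy to reproduce directly from the subtraction rule. In this throwaway sketch an interval is just a `(lo, hi)` tuple:

```python
# Dependency problem: interval subtraction has no way to know that both
# operands are the same variable, so the correlation is lost.

def interval_sub(a, b):
    return (a[0] - b[1], a[1] - b[0])

x = (1.0, 2.0)
print(interval_sub(x, x))  # (-1.0, 1.0), not the exact answer (0.0, 0.0)
```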

This happens because no variable correlation information is stored in the standard interval representation; only the lower and upper bound values are stored. It occurs, for example, when integrating differential equations like the one given below, where m appears twice on the right-hand side, because IA does not ‘know’ that the two occurrences represent the same quantity:

${{dm} \over {dt}} = \alpha(V) (1 - m) - \beta(V) m$

In multiple dimensions, IA tends to include regions of state space which are impossible to reach in a simulation run, since the orientation of the bounding box is fixed to that of the axes. This is known as the wrapping effect. In (Fig. 3), for instance, it could be that the point (x, y) lies at an edge of the grey bounding box. However, (x, y) can never lie at two edges due to the linear correlation that exists between them. Instead, (x, y) can only ever lie within the white diamond-shaped region. The grey bounding box is said to ‘wrap’ the white diamond-shaped region of actually reachable states.

Method Two: Affine Arithmetic

An approach which deals with these problems is Affine Arithmetic (AA), created by João L. D. Comba and J. Stolfi. It does this by encoding linear correlations within the interval representation. Each affine interval $\hat x$ is represented as a first-order (affine) polynomial, with a constant value $x_0$ representing the centre of the interval, and n deviation terms $x_i \epsilon_i$ each representing a linear correlation with another variable:

$\hat x = x_0 + x_1 \epsilon_1 + \dots + x_n \epsilon_n = x_0 + \sum^n_{i = 1} x_i \epsilon_i \\ \epsilon_i \in [-1, 1]$

The values of the symbols $\epsilon_i$ are unknown, and collectively represent the uncertainty in the interval. If they were known, the exact error-free solution of a computation could be determined by simply substituting them into the formula above. AA variables $\hat x$ and $\hat y$ are correlated if $\exists i : x_i \neq 0 \land y_i \neq 0$. In other words, two affine intervals are correlated if they have any non-zero deviation symbols in common. The radius of a standard IA interval is the distance from its midpoint to its endpoints:

$rad(\hat x) = {\overline x - \underline x \over 2}$

The radius of an affine interval is the sum of the absolute values of all deviations:

$rad(\hat x) = \sum^n_{i=1} |x_i|$
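The two radius definitions can be checked side by side. In this sketch an affine form is a centre value plus a dict mapping noise-symbol indices to coefficients; this representation is chosen here purely for illustration.

```python
# Radius of a standard IA interval: half its width.
def ia_radius(lo, hi):
    return (hi - lo) / 2

# Radius of an affine form: sum of absolute deviation coefficients.
# An affine form is (centre, {i: x_i}) -- an illustrative representation.
def aa_radius(centre, deviations):
    return sum(abs(c) for c in deviations.values())

print(ia_radius(1.0, 2.0))       # 0.5
print(aa_radius(1.5, {1: 0.5}))  # 0.5 -- same interval, one deviation term
```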

Note how an IA interval expressed in centre-radius form is quite similar to an AA interval with a single deviation term. The difference, however, is in how arithmetic operations are performed on IA and AA forms. We will look at arithmetic operations and limitations of AA in other posts, but for now I leave you with a simple subtraction example. Since affine forms are all first-order (linear) polynomials, the results of all linear operations between two affine forms can be represented exactly:

$\hat x = x_0 + \sum^n_{i = 1} x_i \epsilon_i \\ \hat y = y_0 + \sum^n_{i = 1} y_i \epsilon_i \\ \hat x - \hat y = (x_0 - y_0) + \sum^n_{i = 1} (x_i - y_i) \epsilon_i$

Returning to the interval subtraction problem explained earlier, with $\hat x = [1, 2]$ and $\hat x - \hat x = [-1, 1]$, converting x into the affine form $\hat x = 1.5 + 0.5 \epsilon_1$ yields $\hat x - \hat x = (1.5 - 1.5) + (0.5 - 0.5) \epsilon_1 = 0$ as required.
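The affine subtraction rule above can be sketched in code. As before, an affine form is represented here as a centre plus a dict of noise-symbol coefficients; this representation is for illustration only (it is not how MPFA stores affine forms).

```python
# Affine subtraction: subtract centres, then subtract the coefficients of
# matching noise symbols term by term. Forms are (centre, {i: coeff}).

def aa_sub(x, y):
    x0, xs = x
    y0, ys = y
    coeffs = {}
    for i in set(xs) | set(ys):
        c = xs.get(i, 0.0) - ys.get(i, 0.0)
        if c != 0.0:
            coeffs[i] = c  # keep only surviving deviation terms
    return (x0 - y0, coeffs)

# x = [1, 2] as an affine form: centre 1.5, deviation 0.5 * eps_1
x = (1.5, {1: 0.5})
print(aa_sub(x, x))  # (0.0, {}) -- exactly zero, unlike plain IA
```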

Stay tuned for more posts exploring affine arithmetic and its properties. For a primer in affine arithmetic, see (Jorge Stolfi and Luiz Henrique De Figueiredo, 1997, Self-Validated Numerical Methods and Applications) at: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.36.8089.

Follow the link below to MPFA, my arbitrary-precision implementation of affine arithmetic based on GNU MPFR, which is in development over at GitHub: https://github.com/jamesturner246/mpfa.

Happy holidays!