Affine arithmetic (AA), as discussed in a previous post, is a correlation-sensitive variant of interval arithmetic (IA), created by João L. D. Comba and J. Stolfi. Rather than representing a quantity by lower and upper interval bounds, as in standard IA, AA represents it by a centre value and n >= 0 deviation terms – i.e. a first-order polynomial – where each deviation term represents a linear correlation with another AA variable.

\hat x = x_0 + x_1 \epsilon_1 + ... + x_n \epsilon_n = x_0 + \sum^n_{i = 1} x_i \epsilon_i

We have seen the basic affine interval form, so now let’s look at some basic linear operations on them.

Linear Affine Operations

Since affine interval forms are first-order polynomials, it follows that every linear operation on them can be represented exactly as another affine form, without incurring additional error from linearisation. In this post, we examine the general form of these linear operations, and how specific arithmetic operations such as addition, subtraction and scalar product can be derived from this generic affine operation form. We also see how floating-point rounding errors, and other sources of error, can be incorporated into the resulting affine form. The generic form of an affine operation on a single affine form is as follows:

f_1(\hat x, \alpha, \gamma) = \alpha \hat x + \gamma = \alpha x_0 + \left ( \sum^n_{i = 1} \alpha x_i \epsilon_i \right ) + \gamma

Other operations, which take more than one affine form as arguments, can be defined in a similar manner. For example, the generic form of an affine operation on two affine forms is as follows:

f_2(\hat x, \hat y, \alpha, \beta, \gamma) = \alpha \hat x + \beta \hat y + \gamma = \alpha x_0 + \beta y_0 + \left ( \sum^n_{i = 1} (\alpha x_i + \beta y_i) \epsilon_i \right ) + \gamma
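To make the generic forms concrete, here is a minimal sketch in Python. The representation – a tuple of the centre value and a dictionary mapping each deviation-symbol index i to its partial deviation x_i – is my own illustrative encoding, not the API of any particular AA library:

```python
def f1(x, alpha, gamma):
    """Generic unary affine op f1(x^, a, g) = a*x^ + g:
    scale every term by alpha and shift the centre by gamma."""
    x0, xs = x
    return (alpha * x0 + gamma, {i: alpha * xi for i, xi in xs.items()})

def f2(x, y, alpha, beta, gamma):
    """Generic binary affine op f2(x^, y^, a, b, g) = a*x^ + b*y^ + g:
    partial deviations sharing the same epsilon_i combine term by term."""
    x0, xs = x
    y0, ys = y
    terms = {i: alpha * xs.get(i, 0.0) + beta * ys.get(i, 0.0)
             for i in set(xs) | set(ys)}
    return (alpha * x0 + beta * y0 + gamma, terms)
```

Note that in f2 the deviation terms are matched by their epsilon index, not by position – this is exactly what preserves correlation information between forms that share deviation symbols.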

All linear operations, including addition, subtraction and scalar product, can be expressed in the general forms described above:

\hat x + \hat y = f_2(\hat x, \hat y, 1, 1, 0) \\ \hat x - \hat y = f_2(\hat x, \hat y, 1, -1, 0) \\ \hat x + y = f_1(\hat x, 1, y) \\ \hat x - y = f_1(\hat x, 1, -y) \\ \hat x y = f_1(\hat x, y, 0)
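These identities can be checked directly, and doing so shows off AA's key advantage over plain IA: subtracting a form from itself cancels exactly. A self-contained sketch, assuming an illustrative (centre, {index: deviation}) tuple encoding of an affine form:

```python
def f2(x, y, alpha, beta, gamma):
    # Generic binary affine op: alpha*x^ + beta*y^ + gamma, term by term.
    x0, xs = x
    y0, ys = y
    terms = {i: alpha * xs.get(i, 0.0) + beta * ys.get(i, 0.0)
             for i in set(xs) | set(ys)}
    return (alpha * x0 + beta * y0 + gamma, terms)

x = (4.0, {1: 1.0})              # x^ = 4 + 1*eps_1, i.e. the interval [3, 5]
y = (2.0, {2: 0.5})              # y^ = 2 + 0.5*eps_2

add = f2(x, y, 1.0, 1.0, 0.0)    # x^ + y^ = 6 + eps_1 + 0.5*eps_2
sub = f2(x, x, 1.0, -1.0, 0.0)   # x^ - x^: shared eps_1 cancels exactly
# sub == (0.0, {1: 0.0}), whereas standard IA would widen [3,5] - [3,5] to [-2, 2]
```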

The above functions assume that real arithmetic is in use, but what happens when we are stuck with the floating-point arithmetic of a computer?

Floating-Point Rounding Errors

All linear operations described here are exact, and the typical identities of linear algebra hold true for affine arithmetic when real arithmetic is used. That is to say, affine addition is associative and commutative, and the scalar product distributes over affine addition and subtraction, provided {x_0, x_1, ..., x_n, y_0, y_1, ..., y_n} \subset \mathbb{R}. Associativity, however, no longer holds with floating-point arithmetic, due to the inevitable rounding errors – we will look more closely at floating-point arithmetic and rounding errors another time.
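The loss of associativity is easy to demonstrate with ordinary double-precision values, before any affine machinery is involved:

```python
# Floating-point addition is not associative: a classic
# double-precision example where the grouping changes the result.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c    # 0.6000000000000001
right = a + (b + c)   # 0.6
print(left == right)  # False
```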

So how can we represent the rounding errors of affine operations within an affine form? To preserve the invariant that the computed interval always encloses the mathematically correct interval result of an operation, both interval boundaries must be expanded by an upper bound on the total rounding error accumulated across all of the basic floating-point operations performed in the affine operation. For AA, expanding an affine interval simply means appending an additional deviation term z_{n + 1} \epsilon_{n + 1} to the affine form, where z_{n + 1} is the absolute value of the worst-case rounding error incurred by all floating-point operations:

f(\hat x) = \hat z = z_0 + \left ( \sum^n_{i = 1} z_i \epsilon_i \right ) + z_{n + 1} \epsilon_{n + 1}
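As an illustration, here is a sketch of the unary operation f_1 under floating point, again using an illustrative (centre, {index: deviation}) tuple encoding. Each floating-point operation rounds by at most half an ulp of its result, so summing one ulp per operation gives a loose but simple over-estimate; a production implementation would instead use directed rounding modes to make the bound rigorous:

```python
import math

def f1_rounded(x, alpha, gamma):
    """alpha*x^ + gamma, with an extra deviation term z_{n+1} bounding
    the rounding error of every floating-point operation performed."""
    x0, xs = x
    z0 = alpha * x0 + gamma
    # One ulp per rounding is a (loose) over-estimate of the half-ulp
    # error bound of round-to-nearest; here: the product and the sum.
    err = math.ulp(alpha * x0) + math.ulp(z0)
    terms = {}
    for i, xi in xs.items():
        zi = alpha * xi
        err += math.ulp(zi)          # one more rounding per term
        terms[i] = zi
    new_index = max(xs, default=0) + 1
    terms[new_index] = err           # z_{n+1} * eps_{n+1}
    return (z0, terms)
```

math.ulp requires Python 3.9+. The accumulation of err is itself done in floating point, which is one more reason real implementations prefer directed rounding; this sketch only shows where the extra term comes from.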

It is safe to add this extra term because \epsilon_{n + 1} \in [-1, 1] is a brand-new deviation symbol, completely independent of all other \epsilon_1, ..., \epsilon_n \in [-1, 1] currently in use, and thus does not corrupt the existing correlation information. Next we will look at nonlinear affine operations, such as multiplication of two affine forms, which are nontrivial due to the linearisation methods required to store the nonlinear result in the linear affine form. Stay tuned!