3. PROPAGATION OF ERRORS

3.1 INTRODUCTION

Once error estimates have been assigned to each piece of data, we must then find out how these errors contribute to the error in the result. The error in a quantity may be thought of as a variation or "change" in the value of that quantity. Results are obtained by mathematical operations on the data, and small changes in any data quantity can affect the value of a result. We say that "errors in the data propagate through the calculations to produce error in the result."

3.2 MAXIMUM ERROR

We first consider how data errors propagate through calculations to affect error limits (or maximum error) of results. It's easiest to first consider determinate errors, which have explicit sign. This leads to useful rules for error propagation. Then we'll modify and extend the rules to other error measures and also to indeterminate errors.

The underlying mathematics is that of "finite differences," an algebra for dealing with numbers that have relatively small variations imposed upon them. The finite differences we are interested in are variations from "true values" caused by experimental errors.

Consider a result, R, calculated from the sum of two data quantities A and B. For this discussion we'll use ΔA and ΔB to represent the errors in A and B respectively. The data quantities are written to show the errors explicitly:

[3-1]

    A + ΔA   and   B + ΔB

We allow the possibility that ΔA and ΔB may be either positive or negative, the signs being "in" the symbols "ΔA" and "ΔB."

The result of adding A and B is expressed by the equation: R = A + B. When errors are explicitly included, it is written:

    (A + ΔA) + (B + ΔB) = (A + B) + (ΔA + ΔB)

So the result, with its error ΔR explicitly shown in the form R + ΔR, is:

    R + ΔR = (A + B) + (ΔA + ΔB)

[3-2]

The error in R is: ΔR = ΔA + ΔB.

We conclude that the error in the sum of two quantities is the sum of the errors in those quantities. You can easily work out the case where the result is calculated from the difference of two quantities. In that case the error in the result is the difference in the errors. Summarizing:

    Sum and difference rule. When two quantities are added (or subtracted), their determinate errors add (or subtract).
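
Here's a minimal Python sketch of this rule; the numbers and variable names are hypothetical, chosen only for illustration:

    # Hypothetical data with explicit (signed) determinate errors.
    A, dA = 100.0, +0.5
    B, dB = 50.0, -0.3

    R = A + B                        # result: 150.0
    dR = dA + dB                     # sum rule: +0.2
    print(R, dR)

    # Direct check: perturb the data and recompute the result.
    print((A + dA) + (B + dB) - R)   # also +0.2 (up to rounding)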

Now consider multiplication: R = AB. With errors explicitly included:

    R + ΔR = (A + ΔA)(B + ΔB) = AB + (ΔA)B + A(ΔB) + (ΔA)(ΔB)

    [3-3]

    or: ΔR = (ΔA)B + A(ΔB) + (ΔA)(ΔB)

This doesn't look like a simple rule. However, when we express the errors in relative form, things look better. When the error ΔA is small relative to A and ΔB is small relative to B, then (ΔA)(ΔB) is certainly small relative to AB. It is also small compared to (ΔA)B and A(ΔB). Therefore we can throw out the term (ΔA)(ΔB), since we are interested only in error estimates to one or two significant figures. The relative error in R is then:

[3-4]

    ΔR/R ≈ [(ΔA)B + A(ΔB)]/(AB) = ΔA/A + ΔB/B

This does give us a very simple rule:

    Product rule. When two quantities are multiplied, their relative determinate errors add.
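
A similar sketch (again with hypothetical numbers) shows both the product rule and why dropping the (ΔA)(ΔB) term is safe:

    # Hypothetical data; relative errors 0.005 and 0.004.
    A, dA = 100.0, 0.5
    B, dB = 50.0, 0.2

    R = A * B
    exact_dR = (A + dA) * (B + dB) - R   # (ΔA)B + A(ΔB) + (ΔA)(ΔB) = 45.1
    rule_dR = R * (dA/A + dB/B)          # product rule prediction: 45.0
    print(exact_dR, rule_dR)             # the neglected (ΔA)(ΔB) term is only 0.1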

A similar procedure is used for the quotient of two quantities, R = A/B.

[3-5]

    ΔR/R = [(A + ΔA)/(B + ΔB) - A/B] / (A/B)
         = [(A + ΔA)B - A(B + ΔB)] / [A(B + ΔB)]
         = [(ΔA)B - A(ΔB)] / [A(B + ΔB)]
         ≈ [(ΔA)B - A(ΔB)] / (AB) = ΔA/A - ΔB/B

The approximation made in the next to last step was to neglect ΔB in the denominator, which is valid when the relative errors are small. So the result is:

    Quotient rule. When two quantities are divided, the relative determinate error of the quotient is the relative determinate error of the numerator minus the relative determinate error of the denominator.

A consequence of the product rule is this:

    Power rule. When a quantity Q is raised to a power, P, the relative determinate error in the result is P times the relative determinate error in Q. This also holds for negative and fractional powers, e.g. the relative determinate error in the square root of Q is one half the relative determinate error in Q.
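
A quick numerical check of the power rule, with a hypothetical value of Q:

    import math

    Q, dQ = 25.0, 0.1   # relative error dQ/Q = 0.004

    # Square (P = 2): relative error should be about 2 × 0.004 = 0.008.
    print(((Q + dQ)**2 - Q**2) / Q**2)                         # ~0.0080

    # Square root (P = 1/2): should be about 0.5 × 0.004 = 0.002.
    print((math.sqrt(Q + dQ) - math.sqrt(Q)) / math.sqrt(Q))   # ~0.0020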

3.3 PROPAGATION OF INDETERMINATE ERRORS

Indeterminate errors have unknown sign. If we assume that the measurements have a symmetric distribution about their mean, then the errors are unbiased with respect to sign. Also, if indeterminate errors in different measurements are independent of each other, their signs tend to offset each other when the quantities are combined through mathematical operations.

When we are only concerned with limits of error (or maximum error) we assume a "worst-case" combination of signs. In the operation of subtraction, A - B, the worst case deviation of the answer occurs when the errors are either +ΔA and -ΔB or -ΔA and +ΔB. In either case, the maximum error will be (ΔA + ΔB).

In the operation of division, A/B, the worst case deviation of the result occurs when the errors in the numerator and denominator have opposite sign, either +ΔA and -ΔB or -ΔA and +ΔB. In either case, the maximum size of the relative error will be (ΔA/A + ΔB/B).

The results for addition and multiplication are the same as before. In summary, maximum indeterminate errors propagate according to the following rules:

    Addition and subtraction rule. The absolute indeterminate errors add.

    Product and quotient rule. The relative indeterminate errors add.

A consequence of the product rule is this:

    Power rule. When a quantity Q is raised to a power, P, the relative error in the result is P times the relative error in Q. This also holds for negative and fractional powers, e.g. the relative error in the square root of Q is one half the relative error in Q.

These rules only apply when combining independent errors, that is, individual measurements whose errors have size and sign independent of each other.

It can be shown (but not here) that these rules also apply sufficiently well to errors expressed as average deviations. One drawback is that the error estimates made this way are still overconservative. They do not fully account for the tendency of error terms associated with independent errors to offset each other. This, however, is a minor correction, of little importance in our work in this course.

Error propagation rules may be derived for other mathematical operations as needed. For example, the rules for errors in trigonometric functions may be derived by use of the trigonometric identities, using the approximations: sin θ ≈ θ and cos θ ≈ 1, valid when θ is small enough. Rules for exponentials may also be derived.

When mathematical operations are combined, the rules may be successively applied to each operation. In this way an equation may be algebraically derived that expresses the error in the result in terms of errors in the data. Such an equation can always be cast into standard form in which each error source appears in only one term. Let Δx represent the error in x, Δy the error in y, etc. Then the error in any result R, calculated by any combination of mathematical operations from data values x, y, z, etc. is given by:

[3-6]

    ΔR = (cx) Δx + (cy) Δy + (cz) Δz ... etc.,

which may always be algebraically rearranged to:

[3-7]

    ΔR/R = {Cx} Δx/x + {Cy} Δy/y + {Cz} Δz/z ... etc.

The coefficients {cx} and {Cx} etc. in each term are extremely important because they, along with the sizes of the errors, determine how much each error affects the result. It is the relative size of the terms of this equation that determines the relative importance of the error sources.

If this error equation is derived from the determinate error rules, the relative errors may have + or - signs. The coefficients may also have + or - signs, so the terms themselves may have + or - signs. Error terms may therefore offset each other, reducing ΔR/R.

If this error equation is derived from the indeterminate error rules, the error measures Δx, Δy, etc. are inherently positive. The coefficients will turn out to be positive also, so terms cannot offset each other.

The indeterminate error equation may be obtained directly from the determinate error equation by simply choosing the "worst case," i.e., by taking the absolute value of every term. This forces all terms to be positive. This step should only be done after the determinate error equation, Eq. 3-6 or 3-7, has been fully derived in standard form.

The error equation in standard form is one of the most useful tools for experimental design and analysis. It should be derived (in algebraic form) even before the experiment is begun, as a guide to experimental strategy. It can show which error sources dominate, and which are negligible, thereby saving time you might otherwise spend fussing with unimportant considerations. It can suggest how the effects of error sources may be minimized by appropriate choice of the sizes of variables. It can tell you how good a measuring instrument is needed to achieve a desired accuracy in the results.
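
As a sketch of this kind of analysis, suppose (hypothetically) a result R = xy²/z. Its standard-form error equation, from the product, power, and quotient rules, is ΔR/R = Δx/x + 2(Δy/y) - Δz/z, and a few lines of Python make the comparison of terms automatic (all values here are invented for illustration):

    # Hypothetical data and error estimates.
    x, dx = 12.0, 0.1
    y, dy = 3.0, 0.05
    z, dz = 40.0, 0.5

    # One term per error source; coefficients come from the propagation rules.
    terms = {"x": 1 * dx/x, "y": 2 * dy/y, "z": -1 * dz/z}
    for name, value in terms.items():
        print(name, round(value, 4))
    # x: 0.0083, y: 0.0333, z: -0.0125. The y term dominates, so effort
    # spent improving the y measurement pays off most.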

The student who neglects to derive and use this equation may spend an entire lab period using instruments, strategy, or values insufficient to the requirements of the experiment. The student may have no idea why the results were not as good as they ought to have been.

A final comment for those who wish to use standard deviations as indeterminate error measures: Since the standard deviation is obtained from the average of squared deviations, Eq. 3-7 must be modified—each term of the equation (both sides) must be squared:

[3-8]
    (ΔR/R)² = (Cx)²(Δx/x)² + (Cy)²(Δy/y)² + (Cz)²(Δz/z)²

This rule is given here without proof. This method of combining the error terms is called "summing in quadrature."
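
A minimal sketch of Eq. 3-8 in Python, with hypothetical coefficients and fractional errors:

    import math

    # Hypothetical (coefficient, fractional error) pairs, one per error source.
    sources = [(1, 0.008), (2, 0.017), (1, 0.0094)]

    # Eq. 3-8: square each term, sum, then take the square root.
    frac_R = math.sqrt(sum((C * f)**2 for C, f in sources))
    print(frac_R)   # ~0.036, noticeably less than the simple sum 0.051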

3.4 AN EXAMPLE OF ERROR PROPAGATION ANALYSIS

The physical laws one encounters in elementary physics courses are expressed as equations, and these are combinations of the elementary operations of addition, subtraction, multiplication, division, raising to powers, etc. Laboratory experiments often take the form of verifying a physical law by measuring each quantity in the law. If the measurements agree within the limits of error, the law is said to have been verified by the experiment.

For example, a body falling straight downward in the absence of frictional forces is said to obey the law:

[3-9]

    s = vo t + (1/2) a t²

where s is the distance of fall, vo is the initial speed, t is the time of fall and a is the acceleration. In this case, a is the acceleration due to gravity, g, which has a nearly constant value of about 980 cm/sec², varying slightly with latitude and altitude. More precise values of g are available, tabulated for any location on earth. There's a general formula for g near the earth, called Helmert's formula, which can be found in the Handbook of Chemistry and Physics.

The student might design an experiment to verify this relation, and to determine the value of g, by measuring the time of fall of a body over a measured distance.

One simplification may be made in advance, by measuring s and t from the position and instant the body was at rest, just as it was released and began to fall. Then vo = 0 and the entire first term on the right side of the equation drops out, leaving:

[3-10]

    s = (1/2) g t²

The student will, of course, repeat the experiment a number of times to obtain the average time of fall. The average values of s and t will be used to calculate g, using the rearranged equation:

[3-11]

    g = 2s/t²

The experimenter uses the measurements of s and t to calculate a result, g. The errors in s and t combine to produce error in the experimentally determined value of g. The error in g may be calculated from the previously stated rules of error propagation, if we know the errors in s and t.

Let fs and ft represent the fractional errors in s and t. Similarly, fg will represent the fractional error in g. The number "2" in the equation is not a measured quantity, so it is treated as error-free, or exact.

So the fractional error in the numerator of Eq. 3-11 is, by the product rule:

[3-12]

    f2 + fs = fs

since f2 = 0.

The fractional error in the denominator is, by the power rule, 2ft. Using the quotient rule, the fractional error in the entire right side of Eq. 3-11 is the fractional error in the numerator minus the fractional error in the denominator:

[3-13]

    fg = fs - 2 ft ,

which, as we have indicated, is also the fractional error in g.

The absolute error in g is:

[3-14]

    Δg = g fg = g (fs - 2 ft)

Equations like 3-13 and 3-14 are called determinate error equations, since we used the determinate error rules. It's a good idea to derive them first, even before you decide whether the errors are determinate, indeterminate, or both.

Some students prefer to express fractional errors in a quantity Q in the form ΔQ/Q. Using this style, our results are:

[3-15,16]

    Δg/g = Δs/s - 2(Δt/t) ,  and  Δg = g(Δs/s) - 2g(Δt/t)

In this experiment we can recognize possible sources of determinate error: reaction time in using a stopwatch, and stretch of the string used to measure the distance of fall. But if you recognize a determinate error, you should take steps to eliminate it before you take the final set of data.

Indeterminate errors show up as a scatter in the independent measurements, particularly in the time measurement. The experimenter must examine these measurements and choose an appropriate estimate of the amount of this scatter, to assign a value to the indeterminate errors.

Then, these estimates are used in an indeterminate error equation. That is easy to obtain. Look at the determinate error equation, and choose the signs of the terms for the "worst" case error propagation. In Eqs. 3-13 through 3-16 we must change the minus sign to a plus sign:

[3-17]

    fs + 2 ft = fg

[3-18]

    Δg = g fg = g (fs + 2 ft)

[3-19, 20]

    Δg/g = Δs/s + 2(Δt/t) ,  and  Δg = g(Δs/s) + 2g(Δt/t)
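
To make Eqs. 3-19 and 3-20 concrete, here is a Python sketch with hypothetical measurements (s in cm, t in seconds):

    # Hypothetical measurements and error estimates.
    s, ds = 122.5, 0.2    # distance of fall
    t, dt = 0.50, 0.01    # average time of fall

    g = 2 * s / t**2               # Eq. 3-11: 980 cm/sec²
    frac_g = ds/s + 2 * dt/t       # Eq. 3-19: 0.0016 + 0.0400 ≈ 0.042
    print(g, frac_g, g * frac_g)   # Δg ≈ 41, so g = 980 ± 41 cm/sec²

Notice that the time term dominates, because of the factor of 2 from the power rule; in this experiment a better clock helps far more than a better meter stick.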

3.5 EXAMPLES:

(1) Two data quantities, X and Y, are used to calculate a result, R = XY. X = 38.2 ± 0.3 and Y = 12.1 ± 0.2. What is the error in R?

Solution: First calculate R without regard for errors:

    R = (38.2)(12.1) = 462.22

The product rule requires fractional error measure. The fractional error in X is 0.3/38.2 = 0.008 approximately, and the fractional error in Y is 0.017 approximately. Adding these gives the fractional error in R: 0.025. Multiplying this result by R gives 11.56 as the absolute error in R, so we write the result as R = 462 ± 12. Note that once we know the error, its size tells us how far to round off the result (retaining the first uncertain digit). Note also that we round off the error itself to one, or at most two, digits. This is why we could safely make approximations during the calculations of the errors.

This result is the same whether the errors are determinate or indeterminate, since no negative terms appeared in the determinate error equation.
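
The arithmetic of this example takes only a few lines of Python to reproduce. Keeping full precision in the fractions gives ΔR ≈ 11.3 rather than the 11.56 obtained above from the rounded fractions; either way the error rounds to about the same quoted value:

    X, dX = 38.2, 0.3
    Y, dY = 12.1, 0.2

    R = X * Y                      # 462.22
    frac_R = dX/X + dY/Y           # 0.0079 + 0.0165 ≈ 0.024
    print(R, frac_R, R * frac_R)   # ΔR ≈ 11.3; rounded, R = 462 ± 11 or 12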

(2) A quantity Q is calculated from the law: Q = (G+H)/Z, and the data is:

    G = 20 ± 0.5
    H = 16 ± 0.5
    Z = 106 ± 1.0

The calculation of Q requires both addition and division, and gives Q = 0.340. The error calculation therefore requires both the rule for addition and the rule for division, applied in the same order as the operations were done in calculating Q.

First, the addition rule says that the absolute errors in G and H add, so the error in the numerator (G+H) is 0.5 + 0.5 = 1.0. Therefore the fractional error in the numerator is 1.0/36 = 0.028. The fractional error in the denominator is 1.0/106 = 0.0094. The fractional determinate error in Q is 0.028 - 0.0094 = 0.0186, which is 1.86%. The absolute determinate error is (0.0186)Q = (0.0186)(0.340) = 0.0063. We quote the result in standard form: Q = 0.340 ± 0.006.

If we knew the errors were indeterminate in nature, we'd add the fractional errors of numerator and denominator to get the worst case. The fractional indeterminate error in Q is then 0.028 + 0.0094 = 0.037, or 3.7%. The absolute error in Q is then (0.037)(0.340) = 0.013. We quote the result as Q = 0.340 ± 0.013.
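
The same kind of check for this example, in a sketch mirroring the arithmetic above:

    G, dG = 20.0, 0.5
    H, dH = 16.0, 0.5
    Z, dZ = 106.0, 1.0

    Q = (G + H) / Z                  # 0.3396 ≈ 0.340
    num_frac = (dG + dH) / (G + H)   # addition rule: 1.0/36 ≈ 0.028
    den_frac = dZ / Z                # ≈ 0.0094

    det = num_frac - den_frac        # determinate: ≈ 0.019
    ind = num_frac + den_frac        # indeterminate worst case: ≈ 0.037
    print(Q, Q * det, Q * ind)       # absolute errors ≈ 0.0063 and ≈ 0.013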

3.6 EXERCISES:

(3.1) Devise a non-calculus proof of the product rules.

(3.2) Devise a non-calculus proof of the quotient rules. Do this for the indeterminate error rule and the determinate error rule. Hint: Take the quotient of (A + ΔA) and (B - ΔB) to find the fractional error in A/B. Try all other combinations of the plus and minus signs.

(3.3) The mathematical operation of taking a difference of two data quantities will often give very much larger fractional error in the result than in the data. Why can this happen? Does it follow from the above rules? Under what conditions does this generate very large errors in the results?

(3.4) Show by use of the rules that the maximum error in the average of several quantities is the same as the maximum error of each of the individual quantities. This reveals one of the inadequacies of these rules for maximum error; there seems to be no advantage to taking an average. But more will be said of this later.

3.7 ERROR PROPAGATION IN OTHER MATHEMATICAL OPERATIONS

Rules have been given for addition, subtraction, multiplication, and division. Raising to a power was a special case of multiplication. You will sometimes encounter calculations with trig functions, logarithms, square roots, and other operations, for which these rules are not sufficient.

The calculus treatment described in chapter 6 works for any mathematical operation. But for those not familiar with calculus notation there are always non-calculus strategies to find out how the errors propagate.

The trick lies in the application of the general principle implicit in all of the previous discussion, and specifically used earlier in this chapter to establish the rules for addition and multiplication. This principle may be stated:

The maximum error in a result is found by determining how much change occurs in the result when the maximum errors in the data combine in the worst possible way.

Example: An angle is measured to be 30° ±0.5°. What is the error in the sine of this angle?

Solution: Use your electronic calculator. The sine of 30° is 0.5; the sine of 30.5° is 0.508; the sine of 29.5° is 0.492. So if the angle is one half degree too large the sine becomes 0.008 larger, and if it were half a degree too small the sine becomes 0.008 smaller. (The change happens to be nearly the same size in both cases.) So the error in the sine would be written ±0.008.

The size of the error in trigonometric functions depends not only on the size of the error in the angle, but also on the size of the angle. A one half degree error in an angle of 90° would give an error of only 0.00004 in the sine.
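
This numerical approach is easy to script. The following Python sketch reproduces both the 30° and 90° cases:

    import math

    for angle, err in [(30.0, 0.5), (90.0, 0.5)]:
        s = math.sin(math.radians(angle))
        hi = math.sin(math.radians(angle + err))
        lo = math.sin(math.radians(angle - err))
        # Worst-case change in the sine, in either direction.
        print(angle, round(s, 5), round(max(abs(hi - s), abs(lo - s)), 5))
    # 30°: change ≈ 0.0076 (the 0.008 above);  90°: change ≈ 0.00004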

3.8 INDEPENDENT INDETERMINATE ERRORS

Experimental investigations usually require measurement of a number of different physical quantities, each of which may have error. The errors are said to be independent if the error in each one is not related in any way to the others. Errors encountered in the elementary laboratory are usually independent, but there are important exceptions.

When errors are independent, the mathematical operations leading to the result tend to average out the effects of the errors. This makes it less likely that the errors in results will be as large as predicted by the maximum-error rules.

A simple modification of these rules gives more realistic predictions of size of the errors in results. These modified rules are presented here without proof. They are, in fact, somewhat arbitrary, but do give realistic estimates that are easy to calculate.

The previous rules are modified by replacing "sum of" with "square root of the sum of the squares of." Instead of summing, we "sum in quadrature."

This modification is used only when dealing with indeterminate errors, so we restate the modified indeterminate error rules:

Sum and Difference Rule: The indeterminate error in the sum or difference of several quantities is the square root of the sum of the squares of the errors of the individual quantities. [Sum the errors in quadrature.]

Product and Quotient Rule: The fractional indeterminate error in the product or quotient of several quantities is the square root of the sum of the squares of the fractional errors of the individual quantities. [Sum the fractional errors in quadrature.]
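
A short sketch contrasting the worst-case rules with these quadrature rules, using hypothetical errors:

    import math

    # Sum of two quantities, each with absolute error 0.5:
    print(0.5 + 0.5, math.hypot(0.5, 0.5))          # worst case 1.0 vs ~0.71

    # Product of two quantities with fractional errors 0.008 and 0.017:
    print(0.008 + 0.017, math.hypot(0.008, 0.017))  # worst case 0.025 vs ~0.019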

Raising a number to a power might seem to be simply a case of multiplication: A² = A × A. But here the two numbers multiplied together are identical and therefore not independent. So the modification of the rule is not appropriate here and the original rule stands:

Power Rule: The fractional indeterminate error in the quantity Aⁿ is given by n times the fractional error in A.

3.9 EXERCISES

(3.10) What is the fractional indeterminate error in A⁻ⁿ in terms of the fractional error in A?

(3.11) What is the fractional indeterminate error in AA (A raised to the power A)?

(3.12) What is the fractional indeterminate error in 3A? (The number 3 is error-free).

3.10 ERROR IN AN AVERAGE

As an example of these rules, let's reconsider the case of averaging several quantities. We previously stated that the process of averaging did not reduce the size of the error. Now that we recognize that repeated measurements are independent, we should apply the modified rules of section 3.8.

Suppose n measurements are made of a quantity, Q. The fractional error may be assumed to be nearly the same for all of these measurements. Call it f. Then our data table is:

    Q1 ± fQ1
    Q2 ± fQ2
    ....
    Qn ± fQn

The first step in taking the average is to add the Qs. The error in the sum is given by the modified sum rule:

[3-21]

    Error in sum = √[(fQ1)² + (fQ2)² + ... + (fQn)²]

But each of the Qs is nearly equal to their average, <Q>, so the error in the sum is:

[3-22]

    Error in sum = (f√n)<Q>.

The next step in taking the average is to divide the sum by n. There is no error in n (counting is one of the few measurements we can do perfectly.) So the fractional error in the quotient is the same size as the fractional error in the numerator.

[3-23]

    Fractional error in average = (f√n)<Q> / (n<Q>) = f/√n

Therefore, the fractional error in an average is reduced by the factor 1/√n. For example, the fractional error in the average of four measurements is one half that of a single measurement. Note that this fraction converges to zero with large n, suggesting that zero error would be obtained only if an infinite number of measurements were averaged! We'd have achieved the elusive "true" value!
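
A small simulation (assuming, hypothetically, Gaussian scatter in the individual measurements) illustrates the 1/√n reduction:

    import random, statistics

    true_Q, sigma = 100.0, 2.0   # hypothetical true value and single-measurement scatter

    for n in (1, 4, 16, 64):
        averages = [statistics.fmean(random.gauss(true_Q, sigma) for _ in range(n))
                    for _ in range(10000)]
        # The scatter of the averages shrinks like 1/√n: ~2.0, 1.0, 0.5, 0.25.
        print(n, round(statistics.stdev(averages), 2))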

3.11 EXERCISES

(3.13) Derive an expression for the fractional and absolute error in an average of n measurements of a quantity Q when each measurement of Q has a different fractional error. The result is most simply expressed using summation notation, designating each measurement by Qi and its fractional error by fi.

© 1996, 2004 by Donald E. Simanek.