

Chapter 3

Measurements as Random Variables

A large number of experiments in science consist of a series of measurements. Measurements are an attempt to obtain an estimate of some (usually unknown) quantity such as mass or volume. The properties of random variables, which we have described over the last two chapters, are an important aspect of measurement science, because it is almost always the case that measurements are random variables. In other words, a measurement will contain some unpredictable, random component, and will usually not equal the true value of the property we are trying to estimate. The difference between the true value and the measured value is called the measurement error. In this chapter we will describe the process of estimating some property through the use of a series of measurements. We begin by defining two types of measurement error: random and systematic error. Our study of statistics will allow us to deal with the ambiguities associated with random measurement error. We then describe how to determine the uncertainty in values calculated from measurements through the propagation of random measurement error. Finally, we will describe the properties and utility of confidence intervals to estimate the value being measured.

3.1 Measurement Error

3.1.1 The Nature of Measurement Error

A measurement process might be as simple as using a graduated cylinder to measure the volume of a liquid or as complicated as measuring the width of a spectral emission line to determine the temperature of a hot combustion flame. In all cases, however, a measurement is an attempt to estimate some property that is not known. In chemistry, measurements are taken all the time. For example, we might measure the following:

• the overall yield of a series of chemical reactions;
• the equilibrium constant of a chemical process;
• the rate constant for a fundamental reaction;
• the aqueous solubility of a solid; or

Chapter topics: 1. Random and systematic measurement error 2. Propagation of random measurement error 3. Confidence intervals


• the refractive index of a liquid.

In quantitative chemical analysis, we would be interested in measuring some property related to the concentration of a chemical in an object. Obviously, measurements are not solely the province of the field of chemistry.

Any measurement will contain some error. The difference between the measured value (i.e., our estimate) and the true value of the property is the measurement error, εx:

$$\varepsilon_x = x - \xi \qquad (3.1)$$

where x is the measured value and ξ is the true value. It helps to visualize measurement error on a number line:

(number line: a measured value x and the true value ξ, separated by the measurement error εx)

Repeated measurements of the same property very often do not give the same value.

Any measurement has some (hopefully small!) error. In many experiments, repeated measurement of the same property will result in measurements that are not quite identical, as follows:

(number line: repeated measurements scattered about, and offset from, the true value ξ)

Each one of these measurements has some measurement error; obviously, the amount of error differs from measurement to measurement. Examining this last situation, we see that there are two components to measurement error. First, repeated measurements are not all identical due to random fluctuation inherent in the measurement process itself; the spread of measurements indicates the reproducibility, or precision, of the measurements. Second, the measurements do not cluster around the true value, ξ; the difference between the central tendency of the measurements and the true value of the property being measured indicates the bias of the measurements.

3.1.2 Characterizing Measurement Accuracy and Precision

It is common in science that repeated measurements of the same property result in slightly different measurement values. This is due to the uncertainty in the measurement itself — in other words, the measurement is a random variable. As such, it is most fruitful to describe measurement error in terms of the probability distribution associated with the measurement process. Measurement bias is the offset between the true value of the property being measured and the location of the probability distribution, while measurement precision is indicated by the width (i.e., the dispersion) of the distribution. Figure 3.1 represents measurement error in these terms. The terms "precise" and "accurate" are used to describe these two properties of a measurement process. Measurements are accurate if there is no inherent offset in the measurement process, while they are precise if the measurement values cluster closely together. These distinctions are more clearly seen in terms of the probability distributions of the measurement process, as shown in figure 3.2.

3.1.3 Quantitative Treatment of Measurement Error

Like any random variable, measurements can be described by a probability distribution. The usual properties of location and dispersion of the distribution of measurements can be used to quantitatively describe the two components of measurement error.

Figure 3.1: A statistical view of measurement error. Measurement values follow a certain probability distribution. The dispersion of the distribution determines the measurement precision, while the offset between the location (the population mean µx) and the true value ξ is the measurement bias.

1. Measurement bias is the offset between the location of the probability distribution (i.e., the population mean, µx) and the true value, ξ, of the property being measured.

$$\gamma_x = \mu_x - \xi \qquad (3.2)$$

Bias is the offset inherent in a measurement.

Measurement bias is also known as systematic measurement error.

2. Measurement precision refers to the typical spread in the values of repeated measurements, as indicated by the dispersion of the probability distribution. Precision is thus best described quantitatively by the standard deviation, σx (or the variance, σx²), of the measurement probability distribution. Measurement precision is known as random measurement error; the term measurement noise is also frequently used.

These two components of measurement error — random and systematic error — combine to create the error εx that is observed for every single measurement. The total amount of measurement error is best characterized by the mean square error, MSE:

$$\mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N} \varepsilon_i^2 = \frac{1}{N}\sum_{i=1}^{N} (x_i - \xi)^2 \qquad (3.3)$$

where N is the number of observations in the entire population. The relative contributions of the two components of measurement error — random and systematic error — are given by

$$\mathrm{MSE} = \gamma_x^2 + \sigma_x^2 \qquad (3.4)$$

This last equation shows how both bias and noise contribute to the overall measurement error. The purpose of a measurement is to determine the true value, ξ, of the property being measured.
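As a concrete illustration of eqn. 3.4, the following short simulation (a sketch in Python, assuming the numpy library; the values of ξ, γx, and σx are arbitrary choices for illustration) generates a large population of biased, noisy measurements and confirms that the mean square error decomposes into γx² + σx².

```python
import numpy as np

rng = np.random.default_rng(0)

xi = 50.0        # true value of the property (arbitrary choice)
gamma_x = 2.0    # systematic error (bias), arbitrary
sigma_x = 5.0    # random error (noise), arbitrary
N = 1_000_000    # population size for the simulation

# Each measurement is the true value plus bias plus random noise.
x = xi + gamma_x + rng.normal(0.0, sigma_x, size=N)

mse = np.mean((x - xi) ** 2)          # eqn. 3.3
print(mse)                            # ~29.0
print(gamma_x**2 + sigma_x**2)        # eqn. 3.4: 4 + 25 = 29
```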

MSE is a measure of the total amount of error in a measurement process.

Figure 3.2: Comparison of accuracy and precision of two measurement processes on the basis of the probability distributions of the measurements. (a) Method #1 is more precise than method #2. (b) Method #1 is more precise but less accurate than method #2.

As long as measurement noise is present, it is impossible to know exactly the value of ξ, since random noise is inherently unpredictable. However, the statistical methods described in this text — confidence intervals, linear regression, hypothesis testing, etc. — allow us to draw conclusions about the value of ξ even in the presence of random measurement error. Indeed, we might say that the entire purpose of statistical data analysis is to allow us to account for the effects of random fluctuation when drawing conclusions from data. Measurement bias, however, must be corrected before we can draw meaningful conclusions about our data. We may correct for measurement bias in two ways:

Bias must usually be eliminated from the data before statistical analysis is performed.

1. We can modify the measurement procedure to eliminate the bias. In practice, it is not necessary to completely eliminate the bias; it is sufficient to reduce it to levels where the overall measurement error is dominated almost completely by noise. In other words, we reduce the bias, γx, so that

γx ≪ σx

Equation 3.4 is relevant in this regard. For example, if we desire that 99% of the mean square error, MSE, be due to random error, then we need to reduce the bias until γx² < 0.01σx², or until γx < 0.1σx.

2. Instead of eliminating the bias, we can account for it empirically by applying a mathematical correction factor to the data. In the simplest case, we obtain a separate estimate gx of the bias γx and subtract that estimate from each measurement. For example, in analytical chemistry it is common to measure the instrument response to the solvent in order to correct for inherent instrument offset — the measured response to this "blank" is subtracted from all other measurements. In some cases, bias correction may not be trivial. For example, it is not unusual for the measurement bias to exhibit a functional dependence on the value of the property being measured: γx = f(x). A bias correction procedure would be more involved in such a case.

Throughout this text, unless stated otherwise, we assume that measurement bias has been eliminated or corrected in some fashion (i.e., we assume µx = ξ), so that we will be dealing solely with the effects of random measurement error. Before moving on, let's do two examples to apply what we've learned so far.

Example 3.1 An overzealous Highway Patrol officer has miscalibrated his radar gun so that there is a bias of +4 mph in the measurements of car speed. If a car is traveling at the speed limit of 65 mph, what is the probability that the radar gun will give a measurement greater than 65 mph? Assume that the measurement precision, as specified by the standard deviation of measurements, is 2.5 mph.

We are given the information that the true car speed — the quantity being measured — is 65 mph: ξ = 65 mph. We need the mean µx and standard deviation σx of the probability distribution of the radar gun measurements. The question states outright that σx = 2.5, and from eqn. 3.2,

µx = γx + ξ = +4 + 65 = 69 mph

Note that statistics can be used to test for significant bias in a set of measurements, if the true value, ξ, of the property being measured is known.


We must find the probability that a single measurement will give a value greater than 65 mph, so

$$P(x > 65\ \mathrm{mph}) = P\left(z > \frac{65 - \mu_x}{\sigma_x}\right) = P\left(z > \frac{65 - 69}{2.5}\right) = P(z > -1.6)$$

(sketch: normal distribution of radar-gun readings centered at µx = 69 mph, with ξ = 65 mph marked; the shaded area is P(x > 65 mph))

From eqn. 1.16 on p. 20 (and the z-tables in the appendix),

P(z > −1.6) = 1 − P(z > +1.6) = 0.9452

Question: why would the probability be 50% if there were no measurement bias?

So there is a 94.52% probability that the measured car speed will be greater than 65 mph, even though the true car speed is constant at 65 mph. This shows the effect of measurement bias: even though the car is traveling at the speed limit, there is a high probability that the driver will be flagged for speeding. If the radar gun exhibited no bias in its measurements, then there would only be a 50% chance that the measurement would register above 65 mph.
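As a quick numerical check of this example (a sketch, assuming Python with scipy is available):

```python
from scipy.stats import norm

mu_x, sigma_x = 69.0, 2.5    # biased mean and precision from the example
p = norm.sf(65.0, loc=mu_x, scale=sigma_x)   # P(x > 65) = survival function
print(p)   # ~0.9452

# With no bias (mu_x = 65), the probability drops to 0.5:
print(norm.sf(65.0, loc=65.0, scale=2.5))    # 0.5
```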

Example 3.2 A certain chemist has scrupulously corrected for all systematic errors, and has improved measurement precision as best she could, while attempting to weigh a precipitate of silver sulfide. The actual (unknown) mass of the dry precipitate is 122.1 mg. If the relative standard deviation (RSD) of the measurement is 2.0%, what is the probability that the measured mass will have a relative error worse than ±5.0%?

As with so many disciplines in science, one of the most frustrating aspects of statistics is the language barrier. The basic concepts are actually not terribly difficult, but it can be challenging to penetrate the sometimes arcane terminology that is used. If you are having trouble with a question, often the real barrier is not knowing exactly what information is given or what is asked. Before we can answer this particular question, we need to recall the definition of relative standard deviation, RSD, from the first chapter (see p. 14): if the value of µx is known, then RSD = σx/µx; otherwise it is estimated from the sample statistics (e.g., sx/x̄), depending on the context. Knowing the value of the RSD allows us to calculate the standard deviation of the mass measurements, which is needed to solve this problem.

As with the previous example, we need the true value, ξ, and the mean µx and standard deviation σx of the measurement probability distribution. Since measurement bias has been eliminated,

µx = ξ = 122.1 mg

and since RSD = 0.02 (i.e., 2%),

σx = 0.02 · 122.1 = 2.442 mg

We need to find the probability that the relative error is worse than ±5.0%. What does this mean? We know that the error εx is simply the difference between a measurement and the true value: εx = x − ξ. To determine the error that corresponds to a relative error of ±5.0%, you should


ask yourself what is meant by 'relative' error: relative to what? It must be relative to the actual value ξ, which is also the mean of the probability distribution, µx, so that the corresponding error εx is given by

εx = ±0.05 · 122.1 = ±6.1 mg

So when the question asks "what is the probability that the relative error is worse than ±5.0%?", what is really wanted is the probability that the measurement error will be worse than ±6.1 mg,

$$P(\varepsilon_x < -6.1) + P(\varepsilon_x > +6.1) = P(x < 122.1 - 6.1) + P(x > 122.1 + 6.1) = P(x < 116.0) + P(x > 128.2)$$

Whew! We’ve finally ‘translated’ the question. We are looking for the sum of the area in the two tails P(x < 116.0 mg) and P(x > 128.2 mg). Now the problem is much easier to solve. If you are familiar with the material in section 1.3 (hint, hint!), you should be able to understand the following manipulations.

(sketch: normal distribution centered at 122.1 mg, with shaded tails below 116.0 mg and above 128.2 mg representing P(x < 116.0) + P(x > 128.2))

$$P(x < 116.0) + P(x > 128.2) = P\left(z < \frac{116.0 - 122.1}{2.442}\right) + P\left(z > \frac{128.2 - 122.1}{2.442}\right) = P(z < -2.5) + P(z > +2.5)$$

$$= 2 \cdot P(z > +2.5) = 0.0124$$

So, finally, we calculate a 1.24% probability that the relative error is worse than ±5.0%.
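A similar numerical check of example 3.2, under the same assumptions as before:

```python
from scipy.stats import norm

xi = 122.1                 # true mass, mg
sigma = 0.02 * xi          # RSD of 2.0% -> sigma = 2.442 mg

# Two-tailed probability that the relative error exceeds +/-5.0%
lo, hi = 0.95 * xi, 1.05 * xi
p = norm.cdf(lo, loc=xi, scale=sigma) + norm.sf(hi, loc=xi, scale=sigma)
print(p)   # ~0.012
```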

3.1.4 Propagation of Random Measurement Error

Many experiments involve measurements; however, sometimes it is not the measured value itself that is needed, but some value calculated from the measurement. For example, we might measure the diameter d of some coins when we are actually interested in the circumference c = πd of the coins. The desired property may even require calculations involving two (or more) measurements. If we are interested in the density of an object, we could measure its mass m and volume V to calculate the density ρ = m/V.

Measurements that contain noise are random variables that must be described using probability distributions, sample statistics, etc., because of the uncertainty inherent in a random variable. Moreover, any value calculated using one or more random variables is also a random variable. For example, if a measured diameter d is a random variable, it has some 'uncertainty' associated with it; that uncertainty is the random measurement error, as quantified by the standard deviation σd. When calculating a circumference c = πd, that uncertainty necessarily transfers to the calculated value — it can't just disappear! In other words, the calculated circumference c has its own standard deviation σc due to the random measurement error in d. We say that the random error has propagated to the calculated value. We will now describe how to determine the magnitude of the propagated error.

Let y be a value calculated from one or more random variables:

y = f(a, b, c, . . . )

‘Propagation of random measurement error’ is often shortened to ‘propagation of error.’

Propagation of error refers to the determination of the uncertainty in a value calculated from one or more random variables.


where a, b, and c represent random variables. Since y is a function of one or more random variables, y itself is a random variable whose variance is given by

$$\sigma_y^2 = \left(\frac{\partial y}{\partial a}\right)^2 \sigma_a^2 + \left(\frac{\partial y}{\partial b}\right)^2 \sigma_b^2 + \left(\frac{\partial y}{\partial c}\right)^2 \sigma_c^2 + \cdots \qquad (3.5)$$

The previous expression depends on the following two assumptions:

1. The errors in the random variables are independent.
2. The calculated value y is a linear function of the variables.

Both of these assumptions are violated routinely. If the first assumption is violated, knowledge of the measurement error covariance can be used to determine σy. Violation of the second assumption is not too serious if the relative standard deviations of the measured variables are small (less than 10% is pretty safe).

Equation 3.5 is the general expression for propagation of error. However, this expression can be simplified for some common situations. These situations will now be described, and the use of these equations in error propagation calculations will be demonstrated using several examples. Each of the following equations was derived from eqn. 3.5.

Case 1: Addition or Subtraction of Random Variables

When two (or more) random variables are either added or subtracted,

y = a + b   or   y = a − b

the variances of the variables are added to determine the variance in the calculated value.

$$\sigma_y^2 = \sigma_a^2 + \sigma_b^2 \qquad (3.6a)$$

$$\sigma_y = \sqrt{\sigma_a^2 + \sigma_b^2} \qquad (3.6b)$$

Remember this rule: variances are additive when two variables are added or subtracted (as long as the errors are independent). It is a common mistake to assume that the standard deviations are additive, but they are not: σy ≠ σa + σb.

Case 2: Multiplication or Division of Random Variables

This case is the one that applies to calculating the density of an object from mass and volume measurements. When two random variables are multiplied or divided,

y = a · b   or   y = a/b

then either of the following equations may be used to calculate σy:

$$\mathrm{RSD}_y^2 = \mathrm{RSD}_a^2 + \mathrm{RSD}_b^2 \qquad (3.7a)$$

$$\sigma_y = y \cdot \sqrt{\left(\frac{\sigma_a}{a}\right)^2 + \left(\frac{\sigma_b}{b}\right)^2} \qquad (3.7b)$$


The first expression is simpler to remember and calculate. In fact, as we will see, it is simpler to do many error propagation calculations using relative standard deviation (RSD) values instead of standard deviations (σ).

Case 3: Multiplication or Division of a Random Variable by a Constant

This case is the one that applies to calculating a circumference from a measured diameter, using c = πd. In this expression we are not multiplying two random variables together, because π is not random; this case is therefore different from the previous one. When a random variable is multiplied or divided by a constant value, k,

y = k · a

the standard deviation of the calculated value y is simply the standard deviation, σa, of the random variable multiplied by the magnitude of the same constant value.

$$\sigma_y = |k| \cdot \sigma_a \qquad (3.8a)$$

$$\mathrm{RSD}_y = \mathrm{RSD}_a \qquad (3.8b)$$

These equations also apply if a random variable is divided by a constant. For example,

y = a/j = k · a

where k = 1/j. Applying eqn. 3.8 gives σy = σa/j.

Case 4: Raising a Random Variable to the kth Power

Let a be a random variable and k be a constant; if we calculate y such that

y = a^k

then the standard deviation σy can be calculated from

$$\mathrm{RSD}_y = |k| \cdot \mathrm{RSD}_a \qquad (3.9a)$$

$$\sigma_y = \frac{|k| \cdot y \cdot \sigma_a}{a} \qquad (3.9b)$$

It is worth noting that taking the inverse or the square root of a random variable are both covered by this case:

$$y = \frac{1}{a} = a^{-1} \quad\Rightarrow\quad \mathrm{RSD}_y = \mathrm{RSD}_a$$

$$y = \sqrt{a} = a^{1/2} \quad\Rightarrow\quad \mathrm{RSD}_y = \tfrac{1}{2}\,\mathrm{RSD}_a$$

Case 5: Logarithm/Antilogarithm of a Random Variable

If we calculate the logarithm of a random variable (y = log a), then

$$\sigma_y = \frac{\mathrm{RSD}_a}{\ln(10)} = 0.434 \cdot \mathrm{RSD}_a \qquad (3.10)$$

while if we calculate the antilogarithm of a random variable (y = 10^a),

$$\sigma_a = \frac{\mathrm{RSD}_y}{\ln(10)}, \qquad \mathrm{RSD}_y = 2.303 \cdot \sigma_a \qquad (3.11)$$


Table 3.1: Useful expressions governing random error propagation in calculations. The simplest expression may be for σy directly, or for RSDy.

Case                          Simplest expression
y = a + b  or  y = a − b      σy² = σa² + σb²
y = a × b  or  y = a ÷ b      RSDy² = RSDa² + RSDb²
y = k·a  (k is a constant)    σy = |k|·σa   and   RSDy = RSDa
y = a^k  (k is a constant)    RSDy = |k|·RSDa
y = log a                     σy = 0.434 · RSDa
y = 10^a                      RSDy = 2.303 · σa
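The rules in this table are easy to mechanize. The sketch below (the propagate function and its use of numerical differentiation are my own illustrative choices, not from the text) applies the general expression, eqn. 3.5, to any smooth function of independent random variables; the case rules in table 3.1 agree with it whenever they apply.

```python
import numpy as np

def propagate(f, values, sigmas, h=1e-6):
    """Standard deviation of y = f(a, b, ...) for independent inputs,
    via eqn. 3.5 with central-difference partial derivatives."""
    values = np.asarray(values, dtype=float)
    var_y = 0.0
    for i, sigma in enumerate(sigmas):
        step = h * max(1.0, abs(values[i]))
        hi = values.copy(); hi[i] += step
        lo = values.copy(); lo[i] -= step
        dy_dxi = (f(*hi) - f(*lo)) / (2 * step)   # partial derivative dy/dx_i
        var_y += (dy_dxi * sigma) ** 2
    return float(np.sqrt(var_y))

# Case 1 check (y = a + b): variances add
print(propagate(lambda a, b: a + b, [10.0, 20.0], [0.3, 0.4]))  # 0.5

# Case 5 check (y = log10 a): sigma_y = 0.434 * RSD_a
print(propagate(lambda a: np.log10(a), [50.0], [2.5]))          # ~0.0217
```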

Summary and Applications Table 3.1 summarizes the most useful expressions for propagation of error calculations. All of these expressions can be derived from eqn. 3.5. In some cases, the expression for σy is easier to remember or use, while in others the calculation of RSDy is simpler. Being adept at error propagation calculations takes time and practice. First you must master the mechanics: which equation should be used, and when? Then you must learn to recognize when an error propagation calculation is appropriate. The first step is really not too difficult to learn; the examples presented in this section will get you started. The second step can be tricky. Real life situations rarely come with a signpost that shouts “error propagation needed here!” Just keep in mind that whenever you perform a mathematical operation on a measured value, you may need to do a random error propagation calculation. With that in mind, let’s do a few examples. The first few will be fairly simple, and are designed to give you practice in identifying which “case” applies from table 3.1.

Example 3.3 The density of a stainless steel ball bearing is determined in the following manner. Its mass is measured on an analytical balance and its volume is determined by the displacement of a liquid in a graduated cylinder. The following measurements were obtained (along with the standard deviation of these measurements):

m = 2.103 g,  σm = 0.01 g
V = 0.34 mL,  σV = 0.05 mL

Calculate the density and its standard deviation.

First we calculate the density: ρ = 2.103/0.34 = 6.185 g/mL. In this calculation, we divide one measured value by another — since they are both random variables, this is Case 2. In order to determine the standard deviation of this value, we must use eqn. 3.7.


This problem will be easier if we calculate the relative standard deviations of the measurements first.

RSDm = σm/m = 0.01/2.103 = 0.0047551
RSDV = σV/V = 0.05/0.34 = 0.14706

Now we can determine the RSD of the calculated density, and then calculate the standard deviation of the density:

$$\mathrm{RSD}_\rho = \sqrt{\mathrm{RSD}_m^2 + \mathrm{RSD}_V^2} = \sqrt{0.0047551^2 + 0.14706^2} = 0.14714$$

Note that RSD²m ≪ RSD²V.

$$\sigma_\rho = \rho \cdot \mathrm{RSD}_\rho = 6.185 \cdot 0.14714 = 0.91\ \mathrm{g/mL}$$

So the density of the ball bearing is 6.19 g/mL with a standard deviation of 0.91 g/mL. Notice in the last calculation that RSDρ ≈ RSDV. This suggests that if we wish to decrease the uncertainty in the calculated density we should improve the precision of the volume measurement. Propagation of error calculations can be useful in pinpointing the major contributor(s) of uncertainty in a value calculated from several measurements.

Propagation of error reveals which measurement contributes the most to the overall uncertainty.
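A quick check of example 3.3's arithmetic (a sketch, assuming numpy):

```python
import numpy as np

m, sigma_m = 2.103, 0.01   # g
V, sigma_V = 0.34, 0.05    # mL

rho = m / V                                    # 6.185 g/mL
rsd_rho = np.hypot(sigma_m / m, sigma_V / V)   # eqn. 3.7a
print(rho, rsd_rho, rho * rsd_rho)             # 6.185, ~0.147, ~0.91 g/mL
```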

Example 3.4 In a titration, the initial reading on the burette is 28.51 mL and the final reading is 35.67 mL, both with a standard deviation of 0.02 mL. What is the error on the volume of titrant used?

The volume of titrant dispensed from the burette, Vt, is calculated by subtracting the initial volume reading, Vi, from the final reading, Vf.

Vt = Vf − Vi = 35.67 − 28.51 = 7.16 mL

Since both volume measurements contain random error, there will be error in the calculated volume; we must use error propagation calculations to determine the magnitude of this error. This is an example of case 1. The standard deviation in the volume of dispensed titrant is calculated from eqn. 3.6.

$$\sigma(V_t) = \sqrt{\sigma^2(V_f) + \sigma^2(V_i)} = \sqrt{0.02^2 + 0.02^2} = \sqrt{2} \cdot 0.02 = 0.028\ \mathrm{mL}$$

So the standard deviation of the dispensed volume of titrant is 0.028 mL. It is interesting in this case to compare the relative standard deviations of the measurements with that of the calculated value.

RSD(Vf) = 0.02/35.67 = 0.056%
RSD(Vi) = 0.02/28.51 = 0.070%
RSD(Vt) = 0.028/7.16 = 0.39%

Case 1: addition or subtraction of random variables
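Example 3.4 restated as a short script (a sketch; the numbers are from the example):

```python
import math

Vf, Vi, sigma = 35.67, 28.51, 0.02   # mL

Vt = Vf - Vi                                # 7.16 mL of titrant
sigma_Vt = math.sqrt(sigma**2 + sigma**2)   # eqn. 3.6b: ~0.028 mL

# Relative standard deviations: subtraction inflates the RSD
for label, value, s in [("Vf", Vf, sigma), ("Vi", Vi, sigma), ("Vt", Vt, sigma_Vt)]:
    print(label, f"{100 * s / value:.3f}%")  # 0.056%, 0.070%, 0.395%
```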


Notice that the RSD of the calculated value is quite a bit larger than those of the initial measurements. This situation sometimes happens when one random variable is subtracted from another. In particular, it is always desirable to avoid a situation when the difference between two large measurements is calculated. For example, imagine that you want to determine the mass of a feather, and you have the following options:

• Place the feather directly on the scale and measure it.
• Measure the mass of an elephant, then place the feather on top of the elephant and mass them both. Calculate the mass of the feather by difference.

Putting aside the problem of weighing an elephant, which of these methods would you prefer? Propagation of error predicts that the precision of the feather’s mass obtained by the second method would be quite poor because you are taking the different between two large measured values. This is a basic principle in measurement science. The next example is slightly more complicated in that a value is calculated from a measurement in two steps. The key to applying the simplifying equations in table 3.1 is that the error should be propagated for each step of the calculation.

Example 3.5 The solubility of calcium carbonate in pure water is measured to be 6.1 × 10⁻⁸ mol/L at 15 °C. This measurement is performed with 7.2% RSD. Calculate the standard deviation of the solubility product, Ksp, that would be calculated from this measurement.

Calcium carbonate dissolves according to

CaCO₃ ⇌ Ca²⁺ + CO₃²⁻

The solubility product Ksp is calculated from the molar solubility s as follows:

$$K_{sp} = \sqrt{2s} = \sqrt{2(6.1 \times 10^{-8})} = 3.49 \times 10^{-4}$$

This calculation actually proceeds in two steps:

Case 3: multiplication of a random variable by a constant value.

Step 1: First multiply the molar solubility by two: y = 2 · s. This is case 3 — multiplication of a random variable by a constant value — and so the relative standard deviation remains unchanged:

RSDy = RSDs = 0.072

Case 4: raising a random variable to a constant power.

Step 2: Now we must take the square root: $K_{sp} = \sqrt{y}$. This is case 4, since $\sqrt{y} = y^{1/2}$.

$$\mathrm{RSD}_{K_{sp}} = \tfrac{1}{2}\,\mathrm{RSD}_y = 0.5(0.072) = 0.036$$

$$\sigma(K_{sp}) = K_{sp} \cdot \mathrm{RSD}_{K_{sp}} = (3.49 \times 10^{-4})(0.036) = 1.256 \times 10^{-5}$$


So from the solubility measurement we find that, at this temperature, Ksp = 3.49 × 10⁻⁴ and σ(Ksp) = 0.13 × 10⁻⁴. A short numerical check of this two-step propagation is sketched below.
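The promised check of example 3.5 (a sketch following the example's own formula):

```python
import math

s, rsd_s = 6.1e-8, 0.072     # molar solubility and its RSD

Ksp = math.sqrt(2 * s)       # ~3.49e-4, as in the example
rsd_Ksp = 0.5 * rsd_s        # case 3 leaves the RSD unchanged; case 4 halves it
print(Ksp, Ksp * rsd_Ksp)    # ~3.49e-4, ~1.26e-5
```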

In our final example we have a four-step calculation using two measured values.

Example 3.6 A cylinder has a measured diameter of 0.25 cm and a length of 250.1 cm. The standard deviation of the first measurement, made with a micrometer, is 50 microns (0.0050 cm); the standard deviation of the second measurement is 0.3 cm. Calculate the volume from these measurements, and determine the error in this value.

The volume of the cylinder is calculated using the following equation.

$$V = \pi r^2 l$$

Again, we do the random error propagation in steps.

Step 1: Calculate the radius of the circular base: r = d/2. This is an example of case 3:

Case 3: multiplication of a random variable by a constant.

$$r = \frac{0.25}{2} = 0.125\ \mathrm{cm}$$

$$\mathrm{RSD}_r = \mathrm{RSD}_d = \frac{0.0050}{0.25} = 0.02$$

Step 2: Square the radius: y = r². This is an example of case 4:

Case 4: raising a random variable to a constant power.

$$y = 0.125^2 = 0.0156\ \mathrm{cm}^2$$

$$\mathrm{RSD}_y = 2 \cdot \mathrm{RSD}_r = 2(0.02) = 0.04$$

Step 3: Calculate the area of the circular base: A = π · y. This is case 3 again:

Case 3: multiplication of a random variable by a constant.

$$A = \pi \cdot 0.0156 = 0.0491\ \mathrm{cm}^2$$

$$\mathrm{RSD}_A = \mathrm{RSD}_y = 0.04$$

Step 4: Finally, we can calculate the volume: V = A · l. This is case 2, since both A and l are random variables.

$$V = 0.0491 \cdot 250.1 = 12.277\ \mathrm{cm}^3$$

$$\mathrm{RSD}_V = \sqrt{\mathrm{RSD}_A^2 + \mathrm{RSD}_l^2} = \sqrt{0.04^2 + \left(\frac{0.3}{250.1}\right)^2} = \sqrt{0.04^2 + 0.0012^2} = 0.04001$$

$$\sigma_V = V \cdot \mathrm{RSD}_V = 12.277(0.04001) = 0.4913\ \mathrm{cm}^3$$

Case 2: multiplying two random variables together.

66

3. Measurements as Random Variables

So the volume of the cylinder is 12.28 cm³ with a standard deviation of 0.49 cm³. The major contributor to the uncertainty in the calculated volume was the uncertainty in the calculated circular base area, which in turn resulted from random error in the diameter measurement. Thus, to decrease the standard deviation of the calculated volume, the precision of the diameter measurement must be improved. The error propagation calculation reveals that improving the precision of the length measurement would have very little effect on the uncertainty of the calculated volume.

If you wish, you may use eqn. 3.5 directly for this calculation. Since

$$V = \pi r^2 l = \frac{\pi}{4} d^2 l$$

and the random variables are the measured values for d and l, applying eqn. 3.5 yields

$$\sigma_V^2 = \left(\frac{\pi}{2} d l\right)^2 \sigma_d^2 + \left(\frac{\pi}{4} d^2\right)^2 \sigma_l^2 = 9645 \cdot 0.005^2 + 0.002409 \cdot 0.3^2$$

$$= 0.2411 + 0.0002168 = 0.2413, \qquad \sigma_V = 0.4912\ \mathrm{cm}^3$$

Again it is apparent that the value of σd largely determines the uncertainty in the calculated value. Those of you who feel comfortable with partial derivatives may prefer to use eqn. 3.5 directly in this manner. In most cases, it is just as fast (or even faster) to do the propagation calculations in a stepwise manner, using the simplified expressions in table 3.1, and there is generally less chance of error.
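For those comfortable with computer algebra, eqn. 3.5 can also be applied symbolically. A sketch using the sympy library (one choice among many; any CAS would do):

```python
import sympy as sp

d, l = sp.symbols("d l", positive=True)
V = sp.pi / 4 * d**2 * l                      # cylinder volume in terms of diameter

vals = {d: 0.25, l: 250.1}                    # cm
sigmas = {d: 0.0050, l: 0.3}                  # cm

# eqn. 3.5: sum of (partial derivative * sigma)^2 over the measured variables
var_V = sum((sp.diff(V, x).subs(vals) * s)**2 for x, s in sigmas.items())
print(sp.sqrt(var_V).evalf())                 # ~0.491 cm^3
```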

3.2 Using Measurements as Estimates

If the purpose of obtaining measurements is to provide an estimate of an unknown property of some object or system, we must deal with the fact that any estimate will contain a certain amount of measurement error. Even if all measurement bias has been eliminated, the presence of random error will introduce some uncertainty into any single estimate of the property. A single measurement value by itself is of limited use in estimating a system property; we also need to indicate somehow the precision, or uncertainty, of this estimate. A common way to indicate precision is to obtain repeated measurements, perhaps under a variety of typical experimental conditions, and then report both the measurement mean (as the estimate of the system property) and the standard error of the mean (to indicate the estimate's uncertainty). I will now describe how to use a confidence interval to estimate a property's true value. The confidence interval is a very useful estimation method because it accounts for the effects of random error on the estimate.

3.2.1 Point Estimates and Interval Estimates

Let's consider an example. The data were originally given in example 2.2 on page 44.


Example 3.7 The level of cadmium (which is about ten times as toxic as lead) in soil is determined by atomic absorption. In five soil samples, the following levels were measured:

8.6, 3.6, 6.1, 5.5, 8.4 ppb

What is the concentration of cadmium in the soil?

The process for the analysis of cadmium in soil would be something like the following:

1. Collect the samples to be analyzed.
2. Store them in suitable containers and transport them to the laboratory.
3. Dissolve or extract the soil samples.
4. Use an atomic absorption instrument to measure the concentration of cadmium in the solution.

Performing this entire process repeatedly does not yield the same estimate of cadmium concentration because (i) the cadmium in the soil is not distributed evenly, so that different soil samples will contain different levels of cadmium; and (ii) even assuming bias has been eliminated, random error will be present in steps 2–4 of the measurement procedure. In order to account for these effects on our estimate of the "true" concentration of cadmium in soil, it is necessary to perform measurements repeatedly and then use sample statistics to provide our estimate. The important sample statistics for these data are

x̄ = 6.44 ppb
sx = 2.10 ppb
s(x̄) = sx/√5 = 0.94 ppb

We use the measurement mean x̄ to estimate the true value ξ of the property being measured. Of course, the measurement mean actually estimates the mean µx of the population of measurements, but if we assume no bias in the measurement process (i.e., µx = ξ) then x̄ provides an unbiased estimate of both µx and ξ.

Sample statistics are sometimes called point estimates of population parameters, because they provide a single number representing a "guess" at the true value of the parameter. Point estimates are simple to interpret, but they have one major disadvantage: they give no indication of the uncertainty of the estimate. To see why it is important to indicate uncertainty, imagine that the maximum safe level of cadmium in soil is 8.0 ppb. Can you state with certainty that the actual concentration of cadmium is below this level? Our best guess (so far!) of the concentration of cadmium in the soil is 6.44 ppb, but so what? We need an indication of the uncertainty of this estimate. The standard deviation of the estimate certainly says something about its uncertainty. In this case, the standard error of the measurement mean is 0.94 ppb. So what does that tell us? Is the soil cadmium concentration above 8.0 ppb?

x̄ → µx; µx = ξ (no measurement bias). Hence, x̄ → ξ.

x̄ is a point estimate of µx.
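The sample statistics quoted in example 3.7 are easy to reproduce (a sketch, assuming numpy):

```python
import numpy as np

cd = np.array([8.6, 3.6, 6.1, 5.5, 8.4])   # ppb

xbar = cd.mean()              # 6.44 ppb
s = cd.std(ddof=1)            # sample standard deviation, ~2.10 ppb
sem = s / np.sqrt(len(cd))    # standard error of the mean, ~0.94 ppb
print(xbar, s, sem)
```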

Figure 3.3: Comparison of a point estimate and an interval estimate of a population parameter.

Instead of a single point estimate, it is better by far to provide an interval estimate of the true value. An interval estimate is a range of numbers within which the population parameter is likely to reside. The location and width of this range will depend on the point estimate, the standard error, and the number of measurements, so that providing an interval estimate is equivalent to providing all these parameters. Figure 3.3 contrasts the philosophy of a point estimate and an interval estimate.

An interval estimate always has a confidence level (CL) associated with it. The confidence level is the probability that the interval contains the parameter being estimated. For example, for our cadmium measurements, we might report an interval estimate as follows:

[Cd] = 6.4 ± 1.5 ppb (90% CL)

The confidence level in this estimate is indicated: there is a 90% probability that the range contains the true cadmium concentration, assuming no measurement bias. The interval estimate given here is called a confidence interval (CI). It is important to realize that the interval depends on the level of confidence that is chosen; a higher confidence level (for example, 95%) will give a wider confidence interval.

Confidence intervals are much superior to point estimates as an indication of the values of population parameters. A glance at this confidence interval shows that we can be at least "90% confident" that the level of cadmium in the soil is not 8 ppb or higher; confidence intervals do not require statistical expertise to interpret (although some knowledge of statistics is needed to appreciate them fully). The object of the rest of this chapter is to learn how to construct and report confidence intervals, and to understand why they work.

3.2.2 Confidence Intervals

Introduction

In this section, we will discover how to construct confidence intervals, and explain why they work. During these explanations, we will sometimes refer to a particular measurement process, described below.


Imagine that we are measuring the concentration of an analyte. The true (but unknown) concentration of the analyte is 50 ppm, and the true standard deviation is 5 ppm. Four measurements of the concentration are obtained. The following table shows the values of the measurements, as well as other important information.

measurements, ppm:      48.1, 53.3, 59.2, 51.9
sample statistics, ppm: x̄ = 53.12, sx = 4.61, s(x̄) = 2.30
true values, ppm:       µx = 50, σx = 5, σ(x̄) = 2.50

Example measurement process to illustrate interval estimates. All measurements are obtained from a population that is normally distributed with mean µx = 50 ppm and standard deviation σx = 5 ppm.

The entire process — obtaining a group of four measurements — is repeated ten more times to generate ten more sample means, each one the mean of four measurements. These data are shown later in fig. 3.5. The difference between the sample standard deviation, sx, and the standard error of the mean, s(x̄), should be emphasized again: sx is an estimate of σx, which is the standard deviation of the individual measurements, x, while the standard error is an estimate of σ(x̄), which in this case is the standard deviation of the mean of 4 measurements.

Prediction Intervals

Before tackling confidence intervals, it is instructive to look at a closely related interval, the prediction interval. The purpose of the prediction interval is to indicate the probable range of future values of a random variable. Consider the following problem.

Example 3.8 For a normally distributed variable with a mean of 50 ppm and a standard deviation of 5 ppm, find a range of values that will contain the sample mean of four measurements with a probability of 80%.

Let's restate the problem. We want to determine x1 and x2 such that

$$P(x_1 < \bar{x}_n < x_2) = 1 - \alpha$$

where n = 4 and α = 0.2, which is the probability that the sample mean falls outside the values of x1 and x2. Figure 3.4 shows the situation. Note that we are interested in the probability distribution of the sample mean x̄n of 4 measurements, not the distribution of the individual measurements, x (which is also shown in the figure as a reference). We use the z-tables to solve this problem. We must find the z-score zα/2 such that

$$P(x_1 < \bar{x}_n < x_2) = P\left(\frac{x_1 - \mu_x}{\sigma(\bar{x}_n)} < \frac{\bar{x} - \mu_x}{\sigma(\bar{x}_n)} < \frac{x_2 - \mu_x}{\sigma(\bar{x}_n)}\right) = P(-z_{\alpha/2} < z < z_{\alpha/2}) = 1 - \alpha = 0.8$$

In other words, the value zα/2 is the z-score that gives a right-tail area of α/2:

$$P(z > z_{\alpha/2}) = \frac{\alpha}{2}$$

s(x̄) = sx/√n;  sx → σx, s(x̄) → σ(x̄)


Figure 3.4: Figure for example 3.8. We must find the values of x1 and x2 such that P(x1 < x̄n < x2) = 1 − α = 0.8. The sampling distribution for the mean of four measurements must be used. The dotted line in the figure shows the distribution of the individual measurements.

For this example, then, we seek z0.1, which is the z-score that gives a right-tail area of 0.1. From the z-tables, the desired value is z0.1 = 1.28. This means that x1 is 1.28 standard deviations below the mean and x2 is 1.28 standard deviations above the mean.

$$-z_{0.1} = \frac{x_1 - \mu_x}{\sigma(\bar{x}_n)}$$

$$x_1 = \mu_x - z_{0.1} \cdot \sigma(\bar{x}_n) = 50 - 1.28\left(\frac{5}{\sqrt{4}}\right) = 46.8\ \mathrm{ppm}$$

Likewise,

$$x_2 = \mu_x + z_{0.1} \cdot \sigma(\bar{x}_n) = 50 + 1.28\left(\frac{5}{\sqrt{4}}\right) = 53.2\ \mathrm{ppm}$$

A prediction interval is a range of values that predicts future observations with stated probability.

Important: whenever you use the z-tables, you are making an assumption of normality. It turns out that you also make this assumption when you use t-tables or F -tables, both of which will be introduced later in this text.


Thus our answer: the interval 46.8–53.2 ppm will contain the mean of 4 measurements, x̄n, with 80% probability.

Consider the meaning of the interval found in the last example: there is an 80% probability that the mean of any four measurements will fall within this interval. This range of numbers is called a prediction interval, because it can be used to predict where future measurement means will fall. We calculated our prediction interval by using the z-tables. In other words, we assumed that the mean of four measurements, x̄n, follows a normal distribution. The central limit theorem provides justification for this assumption: it states that the probability distribution of the sample mean x̄n tends to become more normal as the sample size n increases. If the individual measurements x follow a normal distribution — which is often the situation — then x̄n follows a normal distribution for any sample size n.


Thus, we will always assume that the sample means are normally distributed. This assumption is reasonably valid, unless the sample size is quite small and the distribution of the individual measurements is quite different from a normal distribution.

A general expression for the prediction interval (PI) with confidence 1 − α of a future sample mean x̄ is given by

$$\mathrm{PI\ for\ } \bar{x} = \mu_x \pm z_{\alpha/2} \cdot \sigma(\bar{x}) \qquad (3.12)$$

where 1 − α is the probability that the sample mean x̄ lies within the interval. In example 3.8 we had α = 0.20 and zα/2 = z0.10 = 1.28. Thus, the prediction interval for the sample mean of four measurements can be written as

50.0 ± 3.2 ppm (80% CL)

meaning that the sample mean would fall within 3.2 ppm of the true measurement mean, µx = 50.0 ppm. Now, the central limit theorem tells us how the standard error of the sample mean, σ(x̄n), is related to the standard deviation of the individual measurements, σx (eqn. 2.9). Using this information, we can also write the prediction interval (eqn. 3.12) as

$$\mathrm{PI\ for\ } \bar{x} = \mu_x \pm z_{\alpha/2} \cdot \frac{\sigma_x}{\sqrt{n}}$$

where n is the number of measurements in the sample mean. It is worth emphasizing the following points about prediction intervals:

• the prediction interval is for the mean of n measurements;
• the width of the prediction interval depends on both the sample size n and the value chosen for α (where the confidence level is 1 − α): increasing the sample size narrows the interval, while increasing the confidence level widens it;
• the prediction interval calculated by eqn. 3.12 assumes that the sample means are normally distributed.

We can illustrate prediction intervals using the measurement process described earlier on page 69. Using a random number generator, 40 measurements were generated with µx = 50 and σx = 5. These measurements were then grouped into 10 samples of four measurements, and the mean x̄ of each sample was calculated, giving a total of ten sample means. Figure 3.5 displays the results, along with the 80% prediction interval (as dashed lines). On average — in other words, if this experiment were to be repeated many times — eight of the ten sample means should fall within the prediction interval.
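The simulation behind fig. 3.5 takes only a few lines (a sketch, assuming numpy and scipy; the random seed is an arbitrary choice):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
mu, sigma, n = 50.0, 5.0, 4

# 80% prediction interval for the mean of n measurements (eqn. 3.12)
z = norm.ppf(0.90)                  # z_{alpha/2} with alpha = 0.20
half = z * sigma / np.sqrt(n)       # ~3.2 ppm
print(mu - half, mu + half)         # ~46.8, ~53.2 ppm

# Simulate ten sample means of four measurements, as in fig. 3.5
means = rng.normal(mu, sigma, size=(10, n)).mean(axis=1)
print(np.mean((means > mu - half) & (means < mu + half)))  # ~0.8 on average
```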

Figure 3.5: The dotted lines show the 80% prediction interval for the sample mean of four measurements, x̄n. Ten such sample means were generated in a computer simulation (see text for details). We would predict that, on average, eight of the ten groups would give sample means within the prediction interval.

Interval Estimate of the Population Mean, µx

The step from a prediction interval (PI) to a confidence interval (CI) is a small one, but an important one. As we will see, the equations used to construct confidence intervals are almost identical to those used for prediction intervals, but the purpose is quite different:

• Prediction intervals provide a range of values that contain future values of a variable, with confidence level 1 − α.
• Confidence intervals provide a range of values that contain the true value of a population parameter, with confidence level 1 − α.

With a prediction interval, the goal is to predict future measurements, while with a confidence interval, the idea is to predict a population parameter. The most common use of the confidence interval is to predict the population mean, µx. For now we will assume that the value of the standard deviation of the population, σx, is available to us in constructing this interval.

CI ≡ confidence interval

Formal Derivation of CI for µx when σx is known. Mathematical proofs are definitely not the focus of this text. Nevertheless, your understanding of the nature of a confidence interval is enhanced by understanding this proof. We begin by defining a variable, z, such that:

$$z = \frac{\bar{x} - \mu_x}{\sigma(\bar{x})} \qquad (3.13)$$

The variable z is the standardized sample mean, and it is a random variable because x̄ is random. In order to construct an interval estimate of µx, we must know the probability distribution function of the standardized sample mean, z.

Key Assumption: the sample mean, x̄, is normally distributed, so that the standardized sample mean, z, follows a standard normal distribution. The central limit theorem provides some justification for this assumption.


To construct a confidence interval, we define a value zα/2 such that

$$P(z > z_{\alpha/2}) = \frac{\alpha}{2}$$

which means

$$P(-z_{\alpha/2} < z < +z_{\alpha/2}) = 1 - \alpha$$

The key to constructing a confidence interval is the ability to determine the value zα/2 that satisfies the above expressions. Since we are assuming that the sample mean x̄ is normally distributed, we may use the z-tables to determine the value of zα/2 (as we did in example 3.8).

(margin sketch: the standard normal distribution of the standardized mean, with central area 1 − α between −zα/2 and +zα/2)

Why does knowledge of zα/2 allow us to form a confidence interval to estimate µx? The following derivation provides the answer.

$$P\left(-z_{\alpha/2} < \frac{\bar{x} - \mu_x}{\sigma(\bar{x})} < +z_{\alpha/2}\right) = 1 - \alpha$$

$$P(-z_{\alpha/2} \cdot \sigma(\bar{x}) < \bar{x} - \mu_x < z_{\alpha/2} \cdot \sigma(\bar{x})) = 1 - \alpha$$

$$P(+z_{\alpha/2} \cdot \sigma(\bar{x}) > -\bar{x} + \mu_x > -z_{\alpha/2} \cdot \sigma(\bar{x})) = 1 - \alpha$$

And finally

$$P(\bar{x} + z_{\alpha/2} \cdot \sigma(\bar{x}) > \mu_x > \bar{x} - z_{\alpha/2} \cdot \sigma(\bar{x})) = 1 - \alpha$$

Examine the last expression carefully: the population mean µx will be within the range x̄ ± zα/2 · σ(x̄) with probability 1 − α. That is exactly what we want in a confidence interval: a range of values that will contain the population parameter with some specified probability. Thus we have the expression for a confidence interval for the population mean, µx:

$$\mathrm{CI\ for\ } \mu_x = \bar{x} \pm z_{\alpha/2} \cdot \sigma(\bar{x}) \qquad (3.14)$$

x̄ is a point estimate of µx; x̄ ± zα/2 · σ(x̄) is an interval estimate of µx.

By the central limit theorem (eqn. 2.9) we may also write this as

$$\mathrm{CI\ for\ } \mu_x = \bar{x} \pm z_{\alpha/2} \frac{\sigma_x}{\sqrt{n}}$$

All that is needed, then, is to calculate x̄ and σ(x̄) for a series of measurements, to choose a value for α, and then use it to determine zα/2. Since 95% confidence intervals are most common, z0.025 = 1.96 is often used for confidence intervals:

$$95\%\ \mathrm{CI\ for\ } \mu_x = \bar{x} \pm 1.96 \frac{\sigma_x}{\sqrt{n}}$$

There is nothing really mysterious about why the confidence interval works — it is a direct consequence of the fact that the sampling distribution of the mean is (assumed to be) a normal probability distribution. If this assumption is incorrect, then the confidence interval is invalid.

The equation for the confidence interval (eqn. 3.14) is very similar to that of the prediction interval (eqn. 3.12). The reason is that both expressions stem from the fact that the sample mean is normally distributed. The prediction interval states that any sample mean x̄ will be within ±zα/2 · σ(x̄) of the population mean, µx.

PI: µx ± zα/2 · σ(x̄);  CI: x̄ ± zα/2 · σ(x̄)


The confidence interval states exactly the same thing; the difference is that where the prediction interval (as defined by eqn. 3.12) is concerned with predicting future values of x̄ from a known value of µx, the confidence interval is used to predict the value of µx from the known value of x̄.

We will illustrate confidence intervals by using the measurements described at the beginning of this section, on p. 69:

measurements: 48.1, 53.3, 59.2, and 51.9 ppm
population parameters: µx = 50 ppm, σx = 5 ppm

Recall that the 80% prediction interval for the sample mean of four measurements was earlier (example 3.8) determined to be 50.0 ± 3.2 ppm. If we know that σx = 5 ppm, then the 80% confidence interval is calculated from eqn. 3.14:

$$80\%\ \mathrm{CI\ for\ } \mu_x = \bar{x} \pm z_{0.10} \cdot \sigma(\bar{x}) = 53.12 \pm 1.28\left(\frac{5}{\sqrt{4}}\right) = 53.1 \pm 3.2\ \mathrm{ppm}$$

As you can see, the confidence interval does indeed contain the true value of µx = 50 ppm — although there was a 20% probability that µx would not be within the interval — and the width of the confidence interval was the same as the width of the prediction interval, as would be expected. Figure 3.6 compares the 80% prediction interval and the 80% confidence intervals from ten sample means generated by computer simulation (as described earlier on page 71). The widths of the prediction and confidence intervals are the same, so that if the sample mean is within the prediction interval, then the population mean will be within the confidence interval.

Example 3.9 The concentrations of 7 aliquots of a solution of sulfuric acid were measured by titration with sodium hydroxide and found to be 9.8, 10.2, 10.4, 9.8, 10.0, 10.2, and 9.6 mM. Assume that the measurements follow a normal probability distribution with a standard deviation of 0.3 mM. Calculate a confidence interval for the acid concentration, with a confidence level of 95%.

The sample mean, x̄, and standard error of the mean, σ(x̄), for the seven measurements can be calculated easily enough.

x̄ = 10.00 mM

$$\sigma(\bar{x}) = \frac{\sigma_x}{\sqrt{n}} = \frac{0.3}{\sqrt{7}} = 0.11\ \mathrm{mM}$$

For a 95% confidence interval (i.e., α = 0.05) we need the z-score z0.025 = 1.96. From eqn. 3.14 we can calculate the confidence interval.

95% CI for µx = 10.00 ± 1.96 · 0.11 = 10.00 ± 0.22 mM

We assume no bias in the measurements (γx = 0; ξ = µx) and so the confidence interval contains the true concentration of sulfuric acid:

[H₂SO₄] = 10.00 ± 0.22 mM (95% CL)
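Example 3.9 in code (a sketch; scipy's norm.ppf supplies the value of zα/2):

```python
import numpy as np
from scipy.stats import norm

x = np.array([9.8, 10.2, 10.4, 9.8, 10.0, 10.2, 9.6])   # mM
sigma_x = 0.3                       # known population sigma, mM

xbar = x.mean()                     # 10.00 mM
half = norm.ppf(0.975) * sigma_x / np.sqrt(len(x))   # z_{0.025} = 1.96
print(f"{xbar:.2f} +/- {half:.2f} mM (95% CL)")      # 10.00 +/- 0.22 mM
```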


Figure 3.6: This is the same data from fig. 3.5: ten sample means were generated by computer simulation (see text on p. 71 for details). The dotted lines show the 80% prediction interval generated by eqn. 3.12, while the error bars on each data point show the 80% confidence intervals calculated by eqn. 3.14.

The confidence interval is a powerful method of estimating the true value of a measured property. Due to the presence of random measurement error, we cannot state with certainty exactly what the concentration of sulfuric acid is, but we can declare, with stated certainty, that the true (unknown) concentration is within a specified range of concentrations. Notice that there are two significant figures in the second number of the interval, and the same number of decimal places in the point estimate (i.e., the sample mean). It is also perfectly acceptable to use a single significant figure for the second number: [H₂SO₄] = 10.0 ± 0.2 mM.

General Expression: CI for µx when σx is known. The key to the validity of eqn. 3.14 for the confidence interval is that x̄ is a normally distributed variable. We may generalize eqn. 3.14 for any normally-distributed variable, a:

$$\mathrm{CI\ for\ } \mu_a = a \pm z_{\alpha/2} \cdot \sigma_a \qquad (3.15)$$

Confidence intervals have the form a ± zα/2 · σa: the stated confidence level and the value of zα/2 must match.

Applying this general expression to the sample mean (i.e., a = x̄) gives eqn. 3.14. However, this equation can be used to construct an interval estimate for the mean µ of any normally distributed variable. For example, through linear regression we might obtain an estimate b1 for the slope of a line, β1. The point estimate b1 is a sample statistic (i.e., a random variable) and it is often justified to assume that it is normally distributed. Applying eqn. 3.15 allows us to construct an interval estimate of the true slope β1:

$$\mathrm{CI\ for\ } \beta_1 = b_1 \pm z_{\alpha/2} \cdot \sigma(b_1)$$


The sample statistic b1 is an estimate of the population parameter β1: b1 → β1. Linear regression is covered in the next chapter.


A common mistake is to divide the standard deviation by √n, resulting in the following interval estimate, which is incorrect:

$$\mathrm{CI\ for\ } \beta_1 = b_1 \pm z_{\alpha/2} \cdot \frac{\sigma(b_1)}{\sqrt{n}}$$

CAREFUL!

The reason we commonly divide by √n is because the central limit theorem states that the standard error of the sample mean is given by σ(x̄) = σx/√n. When σx is not known — the usual situation — eqn. 3.14 cannot be used.

Confidence Interval for µx when σx is not known. The previous method for constructing confidence intervals can only be used if the standard deviation σx of the entire population of measurements is known. That is usually not the case; instead, we must use the sample standard deviation sx to provide an estimate of σx. We need to find an expression for an interval estimate of µx using the sample statistics x̄ and sx. The steps in deriving such an expression are basically the same as when σx is known (p. 73), so it is helpful to be very comfortable with that material. To start, we define the variable t as

$$t = \frac{\bar{x} - \mu_x}{s(\bar{x})} \qquad (3.16)$$

The studentized value of any random variable x is given by (x − µx)/sx.

This variable is the studentized sample mean, sometimes called a t-statistic. The studentized mean is similar to the standardized mean (eqn. 3.13), except that σx is not known. The denominator is the sample standard error of the mean: s(x̄) = sx/√n. The ability to construct a confidence interval depends on the ability to determine the value tα/2 such that

$$P(t > t_{\alpha/2}) = \frac{\alpha}{2} = \int_{t_{\alpha/2}}^{+\infty} p(t)\, dt$$

or, in other words,

$$P(-t_{\alpha/2} < t < t_{\alpha/2}) = 1 - \alpha = \int_{-t_{\alpha/2}}^{+t_{\alpha/2}} p(t)\, dt$$

where p(t) is the probability distribution function of the t-statistic. This function must be known in order to determine tα/2. Since the studentized sample mean (eqn. 3.16) is very similar to the standardized sample mean (eqn. 3.13), we might hope that t follows a standard normal distribution, like z. However, even if the sample mean x̄ is normally distributed, t does not follow a normal distribution. That is because the standardized mean z is calculated using only one random variable (the sample mean, x̄) while the studentized mean t is calculated with two random variables (x̄ and s(x̄)). The principle of random error propagation predicts that the t-statistic is more variable than the z-statistic, since t is affected by the variability in both sample statistics.


Figure 3.7: Comparison of the Student's t-distribution (solid curves, for ν = 1, 3, and 10) and the standard normal distribution (dotted curve). The shape of the t-distribution depends on the degrees of freedom, ν.

It turns out that if the sample mean x̄ follows a normal distribution (as predicted by the central limit theorem), then the studentized sample mean t follows a probability distribution known as the Student's t-distribution, or simply the t-distribution.

$$p(t) = k_0 \left(1 + \frac{t^2}{\nu}\right)^{-(\nu+1)/2} \qquad (3.17)$$

where k0 is a normalization constant (with a value such that the integrated area under the entire curve is unity), and ν is the degrees of freedom of s(x) in eqn. 3.16 (almost certainly, ν = n − 1). Since t and z are so similar, it might be expected that the t-distribution is similar to the standard normal distribution. These distributions are compared in figure 3.7. Examine the figure carefully and notice the following features. z and t: 1. The t-distribution is not a single distribution but a family of related distributions, one for every different value of ν. 2. The t-distribution have ‘heavier’ tails than the z-distribution, because the t-statistic (which is calculated from two random variables) is more variable than the z-statistic (calculated from only one). However, as ν increases, the t-distribution resembles the z-distribution more closely. This behavior makes sense, because s(x) becomes less variable as ν increases. As the degrees of freedom increase (ν → ∞), the sample statistic becomes an increasingly good estimate of the population parameter (s(x) → σ (x)), which means that t becomes more and more like z (t → z). Now that we know the form of the distribution function, we can determine the value of tα/2 , which is needed to calculate the confidence interval. hard way to find tα/2 is by integrating the probability distribution:

(The t-distribution was first derived by W.S. Gosset in 1908. Gosset worked for an Irish brewery that did not allow publication of research, so he published his results under the pseudonym 'Student.')

Now that we know the form of the distribution function, we can determine the value of tα/2, which is needed to calculate the confidence interval. The hard way to find tα/2 is to integrate the probability distribution: ∫_{tα/2}^{+∞} p(t) dt = α/2. However, these integrals (right-tail areas of t-distributions) are usually given in standard statistical tables: the t-tables.

These tables are structured somewhat differently than the z-tables, which give right-tail areas for the standard normal distribution. The standard normal distribution is a single, unique function (i.e., a normal distribution with µ = 0 and σ = 1), and so a single table can be given to find the value of zα/2 for any value of α. By contrast, the t-distribution is not a single distribution but a family of related distributions, one for each distinct value of ν. For this reason, it is not possible to have a single t-table to find tα/2 for all possible values of α. Instead, t-tables give the values of tα/2 for specific values of α and ν. Inspect the t-table on page A–2 and compare it to the z-table on page A–1. You must be thoroughly familiar with the manner in which either table is used.

So now we can derive a confidence interval to estimate the population mean, µx, from the sample mean, x̄, and an estimate of its standard error, s(x̄). Key assumption: if the sample mean x̄ is normally distributed, then the studentized sample mean, t = (x̄ − µx)/s(x̄), follows Student's t-distribution. The central limit theorem helps justify this assumption. Once tα/2 has been determined, it is possible to calculate an interval estimate. An expression can be derived as follows.

P(−tα/2 < (x̄ − µx)/s(x̄) < +tα/2) = 1 − α
P(−tα/2 · s(x̄) < x̄ − µx < +tα/2 · s(x̄)) = 1 − α
P(+tα/2 · s(x̄) > −x̄ + µx > −tα/2 · s(x̄)) = 1 − α

So that, in the end,

P(x̄ + tα/2 · s(x̄) > µx > x̄ − tα/2 · s(x̄)) = 1 − α

This last expression shows that a (1 − α) confidence interval for µx, when σx is not known, is given by

CI for µx = x̄ ± tα/2 · s(x̄)    (3.18)
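In practice, statistical software can replace the t-table lookup. As an illustrative sketch, scipy.stats.t.ppf returns tα/2 as the (1 − α/2) quantile; the two calls below reproduce the table values used in the examples that follow.

```python
from scipy import stats

# t_{alpha/2} has a right-tail area of alpha/2, so it is the (1 - alpha/2) quantile
print(stats.t.ppf(1 - 0.20 / 2, df=3))   # t_0.10  for nu = 3 -> 1.6377...
print(stats.t.ppf(1 - 0.05 / 2, df=6))   # t_0.025 for nu = 6 -> 2.4469...
```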

We can generalize this expression: a confidence interval can be constructed to estimate the population mean, µa, of a normally distributed variable from a normally distributed point estimate a and the sample standard deviation sa of the point estimate:

CI for µa = a ± tα/2 · sa    (3.19)

It is not hard to see that eqn. 3.18 is a form of this equation with a = x̄. Let's use the measurements described on page 69 to show how to construct an interval estimate of µx from a series of measurements.

measurements: 48.1, 53.3, 59.2, and 51.9 ppm
sample statistics: x̄ = 53.12 ppm, sx = 4.61 ppm


An 80% confidence interval is calculated as follows.

80% CI for µx = x̄ ± t0.10 · s(x̄)
             = 53.12 ± 1.6377 · (4.61/√4)
             = 53.1 ± 3.8 ppm

Thus, our interval estimate is 53.1 ± 3.8 ppm (80% CL), which does indeed contain the true concentration of 50 ppm. To determine the value of t0.10 from the t-table, the degrees of freedom, ν, must be known. Since four measurements (n = 4) were used to calculate the standard deviation, sx, we know that ν = n − 1 = 3.

Figure 3.8 shows 80% confidence intervals calculated using eqn. 3.18. Note that the intervals have different widths, since the value of s(x̄) changes from sample to sample. In the figure, seven of the ten intervals contain µx. On average, if the experiment were continued indefinitely, eight out of every ten intervals would contain the true mean (solid line), assuming that the sample mean follows a normal probability distribution.
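A small script (illustrative only, not the text's procedure) reproduces this interval from the raw measurements:

```python
import numpy as np
from scipy import stats

x = np.array([48.1, 53.3, 59.2, 51.9])             # measurements, ppm
xbar, s_x = x.mean(), x.std(ddof=1)                # 53.12 ppm, 4.61 ppm
se = s_x / np.sqrt(len(x))                         # s(xbar)

t_crit = stats.t.ppf(1 - 0.20 / 2, df=len(x) - 1)  # t_0.10 for nu = 3
print(f"80% CI for mu_x: {xbar:.1f} +/- {t_crit * se:.1f} ppm")  # 53.1 +/- 3.8
```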

Example 3.10 In the previous example (example 3.9), a 95% confidence interval for the concentration of sulfuric acid in a solution was calculated, given the information that σx = 0.3 mM. Calculate another 95% confidence interval for this concentration, this time without assuming any prior knowledge of σx .

We must calculate the confidence interval entirely from sample statistics, since we presumably do not know the value of σx.

x̄ = 10.00 mM
sx = 0.28 mM
s(x̄) = sx/√n = 0.28/√7 = 0.11 mM

Since ν = n − 1 = 6, t0.025 = 2.447, and

CI for µx = 10.00 ± 2.447 · 0.11
          = 10.00 ± 0.26 mM (95% CL)

So if we assume no measurement bias, we can say that

[H2SO4] = 10.00 ± 0.26 mM (95% CL)

Notice that two significant figures were given for the second part of the confidence interval, and an equivalent number of decimal places were kept in the first part. See section 2.2 for more details.
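The same kind of check works for example 3.10; this hedged sketch starts from the reported sample statistics rather than raw data.

```python
import math
from scipy import stats

n, xbar, s_x = 7, 10.00, 0.28                  # sample statistics, example 3.10
se = s_x / math.sqrt(n)                        # s(xbar) = 0.106 mM
t_crit = stats.t.ppf(1 - 0.05 / 2, df=n - 1)   # t_0.025 for nu = 6 -> 2.447
print(f"95% CI: {xbar:.2f} +/- {t_crit * se:.2f} mM")  # 10.00 +/- 0.26 mM
```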


Figure 3.8: Forty measurements were collected from a normally distributed population with µx = 50 and σx = 10 (see p. 69). These values were grouped into ten samples of four measurements each (see text on p. 71). The figure plots the sample mean (in ppm) and the 80% confidence interval (calculated from eqn. 3.18) for each of the ten samples.
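The coverage behavior shown in figure 3.8 can be reproduced by simulation. This sketch (with an arbitrary seed and 10,000 repetitions, both hypothetical choices) draws repeated samples of four from the same population and checks that roughly 80% of the intervals from eqn. 3.18 contain µx.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)                 # arbitrary seed
mu, sigma, n, alpha = 50.0, 10.0, 4, 0.20      # the population of figure 3.8
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)

trials, hits = 10_000, 0
for _ in range(trials):
    x = rng.normal(mu, sigma, n)
    half = t_crit * x.std(ddof=1) / np.sqrt(n)
    hits += int(x.mean() - half < mu < x.mean() + half)

print(f"coverage: {hits / trials:.3f}")        # close to 1 - alpha = 0.80
```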

3.3 Summary and Skills

Two very important skills developed in this chapter:
1. the ability to do error propagation calculations; and
2. the ability to calculate confidence intervals to estimate µx.

Measurements usually contain two types of error: random and systematic measurement error. The magnitude of the random measurement error, as indicated by the standard deviation of the measurements, is characteristic of measurement precision; the amount of systematic measurement error is the bias, which indicates the accuracy of the measurements. Bias and standard deviation are statistical concepts that can be related to the probability distribution of the measurements.

If a measurement contains random error, then so too will any value derived (i.e., calculated) from that measurement. In some cases, two or more measurements are used to calculate the desired result. The procedure to determine the standard deviation of the derived value is called propagation of (random) error. The error propagation method presented in this chapter assumes that (i) the errors of the measurements in the calculation are independent of one another; and (ii) the calculated value is a linear function of the measurements.

Confidence intervals are widely used to estimate population parameters. Any confidence interval has an associated confidence level (1 − α) that states the probability that the population parameter is within the interval. In this chapter, methods to calculate confidence intervals for µx were given for two situations: (i) the situation when σx is known, and (ii) the more common situation when σx is unknown. Both methods depend on assuming that the sample mean follows a normal probability distribution, an assumption that is somewhat justified by the central limit theorem.
