The concept of uncertainty is fundamental to all chemical measurements, representing the range of values within which the true value is expected to be found. Every piece of laboratory equipment and technique possesses inherent limitations that prevent a measurement from being perfectly exact. Quantifying this uncertainty is paramount because it directly determines the reliability and comparability of chemical data. A reported measurement is incomplete without an accompanying statement of its uncertainty, as this value provides the context needed to judge the quality of the result.
Understanding the Types and Sources of Uncertainty
It is important to distinguish between uncertainty and error in a chemical context. An error is the difference between a single measurement and the accepted true value, while uncertainty is the estimated range of possible values for a measurement. Both error and uncertainty are present in every physical measurement; a mistake, by contrast, is a blunder, and any result affected by one should be discarded rather than analyzed.
Sources of uncertainty are broadly categorized into two main types: random and systematic. Random uncertainty is bidirectional, causing individual measurements to fluctuate randomly above or below the true value. It often arises from statistical fluctuations, such as slight variations in reading a meniscus or short-term fluctuations in environmental conditions like temperature. Random uncertainty affects the precision of a measurement but can be reduced by taking multiple measurements and averaging the results.
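As a minimal illustration of this averaging effect, the Python sketch below (using hypothetical replicate burette readings) computes the mean of the replicates, the scatter of a single reading, and the standard error of the mean, which shrinks as more replicates are collected.

```python
import statistics

# Hypothetical replicate readings of the same volume (mL); the scatter
# between them reflects random uncertainty, e.g. in judging the meniscus.
readings = [24.41, 24.39, 24.43, 24.40, 24.42]

mean = statistics.mean(readings)
s = statistics.stdev(readings)          # spread expected for a single reading
sem = s / len(readings) ** 0.5          # standard error of the mean

print(f"mean = {mean:.3f} mL")
print(f"single-reading scatter = {s:.3f} mL")
print(f"standard error of the mean (n={len(readings)}) = {sem:.3f} mL")
```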
Systematic uncertainty is a consistent bias that causes all measurements to deviate from the true value in the same direction. This often stems from the limitations of the measuring instrument itself, such as an improperly calibrated volumetric flask or a digital balance that consistently reads high. Systematic uncertainty affects the accuracy of a measurement and typically requires calibration or correction procedures.
Determining Uncertainty in Individual Measurements
An uncertainty value must first be assigned to each individual measurement taken directly from an instrument. For analog instruments with a graduated scale, such as a thermometer or a burette, the instrumental uncertainty is estimated to be half of the smallest increment on the scale. For example, if a ruler has millimeter markings, the reading uncertainty is often taken as \(\pm 0.5\) millimeters.
Digital instruments, like an electronic balance, typically have an uncertainty equal to the smallest increment the display can resolve. If a balance reads \(10.000\) grams, the uncertainty is often stated as \(\pm 0.001\) grams, or the manufacturer’s stated tolerance may be used. When a single measurement involves two separate readings, such as the volume delivered from a burette (final reading minus initial reading), a common convention is to double the single-reading uncertainty because two reading judgments are made; combining the two readings in quadrature, as described below, would instead give a factor of \(\sqrt{2}\).
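The short Python sketch below encodes these reading conventions; the helper functions and the specific instruments are illustrative assumptions, not a standard library interface.

```python
def analog_uncertainty(smallest_division):
    """Half of the smallest scale division (common convention for analog scales)."""
    return smallest_division / 2


def digital_uncertainty(display_resolution):
    """One unit in the last displayed digit, unless the manufacturer states a tolerance."""
    return display_resolution


print(analog_uncertainty(1.0))      # ruler graduated in mm       -> 0.5 mm
print(digital_uncertainty(0.001))   # balance reading to 0.001 g  -> 0.001 g

# Volume delivered from a burette = final reading - initial reading.
# Doubling the single-reading uncertainty is the conservative teaching convention;
# combining the two readings in quadrature would give a factor of sqrt(2) instead.
single_reading = analog_uncertainty(0.1)   # burette graduated in 0.1 mL -> 0.05 mL
delivered = 2 * single_reading             # 0.1 mL
print(delivered)
```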
Calculating Combined Uncertainty (Error Propagation)
Most chemical results are calculated from multiple measured quantities, requiring the combination of individual uncertainties through a process known as error propagation. The method for propagation depends on the mathematical operation used to combine the initial values. The overall uncertainty of a result is represented by \(u_R\), and individual uncertainties are \(u_A\), \(u_B\), and so on.
When measured values are combined through addition or subtraction, their absolute uncertainties are combined using addition in quadrature. The absolute uncertainty in the final result is the square root of the sum of the squares of the individual absolute uncertainties. For a result \(R = A + B - C\), the combined uncertainty \(u_R\) is calculated as \(u_R = \sqrt{(u_A)^2 + (u_B)^2 + (u_C)^2}\).
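A minimal Python sketch of this rule, using hypothetical masses in grams:

```python
from math import sqrt

def add_sub_uncertainty(*absolute_uncertainties):
    """Combine absolute uncertainties in quadrature (sums and differences)."""
    return sqrt(sum(u ** 2 for u in absolute_uncertainties))

# Hypothetical example: R = A + B - C, with masses in grams
u_A, u_B, u_C = 0.02, 0.02, 0.01
u_R = add_sub_uncertainty(u_A, u_B, u_C)
print(f"u_R = {u_R:.3f} g")   # sqrt(0.0004 + 0.0004 + 0.0001) = 0.030 g
```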
For calculations involving multiplication or division, such as determining the concentration of a solution, the uncertainties are combined in relative form. The relative uncertainty is the absolute uncertainty divided by the measured value. The square of the relative uncertainty of the final result equals the sum of the squares of the relative uncertainties of the individual measurements. For a result \(R = A \times B / C\), this gives \(\left(\frac{u_R}{R}\right)^2 = \left(\frac{u_A}{A}\right)^2 + \left(\frac{u_B}{B}\right)^2 + \left(\frac{u_C}{C}\right)^2\). Once calculated, the relative uncertainty \(u_R / R\) is multiplied by the final result \(R\) to convert it back into the absolute uncertainty \(u_R\) for reporting.
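The same rule as a Python sketch, applied to a hypothetical concentration \(c = n / V\) (the amounts and tolerances below are invented for illustration):

```python
from math import sqrt

def mul_div_relative_uncertainty(pairs):
    """Combine relative uncertainties in quadrature (products and quotients).

    `pairs` is a sequence of (value, absolute_uncertainty) tuples.
    """
    return sqrt(sum((u / x) ** 2 for x, u in pairs))

# Hypothetical data: n = 0.0250 +/- 0.0002 mol, V = 0.1000 +/- 0.0002 L
n, u_n = 0.0250, 0.0002
V, u_V = 0.1000, 0.0002

c = n / V
rel_u = mul_div_relative_uncertainty([(n, u_n), (V, u_V)])
u_c = rel_u * c                     # convert back to an absolute uncertainty
print(f"c = {c:.3f} mol/L, u_c = {u_c:.4f} mol/L")   # 0.250 mol/L, 0.0021 mol/L
```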
Presenting Final Results and Uncertainty Notation
The final step involves correctly reporting the calculated result along with its associated uncertainty in a standardized format. The accepted notation is the measured value plus or minus the absolute uncertainty, followed by the unit (Value \(\pm\) Uncertainty Unit). For example, a mass might be reported as \(10.55 \pm 0.02\) grams.
A convention exists for rounding the calculated absolute uncertainty: it is typically rounded to one significant figure. If the leading digit of the uncertainty is a one, it is acceptable to retain two significant figures, such as \(\pm 0.14\) or \(\pm 0.015\). Once the uncertainty is rounded, the measured value must also be rounded so that its last reported decimal place matches the decimal place of the final uncertainty. For instance, if the calculated value is \(10.5532\) and the calculated uncertainty is \(\pm 0.0156\), rounding to one significant figure gives \(\pm 0.02\) and the result is reported as \(10.55 \pm 0.02\); applying the leading-one exception instead retains two figures, giving \(10.553 \pm 0.016\).
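These rounding conventions can be automated; the sketch below is one possible Python implementation (the function names are arbitrary), and it always applies the two-figure exception when the leading digit of the uncertainty is a one.

```python
from math import floor, log10

def round_uncertainty(u):
    """Round u to one significant figure, or two if its leading digit is 1."""
    exponent = floor(log10(abs(u)))
    leading_digit = int(abs(u) / 10 ** exponent)
    sig_figs = 2 if leading_digit == 1 else 1
    decimals = (sig_figs - 1) - exponent        # decimal places to keep
    return round(u, decimals), max(decimals, 0)

def report(value, uncertainty):
    """Format 'value ± uncertainty' with matching decimal places."""
    u_rounded, decimals = round_uncertainty(uncertainty)
    return f"{value:.{decimals}f} ± {u_rounded:.{decimals}f}"

print(report(10.5532, 0.0156))   # leading digit 1 -> 10.553 ± 0.016
print(report(10.5532, 0.0456))   # leading digit 4 -> 10.55 ± 0.05
```

Because the sketch always keeps two figures when the leading digit is a one, it reports \(10.553 \pm 0.016\) for the worked example above; rounding that uncertainty to a single figure, as in the prose, is equally acceptable.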