Measurement uncertainty quantifies the doubt surrounding a measurement result, providing a range within which the true value is believed to lie. No measurement can achieve perfect precision, as inherent limitations and external influences always introduce some variability. Understanding and expressing this uncertainty is important across various fields, from scientific research and engineering to everyday applications, as it allows for reliable data interpretation and informed decision-making.
Distinguishing Uncertainty from Error
Measurement uncertainty differs fundamentally from measurement error. An error is the difference between a measured value and the true value; once identified, it can often be corrected. For instance, a miscalibrated instrument might consistently read too high, a systematic error that recalibration can remove.
In contrast, uncertainty is a non-negative parameter that characterizes the dispersion of values that could reasonably be attributed to the quantity being measured, reflecting a lack of complete knowledge about the true value.
Accuracy refers to how close a measurement is to the true or accepted value. Precision describes the reproducibility of measurements, reflecting how close repeated measurements are to each other. A measurement can be precise but inaccurate if readings are clustered but far from the true value, or accurate but imprecise if readings are scattered but average out to the true value.
Factors Contributing to Uncertainty
Multiple factors contribute to the overall uncertainty of a measurement, arising from various aspects of the measurement process. Instrument limitations are a common source, including finite device resolution, calibration errors, or gradual changes in performance over time (drift). For example, a digital scale that displays readings to only one decimal place limits the resolution, and hence the precision, of every reading.
Environmental conditions also influence measurements. Fluctuations in temperature, humidity, air pressure, or vibrations can subtly alter results. For instance, a metal ruler expands or contracts with temperature changes, affecting its length.
Human factors contribute as well, stemming from the operator’s involvement. Observer effects such as parallax when reading a scale, variations in reaction time, and inconsistencies in technique can all affect the result.
The chosen methodology or procedure can also contribute due to inherent imperfections. Simplifications in models, approximations, or variations in method application can introduce deviations. Even the inherent variability of the measured quantity itself, such as fluctuations in a biological sample, adds to the overall uncertainty.
Calculating Uncertainty
Quantifying measurement uncertainty involves systematically evaluating all known sources of variation. The internationally recognized Guide to the Expression of Uncertainty in Measurement (GUM) outlines a framework for this process, categorizing evaluation methods into Type A and Type B. Both methods aim to express uncertainties as standard deviations, known as standard uncertainties, which can then be combined.
Type A Evaluation
Type A evaluation relies on statistical analysis of repeated observations. When measurements are repeated under the same conditions, random variations cause readings to differ slightly. The spread of these values provides a statistical estimate of uncertainty.
A common approach calculates the standard deviation of the mean from these observations. For example, if an object’s length is measured ten times, the average provides the best estimate. The standard deviation of these readings quantifies variability, and the standard error of the mean (standard deviation divided by the square root of the number of measurements) represents the Type A uncertainty for that average.
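As a minimal sketch of this Type A evaluation in Python (the readings below are illustrative values, not from any referenced dataset):

```python
import statistics

# Ten repeated length readings of the same object, in centimetres
# (illustrative values only).
readings = [10.23, 10.26, 10.24, 10.27, 10.25,
            10.24, 10.26, 10.25, 10.23, 10.27]

n = len(readings)
mean = statistics.mean(readings)    # best estimate of the length
stdev = statistics.stdev(readings)  # sample standard deviation (n - 1 denominator)
u_type_a = stdev / n ** 0.5         # standard error of the mean = Type A standard uncertainty

print(f"mean = {mean:.3f} cm")
print(f"standard deviation = {stdev:.3f} cm")
print(f"Type A standard uncertainty = {u_type_a:.3f} cm")
```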
Type B Evaluation
Type B evaluation uses non-statistical methods, drawing on all available information about potential uncertainty sources. This includes manufacturer’s specifications (e.g., instrument accuracy or resolution) and calibration certificates. Other sources are previous measurements, general knowledge of material and instrument behavior, or expert judgment.
For digital instruments, resolution contributes uncertainty; for example, a digital thermometer that displays to 0.1 °C carries an inherent uncertainty from that smallest increment. If specifications give only a range without a stated distribution, a rectangular (uniform) distribution is often assumed, and the standard uncertainty is obtained by dividing the half-range by the square root of three. For the thermometer's resolution, the half-range is 0.05 °C, giving a standard uncertainty of $0.05/\sqrt{3} \approx 0.029$ °C.
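A small sketch of the rectangular-distribution rule; the manufacturer's specification used here is a hypothetical example:

```python
import math

def rectangular_standard_uncertainty(half_range: float) -> float:
    """Standard uncertainty for a value assumed uniformly distributed
    within +/- half_range (the rectangular-distribution rule)."""
    return half_range / math.sqrt(3)

# Hypothetical specification: thermometer accurate to within +/- 0.5 degC.
u_spec = rectangular_standard_uncertainty(0.5)

# Display resolution of 0.1 degC: the half-range is half the increment.
u_resolution = rectangular_standard_uncertainty(0.1 / 2)

print(f"u from specification = {u_spec:.3f} degC")      # ~0.289 degC
print(f"u from resolution    = {u_resolution:.3f} degC")  # ~0.029 degC
```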
Combining Uncertainties (Propagation of Uncertainty)
Once individual standard uncertainties (Type A and Type B) are determined for each input quantity, they are combined to find the total uncertainty of the final measurement result. This is known as the propagation of uncertainty. For independent uncertainties, the “root sum of squares” (RSS) method, also called summation in quadrature, is most common.
This method assumes the individual uncertainties are uncorrelated. It involves squaring each standard uncertainty, summing the squares, and taking the square root. For a measurement R that is a sum or difference of several quantities (e.g., R = A ± B ± C), the combined standard uncertainty is $u_c(R) = \sqrt{u(A)^2 + u(B)^2 + u(C)^2}$. For multiplied or divided quantities (e.g., $R = A \cdot B / C$), relative standard uncertainties are combined instead: $u_c(R)/|R| = \sqrt{(u(A)/A)^2 + (u(B)/B)^2 + (u(C)/C)^2}$. This systematic combination ensures the overall uncertainty reflects contributions from all identified sources.
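A minimal sketch of both combination rules; the quantities and their uncertainties are invented for illustration:

```python
import math

def rss(*uncertainties: float) -> float:
    """Root sum of squares (summation in quadrature) of independent
    standard uncertainties."""
    return math.sqrt(sum(u ** 2 for u in uncertainties))

# Sum/difference, e.g. R = A + B - C: absolute uncertainties combine directly.
u_R = rss(0.03, 0.02, 0.04)  # cm
print(f"u_c(R) = {u_R:.3f} cm")

# Product/quotient, e.g. R = A * B / C: relative uncertainties combine.
A, u_A = 10.25, 0.03
B, u_B = 4.60, 0.02
C, u_C = 2.00, 0.01
rel = rss(u_A / A, u_B / B, u_C / C)  # combined relative standard uncertainty
R = A * B / C
print(f"R = {R:.2f}, u_c(R) = {rel * R:.2f}")
```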
Presenting Measurement Results
The final step involves reporting the measurement result with its associated uncertainty in a clear, standardized format. A common presentation is “Value ± Uncertainty Unit.” For example, 10.25 ± 0.03 cm. This conveys both the measured value and its associated doubt.
Uncertainty is typically rounded to one or two significant figures. The measured value should then be rounded so its last significant digit aligns with the uncertainty’s last significant digit. For instance, if uncertainty is 0.03 cm, the value is reported to two decimal places (e.g., 10.25 cm, not 10.253 cm).
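A sketch of this rounding convention; round_result is a hypothetical helper, not a standard-library function:

```python
import math

def round_result(value: float, uncertainty: float, sig_figs: int = 1):
    """Round the uncertainty to sig_figs significant figures, then round
    the value to the same decimal place (a common reporting convention)."""
    exponent = math.floor(math.log10(abs(uncertainty)))
    decimals = sig_figs - 1 - exponent
    return round(value, decimals), round(uncertainty, decimals)

value, u = round_result(10.253, 0.0312)
print(f"{value} ± {u} cm")  # 10.25 ± 0.03 cm
```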
The reported uncertainty is often an “expanded uncertainty” ($U$), which defines an interval expected to contain the true value with a specified level of confidence. It is obtained by multiplying the combined standard uncertainty by a coverage factor $k$, so that $U = k \cdot u_c$. For a normal distribution, $k = 2$ corresponds to a confidence level of approximately 95%, meaning the stated interval is expected to contain the true value with roughly 95% confidence.
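Continuing the sketch, the expanded uncertainty is simply the combined standard uncertainty scaled by the coverage factor (the numbers here are illustrative):

```python
k = 2        # coverage factor for ~95% confidence (normal distribution)
u_c = 0.015  # combined standard uncertainty in cm (illustrative)
U = k * u_c  # expanded uncertainty

print(f"Result: 10.25 ± {U:.2f} cm (k = {k}, ~95% confidence)")
# Result: 10.25 ± 0.03 cm (k = 2, ~95% confidence)
```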