In chemistry, quantifying the reliability of a measurement is a fundamental part of the experimental process. Every measurement taken in the laboratory carries some degree of doubt, referred to as uncertainty. This is not a matter of making a mistake, but an acknowledgment that no physical measurement can ever be perfectly exact. Reporting a chemical result without an associated uncertainty value leaves the data of little use, because its reliability cannot be judged.
Understanding Accuracy and Precision
The reliability of chemical data is assessed using two distinct concepts: accuracy and precision. Accuracy describes how closely a measured value approaches the true or accepted value of the quantity being measured.
Precision refers to the reproducibility of a series of measurements, indicating how closely multiple measurements agree with one another. Ideally, chemists strive for measurements that are both highly accurate and highly precise.
This distinction can be visualized using a target analogy, where the center represents the true value. Measurements that are clustered tightly together but far from the center are precise but not accurate. Measurements scattered widely around the center, with their average near the bullseye, are accurate but not precise.
These qualities are affected by different types of experimental error. Systematic errors consistently push the measurement in the same direction, introducing a bias that degrades accuracy. For example, a miscalibrated balance adds the same offset to every reading. Because they are reproducible, systematic errors can often be identified and corrected.
Random errors, also known as indeterminate errors, cause unpredictable fluctuations that vary with each reading. These errors are caused by factors like minor temperature fluctuations or the natural limits of reading a device. Random errors affect the precision of the data, causing repeated measurements to scatter around an average value.
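To make the distinction concrete, the short sketch below simulates a balance affected by both error types; the true mass, bias, and noise level are all invented for illustration.

```python
import random

TRUE_MASS = 10.00  # hypothetical true value (g)
BIAS = 0.15        # hypothetical systematic offset, e.g. a miscalibrated balance (g)
NOISE = 0.02       # hypothetical scale of random fluctuations (g)

random.seed(1)  # fixed seed so the run is reproducible

# Every reading carries the same systematic bias plus a fresh random error.
readings = [TRUE_MASS + BIAS + random.gauss(0, NOISE) for _ in range(10)]

mean = sum(readings) / len(readings)
print(f"mean reading: {mean:.3f} g")                     # clusters near 10.15, not 10.00
print(f"spread: {max(readings) - min(readings):.3f} g")  # small spread: precise but inaccurate
```

The readings agree closely with one another (good precision) yet all miss the true value by roughly the bias (poor accuracy); shrinking NOISE improves precision, while only removing BIAS improves accuracy.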
Quantifying Uncertainty in a Single Measured Quantity
Random error in a series of repeated measurements is quantified statistically using the mean and the standard deviation. The mean, \(\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i\), is the arithmetic average of the \(n\) repeated measurements and serves as the reported value.
The precision of the data set is represented by the standard deviation (\(s\) for a sample, \(\sigma\) for a population), which measures the spread of the individual data points around the calculated mean. For a sample, \(s = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2}\). A smaller standard deviation indicates that the measurements are tightly grouped and therefore possess higher precision. For a chemist, the sample standard deviation is taken as the absolute uncertainty of the measured value.
If a mass is measured multiple times, the standard deviation defines the uncertainty in grams. If the mean mass is \(10.05\) g and the standard deviation is \(\pm 0.02\) g, the result is reported as \(10.05 \pm 0.02\) g, meaning the true mass is expected to fall between \(10.03\) g and \(10.07\) g (with roughly 68% confidence, if the errors are normally distributed).
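A minimal sketch of this calculation using Python's standard library; the five replicate masses are invented to reproduce the numbers above:

```python
from statistics import mean, stdev

# Five hypothetical replicate mass readings (g)
masses = [10.03, 10.07, 10.04, 10.06, 10.05]

x_bar = mean(masses)  # arithmetic average: the reported value
s = stdev(masses)     # sample standard deviation (n - 1 in the denominator)

print(f"{x_bar:.2f} ± {s:.2f} g")  # prints: 10.05 ± 0.02 g
```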
To compare the precision of different measurements, the relative standard deviation (RSD) is calculated: \(\text{RSD} = \frac{s}{\bar{x}} \times 100\%\), the absolute uncertainty divided by the mean value and expressed as a percentage. It indicates the size of the uncertainty relative to the measurement itself.
A measurement with a mean of \(100\) grams and an absolute uncertainty of \(\pm 1\) gram has an RSD of \(1\%\). If a different measurement has a mean of \(10\) grams and an absolute uncertainty of \(\pm 1\) gram, its RSD is \(10\%\). Expressing the uncertainty as a percentage clearly shows that the \(100\)-gram measurement is significantly more precise in a relative sense.
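The comparison can be written directly; the function name rsd is just a convenience for this sketch:

```python
def rsd(mean_value: float, abs_uncertainty: float) -> float:
    """Relative standard deviation as a percentage of the mean."""
    return abs_uncertainty / mean_value * 100

print(f"{rsd(100, 1):.0f}%")  # 1%  -- the 100 g measurement
print(f"{rsd(10, 1):.0f}%")   # 10% -- same absolute error, ten times less precise
```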
Combining Uncertainties in Derived Results
Many reported results are derived quantities calculated from multiple individual measurements, each carrying uncertainty. The process of calculating the final uncertainty is known as the propagation of uncertainty. This method is used because simply adding absolute uncertainties would overestimate the total error, as individual errors are unlikely to align in the same direction.
Individual uncertainties are combined “in quadrature,” meaning their squares are added together before taking the square root. This mathematical approach accounts for the random nature of the errors, allowing some fluctuations to cancel others out. The specific rule used depends on the mathematical operation performed.
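For two measured quantities \(a \pm \delta a\) and \(b \pm \delta b\), the two standard quadrature rules developed in the subsections below can be summarized as:
\[
\delta(a+b) = \delta(a-b) = \sqrt{(\delta a)^2 + (\delta b)^2},
\qquad
\frac{\delta q}{|q|} = \sqrt{\left(\frac{\delta a}{a}\right)^2 + \left(\frac{\delta b}{b}\right)^2}
\quad \text{for } q = ab \text{ or } q = a/b.
\]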
Addition and Subtraction
For results calculated by addition or subtraction, the absolute uncertainties of the measurements are combined in quadrature to find the absolute uncertainty of the final result. For example, calculating a change in temperature (\(\Delta T\)) involves subtraction. If \(T_2 = 50.0 \pm 0.2^\circ \text{C}\) and \(T_1 = 20.0 \pm 0.1^\circ \text{C}\), the final temperature change is \(30.0^\circ \text{C}\).
The absolute uncertainty of \(\Delta T\) is found by taking the square root of the sum of the squares of the individual absolute uncertainties: \(\sqrt{(0.2)^2 + (0.1)^2} \approx 0.22\). This yields an absolute uncertainty of approximately \(\pm 0.22^\circ \text{C}\); rounded to match the precision of the value, the result is reported as \(\Delta T = 30.0 \pm 0.2^\circ \text{C}\).
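A minimal sketch of the same arithmetic, with the values taken from the example:

```python
import math

T2, dT2 = 50.0, 0.2  # final temperature and its absolute uncertainty (°C)
T1, dT1 = 20.0, 0.1  # initial temperature and its absolute uncertainty (°C)

delta_T = T2 - T1
# For subtraction, absolute uncertainties combine in quadrature.
d_delta_T = math.sqrt(dT2**2 + dT1**2)

print(f"ΔT = {delta_T:.1f} ± {d_delta_T:.2f} °C")  # ΔT = 30.0 ± 0.22 °C (before rounding)
```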
Multiplication and Division
For results calculated by multiplication or division, the relative uncertainties of the measurements are combined in quadrature to find the relative uncertainty of the final result. For example, calculating concentration requires dividing mass by volume. If the mass uncertainty is \(1\%\) and the volume uncertainty is \(2\%\), the final relative uncertainty is calculated as \(\sqrt{(1\%)^2 + (2\%)^2}\).
The resulting relative uncertainty is \(2.24\%\). If the calculated concentration is \(0.150 \text{ M}\), the final absolute uncertainty is \(2.24\%\) of \(0.150 \text{ M}\), or \(\pm 0.0034 \text{ M}\). This approach uses relative size because multiplication and division scale the errors along with the values.
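The same pattern in code, reusing the numbers from the example; relative uncertainties are carried as fractions and converted back to an absolute value at the end:

```python
import math

concentration = 0.150  # calculated concentration (mol/L)
rel_mass = 0.01        # 1% relative uncertainty in the mass
rel_volume = 0.02      # 2% relative uncertainty in the volume

# For division, relative uncertainties combine in quadrature.
rel_total = math.sqrt(rel_mass**2 + rel_volume**2)  # ≈ 0.0224, i.e. 2.24%
abs_uncertainty = rel_total * concentration         # scale back to an absolute value

print(f"{concentration:.3f} ± {abs_uncertainty:.4f} M")  # 0.150 ± 0.0034 M
```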