Absorbance measures how much light a substance prevents from passing through it. This measurement is calculated from the difference between the intensity of the light entering the sample and the intensity of the light exiting it. Quantifying this light interaction is valuable in chemistry, biology, and medicine. Scientists use absorbance values to determine the concentration of a specific compound in a solution or to monitor the progress of a chemical or biological reaction over time.
Understanding the Beer-Lambert Law
The theoretical basis for relating light absorption to a substance’s concentration is the Beer-Lambert Law. The law is expressed as \(A = \epsilon l c\), where \(A\) is the measured absorbance. This value is dimensionless and logarithmic: it is the base-10 logarithm of the ratio of the light intensity entering the sample to the intensity leaving it, rather than a simple fraction of light absorbed.
Absorbance depends on three factors and is directly proportional to each. Molar absorptivity (\(\epsilon\)) is a constant indicating how strongly a specific chemical absorbs light at a particular wavelength. Path length (\(l\)) is the distance the light travels through the sample, typically measured in centimeters (cm). The cuvette determines this length, and a longer path length results in higher absorbance. Concentration (\(c\)) is the amount of the absorbing substance in the solution, usually expressed in moles per liter (M).
Since molar absorptivity (\(\epsilon\)) and path length (\(l\)) are kept constant during a measurement, the absorbance (\(A\)) becomes directly proportional to the substance’s concentration (\(c\)). This linearity allows researchers to accurately determine an unknown concentration by measuring its absorbance. The relationship holds true only when the light used is monochromatic, meaning it consists of a single, specific wavelength.
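This rearrangement of the Beer-Lambert Law can be sketched in a few lines of code. The numbers below are hypothetical values chosen for illustration, not data from any particular experiment:

```python
# Solve the Beer-Lambert law, A = epsilon * l * c, for the unknown
# concentration c once epsilon and the path length are fixed.

def concentration_from_absorbance(absorbance, molar_absorptivity, path_length_cm=1.0):
    """Return concentration c (mol/L) from A = epsilon * l * c."""
    return absorbance / (molar_absorptivity * path_length_cm)

# Hypothetical example: epsilon = 6220 M^-1 cm^-1, standard 1 cm cuvette,
# measured absorbance of 0.311.
c = concentration_from_absorbance(0.311, 6220)
print(f"{c:.2e} M")  # 5.00e-05 M
```

Because \(\epsilon\) and \(l\) are constants for a given setup, the division is all that is needed to recover \(c\), which is why the linearity of the relationship matters so much in practice.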
Components of a Spectrophotometer
The physical device used to measure absorbance is the spectrophotometer, which uses a sequence of components to execute the Beer-Lambert Law’s requirements. The process begins with the light source, which provides a stable, broad-spectrum beam of light. This initial light then enters the monochromator, a component that separates the polychromatic light into its individual component wavelengths.
Inside the monochromator, a prism or a diffraction grating disperses the light, and a system of slits selects only the desired, narrow band of wavelengths to pass through the instrument. This ensures the light striking the sample is as close to a single wavelength as possible for accurate measurement. The selected light then passes through the sample holder, where the solution is placed in a transparent container called a cuvette.
The cuvette must be made of a material transparent to the chosen wavelength, such as quartz for UV light or glass for visible light. Finally, the light that passes through the sample hits the detector, a device like a photodiode or photomultiplier tube. The detector measures the intensity of the transmitted light (\(I_t\)) and compares it to the initial intensity (\(I_o\)), calculating the absorbance from this ratio.
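The detector's calculation follows directly from the standard definition of absorbance, \(A = \log_{10}(I_o / I_t)\). A minimal sketch, using hypothetical intensity readings:

```python
import math

def absorbance_from_intensities(incident, transmitted):
    """A = log10(I_o / I_t); transmitted intensity must be positive."""
    return math.log10(incident / transmitted)

# If only 10% of the incident light is transmitted, absorbance is 1.
print(absorbance_from_intensities(100.0, 10.0))  # 1.0
```

This makes the logarithmic nature of the scale concrete: each additional unit of absorbance corresponds to a tenfold drop in transmitted light.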
Steps for Accurate Absorbance Measurement
Obtaining a reliable absorbance reading requires a specific, methodical procedure to eliminate sources of error. Before any measurement, the substance must be properly dissolved to create a homogenous solution. The cuvette holding the sample must be clean and free of fingerprints or bubbles, which can scatter light. The first action on the instrument is selecting the optimal wavelength for analysis, typically the wavelength of maximum absorption (\(\lambda_{max}\)) for the substance.
The most fundamental step is “blanking” or “zeroing” the instrument, which establishes a baseline reading. This involves placing a reference solution, called the blank, into the cuvette holder. The blank contains all components of the sample except the analyte of interest, such as the solvent. By setting the instrument’s absorbance to zero with the blank in place, any background absorption caused by the solvent or the cuvette itself is subtracted from all subsequent readings.
Once the baseline is set, the blank is replaced with the sample cuvette, and the instrument reads the absorbance value. For quantitative analysis, this raw absorbance number must be translated into an actual concentration, which is achieved through a calibration curve. A calibration curve is created by measuring the absorbance of several standard solutions with known, varying concentrations and then plotting these values. The resulting best-fit line captures the linear relationship between absorbance and concentration, allowing the researcher to use the measured absorbance of an unknown sample to interpolate its concentration.
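The calibration-curve workflow can be sketched as an ordinary least-squares fit followed by inversion of the fitted line. The standard concentrations and absorbances below are hypothetical:

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y = m*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical standards: known concentrations (M) and measured absorbances
concentrations = [1e-5, 2e-5, 3e-5, 4e-5]
absorbances = [0.062, 0.125, 0.186, 0.249]

slope, intercept = fit_line(concentrations, absorbances)

# Interpolate the concentration of an unknown from its measured absorbance
unknown_absorbance = 0.150
unknown_concentration = (unknown_absorbance - intercept) / slope
```

With an ideal blank, the intercept should be close to zero and the slope equals \(\epsilon l\), so the fit doubles as a check that the measurement stayed within the linear range of the Beer-Lambert Law.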