Seismic measurement has evolved significantly alongside scientific understanding and technology. Early attempts to quantify the energy and impact of seismic events produced several measurement standards, but accurately capturing the immense power of the planet's largest tremors required a more robust approach. That need led the scientific community to favor the Moment Magnitude Scale (MMS).
Understanding the Richter Scale
The Richter scale, formally known as the local magnitude scale (\(M_L\)), was developed in 1935 by Charles F. Richter to measure earthquakes in Southern California. The scale's value is derived from the logarithm of the maximum amplitude of seismic waves recorded on a Wood-Anderson torsion seismograph. The original method was calibrated only for local earthquakes recorded within about 600 kilometers of the instrument.
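In its commonly cited textbook form, the local magnitude is the base-10 logarithm of that peak amplitude, corrected for the distance between the instrument and the earthquake:

\[
M_L = \log_{10} A - \log_{10} A_0(\delta)
\]

where \(A\) is the maximum trace amplitude recorded on the seismograph and \(A_0(\delta)\) is an empirical correction term that depends on the epicentral distance \(\delta\).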
The Richter scale is logarithmic, meaning each whole-number increase represents a tenfold increase in the measured wave amplitude. For instance, a magnitude 6 earthquake produces ten times the wave amplitude of a magnitude 5 event. The technique focuses solely on the amplitude of the waves recorded on the instrument, not on the physical size of the rupture itself.
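Because the scale is logarithmic, the amplitude ratio between two events recorded at the same distance follows directly from the difference in their magnitudes:

\[
\frac{A_1}{A_2} = 10^{\,M_1 - M_2}, \qquad \text{e.g. } 10^{\,7-5} = 100
\]

so a magnitude 7 earthquake produces one hundred times the wave amplitude of a magnitude 5 earthquake.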
The Critical Flaw: Saturation Point
The critical limitation of the Richter scale is “saturation.” Because the scale measures only the peak amplitude of seismic waves, it cannot accurately differentiate between very large earthquakes. For events above magnitude 6.5 or 7, the recorded wave amplitude plateaus, meaning the seismograph reading no longer reflects the true increase in energy release.
Saturation occurs because a very large earthquake involves a massive fault rupture, generating seismic waves with wavelengths too long for the Richter methodology to measure accurately. The measurement effectively maxes out, causing a magnitude 8 and a magnitude 9 earthquake to register nearly the same value. The Richter scale therefore severely underestimates the actual energy released by the most powerful earthquakes.
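To put the underestimate in perspective, a commonly cited empirical relation, the Gutenberg-Richter energy-magnitude formula, ties the radiated energy \(E\) (in joules) to magnitude \(M\):

\[
\log_{10} E \approx 1.5\,M + 4.8
\]

Each whole-number step in magnitude therefore corresponds to a factor of roughly \(10^{1.5} \approx 32\) in radiated energy, so a magnitude 9 earthquake releases about 32 times the energy of a magnitude 8, a difference that a saturated amplitude reading cannot show.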
How the Moment Magnitude Scale Works
In the late 1970s, seismologists Thomas Hanks and Hiroo Kanamori developed the Moment Magnitude Scale (\(M_w\)) to overcome the Richter scale's saturation problem. Instead of measuring wave amplitude, the MMS is based on the concept of seismic moment (\(M_0\)), which directly measures the physical size of the fault rupture. This approach shifts the focus from the resulting ground shaking to the earthquake's mechanical source.
The calculation of seismic moment incorporates three physical properties of the fault: the area of the fault surface that slipped, the average distance the fault moved (the slip), and the rigidity of the rock involved. Multiplying these quantities together gives a measure of the total work done at the earthquake source, and the final magnitude value is then derived from the seismic moment using a fixed formula.
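Concretely, the seismic moment is the product of those three quantities, and the moment magnitude is a logarithmic rescaling of it. The version below uses the widely cited Hanks-Kanamori form with \(M_0\) in newton-metres; the numbers in the worked example are purely illustrative:

\[
M_0 = \mu \, A \, D, \qquad M_w = \tfrac{2}{3}\left(\log_{10} M_0 - 9.1\right)
\]

where \(\mu\) is the rigidity of the rock, \(A\) is the rupture area, and \(D\) is the average slip. For example, a rupture of \(1{,}000\ \text{km}^2\) (\(10^9\ \text{m}^2\)) with \(2\ \text{m}\) of average slip in rock with \(\mu = 3 \times 10^{10}\ \text{Pa}\) gives \(M_0 = 6 \times 10^{19}\ \text{N·m}\) and \(M_w \approx 7.1\).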
Why MMS Provides a More Accurate Picture
Scientists prefer the Moment Magnitude Scale because it provides a more accurate measure of an earthquake’s total energy release. The MMS avoids the saturation issue entirely because it is based on the physical dynamics of the fault rupture, not just the peak wave amplitude. This allows it to accurately distinguish the true energy difference between a magnitude 8 and a magnitude 9 event.
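Rearranging the moment magnitude formula above makes the comparison explicit: each unit of \(M_w\) corresponds to a factor of about 32 in seismic moment,

\[
\frac{M_{0}(9)}{M_{0}(8)} = 10^{1.5\,(9-8)} \approx 32
\]

so a magnitude 9 earthquake involves roughly 32 times the seismic moment of a magnitude 8, a distinction that a saturated Richter reading collapses.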
Measuring the seismic moment allows the scale to better reflect the size of the subsurface faulting, which relates directly to the earthquake’s destructive potential. The MMS is applicable globally and consistently measures earthquakes of all sizes. This makes the Moment Magnitude Scale a more reliable and scientifically robust standard.