The audiometer is a fundamental tool for measuring hearing sensitivity, providing objective data that moved hearing assessment beyond simple observation. Several inventors contributed to its evolution from a theoretical concept to the precise clinical device used today. Exploring its origins helps explain how it became a standardized part of hearing healthcare.
Defining the Device and Its Purpose
An audiometer is an instrument designed to measure hearing acuity across a range of sound frequencies. Its primary function is to generate pure tones or speech signals at precisely calibrated intensity levels. The device presents these sounds through headphones or a bone-conduction transducer to determine the softest sound a person can hear, known as the hearing threshold.
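To make the threshold-finding idea concrete, here is a minimal Python sketch of a bracketing search in the "down 10 dB, up 5 dB" style widely used in manual pure-tone testing. The `hears_tone` callable, the starting level, and the stopping rule are simplifying assumptions for illustration, not the procedure of any particular historical or commercial device.

```python
def find_threshold(hears_tone, start_level_db=40, floor_db=-10, ceiling_db=120):
    """Simplified bracketing search for a hearing threshold at one frequency.

    `hears_tone(level_db)` stands in for the listener's response
    (True if the tone at that dB HL level was heard). Levels drop 10 dB
    after a response and rise 5 dB after no response; the search stops at
    the first level heard on two separate ascending presentations, a
    simplification of real clinical stopping rules.
    """
    level = start_level_db
    ascending_hits = {}                    # level -> times heard on an ascending run
    heard = hears_tone(level)

    for _ in range(50):                    # safety cap on presentations
        if heard:
            next_level = max(level - 10, floor_db)
        else:
            next_level = min(level + 5, ceiling_db)
        ascending = next_level > level
        level = next_level
        heard = hears_tone(level)
        if ascending and heard:
            ascending_hits[level] = ascending_hits.get(level, 0) + 1
            if ascending_hits[level] >= 2:
                return level               # accepted as the threshold
    return None                            # no stable threshold within the cap


# Example: a simulated listener whose true threshold is 35 dB HL.
print(find_threshold(lambda level_db: level_db >= 35))  # -> 35
```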
The results are plotted on an audiogram, which maps a patient’s hearing sensitivity. Sound level is measured in decibels (dB), a logarithmic scale that reflects perceived loudness. Frequency, or pitch, is measured in Hertz (Hz), indicating the number of sound wave cycles per second. The audiometer allows professionals to test specific frequencies, typically ranging from 250 Hz to 8000 Hz, to diagnose different types and degrees of hearing loss.
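As a concrete sketch of these conventions, the short Python example below represents one ear's audiogram as a mapping from test frequency in Hz to threshold in dB HL (hearing level, the audiogram scale), and shows the logarithmic decibel relationship against the standard 20 µPa reference pressure. The threshold values themselves are hypothetical.

```python
import math

# Reference sound pressure for 0 dB SPL (20 micropascals), the standard
# baseline for airborne sound.
P_REF = 20e-6  # pascals

def pressure_to_db_spl(pressure_pa):
    """Convert a sound pressure in pascals to decibels SPL: 20 * log10(p / p_ref)."""
    return 20 * math.log10(pressure_pa / P_REF)

# A hypothetical right-ear audiogram: hearing threshold (dB HL) at the
# octave frequencies typically tested, 250 Hz to 8000 Hz.
right_ear_thresholds = {
    250: 10,
    500: 15,
    1000: 20,
    2000: 30,
    4000: 45,
    8000: 50,
}

for freq_hz, threshold_db in sorted(right_ear_thresholds.items()):
    print(f"{freq_hz:>5} Hz : {threshold_db} dB HL")

print(pressure_to_db_spl(20e-6))  # the reference pressure itself is 0 dB SPL
```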
The Earliest Conceptions and Inventors
The earliest known device to quantitatively measure hearing was developed in 1879 by Welsh scientist David Edward Hughes. Hughes adapted his “induction balance,” a device that measured electrical induction currents, and was the first to use the term “audiometer.” His instrument used two electrical circuits linked by an induction coil, allowing measurement of hearing on a scale from zero to 200 units.
Hughes introduced his device to the Royal Society, emphasizing its value for providing an “absolute measure of our hearing powers.” However, this early, complex instrument was largely rejected by medical practitioners who preferred simpler methods like tuning forks or the spoken voice test. A later, formalized device was introduced in 1899 by American psychologist Carl Seashore. Seashore’s device, which he called an audiometer, operated on a battery and used an attenuator with a 40-step scale to measure the “keenness of hearing” for psychological studies.
The Shift to Electrical Audiometry
The move away from early mechanical or induction-based concepts was necessary for the audiometer to become a reliable clinical tool. The introduction of vacuum tube technology in the early 1900s allowed for the creation of standardized electrical circuits capable of producing a stable range of pure tones. This technological leap enabled the precise, repeatable calibration required for medical diagnostics, which was not possible with earlier devices.
The transition was solidified by the involvement of the telephone industry, particularly Western Electric. The company patented an electric audiometer in 1914 and produced the first commercially available electronic audiometer, the Western Electric 1-A. Its successor, the widely used 2-A model of 1923, allowed for hearing testing across a broad frequency range (in some models from 32 Hz up to 16,384 Hz). Collaboration between physicists like Harvey Fletcher and otolaryngologists was instrumental in establishing the use of octave-interval frequencies and plotting intensity downward to represent hearing loss on the audiogram.
The Modern Standard of Hearing Assessment
Today’s audiometers represent an evolution from their mechanical and early electrical predecessors. Modern devices are digital and microprocessor-controlled, offering precise digital control over stimulus generation and delivery. They are often computer-based or tablet-based, which streamlines the testing process and allows for seamless integration with electronic medical records.
Current clinical applications rely on pure-tone audiometry, the most common test to determine the presence and severity of hearing loss. These instruments also support advanced diagnostic procedures, including bone conduction testing, speech audiometry, and specialized tests that measure the ear’s response to various signals. Accuracy in modern audiology clinics is ensured through rigorous, internationally recognized calibration standards, guaranteeing that test results are reliable and consistent across different devices and locations.
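One common way pure-tone results are summarized is the three-frequency pure-tone average (PTA) of the thresholds at 500, 1000, and 2000 Hz. The sketch below, in the same hypothetical Python style as above, computes a PTA and maps it to an approximate degree-of-loss label; the category cutoffs are illustrative only, as published guidelines differ.

```python
def pure_tone_average(thresholds_db_hl):
    """Average the thresholds at 500, 1000, and 2000 Hz, the conventional
    three-frequency pure-tone average (PTA)."""
    return sum(thresholds_db_hl[f] for f in (500, 1000, 2000)) / 3

def degree_of_loss(pta_db_hl):
    """Map a PTA to an approximate degree-of-loss label.
    Cutoffs vary between published guidelines; these are illustrative."""
    if pta_db_hl <= 25:
        return "within normal limits"
    if pta_db_hl <= 40:
        return "mild"
    if pta_db_hl <= 55:
        return "moderate"
    if pta_db_hl <= 70:
        return "moderately severe"
    if pta_db_hl <= 90:
        return "severe"
    return "profound"

# Example with hypothetical thresholds (dB HL).
thresholds = {500: 30, 1000: 35, 2000: 40}
pta = pure_tone_average(thresholds)
print(f"PTA = {pta:.1f} dB HL ({degree_of_loss(pta)})")
```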