When Were Cochlear Implants Invented?

Cochlear implants (CIs) are medical devices that provide a sense of sound to individuals with severe to profound sensorineural hearing loss. Unlike traditional hearing aids, which only amplify sound, a CI is an electronic system that bypasses damaged parts of the inner ear to directly stimulate the auditory nerve. The device converts sound into patterns of electrical pulses, which the brain learns to interpret as sound. The history of this technology spans over two centuries, moving from early electrical experiments to the advanced systems available today.

The Foundation of Electrical Hearing

The theoretical foundation for the cochlear implant began with early experiments proving that electrical current could produce an auditory sensation. The Italian physicist Alessandro Volta, inventor of the electric battery, conducted the first documented experiment around 1800. He passed current from a battery stack of roughly 50 volts through metal rods inserted in his ears, reporting a jolt in his head followed by a sound like the “boiling of thick soup.”

The first direct stimulation of the auditory nerve in a human occurred much later, in 1957. In France, surgeon Charles Eyriès and medical physicist André Djourno implanted an electrode and an induction coil near the auditory nerve of a deaf patient. This preliminary device allowed the patient to perceive sounds resembling the “chirping of a grasshopper” and to recognize a few simple words. The experiment inspired investigators to pursue the possibility of an implantable hearing prosthesis.

The First Prototype Devices

The first practical device for long-term use emerged in the United States, pioneered by otologist Dr. William F. House, who performed his first single-electrode implantations in 1961 and later partnered with engineer Jack Urban. In 1972, House introduced a wearable single-channel device, which used a single electrode to stimulate the cochlea. It allowed for sound awareness, though not speech comprehension, and became the prototype for the first commercial implant.

A separate, concurrent development in Australia laid the groundwork for modern cochlear implants. In 1978, Professor Graeme Clark and his team at the University of Melbourne successfully implanted the first multi-channel device. Clark’s design utilized a multi-electrode array—eventually with 22 electrodes—to stimulate different regions of the auditory nerve. This multi-channel approach was based on the idea that stimulating separate nerve populations along the cochlea would better mimic the ear’s natural frequency separation, known as tonotopic or place coding, which proved essential for speech understanding.
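The place-coding principle behind the multi-channel design can be illustrated with the Greenwood function, a commonly cited empirical formula relating position along the human cochlea to the sound frequency that excites it. The sketch below is illustrative only: the electrode count and the assumption of evenly spaced electrode positions are simplifications, not any manufacturer’s actual specification.

```python
import math

def greenwood_frequency(x, A=165.4, a=2.1, k=0.88):
    """Greenwood function for the human cochlea: maps a relative
    position x (0 = apex, 1 = base) to a characteristic frequency
    in Hz, using the commonly cited human constants."""
    return A * (10 ** (a * x) - k)

# Illustrative: assign evenly spaced cochlear positions to an
# electrode array and print the frequency each site responds to.
n_electrodes = 22  # illustrative count
for i in range(n_electrodes):
    x = i / (n_electrodes - 1)  # 0.0 = apex (low pitch), 1.0 = base (high pitch)
    print(f"electrode {i + 1:2d}: ~{greenwood_frequency(x):7.0f} Hz")
```

Running this shows frequencies spanning roughly 20 Hz at the apex to about 20 kHz at the base, which is why an array of electrodes spread along the cochlea can divide the speech spectrum among separate nerve populations.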

Transition to Clinical Use and Regulatory Approval

Following years of clinical trials, the first cochlear implant received regulatory approval in the United States. In 1984, the U.S. Food and Drug Administration (FDA) approved the 3M/House single-channel device for use in adults. This approval marked the first time a cochlear implant was sanctioned for commercial use.

The multi-channel design soon demonstrated superior performance: in 1985, the FDA approved the Nucleus multi-channel implant for use in adults. The regulatory focus then shifted to younger patients, reflecting the importance of early intervention for language development. In 1990, the FDA approved the multi-channel cochlear implant for use in children aged two years and older. This milestone cemented the technology’s acceptance as a standard medical treatment.

Modern Iterations and Current Technology

Since the 1990s, the technology has advanced rapidly, driven by improvements in digital signal processing. Modern implants utilize sophisticated sound coding strategies that better translate acoustic information into electrical pulses, enhancing speech recognition. Device components have become smaller, with external processors now lightweight and often worn discreetly behind the ear.
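The core idea shared by these coding strategies—splitting incoming sound into frequency bands and using each band’s energy envelope to set the stimulation level of one electrode—can be sketched as follows. This is a simplified illustration, not any manufacturer’s actual algorithm; the frame length, band edges, and naive DFT-based band-energy estimate are arbitrary choices made for clarity.

```python
import math

def band_envelopes(signal, sample_rate, band_edges, frame_len):
    """Split a mono signal into frames, estimate the energy in each
    frequency band via a naive DFT, and return one envelope value per
    band per frame -- the quantity a CI maps to electrode current."""
    n_bands = len(band_edges) - 1
    envelopes = []
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[start:start + frame_len]
        frame_env = []
        for b in range(n_bands):
            lo, hi = band_edges[b], band_edges[b + 1]
            energy = 0.0
            # Sum DFT magnitudes for the bins that fall inside this band.
            for k in range(frame_len // 2):
                freq = k * sample_rate / frame_len
                if lo <= freq < hi:
                    re = sum(frame[n] * math.cos(2 * math.pi * k * n / frame_len)
                             for n in range(frame_len))
                    im = sum(frame[n] * math.sin(2 * math.pi * k * n / frame_len)
                             for n in range(frame_len))
                    energy += math.hypot(re, im)
            frame_env.append(energy)
        envelopes.append(frame_env)
    return envelopes

# Illustrative input: a pure 300 Hz tone, which should excite
# mainly the lowest band (the "electrode" nearest the apex).
rate, frame = 8000, 256
sig = [math.sin(2 * math.pi * 300 * n / rate) for n in range(1024)]
bands = [200, 500, 1000, 2000, 4000]  # arbitrary 4-band filterbank edges (Hz)
env = band_envelopes(sig, rate, bands, frame)
# env[0] holds the first frame's per-band envelope; band 0 dominates.
```

In a real processor the envelope of each band is then compressed into the patient’s electrical dynamic range and delivered as a train of biphasic current pulses on the corresponding electrode; the sketch stops at the envelope stage.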

Advancements in electrode design, such as flexible and shorter arrays, reduce surgical trauma and allow for the preservation of residual natural hearing. Implantation criteria have broadened, with the FDA approving the use of some devices for infants as young as 9 to 12 months, and even 7 months in some cases, to optimize the window for speech and language acquisition. Current innovations also include features like direct wireless connectivity for audio streaming and improved performance in noisy environments.