Who Developed the Concept of Mental Age?

French psychologist Alfred Binet, working with his collaborator Theodore Simon, developed the concept of mental age in 1905. The idea emerged from a practical need: French schools wanted a way to identify children who needed extra academic support, and Binet designed a test that could compare a child’s cognitive performance to what was typical for their age group.

Why Binet Created the Test

In the early 1900s, France was expanding public education, and teachers needed a reliable method for spotting students who were falling behind intellectually rather than simply being unmotivated or poorly taught. Binet and Simon responded by building a series of tasks that tested attention, memory, and verbal skill in schoolchildren. The core insight was simple: if a seven-year-old consistently performed at the level expected of a five-year-old, that gap could be quantified. The child’s “mental age” was five, even though their actual (chronological) age was seven.

How Mental Age Was Calculated

Binet and Simon grouped test items by age level. A child started with tasks designed for younger children and worked upward. The examiner credited the child with the highest age level at which they passed all (or nearly all) of the tasks. For every five additional tasks passed beyond that level, the child earned one more year of mental age. The result was expressed as a decimal, so a child might score a mental age of 5.6 years against a chronological age of 8 years and 3 months.
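The scoring described above can be sketched in a few lines. This is an illustrative reconstruction, not the historical protocol verbatim; it assumes five tasks per age level, so each extra task passed above the basal level adds one-fifth of a year.

```python
# Mental-age scoring sketch (illustrative; assumes 5 tasks per age level).
def mental_age(basal_age: int, extra_tasks_passed: int, tasks_per_level: int = 5) -> float:
    """basal_age: highest age level at which (nearly) all tasks were passed.
    extra_tasks_passed: tasks passed above that level; each adds 1/tasks_per_level year."""
    return basal_age + extra_tasks_passed / tasks_per_level

# A child with a basal level of 5 who passes 3 additional tasks:
print(mental_age(5, 3))  # 5.6
```

The decimal result is exactly the kind of figure the text mentions: a mental age of 5.6 years, which an examiner would then set against the child's chronological age.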

This made the concept intuitive for teachers and parents. A two-year gap between mental age and chronological age signaled a child who likely needed specialized instruction, while a mental age above chronological age suggested advanced ability.

Revisions That Shaped the Modern Test

Binet didn’t stop with the 1905 version. A major 1908 revision introduced age-graded tasks, meaning each set of questions was explicitly tied to a specific age norm. This is the version most psychology textbooks describe as the first true intelligence test. A further revision followed in 1911, shortly before Binet’s death that same year.

In the United States, Stanford University psychologist Lewis Terman translated the test into English, established new age norms for American children, and standardized the scoring. His 1916 adaptation became known as the Stanford-Binet Test, a name still used today. Terman's work made mental age testing widespread in American schools and in military recruitment during World War I.

From Mental Age to IQ

Binet himself never developed the concept of IQ. That step began in 1912, when German psychologist William Stern proposed dividing a child's mental age by their chronological age; Lewis Terman later multiplied the quotient by 100 to remove the decimal point. A ten-year-old with a mental age of ten scores 100 (exactly average). A ten-year-old performing at a twelve-year-old level scores 120. This "intelligence quotient" gave researchers a single number that was easy to compare across different ages.
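The ratio formula is simple enough to state directly in code. A minimal sketch, following the arithmetic described above (mental age divided by chronological age, times 100):

```python
# Stern's ratio IQ: mental age / chronological age, scaled by 100.
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    return mental_age / chronological_age * 100

print(ratio_iq(10, 10))  # 100.0 — an average ten-year-old
print(ratio_iq(12, 10))  # 120.0 — a ten-year-old at a twelve-year-old level
```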

Stern’s ratio IQ dominated testing for decades, but it had a fundamental flaw: it assumed that cognitive ability keeps growing at a steady rate throughout life, and it doesn’t. Research shows that the linear relationship between age and raw test performance starts to flatten around ages 9 to 12, which maps onto the shift from concrete to abstract thinking in childhood development. By ages 16 to 18, most cognitive abilities have largely plateaued, which makes the mental-age-divided-by-chronological-age formula meaningless for adults. A 50-year-old who scores like an average 25-year-old isn’t “half” as intelligent; the math simply breaks down because adult cognition doesn’t scale with age the way children’s does.

Why Mental Age Fell Out of Use

Modern intelligence tests no longer use ratio IQ scores based on mental age. Instead, they use what’s called a deviation score: your result is compared to a large sample of people your own age, producing a score with an average of 100 and a standard deviation of 15. This approach avoids the problems that plagued mental age calculations, particularly the nonsensical results they produced when applied to adults or when comparing people of very different ages.
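The deviation approach can be sketched as follows. This is a simplified illustration, not the procedure of any real test battery, and the norm-group scores below are invented for the example: a raw score is converted to a standard score against same-age peers, then rescaled to mean 100 and standard deviation 15.

```python
# Deviation-IQ sketch: compare a raw score to same-age peers, rescale to
# mean 100, SD 15. The norm-group scores are hypothetical.
from statistics import mean, stdev

def deviation_iq(raw_score: float, norm_group: list[float]) -> float:
    z = (raw_score - mean(norm_group)) / stdev(norm_group)  # standard score
    return 100 + 15 * z

# Hypothetical raw scores from test-takers of the same age (mean = 50):
peers = [38, 42, 45, 47, 50, 52, 55, 58, 63]
print(round(deviation_iq(50, peers)))  # 100 — exactly average for this group
```

Because the comparison is always against one's own age group, the score stays meaningful at any age, which is precisely where the ratio formula failed.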

The language of the field is shifting too. One of the most widely used cognitive assessments built on current theoretical models, the Woodcock-Johnson IV, doesn’t even use the word “intelligence” in its title. It refers to “cognitive abilities” instead, reflecting a broader recognition that human intellect is complex and shaped heavily by environment, not captured neatly by a single number. Still, Binet’s original insight, that you can learn something meaningful by comparing a person’s performance to age-based norms, remains the foundation underneath every modern cognitive test.