How to Linearize Equations Step by Step

Linearizing an equation means replacing a curved, nonlinear relationship with a straight-line approximation that’s valid near a chosen point. The core idea is simple: zoom in close enough on any smooth curve and it starts to look like a straight line. This technique shows up constantly in physics, engineering, and applied math because linear equations are dramatically easier to solve, graph, and analyze than nonlinear ones.

There are two broad approaches. One uses calculus (Taylor series) to approximate a nonlinear function with its tangent line near a specific point. The other uses algebraic transformations like logarithms to reshape a nonlinear equation into a form that plots as a straight line. Which one you need depends on what you’re trying to do.

Linearization With Taylor Series

The most general method for linearizing any smooth function uses the first-order Taylor expansion. For a function f(x), the linear approximation around a point x = a is:

f(x) ≈ f(a) + f′(a)(x − a)

This is just the equation of the tangent line at x = a. You evaluate the function at your chosen point, then add the slope (the derivative at that point) multiplied by how far x is from a. The result is a linear expression that closely matches the original function when x is near a, and grows less accurate as you move farther away.

The point a is often called the “operating point” or “equilibrium point,” depending on context. In a physics problem, it might be the angle at which a pendulum hangs at rest. In an engineering system, it might be the steady-state speed of a motor. The key requirement is that you pick a specific point and build your approximation around it.

Step-by-Step Process

To linearize a single-variable equation:

  • Choose your operating point. This is the value of x around which you want the approximation to be accurate. For dynamic systems, it’s typically an equilibrium where the system naturally settles.
  • Evaluate the function at that point. Calculate f(a) directly.
  • Take the derivative and evaluate it at the same point. Calculate f′(a).
  • Write the linear approximation. Plug everything into f(x) ≈ f(a) + f′(a)(x − a).

For example, if you need to linearize f(x) = sin(x) around x = 0, you get f(0) = 0, f′(0) = cos(0) = 1, so sin(x) ≈ x. That’s the famous small-angle approximation used throughout physics and engineering.
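The four steps above can be sketched in a few lines of Python. The `linearize` helper and its arguments are illustrative names, not from any library:

```python
import math

def linearize(f, df, a):
    """Return the tangent-line approximation of f around x = a.

    f  -- the function to linearize
    df -- its derivative
    a  -- the operating point
    """
    fa, slope = f(a), df(a)          # steps 2 and 3: evaluate f and f' at a
    return lambda x: fa + slope * (x - a)  # step 4: f(a) + f'(a)(x - a)

# Linearize sin(x) around a = 0: this recovers the small-angle approximation.
approx = linearize(math.sin, math.cos, 0.0)

print(approx(0.1))    # 0.1 -- the tangent line at 0 is just y = x
print(math.sin(0.1))  # 0.0998... -- very close to the approximation
```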

Common Small-Angle Approximations

Some linearizations come up so often that they’re worth memorizing. When the angle x is measured in radians and is close to zero:

  • sin(x) ≈ x
  • cos(x) ≈ 1
  • tan(x) ≈ x

These follow directly from the Taylor series of each function. The sine and tangent series both start with x as the first term, so dropping everything after that gives a linear approximation. The cosine series starts with 1, then the next term involves x², so the linear (first-order) approximation is just 1, a constant. These approximations work well for angles roughly below 10 to 15 degrees. Beyond that, the error grows quickly.
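A quick numerical check of all three approximations at 10 degrees, sketched in plain Python:

```python
import math

# Evaluate the three small-angle approximations at 10 degrees (in radians).
x = math.radians(10)

errors = {
    "sin": abs(math.sin(x) - x) / math.sin(x),
    "cos": abs(math.cos(x) - 1.0) / math.cos(x),
    "tan": abs(math.tan(x) - x) / math.tan(x),
}

for name, rel_err in errors.items():
    print(f"{name}: relative error {rel_err:.2%}")  # all under 2% at 10 degrees
```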

Linearizing Multivariable Systems

When your equation involves two or more variables, the same idea applies, but you use partial derivatives instead of ordinary ones. For a system of two equations with variables x and y:

dx/dt = f(x, y)
dy/dt = g(x, y)

You first find the equilibrium point (x*, y*) where both f and g equal zero. Then you build a matrix of partial derivatives, evaluated at that equilibrium. This matrix is called the Jacobian:

J = [ ∂f/∂x   ∂f/∂y ]
    [ ∂g/∂x   ∂g/∂y ]

Each entry is calculated at the equilibrium point. The linearized system then becomes a simple matrix equation. If you define new variables u = x − x* and v = y − y* (measuring displacement from equilibrium), the linearized system is:

d/dt [u, v] = J · [u, v]

This is now a linear system that you can solve with standard techniques from linear algebra. The Jacobian captures how each variable’s rate of change responds to small changes in every other variable, all evaluated at the equilibrium.
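A minimal sketch of this in Python, estimating the Jacobian by central differences for a damped pendulum with made-up parameters (the `jacobian` helper and the 0.5 damping coefficient are illustrative):

```python
import math

def jacobian(f, g, x_eq, y_eq, h=1e-6):
    """Estimate the 2x2 Jacobian of (f, g) at (x_eq, y_eq) by central differences."""
    return [
        [(f(x_eq + h, y_eq) - f(x_eq - h, y_eq)) / (2 * h),   # df/dx
         (f(x_eq, y_eq + h) - f(x_eq, y_eq - h)) / (2 * h)],  # df/dy
        [(g(x_eq + h, y_eq) - g(x_eq - h, y_eq)) / (2 * h),   # dg/dx
         (g(x_eq, y_eq + h) - g(x_eq, y_eq - h)) / (2 * h)],  # dg/dy
    ]

# Damped pendulum (hypothetical parameters): dx/dt = y, dy/dt = -sin(x) - 0.5*y.
f = lambda x, y: y
g = lambda x, y: -math.sin(x) - 0.5 * y

J = jacobian(f, g, 0.0, 0.0)  # equilibrium at (0, 0), where f and g both vanish
print(J)  # analytically [[0, 1], [-cos(0), -0.5]] = [[0, 1], [-1, -0.5]]
```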

When Linearization Tells You About Stability

One of the most powerful uses of linearization is determining whether an equilibrium point is stable. If you nudge the system slightly away from equilibrium, does it return or does it fly off?

The answer comes from the eigenvalues of the Jacobian matrix. If all eigenvalues have negative real parts, the equilibrium is stable and the system returns to it after small disturbances. If any eigenvalue has a positive real part, the equilibrium is unstable. The connection between the linearized system and the original nonlinear one is guaranteed by the Hartman-Grobman theorem: near the equilibrium, the linearized version faithfully represents the behavior of the nonlinear system, provided no eigenvalue has a real part of exactly zero. That edge case (purely imaginary eigenvalues, for instance) requires more advanced analysis.
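For a 2×2 Jacobian, the eigenvalues follow directly from the trace and determinant, so the stability test fits in a few lines. The matrix below is the damped-pendulum Jacobian [[0, 1], [-1, -0.5]], an illustrative example:

```python
import cmath

def eig2(J):
    """Eigenvalues of a 2x2 matrix from its characteristic polynomial."""
    tr = J[0][0] + J[1][1]
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

# Jacobian of a damped pendulum linearized at its resting equilibrium.
J = [[0.0, 1.0], [-1.0, -0.5]]

l1, l2 = eig2(J)
stable = l1.real < 0 and l2.real < 0
print(l1, l2)  # a complex-conjugate pair, both with real part -0.25
print("stable" if stable else "unstable")  # prints "stable"
```

Negative real parts with nonzero imaginary parts mean the system spirals back to equilibrium: a damped oscillation, exactly what you expect from a pendulum with friction.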

Log Transformations for Power Laws and Exponentials

Not all linearization involves calculus. If your equation follows a power-law form like y = kx^n, you can linearize it algebraically by taking the logarithm of both sides:

log(y) = log(k) + n · log(x)

Define new variables: let u = log(x) and v = log(y). Now you have v = n·u + log(k), which is a straight line with slope n and y-intercept log(k). Plotting log(y) versus log(x) on a graph produces a straight line, and you can read off the exponent n directly from the slope. This is why log-log plots are standard in experimental science for identifying power-law relationships.

The same logic works for exponential equations. If y = ae^(bx), taking the natural log gives ln(y) = ln(a) + bx. Plotting ln(y) versus x yields a straight line with slope b. These algebraic transformations are especially useful for fitting curves to experimental data, because linear regression (fitting a straight line) is far simpler and more robust than nonlinear curve fitting.
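Both transformations reduce curve fitting to fitting a straight line. A sketch using synthetic power-law data with k = 3 and n = 2 (the `fit_line` helper is a plain least-squares fit, not a library function):

```python
import math

def fit_line(us, vs):
    """Ordinary least-squares slope and intercept for v ≈ slope*u + intercept."""
    n = len(us)
    mu, mv = sum(us) / n, sum(vs) / n
    slope = (sum((u - mu) * (v - mv) for u, v in zip(us, vs))
             / sum((u - mu) ** 2 for u in us))
    return slope, mv - slope * mu

# Synthetic power-law data y = 3 * x^2, i.e. k = 3 and n = 2.
xs = [1, 2, 3, 4, 5]
ys = [3 * x ** 2 for x in xs]

# Fit a line to (log x, log y): the slope is n, the intercept is log(k).
slope, intercept = fit_line([math.log(x) for x in xs],
                            [math.log(y) for y in ys])
print(slope, math.exp(intercept))  # recovers n = 2 and k = 3
```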

How Far the Approximation Holds

Every linear approximation introduces error, and that error grows as you move away from the operating point. For a Taylor-series linearization, the error is proportional to (x − a)², where a is your operating point. More precisely, the remainder after the linear term is:

R₁(x) = [f″(c) / 2] · (x − a)²

for some value c between x and a. This tells you two things. First, the error scales with the square of the distance from the operating point, so doubling your distance roughly quadruples the error. Second, the error depends on the second derivative of the original function. Functions with high curvature (large second derivatives) lose accuracy faster than gently curving ones.

As a practical rule, linearization works well when the deviations from the operating point are small relative to the scale over which the function changes shape. For the small-angle approximation sin(x) ≈ x, the error at 10 degrees (0.17 radians) is about 0.5%. At 30 degrees (0.52 radians), it’s about 4.5%. At 90 degrees, the approximation is essentially useless.
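The quadratic error growth is easy to verify numerically. The sketch below uses f(x) = e^x around a = 0, chosen because its second derivative is nonzero there (for sin around 0, the second derivative vanishes and the error is actually cubic):

```python
import math

# Linearize e^x around a = 0: e^x ≈ 1 + x.  The leading error term is x**2 / 2.
def err(x):
    return abs(math.exp(x) - (1 + x))

e1, e2 = err(0.1), err(0.2)
print(e1, e2)   # roughly 0.0052 and 0.0214
print(e2 / e1)  # close to 4: doubling the distance roughly quadruples the error
```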

Piecewise Linearization

When a single tangent line can’t capture enough of the curve’s behavior, you can break the function into segments and approximate each one with a different straight line. This is piecewise linearization. The idea is to divide the range of your variable into intervals and fit a separate linear function to each interval, creating a chain of line segments that follows the curve more closely than any single line could.

This approach is common in optimization, where you need to feed a nonlinear function into a solver that only handles linear problems. By replacing the nonlinear objective or constraint with a set of linear pieces, you can apply standard linear programming algorithms. The tradeoff is complexity: more pieces give better accuracy but create a larger problem to solve.
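A sketch of the idea in Python (the `piecewise_linear` helper is illustrative, not a library function):

```python
import math

def piecewise_linear(f, breakpoints):
    """Approximate f with straight segments joining its values at the breakpoints."""
    ys = [f(b) for b in breakpoints]

    def approx(x):
        for i in range(len(breakpoints) - 1):
            x0, x1 = breakpoints[i], breakpoints[i + 1]
            if x0 <= x <= x1:
                t = (x - x0) / (x1 - x0)           # position within the segment
                return ys[i] + t * (ys[i + 1] - ys[i])
        raise ValueError("x is outside the breakpoint range")

    return approx

# Four segments covering sin on [0, pi/2].
approx = piecewise_linear(
    math.sin, [0, math.pi / 8, math.pi / 4, 3 * math.pi / 8, math.pi / 2]
)

error = abs(math.sin(1.0) - approx(1.0))
print(error)  # small: the chain of segments tracks the curve closely
```

With just four segments the worst-case error over the interval is already far smaller than any single tangent line could achieve; adding breakpoints shrinks it further at the cost of a bigger problem.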

Real-World Applications

Linearization is a daily tool in control engineering. Designing a controller for a system (cruise control in a car, the autopilot on an aircraft, a temperature regulator in a chemical plant) almost always starts by linearizing the system’s equations around the desired operating condition. The linear model is then used to design a controller that keeps the system stable and responsive. As long as the system stays near that operating condition, the controller works well.

In robotics, feedback linearization is a technique that mathematically cancels the nonlinear terms in a robot’s equations of motion, effectively transforming the system into a linear one that’s easier to control. This has been applied to everything from autonomous ground vehicles tracking paths across agricultural fields to stabilizing drones in flight. The technique works for both simple single-input systems and complex multi-input, multi-output platforms.

Engineering software automates much of this process. Tools like MATLAB’s Simulink Control Design can linearize a model either by computing partial derivatives block by block or by numerically perturbing inputs and states to estimate the linearized response. You specify the operating point (the steady-state values of all system states and inputs) and the software returns the linear state-space model. The critical requirement is that your states should be at or near steady state. Otherwise, the linear model is only accurate over a very short time window.
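A toy version of the numerical-perturbation approach can be sketched in Python rather than MATLAB. The model, the helper name, and all numbers below are hypothetical:

```python
def linearize_numerically(f, x0, u0, h=1e-6):
    """Estimate A = df/dx and B = df/du at an operating point by perturbing
    the state and the input one at a time, using central differences."""
    A = (f(x0 + h, u0) - f(x0 - h, u0)) / (2 * h)
    B = (f(x0, u0 + h) - f(x0, u0 - h)) / (2 * h)
    return A, B

# A toy first-order motor model (hypothetical): dx/dt = -0.8*x + 2.0*u.
f = lambda x, u: -0.8 * x + 2.0 * u

A, B = linearize_numerically(f, x0=2.5, u0=1.0)
print(A, B)  # close to -0.8 and 2.0, the model's true coefficients
```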