A physics engine is software that simulates how objects move and interact in a virtual environment. It handles the math behind gravity, collisions, friction, and momentum so that objects in a game, simulation, or training environment behave the way you’d expect them to in real life. Every time a ball bounces off a wall in a video game or a virtual robot learns to walk, a physics engine is doing the work behind the scenes.
What a Physics Engine Actually Does
At its core, a physics engine takes the laws of motion (the same ones from high school physics class) and applies them to virtual objects, frame by frame. It tracks where every object is, how fast it’s moving, what forces are acting on it, and what happens when it hits something else. This breaks down into a few key jobs.
Dynamics solving is the main task. The engine calculates how each object moves based on its mass, the forces pushing or pulling on it, and any torques (twisting forces) being applied. Drop a crate off a ledge, and the dynamics solver figures out how fast it falls, how it rotates on the way down, and where it lands.
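The crate example above can be sketched in a few lines. This is an illustrative toy, not any real engine's API: Newton's second law gives acceleration from force, its rotational analogue gives angular acceleration from torque, and both are integrated over one frame.

```python
# Minimal rigid-body dynamics step for a falling, tumbling crate (illustrative sketch).
mass = 10.0             # kg
inertia = 2.0           # kg*m^2, moment of inertia about the spin axis
gravity = -9.81         # m/s^2

height = 5.0            # m
angle = 0.0             # rad
velocity = 0.0          # m/s, vertical
angular_velocity = 0.0  # rad/s

dt = 1.0 / 60.0         # one frame at 60 Hz

force = mass * gravity  # N, net force (just gravity here)
torque = 1.5            # N*m, e.g. from an off-center push

# Newton's second law, linear and rotational:
acceleration = force / mass
angular_acceleration = torque / inertia

velocity += acceleration * dt
angular_velocity += angular_acceleration * dt
height += velocity * dt
angle += angular_velocity * dt
```

A real solver repeats this step for every object, every frame, accumulating all forces and torques before integrating.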
Collision detection checks whether any objects are overlapping or about to overlap. This is computationally expensive because, in a scene with hundreds of objects, the engine needs to check for potential contact between all of them, many times per second. To keep this manageable, most engines use a two-phase approach: a fast, rough "broad phase" that eliminates obviously distant objects, followed by a precise "narrow phase" check on the remaining pairs.
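A minimal sketch of that two-phase approach, with made-up shapes rather than a real engine's data structures: a cheap axis-aligned bounding-box (AABB) test prunes distant pairs, and an exact circle-vs-circle test runs only on the survivors.

```python
# Two-phase collision detection sketch (names and shapes are illustrative).
from itertools import combinations

def aabb_overlap(a, b):
    """a, b are (min_x, min_y, max_x, max_y) boxes; cheap broad-phase test."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def circles_touch(c1, c2):
    """c is (x, y, radius); precise narrow-phase test."""
    dx, dy = c1[0] - c2[0], c1[1] - c2[1]
    r = c1[2] + c2[2]
    return dx * dx + dy * dy <= r * r

circles = [(0.0, 0.0, 1.0), (1.5, 0.0, 1.0), (10.0, 10.0, 1.0)]
boxes = [(x - r, y - r, x + r, y + r) for (x, y, r) in circles]

# Broad phase keeps only pairs whose boxes overlap...
candidates = [(i, j) for i, j in combinations(range(len(circles)), 2)
              if aabb_overlap(boxes[i], boxes[j])]
# ...then the narrow phase confirms actual contact.
contacts = [(i, j) for i, j in candidates if circles_touch(circles[i], circles[j])]
```

Here only one of the three possible pairs survives the broad phase, so the more expensive test runs once instead of three times; with hundreds of objects, that pruning is where the savings come from.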
Collision response determines what happens after two objects make contact. Do they bounce? Slide? Shatter? The engine uses properties like friction, bounciness (restitution), and mass to calculate the outcome. Constraints also fall into this category. These are rules like “this door can only swing on its hinge” or “this chain link is connected to the next one,” which limit how objects can move relative to each other.
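As one concrete case of collision response, here is a head-on (1D) impulse resolution using mass and restitution, written as an illustrative sketch rather than any engine's actual solver:

```python
# Collision response sketch: 1D head-on impulse between two rigid bodies.
# Restitution sets how much of the approach speed survives as separation speed.

def resolve_collision_1d(m1, v1, m2, v2, restitution):
    """Return post-impact velocities for a head-on collision."""
    approach = v1 - v2                # closing speed along the contact normal
    if approach <= 0:
        return v1, v2                 # already separating; nothing to do
    # Impulse magnitude so separation speed = restitution * approach speed.
    j = -(1 + restitution) * approach / (1 / m1 + 1 / m2)
    return v1 + j / m1, v2 - j / m2

# A 1 kg ball at 4 m/s hits a 3 kg ball at rest, with restitution 0.5.
v1, v2 = resolve_collision_1d(1.0, 4.0, 3.0, 0.0, 0.5)
```

Momentum is conserved (1 × 4 = 1 × (−0.5) + 3 × 1.5), and the bodies separate at half the speed they approached, matching the restitution of 0.5. Friction works similarly but applies impulses along the contact surface instead of along the normal.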
How Motion Gets Calculated
Physics engines need to predict where an object will be a tiny fraction of a second into the future, then repeat that prediction over and over. This process is called numerical integration, and the method an engine uses has a big impact on both accuracy and performance.
The simplest approach is the Euler method. It looks at an object’s current position and velocity, assumes the velocity stays constant for one tiny time step, and moves the object accordingly. It’s fast to compute but has a fundamental flaw: it doesn’t conserve energy properly. A ball bouncing in an Euler simulation will gradually gain or lose height over time, drifting further from reality with every frame. Making the time step smaller reduces this error but never eliminates it entirely.
A common improvement is to use the velocity at the end of the time step (rather than the beginning) to update position. This variation, called the symplectic Euler method (or semi-implicit Euler), is much better at conserving energy and is the workhorse behind many real-time physics engines in games.
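The difference between the two methods shows up clearly on an undamped spring oscillator. In this sketch (parameters are arbitrary), explicit Euler visibly pumps energy into the system over a few hundred steps, while the symplectic variant keeps it close to the true value:

```python
# Explicit vs. symplectic Euler on an undamped spring (illustrative comparison).

def simulate(symplectic, steps=600, dt=0.01, k=10.0, m=1.0):
    x, v = 1.0, 0.0
    for _ in range(steps):
        a = -k * x / m              # spring force: F = -k*x
        if symplectic:
            v += a * dt             # update velocity first...
            x += v * dt             # ...then move using the NEW velocity
        else:
            x += v * dt             # explicit Euler: move using the OLD velocity
            v += a * dt
    return 0.5 * m * v * v + 0.5 * k * x * x   # total mechanical energy

initial_energy = 0.5 * 10.0 * 1.0 ** 2          # 5.0 J at the start
explicit_energy = simulate(symplectic=False)    # drifts well above 5.0
symplectic_energy = simulate(symplectic=True)   # stays near 5.0
```

The only difference between the two branches is the order of the two updates, yet one simulation blows up slowly and the other stays stable, which is why the symplectic ordering is the default in practice.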
For higher accuracy, engines can use Runge-Kutta methods, which sample the slope of motion at intermediate points within each time step (the midpoint, in the second-order variant) rather than just at the start or end. This captures changes in acceleration more faithfully and introduces higher-order correction terms that make the simulation noticeably more precise. The trade-off is more computation per step, which matters when you’re trying to hit 60 frames per second with hundreds of interacting objects.
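One way to sketch the midpoint variant (second-order Runge-Kutta) for a position/velocity state, again with illustrative numbers rather than a real engine's code:

```python
# Midpoint (second-order Runge-Kutta) integration step, as a sketch.
# It samples the derivative halfway through the step instead of only at the start.

def accel(x):
    return -10.0 * x              # example: spring with k/m = 10

def midpoint_step(x, v, dt):
    # Half-step estimate of where the state will be...
    x_mid = x + v * (dt / 2)
    v_mid = v + accel(x) * (dt / 2)
    # ...then advance the full step using the midpoint slopes.
    return x + v_mid * dt, v + accel(x_mid) * dt

x, v = 1.0, 0.0
for _ in range(600):              # 6 simulated seconds at dt = 0.01
    x, v = midpoint_step(x, v, 0.01)

energy = 0.5 * v * v + 0.5 * 10.0 * x * x   # stays near the initial 5.0 J
```

Note the cost: two force evaluations per step instead of one, and the classic fourth-order method needs four, which is the per-step price the article mentions.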
Rigid Bodies vs. Soft Bodies
Most physics engines start with rigid body dynamics, which treats every object as a solid shape that never bends or deforms. A wooden crate, a bowling ball, a car chassis: these are all rigid bodies. The math is relatively straightforward because the engine only needs to track each object’s position, orientation, and velocity. Rigid body simulation is fast and covers the vast majority of what games and simulations need.
Soft body dynamics is a different challenge. Cloth, jelly, muscle tissue, and rubber all change shape when forces act on them. Instead of tracking one object as a single unit, the engine models it as a mesh of connected points, each influencing its neighbors. A flag waving in the wind might be represented by thousands of tiny vertices, all pulling on each other. This is far more computationally demanding, and getting soft bodies to interact correctly with rigid ones (say, a cloth draped over a table) adds another layer of complexity. Fluid simulation pushes the cost even higher, since liquids don’t hold any fixed shape at all and require tracking the movement of huge numbers of particles.
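The mesh-of-connected-points idea scales down to a toy example: a 1D chain of point masses joined by springs, i.e. a hanging rope. All names and constants here are illustrative; cloth simulation is the same pattern with a 2D grid of points instead of a chain.

```python
# Soft-body sketch: a chain of point masses joined by springs (a toy "rope").
n = 5                      # number of point masses
rest = 1.0                 # spring rest length between neighbors
k = 50.0                   # spring stiffness
mass = 0.1                 # kg per point
dt = 0.005
gravity = -9.81

ys = [0.0] * n             # vertical positions; point 0 is pinned in place
vs = [0.0] * n

for _ in range(2000):      # 10 simulated seconds; enough to settle
    forces = [mass * gravity] * n
    for i in range(n - 1):                 # each spring pulls its two endpoints
        stretch = (ys[i] - ys[i + 1]) - rest
        f = k * stretch                    # Hooke's law on the vertical gap
        forces[i] -= f                     # pulled down toward the point below
        forces[i + 1] += f                 # pulled up toward the point above
    for i in range(1, n):                  # point 0 stays pinned
        vs[i] += forces[i] / mass * dt     # symplectic Euler update
        vs[i] *= 0.99                      # light damping so the chain settles
        ys[i] += vs[i] * dt
```

After the chain settles, the top spring is stretched the most because it supports the weight of everything below it; a flag in the wind is this same computation with thousands of points and wind forces added in, which is where the cost comes from.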
Major Physics Engines in Use Today
A handful of physics engines power the majority of games and simulations you’ll encounter.
- NVIDIA PhysX is one of the most widely adopted, integrated directly into both Unreal Engine and Unity. It handles rigid body simulations, cloth, and fluid dynamics, and can offload work to the GPU for scenes with many interacting objects.
- Havok is a high-performance commercial engine commonly found in AAA game titles. It’s known for advanced rigid body dynamics, character physics, and vehicle simulations, though its licensing costs put it out of reach for many smaller studios.
- Bullet is an open-source engine used across games, film production, and scientific simulation. It supports both rigid and soft body dynamics and is highly customizable, making it popular with developers who need to modify the engine’s behavior for specialized projects.
- Box2D is a lightweight, open-source engine built specifically for 2D games. It provides solid collision detection and rigid body physics for side-scrollers, platformers, and mobile games without the overhead of a full 3D system.
- Unity Physics (DOTS) is Unity’s own multithreaded physics solution, designed for scalability in projects that need to simulate large numbers of objects simultaneously.
For indie developers and smaller teams, open-source options like Bullet and Box2D are cost-effective starting points. Larger studios with bigger budgets often lean toward Havok or PhysX for their optimization and support ecosystems.
GPU Acceleration
Physics simulation involves repeating the same calculations across many objects at once, which makes it a natural fit for GPUs. A graphics card can process thousands of small operations in parallel, compared to a CPU that handles fewer tasks but with more flexibility.
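The data-parallel shape of the work can be illustrated with NumPy (an assumption for this sketch; engines use GPU kernels, not NumPy): one gravity step is applied to 100,000 particles as whole-array operations, the same pattern a GPU would run with one thread per particle.

```python
# Why physics maps well to parallel hardware: the per-object update is identical
# arithmetic repeated across every object. Illustrative sketch only.
import numpy as np

n = 100_000
positions = np.zeros((n, 3))
velocities = np.random.default_rng(0).normal(size=(n, 3))
gravity = np.array([0.0, -9.81, 0.0])
dt = 1.0 / 60.0

# One simulation step for ALL particles at once: no per-object loop in sight.
velocities += gravity * dt
positions += velocities * dt
```

Nothing in this step depends on any other particle, so the hardware is free to compute them all simultaneously; collision detection breaks that independence, which is why it takes more care to parallelize.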
The performance difference can be dramatic. In engineering fluid simulations, a single GPU has been shown to deliver solve speeds equivalent to over 400 CPU cores while reducing energy consumption by 67%. For aerodynamics calculations, GPU solvers run 3 to 10 times faster than their CPU counterparts. Scaling up to multiple GPUs compounds the gains: going from two to four GPUs in one benchmark reduced total simulation time by 78%.
These numbers come from engineering-grade simulations rather than game engines, but the same principle applies. PhysX, for instance, uses GPU acceleration to handle dense scenes with many colliding objects that would overwhelm a CPU-only approach. As GPU hardware gets faster, physics simulations can grow more detailed without sacrificing frame rate.
Uses Beyond Gaming
Physics engines started in games, but they’ve become essential tools in fields that need realistic virtual environments. Robotics is one of the biggest growth areas. NVIDIA’s Newton physics engine, developed in collaboration with Google DeepMind and Disney Research, is built specifically for robot learning. It lets robots practice tasks like grasping, walking, and navigating obstacles in a simulated world before ever touching real hardware. This is safer, cheaper, and faster than training on physical robots that can break or cause damage.
Autonomous vehicle development relies heavily on physics simulation too. Self-driving car systems need to encounter millions of driving scenarios, including rare and dangerous ones, to learn safe behavior. Running those scenarios in a physics-accurate virtual world lets engineers test edge cases that would be impossible or unethical to stage on real roads.
Film and animation studios use physics engines for visual effects: simulating explosions, collapsing buildings, ocean waves, and fabric movement. Architecture and engineering firms use them to test how structures respond to stress. And in scientific research, physics engines model everything from protein folding to planetary orbits, providing a computational sandbox for testing theories that can’t easily be tested in a lab.