To multiply two matrices, you take each row of the first matrix and each column of the second matrix, multiply their corresponding entries together, and add the results. That sum becomes one entry in your answer matrix. The process repeats for every row-column combination until the entire result is filled in.
The One Rule You Must Check First
Before multiplying, check that the number of columns in the first matrix equals the number of rows in the second matrix. If the first matrix is m × n (m rows, n columns) and the second is n × k, the multiplication works and produces an m × k result. If those inner dimensions don’t match, the multiplication is undefined.
For example, a 2×3 matrix can multiply with a 3×2 matrix (the inner dimensions are both 3), and the result will be 2×2. But a 2×3 matrix cannot multiply with another 2×3 matrix: the inner dimensions are 3 and 2, and 3 does not equal 2.
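The dimension rule is easy to encode as a quick check. The sketch below is illustrative (the function name `can_multiply` is made up for this example):

```python
def can_multiply(shape_a, shape_b):
    """Return the result shape (m, k) if an m×n matrix can multiply an
    n×k matrix, or None if the inner dimensions disagree."""
    m, n = shape_a
    rows_b, k = shape_b
    return (m, k) if n == rows_b else None

print(can_multiply((2, 3), (3, 2)))  # (2, 2) — inner dimensions both 3
print(can_multiply((2, 3), (2, 3)))  # None — undefined, 3 ≠ 2
```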
Step-by-Step With a Simple Example
Suppose you have two matrices:
A = [1, 2, 3; 4, 5, 6] (a 2×3 matrix)
B = [7, 8; 9, 10; 11, 12] (a 3×2 matrix)
The result C will be a 2×2 matrix. To find the entry in row i, column j of C, you take the dot product of row i from A and column j from B. That means you multiply matching entries and add them up.
C₁₁ (row 1 of A, column 1 of B): (1×7) + (2×9) + (3×11) = 7 + 18 + 33 = 58
C₁₂ (row 1 of A, column 2 of B): (1×8) + (2×10) + (3×12) = 8 + 20 + 36 = 64
C₂₁ (row 2 of A, column 1 of B): (4×7) + (5×9) + (6×11) = 28 + 45 + 66 = 139
C₂₂ (row 2 of A, column 2 of B): (4×8) + (5×10) + (6×12) = 32 + 50 + 72 = 154
So C = [58, 64; 139, 154].
The general formula for entry cᵢⱼ is: multiply the first element of row i by the first element of column j, the second by the second, and so on through all n pairs, then sum everything. Written out: cᵢⱼ = aᵢ₁b₁ⱼ + aᵢ₂b₂ⱼ + … + aᵢₙbₙⱼ.
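The worked example and the formula for cᵢⱼ translate directly into three nested loops. This sketch uses plain Python lists of lists rather than a library, to keep every step visible:

```python
def mat_mul(A, B):
    """Multiply A (m×n) by B (n×k) using the row-times-column rule."""
    m, n = len(A), len(A[0])
    assert len(B) == n, "inner dimensions must match"
    k = len(B[0])
    C = [[0] * k for _ in range(m)]
    for i in range(m):          # each row of A
        for j in range(k):      # each column of B
            # dot product of row i of A with column j of B
            C[i][j] = sum(A[i][p] * B[p][j] for p in range(n))
    return C

A = [[1, 2, 3], [4, 5, 6]]
B = [[7, 8], [9, 10], [11, 12]]
print(mat_mul(A, B))  # [[58, 64], [139, 154]]
```

Running it on the example matrices reproduces C = [58, 64; 139, 154] from above.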
Scalar Multiplication Is Different
Multiplying a matrix by a single number (a scalar) is much simpler. You just multiply every entry in the matrix by that number. If you multiply a 3×4 matrix by 5, you get a 3×4 matrix where each entry is five times the original. No dot products, no dimension rules to check. When people say “matrix multiplication,” they almost always mean multiplying two matrices together, not scalar multiplication.
Order Matters
Unlike regular number multiplication, matrix multiplication is not commutative. A × B does not generally equal B × A, even when both products are defined. In many cases, reversing the order changes the dimensions of the result entirely. A 2×3 matrix times a 3×2 matrix gives a 2×2 result, but reversing them gives a 3×3 result.
Even when both A × B and B × A produce the same size matrix (say both A and B are 2×2), the entries will usually be different. This is one of the most common mistakes in linear algebra, so it’s worth committing to memory: always keep your matrices in the correct order.
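A small concrete case makes the point. Here B is the matrix that swaps two coordinates; multiplying on the right swaps A's columns, while multiplying on the left swaps A's rows, so the two products differ:

```python
def mat_mul(A, B):
    """Standard row-times-column multiplication."""
    return [[sum(A[i][p] * B[p][j] for p in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]  # swaps two coordinates
print(mat_mul(A, B))  # [[2, 1], [4, 3]] — columns of A swapped
print(mat_mul(B, A))  # [[3, 4], [1, 2]] — rows of A swapped
```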
The Identity Matrix
The identity matrix is the matrix equivalent of the number 1. It’s a square matrix with 1s along the main diagonal (top-left to bottom-right) and 0s everywhere else. The 2×2 identity matrix looks like [1, 0; 0, 1], and the 3×3 version has the same pattern with an extra row and column.
Multiplying any square matrix A by the appropriately sized identity matrix I gives you A back, regardless of order: A × I = I × A = A. This property makes the identity matrix useful as a starting point in many algorithms, from solving systems of equations to computing matrix inverses.
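The A × I = I × A = A property can be verified directly. This sketch builds an identity matrix of any size and checks both orders against an arbitrary 2×2 matrix:

```python
def identity(n):
    """n×n identity: 1s on the main diagonal, 0s everywhere else."""
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def mat_mul(A, B):
    """Standard row-times-column multiplication."""
    return [[sum(A[i][p] * B[p][j] for p in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[2, 7], [1, 8]]
I = identity(2)
print(mat_mul(A, I) == A and mat_mul(I, A) == A)  # True
```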
Tips for Doing It by Hand
- Write the dimensions first. Before you start computing, write the size of each matrix next to it. Confirm the inner dimensions match and note the size of your result matrix so you know how many entries to calculate.
- Use your finger as a guide. Trace along the row of the first matrix with one finger and down the column of the second matrix with another. This physical tracking helps you avoid skipping entries or mixing up rows and columns.
- Fill in the result systematically. Work left to right across each row of the result before moving to the next row. This keeps you organized when matrices get larger.
- Double-check with the identity matrix. If you’re learning the process, practice by multiplying a small matrix by the identity matrix. You should get the original matrix back. If you don’t, you’ve made a mechanical error somewhere and can diagnose your technique.
How Big the Work Gets
For two n × n matrices, the standard method requires roughly n³ individual multiplication operations. A pair of 2×2 matrices needs 8 multiplications. A pair of 10×10 matrices needs about 1,000. A pair of 100×100 matrices needs about 1,000,000. This cubic growth is why large matrix multiplications are left to computers rather than done by hand.
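The n³ count can be confirmed by instrumenting the standard triple loop: n² entries in the result, each a dot product of n terms.

```python
def count_multiplications(n):
    """Run the standard algorithm on two n×n matrices and count the
    scalar multiplications it performs."""
    A = [[1] * n for _ in range(n)]
    B = [[1] * n for _ in range(n)]
    count = 0
    for i in range(n):          # n rows in the result
        for j in range(n):      # n columns in the result
            for p in range(n):  # n terms per dot product
                _ = A[i][p] * B[p][j]
                count += 1
    return count

for n in (2, 10, 100):
    print(n, count_multiplications(n))  # 8, 1000, 1000000 — n³ each time
```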
Faster algorithms exist. Strassen’s algorithm, published in 1969, reduces the workload by using a clever divide-and-conquer approach that effectively lowers the exponent from 3 to about 2.81. More recently, DeepMind’s AlphaTensor system used reinforcement learning to discover new matrix multiplication algorithms that are faster still for specific matrix sizes. These optimizations matter enormously in fields like machine learning and scientific computing, where matrices with thousands of rows and columns are multiplied billions of times during training and simulation.
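Strassen's idea is visible already at the 2×2 level: seven cleverly chosen products replace the usual eight. The sketch below shows only that base case with plain numbers; the full algorithm applies the same seven formulas recursively to matrix blocks, which is where the exponent of about 2.81 comes from:

```python
def strassen_2x2(A, B):
    """Strassen's seven-product scheme for 2×2 matrices (base case only)."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    # Seven products instead of the standard eight
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    # Recombine into the four entries of the product
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

The result matches the standard algorithm; the saving per step is only one multiplication, but applied recursively it compounds across the whole computation.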
Where Matrix Multiplication Shows Up
If you’re learning this for a math class, it helps to know why the skill matters beyond exams. In computer graphics, every rotation, scaling, and movement of a 3D object on screen is performed by multiplying a matrix by a coordinate vector. Chaining multiple transformations together is just multiplying those matrices in sequence. In machine learning, neural networks are essentially stacks of matrix multiplications: input data multiplied by weight matrices, passed through activation functions, then multiplied by more weight matrices. The speed of matrix multiplication directly determines how fast AI models can be trained and run. Physics simulations, economics models, search engine algorithms, and image compression all rely on it as a core operation.
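The graphics case can be shown in a few lines. Rotating a 2D point is exactly a rotation matrix times a coordinate vector (the helper name `rotate` is illustrative):

```python
import math

def rotate(point, angle_rad):
    """Rotate a 2D point about the origin: the rotation matrix
    [[cos, -sin], [sin, cos]] multiplied by the column vector [x, y]."""
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    x, y = point
    return (c * x - s * y, s * x + c * y)

x, y = rotate((1.0, 0.0), math.pi / 2)  # quarter turn counterclockwise
print(round(x, 10), round(y, 10))  # 0.0 1.0
```

Chaining transformations, as the text describes, amounts to multiplying the corresponding matrices together once and then applying the single combined matrix to every point.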