Multiply matrices with numbers or variables, with step-by-step solutions.
Enter matrix dimensions and values, then click "Calculate" to see the step-by-step solution.
Enter the number of rows and columns for each matrix. For the matrices to be compatible for multiplication, the number of columns in Matrix A must equal the number of rows in Matrix B.
Fill in each cell of your matrices with numbers (or variables if in Variable Mode).
Click the Calculate Matrix Product button to perform the multiplication.
The calculator will multiply the matrices by taking the dot product of rows from Matrix A with columns from Matrix B:
Result[i,j] = (Row i from A) · (Column j from B)
= A[i,1]×B[1,j] + A[i,2]×B[2,j] + ... + A[i,n]×B[n,j]
Review the result matrix and the detailed step-by-step solution below it.
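The row-by-column rule above can be sketched in a few lines of code. This is a minimal pure-Python illustration, not the calculator's actual implementation; the function name `mat_mul` is just for this example:

```python
def mat_mul(A, B):
    """Multiply matrix A (m x n) by matrix B (n x p) using the row-by-column rule."""
    n = len(A[0])            # columns of A
    if n != len(B):          # must equal rows of B for compatibility
        raise ValueError("columns of A must equal rows of B")
    m, p = len(A), len(B[0])
    # Result[i][j] = dot product of row i of A and column j of B
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

print(mat_mul([[1, 2, 3], [4, 5, 6]], [[7, 8], [9, 10], [11, 12]]))
# → [[58, 64], [139, 154]]
```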
Matrix multiplication is a binary operation that takes a pair of matrices and produces another matrix. Unlike simple arithmetic multiplication, matrix multiplication follows specific rules and has requirements on the dimensions of the matrices.
💡 Matrix multiplication is fundamental to linear algebra and has numerous applications in science, engineering, computer graphics, and data analysis.
⚠️ Two matrices can be multiplied only when the number of columns in the first matrix equals the number of rows in the second matrix.
Matrix A (3×2):
a11 | a12 |
a21 | a22 |
a31 | a32 |
Matrix B (2×4):
b11 | b12 | b13 | b14 |
b21 | b22 | b23 | b24 |
Result C (3×4):
c11 | c12 | c13 | c14 |
c21 | c22 | c23 | c24 |
c31 | c32 | c33 | c34 |
Columns of A (2) = Rows of B (2) ✓ Compatible
To calculate the element at position (i, j) in the resulting matrix:
🔢 Mathematically, if C = A×B, then:
C[i,j] = A[i,1]×B[1,j] + A[i,2]×B[2,j] + ... + A[i,n]×B[n,j]
Let's multiply these two matrices:
Matrix A (2×3):
1 | 2 | 3 |
4 | 5 | 6 |
Matrix B (3×2):
7 | 8 |
9 | 10 |
11 | 12 |
Calculating C[1,1]:
Take row 1 from A: [1, 2, 3]
Take column 1 from B: [7, 9, 11]
Multiply corresponding elements and sum:
C[1,1] = 1×7 + 2×9 + 3×11
C[1,1] = 7 + 18 + 33 = 58
Calculating C[1,2]:
Take row 1 from A: [1, 2, 3]
Take column 2 from B: [8, 10, 12]
Multiply corresponding elements and sum:
C[1,2] = 1×8 + 2×10 + 3×12
C[1,2] = 8 + 20 + 36 = 64
Calculating C[2,1]:
Take row 2 from A: [4, 5, 6]
Take column 1 from B: [7, 9, 11]
Multiply corresponding elements and sum:
C[2,1] = 4×7 + 5×9 + 6×11
C[2,1] = 28 + 45 + 66 = 139
Calculating C[2,2]:
Take row 2 from A: [4, 5, 6]
Take column 2 from B: [8, 10, 12]
Multiply corresponding elements and sum:
C[2,2] = 4×8 + 5×10 + 6×12
C[2,2] = 32 + 50 + 72 = 154
Final result:
58 | 64 |
139 | 154 |
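The four sums above can be reproduced directly. This short check uses the example's values; the loop indices are shifted by one so the printed labels match the 1-based notation used in the steps:

```python
A = [[1, 2, 3], [4, 5, 6]]
B = [[7, 8], [9, 10], [11, 12]]

# Each C[i,j] is the dot product of row i of A with column j of B
for i in range(2):
    for j in range(2):
        c = sum(A[i][k] * B[k][j] for k in range(3))
        print(f"C[{i+1},{j+1}] = {c}")
# → C[1,1] = 58, C[1,2] = 64, C[2,1] = 139, C[2,2] = 154
```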
The identity matrix (I) has 1s on the main diagonal and 0s elsewhere. When a matrix is multiplied by the identity matrix of appropriate size, the original matrix is unchanged.
A × I = I × A = A
This means the identity matrix works like the number 1 in regular multiplication.
The identity matrix I3 for 3×3 matrices looks like this:
1 | 0 | 0 |
0 | 1 | 0 |
0 | 0 | 1 |
When we multiply any 3×3 matrix by I3, the matrix is unchanged:
A:
5 | 2 | 8 |
1 | 3 | 7 |
4 | 6 | 9 |
I3:
1 | 0 | 0 |
0 | 1 | 0 |
0 | 0 | 1 |
A × I3:
5 | 2 | 8 |
1 | 3 | 7 |
4 | 6 | 9 |
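The identity property is easy to verify in code. A quick sketch with plain nested lists (helper names `identity` and `mat_mul` are illustrative):

```python
def identity(n):
    """n x n identity matrix: 1s on the main diagonal, 0s elsewhere."""
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def mat_mul(A, B):
    """Standard row-by-column matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[5, 2, 8], [1, 3, 7], [4, 6, 9]]
print(mat_mul(A, identity(3)) == A)   # → True
print(mat_mul(identity(3), A) == A)   # → True
```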
💡 Key Insight: The identity matrix is crucial in matrix algebra. It's used in finding matrix inverses and solving systems of linear equations.
Identity matrices exist for any square dimension (2×2, 3×3, 4×4, etc.).
Unlike regular multiplication with numbers, matrix multiplication is not commutative. This is one of the most important properties to understand.
A × B ≠ B × A (in general)
The order of multiplication matters! This is different from regular number multiplication where a×b = b×a.
A:
1 | 2 |
3 | 4 |
B:
5 | 6 |
7 | 8 |
A × B:
19 | 22 |
43 | 50 |
B:
5 | 6 |
7 | 8 |
A:
1 | 2 |
3 | 4 |
B × A:
23 | 34 |
31 | 46 |
⚠️ Important: The results are completely different! A×B and B×A produce different matrices.
This non-commutativity has important implications in many applications, especially in physics and computer graphics where the order of transformations matters.
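You can confirm the two different products with a few lines of code (using the same example matrices; `mat_mul` is an illustrative helper, not the calculator's code):

```python
def mat_mul(A, B):
    """Standard row-by-column matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_mul(A, B))  # → [[19, 22], [43, 50]]
print(mat_mul(B, A))  # → [[23, 34], [31, 46]]
```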
When multiplying three or more matrices, the grouping doesn't matter: (A×B)×C = A×(B×C).
This lets you evaluate whichever product first is most convenient, as long as the left-to-right order of the matrices stays the same.
Matrix multiplication distributes over addition: A×(B+C) = A×B + A×C.
This works like the distributive property of regular multiplication.
When multiplying a matrix by a scalar (a single number), multiply each element by that number. For example, multiplying by the scalar 3:
A:
2 | 4 |
1 | 3 |
3A:
6 | 12 |
3 | 9 |
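Scalar multiplication is a one-liner in code. This sketch triples each element, matching the example above:

```python
A = [[2, 4], [1, 3]]
scaled = [[3 * x for x in row] for row in A]   # multiply every element by 3
print(scaled)  # → [[6, 12], [3, 9]]
```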
The transpose of a product equals the product of the transposes in reversed order: (A×B)ᵀ = Bᵀ×Aᵀ.
Note: The transpose flips a matrix over its diagonal, turning rows into columns.
Original
a | b |
c | d |
Transpose
a | c |
b | d |
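The reversed-order transpose rule can be checked numerically. A sketch with illustrative helpers `mat_mul` and `transpose` (the latter uses Python's `zip(*M)` idiom to turn rows into columns):

```python
def mat_mul(A, B):
    """Standard row-by-column matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(M):
    """Flip a matrix over its diagonal: rows become columns."""
    return [list(col) for col in zip(*M)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
# (A x B)^T should equal B^T x A^T
print(transpose(mat_mul(A, B)) == mat_mul(transpose(B), transpose(A)))  # → True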
💡 Why These Properties Matter: These properties are the foundation of linear algebra and allow for solving complex systems of equations, proving mathematical theorems, and implementing efficient algorithms for computer graphics, machine learning, and quantum physics.
Matrix multiplication is a fundamental operation with powerful applications across numerous fields of science, technology, and business:
3D transformations (rotation, scaling, translation) are represented as matrix operations. Games and animation rely heavily on matrix multiplication.
Neural networks use matrix multiplication for both forward and backward propagation. Weight matrices transform input data through layers.
Matrix operations are used in encryption algorithms like Hill cipher. Secret keys are often represented as matrices.
Convolution operations in image filters (blurring, edge detection) are implemented as matrix multiplications.
Quantum states and operators are represented as matrices. Matrix multiplication describes how quantum systems evolve over time.
Stiffness matrices are used in finite element analysis to model how structures respond to forces and stresses.
Population genetics uses transition matrices to model how gene frequencies change over generations.
Systems of differential equations are solved using matrix methods. Mechanical systems are modeled with matrices.
Input-output models use matrices to analyze relationships between different sectors of an economy.
Adjacency matrices representing networks can be multiplied to find paths between nodes. PageRank algorithm uses matrix operations.
Portfolio risk analysis, option pricing, and other financial models use matrix operations to represent correlations and dependencies.
Linear programming and optimization problems use matrix formulations to find optimal solutions for resource allocation.
🔍 Understanding the Applied Value: These applications show why matrix multiplication is so important in real-world contexts. The ability to perform complex mathematical operations through matrix multiplication enables more efficient computation and clearer mathematical modeling.
💡 Remember: For matrix multiplication to work, the number of columns in Matrix A must equal the number of rows in Matrix B.
This matrix multiplier calculator follows the standard rules for multiplying matrices. If you need to calculate a matrix product for matrices of different sizes, make sure they are compatible for multiplication.
You can input numbers in Numeric Mode, or algebraic expressions with variables in Variable Mode.
⌨️ Pro Tip: You can use the Tab key to quickly navigate between matrix cells when entering values.
Need random values? Click the "Random Values" button to populate both matrices with random numbers between 1 and 10 (only available in Numeric Mode).
After entering your matrices, click the Calculate Matrix Product button to see the result and the step-by-step solution.
🔄 Want to start over? Click the "Clear" button to reset all matrix values.
In Variable Mode, the calculator will perform symbolic calculations and display algebraic expressions in the result.
Use the mode toggle switch at the top of the calculator to switch between Numeric Mode and Variable Mode.
When in Variable Mode, you'll see additional helper buttons for commonly used variables and operators to make entering expressions easier.
💡 Remember to clear your inputs when switching modes, as numeric and variable calculations work differently.
Matrix multiplication requires that the number of columns in the first matrix equals the number of rows in the second matrix. This is because each element in the result is calculated by taking the dot product of a row from the first matrix and a column from the second matrix, which requires them to be the same length.
Matrix multiplication follows specific rules as explained in this calculator, resulting in a matrix where each element is the sum of products of corresponding row and column elements.
Element-wise multiplication (sometimes called the Hadamard product) simply multiplies corresponding elements directly. For element-wise multiplication, both matrices must be the same size, and the result will also be that size.
Element-wise multiplication example:
A:
1 | 2 |
3 | 4 |
B:
5 | 6 |
7 | 8 |
A ∘ B:
5 | 12 |
21 | 32 |
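In code, the Hadamard product just pairs up elements at the same position, with no row-by-column sums. A sketch using the example values:

```python
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

# Hadamard (element-wise) product: same-size matrices, multiply position by position
hadamard = [[A[i][j] * B[i][j] for j in range(2)] for i in range(2)]
print(hadamard)  # → [[5, 12], [21, 32]]
```

Compare this with the matrix product of the same A and B, which is [[19, 22], [43, 50]].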
Unlike regular number multiplication where a×b = b×a, matrix multiplication is not commutative because the operation depends on the specific arrangement of elements in the matrices. The dimensions and internal structure of matrices make A×B and B×A different operations that generally yield different results.
In some special cases (like when multiplying by an identity matrix), matrix multiplication can be commutative, but this is the exception rather than the rule.
The standard algorithm for multiplying an m×n matrix by an n×p matrix has a time complexity of O(m×n×p). For square matrices of size n×n, this becomes O(n³).
However, more efficient algorithms exist for large matrices, such as Strassen's algorithm (O(n^2.807)) and the Coppersmith–Winograd algorithm (O(n^2.376)).
For practical purposes with small to medium-sized matrices (like those in this calculator), the standard algorithm is usually sufficient.
To use variables in your matrix calculations, switch to Variable Mode and enter algebraic expressions in the matrix cells.
The calculator will handle the algebraic expressions and display the result as simplified symbolic expressions.
💡 Variable Mode is particularly useful for teaching and learning linear algebra concepts, proving matrix properties, and working with parameterized matrices.
While this calculator specifically focuses on matrix multiplication, solving systems of linear equations typically involves additional matrix operations beyond multiplication alone.
For a full solution to systems of linear equations, you would need additional operations like matrix inversion or Gaussian elimination.
However, with the Variable Mode feature, you can now work with symbolic matrices that represent systems of equations, which is a step toward this capability.
For matrices A (m×n) and B (n×p), the product C = A×B is an m×p matrix where:
C[i,j] = Σ (k = 1 to n) A[i,k] × B[k,j]
or more explicitly:
C[i,j] = A[i,1]×B[1,j] + A[i,2]×B[2,j] + ... + A[i,n]×B[n,j]
This means each element of the result matrix is the dot product of the corresponding row of the first matrix and the corresponding column of the second matrix.
For matrices A and B to be multipliable, the number of columns of A must equal the number of rows of B: if A is m×n, then B must be n×p, and the product A×B is m×p.
⚠️ If the dimensions don't match, the multiplication is undefined.
Valid multiplications: (2×3)×(3×4) → 2×4; (3×3)×(3×3) → 3×3; (1×4)×(4×1) → 1×1.
Invalid multiplications: (2×3)×(2×3) or (3×4)×(2×4) — the inner dimensions don't match.
Property | Mathematical Form | Description |
---|---|---|
Non-Commutativity | A×B ≠ B×A (generally) | Matrix multiplication does not generally commute. The order matters. |
Associativity | (A×B)×C = A×(B×C) | The grouping of multiplications doesn't affect the result. |
Distributivity | A×(B+C) = A×B + A×C | Matrix multiplication distributes over addition. |
Identity | A×I = I×A = A | Multiplying by the identity matrix leaves a matrix unchanged. |
Zero Matrix | A×0 = 0×A = 0 | Multiplying by the zero matrix results in the zero matrix. |
Transpose | (A×B)ᵀ = Bᵀ×Aᵀ | The transpose of a product equals the product of the transposes in reversed order. |
Determinants | det(A×B) = det(A)×det(B) | The determinant of a product equals the product of the determinants. |
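The determinant property from the table is easy to verify for 2×2 matrices. A sketch with illustrative helpers `det2` and `mat_mul`:

```python
def det2(M):
    """Determinant of a 2x2 matrix: ad - bc."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def mat_mul(A, B):
    """Standard row-by-column matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]   # det(A) = -2
B = [[5, 6], [7, 8]]   # det(B) = -2
print(det2(mat_mul(A, B)) == det2(A) * det2(B))  # → True
```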
The identity matrix In is an n×n matrix with 1s on the main diagonal and 0s elsewhere.
For any m×n matrix A: Im × A = A × In = A.
When D is a diagonal matrix and A is any compatible matrix, D×A scales the rows of A by D's diagonal entries, and A×D scales the columns of A.
Permutation matrices rearrange rows (when multiplying from the left) or columns (when multiplying from the right).
The product of two upper (or lower) triangular matrices is also upper (or lower) triangular.
Different matrix multiplication algorithms have different time complexities:
Algorithm | Time Complexity | Notes |
---|---|---|
Standard Algorithm | O(n³) | For n×n matrices; Simple nested loops implementation |
Strassen's Algorithm | O(n^2.807) | Uses recursive divide-and-conquer approach |
Coppersmith–Winograd | O(n^2.376) | Improved theoretical algorithm, but rarely used in practice |
💡 Order of multiplication matters for efficiency. For matrices A (m×n), B (n×p), and C (p×q), computing (A×B)×C requires m×n×p + m×p×q operations, while A×(B×C) requires n×p×q + m×n×q operations. Choosing the right order can significantly reduce computation.
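The operation counts from the tip above can be computed directly. The dimensions below are illustrative, chosen to make the gap between the two groupings obvious:

```python
# Operation counts for multiplying A (m x n), B (n x p), C (p x q)
# under the two possible groupings.
m, n, p, q = 10, 100, 5, 50
left  = m * n * p + m * p * q   # (A x B) x C
right = n * p * q + m * n * q   # A x (B x C)
print(left, right)  # → 7500 75000
```

Here the grouping (A×B)×C is ten times cheaper, because multiplying A×B first collapses the large inner dimension n = 100 early.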