Solving Systems of Linear Equations with Matrices
Learn how to use matrices to solve systems of linear equations. Understand Ax = b, Gaussian elimination, and Cramer's rule with practical examples.
Detailed Explanation
Systems of Linear Equations
A system of linear equations can be written in matrix form as Ax = b, where A is the coefficient matrix, x is the vector of unknowns, and b is the vector of constants.
Example
2x + 3y = 8        | 2  3 | | x |   | 8 |
4x +  y = 6   =>   | 4  1 | | y | = | 6 |
Method 1: Direct Inverse
If A is invertible: x = A^(-1) * b
det(A) = (2)(1) - (3)(4) = 2 - 12 = -10
A^(-1) = (-1/10) * |  1  -3 |   | -0.1   0.3 |
                   | -4   2 | = |  0.4  -0.2 |

x = A^(-1) b = | -0.1   0.3 | | 8 |   | -0.8 + 1.8 |   | 1 |
               |  0.4  -0.2 | | 6 |   |  3.2 - 1.2 | = | 2 |
So x = 1, y = 2.
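The inverse-matrix calculation above can be sketched in Python for the 2x2 case; the function name and structure here are illustrative, not a standard library API:

```python
def solve_2x2_inverse(A, b):
    """Solve Ax = b for a 2x2 system via the explicit inverse formula."""
    (a11, a12), (a21, a22) = A
    det = a11 * a22 - a12 * a21
    if det == 0:
        raise ValueError("matrix is singular; no unique solution")
    # A^(-1) = (1/det) * [[a22, -a12], [-a21, a11]]  (adjugate over determinant)
    inv = [[ a22 / det, -a12 / det],
           [-a21 / det,  a11 / det]]
    # x = A^(-1) * b
    return [inv[0][0] * b[0] + inv[0][1] * b[1],
            inv[1][0] * b[0] + inv[1][1] * b[1]]

x, y = solve_2x2_inverse([[2, 3], [4, 1]], [8, 6])
print(x, y)  # approximately 1 and 2 (floating-point rounding may apply)
```

Note that computing an explicit inverse is fine for tiny systems like this, but for larger systems elimination-based methods are both faster and numerically safer.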
Method 2: Gaussian Elimination
Form the augmented matrix [A|b], reduce it to row echelon form (REF) using elementary row operations, then back-substitute to recover the unknowns.
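A minimal Python sketch of elimination with back-substitution (partial pivoting is added for numerical stability; the function name is illustrative):

```python
def gaussian_solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    # Build the augmented matrix [A|b], working on copies of the inputs.
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        # Partial pivoting: bring the row with the largest entry into pivot position.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        if M[pivot][col] == 0:
            raise ValueError("matrix is singular")
        M[col], M[pivot] = M[pivot], M[col]
        # Eliminate the entries below the pivot.
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    # Back-substitution on the resulting upper-triangular system.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

print(gaussian_solve([[2, 3], [4, 1]], [8, 6]))  # [1.0, 2.0]
```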
Method 3: Cramer's Rule
For small systems, each variable can be found by replacing the corresponding column of A with b and computing the ratio of determinants:
x_i = det(A_i) / det(A)
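The formula can be sketched directly in Python; the Laplace-expansion determinant below is exponential in n, so this is only practical for the small systems Cramer's rule is meant for (function names are illustrative):

```python
def det(M):
    """Determinant by Laplace expansion along the first row (fine for small n)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def cramer_solve(A, b):
    """Solve Ax = b via Cramer's rule: x_i = det(A_i) / det(A)."""
    d = det(A)
    if d == 0:
        raise ValueError("det(A) = 0; Cramer's rule does not apply")
    # A_i is A with column i replaced by b.
    return [det([row[:i] + [bi] + row[i + 1:] for row, bi in zip(A, b)]) / d
            for i in range(len(A))]

print(cramer_solve([[2, 3], [4, 1]], [8, 6]))  # [1.0, 2.0]
```

For the example system, det(A_0) = det([[8, 3], [6, 1]]) = -10 and det(A_1) = det([[2, 8], [4, 6]]) = -20, giving x = -10/-10 = 1 and y = -20/-10 = 2, matching the other two methods.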
Solution Types
- Unique solution: rank(A) = rank([A|b]) = n (number of unknowns)
- No solution: rank(A) < rank([A|b]) (inconsistent system)
- Infinite solutions: rank(A) = rank([A|b]) < n (underdetermined)
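The three cases above can be distinguished programmatically by comparing ranks. A sketch using exact arithmetic (Fraction avoids floating-point rank errors; function names are illustrative):

```python
from fractions import Fraction

def rank(M):
    """Rank via exact row reduction to echelon form."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        # Find a nonzero pivot in column c at or below row r.
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        # Eliminate the entries below the pivot.
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def classify(A, b):
    """Classify Ax = b by comparing rank(A), rank([A|b]), and n."""
    n = len(A[0])
    aug = [row + [bi] for row, bi in zip(A, b)]
    rA, rAb = rank(A), rank(aug)
    if rA < rAb:
        return "no solution"
    return "unique solution" if rA == n else "infinite solutions"

print(classify([[2, 3], [4, 1]], [8, 6]))  # unique solution
print(classify([[1, 1], [2, 2]], [3, 7]))  # no solution (inconsistent)
print(classify([[1, 1], [2, 2]], [3, 6]))  # infinite solutions
```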
Use Case
Solving linear systems is fundamental to engineering, physics, economics, and computer science. Applications include circuit analysis (Kirchhoff's laws), structural analysis (force equilibrium), network flow optimization, least-squares fitting, and finite element analysis. Nearly every numerical simulation involves solving large systems of linear equations.