Matrix & Determinant Formula Sheet

Complete Reference Guide for Linear Algebra

1. Matrices - Definition

A matrix is a rectangular array of numbers arranged in rows and columns.

$$ A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}_{m \times n} $$

Where $m$ is the number of rows and $n$ is the number of columns.

Matrix Types by Dimension

Square Matrix: $m = n$
Row Vector: $1 \times n$ matrix
Column Vector: $m \times 1$ matrix

2. Matrix Operations

Addition

Matrices must have the same dimensions.

$$ A + B = [a_{ij} + b_{ij}] $$

Scalar Multiplication

$$ cA = [ca_{ij}] $$

Matrix Multiplication

For $A_{m \times k}$ and $B_{k \times n}$:

$$ (AB)_{ij} = \sum_{p=1}^{k} a_{ip}b_{pj} $$

Note: Matrix multiplication is NOT commutative: $AB \neq BA$ in general.
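
A quick NumPy check of the row-by-column rule and of non-commutativity (the example matrices are arbitrary illustrations):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

# (AB)_ij = sum over p of a_ip * b_pj
print(A @ B)  # [[2 1]
              #  [4 3]]
print(B @ A)  # [[3 4]
              #  [1 2]]  -- differs from A @ B, so AB != BA here
```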

Transpose

$$ \text{If } A = [a_{ij}]_{m \times n} \text{, then } A^T = [a_{ji}]_{n \times m} $$

Trace

Sum of diagonal elements (square matrices only):

$$ \text{tr}(A) = \sum_{i=1}^{n} a_{ii} $$

Properties

$(A + B)^T = A^T + B^T$
$(AB)^T = B^T A^T$
$A(B + C) = AB + AC$
$(AB)C = A(BC)$
$\text{tr}(A + B) = \text{tr}(A) + \text{tr}(B)$
$\text{tr}(AB) = \text{tr}(BA)$
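
These identities are easy to spot-check numerically; a small NumPy sketch with random matrices (seeded for reproducibility):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

print(np.allclose((A + B).T, A.T + B.T))             # True
print(np.allclose((A @ B).T, B.T @ A.T))             # True: note the reversed order
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))  # True
```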

3. Special Matrices

Identity Matrix

$$ I = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} $$

$AI = IA = A$

Zero Matrix

$$ 0 = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} $$

All elements are zero

Diagonal Matrix

$$ D = \begin{bmatrix} d_1 & 0 & 0 \\ 0 & d_2 & 0 \\ 0 & 0 & d_3 \end{bmatrix} $$

Upper Triangular

$$ U = \begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{bmatrix} $$

Symmetric Matrix

$A = A^T$

$$ \begin{bmatrix} a & b & c \\ b & d & e \\ c & e & f \end{bmatrix} $$

Skew-Symmetric

$A^T = -A$

$$ \begin{bmatrix} 0 & a & b \\ -a & 0 & c \\ -b & -c & 0 \end{bmatrix} $$

Orthogonal Matrix

$A^T A = AA^T = I$

$A^T = A^{-1}$
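
A 2-D rotation matrix is a standard concrete example; a short NumPy check of both characterizations:

```python
import numpy as np

theta = np.pi / 6
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # rotation by theta

print(np.allclose(Q.T @ Q, np.eye(2)))     # True: Q^T Q = I
print(np.allclose(Q.T, np.linalg.inv(Q)))  # True: Q^T = Q^{-1}
```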

Idempotent Matrix

$A^2 = A$

4. Determinants

2×2 Matrix

$$ \begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc $$

3×3 Matrix

$$ \begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix} = a(ei-fh) - b(di-fg) + c(dh-eg) $$
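
For example, expanding along the first row:

$$ \begin{vmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 10 \end{vmatrix} = 1(50 - 48) - 2(40 - 42) + 3(32 - 35) = 2 + 4 - 9 = -3 $$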

Cofactor Expansion

Expanding along any fixed row $i$:

$$ \det(A) = \sum_{j=1}^{n} a_{ij}C_{ij} = \sum_{j=1}^{n} a_{ij}(-1)^{i+j}M_{ij} $$

where $M_{ij}$ is the minor (the determinant of the submatrix left after deleting row $i$ and column $j$) and $C_{ij} = (-1)^{i+j}M_{ij}$ is the cofactor.
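
A minimal recursive sketch of this expansion, expanding along row 0 (teaching code only: cofactor expansion costs $O(n!)$, so libraries compute determinants via LU factorization instead):

```python
import numpy as np

def det_cofactor(A: np.ndarray) -> float:
    """Determinant by cofactor expansion along the first row."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # Minor M_0j: delete row 0 and column j
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * det_cofactor(minor)
    return total

A = np.array([[1., 2., 3.], [4., 5., 6.], [7., 8., 10.]])
print(det_cofactor(A), np.linalg.det(A))  # both approximately -3.0
```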

Properties

$\det(A^T) = \det(A)$
$\det(AB) = \det(A) \cdot \det(B)$
$\det(cA) = c^n \det(A)$ for $n \times n$ matrix
$\det(A^{-1}) = \frac{1}{\det(A)}$
$\det(I) = 1$
If two rows are identical: $\det(A) = 0$
Swapping two rows changes sign of determinant
For triangular matrices: $\det(A) = \prod_{i=1}^{n} a_{ii}$

5. Matrix Inverse

For a square matrix $A$, the inverse $A^{-1}$ (when it exists) is the unique matrix satisfying $AA^{-1} = A^{-1}A = I$.

Existence: $A^{-1}$ exists if and only if $\det(A) \neq 0$

Formula

$$ A^{-1} = \frac{1}{\det(A)} \text{adj}(A) $$

where $\text{adj}(A)$, the adjugate, is the transpose of the matrix of cofactors: $\text{adj}(A) = [C_{ij}]^T$.

For 2×2 Matrix

$$ \begin{bmatrix} a & b \\ c & d \end{bmatrix}^{-1} = \frac{1}{ad-bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix} $$
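
A quick check of the 2×2 formula against np.linalg.inv, using an example matrix with $ad - bc = -2 \neq 0$:

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.]])
a, b, c, d = A.ravel()
det = a * d - b * c                          # -2, nonzero, so A is invertible
A_inv = np.array([[d, -b], [-c, a]]) / det   # swap diagonal, negate off-diagonal

print(A_inv)                                 # [[-2.   1. ]
                                             #  [ 1.5 -0.5]]
print(np.allclose(A_inv, np.linalg.inv(A)))  # True
```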

Properties

$(A^{-1})^{-1} = A$
$(AB)^{-1} = B^{-1}A^{-1}$
$(A^T)^{-1} = (A^{-1})^T$
$(cA)^{-1} = \frac{1}{c}A^{-1}$ for scalar $c \neq 0$

6. Eigenvalues & Eigenvectors

Definition

If $A\mathbf{v} = \lambda\mathbf{v}$ for non-zero $\mathbf{v}$, then $\lambda$ is an eigenvalue and $\mathbf{v}$ is an eigenvector.

Characteristic Equation

$$ \det(A - \lambda I) = 0 $$

Properties

$\sum \lambda_i = \text{tr}(A)$
$\prod \lambda_i = \det(A)$
If $A$ is invertible, eigenvalues of $A^{-1}$ are $\frac{1}{\lambda_i}$
Eigenvalues of $A^k$ are $\lambda_i^k$
Real symmetric matrices have real eigenvalues
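
A NumPy sketch checking the definition and the trace/determinant identities on a small symmetric example:

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 2.]])           # symmetric, so eigenvalues are real
vals, vecs = np.linalg.eig(A)      # eigenvalues 3 and 1 (order may vary)

print(np.isclose(vals.sum(), np.trace(A)))        # True: sum = tr(A) = 4
print(np.isclose(vals.prod(), np.linalg.det(A)))  # True: product = det(A) = 3
print(np.allclose(A @ vecs, vecs * vals))         # True: A v = lambda v columnwise
```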

7. Systems of Linear Equations

Matrix Form

$$ A\mathbf{x} = \mathbf{b} $$

Solutions

Method 1: Matrix Inverse

$$ \mathbf{x} = A^{-1}\mathbf{b} $$

Requires: $\det(A) \neq 0$
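
A sketch of this route in NumPy (in practice np.linalg.solve is preferred to forming $A^{-1}$ explicitly, for both speed and numerical stability):

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 3.]])
b = np.array([5., 10.])

x_inv   = np.linalg.inv(A) @ b   # x = A^{-1} b
x_solve = np.linalg.solve(A, b)  # same solution, computed more stably

print(x_inv, x_solve)            # [1. 3.] [1. 3.]
```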

Method 2: Cramer's Rule

$$ x_i = \frac{\det(A_i)}{\det(A)} $$

$A_i$ is $A$ with its $i$-th column replaced by $\mathbf{b}$; like Method 1, this requires $\det(A) \neq 0$.
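
A direct implementation sketch, reusing the system above (Cramer's rule is mainly of theoretical interest; it is far more expensive than elimination for large $n$):

```python
import numpy as np

def cramer(A: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Solve Ax = b by Cramer's rule; requires det(A) != 0."""
    det_A = np.linalg.det(A)
    x = np.empty_like(b)
    for i in range(len(b)):
        A_i = A.copy()
        A_i[:, i] = b                # replace column i with b
        x[i] = np.linalg.det(A_i) / det_A
    return x

A = np.array([[2., 1.], [1., 3.]])
b = np.array([5., 10.])
print(cramer(A, b))                  # [1. 3.] -- matches np.linalg.solve(A, b)
```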

8. Advanced Topics

Rank

The maximum number of linearly independent rows (equivalently, columns) of $A$

$\text{rank}(A) = \text{rank}(A^T)$
$\text{rank}(AB) \leq \min(\text{rank}(A), \text{rank}(B))$
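
For example, a matrix with one dependent row has rank below its dimension:

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.],          # 2 x row 1, so linearly dependent
              [0., 1., 1.]])

print(np.linalg.matrix_rank(A))      # 2
print(np.linalg.matrix_rank(A.T))    # 2: rank(A) = rank(A^T)
```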

Matrix Norms

Frobenius: $$ \|A\|_F = \sqrt{\sum_{i,j} |a_{ij}|^2} $$
1-Norm: $$ \|A\|_1 = \max_j \sum_i |a_{ij}| $$
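
Both norms are available through np.linalg.norm; a quick check against the formulas:

```python
import numpy as np

A = np.array([[1., -2.],
              [3.,  4.]])

print(np.linalg.norm(A, 'fro'))  # sqrt(1 + 4 + 9 + 16) = sqrt(30) ~ 5.477
print(np.linalg.norm(A, 1))      # max column sum of |a_ij| = max(4, 6) = 6
```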

Decompositions

LU: $A = LU$ ($L$ lower triangular, $U$ upper triangular)
QR: $A = QR$ ($Q$ orthogonal, $R$ upper triangular)
SVD: $A = U\Sigma V^T$ ($U$, $V$ orthogonal, $\Sigma$ diagonal with singular values)
Cholesky: $A = LL^T$ ($A$ symmetric positive definite, $L$ lower triangular)
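
NumPy covers QR, SVD, and Cholesky directly (LU lives in scipy.linalg.lu); a sketch on a small symmetric positive definite example:

```python
import numpy as np

A = np.array([[4., 2.],
              [2., 3.]])                    # symmetric positive definite

Q, R = np.linalg.qr(A)                      # Q orthogonal, R upper triangular
print(np.allclose(Q @ R, A))                # True

U, s, Vt = np.linalg.svd(A)                 # A = U diag(s) V^T
print(np.allclose(U @ np.diag(s) @ Vt, A))  # True

L = np.linalg.cholesky(A)                   # L lower triangular, A = L L^T
print(np.allclose(L @ L.T, A))              # True
```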

Cayley-Hamilton Theorem

Every square matrix satisfies its own characteristic equation: if $p(\lambda) = \det(A - \lambda I)$, then $p(A) = 0$ (the zero matrix).
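
For a 2×2 matrix the characteristic polynomial is $p(\lambda) = \lambda^2 - \text{tr}(A)\lambda + \det(A)$, which makes the theorem easy to verify numerically:

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.]])

# p(A) = A^2 - tr(A) A + det(A) I should be the zero matrix
p_of_A = A @ A - np.trace(A) * A + np.linalg.det(A) * np.eye(2)
print(np.allclose(p_of_A, np.zeros((2, 2))))  # True
```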