Matrices and Determinants

Determinants

Determinant of a Square Matrix

The determinant of a square matrix \(A = [a_{ij}]\) of order \(n\) is calculated as follows:

\[\begin{aligned} \det(A) &= \sum_{\sigma} \text{sgn}(\sigma) a_{1\sigma(1)}a_{2\sigma(2)}\ldots a_{n\sigma(n)} \end{aligned}\]
\[\begin{aligned} \sigma &\text{ is a permutation of } (1, 2, 3, \ldots, n) \\ \text{sgn}(\sigma) &= \begin{cases} 1, & \text{if } \sigma \text{ is an even permutation} \\ -1, & \text{if } \sigma \text{ is an odd permutation} \end{cases} \end{aligned}\]
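The permutation expansion above can be sketched directly in pure Python. This is a minimal illustration, not an efficient algorithm (it is \(O(n \cdot n!)\)); the helper names `sign` and `det` are chosen for this example.

```python
# Determinant via the permutation (Leibniz) expansion.
# Matrices are represented as lists of rows; indices run from 0.
from itertools import permutations

def sign(sigma):
    """sgn(sigma): +1 for an even permutation, -1 for an odd one,
    determined by counting inversions."""
    inversions = sum(1 for i in range(len(sigma))
                       for j in range(i + 1, len(sigma))
                       if sigma[i] > sigma[j])
    return -1 if inversions % 2 else 1

def det(A):
    """Sum of sgn(sigma) * a[0][sigma(0)] * ... * a[n-1][sigma(n-1)]
    over all permutations sigma of (0, ..., n-1)."""
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        term = sign(sigma)
        for i in range(n):
            term *= A[i][sigma[i]]
        total += term
    return total
```

For example, `det([[1, 2], [3, 4]])` returns \(1 \cdot 4 - 2 \cdot 3 = -2\).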

Properties of Determinants

a. If the corresponding columns and rows of \(A\) are interchanged, \(\det(A)\) is unchanged.

b. If any two rows (or columns) are interchanged, the sign of \(\det(A)\) changes.

c. If any two rows (or columns) are identical, \(\det(A) = 0\).

d. If \(A\) is triangular (all elements on one side of the main diagonal equal to zero), \(\det(A) = a_{11}a_{22}\ldots a_{nn}\), the product of the diagonal elements.

e. If to each element of a row or column, there is added \(C\) times the corresponding element in another row (or column), the value of the determinant is unchanged.
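Properties (b), (c), and (e) can be checked numerically on a \(2 \times 2\) matrix, where the determinant reduces to \(ad - bc\). A small self-contained sketch (the helper name `det2` is chosen for this example):

```python
# Checking determinant properties (b), (c), and (e) on 2x2 matrices,
# where det([[a, b], [c, d]]) = a*d - b*c.
def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

A = [[3, 1], [4, 2]]          # det = 3*2 - 1*4 = 2

# (b) Interchanging the two rows flips the sign.
swapped = [A[1], A[0]]
assert det2(swapped) == -det2(A)

# (c) Two identical rows give determinant 0.
assert det2([A[0], A[0]]) == 0

# (e) Adding C times row 1 to row 2 leaves the determinant unchanged.
C = 5
modified = [A[0], [A[1][j] + C * A[0][j] for j in range(2)]]
assert det2(modified) == det2(A)
```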

Matrices

Definition

A matrix is a rectangular array of numbers and is represented by a symbol \(A\) or \([a_{ij}]\):

\[A = \begin{bmatrix} a_{11} & a_{12} & \ldots & a_{1n} \\ a_{21} & a_{22} & \ldots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \ldots & a_{mn} \\ \end{bmatrix}\]

The matrix above has \(m\) rows and \(n\) columns and is said to be of order \(m \times n\); the element \(a_{ij}\) stands in the \(i\)-th row and \(j\)-th column. If \(m = n\), the matrix is square of order \(n\), and the elements \(a_{11}, a_{22}, \ldots, a_{nn}\) form its main diagonal.

Operations

  1. Addition: If \(A = [a_{ij}]\) and \(B = [b_{ij}]\) are of the same order, the sum \(A + B\) is formed by adding corresponding elements:
    \[A + B = \begin{bmatrix} a_{11} + b_{11} & a_{12} + b_{12} & \ldots & a_{1n} + b_{1n} \\ a_{21} + b_{21} & a_{22} + b_{22} & \ldots & a_{2n} + b_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} + b_{m1} & a_{m2} + b_{m2} & \ldots & a_{mn} + b_{mn} \\ \end{bmatrix}\]
  2. Scalar Multiplication: If \(A = [a_{ij}]\) and \(c\) is a constant (scalar), then \(cA = [ca_{ij}]\), that is, every element of \(A\) is multiplied by \(c\). In particular, \((-1)A = -A = [-a_{ij}]\) and \(A + (-A) = 0\), a matrix with all elements equal to zero.

  3. Multiplication of Matrices: If \(A\) is of order \(m \times k\) and \(B\) is of order \(k \times n\) (the number of columns of \(A\) equals the number of rows of \(B\)), the product \(C = AB\) is defined and is of order \(m \times n\). The element \(c_{ij}\) is formed from the \(i\)-th row of \(A\) and the \(j\)-th column of \(B\):
    \[c_{ij} = a_{i1}b_{1j} + a_{i2}b_{2j} + \ldots + a_{ik}b_{kj}\]
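The three operations can be sketched with plain nested lists. A minimal illustration; the helper names `mat_add`, `scalar_mul`, and `mat_mul` are chosen for this example.

```python
# Matrix addition, scalar multiplication, and the row-by-column product.
def mat_add(A, B):
    """Elementwise sum of two matrices of the same order."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scalar_mul(c, A):
    """Multiply every element of A by the scalar c."""
    return [[c * a for a in row] for row in A]

def mat_mul(A, B):
    """C = AB for A of order m x k and B of order k x n:
    c_ij = sum over t of a_it * b_tj."""
    m, k, n = len(A), len(B), len(B[0])
    return [[sum(A[i][t] * B[t][j] for t in range(k))
             for j in range(n)] for i in range(m)]
```

For example, `mat_mul([[1, 2], [3, 4]], [[5, 6], [7, 8]])` returns `[[19, 22], [43, 50]]`.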

Properties

  1. \(A + B = B + A\)

  2. \(A + (B + C) = (A + B) + C\)

  3. \((c_1 + c_2)A = c_1A + c_2A\)

  4. \(c(A + B) = cA + cB\)

  5. \(c_1(c_2A) = (c_1c_2)A\)

  6. \((AB)C = A(BC)\)

  7. \((A + B)C = AC + BC\)

  8. \(AB \neq BA\) (in general)
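Property 8 is easy to exhibit concretely: already for \(2 \times 2\) matrices, swapping the factors generally changes the product. A self-contained check (the helper name `mat_mul` is chosen for this example):

```python
# Property 8: matrix multiplication is not commutative in general.
def mat_mul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]   # multiplying by B permutes rows or columns
assert mat_mul(A, B) == [[2, 1], [4, 3]]   # columns of A swapped
assert mat_mul(B, A) == [[3, 4], [1, 2]]   # rows of A swapped
assert mat_mul(A, B) != mat_mul(B, A)
```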

Transpose of a Matrix

If \(A\) is an \(m \times n\) matrix, the \(n \times m\) matrix obtained by interchanging the rows and columns of \(A\) is called the transpose and is denoted \(A^T\). A square matrix with \(A = A^T\) is said to be symmetric. The following are properties of \(A\), \(B\), and their respective transposes:

\[\begin{aligned} (A^T)^T &= A \\ (A + B)^T &= A^T + B^T \\ (cA)^T &= cA^T \\ (AB)^T &= B^T A^T \end{aligned}\]
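The least obvious of these properties, \((AB)^T = B^T A^T\), can be verified on a small example. A minimal sketch (the helper names `transpose` and `mat_mul` are chosen for this example):

```python
# Verifying (AB)^T = B^T A^T on rectangular matrices.
def transpose(A):
    """Interchange the rows and columns of A."""
    return [list(col) for col in zip(*A)]

def mat_mul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2, 3], [4, 5, 6]]      # 2 x 3
B = [[1, 0], [0, 1], [2, 2]]    # 3 x 2
assert transpose(mat_mul(A, B)) == mat_mul(transpose(B), transpose(A))
```

Note the reversal of the factors: \(B^T\) is \(2 \times 3\) and \(A^T\) is \(3 \times 2\), so the product on the right is conformable and again \(2 \times 2\).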

Identity Matrix

A square matrix in which each element of the main diagonal is the same constant \(c\), and all other elements are zero, is called a scalar matrix. When a scalar matrix multiplies a conformable second matrix \(A\), the product is \(cA\), which is the same as multiplying \(A\) by the scalar \(c\). A scalar matrix with diagonal elements \(1\) is called the identity, or unit matrix, and is denoted \(I\). Thus, for any \(n\)-th order matrix \(A\), the identity matrix of order \(n\) has the property:

\[A I = I A = A\]
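Both facts, scalar-matrix multiplication equals scalar multiplication and \(IA = AI = A\), can be confirmed numerically. A minimal sketch (the helper names `scalar_matrix` and `mat_mul` are chosen for this example):

```python
# A scalar matrix cI times a conformable matrix equals cA,
# and the identity matrix (c = 1) satisfies IA = AI = A.
def scalar_matrix(c, n):
    """n x n matrix with c on the main diagonal, zeros elsewhere."""
    return [[c if i == j else 0 for j in range(n)] for i in range(n)]

def mat_mul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
I = scalar_matrix(1, 2)
assert mat_mul(I, A) == A and mat_mul(A, I) == A
assert mat_mul(scalar_matrix(3, 2), A) == [[3 * a for a in row] for row in A]
```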

Adjoint of a Matrix

If \(A = [a_{ij}]\) is an \(n\)-th order square matrix and \(A_{ij}\) is the cofactor of element \(a_{ij}\), the transpose of the matrix of cofactors \([A_{ij}]\) is called the adjoint of \(A\) and is denoted \(\text{adj}(A)\):

\[\text{adj}(A) = \begin{bmatrix} A_{11} & A_{21} & \ldots & A_{n1} \\ A_{12} & A_{22} & \ldots & A_{n2} \\ \vdots & \vdots & \ddots & \vdots \\ A_{1n} & A_{2n} & \ldots & A_{nn} \end{bmatrix}\]

Inverse of a Matrix

Given a square matrix \(A\) of order \(n\), if there exists a matrix \(B\) such that \(AB = BA = I\), then \(B\) is called the inverse of \(A\). The inverse is denoted \(A^{-1}\). A necessary and sufficient condition that the square matrix \(A\) has an inverse is \(\det(A) \neq 0\). Such a matrix is called nonsingular, and its inverse is unique. It is given by:

\[A^{-1} = \frac{\text{adj}(A)}{\det(A)}\]
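The formula \(A^{-1} = \text{adj}(A)/\det(A)\) can be sketched for the \(3 \times 3\) case, with exact rational arithmetic via `fractions.Fraction`. The helper names `minor`, `det3`, and `inverse3` are chosen for this example.

```python
# Inverse via the adjoint, A^{-1} = adj(A)/det(A), for a 3x3 matrix.
# Cofactor A_ij = (-1)^(i+j) times the minor: the determinant of the
# 2x2 matrix left after deleting row i and column j.
from fractions import Fraction

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def minor(A, i, j):
    """The 2x2 submatrix of A with row i and column j deleted."""
    return [[A[r][c] for c in range(3) if c != j]
            for r in range(3) if r != i]

def det3(A):
    """Cofactor expansion along the first row."""
    return sum((-1) ** j * A[0][j] * det2(minor(A, 0, j)) for j in range(3))

def inverse3(A):
    d = det3(A)          # must be nonzero (A nonsingular)
    # adj(A) is the transpose of the cofactor matrix: note the
    # swapped indices (j, i) below.
    return [[Fraction((-1) ** (i + j) * det2(minor(A, j, i)), d)
             for j in range(3)] for i in range(3)]
```

For example, for \(A = \begin{bmatrix} 2 & 0 & 1 \\ 1 & 1 & 0 \\ 0 & 1 & 1 \end{bmatrix}\), `det3(A)` is `3`, and multiplying `inverse3(A)` by `A` reproduces the identity matrix.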

System of Linear Equations

Cramer’s Rule

Given the system of linear equations

\[\begin{aligned} a_{11}x_1 + a_{12}x_2 + \ldots + a_{1n}x_n &= b_1 \\ a_{21}x_1 + a_{22}x_2 + \ldots + a_{2n}x_n &= b_2 \\ \vdots \\ a_{n1}x_1 + a_{n2}x_2 + \ldots + a_{nn}x_n &= b_n \end{aligned}\]

with \(n \times n\) coefficient matrix \(A = [a_{ij}]\) and \(\det A \neq 0\), the unique solution is

\[x_i = \frac{\det A_i}{\det A},\]

where \(A_i\) is the matrix obtained from \(A\) by replacing its \(i\)-th column with the column of constants \(b\).
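Cramer's rule can be sketched for the \(3 \times 3\) case, using exact rational arithmetic. The helper names `det3` and `cramer` are chosen for this example.

```python
# Cramer's rule for a 3x3 system: x_i = det(A_i) / det(A), where A_i
# is A with its i-th column replaced by the constants b.
from fractions import Fraction

def det3(A):
    """Cofactor expansion of a 3x3 determinant along the first row."""
    return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
          - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
          + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

def cramer(A, b):
    d = det3(A)          # must be nonzero for a unique solution
    xs = []
    for i in range(3):
        # A_i: replace column i of A with the constants b.
        Ai = [[b[r] if c == i else A[r][c] for c in range(3)]
              for r in range(3)]
        xs.append(Fraction(det3(Ai), d))
    return xs
```

For example, the system \(x + y + z = 6\), \(x - y = 0\), \(y - z = 0\) gives `cramer([[1, 1, 1], [1, -1, 0], [0, 1, -1]], [6, 0, 0])`, which returns \([2, 2, 2]\).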

Matrix Solution

The linear system may be written in matrix form as \(AX = B\), where \(A\) is the matrix of coefficients, \(X\) is a column vector containing the variables, and \(B\) is a column vector containing the constants. If \(\det A \neq 0\), a unique solution exists, \(A^{-1}\) (the inverse of \(A\)) exists, and the solution is given by:

\[X = A^{-1}B.\]
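For the \(2 \times 2\) case the inverse has the closed form \(A^{-1} = \frac{1}{ad - bc}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}\), so \(X = A^{-1}B\) can be computed directly. A minimal sketch (the helper name `solve2` is chosen for this example):

```python
# Matrix solution X = A^{-1} B for a 2x2 system, using
# A^{-1} = adj(A)/det(A) = [[d, -b], [-c, a]] / (a*d - b*c).
from fractions import Fraction

def solve2(A, B):
    (a, b), (c, d) = A
    det = a * d - b * c        # must be nonzero for a unique solution
    inv = [[Fraction(d, det), Fraction(-b, det)],
           [Fraction(-c, det), Fraction(a, det)]]
    # X = A^{-1} B, with B a column vector of constants.
    return [inv[0][0] * B[0] + inv[0][1] * B[1],
            inv[1][0] * B[0] + inv[1][1] * B[1]]
```

For example, the system \(2x + y = 5\), \(x + 3y = 10\) gives `solve2([[2, 1], [1, 3]], [5, 10])`, which returns \([1, 3]\).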