Today I Learned

Some of the things I've learned every day since Oct 10, 2016

Category Archives: linear algebra

214: Orthogonal Matrices

In linear algebra, an orthogonal matrix is a square matrix A such that

A A^T = A^T A = I.

Equivalently, the rows of A form an orthonormal set of vectors, as do its columns: vectors which all have unit length and are orthogonal to one another.

Also equivalently, A^{-1} = A^T.

A fun fact about orthogonal matrices is that their determinants are always \pm 1.
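A quick numerical check of these properties (a sketch using NumPy; the seed and matrix size are arbitrary). The QR decomposition of a random matrix yields an orthogonal factor Q:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build an orthogonal matrix Q via the QR decomposition of a random matrix.
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))

# Q^T Q = Q Q^T = I, so the inverse is just the transpose.
assert np.allclose(Q.T @ Q, np.eye(4))
assert np.allclose(Q @ Q.T, np.eye(4))
assert np.allclose(np.linalg.inv(Q), Q.T)

# The determinant is +1 or -1.
assert np.isclose(abs(np.linalg.det(Q)), 1.0)
```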


81: General Linear Groups

The general linear group \textrm{GL}(V), where V is a vector space, is the group of all automorphisms T: V \rightarrow V under the operation of composition of linear transformations.

When V is over the field F and is finite-dimensional with \textrm{dim}(V) = n, this group is isomorphic to the group of invertible n \times n matrices with entries from F under the operation of matrix multiplication. In this case, the group is often written as \textrm{GL}(n, F).
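For a finite field the group \textrm{GL}(n, F) is itself finite, and its order can be checked by brute force. A small sketch for n = 2 and F = \mathbb{Z}/2\mathbb{Z} (the counting formula used in the comment is the standard one for \textrm{GL}(n, \mathbb{F}_p)):

```python
from itertools import product

# Brute-force GL(2, F) for F = Z/2Z: a 2x2 matrix (a, b; c, d) over Z/2
# is invertible iff its determinant ad - bc is nonzero mod 2.
p = 2
invertible = [m for m in product(range(p), repeat=4)
              if (m[0] * m[3] - m[1] * m[2]) % p != 0]

order = len(invertible)  # |GL(2, F_2)|

# Standard count: (p^2 - 1)(p^2 - p) = 3 * 2 = 6 when p = 2.
assert order == (p**2 - 1) * (p**2 - p)
```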

General linear groups are central to group representation theory. A representation of a group G on a vector space V is a group homomorphism

f: G \rightarrow \textrm{GL}(V),

that is, a map representing the elements of G as invertible linear transformations of V.
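As a small concrete example (a sketch; the group, the map `rho`, and its name are choices for illustration), the cyclic group \mathbb{Z}/4\mathbb{Z} can be represented in \textrm{GL}(2, \mathbb{R}) by sending k to rotation by 90k degrees, and the homomorphism property can be verified numerically:

```python
import numpy as np

def rho(k):
    """Represent k in Z/4Z as rotation by k * 90 degrees, an element of GL(2, R)."""
    t = k * np.pi / 2
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

# Homomorphism property: rho(a + b mod 4) = rho(a) rho(b) for all a, b.
for a in range(4):
    for b in range(4):
        assert np.allclose(rho((a + b) % 4), rho(a) @ rho(b))
```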

80: Multilinear Forms

In abstract algebra, a multilinear form is a mapping

f: V^n \rightarrow F

where V is a vector space over the field F, such that each argument of f is linear over F with the other arguments held fixed. A special case of this is when n = 2 and f is a bilinear form.
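The classic example of a multilinear form is the determinant, viewed as a function of the columns of a matrix. A numerical spot-check of linearity in one argument (a sketch using NumPy; the seed and scalars are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

def f(c1, c2, c3):
    """The determinant as a map R^3 x R^3 x R^3 -> R, multilinear in the columns."""
    return np.linalg.det(np.column_stack([c1, c2, c3]))

u, v, x, y = rng.standard_normal((4, 3))
a, b = 2.0, -3.0

# Linearity in the first argument with the other two held fixed:
assert np.isclose(f(a * u + b * v, x, y), a * f(u, x, y) + b * f(v, x, y))
```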

74: Linearly Separable Values (Euclidean)

Two sets X_1, X_2 of points in n-dimensional Euclidean space E^n are linearly separable if and only if there exist a non-zero vector \mathbf{w} \in E^n and a number k such that

\mathbf{w} \cdot \mathbf{x} < k

holds for every \mathbf{x} \in X_1 and fails for every \mathbf{x'} \in X_2. Intuitively, this means that two sets of points in an n-dimensional Euclidean space are linearly separable if there is an (n-1)-dimensional hyperplane that separates the two sets.

(This concept could probably be extended to spaces which share certain properties with E^n, such as having a partial order, closure, etc., but E^n gives the simplest example.)
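A minimal check of the definition in E^2 (a sketch; the sets, \mathbf{w} = (1, 1), and k = 0 are chosen purely for illustration):

```python
import numpy as np

# Two point sets in R^2 separated by the hyperplane (here, a line) w . x = k.
w, k = np.array([1.0, 1.0]), 0.0

X1 = np.array([[-1.0, -2.0], [-3.0, 0.5], [0.5, -1.0]])  # w . x < k for all of these
X2 = np.array([[1.0, 2.0], [3.0, -0.5], [0.5, 1.0]])     # w . x > k for all of these

# The inequality holds on every point of X1 and fails on every point of X2.
assert np.all(X1 @ w < k)
assert np.all(X2 @ w > k)
```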

72: Bilinear Maps

In linear algebra, a bilinear map is a function f: X \times Y \rightarrow Z, where X, Y, Z are vector spaces over a common field, which is linear in each of its two arguments when the other is held fixed. When X = Y and f(x, y) = f(y, x) for all x, y \in X, it is referred to as a symmetric bilinear map.

Examples of bilinear maps include matrix multiplication, the inner product on a real vector space, and the cross product.
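A numerical spot-check of bilinearity for one of these examples, the cross product on \mathbb{R}^3 (a sketch using NumPy; the seed and scalars are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
x1, x2, y = rng.standard_normal((3, 3))
a, b = 1.5, -2.0

# The cross product R^3 x R^3 -> R^3 is linear in each slot; check the first:
lhs = np.cross(a * x1 + b * x2, y)
rhs = a * np.cross(x1, y) + b * np.cross(x2, y)
assert np.allclose(lhs, rhs)
```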

57: Positive-Definite Matrices

A complex n \times n matrix A is said to be positive-definite iff for every non-zero column vector c \in \mathbb{C}^n, c^{*} A c is real and positive, where c^{*} denotes the conjugate transpose of c. This is equivalent to the condition that A is Hermitian and all of its eigenvalues are positive.

Similarly, there are variations of positive-definiteness with analogous conditions:

A positive-semidefinite \leftrightarrow c^{*} A c real and non-negative \leftrightarrow non-negative eigenvalues

A negative-semidefinite \leftrightarrow c^{*} A c real and non-positive \leftrightarrow non-positive eigenvalues

A negative-definite \leftrightarrow c^{*} A c real and negative \leftrightarrow negative eigenvalues
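A quick check of both characterizations for a positive-definite matrix (a sketch; B^* B + I is used only because it is guaranteed Hermitian with eigenvalues at least 1):

```python
import numpy as np

rng = np.random.default_rng(3)

# B^* B + I is Hermitian with eigenvalues >= 1, hence positive-definite.
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = B.conj().T @ B + np.eye(4)

eigs = np.linalg.eigvalsh(A)       # real eigenvalues of a Hermitian matrix
assert np.all(eigs > 0)

# Spot-check the quadratic-form condition c^* A c > 0 on a random nonzero c.
c = rng.standard_normal(4) + 1j * rng.standard_normal(4)
q = np.vdot(c, A @ c)              # vdot conjugates its first argument: c^* A c
assert np.isclose(q.imag, 0.0) and q.real > 0
```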

56: Unitary Linear Operators

In linear algebra, a unitary operator T over an inner product space V is one which satisfies

TT^* = T^*T = I.

Thus it is a special kind of normal operator. The following conditions are equivalent to T being unitary:

  • T preserves the inner product. That is, \langle T(x), T(y) \rangle = \langle x, y \rangle.
  • T is norm-preserving. That is, ||T(x)|| = ||x|| for all x.
  • T^* is unitary.
  • T is invertible and T^{-1} = T^*.
  • T is a normal operator with eigenvalues on the complex unit circle.

and the following are additionally true of a unitary operator:

  • T is normal.
  • The eigenspaces of T are orthogonal.
  • Every eigenvalue of T has an absolute value of 1.
  • T = P^*DP for some unitary matrices P and D, where D is diagonal.

(When T is over \mathbb{R} it is sometimes referred to as ‘orthogonal’ rather than ‘unitary’.)
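Several of these equivalent conditions can be verified numerically at once (a sketch; the QR decomposition of a random complex matrix is used only as a convenient way to manufacture a unitary matrix):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4

# QR of a random complex matrix yields a unitary factor U.
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
U, _ = np.linalg.qr(M)

# U* U = I, i.e. the inverse is the conjugate transpose.
assert np.allclose(U.conj().T @ U, np.eye(n))

# Inner products (and hence norms) are preserved.
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = rng.standard_normal(n) + 1j * rng.standard_normal(n)
assert np.isclose(np.vdot(U @ x, U @ y), np.vdot(x, y))

# Every eigenvalue lies on the complex unit circle.
assert np.allclose(np.abs(np.linalg.eigvals(U)), 1.0)
```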

48: Self-Adjoint Linear Operators

A linear operator T: V \rightarrow V is self-adjoint iff it is its own adjoint, i.e. iff

\langle T(x), y \rangle = \langle x, T(y) \rangle \quad \forall x, y \in V.

This is equivalent to the condition that the matrix of T with respect to any orthonormal basis is Hermitian (the matrix is its own conjugate transpose).

In addition, if T is self-adjoint and V is finite-dimensional, then there exists an orthonormal eigenbasis \beta for V such that the matrix representation of T with respect to \beta is a diagonal matrix with real entries.
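The diagonalization claim can be checked numerically (a sketch; symmetrizing a random complex matrix is just one way to produce a Hermitian example, and `eigh` is NumPy's eigensolver for Hermitian matrices):

```python
import numpy as np

rng = np.random.default_rng(5)

# (B + B*)/2 is Hermitian, i.e. a matrix of a self-adjoint operator.
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (B + B.conj().T) / 2

# eigh returns real eigenvalues and an orthonormal eigenbasis (columns of V).
vals, V = np.linalg.eigh(A)

assert np.all(np.isreal(vals))                          # real spectrum
assert np.allclose(V.conj().T @ V, np.eye(4))           # orthonormal eigenbasis
assert np.allclose(V.conj().T @ A @ V, np.diag(vals))   # diagonal in that basis
```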

47: Invariant Distributions as Eigenvectors

Since a stationary distribution \pi of a finite Markov chain X satisfies \pi P = \pi, where P is the transition matrix of X, it can be seen as a left eigenvector of P (equivalently, an eigenvector of P^T) with eigenvalue \lambda = 1. Specifically, \pi lies in the intersection of the eigenspace E_1 with the hyperplane formed by the constraint that \sum _{i = 1} ^n \pi (i) = 1.

(Here the vector space in question is \mathbb{R}^n, where n is the number of states in X.)
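This gives a direct way to compute a stationary distribution (a sketch; the 3-state transition matrix below is an arbitrary irreducible example):

```python
import numpy as np

# Row-stochastic transition matrix of a 3-state chain (each row sums to 1).
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.5, 0.5]])

# pi P = pi means pi is an eigenvector of P^T with eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
v = vecs[:, np.argmin(np.abs(vals - 1))].real

# Rescale to satisfy the constraint sum_i pi(i) = 1.
pi = v / v.sum()

assert np.allclose(pi @ P, pi)       # stationary
assert np.isclose(pi.sum(), 1.0)     # a probability distribution
assert np.all(pi >= 0)
```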

42: Adjoint Operators

In linear algebra, if T is a linear operator T: V \rightarrow V over a finite-dimensional inner product space V, then there exists a unique T^*: V \rightarrow V, the adjoint operator, such that

\langle T x, y \rangle =  \langle x, T^* y \rangle

for all x, y \in V.
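With respect to an orthonormal basis, the matrix of the adjoint is the conjugate transpose of the matrix of T, which makes the defining identity easy to spot-check (a sketch in \mathbb{C}^4 with its standard inner product; seed and size arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 4

# In the standard (orthonormal) basis of C^n, the adjoint's matrix is A*.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A_star = A.conj().T

x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# <Tx, y> = <x, T* y>  (np.vdot conjugates its first argument).
assert np.isclose(np.vdot(A @ x, y), np.vdot(x, A_star @ y))
```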