# Today I Learned

Some of the things I've learned every day since Oct 10, 2016

## 21: Computation of Determinants

Because the computation of matrix determinants by cofactor expansion (Laplace expansion) is computationally very expensive — for example, computing the determinant of an $n \times n$ matrix by this method requires over $n!$ multiplications — other methods are usually used.

For instance, the simple method of using Gaussian elimination to reduce the matrix to upper-triangular form, then taking the product of the diagonal entries (with a sign flip for each row swap) to obtain the determinant, requires only $O(n^3)$ multiplications. Other methods exist as well, such as the Bareiss algorithm, a fraction-free variant of elimination that is well suited to exact integer arithmetic.
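
Here is a rough Python sketch of that approach (the function and its structure are my own illustration, not a library routine):

```python
def det_gaussian(matrix):
    """Determinant via Gaussian elimination: O(n^3) multiplications."""
    a = [row[:] for row in matrix]  # work on a copy
    n = len(a)
    sign = 1.0
    for col in range(n):
        # Partial pivoting: pick the row with the largest entry in this column.
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        if a[pivot][col] == 0:
            return 0.0  # no usable pivot means the matrix is singular
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            sign = -sign  # each row swap flips the sign of the determinant
        # Eliminate the entries below the pivot.
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= factor * a[col][c]
    # The determinant is the product of the diagonal, adjusted for row swaps.
    result = sign
    for i in range(n):
        result *= a[i][i]
    return result

print(det_gaussian([[1.0, 2.0], [3.0, 4.0]]))  # -2.0
```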

## 20: Generalized Eigenspaces

If $T$ is a linear operator on a finite-dimensional vector space $V$, and the characteristic polynomial of $T$ splits, then for a given eigenvalue $\lambda$ of $T$ with algebraic multiplicity $m$, the generalized eigenspace corresponding to $\lambda$ is

$K_{\lambda} = \textrm{ker} ((T - \lambda I)^m)$.
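
As a concrete, made-up example in Python with SymPy: the matrix below has $\lambda = 2$ as its only eigenvalue, with algebraic multiplicity $m = 3$, so the generalized eigenspace is the whole space even though the ordinary eigenspace is only one-dimensional.

```python
import sympy as sp

# A Jordan-block-style example: the only eigenvalue is 2, with multiplicity m = 3.
A = sp.Matrix([[2, 1, 0],
               [0, 2, 1],
               [0, 0, 2]])
lam, m = 2, 3

eigenspace = (A - lam * sp.eye(3)).nullspace()          # ker(A - 2I)
generalized = ((A - lam * sp.eye(3)) ** m).nullspace()  # K_2 = ker((A - 2I)^3)

print(len(eigenspace))   # 1: the ordinary eigenspace is one-dimensional
print(len(generalized))  # 3: dim K_2 equals the algebraic multiplicity
```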

## 19: Poisson Distribution

The distribution of a discrete random variable $X$ is considered a Poisson distribution if, given an event $x$ and a given interval $N$ (which can be thought of as the number of chances $x$ has to occur), the probability of $x$ occurring exactly $k$ times in an interval of that size is given by

$P(k \textrm{ occurrences}) = \frac{\lambda ^k e^{- \lambda}}{k!}$,

where $\lambda$ is the known average number of occurrences over an interval of the same size.

[Restrictions: events must occur independently, and no more than one instance of an event can happen at a single point in the interval.]

A couple of nice properties of a Poisson-distributed random variable are that its expected value and variance are both equal to $\lambda$.
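
A small Python sketch of the formula (the choice $\lambda = 3$ is arbitrary, just for illustration), which also checks the mean and variance numerically:

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """P(exactly k occurrences) for a Poisson random variable with mean lam."""
    return lam ** k * exp(-lam) / factorial(k)

lam = 3  # made-up average number of occurrences per interval
pmf = [poisson_pmf(k, lam) for k in range(30)]  # truncating the tiny tail is fine here

mean = sum(k * p for k, p in enumerate(pmf))
var = sum((k - mean) ** 2 * p for k, p in enumerate(pmf))
print(round(mean, 6), round(var, 6))  # both come out to 3.0, i.e. lambda
```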

## 18: Kleene’s Theorem

Kleene’s Theorem states that a set of strings (a language) is regular if and only if it is recognized by a finite-state automaton.
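
As a toy illustration in Python (the language here is made up for the example): the set of strings over {a, b} with an even number of a's is regular, so it is both described by a regular expression and recognized by a two-state automaton, and the two agree.

```python
import re

PATTERN = re.compile(r"^(b*ab*a)*b*$")  # regular-expression side of the theorem

def dfa_accepts(s):
    """Two-state DFA: the state tracks the parity of a's seen so far."""
    state = 0  # 0 = even number of a's (accepting state), 1 = odd
    for ch in s:
        if ch == "a":
            state = 1 - state
    return state == 0

for s in ["", "abba", "aba", "bab", "bbb"]:
    assert bool(PATTERN.match(s)) == dfa_accepts(s)
    print(s, dfa_accepts(s))
```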

## 17: Linearity of Expectation

The expectation of a discrete random variable $X$, denoted $\textrm{E}(X)$, is linear. That is, for random variables $X, Y$ and a constant $c$,

$\textrm{E}(cX + Y) = c\,\textrm{E}(X) + \textrm{E}(Y)$.
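
A quick empirical illustration in Python (the distributions are made up): linearity holds even when $X$ and $Y$ are dependent, so no independence assumption is needed.

```python
import random

random.seed(0)
c, n = 3, 200_000
total = 0.0
for _ in range(n):
    x = random.randint(1, 6)                                  # X: a fair die, E(X) = 3.5
    y = x if random.random() < 0.5 else random.randint(1, 6)  # Y depends on X, but E(Y) = 3.5
    total += c * x + y

# E(cX + Y) = c*E(X) + E(Y) = 3*3.5 + 3.5 = 14
print(total / n)  # close to 14.0
```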

## 16: S-Expressions

In the Lisp language, all code is written in the form of S-expressions, which are essentially expressions in the form of parenthesized lists. For example, a function f called on arguments a and b is written (f a b). Because Lisp code is expressed in the language's own list data structure, code can be manipulated as data, which makes metaprogramming easy.
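
As a toy illustration (nothing like a real Lisp implementation), here is a few-line Python parser that turns an S-expression into a nested list, which is exactly the code-as-data idea:

```python
def tokenize(source):
    """Split an S-expression string into parentheses and atoms."""
    return source.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    """Read one expression from the token list, returning nested Python lists."""
    token = tokens.pop(0)
    if token == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(parse(tokens))
        tokens.pop(0)  # drop the closing ")"
        return expr
    return token  # an atom

# The code (f a (g b)) becomes an ordinary list, ready to be manipulated as data.
print(parse(tokenize("(f a (g b))")))  # ['f', 'a', ['g', 'b']]
```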

## 15: Context-Free Grammars

A context-free grammar is a formal grammar in which every rewrite rule (or ‘production’) rewrites a single variable. That is, as the name implies, the rewrite rules for a given variable in a context-free grammar depend only on that variable and not on any surrounding ‘context’.

Example:

This is a valid subset of rewrite rules in a context-free grammar:

$X \rightarrow a$

$Y \rightarrow ab$

while this is not:

$X \rightarrow a$

$Y \rightarrow ab$

$XY \rightarrow c$

The latter example violates context-freeness because of its last rule, in which the thing being rewritten is not a single variable.
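
A toy Python encoding of the two examples above, where context-freeness amounts to every left-hand side being a single variable:

```python
VARIABLES = {"X", "Y"}

valid_rules = {"X": ["a"], "Y": ["ab"]}
invalid_rules = {"X": ["a"], "Y": ["ab"], "XY": ["c"]}  # "XY" is not a single variable

def is_context_free(rules, variables):
    """Every left-hand side must be exactly one variable (nonterminal)."""
    return all(lhs in variables for lhs in rules)

print(is_context_free(valid_rules, VARIABLES))    # True
print(is_context_free(invalid_rules, VARIABLES))  # False
```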

## 14: Kolmogorov Complexity

The Kolmogorov Complexity of an object is, naively speaking, the minimum length of a program that produces that object as output. Even more naively, it can be thought of as the amount of information necessary to ‘describe’ the object.

## 13: Schläfli Symbols of Self-Dual Polytopes

The Schläfli symbol of the dual of a polytope with dimension $\geq 2$ will simply be the reverse of that polytope’s symbol. Thus, such a polytope is self-dual if and only if its Schläfli symbol is palindromic.
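
A tiny Python check of that criterion, using a few standard examples (cube, tetrahedron, 24-cell):

```python
def is_self_dual(schlafli):
    """A polytope of dimension >= 2 is self-dual iff its Schläfli symbol is palindromic."""
    return list(schlafli) == list(reversed(schlafli))

print(is_self_dual((4, 3)))     # False: the cube {4,3}; its dual {3,4} is the octahedron
print(is_self_dual((3, 3)))     # True: the tetrahedron {3,3} is self-dual
print(is_self_dual((3, 4, 3)))  # True: the 24-cell {3,4,3} is self-dual
```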

## 12: The Cayley-Hamilton Theorem

The Cayley-Hamilton Theorem states that a linear operator $T$ satisfies (is annihilated by) its own characteristic polynomial.

That is, if $T: V \rightarrow V$ is a linear operator and

$f(t) = a_0 + a_1t + a_2t^2 + \dots + a_nt^n$

is its characteristic polynomial, then

$f(T) = a_0I + a_1T + a_2T^2 + \dots + a_nT^n = T_0$,

where $T_0$ is the zero transformation.
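
A quick numerical check with SymPy, using an arbitrary made-up matrix to represent $T$:

```python
import sympy as sp

A = sp.Matrix([[1, 2],
               [3, 4]])  # an arbitrary matrix representing T in some basis
t = sp.symbols('t')

f = A.charpoly(t)
print(f.as_expr())  # t**2 - 5*t - 2, the characteristic polynomial of A

# Evaluate f(A) by Horner's method, substituting the matrix A for t.
result = sp.zeros(2, 2)
for coeff in f.all_coeffs():  # coefficients from the highest degree down
    result = result * A + coeff * sp.eye(2)

print(result)  # Matrix([[0, 0], [0, 0]]): the zero transformation, as the theorem predicts
```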