# Today I Learned

Some of the things I've learned every day since Oct 10, 2016


## 21: Computation of Determinants

October 31, 2016

Because the computation of matrix determinants by cofactor expansion (Laplace expansion) is computationally very expensive (computing the determinant of an $n \times n$ matrix by this method requires more than $n!$ multiplications), other methods are usually used.

For instance, the simple method of using Gaussian elimination to reduce the matrix to its upper-triangular form, then taking the product of the diagonal entries to obtain the determinant, requires only $O(n^3)$ multiplications. Other, even less expensive algorithms exist as well, such as the Bareiss algorithm.
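A minimal sketch of the Gaussian-elimination approach in Python (the function name and the partial-pivoting details are illustrative, not from the post):

```python
def determinant(matrix):
    """Compute the determinant by Gaussian elimination in O(n^3) multiplications."""
    a = [row[:] for row in matrix]  # work on a copy
    n = len(a)
    det = 1.0
    for col in range(n):
        # Find a row at or below `col` with a nonzero entry in this column.
        pivot = next((r for r in range(col, n) if a[r][col] != 0), None)
        if pivot is None:
            return 0.0  # a zero column means the matrix is singular
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            det = -det  # each row swap flips the determinant's sign
        det *= a[col][col]
        # Eliminate the entries below the pivot.
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= factor * a[col][c]
    return det
```

The sign bookkeeping on row swaps is the one subtlety: swapping two rows negates the determinant, so the running product must track it.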

## 20: Generalized Eigenspaces

October 30, 2016

If $T$ is a linear operator on a finite-dimensional vector space $V$, and the characteristic polynomial of $T$ splits, then for a given eigenvalue $\lambda$ of $T$ with multiplicity $m$, the **generalized eigenspace** corresponding to $\lambda$ is

$$K_\lambda = \{ x \in V : (T - \lambda I)^p(x) = 0 \text{ for some positive integer } p \},$$

which in fact equals the null space $N\big((T - \lambda I)^m\big)$.
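As a small illustrative example, take the operator on $\mathbb{R}^2$ given by the Jordan block

```latex
A = \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix}, \qquad
A - 2I = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad
(A - 2I)^2 = 0.
```

The ordinary eigenspace for $\lambda = 2$ is only $\operatorname{span}\{e_1\}$, but since $(A - 2I)^2 = 0$, every vector of $\mathbb{R}^2$ is a generalized eigenvector, so $K_2 = \mathbb{R}^2$.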

## 19: Poisson Distribution

October 29, 2016

The distribution of a discrete random variable $X$ is considered a **Poisson distribution** if, given a fixed interval (which can be thought of as a number of chances for an event to occur), the probability of the event occurring exactly $k$ times in an interval of that size is given by

$$P(X = k) = \frac{\lambda^k e^{-\lambda}}{k!},$$

where $\lambda$ is the known average number of occurrences in an identical interval.

[Restrictions: events must occur independently, and no more than one instance of an event can happen at a single point in the interval.]

A couple of nice properties of Poisson-distributed random variables are that the expected value and variance are both equal to $\lambda$.

## 18: Kleene’s Theorem

October 28, 2016

**Kleene’s Theorem** states that a set is regular if and only if it is recognized by a finite-state automaton.
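As an illustration of the automaton side of the theorem, here is a minimal sketch (the names and the particular language are my own choices) of a DFA recognizing the regular set of binary strings containing an even number of 1s:

```python
def make_dfa(transitions, start, accepting):
    """Return a recognizer function for the DFA described by the arguments."""
    def accepts(string):
        state = start
        for symbol in string:
            state = transitions[(state, symbol)]
        return state in accepting
    return accepts

# DFA for the regular set of binary strings with an even number of 1s:
# reading a 1 toggles between the two states, reading a 0 does nothing.
even_ones = make_dfa(
    transitions={("even", "0"): "even", ("even", "1"): "odd",
                 ("odd", "0"): "odd", ("odd", "1"): "even"},
    start="even",
    accepting={"even"},
)
```

The same language is described by the regular expression `0*(10*10*)*`, matching the theorem's equivalence.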

## 17: Linearity of Expectation

October 27, 2016

The **expectation** of a discrete random variable $X$, denoted $E[X]$, is linear. That is, for random variables $X, Y$ and a constant $a$,

$$E[aX + Y] = a\,E[X] + E[Y].$$

## 16: S-Expressions

October 26, 2016

In the Lisp language, all code is written in the form of **S-Expressions**, which are essentially expressions in the form of parenthesized lists. For example, the expression of a function `f` being called on arguments `a, b` would be `(f a b)`. The fact that code in Lisp is expressed in the form of the list data structure allows code to be manipulated as data, facilitating easy metaprogramming.
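The "code as parenthesized lists" idea can be sketched with a tiny parser (illustrative, and written in Python rather than Lisp) that turns an S-expression into nested lists:

```python
def parse_sexpr(text):
    """Parse a single S-expression into nested Python lists of atom strings."""
    # Pad parentheses with spaces so a plain split() tokenizes the input.
    tokens = text.replace("(", " ( ").replace(")", " ) ").split()

    def read(pos):
        token = tokens[pos]
        if token == "(":
            items, pos = [], pos + 1
            while tokens[pos] != ")":
                item, pos = read(pos)
                items.append(item)
            return items, pos + 1  # skip the closing ")"
        return token, pos + 1      # an atom

    expr, _ = read(0)
    return expr
```

For example, `parse_sexpr("(f a b)")` yields the list `["f", "a", "b"]`, which a program can inspect and rewrite like any other data structure.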

## 15: Context-Free Grammars

October 25, 2016

A **context-free** grammar is a formal grammar in which the rewrite rules (or ‘productions’) of the grammar are all from *single variables*. That is, as implied in the name, the rewrite rules of a given variable in a context-free grammar are dependent only on that variable and not on any other ‘context’.

Example: this is a valid set of rewrite rules in a context-free grammar,

$$S \to aSb, \qquad S \to \varepsilon,$$

while this is not:

$$S \to aSb, \qquad aS \to b.$$

The violation of freedom from context in the latter example is due to the last rule, where the thing being rewritten is not a single variable.

## 14: Kolmogorov Complexity

October 24, 2016

The **Kolmogorov complexity** of something is, naively speaking, the minimum length of a program which produces that thing as output. Even more naively, it can be thought of as the amount of information necessary in a ‘description’ of an object.

## 13: Schläfli Symbols of Self-Dual Polytopes

October 23, 2016

The Schläfli symbol of the dual of a polytope is simply the reverse of that polytope’s symbol: the dual of $\{p, q, \dots, r\}$ is $\{r, \dots, q, p\}$. Thus, a polytope is self-dual if and only if its Schläfli symbol is palindromic. (For example, the cube $\{4, 3\}$ and octahedron $\{3, 4\}$ are dual to each other, while the self-dual tetrahedron has the palindromic symbol $\{3, 3\}$.)

## 12: The Cayley-Hamilton Theorem

October 22, 2016

The **Cayley-Hamilton Theorem** states that a linear operator satisfies (is annihilated by) its own characteristic polynomial.

That is, if $T$ is a linear operator and

$$f(t) = \det(T - tI)$$

is its characteristic polynomial, then

$$f(T) = T_0,$$

where $T_0$ is the zero transformation.
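A quick check in the $2 \times 2$ case (illustrative Python; for a $2 \times 2$ matrix the characteristic polynomial is $t^2 - (\operatorname{tr} A)\,t + \det A$):

```python
def mat_mul(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def char_poly_at(a):
    """Evaluate the characteristic polynomial t^2 - (tr A)t + det A at t = A."""
    trace = a[0][0] + a[1][1]
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    a2 = mat_mul(a, a)
    identity = [[1, 0], [0, 1]]
    return [[a2[i][j] - trace * a[i][j] + det * identity[i][j]
             for j in range(2)] for i in range(2)]

A = [[1, 2], [3, 4]]
result = char_poly_at(A)  # the theorem predicts the zero matrix
```

Here $\operatorname{tr} A = 5$ and $\det A = -2$, and $A^2 - 5A - 2I$ indeed comes out as the zero matrix.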
