Today I Learned

Some of the things I've learned every day since Oct 10, 2016

Monthly Archives: November 2016

51: Undecidability of Equality of Lambda Expressions

In general, there is no algorithm which determines whether two lambda expressions are equivalent under the reduction rules of the lambda calculus. This is the problem for which the first proof of undecidability was given.

50: Normalization Property (Lambda Calculus)

In a rewrite system, such as \beta-reduction in the lambda calculus, a term may satisfy the weak normalization property, the strong normalization property, or neither.

A term T satisfies the weak normalization property if there exists a sequence of rewrites on T that eventually results in a normal form of T: a term which is irreducible.

A term T satisfies the strong normalization property if every sequence of rewrites on T eventually results in a normal form of T.

If every term in a rewrite system satisfies the weak (respectively, strong) normalization property, then the system itself is said to satisfy that property.


Examples:

An example of a system which satisfies neither of the normalization properties is the pure untyped lambda calculus. An example of a non-normalizing term within this system is

(\lambda y . y y) (\lambda y . y y).

Under \beta-reduction, this term reduces to itself, and so it never terminates in an irreducible form.
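
Spelled out, the single \beta-reduction step substitutes the argument for y in the body y y and reproduces the original term:

(\lambda y . y y) (\lambda y . y y) \rightarrow _{\beta} (y y)[y := \lambda y . y y] = (\lambda y . y y) (\lambda y . y y).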

Conversely, a term within this system which is weakly normalizing is

(\lambda x . x) x.

Under \beta-reduction, this term reduces to just the variable x, an irreducible term.


There are other forms of the lambda calculus, as well as other similar systems, which are normalizing. These can be viewed as programming languages in which every program eventually will (or can) terminate. A significant drawback, however, is that a normalizing system cannot be Turing complete.

Additionally, in the lambda calculus every normalizing term has a unique normal form; this is a consequence of the Church-Rosser theorem.

49: Church Numerals

In Church encoding, an encoding system using the lambda calculus, the Church numerals are the representation of the natural numbers. The distinguishing feature of the Church numerals is that the natural numbers are not treated as a primitive type, as they typically would be, but are instead represented by higher-order functions. The higher-order function \boldsymbol{n} representing the number n takes two arguments, a function f and a second argument x, and returns the result of applying f to x a total of n times; equivalently, \boldsymbol{n} f is the n-fold composition of f.

Examples:

0 is represented as \boldsymbol{0} = \lambda f . \lambda x . x, which given any function f returns a function which simply returns x without applying f at all.

1 is represented as \boldsymbol{1} = \lambda f . \lambda x . f x, which given any function f returns a function which applies f once to x.

2 is represented as \boldsymbol{2} = \lambda f . \lambda x . f (f x), which given any function f returns a function which applies f twice to x.

3 is represented as \boldsymbol{3} = \lambda f . \lambda x . f (f (f x)), which given any function f returns a function which applies f three times to x.
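
Since Church numerals are just higher-order functions, they can be written directly in any language with first-class functions. Here is a minimal Python sketch; the helper names succ and to_int are mine for illustration:

# Church numerals as Python higher-order functions (illustrative sketch).
zero  = lambda f: lambda x: x              # applies f zero times
one   = lambda f: lambda x: f(x)           # applies f once
two   = lambda f: lambda x: f(f(x))        # applies f twice
three = lambda f: lambda x: f(f(f(x)))     # applies f three times

# Successor: succ n = \f . \x . f (n f x)
succ = lambda n: lambda f: lambda x: f(n(f)(x))

# Recover an ordinary integer by counting how many times f gets applied.
to_int = lambda n: n(lambda k: k + 1)(0)

print(to_int(zero), to_int(one), to_int(two), to_int(succ(three)))   # 0 1 2 4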

48: Self-Adjoint Linear Operators

A linear operator T: V \rightarrow V is self-adjoint iff it is its own adjoint, i.e. iff

\langle T(x), y \rangle = \langle x, T(y) \rangle \quad \forall x, y \in V.

This is equivalent to the condition that the matrix of T with respect to any orthonormal basis is Hermitian (the matrix is its own conjugate transpose).

In addition, if V is finite-dimensional and T is self-adjoint, then there exists an orthonormal eigenbasis \beta for V such that the matrix representation of T with respect to \beta is a diagonal matrix with real entries.
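
As a quick numerical illustration (the matrix below is an arbitrary Hermitian example, nothing canonical), NumPy's eigh routine exhibits exactly this: real eigenvalues and an orthonormal eigenbasis that diagonalizes the matrix.

import numpy as np

# An arbitrary Hermitian matrix (equal to its own conjugate transpose),
# i.e. the matrix of a self-adjoint operator in some orthonormal basis.
A = np.array([[2.0, 1 - 1j],
              [1 + 1j, 3.0]])
assert np.allclose(A, A.conj().T)

# eigh is NumPy's eigensolver for Hermitian matrices: the eigenvalues are
# real and the eigenvector columns of Q form an orthonormal basis.
eigenvalues, Q = np.linalg.eigh(A)
print(eigenvalues)                                        # real entries
assert np.allclose(Q.conj().T @ A @ Q, np.diag(eigenvalues))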

47: Invariant Distributions as Eigenvectors

Since a stationary distribution \pi of a finite Markov chain X satisfies \pi P = \pi, where P is the transition matrix of X, it can be seen as a left eigenvector of P (equivalently, an eigenvector of P^T) with eigenvalue \lambda = 1. Specifically, \pi lies in the intersection of the eigenspace E_1 with the hyperplane defined by the constraint \sum _{i = 1} ^n \pi (i) = 1.

(Here the vector space in question is \mathbb{R}^n, where n is the number of states in X.)
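
A small NumPy sketch of this view, using a made-up 3-state transition matrix: the stationary distribution is found as the eigenvector of P^T for eigenvalue 1, rescaled so its entries sum to 1.

import numpy as np

# Transition matrix of a 3-state chain (each row sums to 1); values are made up.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.3, 0.5]])

# pi P = pi means pi is a left eigenvector of P with eigenvalue 1,
# i.e. an ordinary eigenvector of P transpose.
eigenvalues, vectors = np.linalg.eig(P.T)
v = np.real(vectors[:, np.argmin(np.abs(eigenvalues - 1))])

# Intersect with the constraint sum(pi) = 1 by rescaling the eigenvector.
pi = v / v.sum()
print(pi, pi @ P)   # pi @ P reproduces pi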

46: Absorbing States of Markov Chains

A state S in a Markov chain X is absorbing iff it is impossible for the chain to leave S once it’s there. If some absorbing state is accessible from every state in X, then X itself is said to be absorbing.
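
As a small made-up example, consider a three-state chain with transition matrix

P = \begin{pmatrix} 0.5 & 0.3 & 0.2 \\ 0 & 0.6 & 0.4 \\ 0 & 0 & 1 \end{pmatrix}.

The third state is absorbing, since the chain stays in it with probability 1, and it is accessible from the other two states, so the chain itself is absorbing.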

45: Transient and Recurrent States of Markov Chains

A state S of a Markov chain X is transient if, given we start from S, there is a non-zero probability that we will never return to S. A state is transient iff the expected number of visits to that state is finite.

Conversely, S is recurrent if, given we start from S, the probability we will eventually return to S is 1. A state is recurrent iff the expected number of visits to that state is infinite.
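
The "expected number of visits" characterization is easy to see in a simulation. The sketch below uses a made-up chain in which state 0 is transient (once the chain leaves it, it can never return) while states 1 and 2 are recurrent; the function name count_visits is illustrative.

import numpy as np

rng = np.random.default_rng(0)

# Made-up 3-state chain: state 0 is transient (it leads to states 1 and 2,
# which never lead back), while states 1 and 2 are recurrent.
P = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.3, 0.7],
              [0.0, 0.6, 0.4]])

def count_visits(start, steps=10_000):
    # Simulate the chain from `start` and count visits to the starting state.
    state, visits = start, 1
    for _ in range(steps):
        state = rng.choice(3, p=P[state])
        visits += int(state == start)
    return visits

print(count_visits(0))   # stays small: finite expected number of visits to a transient state
print(count_visits(1))   # grows with the number of steps: a recurrent state is revisited forever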

44: Sophomore’s Dream

Sophomore’s dream refers to the identity

\int _0 ^1 x^{-x} \textrm{d}x = \sum _{n = 1} ^{\infty} n^{-n}.

The name refers to the fact that the identity, although true, seems ‘too good to be true’ and looks at first like a naive misapplication of ideas the user has not fully understood. Compare the freshman’s dream, which refers to the mistaken identity (a + b)^n = a^n + b^n.
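
A quick numerical sanity check of the identity in Python, using a crude midpoint-rule approximation for the integral (the number of sample points is arbitrary):

# Left side: integral of x^(-x) over (0, 1), via a midpoint rule.
N = 100_000
integral = sum(((k + 0.5) / N) ** -((k + 0.5) / N) for k in range(N)) / N

# Right side: the series sum of n^(-n); the terms decay extremely quickly.
series = sum(n ** -n for n in range(1, 30))

print(integral, series)   # both are approximately 1.2913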

43: Aperiodicity of Markov Chains

A state s in a Markov chain X is aperiodic iff there exists an n such that

P(X_{n'} = s | X_0 = s) > 0 \quad \forall n' \geq n.

That is, a state is aperiodic if, for every sufficiently large number of steps, the chain has a positive probability of returning to that state in exactly that many steps.

A Markov chain is aperiodic iff every state in the chain is aperiodic. If the chain is irreducible, then the existence of a single aperiodic state implies that the entire chain is aperiodic.
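
A small Python sketch, using the equivalent characterization of a state's period as the gcd of the step counts at which return is possible (aperiodic means the period is 1); the function name and the two example chains are made up for illustration.

import numpy as np
from math import gcd
from functools import reduce

def period(P, s, max_n=50):
    # gcd of all n <= max_n with (P^n)[s, s] > 0; state s is aperiodic iff this is 1.
    return_steps = []
    Q = np.eye(len(P))
    for n in range(1, max_n + 1):
        Q = Q @ P
        if Q[s, s] > 0:
            return_steps.append(n)
    return reduce(gcd, return_steps)

# A 2-state chain that alternates deterministically: every return to state 0
# takes an even number of steps, so state 0 has period 2 and is not aperiodic.
P_periodic = np.array([[0.0, 1.0],
                       [1.0, 0.0]])

# Adding a self-loop makes returns possible at every sufficiently large step count.
P_aperiodic = np.array([[0.5, 0.5],
                        [1.0, 0.0]])

print(period(P_periodic, 0), period(P_aperiodic, 0))   # 2 1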

42: Adjoint Operators

In linear algebra, if T is a linear operator T: V \rightarrow V on a finite-dimensional inner product space V, then there exists a unique linear operator T^*: V \rightarrow V, the adjoint of T, such that

\langle T x, y \rangle =  \langle x, T^* y \rangle

for all x, y \in V.
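
For \mathbb{C}^n with the standard inner product, the adjoint is simply the conjugate transpose of the matrix of T, which a short NumPy check makes concrete (the random matrix and vectors are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
n = 4

# A random operator on C^4; its adjoint is the conjugate transpose.
T = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
T_star = T.conj().T

x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Standard inner product <u, v> = sum_i u_i * conj(v_i); np.vdot conjugates
# its first argument, so <u, v> = np.vdot(v, u).
inner = lambda u, v: np.vdot(v, u)

print(np.isclose(inner(T @ x, y), inner(x, T_star @ y)))   # True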