# Today I Learned

Some of the things I've learned every day since Oct 10, 2016

## 127: Linters

Linters, in the modern sense, are programs which check code for potential errors (or even just style issues) in a static context, i.e. without running the code. Such errors might include syntax errors, “variables being used before being set, division by zero, conditions that are constant, and calculations whose result is likely to be outside the range of values representable in the type used.” [Wikipedia]

The name comes from Lint, a program that performed this function for C code; it has since become a more general term.

Linting can be especially helpful when debugging interpreted languages like Python, since without a full compilation phase, many errors only surface when the offending line of code actually runs.
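As a toy illustration of static checking, here is a hypothetical miniature “linter” (using Python's standard `ast` module) that flags one of the error classes mentioned above, constant conditions, without ever executing the code. Real linters like pylint or flake8 perform far more checks:

```python
import ast

def lint(source):
    """A toy static check: parse the source without running it, then
    report syntax errors and constant conditions (e.g. `if True:`).
    Hypothetical miniature; real linters do far more."""
    try:
        tree = ast.parse(source)
    except SyntaxError as e:
        return [f"line {e.lineno}: syntax error: {e.msg}"]
    problems = []
    for node in ast.walk(tree):
        # `if True:` parses as an If node whose test is a Constant
        if isinstance(node, ast.If) and isinstance(node.test, ast.Constant):
            problems.append(f"line {node.lineno}: condition is constant")
    return problems
```

For example, `lint("if True:\n    x = 1\n")` reports a constant condition, and a file that fails to parse is reported without ever being run.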

## 126: Principle of Explosion

In logic systems such as classical logic, the principle of explosion is the principle that from a contradiction any proposition can be inferred to be true.

To show this, suppose we’ve arrived at a contradictory statement, $P \wedge \neg P$. Let $Q$ be any proposition we want to prove. By conjunction elimination, we have $P$, and then by disjunction introduction we have

$P \vee Q$.

However, since we also have $\neg P$, disjunctive syllogism applied to $P \vee Q$ and $\neg P$ gives us $Q$.
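The derivation above can also be checked mechanically; here is a sketch in Lean 4, using the built-in `absurd` (which encodes exactly this principle):

```lean
-- From a contradiction P ∧ ¬P, any proposition Q follows.
example (P Q : Prop) (h : P ∧ ¬P) : Q :=
  absurd h.left h.right
```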

This principle suggests an interesting alternative attitude towards contradiction in systems that exhibit this behavior: contradiction is undesirable not because it is intuitively nonsensical, but simply because a set of assumptions leading to contradiction gives us no information.

## 125: Duck Typing

Duck typing is a term used to describe programming languages in which, when a method is called on an object, it is never formally confirmed that the object is of a specific type acceptable to the method; all that matters is that the object has whatever properties are required to perform the needed operations. In other words, what is checked isn't what an object is but what it can do.

The name is a reference to the saying, “If it walks like a duck and quacks like a duck, then it’s a duck”, which echoes this same principle.
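A minimal Python sketch of the idea (the class and method names here are made up for illustration):

```python
class Duck:
    def quack(self):
        return "Quack!"

class Person:
    def quack(self):
        return "I'm quacking!"

def make_it_quack(thing):
    # No isinstance check: we only rely on `thing` responding to quack().
    return thing.quack()
```

Both `make_it_quack(Duck())` and `make_it_quack(Person())` succeed, while an object with no `quack` method fails only at the moment of the call, with an `AttributeError`.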

## 124: Skip List Search Algorithm

The following is an informal description of the algorithm for searching for an element in a skip list:

Start at the head of the sparsest linked list. Move right as long as the next element does not exceed the target; when it would, drop straight down to the same element in the next, denser list and continue. Repeat until the target is found, or the densest list is exhausted, in which case the element is not present.

If you look at the shape of the path of nodes that the algorithm visits on a given example, the general case of the search time being $O(\log{} n)$ makes intuitive sense.
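A simplified, array-based sketch of this search in Python. This is only a model: each level is represented as a sorted list (densest first), whereas a real skip list follows linked nodes with down pointers:

```python
def skip_list_search(levels, target):
    """Search a skip list given as `levels`: a list of sorted lists where
    levels[0] holds every element and each later list is a sparser subset.
    Array-based sketch; real implementations follow node pointers."""
    start = None  # element to resume from after dropping down a level
    for level in reversed(levels):      # begin at the sparsest level
        i = level.index(start) if start is not None else -1
        # move right while the next element does not overshoot the target
        while i + 1 < len(level) and level[i + 1] <= target:
            i += 1
        if i >= 0:
            if level[i] == target:
                return True
            start = level[i]            # drop down from this element
    return False
```

For example, with `levels = [[1, 2, 3, 4, 5, 6, 7, 8, 9], [1, 4, 7], [1, 7]]`, searching for 6 moves right to 1 and then 7-overshoots on the top level, drops down through 4, and finds 6 on the bottom level.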

## 123: Skip Lists (Intro)

A skip list is a data structure similar to a linked list which stores an ordered sequence of elements and is built specifically for fast search within the sequence. It is commonly implemented as a hierarchy of linked lists, with the first list containing all the elements of the skip list, and each successive linked list containing a sparser subset of the elements in the previous one.

To search for an element within the skip list one then starts with the sparsest linked list, getting as ‘close’ to the desired element as possible, then moves directly down to that same position in the next linked list and continues searching until the item is found (or not).

This animation depicts the insertion of the element 80 into a skip list of positive integers (insertion being equivalent to searching):

Credit: Artyom Kalinin

In the average case, the skip list uses $O(n)$ space and takes just $O(\log{} n)$ time for searching operations.

## 122: Hotfixes

In software development, a hotfix is similar to a bug fix but differs in that it is

• applied to the system while hot, i.e. currently deployed and running
• commonly not publicly available but instead only released to a small audience of clients who have encountered a given bug in the deployed system
• applied outside of the usual release schedule

Because they receive less testing than scheduled releases, hotfixes can easily become a source of further bugs; they are thought of as a kind of ‘emergency’ measure used when a fix for a client cannot wait until the next scheduled update.

Hotfixes are commonly released in packages called cumulative updates.

## 121: Another Description of Z Order

Another clear description of Z Order, equivalent to the binary description, comes from depth-first traversal of a quadtree.

Say we have a 2-dimensional square with a grid of $4^k$ points on it, and that we organize these points into a complete quadtree of depth $k$. To do this, we start with the complete square as the root of the tree, then assign the 4 subsquares to the 4 child nodes of the root as upper-left $\rightarrow$ child 1, upper-right $\rightarrow$ child 2, lower-left $\rightarrow$ child 3, lower-right $\rightarrow$ child 4. For each subsquare we do this recursively, until we arrive at the smallest subsquares which will become leaf nodes. Then the Z order of the points on the square will just be the order that their associated nodes are visited by depth-first traversal of this quadtree.
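This equivalence is easy to demonstrate: a depth-first traversal of the implicit quadtree, visiting children in the order described above, emits the grid points in Z order. A Python sketch, where points are hypothetical (column, row) pairs with row 0 at the top:

```python
def quadtree_z_order(k, x=0, y=0):
    """Yield the points of the 2^k x 2^k grid in Z order, by depth-first
    traversal of the implicit complete quadtree rooted at (x, y)."""
    if k == 0:
        yield (x, y)
        return
    half = 1 << (k - 1)
    # children in order: upper-left, upper-right, lower-left, lower-right
    for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
        yield from quadtree_z_order(k - 1, x + dx, y + dy)
```

For instance, `list(quadtree_z_order(1))` gives `[(0, 0), (1, 0), (0, 1), (1, 1)]`, the Z shape on a 2×2 grid.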

## 120: Z Order

Z order is a map between 1-dimensional and n-dimensional space which does a decent job of preserving locality (“closeness” between points).

The calculation of Z order from 1 to 2 dimensions is extremely straightforward. Select $4^k$ points from a 1-dimensional interval, and index them according to order in the obvious way using binary. If working with 64 points for instance, with $k=3$, they’d be labeled 000000, 000001, 000010, …, 111110, 111111. Then recursively split your 2-dimensional square into subsquares $k$ times. Again with $k=3$, you’d have 64 subsquares.

Now given a binary index for a point, read 2 digits at a time, from left to right. If the two digits read are 00, direct the point into the upper-left quadrant. If 01, upper-right. If 10, lower-left. If 11, lower-right. Go on reading the index 2 digits at a time, recursively directing the point into subsquares until you reach the end of the index, and then that subsquare is where the point goes.

A picture, to make this more clear (courtesy of David Eppstein):

I recommend ignoring the stuff in the margins; just look at the labels for points and where they end up.

Here’s what the result of this map looks like for $k = 1, 2, 3, 4$:

The inverse map, from 2D back to 1D, simply reverses this process.
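Both directions can be sketched directly from the digit-reading procedure above: the 1D-to-2D map peels off two bits per level of recursion, and the inverse interleaves the bits of the row and column. Coordinates here are (column, row) pairs with row 0 at the top:

```python
def z_to_xy(idx, k):
    """Map a 2k-bit Z-order index to (x, y) on the 2^k x 2^k grid,
    reading two bits at a time from the most significant end."""
    x = y = 0
    for level in reversed(range(k)):
        pair = (idx >> (2 * level)) & 0b11
        y = (y << 1) | (pair >> 1)  # first digit: 0 = upper, 1 = lower
        x = (x << 1) | (pair & 1)   # second digit: 0 = left, 1 = right
    return x, y

def xy_to_z(x, y, k):
    """The inverse map: interleave the bits of y and x."""
    idx = 0
    for level in reversed(range(k)):
        idx = (idx << 2) | (((y >> level) & 1) << 1) | ((x >> level) & 1)
    return idx
```

For $k = 3$, the index 000001 lands at (1, 0): upper-left quadrant, upper-left subsquare, then upper-right cell.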

## 119: and, or vs &&, || (Ruby)

In the Ruby programming language, the logical operators && and || have the plain-English variants $\texttt{and}$ and $\texttt{or}$. These do not behave exactly the same as their symbolic counterparts; the difference lies in their precedence. While && and || have higher precedence than the assignment operator $\texttt{=}$, $\texttt{and}$ and $\texttt{or}$ actually have lower precedence. This difference makes them useful as control-flow operators, akin to $\texttt{if}$ or $\texttt{unless}$.
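A quick illustration of how the precedence difference interacts with assignment (toy values, chosen only to expose the parse):

```ruby
# && binds tighter than =, so this parses as x = (true && false)
x = true && false

# `and` binds looser than =, so this parses as (y = true) and false
y = true and false

puts x.inspect  # false
puts y.inspect  # true
```

The low precedence is also what makes idioms like `do_something or raise "failed"` read naturally as control flow.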

## 118: Clustering in the Hilbert Curve

The Hilbert map between 1-dimensional and 2-dimensional space has the nice property that distances between points in 1D space are (relatively) preserved when those points are mapped into 2D space. Informally, if 2 points are close together on a line segment then they will also be close together when that segment is transformed into a Hilbert curve.

The converse does not always hold: two points close together in the square are not necessarily close together on the segment, though in many cases they are.

This property is illustrated in the picture below, which is the result of mapping the color spectrum into a square using the Hilbert mapping (image courtesy of Jason Davies):

Because of this nice locality property, the Hilbert curve lends itself to many applications in computer science. For instance, if one wants to index multi-dimensional data in a way that indices preserve the distance between datapoints, a multi-dimensional Hilbert mapping can be used. (The Hilbert curve has generalizations beyond just 2 dimensions.)
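For concreteness, here is the standard iterative conversion from a 1D index to 2D coordinates on an $n \times n$ Hilbert curve, transcribed into Python (this is the well-known algorithm; $n$ must be a power of two):

```python
def hilbert_d2xy(n, d):
    """Convert distance d along the Hilbert curve to an (x, y) cell
    on an n x n grid, where n is a power of two."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:            # rotate this quadrant if needed
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x        # reflect across the diagonal
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y
```

The locality property described above is easy to observe with this function: consecutive indices always map to grid cells at Manhattan distance 1 from each other.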