Today I Learned

Some of the things I've learned every day since Oct 10, 2016

122: Hotfixes

In software development, a hotfix is similar to a bug fix but differs in that it is

  • applied to the system while hot, i.e. currently deployed and running
  • commonly not publicly available but instead only released to a small audience of clients who have encountered a given bug in the deployed system
  • applied outside of the usual release schedule

Because they typically receive less testing than scheduled releases, hotfixes can easily become a source of further bugs, and are thought of as a kind of ‘emergency’ measure used when a fix for a client cannot wait until the next scheduled update.

Hotfixes are commonly released in packages called cumulative updates.

121: Another Description of Z Order

Another clear description of Z Order, equivalent to the binary description, comes from depth-first traversal of a quadtree.

Say we have a 2-dimensional square with a grid of 4^k points on it, and that we organize these points into a complete quadtree of depth k. To do this, we start with the complete square as the root of the tree, then assign its 4 subsquares to the 4 child nodes of the root: upper-left → child 1, upper-right → child 2, lower-left → child 3, lower-right → child 4. We do this recursively for each subsquare, until we arrive at the smallest subsquares, which become leaf nodes. The Z order of the points on the square is then just the order in which their associated nodes are visited by a depth-first traversal of this quadtree.
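The traversal above can be sketched in a few lines of Ruby (the function name is my own; it enumerates an implicit quadtree rather than building one explicitly):

```ruby
# Enumerate the points of a side x side grid (side a power of 2) in Z order
# by depth-first traversal of an implicit quadtree. Each call handles one
# subsquare, given its upper-left corner (x, y); side-1 squares are leaves.
def z_order_dfs(x = 0, y = 0, side = 4, acc = [])
  if side == 1
    acc << [x, y]                        # leaf node: a single grid point
  else
    h = side / 2
    z_order_dfs(x,     y,     h, acc)    # child 1: upper-left
    z_order_dfs(x + h, y,     h, acc)    # child 2: upper-right
    z_order_dfs(x,     y + h, h, acc)    # child 3: lower-left
    z_order_dfs(x + h, y + h, h, acc)    # child 4: lower-right
  end
  acc
end
```

Calling z_order_dfs(0, 0, 4) lists the 16 points of a 4×4 grid (k = 2) in Z order, starting (0, 0), (1, 0), (0, 1), (1, 1), … with x the column and y the row from the top.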

120: Z Order

Z order is a map between 1-dimensional and n-dimensional space which does a decent job of preserving locality (“closeness” between points).

The calculation of Z order from 1 to 2 dimensions is extremely straightforward. Select 4^k points from a 1-dimensional interval, and index them in order using 2k-digit binary strings. If working with 64 points for instance, with k = 3, they’d be labeled 000000, 000001, 000010, …, 111110, 111111. Then recursively split your 2-dimensional square into subsquares k times. Again with k = 3, you’d have 64 smallest subsquares.

Now given a binary index for a point, read 2 digits at a time, from left to right. If the two digits read are 00, direct the point into the upper-left quadrant. If 01, upper-right. If 10, lower-left. If 11, lower-right. Go on reading the index 2 digits at a time, recursively directing the point into subsquares until you reach the end of the index, and then that subsquare is where the point goes.
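Read as bit manipulation, the procedure above de-interleaves the index: the first bit of each pair selects upper/lower, the second selects left/right. A sketch in Ruby (the function name is my own):

```ruby
# Map a 1-D index with 2k binary digits to a point in a 2^k x 2^k grid,
# reading the index two bits at a time, most significant pair first:
# 00 -> upper-left, 01 -> upper-right, 10 -> lower-left, 11 -> lower-right.
# Returns [x, y] with x the column and y the row from the top.
def z_index_to_point(index, k)
  x = y = 0
  (k - 1).downto(0) do |level|
    pair = (index >> (2 * level)) & 0b11
    y = (y << 1) | (pair >> 1)   # first bit of the pair: upper (0) / lower (1)
    x = (x << 1) | (pair & 1)    # second bit: left (0) / right (1)
  end
  [x, y]
end
```

Mapping the indices 0, 1, 2, … in order then traces out the Z-shaped pattern over the grid.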

A picture, to make this more clear (courtesy of David Eppstein):

[image: z order.png]

I recommend ignoring the stuff in the margins; just look at the labels for points and where they end up.

Here’s what the result of this map looks like for k = 1, 2, 3, 4:

[image: z ord]

The inverse map, from 2D back to 1D, simply reverses this process: record the quadrant chosen at each level as a pair of bits, and concatenate the pairs.

119: and, or vs &&, || (Ruby)

In the Ruby programming language, the logical operators && and || have the plain-English variants and and or. These plain-English operators do not behave exactly the same as their symbolic counterparts, the difference lying in their precedence. While && and || have higher evaluation priority than the assignment operator =, and and or actually have lower priority. This difference makes them useful as control-flow operators, akin to if or unless.
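A small illustration of the precedence difference (the save method is hypothetical):

```ruby
# `=` binds tighter than `and`, so this parses as (x = nil) and "fallback";
# x ends up nil, and the right-hand side's value is discarded.
x = nil and "fallback"

# `||` binds tighter than `=`, so this parses as y = (nil || "fallback");
# y ends up "fallback".
y = nil || "fallback"

# The low precedence of `or` makes it read well as control flow:
def save(record)
  record or raise ArgumentError, "nothing to save"
  true
end
```

Because of this trap with assignment, many style guides reserve and/or for control flow and use &&/|| inside expressions.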

118: Clustering in the Hilbert Curve

The Hilbert map between 1-dimensional and 2-dimensional space has the nice property that distances between points in 1D space are (relatively) preserved when those points are mapped into 2D space. Informally, if 2 points are close together on a line segment then they will also be close together when that segment is transformed into a Hilbert curve.

The converse is not always true: two points that are close together in the square may nevertheless be far apart on the segment, though in many cases they are close too.

This property is illustrated in the picture below, which is the result of mapping the color spectrum into a square using the Hilbert mapping (image courtesy of Jason Davies):

[image: hilbertcolors]

Because of this nice locality property, the Hilbert curve lends itself to many applications in computer science. For instance, if one wants to index multi-dimensional data in a way that indices preserve the distance between datapoints, a multi-dimensional Hilbert mapping can be used. (The Hilbert curve has generalizations beyond just 2 dimensions.)
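For the 2-dimensional case, there is a standard bit-twiddling construction that maps a 1-D index to its point on the curve. Here is a Ruby translation of that well-known routine, a sketch rather than any particular library's API:

```ruby
# Map a 1-D index d (0 <= d < n*n) to an (x, y) point on the Hilbert
# curve filling an n x n grid, where n is a power of 2. At each scale s
# the quadrant is chosen from two bits of d, and the sub-curve is
# reflected/rotated so that consecutive sub-curves join end to end.
def hilbert_d2xy(n, d)
  x = y = 0
  t = d
  s = 1
  while s < n
    rx = 1 & (t / 2)
    ry = 1 & (t ^ rx)
    if ry == 0                 # rotate this quadrant's sub-curve
      if rx == 1
        x = s - 1 - x
        y = s - 1 - y
      end
      x, y = y, x
    end
    x += s * rx
    y += s * ry
    t /= 4
    s *= 2
  end
  [x, y]
end
```

A nice sanity check of the locality property: consecutive 1-D indices always map to grid cells exactly one step apart.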

117: Generality of the 2-Element Boolean Algebra

The behavior of boolean algebras in general is captured by that of 2, the boolean algebra with 2 elements, since a theorem holds for 2 if and only if it holds in every boolean algebra.
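This gives a practical proof technique for equational laws: to verify an identity in all boolean algebras, brute-force it over {false, true}. A sketch in Ruby (the helper name is my own, and it handles laws in up to three variables):

```ruby
# Check an equational law over all assignments of {false, true} to
# (up to) three variables. By the completeness of the two-element
# algebra, passing this check means the law holds in every boolean
# algebra.
def holds_in_two?(&law)
  two = [false, true]
  two.product(two, two).all? { |a, b, c| law.call(a, b, c) }
end

holds_in_two? { |a, b, _| !(a && b) == (!a || !b) }              # De Morgan
holds_in_two? { |a, b, c| (a && (b || c)) == ((a && b) || (a && c)) }  # distributivity
```

Both checks above return true; a false law (e.g. a && b == a || b) fails on some assignment.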

116: Differential Testing (Software)

In software testing, differential testing is the method of giving semantically identical input to multiple different implementations of the same program and looking for inputs on which the implementations differ in output. Such a difference indicates a problem in at least one of the implementations.

As with any testing method that uses a select few inputs out of an often infinite input space, there are various ways of selecting which inputs to use, each with their advantages and disadvantages. One can select inputs completely randomly — known as unguided testing — or use techniques such as evolutionary guidance to make the selection of inputs more likely to detect possible bugs.
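A minimal sketch of unguided differential testing in Ruby, comparing a hand-written insertion sort (a stand-in "second implementation" for illustration) against Ruby's built-in Array#sort on random inputs:

```ruby
# A deliberately independent second implementation of sorting.
def insertion_sort(arr)
  sorted = []
  arr.each do |x|
    i = sorted.index { |y| y > x } || sorted.length
    sorted.insert(i, x)
  end
  sorted
end

# Unguided differential testing: generate random inputs and collect
# every input on which the two implementations disagree.
def differential_test(trials: 200, rng: Random.new(42))
  failures = []
  trials.times do
    input = Array.new(rng.rand(0..20)) { rng.rand(-50..50) }
    failures << input if insertion_sort(input) != input.sort
  end
  failures
end
```

An empty failures list is only evidence, not proof, of agreement; guided techniques try to pick inputs more likely to land in the disagreement region.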

115: Race Conditions

In hardware or software systems, a race condition is a situation in which the system’s behavior depends critically on the timing or ordering of events.

In a software system, the issue could arise from two processes or threads running at the same time and modifying the same resource, resulting in a resource modification error. In other words, the two processes are “racing” to access the resource, and the result depends on which one gets there first. A typical fix for this type of race condition is simply locking the resource so that only one process can access it at a time.
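A sketch of that fix in Ruby, using a Mutex to serialize a read-modify-write on a shared counter. (On CRuby the global VM lock makes many pure-Ruby races hard to observe in practice, but the locking pattern is the same.)

```ruby
# A counter shared between threads. Without the Mutex, the
# read-modify-write in #increment could interleave between threads
# and lose updates; #synchronize lets only one thread in at a time.
class Counter
  attr_reader :value

  def initialize
    @value = 0
    @lock = Mutex.new
  end

  def increment
    @lock.synchronize do
      current = @value   # read
      current += 1       # modify
      @value = current   # write: unsafe if another thread interleaves here
    end
  end
end

counter = Counter.new
threads = 8.times.map { Thread.new { 1000.times { counter.increment } } }
threads.each(&:join)
```

With the lock in place, the final value is exactly 8 × 1000 regardless of how the threads are scheduled.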

114: Iterative Deepening Search

Iterative deepening depth-first search is a graph traversal algorithm which visits (new) nodes in effectively the same order as breadth-first search, but with lower memory usage and a runtime of O(b^d) (where b is the branching factor of the graph and d is the depth of the goal node).

The algorithm does pretty much what the name describes: it iteratively performs a typical depth-first search up to a set maximum depth, incrementing that depth with each iteration. In this way, nodes are seen for the first time in the order they would be seen by BFS, and since each iteration is an ordinary DFS, the algorithm uses just O(d) space, where d is the depth of the goal.
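A sketch in Ruby, with the graph given as an adjacency Hash (names are my own; on graphs with cycles the depth limit still guarantees each iteration terminates, though nodes may be revisited):

```ruby
# Depth-limited DFS: returns the depth at which goal is found within
# the limit, or nil. Note that in Ruby the depth 0 is truthy, so a
# start-equals-goal hit is reported correctly.
def depth_limited(graph, node, goal, limit, depth = 0)
  return depth if node == goal
  return nil if depth == limit
  (graph[node] || []).each do |child|
    found = depth_limited(graph, child, goal, limit, depth + 1)
    return found if found
  end
  nil
end

# Iterative deepening: rerun depth-limited DFS with limits 0, 1, 2, ...
# so nodes are first discovered in BFS order while only a single DFS
# path (O(d) space) is held in memory at a time.
def iddfs(graph, start, goal, max_depth: 50)
  (0..max_depth).each do |limit|
    found = depth_limited(graph, start, goal, limit)
    return found if found
  end
  nil
end
```

The repeated re-expansion of shallow nodes looks wasteful, but because the tree grows geometrically in b, the final iteration dominates and the total work remains O(b^d).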

113: One Generalization of AND, OR, NOT to Fuzzy Logic

In fuzzy logic, where the set {0, 1} of possible values is replaced with the interval [0, 1], generalizations of the basic AND, OR, and NOT operators are needed which can handle input from the new domain. The only requirement of such generalizations is that they agree with their more specific counterparts when given input from {0, 1}. However, some choices are more intuitive and advantageous than others.

One such choice is defining

  • AND(a, b) = a*b
  • OR(a, b) = a + b - a*b
  • NOT(a) = 1 - a

The formulas for AND and NOT are intuitive enough, and even though the formula for OR may not be, it’s easily derived from the other two using the equivalence

OR(a, b) = NOT(AND(NOT(a), NOT(b))).
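These operators (sometimes called the product t-norm and probabilistic sum) are easy to state directly; a small Ruby sketch, with constant names of my own, showing both the direct definition of OR and its derivation via the equivalence above:

```ruby
# Fuzzy generalizations of AND, OR, NOT on the interval [0, 1].
FAND = ->(a, b) { a * b }
FNOT = ->(a)    { 1.0 - a }
FOR  = ->(a, b) { a + b - a * b }

# OR derived from AND and NOT via OR(a,b) = NOT(AND(NOT(a), NOT(b))):
FOR_DERIVED = ->(a, b) { FNOT.call(FAND.call(FNOT.call(a), FNOT.call(b))) }
```

On inputs from {0, 1} these reduce to the classical truth tables, and the direct and derived forms of OR agree everywhere on [0, 1]: 1 - (1 - a)(1 - b) expands to a + b - ab.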