# Today I Learned

Some of the things I've learned every day since Oct 10, 2016

## 239: Alternative Generation of the Bit-Flips-Per-Increment Sequence

While looking over the sequence representing the number of bit flips required in a sequence of increment operations, I saw that there’s an alternative and very easy way of producing $A_k = (a_1, \dots, a_{2^{k-1}})$, where $a_i$ is the number of bits flipped in increment $i$. We have:

• $A_1 = (1)$
• $A_{k+1} = A_{k} + f(A_{k})$

where $+$ denotes concatenation and $f(A)$, applied to a non-empty sequence $A$, returns a copy of $A$ with the last element incremented by 1. The first few iterations are:

$A_1 = (1)$

$A_2 = (1, 2)$

$A_3 = (1, 2, 1, 3)$

$A_4 = (1, 2, 1, 3, 1, 2, 1, 4)$

$A_5 = (1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1, 2, 1, 5)$

The reason for this pattern should be pretty obvious, and from these it’s easy to show the amortized runtime of increment in essentially the same way as in my previous post.
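The recurrence above takes only a few lines of Python (a quick sketch; the function name is my own, and $f$ is inlined as a slice plus an increment):

```python
def bit_flip_sequence(k):
    """Generate A_k: the number of bits flipped in each of 2^(k-1) increments."""
    a = [1]  # A_1 = (1)
    for _ in range(k - 1):
        # A_{k+1} = A_k concatenated with f(A_k):
        # a copy of A_k with its last element incremented by 1
        a = a + a[:-1] + [a[-1] + 1]
    return a
```

For instance, `bit_flip_sequence(4)` produces `[1, 2, 1, 3, 1, 2, 1, 4]`, matching $A_4$ above.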

I’m not sure what the term for this form of iterative sequence generation is, but it’s vaguely reminiscent of Lindenmayer systems.

## 238: Hamming Weight

The Hamming weight (commonly known as population count or popcount) of a string is simply the number of non-‘zero’ symbols in that string. For instance, the Hamming weight of the base-10 string 0108303809 is 6.

Most commonly, though, Hamming weight is taken over binary strings (or bit ‘vectors’ or some other representation of the same thing). In this context the weight is simply the number of 1’s present. Note this is equivalent to both the $l_1$ norm and $l_0$ ‘norm’ of a binary vector.

Note that it also follows immediately, almost by definition, that the Hamming distance of two binary strings $S, T$ is simply $\text{popcount}(S \text{ xor } T)$.
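As a quick sketch in Python (treating bit strings as ints, and using Kernighan’s clear-the-lowest-set-bit trick; Python 3.10+ also ships popcount built in as `int.bit_count()`):

```python
def popcount(x):
    """Count the 1 bits in a nonnegative integer."""
    count = 0
    while x:
        x &= x - 1  # clears the lowest set bit
        count += 1
    return count

def hamming_distance(s, t):
    """Hamming distance of two equal-length bit strings, via popcount of XOR."""
    return popcount(s ^ t)
```

For example, `hamming_distance(0b1010, 0b0110)` is 2, since the two strings differ in their middle two bits.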

Popcount has a lot of different applications, ranging from picking public keys within the RSA scheme to efficient evaluation of game states in board games using ‘bitboard’ representations. As a result, how to efficiently perform the operation has been subject to substantial research, which I’ll hopefully be able to write a post about later.

## 237: Newton-Raphson Division

Naive division (schoolbook “long division”) runs in $O(n^2)$ time, where $n$ is the number of bits in the numbers involved, just like naive multiplication. However, there are faster algorithms not only for $n$-bit multiplication but for $n$-bit division as well.

One of these — the Newton-Raphson algorithm — runs in $O(M(n))$, where $M(n)$ is the complexity of any multiplication algorithm used, precisely by reducing division to multiplication.

I’ll try to write a more detailed post later once I fully understand the details, but a high level overview is that the algorithm, given numbers $N$ and $D$, finds the quotient $Q$ such that $Q = \frac{N}{D}$ by:

1. Finding the reciprocal $X = D^{-1}$ of $D$, using Newton’s method to find the root of $f(X) = \frac{1}{X} - D$ via the multiplication-only iteration $X_{i + 1} = X_i(2 - DX_i)$.
2. Multiplying $X$ by $N$.
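A floating-point sketch of the idea in Python (real implementations work in fixed point; the $\frac{48}{17} - \frac{32}{17}D$ starting estimate for $D$ scaled into $[\frac{1}{2}, 1)$ is the standard one, and the function names are my own):

```python
import math

def newton_reciprocal(d, iterations=6):
    """Approximate 1/d (d > 0) via the iteration x_{i+1} = x_i * (2 - d * x_i)."""
    # Scale d into [0.5, 1) so a fixed initial guess converges: d = m * 2^e
    m, e = math.frexp(d)
    x = 48 / 17 - (32 / 17) * m   # standard linear initial estimate for 1/m
    for _ in range(iterations):
        x = x * (2 - m * x)       # quadratic convergence: error squares each step
    return math.ldexp(x, -e)      # undo the scaling: 1/d = (1/m) * 2^(-e)

def newton_divide(n, d, iterations=6):
    """Compute n / d by reducing division to multiplication."""
    return n * newton_reciprocal(d, iterations)
```

Since the error roughly squares at each step, only $O(\log n)$ iterations are needed for $n$ bits of precision; six iterations are already far more than double precision requires.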

## 236: Amortized Runtime of Increment

What’s the runtime of incrementing a binary number? When writing my last post I learned that the amortized runtime of $n$ successive insertions into a binomial heap is $O(1)$ precisely because that’s the amortized runtime of incrementing a binary number $n$ times. When I asked myself why the latter was true, though, it didn’t seem immediately obvious, so I looked into getting an intuition for this upper bound.

(Below I make the assumption that in performing $n$ incrementations we start with 0 and end with $n$, and also that $n$ is a power of 2. This is just to make things simpler — it isn’t hard to see why the more general cases follow.)

The most obvious upper bound one can find on the number of operations (checking bits, flipping bits) to increment a number $k$ is $O(\log _2 (k))$: the number of bits representing the number. Using this upper bound, though, the total runtime of incrementing 0 $n$ times would be

$O(\log _2(1) + \log _2(2) + \log _2(3) + \dots + \log _2(n)) = O(n \log(n))$

resulting in an amortized runtime of $O(\log(n))$. How do we get a tighter upper bound?

Looking closer, we see that the number of bits we need to look at or modify in an incrementation is in general less than the number of bits representing the number, because the ‘carry’ operation will usually not go all the way through to the leftmost bit. In fact, exactly half the time the rightmost bit will be 0, meaning incrementation will require looking at and modifying only that single bit.

Looking more generally at this pattern, we also see that the next bit (from the right) gets looked at and flipped only once every 2 incrementations, the next once every 4 incrementations, and so on:

| k  | bit0 | bit1 | bit2 | bit3 | bit4 |
|----|------|------|------|------|------|
| 1  | 1    |      |      |      |      |
| 2  | 1    | 1    |      |      |      |
| 3  | 1    |      |      |      |      |
| 4  | 1    | 1    | 1    |      |      |
| 5  | 1    |      |      |      |      |
| 6  | 1    | 1    |      |      |      |
| 7  | 1    |      |      |      |      |
| 8  | 1    | 1    | 1    | 1    |      |
| 9  | 1    |      |      |      |      |
| 10 | 1    | 1    |      |      |      |
| 11 | 1    |      |      |      |      |
| 12 | 1    | 1    | 1    |      |      |
| 13 | 1    |      |      |      |      |
| 14 | 1    | 1    |      |      |      |
| 15 | 1    |      |      |      |      |
| 16 | 1    | 1    | 1    | 1    | 1    |



Summing the total number of operations over columns (the different bits on which they can be done), we find the actual bound to be

$O(1 + 2 + 4 + 8 + \dots + n) = O(2n) = O(n)$.

This gives us what we want: an amortized per-operation runtime of $O(1)$.
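The bound is easy to check empirically in Python by counting the bits touched per increment directly (incrementing $x$ flips its trailing 1s to 0, then one 0 to 1):

```python
def flips_per_increment(x):
    """Bits flipped when incrementing x: its trailing 1s, plus the 0 above them."""
    trailing_ones = 0
    while x & 1:
        trailing_ones += 1
        x >>= 1
    return trailing_ones + 1

n = 2 ** 10
total = sum(flips_per_increment(k) for k in range(n))  # increments taking 0 to n
assert total < 2 * n  # total work is O(n), so amortized O(1) per increment
```

In fact the total comes out to exactly $2n - 1$ when $n$ is a power of 2, matching the sum $1 + 2 + 4 + \dots + n$ above.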

## 235: Binomial Heaps

Following up on the post on binomial trees, here’s an overview of binomial heaps and the mechanisms behind their various operations.

## Structure

A binomial heap is merely a collection of binomial trees, each of which satisfies the heap invariant, with either 0 or 1 tree for each order $k \geq 0$. A typical way to represent this collection is a linked list where the $k$th node has as its value either null or a binomial tree of order $k$.

Recall that a binomial tree of order $k$ has exactly $2^k$ nodes and depth $k$. This results in a direct analogy between the representation of a number $n$ in binary and the structure of the binomial heap with $n$ nodes. For instance, the number 13 has the unique binary representation 1101. This means that the binomial heap with 13 nodes is uniquely determined to have 1 binomial tree each of orders 0, 2, and 3. ($2^0 + 2^2 + 2^3 = 13$.) An example of such a heap is shown below (minus the linked list pointers):

Source: https://en.wikipedia.org/wiki/Binomial_heap

Note that this also implies that a heap with $n$ nodes will be represented by $O(\log_2(n))$ trees.

## Operations

### Merge

The $O(log(n))$ operation of merging two binomial heaps is not only their main advantage over traditional binary heaps, but is essential to how most other operations are accomplished.

The crucial aspect of how the merge operation works is that merging two binomial trees of order $k$, each satisfying the heap invariant, takes $O(1)$ time. You can simply add the one with the larger root value as a child of the other to get a valid tree-heap of order $k + 1$. This is illustrated below with two trees of order 2:

Source: https://en.wikipedia.org/wiki/Binomial_heap
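This $O(1)$ link step can be sketched in Python (a minimal illustration for a min-heap; the `BinomialTree` and `link` names are my own, not from any particular library):

```python
class BinomialTree:
    """Minimal node for a heap-ordered binomial tree."""
    def __init__(self, value):
        self.value = value
        self.order = 0
        self.children = []

def link(a, b):
    """Merge two heap-ordered trees of equal order k in O(1):
    the larger root becomes a child of the smaller, giving order k + 1."""
    assert a.order == b.order
    if b.value < a.value:
        a, b = b, a          # ensure a has the smaller root
    a.children.append(b)     # attach b under a's root
    a.order += 1
    return a
```

Linking `BinomialTree(3)` and `BinomialTree(5)`, for instance, yields an order-1 tree rooted at 3 with 5 as its only child.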

Because of this, one can merge 2 binomial heaps with $n$ elements in $O(\log(n))$ time, following essentially the same procedure as adding 2 binary numbers (keeping track of a ‘carry’ tree, analogous to a carry digit). The figure below illustrates the merging of two binomial heaps with 11 and 3 elements, respectively.

Source: https://en.wikipedia.org/wiki/Binomial_heap

The analogous binary addition would be adding the numbers 1011 and 0011 to get 1110.

### Insert

Insertion of a new element into a heap can be simply accomplished by creating a new heap with just the one tree/element and merging it with the original. This takes $O(log(n))$ time for a single operation, but amortized $O(1)$ time when taken over the course of $n$ consecutive insertions. (Think of the time to increment a binary number by 1 $n$ times.)

### Find Min (Peek)

Since there are at most $\log_2(n)$ trees in a heap with $n$ nodes, finding the min by just iterating over the trees takes $O(log(n))$ time. This is the main disadvantage relative to a binary heap, where peeking at the min takes $O(1)$.

### Remove Min (Pop)

However, removing the minimum element takes $O(log(n))$ time, just as with a binary heap. Having found the tree with the min as its root, you can split up its children into a separate binomial heap and simply merge this with the original.

### Set Priority

Setting/modifying the value of an element takes $O(log(n))$ time, since you just float it up or sink it down its tree as you would with an ordinary heap, and each tree has depth $O(log(n))$. However, this assumes you have a pointer to where the element is stored.

### Delete Element

To delete an element (again assuming you know where it is), you can simply set its priority to $-\infty$ and remove the min from the heap.

## 234: Binomial Trees

A binomial tree is an abstract data structure which is useful for, among other things, building heap-like data structures (binomial heaps) which are easily mergeable. For $k \geq 0$, the $k$th-order binomial tree $B_k$ can be defined recursively as follows:

• $B_0$ is a tree with a single root node and no children.
• $B_k$ has a root with its children being $B_{k-1}, \dots, B_0$.

To illustrate, below are the binomial trees of orders 0, 1, 2, and 3:

Source: https://en.wikipedia.org/wiki/Binomial_heap

A few properties of this structure that are fairly self-evident are that $B_k$ has depth $k$ and that its root has $k$ children. Another property which is less evident but easily shown by induction is that $B_k$ has $2^k$ nodes.

Related to this last property is the fact that you can alternatively form $B_k$ by taking two copies of $B_{k-1}$ and assigning one as the leftmost child of the other. This is apparently the property which makes them useful for merging two of them in the context of binomial heaps, but I’ll have to get to that in a later post.

As a final note, the name binomial tree results from the fact that $B_k$ has ${k \choose d}$ nodes at depth $d$.
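Both the node-count and binomial-coefficient properties are easy to check with a quick recursive sketch in Python (my own throwaway representation: a node is just the list of its child subtrees, so $B_0$ is `[]`):

```python
from math import comb

def binomial_tree(k):
    """Build B_k as nested lists; the root's children are B_{k-1}, ..., B_0."""
    if k == 0:
        return []
    return [binomial_tree(i) for i in range(k - 1, -1, -1)]

def count_nodes(tree):
    return 1 + sum(count_nodes(c) for c in tree)

def nodes_at_depth(tree, d):
    if d == 0:
        return 1
    return sum(nodes_at_depth(c, d - 1) for c in tree)
```

For example, `count_nodes(binomial_tree(5))` gives $2^5 = 32$, and `nodes_at_depth(binomial_tree(5), d)` matches ${5 \choose d}$ for every depth $d$.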

## 233: typedefs in C

The typedef keyword in C is (from what I can tell; I may be off) used to create an alias for a given type. The general syntax is

```c
typedef <previously-defined type> <alias>;
```

For instance, the line

```c
typedef int size;
```


simply lets the compiler know that the word size refers to the type already defined as int. You can then compile the code

```c
typedef int size;
size k = 10;
printf("The size is %d\n", k);
printf("The size of the size datatype is %lu\n", sizeof(size));
```


which will print out

```
The size is 10
The size of the size datatype is 4
```


In the context of C structs, typedefs can be used to simplify syntax in a way that’s a bit more reminiscent of Java syntax for classes and objects. For instance, if you had a struct defined as

```c
struct person {
    char *name;
    int age;
};
```


To declare a variable of this type you would have to say

```c
struct person p;
```


which is a little bit awkward. With typedefs, you have the opportunity to trade that off with a more awkward struct declaration

```c
typedef struct {
    char *name;
    int age;
} person;
```


which then gives you the option to simply declare your variable with

```c
person me;
```


## 232: *, & (C)

Before today I’d been loosely familiar with the use of the dereferencing operator * in the C language, but today as I’ve begun a refresher (really a primer) in C I find myself actually understanding this operator and its counterpart & for the first time.

When a pointer variable p is assigned with int *p; p = &x;, where x is a previously declared integer variable, p itself becomes a new variable (typically 8 bytes on a 64-bit system) which has as its value the address of the variable x it points to. The declaration and assignment say, respectively, “p is a variable which has as its value the address of an integer” and “take the address of the variable x (which the program has to keep track of anyway) and assign it as the value of p.”

You can have pointers-to-pointers-to-pointers, if you want to, since pointers are themselves variables and have their own addresses. Thus the following code

```c
#include <stdio.h>

int main()
{
    int x = 5;
    int *p;
    int **q;
    int ***r;
    p = &x;
    q = &p;
    r = &q;
    printf(x == *p ? "true\n" : "false\n");
    printf(x == **q ? "true\n" : "false\n");
    printf(p == **r ? "true\n" : "false\n");

    return 0;
}
```


will print

```
true
true
true
```


However, while you can dereference a pointer multiple times (e.g. ***r), if you try to evaluate the expression

```c
&&x
```


the compiler will reject it. This is just because &x will give you the literal value of the address of x (which has no address of its own), and trying to take the address of a literal is nonsensical.

## 231: Min Heap Using Dynamically-Sized Array

Today I learned how to implement a Min Heap using a dynamically-sized array and the typical representation of a complete binary tree as an array where the children of index $k$ are at indices $2k + 1$ and $2k + 2$, respectively.

Specifically, I used Python’s List data structure for an impromptu implementation of a basic min heap which uses its items’ values themselves for comparisons. Below is my code (which I wrote in like 5 minutes so it’s not super pretty):

```python
class MyMinHeap(object):
    """
    Implementation of Min Heap using dynamic array
    """
    def __init__(self):
        self.heap = []
        self.size = 0

    def peek(self):
        if self.size == 0:
            return None
        return self.heap[0]

    def push(self, val):
        self.heap.append(val)
        self.size += 1

        # float up
        curr = self.size - 1
        while curr != 0:
            parent = (curr - 1) // 2
            # if min heap invariant violated, swap with parent
            if self.heap[curr] < self.heap[parent]:
                self.heap[curr], self.heap[parent] = (self.heap[parent], self.heap[curr])
                curr = parent
            else:
                break

    def pop(self):
        if self.size == 0:
            return None

        # replace root of tree with last child, maintaining completeness
        return_val = self.heap[0]
        self.heap[0] = self.heap[-1]
        del self.heap[-1]
        self.size -= 1

        # sink down
        curr = 0
        while 2 * curr + 1 < self.size:
            left = 2 * curr + 1
            right = 2 * curr + 2
            # pick which child has the smallest value
            if right < self.size and self.heap[right] < self.heap[left]:
                child = right
            else:
                child = left

            # swap with child if necessary to maintain min heap invariant
            if self.heap[child] < self.heap[curr]:
                self.heap[curr], self.heap[child] = (self.heap[child], self.heap[curr])
                curr = child
            else:
                break

        return return_val
```

## 230: ‘Product of Array Except at Each Index’ O(n) Algorithm

Problem:

Given an array $A = [a_0, \dots, a_{n - 1}]$ of $n$ integers, find an $O(n)$ time algorithm for producing a new array $P$ also of length $n$, where $p_i = \prod _{j \neq i} a_j$. That is, where $p_i$ is the product of all elements of $A$ except $a_i$.

Solution:

Considering how we can break down the problem of finding $p_i$ into subproblems, we note that this product is that of all the $a_j$ with $j < i$ times that of the $a_j$ with $j > i$. In the cases $i = 0$ and $i = n - 1$ we take the product of zero numbers to be 1.

Now that we’ve broken the problem into two subproblems which are identical by symmetry, finding a simple $O(n)$ dynamic programming algorithm to solve this subproblem and thus the original is fairly obvious.

For each $i = 0, \dots, n - 1$, the product $L_i$ of all $a_j$ with $j < i$ is inductively $a_{i - 1} L_{i - 1}$, where as our base case we define $L_0 = 1$.

Working in the other direction from right to left, for each $i = n - 1, \dots, 0$ the product $R_i$ of all $a_j$ with $j > i$ is $a_{i + 1} R_{i + 1}$ with $R_{n -1 } = 1$.

Now we have after these 2 linear-time passes of $A$ that $p_i = L_i R_i$ for each $i$.

Because of the properties of multiplication there are several ways you can implement this, but the simplest I’ve seen is in the Java code below (courtesy of this Leetcode submission) which additionally uses only $O(1)$ extra space:

```java
public int[] productExceptSelf(int[] A) {
    int[] P = new int[A.length];
    int p;

    // forward pass: P[i] holds the product of all elements left of i
    p = 1;
    for (int i = 0; i < A.length; i++) {
        P[i] = p;
        p *= A[i];
    }

    // backward pass: multiply in the product of all elements right of i
    p = 1;
    for (int i = A.length - 1; i >= 0; i--) {
        P[i] *= p;
        p *= A[i];
    }

    return P;
}
```