Today I Learned

Some of the things I've learned every day since Oct 10, 2016

Category Archives: algorithms

157: Dijkstra’s and Negative Weights

Dijkstra’s algorithm, commonly used to find the path of minimum weight to any given node from a starting node in a weighted graph, does not work if negative weights are allowed.

Intuitively, the reason is that Dijkstra’s assumes, as it works through the graph, that extending a path can only make it ‘heavier’, and so when visiting a node it does not reconsider neighbors which have themselves already been visited. For instance, consider running the algorithm on the following graph with A as the starting node:

[Image: example graph with a negative-weight edge, from https://stackoverflow.com/q/13159337]

The algorithm will look at C as a neighbor of A and set its minimum known distance to 2. It will then look at B and set its distance to 5. It will then move on to visit C without any changes, since C has no outgoing edges. It will then visit B, but will not consider the edge from B to C because C has already been visited. So the algorithm will wrongly conclude that the shortest distance from A to C is 2, not -5.
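
To make this concrete, here’s a rough Python sketch of a textbook Dijkstra’s run on that graph. The edge weights (A->C = 2, A->B = 5, B->C = -10) are my reconstruction from the description above rather than values quoted from the image:

    import heapq

    def dijkstra(graph, source):
        # Textbook Dijkstra: once a node is finalized (popped), it is never revisited.
        dist = {node: float("inf") for node in graph}
        dist[source] = 0
        visited = set()
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if u in visited:
                continue
            visited.add(u)
            for v, w in graph[u].items():
                if v in visited:          # finalized neighbors are never reconsidered
                    continue
                if d + w < dist[v]:
                    dist[v] = d + w
                    heapq.heappush(heap, (dist[v], v))
        return dist

    # Reconstructed example graph (edge weights assumed from the description above):
    # A -> C costs 2, A -> B costs 5, B -> C costs -10, so the true cost of A..C is -5.
    graph = {
        "A": {"C": 2, "B": 5},
        "B": {"C": -10},
        "C": {},
    }
    print(dijkstra(graph, "A"))   # {'A': 0, 'B': 5, 'C': 2} -- C should be -5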

One adaptation which does work on graphs with negative edge weights (as long as there are no negative cycles) is Johnson’s algorithm, which itself uses Dijkstra’s as a subroutine.


149: Floyd’s Cycle-Finding Algorithm

Suppose you want to detect a cycle in a linked list. More specifically, you want to find the node N where the cycle begins. One way to do this is Floyd’s cycle-finding algorithm, sometimes referred to as the “Tortoise and the Hare” algorithm.

Informally, the way it works is:

  • Initialize two pointers P, Q, with P = next(head), Q = next(next(head)).
  • While P \neq Q, increment P by 1 node and Q by 2. (P is the “slow” pointer and Q is the “fast” one. At each step of this loop the distance between the two will increment by 1.)
  • Initialize counter n = 0, and reset P to point to the head of the list.
  • While P \neq Q, increment each of P, Q by 1 node and n by 1.
  • Return node N which is n nodes after the head of the list.

The runtime of this algorithm is O(n + \lambda), where n is the final value of the counter above (the number of nodes before the cycle begins) and \lambda is the period of the cycle being found.
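
Here’s a rough Python sketch of the same idea. This variant starts both pointers at the head and advances them before comparing, which is equivalent to the initialization described above:

    class Node:
        def __init__(self, value):
            self.value = value
            self.next = None

    def find_cycle_start(head):
        # Phase 1: advance the slow pointer by 1 and the fast pointer by 2
        # until they meet somewhere inside the cycle.
        slow = fast = head
        while True:
            if fast is None or fast.next is None:
                return None          # ran off the end of the list: no cycle
            slow = slow.next
            fast = fast.next.next
            if slow is fast:
                break
        # Phase 2: reset one pointer to the head; moving both one step at a
        # time, they meet exactly at the node where the cycle begins.
        slow = head
        while slow is not fast:
            slow = slow.next
            fast = fast.next
        return slow

    # Build 0 -> 1 -> 2 -> 3 -> 4 -> 2 (the cycle starts at node 2).
    nodes = [Node(i) for i in range(5)]
    for a, b in zip(nodes, nodes[1:]):
        a.next = b
    nodes[4].next = nodes[2]
    print(find_cycle_start(nodes[0]).value)   # 2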

124: Skip List Search Algorithm

The following is an informal description of the algorithm for searching for an element in a skip list:

[Image: diagram of the skip list search algorithm]

If you look at the shape traced out by the nodes the algorithm visits on a given example, the general case of the search time being O(\log{} n) makes intuitive sense.
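
Since the diagram hasn’t survived here, the following is a rough Python sketch of the usual search procedure, assuming each node carries one forward pointer per level (the SkipNode layout and the hand-built example are my own, not taken from the original figure):

    class SkipNode:
        def __init__(self, value, levels):
            self.value = value
            self.forward = [None] * levels   # forward[i] = next node at level i

    def search(head, target):
        node = head
        # Start at the top (sparsest) level; at each level move right as far
        # as possible without overshooting, then drop down one level.
        for lvl in range(len(head.forward) - 1, -1, -1):
            while node.forward[lvl] is not None and node.forward[lvl].value < target:
                node = node.forward[lvl]
        node = node.forward[0]               # candidate at the bottom (full) level
        return node is not None and node.value == target

    # Hand-built example: bottom level holds 1, 3, 4, 6, 9; level 1 holds 3, 6; level 2 holds 6.
    head = SkipNode(float("-inf"), 3)
    n1, n3, n4, n6, n9 = (SkipNode(v, l) for v, l in [(1, 1), (3, 2), (4, 1), (6, 3), (9, 1)])
    head.forward = [n1, n3, n6]
    n1.forward = [n3]
    n3.forward = [n4, n6]
    n4.forward = [n6]
    n6.forward = [n9, None, None]
    n9.forward = [None]

    print(search(head, 4), search(head, 7))   # True False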

123: Skip Lists (Intro)

A skip list is a data structure similar to a linked list which stores an ordered sequence of elements and is built specifically for fast search within the sequence. The way it’s commonly implemented is as a hierarchy of linked lists, with the first list containing all the elements of the skip list, and each successive linked list containing a sparser subset of the elements in the previous one.

To search for an element within the skip list one then starts with the sparsest linked list, getting as ‘close’ to the desired element as possible, then moves directly down to that same position in the next linked list and continues searching until the item is found (or not).

This animation depicts the insertion of the element 80 into a skip list of positive integers (insertion being equivalent to searching):

[Animation: inserting 80 into a skip list]

Credit: Artyom Kalinin

 

In the average case, the skip list uses O(n) space and takes just O(\log{} n) time for searching operations.
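
For a rough sense of how the hierarchy gets built, here’s a sketch of the standard textbook insertion routine; the fair-coin level generation and all the names below are textbook conventions I’m assuming, not details read off the animation:

    import random

    class SkipNode:
        def __init__(self, value, levels):
            self.value = value
            self.forward = [None] * levels       # forward[i] = next node at level i

    class SkipList:
        def __init__(self, max_levels=16):
            self.max_levels = max_levels
            self.head = SkipNode(None, max_levels)   # sentinel; its value is never compared

        def random_levels(self):
            # Flip a fair coin: each extra level is taken with probability 1/2,
            # which is what keeps the higher lists sparser than the ones below.
            levels = 1
            while levels < self.max_levels and random.random() < 0.5:
                levels += 1
            return levels

        def insert(self, value):
            # Find, at every level, the last node strictly before the new value.
            update = [self.head] * self.max_levels
            node = self.head
            for lvl in range(self.max_levels - 1, -1, -1):
                while node.forward[lvl] is not None and node.forward[lvl].value < value:
                    node = node.forward[lvl]
                update[lvl] = node
            # Splice the new node into the bottom `levels` lists.
            new = SkipNode(value, self.random_levels())
            for lvl in range(len(new.forward)):
                new.forward[lvl] = update[lvl].forward[lvl]
                update[lvl].forward[lvl] = new

    s = SkipList()
    for x in [30, 50, 70, 90, 80]:
        s.insert(x)

    # The bottom level always contains every element in sorted order:
    node = s.head.forward[0]
    while node is not None:
        print(node.value, end=" ")               # 30 50 70 80 90
        node = node.forward[0]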

114: Iterative Deepening Search

Iterative deepening depth-first search is a graph traversal algorithm which visits (new) nodes in effectively the same order as breadth-first search, but with lower memory usage and a runtime of O(b^d) (where b is the branching factor of the graph and d is the depth of the goal node).

The algorithm does pretty much what the name describes: it iteratively performs a typical depth-first search up to a set maximum depth, incrementing the depth with each iteration. In this way, nodes are seen for the first time in the order they would be seen by BFS while using far less memory. In addition, since it is built on DFS, the algorithm uses just O(d) space, where d is the depth of the goal.
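
As a rough sketch (assuming the input is a tree, so no cycle checking is needed, and using a plain dictionary as the graph representation):

    def depth_limited_dfs(graph, node, goal, limit):
        # Plain DFS that refuses to descend more than `limit` edges below `node`.
        if node == goal:
            return True
        if limit == 0:
            return False
        for child in graph.get(node, []):
            if depth_limited_dfs(graph, child, goal, limit - 1):
                return True
        return False

    def iterative_deepening(graph, start, goal, max_depth=50):
        # Re-run depth-limited DFS with limit 0, 1, 2, ... so nodes are first
        # discovered in the same order BFS would discover them.
        for depth in range(max_depth + 1):
            if depth_limited_dfs(graph, start, goal, depth):
                return depth             # depth of the goal node
        return None

    # Small example: 'G' sits at depth 3 (A -> C -> F -> G).
    tree = {
        "A": ["B", "C"],
        "B": ["D", "E"],
        "C": ["F"],
        "F": ["G"],
    }
    print(iterative_deepening(tree, "A", "G"))   # 3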