**Problem**:

Given an array $A$ of $n$ integers (or reals, equivalently), indexed $0 \le i < n$, return an array $B$ such that $B[i]$ is the smallest value $d > 0$ such that $A[i+d] > A[i]$ if such a $d$ exists, and otherwise $B[i] = 0$. For example, $A = [3, 1, 4, 1, 5]$ gives $B = [2, 1, 2, 1, 0]$.

**Solution**:

Set $B[n-1] = 0$. For $i = n-2, n-3, \ldots, 0$, initialize $d = 1$. While $i + d < n$ and $A[i+d] \le A[i]$ and $B[i+d] > 0$ [using short-circuiting on this boolean], update $d \leftarrow d + B[i+d]$. After this loop terminates, set $B[i] = d$ if $i + d < n$ and $A[i+d] > A[i]$, and $B[i] = 0$ otherwise.
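The procedure above can be sketched in Python as follows (the function name `next_greater_distance` is mine, not from the write-up):

```python
def next_greater_distance(A):
    """For each i, B[i] is the smallest d > 0 with A[i+d] > A[i], else 0."""
    n = len(A)
    B = [0] * n  # B[n-1] = 0: nothing lies to the right of the last element
    for i in range(n - 2, -1, -1):
        d = 1
        # Short-circuiting keeps the A[i+d] access in bounds, and the
        # B[i+d] > 0 check stops the scan once no greater element can exist.
        while i + d < n and A[i + d] <= A[i] and B[i + d] > 0:
            d += B[i + d]  # skip straight to i+d's own next-greater element
        B[i] = d if i + d < n and A[i + d] > A[i] else 0
    return B

print(next_greater_distance([3, 1, 4, 1, 5]))  # [2, 1, 2, 1, 0]
```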

**Reason it works**:

If we’ve found a $d$ such that $A[i+d] > A[i]$, we’re done and $d$ is the distance we want. Otherwise, we know $A[i+d] \le A[i]$, and thus the next element greater than $A[i]$ can appear no sooner than index $i + d + B[i+d]$, so we can safely skip ahead by that amount. If we reach a position $i + d$ with $A[i+d] \le A[i]$ and $B[i+d] = 0$, then nothing to the right of $i+d$ exceeds $A[i+d]$, hence nothing there exceeds $A[i] \ge A[i+d]$ either; we’re never going to find what we’re looking for, so we can stop looking.
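One way to convince yourself of this reasoning is to cross-check the jump-based loop against a naive quadratic scan on random inputs. A sketch (both functions restate the text; the names `fast` and `naive` are mine):

```python
import random

def fast(A):
    # The jump-based loop from the Solution section.
    n = len(A)
    B = [0] * n
    for i in range(n - 2, -1, -1):
        d = 1
        while i + d < n and A[i + d] <= A[i] and B[i + d] > 0:
            d += B[i + d]
        B[i] = d if i + d < n and A[i + d] > A[i] else 0
    return B

def naive(A):
    # Direct O(n^2) restatement of the problem, used only as an oracle.
    n = len(A)
    return [next((d for d in range(1, n - i) if A[i + d] > A[i]), 0)
            for i in range(n)]

random.seed(0)
for _ in range(200):
    A = [random.randint(0, 9) for _ in range(random.randint(0, 30))]
    assert fast(A) == naive(A)
print("all random trials agree")
```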

**Runtime** **(the weird part)**:

At first glance it’s not clear why this algorithm gives runtime better than $O(n^2)$. After all, it seems in its worst case to approximate a basic nested `for` loop. However, if you denote $u$ to be the number of indices $i$ such that $A[i+1] > A[i]$ and $v$ to be the number such that $A[i+1] \le A[i]$, and consider that $u + v = n - 1$, it becomes clear that the only indices for which the update will do a non-constant amount of work are a subset of the $v$ indices with $A[i+1] \le A[i]$. Based on this, you can actually argue that the number of updates over the course of the algorithm is $O(n)$: a jump from a position $j$ can only happen while processing the nearest $i < j$ with $A[i] \ge A[j]$, so each position is jumped from at most once, and the total number of updates is at most $n$.
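You can also watch the bound hold concretely by instrumenting the inner loop to count how many times the update fires across the whole run. A sketch (the `updates` counter is instrumentation I’ve added, not part of the algorithm):

```python
def count_updates(A):
    # Same loop as in the Solution section, with a counter on the
    # number of times `d += B[i+d]` executes over the entire run.
    n = len(A)
    B = [0] * n
    updates = 0
    for i in range(n - 2, -1, -1):
        d = 1
        while i + d < n and A[i + d] <= A[i] and B[i + d] > 0:
            d += B[i + d]
            updates += 1
        B[i] = d if i + d < n and A[i + d] > A[i] else 0
    return updates

for A in ([2, 1] * 500,               # sawtooth
          list(range(1000, 0, -1)),   # strictly decreasing
          [i % 7 for i in range(1000)]):  # repeating ramps
    assert count_updates(A) <= len(A)
print("update count never exceeded n")
```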

This algorithm is a neat example of a non-vanilla dynamic programming solution where the solution-building step seems to be doing a non-constant amount of work, but in fact, over the course of the algorithm, the amortized cost of each step is $O(1)$.