Cycle finding

Given a directed graph, how do we detect if it has a cycle?

You probably already know one approach: run a toposort algorithm on the graph and check if it fails. The toposort algorithm fails if and only if there is a cycle in the graph, hence this correctly solves our problem! But if you were asked to actually find a cycle, then it could get tricky depending on the toposort algorithm used.

It's still worthwhile to discuss other approaches to this problem. For instance, a simple algorithm arises from the following insight: there is a cycle if and only if there is a back edge in the DFS forest. (Why?) Thus, we can detect a cycle by performing a DFS (like above) and stopping once we find a back edge! Another advantage is that it's easy to actually find a cycle once one is detected. (How? See the sketch below for one way.) In fact, by pursuing this idea further, you can use a DFS to extract a toposort of a DAG: just order the nodes by decreasing finishing time! Think about why that works.
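To make the back-edge idea concrete, here's a minimal Python sketch (the adjacency-list dict `adj` and all the names are our own choices, not from the text). Nodes are colored white (unvisited), gray (on the current DFS path), or black (finished); an edge into a gray node is a back edge, and climbing parent pointers from there recovers a cycle.

    # Hypothetical sketch: detect and extract a cycle via DFS back edges.
    # `adj` maps each node to a list of its successors (our representation).
    def find_cycle(adj):
        WHITE, GRAY, BLACK = 0, 1, 2    # unvisited / on current path / finished
        color = {u: WHITE for u in adj}
        parent = {}

        def dfs(u):
            color[u] = GRAY
            for v in adj[u]:
                if color[v] == GRAY:    # back edge u -> v: v is on the current path
                    cycle, w = [u], u
                    while w != v:       # climb parent pointers back to v
                        w = parent[w]
                        cycle.append(w)
                    return cycle[::-1]  # nodes in path order: v, ..., u
                if color[v] == WHITE:
                    parent[v] = u
                    found = dfs(v)
                    if found:
                        return found
            color[u] = BLACK            # all descendants done, no cycle through u
            return None

        for u in adj:
            if color[u] == WHITE:
                found = dfs(u)
                if found:
                    return found
        return None                     # no back edge anywhere: the graph is a DAG

For example, `find_cycle({1: [2], 2: [3], 3: [1], 4: []})` returns `[1, 2, 3]`, while a DAG yields `None` (and in that case, listing the nodes by decreasing finishing time would give a toposort).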

One can also detect (and find) a cycle using BFS, although we will leave it to you to discover.

Floyd's cycle-finding algorithm

Let's now turn to a special kind of graph. Specifically, let's only consider graphs where each node has exactly one outgoing edge. Let's call such graphs function graphs because on such a graph, we can define a function $f : V \rightarrow V$ where $f(x) = y$ if and only if $x \rightarrow y$ is an edge. (We can also consider more general graphs with at most one outgoing edge per node. We can convert such graphs into function graphs by adding a new node, say $\mathrm{trash}$, and pointing all nodes without an outgoing edge to $\mathrm{trash}$ (including $\mathrm{trash}$ itself).) Since there is exactly one outgoing edge from each node, $f$ is well-defined. Conversely, every function $f : S \rightarrow S$ corresponds to a function graph whose node set is $S$ and whose edges are $\{(x, f(x)) : x \in S\}$ (we're implicitly allowing self-loops here, but that's fine).
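As a quick illustration of the $\mathrm{trash}$ trick, here's a hypothetical Python helper (the dict representation, `succ`, and the name `TRASH` are our own choices, not from the text):

    # Hypothetical sketch: turn a graph with *at most* one outgoing edge per
    # node into a function graph. `succ` maps each node to its unique
    # successor, or to None if it has no outgoing edge.
    def to_function_graph(succ):
        TRASH = "trash"             # assumes no real node is named "trash"
        f = {u: (v if v is not None else TRASH) for u, v in succ.items()}
        f[TRASH] = TRASH            # trash points to itself
        return f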

Now, starting at any node, there's only one path we can follow. In terms of $f$, starting at some node $x$ and following the (only) path corresponds to iteratively applying $f$ to $x$; thus, the sequence of nodes we visit is:

\[ (x, f(x), f(f(x)), \ldots, f^{(n)}(x), \ldots)\]

(Here, $f^{(n)}(x)$ is $f$ applied to $x$ a total of $n$ times.)

Now, if we assume that our node set is finite, then (by the pigeonhole principle) this will eventually repeat, i.e., there will be indices $0 \le i \lt j$ such that $f^{(i)}(x) = f^{(j)}(x)$. (Note that this is no longer true if the graph is infinite. Why?) In fact, once this happens, the subsequence $(f^{(i)}(x), f^{(i+1)}(x), \ldots, f^{(j-1)}(x))$ will repeat forever. This always happens regardless of which node you start at.

This gives rise to a natural question: Given a starting point $x$, when is the first time that a repeat happens? Furthermore, how long is the cycle? To make the question more interesting, suppose we don't know anything about $f$ apart from the following:

- the node set is finite, and
- we can evaluate $f$ at any given node, i.e., $f$ is a black box.

We can formalize the cycle-finding problem as: Given a function $f$ with the above properties, and a starting point $x$, compute the following:

- $s_\text{cycle}$, the first node on the cycle (equivalently, the first node to be visited twice),
- $l_\text{cycle}$, the length of the cycle, and
- $l_\text{tail}$, the length of the "tail", i.e., the number of steps before the path enters the cycle.

We can visualize these values by drawing the path starting at $x$: it looks like the Greek letter $\rho$, with a tail of $l_\text{tail}$ nodes leading into a cycle of $l_\text{cycle}$ nodes that starts at $s_\text{cycle}$.
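For a concrete feel for these values, here's a tiny worked example (the choice of $f$ is ours, not from the text): with $f(x) = (x^2 + 1) \bmod 12$ and starting point $x = 0$, the path is $0, 1, 2, 5, 2, 5, \ldots$, so $s_\text{cycle} = 2$, $l_\text{cycle} = 2$, and $l_\text{tail} = 2$. In Python:

    # Tiny worked example with f(x) = (x^2 + 1) mod 12, starting at x = 0.
    f = lambda x: (x * x + 1) % 12
    path, s = [], 0
    while s not in path:
        path.append(s)
        s = f(s)
    print(path, s)    # [0, 1, 2, 5] 2  ->  l_tail = 2, l_cycle = 2, s_cycle = 2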

A simple approach is to use BFS or DFS, which is equivalent to just following the (only) path and storing the visited nodes until we encounter a node we've already visited.

    // Just-walk algorithm
    function cycle_find(x):
        visit_time = new empty map
        time = 0
        s = x
        while not visit_time.has_key(s):
            visit_time[s] = time++
            s = f(s)
        s_cycle = s
        l_tail = visit_time[s]
        l_cycle = time - l_tail
        return (s_cycle, l_cycle, l_tail)

Assuming no preprocessing and no specialized knowledge of $f$, this is probably close to the fastest we can do. It runs in $O(l_\text{tail} + l_\text{cycle})$ time.

It also needs $O(l_\text{tail} + l_\text{cycle})$ memory, and one might wonder if that can be improved upon. Amazingly, there's a way to compute the answer using only $O(1)$ memory: Floyd's cycle-finding algorithm!

Now, you might ask, why the need for a fancy algorithm? Surely it's trivial to find an $O(1)$-memory solution. Here's one:

    // Bogo-cycle-finding algorithm
    function cycle_find(x):
        for time in 1, 2, 3, 4, ...:
            // candidate: the node reached after `time` steps
            s = x
            for i = 1..time:
                s = f(s)
            s_cycle = s
            // find the first time this node is visited
            s = x
            l_tail = 0
            while s_cycle != s:
                s = f(s)
                l_tail++
            // if it was already visited before step `time`, we found the cycle
            if l_tail < time:
                l_cycle = time - l_tail
                return (s_cycle, l_cycle, l_tail)

Let's call this the bogo-cycle-finding algorithm. It might not be obvious why it works, but it clearly uses $O(1)$ memory! That's certainly true. Unfortunately, it's also incredibly slow: the total work is quadratic in $l_\text{tail} + l_\text{cycle}$. The real goal is to use only $O(1)$ memory without sacrificing the $O(l_\text{tail} + l_\text{cycle})$ running time.

Let's discuss Floyd's algorithm. It is also sometimes called the tortoise and hare algorithm, since it uses just two pointers, called the tortoise and the hare.

The idea is that both the tortoise and the hare begin walking at the starting point, but the hare is twice as fast. This means that at the beginning, the hare might be far ahead of the tortoise, but once they both enter the cycle, they will eventually meet. Once they meet, they stop, and the hare teleports back to the starting point. They then walk at the same speed and stop once they meet again. This second meeting point will be $s_\text{cycle}$!

Once we get $s_\text{cycle}$, $l_\text{tail}$ and $l_\text{cycle}$ can easily be computed, e.g., $l_\text{cycle}$ can be computed by going around the cycle once. Here's the pseudocode:

    // Floyd's cycle-finding algorithm
    function cycle_find(x):
        tortoise = hare = x
        do:
            tortoise = f(tortoise)
            hare = f(f(hare))
        while tortoise != hare
        // teleport, and walk at the same speed
        hare = x
        l_tail = 0
        while tortoise != hare:
            tortoise = f(tortoise)
            hare = f(hare)
            l_tail++
        s_cycle = hare
        // compute l_cycle by walking around once.
        l_cycle = 0
        do:
            hare = f(hare)
            l_cycle++
        while tortoise != hare
        return (s_cycle, l_cycle, l_tail)

This clearly uses $O(1)$ memory. We've also claimed that it runs in $O(l_\text{tail} + l_\text{cycle})$ time, but we haven't given a complete, convincing proof. It's not even clear why it correctly computes $s_\text{cycle}$. We leave proving the running time and correctness of this algorithm to you as an exercise!
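In the meantime, here's a direct Python transcription of the pseudocode that you can experiment with (a sanity check, not a proof); the test at the bottom reuses the $f$ from our earlier worked example:

    # Direct transcription of Floyd's cycle-finding algorithm.
    def cycle_find(f, x):
        tortoise = hare = x
        while True:                      # phase 1: hare moves twice as fast
            tortoise = f(tortoise)
            hare = f(f(hare))
            if tortoise == hare:
                break
        hare, l_tail = x, 0              # phase 2: teleport, walk in lockstep
        while tortoise != hare:
            tortoise, hare = f(tortoise), f(hare)
            l_tail += 1
        s_cycle = hare
        l_cycle = 0                      # phase 3: walk around the cycle once
        while True:
            hare = f(hare)
            l_cycle += 1
            if tortoise == hare:
                break
        return (s_cycle, l_cycle, l_tail)

    f = lambda x: (x * x + 1) % 12
    print(cycle_find(f, 0))              # (2, 2, 2), matching the just-walk answer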

Cycle-finding and factorization: Pollard's $\rho$ algorithm

An interesting application of Floyd's algorithm (or at least its idea) is to integer factorization. Pollard's $\rho$ algorithm ($\rho$ is pronounced "rho") is a factorization algorithm that can sometimes factorize numbers much faster than trial division. The name comes from the shape of the path traced when iterating $f$ from some starting value: a tail leading into a cycle, which resembles the letter $\rho$.

The algorithm accepts $N$, the number to be factorized, along with two additional parameters: a starting value $s$ and a function $f : \{0, 1, \ldots, N - 1\} \rightarrow \{0, 1, \ldots, N - 1\}$, which must be a polynomial modulo $N$. The algorithm then attempts to find a divisor of $N$. One issue is that it's not guaranteed to succeed, so you may want to run it multiple times with differing $s$ (and possibly $f$).

Suppose we want to factorize a large number $N$. Also, suppose $s = 2$ and $f(x) = (x^2 + 1) \bmod N$. Here is the pseudocode of Pollard's $\rho$ algorithm.

    function try_factorize(N):
        x = y = 2                   // the starting value s = 2
        do:
            x = f(x)                // tortoise takes one step; f(x) = (x^2 + 1) mod N
            y = f(f(y))             // hare takes two steps
            d = gcd(|x - y|, N)
        while d == 1
        if d == N:
            failure
        else:
            return d

One can clearly see $f$ being used as the iteration function, with $x$ and $y$ playing roles similar to the tortoise and the hare, respectively. If this fails, you could try again with a different starting point, or perhaps a different $f$. (Of course, it will always fail if $N$ is prime.)
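To make the "try again" loop concrete, here's a hedged Python sketch that retries with different constants $c$ in $f(x) = (x^2 + c) \bmod N$; the retry schedule and all the names here are our own choices, not from the text:

    from math import gcd

    # Hypothetical sketch: Pollard's rho with simple retries.
    def pollard_rho(N, s=2, c=1):
        f = lambda v: (v * v + c) % N    # the polynomial iteration function
        x = y = s
        d = 1
        while d == 1:
            x = f(x)                     # tortoise
            y = f(f(y))                  # hare
            d = gcd(abs(x - y), N)
        return d if d != N else None     # d == N means this attempt failed

    def try_factorize_hard(N, max_tries=20):
        for c in range(1, max_tries + 1):
            d = pollard_rho(N, c=c)
            if d is not None:
                return d
        return None                      # gave up (e.g., N may be prime)

    print(try_factorize_hard(8051))      # expect a nontrivial factor of 8051 = 83 * 97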

A more thorough explanation of why this works, and why cycle-finding appears, can be seen on its Wikipedia page.