Here we'll describe another aspect of connectivity in undirected graphs.
We say that a connected undirected graph is $2$-edge-connected if removing any edge doesn't disconnect the graph. Alternatively, an undirected graph is $2$-edge-connected if it is connected and doesn't have any bridges.
We say that a connected undirected graph is biconnected if removing any vertex doesn't disconnect the graph. Alternatively, an undirected graph is biconnected if it is connected and doesn't have any articulation points.
Note that these are stronger notions than mere connectivity. Having these properties tells us that the graph is more interconnected than usual.
Here's an example of a graph that's both $2$-edge-connected and biconnected:
The two notions are similar, but be careful not to confuse the two! They're not exactly the same. (Why?)
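If you want to check your answer, here's a quick brute-force sketch that tests both properties straight from their definitions. It uses a hypothetical "bowtie" example (two triangles sharing a vertex), which is not from the text above; the function names are made up for illustration.

```python
def is_connected(nodes, edges):
    """BFS/DFS connectivity check restricted to the given node set."""
    nodes = set(nodes)
    if not nodes:
        return True
    adj = {u: set() for u in nodes}
    for u, v in edges:
        if u in nodes and v in nodes:
            adj[u].add(v)
            adj[v].add(u)
    start = next(iter(nodes))
    seen, frontier = {start}, [start]
    while frontier:
        u = frontier.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                frontier.append(v)
    return seen == nodes

def two_edge_connected(nodes, edges):
    # connected, and still connected after deleting any single edge
    return is_connected(nodes, edges) and all(
        is_connected(nodes, [f for f in edges if f != e]) for e in edges)

def biconnected(nodes, edges):
    # connected, and still connected after deleting any single vertex
    return is_connected(nodes, edges) and all(
        is_connected([u for u in nodes if u != x], edges) for x in nodes)

# "bowtie": two triangles sharing vertex 0
nodes = [0, 1, 2, 3, 4]
edges = [(0, 1), (1, 2), (2, 0), (0, 3), (3, 4), (4, 0)]
print(two_edge_connected(nodes, edges))  # True: no single edge disconnects it
print(biconnected(nodes, edges))         # False: removing vertex 0 disconnects it
```

Running this shows the bowtie passes the edge test but fails the vertex test, so the two notions really are different.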
We can generalize the definitions above naturally:
Being biconnected and being $2$-connected are the same except that connected graphs of $\le 2$ nodes are considered biconnected but not $2$-connected. Also, note that being $1$-connected is the same as being connected (except when the graph has a single node).
There's a nice result concerning $k$-edge-connectivity and $k$-vertex-connectivity called Menger's theorem. The theorem provides two results:
Aren't these results nice? Even nicer, they hold for both directed and undirected graphs!
In particular, they imply that:
We won't be proving these properties for now, but at least you know them; they could be useful.
There are other similar results like that all throughout graph theory, such as the max-flow min-cut theorem (the minimum cut equals the maximum flow) and König's theorem (the size of the minimum vertex cover equals the size of the maximum matching in bipartite graphs). (Unsurprisingly, these results are all related.)
Another interesting fact about $2$-edge-connected graphs is that you can orient the edges of the graph such that it becomes strongly connected. (To “orient” an edge means to choose its direction, hence the edge becomes directed.) Such graphs are called strongly orientable. In fact, the converse is true as well: every strongly orientable graph is $2$-edge-connected. This equivalence is called Robbins' theorem and is not that difficult to prove.
If a graph has a bridge, then obviously a strong orientation is impossible. But if a graph has no bridges, then let's perform a DFS and orient the edges according to the way we traversed it; i.e., tree edges point away from the root, and back edges point up the tree. (Remember that there are no forward and cross edges since this is an undirected graph.) Since there are no bridges, it means that for every tree edge $(a, b)$ in the DFS forest, ${\mathrm{low}}[b] \le {\mathrm{disc}}[a]$. This means that from any node $b$, one can always climb to an ancestor of $b$ (using at least one back edge). By repeating this process, one can reach the root from any node $b$. Since the root can reach all the other nodes as well (just using tree edges), it means the graph is strongly connected! As a bonus: this DFS-based proof can easily be converted into a DFS-based algorithm to compute a strong orientation of the graph.
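Here's one possible sketch of that DFS-based algorithm in Python. It's not the only way to write it; this version assumes the input graph is connected, bridgeless, and has node $0$ (if the graph does have a bridge, the output simply won't be strongly connected). Each edge gets oriented the first time the DFS sees it, so tree edges point down and back edges point up, exactly as in the proof.

```python
def strong_orientation(n, edges):
    """Orient each undirected edge of a bridgeless connected graph so the
    result is strongly connected (Robbins' theorem): tree edges point away
    from the root, back edges point up the tree.  Returns a list where
    oriented[i] = (tail, head) for the i-th input edge."""
    adj = [[] for _ in range(n)]
    for idx, (u, v) in enumerate(edges):
        adj[u].append((v, idx))
        adj[v].append((u, idx))
    disc = [-1] * n                 # -1 means "not yet visited"
    oriented = [None] * len(edges)
    timer = 0
    disc[0] = timer; timer += 1
    stack = [(0, iter(adj[0]))]     # iterative DFS to avoid recursion limits
    while stack:
        u, it = stack[-1]
        advanced = False
        for v, idx in it:
            if oriented[idx] is not None:
                continue            # this edge was already directed
            oriented[idx] = (u, v)  # tree or back edge: direct it u -> v
            if disc[v] == -1:       # tree edge: descend into v
                disc[v] = timer; timer += 1
                stack.append((v, iter(adj[v])))
                advanced = True
                break
        if not advanced:
            stack.pop()
    return oriented
```

For example, on the "bowtie" graph (two triangles sharing vertex $0$), this produces two directed cycles through vertex $0$, which is clearly strongly connected.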
A $2$-edge-connected component is a maximal subgraph that is $2$-edge-connected. Given a graph, a natural question is: How can we find the $2$-edge-connected components, and what can we say about their structure? Let's assume the graph is connected for simplicity; otherwise, we can simply run the same algorithm on each connected component.
Well, by definition, a $2$-edge-connected component cannot have bridges, so let's say we remove the bridges in the original graph first. (We can find those with DFS.) Then what we're left with are subgraphs that don't contain any bridges, hence are $2$-edge-connected! (Actually, it doesn't follow from the definition that removing bridges doesn't introduce new bridges, but it shouldn't be hard to convince oneself of this.)
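Here's how that bridge-finding DFS might look in Python. The arrays `disc` and `low` are the usual ones from this section, and a tree edge $(a, b)$ is reported as a bridge exactly when ${\mathrm{low}}[b] > {\mathrm{disc}}[a]$. Note that we skip the specific edge we arrived by (not just the parent vertex), so parallel edges are handled correctly.

```python
import sys

def find_bridges(n, edges):
    """Return the list of bridges of an undirected graph using the classic
    disc/low DFS: tree edge (a, b) is a bridge iff low[b] > disc[a]."""
    sys.setrecursionlimit(max(10000, 2 * n + 10))
    adj = [[] for _ in range(n)]
    for idx, (u, v) in enumerate(edges):
        adj[u].append((v, idx))
        adj[v].append((u, idx))
    disc = [-1] * n
    low = [0] * n
    bridges = []
    timer = [0]

    def dfs(u, parent_edge):
        disc[u] = low[u] = timer[0]; timer[0] += 1
        for v, idx in adj[u]:
            if idx == parent_edge:
                continue              # skip the edge we came from
            if disc[v] == -1:         # tree edge
                dfs(v, idx)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:  # nothing in v's subtree climbs above u
                    bridges.append(edges[idx])
            else:                     # back edge
                low[u] = min(low[u], disc[v])

    for u in range(n):
        if disc[u] == -1:
            dfs(u, -1)
    return bridges
```

This runs in $O(n + e)$ time; the recursion-limit bump is just a practical Python concern for deep graphs.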
Furthermore, suppose we “shrink” each $2$-edge-connected component into a single node. Then, observing that a bridge is not a part of any cycle (otherwise it couldn't have been a bridge at all), we find that the resulting graph is actually a tree!
We can call this resulting tree the bridge tree of the graph.
As just described, constructing this tree is straightforward:
Like before, the “shrinking” process might be too vague, but an engineer's approach will work just as well:
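For instance, here's a rough Python sketch of that engineer's approach: delete the bridges, label the resulting components, and add one tree edge per bridge. (To keep it short, this sketch finds bridges by brute force, deleting each edge and recounting components; in practice you'd use the linear-time DFS instead.)

```python
def bridge_tree(n, edges):
    """Shrink each 2-edge-connected component to a node; bridges become the
    tree edges.  Returns (comp, tree_edges) where comp[v] is the component
    id of vertex v.  Bridges are found by brute force here for brevity."""
    def components(banned):
        # component labels after removing the edge indices in `banned`
        adj = [[] for _ in range(n)]
        for idx, (u, v) in enumerate(edges):
            if idx not in banned:
                adj[u].append(v); adj[v].append(u)
        comp, c = [-1] * n, 0
        for s in range(n):
            if comp[s] == -1:
                stack, comp[s] = [s], c
                while stack:
                    u = stack.pop()
                    for v in adj[u]:
                        if comp[v] == -1:
                            comp[v] = c
                            stack.append(v)
                c += 1
        return comp, c

    base = components(set())[1]
    # an edge is a bridge iff deleting it increases the component count
    bridges = [i for i in range(len(edges)) if components({i})[1] > base]
    comp, _ = components(set(bridges))   # the 2-edge-connected components
    tree_edges = [(comp[u], comp[v]) for u, v in (edges[i] for i in bridges)]
    return comp, tree_edges
```

With the fast bridge-finding DFS substituted in, the whole construction is $O(n + e)$.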
Congratulations! You have now constructed the bridge tree and you may now use it to solve some problems.
A biconnected component, or BCC, is a maximal subgraph that is biconnected. The natural question is: How can we find the BCCs, and what can we say about their structure? Again, assume the graph is connected for simplicity.
The structure of the BCCs of a graph is quite different from the structure of the $2$-edge-connected components. In particular, BCCs can overlap! See the following picture:
So given this complication, how can we compute the BCCs?
The natural first step is to compute all the articulation points. After that, notice that pairs of BCCs can only overlap on at most one node, and that node must be an articulation point.
However, it looks like we're stuck. Even given that information, there doesn't seem to be a simple way to compute the BCCs.
Fortunately, we can actually modify DFS (again!) to compute the BCCs alongside the articulation points! The key here is to think of a BCC as a set of edges, rather than a set of nodes; this way, each edge belongs to exactly one BCC, which is very helpful. Furthermore, similar to Tarjan's SCC algorithm, we can again collect the edges that belong to the same BCC on a stack, and pop whenever we detect an articulation point!
    function DFS(i, p):
        disc[i] = low[i] = time++

        children = 0
        has_low_child = false
        for j in adj[i]:
            if disc[j] == -1:
                stack.push(edge(i, j))
                DFS(j, i)

                low[i] = min(low[i], low[j])
                children++

                if low[j] >= disc[i]:
                    has_low_child = true
                    get_bcc(edge(i, j))

            else if j != p:
                low[i] = min(low[i], disc[j])
                if disc[j] < disc[i]:
                    // a back edge belongs to a BCC too, so push it as well
                    stack.push(edge(i, j))

        if (p == -1 and children >= 2) or (p != -1 and has_low_child):
            mark i as an articulation point

    function get_bcc(e):
        // pop the stack until you pop e, and collect those edges as a BCC
        bcc = new empty vector
        do:
            E = stack.pop()
            bcc.push(E)
        while E != e
        bccs.push(bcc)

    function articulation_points_and_BCCs():
        stack = new empty vector
        bccs = new empty vector of vectors
        time = 0
        for i = 0..n-1:
            disc[i] = -1

        for i = 0..n-1:
            if disc[i] == -1:
                DFS(i, -1)
        return bccs
After this $O(n + e)$ process, we now have the articulation points and the BCCs of the graph!
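If pseudocode isn't your thing, here's one possible Python rendering of the same algorithm (recursive, so beware Python's recursion limit on deep graphs). One detail worth stressing: back edges must be pushed onto the stack too, since they belong to BCCs just like tree edges do.

```python
import sys

def articulation_points_and_bccs(n, edges):
    """Return (articulation points, BCCs), each BCC as a list of edges,
    using the edge-stack DFS described above."""
    sys.setrecursionlimit(max(10000, 2 * n + 10))
    adj = [[] for _ in range(n)]
    for idx, (u, v) in enumerate(edges):
        adj[u].append((v, idx))
        adj[v].append((u, idx))
    disc = [-1] * n
    low = [0] * n
    stack = []                # stack of edge indices
    bccs, cuts = [], set()
    timer = [0]

    def dfs(i, p):
        disc[i] = low[i] = timer[0]; timer[0] += 1
        children = 0
        has_low_child = False
        for j, idx in adj[i]:
            if disc[j] == -1:                  # tree edge
                stack.append(idx)
                dfs(j, i)
                low[i] = min(low[i], low[j])
                children += 1
                if low[j] >= disc[i]:          # articulation condition:
                    has_low_child = True       # pop one whole BCC
                    bcc = []
                    while True:
                        e = stack.pop()
                        bcc.append(edges[e])
                        if e == idx:
                            break
                    bccs.append(bcc)
            elif j != p:                       # back edge
                low[i] = min(low[i], disc[j])
                if disc[j] < disc[i]:
                    stack.append(idx)          # it belongs to a BCC too
        if (p == -1 and children >= 2) or (p != -1 and has_low_child):
            cuts.add(i)

    for i in range(n):
        if disc[i] == -1:
            dfs(i, -1)
    return sorted(cuts), bccs
```

On the "bowtie" graph (two triangles sharing vertex $0$), this reports vertex $0$ as the only articulation point and finds two BCCs of three edges each.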
The next natural question is: what structure do the BCCs have? Remember that they can only overlap in at most one node, and this must be an articulation point. Therefore, the articulation points somehow serve the role of “edges” in the same way the bridges were the “edges” in the bridge tree.
The natural graph structure we can form, then, is to compress each BCC into a single node, and declare that two BCCs are adjacent iff they share an articulation point. This is called the block graph, and it's easy to see that this forms a valid connected graph structure on the BCCs, and that it encodes a bit of information about the connectivity of the graph as a whole. Constructing the block graph from this definition should be straightforward.
Here's an example of a block graph:
Unfortunately, as you can see above, the block graph is not (always) a tree! That's sad :( And that's the reason it's not called a “block tree”.
In fact, what's even sadder is that this can fail very badly. Specifically, a block graph formed from a graph with $n$ nodes and $O(n)$ edges can have up to $\Omega(n^2)$ edges! So that's really sad. (Recall that $f(n) = \Omega(g(n))$ is the same as $g(n) = O(f(n))$. Informally, you can think of $O$ as “asymptotically at most” and $\Omega$ as “asymptotically at least”.)
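You can see the blow-up concretely with a "fan" of $k$ triangles all sharing a single vertex: every pair of BCCs shares that vertex, so the block graph is the complete graph $K_k$. A small sketch (assuming the BCC vertex sets have already been computed, e.g. by the edge-stack DFS):

```python
from itertools import combinations

def block_graph(bcc_vertex_sets):
    """Edges of the block graph: one node per BCC, with an edge between two
    BCCs iff they share a vertex (necessarily an articulation point).
    Input: a list of vertex sets, one per BCC (assumed precomputed)."""
    k = len(bcc_vertex_sets)
    return [(i, j) for i, j in combinations(range(k), 2)
            if bcc_vertex_sets[i] & bcc_vertex_sets[j]]

# k triangles all sharing vertex 0: Theta(k) nodes and edges in the original
# graph, but the block graph is the complete graph K_k.
k = 5
bccs = [{0, 2 * i + 1, 2 * i + 2} for i in range(k)]
print(len(block_graph(bccs)))  # k*(k-1)//2 = 10 edges
```

Scaling $k$ up makes the quadratic growth obvious, which is exactly the $\Omega(n^2)$ problem described above.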
Fortunately, there's an alternative structure on the BCCs where the number of edges doesn't explode. The key is to represent the articulation points as nodes in their own right. Thus, in our compressed graph, we have a node $a_x$ for each articulation point $x$ and a node $b_Y$ for each BCC $Y$. (Note that this “compressed” graph can actually be larger than the original graph!) We say there is an edge between $a_x$ and $b_Y$ if the BCC $Y$ contains the articulation point $x$. It can be seen that this structure is a tree (why?), and this is more commonly known as the block-cut tree. (“Block” stands for BCC and “cut” stands for cut vertex.)
Here's an example of a block-cut tree:
Thankfully, the block-cut tree is indeed a tree, thus it doesn't have that many more nodes and edges than the original one. And it still encodes a good amount of information about the connectivity between the BCCs. As an added bonus, the articulation points are represented as well!
Constructing a block-cut tree from the definition should be straightforward.
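For concreteness, here's one possible sketch in Python. It takes the BCCs as precomputed vertex sets and uses the fact that the cut vertices are exactly the vertices lying in two or more BCCs; the node labels `('cut', v)` and `('bcc', i)` are just an illustrative encoding of $a_x$ and $b_Y$.

```python
from collections import Counter

def block_cut_tree(bcc_vertex_sets):
    """Build the block-cut tree from the BCCs (given as vertex sets, assumed
    precomputed): one node per BCC, one node per cut vertex, and an edge
    whenever a BCC contains that cut vertex."""
    count = Counter(v for s in bcc_vertex_sets for v in s)
    cut_vertices = [v for v, c in count.items() if c >= 2]
    tree_edges = [(('cut', v), ('bcc', i))
                  for i, s in enumerate(bcc_vertex_sets)
                  for v in cut_vertices if v in s]
    return cut_vertices, tree_edges

# bowtie: two triangles sharing vertex 0
cuts, tree = block_cut_tree([{0, 1, 2}, {0, 3, 4}])
print(cuts)  # [0]
print(tree)  # [(('cut', 0), ('bcc', 0)), (('cut', 0), ('bcc', 1))]
```

For the bowtie, the result is a path on three nodes: BCC, cut vertex, BCC, which matches the picture of a block-cut tree above.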