Given an undirected graph $G = (V, E)$, we want to partition the edge set $E$ into several forests, i.e., acyclic edge sets. Specifically, we are interested in the minimum number of forests into which the edge set can be partitioned, which is defined as the arboricity of the graph. It turns out that computing the arboricity of an undirected graph, and producing an explicit edge partition into a minimum number of forests, are both solvable in polynomial time.

1. The Algorithm

We present a polynomial-time algorithm that solves this problem. The algorithm adds the edges one by one and maintains a set of forests incrementally. Note that adding an edge increases the minimum number of forests by at most 1. The framework of the algorithm is as follows.

Initialize the set of forests $\mathcal{F}$ to $\varnothing$;

For each edge $e \in E(G)$: if it is possible to add $e$ without increasing the number of forests, do so; otherwise, create a new forest containing only $e$, i.e., $\mathcal{F} \leftarrow \mathcal{F} \cup \{\{e\}\}$.

The hard part is to determine whether an edge can be added without increasing the number of forests. Simply trying to add the edge to each forest in turn is incorrect: even if no forest accepts the edge directly, it may be possible to move some edges between the forests to make room for the new edge.

The algorithm builds an auxiliary directed graph and finds an augmenting path. The vertex set of the auxiliary graph is $\mathcal{F} \cup E' \cup \{e\}$, where $E'$ is the set of edges already added. There is an arc $e_1 (\in E' \cup \{e\}) \rightarrow f (\in \mathcal{F})$ if $f \cup \{e_1\}$ is acyclic, and an arc $e_1 (\in E' \cup \{e\}) \rightarrow e_2 (\in E')$ if replacing $e_2$ with $e_1$ in the forest containing $e_2$ yields a valid forest. If there exists a directed path from $e$ to some forest $f$, then performing all the replacements along a shortest such path simultaneously yields a valid set of forests of the same size; otherwise, we must increment the number of forests.
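The acyclicity tests used to build the arcs above ("is $f \cup \{e_1\}$ still a forest?") reduce to cycle detection, which a disjoint-set union handles cleanly. A minimal sketch, with illustrative names `DSU` and `stays_forest` of our own choosing:

```python
class DSU:
    """Disjoint-set union with path halving."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

def stays_forest(n, forest_edges, e):
    # True iff forest_edges + [e] is still acyclic on n vertices.
    dsu = DSU(n)
    for u, v in forest_edges:
        dsu.parent[dsu.find(u)] = dsu.find(v)
    u, v = e
    return dsu.find(u) != dsu.find(v)
```

For example, `stays_forest(3, [(0, 1), (1, 2)], (0, 2))` is `False` because the edge would close a cycle.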

2. Remark

This problem is a special case of the matroid partition problem, which asks for the minimum number of independent sets into which the ground set of a matroid can be partitioned. The general problem is also polynomially solvable; an algorithm similar to the one presented above works.

We show that the Range Minimum Query (RMQ) problem and the Lowest Common Ancestor (LCA) problem can both be solved in O(1) time per query after O(n) preprocessing. The method is to reduce both problems to +1/-1 RMQ, and then solve the +1/-1 RMQ problem in O(n)/O(1) time.

1. The Three Problems

Problem 1 (Range Minimum Query, RMQ): Given an integer array of size $n$: $a[1], \ldots, a[n]$, a range minimum query $\text{RMQ}_a(l, r) = (\arg)\min_{l \leq i \leq r} a[i]$ returns the value, or the position, of the minimum element in the subarray $a[l \ldots r]$.

Problem 2 (Lowest Common Ancestor, LCA): Given a rooted tree $T$, a lowest common ancestor query $\text{LCA}_T(u, v)$ returns the deepest node that has both $u$ and $v$ as descendants (where a node counts as a descendant of itself).

Problem 3 (+1/-1 RMQ): +1/-1 RMQ is a special case of RMQ, where the difference of two adjacent elements in the given array is either +1 or -1.

2. Equivalence Between the Three Problems

In this section, we show that the three problems are mutually reducible in linear time. Note that the reduction from +1/-1 RMQ to RMQ is immediate.

2.1 Reduction from RMQ to LCA

The reduction from RMQ to LCA converts the array into a Cartesian tree. The Cartesian tree of an array is a binary tree with the min-heap property (each node's value is no greater than its children's values) whose in-order traversal yields the original array. The minimum element of $a[l \ldots r]$ corresponds to the LCA of the nodes for positions $l$ and $r$ in the Cartesian tree, and vice versa.

We can build the Cartesian tree in linear time. Add the elements one by one, maintaining a pointer to the rightmost node of the current tree. When a new element arrives, move the pointer up until the value of the current node is less than the new element, or until the root is passed. Insert a new node at that position, and attach the subtree that originally hung there as the left subtree of the new node. Defining the potential as the depth of the node the pointer rests on, the amortized time is $O(1)$ per insertion, so the total time complexity is $O(n)$.
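The rightmost path manipulated by the pointer is exactly a stack, so the construction above is often written in stack form. A minimal sketch (min-heap Cartesian tree over 0-indexed positions; returns each node's parent, with -1 for the root):

```python
def cartesian_tree(a):
    # Min-heap Cartesian tree; parent[i] is the parent of index i.
    parent = [-1] * len(a)
    stack = []  # indices on the rightmost path, values increasing
    for i, x in enumerate(a):
        last = -1
        while stack and a[stack[-1]] > x:
            last = stack.pop()      # these nodes move below the new one
        if stack:
            parent[i] = stack[-1]   # new node becomes the right child
        if last != -1:
            parent[last] = i        # popped subtree becomes the left subtree
        stack.append(i)
    return parent
```

Each index is pushed and popped at most once, which is the O(n) amortized bound in code form.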

2.2 Reduction from LCA to +1/-1 RMQ

To reduce the LCA problem to the +1/-1 RMQ problem, we first label each vertex with its depth. Then we run a depth-first search, outputting the depth of the current node when it is first visited and again after returning from each child. The resulting sequence is the Euler tour of the tree. To compute the LCA of $u$ and $v$, take any occurrence position of $u$ and any occurrence of $v$, and find the range minimum of the depths between them; the node achieving that minimum is the LCA. Since the depths of two adjacent entries in the Euler tour differ by exactly 1, this is a +1/-1 RMQ instance.

Note that the number of occurrences of each node is the number of its children, plus 1. The length of the resulting sequence is $2n-1$, which is still linear.
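The reduction can be sketched as follows. The `lca` function here answers the range minimum with a naive scan purely for illustration; in the full solution that query is delegated to the O(1) +1/-1 RMQ structure:

```python
def build_euler(adj, root):
    # adj maps a node to its list of children in the rooted tree.
    # Returns (tour, depths, first): the Euler tour, the depth at each
    # tour position, and the first occurrence position of each node.
    tour, depths, first = [], [], {}

    def dfs(v, d):
        first.setdefault(v, len(tour))
        tour.append(v)
        depths.append(d)
        for c in adj.get(v, []):
            dfs(c, d + 1)
            tour.append(v)      # output v again after each child
            depths.append(d)

    dfs(root, 0)
    return tour, depths, first

def lca(adj, root, u, v):
    # LCA via a range minimum over depths (naive scan as a stand-in).
    tour, depths, first = build_euler(adj, root)
    l, r = sorted((first[u], first[v]))
    i = min(range(l, r + 1), key=lambda k: depths[k])
    return tour[i]
```

On a 5-node tree the tour has length $2 \cdot 5 - 1 = 9$, matching the count above.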

3. Solving RMQ in O(n log n)/O(1) Time via Sparse Table

To achieve O(1) query time, a naive solution is to precompute all $O(n^2)$ answers, which takes too much preprocessing time. In fact, we do not need to preprocess that many. The sparse table technique precomputes the RMQ only for intervals whose length is a power of 2. To answer a query $\text{RMQ}(l, r)$, just return $\min\{\text{RMQ}(l, l+2^k-1), \text{RMQ}(r-2^k+1, r)\}$, where $k$ is the largest integer such that $2^k \leq r - l + 1$. This covers the entire range with two (possibly overlapping) intervals; by the idempotence of the minimum operation, the overlap does not affect the answer. There are $O(\log n)$ powers of 2 not exceeding $n$, and for each power of 2 there are $O(n)$ intervals to preprocess, so the total preprocessing time is $O(n \log n)$.
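A minimal sparse table sketch (0-indexed, returning values rather than positions; `build_sparse` and `query` are illustrative names):

```python
def build_sparse(a):
    # table[j][i] = min of a[i .. i + 2^j - 1]
    n = len(a)
    table = [a[:]]
    j = 1
    while (1 << j) <= n:
        prev = table[j - 1]
        half = 1 << (j - 1)
        table.append([min(prev[i], prev[i + half])
                      for i in range(n - (1 << j) + 1)])
        j += 1
    return table

def query(table, l, r):
    # Minimum of a[l..r] (inclusive): two overlapping power-of-two
    # intervals cover the range; the overlap is harmless for min.
    k = (r - l + 1).bit_length() - 1
    return min(table[k][l], table[k][r - (1 << k) + 1])
```

Here `(r - l + 1).bit_length() - 1` computes the largest $k$ with $2^k \leq r - l + 1$ without floating-point logarithms.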

4. Solving +1/-1 RMQ in O(n)/O(1) Time via Indirection

Our final step is to shave off the log factor. We use the indirection technique to remove the log factor, but only for +1/-1 RMQ.

We split the original array $a$ into segments of length $\frac{1}{2} \log n$. Replacing each segment by its minimum value yields a new sequence of length $O(\frac{n}{\log n})$. Every query interval that spans multiple segments decomposes into a run of whole segments plus at most two partial intervals, each lying entirely within a segment. Using the sparse table technique on the new sequence, we can answer the minimum over any run of whole segments in $O(1)$ time with $O(\frac{n}{\log n} \cdot \log n) = O(n)$ preprocessing time (the log factor cancels). The remaining part is to answer RMQ for intervals within a segment.

Note that adding a constant commutes with taking minima: $\min\{a, b\} + c = \min\{a + c, b + c\}$. Hence, for the RMQ problem, the first element of the array doesn't matter; only the differences between adjacent elements do. How many essentially different +1/-1 RMQ instances of length $\frac{1}{2}\log n$ are there? Only $O(\sqrt{n})$: the difference array has $\frac{1}{2}\log n - 1$ entries, each $\pm 1$, giving at most $2^{\frac{1}{2}\log n} = \sqrt{n}$ patterns. And for each array of length $\frac{1}{2} \log n$, there are only $O(\log^2 n)$ different intervals. We may therefore set up a lookup table for all intervals of all possible instances of size $\frac{1}{2} \log n$ in $O(\sqrt{n} \log^2 n)$ time. This is dominated by the sparse table part, so the total preprocessing time is still $O(n)$.
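The lookup table can be sketched as follows: each possible block is keyed by the bitmask of its $\pm 1$ difference array, and for each mask we tabulate the argmin of every interval. `block_tables` is an illustrative name, and `half_log` stands for the block length $\frac{1}{2}\log n$:

```python
def block_tables(half_log):
    # tables[mask][l][r] = position of the minimum in positions l..r of a
    # block whose i-th step is +1 if bit i of mask is set, -1 otherwise.
    tables = {}
    for mask in range(1 << (half_log - 1)):
        # Reconstruct a representative block (its first value set to 0).
        prefix = [0] * half_log
        for i in range(1, half_log):
            prefix[i] = prefix[i - 1] + (1 if (mask >> (i - 1)) & 1 else -1)
        tbl = [[0] * half_log for _ in range(half_log)]
        for l in range(half_log):
            best = l
            for r in range(l, half_log):
                if prefix[r] < prefix[best]:
                    best = r
                tbl[l][r] = best
        tables[mask] = tbl
    return tables
```

At query time a block is hashed to its mask once during preprocessing, so an in-block query is a single table read.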

For a query that lies entirely within some segment, just read the corresponding value from the lookup table; otherwise, the interval is decomposed into a run of whole segments, answered by the sparse table in constant time, and at most two intervals within segments, answered by the lookup table. The total time per query is therefore $O(1)$.

Given a graph $G=(V, E)$, an independent set $I$ of $G$ is a subset of $V$ such that no two vertices in $I$ are adjacent. We give an algorithm to count the independent sets of a given graph on $n$ vertices. The main idea of the algorithm is divide and conquer.

Counting independent sets of $G$: CIS(G)

If $G$ contains multiple connected components, count the independent sets of each component and multiply the numbers.

If $G$ is empty, return 1 (the empty set); if $G$ contains only one vertex, return 2 (the empty set and the singleton).

Otherwise, arbitrarily select a vertex $v$. Remove $v$ and count the independent sets of the remaining graph (those avoiding $v$). Then remove $v$ together with its neighbors and count the independent sets of the remaining graph (those containing $v$). Return the sum of the two counts.

To see the $O(1.619^n)$ running time, note that in the branching step, removing $v$ decreases the number of vertices by one, while removing $v$ and its neighbors decreases it by at least two ($v$ has at least one neighbor, since its connected component contains more than one vertex). The recurrence $T(n) \leq T(n-1) + T(n-2)$ is the Fibonacci recurrence, whose solution is $O(\varphi^n) = O(1.619^n)$.
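A sketch of the procedure in Python, counting the empty set as an independent set (so a single vertex contributes 2 and the empty graph contributes 1, which is what makes the component product work):

```python
def count_independent_sets(n, edges):
    # Counts independent sets of the graph, including the empty set.
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def cis(vertices):
        if not vertices:
            return 1
        # Split into connected components and multiply the counts.
        comps, seen = [], set()
        for s in vertices:
            if s in seen:
                continue
            comp, stack = set(), [s]
            while stack:
                x = stack.pop()
                if x in comp:
                    continue
                comp.add(x)
                stack.extend((adj[x] & vertices) - comp)
            seen |= comp
            comps.append(comp)
        if len(comps) > 1:
            result = 1
            for c in comps:
                result *= cis(c)
            return result
        comp = comps[0]
        if len(comp) == 1:
            return 2  # the empty set and the singleton
        # Branch: independent sets avoiding v, plus those containing v.
        v = next(iter(comp))
        return cis(comp - {v}) + cis(comp - {v} - adj[v])

    return cis(set(range(n)))
```

For example, the path on 3 vertices has the 5 independent sets $\varnothing, \{0\}, \{1\}, \{2\}, \{0,2\}$.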

There are many practical applications that involve solving a sparse linear system $\mathbf{A}\mathbf{x} = \mathbf{b}$, such as random walks on sparse graphs. Traditional techniques such as Gaussian elimination or LU decomposition apply to general linear systems and run in cubic time; the fastest known algorithms for general matrices run in about $O(n^{2.373})$ time. For sparse matrices, however, we may achieve a much faster result. Specifically, we may solve the system in $O(mn)$ time, where $m$ is the number of nonzero entries of $\mathbf{A}$. We assume throughout this passage that $\mathbf{A}$ is non-singular (i.e., invertible).

1. Inverting a matrix from its annihilating polynomial

Given a polynomial $p(x)$, if $p(\mathbf{A}) = \mathbf{0}$ for a matrix $\mathbf{A}$, then we say $p(x)$ is an annihilating polynomial for $\mathbf{A}$. By the Cayley-Hamilton theorem, the characteristic polynomial $\det(x\mathbf{I} - \mathbf{A})$ is an annihilating polynomial for $\mathbf{A}$. Among all annihilating polynomials for $\mathbf{A}$, the one with minimum degree is called the minimal polynomial of $\mathbf{A}$. It can be proved that the minimal polynomial of a given matrix is unique up to a constant factor, and that every annihilating polynomial is a polynomial multiple of the minimal polynomial.

We can invert a matrix $\mathbf{A}$ if we have its minimal polynomial: \[ a_0 I + a_1 \mathbf{A} + a_2 \mathbf{A}^2 + \cdots + a_k \mathbf{A}^k = 0 \tag{*} \]Since $\mathbf{A}$ is invertible, 0 is not an eigenvalue of $\mathbf{A}$, so $x$ does not divide the minimal polynomial, and hence $a_0 \neq 0$. Multiplying both sides by $\mathbf{A}^{-1}$ yields \[ \mathbf{A}^{-1} = -\frac{a_1 I + a_2 \mathbf{A} + a_3 \mathbf{A}^2 + \cdots + a_k \mathbf{A}^{k-1}}{a_0} \]This means that we may represent the inverse of $\mathbf{A}$ as a linear combination of powers of $\mathbf{A}$.
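A quick sanity check of this formula on a toy matrix, using exact rational arithmetic; the 2-by-2 matrix and its minimal polynomial $x^2 - 5x + 6$ are chosen purely for illustration:

```python
from fractions import Fraction as F

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# A has eigenvalues 2 and 3, so its minimal polynomial is
# (x - 2)(x - 3) = x^2 - 5x + 6, i.e. a0 = 6, a1 = -5, a2 = 1 in (*).
A = [[F(2), F(1)], [F(0), F(3)]]
I = [[F(1), F(0)], [F(0), F(1)]]
a0, a1, a2 = F(6), F(-5), F(1)

# A^{-1} = -(a1*I + a2*A) / a0
Ainv = [[-(a1 * I[i][j] + a2 * A[i][j]) / a0 for j in range(2)]
        for i in range(2)]
assert mat_mul(A, Ainv) == I
```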

2. The Berlekamp-Massey algorithm

The Berlekamp-Massey algorithm solves the following problem in $O(n^2)$ time:

Given a finite sequence $\{x_i\}_{i=1}^n$, find a minimum-order linear recurrence consistent with the given sequence. Formally, find a shortest sequence $c_0 = 1, c_1, \cdots, c_{k-1}$ such that $\sum_{l=0}^{k-1} c_l x_{j-l} = 0$ holds for all $k \leq j \leq n$.

This algorithm has many real-world applications. The most typical one is finding the shortest linear feedback shift register that generates a given binary sequence. It can also be viewed as interpolating a sequence by a sum of exponential terms. One important fact about the Berlekamp-Massey algorithm is that, for an order-$r$ linearly recurrent sequence, taking the first $2r$ elements as input suffices to recover the recurrence.
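A standard $O(n^2)$ implementation over a prime field is sketched below; working modulo a prime avoids numerical issues. The returned list `r` encodes the recurrence as $x_j = \sum_{l \geq 1} r_{l-1} x_{j-l}$, i.e., the $c_l$ above with signs flipped:

```python
MOD = 998244353  # a common prime modulus; any large prime works

def berlekamp_massey(s):
    # Shortest r with s[i] = sum(r[j] * s[i-1-j]) (mod MOD) for all valid i.
    ls, cur = [], []          # last failed recurrence, current recurrence
    lf, ld = 0, 0             # position and discrepancy of the last failure
    for i in range(len(s)):
        t = sum(cur[j] * s[i - 1 - j] for j in range(len(cur))) % MOD
        if (s[i] - t) % MOD == 0:
            continue          # current recurrence still fits
        if not cur:
            cur = [0] * (i + 1)
            lf, ld = i, (s[i] - t) % MOD
            continue
        # Patch the current recurrence using the last failed one.
        k = (s[i] - t) * pow(ld, MOD - 2, MOD) % MOD
        c = [0] * (i - lf - 1) + [k] + [(-k * x) % MOD for x in ls]
        if len(c) < len(cur):
            c += [0] * (len(cur) - len(c))
        for j in range(len(cur)):
            c[j] = (c[j] + cur[j]) % MOD
        if i - lf + len(ls) >= len(cur):
            ls, lf, ld = cur[:], i, (s[i] - t) % MOD
        cur = c
    return cur
```

For instance, the Fibonacci prefix `[1, 1, 2, 3, 5, 8, 13, 21]` yields `[1, 1]`, i.e. $x_j = x_{j-1} + x_{j-2}$.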

3. Finding the minimal polynomial

Note that an annihilating polynomial is exactly a linear recurrence satisfied by the sequence of powers $\mathbf{I}, \mathbf{A}, \mathbf{A}^2, \ldots$, and the minimal polynomial is the shortest one. However, it is infeasible to compute the powers of $\mathbf{A}$ explicitly within our time budget. Instead, we may randomly pick vectors $\mathbf{u}$ and $\mathbf{v}$ and compute the shortest recurrence of the scalar sequence $\mathbf{u}^T \mathbf{A}^i \mathbf{v}$. We claim without proof that, with high probability, the coefficients of this recurrence are exactly those of the minimal polynomial. The sequence can be computed in $O(mn)$ time by iterated sparse matrix-vector multiplication, each taking $O(m)$ time. Finally, apply the Berlekamp-Massey algorithm to the resulting sequence.
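The sequence-generation step can be sketched as follows (`krylov_sequence` is an illustrative name; the sparse matrix is given as (row, column, value) triples, and `u`, `v` would be chosen at random in the actual algorithm):

```python
def krylov_sequence(entries, u, v, length, mod):
    # entries: sparse matrix A as (row, col, value) triples.
    # Returns [u^T v, u^T A v, ..., u^T A^(length-1) v] modulo mod.
    n = len(u)
    seq = []
    for _ in range(length):
        seq.append(sum(ui * vi for ui, vi in zip(u, v)) % mod)
        w = [0] * n           # w = A v, computed in O(m) time
        for i, j, a in entries:
            w[i] = (w[i] + a * v[j]) % mod
        v = w
    return seq
```

Only the current vector $\mathbf{A}^i \mathbf{v}$ is kept, so the memory usage stays $O(n + m)$.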

4. Solving the linear system

Since the inverse of a sparse matrix is generally not sparse, we do not actually compute the inverse of $\mathbf{A}$. Instead, we compute $\mathbf{A}^{-1}\mathbf{b}$ directly via formula (*) in $O(mn)$ time: \[ \mathbf{A}^{-1}\mathbf{b} = -\frac{1}{a_0} \sum_{i=1}^{k} a_i \mathbf{A}^{i-1} \mathbf{b} \]The procedure is exactly the same as in finding the minimal polynomial: just iteratively perform sparse matrix-vector multiplications.
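The final step can be sketched as follows (`solve_sparse` is an illustrative name; `minpoly` holds the coefficients $a_0, \ldots, a_k$ of the minimal polynomial, and arithmetic is done modulo a prime):

```python
def solve_sparse(n, entries, b, minpoly, mod):
    # minpoly = [a_0, ..., a_k] with a_0 != 0 and sum(a_i A^i) = 0.
    # Computes A^{-1} b = -(1/a_0) * sum_{i=1..k} a_i A^{i-1} b.
    x = [0] * n
    p = b[:]                  # invariant: p = A^{i-1} b
    for i in range(1, len(minpoly)):
        for t in range(n):
            x[t] = (x[t] + minpoly[i] * p[t]) % mod
        w = [0] * n           # p <- A p, one sparse product in O(m)
        for r, c, a in entries:
            w[r] = (w[r] + a * p[c]) % mod
        p = w
    inv = pow(-minpoly[0] % mod, mod - 2, mod)  # (-a_0)^{-1} mod p
    return [xi * inv % mod for xi in x]
```

For a quick check, with $\mathbf{A} = \operatorname{diag}(2, 3)$ (minimal polynomial $x^2 - 5x + 6$) and $\mathbf{b} = (4, 9)$, the solution is $(2, 3)$.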