Many practical applications involve solving a sparse linear system \( \mathbf{A}\mathbf{x} = \mathbf{b} \), such as analyzing random walks on sparse graphs. Traditional techniques such as Gaussian elimination or LU decomposition apply to general linear systems and run in cubic time; the fastest known algorithms for general matrices run in about \( O(n^{2.373}) \) time. For a sparse matrix, however, we can achieve a much faster result. Specifically, we may solve the system in \(O(mn)\) time, where \(m\) is the number of nonzero entries in \(\mathbf{A}\). Throughout this passage we assume that the matrix \( \mathbf{A} \) is non-singular (i.e., invertible).

### 1. Inverting a matrix from its annihilating polynomial

Given a polynomial \( p(x) \), if \( p(\mathbf{A}) = \mathbf{0} \) for a matrix \( \mathbf{A} \), then we say \(p(x)\) is an **annihilating polynomial** for \( \mathbf{A} \). By the **Cayley-Hamilton theorem**, the **characteristic polynomial** \(\det( x\mathbf{I} - \mathbf{A} ) \) is an annihilating polynomial for \( \mathbf{A} \). Among all annihilating polynomials for \( \mathbf{A} \), the one with minimum degree is called the **minimal polynomial** of \( \mathbf{A} \). It can be proved that the minimal polynomial of a given matrix is unique up to a constant factor, and that every annihilating polynomial is a polynomial multiple of the minimal polynomial.
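The Cayley-Hamilton theorem is easy to check numerically on a small example. The following sketch (the \(2 \times 2\) matrix is an arbitrary illustration, not part of the algorithm) verifies that a matrix is annihilated by its own characteristic polynomial:

```python
# For A = [[1, 2], [3, 4]], the characteristic polynomial is
# det(xI - A) = x^2 - (tr A) x + det A = x^2 - 5x - 2.
# Cayley-Hamilton says A^2 - 5A - 2I must be the zero matrix.

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
A2 = mat_mul(A, A)
residual = [[A2[i][j] - 5 * A[i][j] - 2 * (1 if i == j else 0)
             for j in range(2)] for i in range(2)]
# residual is the zero matrix
```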

We can invert a matrix \( \mathbf{A} \) once we have its minimal polynomial:

\[ a_0 \mathbf{I} + a_1 \mathbf{A} + a_2 \mathbf{A}^2 + \cdots + a_k \mathbf{A}^k = \mathbf{0} \tag{*} \]

Since \( \mathbf{A} \) is invertible, \(0\) is not a root of its characteristic polynomial; as the minimal polynomial divides the characteristic polynomial, \(0\) is not a root of the minimal polynomial either, hence \(a_0 \neq 0 \). Multiplying both sides by \( \mathbf{A}^{-1} \) yields

\[ \mathbf{A}^{-1} = -\frac{a_1 \mathbf{I} + a_2 \mathbf{A} + a_3 \mathbf{A}^2 + \cdots + a_k \mathbf{A}^{k-1}}{a_0} \]

This means we can express the inverse of \( \mathbf{A} \) as a linear combination of powers of \( \mathbf{A} \).
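As a concrete check of this formula, here is a minimal Python sketch using exact rational arithmetic. The \(2 \times 2\) matrix and its minimal polynomial are chosen for illustration only:

```python
from fractions import Fraction

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Example matrix with minimal polynomial x^2 - 5x + 6 (eigenvalues 2 and 3),
# i.e. a_0 = 6, a_1 = -5, a_2 = 1 in the notation of (*).
A = [[Fraction(2), Fraction(1)],
     [Fraction(0), Fraction(3)]]
I = [[Fraction(1), Fraction(0)],
     [Fraction(0), Fraction(1)]]
a0, a1, a2 = Fraction(6), Fraction(-5), Fraction(1)

# A^{-1} = -(a_1 I + a_2 A) / a_0
Ainv = [[-(a1 * I[i][j] + a2 * A[i][j]) / a0 for j in range(2)]
        for i in range(2)]
# mat_mul(A, Ainv) recovers the identity matrix
```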

### 2. The Berlekamp-Massey algorithm

The Berlekamp-Massey algorithm solves the following problem in \(O(n^2)\) time:

Given a finite sequence \(\{x_i\}_{i=1}^n\), find a minimum-order linear recurrence consistent with the given sequence. Formally, find a shortest sequence \(c_0 = 1, c_1, \cdots, c_{k-1}\) such that \(\sum_{l=0}^{k-1} c_l x_{j-l} = 0\) holds for every \(j\) with \(k \le j \le n\).

This algorithm has many real-world applications. The most typical one is finding the shortest linear feedback shift register that produces a given binary sequence. It can also be viewed as interpolating a sequence with exponential terms. One important fact about the Berlekamp-Massey algorithm is that, for an order-\(r\) linearly recurrent sequence, taking the first \(2r\) elements as input suffices to recover the recurrence.
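A compact implementation over a prime field \(\mathbb{F}_p\) (the usual setting in practice, since it avoids coefficient growth) might look like the following sketch. The function name and the use of Fermat inverses via three-argument `pow` are illustrative choices, not part of the original text:

```python
def berlekamp_massey(S, mod):
    """Shortest recurrence c_0 = 1, c_1, ..., c_L over F_mod such that
    sum_{l=0}^{L} c_l * S[j-l] == 0 (mod mod) for every j >= L.
    `mod` must be prime (Fermat inverse is used below)."""
    C, B = [1], [1]        # current and previous connection polynomials
    L, m, b = 0, 1, 1      # order, steps since B was saved, last discrepancy
    for n in range(len(S)):
        # discrepancy: how far S[n] deviates from the current recurrence
        d = sum(C[i] * S[n - i] for i in range(len(C))) % mod
        if d == 0:
            m += 1
            continue
        T = C[:]
        coef = d * pow(b, mod - 2, mod) % mod
        # cancel the discrepancy using the previous failed recurrence B
        C += [0] * (len(B) + m - len(C))
        for i in range(len(B)):
            C[i + m] = (C[i + m] - coef * B[i]) % mod
        if 2 * L <= n:
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return C
```

For example, feeding it the Fibonacci numbers recovers \(c = (1, -1, -1)\), i.e. \(x_j - x_{j-1} - x_{j-2} = 0\).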

### 3. Finding the minimal polynomial

Note that an annihilating polynomial of \(\mathbf{A}\) is exactly a linear recurrence satisfied by the powers of \(\mathbf{A}\). However, computing the minimal polynomial directly from the powers of \(\mathbf{A}\) is infeasible, since forming each power costs a dense matrix multiplication. Instead, we may randomly pick vectors \(\mathbf{u}\) and \(\mathbf{v}\) and compute the recurrence of the scalar sequence \(\mathbf{u}^T \mathbf{A}^i \mathbf{v}\). We claim without proof that, with high probability, the coefficients of this recurrence are exactly those of the minimal polynomial. The sequence can be computed in \(O(mn)\) time by iteratively performing sparse matrix-vector multiplication, each in \(O(m)\) time. Finally, we apply the Berlekamp-Massey algorithm to the resulting sequence.
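The sequence computation is just a loop of sparse matrix-vector products. A minimal Python sketch (the triple-list sparse representation and the function name are illustrative):

```python
def krylov_sequence(triples, u, v, n, length):
    """Return s_i = u^T A^i v for i = 0, ..., length-1, where A is an
    n x n matrix stored as (row, col, value) triples. Each step is one
    O(m) sparse matrix-vector product, so the loop costs O(m * length)."""
    s, w = [], list(v)               # w holds A^i v
    for _ in range(length):
        s.append(sum(ui * wi for ui, wi in zip(u, w)))
        nxt = [0] * n
        for i, j, val in triples:    # nxt = A w in O(m) time
            nxt[i] += val * w[j]
        w = nxt
    return s
```

In the full algorithm one would take `length = 2 * n` (twice the bound on the order of the recurrence, per the fact from Section 2) and feed the result to Berlekamp-Massey.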

### 4. Solving the linear system

Since the inverse of a sparse matrix is generally not sparse, we won’t actually compute the inverse of \(\mathbf{A}\). Instead, we can compute

\[ \mathbf{A}^{-1}\mathbf{b} = -\frac{a_1 \mathbf{b} + a_2 \mathbf{A}\mathbf{b} + \cdots + a_k \mathbf{A}^{k-1}\mathbf{b}}{a_0} \]

via formula (*) in \(O(mn)\) time. The procedure is exactly the same as in finding the minimal polynomial: just iteratively perform sparse matrix-vector multiplication.