Solving Sparse Linear Systems in O(nm) Time

There are many practical applications that involve solving a sparse linear system $$\mathbf{A}\mathbf{x} = \mathbf{b}$$, such as random walks on sparse graphs. Traditional techniques such as Gaussian elimination and LU decomposition apply to general linear systems and run in cubic time. The fastest known algorithms for general matrices run in about $$O(n^{2.373})$$ time. For sparse matrices, however, we can do much better. Specifically, we may solve the system in $$O(mn)$$ time, where $$m$$ is the number of nonzero entries of $$\mathbf{A}$$. We assume that the matrix $$\mathbf{A}$$ is non-singular (i.e., invertible) throughout this passage.

1. Inverting a matrix from its annihilating polynomial

Given a polynomial $$p(x)$$, if $$p(\mathbf{A}) = \mathbf{0}$$ for a matrix $$\mathbf{A}$$, then we say $$p(x)$$ is an annihilating polynomial for $$\mathbf{A}$$. By the Cayley-Hamilton theorem, the characteristic polynomial $$\det( x\mathbf{I} - \mathbf{A} )$$ is an annihilating polynomial for $$\mathbf{A}$$. Among all annihilating polynomials for $$\mathbf{A}$$, the one with minimum degree is called the minimal polynomial of $$\mathbf{A}$$. It can be shown that the minimal polynomial of a given matrix is unique up to a constant factor, and that every annihilating polynomial is a polynomial multiple of the minimal polynomial.

We can invert a matrix $$\mathbf{A}$$ if we have its minimal polynomial:
$a_0 I + a_1 \mathbf{A} + a_2 \mathbf{A}^2 + \cdots + a_k \mathbf{A}^k = 0 \tag{*}$Since $$\mathbf{A}$$ is invertible, 0 is not a root of its characteristic polynomial; and since the minimal polynomial divides the characteristic polynomial, 0 is not a root of the minimal polynomial either, hence $$a_0 \neq 0$$. Multiplying both sides by $$\mathbf{A}^{-1}$$ yields
$\mathbf{A}^{-1} = -\frac{a_1 I + a_2 \mathbf{A} + a_3 \mathbf{A}^2 + \cdots + a_k \mathbf{A}^{k-1}}{a_0}$This means that we may represent the inverse of $$\mathbf{A}$$ as a linear combination of powers of $$\mathbf{A}$$.
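To make the formula concrete, here is a minimal sketch with a hypothetical $$2 \times 2$$ matrix whose minimal polynomial is easy to compute by hand (for a $$2 \times 2$$ matrix with distinct eigenvalues, the minimal polynomial equals the characteristic polynomial):

```python
from fractions import Fraction

# Hypothetical example: A = [[2, 1], [0, 3]] has eigenvalues 2 and 3,
# so its minimal polynomial is x^2 - 5x + 6, i.e. in the notation of (*):
# a0 = 6, a1 = -5, a2 = 1.
A = [[Fraction(2), Fraction(1)], [Fraction(0), Fraction(3)]]
a = [Fraction(6), Fraction(-5), Fraction(1)]

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

I = [[Fraction(1 if i == j else 0) for j in range(2)] for i in range(2)]

# A^{-1} = -(a1*I + a2*A) / a0
inv = [[-(a[1] * I[i][j] + a[2] * A[i][j]) / a[0] for j in range(2)]
       for i in range(2)]

print(mat_mul(A, inv))  # the identity matrix
```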

2. The Berlekamp-Massey algorithm

The Berlekamp-Massey algorithm solves the following problem in $O(n^2)$ time:

Given a finite sequence $$\{x_i\}_{i=1}^n$$, find a minimum-order linear recurrence consistent with the given sequence. Formally, find a shortest sequence of coefficients $$c_0 = 1, c_1, \cdots, c_{k-1}$$ such that $$\sum_{l=0}^{k-1} c_l x_{j-l} = 0$$ holds for all $$k \leq j \leq n$$.

This algorithm has many real-world applications. The most typical one is finding the shortest linear-feedback shift register that generates a given binary sequence. It can also be viewed as interpolating a sequence with exponential terms. One important fact about the Berlekamp-Massey algorithm: for a sequence satisfying a linear recurrence of order $$r$$, the first $$2r$$ elements suffice as input to recover the recurrence.
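A sketch of the algorithm over a prime field $$\mathbb{F}_p$$ (one common competitive-programming formulation; the returned list follows the convention $$x_i = \sum_j c_j x_{i-1-j}$$, i.e. the leading $$c_0 = 1$$ term is left implicit):

```python
def berlekamp_massey(s, p):
    """Shortest recurrence c over F_p with s[i] = sum(c[j] * s[i-1-j])."""
    ls, cur = [], []   # last failed recurrence, current recurrence
    lf, ld = 0, 0      # position and discrepancy of the last failure
    for i in range(len(s)):
        t = sum(cur[j] * s[i - 1 - j] for j in range(len(cur))) % p
        if (s[i] - t) % p == 0:
            continue                   # cur still fits, nothing to do
        if not cur:                    # first nonzero term of s
            cur = [0] * (i + 1)
            lf, ld = i, (s[i] - t) % p
            continue
        # Combine cur with a shifted, scaled copy of the last failure.
        k = (s[i] - t) * pow(ld, p - 2, p) % p
        c = [0] * (i - lf - 1) + [k] + [(-k * x) % p for x in ls]
        if len(c) < len(cur):
            c += [0] * (len(cur) - len(c))
        for j in range(len(cur)):
            c[j] = (c[j] + cur[j]) % p
        if i - lf + len(ls) >= len(cur):
            ls, lf, ld = cur, i, (s[i] - t) % p
        cur = c
    return cur

p = 998244353
print(berlekamp_massey([1, 1, 2, 3, 5, 8, 13, 21], p))  # → [1, 1]
```

On the Fibonacci prefix it recovers $$f(i) = f(i-1) + f(i-2)$$, i.e. coefficients `[1, 1]`.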

3. Finding the minimal polynomial

Note that an annihilating polynomial is exactly a linear recurrence satisfied by the powers of the matrix. However, it is infeasible to compute the minimal polynomial directly from the powers of $$\mathbf{A}$$, since each power is a dense $$n \times n$$ matrix. Instead, we may randomly pick vectors $$\mathbf{u}$$ and $$\mathbf{v}$$ and compute the minimal recurrence of the scalar sequence $$\mathbf{u}^T \mathbf{A}^i \mathbf{v}$$. We claim without proof that, with high probability, the coefficients of this recurrence are exactly those of the minimal polynomial. The sequence can be computed in $$O(mn)$$ time by iterated sparse matrix-vector multiplication, each product taking $O(m)$ time. Finally, apply the Berlekamp-Massey algorithm to the resulting sequence.
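The sequence-generation step might look like this (a sketch; the triple-list representation of $$\mathbf{A}$$ and the function name are illustrative):

```python
import random

def krylov_sequence(n, entries, u, v, length, p):
    """s[i] = u^T A^i v (mod p), where A is an n x n matrix given as a
    list of (row, col, value) triples with m nonzero entries.
    Each step is one sparse matrix-vector product in O(m) time,
    so the whole sequence costs O(m * length)."""
    s = []
    w = v[:]                            # invariant: w = A^i v
    for _ in range(length):
        s.append(sum(ui * wi for ui, wi in zip(u, w)) % p)
        nxt = [0] * n                   # nxt = A w
        for r, c, val in entries:
            nxt[r] = (nxt[r] + val * w[c]) % p
        w = nxt
    return s

# Hypothetical usage: random u, v; 2n terms suffice to recover a
# degree-at-most-n minimal polynomial with high probability.
p = 998244353
n = 3
entries = [(0, 0, 2), (0, 1, 1), (1, 2, 1), (2, 0, 3)]  # a sparse A
u = [random.randrange(1, p) for _ in range(n)]
v = [random.randrange(1, p) for _ in range(n)]
seq = krylov_sequence(n, entries, u, v, 2 * n, p)
```

Feeding `seq` to Berlekamp-Massey then yields the minimal polynomial's coefficients.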

4. Solving the linear system

Since the inverse of a sparse matrix is generally not sparse, we won’t actually compute the inverse of $$\mathbf{A}$$. Instead, we can compute $$\mathbf{A}^{-1}\mathbf{b}$$ via formula (*) in $$O(mn)$$ time. The procedure is exactly the same as in finding the minimal polynomial: iteratively perform sparse matrix-vector multiplication.
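A sketch of this final step, assuming the minimal-polynomial coefficients $$a_0, \ldots, a_k$$ have already been found (exact rational arithmetic here for clarity; a prime field works the same way):

```python
from fractions import Fraction

def solve_via_min_poly(n, entries, a, b):
    """x = A^{-1} b = -(a1*b + a2*A b + ... + ak*A^{k-1} b) / a0,
    using only sparse matrix-vector products: O(mk) time in total."""
    x = [Fraction(0)] * n
    w = [Fraction(bi) for bi in b]      # invariant: w = A^{i-1} b
    for ai in a[1:]:                    # a1, ..., ak
        x = [xi + ai * wi for xi, wi in zip(x, w)]
        nxt = [Fraction(0)] * n         # nxt = A w
        for r, c, val in entries:
            nxt[r] += val * w[c]
        w = nxt
    return [-xi / a[0] for xi in x]

# Hypothetical example: A = [[2, 1], [0, 3]] as sparse triples,
# minimal polynomial x^2 - 5x + 6, right-hand side b = [1, 1].
entries = [(0, 0, 2), (0, 1, 1), (1, 1, 3)]
a = [Fraction(6), Fraction(-5), Fraction(1)]
x = solve_via_min_poly(2, entries, a, [1, 1])
print(x)  # → [Fraction(1, 3), Fraction(1, 3)]
```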

[Codeforces Round #513] H. Sophisticated Device

Problem Statement

1. ADD %dest, %src1, %src2: R[dest] = R[src1] + R[src2]
2. POW %dest, %src: R[dest] = pow(R[src], d)

Solutions for the 2018 Nowcoder Summer ACM Multi-School Training Camp (Round 1)

A. Monotonic Matrix

• $$a_{i,j} \in \{0, 1, 2\}$$
• Each row is non-decreasing from left to right, and each column is non-decreasing from top to bottom

F. Sum of Maximum

$\sum_{x_1=1}^{a_1} \sum_{x_2=1}^{a_2} \cdots \sum_{x_n=1}^{a_n} \max\{x_1, x_2, \cdots, x_n\}$

Selected Linear-Algebra Problems from Online Judges

Fast Matrix Calculation

2014 Multi-University Training Contest 9, available at HDUOJ 4965

Step 1. Compute the $$N \times N$$ matrix $$C = AB$$.
Step 2. Compute $$M = C^{N \times N} \bmod 6$$.
Step 3. Output the sum of all entries of $$M$$.

$M = (AB)^{N \times N} = A (BA)^{N \times N - 1} B$
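The point of this identity is that $$BA$$ is a much smaller square matrix than $$AB$$ whenever $$A$$ is tall and $$B$$ is wide, so the exponentiation becomes cheap. A quick numerical sanity check with hypothetical small matrices and standard fast exponentiation:

```python
def mat_mul(X, Y, mod):
    n, k, m = len(X), len(Y), len(Y[0])
    return [[sum(X[i][t] * Y[t][j] for t in range(k)) % mod
             for j in range(m)] for i in range(n)]

def mat_pow(X, e, mod):
    """Binary exponentiation of a square matrix, O(log e) multiplications."""
    n = len(X)
    R = [[int(i == j) for j in range(n)] for i in range(n)]
    while e:
        if e & 1:
            R = mat_mul(R, X, mod)
        X = mat_mul(X, X, mod)
        e >>= 1
    return R

A = [[1, 2], [3, 4], [5, 0]]   # 3 x 2 (tall)
B = [[1, 0, 1], [2, 1, 0]]     # 2 x 3 (wide)
t = 9                           # stands in for the exponent N*N
lhs = mat_pow(mat_mul(A, B, 6), t, 6)                               # (AB)^t
rhs = mat_mul(mat_mul(A, mat_pow(mat_mul(B, A, 6), t - 1, 6), 6),   # A (BA)^{t-1} B
              B, 6)
print(lhs == rhs)  # → True
```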

Matrix Multiplication

2014 Multi-University Training Contest 5, available at HDUOJ 4920

$AB = (A_1 - A_2)(B_1 - B_2) = A_1B_1 - A_1B_2 - A_2B_1 + A_2B_2$

(Of course, this problem can also be passed using some other good constant-factor optimizations.)

Matrix Revolution

HDUOJ 1759

$A + A^2 + \cdots + A^k$

$$A$$ is a sparse matrix given as $$m$$ triples $$(a, b, c)$$, each meaning $$A_{ab} = c$$. In addition, all entries on the main diagonal are guaranteed to be 1; all remaining entries are 0.

$$(0 \leq n \leq 1000, 0 \leq m \leq 10n, n < k < 10^{100}, 0 < c < 10^9)$$
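One standard way to evaluate such a power sum is divide and conquer on the exponent (a generic sketch; Python's big integers handle the huge $$k$$, and this ignores any extra structure the problem may offer):

```python
def mat_mul(X, Y, mod):
    n = len(X)
    return [[sum(X[i][t] * Y[t][j] for t in range(n)) % mod
             for j in range(n)] for i in range(n)]

def mat_add(X, Y, mod):
    n = len(X)
    return [[(X[i][j] + Y[i][j]) % mod for j in range(n)] for i in range(n)]

def identity(n):
    return [[int(i == j) for j in range(n)] for i in range(n)]

def mat_pow(X, e, mod):
    R = identity(len(X))
    while e:
        if e & 1:
            R = mat_mul(R, X, mod)
        X = mat_mul(X, X, mod)
        e >>= 1
    return R

def power_sum(A, k, mod):
    """S(k) = A + A^2 + ... + A^k, via the halving identities
    S(2t) = (I + A^t) S(t)  and  S(k) = S(k-1) + A^k for odd k.
    O(log^2 k) matrix multiplications, so a 100-digit k is fine."""
    n = len(A)
    if k == 0:
        return [[0] * n for _ in range(n)]
    if k % 2:
        return mat_add(power_sum(A, k - 1, mod), mat_pow(A, k, mod), mod)
    half = power_sum(A, k // 2, mod)
    return mat_mul(mat_add(identity(n), mat_pow(A, k // 2, mod), mod),
                   half, mod)

A = [[1, 1], [0, 1]]
print(power_sum(A, 3, 100))  # A + A^2 + A^3 = [[3, 6], [0, 3]]
```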

Matrix Multiplication

POJ 3318

(Of course, a cubic-time algorithm with some good constant-factor optimization might also pass.)

Remark:

Square

UVA 11542, also available at vjudge

$$1 \leq T \leq 30$$, $$1 \leq n \leq 100$$, all integers are between 1 and $$10^{15}$$, and every prime factor of each integer is at most 500.