Approaching Optimality for Solving SDD Linear Systems

We present an algorithm that, on input of an $n$-vertex $m$-edge weighted graph $G$ and a value $k$, produces an {\em incremental sparsifier} $\hat{G}$ with $n-1 + m/k$ edges, such that the condition number of the pair $(G, \hat{G})$ is bounded above by $\tilde{O}(k\log^2 n)$, with probability $1-p$. The algorithm runs in time $$\tilde{O}((m \log{n} + n\log^2{n})\log(1/p)).$$ As a result, we obtain an algorithm that, on input of an $n\times n$ symmetric diagonally dominant matrix $A$ with $m$ non-zero entries and a vector $b$, computes a vector $x$ satisfying $\|x-A^{+}b\|_A < \epsilon \|A^{+}b\|_A$ in expected time $\tilde{O}(m\log^2 n \log(1/\epsilon))$.


Introduction
Fast algorithms for solving linear systems, and the related problem of finding a few fundamental eigenvectors, address possibly one of the most important problems in algorithm design. The problem has motivated work on fast matrix multiplication methods, graph separators, and more recently graph sparsifiers. For most applications the matrix is sparse, and thus one would like algorithms whose running time is efficient in terms of the number of non-zero entries of the matrix. Little is known about how to efficiently solve general sparse systems $Ax = b$. But substantial progress has been made in the case of symmetric diagonally dominant (SDD) systems, where $A_{ii} \geq \sum_{j \neq i} |A_{ij}|$. In a seminal work, Spielman and Teng showed that SDD systems can be solved in nearly-linear time [ST04, EEST05, ST06].
Recent research, largely motivated by the Spielman and Teng solver (ST-solver), demonstrates the power of SDD solvers as an algorithmic primitive. The ST-solver is the key subroutine of the fastest known algorithms for a multitude of problems that include: (i) computing the first non-trivial (Fiedler) eigenvector of the graph, or more generally the first few eigenvectors, with well-known applications to the sparsest-cut problem [Fie73, ST96, Chu97]; (ii) generating spectral sparsifiers that also act as cut-preserving sparsifiers [SS08]; (iii) solving linear systems derived from elliptic finite element discretizations of a significant class of partial differential equations [BHV04]; (iv) generalized lossy flow problems [SD08]; (v) generating random spanning trees [KM09]; and (vi) several optimization problems in computer vision [KMT09, KMST09b] and graphics [MP08, JMD+07]. A more thorough discussion of applications of the solver can be found in [Spi10, Ten10].
The ST-solver is an iterative algorithm that produces a sequence of approximate solutions converging to the actual solution of the input system $Ax = b$. The performance of iterative methods is commonly measured in terms of the time required to reduce an appropriately defined approximation error by a constant factor. Even including recent improvements to some of its components, the time complexity of the ST-solver is at least $O(m \log^{15} n)$. The large exponent in the logarithm reflects the fact that the algorithm is quite complicated and lacks practicality. The design of a faster and simpler solver is a challenging open question.
In this paper we present a conceptually simple and possibly practical iterative solver that runs in time $\tilde{O}(m \log^2 n)$. Its main ingredient is a new incremental graph sparsification algorithm, which is of independent interest. The paper is organized as follows. In Section 2 we review basic notions and introduce notation. In Section 3 we discuss the development of SDD solvers, the algorithmic questions it motivates, and the progress on them, with an emphasis on the graph sparsification problem. In Section 4 we present a high-level description of our approach and discuss implications of our solver for the graph sparsification problem. The incremental sparsifier is presented and analyzed in Sections 5 and 6. In Section 7 we explain how it can be used to construct the solver. Finally, in the Appendix we give pseudocode for the complete solver.

Preliminaries
In this section we briefly recall background facts about Laplacians of weighted graphs. For more details, we refer the reader to [RG97] and [BH03]. Throughout the paper, we discuss connected graphs with positive edge weights. We use $n$ and $m$ to denote $|V|$ and $|E|$.
A symmetric matrix $A$ is positive semi-definite if for any vector $x$, $x^T A x \geq 0$. For such semi-definite matrices $A$, we can also define the $A$-norm of a vector $x$ by $\|x\|_A = \sqrt{x^T A x}$. Fix an arbitrary numbering of the vertices and edges of a graph $G$. Let $w_{i,j}$ denote the weight of the edge $(i, j)$. The Laplacian $L_G$ of $G$ is the matrix defined by: (i) $L_G(i, j) = -w_{i,j}$, (ii) $L_G(i, i) = \sum_{j \neq i} w_{i,j}$. For any vector $x$, one can check that $$x^T L_G x = \sum_{(i,j) \in E} w_{i,j}(x_i - x_j)^2.$$ It follows that $L_G$ is positive semi-definite and the $L_G$-norm is a valid norm. We also define a partial order $\preceq$ on symmetric semi-definite matrices, where $A \preceq B$ if and only if $x^T A x \leq x^T B x$ for all vectors $x$. We say that a graph $H$ $\kappa$-approximates a graph $G$ if $L_H \preceq L_G \preceq \kappa L_H$. By the definition of $\preceq$ from above, this relationship is equivalent to $x^T L_H x \leq x^T L_G x \leq \kappa x^T L_H x$ for all vectors $x$. This implies that the condition number of the pair $(L_G, L_H)$ is upper bounded by $\kappa$. The condition number is an algebraically motivated notion; upper bounds on it are used to predict the convergence rate of iterative numerical algorithms.
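The definitions above are easy to check numerically. The following sketch (our own illustration, not from the paper; the graph and the vector are arbitrary) builds the Laplacian of a small weighted graph and verifies the quadratic-form identity, which in turn gives positive semi-definiteness.

```python
import numpy as np

def laplacian(n, weighted_edges):
    """Build L_G from a list of (i, j, w) entries with i != j and w > 0."""
    L = np.zeros((n, n))
    for i, j, w in weighted_edges:
        L[i, j] -= w
        L[j, i] -= w
        L[i, i] += w
        L[j, j] += w
    return L

edges = [(0, 1, 2.0), (1, 2, 1.0), (0, 2, 3.0), (2, 3, 0.5)]
L = laplacian(4, edges)
x = np.array([1.0, -2.0, 0.5, 3.0])
quad = x @ L @ x
by_edges = sum(w * (x[i] - x[j]) ** 2 for i, j, w in edges)
assert abs(quad - by_edges) < 1e-12            # x^T L_G x = sum_e w_e (x_i - x_j)^2
assert np.all(np.linalg.eigvalsh(L) > -1e-12)  # hence L_G is positive semi-definite
```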

Prior work on SDD solvers and related graph theoretic problems
Symmetric diagonally dominant systems are linear-time reducible to linear systems whose matrix is the Laplacian of a weighted graph, via a construction known as double cover that only doubles the number of non-zero entries in the system [GMZ95, Gre96]. The one-to-one correspondence between graphs and their Laplacians allows us to focus on weighted graphs, and to use the words graph and Laplacian interchangeably.
In a ground-breaking approach, Vaidya [Vai91] proposed the use of spectral graph-theoretic properties for the design of provably good graph preconditioners, i.e. graphs that in some sense approximate the input graph, yet are somehow easier to solve. Many authors built upon the ideas of Vaidya to develop combinatorial preconditioning, an area on the border of numerical linear algebra and spectral graph theory [BGH+05]. The work in the present paper, as well as the Spielman and Teng solver, is based on this approach. It is worth noting that combinatorial preconditioning is only one of the rich connections between combinatorics and linear algebra [Chu97, RG97].
Vaidya originally proposed constructing a preconditioner for a given graph from a maximum weight spanning tree of the graph, subsequently augmented with graph edges. This yielded the first non-trivial results: an $O((dn)^{1.75})$ time algorithm for maximum degree $d$ graphs, and an $O((dn)^{1.2})$ time algorithm for maximum degree $d$ planar graphs [Jos97].
Later, Boman and Hendrickson [BH03] made the key observation that the notion of stretch (see Section 6 for a definition) is crucial for the construction of a good spanning tree preconditioner; they showed that if the non-tree edges have average stretch $s$ over a spanning tree, the spanning tree is an $O(sm)$-approximation of the graph. Armed with this observation and the low-stretch trees of Alon et al. [AKPW95], they obtained a solver running in time $m^{3/2+o(1)}\log(1/\epsilon)$.
The utility of low-stretch trees in SDD solvers motivated further research on the topic. Elkin et al. [EEST05] gave an $O(m \log^2 n)$ time algorithm for the computation of spanning trees with total stretch $\tilde{O}(m \log^2 n)$. More recently, Abraham et al. presented a nearly tight construction of low-stretch trees [ABN08], giving an $O(m \log n + n \log^2 n)$ time algorithm that on input a graph $G$ produces a spanning tree of total stretch $\tilde{O}(m \log n)$. The algorithm of [EEST05] is a basic component of the ST-solver. While the algorithm of [ABN08] did not improve the ST-solver, it is indispensable to our upper bound.
The major new notion introduced by Spielman and Teng [ST04] in their nearly-linear time algorithm was that of a spectral sparsifier, i.e. a graph with a nearly-linear number of edges that $\alpha$-approximates a given graph for a constant $\alpha$. Before the introduction of spectral sparsifiers, Benczúr and Karger [BK96] had presented an $O(m \log^3 n)$ algorithm for the construction of a cut-preserving sparsifier with $O(n \log n)$ edges. A good spectral sparsifier is also a good cut-preserving sparsifier, but the opposite is not necessarily true.
The ST-solver [ST04] consists of a number of major algorithmic components. The base routine is a local partitioning algorithm, which is the main subroutine of a global nearly-linear time partitioning algorithm. The partitioning algorithm is used as a subroutine in the construction of the spectral sparsifier. Finally, the spectral sparsifier is combined with the $O(m \log^2 n)$ total stretch spanning trees of [EEST05] to produce a $(k, O(k \log^c n))$ ultrasparsifier, i.e. a graph $\hat{G}$ with $n - 1 + (n/k)$ edges which $O(k \log^c n)$-approximates the given graph, for some $c > 25$. The bottleneck in the complexity of the ST-solver lies in the running time of the ultrasparsification algorithm and the approximation quality of the ultrasparsifier.
In the special case of planar graphs the ST-solver runs in time $\tilde{O}(n \log^2 n)$. An asymptotically optimal linear-work algorithm for planar graphs was given in [KM07]. The key observation in [KM07] was that although planar graphs do not necessarily have spanning trees of average stretch less than $O(\log n)$, they still have $(k, ck \log k)$ ultrasparsifiers for a large enough constant $c$; these can be obtained by finding ultrasparsifiers for constant-size subgraphs that contain most of the edges of the graph, and conceding the rest of the edges in the global ultrasparsifier. In addition, a more practical approach to the construction of constant-approximation preconditioners for graphs of bounded average degree was given in [KM08]. To this day, the only known improvement for the general case was obtained by Andersen et al. [ACL06], who presented a faster and more effective local partitioning routine that can replace the partition routine of the spectral sparsifier, improving the complexity of the solver as well.
Significant progress has been made on the spectral graph sparsification problem. Spielman and Srivastava [SS08] showed how to construct a much stronger spectral sparsifier with $O(n \log n)$ edges, by sampling edges with probabilities proportional to their effective resistance, when the graph is viewed as an electrical network. While the algorithm is conceptually simple and attractive, its fastest known implementation still relies on the ST-solver. Leaving the envelope of nearly-linear time algorithms, Batson, Spielman and Srivastava [BSS09] presented a polynomial time algorithm for the construction of a "twice-Ramanujan" spectral sparsifier with a nearly optimal linear number of edges. Finally, Kolla et al. [KMST09a] gave a polynomial time algorithm for the construction of a nearly-optimal $(k, \tilde{O}(k \log n))$ ultrasparsifier.

Our contribution
In an effort to design a faster sparsification algorithm, we ask: when and why does the much simpler and faster cut-preserving sparsifier of [BK96] fail to work as a spectral sparsifier? Perhaps the essential example is that of the cycle and the line graph; while the two graphs have roughly the same cuts, their condition number is $O(n)$. The missing edge has a stretch of $O(n)$ through the rest of the graph, and thus it has high effective resistance; the effective resistance-based algorithm of Spielman and Srivastava would have kept this edge. It is then natural to try to design a sparsification algorithm that precisely avoids generating a graph whose "missing" edges have a high stretch over the rest of the original graph. This line of reasoning leads us to a conceptually simple sparsification algorithm: find a low-stretch spanning tree with a total stretch of $O(m \log n)$. Scale it up by a factor of $k$, so the total stretch of the non-tree edges over the scaled-up tree is $O(m \log n/k)$, and add the scaled-up tree to the sparsifier. Then over-sample the rest of the edges with probability proportional to their stretch over the scaled-up tree, taking $\tilde{O}(m \log^2 n/k)$ samples. In Sections 5 and 6 we analyze a slight variation of this idea and show that while it does not produce an ultrasparsifier, it produces what we call an incremental sparsifier, a graph with $n - 1 + m/k$ edges that $\tilde{O}(k \log^2 n)$-approximates the given graph. Our proof relies on the machinery developed by Spielman and Srivastava [SS08].
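The cycle-versus-path example can be verified numerically. The sketch below (our own code, not from the paper; the helper names and graph sizes are illustrative) computes the condition number of the pair (cycle, path) and shows it growing linearly with $n$.

```python
import numpy as np

def laplacian(n, edges):
    L = np.zeros((n, n))
    for i, j, w in edges:
        L[i, i] += w; L[j, j] += w
        L[i, j] -= w; L[j, i] -= w
    return L

def condition_number(LG, LH):
    # nonzero generalized eigenvalues of (L_G, L_H); both nullspaces are span{1}
    vals = np.linalg.eigvals(np.linalg.pinv(LH) @ LG)
    vals = np.real(vals[np.abs(vals) > 1e-8])
    return vals.max() / vals.min()

kappas = {}
for n in (8, 16, 32):
    path = [(i, i + 1, 1.0) for i in range(n - 1)]
    cycle = path + [(n - 1, 0, 1.0)]
    kappas[n] = condition_number(laplacian(n, cycle), laplacian(n, path))
print({n: round(k, 2) for n, k in kappas.items()})  # kappa grows linearly with n
```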
As we explain in Section 7, the incremental sparsifier is all we need to design a solver that runs in the claimed time. Precisely, we prove the following.
Theorem 4.1 On input an $n \times n$ symmetric diagonally dominant matrix $A$ with $m$ non-zero entries and a vector $b$, a vector $x$ satisfying $\|x - A^+ b\|_A < \epsilon \|A^+ b\|_A$ can be computed in expected time $\tilde{O}(m \log^2 n \log(1/\epsilon))$.

Implications for the graph sparsification problem
The only known nearly-linear time algorithm that produces a spectral sparsifier with $O(n \log n)$ edges is due to Spielman and Srivastava [SS08], and it is based on $O(\log n)$ calls to an SDD linear system solver. Our solver brings the running time of the Spielman and Srivastava algorithm to $\tilde{O}(m \log^3 n)$. It is interesting that this algebraic approach matches, up to $\log \log n$ factors, the running time bound of the purely combinatorial algorithm of Benczúr and Karger [BK96] for the computation of the (much weaker) cut-preserving sparsifier. We note however that an $\tilde{O}(m + n \log^4 n)$ time cut-preserving sparsification algorithm was recently announced informally [HP10].
Sparsifying once with the Spielman and Srivastava algorithm and then applying our incremental sparsifier gives a $(k, O(k \log^3 n))$ ultrasparsification algorithm that runs in $\tilde{O}(m \log^3 n)$ randomized time. Within the envelope of nearly-linear time algorithms, this is the best known ultrasparsification algorithm in terms of both quality and running time. Our guarantee on the quality of the ultrasparsifier is off by a factor of $O(\log^2 n)$ compared to the ultrasparsifier presented in [KMST09a]. In the special case where the input graph has $O(n)$ edges, our incremental sparsifier is a $(k, O(k \log^2 n))$ ultrasparsifier.

Sparsification by Oversampling
In this section we revisit a sampling scheme proposed by Spielman and Srivastava for sparsifying a graph [SS08]. Consider the following general sampling scheme.

Sample
Input: Graph $G = (V, E, w)$, numbers $p'_e > 0$, real $0 < \xi < 1$
Output: Graph $G' = (V, E', w')$
1: $t := \sum_{e \in E} p'_e$
2: $q := C_s t \log t \log(1/\xi)$
3: $E' := \emptyset$
4: for $q$ times do
5: Sample one $e \in E$, with the probability of picking $e$ being $p_e := p'_e/t$
6: Add $e$ to $E'$ with weight $w'_e = w_e/p_e$
7: end for
8: For all $e \in E'$, let $w'_e := w'_e/q$
9: return $G'$

Spielman and Srivastava pick $p'_e = w_e R_e$, where $R_e$ is the effective resistance of $e$ in $G$, if $G$ is viewed as an electrical network with resistances $1/w_e$. This choice returns a spectral sparsifier. A key to bounding the number of required samples is the identity $\sum_e p'_e = n - 1$. Calculating good approximations to the effective resistances seems to be at least as hard as solving a system, but as we will see in Section 6, it is easier to compute numbers $p'_e \geq w_e R_e$, while still controlling the size of $t = \sum_e p'_e$. The following theorem considers a sampling scheme based on $p'_e$'s with this property.
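One way to see why the reweighting $w'_e = w_e/(p_e q)$ is the right choice is that it makes the sampled graph an unbiased estimator of the original. The sketch below (our own check; the graph and the $p'_e$ values are arbitrary) verifies $E[L_{G'}] = L_G$ by exact enumeration rather than random sampling.

```python
import numpy as np

def laplacian(n, es):
    L = np.zeros((n, n))
    for i, j, w in es:
        L[i, i] += w; L[j, j] += w
        L[i, j] -= w; L[j, i] -= w
    return L

edges = [(0, 1, 1.0), (1, 2, 2.0), (2, 0, 4.0), (2, 3, 1.0)]
n, q = 4, 10
p_prime = np.array([3.0, 1.0, 0.5, 2.0])   # arbitrary positive p'_e
p = p_prime / p_prime.sum()                # p_e = p'_e / t

# One sample contributes the Laplacian of edge e with weight w_e / (p_e q),
# so its expectation is sum_e p_e * (w_e / (p_e q)) * L_e = L_G / q,
# and the q samples together have expectation exactly L_G.
one_sample = sum(p[k] * laplacian(n, [(i, j, w / (p[k] * q))])
                 for k, (i, j, w) in enumerate(edges))
assert np.allclose(q * one_sample, laplacian(n, edges))
```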
Theorem 5.1 (Oversampling) Let $G = (V, E, w)$ be a graph and let $t = \sum_{e \in E} p'_e$. Assuming that $p'_e \geq w_e R_e$ for each edge $e \in E$, and $\xi \in \Omega(1/n)$, the graph $H = {\rm Sample}(G, p', \xi)$ satisfies $$L_G \preceq 2L_H \preceq 3L_G$$ with probability at least $1 - \xi$.
The proof closely follows that of Spielman and Srivastava [SS08], with only a minor difference in one calculation. Let us first review some necessary lemmas.
If we assign arbitrary orientations to the edges, we can define the incidence matrix $\Gamma \in \Re^{m \times n}$ as follows: $$\Gamma(e, u) = \begin{cases} 1 & \text{if } u \text{ is the head of } e, \\ -1 & \text{if } u \text{ is the tail of } e, \\ 0 & \text{otherwise.} \end{cases}$$ If we let $W$ be the diagonal matrix containing the edge weights, then $W^{1/2}$ is a real positive diagonal matrix as well, since all edge weights are positive. The Laplacian $L$ can be written in terms of $W$ and $\Gamma$ as $L = \Gamma^T W \Gamma$. Algorithm Sample forms a new graph by multiplying the weight of each edge $e$ by a nonnegative number $s_e$. If $S$ is the diagonal matrix with $S(e, e) = s_e$, the Laplacian of the new graph can be seen to be equal to $\Gamma^T W^{1/2} S W^{1/2} \Gamma$. Let $L^+$ denote the Moore-Penrose pseudoinverse of $L$, i.e. the unique matrix sharing with $L$ its null space and acting as the inverse of $L$ on its range. The key to the proofs of [SS08] is the $m \times m$ matrix $\Pi = W^{1/2} \Gamma L^+ \Gamma^T W^{1/2}$, for which the following lemmas are proved.
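The basic properties of $\Pi$ used below can be checked directly on a small example. The following sketch (our own code; the graph is arbitrary) verifies that $\Pi$ is a projection, that its trace is $n - 1$, and that $\Pi(e, e) = w_e R_e$.

```python
import numpy as np

edges = [(0, 1, 2.0), (1, 2, 1.0), (0, 2, 3.0), (2, 3, 0.5)]
n, m = 4, len(edges)
Gamma = np.zeros((m, n))
W = np.zeros((m, m))
for e, (u, v, w) in enumerate(edges):
    Gamma[e, u], Gamma[e, v] = 1.0, -1.0   # arbitrary orientation: u head, v tail
    W[e, e] = w
L = Gamma.T @ W @ Gamma                    # L = Gamma^T W Gamma
Lplus = np.linalg.pinv(L)                  # Moore-Penrose pseudoinverse
Pi = np.sqrt(W) @ Gamma @ Lplus @ Gamma.T @ np.sqrt(W)

assert np.allclose(Pi @ Pi, Pi)            # Pi is a projection
assert abs(np.trace(Pi) - (n - 1)) < 1e-9  # trace = rank = n - 1 (connected graph)
for e, (u, v, w) in enumerate(edges):
    chi = np.zeros(n); chi[u] = 1.0; chi[v] = -1.0
    R_e = chi @ Lplus @ chi                # effective resistance of e
    assert abs(Pi[e, e] - w * R_e) < 1e-9  # Pi(e, e) = w_e R_e
```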
Lemma 5.3 (Lemma 4 in [SS08]) Suppose $S$ is a nonnegative diagonal matrix such that $\|\Pi S \Pi - \Pi\Pi\|_2 \leq \epsilon$. Then $(1 - \epsilon)\, x^T L x \leq x^T \tilde{L} x \leq (1 + \epsilon)\, x^T L x$ for all $x$, where $\tilde{L} = \Gamma^T W^{1/2} S W^{1/2} \Gamma$.

We also use Lemma 5.4 below, which is Theorem 3.1 from Rudelson and Vershynin [RV07]. The first part of the lemma was also used, as Lemma 5 in [SS08], in a similar way.
Lemma 5.4 Let $p$ be a probability distribution over $\Omega \subseteq \mathbb{R}^d$ such that $\sup_{y \in \Omega} \|y\|_2 \leq M$ and $\|E_p(yy^T)\|_2 \leq 1$. Let $y_1, \ldots, y_q$ be independent samples drawn from $p$, and let $a = CM\sqrt{\frac{\log q}{q}}$. Then:
1. $E\left\| \frac{1}{q}\sum_{i=1}^q y_i y_i^T - E_p(yy^T) \right\|_2 \leq a$, provided $a < 1$;
2. for every $x \in (0, 1)$, $\quad Pr\left( \left\| \frac{1}{q}\sum_{i=1}^q y_i y_i^T - E_p(yy^T) \right\|_2 > x \right) \leq 2 e^{-cx^2/a^2}$.
Here $C$ and $c$ are fixed constants.
Proof (of Theorem 5.1) Following the pseudocode of Sample, let $t = \sum_e p'_e$ and $q = C_s t \log t \log(1/\xi)$. Consider the distribution that assigns to edge $e$ the vector $y_e = \Pi(\cdot, e)/\sqrt{p_e}$ with probability $p_e = p'_e/t$, and let $y_1, \ldots, y_q$ be independent samples from it. It can be seen that $$\sup_e \|y_e\|_2^2 = \sup_e \frac{\Pi(e, e)}{p_e} = \sup_e \frac{w_e R_e}{p'_e/t} \leq t, \eqno(5.1)$$ where the last inequality follows from the assumption about the $p'_e$. Recall now that we have $\log(1/\xi) \leq \log n$ by assumption, $t \geq \sum_e w_e R_e$ by construction, and $\sum_e w_e R_e = n - 1$ by Lemma 3 in [SS08]. Combining these facts and setting $q = c_S t \log t \log(1/\xi)$ for a proper constant $c_S$, part 1 of Lemma 5.4 gives $a \leq \sqrt{c/(4 \log(2/\xi))}$.
Now substituting $x = \frac{1}{2}$ into part 2 of Lemma 5.4, we get $$Pr\left( \left\| \frac{1}{q}\sum_{i=1}^q y_i y_i^T - E(yy^T) \right\|_2 > \frac{1}{2} \right) \leq 2e^{-c/(4a^2)} \leq \xi.$$ It follows that with probability at least $1 - \xi$ we have $\|\Pi S \Pi - \Pi\Pi\|_2 \leq 1/2$. The theorem then follows by Lemma 5.3. Note. The upper bound for $M$ in inequality 5.1 is in fact the only place where our proof differs from that of [SS08]. In their case the last inequality is replaced by an exact equality, which is possible because the exact values for $w_e R_e$ are used. In our case, using inexact values we get a weaker upper bound, which is reflected in the density (depending on $m$, not $n$) of the incremental sparsifier. It is however enough for the solver.

Incremental Sparsifier
Consider a spanning tree $T$ of $G = (V, E, w)$ and let $w'(e) = 1/w(e)$. If the unique tree path connecting the endpoints of $e$ consists of edges $e_1, \ldots, e_k$, the stretch of $e$ by $T$ is defined to be $${\rm stretch}_T(e) = \frac{\sum_{i=1}^k w'(e_i)}{w'(e)} = w(e) \sum_{i=1}^k \frac{1}{w(e_i)}.$$
Let $R_e$ denote the effective resistance of $e$ in $G$, and $R^T_e$ denote the effective resistance of $e$ in $T$. We have $R^T_e = \sum_{i=1}^k 1/w(e_i)$, and thus ${\rm stretch}_T(e) = w_e R^T_e$. By Rayleigh's monotonicity law [DS00], we have $R^T_e \geq R_e$, so ${\rm stretch}_T(e) \geq w_e R_e$. As the numbers ${\rm stretch}_T(e)$ satisfy the condition of Theorem 5.1, we can use them for oversampling. But at the same time we want to control the total stretch, as it directly affects the total number of samples required by Sample. This leads to taking $T$ to be a low-stretch tree, with the guarantees provided by the following result of Abraham, Bartal, and Neiman [ABN08].
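These facts can be illustrated on a small example. The sketch below (our own code; the graph is arbitrary) computes ${\rm stretch}_T(e) = w_e R^T_e$ for every edge and checks Rayleigh monotonicity, ${\rm stretch}_T(e) \geq w_e R_e$.

```python
import numpy as np

def laplacian(n, es):
    L = np.zeros((n, n))
    for i, j, w in es:
        L[i, i] += w; L[j, j] += w
        L[i, j] -= w; L[j, i] -= w
    return L

def eff_res(L, u, v):
    chi = np.zeros(len(L)); chi[u] = 1.0; chi[v] = -1.0
    return chi @ np.linalg.pinv(L) @ chi

edges = [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.0), (0, 3, 4.0), (1, 3, 0.5)]
tree = edges[:3]                          # a spanning tree on 4 vertices
LG, LT = laplacian(4, edges), laplacian(4, tree)
for u, v, w in edges:
    stretch = w * eff_res(LT, u, v)       # stretch_T(e) = w_e * R^T_e
    assert stretch >= w * eff_res(LG, u, v) - 1e-9   # Rayleigh monotonicity
    if (u, v, w) in tree:
        assert abs(stretch - 1.0) < 1e-9  # tree edges have stretch 1
```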
Our key idea is to scale up the low-stretch tree by a factor of κ, incurring a condition number of κ but allowing us to sample the non-tree edges aggressively using the upper bounds on their effective resistances given by the tree.The details are given in algorithm IncrementalSparsify.

IncrementalSparsify
Input: Graph $G$, reals $\kappa > 1$, $0 < \xi < 1$
Output: Graph $H$
1: $T := {\rm LowStretchTree}(G)$
2: Let $T'$ be $T$ scaled by a factor of $\kappa$
3: Let $G'$ be the graph obtained from $G$ by replacing $T$ by $T'$
4: for $e \in E$ do
5: Calculate ${\rm stretch}_{T'}(e)$
6: end for
7: $H := {\rm Sample}(G', {\rm stretch}_{T'}, \xi/2)$
8: return $2H$

Theorem 6.2 Given a graph $G$ with $n$ vertices and $m$ edges, and any values $\kappa < m$, $\xi \in \Omega(1/n)$, IncrementalSparsify computes a graph $H$ such that, with probability at least $1 - \xi$: (i) $G \preceq H \preceq 3\kappa G$; (ii) $H$ has $n - 1 + \tilde{O}((m/\kappa) \log^2 n \log(1/\xi))$ edges.

Proof We first bound the condition number. Since the weight of an edge is increased by at most a factor of $\kappa$, we have $G \preceq G' \preceq \kappa G$. Furthermore, the effective resistance along the tree of each non-tree edge decreases by a factor of $\kappa$. Thus IncrementalSparsify sets $p'_e = 1$ if $e \in T$ and $p'_e = {\rm stretch}_T(e)/\kappa$ otherwise, and invokes Sample to compute a graph $H$ such that with probability at least $1 - \xi/2$ we get $G' \preceq 2H \preceq 3G'$, and hence $G \preceq 2H \preceq 3\kappa G$. We next bound the number of non-tree edges. Let $t' = \sum_{e \notin T} {\rm stretch}_{T'}(e)$, so $t' = \tilde{O}((m/\kappa)\log n)$. Then for the number $t$ used in Sample we have $t = t' + n - 1$, and $q = C_s t \log t \log(1/\xi)$ is the number of edges sampled in the call of Sample. Let $X_i$ be a random variable which is 1 if the $i$th edge picked by Sample is a non-tree edge and 0 otherwise. The total number of non-tree edges sampled is the random variable $X = \sum_{i=1}^q X_i$, and its expected value can be calculated using the fact $Pr(X_i = 1) = t'/t$: $$E[X] = q\,t'/t = \tilde{O}((m/\kappa) \log^2 n \log(1/\xi)).$$ A standard form of Chernoff's inequality is: $$Pr(X > (1 + \delta)E[X]) < \left( \frac{e^{\delta}}{(1 + \delta)^{1+\delta}} \right)^{E[X]}.$$
Letting $\delta = 2$, and using the assumption $\kappa < m$, we get $Pr(X > 3E[X]) < (e^2/27)^{E[X]} < 1/n^c$ for any constant $c$. Hence, the probability that IncrementalSparsify succeeds, with respect to both the number of non-tree edges and the condition number, is at least $1 - \xi$.
We now consider the time complexity. We first generate a low-stretch spanning tree in $O(m \log n + n \log^2 n)$ time. We then compute the effective resistance of each non-tree edge along the tree. This can be done using Tarjan's off-line LCA algorithm [Tar79], which takes $O(m)$ time [GT83]. We next call Sample with parameters that make it draw $\tilde{O}((n + (m/\kappa) \log n) \log n \log(1/\xi))$ samples (precisely, $O(t \log t \log(1/\xi))$ samples, where $t = \tilde{O}(n + (m/\kappa) \log n)$). To compute each sample efficiently, we assign each edge a subinterval of the unit interval $[0, 1]$ with length equal to its probability, so that no two intervals overlap. At each sampling iteration we pick a random value in $[0, 1]$ and do a binary search to find the interval that contains it, in $O(\log n)$ time. Thus the cost of a call to Sample is $\tilde{O}((n \log^2 n + (m/\kappa) \log^3 n) \log(1/\xi))$.
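The interval-plus-binary-search step can be sketched as follows (our own illustration; the $p'_e$ values are arbitrary): precompute the cumulative sums once, then each draw costs one random number and one bisection.

```python
import bisect
import random

weights = [3.0, 1.0, 0.5, 2.0, 1.5]        # the numbers p'_e, one per edge
total = sum(weights)
cum, acc = [], 0.0
for w in weights:
    acc += w
    cum.append(acc)                        # right endpoints of the intervals

def sample_edge(rng):
    r = rng.random() * total               # uniform point in [0, total)
    return bisect.bisect_right(cum, r)     # O(log m) binary search

rng = random.Random(0)
counts = [0] * len(weights)
for _ in range(100_000):
    counts[sample_edge(rng)] += 1
# empirical frequencies approach p'_e / t = weights[k] / total
```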

Solving using Incremental Sparsifiers
The solver of Spielman and Teng [ST06] consists of two phases. The preconditioning phase builds a chain of progressively smaller graphs $C = \{A_1, B_1, A_2, \ldots, A_d\}$, starting with $A_1 = A$. The process for building $C$ alternates between calls to a sparsification routine UltraSparsify, which constructs $B_i$ from $A_i$, and a routine GreedyElimination (given below), which constructs $A_{i+1}$ from $B_i$. The preconditioning phase is independent of the $b$-side of the system $L_A x = b$.
GreedyElimination
Input: Graph $\hat{G}$
Output: Graph $\hat{G}$
1: repeat
2: greedily remove all degree-1 nodes from $\hat{G}$
3: if there is a degree-2 node $v$ with neighbors $u_1, u_2$ via edges of weights $w_1, w_2$ then
4: replace the path $(u_1, v, u_2)$ by an edge of weight $w' = w_1 w_2/(w_1 + w_2)$ in $\hat{G}$
5: end if
6: until there are no nodes of degree 1 or 2 in $\hat{G}$
7: return $\hat{G}$

The solve phase passes $C$, $b$ and a number of iterations $t$ (depending on a desired error $\epsilon$) to the recursive preconditioning algorithm R-P-Chebyshev, described in Section 9. The time complexity of the solve phase depends on $\epsilon$, but more crucially on the quality of $C$, which is a function of the sparsifier quality.

Definition 7.1 ($\kappa(n)$-good chain) Let $\kappa(n)$ be a monotonically non-decreasing function of $n$. Let $C = \{A = A_1, B_1, A_2, \ldots, A_d\}$ be a chain of graphs, and denote by $n_i$ and $m_i$ the numbers of nodes and edges of $A_i$ respectively. We say that $C$ is $\kappa(n)$-good for $A$ if, for all $i$: (i) $A_{i+1} = {\rm GreedyElimination}(B_i)$; (ii) $A_i \preceq B_i \preceq c_1 (m_i/m_{i+1})^2 A_i$ for an explicitly known constant $c_1$; (iii) the condition number of the pair $(A_i, B_i)$ is at most $\kappa(n_i)$.

Spielman and Teng analyzed a recursive preconditioned Chebyshev iteration and showed that a $\kappa(n)$-good chain for $A$ can be used to solve a system on $L_A$. This is captured by the following lemma, adapted from Theorem 5.5 in [ST06].

Lemma 7.2 Given a $\kappa(n)$-good chain $C$ for $A$ and a vector $b$, a vector $x$ satisfying $\|x - A^+ b\|_A < \epsilon \|A^+ b\|_A$ can be computed in time $\tilde{O}(m \sqrt{\kappa(n)} \log(1/\epsilon))$.
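The degree-2 step is the series rule for resistors: assuming the contracted edge is given weight $w' = w_1 w_2/(w_1 + w_2)$ (equivalently $1/w' = 1/w_1 + 1/w_2$), the effective resistance between the two remaining endpoints is preserved. A small check (our own code):

```python
import numpy as np

def laplacian(n, es):
    L = np.zeros((n, n))
    for i, j, w in es:
        L[i, i] += w; L[j, j] += w
        L[i, j] -= w; L[j, i] -= w
    return L

def eff_res(L, u, v):
    chi = np.zeros(len(L)); chi[u] = 1.0; chi[v] = -1.0
    return chi @ np.linalg.pinv(L) @ chi

# path 0 - 1 - 2 with weights w1, w2; eliminate the degree-2 node 1
w1, w2 = 2.0, 3.0
L3 = laplacian(3, [(0, 1, w1), (1, 2, w2)])
wprime = w1 * w2 / (w1 + w2)      # series rule: 1/w' = 1/w1 + 1/w2
L2 = laplacian(2, [(0, 1, wprime)])
assert abs(eff_res(L3, 0, 2) - eff_res(L2, 0, 1)) < 1e-9
```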
For our solver we follow the approach of Spielman and Teng. The main difference is that we replace their routine UltraSparsify with our routine IncrementalSparsify, which is not only faster but also constructs a better chain, which translates into a faster solve phase. We are now ready to state our algorithm, BuildChain, for building the chain; its pseudocode is given in the Appendix. In what follows we write $v := O(g(n_i))$ to mean '$v := f(n_i)$ for some explicitly known function $f(n_i) = O(g(n_i))$'.

Proof (of Lemma 7.3) Assume that $B_i$ has $n_i - 1 + m_i/k'$ edges. A key property of GreedyElimination is that if $G$ is a graph with $n - 1 + j$ edges, then GreedyElimination($G$) has at most $2j - 2$ vertices and $3j - 3$ edges [ST06]. Hence GreedyElimination($B_i$) has at most $3m_i/k'$ edges. It follows that $m_i/m_{i+1} \geq k'/3$. Then, in order to satisfy the second requirement, we must have $A_i \preceq B_i \preceq c'k'^2 A_i$, for some sufficiently small constant $c'$.
However, we also know that the call to IncrementalSparsify returns an incremental sparsifier $B_i$ that $3\kappa$-approximates $A_i$. So it is necessary that $c'k'^2 > 3\kappa$. Moreover, $B_i$ has $n_i - 1 + \tilde{O}(m_i \log^2 n/\kappa)$ edges, a number which we assumed is equal to $n_i - 1 + m_i/k'$. The value assigned to $\kappa$ by the algorithm is taken to be the minimum that satisfies these two conditions.
The probability that $B_i$ has the above properties is by construction at least $1 - p/(2\log n)$ if $n_i > \log n$, and $1 - p/(2\log\log n)$ otherwise. The probability that the requirements hold for all $i$ is then at least $1 - p$. Finally, note that each call to IncrementalSparsify takes $\tilde{O}((m_i \log^2 n) \log(1/p))$ time. Since $m_i$ decreases faster than geometrically with $i$, the claim about the running time follows.
Combining Lemmas 7.2 and 7.3 proves our main Theorem.
Theorem 7.4 On input an $n \times n$ symmetric diagonally dominant matrix $A$ with $m$ non-zero entries and a vector $b$, a vector $x$ satisfying $\|x - A^+ b\|_A < \epsilon \|A^+ b\|_A$ can be computed in expected time $\tilde{O}(m \log^2 n \log(1/\epsilon))$.

Comments / Extensions
Unraveling the analysis of our bound for the condition number of the incremental sparsifier, it can be seen that one $\log n$ factor is due to the number of samples required by the Rudelson and Vershynin theorem. The second $\log n$ factor is due to the average stretch of the low-stretch tree.
It is quite possible that the low-stretch construction, and perhaps the associated lower bound, can be bypassed, at least for some graphs, by a simpler approach similar to that of [KM07]. Consider for example the case of unweighted graphs. With a simple ball-growing procedure we can concede in our incremental sparsifier a $1/\log n$ fraction of the edges, while keeping the rest of the edges within clusters of diameter $O(\log^2 n)$. The design of low-stretch trees may be simplified within the small-diameter clusters. This diameter-restricted local sparsification is a natural idea to pursue, at least in an actual implementation of the algorithm.

Appendix: The Complete Solver
The purpose of this section is to provide a few more algebraic details about the chain of preconditioners and the recursive preconditioned Chebyshev method that constitutes the solve phase of the solver. The material is not new and we include it only for completeness. We focus on pseudocode. We refer the reader to [ST06] for a more detailed exposition along with proofs.
Direct methods: Cholesky factorization. If $A$ is a symmetric positive definite (SPD) matrix, it can be written in the form $A = LL^T$, a product known as the Cholesky factorization of $A$. This extends to Laplacians, with some care for the null space. The Cholesky factorization can be computed via a symmetric version of Gaussian elimination. Given the decomposition, solving the systems $Ly = b$ and $L^T x = y$ yields the solution to the system $Ax = b$; the key here is that solving with $L$ and $L^T$ can be done easily via forward and back substitution. A partial Cholesky factorization with respect to the first $k$ variables of $A$ puts it into the form $$A = L \begin{pmatrix} I_k & 0 \\ 0 & A_k \end{pmatrix} L^T, \eqno(9.2)$$ where $I_k$ denotes the $k \times k$ identity matrix, and $A_k$ is known as the Schur complement of $A$ with respect to the elimination of the $k$ first variables. The matrix $A_{k+1}$ is the Schur complement of $A_k$ with respect to the elimination of its first variable. Given a matrix $A$, the graph $G_A$ of $A$ is defined by identifying the vertices of $G_A$ with the rows and columns of $A$ and letting the edges of $G_A$ encode the non-zero structure of $A$ in the obvious way.
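The substitution step can be illustrated with a full Cholesky factorization (our own numpy sketch; the matrix is a random SPD example): factor once, then solve by one forward and one back substitution.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5.0 * np.eye(5)              # a symmetric positive definite matrix
Lc = np.linalg.cholesky(A)                 # A = Lc @ Lc.T
b = rng.standard_normal(5)

# forward substitution: solve Lc y = b
y = np.zeros(5)
for i in range(5):
    y[i] = (b[i] - Lc[i, :i] @ y[:i]) / Lc[i, i]
# back substitution: solve Lc^T x = y
x = np.zeros(5)
for i in range(4, -1, -1):
    x[i] = (y[i] - Lc[i+1:, i] @ x[i+1:]) / Lc[i, i]
assert np.allclose(A @ x, b)
```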
It is instructive to take a graph-theoretic look at the partial Cholesky factorization when $k = 1$. In this case, the graph $G_{A_1}$ contains a clique on the neighbors of the first node in $G_A$. In addition, the first column of $L$ is non-zero on the corresponding coordinates. This phenomenon is known as fill. It then becomes obvious that the complexity of computing the Cholesky factorization depends crucially on the ordering of $A$. Roughly speaking, a good ordering has the property that the degrees of the top nodes of $A, A_1, A_2, \ldots, A_k$ are as small as possible. The best known algorithm for positive definite systems of planar structure runs in time $O(n^{1.5})$ and is based on the computation of good orderings via nested dissection [Geo73, LRT79, AY].
There are two fairly simple but important facts concerning the partial Cholesky factorization of equality 9.2 [ST06]. First, if the top nodes of $A, A_1, \ldots, A_{k-1}$ have degrees 1 or 2, then back-substitution with $L$ requires only $O(n)$ time. Second, if $A$ is a Laplacian, then $A_k$ is a Laplacian. Such an ordering and the corresponding Laplacian $A_k$ can be found in linear time via GreedyElimination, described in Section 7. The corresponding factor $L$ can also be computed easily.
Iterative methods. Unless the system matrix is very special, direct methods do not yield nearly-linear time algorithms. For example, the nested dissection algorithm is known to be asymptotically optimal for the class of planar SPD systems, within the envelope of direct methods. Iterative methods work around the fill problem by producing a sequence of approximate solutions using only matrix-vector multiplications and simple vector-vector operations. For example, Richardson's iteration generates an approximate solution $x_{i+1}$ from $x_i$ by letting $x_{i+1} = (I - A)x_i + b$.
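Richardson's iteration is easy to demonstrate when the spectrum of $A$ lies in $(0, 2)$, so that $I - A$ is a contraction (our own sketch on a $2 \times 2$ SPD example).

```python
import numpy as np

A = np.array([[1.0, 0.3],
              [0.3, 1.0]])                 # eigenvalues 0.7 and 1.3, inside (0, 2)
b = np.array([1.0, 2.0])
x = np.zeros(2)
for _ in range(100):
    x = (np.eye(2) - A) @ x + b            # x_{i+1} = (I - A) x_i + b
assert np.allclose(A @ x, b)               # converges since ||I - A||_2 = 0.3 < 1
```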
The solver in this paper, as well as the Spielman and Teng solver [ST06], is based on the very well studied Chebyshev iteration [Axe94]. The preconditioned Chebyshev iteration (P-Chebyshev) is the Chebyshev iteration applied to the system $B^+Ax = B^+b$, where $A, B$ are SPD matrices and $B$ is known as the preconditioner. The preconditioner $B$ need not be explicitly known. The iteration requires matrix-vector products with $A$ and $B^+$. A product of the form $B^+z$ is equivalent to solving the system $By = z$. Therefore P-Chebyshev requires access only to a function $f_B(z)$ returning $B^+z$. In addition, it requires a lower bound $\lambda_{min}$ on the minimum eigenvalue of $(A, B)$ and an upper bound $\lambda_{max}$ on the maximum eigenvalue of $(A, B)$. In our setting, the preconditioner $B$ comes with a partial Cholesky factorization of the form of equality 9.2, so solving with $B$ reduces to solving with the Schur complement $A_2$, plus forward and back substitution. Therefore, we can solve a system in $B$ by solving a linear system in $A_2$ and performing $O(n)$ additional work. Naturally, in order to solve systems in $A_2$ we can recursively apply preconditioned Chebyshev iterations on it, with a new preconditioner $B_2$. This defines a preconditioning chain $C$ that consists of progressively smaller graphs $A = A_1, B_1, A_2, B_2, \ldots, A_d$, along with the corresponding matrices $L_i, \Pi_i, Q_i$ for $1 \leq i \leq d - 1$. So, to be more precise than in Section 7, routine BuildChain has the following specifications.

R-P-Chebyshev
13: $x := \text{P-Chebyshev}(A_i, b, t, f_i(z), l, u)$
14: return $x$
15: end if

The complete solver. Finally, the pseudocode for the complete solver is as follows.

15: $C := C \cup \{A_i, B_i\}$
16: $i := i + 1$
17: end while
18: return $C$

Lemma 7.3 Given a graph $A$, BuildChain($A$, $p$) produces an $\tilde{O}(\log^4 n)$-good chain for $A$, with probability at least $1 - p$. The algorithm runs in time $\tilde{O}((m \log n + n \log^2 n) \log(1/p))$.