Expanders via Random Spanning Trees

Motivated by the problem of routing reliably and scalably in a graph, we introduce the notion of a splicer, the union of spanning trees of a graph. We prove that for any bounded-degree n-vertex graph, the union of two random spanning trees approximates the expansion of every cut of the graph to within a factor of O(log n). For the random graph G_{n,p}, for p > c log n/n, two spanning trees give an expander. This is suggested by the case of the complete graph, where we prove that two random spanning trees give an expander. The construction of the splicer is elementary: each spanning tree can be produced independently using an algorithm by Aldous and Broder, namely a random walk in the graph with edges leading to previously unvisited vertices included in the tree. A second important application of splicers is to graph sparsification, where the goal is to approximate every cut (and more generally the quadratic form of the Laplacian) using only a small subgraph of the original graph. Benczur-Karger as well as Spielman-Srivastava have shown sparsifiers with O(n log n/ε²) edges that achieve approximation within factors 1 + ε and 1 − ε. Their methods, based on independent sampling of edges, need Ω(n log n) edges to get any approximation (else the subgraph could be disconnected) and leave open the question of linear-size sparsifiers. Splicers address this question for random graphs by providing sparsifiers of size O(n) that approximate every cut to within a factor of O(log n).


Introduction
In this paper, we present a new method for obtaining sparse expanders from spanning trees. This appears to have some interesting consequences. We begin with some motivation.
Recovery from failures is considered one of the most important problems with the internet today and is at or near the top of wish-lists for a future internet. In his 2007 FCRC plenary lecture, Shenker called for a network where "even right after failure, routing finds path to destination" [21]. How should routing proceed in the presence of link or node failures?
At a high level, to recover from failures, the network should have many alternative paths, a property sometimes called path diversity, which is measured by several parameters, including robustness in the presence of failures and congestion. It is well known that expander graphs have low congestion and remain connected even after many (random) failures. Indeed, there is a large literature on routing to minimize congestion and on finding disjoint paths that is closely related to expansion (or, more generally, conductance); see, e.g., [20, 11, 3].
However, in practice, efficient routing also needs to be compact and scalable; in particular, the memory overhead as the network grows should be linear or sublinear in the number of vertices. This requirement is satisfied by routing on trees, one tree per destination. In fact, the most commonly used method in practice is shortest-path routing, which is effectively one tree per destination. Since the final destination determines the next edge to be used, this gives an O(n) bound on the size of the routing table that needs to be stored at each vertex. If a constant-factor stretch is allowed, this can be reduced; for example, with stretch 3, tables of size O(√n) suffice, as shown by Abraham et al. [1].
The main problem with shortest-path routing, or any tree-based scheme, is the lack of path diversity: failing any edge disconnects some pairs of vertices. Recovery is usually achieved by recomputing shortest-path trees in the remaining network, an expensive procedure. Further, congestion can in principle be high. This is despite the fact that the underlying graph might have high expansion, implying that low congestion and high fault tolerance are possible. There is some evidence that AS-level internet topologies are expanders, and some stochastic models for networks lead to expanders [14]. However, known algorithms that achieve near-optimal congestion use arbitrary paths in the network and therefore violate the scalability requirement. This raises the following question: is it possible to have a routing scheme that is both scalable and achieves congestion and fault tolerance approaching that of the underlying graph?
Our work is inspired by Motiwala et al. [16, 15], who consider a conceptually simple extension of tree-based routing using multiple trees. With one tree there is a unique path between any two points. With two trees, by allowing a path to switch between the trees multiple times, there can be a large number of available paths. Motiwala et al. showed experimentally that a small number of randomly perturbed shortest-path trees for each destination leads to a highly reliable routing method: the union of these trees has reliability approaching that of the underlying graph.
This raises the question of whether the results of this experiment hold in general: for a given graph, does there exist a small collection of spanning trees such that the reliability of their union approaches that of the base graph? As a preliminary step, we study whether, for a given graph, the union of a few spanning trees captures the expansion of the original graph. Here we propose a construction that uses only a small number of trees in total (as opposed to one tree per destination) and works for graphs with bounded degree and for random graphs. The trees are chosen independently from the uniform distribution over all spanning trees, a distribution that can be sampled efficiently with simple algorithms. The simplest of these, due to Aldous [2] and Broder [6], is to take a random walk in the graph and include in the tree every edge that goes to a previously unvisited vertex. Roughly speaking, our main result is that for bounded-degree graphs and for random graphs, a small number of such trees gives a subgraph whose expansion is comparable, cut by cut, to that of the original graph.
A second important application of splicers is to graph sparsification, where the goal is to approximate every cut (and more generally the Laplacian quadratic form) using only a small subgraph of the original graph. Benczur-Karger [5] as well as Spielman-Srivastava [22] have shown sparsifiers with O(n log n/ε²) edges that achieve a 1 ± ε approximation. Their methods are based on independent sampling of edges with carefully chosen edge probabilities and require Ω(n log n) edges to get any approximation; with fewer edges, the subgraph obtained could be disconnected. They leave open the question of the existence of linear-size sparsifiers. Splicers, constructed using random spanning trees, provide sparsifiers of size O(n) for random graphs: when the base graph is random, with high probability the union of two spanning trees approximates all cuts to within a factor of O(log n). We state this precisely in the next section.

Our results
A k-splicer is the union of k spanning trees of a graph. By a random k-splicer we mean the union of k uniformly randomly chosen spanning trees. We show that for any bounded-degree graph, the union of two random spanning trees of the graph approximates the expansion of every cut of the graph. Using more trees gives a better approximation. In the following, δ_G(A) stands for the set of edges of the graph G that have exactly one endpoint in A, a subset of the vertices of G.
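As a concrete illustration of this notation (a hypothetical helper of ours, not from the paper), δ_G(A) can be computed directly from an edge list:

```python
def edge_boundary(edges, A):
    """delta_G(A): edges with exactly one endpoint in the vertex set A."""
    A = set(A)
    return [e for e in edges if (e[0] in A) != (e[1] in A)]

# Example: the 4-cycle 0-1-2-3-0; the cut {0, 1} is crossed by (1,2) and (3,0).
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
cut = edge_boundary(cycle, {0, 1})
```

A k-splicer's boundary is computed the same way, with `edges` taken to be the union of the k trees' edge sets.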
Theorem 1.1. For a d-regular graph G = (V, E), let U^k_G be a random k-splicer, obtained as the union of k uniformly random spanning trees. Also let α > 0 be a constant with α(k − 1) ≥ 9d². Then with probability 1 − o(1), for every A ⊂ V we have

|δ_{U^k_G}(A)| ≥ (1/(α log n)) |δ_G(A)|.

Our proof of this makes novel use of a known property of random spanning trees of a graph, namely that the events of the individual edges of the graph being included in the tree are negatively correlated.
Next we give a lower bound, showing that the factor 1/log n is the best possible for k-splicers constructed from random spanning trees, for any constant k.
Theorem 1.2. For every n, there is a bounded-degree edge expander G on n vertices such that, with probability 1 − o(1), the edge expansion of a random k-splicer U^k_G is at most k²/(C log n) for any k ≥ 1, where C > 0 is an absolute constant.
For the complete graph, one can do better, requiring only two trees to get a constant-factor approximation.

Theorem 1.3. The union of two uniformly random spanning trees of the complete graph on n vertices has constant vertex expansion with probability 1 − o(1).
Since constant vertex expansion implies constant edge expansion, we get that the union of two uniformly random spanning trees has constant edge expansion with high probability.
Next we turn to the random graph G_{n,p}. Our main result here is that w.h.p. G_{n,p} has two spanning trees whose union has constant vertex expansion. We give a simple random process (called Process B_p henceforth) to find these trees.
Theorem 1.4. There exists an absolute constant C such that for p ≥ C log n/n, with probability 1 − o(1), the union of two random spanning trees from Process B_p applied to a random graph H drawn from G_{n,p} has constant vertex expansion.
The proof of this theorem is via a coupling lemma (Lemma 7.2) showing that a tree generated by Process B p applied to a random graph H is nearly uniform among spanning trees of the complete graph.
Theorem 1.4 relates to the work of [5, 22] and leads to the first linear-size sparsifier with nontrivial approximation guarantees for random graphs:

Theorem 1.5. Let p ≥ C log n/n for a sufficiently large constant C. Let H be a G_{n,p} random graph, and let H′ be the 2-splicer obtained from it via Process B_p, with a weight of pn on every edge. Then with probability 1 − o(1), for every A ⊂ V we have

c₁ |δ_H(A)| ≤ w(δ_{H′}(A)) ≤ c₂ log n · |δ_H(A)|,

where c₁, c₂ > 0 are constants.
Here w(•) denotes the sum of the weights.

Related work
The idea of using multiple routing trees and switching between them is inspired by the work of [16], who proposed a multi-path extension to standard tree-based routing. The method, called Path Splicing, computes multiple trees to each destination vertex, using simple methods to generate the trees; in one variant, each tree is a shortest-path tree computed on a randomly perturbed set of edge weights. Path splicing appears to do extremely well in simulations, approaching the reliability of the underlying graph using only a small number of trees.
Sampling for approximating graph cuts was introduced by Karger, first for global min-cuts and then extended to min s-t cuts and flows. The most recent version, due to Benczur and Karger [5], approximates the weight of every cut of the graph within factors of 1 + ε and 1 − ε using O(n log n/ε²) samples; edges are sampled independently with probability inversely proportional to a connectivity parameter, and each chosen edge is weighted with the reciprocal of its probability. Recently, Spielman and Srivastava [22] gave a similar method in which edges are sampled independently with probability proportional to their effective resistance and weighted in a similar way, by the reciprocal of the probability with which they are chosen. They show that every quadratic form of the Laplacian of the original graph is approximated within factors 1 − ε and 1 + ε. The similarity between the two methods extends to their analyses as well: the two parameters, edge strength and effective resistance, share a number of useful properties.
It has long been known that the union of three random perfect matchings in a complete graph with an even number of vertices is an expander with high probability (see, e.g., [9]). Our result on the union of random spanning trees of the complete graph can be considered a result in a similar vein, and our proof has a similar high-level outline. Still, the spanning tree case seems to be different and requires some new ideas.
On the other hand, our result for the union of spanning trees of bounded-degree graphs does not seem to have an analog for unions of matchings. Indeed, generating random perfect matchings of graphs is a highly nontrivial problem; for bipartite graphs it amounts to computing the permanent of 0-1 matrices [10].

Preliminaries
We say that a family of graphs is an edge (resp., vertex) expander (family) if the edge (resp., vertex) expansion of the family is bounded below by a positive constant.
Let K n denote the complete graph on n vertices.
For a ∈ ℝ, let [a] = {i ∈ ℕ : 1 ≤ i ≤ a}. On several occasions we will use the inequality (n choose k) ≤ (en/k)^k.
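A standard inequality of this kind is (n choose k) ≤ (en/k)^k; a quick exhaustive check (ours, not part of the paper) for small n:

```python
from math import comb, e

# Verify binom(n, k) <= (e*n/k)^k for all 1 <= k <= n and small n.
ok = all(comb(n, k) <= (e * n / k) ** k
         for n in range(1, 60) for k in range(1, n + 1))
```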

Uniform random spanning trees
Uniformly random spanning trees of graphs are fairly well-studied objects; see, e.g., [13]. In this section we describe properties of random spanning trees that will be useful for us. There are several algorithms known for generating a uniformly random spanning tree of a graph, e.g., [2, 6, 19, 13].
The algorithm due to Aldous and Broder is very simple and will be useful in our analysis: Start a uniform random walk at some arbitrary vertex of the graph, and when the walk visits a vertex for the first time, include the edge used to reach that vertex in the tree.When all the vertices have been visited we have a spanning tree which is uniformly random regardless of the initial vertex.
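As an illustration, here is a minimal sketch of the walk just described (function and variable names are ours, not from the paper):

```python
import random

def aldous_broder(adj, start=0):
    """Uniform random spanning tree: walk until all vertices are seen,
    keeping the edge used on each first visit to a new vertex."""
    visited = {start}
    tree = set()
    v = start
    while len(visited) < len(adj):
        u = random.choice(adj[v])       # uniform step of the walk
        if u not in visited:            # first visit to u:
            visited.add(u)
            tree.add(frozenset((v, u))) # keep the entering edge
        v = u
    return tree

# Example: a random spanning tree of the complete graph K_6.
K6 = {i: [j for j in range(6) if j != i] for i in range(6)}
tree = aldous_broder(K6)
```

A spanning tree of an n-vertex graph always has n − 1 edges; a random 2-splicer is simply `aldous_broder(K6) | aldous_broder(K6)`.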
A well-known fact about uniform random spanning trees (see, e.g., [12]) is that the probability that an edge e is included in a uniform random spanning tree equals the effective resistance of e: give each edge unit resistance; the effective resistance of e is then the potential difference across the endpoints of e when a unit current is applied to them. This fact connects our work with [22], who sample edges of a graph according to their effective resistances to construct a sparsifier.
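For the complete graph this fact is easy to check exhaustively on a small instance: with unit resistances, the effective resistance of any edge of K_n is 2/n, so each edge should appear in a uniform spanning tree with probability 2/n. A brute-force sketch of ours for n = 4:

```python
from itertools import combinations

def is_spanning_tree(n, edges):
    """n-1 edges that never close a cycle form a spanning tree (union-find)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False   # edge closes a cycle
        parent[ru] = rv
    return True

n = 4
all_edges = list(combinations(range(n), 2))
trees = [t for t in combinations(all_edges, n - 1) if is_spanning_tree(n, t)]
# Cayley's formula: K_4 has 4^2 = 16 spanning trees.
prob = sum((0, 1) in t for t in trees) / len(trees)
```

Here `prob` comes out to 2/n = 0.5, matching the effective resistance of an edge of K_4.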
For a connected base graph G = (V, E), the random variable T_G denotes a uniformly random spanning tree of G, and U^k_G denotes the union of k such trees chosen independently. For an edge e ∈ E, abusing notation a little, we will refer to the events e ∈ E(T_G) and e ∈ E(U^k_G) as e ∈ T_G and e ∈ U^k_G.
Negative correlation of edges. The events of various edges belonging to the random spanning tree are negatively correlated: for any subset of edges e₁, …, e_k ∈ E we have

P[e₁ ∈ T_G, …, e_k ∈ T_G] ≤ ∏_{i=1}^{k} P[e_i ∈ T_G].    (1)

A similar property holds for the complementary events:

P[e₁ ∉ T_G, …, e_k ∉ T_G] ≤ ∏_{i=1}^{k} P[e_i ∉ T_G].    (2)

These are easy corollaries of [13, Theorem 4.5], which in turn is based on the work of Feder and Mihail [8].
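Inequality (1) can be verified exhaustively on a small example (our own illustration, not part of the paper): in K_4, the adjacent edges (0,1) and (0,2) appear together in only 3 of the 16 spanning trees, strictly below the product (1/2)(1/2) of their marginal probabilities:

```python
from itertools import combinations

def spanning_trees(n):
    """All spanning trees of K_n by brute force (n-1 acyclic edges)."""
    def is_tree(edges):
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                x = parent[x]
            return x
        for u, v in edges:
            ru, rv = find(u), find(v)
            if ru == rv:
                return False
            parent[ru] = rv
        return True
    all_edges = list(combinations(range(n), 2))
    return [t for t in combinations(all_edges, n - 1) if is_tree(t)]

trees = spanning_trees(4)
total = len(trees)
p_e = sum((0, 1) in t for t in trees) / total
p_f = sum((0, 2) in t for t in trees) / total
p_both = sum((0, 1) in t and (0, 2) in t for t in trees) / total
```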
Negatively correlated random variables and tail bounds. For e ∈ E, define the indicator random variable X_e to be 1 if e ∈ T, and 0 otherwise. Then we can rewrite (1) as follows.
For any subset of edges e₁, …, e_k ∈ E we have

E[X_{e₁} ⋯ X_{e_k}] ≤ ∏_{i=1}^{k} E[X_{e_i}].    (3)

Random variables {X_e} satisfying (3) are said to be negatively correlated. Several closely related notions exist; see Dubhashi and Ranjan [7], and Pemantle [18]. [7] gave a property of negative correlation that will be useful for us: it essentially says that Chernoff's bound on the tail probability of a sum of independent random variables applies unaltered to negatively correlated random variables. More precisely, we will use the following version of Chernoff's bound.

Theorem 3.1. Let {X_i}_{i=1}^n be a family of 0-1 negatively correlated random variables such that {1 − X_i}_{i=1}^n are also negatively correlated. Let p_i be the probability that X_i = 1 and let p = (1/n) ∑_{i=1}^n p_i. Then for any λ > 0,

P[ ∑_{i=1}^n X_i < pn − λ ] ≤ e^{−λ²/(2pn)}.

Proof. The proof splits into two steps. In the first step we prove that for arbitrary λ we have

E[exp(λ ∑_{i=1}^n X_i)] ≤ ∏_{i=1}^n E[exp(λ X_i)].    (4)

The second step is a standard Chernoff bound argument, as in the proof of Theorem A.1.13 in [4].
Since the first step is not well known and is not hard, we provide a proof here, essentially following Dubhashi and Ranjan [7].
The case λ = 0 is trivially true. We now prove (4) for λ > 0. Since the X_i take 0-1 values, for any integers a₁, …, a_n > 0 we have

X₁^{a₁} ⋯ X_n^{a_n} = X₁ ⋯ X_n.

Now, writing exp(λ ∑_i X_i) using the Taylor series for e^x and expanding each summand, we get a sum over various monomials in the X_i. For each monomial we have, by the identity above and the definition of negative correlation, that

E[X_{i₁}^{a₁} ⋯ X_{i_j}^{a_j}] ≤ ∏_{l=1}^{j} E[X_{i_l}^{a_l}].

Taking expectations term by term gives (4) for λ > 0. For λ < 0, a similar argument using 1 − X_i in the role of X_i gives (4).
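Inequality (4) can likewise be checked exactly on a small case (our illustration): take the X_e to be the indicators of the four edges of K_4 crossing the cut {0,1} versus {2,3}, with λ = 1:

```python
from itertools import combinations
from math import exp

def spanning_trees(n):
    """All spanning trees of K_n by brute force (n-1 acyclic edges)."""
    def is_tree(edges):
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                x = parent[x]
            return x
        for u, v in edges:
            ru, rv = find(u), find(v)
            if ru == rv:
                return False
            parent[ru] = rv
        return True
    all_edges = list(combinations(range(n), 2))
    return [t for t in combinations(all_edges, n - 1) if is_tree(t)]

trees = spanning_trees(4)
cut = [(0, 2), (0, 3), (1, 2), (1, 3)]   # edges crossing {0,1} vs {2,3}
lam = 1.0
# Left side of (4): E[exp(lam * sum X_e)] over a uniform spanning tree.
lhs = sum(exp(lam * sum(e in t for e in cut)) for t in trees) / len(trees)
# Right side: product of E[exp(lam * X_e)]; each edge is in T w.p. 1/2.
rhs = 1.0
for e in cut:
    p = sum(e in t for t in trees) / len(trees)
    rhs *= (1 - p) + p * exp(lam)
```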

Expansion when base graph is a complete graph
Our proof here has the same high-level outline as the proof that the union of three random perfect matchings in a complete graph with an even number of vertices is a vertex expander (see, e.g., [9]): one shows that for any given vertex set A of size ≤ n/2, the probability of the event that |Γ′(A)| is small in the union of the matchings is very small. A union bound argument then shows that the probability that there exists any set A with |Γ′(A)| small is also small. However, new ideas are needed because spanning trees are generated by the random walk process, which appears to be more complex to analyze than random matchings in complete graphs.

Proof (of Theorem 1.3).
For a random spanning tree T in K_n and a given A ⊆ V with |A| = a, we will give an upper bound on the probability that |Γ′_T(A)| ≤ ca, for a given expansion constant c (recall that Γ′_T(A) denotes the set of vertices in V \ A that are neighbors in the graph T of vertices in A). To this end, we will fix a set A′ ⊆ V \ A of size ⌊ca⌋, bound the probability that Γ′_T(A) ⊆ A′, and, to conclude, use a union bound over all possible choices of A and A′. Without loss of generality the vertices are labeled V = {1, …, n}, A = [a] = {1, …, a} and A′ = {a + 1, …, a + ⌊ca⌋}. More precisely, the union bound is the following: the probability that there exists a set A ⊆ V such that |A| ≤ n/2 and |Γ′(A)| ≤ ca in the union of t independent random spanning trees is at most

∑_{a=1}^{n/2} (n choose a) (n − a choose ⌊ca⌋) P[Γ′_T(A) ⊆ A′]^t.    (5)

We will bound different parts of this sum in two ways. First, for a ≤ n/12, we use the random walk construction of a random spanning tree, which, as we will see, can be interpreted as every vertex in A picking a random neighbor (but not in a completely independent way). Second, for a ∈ (n/12, n/2], we look at all the edges of the cut as if they were independent, by means of negative correlation.

So, for the first part of the sum in (5), a ≤ n/12, consider a random walk on V, whose states are denoted (X₁, X₂, …), starting outside of A, that defines a random spanning tree (as in the random walk algorithm). Let τ_i be the first time that the walk has visited i different vertices of A. For i = 1, …, a − 1, let Y_i = τ_{i+1} − τ_i (the gap between first visits i and i + 1). The random variables Y_i are independent. Let Z_i be the indicator of "Y_i = 1", and let Z = ∑_{i=1}^{a−1} Z_i be the number of adjacent first visits. We have E[Z] ≤ a²/(n − 1), since after each first visit the walk moves to a uniformly random other vertex, at most a of which are unvisited vertices of A. We now give an upper bound on the probability that the predecessor of the first visit to a vertex i is in [a + ⌊ca⌋], given that this predecessor is not a first visit itself (in this case, the edge coming into i is within [a + ⌊ca⌋]). That is, for 2 ≤ i ≤ a, this conditional probability is at most 2(1 + c)a/n. Thus, using the edges added when the walk goes from V \ A to A and ignoring edges in the other direction,

P[Γ′_T(A) ⊆ A′] ≤ E[(2(1 + c)a/n)^{a − 1 − Z}].

We now use this in (5), for a ≤ n/12. Let K = 2(1 + c).
This goes to 0 as n → ∞ when αK^t/12^{t−1−c} < 1, and this happens for t = 2 and a sufficiently small constant c.
For the rest of the sum in (5), a ∈ (n/12, n/2], we use negative correlation of the edges of a random spanning tree T (Section 3) to estimate the probability that Γ′_T(A) ⊆ [a + ⌊ca⌋]. Any fixed edge of K_n appears in T with probability 2/n. We have Γ′_T(A) ⊆ [a + ⌊ca⌋] iff no edge between A and V \ [a + ⌊ca⌋] is present in T, and negative correlation (Equation (2)) implies that this happens with probability at most (1 − 2/n)^{a(n−(a+ca))}. Thus, writing γ = a/n, the corresponding summand in (5) is at most

[(e/γ)^{1+c} c^{−c} e^{−2t(1−(1+c)γ)}]^{γn}.

For any fixed c > 0, the expression in brackets is convex as a function of γ > 0, and hence its supremum over γ ∈ [1/12, 1/2] is attained at one of the boundary points 1/12 and 1/2; the expression is strictly less than 1 at both boundary points for t = 2 and a sufficiently small constant c. This implies that this part of the sum goes to 0 as n → ∞.
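Theorem 1.3 can also be observed empirically (a sketch of ours; checking every cut is infeasible, so we sample random vertex sets and measure |Γ′(A)|/|A| in the union of two trees):

```python
import random

def aldous_broder(n, rng):
    """Uniform spanning tree of K_n as an adjacency list, via the random walk."""
    adj = {v: set() for v in range(n)}
    v, visited = 0, {0}
    while len(visited) < n:
        u = rng.randrange(n - 1)
        u = u if u < v else u + 1        # uniform neighbor != v in K_n
        if u not in visited:
            visited.add(u)
            adj[u].add(v)
            adj[v].add(u)
        v = u
    return adj

def vertex_expansion(adj, A):
    outside = set().union(*(adj[v] for v in A)) - set(A)
    return len(outside) / len(A)

rng = random.Random(0)
n = 60
t1, t2 = aldous_broder(n, rng), aldous_broder(n, rng)
union = {v: t1[v] | t2[v] for v in range(n)}   # the 2-splicer
ratios = [vertex_expansion(union, rng.sample(range(n), rng.randint(1, n // 2)))
          for _ in range(50)]
```

For random sets the observed expansion stays bounded well away from zero, consistent with the theorem.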

Expansion when base graph is a bounded-degree graph: positive result
In this section we consider graphs with bounded degrees. To simplify the presentation we restrict ourselves to regular graphs; it is easy to drop this restriction at the cost of extra notation. We show that for constant-degree graphs the edge expansion is captured fairly well by the union of a small number of random spanning trees.
Proof (of Theorem 1.1). It follows from the random walk construction of random spanning trees that for any edge (u, v) ∈ E we have P[(u, v) ∈ T] ≥ 1/d(u). To see this, note that if we start the random walk at vertex u, then with probability 1/d(u) the first traversed edge is (u, v), which then gets included in T. Thus for A ⊂ V we have

E[|δ_T(A)|] ≥ |δ_G(A)|/d.

We would now like to use this expectation bound to prove our theorem. Recall the definition of the random variables X_e from Section 3: for an edge e ∈ E, X_e is the indicator random variable taking value 1 if e ∈ T, and value 0 otherwise. Thus we have |δ_T(A)| = ∑_{e ∈ δ_G(A)} X_e. We want to show that ∑_{e ∈ δ_G(A)} X_e is not much smaller than its expectation with high probability. The random variables X_e are not independent. Fortunately, they are negatively correlated, as we saw in Section 3, which allows us to use Theorem 3.1:

P[ ∑_{e ∈ δ_G(A)} X_e < p|δ_G(A)| − λ ] ≤ e^{−λ²/(2p|δ_G(A)|)},

where p is the average of P[X_e = 1] over e ∈ δ_G(A). Since P[X_e = 1] ≥ 1/d for all edges e, we have p ≥ 1/d, and we set λ = (p − 1/(2d))|δ_G(A)|.
This gives

P[ |δ_T(A)| < |δ_G(A)|/(2d) ] ≤ e^{−|δ_G(A)|/(8d)}.    (7)

Now we estimate the probability that there is a bad cut, namely a cut A whose size in the splicer falls below |δ_G(A)|/(α log n). To do this, we first look at cuts of size a in the first random tree whose size in G is at least αa ln n. (This step is necessary: the modified Chernoff bound that we use is only as strong as the independent case, and when edges are chosen independently one is likely to get isolated vertices; looking at the first tree ensures that this does not happen.) In order to be bad, these cuts have to have small size in all the remaining trees; the probability of that happening is given by (7). The number of cuts of size a in the first tree is clearly no more than (n − 1 choose a) < n^a, as there are (n − 1 choose a) ways of picking a edges out of the n − 1 tree edges, although not all of these may correspond to valid cuts. Then the probability that a bad cut exists is at most

∑_{a ≥ 1} n^a · e^{−(k−1)αa ln n/(8d)} = ∑_{a ≥ 1} n^{a(1 − (k−1)α/(8d))} = o(1),

since α(k − 1) ≥ 9d² implies (k − 1)α/(8d) ≥ 9d/8 > 1.

Expansion when base graph is a bounded-degree graph: negative result

Here we show that Theorem 1.1 is best possible up to a constant factor for expansion:

Proof (of Theorem 1.2). We begin with a d-regular edge expander G′ on n vertices that has a Hamiltonian cycle (such graphs are known to exist), where d > 2 is a fixed integer. Let 0 < ℓ < log n be an integer to be chosen later, and let H be a Hamiltonian path in G′. Subdivide H into subpaths P₁, …, P_{n/ℓ}, each of length ℓ (to keep the formulas simple we suppress the integrality issues here, which are easily taken care of).
For two subpaths P_i and P_j, we say that they interact if (P_i ∪ Γ′(P_i)) ∩ (P_j ∪ Γ′(P_j)) ≠ ∅. Since G′ is d-regular, |Γ′(P_i)| ≤ dℓ. So any subpath can interact with at most d²ℓ other subpaths (this bound is slightly loose). Thus we can find a set I of at least (1/(d²ℓ)) · (n/ℓ) paths among P₁, …, P_{n/ℓ} such that no two paths in I interact.
We now describe the construction of G, which will be obtained by adding edges to G′. For each path P ∈ I, we do the following. Add an edge between the two endpoints of P, if such an edge did not already exist in G′. If the subgraph G[Γ′(P)] induced by the neighborhood of the path P does not have a Hamiltonian cycle, then we add edges to it so that it becomes Hamiltonian; clearly, in doing so we only need to increase the degree of each vertex by at most 2. The final graph that we are left with is our G. For each path P ∈ I we fix a Hamiltonian cycle in G[Γ′(P)], and we also have the cycle of which P is a part; we denote these two cycles by C₁(P) and C₂(P).
We will generate a random spanning tree T of G by the random walk algorithm, starting the random walk at some vertex outside of all the paths in I. For P ∈ I, we say that the event E_P (over the choice of a random spanning tree T of G) occurs if the random walk, on its first visit to C₁(P) ∪ C₂(P), first goes around C₁(P) without going out or visiting any vertex twice, and then goes on to traverse C₂(P), again without going out or visiting any vertex twice, until it has visited all vertices of C₂(P). For every P ∈ I, the event E_P forces the walk to follow a prescribed sequence of O(ℓ) steps, so its probability is at least exponentially small in ℓ. If the event E_P happens, then in the resulting tree T we have |δ_T(V(P))| = 1. Thus our goal will be to show that with substantial probability there is a P ∈ I such that E_P happens. Since no two paths in I interact with each other, the events E_P are mutually independent; choosing ℓ of order log n appropriately (and treating k random trees analogously) yields the theorem.

Expansion when base graph is a random graph

As in the random walk algorithm, the spanning tree given by Process B_p (if it succeeds in visiting all the vertices) is the set of edges that are used on first visits to each vertex, but the random sequence of edges is generated differently here.
A covering path of a graph is a path passing through all its vertices. Let D be the distribution on covering paths of the (undirected) complete graph starting at a vertex v₀, where a random path is generated by a random walk that starts at v₀ and walks until it has visited all the vertices. Let D_p be the distribution on covering paths of the complete graph given by first choosing H according to G_{n,p} and running Process B_p starting from v₀.

Lemma 7.1. There exists an absolute constant c such that for p > c log n/n, the total variation distance between the distributions D and D_p is o(1).
Proof. We will couple D and D_p so that the walk in D picks the same edges as the walk in D_p, but if D_p fails, then D continues its random walk. The covering walks then coincide whenever D_p succeeds, and thus the probability of failure is an upper bound on the total variation distance between D and D_p. Now, D_p does not fail if every vertex of H has degree at least c₁ log n and Process B_p does not visit any vertex more than c₂ log n times, with c₁ > c₂. A Chernoff bound gives constants c and c₁ such that the first event happens with probability 1 − o(1). For the second, we observe that if there is no failure then Process B_p behaves exactly like a random walk on the complete graph, and therefore it visits all vertices within c₃ n log n steps with probability 1 − o(1) for some constant c₃ (this is essentially the coupon collector's problem with n − 1 coupons; see [17, Section 3.6 and Chapter 6]), and a walk of that length does not visit any vertex more than c₂ log n times with probability 1 − o(1) for some constant c₂ (by a straightforward variation of the occupancy problem in [17, Section 3.1]).
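The coupon-collector step is easy to illustrate (our sketch): a random walk on K_n moves to a uniformly random other vertex at each step, so covering all vertices behaves like collecting n − 1 coupons, about n ln n steps in expectation:

```python
import random
from math import log

rng = random.Random(1)
n = 200
v, visited, steps = 0, {0}, 0
while len(visited) < n:         # walk on K_n until every vertex is seen
    u = rng.randrange(n - 1)
    v = u if u < v else u + 1   # uniform step avoiding self-loops
    visited.add(v)
    steps += 1
# Expected cover time is about n * ln n (~1060 for n = 200).
```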
Let T_p be the distribution on trees obtained by first choosing H from G_{n,p} and then generating a random spanning tree according to Process B_p, and let T be the distribution of a uniformly random spanning tree of the complete graph.

Lemma 7.2. There exists an absolute constant c such that for p > c log n/n, the total variation distance between the distributions T and T_p is o(1).
Proof. This is immediate from Lemma 7.1, as random trees from T or T_p are just functions of walks from D or D_p, respectively.
Proof (of Theorem 1.4). In the random graph H, we generate two random trees by using one long sequence of edges, with a breakpoint whenever we complete the generation of a spanning tree. In the complete graph we likewise generate two trees from such a sequence, obtained from the uniform random walk. Using the same coupling as in Lemma 7.2, we see that the distributions of these sequences have total variation distance o(1). Therefore the two spanning trees of H obtained by the first process have total variation distance o(1) from two random spanning trees of the complete graph. By Theorem 1.3, the union of these trees has constant vertex expansion with probability 1 − o(1) overall.
With these results we are ready to prove our theorem about sparsifiers of random graphs.

Proof (of Theorem 1.5). We need the fact that for a sufficiently large constant C, with probability 1 − o(1), all cuts δ_H(A) in the random graph H satisfy

c₄ |δ_H(A)| ≤ p|A|(n − |A|) ≤ c₂ |δ_H(A)|

for constants c₂, c₄ > 0. This is well known and follows immediately from appropriate Chernoff-type bounds.
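The concentration of cut sizes in G_{n,p} can be observed numerically (a sketch of ours): the expected cut size is E|δ_H(A)| = p|A|(n − |A|), and cuts with many edges concentrate sharply around it:

```python
import random

rng = random.Random(2)
n, p = 300, 0.2
# Sample H from G_{n,p} as an adjacency matrix.
adj = [[False] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        if rng.random() < p:
            adj[i][j] = adj[j][i] = True

def cut_size(A):
    A = set(A)
    B = [v for v in range(n) if v not in A]
    return sum(adj[u][v] for u in A for v in B)

ratios = []
for _ in range(20):
    A = rng.sample(range(n), rng.randint(10, n // 2))
    expected = p * len(A) * (n - len(A))
    ratios.append(cut_size(A) / expected)
```

Each sampled cut lands within a constant factor of its expectation, as the Chernoff bound predicts.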
We only need to prove the theorem for |A| ≤ n/2. We first prove the lower inequality in the statement of the theorem. By Theorem 1.4, with probability 1 − o(1), for any A ⊂ V such that |A| ≤ n/2 we have |δ_{H′}(A)| ≥ c₅|A|, and so w(δ_{H′}(A)) ≥ c₅|A|pn ≥ c₅ p|A|(n − |A|) ≥ c₅c₄ |δ_H(A)|. For the upper inequality in the statement of the theorem, we need the fact that, with high probability, the maximum degree of a vertex in a random spanning tree of the complete graph is O(log n); the same then holds for random spanning trees generated by Process B_p. We then have |δ_{H′}(A)| ≤ c₆ log n · |A|, and so w(δ_{H′}(A)) ≤ c₆ log n · |A| · pn ≤ 2c₆ log n · p|A|(n − |A|) ≤ 2c₆c₂ log n · |δ_H(A)|.

Discussion
The problem of scalable routing in the presence of failures has motivated a novel construction of sparse expanders.The use of trees is particularly natural for routing.Our results suggest using a constant number of trees in total for routing, as opposed to the norm of one or more trees per destination.Further, the manner in which the trees are obtained is simple to implement and can lead to faster recovery since (a) paths exist after several failures and (b) fewer trees need to be recomputed in any case.
One aspect of splicers that we have not fully explored is the stretch of the metric induced by them. For the case of the complete graph, it is not hard to see that the diameter is O(log n) and hence so is the expected stretch for a pair of random vertices. This continues to hold for G_{n,p}, in fact giving better bounds for small p (expected stretch of O(log log n) for p = poly(log n)/n). It remains to study the stretch of splicers for arbitrary graphs or bounded-degree graphs. This seems to be an interesting question since, on the complete graph, the expected stretch of one tree is Θ(√n) while that of two trees is O(log n).
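The O(log n) diameter of a 2-splicer of the complete graph can be checked empirically (our sketch):

```python
import random
from collections import deque

def aldous_broder(n, rng):
    """Uniform spanning tree of K_n as an adjacency list, via the random walk."""
    adj = {v: set() for v in range(n)}
    v, visited = 0, {0}
    while len(visited) < n:
        u = rng.randrange(n - 1)
        u = u if u < v else u + 1
        if u not in visited:
            visited.add(u)
            adj[u].add(v)
            adj[v].add(u)
        v = u
    return adj

def eccentricity(adj, s):
    """Largest BFS distance from s."""
    dist = {s: 0}
    q = deque([s])
    while q:
        v = q.popleft()
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                q.append(u)
    return max(dist.values())

rng = random.Random(3)
n = 100
t1, t2 = aldous_broder(n, rng), aldous_broder(n, rng)
union = {v: t1[v] | t2[v] for v in range(n)}
diam = max(eccentricity(union, s) for s in range(n))
```

For comparison, a single uniform tree on 100 vertices typically has diameter on the order of √n, while the union is much shallower.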
Finally, Process B p appears interesting to study on its own.