Uniquely Represented Data Structures for Computational Geometry

Abstract. We present new techniques for the construction of uniquely represented data structures in a RAM, and use them to construct efficient uniquely represented data structures for orthogonal range queries, line intersection tests, point location, and 2-D dynamic convex hull. Uniquely represented data structures represent each logical state with a unique machine state. Such data structures are strongly history-independent. This eliminates the possibility of privacy violations caused by the leakage of information about the historical use of the data structure. Uniquely represented data structures may also simplify the debugging of complex parallel computations, by ensuring that two runs of a program that reach the same logical state reach the same physical state, even if various parallel processes executed in different orders during the two runs.


Introduction
Most computer applications store a significant amount of information that is hidden from the application interface, sometimes intentionally but more often not. This information might consist of data left behind in memory or on disk, but can also consist of much more subtle variations in the state of a structure due to previous actions or the ordering of those actions. For example, a simple and standard memory allocation scheme that allocates blocks sequentially would reveal the order in which objects were allocated, or a gap in the sequence could reveal that something was deleted even if the actual data is cleared. Such location information could not only be derived by looking at the memory, but could even be inferred by timing the interface: memory blocks in the same cache line (or disk page) have very different performance characteristics from blocks in different lines (pages). Repeated queries could be used to gather information about relative positions even if the cache is cleared ahead of time. As an example of where this could be a serious issue, consider the design of a voting machine. A careless design might reveal the order of the cast votes, giving away the voters' identities.
To address the concern of releasing historical and potentially private information, various notions of history independence have been derived, along with data structures that support these notions [14,18,13,7,1]. Roughly, a data structure is history independent if someone with complete access to the memory layout of the data structure (henceforth called the "observer") can learn no more information than a legitimate user accessing the data structure via its standard interface (e.g., what is visible on screen). The most stringent form of history independence, strong history independence, requires that the behavior of the data structure under its standard interface, along with a collection of randomly generated bits which are revealed to the observer, uniquely determine its memory representation. We say that such structures have a unique representation.
The idea of unique representations had also been studied earlier [24,25,2], largely as a theoretical question to understand whether redundancy is required to efficiently support updates in data structures. The results were mostly negative. Anderson and Ottmann [2] showed, for example, that ordered dictionaries require Θ(n^{1/3}) time, thus separating unique representations from redundant representations (redundant representations support dictionaries in Θ(log n) time, of course). This is the case even when the representation is unique only with respect to the pointer structure and not necessarily with respect to memory layout. The model considered, however, did not allow randomness or even the inspection of secondary labels assigned to the keys.
Recently Blelloch and Golovin [4] described a uniquely represented hash table that supports insertion, deletion and queries on a table with n items in O(1) expected time per operation and using O(n) space. The structure only requires O(1)-wise independence of the hash functions and can therefore be implemented using O(log n) random bits. The approach makes use of recent results on the independence required for linear probing [20] and is quite simple and likely practical. They also showed a perfect hashing scheme that allows for O(1) worst-case queries, although it requires more random bits and is probably not practical. Using the hash tables, they described efficient uniquely represented data structures for ordered dictionaries and the order maintenance problem [10]. This does not violate the Anderson and Ottmann bounds as it allows random bits to be part of the input.
In this paper we use these and other results to develop various uniquely represented structures in computational geometry. We show uniquely represented structures for the well-studied dynamic versions of orthogonal range searching, horizontal point location, and orthogonal line intersection. All our bounds match the bounds achieved using fractional cascading [8], except that our bounds are in expectation instead of worst-case bounds. In particular, for all problems the structures support updates in O(log n log log n) expected time and queries in O(log n log log n + k) expected time, where k is the size of the output. They use O(n log n) space and O(1)-wise independent hash functions. Although better redundant data structures for these problems are known [15,17,3] (an O(log log n)-factor improvement), our data structures are the first to be uniquely represented. Furthermore, they are quite simple, arguably simpler than previous redundant structures that match our bounds.
Instead of fractional cascading, our results are based on a uniquely represented data structure for the ordered subsets problem (OSP). This problem is to maintain subsets of a totally ordered set under insertions and deletions to either the set or the subsets, as well as predecessor queries on each subset. Our data structure supports updates or comparisons on the totally ordered set in expected O(1) time, and updates or queries to the subsets in expected O(log log m) time, where m is the total number of element occurrences in subsets. This structure may be of independent interest.
We also describe a uniquely represented data structure for 2-D dynamic convex hull. For n points it supports point insertions and deletions in O(log^2 n) expected time, outputs the convex hull in time linear in the size of the hull, takes expected O(n) space, and uses only O(log n) random bits. Although better results for planar convex hull are known [6], we give the first uniquely represented data structure. Due to space considerations, the details of our results on horizontal point location and dynamic planar convex hull appear in the full version of the paper [5].
Our results are of interest for a variety of reasons. From a theoretical point of view they shed some light on whether redundancy is required to efficiently support dynamic structures in geometry. From the privacy viewpoint, range searching is an important database operation for which there might be concern about revealing information about the data insertion order, or whether certain data was deleted. Unique representations also have potential applications to concurrent programming and digital signatures [4].

Preliminaries
Let R denote the real numbers, Z denote the integers, and N denote the naturals. Let [n] for n ∈ Z denote {1, 2, . . ., n}.
Unique Representation. Formally, an abstract data type (ADT) is a set V of logical states, a special starting state v_0 ∈ V, a set of allowable operations O and outputs Y, a transition function t : V × O → V, and an output function y : V × O → Y. The ADT is initialized to v_0, and if operation O ∈ O is applied when the ADT is in state v, the ADT outputs y(v, O) and transitions to state t(v, O). A machine model M is itself an ADT, typically at a relatively low level of abstraction, endowed with a programming language. Example machine models include the random access machine (RAM), the Turing machine and various pointer machines. An implementation of an ADT A on a machine model M is a mapping f from the operations of A to programs over the operations of M. Given a machine model M, an implementation f of some ADT (V, v_0, t, y) is said to be uniquely represented (UR) if for each v ∈ V, there is a unique machine state σ(v) of M that encodes it. Thus, if we run f(O) on M exactly when we run O on (V, v_0, t, y), then the machine is in state σ(v) iff the ADT is in logical state v.

Model of Computation & Memory allocation.
Our model of computation is a unit-cost RAM with word size at least log |U|, where U is the universe of objects under consideration. As in [4], we endow our machine with an infinite string of random bits. Thus, the machine representation may depend on these random bits, but our strong history independence results hold no matter what string is used. In other words, a computationally unbounded observer with access to the machine state and the random bits it uses can learn no more than if told what the current logical state is. We use randomization solely to improve performance; in our performance guarantees we take probabilities and expectations over these random bits.
Our data structures are based on the solutions of several standard problems. For some of these problems UR data structures are already known. The most basic structure required throughout this paper is a hash table with insert, delete and search. The most common use of hashing in this paper is for memory allocation. Traditional memory allocation depends on the history, since locations are allocated based on the order in which they are requested. We maintain data structures as a set of blocks. Each block has its own unique integer label, which is used to hash the block into a unique memory cell. It is not too hard to construct such block labels if the data structures and the basic elements stored therein have them. For example, we can label points in R^d using their coordinates, and if a point p appears in multiple structures, we can label each copy using a combination of p's label and the label of the data structure containing that copy. Such a representation for memory contains no traditional "pointers" but instead uses labels as pointers. For example, for a tree node with label l_p and two children with labels l_1 and l_2, we store a cell containing (l_1, l_2) at label l_p. This also allows us to focus on the construction of data structures whose pointer structure is UR; such structures together with this memory allocation scheme yield UR data structures in a RAM. Note that all of the tree structures we use have pointer structures that are UR, and so the proofs that our structures are UR are quite straightforward. We omit the details due to lack of space.
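As an illustration of the label-as-pointer scheme, the sketch below (our own toy, not code from the paper; all names are hypothetical) stores each block at a cell addressed purely by its label, so the memory layout is a function of the logical state alone. A Python dict stands in for the RAM; a real implementation would hash each label into a fixed cell array using the UR hash table of [4].

```python
# Toy model of label-addressed memory: each block lives at a cell determined
# solely by its unique label, never by allocation order. A dict stands in for
# the RAM here; a real version would hash each label into a fixed cell array.

class LabelMemory:
    def __init__(self):
        self.cells = {}

    def store(self, label, block):
        self.cells[label] = block

    def load(self, label):
        return self.cells.get(label)

# A tree node with label l_p and children labeled l_1, l_2 is represented by
# storing the cell (l_1, l_2) at label l_p -- labels play the role of pointers.
mem = LabelMemory()
mem.store("p", ("c1", "c2"))   # internal node with two children
mem.store("c1", (None, None))  # leaf
mem.store("c2", (None, None))  # leaf
left, right = mem.load("p")    # follow "pointers" by label lookup
```

Because the cell address is derived from the label rather than from an allocator, two histories that reach the same logical tree produce the same memory contents.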
Trees. Throughout this paper we make significant use of tree-based data structures. We note that none of the deterministic trees (e.g., red-black, AVL, splay, or weight-balanced trees) have unique representations, even without accounting for memory layout. We therefore use randomized treaps [22] throughout our presentation. We expect that one could also make use of skip lists [21], but with treaps we can leverage the elegant results on treaps with respect to limited randomness. For a tree T, let |T| be the number of nodes in T; for a node v ∈ T, let T_v denote the subtree rooted at v, and let depth(x) denote the length of the path from x to the root of T.

Definition 1 (k-Wise Independence).
Let k ∈ Z and k ≥ 2. A set of random variables is k-wise independent if any k-subset of them is independent. A family H of hash functions from set A to set B is k-wise independent if the random variables in {h(x)}_{x∈A} are k-wise independent and uniform on B when h is picked at random from H.
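A standard way to realize a k-wise independent family (shown here as a hedged sketch; the function names are ours) is to evaluate a uniformly random polynomial of degree k − 1 over a prime field, which needs only O(k) random words:

```python
import random

MERSENNE_PRIME = (1 << 61) - 1  # 2^61 - 1 is prime; serves as the field size

def make_kwise_hash(k, prime=MERSENNE_PRIME, seed=None):
    """Sample h from a k-wise independent family: a uniformly random
    polynomial of degree k - 1 over Z_p, evaluated by Horner's rule."""
    rng = random.Random(seed)
    coeffs = [rng.randrange(prime) for _ in range(k)]

    def h(x):
        acc = 0
        for c in coeffs:
            acc = (acc * x + c) % prime
        return acc

    return h

# 8-wise independent priorities, as the treaps in this paper require:
prio = make_kwise_hash(8, seed=2024)
```

The k coefficients are the only randomness consumed, matching the O(log n)-random-bit budgets quoted in the paper up to word size.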
Unless otherwise stated, all treaps in this paper use 8-wise independent hash functions to generate priorities. We use the following properties of treaps.

Theorem 1 (Selected Treap Properties [22]). Let T be a random treap on n nodes with priorities generated by an 8-wise independent hash function from nodes to [p], where p ≥ n^3. Then for any x ∈ T, E[depth(x)] = O(log n). Moreover, the expected cost of an insertion or deletion is O(log n), even if the cost to rotate a subtree is linear in its size (e.g., if rotating T_v costs |T_v| + log n).

Dynamic Ordered Dictionaries. The dynamic ordered dictionary problem is to maintain a set S ⊂ U for a totally ordered universe (U, <). In this paper we consider supporting insertion, deletion, predecessor (Pred(x, S) = max{e ∈ S | e < x}) and successor (Succ(x, S) = min{e ∈ S | e > x}). Henceforth we will often skip successor, since it is a simple modification of predecessor. If the keys come from the universe of integers U = [m], a simple variant of the van Emde Boas et al. structure [26] is UR and supports all operations in O(log log m) expected time [4] and O(|S|) space. In the comparison model we can use treaps to support all operations in O(log |S|) time and space. In both cases O(1)-wise independence of the hash functions is sufficient. We sometimes associate data with each element.
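A minimal uniquely represented ordered dictionary along these lines, as a sketch: priorities are derived deterministically from the key (SHA-256 below is purely illustrative; the paper's treaps would use an 8-wise independent hash family), so the treap's shape, and hence its pointer structure, depends only on the current key set and never on the operation history.

```python
import hashlib

def priority(key):
    # Deterministic pseudo-random priority derived from the key itself, so
    # the shape of the treap is a function of the key set alone.
    return int.from_bytes(hashlib.sha256(str(key).encode()).digest()[:8], "big")

class Node:
    __slots__ = ("key", "prio", "left", "right")
    def __init__(self, key):
        self.key, self.prio = key, priority(key)
        self.left = self.right = None

def insert(root, key):
    """Standard treap insert: BST insert, then rotate up by priority."""
    if root is None:
        return Node(key)
    if key == root.key:
        return root
    if key < root.key:
        root.left = insert(root.left, key)
        if root.left.prio > root.prio:                       # rotate right
            l, root.left, l.right = root.left, root.left.right, root
            return l
    else:
        root.right = insert(root.right, key)
        if root.right.prio > root.prio:                      # rotate left
            r, root.right, r.left = root.right, root.right.left, root
            return r
    return root

def pred(root, x):
    """Pred(x, S) = max{e in S : e < x}, or None if no such element."""
    best = None
    while root is not None:
        if root.key < x:
            best, root = root.key, root.right
        else:
            root = root.left
    return best

def shape(v):
    """Serialize the pointer structure, to check unique representation."""
    return None if v is None else (v.key, shape(v.left), shape(v.right))
```

Inserting the same key set in two different orders yields identical shapes, which is exactly the unique-representation property at the pointer level.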
Order Maintenance. The Order-Maintenance problem [10] (OMP) is to maintain a total ordering L on n elements while supporting the following operations:
• Insert(x, y): insert new element y right after x in L.
• Delete(x): delete element x from L.
• Compare(x, y): determine if x precedes y in L.
In previous work [4] the first two authors described a randomized UR data structure for the problem that supports Compare in O(1) worst-case time and updates in O(1) expected time. It is based on a three-level structure. The top two levels use treaps and the bottom level uses state transitions. The bottom level contains only O(log log n) elements per structure, allowing an implementation based on table lookup. In this paper we use this order-maintenance structure to support ordered subsets.
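To pin down the OMP interface, here is a deliberately naive stand-in (our illustration; this is not the structure of [4], and it takes O(n) time per operation rather than the cited O(1) bounds):

```python
class NaiveOrderMaintenance:
    """Correct but slow reference implementation of the OMP interface."""

    def __init__(self, first):
        self.order = [first]          # explicit list of elements of L, in order

    def insert(self, x, y):
        """Insert new element y immediately after x in L."""
        self.order.insert(self.order.index(x) + 1, y)

    def delete(self, x):
        """Delete element x from L."""
        self.order.remove(x)

    def compare(self, x, y):
        """Return True iff x precedes y in L."""
        return self.order.index(x) < self.order.index(y)

om = NaiveOrderMaintenance("a")
om.insert("a", "c")   # order: a, c
om.insert("a", "b")   # order: a, b, c
```

Any OMP implementation, including the UR one of [4], must agree with this reference behavior.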
Ordered Subsets. The Ordered-Subset problem (OSP) is to maintain a total ordering L and a collection of subsets of L, denoted S = {S_1, . . ., S_q}, with m = |L| + Σ_{i=1}^q |S_i|, while supporting the OMP operations on L and the following ordered dictionary operations on each S_k:
• Insert(x, S_k): insert x ∈ L into set S_k.
• Delete(x, S_k): delete x from S_k.
• Pred(x, S_k): for x ∈ L, return max{e ∈ S_k | e < x}.
Dietz [11] first describes this problem in the context of fully persistent arrays, and gives a solution yielding O(log log m) expected amortized time per operation. Mortensen [16] describes a solution that supports updates to the subsets in expected O(log log m) time, and all other operations in O(log log m) worst-case time, where m is the total number of element occurrences in subsets. In Section 3 we describe a UR version.

Uniquely Represented Ordered Subsets
Here we describe a UR data structure for the ordered-subsets problem. It supports the OMP operations on L in expected O(1) time and the dynamic ordered dictionary operations on the subsets in expected O(log log m) time, where m = |L| + Σ_{i=1}^q |S_i|. We use a somewhat different approach than Mortensen [16], which relied heavily on the solution of some other problems which we do not know how to make UR. Our solution is more self-contained and is therefore of independent interest beyond the fact that it is UR. Furthermore, our results improve on Mortensen's by supporting insertion into and deletion from L in O(1) instead of O(log log m) time. We devote the rest of this section to proving Theorem 2. To construct the data structure, we start with a UR order maintenance data structure on L, which we will denote by D (see Section 2). Whenever we are to compare two elements, we simply use D.
We recall an approach used in constructing D [4], treap partitioning: Given a treap T and an element x ∈ T, let its weight w(x, T) be the number of its descendants, including itself. For a parameter s, let L_s[T] = {x ∈ T : w(x, T) ≥ s} ∪ {root(T)} be the weight-s partition leaders of T. For every x ∈ T let ℓ(x, T) be the least (deepest) ancestor of x in T that is a partition leader. Here, each node is considered an ancestor of itself. The weight-s partition leaders partition the treap into the sets {{y ∈ T : ℓ(y, T) = x} : x ∈ L_s[T]}, each of which is a contiguous block of keys from T.
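Treap partitioning is easy to state directly in code. The sketch below (our illustration; it recomputes weights naively, so it runs in quadratic time rather than maintaining weights incrementally as a real structure would) computes the weight-s partition leaders and each node's leader:

```python
class TNode:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def weight(v):
    """w(x, T): number of descendants of v, including v itself."""
    return 0 if v is None else 1 + weight(v.left) + weight(v.right)

def partition_leaders(root, s):
    """Return (leaders, leader_of): L_s[T] = {x : w(x,T) >= s} u {root(T)},
    and for each node its deepest partition-leader ancestor (itself allowed)."""
    leaders, leader_of = set(), {}

    def walk(v, current):
        if v is None:
            return
        if weight(v) >= s or current is None:  # the root is always a leader
            leaders.add(v.key)
            current = v.key
        leader_of[v.key] = current
        walk(v.left, current)
        walk(v.right, current)

    walk(root, None)
    return leaders, leader_of

# Example: tree 4(2(1,3), 5) with s = 3.
tree = TNode(4, TNode(2, TNode(1), TNode(3)), TNode(5))
leaders, leader_of = partition_leaders(tree, 3)
```

Since weight only decreases down the tree, each partition set {y : ℓ(y, T) = x} hangs as a contiguous region below its leader x.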
In the construction of D [4] the elements of the order are treap partitioned twice, at weight s := Θ(log |L|) and again at weight Θ(log log |L|). The partition sets at the finer level of granularity are then stored in UR hash tables. In the rest of the exposition we will refer to the treap on all of L as T(D). The set of weight-s partition leaders of T(D) is denoted by L[T(D)], and the treap on these leaders by T(L[D]).
The other main structure that we use is a treap T containing all pairs {(x, k) : x ∈ S_k}, ordered as described below. Treap T is partitioned by its weight-Θ(log m) partition leaders. These leaders are labeled with the path from the root to their node (0 for left, 1 for right), so that the label of each v is the binary representation of the root-to-v path. We keep a hash table H that maps labels to nodes, so that the subtreap of T on L[T] forms a trie. It is important that only the leaders are labeled, since otherwise insertions and deletions would require O(log m) time. We maintain a pointer from each node of T to its leader. In addition, we maintain pointers from each x ∈ L[T(D)] to (x, 0) ∈ T.
We store each subset S_k in its own treap T_k, also partitioned by weight-Θ(log m) leaders. When searching for the predecessor in S_k of some element x, we use T to find the leader ℓ in T_k of the predecessor of x in S_k. Once we have ℓ, the predecessor of x can easily be found by searching in the O(log m)-size subtree of T_k rooted at ℓ. To guide the search for ℓ, we store at each node v of T the minimum and maximum T_k-leader labels in the subtree rooted at v, if any. Since we have multiple subsets to find predecessors in, we actually store at each v a mapping from each subset S_k to the minimum and maximum leader of S_k in the subtree rooted at v. For efficiency, we store this mapping only at the leaders: for each leader v ∈ L[T] we keep a hash table H_v mapping each k to the minimum and maximum leaders of S_k in T_v (if any). Recall that T_v is the subtreap of T rooted at v. The high-level idea is to use the hash tables H_v to find the right "neighborhood" of O(log m) elements in T_k which we will have to update (in the event of an update to some S_k) or search (in the event of a predecessor or successor query). Since these neighborhoods are stored as treaps, updating and searching them takes expected O(log log m) time. We summarize these definitions, along with some others, in Table 1.
H: hash table mapping each label i ∈ {0, 1}^m to a pointer to the leader of T with label i
H_v: hash table mapping each k ∈ [q] to the minimum and maximum leaders of S_k in T_v (if they exist)
T: the treap storing the pairs (x, k)
I_x: for x ∈ L, a fast ordered dictionary [4] mapping each k ∈ {i : x ∈ S_i} to (x, k) in T
Table 1. Some useful notation and definitions of various structures we maintain.
We use the following lemma to bound the number of changes to partition leaders.

Lemma 1 ([4]). Let s ∈ Z^+ and let T be a treap of size at least s. Let T′ be the treap induced on the weight-s partition leaders of T. Then the probability that inserting a new element into T or deleting an element from T alters the structure of T′ is c/s for some global constant c.
Note that each partition set has size at most O(log m). The treaps T_k, J_x and T, and the dictionaries I_x from Table 1 are stored explicitly. We also store the minimum and maximum element of each L[T_k] explicitly. We use the following total order on the pairs in T: (x, k) < (x′, k′) iff x < x′, or x = x′ and k < k′.

OMP Insert & Delete Operations:
These operations remain largely the same as in the order maintenance structure of [4]. We assume that when x ∈ L is deleted it is not in any set S_k. The main difference is that if the set L[T(D)] changes, we will need to update the treaps {J_v : v ∈ L[T(D)]}, T, and the tables {H_v : v ∈ L[T]} appropriately.
Note that we can easily update H_v in time linear in |T_v| using an in-order traversal of T_v, assuming we can test whether x is in L[T_k] in O(1) time. To accomplish this, for each k we store L[T_k] in a hash table. Thus using Theorem 1 we can see that all necessary updates to {H_v : v ∈ T} take expected O(log m) time.

Predecessor & Successor: Suppose we wish to find the predecessor of x in S_k. (Finding the successor is analogous.) If x ∈ S_k, we can detect this in expected O(log log m) time using I_x. So suppose x ∉ S_k. We will first find the predecessor w of (x, k) in T as follows. (We can handle the case that w does not exist by adding a special element to L that is smaller than all other elements and is considered to be part of L[T(D)].) Once we have found the predecessor w of (x, k) in T, we search for the predecessor w′ of x in L[T_k]. (If w′ does not exist, we simply use min{u ∈ L[T_k]}.) To find w′, we first use w to search for a node u′, defined as the leader (x, k) would have had in T, had it been given a priority of −∞. Note that with priority −∞, (x, k) would be the leftmost leaf of the right subtree of w in T. Hence its leader would be either the leader of w, or the deepest leader on the leftmost path starting from the right child of w. Hence u′ can be found in expected O(log log m) time by binary searching on its label (i.e., if the label of w is α, then find the maximum k′ such that α·1·0^{k′} is a label in H).
Let P be the path from u′ to the root of T. We use the label of u′ and H to binary search on P for the deepest node v ∈ P for which min{u : u ∈ L[T_k] and (u, k) ∈ T_v} < x. If v ≠ u′, then (w′, k) is in the left subtree of v. So let v_l be the left child of v and note that w′ = max{u : u ∈ L[T_k] and (u, k) ∈ T_{v_l}}, which we can look up in O(1) time after finding v by using H_{v_l}. Otherwise v = u′. In this case, look up a := min{u : u ∈ L[T_k] and (u, k) ∈ T_v} and b := max{u : u ∈ L[T_k] and (u, k) ∈ T_v}, find the least common ancestor c of {a, b} in T_k, and starting from c search T_k for w′. Since a and b are both descendants of u′, their distance (i.e., one plus the number of nodes between them in the order) in L is at most s = Θ(log m), and thus their distance in T_k is at most O(log m). However, in random treaps the expected length of a path between nodes at distance d is O(log d), even if priorities are generated using only 8-wise independent hash functions [22]. Thus we can find c in expected O(log log m) time. Note that c has at most O(log^2 m) descendants between a and b in T_k, since there are at most O(log m) partition leaders between a and b and each has at most O(log m) "followers" in its partition set, and so we can find w′ in expected O(log log m) time starting from c. Once we have found w′, the predecessor of x in L[T_k], we can simply find the successor of w′ in L[T_k], say w′′, via fast finger search, and then search the subtreaps rooted at w′ and w′′ for the actual predecessor of x in S_k in expected O(log log m) time.
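The binary search on labels of the form α·1·0^k relies on a monotonicity fact: subtree weight never increases along a root-to-leaf path, so the labeled leaders on any such path form a prefix of it. A hedged sketch (the hash table H is modeled here as a plain set of bit-string labels, and the function name is ours):

```python
def deepest_leader_label(H, alpha, max_depth):
    """Given the label alpha of w, find alpha + '1' + '0'*k in H with maximum
    k, i.e. the deepest leader on the leftmost path below w's right child.
    Membership is monotone in k (leaders on a root path form a prefix of it),
    so binary search over k is valid."""
    lo, hi, best = 0, max_depth, None
    while lo <= hi:
        mid = (lo + hi) // 2
        if alpha + "1" + "0" * mid in H:
            best, lo = mid, mid + 1
        else:
            hi = mid - 1
    return None if best is None else alpha + "1" + "0" * best
```

Each probe is a single hash-table lookup, so with max_depth = O(log m) the search makes O(log log m) probes.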
OSP-Insert and OSP-Delete: OSP-Delete is analogous to OSP-Insert, hence we focus on OSP-Insert. Suppose we wish to add x to S_k. First, if x is not currently in any of the sets {S_i : i ∈ [q]}, then find the leader of x in T(D), say y, and insert x into J_y in expected O(log log m) time. Next, insert x into T_k as follows. Find the predecessor w of x in S_k, then insert x into T_k in expected O(1) time starting from w to speed up the insertion. Find the predecessor of (x, k) in T as in the predecessor operation, and insert (x, k) into T using it as a starting point.
Let L[T_k] and L[T_k]′ be the leaders of T_k immediately before and after the addition of x to S_k, and let ∆_k := (L[T_k] ∪ L[T_k]′) \ (L[T_k] ∩ L[T_k]′). Then we must update {H_v : v ∈ L[T]} appropriately for all nodes v ∈ L[T] that are descendants of (x, k) as before, but must also update H_v for any node v ∈ L[T] that is an ancestor of some node in {(u, k) : u ∈ ∆_k}. It is not hard to see that these latter updates can be done in expected O(log m) time; since by Lemma 1 the set L[T_k] changes with probability only O(1/log m), their expected cost is O(1).

Uniquely Represented Range Trees
Let P = {p_1, p_2, . . ., p_n} be a set of points in R^d. The well-studied orthogonal range reporting problem is to maintain a data structure for P while supporting queries that, given an axis-aligned box B in R^d, return the points P ∩ B. The dynamic version allows for the insertion and deletion of points. Chazelle and Guibas [8] showed how to solve the two-dimensional dynamic problem in O(log n log log n) update time and O(log n log log n + k) query time, where k is the size of the output. Their approach used fractional cascading. More recently Mortensen [17] showed how to solve it in O(log n) update time and O(log n + k) query time using a sophisticated application of Fredman and Willard's q-heaps [12]. All of these techniques can be generalized to higher dimensions at the cost of replacing the first log n term with a log^{d−1} n term [9].
Here we present a uniquely represented solution to the problem. It matches the bounds of the Chazelle and Guibas version, except ours are in expectation instead of worst-case bounds. Our solution does not use fractional cascading and is instead based on ordered subsets. One could probably derive a UR version based on fractional cascading, but making dynamic fractional cascading UR would require significant work and is unlikely to improve the bounds. Our solution is simple and avoids any explicit discussion of weight-balanced trees (the required properties fall directly out of known properties of treaps). If d = 1, simply use the dynamic ordered dictionaries solution [4] and have each element store a pointer to its successor for fast reporting. For simplicity we describe the two-dimensional case. The remaining cases with d ≥ 3 can be implemented using standard techniques [9] if treaps are used for the underlying hierarchical decomposition trees; the description will be deferred to the full paper. We will assume that the points have distinct coordinate values; thus, if (x_1, x_2), (y_1, y_2) ∈ P, then x_i ≠ y_i for all i. (There are various ways to remove this assumption, e.g., the composite-numbers scheme or symbolic perturbations [9].) We store P in a random treap T using the ordering on the first coordinate as our BST ordering. We additionally store P in a second random treap T′ using the ordering on the second coordinate as our BST ordering, and also store P in an ordered subsets instance D using this same ordering. We cross-link these and use T′ to find the position of any point we are given in D. The subsets of D are {T_v : v ∈ T}, where T_v is the subtree of T rooted at v. We assign each T_v a unique integer label k using the coordinates of v, so that T_v is S_k in D. The structure is UR as long as all of its components (the treaps and ordered subsets) are uniquely represented.
To insert a point p, we first insert it by the second coordinate in T′ and, using the predecessor of p in T′, insert a new element into the ordered subsets instance D. This takes O(log n) expected time. We then insert p into T in the usual way using its x-coordinate. That is, search for where p would be located in T were it a leaf, then rotate it up to its proper position given its priority. As we rotate it up, we can reconstruct the ordered subset for a node v from scratch in time O(|T_v| log log n). Using Theorem 1, the overall time is O(log n log log n) in expectation. Finally, we must insert p into the subsets {T_v : v ∈ T and v is an ancestor of p}. This requires expected O(log log n) time per ancestor, and there are only O(log n) of them in expectation. Since these expectations are computed over independent random bits, they multiply, for an overall time bound of O(log n · log log n) in expectation. Deletion is similar.
To answer a query (p, q) ∈ R^2 × R^2, where p = (p_1, p_2) is the lower left and q = (q_1, q_2) is the upper right corner of the box B in question, we first search for the predecessor p′ of p and the successor q′ of q in T (i.e., with respect to the first coordinate). We also find the predecessor of p and the successor of q in T′ (i.e., with respect to the second coordinate). Let w be the least common ancestor of p′ and q′ in T, and let A_p and A_q be the paths from p′ and q′ (inclusive) to w (exclusive), respectively. Let V be the union of the right children of nodes in A_p and the left children of nodes in A_q, and let S = {T_v : v ∈ V}. It is not hard to see that |V| = O(log n) in expectation, that the sets in S are disjoint, and that all points in B are either in W := A_p ∪ {w} ∪ A_q or in ∪_{S∈S} S.
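The decomposition into the individually checked path W and the canonical subtree collection S is the classical BST range decomposition; the sketch below (our simplification on a static BST, omitting the treap and ordered-subsets machinery) makes the construction of W and V concrete for one coordinate:

```python
class PNode:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def subtree_keys(v):
    return [] if v is None else subtree_keys(v.left) + [v.key] + subtree_keys(v.right)

def range_decompose(root, lo, hi):
    """Return (path, canonical): path nodes to test individually, and roots of
    disjoint subtrees whose keys all lie inside [lo, hi]."""
    path, canonical = [], []
    v = root
    while v is not None and (hi < v.key or v.key < lo):
        v = v.left if hi < v.key else v.right    # descend to the split node w
    if v is None:
        return path, canonical
    path.append(v)                               # the split node itself
    u = v.left                                   # walk the lo-side spine
    while u is not None:
        if lo <= u.key:
            path.append(u)
            if u.right is not None:
                canonical.append(u.right)        # entirely inside [lo, hi]
            u = u.left
        else:
            u = u.right
    u = v.right                                  # walk the hi-side spine
    while u is not None:
        if u.key <= hi:
            path.append(u)
            if u.left is not None:
                canonical.append(u.left)         # entirely inside [lo, hi]
            u = u.right
        else:
            u = u.left
    return path, canonical

# Example: balanced BST on keys 1..7, query range [2, 6].
root = PNode(4, PNode(2, PNode(1), PNode(3)), PNode(6, PNode(5), PNode(7)))
path, canonical = range_decompose(root, 2, 6)
```

In the 2-D structure, each canonical subtree T_v corresponds to an ordered subset S_k, which is then scanned by the second coordinate.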

Horizontal Point Location & Orthogonal Segment Intersection
Let S = {(x_i, x_i′, y_i) : i ∈ [n]} be a set of n horizontal line segments. In the horizontal point location problem we are given a point (x̂, ŷ) and must find the segment (x, x′, y) ∈ S maximizing y subject to the constraints x ≤ x̂ ≤ x′ and y < ŷ. In the related orthogonal segment intersection problem we are given a vertical line segment s = (x̂, y, y′), and must report all segments in S intersecting it, namely {(x_i, x_i′, y_i) : x_i ≤ x̂ ≤ x_i′ and y ≤ y_i ≤ y′}. In the dynamic version we must additionally support updates to S. As with the orthogonal range reporting problem, both of these problems can be solved using fractional cascading in the same time bounds [8] (k = 1 for point location, and k is the number of segments reported for segment intersection). Mortensen [15] improved orthogonal segment intersection to O(log n) updates and O(log n + k) queries.
We extend our ordered subsets approach to obtain the following results for horizontal point location and orthogonal segment intersection.

Uniquely Represented 2-D Dynamic Convex Hull
Using similar techniques we obtain a uniquely represented data structure for maintaining the convex hull of a dynamic set of points S ⊂ R^2. Our approach builds upon the work of Overmars & van Leeuwen [19], who use a standard balanced BST T storing S to partition the points along one axis, and likewise store the convex hull of T_v for each v ∈ T in a balanced BST. In contrast, we use treaps in both cases, together with the hash table in [4] for memory allocation. Our main contribution is then to analyze the running time and space usage of this new uniquely represented version, and to show that even using only O(log n) random bits to hash and generate treap priorities, the expected time and space bounds match those of the original version up to constant factors. Specifically, we prove the following.

Conclusions
We have introduced uniquely represented data structures for a variety of problems in computational geometry. Such data structures represent every logical state by a unique machine state and reveal no history of previous operations, thus protecting the privacy of their users. For example, our uniquely represented range tree allows for efficient orthogonal range queries on a database containing sensitive information (e.g., viral load in the blood of hospital patients) without revealing any information about the order in which the current points were inserted into the database, whether points were previously deleted, or what queries were previously executed. Uniquely represented data structures have other benefits as well. They make equality testing particularly easy. They may also simplify the debugging of parallel processes by eliminating the conventional dependencies upon the specific sequence of operations that led to a particular logical state.

Theorem 2 .
Let m := |{(x, k) : x ∈ S_k}| + |L|. There exists a UR data structure for the ordered subsets problem that uses O(m) space, supports all OMP operations in expected O(1) time, and all other operations in expected O(log log m) time.
w(x, T): number of descendants of node x of treap T
L[T]: weight s = Θ(log m) partition leaders of treap T
ℓ(x, T): the partition leader of x in T
T_k: treap containing all elements of the ordered subset S_k, k ∈ [q]
T(D): the treap on L
T(L[D]): the subtreap of T(D) on the weight s = Θ(log m) leaders of T(D)
J_x: for x ∈ L[T(D)], a treap containing {u ∈ L : ℓ(u, T(D)) = x and ∃ i with u ∈ S_i}

Clearly, updating T itself requires only expected O(log m) time. Finally, we bound the time to update the treaps J_v by the total cost to update T(L[D]) when the rotation of a subtree of size k costs k + log m, which is O(log m) by Theorem 1. This bound holds because |J_v| = O(log m) for any v, and any tree rotation on T(D) causes at most 3s elements of T(D) to change their weight-s leader. Therefore only O(log m) elements need to be added to or deleted from the treaps {J_v : v ∈ T(L[D])}, and we can batch these updates in such a way that each takes expected amortized O(1) time. However, we need only make these updates if L[T(D)] changes, which by Lemma 1 occurs with probability O(1/log m). Hence the expected overall cost is O(1).
First search I_x for the predecessor k_2 of k in {i : x ∈ S_i} in O(log log m) time. If k_2 exists, then w = (x, k_2). Otherwise, let y be the leader of x in T(D), and let y′ be the predecessor of y in L[T(D)]. Then either w ∈ {(y′, 0), (y, 0)} or else w = (z, k_3), where z = max{u : u < x and u ∈ J_y ∪ J_{y′}} and k_3 = max{i : z ∈ S_i}. Thus we can find w in expected O(log log m) time using fast finger search for y′, treap search on the O(log m)-sized treaps in {J_v : v ∈ L[T(D)]}, and the fast dictionaries {I_x : x ∈ L}.
The number of leaders of T_k that change can be bounded by 2(R + 1), where R is the number of rotations necessary to rotate x down to a leaf node in a treap on L[T_k]. Since it takes Θ(R) time to delete x given a handle to it, from Theorem 1 we easily infer E[R] = O(1). Since the randomness for T_k is independent of the randomness used for T, these expectations multiply, for a total expected time of O(log m), conditioned on the fact that L[T_k] changes. Since L[T_k] only changes with probability O(1/log m), this part of the operation takes expected O(1) time. Finally, insert k into I_x in expected O(log log m) time, with a pointer to (x, k) in T.

Theorem 3 .
Let P be a set of n points in R^d. There exists a UR data structure for the orthogonal range query problem that uses O(n log^{d−1} n) space and O(d log n) random bits, supports point insertions or deletions in expected O(log^{d−1} n · log log n) time, and supports queries in expected O(log^{d−1} n · log log n + k) time, where k is the size of the output.
Compute W's contribution to the answer, W ∩ B, in O(|W|) time by testing each point in turn. Since E[|W|] = O(log n), this requires O(log n) time in expectation. For each subset S ∈ S, find S ∩ B by searching for the successor of p in S and doing an in-order traversal of the treap in D storing S until reaching a point larger than q. This takes O(log log n + |S ∩ B|) time in expectation for each S ∈ S, for a total of O(log n · log log n + k) expected time.

Theorem 4 .
Let S be a set of n horizontal line segments in R^2. There exists a uniquely represented data structure for the point location and orthogonal segment intersection problems that uses O(n log n) space, supports segment insertions and deletions in expected O(log n · log log n) time, and supports queries in expected O(log n · log log n + k) time, where k is the size of the output. The data structure uses O(log n) random bits.

Theorem 5 .
Let n = |S|. There exists a uniquely represented data structure for 2-D dynamic convex hull that supports point insertions and deletions in O(log^2 n) expected time, outputs the convex hull in O(k) time, where k is the size of the hull, requires O(n) space in expectation, and uses only O(log n) random bits.