Research Showcase @ CMU

Precedence techniques have been widely used in the construction of parsers. However, the restrictions they impose on grammars are hard to meet, so the rules of a grammar often had to be altered to make it acceptable to the parser. We have shown that, by keeping track of the set of rules that could be applied at any one time, one can enlarge the class of grammars considered. The set of rules to be considered is obtained directly from the information given by a labelled set of precedence relations; thus, the parsers are easily obtained. Compared to the precedence parsers, this new method gives a considerable increase in the class of parsable grammars, as well as an improvement in error detection. An interesting consequence of this approach is a new decomposition technique for LR parsers.



1. Introduction
Among the large variety of techniques used for parsing, one can distinguish the bottom-up parsers as those which attempt to make successive reductions on a given string so as to eventually reach the starting symbol of the grammar. These parsers can be thought of as operating in two modes (or phases). In the detection phase, the parser attempts to determine the boundary of the right-hand side of a phrase within the string being considered.
Once this boundary is detected, the parser goes into a reduction phase, consisting of selecting a production which is a handle at the determined position.
If we classify different types of bottom-up parsers according to the amount of information they carry while in the detection phase, we can distinguish two extremes. On one hand we have the precedence parsers, which are characterized by the fact that they carry no information while looking for the right-hand side of a phrase, and by making their decisions in the reduction phase using local context only. The parsers obtained are relatively simple, but the class of grammars they can parse is restricted by the existence of local ambiguities.
By varying the amount of context examined one can define different families of grammars. Among the most popular ones, we have the Wirth-Weber precedence [1], the simple weak precedence [2,3], and the simple mixed strategy precedence [3].
On the other side of the spectrum lie the LR(k) parsers [4]. While in the detection phase, they carry enough information so that the decision to reduce can be made immediately after a right-hand side is detected. The number of states an LR(k) parser has can become immense. Part of this high number of states is due to the fact that information carried forward has to be further distinguished for the same local context.
An intermediate situation is obtained if one separates the information which has to be carried forward from the information that can be obtained from local context.
A parser thus constructed will consist of two machines: a forward machine F and a decision machine D. The parser will work as follows: initially, control is given to the F machine.
While in F, the parser behaves like a precedence parser, but every time it shifts an input it stores in the stack the input symbol together with a symbol denoting the state it is currently in. The decision to shift, which is accompanied by a transition to a new state, is made by examining local context. The F machine can also determine acceptance, an error condition, or a call on the D machine for a decision. The D machine determines whether a shift or a reduce has to be performed, by examining local context together with the state information that exists on the pushdown. A shift is performed as in the F machine. If a reduce is called for, the right-hand side of the production used is removed from the stack, the F machine is initialized to the state denoted by the topmost symbol, and the left-hand side of the production used is given as input to it (this is like an LR(k) parser). A parser of this type is given in Example 1.

Example 1
Let G be given by: S → cAbB. G is not a member of any of the classes of precedence grammars mentioned above. An LR(1) (or an LR(0)) parser for G has 10 states. We can see that we really need 2 states to carry information forward (i.e., whether an "a" or a "b" was first seen). The rest of the information can be determined from local context. A diagram for the F machine could be given. The D machine would check the contents of the stack to match a right-hand side of a subset of the productions, determined by the state of F from which it was called, and it would give a decision on which reduction to make. A diagram for D can be given as a forest. In this paper we examine parsers built using this approach. Different classes of parsable grammars can be obtained by applying different criteria for the construction of the F and D machines. We will see that any class of precedence grammars can be extended this way, without a significant complication of the parsers and with the big advantage of not having to accommodate the rules of the grammar to satisfy the requirements of the particular precedence method used. Although the intent of this study was to extend precedence parsers, we get as a side effect a decomposition method for LR(k) parsers. This approach is a matter of further study.
In this section we examine the construction of different parsers and the classes of grammars they can parse. We assume the reader is familiar with the terminology for context-free grammars [7,8].
Since our original attempt was in the direction of extending precedence techniques, all the grammars considered here will be proper. Extensions to non-Λ-free grammars can be studied along the same lines.
Definition 1: A proper context-free grammar G = (V, V_T, P, S) is a reduced, Λ-free, cycle-free context-free grammar. V denotes the vocabulary, V_T is the set of terminals, V_N is the set of nonterminals. We assume the productions in P are indexed. The set I of indices will consist of symbols of the form A_k where A ∈ V_N. An index i = A_k ∈ I will denote the k-th production whose left-hand side is A. If this production is A → δ we will write i: A → δ (or A_k: A → δ).
If there is only one production for nonterminal A we will use A instead of A_1 as its index.
There will be an index 0 to denote an augmented production of the form S' → ⊥S⊥ (S' ∉ V).
(This is just a convenience to make the definitions simpler.) Except where otherwise noted, the following conventions apply throughout the paper: A, B, C, D ∈ V_N; a, b, c, d, e, g, r ∈ V_T; lower-case Greek letters denote strings in V*.
We will now define certain relations between pairs of symbols in V. These relations are defined similarly to those in [1], but with a label attached to them. The labels provide information about the way the relation between the symbols was obtained.

Definition 2:
1) X is less than Y under α₁, α₂, which we will write as [α₁; α₂]: X ⋖ Y, if ∀i ∈ α₁, ∃A, B, μ, ν such that i: A → μXBν and α₂ = {j | B ⇒* Cσ, j: C → Yγ}.
The labelled precedence relations can be displayed in matrix form. The matrix of labelled precedence relations will be denoted by M. Note that for two symbols X and Y there may be more than one pair of labels α₁, α₂ such that [α₁; α₂]: X ⋖ Y.
We will later perform reductions on this matrix. These will amount to merging some indices into one. We can think of the set of labels as coming from a set L via a mapping φ: I → L. The original matrix is defined with L = I and φ the identity. In general, though, we will have a labelled precedence matrix M with labels from a set L.
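To make the construction concrete, the labelled relations can be computed mechanically. The sketch below (all variable and function names are mine, not the paper's) builds the three labelled relations for the grammar of Example 2 (S₁: S → bZg, S₂: S → crY, Y: Y → ag, Z: Z → ra), with '$' standing for the end marker ⊥ and index '0' for the augmented production; for the ⋖ relation only the α₂ component of the label is recorded.

```python
# Sketch: computing the labelled precedence relations for a small grammar.
# Grammar of Example 2, augmented; '$' plays the role of the end marker.
PRODS = {                      # index -> (left-hand side, right-hand side)
    '0':  ("S'", ['$', 'S', '$']),
    'S1': ('S',  ['b', 'Z', 'g']),
    'S2': ('S',  ['c', 'r', 'Y']),
    'Y':  ('Y',  ['a', 'g']),
    'Z':  ('Z',  ['r', 'a']),
}
NONTERMS = {lhs for lhs, _ in PRODS.values()}

def corners(pos):
    """Nonterminals reachable as the leftmost (pos=0) or rightmost (pos=-1)
    corner of a derivation: corners(0)[B] = {B} u {C | B =>* C sigma}."""
    reach = {A: {A} for A in NONTERMS}
    changed = True
    while changed:
        changed = False
        for lhs, rhs in PRODS.values():
            corner = rhs[pos]
            if corner in NONTERMS:
                for A in NONTERMS:
                    if lhs in reach[A] and corner not in reach[A]:
                        reach[A].add(corner)
                        changed = True
    return reach

LEFTC, RIGHTC = corners(0), corners(-1)

def first_terminals(X):
    """Terminals a such that X =>* a rho (X itself if X is a terminal)."""
    if X not in NONTERMS:
        return {X}
    return {rhs[0] for (lhs, rhs) in PRODS.values()
            if lhs in LEFTC[X] and rhs[0] not in NONTERMS}

eq, lt, gt = {}, {}, {}        # (X, Y) -> set of labelling indices
for i, (A, rhs) in PRODS.items():
    for X, Y in zip(rhs, rhs[1:]):           # every adjacent pair in a rhs
        eq.setdefault((X, Y), set()).add(i)  # [a3]: X = Y
        if Y in NONTERMS:                    # i: A -> mu X B nu, B =>* C sigma
            for j, (C, rhs_j) in PRODS.items():
                if C in LEFTC[Y]:            # [a2]: X < head of C's rhs
                    lt.setdefault((X, rhs_j[0]), set()).add(j)
        if X in NONTERMS:                    # i: A -> mu B D nu, D =>* a rho
            for a in first_terminals(Y):
                for j, (C, rhs_j) in PRODS.items():
                    if C in RIGHTC[X]:       # [a4]: tail of C's rhs > a
                        gt.setdefault((rhs_j[-1], a), set()).add(j)
```

Running this reproduces the two conflict entries discussed for Example 2: the (a, g) entry carries both the ≐ label {Y} and the ⋗ label {Z}, and the (g, $) entry carries the ⋗ label {S1, Y}.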
Given a labelled matrix of precedence relations we now define a parser for the grammar.
The (forward) states of the parser will be subsets of L. Informally, the parser can be defined as follows. Define a directed graph whose nodes are the members of V (plus two other nodes, denoted by ⊥; one of them will be the unique source node, the other the unique sink node in the graph). An arc exists between nodes X and Y if the (X,Y) entry of the M matrix is not empty. The initial state will be the set consisting of the label for production 0, and we will say it is incident to the source node ⊥.
Now we perform the following operation at every node. Let state s be incident to node X and let there be an arc from X into Y, with labels [α₁; α₂]: X ⋖ Y and [α₃]: X ≐ Y in the (X,Y) entry of M. Define the state t incident to node Y as s ∩ α₃ together with the union of all sets α₂ for which s ∩ α₁ ≠ ∅. The state t will be referred to as the successor of state s. When no new states are created the process stops.
Note that the computation of the states is done using only Boolean operations on sets, and that checking whether a state has already been created is straightforward.
(The whole process can be viewed as a parallel operation at all nodes.) The set of states so created constitutes the set Q_F of states of the F machine. The underlying finite-state automaton will be called the unrestricted F machine.
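The state construction can be sketched as a worklist fixpoint. The EQ and LT tables below were derived by hand for the Example-2 grammar (S₁: S → bZg, S₂: S → crY, Y: Y → ag, Z: Z → ra, plus 0: S' → $S$), and the successor rule is the one just described; the names are mine, not the paper's.

```python
# Sketch: fixpoint computation of the unrestricted F-machine states,
# using the successor rule t = (s & a3) | union{ a2 | s & a1 nonempty }.
EQ = {('$', 'S'): {'0'}, ('S', '$'): {'0'},
      ('b', 'Z'): {'S1'}, ('Z', 'g'): {'S1'},
      ('c', 'r'): {'S2'}, ('r', 'Y'): {'S2'},
      ('a', 'g'): {'Y'},  ('r', 'a'): {'Z'}}
LT = {('$', 'b'): [({'0'}, {'S1'})], ('$', 'c'): [({'0'}, {'S2'})],
      ('b', 'r'): [({'S1'}, {'Z'})], ('r', 'a'): [({'S2'}, {'Y'})]}

def successor(s, X, Y):
    """Successor state of s (incident to node X) along the arc X -> Y."""
    t = s & EQ.get((X, Y), set())
    for a1, a2 in LT.get((X, Y), []):
        if s & a1:
            t |= a2
    return t

nodes = {Z for pair in list(EQ) + list(LT) for Z in pair}
start = ('$', frozenset({'0'}))        # initial state at the source node
seen, work = {start}, [start]
while work:
    X, s = work.pop()
    for Y in nodes:
        t = frozenset(successor(s, X, Y))
        if t and (Y, t) not in seen:   # empty states create no arc
            seen.add((Y, t))
            work.append((Y, t))

states = {s for _, s in seen}          # Q_F of the unrestricted machine
```

For this grammar the fixpoint yields five singleton states, and node "a" carries two distinct states ({Z} and {Y}), which is exactly the information a plain precedence parser lacks at the conflicting entries.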
The parsing of a word proceeds as follows. Initially the F machine is in the initial state s₀, incident to node ⊥. There is a stack with two channels: channel 1 holds vocabulary symbols and channel 2 holds states of F. The D machine will either determine a shift, by examining productions in s ∩ (α₃ ∪ α₁), or a reduce to one of the productions in s ∩ α₄. If a shift is determined, control is transferred to the successor state of s in the machine F. If a reduce is determined, the right-hand side of the production being reduced is popped from the stack, control is transferred to the topmost state now appearing on channel 2, and the input symbol fed to machine F is the left-hand side of the production used. The parser accepts if the input symbol is ⊥, F is in its final state and channel 1 contains ⊥S.
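This parse loop can be sketched end to end. The tables below are hand-derived for the Example-2 grammar, and D's decision procedure is simplified to "reduce i when i labels a ⋗ entry and its right-hand side tops the stack"; all names are illustrative assumptions of mine.

```python
# Sketch of the parse loop: channel 1 holds symbols, channel 2 holds F-states.
PRODS = {'S1': ('S', ['b', 'Z', 'g']), 'S2': ('S', ['c', 'r', 'Y']),
         'Y': ('Y', ['a', 'g']), 'Z': ('Z', ['r', 'a'])}
EQ = {('$', 'S'): {'0'}, ('S', '$'): {'0'}, ('b', 'Z'): {'S1'},
      ('Z', 'g'): {'S1'}, ('c', 'r'): {'S2'}, ('r', 'Y'): {'S2'},
      ('a', 'g'): {'Y'}, ('r', 'a'): {'Z'}}
LT = {('$', 'b'): [({'0'}, {'S1'})], ('$', 'c'): [({'0'}, {'S2'})],
      ('b', 'r'): [({'S1'}, {'Z'})], ('r', 'a'): [({'S2'}, {'Y'})]}
GT = {('a', 'g'): {'Z'}, ('g', '$'): {'S1', 'Y'}, ('Y', '$'): {'S2'}}

def successor(s, X, Y):
    t = s & EQ.get((X, Y), set())
    for a1, a2 in LT.get((X, Y), []):
        if s & a1:
            t |= a2
    return t

def parse(word):
    stack = [('$', frozenset({'0'}))]  # (channel-1 symbol, channel-2 state)
    inp = list(word) + ['$']
    while True:
        X, s = stack[-1]
        Y = inp[0]
        if X == 'S' and Y == '$' and len(stack) == 2:
            return True                # accept: channel 1 reads $S
        # D's decision: reduce i if i labels a > entry and rhs tops the stack
        for i in s & GT.get((X, Y), set()):
            lhs, rhs = PRODS[i]
            if [sym for sym, _ in stack[-len(rhs):]] == rhs:
                del stack[-len(rhs):]  # pop the right-hand side
                inp.insert(0, lhs)     # feed the lhs back to F
                break
        else:
            t = successor(s, X, Y)     # otherwise F shifts
            if not t:
                return False           # empty state: error
            stack.append((Y, frozenset(t)))
            inp.pop(0)
```

The state stored on channel 2 is what lets a single table drive both sentences "brag" (reducing Z → ra) and "crag" (shifting through Y → ag) despite the shared (a, g) entry.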
We will now define the F machine.
F is a finite-state machine, F = (Q_F, V×V, δ_F, {φ0}, {{φ0}}), where Q_F is a subset of the set of all subsets of L, V×V is the input alphabet, and the initial (and final) state is the set containing φ0. δ_F is defined as follows. Let s ∈ Q_F and (X,Y) ∈ V×V, and let the (X,Y) entry of M contain the labels [α₁;α₂], [α₃], [α₄] (there may be several labels of type [α₁;α₂]). Then δ_F(s,(X,Y)) = (s ∩ α₃) ∪ ⋃{α₂ | s ∩ α₁ ≠ ∅} if s ∩ α₄ = ∅, and δ_F(s,(X,Y)) = D otherwise (D in the range of δ_F is interpreted as a call to machine D). The D machine can be defined in different ways, giving rise to different classes of parsable grammars.
We will give some definitions here. For simplicity, we will restrict ourselves to local contexts of one symbol, but these constructions can be extended to other contexts. The D machine works as follows. For each production i: A → β in s ∩ α₄ it checks that β appears as a valid expansion of i. If so, machine D outputs "reduce i". Also, it may output a state consisting of the set of all labels of productions i: A → βXCδ such that Y ∈ f₁*C, X leads into Y under s, and βX appears as a valid expansion of i. Thus, the D machine could produce more than one output. We are interested in deterministic behavior, so we will say that a parser is well defined if the D machine has at most one output. (An empty output from D is an indication of error.) The class of grammars which have deterministic parsers whose D machines are defined as above and whose F machines have n states will be called the class of n-state labelled precedence grammars with independent left and right context (n-LPI grammars). The parsers constructed as above will be such that their F machines usually have more states than necessary.
We can get minimal F machines as follows. Assume we have a definition for the class of D machines.
We then define an incompatibility relation on the set I of productions. We will say that two productions i₁, i₂ are incompatible if, when a call to D occurs with a state containing φi₁ and φi₂, D will produce more than one output. Given the set of incompatible productions, we can define a partition π on the set of productions such that if i₁, i₂ are incompatible productions they belong to different classes.
For each class we define a symbol. Let L be the set of all these symbols and define the natural map φ: I → L such that φi = φj iff i and j belong to the same class of π. We can now define the F and D machines as before. For some partitions π it may happen that D will not be well defined. But if the parser defined on the identity partition was well defined, there exists a partition for which the parser is well defined and for which the number of states of the machine F is minimal. This number gives an indication of the amount of information that has to be carried forward in order to successfully parse the sentences of the language generated by the grammar. It is clear that, for each n, we can define grammars for which the F machine will have at least n states, so this gives a measure of the complexity of the grammar.
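Finding a minimal such partition is a graph-colouring problem (NP-hard in general), but a greedy heuristic already illustrates the mechanics. In the sketch below the incompatible pair is hypothetical, chosen by me purely for illustration; the function names are also mine.

```python
# Sketch: given the incompatibility relation, build a partition of the
# production indices in which incompatible productions never share a class.
# Minimal partitioning is graph colouring; a greedy heuristic is used here.
def partition(indices, incompatible):
    classes = []          # each class: a set of mutually compatible indices
    for i in sorted(indices):
        for cls in classes:
            if all((i, j) not in incompatible and (j, i) not in incompatible
                   for j in cls):
                cls.add(i)
                break
        else:                          # no compatible class: open a new one
            classes.append({i})
    return classes

# Hypothetical input: the Example-2 indices with one assumed incompatible pair.
indices = ['0', 'S1', 'S2', 'Y', 'Z']
incompatible = {('S2', 'Z')}           # assumed, for illustration only
classes = partition(indices, incompatible)
```

With one incompatible pair the greedy pass produces two classes, i.e., a two-state F machine; the greedy result is a valid partition but not guaranteed minimal for arbitrary incompatibility relations.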
As the following result shows, even the simplest class of grammars in this hierarchy, i.e., those for which the F machine has 1 state, is an extension of the largest class of grammars defined using precedence relations over V×V, i.e., the class of simple mixed strategy precedence grammars.
Theorem 1: The class of SMSP grammars is contained in the class of 1-LPI grammars.
Proof: Let G be an SMSP grammar. The class of 1-state labelled grammars with independent left and right context has been presented in the literature under another name, as indicated by the following result.
Theorem 2: The class of 1 state labelled grammars with independent left and right context coincides with the class of overlap resolvable (OR) grammars [5].

Proof:
The reader is referred to [5] for the definition of OR grammars. A case analysis shows that D has a deterministic behavior iff every conflict is left or right resolvable. | Thus we get the following corollary, which answers a conjecture of Wise: Corollary 1: The class of OR languages coincides with the class of deterministic languages.
Proof: Follows from the fact that every deterministic language has an SMSP grammar. | Example 2 presented a grammar which failed to be OR. There are two entries in M which can cause incompatibilities, namely M(a,g) and M(g,⊥).
For the latter we have that productions Y and S₁ are not of the form occurring in case 1 or 2 of the definition of incompatibility. For the former, we do have that S₂ ≇ Z. Thus, at least 2 states are required for the F machine. It turns out that 2 states are sufficient to get a parser for this grammar.
Because we have defined the D machine as one which checks left and right context independently, we have the following result. The claim is certainly true for S₀. Now, assuming the claim holds for S_i, we note that GOTO(S_i,Y) is obtained by taking all productions in S_i¹ ∪ S_i⁴ with the dot shifted over the symbol Y (which become the set S_j¹ ∪ S_j² ∪ S_j³), and applying a closure operator to get the set S_j⁴ ∪ S_j⁵. But, for every index i of a production in S_j¹ we have (i): X ≐ Y, and for every index j of a production in S_j⁴, there is an index i of a production in S_j¹ ∪ S_j² such that (i;j): X ⋖ Y. Thus, all indices of productions in S_j¹ ∪ S_j² ∪ S_j³ appear in state h(S_j) and the claim holds.
It is now straightforward to verify that if G is not SLR(1), i.e., if there are two conflicting items in some set S_j of LR(0) items, then the corresponding state of the F machine will produce a call of the D machine which will, in turn, give more than one output. Thus the parser will not be deterministic and the grammar will not be an n-state LPI grammar. | We note that to generate the F machine we do not distinguish positions within a production, as an LR (or SLR) parser does. Thus, we are able to get the F machine faster, but we restrict the class of grammars which can be parsed, excluding those which have productions in which a repeated occurrence of a symbol may cause problems, as suggested by the following example. The D machine gives as output both "reduce A" and "reduce B". This behavior will occur even if the D machine checks the left and right context simultaneously, as is done later.
On the other hand, it is easily seen that G is an SLR(1) grammar.
Example 3 leads us to the following definition. Definition 4: Let A → X₁X₂...X_{n-1}X_n be a production. We will say that this production is free of repetitions (FOR) if for all 1 ≤ i, j ≤ n−1 we have i ≠ j implies X_i ≠ X_j (i.e., there is no repeated occurrence of a symbol among the first n−1 symbols). A grammar will be free of repetitions (FOR) if all of its rules are FOR. FOR grammars and FOR productions occur very often. Any grammar in 2-normal form is a FOR grammar, and every CF language can be given a trivial FOR grammar. Among the grammars used in programming languages, a quick glance at some reveals that: PL360, as defined in [9, pages 39-53], is FOR; SNOBOL4, as defined in [7, pages 505-507], has only one non-FOR rule; ALGOL 60, as defined in [10], has only one non-FOR rule (which happens to be a production for the <for list element>!); PAL, as defined in [7, pages 512-514], is FOR.
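The FOR property is directly mechanizable; a minimal sketch (function names mine):

```python
# Sketch: checking the free-of-repetitions (FOR) property of Definition 4.
# A production A -> X1...Xn is FOR when no symbol repeats among X1...X(n-1);
# a production body is represented here as a list of symbol strings.
def production_is_for(rhs):
    prefix = rhs[:-1]                  # the first n-1 symbols
    return len(prefix) == len(set(prefix))

def grammar_is_for(productions):
    """productions: iterable of (lhs, rhs) pairs."""
    return all(production_is_for(rhs) for _, rhs in productions)
```

Note that any rule whose right-hand side has at most two symbols is trivially FOR (at most one symbol is inspected), which matches the remark about grammars in 2-normal form; a rule such as S → aAbAc fails the check because A repeats among the first four symbols.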
If we are dealing with FOR grammars, we can strengthen the result of Theorem 3: Theorem 4: If G is FOR and SLR(1), then it is n-LPI.
Proof: Define the F machine using the identity map φ: I → L = I. If G is FOR, the claim stated in the proof of Theorem 3 becomes the following. Claim: If S_j is a set of LR(0) items partitioned as before, then h(S_j) coincides with the set of indices of all productions in S_j¹ ∪ S_j² ∪ S_j³.
To prove the claim, it suffices to show that there are no indices of productions in h(S_j) which are not in S_j¹ ∪ S_j² ∪ S_j³. This follows from the fact that, if (i): X ≐ Y or (i;j): X ⋖ Y then, since G is FOR, there is only one occurrence of X in the production whose index is i. Since an LR(0) item is identified by this symbol, the map h is 1-1. It is easy to see that the parser constructed is isomorphic to the SLR(1) parser. | Thus, if we restrict our attention to FOR grammars, both classes coincide. Moreover, the SLR(1) parser can be obtained very easily from the F machine, so a fast procedure for constructing SLR(1) parsers is obtained. As mentioned above, FOR productions and grammars occur frequently in programming languages. Thus, we should take advantage of this fact when constructing parsers for them.
We will now modify the definition of the D machine so as to make it check for simultaneous left and right context. We need to introduce the following definition. The change will only affect the instruction labelled a. This instruction is changed to: a: if ∃i, φi ∈ s ∩ α₄, i: A → βX, n = |βX| + 1, lₙψ₁ = ZβX, Z leads into i, and A is a valid reduction for βX within symbols Z and Y and state s', where s' = f₁lₙψ₂ is the state which appears next to Z, then "reduce i".
We will now construct a parser for a grammar using this machine D.
Example: Let G be given by S₁: ... This follows from the fact that for a FOR grammar the converse of Lemma 1 holds, i.e., if A is a valid reduction for δ within symbols X and Z then XAZ is a substring of some sentential form. Thus, if the D machine gives more than one output, it means that knowledge of the left and right context of a handle of a sentential form does not uniquely determine it. Thus, G is not (1-1)BRC.

S → Aa
3. A decomposition of LR parsers
So far, we have considered parsers which operate as precedence parsers, in the sense that, once a reduction could occur (as determined by the F machine), we would check the contents of the stack either to determine the production to use in the reduction, or to continue the forward scan.
This sequentiality of actions is clearly not necessary. Since the D machine, when called, only inspects a bounded amount of tape (not more than one plus the length of the longest right-hand side of any production), we can construct a (definite) machine which can operate in parallel with the F machine and which performs the checking that D does. (We will also refer to this new machine as the D machine.) In this way, the decisions are already taken when the F machine requests them. Now the parser behaves exactly as an LR parser, but since we have separated the functions in the F and D machines, the total number of states is reduced. As an example of these ideas, consider the following grammar. From the M matrix we can determine the incompatibilities. We find there are none. Thus one state is sufficient for the F machine. (In fact, G is an OR grammar, though not an SMSP grammar.)
The F machine is obtained directly from the matrix of (unlabelled) precedence relations. We should point out here that, although not explicitly mentioned, a similar decomposition technique appears in [12].

Conclusions
Keeping track of the possible productions which can be in use at any one time during the operation of a precedence parser can significantly enlarge the class of grammars to which it applies.
We have shown how to obtain such parsers and given some ideas about their relative power. An additional feature over conventional precedence parsers is the improved error-detection capability. The fact that we have more than one state during the detection phase allows the parser to discover errors before they would be detected by conventional precedence parsers. In fact, these parsers look very much like LR parsers, but are easier to obtain, and they are considerably smaller. By "reversing" the machine which decides which reduction to perform, we were able to get parsers which are equivalent to LR parsers obtained using error-postponement techniques [7] but, again, at a substantial saving in the number of states. More work is needed concerning this method of LR decomposition.
2) X is equal to Y under α₃, which we will write as [α₃]: X ≐ Y, if α₃ = {i | i: A → μXYν} is nonempty.
3) X is greater than Y under α₄, which we will write as [α₄]: X ⋗ Y, if Y ∈ V_T, ∃i ∈ I, i: A → μBDν, D ⇒* Yρ, and α₄ = {j | B ⇒* σC, j: C → γX}.
Notice that, ignoring the labelling, the relations are defined as in [1]. Example 2 shows a grammar together with a matrix of labelled relations. Example 2: Let G be defined by the productions S₁: S → bZg, S₂: S → crY, Y: Y → ag, Z: Z → ra.

(We have listed the elements of the sets α_i instead of using the usual set notation.) Let the (X,Y) entry of M contain [α₁;α₂]: X ⋖ Y and [α₃]: X ≐ Y. (There may be more than one label of the form [α₁;α₂] for the ⋖ relation.) We then define a state t incident to node Y as s ∩ α₃ together with the set of all indices of productions in α₂ such that s ∩ α₁ ≠ ∅.
The stack will have two channels, subsequently referred to as #1 and #2. Let ψ₁ ∈ (V ∪ {⊥})*, ψ₂ ∈ Q_F*, |ψ₁| = |ψ₂|, be the contents of the stack at some point in the computation. (Thus the F machine is in state s incident to node X.) Let Y be the next input symbol (normally this is the next symbol in the input string). Let [α₄]: X ⋗ Y. If s ∩ α₄ = ∅, a shift is performed. This consists in changing state to the successor state t of s and pushing onto the stack the symbol Y on the first channel and t on the second. If s ∩ α₄ ≠ ∅ we say that a potential conflict occurs. The set of all productions whose indices are in s ∩ (α₄ ∪ α₃ ∪ α₁), for all α_i, is made available to the D machine, which (hopefully) will give a unique decision of what to do.

[α₁;α₂], [α₃], [α₄] (there may be many labels of type [α₁;α₂]). Then δ_F(s,(X,Y)) = (s ∩ α₃) ∪ ⋃{α₂ | s ∩ α₁ ≠ ∅} if s ∩ α₄ = ∅, and δ_F(s,(X,Y)) = D otherwise (D in the range of δ_F is interpreted as a call to machine D). The empty state is interpreted as an error indication. The transition function for the unrestricted F machine is δ'_F(s,(X,Y)) = (s ∩ α₃) ∪ ⋃{α₂ | s ∩ α₁ ≠ ∅}. Let us compute the machines F and D for the grammar in Example 2. When a call to the D machine is given, the set of all i such that φi ∈ s ∩ (α₄ ∪ α₁ ∪ α₃) is given. The D machine can be represented as a forest where the root of each tree is labelled by an element l of L and the corresponding tree represents all right-hand sides of productions i such that φi = l. In this case, L = I and φ is 1-1, so there is one tree for each production. Once we have determined all incompatible pairs of productions we will define a new set L and a new function φ such that if i₁ and i₂ are incompatible then φi₁ ≠ φi₂. (In other words, we are defining an equivalence relation on I.) Note that a call to D occurs whenever there is an entry in the matrix M containing a relation ⋗. The incompatibilities are defined below. Let ≇ denote incompatibility between productions.
1) A_j ≇ C_k if ∃X, Y such that (C_k; B_j): X ⋖ Y, (A_j): X ≐ Y, there are productions A_j: A → μXYβZν and B_j: B → YβZν, and (A_j): Z ⋗ W for some W ∈ f₁*ν, or ν = Λ and ∃W such that (A_j, B_j): Z ⋗ W.
2) C_k ≇ D_m if there are productions A_i: A → YβZν, B_j: B → YβZ, there is V such that (C_k; A_i): V ⋖ Y and (D_m; B_j): V ⋖ Y, and (B_j): Z ⋗ W for some W ∈ f₁*ν, or ν = Λ and ∃W such that (A_i, B_j): Z ⋗ W.

Theorem 3: For any n, the class of n-state labelled grammars with independent left and right context is properly included in the class of SLR(1) grammars [6].
Proof: Given the set Q₀ of sets of LR(0) items for a grammar and the set Q_F of states of the unrestricted F machine, we can define a mapping h from Q₀ to Q_F as follows: h(S₀) = {0}. Let S_i be a set of LR(0) items. For each symbol Y ∈ V we can partition S_i into 5 sets, S_i = S_i¹ ∪ S_i² ∪ S_i³ ∪ S_i⁴ ∪ S_i⁵, where S_i¹ = {A → αX.Yβ}, S_i² = {A → αX.Zβ | Z ≠ Y}, S_i³ = {A → αX.}, S_i⁴ = {A → .Yβ}, S_i⁵ = {A → .Zβ | Z ≠ Y}. If h(S_i) = q_i then h(δ(S_i,Y)) = δ'(q_i,(X,Y)), where δ' is the transition function of the unrestricted F machine and δ(S_i,Y) = S_j is the set of LR(0) items obtained as GOTO(S_i,Y) (see [7] for undefined terms). Now we make the following claim. Claim: If S_j is a set of LR(0) items partitioned as above, then h(S_j) contains the indices of all productions in S_j¹ ∪ S_j² ∪ S_j³.
From [S₁,S₂]: ⊥ ⋖ a, [S₁,S₂]: a ≐ b and [S₁;A]: b ⋖ d, [S₂;B]: b ⋖ d and [A,B]: d ⋗ ⊥, we have that the F machine calls the D machine when in state {A,B} and reading symbol (d,⊥).
(C_j): X ≐ Y and either (C_j): Y ≐ Z or (C_j): Y ⋗ Z, or 2) (C_j; D_k): X ⋖ Y and (D_k): Y ≐ Z for some production D_k. Let A_j: A → δ be a production and P(A) = {B | B ⇒⁺ A}. We say that A is a valid reduction for δ within symbols X and Z and state s if 1) (C_j; A_j): X ⋖ f₁δ for some C_j ∈ s, and 2) ∃Y ∈ {A} ∪ P(A) such that Y is adjacent to symbols X and Z within the context of production C_j. Note that we can check the condition of valid reduction by inspecting the matrix M. As the following lemma shows, we get information about the possible simultaneous left and right context in which a nonterminal may appear. Lemma 1: Let C_j: C → ψXc, with ψ ∈ V*, c ∈ V⁺. Let S ⇒* αγXcβ ⇒* αγXYc'β ⇒* αγXYZc''β, with α, β, c', c'' ∈ V* (but Z ∈ f₁*(c'β)), for some Y ∈ P(A) such that P(Y) = ∅. Then A is a valid reduction for δ within symbols X and Z and some state s such that C_j ∈ s. Proof: We know C ⇒ ψXc ⇒* ψXYc'. There are two cases: c = Yc' or c ⇒* Yc' with c ≠ Yc' (since P(Y) = ∅). In the first case, (C_j): X ≐ Y. Also, either Z ∈ f₁*(c') or c' = Λ and Z ∈ f₁*(β). Then, either (C_j): Y ≐ Z or (C_j): Y ⋗ Z. If c ≠ Yc' then ∃D_j: D → Yρ such that c ⇒* Dρ' ⇒* Yρρ' ⇒* Yc' with ρρ' ≠ Λ. Then Z ∈ f₁*(ρ), so (C_j; D_j): X ⋖ Y and (D_j): Y ≐ Z. In either case, Y is adjacent to symbols X and Z within the context of C_j. Since Y ⇒⁺ A we have (C_j; A_j): X ⋖ f₁δ, where A_j: A → δ. Thus conditions 1) and 2) of Definition 5 are satisfied. | We are now in a position to specify another class of parsers, by changing the D machine.
Since the contexts checked extend one symbol to each side of the right-hand side of a production, we have that we are within the (1-1)BRC. The following grammar is (1-1)BRC but not in the class of labelled precedence grammars considered: S → aAbAc | aBc, B → d. It thus remains to be shown that any FOR grammar which is (1-1)BRC is in this class.
(A don't-care entry is shown as -. An error entry is shown as x.) The following example shows a sequence of configurations taken by the parser when given an input string. Since F has 1 state we do not show it on the stack. The state of D appears as a second component. Had the symbol "a" not been there, the last two configurations would have been different. It is interesting to note that this grammar has an 18-state LR(1) parser (constructed a la Knuth), a 14-state parser (using Korenjak's method [11]), and a 10-state SLR(1) parser. By allowing the parser to postpone error detection (as the one above does), Aho and Ullman constructed a 7-state parser [7]. We have shown that using decomposition techniques one can get a 1+5-state parser for this grammar. Because of the simple way the F and D machines are determined, this decomposition technique appears quite useful.
Definition 3: Let δ ∈ V⁺. We denote by f_k an operator such that f_k δ is the longest prefix of δ of length ≤ k. We denote by f_k* an operator such that f_k* δ = {f_k β | δ ⇒* β}. Similarly we define l_k δ for suffix strings.
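These operators can be sketched directly. Since the original definition of the starred form is garbled, I assume here that it maps f_k over an explicitly given set of strings (e.g., the strings derivable from δ); the names are mine.

```python
# Sketch of the operators of Definition 3: f_k takes the longest prefix of
# length <= k, l_k the corresponding suffix, and the starred version maps
# f_k over a set of strings (assumed representation of the derivable strings).
def f_k(s, k):
    return s[:k]                       # longest prefix of length <= k

def l_k(s, k):
    return s[-k:] if len(s) >= k else s  # longest suffix of length <= k

def f_k_star(strings, k):
    return {f_k(s, k) for s in strings}
```

For instance, f₁ applied to a right-hand side picks out the symbol used by the "leads into" test, and l_n applied to channel 1 of the stack exposes the candidate handle plus the symbol to its left.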
Let (Z,s) be an interior symbol of a 2-channel stack (i.e., the stack is ψ = (ψ₁,ψ₂), |ψ₁| = |ψ₂| ≥ n, and for some n > 1, f₁lₙψ₁ = Z, f₁lₙψ₂ = s). Let i: A → δ be the production whose index is i. If [α₁; α₂]: Z ⋖ f₁δ, s ∩ α₁ ≠ ∅, and i ∈ α₂, we say that (the distinguished occurrence of) Z leads into production i. If ∃n > 1 such that lₙψ₁ = Zδ' and (the distinguished occurrence of) Z leads into production i, then (the distinguished occurrence of) δ' is a valid expansion of production i. If [α₁;α₂]: X ⋖ Y or [α₃]: X ≐ Y and, for some state s, s ∩ (α₁ ∪ α₃) ≠ ∅, then we will say that X leads into Y under s. We will write [s]: X → Y. If i ∈ α and [α]: X ≐ Y we will sometimes write (i): X ≐ Y. A similar convention holds for the other labels. Now we can give a definition for the D machine. The D machine is specified as follows: a: if ∃i, φi ∈ s ∩ α₄, i: A → βX, n = |βX| + 1, lₙψ₁ = ZβX and Z leads into i, then "reduce i";