A linear logical framework

We present the linear type theory LLF as the formal basis for a conservative extension of the LF logical framework. LLF combines the expressive power of dependent types with linear logic to permit the natural and concise representation of a whole new class of deductive systems, namely those dealing with state. As an example we encode a version of Mini-ML with references, including its type system, its operational semantics, and a proof of type preservation. Another example is the encoding of a sequent calculus for classical linear logic and its cut elimination theorem. LLF can also be given an operational interpretation as a logic programming language, under which the representations above can be used for type inference, evaluation, and cut elimination.


INTRODUCTION
A logical framework [27,37] is a formal meta-language specifically designed to represent and reason about programming languages, logics, and other formalisms that can be described as deductive systems. These frameworks consist of a meta-representation language with desirable computational and representational properties, normally a logic or a type theory, and of a meta-representation methodology that suggests how best to take advantage of the underlying meta-language to encode a given formal system. The logical framework LF [27] is among the most successful such proposals: it is based on the dependent type theory λΠ, relies on the judgments-as-types representation methodology, and has been implemented as the higher-order constraint logic programming language Elf [38,41]. LF and Elf have been widely used to study logical formalisms [43] and programming languages [33,39] (see [44] for a survey).
Unfortunately, many constructs and concepts needed in common programming practice cannot be represented in a satisfactory way in meta-languages based on intuitionistic logic and intuitionistic type theory, such as LF. In particular, constructs based on the notion of state as found in imperative languages often escape an elegant formalization by means of these tools. Similarly, logical systems that, by definition (e.g., substructural logics) or by presentation (e.g., Dyckhoff's contraction-free intuitionistic sequent calculus [17]), rely on destructive context operations require awkward encodings in an intuitionistic framework. Consequently, the adequacy of the representation is difficult to prove and the formal meta-theory quickly becomes intractable.
Linear logic [21] provides a view of context formulas as resources, which can be exploited to model the notion of state, as described for example in [12,28,34,53]. The current proposals put the emphasis on the issue of representing imperative constructs and resource-based logics, but appear inadequate for reasoning effectively about these representations. For example, the linear specification formalism Forum [34] has been used to give an immediate representation of the semantics of imperative programming languages [12]; however, imperative computations are not effectively representable in this formalism and therefore meta-theoretic properties have not been encoded. On the other hand, intuitionistic type-theoretic frameworks such as LF make the representation of meta-reasoning easy, but do not have any notion of linearity built in. For example, the computations of the functional programming language Mini-ML can easily be expressed in LF, which permits automating the meta-theory of that language [33]. However, LF is not equipped to handle imperative computations as effectively, causing the meta-reasoning task to become a major challenge [42].
In this paper, we propose a conservative extension of the logical framework LF that permits representing linear objects and reasoning about them. We call this formalism Linear LF or LLF for short. The language underlying LLF is the dependent type theory λΠ−•&⊤, which extends LF's λΠ with the linear connectives −• (linear implication), & (additive conjunction), and ⊤ (additive truth), seen in this setting as type constructors. The language of objects of λΠ−•&⊤ is consequently extended with linear functional abstraction, additive pairs and unit, the corresponding destructors, and their equational theory. In order to keep the system simple, we restrict the indices of type families to be linearly closed, so that a type can depend only on intuitionistic assumptions, but not on linear variables. While at first this may appear to be a strong restriction, the expressive power of the resulting system does not seem to be hindered by this limitation.
The meta-representation methodology of LLF extends the judgments-as-types technique adopted in LF with a direct way to map state-related constructs and behaviors onto the linear operators of λ & . The resulting representations retain the elegance and immediacy that characterize LF encodings and the ease of proving their adequacy. LLF has so far been used to encode the syntax of linear logic, sequent calculus, and natural deduction presentations of its semantics, imperative programming languages and their operational behavior, and a number of state-based games. We have also applied LLF to formalize aspects of the meta-theory of these systems such as the proof of cut elimination for classical linear logic, translations between linear natural deduction and sequent calculus, and properties of imperative languages such as type preservation [6].
The principal contributions of this paper are (1) the definition of a uniform type theory admitting linear entities in conjunction with dependent types; (2) a thorough meta-theoretical investigation of this framework; and (3) the use of this system as a logical framework to represent and reason about problems that are not handled well by previous formalisms, either linear or intuitionistic. To our knowledge, λΠ−•&⊤ is the first example in the literature of a linear type theory with dependencies. The case of simple types has been analyzed for example in [1,2,5,32]. Subsequent work along the same lines of thought has been proposed by Ishtiaq and Pym in [30]. Both type theories were inspired by ideas in [35].
The paper is organized as follows. Section 2 describes the linear type theory λΠ−•&⊤ on which LLF is founded. It also presents major results in its meta-theory, such as the strong normalization theorem and the decidability of verifying whether a term has a given type (type-checking) and of computing a type for a given term (type synthesis). Finally, it describes a canonical formulation of this language, which forms the basis for the meta-representation methodology adopted in LLF. Section 3 demonstrates the expressive power of LLF as a logical framework by providing an encoding of the syntax and the semantics of an imperative programming language and by showing how to take practical advantage of the resulting representation of computations. Finally, Section 4 assesses the results and outlines future work. Further details about the work presented in this paper can be found in [6].

THE LINEAR TYPE THEORY λΠ−•&⊤
In this section, we define the linear type theory λΠ−•&⊤ on which LLF is founded. More important, we present the major results in its meta-theory that justify adopting it as a meta-representation language. In order to facilitate the description of LLF in the available space, we must assume that the reader is familiar with both the logical framework LF [27] and various presentations of linear logic [21,22] and linear λ-calculi [1,2]. We will also take advantage of the natural extension of the Curry-Howard isomorphism to linear logic by viewing types as formulas. Due to space constraints, we limit our discussion to the main results in the meta-theory of LLF and, moreover, present only sketches of their proofs. The interested reader is invited to consult [6] for further details about the presentation and the proofs in this section.
The discussion proceeds as follows: we first describe the syntax of λΠ−•&⊤ in Section 2.1. Then, in Section 2.2, we introduce its semantics as a precanonical typing system, where typable terms are expected to be in η-long form, although they may contain β-redices. In Section 2.3, we focus our attention on the equational theory of this language. We present some basic properties in Section 2.4 and prove strong normalization for this system in Section 2.5. In Section 2.6, we exploit this result to simplify the precanonical presentation of the semantics of λΠ−•&⊤ into an equivalent algorithmic system, which allows easy proofs of the decidability of type-checking and type synthesis. These properties are presented in Section 2.7. On the basis of these results, we devise in Section 2.8 a canonical system for λΠ−•&⊤ whose only typable terms are both η-long and β-normal. The way λΠ−•&⊤ is used as the language of the logical framework LLF relies on this formulation. Finally, in order to simplify the treatment of the case study in the next section, we extend the concrete syntax of Elf, the major implementation of LF, to the linear operators of λΠ−•&⊤ in Section 2.9.

Language and Basic Operations
The linear type theory λΠ−•&⊤ underlying LLF extends the language λΠ of the logical framework LF with three connectives from linear logic, seen in this context as type constructors, namely multiplicative implication (−•), additive conjunction (&), and additive truth (⊤). The language of objects is augmented accordingly with the respective constructors and destructors. Linear types manipulate linear assumptions, which we represent as distinguished declarations of the form u :̂ A in the context; we write x : A for context elements à la λΠ and call them intuitionistic assumptions. The syntax of λΠ−•&⊤ is given by the following grammar, where we have separated the constructs not present in λΠ with a double bar (‖):

  Kinds:         K ::= Type | Πx:A. K
  Type families: A ::= a | Πx:A. B | A M  ‖  A −• B | A & B | ⊤
  Objects:       M ::= c | x | λx:A. M | M N  ‖  λ̂u:A. M | MˆN | ⟨M1, M2⟩ | FST M | SND M | ⟨⟩
  Contexts:      Γ ::= · | Γ, x : A  ‖  Γ, u :̂ A
  Signatures:    Σ ::= · | Σ, a : K | Σ, c : A

Here x, c, and a range over object-level variables, object constants, and type family constants, respectively. We adopt the convention of denoting linear variables with the letter u, possibly subscripted; we will however continue to write x for generic variables. In particular, we write x :: A for a context assumption whose exact nature (linear, x :̂ A, or intuitionistic, x : A) is unimportant. In addition to the names displayed above, we will often use N and B to range over objects and types, respectively. Moreover, we denote generic terms, i.e., objects, types, or kinds, with the letters U and V, possibly subscripted. As usual, we write A → U for Πx : A. U whenever x does not occur in the type or kind U. Finally, an index is an argument Mi to a type family in a base type P = a M1 . . . Mn. The notions of free and bound variables are adapted from LF (notice the presence of a new binding construct: linear λ-abstraction). We denote with FV(U) the free (linear or intuitionistic) variables of a term U. We extend this notation to contexts and write FV(Γ) to denote the union of FV(A) for all x :: A in Γ.
As usual, we identify terms that differ only by the names of their bound variables and write [M/x]U for the capture-avoiding substitution of M for x in the term U; note that x can be either linear or intuitionistic. We extend this notation to contexts and write [M/x]Γ for the result of substituting M for x in the type of every assumption in Γ. Finally, we require variables and constants to be declared at most once in a context and in a signature, respectively.
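Capture-avoiding substitution can be made concrete with a small sketch. The following Python fragment (an illustration only — the term encoding and names are ours, not the paper's) implements [M/x]U on a toy untyped λ-calculus, renaming a bound variable whenever it would capture a free variable of the substituted term:

```python
import itertools

# Toy terms: ("var", x), ("lam", x, body), ("app", f, a).
# subst(M, x, U) computes [M/x]U, renaming bound variables to avoid capture.
fresh = (f"_v{i}" for i in itertools.count())

def free_vars(t):
    tag = t[0]
    if tag == "var":
        return {t[1]}
    if tag == "lam":
        return free_vars(t[2]) - {t[1]}
    return free_vars(t[1]) | free_vars(t[2])

def subst(m, x, u):
    tag = u[0]
    if tag == "var":
        return m if u[1] == x else u
    if tag == "app":
        return ("app", subst(m, x, u[1]), subst(m, x, u[2]))
    y, body = u[1], u[2]
    if y == x:                    # x is shadowed: nothing to substitute
        return u
    if y in free_vars(m):         # rename the binder to avoid capturing it
        z = next(fresh)
        body = subst(("var", z), y, body)
        y = z
    return ("lam", y, subst(m, x, body))

# [y/x](λy. x) must rename the binder rather than capture y:
print(subst(("var", "y"), "x", ("lam", "y", ("var", "x"))))
```

The printed result is an abstraction over a fresh name whose body is the free variable y, as the capture-avoidance condition demands.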
In the following discussion, we will as usual drop the leading empty sequence (·) from the representation of a context. Similarly, we overload the context constructor (,) and use it to denote sequence concatenation as well. We do not state or prove the usual properties of this operation. Whenever we concatenate two contexts Γ1 and Γ2, we assume they do not declare common variables, so that the resulting context (Γ1, Γ2) contains just one assumption for each declared variable. We denote with dom(Γ) the domain of context Γ, defined as the set of variables declared in it, and write Γ|χ for the restriction of Γ to the variables appearing in χ.
Below we will often need to refer to the intuitionistic part of a context Γ. Therefore, we introduce the function Γ̄, which erases the linear declarations of Γ while keeping the intuitionistic ones:

  (·)̄ = ·        (Γ, x : A)̄ = Γ̄, x : A        (Γ, u :̂ A)̄ = Γ̄

We overload this notation and use Γ̄ to express the fact that the linear portion of the denoted context is constrained to be empty (e.g., in all the rules for type families in Fig. 3).
Multiplicative connectives in linear logic require the context to be split among the premises of a binary rule (or the contexts in the premises to be merged in the conclusion, depending on the point of view). We rely on the context splitting judgment Γ = Γ1 ✶ Γ2 to specify that the linear assumptions in a context Γ must be distributed between the contexts Γ1 and Γ2, while the intuitionistic assumptions should be shared. Whenever this is the case, the judgment Γ = Γ1 ✶ Γ2 is derivable. The rules in Fig. 1 define this judgment.
Notice that, whenever the judgment Γ = Γ1 ✶ Γ2 is derivable, Γ1 and Γ2 differ from Γ only by missing linear assumptions. In particular, the relative order of the declarations still mentioned in these contexts corresponds to the order in which they occur in Γ. We anticipate that assumptions, either intuitionistic or linear, cannot depend on linear variables in λΠ−•&⊤. Therefore, splitting a context that is valid according to the specifications in the next section yields two valid contexts. Similarly, merging valid contexts having distinct names for their linear variables produces a valid context.
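To make the bookkeeping concrete, here is a small Python sketch (our own illustration; the list representation of contexts is hypothetical) that enumerates every split of a context: intuitionistic declarations are copied to both sides, each linear declaration goes to exactly one side, and declaration order is preserved:

```python
from itertools import product

# A context is a list of (name, type, kind) triples, kind in {"lin", "int"}.
# splits(ctx) enumerates all pairs (ctx1, ctx2) such that ctx splits into
# ctx1 and ctx2: intuitionistic assumptions are shared, each linear one
# goes to one side, and the relative order of declarations is preserved.
def splits(ctx):
    linear = [i for i, (_, _, k) in enumerate(ctx) if k == "lin"]
    for choice in product((0, 1), repeat=len(linear)):
        side = dict(zip(linear, choice))
        ctx1 = [d for i, d in enumerate(ctx) if d[2] == "int" or side[i] == 0]
        ctx2 = [d for i, d in enumerate(ctx) if d[2] == "int" or side[i] == 1]
        yield ctx1, ctx2

ctx = [("x", "A", "int"), ("u", "B", "lin"), ("v", "C", "lin")]
print(len(list(splits(ctx))))  # 4: each of u and v goes either left or right
```

With one shared intuitionistic assumption and two linear ones, the four splits correspond exactly to the choices of side for u and v.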

Precanonical Forms
The meaning of the syntactic entities of a language can be presented in various forms, the choice being dictated by the aspects we want to emphasize. In this section, we define the semantics of λΠ−•&⊤ by means of a deductive system that we call precanonical, which forces derivable terms to be in η-long form, although they might contain β-redices. The aim of the present section is to study the main properties of the type theory underlying LLF, ultimately the decidability of type-checking. Relying on a precanonical system is particularly convenient for this purpose since it cleanly separates the practical desideratum of having extensionality as part of our language, commonly expressed by means of η-conversion rules, from the role of β-redices as the foundation of the equational theory of λΠ−•&⊤.
The main properties of LF were originally stated and proved for a type theory that did not enforce extensionality, but whose notion of definitional equality was restricted to β-equivalence [27]. However, the adequacy theorems that relate an object system to its LF encoding and the efficient implementation of this formalism as a logic programming language require considering λΠ terms in canonical form. Therefore, the type theory that is used as a meta-representation language is based on βη-equivalence. This discrepancy was known to Harper et al. when they first presented LF in 1987 [27]. A full treatment of the meta-theory of LF with βη-equivalence was subsequently devised by various authors [14,20,49] and resulted in nontrivial complications.
The formulation of the semantics of LLF as a precanonical system has the advantage of forcing all derivable judgments to mention only terms in η-long form, as formally expressed in Lemma 2.11. Indeed, all the issues concerning η-conversion are hardwired into the system and do not require explicit treatment. Consequently, the terms that we will ultimately produce are exactly the βη-long terms needed in the adequacy theorems. That property allows us to focus on β-conversion and in particular to retain the simple techniques used in [27], without the above-mentioned anomaly. Our approach was inspired by Felty's canonical LF [18].
The precanonical system for LLF is specified by means of a number of judgments. We first have seven judgments defining precanonical terms and the auxiliary notion of preatomic expressions, as well as the natural extension of these concepts to contexts and signatures. The inference rules describing how to derive them are distributed over Figs. 2 and 3. These rules will be discussed in detail shortly. The shape of these judgments is reported here:

  (i)   ⊢p Σ sig          (Σ is a valid signature)
  (ii)  ⊢p Γ ctx          (Γ is a valid context)
  (iii) Γ ⊢p M ⇑ A        (M is a precanonical object of type A)
  (iv)  Γ ⊢p M ↓ A        (M is a preatomic object of type A)
  (v)   Γ̄ ⊢p A ⇑ Type     (A is a precanonical type)
  (vi)  Γ̄ ⊢p A ↓ K        (A is a preatomic type family of kind K)
  (vii) Γ̄ ⊢p K ⇑ Kind     (K is a precanonical kind)

Note that the judgments referring to types and kinds operate on purely intuitionistic assumptions, expressed by using the notation Γ̄ for their context. We will gain in clarity and conciseness in the following by relying on some abbreviations. Whenever the same property holds for each of the judgments (iii–vii) when applied to terms of the appropriate syntactic category, we write Γ ⊢p U ⇑↓ V and then refer to the generic terms U and V if needed. Moreover, if two or more such expressions occur in a statement, we assume that the arrows of the actual judgments match, unless explicitly stated otherwise. We take the liberty of adopting this notation also in the case of kinds, even though Kind is not a term and there are no preatomic kinds. Whenever the judgment Γ ⊢p U ⇑↓ V has a derivation P, a fact that we will sometimes write as P :: (Γ ⊢p U ⇑↓ V), we will often refer to U as the term being validated in this judgment, and call P a validation of U.
The notion of definitional equality we consider is β-equivalence, and it operates at the level of objects, type families, and kinds. Among the various possible presentations, we adopt parallel nested reduction (→), defined in Fig. 4 and discussed shortly. We write →* and ≡ for its transitive closure and the corresponding equivalence relation. We omit the obvious rules defining them. Cumulatively, we have the following nine judgments:

  M → M′     A → A′     K → K′      (parallel nested reduction)
  M →* M′    A →* A′    K →* K′     (transitive closure)
  M ≡ M′     A ≡ A′     K ≡ K′      (definitional equality)

We need one further ingredient to cope with the multiplicative type constructor −•, namely the context splitting judgment presented in the previous section and defined in Fig. 1.
We will now go through the rules defining λΠ−•&⊤ and describe the main ideas behind this formulation of the semantics of our language. We first concentrate on the typing judgments in Figs. 2 and 3 and then consider the notion of definitional equality, founded on the reduction relation in Fig. 4, which is discussed in the next section.
A term M of type A is in η-long form, or precanonical, if it is structured as a sequence consisting solely of constructors (abstractions, pairing, and unit) that matches the structure of the type A, applied to preatomic terms in those positions where objects of base type are required. A preatomic term consists of a sequence of destructors (applications and projections) that ends with a constant, a variable, or another precanonical term, where the argument part of each application is also required to be precanonical. Note that this allows β-redices. This definition extends the usual notion of η-long forms of λΠ to the linear type operators −•, &, and ⊤ of λΠ−•&⊤ without insisting on β-normal forms.
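The way the type dictates the constructor skeleton can be illustrated operationally. The following Python function is a sketch under a toy encoding of our own (covering only base types, −•, &, and ⊤): given a preatomic head R and a type A, it wraps R in exactly the constructors demanded by A, stopping at base types:

```python
import itertools

# Toy types: ("base", a) for base types, ("limp", A, B) for A −• B,
# ("with", A, B) for A & B, and ("top",) for ⊤ (encoding is ours).
# eta_long(R, A) produces the η-long term whose constructor skeleton is
# dictated by A, with R (suitably projected/applied) at the base positions.
fresh = (f"u{i}" for i in itertools.count())

def eta_long(r, a):
    tag = a[0]
    if tag == "base":
        return r                        # base type: the preatomic term itself
    if tag == "top":
        return ("unit",)                # ⟨⟩ : ⊤, regardless of r
    if tag == "with":                   # ⟨η(FST r), η(SND r)⟩
        return ("pair", eta_long(("fst", r), a[1]),
                        eta_long(("snd", r), a[2]))
    u = next(fresh)                     # tag == "limp": λ̂u. η(r ˆ η(u))
    arg = eta_long(("lvar", u), a[1])
    return ("llam", u, eta_long(("lapp", r, arg), a[2]))

# η-long form of a variable N at type a & (a −• a):
A = ("with", ("base", "a"), ("limp", ("base", "a"), ("base", "a")))
print(eta_long(("lvar", "N"), A))
```

The result is a pair whose first component is the projection FST N and whose second component is a linear abstraction applying SND N to the bound variable — the skeleton is fully determined by the type, as the text observes.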
It is characteristic of η-long forms that the type alone determines the structure of a term until we reach a base type, so, for example, any η-long term N of type A = a & (a −• a) will have the form ⟨M1, λ̂u : a. M2⟩, where M1 and M2 are preatomic terms of type a. Rules opc unit, opc pair, opc llam, and opc ilam allow the construction of terms of the form ⟨⟩, ⟨M1, M2⟩, λ̂u : A. M, and λx : A. M, respectively. These four operators are the object constructors of our calculus. We call these inference patterns introduction rules since, if we focus our attention on their type component, they introduce each of the type constructors of λΠ−•&⊤ in their conclusion. The manner in which they handle their context is familiar from linear logic. Notice in particular that opc unit (for ⊤) is applicable with any valid context and that the premises of rule opc pair (for &) share the same context, which also appears in its conclusion. These two are therefore additive constructors, in the sense of linear logic.
Rules opc llam (for −•) and opc ilam (for Π) differ only by the nature of the assumption they add to the context in their premise: linear in the case of the former, intuitionistic for the latter. The two remaining rules defining the object-level precanonical judgment leave the term in their central part unchanged. The type conversion rule opc eq simply allows replacing the type component of a precanonical judgment with another type, under the condition that it is valid and definitionally equivalent to the original type.
Rule opc a is the coercion from preatomic to precanonical terms. It is restricted to base types P. As a result, there is exactly one rule for each type constructor and one rule for base types, if we ignore type conversion for the moment. This guarantees the property stated above, namely, that the structure of a precanonical term is determined by the structure of its type. Type conversion (rule opc eq) does not destroy this property since it affects only objects embedded as indices in base types, as will become clear shortly.
The rules defining the preatomic judgment at the level of objects, Γ ⊢p M ↓ A, are displayed in the lower part of Fig. 2. They validate constants (rule opa con) and linear and intuitionistic variables (rules opa lvar and opa ivar, respectively). They also allow the formation of the terms M N, MˆN, FST M, and SND M (rules opa iapp, opa lapp, opa fst, and opa snd, respectively), whose main operators we call destructors. The latter four inference figures are called elimination rules since they permit taking apart each of the type constructors of λΠ−•&⊤ from one of their premises, with the exception of ⊤. The role played by linear assumptions in λΠ−•&⊤ is particularly evident in these rules. Indeed, an axiom rule (opa con, opa lvar, and opa ivar) can be applied only if the linear part of its context is empty or contains just the variable to be validated, with the proper type; this is expressed by using the Γ̄ notation. Linearity appears also in the elimination rule for −•, where we rely on the splitting judgment defined in Fig. 1 to manage the context for this connective in rule opa lapp. Observe also that the context of the argument part of an intuitionistic application, in rule opa iapp, is constrained not to contain any linear assumption. Two remaining rules define preatomic derivability for the level of objects. The semantics of the equivalence rule opa eq is similar to its precanonical counterpart. The coercion from precanonical to preatomic objects, opa c, is unrestricted in its type. This means that destructors can be directly applied to constructors; that is, objects may contain redices. If we omit this rule (or restrict it to base type, which is equivalent), we obtain precisely the canonical forms, that is, those η-long forms which contain no β-redices.
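The resource discipline enforced by these rules can be sketched operationally. The Python fragment below is a miniature checker for a propositional, dependency-free fragment, in the spirit — but not the letter — of the precanonical rules: the encoding and names are ours, intuitionistic application is omitted, and ⊤ is simplified (here ⟨⟩ consumes no resources, whereas the actual rule for ⊤ lets it absorb any leftover linear assumptions, so this sketch rejects some terms that are in fact valid). Checking returns the set of linear variables consumed; additive pairs must consume the same set, while a linear application splits it:

```python
def synth(ictx, lctx, r):
    """Synthesize a type for a preatomic term; return (type, linear vars used)."""
    tag = r[0]
    if tag == "ivar":
        return ictx[r[1]], set()
    if tag == "lvar":
        return lctx[r[1]], {r[1]}
    if tag in ("fst", "snd"):
        (k, a, b), used = synth(ictx, lctx, r[1])
        assert k == "with"
        return (a if tag == "fst" else b), used
    (k, a, b), used1 = synth(ictx, lctx, r[1])   # tag == "lapp"
    assert k == "limp"
    used2 = check(ictx, lctx, r[2], a)
    assert not (used1 & used2)      # multiplicative: resources must split
    return b, used1 | used2

def check(ictx, lctx, m, a):
    """Check a canonical term against a type; return the linear vars used."""
    tag = m[0]
    if tag == "unit":
        assert a == ("top",)
        return set()                # simplified ⊤ (see the note above)
    if tag == "pair":
        assert a[0] == "with"
        u1 = check(ictx, lctx, m[1], a[1])
        u2 = check(ictx, lctx, m[2], a[2])
        assert u1 == u2             # additive: both premises share the context
        return u1
    if tag == "llam":               # m = ("llam", u, M), a = A −• B
        assert a[0] == "limp"
        used = check(ictx, {**lctx, m[1]: a[1]}, m[2], a[2])
        assert m[1] in used         # the linear variable must be consumed
        return used - {m[1]}
    t, used = synth(ictx, lctx, m)  # coercion: a preatomic term at base type
    assert t == a
    return used

base = ("base", "a")
# λ̂u. ⟨u, u⟩ : a −• (a & a): the additive pair may use u in both components.
print(check({}, {}, ("llam", "u", ("pair", ("lvar", "u"), ("lvar", "u"))),
            ("limp", base, ("with", base, base))))  # set()
```

The returned empty set means the term is closed and consumes all of its linear resources; checking λ̂u. u against a −• (a & a) instead fails, since a variable of base type cannot inhabit an additive product.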
The rules concerning linear objects in Fig. 2 define the behavior of linear types. If we ignore the objects and the distinction between precanonical and preatomic judgments, they correspond to the specification of the familiar rules for the linear connectives ⊤, &, and −•, presented in a natural deduction style. It is easy to prove the equivalence to the usual sequent formulation. The objects that appear on the left of these types record the structure of a natural deduction proof for the corresponding linear formulas. The dependent function type Πx : A. B that λΠ−•&⊤ inherits from λΠ generalizes both intuitionistic implication A → B (customarily defined as !A −• B in linear logic) and the universal quantifier ∀x.B, where A plays the role of the type of the (intuitionistic) variable x. With this interpretation, λΠ−•&⊤ encompasses all the connectives and quantifiers of the freely generated fragment of the language of linear hereditary Harrop formulas, on which the programming language Lolli is based [28]. Additionally, λΠ−•&⊤ offers the characteristic features of a type theory: higher-order functions, proof terms, and type families indexed by arbitrary objects, possibly higher-order and linear.
Admitting other linear connectives in this language is problematic since the remaining operators of linear logic would introduce in the equational theory a form of reductions known as commuting conversions, which would destroy the possibility of achieving unique normal forms. On the other hand, the semantics of the other linear connectives can be easily emulated in λΠ−•&⊤, as shown in [42].

We now turn to the judgments validating types, kinds, contexts, and signatures, treated in Fig. 3. The rules defining the precanonical judgment Γ̄ ⊢p A ⇑ Type simply specify that every valid type in the system should result from the combination of base types (rule fpc a) by means of the type constructors Π, −•, &, and ⊤ (rules fpc dep, fpc limp, fpc with, and fpc top, respectively). Notice the differences in the rules concerning the two function type constructors: the validity of A in Πx : A. B is implicitly ascertained when checking the validity of the context in the premise; instead, the type A in A −• B is to be validated explicitly since no assumption is inserted in the context. The rules for the preatomic type family judgment Γ̄ ⊢p A ↓ K simply verify that base types are syntactically well formed and respect the kind declaration of their leading type family constant (rules fpa iapp and fpa con). Notice the presence of an equivalence rule: fpa eq. Finally, the rules defining the precanonical kind judgment, Γ̄ ⊢p K ⇑ Kind, check that every type appearing in K is valid. Note that this judgment is invoked only when validating a signature. The remaining rules in Fig. 3 consider signatures and contexts. They specify that a signature is valid if the type or kind of every item declared in it is itself valid. Similarly, a context is valid if the type of each of its assumptions is valid.

In the rules in Figs. 2 and 3, types and kinds are always checked using a purely intuitionistic context. This has the effect of preventing valid types from containing free linear variables (although bound linear variables are admitted). Therefore, although the indices of type families are in general linear objects, these terms can refer only to context variables that are intuitionistic. We say that indices are linearly closed. Loosening this restriction would require admitting linear dependent function types in our language, corresponding to linear quantifiers. Preliminary investigations indicate that this would lead to tremendous complications in the typing rules of the language, not to speak of its meta-theory. For example, we could not expect a purely intuitionistic context anymore when looking up a variable, context splitting would rely on the typing judgments since a blind split might violate linear dependencies, and linear dependent types have been observed to interact in a complex manner with the other type constructors, in particular ⊤. On the other hand, very few of our examples could have taken advantage of a linear version of Π. In every case, using its intuitionistic counterpart in its place did not substantially alter the resulting representation, nor its adequacy. In conclusion, although a dependent version of −• appears beneficial for certain applications, we are led to believe that the consequent complications in the meta-theory of the language might outweigh the potential advantages.
We classify the rules in Figs. 2 and 3 into essential and nonessential rules. We count among the latter class the type conversion rules (opc eq, opa eq, and fpa eq) and the coercion opa c. All other rules are considered essential. A major task in this section of the paper will be to either hide or eliminate the nonessential rules of λΠ−•&⊤. Hiding the equivalence rules will permit an easy proof of the decidability of type-checking for this language. Showing that the rule opa c can be eliminated (in the sense that a type is inhabited with this rule if and only if it is inhabited without it) amounts to showing that canonical forms exist for all objects and types. This is a necessary condition for adopting λΠ−•&⊤ as the underlying type theory of the logical framework LLF.
We now turn to the reduction semantics of λΠ−•&⊤, partially defined in Fig. 4. The notion of definitional equality that we consider is the equivalence relation ≡ constructed on the congruence relation →. The basis of this congruence consists of the following β-reduction rules:

  (β fst)  FST ⟨M1, M2⟩ → M1
  (β snd)  SND ⟨M1, M2⟩ → M2
  (β lin)  (λ̂u : A. M)ˆN → [N/u]M
  (β int)  (λx : A. M) N → [N/x]M

As usual, we call the expressions appearing on the left-hand side of the arrow redices. The only possible redex in λΠ is β int. We adopt the standard terminology and call a term U that does not contain β-redices normal, or β-normal. This definition extends immediately to contexts and signatures. Another task of this section will be to show that every valid entity in λΠ−•&⊤ can be reduced to a normal form and that this normal form is itself valid in LLF. Finally, a term U is reducible if there is a derivation of the judgment U → V for some V.
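For illustration, the β-reductions can be implemented as a one-step reducer over a toy term encoding of our own; substitution here is deliberately naive and assumes bound names are distinct from the free names of the substituted term:

```python
def subst(n, x, m):
    """Naive [N/x]M; assumes no variable capture can occur."""
    tag = m[0]
    if tag == "var":
        return n if m[1] == x else m
    if tag in ("llam", "ilam"):       # ("llam"/"ilam", bound var, body)
        return m if m[1] == x else (tag, m[1], subst(n, x, m[2]))
    return (tag,) + tuple(subst(n, x, s) for s in m[1:])

def beta(m):
    """Return the reduct if m is a β-redex at the root, else None."""
    tag = m[0]
    if tag == "fst" and m[1][0] == "pair":
        return m[1][1]                # FST ⟨M1, M2⟩ → M1
    if tag == "snd" and m[1][0] == "pair":
        return m[1][2]                # SND ⟨M1, M2⟩ → M2
    if tag == "lapp" and m[1][0] == "llam":
        return subst(m[2], m[1][1], m[1][2])   # (λ̂u. M)ˆN → [N/u]M
    if tag == "iapp" and m[1][0] == "ilam":
        return subst(m[2], m[1][1], m[1][2])   # (λx. M) N → [N/x]M
    return None

print(beta(("fst", ("pair", ("var", "a"), ("var", "b")))))        # ('var', 'a')
print(beta(("lapp", ("llam", "u", ("var", "u")), ("var", "c"))))  # ('var', 'c')
```

The reducer is directional, as the text notes: it only rewrites left-hand sides into right-hand sides, never the converse.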

Equational Theory
We now attack the formal study of λΠ−•&⊤. We aim at proving that this type theory has all the desirable properties that a formalism should possess in order to be a suitable meta-language for a logical framework. In particular, we will show that λΠ−•&⊤ is strongly normalizing, admits unique normal forms, and that type-checking is decidable for this language. These properties rely on a large number of lemmas, and it is a challenge in its own right to organize the overall meta-theory into a linear sequence of results with simple dependencies. This organization relies crucially on the details of the formulation of the semantics of our type theory. The adoption of a precanonical system that imposes extensionality, rather than just permitting it via additional rules, simplifies the development of the meta-theory of λΠ−•&⊤ considerably. The principal consequence of this choice is that definitional equality can be based entirely on β-reduction and, more important, can be defined independently from typing, as shown in Fig. 4. Therefore, the analysis of the equational theory of λΠ−•&⊤ can be conducted in a totally self-contained manner. Indeed, the results that we will present in this section, in particular the Church-Rosser theorem, apply to arbitrary λΠ−•&⊤ terms and not just to those that are valid according to the typing rules of our language. This apparently unnecessary generality is the first step toward disentangling the meta-theory of λΠ−•&⊤. Had we relaxed extensionality by considering an equational theory containing η-conversion rules, as normally done in the literature, we would have been forced to provide mutually recursive definitions for the typing and the definitional equality judgments. In particular, the equational theory of the resulting formalism could not be studied in isolation and most of its meta-theory would collapse into one dense theorem consisting of a discouraging number of mutually dependent properties. We will come back to this point at the end of this section.
As we already anticipated, the principal result of this section will be the Church-Rosser theorem for parallel nested reduction. We will rely on this property in order to prove the uniqueness of normal forms, and therefore the decidability of the equational theory of LLF.
The proofs of the results in this section adapt the technique originally devised by Tait and Martin-Löf for the traditional untyped λ-calculus [3]. A very detailed presentation of that method, as well as its formalization in Elf, can be found in [40]. We deviate from this presentation in order to take into account all the entities of LLF that participate in the definition of parallel nested reduction. Specifically, we treat the linear constructs of our language and the new forms of β-reduction they introduce; we also need to consider types and kinds.
The parallel nested reduction strategy defined in Fig. 4 is based on the four β-reduction rules or beta fst, or beta snd, or beta lin, and or beta int. All the other rules are congruences that allow applying reductions to subterms. Notice that the β-reduction rules are directional: the expression on the left-hand side of the arrow is a β-redex, and we like to think of the expression on the right-hand side as "simpler," even if it may be larger, or contain more β-redices, than the term on the other side of the arrow.
A key result in the study of the definitional equality of λΠ−•&⊤ is that substituting a variable in a reducible term maintains its reducibility. This property is formalized and generalized in the following lemma, where R :: J is used as an abbreviation for "there is a derivation R of the judgment J."

LEMMA 2.1 (Substitution). Assume that there exists a derivation for N → N′. Then, if R :: U → U′, there is a derivation of [N/x]U → [N′/x]U′.

Proof. We proceed by induction on R in each case.
A term can contain several β-redices, and the parallel reduction strategy can reduce any of them, possibly zero or more than one. Therefore, a term U can in general reduce to a number of distinct terms U₁, …, Uₙ. However, a fundamental property of this strategy is that there always exists a common term V to which all these terms are reducible. This is known in the literature as the diamond property, and it is stated below. LEMMA 2.2 (Diamond property). If R′ :: U → U′ and R′′ :: U → U′′, then there is a term V such that U′ → V and U′′ → V.
Proof. By induction on the structure of U and inversion on R′ and R′′. Functional β-reduction is handled through the substitution lemma.
Although one run of parallel nested reduction may reduce several β-redices, it is not sufficient in general to produce the normal form of a term, even when it exists. For example, the following judgment shows on the right-hand side the simplest term to which the expression on the left-hand side can be reduced in one step: (λx : (a → a) → (a → a). x c)(λy : a → a. y) → (λy : a → a. y) c.
Notice that one further step would suffice to obtain the normal form of that expression, which is c.
In order to achieve normal forms when they exist, we need to chain parallel nested reductions by taking their transitive closure →*. Confluence extends the diamond property to →*, while the Church-Rosser theorem states that it is always possible to reduce equivalent entities back to a common term. These two properties are stated below. They follow from the diamond property by virtue of general techniques [16]. Confluence: If U →* U′ and U →* U′′, then there is a term V such that U′ →* V and U′′ →* V. Church-Rosser: If U ≡ U′, then there is a term V such that U →* V and U′ →* V.
The properties above apply to arbitrary terms, possibly ill-typed or otherwise invalid according to the precanonical system above (indeed, our example above contained the subterm λy : a → a. y, which is not η-expanded). Although definitional equality is always invoked with valid terms in the rules in Figs. 2 and 3, intermediate terms participating in an equivalence derivation might not be precanonical. We will show that it is possible to limit the intermediate terms produced during a definitional equality test to entities that are valid.
Arbitrary terms do not have in general a normal form. A classical example is the term (λx : a. x x) (λx : a. x x). This term reduces to itself and therefore it is not possible to eliminate the β-redex it contains by reduction. However, every valid term, i.e., every term that appears in a derivable typing judgment in our precanonical system, admits a normal form. Furthermore, the strong normalization theorem proved below will show that the order in which β-redices are reduced is not important.
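The looping behavior of this classical example can be observed concretely with the same nested-tuple parallel-reduction sketch used earlier (our own encoding, with type labels omitted): the term (λx. x x)(λx. x x) steps to itself.

```python
def subst(t, x, s):
    """Naive substitution; assumes all bound names are distinct."""
    if t[0] == 'var':
        return s if t[1] == x else t
    if t[0] == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, s))
    return ('app', subst(t[1], x, s), subst(t[2], x, s))

def pstep(t):
    """One parallel step contracting all visible β-redices."""
    if t[0] == 'var':
        return t
    if t[0] == 'lam':
        return ('lam', t[1], pstep(t[2]))
    f, a = pstep(t[1]), pstep(t[2])
    return subst(f[2], f[1], a) if f[0] == 'lam' else ('app', f, a)

# Ω = (λx. x x)(λx. x x) reduces to itself: the redex can never be eliminated.
delta = ('lam', 'x', ('app', ('var', 'x'), ('var', 'x')))
omega = ('app', delta, delta)
print(pstep(omega) == omega)   # True
```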
We conclude this section with a short discussion of notions of definitional equality that include rules for η-conversion. Simply adding η-rules based on the usual η-reductions, such as λx : A. M x → M whenever x does not occur free in M, is insufficient to capture the notion of definitional equality we are interested in. The situation is actually more serious than that: in the presence of a unit type, ⊤ in our case, the equational theory we need cannot be axiomatized by means of schematic reduction rules such as the above. Indeed, ⟨⟩ is the only inhabitant of type ⊤, and therefore every object of type ⊤ is η-equivalent to ⟨⟩. However, adding the rule M → ⟨⟩ is clearly unsound since it does not take typing information into account. The correct approach to this problem requires typed definitional equality judgments. The specific inference rules that handle η-expansion are called extensionality rules and only vaguely resemble the η-conversions presented above. It seems that (untyped) η-rules can serve as a foundation for the definitional equality of type theories only in strongly restricted circumstances.

Fundamental Properties
The purpose of this section is to illustrate the basic properties of the precanonical deduction system presented earlier. Many of these results are interesting by themselves since they provide insight about the type theory of LLF beyond what is apparent from the inference rules. Moreover, most of these properties will play a role in the development of the proof of the decidability of type checking for our language.
First, we summarize the principal properties regarding the occurrences of free variables; the long proof of this lemma can be found in [6]. Only a limited number of further results will mention free variables or domains explicitly. In reading this statement, recall that we can tacitly rename bound variables and that the names of all variables declared in a context are distinct. Moreover, we remind the reader that ⊢p U ⇑↓ V is an abbreviation for the precanonical or preatomic judgments at the level of objects, types, and kinds.
This lemma provides some insight about how and where free variables can appear in a derivable judgment of λ&. By (i) and (ii), all free variables occurring in a valid judgment must be declared in its context. Property (iv) specifies that the free variables in an assumption must be declared in the part of the context that is to its left; i.e., assumptions can depend only on declarations made before them. Item (iii) states instead that a signature cannot contain free variables. All these properties already hold in LF. The peculiarities of our language appear when analyzing the role of free variables that are assumed linearly. Since the contexts of the judgments for types and kinds are strictly intuitionistic, (i) entails that free linear variables are permitted only in valid terms at the level of objects. Moreover, (ii) implies that no assumption in the context can depend on a linear variable. These strict constraints are a consequence of the fact that our language permits only linearly closed expressions as indices to type families.
We now present a number of properties that can be seen as admissible rules of inference. First we have that assumptions in the context can be exchanged freely as long as they do not violate dependencies among them. More precisely, if x :: A immediately precedes y :: B and x does not occur free in B, then the relative order of these two assumptions can be exchanged. This idea is generalized in the lemma below. Permutation depends on weakening, which itself requires permutation in its proof. Therefore, we need to state and prove both properties at the same time. Notice that weakening forbids adding linear assumptions into the context.

LEMMA 2.4 (Structural properties of contexts).
Permutation : i. If P :: ( , , x :: A, p U ⇑↓ V ) and P :: ii. If P :: ( p , , x :: A, ⇑ Ctx) and P :: Weakening: Proof. By simultaneous induction on P and, in the case of permutation, on the length of .
The permutation property has important consequences for the linear assumptions in a context. As we described earlier, no assumption in a context can depend on a linear variable. Therefore, the permutation lemma allows us to shift all these variables to the end of the context Γ. Let us write Γ̂ for the linear assumptions of a context Γ. Notice that, because of possible dependencies on intuitionistic variables, Γ̂ is not necessarily a valid context. We can then write Γ as (Γ̄, Γ̂), or even as "Γ̄; Γ̂" as in recent presentations of linear logic that maintain intuitionistic and linear assumptions in different contexts (separated by ";") [22,28,45]. Furthermore, since the variables in the domain of Γ̂ cannot occur in Γ̂ itself, we are free to permute the contents of this part of the context. Therefore, Γ̂ can be treated as a multiset.
We could expect a further property, strengthening, to be part of the statement of the lemma above; the presence of this property would actually make that proof easier, but unfortunately we do not have yet the tools to prove it.
Strengthening states that, whenever a variable is declared in a context but does not occur free anywhere in a judgment, then it can safely be dropped and the judgment will still be provable. However, strengthening requires the strong normalization theorem, proved in Section 2.5. Even though a variable does not occur in a derivable judgment, it is possible that the application of one of the definitional equivalence rules produces a term containing it, so that not the judgment itself, but its derivation mentions it. These uses of equivalence are nonessential and can be removed from the derivation. However, we will be able to prove this only as a by-product of strong normalization.
Next, we present a technical result that, although of minor importance in itself, plays an important role in the statement of the adequacy theorems for the representation of an object language. Specifically, when the meta-theory of the object language expects certain objects to behave as if they were linear hypotheses, an adequate encoding would require free linear variables in the indices of base types. This is not achievable in LLF since our language does not admit linear dependent types. We bypass the problem by encoding these linear entities as intuitionistic assumptions. Linearity conditions can be checked at an earlier stage of the computation, or be kept as intrinsic invariants of the object deductive system. This technique permits us to give an effective representation to complex linear judgments without dealing with the complications of linear dependent types. Examples showing how this issue is handled in LLF can be found in [6].
The lemma below states that whenever a derivable term mentions a linear variable, we can safely make it intuitionistic. Intuitively, linear variables must be used once while intuitionistic variables can be used as many times as desired. Proof. By induction on the structure of P.
An important ingredient of the proofs of the theorems below are the lemmas that we call transitivity, following the terminology in [27]. These results permit interpreting assumptions as place-holders for unspecified derivations. Whenever a provable judgment depends on the assumption x :: A, any derivation of a term N of type A satisfying certain context conditions can be substituted into the original derivation and maintain its validity. Therefore, judgments containing assumptions can be thought of as parametric expressions. The transitivity lemmas specify how to instantiate these parameters.
These results contribute to the suitability of λ & as the meta-language of the logical framework LLF. They are the formal justification of the representation of the hypothetical and parametric judgments, so common in formal systems, as simple and dependent function types, respectively. The transitivity lemmas, together with the inversion lemma below, postulate that these operators have a semantics that mimics the behavior of those forms of judgment. LLF extends this correspondence, already present in LF, to capture hypothetical judgments where the hypotheses are linear.
The transitivity lemmas are tightly connected to several results in logic and type theory. The interpretation depends on which part of the judgments we focus our attention on. From the point of view of the λ-calculus embedded in our language, these lemmas can be seen as substitution principles since they describe how variables can be substituted into valid terms while preserving validity. In this, they are closely related to the notion of subject reduction for functional objects. From the logical perspective, under the interpretation of types as formulas, the transitivity lemmas state the admissibility of the cut rule for intuitionistic and linear formulas. Whenever a formula B relies on an assumption A, any evidence of the validity of A, possibly on the basis of further assumptions, can be included directly in an equivalent proof of B that does not mention A among its hypotheses.
Linear and intuitionistic assumptions need to be treated separately since they require a different structuring of the context. Therefore, we distinguish two transitivity lemmas. This corresponds to differentiating two substitution principles or to having a linear and an intuitionistic cut rule (see [29,42]). LEMMA 2.6 (Intuitionistic transitivity).
i. If¯ p N ⇑ A and P:: ii. If¯ p N ⇑ A and P :: Proof. By induction on the structure of P.
We now state the linear transitivity lemma. Notice that, in contrast to the intuitionistic case, the context in the resulting judgment can be larger than the contexts mentioned in either premise. Proof. We proceed by induction on the structure of P. Weakening is required in this proof.
The cumulative validity lemma below states that whenever a judgment is derivable, all entities mentioned in it are themselves valid, i.e., have derivations validating them.
ii. If P :: iii. If P :: iv. If P :: p U ⇑↓ V or P :: Proof. By induction on the structure of P. We need intuitionistic transitivity in order to handle the dependent function type constructor.
The traditional Curry-Howard interpretation associates types with formulas and terms with proofs of these formulas. Clearly, a single formula can have more than one proof, expressed in type theory by admitting several objects of the same type, possibly infinitely many. This is also consistent with the view of types as sets and terms as their elements. In the logical interpretation, we expect every proof to be the proof of a single formula. This property might not be desirable in all type theories, but it holds in the case of languages such as LF and LLF, so that objects have meaning independent of the type ascribed to them. Uniqueness, in these frameworks, is considered modulo definitional equality. Indeed, every valid λ& object-level term has a unique type. This property, which extends naturally to kinds, is essential in our proof of the decidability of type checking. It is formally stated in the following lemma. ii. If P :: Γ̄ ⊢p A ⇑↓ K and P′ :: Γ̄ ⊢p A ⇑↓ K′, where the arrows do not need to match, then K ≡ K′. Proof. By induction on the structure of P and P′. The idea is to examine these derivations from the bottom up until an introduction or an elimination rule is exposed; then we apply the induction hypothesis on the subderivations.
Given a particular instance of a judgment, the proof technique known as inversion allows identifying a limited number of inference rules whose conclusion matches this judgment. Each matching rule constitutes an alternative case and the judgments obtained by instantiation of its premises can be used in order to draw further inferences. In order to prove that the original judgment is derivable, it is sufficient to exhibit derivations for the premises of all the matching rules. This technique is general and can be applied in our system. Deductive systems having the characteristic that every rule of inference is fully determined by the shape of a particular term in its conclusion are called syntax-directed. They are particularly useful since matching this term yields a single rule. Therefore, further inferences can be drawn on the basis of its premises, without having to consider alternatives. The essential rules in the precanonical deductive system for λ & are syntax-directed with respect to the term they validate. However, the equivalences and the rules that bridge the preatomic and precanonical judgments do not change the derived term and can therefore be seen as filters or pipelines that connect these essential rules, from the standpoint of this term. A detailed analysis of these rules shows that we can indirectly recover the stronger form of inversion. This desirable property is expressed by the following lemma. ii. If P :: iii. If P :: Proof. By induction on the structure of P. All these results apply the same technique: the derivation is unfolded until an introduction or an elimination appears as its last inference rule.
The last property we present in this section is extensionality. It confirms that all terms that are valid according to the precanonical judgment of the object level are indeed precanonical, i.e., in η-long form. It forbids, for example, a constant c of compound type A to be derivable by means of a judgment of the form p c ⇑ A, whatever the signature and the context are. i. If P :: ii. If P :: iii. If P :: Proof. By induction on the structure of P.

Strong Normalization
The aim of this section is to prove strong normalization for λ & . This property implies that every valid λ & term U has a normal form NF(U ), that this normal form is unique, and that it can be obtained by performing β-reductions in arbitrary order in U . We adapt the technique originally proposed for LF in [27]. We only sketch it here. The interested reader is referred to [6] for details.
The proof proceeds via a translation of λ& into the simply typed λ-calculus with pairs, λ×→. The effect of this encoding is to eliminate dependencies and linearity, considerably simplifying the treatment of the calculus. This translation has two fundamental properties: first, it maintains well-typedness, so that valid terms of λ& are mapped to terms that are valid in λ×→; second, it preserves reductions, so that every reduction sequence in λ& corresponds to a reduction sequence in λ×→. The strong normalization theorem for λ& is then a consequence of the same property of λ×→.
The first step toward proving the strong normalization theorem is given by the following lemma. It states that derivability is closed under reduction; i.e., if a term U is valid in λ&, then every term U′ that differs from U only by the application of β-reduction steps is also valid. This property is known as subject reduction. We write U →⁺ V if U →* V and U ≠ V. Notice that the symmetric property, closure under β-expansion, does not hold in general. LEMMA 2.12 (Subject reduction). If P :: ⊢p U ⇑↓ V and R :: U →⁺ U′, then ⊢p U′ ⇑↓ V. Proof. By induction on the structure of P and inversion on R.
We do not present in detail the simply typed λ-calculus with pairs, λ×→ [51]. We overload some of the operators of λ& to indicate the analogous symbols of λ×→. For our purposes, we will need a single base type, which we denote ω. We base the equational theory of λ×→ on the one-step reduction relation →₁, which is more appropriate for our purposes than parallel nested reduction. We write →₁⁺ for its transitive closure. We will rely on some basic properties of these judgments. We do not state or prove them formally since they resemble similar properties of LLF and are well known from the literature. A further property that λ×→ enjoys is strong normalization: if M : σ, then every reduction sequence from M terminates and, by confluence, yields a unique normal form for this term. Proofs of this and stronger properties for extensions of this language can be found in the literature [19,51].
The encoding we propose transforms LLF judgments ⊢p U ⇑↓ V into λ×→ typing judgments of the form M : σ, relative to a translated signature and context. It maps the generic term U to an object M of λ×→, V to a simple type σ, the signature to a λ×→ signature, and the context to a λ×→ context. We now present the four parts that constitute this translation.
We use the function τ(·) to denote the translation of a term that appears on the right-hand side of the arrow of an LLF judgment. These terms can be types, kinds, or the symbol Kind, which we map to ω. Given a type or kind U, τ(U) is a simple type of λ×→ that maintains the structure of U but forgets dependencies and linearity. Type families are mapped to the base type ω, the additive product type constructor & of LLF is encoded as the (intuitionistic) product type constructor × of λ×→, and both function type constructors Π and ⊸ of our language are represented by the unique arrow of that calculus. Kinds are treated similarly. Specifically, we have the following definition for τ(·):

Types
Kinds A term U appearing immediately to the left of the arrow of an LLF judgment ⊢p U ⇑↓ V is mapped to a λ×→ object by means of the function |·|. U can be an object, a type, a type family, or a kind.
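The erasure τ(·) just described can be sketched in Python. The tuple constructor names are ours, and rendering the unit ⊤ as ω is an assumption of this sketch, not something fixed by the text above.

```python
# LLF types/kinds as tuples: ('base', a), ('pi', x, A, B), ('lolli', A, B),
# ('with', A, B), ('top',); kinds: ('type',) for Type, ('pik', x, A, K).
# Simple types of λ×→: 'ω', ('fun', s, t), ('prod', s, t).

def tau(U):
    tag = U[0]
    if tag in ('base', 'type'):
        return 'ω'                          # type families and the kind Type collapse to ω
    if tag == 'top':
        return 'ω'                          # our rendering of the unit ⊤ (an assumption)
    if tag in ('pi', 'pik'):                # dependency forgotten: Πx:A.B ↦ τ(A) → τ(B)
        return ('fun', tau(U[2]), tau(U[3]))
    if tag == 'lolli':                      # linearity forgotten: A ⊸ B ↦ τ(A) → τ(B)
        return ('fun', tau(U[1]), tau(U[2]))
    if tag == 'with':                       # A & B ↦ τ(A) × τ(B)
        return ('prod', tau(U[1]), tau(U[2]))
    raise ValueError(tag)

# Πx:a. ((b ⊸ c) & ⊤)  ↦  ω → ((ω → ω) × ω)
A = ('pi', 'x', ('base', 'a'),
     ('with', ('lolli', ('base', 'b'), ('base', 'c')), ('top',)))
print(tau(A))   # ('fun', 'ω', ('prod', ('fun', 'ω', 'ω'), 'ω'))
```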
The encoding of objects maps variables to variables, constants to constants, and each constructor and destructor of λ& to the corresponding operator of λ×→. The two forms of λ-abstraction of LLF must be mapped to λ×→ in a way that preserves the redices in the type label. We cope with this issue by encoding λ̂x : A. M, for example, as (λy : ω. λx : τ(A). |M|) |A|. The expected translation of the former term is λx : τ(A). |M|. We embed it in the β-redex (λy : ω. λx : τ(A). |M|) |A| in order to account for possible reductions performed in the λ& type A. This redex is vacuous since y is a fresh variable not appearing in A or M.
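The vacuous-redex trick can be sketched as follows. This is our own toy encoding: only base types and ⊸ are handled, both abstractions share one tuple form, and |·| on types applies constants π to the encoded arguments as described in the next paragraph.

```python
def tau(A):
    """Erasure of types (base types plus ⊸ only)."""
    return 'ω' if A[0] == 'base' else ('fun', tau(A[1]), tau(A[2]))

def enc_ty(A):
    """|·| on types: each operator becomes a constant π applied to encoded arguments."""
    if A[0] == 'base':
        return ('cvar', 'π_' + A[1])
    return ('app', ('app', ('cvar', 'π_⊸'), enc_ty(A[1])), enc_ty(A[2]))

def enc(M):
    """|·| on objects: λ̂x:A.M ↦ (λy:ω. λx:τ(A). |M|) |A|, so redices in A survive."""
    if M[0] == 'var':
        return M
    if M[0] == 'app':
        return ('app', enc(M[1]), enc(M[2]))
    x, A, body = M[1], M[2], M[3]          # ('lam', x, A, body), either abstraction
    inner = ('lam', 'y', 'ω', ('lam', x, tau(A), enc(body)))   # y fresh: redex is vacuous
    return ('app', inner, enc_ty(A))

identity = ('lam', 'x', ('base', 'a'), ('var', 'x'))
print(enc(identity))
# ('app', ('lam', 'y', 'ω', ('lam', 'x', 'ω', ('var', 'x'))), ('cvar', 'π_a'))
```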
The encoding of types and kinds translates each λ & operator as a constant in λ ×→ , which is applied to the encoding of the arguments. The dependent type and kind constructor requires a functional second argument since its semantics introduces assumptions in the context.
We have the following definition for this encoding function, where π , possibly subscripted, denotes constants in λ ×→ :
The encoding of contexts extends τ(·) inductively, eliminating the distinction between intuitionistic and linear assumptions. The encoding τ(·) of an LLF signature consists of two parts: a variable part defining the type of every declaration v : U in the signature as π_v : τ(U), and a fixed part declaring the types of the constants needed to represent the type and kind operators of our language. The treatment of ⊤ yields an infinite family of declarations, one for each simple type of λ×→. We have the following definitions:

Context
Signature The encoding just presented preserves well-typedness: whenever a term U is valid in λ&, the object |U| is valid in λ×→. This property is formally stated in the following lemma (Lemma 2.13). i. If P :: ii. If P :: Γ̄ ⊢p A ⇑↓ K, then |A| : τ(K) is derivable in λ×→ from the translations of the signature and of Γ̄.
iii. If P :: Proof. We proceed by induction on the structure of P.
The proposed encoding has the further property of preserving reductions. Therefore, whenever a term U reduces to U′ in λ&, the term |U| reduces to |U′| in λ×→ in at least as many steps. The extra β-redex in the representation of λ-abstraction causes individual reductions in our language to be mapped, in general, to multistep reductions in the target language. LEMMA 2.14 (Preservation of reduction sequences). If R :: U →⁺ V, then |U| →₁⁺ |V|. Proof. By induction on the structure of R.
We now have all the ingredients to prove the strong normalization theorem for λ&. A term U is normalizing if there exists a term U′ in normal form such that U →* U′. U is strongly normalizing if every reduction sequence from U eventually yields a normal term.
The strong normalization theorem states that every derivable term is strongly normalizing. This property holds in λ×→, as proved for example in [19,51], and we use this fact to prove that it also holds for λ&. THEOREM 2.2 (Strong normalization). If ⊢p U ⇑↓ V, then U is strongly normalizing.
Proof. By the adequacy of the translation (Lemma 2.13), |U| is a well-typed term of λ×→. Assume we have a (possibly infinite) reduction sequence in λ& starting from U. By reduction preservation (Lemma 2.14), there is a corresponding reduction sequence in λ×→ starting from |U|. Since the latter must be finite, the former is also finite.
The validity of strong normalization permits the derivation of a number of further properties of our language. A first result is that the normal form of a derivable term is unique. COROLLARY 2.1 (Uniqueness of normal forms). If U →* U′ and U →* U′′, with U′ and U′′ in normal form, then U′ = U′′. Proof. By confluence, there exists a term V such that U′ →* V and U′′ →* V. However, since U′ and U′′ are normal, they do not contain β-redices, and therefore U′ = V = U′′.
This property allows us to define a function NF(·) denoting the normal form of a valid term U. NF(U) is computed from U by applying β-reductions until a normal form is eventually reached. Strong normalization guarantees that this normal form arises after a finite number of steps, and the corollary above ensures that the resulting term is unique.
A further consequence of the strong normalization theorem is that the equational theory of LLF is decidable; i.e., it can be effectively decided whether there exists a derivation of the judgment U ≡ U′, for valid terms U and U′. The idea is to check whether NF(U) and NF(U′) are identical. Proof. By the Church-Rosser property, U and U′ have a common reduct U′′. By subject reduction, U′′ is valid. Therefore, by uniqueness (Corollary 2.1), U, U′, and U′′ share the same normal form NF(U).
By the strong normalization theorem, every sequence of reductions from U and U′ produces NF(U) and NF(U′), respectively, after a finite number of steps. Therefore, a possible decision procedure for definitional equality is as follows: compute the normal forms of U and U′ and then check whether they are syntactically equal. If they are, then U is definitionally equal to U′; otherwise, they are not equivalent.
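This decision procedure can be sketched directly, using the same untyped nested-tuple encoding as in our earlier sketches (our own rendering, not the paper's notation); the fuel bound stands in for the termination guarantee that strong normalization provides for valid terms.

```python
def subst(t, x, s):
    """Naive substitution; assumes all bound names are distinct."""
    if t[0] == 'var':
        return s if t[1] == x else t
    if t[0] == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, s))
    return ('app', subst(t[1], x, s), subst(t[2], x, s))

def pstep(t):
    """One parallel step contracting all visible β-redices."""
    if t[0] == 'var':
        return t
    if t[0] == 'lam':
        return ('lam', t[1], pstep(t[2]))
    f, a = pstep(t[1]), pstep(t[2])
    return subst(f[2], f[1], a) if f[0] == 'lam' else ('app', f, a)

def nf(t, fuel=1000):
    """Iterate parallel steps to a fixpoint; terminates on normalizing terms."""
    for _ in range(fuel):
        t2 = pstep(t)
        if t2 == t:
            return t
        t = t2
    raise RuntimeError('no normal form within the fuel bound')

def def_equal(u, v):
    """Definitional equality test: compare normal forms syntactically."""
    return nf(u) == nf(v)

ident = ('lam', 'y', ('var', 'y'))
redex = ('app', ident, ('var', 'c'))
print(def_equal(redex, ('var', 'c')))   # True
print(def_equal(redex, ('var', 'd')))   # False
```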
Yet another consequence of the strong normalization theorem is that every derivable λ & judgment can be constrained to mention only objects in normal form. Although not strictly needed, it is a common practice to write LLF signatures in normal form.

COROLLARY 2.3 (Normal forms).
i. If P :: Proof. We first reduce U to normal form in (i) by means of the previous corollary and then proceed by induction on the structure of P.
In an implementation of the language, converting terms to normal form as soon as β-redices appear as the result of substitutions is not always necessary. It is usually more efficient to work with weak head-normal forms, which differ from normal forms by permitting redices in the arguments of applications.
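The distinction can be sketched with the same nested-tuple λ-term encoding used in our earlier sketches (an illustrative encoding of ours): weak head reduction contracts only the redices needed to expose the head and leaves the arguments of applications untouched.

```python
def subst(t, x, s):
    """Naive substitution; assumes all bound names are distinct."""
    if t[0] == 'var':
        return s if t[1] == x else t
    if t[0] == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, s))
    return ('app', subst(t[1], x, s), subst(t[2], x, s))

def whnf(t):
    """Reduce only at the head: stop once the head is a variable or an abstraction."""
    if t[0] != 'app':
        return t
    f = whnf(t[1])
    if f[0] == 'lam':
        return whnf(subst(f[2], f[1], t[2]))
    return ('app', f, t[2])          # the argument t[2] is left untouched

ident = ('lam', 'y', ('var', 'y'))
arg_redex = ('app', ident, ('var', 'c'))     # a β-redex kept inside an argument
t = ('app', ('var', 'f'), arg_redex)
print(whnf(t))
# ('app', ('var', 'f'), ('app', ('lam', 'y', ('var', 'y')), ('var', 'c')))
```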
A final consequence of the strong normalization theorem is that rule opa c can be dropped as soon as we are only interested in valid normal terms. As we briefly motivated earlier, only the presence of this rule permits the formation of β-redices in valid terms (i.e., in the terms immediately to the left of the arrow in a precanonical or preatomic judgment). Eliminating this rule is beneficial in order to use λ & as the language of the logical framework LLF since it prevents nonnormal terms from being validated without losing any valid normal term.

Algorithmic System
A proof of the decidability of type checking for LLF is difficult to achieve directly in the precanonical system in Figs. 2 and 3. Indeed, it is not possible to predict the size of a derivation for a judgment since the rules that we called nonessential in Section 2.1 (the equivalence rules and opa c) can be chained arbitrarily. The strong normalization theorem and the admissibility of opa c limit the need for these rules drastically. In this section, we will embed the first of these results in a deductive system for λ & having the characteristic that every derivable judgment has a derivation whose size is bounded by a function of the terms constituting this judgment; we will come back to this aspect in Section 2.7. Following the terminology of [27], we call this system algorithmic. The properties of this system will also permit us to eventually prove the validity of strengthening for our language.
The algorithmic system for λ& consists of judgments similar to those of the precanonical presentation; indeed, we use the same expressions, only annotating the turnstile symbol with the letter a instead of p. The notion of definitional equality is the same, but we do not access the judgments defining the equational theory directly. We rely instead on the normalization function NF(·), which is known to exist from the previous section. We remind the reader that this function is defined only for valid terms; it will be easy to check that whenever it is used in the algorithmic system, its argument is valid.
The inference rules defining the behavior of the algorithmic system are displayed in Figs. 5 and 6. This deductive system shares with the precanonical system of Section 2.2 the property that every derivable term is in η-long form; this aspect will be a consequence of the soundness theorem below. However, the algorithmic system has the further characteristic that all terms mentioned in any well-formed derivation are themselves valid. As we said, ill-formed terms could appear in equivalence subderivations by using β-expansions. In the algorithmic system, the equivalence relation ≡ has been eliminated in favor of applications of the normalization function in the rules introducing the dependent type or kind constructor (faa iapp and oaa iapp), which involve the application of a substitution. The algorithmic system also has the property that the terms appearing on the right of the arrow are always in canonical form. We achieve this effect by normalizing types and kinds when fetching them from the signature or the context (rules faa con, oaa con, oaa lvar, oaa ivar, oca llam, and oca ilam).
The correspondence between the algorithmic and the precanonical systems is formalized by means of the following soundness and completeness theorems. First, every valid term in the algorithmic system is also valid in the precanonical formulation. THEOREM (Soundness). i. If A :: ⊢a U ⇑↓ V, then ⊢p U ⇑↓ V and V is in normal form.

ii. If A :: ⊢a Γ ⇑ Ctx, then ⊢p Γ ⇑ Ctx.
iii. If A :: ⊢a Σ ⇑ Sig, then ⊢p Σ ⇑ Sig.
Proof. We proceed by induction on the structure of A.
The completeness theorem states that every judgment having a derivation in the precanonical system is also derivable in the system presented in this section. Therefore, no valid term is lost by moving to the algorithmic system. Notice, however, that the type or kind appearing on the right-hand side of the main judgments of the precanonical system must be normalized in the algorithmic system. iii. If P :: ⊢p Σ ⇑ Sig, then ⊢a Σ ⇑ Sig.
The strict access to definitional equality, and in particular the impossibility of using it for β-expansions, permits a direct proof of the strengthening lemma in the algorithmic system. Proof. We first prove the analogous lemma for the algorithmic system by induction on the structure of the given derivations and then use the above soundness and completeness results to transfer it to the precanonical setting.

Decidability
The absence of explicit equivalences in the algorithmic system considerably limits the choice of inference rules that can be used at every step of a derivation. Every well-formed judgment matches the conclusion of at most one rule, with the only exception of judgments of the form M ↓ A, for which the coercion oaa c from canonical terms is always available. Moreover, the terms appearing in the central part of a validity judgment become smaller when going from the conclusion to the premises in all rules in Figs. 5 and 6 except fca a, oca a, and oaa c, for which they remain the same. Notice that possible cycles generated by the last two can be easily detected and removed: in practice, we restrict the oaa c coercion to types A that are not base types P, thereby avoiding cycles while retaining completeness.
In this section, we take advantage of these characteristics in order to prove the decidability in LLF of verifying whether a fully specified λ& judgment is derivable (type checking) and of computing a type or a kind for a judgment whose rightmost term is left unspecified, or declaring that no such term exists (type synthesis). Both problems need to be faced simultaneously in our language.
In order to achieve this goal, a preliminary step consists of defining a complexity measure for algorithmic judgments. This number yields an upper bound on the size of at least one of their derivations. For this purpose, we rely on a family of size functions, all denoted uniformly. We designed these functions so that the size of the conclusion of every essential rule in the algorithmic system is strictly larger than the size of each of its premises.
The size of terms, types, and kinds is defined in the upper part of Fig. 7. The numerical constants in this definition ensure that a term has larger size than its subterms. Notice that the size expressions of constructs that bind variables rely on the constant 2 rather than 1. This measure ensures that the size of the conclusion of their introduction rule is larger than the size of the premise, which mentions an extended context. This definition is extended to contexts and signatures in the central part of Fig. 7. We then combine the sizes of terms, contexts, and signatures in order to define the size of all the judgments participating in the algorithmic system, in the lower part of Fig. 7. Notice that the size of a judgment does not refer to the term appearing to the right of the arrow. This is necessary for our purposes since the size of this term in the premises of the elimination rules can in general be larger than in the conclusion.
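The role of the constant 2 at binders can be illustrated with a toy measure (our own simplified sketch, not the exact definition of Fig. 7): with cost 2 at a binder, the conclusion of an abstraction rule is strictly larger than its premise, whose context is extended by a declaration costing 1 plus the size of its type; with cost 1 the two sizes would tie.

```python
def size(t):
    """Toy size: variables cost 1, applications 1 + parts, binders 2 + label + body."""
    tag = t[0]
    if tag == 'var':
        return 1
    if tag == 'app':
        return 1 + size(t[1]) + size(t[2])
    if tag == 'lam':                  # ('lam', x, A, M): constant 2 for the binder
        return 2 + size(t[2]) + size(t[3])
    raise ValueError(tag)

def ctx_size(ctx):
    """Each declaration x :: A contributes 1 + size(A)."""
    return sum(1 + size(A) for _, A in ctx)

def judgment_size(ctx, t):
    return ctx_size(ctx) + size(t)    # the type to the right of the arrow is ignored

# Abstraction rule: conclusion Γ ⊢ λx:A.M versus premise Γ, x::A ⊢ M.
A = ('var', 'a')
M = ('app', ('var', 'x'), ('var', 'x'))
conclusion = judgment_size([], ('lam', 'x', A, M))
premise = judgment_size([('x', A)], M)
print(conclusion, premise)   # 6 5: strictly decreasing, thanks to the constant 2
```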
We designed the functions above so that the size of a judgment is an upper bound on the height of at least one of its derivations in the algorithmic system (which does not directly access the definitional equality judgments). This property is expressed by the following lemma. It is proved in two steps: first, we eliminate all sequences of rules consisting only of alternations of oca_a and oaa_c; second, we show that the size of the premises of an introduction or elimination rule is always smaller than the size of its conclusion. In each case, A has height less than 2h and contains at most 3^(2h) nodes.
We now have all the necessary ingredients to prove that the type checking problem is decidable in LLF. Given a judgment whose validity we want to decide, a first naive idea is to match it against the conclusions of the inference rules defining the algorithmic system. If none of these rules applies, then the judgment is not derivable; otherwise, we recursively check that the instantiated premises of the viable rules are derivable. The lemma above provides an upper bound on the number of rule applications that need to be considered.
Unfortunately, such a bound is not enough since the types in the premises of the elimination rules are larger than in the conclusion and would have to be guessed in a purely bottom-up strategy. However, they are determined by the signature and context using type synthesis [4]. Proving the decidability of type synthesis requires type checking in order to validate contexts, so we need to establish these two properties simultaneously.

ii. (Type synthesis) Given a signature Σ, a context Γ, and a term U, there is a recursive procedure that either computes a term V such that the judgment Γ ⊢ U ⇑↓ V is derivable, or determines that no such V exists.
Proof. We prove the analogous property for the algorithmic system and rely on the constructive aspects of the soundness and completeness theorems above to transfer it to the precanonical setting. The idea, for the algorithmic formulation of this result, is to apply inference rules that match U (and V, for type checking) until either a derivation is produced, no rule is applicable, or the upper bound on the size of the derivation at hand has been reached.
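The bounded bottom-up search described in this proof can be sketched as follows; the judgment representation, the rule format, and the way the size bound is consumed are all illustrative assumptions, not the actual LLF rules.

```python
# Sketch of bounded bottom-up proof search: try the (at most one) rule
# whose conclusion matches the judgment, recurse on its instantiated
# premises, and give up once the size bound has been consumed. Each
# rule is a pair (matches, premises) of functions.
def derivable(judgment, rules, bound):
    if bound <= 0:
        return False          # size bound reached: no derivation this way
    for matches, premises in rules:
        if matches(judgment):
            return all(derivable(p, rules, bound - 1)
                       for p in premises(judgment))
    return False              # no rule conclusion matches the judgment
```

Because the size of a judgment bounds the height of at least one of its derivations, a `False` answer under a sufficiently large bound is a genuine refutation, which is what makes the procedure a decision procedure rather than a semi-decision procedure.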
The mutually recursive parts of this theorem yield effective procedures for type checking and type synthesis. Once this result has been proved, type checking can conveniently be reduced to type synthesis: in order to check whether Γ ⊢ M ⇑↓ A is derivable, it suffices to check that A is valid, infer a type A' such that Γ ⊢ M ⇑↓ A' is derivable, and check whether A ≡ A' holds, which, by Corollary 2.2, is decidable. An application of the equivalence rules then yields Γ ⊢ M ⇑↓ A. The subproblem of checking whether A is valid is in turn reduced to finding a kind for it, without indirections this time.
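The reduction of type checking to type synthesis just described can be phrased as a small procedure; `synth`, `valid_type`, and `equiv` are assumed oracles standing in for the procedures of the theorem and the decidable equality of Corollary 2.2.

```python
# Sketch of "checking via synthesis": validate A, synthesize a type A2
# for M, and compare A and A2 up to definitional equality. The oracles
# are parameters because this illustration does not implement them.
def check(signature, context, M, A, *, synth, valid_type, equiv):
    if not valid_type(signature, context, A):
        return False
    A2 = synth(signature, context, M)       # returns None on failure
    return A2 is not None and equiv(A, A2)  # decidable by Corollary 2.2
```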
The decidability of type checking is a necessary property for using a formalism as a meta-representation language. Like LF, LLF encodes the judgments of an object language as types and their derivations as object-level terms. The decidability of type checking for λΠ⊸&⊤ permits determining effectively whether a given object represents a valid derivation in a (potentially linear) object formalism.

Canonical Forms
As in the precanonical case, the algorithmic system presented in Section 2.6 prevents terms that are not in η-long form from being validated. The only exception concerns the process of constructing a term U by means of judgments of the form U ↓ V, where U will not, in general, be η-expanded. In this section, we present a proof system that forces all entities appearing in a derivable judgment to be in normal form as well. Therefore, all valid terms will be canonical, i.e., in η-long form and without β-redices. We achieve this property by removing rule oaa_c, which, as we saw, permits the formation of β-redices in derivable terms. In this system, we also adapt the rules of the algorithmic system to make them better suited for automation. We will rely on this deductive system in the remainder of the paper.

An important consequence of eliminating oaa_c is that the resulting system is syntax-directed: every judgment matches the conclusion of at most one inference rule.
The equivalence between the algorithmic and the canonical system in Figs. 8 and 9 is expressed by means of the following soundness and completeness theorems. THEOREM 2.6 (Soundness of the canonical system).
Moreover, Σ, Γ, U, and V are in normal form.
Proof. By induction on the structure of C.
The converse of this statement is not true in general. It holds, however, whenever all the entities appearing in a derivable algorithmic judgment are in normal form. We know by Corollary 2.3 that every derivable judgment in that system is equivalent to a judgment that mentions only normal terms. Therefore, the correspondence between the algorithmic and the canonical system is not perfect: it preserves derivability but, in general, not derivations. This is, however, acceptable for our purposes since, when performing search and when encoding a deductive system, we are only interested in terms that are η-long and β-normal, i.e., canonical. THEOREM 2.7 (Completeness of the canonical system). v. If Σ ⇑ Sig is derivable in the algorithmic system, then NF(Σ) ⇑ Sig is derivable in the canonical system.

i. If Γ ⊢ M ⇑↓ A in the algorithmic system, then, if A is a base type, NF(Γ) ⊢ NF(M) ⇑↓ NF(A) in the canonical system.
Proof. We proceed by induction on the given judgments, after applying Corollary 2.3 and taking into account the admissibility of rule oaa_c, which derives from the analogous property of rule opa_c in the precanonical system.
We will adopt the system presented in this section to compare the features of LLF to those of LF, relying on a canonical system for λΠ adapted from [39]. Details can be found in [6]. We distinguish the λΠ equivalents of the judgments presented earlier by annotating them with the superscript LF.
All syntactic entities of λΠ are available in our language. This embedding is maintained at the level of judgments. The canonical system for λΠ⊸&⊤ in Figs. 8 and 9 differs from the corresponding λΠ system only by the addition of rules dealing with the linear entities of our language. Therefore, every judgment derivable in λΠ has an isomorphic derivation in λΠ⊸&⊤. iii. If C^LF :: Σ ⇑ Sig in λΠ, then Σ ⇑ Sig in λΠ⊸&⊤.
Proof. We proceed by induction on the structure of C^LF. All cases are immediate as soon as we notice that, for a λΠ context Γ, we have Γ̄ = Γ.
LLF also has the converse property of being conservative over LF; i.e., every derivable λΠ⊸&⊤ judgment that mentions only entities in the λΠ fragment of the syntax has a corresponding derivation in λΠ. This also entails that every judgment that is not derivable in LF remains so in LLF. THEOREM 2.9 (Conservativity over LF). Let Σ, Γ, U, and V be an LF signature, an LF context, and two LF terms, respectively; then i. If C :: Σ ⇑ Sig, then Σ ⇑ Sig is derivable in λΠ. ii. If C :: Γ ⇑ Ctx, then Γ ⇑ Ctx is derivable in λΠ.
Proof. We proceed by induction on the structure of C, remembering that, for an LF context Γ, we have Γ̄ = Γ.
These properties have important consequences. Not only is every judgment derivable in LF also derivable in our language but, more importantly, all the representation techniques, adequacy theorems, and examples developed for LF remain valid for LLF.

A Concrete Syntax for LLF
In this section, we extend the concrete syntax of Elf [41] to express the linear operators of LLF. In doing so, we want to fulfill two constraints. First of all, existing Elf programs should not undergo any syntactic alteration (unless they declare some of the reserved identifiers that we will introduce) if we were to execute them in an implementation of LLF relying on the new syntax; in other words, the extension we propose should be conservative with respect to the syntax of Elf. Second, we want to avoid a proliferation of operators: keeping their number as small as possible will make future extensions easier to accommodate if their inclusion appears beneficial.

The set of special characters of Elf consists of % : . ) ( ] [ } {. We extend it with two symbols: , and ^. λΠ⊸&⊤ object and type family constants are consequently represented as identifiers consisting of any nonempty string that does not contain spaces or the characters % : . ) ( ] [ } { , ^. As in Elf, identifiers must be separated from each other by whitespace (i.e., blanks, tabs, and newlines) or special characters. We augment the set of reserved identifiers of Elf (type, -> and <-) with <T>, &, -o, o-, <fst>, and <snd>. Although not properly an identifier, the symbol () is also reserved; this string is forbidden in Elf. Figure 10 associates every λΠ⊸&⊤ operator to its concrete representation. Terms in the λΠ sublanguage of LLF are mapped to the syntax of Elf. This language offers the convenience of writing -> as <- with the arguments reversed in order to give a more operational reading to a program, when desired: under this perspective, we read the expression A <- B as "A if B." We extend this possibility to linear implication, -o. Clearly, when we use o-, the arguments should be swapped: A o- B is syntactic sugar for B -o A. Figure 11 gives the relative precedence and associativity of these operators. As in Elf, parentheses are available to override these conventions.
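The desugaring of the reversed arrows just described (A <- B for B -> A, and A o- B for B -o A) can be sketched as follows; the tuple encoding of types is an assumption made only for this illustration.

```python
# Sketch of the syntactic-sugar elimination for <- and o-: both simply
# swap their arguments and become -> and -o, recursively. Types are
# encoded as nested tuples (operator, left, right) or atom names.
def desugar(ty):
    if isinstance(ty, tuple):
        op, left, right = ty
        if op == "<-":
            return ("->", desugar(right), desugar(left))   # A <- B is B -> A
        if op == "o-":
            return ("-o", desugar(right), desugar(left))   # A o- B is B -o A
        return (op, desugar(left), desugar(right))
    return ty  # an atom needs no desugaring
```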

As in Elf, a signature declaration c : A is represented by the program clause c : A. Type family constants are declared similarly. For practical purposes, it is convenient to provide a means of declaring linear assumptions. Indeed, whenever the object formalism we want to represent requires numerous linear hypotheses, it is simpler to write them as program clauses than to rely on some initialization routine that assumes them in the context during its execution. To this end, we permit declarations of the form c ^ A.
with the intent that this declaration be inserted in the context as a linear assumption. We retain from Elf the use of % for comments and interpreter directives. We adopt the conventions available in that language in order to enhance the readability of LLF programs [38]. In particular, we permit keeping the type of bound variables implicit whenever it can be effectively reconstructed by means of techniques akin to those currently implemented in Elf [38]: we then write {x}B instead of {x:A}B. Similar conventions apply to dependent kinds. As in Elf, the binders for variables quantified at the head of a clause can be omitted altogether if we write these variables with identifiers starting with a capital letter. Moreover, the arguments instantiating them can be kept implicit when using these declarations.
Finally, we relax the requirement of writing LLF declarations only in η-long form: with sufficient typing information, it is always possible to transform a signature to that format.
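The transformation to η-long form alluded to here can be sketched at simple types as follows; dependent types, the linear constructors, and fresh-name hygiene are deliberately omitted, and the term and type encodings are our own.

```python
# Sketch of eta-expansion at simple types: at an arrow type, wrap the
# term in a lambda applying it to a (recursively eta-expanded) fresh
# variable; at a base type there is nothing to do at the top level.
# Types are ("->", a, b) or base-type names; terms are tagged tuples.
def eta_expand(term, ty, n=0):
    if isinstance(ty, tuple) and ty[0] == "->":
        _, a, b = ty
        x = ("var", f"x{n}")
        return ("lam", f"x{n}",
                eta_expand(("app", term, eta_expand(x, a, n + 1)), b, n + 1))
    return term  # base type: already eta-long at the top level
```

For instance, a constant f of type i -> i becomes λx. f x, which is the η-long shape the canonical system insists on.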

THE METHODOLOGY OF LINEAR META-REPRESENTATION
LF and Elf constitute a useful tool for studying existing logics and programming languages and an ideal playground for experimenting with alternative constructs in the design phase of new languages. The range of practical applicability of these formalisms is limited by their foundation on intuitionistic type theory. All the formal systems that have been successfully encoded in LF (functional and logic programming languages [33,39], λ-calculi [40], and a number of logics [27,43]) share a fundamental characteristic: whenever a judgment mentions a context, a bottom-up reading of the inference rules for it may add items, but it never removes assumptions. We call contexts with this property permanent, in contrast with volatile contexts, free from this restriction. Object formalisms admitting arbitrary operations on their context cannot be effectively encoded in LF: the standard technique, representing object context items as LF assumptions, is not sound in this case since LF assumptions are permanent. The alternative is to represent the object context as a term in LF and implement explicitly the operations required to access and manipulate it. This is undesirable since it makes the adequacy results difficult to prove and often complicates the encoding of meta-theoretic properties to the point of making it hardly manageable in practice.
This situation is quite unfortunate since most formalisms of practical significance rely on a volatile context in an essential manner. The languages used for programming commercial applications are imperative: they have a store and assignment instructions to change the values of variables. Most real-world problems carry a state that changes over time. Many new logics and type theories are inherently bound to destructive context manipulations. Permanent contexts are insufficient even for more traditional formalisms, for example when studying efficient proof-search procedures for intuitionistic logic [17].
The linear type theory λ & presented in the previous section retains all the desirable properties of LF and also augments this formalism with linear assumptions, admitting volatile manipulations, and with a suitable set of operators to manage them. These new features overcome the above deficiency of LF: if we represent the volatile context of an object language as linear assumptions in λ & , destructive context operations in the object formalism can be modeled by an appropriate combination of linear operators.
The linear logical framework LLF is founded on the type theory λΠ⊸&⊤ and combines as its meta-representation methodology the judgments-as-types technique of LF with the above observation. The present section illustrates the added expressiveness of LLF as a logical framework by describing the meta-representation methodology it adopts, first abstractly and then on a concrete case study. The formalism we want to represent is an imperative extension of Mini-ML [25,33,39], a purely functional restriction of the programming language ML [23,36]. More precisely, we augment that language with a store and imperative instructions to access and modify the values it contains, we formalize the typing and evaluation semantics of these constructs, and we show that this extended language enjoys the type preservation property. We call this language MLR, for Mini-ML with References. The linear assumptions of LLF can be used to encode individual memory cells, and the linear operators of our type theory offer effective tools to model manipulations on them.
We review the judgments-as-types representation methodology and extend it to handle volatile assumptions in Section 3.1. Then, we give a detailed but informal presentation of the syntax, semantics, and type preservation property for MLR in Section 3.2. Finally, we show how to encode these different aspects in LLF in Section 3.7. Appendix A contains the complete LLF signature for this example. In the following, we will concentrate mainly on the novel constructions available in MLR, referring the reader to the literature [13,25,33,39] for aspects already present in Mini-ML.

Judgments-as-Types Revisited
We will review the technique of judgments-as-types of LF [27] by analyzing the following simplified inference rule from the case study in this section:

Γ, x : τ ⊢e e : τ
------------------ tpe_fix
Γ ⊢e fix x.e : τ

Ignoring for the moment the context Γ, it specifies that the fix-point expression fix x.e has type τ if e has type τ under the assumption that the variable x also has type τ. We will emphasize the fact that x can occur in e by writing e(x). Given a closed expression fix x.e(x), the judgment in the conclusion of tpe_fix postulates that fix x.e(x) has type τ (we need to provide a derivation to ascertain that this is indeed the case). We call such a judgment simple. The judgments-as-types representation methodology encodes simple judgments as λΠ base types. In Section 3.7, we will use the type family constants EXP and TP, both of kind TYPE, to classify the expressions and the types of the object language, respectively. The general form of the typing judgment above relates an expression and a type, and therefore we encode it as a type family TPE of kind EXP → TP → TYPE. Given representations (FIX (λx:EXP. ⌜e⌝ x)) and ⌜τ⌝ (to be explained below) for the closed expression fix x.e(x) and for the object language type τ, the simple judgment ⊢e fix x.e(x) : τ is represented as TPE (FIX (λx:EXP. ⌜e⌝ x)) ⌜τ⌝.
The judgment in the premise of rule tpe_fix is different in nature. Indeed, it specifies that the expression e(x) has type τ if we assume that the variable x also has type τ. A judgment of this form is called hypothetical. Notice also that x is a bound variable in fix x.e(x), but it is free in e(x). Therefore, that premise expresses the fact that e(x) has type τ for a generic expression x of type τ; the judgment Γ, x:τ ⊢e e(x) : τ is therefore said to be also parametric in x. The judgments-as-types representation methodology encodes hypothetical and parametric judgments by means of simple and dependent function types, respectively. The premise of the rule above, which is parametric in x and hypothetical in x : τ, is represented as follows:

Πx : EXP. TPE x ⌜τ⌝ → TPE (⌜e⌝ x) ⌜τ⌝

Notice that instantiating the parameter x with some term ⌜e'⌝ yields a hypothetical judgment postulating that e(e') has type τ assuming that e' has type τ. This reduces to a simple judgment as soon as we provide a derivation for this hypothesis.
An attempt at finding a canonical LF derivation of the above type reduces to searching for a derivation of the base type TPE (⌜e⌝ x) ⌜τ⌝ after having added the assumptions x : EXP and t : TPE x ⌜τ⌝ to the context of LF. Viewing this as an alternate encoding of the premise of rule tpe_fix illustrates the manner in which an object context is encoded according to the judgments-as-types methodology: each item in the context of the object formalism is represented as one or more assumptions in the context of LF. This technique offers the further advantage that we can rely on the primitive operations of LF to simulate the lookup of object-level assumptions. Less sophisticated representations, for example those that encode the object context as a term, must provide explicit access operations.
Observe that rule tpe_fix can be read as a judgment that is parametric in the (functional) expression e and the type τ, and hypothetical in the derivability of its premise. Indeed, it is encoded as the following declaration:

TPE_FIX : Πe : EXP → EXP. Πτ : TP. (Πx : EXP. TPE x τ → TPE (e x) τ) → TPE (FIX e) τ.

In summary, the judgments-as-types representation methodology for LF encodes simple judgments as base types, hypothetical and parametric judgments as simple and dependent function types, respectively, and elements of the object context as items in the context of LF. Moreover, derivations of a simple judgment are naturally represented as terms of the corresponding base type.
The judgments-as-types methodology interacts particularly well with higher-order abstract syntax, a technique for representing the syntactic level of an object formalism that encodes object variables as meta-variables and relies on the λ-abstraction of λΠ to emulate generic object-level binding constructs. Above, we encoded the fix-point expression fix x.e(x), which binds the variable x in e(x), as (FIX (λx : EXP. ⌜e⌝ x)). We used the λ-abstraction of LF to express binding and consequently encoded the operator fix by means of an LF constant that accepts a functional argument (fix : (exp -> exp) -> exp).
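The essence of higher-order abstract syntax, encoding an object-level binder by a meta-level function, can be sketched in Python, where substitution becomes function application; the names Fix and unfold are ours, not part of the LF encoding.

```python
# Sketch of higher-order abstract syntax: the body of a fix-point
# expression is a meta-level function standing for x. e(x), so the
# meta language handles binding, and substitution is just function
# application -- no capture, no renaming.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Fix:
    body: Callable  # meta-level function representing x. e(x)

def unfold(f: Fix):
    # one unfolding step: substitute fix x.e(x) itself for x in e(x)
    return f.body(f)
```

This mirrors the encoding above: just as (FIX (λx:EXP. ⌜e⌝ x)) packages an LF function, `Fix` packages a Python function, and applying it performs the substitution that an explicit first-order encoding would have to implement by hand.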
The faithfulness of the representation of an object formalism is captured by means of adequacy theorems that relate the entities being represented to their encoding. An important advantage of the judgments-as-types technique with respect to less sophisticated approaches is that it produces encodings very close to the notations being formalized. This makes the adequacy theorems easy to prove.
Here, and in the remainder of this paper, we view and describe operations on the context as they arise when we construct derivations "bottom-up," that is, from the judgment in question toward the axioms. This view is the most natural one to elucidate the examples and anticipates the logic programming interpretation of LLF. For example, instead of saying that we discharge a hypothesis in rule opc_ilam in Fig. 2, we say that we introduce a hypothesis. From this point of view, λΠ offers two operations on its context: insertion and lookup. In particular, the context can only grow during the bottom-up construction of a derivation. Therefore, the judgments-as-types methodology in λΠ cannot capture object languages that perform deletion on their context. Consider as an example the following inference rule, taken from the case study in the next section, which describes the semantics of assignment in an imperative programming language (further details will be given in the next section). It specifies that, in order to assign the value v to the cell c, we must replace the binding c = v' in the store with c = v; some uninteresting value is returned. An elegant encoding of this system in LF would represent each cell-value pair in the store as a meta-level assumption. However, λΠ does not provide the means to simulate the deletion of the old binding c = v'. In contrast, we can easily achieve this effect in LLF. Indeed, looking up a linear assumption in λΠ⊸&⊤ removes it from the context. This suggests encoding each cell-value pair c = v present at any instant in the store of the object language as an LLF linear assumption of type contains ⌜c⌝ ⌜v⌝.
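The discipline just described, where looking up a linear assumption consumes it, can be sketched as follows; representing the linear context as a list of cell-value pairs is an assumption of this illustration only.

```python
# Sketch of the linear-context treatment of assignment: updating a cell
# means consuming the linear assumption for its old binding and
# introducing a fresh assumption for the new one. The old context is
# left untouched, mirroring how each premise gets its own context.
def assign(linear_ctx, cell, new_value):
    for i, (c, _old) in enumerate(linear_ctx):
        if c == cell:
            # consume "contains c v'" and introduce "contains c v"
            return linear_ctx[:i] + [(cell, new_value)] + linear_ctx[i + 1:]
    raise KeyError(f"no linear assumption for cell {cell}")
```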
The linear type constructors of λΠ⊸&⊤ provide the necessary means to manipulate such assumptions. We rely on the linear implication ⊸ to enter them in the context of LLF and take advantage of the context-splitting semantics of this operator to isolate them in order to access them. The additive product type constructor, &, offers a means to duplicate or share linear assumptions among its two conjuncts. This operator can also be used to express selection between exclusive alternatives, although we will not take advantage of this feature here. Finally, the unit type, ⊤, permits discarding unused linear hypotheses.
These different features will be illustrated in detail in Section 3.7. Here, we just show the encoding of the rule above:

ev_assign*2 : (contains C V -o ev K (return unit) A)
                -o (contains C V' -o ev K (assign*2 (rf C) V) A).
The linearity of our logical framework can be integrated into higher-order abstract syntax as a convenient manner of encoding languages relying on linear binders [6]. When they are not needed, we can just use the LF fragment of LLF exactly as before.

Mini-ML with References
Critical choices in the implementation of programming languages depend on the validity of meta-theoretic properties. Type preservation in Standard ML [23,36], for example, guarantees that no typing error can arise during evaluation; therefore, execution can be sped up significantly by disregarding type information at run-time. Meta-theoretic properties in the presence of nonfunctional features, which are included in most concrete languages, are difficult to prove, and therefore the formal analysis of imperative extensions of purely functional programming languages has received great attention in the literature. The addition of references and their interaction with polymorphism have been analyzed with different tools, ranging from the complex domain-theoretic approach of Damas [15] to the syntactic formulation of Harper [26]. The latter idea was adapted from Wright and Felleisen, who additionally consider continuations and exceptions [54].
The proofs of these properties are long and error-prone. Therefore, recent work has investigated the possibility of partially automating their generation or at least their verification. Chirimar gives Forum specifications for a language with references, exceptions, and continuations and uses the meta-theory of Forum [34] to study program equivalence [12]. VanInwegen [52] formally proves properties such as value soundness (the fact that evaluating an expression yields a value, if it terminates) for most of Standard ML with the help of the HOL theorem prover [24].
In this section, we define MLR as an extension of Mini-ML with references and imperative instructions and study aspects of its meta-theory. Although our principal objective is to demonstrate the expressive power of LLF, our presentation differs in some aspects from the formulations and proofs in the literature and therefore might be interesting in itself. We will point out differences and similarities with other approaches as they arise.

Expressions and Store
Since its introduction in [13], the language Mini-ML and variants of it have been used for case studies in the presentation of logical frameworks [25,33,39]. Mini-ML is a purely functional restriction of the programming language ML [23,36]. More specifically, it is a small statically typed functional programming language including numerals, conditional expressions, pairs, polymorphic definitions, recursion, and functional expressions.
We consider an extension of Mini-ML with a store and imperative instructions in the style of ML to access and modify the values it contains. We call this language Mini-ML with References, or MLR for short. The store of an MLR program is defined as a collection of cells, each containing a value. We will sometimes use location or address as synonyms of cell. MLR makes available all the constructs of Mini-ML but enriches the syntax of its expressions with the necessary operations to manipulate individual cells. The resulting language is specified by the following grammar, where we have separated out with a double bar (‖) the constructs not present in standard presentations of Mini-ML. Cells c and stores S are not directly accessible to the programmer, but it is customary and convenient to enrich the syntax in order to represent intermediate stages of computation. In these productions, c ranges over the lexical category of memory locations, while we use the letter x for variables. The meta-variable v denotes values, which we will define shortly. We will treat stores as multisets, omit the leading "·" from a nonempty store, and overload "," to denote the union of two stores. Finally, we require the cells appearing on the left-hand side of a store item to be distinct.
The polymorphism of MLR is restricted to values, which is generally accepted as superior to the imperative type variables present in previous versions of SML [31]. We achieve this by distinguishing two forms of let. The expression ref e dynamically allocates a cell and initializes it with the value of e. The contents of a cell can be inspected by dereferencing it with ! and modified with an assignment (:=). Differently from [54], but consistently with the mainstream of the literature (including the definition of Standard ML [36]), we choose this operation to return not the assigned object but the unit element. The sequencing operator (;) is typically used as a means of chaining a series of assignments before some interesting final value; it is syntactic sugar for the expression (letval x = e1 in e2) when x does not occur in e2. As is normally the case in functional languages, MLR does not offer explicit means to deallocate memory cells.
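The informal semantics of ref, !, :=, and sequencing just given can be sketched over a dictionary store; all names here are our own, and cell allocation exploits the fact that, as in MLR, cells are never deallocated.

```python
# Sketch of the MLR store operations: ref allocates and initializes a
# fresh cell, deref (!) reads it, and assignment (:=) overwrites it,
# returning the unit value rather than the assigned object, as the
# text specifies.
def ref(store, v):
    c = f"c{len(store)}"   # fresh name: valid because cells are never freed
    store[c] = v
    return c

def deref(store, c):
    return store[c]

def assign(store, c, v):
    store[c] = v
    return ()              # assignment returns unit, not the assigned value
```

Under this reading, `e1 ; e2` simply evaluates `e1` for its effect on the store, discards its (unit) value, and proceeds with `e2`.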
All these constructs are available in Standard ML [36] with the exception of addresses themselves (c), which cannot be manipulated directly in that language. We require MLR programs not to mention locations directly so that cells are always guaranteed to be initialized. Thus cells are created dynamically with ref and can be named by binding them to variables with one of the two let constructs of MLR.
As in ML, the reference cells of MLR encompass two distinct features of imperative programming languages such as C or Pascal. First of all, they play the role of the imperative variables of these languages and can be used as such (except for the necessity of dereferencing them explicitly in order to access their value). Second, we can use them as pointers in data structures, although their usefulness is rather limited in this respect due to the absence of recursive data types in MLR. Such data structures could be easily added to the language.

Typing
The language of types of MLR augments the typing constructs typically present in Mini-ML, namely natural numbers, unit, pairing, and functional types, with one new constructor: for each type τ, the type τ ref of references to objects of type τ. The syntax of types is summarized in the following grammar:

τ ::= α | nat | unit | τ1 × τ2 | τ1 → τ2 ‖ τ ref

We use type variables to express schematic polymorphism. We eliminate an explicit quantifier in favor of substitution in the typing rule for the letname construct (see Fig. 12). On the basis of this definition, the static semantics of MLR naturally extends the traditional typing rules of Mini-ML. The possibility for expressions to mention cells requires introducing a store context as a means to declare the types of free locations. More precisely, the item c : τ in a store context declares τ as the type of the values that c can contain; c itself consequently has type τ ref. Contexts, as usual, assign types to free variables. They are constructed according to the following grammar:

Contexts:       Γ ::= · | Γ, x : τ
Store contexts: Λ ::= · | Λ, c : τ

We rely on the usual convention that the names of the variables and cells declared in contexts and store contexts, respectively, are distinct. Moreover, we treat both forms of contexts as multisets. We express the fact that the MLR expression e has type τ with respect to a store context Λ and a context Γ with the judgment Λ; Γ ⊢e e : τ.
The presence of a store context in the typing rules of MLR is necessary even if we forbid users to write addresses directly in their programs. It accounts for cells dynamically allocated during evaluation, which may appear in intermediate results and in the final answer. The inference rules for the typing judgment are displayed in Fig. 12. The upper part of this figure shows the rules for the functional core of MLR. The changes with respect to the usual rules of Mini-ML are limited to the systematic inclusion of a store context in the judgments.
The central part of Fig. 12 shows the rules for the novel features of MLR. As for the functional case, they express the conditions under which an expression can be statically accepted as meaningful. For example, rule tpe deref enforces that only references be dereferenced.
In the lower part of Fig. 12, we present the rules for typing a store. The judgments we consider have the form Λ' ⊢S S : Λ, which we interpret as requiring that the type of each value v stored in S coincide with the type of the corresponding cell as specified in Λ. The store context Λ' gives the types of the cells that v may mention. We will always be interested in top-level judgments of the form Λ ⊢S S : Λ since a store will in general refer circularly to its own cells. Rule tpS_cell prevents expressions containing free variables from being inserted in the store.

Evaluation
An MLR expression e will in general mention reference cells whose values are contained in the store. The evaluation of e will typically not only retrieve these values but also change them or create new cells. Therefore, as e is evaluated, the store undergoes transformations, and by the time a value for e is eventually produced, it might look very different from the store we started with. This observation suggests an evaluation judgment relating a store S, an expression e, a value v, and a store S', where S is the store prior to evaluating e and S' results from the evaluation of e to v: cells in e refer to S, while cells in v refer to S'. This formulation extends the traditional evaluation judgment of Mini-ML [25,33,39].
The dynamic semantics of functional languages enriched with imperative features, such as MLR's references, is normally expressed in the literature in this manner. We will instead adopt a different strategy and present the reductions occurring during the execution of an MLR program as continuation-based evaluation rules. This choice has been dictated by our intention to encode the semantics of MLR in LLF: a direct representation of the judgment above, although possible, would have resulted in a less elegant encoding. For similar reasons, Chirimar [12] also chose a continuation-based formulation.
Unlike more declarative formulations, a continuation-based execution strategy imposes a strict order of evaluation on the different subexpressions of any given construct in the language. This order respects the expected flow of data and is therefore natural. For example, when computing the value of an expression of the form (letval x = e1 in e2), we first evaluate e1, obtain a value v, substitute it for x in e2, and only then evaluate the resulting expression.
An effective implementation of this strategy requires sequentializing the evaluation of the subexpressions of constructs with more than one argument. One of them is evaluated immediately while the evaluation of the others is postponed until a value has been produced for it. Clearly, if a subexpression depends on the value of another, we process it last. We realize this idea by maintaining a stack of expressions to be evaluated, called a continuation.
Postponing the evaluation of an expression e2 in favor of another expression e1 is achieved by pushing the former onto the continuation. Since, as when evaluating (letval x = e1 in e2) for example, the value of e1 might need to be substituted for some free variable x in e2, we wrap a binder for x around e2 and thus push an object of the form λx.e2 onto the continuation (or compose it with the current continuation, depending on whether the continuation is viewed as a stack of functions or as a single function corresponding to their composition). For uniformity, it is convenient to take this measure every time we push an item onto the stack. As soon as e1 has been fully evaluated to a value v, λx.e2 is popped from the continuation, v is substituted for the variable x in e2, and [v/x]e2 is evaluated in turn.
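The continuation-as-stack discipline just described can be sketched as follows in Python (an illustrative model of the informal semantics, not the paper's LLF encoding; the tuple-based expression format is an assumption of this sketch). The machine alternates between eval and return instructions, pushing a binder onto the continuation before evaluating the bound expression:

```python
# A minimal sketch of continuation-based evaluation for a tiny fragment
# with numerals and letval. Expressions are tuples:
#   ("num", n), ("var", x), ("letval", x, e1, e2).

def subst(v, x, e):
    """Substitution [v/x]e (variable names are assumed unique)."""
    tag = e[0]
    if tag == "num":
        return e
    if tag == "var":
        return v if e[1] == x else e
    if tag == "letval":
        _, y, e1, e2 = e
        return ("letval", y, subst(v, x, e1), e2 if y == x else subst(v, x, e2))
    raise ValueError(tag)

def evaluate(e):
    cont = []           # the continuation: a stack of functions awaiting a value
    inst = ("eval", e)  # instructions: evaluate an expression, or return a value
    while True:
        if inst[0] == "eval":
            tag = inst[1][0]
            if tag == "num":
                inst = ("return", inst[1])     # values evaluate to themselves
            elif tag == "letval":
                _, x, e1, e2 = inst[1]
                # postpone e2: push a binder for x, then evaluate e1 first
                cont.append(lambda v, x=x, e2=e2: ("eval", subst(v, x, e2)))
                inst = ("eval", e1)
            else:
                raise ValueError("cannot evaluate open expression")
        else:  # ("return", v): pop the continuation and resume, or finish
            if not cont:
                return inst[1]
            inst = cont.pop()(inst[1])

# letval x = 3 in (letval y = x in y)  evaluates to 3
prog = ("letval", "x", ("num", 3), ("letval", "y", ("var", "x"), ("var", "y")))
print(evaluate(prog))  # -> ('num', 3)
```

Note how [v/x]e2 is computed only once a value v has been produced for e1, exactly mirroring the order of steps described above.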
The necessity of distinguishing expressions still to be evaluated from values being returned requires the introduction of a new syntactic layer of instructions. Specifically, we write eval e for the request to evaluate an expression e, and denote the intention to return a value v as return v. Instructions are also needed to handle partially evaluated expressions.
While evaluating a Mini-ML expression simply yields a value, MLR expressions will in general produce objects mentioning cells. Therefore the result of evaluating an instruction i must include not only a final value v but also a reification [S′] of the final store S′ that it draws its references from; moreover, as a measure of hygiene, we mark the cells c introduced during evaluation by binding them in front of the pair ([S′], v) by means of the new c. operator. The resulting object is called an answer and is denoted by the letter a. For our purposes, [S′] will be a sequence obtained by ordering the elements of S′ according to some arbitrary order. It is, however, conceivable that only the cells contributing to the final value be kept, realizing in this way a form of garbage collection.
The structure of instructions, continuations, and answers is given by the following grammar, where a double bar marks the instructions introduced in correspondence with the imperative constructs of MLR. The typing rules for objects in these three categories are displayed in Fig. 13. Notice that the type of an answer coincides with the type of the embedded value. Rule tpa val requires that the store paired with the value be well typed, while rule tpa new constrains every occurrence of the cells bound in an answer to be consistently typed.
Values constitute the subclass of expressions that evaluate to themselves. They are specified by the following grammar.
On the basis of this definition, we can justify the uses of the term "value" in the above presentation: return operates only on values, computation places a value at the heart of answers, and the contents of every cell in the store is a value. See [6] for a formal statement of these properties. We model the continuation-based semantics of the imperative constructs of MLR by means of a judgment of the form S ⊢ K ▹ i → a, where i is the instruction to be executed, K is the current continuation, S is the store with respect to which i is to be evaluated, and a is the final answer produced as the result of the evaluation.
The inference rules concerned with nonfunctional expressions of MLR and the corresponding instructions are separated out by a dotted line in Figs. 14 and 15, respectively.
Cells (rule ev cell) simply evaluate to themselves, like any value. The sequencing instruction e 1 ; e 2 has a simple semantics too: it evaluates e 1 , disregards the returned value, and then proceeds with the evaluation of e 2 (rule ev seq).
The expression ref e is evaluated by first computing the value v of its argument (rule ev ref) and then allocating a new cell initialized with v (rule ev ref*). The argument of !e is evaluated to a reference cell (rule ev deref) and the value associated with it is returned (rule ev deref*). We rely on the auxiliary read judgment S ⊢ c = v in order to retrieve the value of a cell (rule read val). The evaluation of e1 := e2 first evaluates e1 to a store location c (rule ev assign), computes the value v of e2 (rule ev assign*1), and replaces the former contents of c with v (rule ev assign*2). The returned value is ⟨⟩.

We conclude our discussion about evaluation with a few words about the interaction of references and polymorphism. The question is subtle and has received great attention in the literature. Consider for example the following MLR expression:

letname f = ref (lam x. x) in (f := (lam x. s x); (! f) ⟨⟩)

At first sight, this expression allocates a cell and initializes it with the identity function, which has polymorphic type α → α. In the body of letname, we first update it to the successor function, of type nat → nat, and then apply it to ⟨⟩, of type 1. Clearly, something is wrong, but the typing rules of MLR accept the program above as a correct expression of type 1. Is there a flaw in the definition of the static semantics of our language? Fortunately, no. A closer analysis reveals that, since the evaluation of letname substitutes ref (lam x. x) for every occurrence of f in its body, the expression above reduces to:

(ref (lam x. x)) := (lam x. s x); (! (ref (lam x. x))) ⟨⟩

Each occurrence of ref (lam x. x) evaluates to a different cell that is typed according to its use. The expression above would not be typable if we had used letval in place of letname.
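The reason the letname example above is safe can be sketched operationally in Python (an illustrative model only; the ref, assign, and deref helpers and the store dict are hypothetical stand-ins for MLR's constructs). Under call-by-name, each occurrence of f re-runs the allocation, so the two uses operate on distinct cells:

```python
# Sketch: call-by-name substitution of ref(identity) allocates a fresh
# cell per occurrence, so each cell can be used at its own type.

store = {}
next_cell = 0

def ref(v):
    """Allocate a fresh cell initialized with v; return the cell."""
    global next_cell
    c = next_cell
    next_cell += 1
    store[c] = v
    return c

def assign(c, v):
    store[c] = v          # destructive update of the cell's contents

def deref(c):
    return store[c]

identity = lambda x: x    # used at type 1 -> 1 below
succ = lambda n: n + 1    # used at type nat -> nat below

# letname f = ref(identity) in (f := succ ; (!f)(unit)), call-by-name:
c1 = ref(identity)        # first occurrence of f: fresh cell
assign(c1, succ)          # updated to the successor function
c2 = ref(identity)        # second occurrence of f: another fresh cell
result = deref(c2)(())    # applies the identity, not succ
print(result, c1 != c2)   # -> () True
```

Had both occurrences shared a single cell (as letval's call-by-value semantics would arrange), the final application would have applied succ to ⟨⟩ and gone wrong, which is exactly why the expression is rejected with letval.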
Languages with explicit type variables solve the same problem by distinguishing between applicative and imperative type variables [23,26,36]. Restricting polymorphism to values has also been proposed as a solution [50] and has been adopted in the current definition of Standard ML [36]. This language offers only one form of let, but takes different courses of action depending on whether it defines a value or an arbitrary expression. Our treatment is slightly more general since it makes the call-by-name semantics of letname directly available: the above expression, for example, does not type-check in SML.

Type Preservation
We conclude this section with the statement of the type preservation theorem for MLR and of the lemmas it depends on. For reasons of space, we will not formalize the proof of these results in LLF. The interested reader can find an encoding of this proof in our linear logical framework in [6].
The type preservation theorem states that the type of an expression does not change as the result of evaluation. Its proof relies on a number of auxiliary lemmas. The first is weakening: whenever an expression is well typed in a given context and store, it remains well typed under further assumptions and additional cells. It is easily proved by induction on typing derivations.

ii. If T :: Γ; Λ ⊢i i : τ, then Γ, Γ′; Λ, Λ′ ⊢i i : τ.
Proof. We proceed by induction on the structure of T . The parts of this lemma should be proved in the order they are presented.
The second auxiliary property we need is the substitution lemma: it states that the free variables of a well-typed expression can be substituted with expressions of the same type, and the result remains well typed. Proof. We proceed by induction on the structure of T.
As in the functional case, type preservation ensures that the type of an expression is identical to the type of its value. Intermediate evaluation steps require us to take into account arbitrary continuations and stores. We have the following generalization. Proof. We proceed by induction on the structure of a derivation of the evaluation judgment and inversion on the derivations of the typing judgments.
The type preservation result is formalized as follows at the top level of evaluation.

COROLLARY 3.1 (Type preservation). If · ⊢ init ▹ eval e → a with ·; · ⊢e e : τ, then · ⊢a a : τ.

Representation in LLF
In this section, we give an LLF representation of the syntax of MLR, of its static and dynamic semantics, and show how to exploit the resulting encoding of computations. The representation we propose is a natural extension of the LF code for Mini-ML found in the literature [33]. In particular, it retains its structure, its elegance, and the ease of proving its adequacy with respect to the informal presentation we just concluded. We describe the main issues in the representation by displaying fragments of the code and a limited number of adequacy statements. A complete treatment can be found in Appendix A. It is interesting to compare the result of our encoding with similar endeavors in the literature.
VanInwegen used the HOL theorem prover [24] to verify properties of a substantial portion of Standard ML [52]. She adopted a brute-force approach to the meta-representation problem, encoding, for example, contexts as terms. This choice resulted in a complex representation and only partial achievement of the main goal of this endeavor: a formal proof of type preservation for that language. Although applied to a much simpler fragment, our use of higher-order abstract syntax, of parametric and hypothetical judgments, and of the linear features of LLF avoids these difficulties completely.
Chirimar used Forum [34] to represent a language similar to MLR with the addition of exceptions and continuations [12], but without any emphasis on typing. He took advantage of the higher-order nature of Forum and of its linear constructs. The resulting program is as elegant as our code and is proved adequate with respect to the informal specification of the object language. The absence of proof-terms in Forum prevents the direct manipulation of object-level derivations and no attempt is made to use that meta-language to investigate meta-theoretic properties such as type preservation.

Syntax
The representation of the syntactic level of MLR is based on higher-order abstract syntax and does not require the expressive power of the linear constructs of LLF. It lies therefore in the LF fragment of this language.
As is normally done in LF, every syntactic category of the object language is mapped to a distinguished base type. The type families necessary to encode the syntactic categories of MLR are given by the following declarations:

exp   : type.        cell   : type.
instr : type.        cv     : type.
tp    : type.        store  : type.
cont  : type.        answer : type.
The four declarations on the left encode expressions, instructions, types, and continuations. The four on the right are needed to represent the imperative features of MLR programs: cell corresponds to the lexical category of memory cells, cv and store will be used to represent the store, and answer encodes final answers.

We encode the abstract syntax of MLR expressions, as described in the grammar of Section 3.7, by means of the representation function ⌜·⌝. This function maps every production to an LLF object constant that, when applied to the representation of the subexpressions it relates, yields an object of type exp. The function ⌜·⌝ is inductively defined on the left-hand side of the table below (we have separated out the treatment of the imperative constructs); its right-hand side gives the type of the constants used in the encoding.

The representation of most expressions reflects directly the abstract syntax of MLR. We take advantage of higher-order abstract syntax in the representation of cells, variables, and binding constructs of MLR. Variables are encoded as LLF variables (of type exp). The fact that an object-level construct binds a variable x in a subexpression e is then modeled by using the λ-abstraction of LLF to bind ⌜x⌝ in ⌜e⌝. Cells appear as hypotheses c:cell in the context of LLF, similarly to free variables. Their representation as expressions is mediated by the constant rf, which maps entities of type cell to objects of type exp.
As an example, consider again the MLR expression from the previous section. It is represented by an LLF term of type exp, displayed here only in part:

... (s x)))) (app (!f) unit)))))

The faithfulness of this representation with respect to the object-level syntax of expressions consists of a number of properties that we summarize in the following adequacy theorem, where Σ corresponds to the signature in Appendix A. Compositionality in this statement means that the representation function commutes with substitution, i.e., that for all MLR expressions e and e′, ⌜[e′/x]e⌝ = [⌜e′⌝/x]⌜e⌝. It confirms the correct application of higher-order abstract syntax in our encoding. Note that compositionality is not needed for cells since they are never subject to substitution.
Due to the complexity of our object language, we do not display the simple but long and somewhat tedious inductive proof of this statement. The interested reader is referred to [6] for a full treatment; the proof of a different adequacy statement is sketched at the end of this section. The techniques used in order to prove adequacy theorems for LLF encodings naturally extend the methods successfully applied for years in the more restricted setting of LF. In particular, they retain their simplicity in our richer application area. This contrasts with other proposals, e.g., the treatment of linearity in LF itself [42], where adequacy theorems have complex proofs even for simple object languages.
Types, instructions, and continuations are represented in a similar way. The LLF declarations for the constants needed in their encoding can be found in Appendix A. We omit displaying the statements of the respective adequacy theorems since they do not introduce new concepts. They can be found in [6].
MLR makes a dual usage of the collection of cell-value pairs that we informally referred to as its store: as a repository from which to retrieve the value associated with a cell during evaluation (the proper store we indicated as S), and as a term to be eventually returned with the final answer (the reified store we denoted [S]). We will correspondingly have two distinct LLF representations of the store. We will discuss the internal encoding ⌜S⌝ of a proper store S when considering evaluation. A reified store [S] is given the following external representation ⌜[S]⌝:

⌜[·]⌝        = estore                      estore : store.
⌜[S, c = v]⌝ = with ⌜[S]⌝ (holds c ⌜v⌝)    with   : store -> cv -> store.
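The equations defining ⌜[S]⌝ can be sketched as follows in Python (an illustrative model, not LLF code; modeling the store as a dict from cells to values is an assumption of this sketch). The reified store is built as a nested "with" structure starting from estore, enumerating the cells in some arbitrary but fixed order:

```python
# Sketch of store reification [S]: linearize the store into a nested
# "with" structure, mirroring the two defining equations above.

ESTORE = ("estore",)

def reify(store):
    s = ESTORE
    for c, v in store.items():  # any fixed enumeration order will do
        s = ("with", s, ("holds", c, v))
    return s

print(reify({0: "id", 1: "succ"}))
# -> ('with', ('with', ('estore',), ('holds', 0, 'id')), ('holds', 1, 'succ'))
```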
Here and in the following, we systematically overload the notation used for expressions to denote the various representation functions required in this example. The nature of the argument should always permit disambiguating which specific function we are referring to in each case.
The representation of answers directly expresses the grammatical rules. The declarations for these constants are repeated in Appendix A. The adequacy theorems that link them to the syntax of MLR are reported in [6].

Static Semantics
On the basis of the above encoding of the syntax of MLR, we will now describe the meta-representation of the static semantics of this language.
As for syntax, the representation of the static semantics of MLR does not rely on the linear features of LLF; the resulting code lies therefore in the Elf fragment of our logical framework. We have the following declarations for the type families that model the various typing judgments presented in Section 3.2:

tpe : exp -> tp -> type.          tpS : store -> type.
tpi : instr -> tp -> type.        tpa : answer -> tp -> type.
tpK : cont -> tp -> tp -> type.   tpc : cell -> tp -> type.
Again we have separated out the declarations that suffice for the functional core of MLR from the type families that are required to handle the imperative aspects of this language. The first three represent the typing judgments for expressions, instructions, and continuations, while tpS and tpa encode the store and answer typing judgments, respectively, and tpc records the type of individual cells in the store.
We illustrate the representation of the static semantics of MLR by displaying how to encode a typing derivation for expressions. The remaining typing judgments are treated similarly and the resulting LLF declarations are presented in Appendix A.
In Section 3.2, we denoted the fact that an expression e has type τ, assuming the types given in Γ for its free variables and the types given in Λ for its reference cells, by the hypothetical judgment Γ; Λ ⊢e e : τ. We represent the schematic form of this judgment by means of the LLF type family tpe. This family accepts two parameters: the representation of an expression and the representation of a type. Therefore, the instance above will be encoded as the LLF base type tpe ⌜e⌝ ⌜τ⌝. The contexts are taken into consideration only when checking that this term is indeed derivable in LLF. We encode each pair xi : τi in Γ by means of an LLF hypothesis ti : tpe xi ⌜τi⌝, where the free variable xi is declared as an expression (xi : exp). Similarly, we encode every item cj : τj in Λ as the (intuitionistic) assumption tj : tpc cj ⌜τj⌝ in the context of LLF, where cj is declared as a cell (cj : cell). Note that tpc only serves the purpose of making typing assumptions for cells. We write ⌜Γ⌝ and ⌜Λ⌝ for the encodings we just outlined of the context Γ and the store context Λ, respectively.
The inference rules defining the derivability of the typing judgment for MLR are encoded according to the technique presented in Section 3.1. We consider two rules as additional examples; the remaining declarations can be found in Appendix A. Rule tpe z associates the type nat to the numeral z. We represent it by means of the LF constant tpe z, which relates ⌜z⌝ to ⌜nat⌝:

Γ; Λ ⊢e z : nat (tpe z)          tpe z : tpe z nat.
Rule tpe case specifies how to type-check a conditional expression. We repeat it from Fig. 12:

Γ; Λ ⊢e e : nat    Γ; Λ ⊢e e1 : τ    Γ, x:nat; Λ ⊢e e2 : τ
────────────────────────────────────────────────────── tpe case
Γ; Λ ⊢e (case e of z ⇒ e1 | s x ⇒ e2) : τ

This rule has multiple premises. It is hypothetical because the rightmost premise inserts the assumption x : nat into the context. It is also parametric since the variable x is bound in the case construct, but appears as a new symbol both in the added hypothesis and in the expression e2 type-checked by the rightmost premise. We represent this rule by means of the LLF constant tpe case and encode its structure in the associated type. We have the declaration:

tpe case : tpe E nat -> tpe E1 T
            -> ({x:exp} tpe x nat -> tpe (E2 x) T)
            -> tpe (case E E1 ([x] E2 x)) T.

Notice the quantification over x and the embedded implication with antecedent tpe x nat in the encoding of the third premise. In this declaration, the LLF variables E, E1, E2, and T correspond to the schematic variables e, e1, e2, and τ, respectively. They are implicitly quantified at the front of the declaration.
It is worth noticing that there is no declaration corresponding to rule tpe x:

Γ, x:τ; Λ ⊢e x : τ (tpe x)

Since assumptions are represented directly in the context of LLF, a judgment of the form tpe x T, where T is the representation of some concrete type τ, will be validated by accessing the context of LLF rather than the signature, and succeeds precisely when tx : tpe x T appears in it as an assumption. Similar considerations hold for reference cells.
We now have the means to represent derivations of expression typing judgments. The adequacy theorem below ensures that whenever T is a (valid) derivation of the MLR judgment Γ; Λ ⊢e e : τ, its representation is a canonical inhabitant of the LLF type tpe ⌜e⌝ ⌜τ⌝ with respect to the proper encodings of Γ and Λ, and vice versa.

Dynamic Semantics
Unlike syntax and static semantics, the representation of evaluation relies heavily on the linear features of LLF. It is based on the following four type families:

ev       : cont -> instr -> answer -> type.
contains : cell -> exp -> type.
read     : cell -> exp -> type.
collect  : store -> type.
which we will describe in turn. Assuming the appropriate representation functions for continuations, instructions, and answers, we model the continuation-based judgment S ⊢ K ▹ i → a as the LLF base type ev ⌜K⌝ ⌜i⌝ ⌜a⌝. The store S mentioned by this judgment is represented in a distributed fashion in the context of LLF. Each item c = v in S is modeled by two assumptions: first of all, we need to declare c as a cell, and we do so by means of the assumption c:cell; second, we represent the fact that the current contents of c is v by a linear hypothesis of the form h : contains c ⌜v⌝. The first assumption must clearly be intuitionistic since c may be mentioned many times in K, i, a, and S. In contrast, the second must be linear since assignment updates the value associated with a cell destructively: if h were an intuitionistic hypothesis, we would have no means of prohibiting the old value from being accessed. In summary, we associate to every proper store S = (c1 = v1, ..., cn = vn) the following internal representation:

⌜S⌝ = c1:cell, ..., cn:cell, h1:contains c1 ⌜v1⌝, ..., hn:contains cn ⌜vn⌝

Four rules in the deductive system for continuation-based evaluation presented in Figs. 14 and 15 access the store directly: ev ref*, ev assign*2, ev deref*, and ev init. We will illustrate the use of the linear features of LLF on their encoding. However, in order to gain familiarity with our representation technique, we will first analyze rule ev z. All other inference figures are treated similarly to this rule. The complete code is displayed in Appendix A.
A bottom-up reading of rule ev z, shown below on the left, specifies that evaluating z simply amounts to returning it as a value. We represent this rule by means of the declaration for the constant ev z shown on the right:

S ⊢ K ▹ return z → a
──────────────────── ev z        ev z : ev K (return z) A -o ev K (eval z) A.
S ⊢ K ▹ eval z → a

The linear arrow in the representation of rule ev z enables its antecedent and its consequent to access the same linear assumptions in the context. This accounts for the fact that the premise and the conclusion of this rule mention the same store. Had we used an intuitionistic implication, the antecedent (and therefore the whole expression) would have been applicable only in contexts deprived of any linear assumptions, corresponding to empty stores.
Rule ev ref * , repeated below on the left, creates a new location c in the store and initializes it with the argument v of ref * . Its representation on the right models these actions on the context of LLF: the new cell is intuitionistically assumed when processing the dependent type {c:cell}, while the resolution of the embedded linear implication has the effect of asserting contains c V in the linear part of the context. Since this assumption is made linearly, it will be possible to remove it from the context, for example in order to update the value contained in c in response to an assignment. Notice how the newly created cell c is bound in the final answer.
Of the three rules that realize assignment in MLR, only ev assign*2 accesses the store. The declaration ev assign*2 below mimics the destructive update of the contents of the cell c (written C in the clause) in two steps. First, the old value is retrieved through the antecedent contains C V'. Since it appears as a linear assumption, accessing it causes its removal from the linear context of LLF. Since the other antecedent of this clause is reached through the multiplicative connective -o, the remaining linear hypotheses will be passed to it. This term inserts the new value v (i.e., V) of c in the representation of the store by means of the antecedent contains C V of the embedded linear implication.
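The consume-and-reassert reading of ev assign*2 can be sketched in Python (an illustrative model only; representing the linear context as a mutable dict, and the function name assign_step, are assumptions of this sketch):

```python
# Sketch of the linear-context reading of ev assign*2: the hypothesis
# contains(c, v_old) is consumed (removed from the linear context) and
# contains(c, v_new) is asserted, so the old value becomes inaccessible.

def assign_step(linear_ctx, c, v_new):
    v_old = linear_ctx.pop(c)   # consuming the linear assumption removes it
    linear_ctx[c] = v_new       # the embedded -o reasserts contains(c, v_new)
    return v_old

ctx = {"c1": "id"}
assign_step(ctx, "c1", "succ")
print(ctx)  # -> {'c1': 'succ'}
```

The point of the sketch is that, after the step, no trace of the old binding remains: this is exactly what linearity enforces and what an intuitionistic hypothesis could not guarantee.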
Dereferencing a cell c is naturally modeled in LLF through the use of the additive operators of our language. In order to encode rule ev deref*, we need two copies of the store representation: one to retrieve the contents of c, and one to proceed with the evaluation. This is immediately achieved by means of the additive conjunction of LLF. In the resulting declaration, the conjunct read C V, which implements the read judgment S ⊢ c = v, looks up its copy of the linear context in search of the assumption contains C V and relies on the additive unit of LLF, written <T>, to discard the rest. This technique is generally applicable to every situation that involves looking up the encoding of volatile information. The definition of read consists of a single clause encoding rule read val.

We could have alternatively modeled dereferencing similarly to assignment, by first accessing the linear assumption contains C V directly. In order to balance its consequent removal from the linear context of LLF, this same assumption would have to be reasserted in the context before returning the value V. Although it achieves a similar effect, the resulting declaration does not encode rule ev deref*, or read val, or any combination of these rules. Instead, it is a transliteration of an alternative inference rule that we could have used to formalize dereferencing.

Finally, rule ev init pairs up the store and the final value in order to produce the answer. We model this behavior by means of the auxiliary procedure collect, which translates the internal representation of the store S, as linear LLF assumptions, to its external representation [S], as an object of type store. The code for collect is displayed below.
col empty : collect estore.
col cv    : contains C V -o collect S -o collect (with S (holds C V)).
Since the use of multiplicatives removes the assumptions contains C V as they are retrieved, each recursive access to collect adds a different item to the external representation of the store. Clause col empty is provable only when the linear part of the context of LLF is empty, and therefore only when the complete store of MLR has been externalized.

The effectiveness of the representation we just illustrated relies on the ability to remove objects from the context of LLF. Using LF on this problem would have produced awkward encodings with prohibitive consequences for the development of the meta-theory of MLR [6]: a first alternative would have relied entirely on the external representation of the store, implementing explicitly all the operations required to access and modify it. A second alternative would have been to proceed as we did, with the tedious addition of declarations aimed at checking the linearity of the resulting derivations a posteriori.
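The behavior of collect, and in particular the role of the emptiness condition in col empty, can be sketched in Python (an illustrative model; the dict-based linear context is an assumption of this sketch). Each recursive call consumes one assumption, and the base case applies only once the linear context is exhausted:

```python
# Sketch of collect: externalize the store while consuming the linear
# "contains c v" assumptions one by one.

def collect(linear_ctx):
    if not linear_ctx:            # col empty: only provable once the
        return ("estore",)        # linear context is exhausted
    c, v = linear_ctx.popitem()   # col cv: consuming the assumption removes it
    return ("with", collect(linear_ctx), ("holds", c, v))

ctx = {"c1": 1, "c2": 2}
ext = collect(ctx)
print(ctx)  # -> {} : all linear assumptions have been consumed
```

Because consumption removes each entry, no cell can be recorded twice and none can be silently dropped, which is precisely the guarantee that an intuitionistic (LF-only) encoding would have had to check a posteriori.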
We will now make the above motivating discussion more precise. The faithfulness of our representation of evaluation is expressed by an adequacy theorem, which we decompose into four parts in order to prove it. Again, Σ is the signature contained in Appendix A. We need to prove the following properties:

Functionality: ⌜·⌝ is a total function from MLR evaluation derivations to LLF objects over Σ.

Soundness: The representation of a derivation of a given MLR evaluation judgment is an LLF object whose type is the representation of this judgment.
Completeness: Whenever a canonical LLF object over inhabits the type corresponding to the encoding of an MLR evaluation judgment, this object is the representation of a derivation of that judgment.
Bijectivity: ⌜·⌝ is a bijection between evaluation derivations in MLR and canonical LLF objects whose type encodes the corresponding evaluation judgment.
Unlike for expressions and typing derivations, the representation function is trivially compositional here (it involves closed expressions only); otherwise, we would have to prove compositionality as an additional property.
Detailed proofs of these properties are long and rather tedious, although conceptually simple. We will sketch them using the declaration for ev assign*2 as a representative case. In order to do so, we repeat it complete with the Π-quantifiers we omitted in the above presentation. In the specific case of this example, it is convenient to state and prove the functionality and soundness properties together. We have the following result:

LEMMA 3.3 (Functionality and soundness of the representation of MLR evaluation). Given a store S, a continuation K, an instruction i, and an answer a, where K, i, S, and a are closed except for the possible presence of free cells, for every derivation E of the judgment S ⊢ K ▹ i → a, ⌜E⌝ is defined and unique, and the LLF judgment ⌜S⌝ ⊢ ⌜E⌝ ⇑ ev ⌜K⌝ ⌜i⌝ ⌜a⌝ is derivable.

Proof. This proof proceeds by induction on the structure of the derivation E. We illustrate only the case in which it ends with an application of rule ev assign*2, with premise E′. Let us denote by S′ the store (S*, c = v); notice that ⌜S⌝ and ⌜S′⌝ differ only in the hypothesis concerning c. By the induction hypothesis on E′, we deduce that there is a unique LLF object M′ such that M′ = ⌜E′⌝ and there is a derivation of the judgment ⌜S′⌝ ⊢ M′ ⇑ ev ⌜K⌝ (return unit) ⌜a⌝. Iterated applications of the LLF rule oa iapp are used to instantiate the arguments of the declaration for ev assign*2; indeed, there is an atomic derivation A′ of the corresponding judgment. Let t : contains c ⌜v⌝ be the assumption in ⌜S′⌝ corresponding to the pair (c = v) in S′. We can abstract it over M′ in the LLF derivation for this object, obtaining a derivation C′ of the judgment ⌜S*⌝ ⊢ λt:contains c ⌜v⌝. M′ ⇑ (contains c ⌜v⌝ -o ev ⌜K⌝ (return unit) ⌜a⌝), where ⌜S*⌝ differs from ⌜S′⌝ only by the removal of assumption t. We can then apply rule oa lapp to A′ and C′, obtaining a derivation A of the resulting judgment. Let t′ : contains c ⌜v′⌝ be the assumption in ⌜S⌝ corresponding to the pair (c = v′) in the store S. Then, there is a derivation C of the LLF judgment (⌜S*⌝, t′:contains c ⌜v′⌝) ⊢ t′ ⇑ contains c ⌜v′⌝.
We can then apply rule oa lapp again to A and C to obtain a derivation of the desired judgment. In order to understand this step, observe that ⌜S⌝ = (⌜S*⌝, t′:contains c ⌜v′⌝). We now apply rule oc a to this derivation to get the desired canonical derivation. At this point, it is enough to notice that the LLF object M appearing on the left of the arrow in this canonical judgment is the representation of the MLR derivation E above and that the type on the right of the arrow is the representation of its type. It is also easy to ascertain that M is unique, given the uniqueness of M′.
We now consider the completeness of the encoding of MLR evaluation derivations. We have the following lemma.

Proof. We proceed by induction on the structure of M. Since the type in question is a base type, M can either be a constant, a variable, or start with a destructor. Then M has the structure cM * M1 * ... * Mn, where cM is a constant in Σ of some appropriate type, * represents either linear or intuitionistic application, and M1, ..., Mn are objects of appropriate types. The proof now distinguishes cases on the basis of the possible constants cM. We consider only the case in which this constant is ev assign*2.
If cM is ev assign*2, then it must be the case that i = (c :=*2 v) for some cell c and expression v, and moreover M is ev assign*2 applied to objects Mv, M*, and Mt. By analyzing the types of these objects, we deduce that there is an expression v such that Mv = ⌜v⌝, that M* = λt:contains c ⌜v⌝. M′ for some term M′ of type ev ⌜K⌝ (return unit) ⌜a⌝, and that Mt = t′ for some linear assumption t′ : contains c ⌜v′⌝. Moreover, we have that S = (S*, c = v′).
We can apply the induction hypothesis to M′ relative to a store representation that differs from ⌜S⌝ by the replacement of assumption t′ with t. The corresponding MLR store is (S*, c = v). We deduce in this way that there exists a derivation E′ of the judgment (S*, c = v) ⊢ K ▹ return ⟨⟩ → a. An application of rule ev assign*2 suffices to obtain the desired derivation.

We conclude the treatment of the adequacy of the representation of MLR evaluation derivations by showing that the function ⌜·⌝ is indeed bijective.

LEMMA 3.5 (Bijectivity of the representation of MLR evaluation). Given a store S, a continuation K, an instruction i, and an answer a, where K, i, S, and a are closed except for the possible presence of free cells, the representation function ⌜·⌝ is a bijection between derivations E of the MLR judgment S ⊢ K ▹ i → a and LLF objects of type ev ⌜K⌝ ⌜i⌝ ⌜a⌝ in the context ⌜S⌝.
Proof. Lemma 3.3 establishes that the representation function is a total function from the set of MLR derivations mentioned in the statement to the specified set of LLF objects. By the completeness lemma, we deduce that this function is surjective. It therefore remains only to prove that it is also injective. Given two derivations E 1 and E 2 with equal representations, the proof that E 1 = E 2 proceeds by induction on these derivations.
A derivation E for an evaluation judgment S K i → a is a trace of the computation that a continuation-based MLR interpreter performs when evaluating the instruction i and the continuation K to the final answer a with respect to the store contents S. According to the above adequacy theorem, such derivations are faithfully represented by the terms M inhabiting the LLF type encoding this judgment. We conclude this section by illustrating how to take advantage of this internal representation of MLR computations. We only give a small example here; more interesting examples, such as the proof of type preservation or a cut-elimination procedure for classical linear logic, can be found in [6]. Specifically, we will give LLF declarations that permit counting the number of reference cells dynamically allocated during the evaluation.
In order to achieve this purpose, we first give the following declarations for natural numbers:

num : type.
zero : num.
succ : num -> num.
The counting judgment relates an MLR computation to the number of cells it allocates. It is represented by the following type family:

count : ev K I A -> num -> type.
We implement the counting procedure in LLF by unfolding the representation of an MLR computation. We ignore the steps that do not allocate memory cells, but increment the counter by one every time rule ev ref* is applied. The three declarations of interest correspond to the initialization step performed by rule ev init, to the allocation of a new cell by rule ev ref*, and to one of the numerous cases where nothing happens (rule ev z).
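The original listing of these declarations did not survive reproduction here. The following is therefore only an illustrative sketch: the constant names cnt_init, cnt_ref, and cnt_z, as well as the proof-term patterns they match, are assumptions rather than the paper's actual declarations. The idea is that each counting clause inspects the head constant of the evaluation proof term, incrementing the count exactly when that constant is ev_ref*:

```
% Hypothetical sketch -- constant names and proof-term patterns
% are assumed, not taken from the original listing.
cnt_init : count M N -> count (ev_init ^ M) N.         % initialization: pass the count through
cnt_ref  : count M N -> count (ev_ref* ^ M) (succ N).  % a new cell is allocated: increment
cnt_z    : count M N -> count (ev_z ^ M) N.            % no allocation: pass the count through
```

The count bottoms out at zero at the rule producing the final answer. In the paper's example, a program allocating two cells yields the count (succ (succ zero)).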

CONCLUSION AND FUTURE WORK
In this paper, we have presented the linear logical framework LLF as an extension of LF with internal support for the representation of state-based problems. We have demonstrated its expressive power by providing a usable representation of the syntax and the semantics of an imperative variant of the functional programming language Mini-ML; space reasons prevented us from extending this encoding to aspects of the meta-theory of this language, such as a proof of its type preservation property [6]. Additional substantial case studies we have completed include the formalization of a proof of cut elimination for classical linear logic and translations between minimal linear natural deduction and sequent calculus, as well as a number of puzzles and solitaire games. The interested reader may access them on the World Wide Web at [9] or in [6].
The representation language of LLF, λ^&, conservatively extends LF's λ^Π with constructs from linear logic. We can think of it as the type theory obtained from the type constructors Π, →, ⊸, &, and ⊤. This choice of constructors is complete in the sense that they suffice to represent full intuitionistic or classical linear logic. Furthermore, adding any other linear connective as a free type constructor destroys the property that usable canonical forms exist, by introducing commuting conversions. This property is necessary in the proofs of adequacy theorems for encodings, and also for the interpretation of LLF as an abstract logic programming language.
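In this notation, the type language of λ^& can be summarized by the following grammar, reconstructed from the constructors of λ^& (P ranges over atomic type families; → is the usual intuitionistic function space of LF, definable from Π):

```latex
A, B ::= P \mid \Pi x{:}A.\,B \mid A \to B \mid A \multimap B \mid A \mathbin{\&} B \mid \top
```

Here ⊸ is the linear function space, & the additive conjunction, and ⊤ the additive unit.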
The meta-representation methodology of LLF extends the judgments-as-types technique adopted in LF with a direct way to map state-related constructs and behaviors onto the linear operators of λ & . The resulting representations retain the elegance and immediacy that characterize LF encodings and the ease of proving their adequacy.
LLF maintains the computational nature of LF as an abstract logic programming language. The implementation of LLF combines the experience with higher-order logic programming languages gained with Elf [38,41], an earlier realization of LF, with previous research on linearity as embodied in the language Lolli [8,28], and with new experimental term representation [11] and compilation [7] techniques. Among the new problems is the necessity of performing higher-order unification on linear terms [10].
LLF generalizes other formalisms based on linear logic, such as Forum [34], by making linear objects available for representations, by permitting proof terms, and by providing linear types. It is closely related to the system RLF of Ishtiaq and Pym [30], which allows dependencies on linear variables but does not have ⊤ as an operator. Linear dependent types are potentially useful but not essential in our experience, while ⊤ is a necessary tool in many representation problems. The meta-theory of LLF appears significantly simpler than that of RLF, a fact suggesting that proving the adequacy of an encoding may be substantially more complex in the latter formalism. Finally, our approach is orthogonal to general logics in the style of LU [22].
In the near future, we intend to gain experience with the use of LLF as a representation language by encoding state-based deductive systems such as imperative programming language constructs, hardware systems, security protocols, and real-time systems. The availability of an implementation will be of great help in doing so, since it will enable us to concentrate on high-level representation issues. We would also like to extend the tools available in Twelf [46,48], notably the theorem-proving component of this system [47], to handle the possibilities offered by the linear operators of LLF. Finally, we are interested in investigating a generalization of the type constructors & and ⊸ of λ^& to linear Σ and Π types, respectively, although it currently appears that this would greatly complicate the type theory while it is not clear how much would be gained.