A Coverage Checking Algorithm for LF

Abstract. Coverage checking is the problem of deciding whether any closed term of a given type is an instance of at least one of a given set of patterns. It can be used to verify whether a function defined by pattern matching covers all possible cases. This problem has a straightforward solution for the first-order, simply-typed case, but is in general undecidable in the presence of dependent types. In this paper we present a terminating algorithm for verifying coverage of higher-order, dependently typed patterns. It either succeeds or presents a set of counterexamples with free variables, some of which may not have closed instances (a question which is itself undecidable). Our algorithm, together with strictness and termination checking, can be used to certify the correctness of numerous proofs of properties of deductive systems encoded in a system for reasoning about LF signatures.


Introduction
Coverage checking is the problem of deciding whether any closed term of a given type is an instance of at least one of a given set of patterns. This has a number of applications: in functional programming, it is used to decide whether a given set of cases defining a function is exhaustive; in proof assistants, it is used to verify that a purported proof covers all possible cases. Depending on the application, the underlying term algebra, meta-theoretic requirements, and efficiency considerations, a variety of algorithms that implement or decide properties about pattern matching emerge. In this paper we discuss one algorithm for coverage checking in the logical framework LF [8].
The choice of the underlying term algebra is essential. In traditional functional programming languages, for example, we have only simple types and possibly prenex polymorphism, and the structure of functions is not observable by pattern matching. This makes coverage checking straightforward, both in theory and in practice. In LF, on the other hand, we have dependent types, and functions are intensional: their structure can be observed by pattern matching. This makes coverage checking undecidable since, for example, any set of patterns will cover all terms of an empty type, and emptiness is undecidable.
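To illustrate why the first-order, simply-typed case is straightforward, the following minimal Python sketch decides exhaustiveness for patterns over natural numbers by alternating instance checks with constructor-by-constructor splitting. This is our own illustration, not code from the paper; the tuple term representation, the wildcard `'_'`, and the names `SIG`, `subsumes`, and `covers` are all assumptions of the sketch.

```python
# Terms and patterns over the natural numbers: ('z',) and ('s', t),
# with '_' as a wildcard in both patterns and goals.
SIG = {'nat': [('z', 0), ('s', 1)]}  # constructor name and arity

def subsumes(p, g):
    """Every closed instance of goal g matches pattern p."""
    if p == '_':
        return True
    if g == '_':
        return False
    return p[0] == g[0] and all(subsumes(pa, ga) for pa, ga in zip(p[1:], g[1:]))

def first_hole(g, path=()):
    """Position of the leftmost wildcard in the goal, if any."""
    if g == '_':
        return path
    for i, a in enumerate(g[1:], 1):
        r = first_hole(a, path + (i,))
        if r is not None:
            return r
    return None

def fill(g, path, t):
    """Replace the subterm of g at the given position by t."""
    if not path:
        return t
    i = path[0]
    return g[:i] + (fill(g[i], path[1:], t),) + g[i + 1:]

def covers(patterns, goal='_'):
    """Split the goal constructor by constructor until every leaf is
    subsumed by some pattern (covered) or is closed and unmatched."""
    if any(subsumes(p, goal) for p in patterns):
        return True
    hole = first_hole(goal)
    if hole is None:
        return False  # closed counterexample found
    return all(covers(patterns, fill(goal, hole, (c,) if n == 0 else (c, '_')))
               for c, n in SIG['nat'])
```

For example, `covers([('z',), ('s', '_')])` holds, while `covers([('z',), ('s', ('z',))])` fails with the closed counterexample s (s z). Termination here relies on the finite depth of the pattern set; it is exactly this argument that breaks down once types can be empty.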
We will also use substitutions in a critical way throughout this paper, so we briefly introduce them here (see also, for example, [2]). We write a for constant type families, x or u for object-level variables, and c for constructors. A term may come from any of the syntactic levels. As usual, we identify α-equivalent terms. In order to state certain definitions and propositions more concisely, we write U to stand for either an object or a type, V for either a type or a kind, and h for a family-level or object-level constant. We take βη-conversion as the notion of definitional equality [8,2], for which we write U ≡ U′ and V ≡ V′. Substitutions are capture-avoiding and written as U[σ] or V[σ], with the special forms U[M/x] and V[M/x]. Often, we write ∆ for contexts that are interpreted existentially, and Γ for universal ones. When we write Γ[σ], it is a shorthand for applying σ in left-to-right order to each variable type in Γ. Signatures, contexts, and substitutions may not declare a variable or constant more than once, and renaming of bound variables may be applied tacitly to ensure that condition. Besides equality, the main judgment is typing Γ ⊢ U : V, suppressing the fixed signature Σ. We always assume our signatures, contexts, and types to be valid.
Type-checking and definitional equality on well-typed terms for LF are decidable. Every term is equal to a unique β-normal η-long form which we call canonical form. In the remainder of the paper we assume that all terms are in canonical form, because this simplifies the presentation significantly. In the implementation this is achieved incrementally, first by an initial conversion of input terms to η-long form and later by successive weak-head normalization as terms are traversed.
Since it is perhaps not so well-known, we will give only the typing rules for substitutions, which are used pervasively in this paper.

Coverage
In this section we first formally define the problem of coverage in the LF type theory (Section 3.1). This relies on higher-order matching, a problem whose decidability is an open question. We therefore identify an important subclass, the strict coverage problems (Section 3.2), which guarantee not only decidability but also uniqueness of matching substitutions. All examples we have ever encountered in practice belong to this class, and we explain the reasons for this after the necessary definitions. Then we define splitting in Section 3.3, which is the second critical operation to be performed during coverage checking. Next we describe our basic coverage algorithm and prove it sound and terminating in Sections 3.4 and 3.5. The last component of our coverage checker is finitary splitting, discussed and proved correct in Section 3.6.

Definition of Coverage
A coverage goal is simply a term (object or type) with some free variables. Intuitively, a coverage goal stands for all of its closed instances. In order to emphasize the interpretation of the variables as standing for closed terms, we write ∆ for such contexts and denote variables in ∆ by u and v rather than x and y. The distinction between ∆ and Γ can be formalized (see [20]), but this is not necessary for the present purposes.
A coverage problem is given by a goal and a set of patterns. One can think of these as the patterns of a case expression in a functional program, or the input terms in the clause heads of a logic program. In the general case, a set of patterns is just a set of terms with free variables.
Definition 1 (Immediate Coverage). We say a coverage goal ∆ ⊢ U : V is immediately covered by a collection of patterns ∆ i ⊢ U i : V i if there is an i and a substitution ∆ ⊢ σ i : ∆ i such that U = U i[σ i].

Coverage itself has an infinitary definition, requiring immediate coverage of every ground instance of a goal.
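A first-order model of immediate coverage can be sketched as follows. This is our own simplification, not the paper's algorithm: terms are tuples, free variables are tagged `('var', name)`, the goal's variables are treated as rigid, and only the pattern's variables may be instantiated.

```python
def match(pattern, goal, subst=None):
    """Return a substitution on the pattern's variables under which
    the pattern equals the goal, or None if there is none."""
    subst = {} if subst is None else subst
    if pattern[0] == 'var':
        u = pattern[1]
        if u in subst:                       # non-linear pattern: check
            return subst if subst[u] == goal else None
        return {**subst, u: goal}
    if goal[0] == 'var' or pattern[0] != goal[0] or len(pattern) != len(goal):
        return None                          # rigid mismatch
    for p, g in zip(pattern[1:], goal[1:]):
        subst = match(p, g, subst)
        if subst is None:
            return None
    return subst

def immediately_covered(goal, patterns):
    """Definition 1, first-order: some pattern matches the goal."""
    return any(match(p, goal) is not None for p in patterns)
```

Note that the goal `('var', 'u')` is not immediately covered by the pattern s x, even though its ground instance s z is; this is precisely the gap between immediate coverage and coverage that splitting must bridge.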
In this formulation the problem of coverage is very general, because the type of U and the types of the U i need not be the same. It turns out that the algorithm is significantly easier to describe and prove correct if we restrict U and the U i to be types, with V = V i = type.

Definition 3 (Type-Level Coverage). We say a goal ∆ ⊢ A : type is covered by a collection of patterns ∆ i ⊢ A i : type if every ground instance • ⊢ A[τ] : type for • ⊢ τ : ∆ is immediately covered by some ∆ i ⊢ A i : type.
The implementation in Twelf transforms any coverage problems that arise into this type-level form. This translation is straightforward and only sketched here. Given a coverage goal ∆ ⊢ M : A, assume first that A = a N 1 . . . N n for a : Πx 1:A 1. . . . Πx n:A n. type. In this case we declare a new type family a′ : Πx 1:A 1. . . . Πx n:A n. a x 1 . . . x n → type. The new coverage goal is now simply ∆ ⊢ a′ N 1 . . . N n M : type. All patterns are transformed in the analogous way, using the same a′ to replace a. If A starts with some leading Π-quantifiers we carry them over from the general to the restricted form.
To summarize, without loss of generality, in the remainder of this paper we consider only coverage goals of the form ∆ ⊢ A : type and patterns of the form ∆ i ⊢ A i : type.

Strict Patterns
To determine if a goal is immediately covered we have to solve a higher-order matching problem, instantiating the patterns A i to match the goal A. Not incidentally, this is also the operation that is performed when matching a case subject against the patterns in each arm of a case branch, or when unifying the input arguments to a predicate with the clause head. In order for this pattern matching to be decidable (for the coverage algorithm) and also so that the operational semantics is well-defined (for the execution of a functional or logic program), we require the patterns to be strict. Strictness for a pattern ∆ i ⊢ A i : type requires that each variable in ∆ i occur in A i at least once in a rigid position [10,17].

Definition 4 (Strictness). We say that u has a strict occurrence in U if ∆; Γ ⊢ u U as defined by the rules depicted in Figure 1. A pattern ∆ i ⊢ A i : type is strict if every variable declared in ∆ i has a strict occurrence in A i.

Informally, an occurrence of u is strict if it is not below another variable in ∆ and if that occurrence forms a higher-order pattern in the sense of Miller [13], that is, u is applied to distinct parameters as expressed by the judgment Γ ⊢ u x 1 . . . x n pat. Unlike higher-order patterns in the sense of Miller, however, other forms of occurrences of u are allowed, which is a practically highly significant generalization. All of the examples in Twelf are strict, but many higher-order examples are not patterns in the sense of Miller. Strictness is sufficient here because we are only interested in matching and not full unification.
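The strictness condition can be sketched as follows. This is our own simplification of the check, not the rules of Figure 1: terms are `('lam', x, body)` or `(head, args)` with heads tagged `('ex', u)`, `('param', x)`, or `('const', c)`, and an occurrence of u counts as strict only if u is applied to distinct bound parameters and does not sit inside the arguments of any existential variable.

```python
def strict_occurrence(u, term, bound=frozenset()):
    """Does u have a strict occurrence in term?"""
    if term[0] == 'lam':
        _, x, body = term
        return strict_occurrence(u, body, bound | {x})
    (kind, name), args = term
    if kind == 'ex':
        if name != u:
            return False   # below another existential variable: not strict
        # the arguments must be distinct bound parameters (Miller pattern)
        params = [a[0][1] for a in args
                  if a[0] != 'lam' and a[0][0] == 'param' and not a[1]]
        return (len(params) == len(args) == len(set(params))
                and set(params) <= bound)
    # rigid head (constant or parameter): search the arguments
    return any(strict_occurrence(u, a, bound) for a in args)
```

For example, u has a strict occurrence in λx. λy. c (u x y) but not in λx. c (u x x) (arguments not distinct) nor in λx. v (u x) (below the existential v); the last two terms may still be legal patterns if u occurs strictly elsewhere.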

Theorem 1. Given a coverage goal ∆ ⊢ A : type and a strict pattern ∆′ ⊢ A′ : type, it is decidable whether there exists a substitution ∆ ⊢ σ : ∆′ such that A = A′[σ]. Moreover, if such a substitution exists it is uniquely determined.
Note that in the above theorem there is no requirement on the coverage goal A except that it be well-typed. Indeed, in practice, it will often not be a higher-order pattern, nor will it be strict. This failure of strictness is the result of the splitting operation described in the next section.

Splitting
In this section we present the second cornerstone of our coverage checking algorithm, namely splitting. This is a generalization of a similar operation proposed by Coquand [3]. Splitting is the answer to the question of how to proceed if the current coverage goal is not immediately covered by any of the patterns. In this case coverage might still hold, since we require only that all ground instances of the goal be immediately covered. Since there may be infinitely many ground instances, we instantiate the coverage goal only partially, one layer at a time.
In this situation the coverage goal may be refined into a new set of coverage goals, each of which must be covered in order for the initial coverage goal to succeed. This refinement of a coverage goal is determined by a finite, complete collection of non-redundant substitutions for its free variables. Applied to the current coverage goal, each substitution generates a new coverage goal that can be checked for coverage recursively.
Refinement is implemented via the splitting operation on a coverage goal, which requires higher-order unification rather than just matching. It is discussed in the remainder of this section. The strategy for how to invoke this operation is the subject of the next section (3.4).

Definition 5 (Non-redundant complete set of substitutions). Let ∆ ⊢ A : type be a coverage goal. We say a finite collection ∆ i ⊢ τ i : ∆ is a non-redundant complete set of substitutions if for every • ⊢ τ : ∆ there exists a unique i and a unique • ⊢ τ′ : ∆ i such that τ = τ i • τ′.

Refining coverage goals through a non-redundant complete set of substitutions is a conservative operation. Coverage of the refined set of coverage goals implies coverage of the original goal.
Theorem 2 (Conservativity of refinement). Let ∆ ⊢ A : type be a coverage goal and ∆ i ⊢ τ i : ∆ a non-redundant complete collection of substitutions. All ∆ i ⊢ A[τ i] : type are covered by a given set of patterns if and only if ∆ ⊢ A : type is covered.
Proof. Coverage depends only on the set of ground instances of a coverage goal. But the collection of all ground instances of the ∆ i ⊢ A[τ i] : type is exactly the same as the set of ground instances of ∆ ⊢ A : type, since the τ i form a complete set. Hence coverage is preserved by refinement. □

Next we address the question of how to construct such a refinement. The method we are using is called splitting, and is inspired by a similar operation present in ALF [3,1], which in turn goes back to the basic steps in Huet's algorithm for higher-order unification [10].
Among all the goals that are not immediately covered we select one goal ∆ ⊢ A : type, and from its context ∆ one declaration u : ΠΓ.B u. We refer to u as the splitting variable. The type of u may be a function type; therefore, without loss of generality, we write it in the form ΠΓ.B u, where, for the sake of conciseness, we consolidate all successive Π-abstractions into one context Γ. This is only an abbreviation and does not properly extend LF. We also use the abbreviation p Γ, which stands for p x 1 . . . x m if Γ = x 1:A 1, . . ., x m:A m, where p is a constant or a parameter. Furthermore, p (∆ Γ) is a shorthand for p (u 1 Γ) . . . (u n Γ) for ∆ = u 1:A 1, . . ., u n:A n.
We now want to determine the possible top-level structures of a term M : ΠΓ. a M 1 . . . M m. Because of the existence of canonical forms, it is enough to search the signature and the local context for constants that may occur in head position in M. All we have to do is to verify that the types unify, but this is far from trivial, since we are in a higher-order setting and have dependent types. We will discuss our choice of unification algorithm in more detail later; here we simply describe how to invoke it to obtain a complete and non-redundant set of substitutions.
Let Γ be the local context of variables under which we have to consider a constant application. In general, the type of a constant is Πu 1:A 1. . . . Πu n:A n. B with an atomic type B. For the purpose of splitting, each u i is intuitively interpreted as an existential variable that can be instantiated to terms valid in Γ. To account for those local dependencies, we raise those variables by Γ and turn all u i into variables of functional type abstracting over Γ.

Definition 6 (Raising). Let Γ be a context of local parameters and A the type of a constant c. Raising A by Γ yields ∆′ ⊢ A′, a context ∆′ of raised existential variables and a raised type A′ (which always has the form ΠΓ. B′).
What makes raising a tricky operation is that the u i may occur elsewhere in the type, and need to be replaced by their raised versions u′ i applied to Γ. The ∆′ that is computed during raising contains all the u i in raised form.
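The following sketch shows raising in a deliberately simplified setting: existential variables have atomic types, Γ is a list of parameters, and raising turns each u : A into u′ : Γ → A while replacing every occurrence of u by u′ applied to the parameters of Γ. The representations (tuples for terms, a pair of argument types and result for raised types) are our own assumptions, not the paper's.

```python
def replace_heads(t, sub):
    """Replace each variable occurrence by its raised application."""
    if t[0] == 'var':
        return sub.get(t[1], t)
    return (t[0], *[replace_heads(a, sub) for a in t[1:]])

def raise_over(gamma, exvars, target):
    """gamma: [(x, type)]; exvars: [(u, atomic_type)]; target: a term
    mentioning the u's. Returns the raised context and raised target."""
    gamma_tys = tuple(t for _, t in gamma)
    params = tuple(('var', x) for x, _ in gamma)
    # each u : A becomes u' : gamma_tys -> A
    raised_ctx = [(u + "'", (gamma_tys, a)) for u, a in exvars]
    # ... and each occurrence of u becomes u' x1 ... xm
    sub = {u: (u + "'", *params) for u, _ in exvars}
    return raised_ctx, replace_heads(target, sub)
```

For example, raising u : nat over Γ = x:nat in the target eq u x yields u′ : nat → nat and the target eq (u′ x) x, exactly the bookkeeping described above.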
Next, we describe the central definition of this section: splitting (Definition 7). We follow standard practice and describe unification as a first-order formula over equations U ≈ U′. The particular unification algorithm that we use is higher-order pattern unification that postpones unresolved unification equations as constraints. The algorithm is described in detail in [5]. For our coverage algorithm, however, we restrict its generality a bit: although we allow constraints to arise during the process of unification, we require that after completion all constraints have been resolved. Otherwise we do not allow splitting over the specified variable. This is handled in our algorithm for selecting variables to split by trying another variable instead. Unfortunately, successive selections of splitting variables are not independent, and it is possible that some sequences of splitting operations fail (with spurious counterexamples) while other sequences could succeed. In principle we could backtrack here, but this is currently not implemented.

In Definition 7, the unification problems in question are, for constants,

∆, ∆ c ⊢ ΠΓ. B u ≈ ΠΓ. B c    (1)

with ∆′ ⊢ σ c : ∆, ∆ c the most general unifier of Equation (1) if it exists; and, for bound variables y : Π∆ y.B y ∈ Γ with ∆ y ⊢ ΠΓ.B y = raise Γ (Π∆ y.B y),

∆, ∆ y ⊢ ΠΓ. B u ≈ ΠΓ. B y    (2)

with ∆′ ⊢ σ y : ∆, ∆ y the most general unifier of Equation (2) if it exists.
Since we collect all such most general unifiers, cases for which the unification problem fails simply do not contribute a substitution to the result of the splitting operation.
The main result of this section is that splitting always generates a set of substitutions that is non-redundant and complete. Obviously, raising plays a major role in this algorithm, prompting us to prove an auxiliary lemma about raising. It guarantees that any instantiation σ = M 1/u 1, . . ., M n/u n of the variables in ∆ with respect to Γ can be raised to the empty context as σ′ = (λΓ.M 1)/u′ 1, . . ., (λΓ.M n)/u′ n. Because of space considerations, we have omitted the generalized formulation of this lemma, which one would prove by induction over the structure of the context ∆; for all corresponding u from ∆ and u′ from ∆′, it guarantees that the equation (u′ Γ)[σ′] ≡ u[σ] holds. Finally, we state and prove the main theorem of this section, which informally states that no cases are lost due to splitting.

Theorem 3. Let ∆ ⊢ A : type be a coverage goal with splitting variable u. The collection of all substitutions σ c and σ y determined by the splitting operation (Definition 7) forms a non-redundant complete set of substitutions for ∆.
Proof. Case: the head of the instance of u is a constant c. By concatenating σ and τ we obtain a new substitution η that satisfies • ⊢ η : ∆, ∆ c. By uniqueness of types for LF, the two types in Equation (1) are equivalent under η. Furthermore, also from Corollary 1, we can infer the corresponding equations for all u i ∈ ∆ c. Consequently, η is a unifier for Equation (1). Recall that by construction σ c is most general. Therefore, there exists a • ⊢ σ′ : ∆′ such that η = σ c • σ′. By restriction to ∆, we obtain that there exists a σ′ such that σ = σ c • σ′.
Case: Almost identical to the one above, except that η will be a unifier for Equation (2). □

The Coverage Algorithm
Recall that a coverage goal ∆ ⊢ A : type is immediately covered by a collection of patterns ∆ i ⊢ A i : type if there is an i and a substitution ∆ ⊢ σ i : ∆ i such that A = A i[σ i]. Immediate coverage is central to the naive, non-deterministic coverage algorithm which we discuss next. We assume we have a set of coverage goals, all of which must be covered for the algorithm to succeed. In the first step, this set is initialized with the goal ∆ ⊢ A : type. We pick one of the coverage goals and determine, via strict higher-order matching, whether it is immediately covered by any covering type A i. If so, we remove it from the set and continue. If not, we non-deterministically select a variable in the coverage goal and split it into multiple goals, which replace it in the collection of coverage goals.
This coverage algorithm is naive because it may not terminate, even if the goal is covered. Even if types are non-empty and coverage holds, splitting the wrong variable can lead to non-termination.
The procedure we propose in this section always terminates and either indicates that coverage holds, or outputs a set of potential counterexamples. Some of these may fail to be actual counterexamples, because we may not be able to instantiate the remaining variables to a ground term that is not covered. If the counterexample is ground, however, it is guaranteed to be an actual counterexample. We analyze the possible forms of counterexamples in more detail at the beginning of Section 3.6.
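A toy worklist version of such a checker, over the first-order natural numbers, can be sketched as follows. This is our own hedged model, not the paper's algorithm: goals are checked for immediate coverage by first-order matching, uncovered goals are split on their leftmost variable, and a simple depth bound (`fuel`) stands in for the paper's measure-based termination argument.

```python
SIG = {'nat': [('z', 0), ('s', 1)]}  # constructor name and arity

def check(patterns, fuel=8):
    """Return [] if covered, else a list of (potential) counterexamples."""
    goals, counterexamples = [('var', 'u')], []
    while goals:
        g = goals.pop()
        if any(matches(p, g) for p in patterns):
            continue                      # immediately covered: drop
        v = leftmost_var(g)
        if v is None or depth(g) > fuel:
            counterexamples.append(g)     # closed, or out of fuel
        else:
            for c, n in SIG['nat']:       # split v with each constructor
                inst = (c,) if n == 0 else (c, ('var', v + "'"))
                goals.append(subst(g, v, inst))
    return counterexamples

def matches(p, g, s=None):
    s = {} if s is None else s
    if p[0] == 'var':
        if p[1] in s:
            return s[p[1]] == g
        s[p[1]] = g
        return True
    if g[0] == 'var' or p[0] != g[0]:
        return False
    return all(matches(pa, ga, s) for pa, ga in zip(p[1:], g[1:]))

def leftmost_var(t):
    if t[0] == 'var':
        return t[1]
    for a in t[1:]:
        v = leftmost_var(a)
        if v:
            return v
    return None

def subst(t, v, inst):
    if t == ('var', v):
        return inst
    if t[0] == 'var':
        return t
    return (t[0], *[subst(a, v, inst) for a in t[1:]])

def depth(t):
    return 1 if t[0] == 'var' or len(t) == 1 else 1 + max(depth(a) for a in t[1:])
```

With patterns z and s x the checker reports coverage; with patterns z and s z it reports, among others, the ground (and hence genuine) counterexample s (s z), plus one open goal cut off by the fuel bound, which corresponds to the paper's potential counterexamples with free variables.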
The basic idea is to record why immediate coverage fails and not just whether it does. Assume we are given a coverage goal ∆ ⊢ A : type and a pattern ∆′ ⊢ A′ : type. Instead of just applying our matching algorithm, we construct a conjunction E of equations and the symbols ⊤ (success) and ⊥ (failure) such that ∆, ∆′; • ⊢ A < A′ =⇒ E. This is accomplished by using the rules for the judgment ∆; Γ ⊢ U < U′ =⇒ E defined in Figure 2. We should read this judgment as: match U against pattern U′ in the parameter context Γ to obtain the residual equations E. Here the context ∆, ∆′ is the disjoint union of the (existential) variables of U and U′, of which only those in ∆′ may be instantiated during matching. Initially, the context Γ is always empty, and both U and U′ are types. Internally, however, we require the context Γ of shared local parameters. We can think of the algorithm as rigid decomposition, which corresponds to the simplify function in Huet's algorithm for higher-order unification. If all residual equations can be solved (and there is no ⊥), then matching is successful. Otherwise, we have to interpret the equations to determine candidates for splitting that will make progress (as defined below).

Fig. 2. Rigid Matching Algorithm
Note that during rigid matching, no variable assignment takes place: where the two terms disagree, we record an equation. If matching is not possible, we might either record an equation or return ⊥.
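In a first-order model of our own (not the rules of Figure 2), rigid decomposition can be sketched as follows: no variables are assigned, flexible pairs become residual equations, rigid head clashes become ⊥, and the splitting candidates are exactly the goal variables equated with a rigid pattern head.

```python
BOT = 'bot'  # stands for the symbol ⊥

def rigid_match(goal, pattern):
    """Decompose goal against pattern, returning residual equations
    (goal_subterm, pattern_subterm) and BOT entries for clashes."""
    eqs = []
    def go(g, p):
        if g[0] == 'var' or p[0] == 'var':
            eqs.append((g, p))               # flexible: record, don't solve
        elif g[0] != p[0] or len(g) != len(p):
            eqs.append(BOT)                  # rigid heads clash
        else:
            for ga, pa in zip(g[1:], p[1:]):
                go(ga, pa)
    go(goal, pattern)
    return eqs

def split_candidates(eqs):
    """Goal variable vs. rigid pattern head: splitting may make
    progress. No candidates are suggested if BOT occurs."""
    if BOT in eqs:
        return set()
    return {g[1] for g, p in eqs if g[0] == 'var' and p[0] != 'var'}
```

For example, matching the goal s u against the pattern s z leaves the residual equation u ≈ z and suggests u as a splitting candidate, while matching z against s x immediately yields ⊥ and suggests nothing, mirroring Lemmas 2 and 3.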
In order to state the lemmas in the generality required for an inductive proof, we consider ∆; Γ ⊢ U : V for the goal and ∆′; Γ ⊢ U′ : V′ for the pattern.

Lemma 2. If ∆, ∆′; Γ ⊢ U < U′ =⇒ E and E contains ⊥, then no ground instance of U is an instance of U′.

Proof. By induction on the given derivation.
Because U′ cannot immediately cover any instance of U, we do not generate any candidate variables for splitting in ∆ in this case.

Lemma 3. If ∆, ∆′; Γ ⊢ U < U′ =⇒ E, where E does not contain ⊥ but contains equations of the form u . . . ≈ c . . . or u . . . ≈ x . . ., then U′ does not immediately cover U (but U′ could possibly cover some instance of U).
Proof.By induction on the given derivation.In the base cases, x and c are rigid and therefore cannot be instantiated to u.
In this case, any variable u occurring in an equation of the given form is added to the set of candidate variables for splitting, since it is possible that splitting might make progress.
Lemma 4. If ∆, ∆′; Γ ⊢ U < U′ =⇒ E, where E contains neither ⊥ nor equations of the forms in Lemma 3, and the residual equations in E have a solution ∆ ⊢ σ : ∆′, then σ is a valid match and shows that U′ covers U.
Proof. Again, by induction on the given derivation. The base cases are evident. The tricky part in the inductive argument is that the two matched terms do not necessarily have the same type or kind (even though they do initially), because we postpone non-rigid equations. However, as in the case of higher-order dependently typed unification [?], it is enough to maintain well-typedness modulo postponed equations if we eventually solve them from left to right. This means that if we have no candidates from the first two kinds of equations, we call a strict higher-order matching algorithm [23] on the residual equations. If this succeeds, then A′ covers A. Otherwise, A′ does not cover A, and we suggest no candidate variables for splitting because it would be difficult to guarantee termination.
When considering a particular coverage goal ∆ ⊢ A : type, we apply the above algorithm to each pattern. If one of them immediately covers the goal, we are done. If not, we take the union of all the suggested candidates and pick one non-deterministically. The current implementation picks the rightmost candidate in ∆, because internal dependencies might further constrain variables to its left during the splitting step. If splitting fails because higher-order unification with the algorithm in [5] cannot determine a complete and non-redundant set of substitutions, then we try another candidate, and so on. If there are no remaining splitting candidates, we add the coverage goal to the set of potential counterexamples and pick another goal.

Termination
The overall structure of the algorithm is such that the splitting step replaces a coverage goal by several others. In order to show termination with respect to a simple multi-set ordering, we must show that each of the subgoals that replace a given goal is smaller according to some well-founded measure.
We calculate this measure as follows. Given a coverage goal ∆ ⊢ A : type, we apply rigid matching against each pattern. We eliminate those equations that contain ⊥. Among the remaining ones, we consider only equations u U 1 . . . U n ≈ h U′ 1 . . . U′ m where h = x or h = c. Note that all candidates for splitting appear on the left-hand side of such an equation. We take the sum of the sizes of the right-hand sides, as measured by the number of bound variable and constant occurrences.
When we apply splitting to any candidate variables in ∆, that is, one of the variables u that appears on the left-hand side of an equation as given above, then this measure decreases.
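The measure can be sketched in a first-order model of our own, where residual equations are pairs of a goal subterm and a pattern subterm (with `'bot'` entries marking clashes): we sum, over the equations that pair a goal variable with a rigid pattern term, the number of constant occurrences on the rigid side.

```python
def size(t):
    """Count constant occurrences; variable occurrences count 0."""
    if t[0] == 'var':
        return 0
    return 1 + sum(size(a) for a in t[1:])

def measure(eqs):
    """Sum the rigid right-hand sides of equations u ... ≈ h ...,
    ignoring clashes (the paper eliminates ⊥-equations first)."""
    return sum(size(p) for e in eqs if e != 'bot'
               for g, p in [e] if g[0] == 'var' and p[0] != 'var')
```

For instance, the single equation u ≈ s z has measure 2; after splitting u to s u′ the surviving equation u′ ≈ z has measure 1, which is the decrease that Lemma 5 establishes in general.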
Lemma 5. Given a coverage goal ∆ ⊢ A : type and a fixed set of patterns proposed to cover it, if we split the coverage goal along a variable u suggested by rigid matching, then each of the resulting subgoals has a smaller measure than the original goal.
Proof. The variable u occurs on the left-hand side of at least one residual equation Γ ⊢ u U 1 . . . U n ≈ h U′ 1 . . . U′ m. After splitting, this residual equation may disappear altogether (say, because the case has become impossible). However, if rigid matching reaches this subterm again in the subgoal, it will now have the form h′ U* 1 . . . U* k ≈ h U′ 1 . . . U′ m for some h′ = x (a local parameter in Γ) or h′ = c (a constant). If h′ ≠ h, then this equation drops out altogether, since it generates ⊥ instead. If h′ = h, then k = m and the algorithm recurses by comparing each U* j with U′ j for 1 ≤ j ≤ m. But this eliminates at least one constant or variable occurrence (namely h), thereby decreasing our measure. □

Theorem 4. Coverage checking terminates after a finite number of steps, yielding either an indication of coverage or a finite set of potential counterexamples.
Proof. Immediate from the previous lemma, using a multi-set ordering on the set of coverage goals.

Finitary splitting
The failure-directed algorithm described above works well in most practical cases, within or outside the pattern fragment. There are two remaining difficulties: one is remaining constraints during splitting, as discussed in Section 3.3; the other is that occasionally the generated counterexamples fail to be actual counterexamples. The latter is a common occurrence. In large part this is because meta-theoretic proofs represented as dependently typed functions or relations often have a number of cases that are impossible. Instead of explicitly proving that the cases are impossible, one usually just lists the cases that can arise, if it is syntactically obvious that the others cannot.
What are the types of spurious counterexamples that may be produced by the algorithm? The most obvious one is a coverage goal that is incompatible with all patterns but has no ground instances. We explain below how to handle some of these cases. A less obvious problem is that matching the residual equations may fail because of a spurious dependency that cannot be an actual dependency because of subordination considerations. We treat this case by applying strengthening [23] to eliminate these spurious dependencies throughout the algorithm. Finally, it is possible that two distinct variables of the coverage goal fail to match, yet they must be identical because their type has only one element. Finitary splitting will often catch these cases and correctly report coverage.
In order to handle as many spurious counterexamples as possible, we extend the algorithm described above as follows. Once the algorithm terminates with a set of proposed counterexamples to coverage, we examine each such counterexample to see if we can determine that it is impossible, that is, whether it quantifies over an empty type. More concretely, let ∆ ⊢ A : type be a counterexample, that is, a coverage goal that is not covered and does not produce any splitting candidates. We now attempt to split each variable u:A′ in ∆ in turn, leading to a new set of coverage goals ∆ i ⊢ U i : V i for 0 ≤ i < n. If n = 0 we know that the case is impossible.
If n > 0 we could, in principle, continue the algorithm recursively to see if each of the subgoals ∆ i ⊢ U i : V i is impossible. However, in general this would not terminate (and cannot, because inhabitation is undecidable). Instead, we only continue to split further if all of the new variables u k : A k in ∆ i have a type that is strictly subordinate to the type A′ of the original variable [26,23]. Otherwise, we fail and report the immediate supergoal as a potential counterexample.
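A toy version of this idea, restricted to first-order datatypes and an assumed subordination ranking, can be sketched as follows. The signature `SIG`, the ranking `RANK`, and the type names are illustrative assumptions of ours, not part of the paper's development.

```python
SIG = {
    'empty': [],                          # no constructors (the n = 0 case)
    'nat':   [('z', []), ('s', ['nat'])],
    'wrap':  [('w', ['empty'])],          # inhabited only if 'empty' is
}
RANK = {'empty': 0, 'nat': 1, 'wrap': 2}  # assumed well-founded subordination

def provably_empty(typ):
    """A type is reported empty if every constructor needs an argument
    at a strictly subordinate type that is itself provably empty.
    Recursion only descends in the well-founded RANK order, so this
    terminates; it may answer False for types that are in fact empty."""
    for c, argtys in SIG[typ]:
        if not any(RANK[a] < RANK[typ] and provably_empty(a)
                   for a in argtys):
            return False  # constructor c might produce an inhabitant: give up
    return True
```

Here `empty` is recognized as empty outright (no constructors), `wrap` is recognized because its only constructor demands an inhabitant of the strictly subordinate empty type, and `nat` is correctly left alone, mirroring the incompleteness the paper accepts in exchange for termination.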
Theorem 5. Finitary splitting terminates, either with an indication that the given coverage goal has no ground instances, or failure.
Proof. There are only a finite number of variables in a given coverage goal. During each step of splitting we either stop or obtain subgoals where a variable u : A has been replaced by several variables u i : A i, each of which has a type strictly lower in the subordination hierarchy. Since this hierarchy is well-founded, finitary splitting terminates. □

This process can be very expensive. Fortunately, we have not found it to be a bottleneck in practice, because finitary splitting is applied only to remaining counterexamples. Usually there are not many, and usually it is immediate to see that they are indeed possible, because most types are actually inhabited. We do not presently try to verify whether the types are actually inhabited (that is, start a theorem prover), although it may be useful for debugging purposes to distinguish between definite and potential counterexamples. In a future extension this could be done at the user's direction if he or she cannot easily detect the source of the failure of coverage.

Implementation
The coverage checking algorithm is implemented as part of the current Twelf [18] distribution, available from the Twelf webpage at http://www.twelf.org/. From the user's perspective, it can be employed in two different ways.
First, Twelf ascribes an operational meaning to LF signatures, which can be executed with a logic programming interpreter. Verifying coverage via the coverage checker means that execution will always be able to make progress and cannot fail, assuming the program is well-moded, that is, the roles of arguments as input and output are properly respected. That execution also terminates is an entirely different issue, enforced by a termination and reduction checker [21,19]. In this relational form, the coverage algorithm distinguishes between input coverage (the argument positions that will be matched when a logic program is called) and output coverage (the argument positions that will be matched after a subgoal has been successfully evaluated). Although the interaction between the two is well understood up to programs of order 2, output coverage is a difficult operation to implement for higher-order logic programs of order 3 and greater.
Second, the internal data structures of Twelf are used by the functional programming language Delphin, which supports function definition by cases over arbitrary LF terms. Its type theory is based on T ω + [24]. Although a suitable Delphin parser is still under construction, a specialized converter allows Twelf logic programs to be translated and run natively in Delphin. The features differentiating T ω + from the type theories used in other functional programming languages are dependently typed data, pattern matching against functions, and a world system that controls the dynamic extension of a datatype by new constructors at run-time. Delphin programs do not distinguish between input and output coverage since they are functional programs, which renders Delphin an attractive target platform for coverage checking of Twelf logic programs. And indeed, we have managed to overcome the limitations of the Twelf coverage checker due to the order restriction by translating Twelf logic programs into Delphin functional programs and subsequently applying the Delphin coverage checker.

Related Work
Coquand has considered the problem of coverage for a type theory in the style of Martin-Löf [3]. He defines coverage and splitting in much the same way we do here, except that no matching against the structure of λ-expressions is allowed. He also suggests a non-deterministic semi-decision procedure for coverage by guessing the correct sequence of variable splits. In an implementation this split can be achieved interactively.
Most closely related to ours is the work by McBride [12]. He refines Coquand's idea by suggesting an algorithm for successive splitting that is quite similar to ours in the first-order case. He also identifies the problem of empty types and suggests recognizing "obviously" empty types, which is a simpler variant of finitary splitting. Our main contribution with respect to McBride's work is that we allow matching against the structure of higher-order terms, which poses significant additional challenges.
Another related development is the theory of partial inductive definitions [7], especially in its finitary form [6], and the related notion of definitional reflection [22]. This calculus contains a rule schema that, re-interpreted in our context, would allow any (finite) complete set of unifiers between a coverage goal ∆ ⊢ A : type and the heads of the clauses defining A. Because of the additional condition of so-called a-sufficiency for the substitutions, this was never fully automated. Also, it appears that a simple, finite complete set of unifiers was computed as in the splitting step, but that the system could not check whether an arbitrary given set of premises could be obtained as a finite complete set of unifiers. In the Coq system [11], functions defined by patterns can be compiled to functions defined by standard primitive recursive elimination forms. Because of the requirement to compile such functions back into pure Coq and the lack of matching against functional expressions, the algorithm is rather straightforward compared to our coverage checker and does not handle variable dependencies, non-linearity, or empty types. It does, however, treat polymorphism, which we have not considered.

Conclusion
We have presented a solution to the coverage checking problem for LF, generalizing and extending previous approaches. The central technical developments are strict patterns (which significantly generalize higher-order patterns in the sense of Miller), strict higher-order matching, splitting in the presence of full higher-order unification, and a two-phase control structure to guarantee termination of the algorithm.
Our coverage algorithm is sound and terminating, but it is necessarily incomplete. Applied to a given set of patterns, it either reports success or generates a set of potential counterexamples, which often contain the vital information about why coverage has failed. Because coverage is undecidable in the case of LF, the algorithm sometimes generates spurious counterexamples; these can sometimes be removed with a highly specialized albeit incomplete algorithm called finitary splitting, which has proven tremendously useful in practice.
All algorithms and techniques described in this paper are implemented in the Twelf system, Version 1.4 (December 2002). Many examples of coverage are available in the example directories of the Twelf distribution. The current implementation is somewhat more general than what we describe here, since it also accounts for regular worlds [23]. We plan to extend the rigorous treatment given here to this larger class of coverage problems in a future paper.

Definition 7 (Splitting). Let ∆ ⊢ A : type be a coverage goal, and u in ∆ = ∆ 1, u : ΠΓ.B u, ∆ 2 a splitting variable. The splitting operation considers each constant c declared in the signature Σ and each local parameter y declared in Γ in turn, and determines a set of substitutions σ c, σ y as follows.

1. Constants: Let c : Π∆ c.B c ∈ Σ, and ∆ c ⊢ ΠΓ. B c = raise Γ (Π∆ c.B c). Let ∆′ ⊢ σ c : ∆, ∆ c be the most general unifier of the higher-order unification problem (1), if it exists.

2. Bound Variables: Let y : Π∆ y.B y ∈ Γ, and ∆ y ⊢ ΠΓ. B y = raise Γ (Π∆ y.B y). Let ∆′ ⊢ σ y : ∆, ∆ y be the most general unifier of the higher-order unification problem (2), if it exists.