ON THE AXIOMATIC TREATMENT OF CONCURRENCY

This paper describes a semantically-based axiomatic treatment of a simple parallel programming language. We consider an imperative language with shared variable concurrency and a critical region construct. After giving a structural operational semantics for the language we use the semantic structure to suggest a class of assertions for expressing semantic properties of commands. The structure of the assertions reflects the structure of the semantic representation of a command. We then define syntactic operations on assertions which correspond precisely to the syntactic constructs of the programming language; in particular, we define sequential and parallel composition of assertions. This enables us to design a truly compositional proof system for program properties. Our proof system is sound and relatively complete. We examine the relationship between our proof system and the Owicki-Gries proof system for the same language, and we see how Owicki's parallel proof rule can be reformulated in our setting. Our assertions are more expressive than Owicki's, and her proof outlines correspond roughly to a special subset of our assertion language. Owicki's parallel rule can be thought of as being based on a slightly different form of parallel composition of assertions; our form does not require interference-freedom, and our proof system is relatively complete without the need for auxiliary variables. Connections with the "Generalized Hoare Logic" …


1. Introduction.
It is widely accepted that formal reasoning about program properties is desirable. Hoare's paper [12] has led to attempts to give axiomatic treatments for a wide variety of programming languages. Hoare's paper treated partial correctness properties of commands in a sequential programming language, using simple assertions based on pre- and post-conditions; the axiom system given in that paper is sound and relatively complete [8]. The proof system was syntax-directed, in that axioms or rules were given for each syntactic construct. The assertions chosen by Hoare are admirably suited to the task: they are concise in structure and have a clear correlation with a natural state transformation semantics for the programming language; this means that fairly straightforward proofs of the soundness and completeness of Hoare's proof system can be given [1,8].
When we consider more complicated programming languages the picture is not so simple. Many existing axiomatic treatments of programming languages have turned out to be either unsound or incomplete [25]. The task of establishing soundness and completeness of proof systems for program properties can be complicated by an excessive amount of detail used in the semantic description of the programming language. This point seems to be quite well known, and is made, for instance, in [1]. Similar problems can be caused by the use of an excessively intricate or poorly structured assertion language, or by overly complicated proof rules. Certainly for sequential languages with state-transformation semantics the usual Hoare-style assertions with pre- and post-conditions are suitable. But for more complicated languages which require more sophisticated semantic treatment we believe that it is inappropriate to try to force assertions to fit into the pre- and post-condition mould; such an attempt tends to lead to pre- and post-conditions with a rather complex structure, when it could be simpler to use a class of assertions with a different structure which more accurately corresponds to the semantics. The potential benefits of basing an axiomatic treatment directly on a well chosen semantics have been argued, for instance, in [7], where an axiomatic treatment of aliasing was given. Parallel programming languages certainly require a more sophisticated semantic model than sequential languages, and this paper attempts to construct a more sophisticated axiomatic treatment based on the resumption model of Hennessy and Plotkin [22].
Proof systems for reasoning about various forms of parallelism have been proposed by several authors, notably [2,3,4,11,15,16,17,18,19,20,21]. Owicki and Gries [20,21] gave a Hoare-style axiom system for a simple parallel programming language in which parallel commands can interact through their effects on shared variables. Their proof rule for parallel composition involved a notion of interference-freedom and used proof outlines for parallel processes, rather than the usual Hoare-style assertions. In order to obtain a complete proof system Owicki found it necessary to use auxiliary variables and to add proof rules for dealing with them. These features have been the subject of considerable discussion in the literature, such as [5,16]. Our approach is to begin with an appropriate semantic model, chosen to allow compositional reasoning about program properties. We use the structure of this model more directly than is usual in the design of an assertion language for program properties, and this leads to proof rules with a very simple structure, although (or rather, because) our assertions are more powerful than conventional Hoare-style assertions; Owicki's proof outlines emerge as special cases of our assertions. The soundness and completeness of our proof system are arguably less difficult to establish, as the proof system is closely based on the semantics and the semantics has been chosen to embody as little complication as possible while still supporting formal reasoning about the desired properties of programs.
The programming language discussed here is a subset of the language considered by Owicki [20,21], and by Hennessy and Plotkin [22]. Adopting the structural operational semantics of [22,26] for this language, we design a class of assertions for expressing semantic properties of commands. We then define syntactic operations on assertions which correspond to the semantics of the various syntactic constructs in the programming language; in particular, we define sequential and parallel composition for assertions. This leads naturally to compositional, or syntax-directed, proof rules for the syntactic constructs. We do not need an interference-freedom condition in our rule for parallel composition, in contrast to Owicki's system. Similarly, we do not need an auxiliary variables rule in order to obtain completeness. We show how to derive Owicki's rule for parallel composition, and to explain the need for her interference-freedom condition, using our methods. Essentially, Owicki's system uses a restricted subset of our assertions and a variant form of parallel composition of assertions.
We compare our work briefly with that of some other authors in this field, discuss some of its present limitations, and the paper ends with a few suggestions for further research and some conclusions. In particular, we indicate that our ideas can be extended to cover features omitted from the body of the paper, such as conditional critical regions, loops and conditionals. We also believe that with a few modifications in the assertion language we will be able to incorporate guarded commands [9,10], and with an appropriate definition of parallel composition for assertions we will be able to treat CSP-like parallel composition [13], in which processes do not share variables but instead interact solely by means of synchronized communication.

2. A Parallel Programming Language.
We begin with a simple programming language containing assignment and sequential composition, together with a simple form of parallel composition, and a "critical region" construct. Parallel commands interact solely through their effects on shared variables. For simplicity of presentation we omit conditionals and loops, at least for the present, as we want to focus on the problems caused by parallelism. We will return briefly to these features later. As usual for imperative languages, we distinguish the syntactic categories of identifiers, expressions, and commands. The abstract syntax for expressions and identifiers will be taken for granted; commands are built from labelled atomic actions by the constructs

Γ ::= a: skip | a: I:=E | Γ₁; Γ₂ | [Γ₁ ∥ Γ₂] | a:(Γ)

The critical region construct (Γ) runs its body as a single uninterruptible atomic action; it corresponds to a special case of an await statement in [20], where the notation await true do Γ would have been used.
In describing the semantics of this language, we will focus mainly on commands. The set S of states consists simply of the (partial) functions from identifiers to values:

S = Ide ⇀ V,

where V is some set of expression values (typically containing integers and truth values).
We use s to range over states, and we write s + [I ↦ v] for the state which agrees with s except that it gives identifier I the value v. As usual, the value denoted by an expression may depend on the values of its free identifiers. Thus, we assume the existence of a semantic function ℰ mapping an expression E and a state s to a value ℰ[[E]]s. We specify the semantics of commands in the structural operational style [26], and our presentation follows that of [22], where identical program constructs were considered. We define first an abstract machine which specifies the computations of a command. The abstract machine is given by a labelled transition system: a transition

(Γ, s) -a-> (Γ′, s′)

represents a step in a computation in which the state and remaining command change as indicated, and in which the atomic action labelled a occurs. We write (Γ, s) -> (Γ′, s′) when there is an a for which (Γ, s) -a-> (Γ′, s′). And we use the notation ->* for the reflexive transitive closure of this relation. Thus (Γ, s) ->* (Γ′, s′) iff there is a sequence of atomic actions leading from the first configuration to the second.
The transition relations are defined by the following syntax-directed transition rules; the transition relations are to be the smallest satisfying these laws. This means that a transition is possible if and only if it can be deduced from the rules.

Transition Rules

(a: skip, s) -a-> (null, s)
(a: I:=E, s) -a-> (null, s + [I ↦ ℰ[[E]]s])
(Γ₁, s) -a-> (Γ₁′, s′) implies (Γ₁; Γ₂, s) -a-> (Γ₁′; Γ₂, s′), identifying null; Γ₂ with Γ₂
(Γ₁, s) -a-> (Γ₁′, s′) implies ([Γ₁ ∥ Γ₂], s) -a-> ([Γ₁′ ∥ Γ₂], s′), and symmetrically, identifying [null ∥ Γ₂] with Γ₂ and [Γ₁ ∥ null] with Γ₁
(Γ, s) ->* (null, s′) implies (a:(Γ), s) -a-> (null, s′)
From our definition of the transition system, we see that we have specified that a parallel composition terminates only when both components have terminated. This is because of our conventions about null: we have ([Γ₁ ∥ Γ₂], s) -a-> (Γ₂, s′) whenever (Γ₁, s) -a-> (null, s′), for instance. It is also clear from the definitions that all computations eventually terminate in this transition system, and that no computation gets "stuck": the only configurations in which no further action is possible are the terminal configurations. These properties would not hold if we added guarded commands or loops to the language. This point will be mentioned again later; for now we will concentrate on the language as it stands.

Examples.
These are the only possible computations from this initial configuration. ∎

Example 3.
Let Γ be the command [α: x:=1 ∥ β: y:=1]. Then we have:

(Γ, s) -α-> (β: y:=1, s + [x ↦ 1]) -β-> (null, s + [x ↦ 1] + [y ↦ 1])
(Γ, s) -β-> (α: x:=1, s + [y ↦ 1]) -α-> (null, s + [y ↦ 1] + [x ↦ 1])

This command sets both x and y to 1. ∎
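The transition rules above can be made executable. The sketch below is ours, not the paper's: commands are encoded as Python tuples, states as dictionaries, and expressions as functions from states to values; step() enumerates the moves (Γ, s) -a-> (Γ′, s′), with None standing for the empty command null.

```python
# Assumed encoding: ('skip', a), ('assign', a, I, E), ('seq', c1, c2),
# ('par', c1, c2), ('crit', a, body); states are dicts, E is a function.

def step(cmd, s):
    kind = cmd[0]
    if kind == 'skip':                  # (a: skip, s) -a-> (null, s)
        yield (cmd[1], None, dict(s))
    elif kind == 'assign':              # (a: I:=E, s) -a-> (null, s + [I |-> E[[E]]s])
        _, a, ident, expr = cmd
        s2 = dict(s); s2[ident] = expr(s)
        yield (a, None, s2)
    elif kind == 'seq':                 # steps of G1;G2 come from G1
        _, c1, c2 = cmd
        for a, c1p, s2 in step(c1, s):
            yield (a, c2 if c1p is None else ('seq', c1p, c2), s2)
    elif kind == 'par':                 # interleave the moves of both sides
        _, c1, c2 = cmd
        for a, c1p, s2 in step(c1, s):
            yield (a, c2 if c1p is None else ('par', c1p, c2), s2)
        for a, c2p, s2 in step(c2, s):
            yield (a, c1 if c2p is None else ('par', c1, c2p), s2)
    elif kind == 'crit':                # (a:(G), s) -a-> (null, s') if (G, s) ->* (null, s')
        _, a, body = cmd
        for s2 in finals(body, s):
            yield (a, None, s2)

def finals(cmd, s):
    """All s' with (cmd, s) ->* (null, s')."""
    if cmd is None:
        return [s]
    return [t for _, cp, s2 in step(cmd, s) for t in finals(cp, s2)]

inc = lambda lab: ('assign', lab, 'x', lambda s: s['x'] + 1)
both = ('par', inc('alpha'), inc('beta'))
print(sorted({s['x'] for s in finals(both, {'x': 0})}))   # -> [2]
```

The same machinery confirms, for instance, that [x:=1; x:=x+1 ∥ x:=2] may terminate with x = 3 while [x:=2 ∥ x:=2] may not.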

Semantics.
Using the transition system we may now extract a semantics. For a partial correctness semantics, we should examine the (terminating) computations of a command and extract the initial and final states. Of course, in the present language there is no need to distinguish between total and partial correctness because all computations terminate, but this issue will arise in treatments of an extended language containing loops (for example). For uniformity, we still refer to partial correctness, as the definition we give adapts even to the extended language and does then correspond to partial correctness.

Definition 1.
The semantic function M, giving the partial correctness semantics of commands, is defined by

M[[Γ]] = { (s, s′) | (Γ, s) ->* (null, s′) }.

Examples.
We have already seen, in Example 3, that the command [α: x:=1 ∥ β: y:=1] relates an initial state s to the final state s + [x ↦ 1] + [y ↦ 1].

Reasoning about commands.
In conventional Hoare logics for sequential imperative programs, assertions of the form {P}Γ{Q} are used, with P and Q being called the pre- and post-condition. These conditions are typically drawn from a simple first order language, and are interpreted as predicates of the state. Given a satisfaction relation ⊨ on conditions and states, we say that {P}Γ{Q} is valid iff, whenever s ⊨ P and (s, s′) ∈ M[[Γ]], we have s′ ⊨ Q. In other words, a Hoare assertion of this type describes the relationship between an initial state and the possible final states of a computation of a command. However, it is well known [20,22,23] that in a language involving parallel composition it is not possible to reason about partial correctness properties of a command in isolation: account must be taken of the context in which the command is to be run. This is exemplified by the commands x:=2, and x:=1; x:=x+1, which clearly have the same partial correctness properties in isolation, i.e.

M[[x:=2]] = M[[x:=1; x:=x+1]],

but which exhibit different partial correctness properties in some programming language contexts; for instance, the commands [x:=2 ∥ x:=2] and [x:=1; x:=x+1 ∥ x:=2] do not have the same partial correctness properties, as the latter command may set x to 3. Thus, the M semantics does not always distinguish between pairs of commands, even if there is a program context in which they exhibit different partial correctness behaviour.
Technically, the relational semantics M fails to be fully abstract [22,23] with respect to partial correctness; it makes too few distinctions between commands, and is therefore "too abstract". In order to reason about the correctness of a parallel combination of commands in a manner independent of the context in which the command appears, we need to know more about the individual commands than simply their relational semantics M. Similarly, we cannot axiomatize partial correctness of commands solely on the basis of partial correctness properties of components: conventional pre- and post-condition assertions are not going to suffice.
Hennessy and Plotkin [22] showed that the transition system above can be used to define a semantics which will distinguish between terms if there is a context in which they can exhibit different partial correctness properties. This semantics uses the notion of a resumption.
For our subset of the language, we may adapt these ideas slightly to define the following semantics ℛ for labelled commands, with the definition being

ℛ[[Γ]]s = { (a, (ℛ[[Γ′]], s′)) | (Γ, s) -a-> (Γ′, s′) },

where ℛ[[Γ]] belongs to a domain R of resumptions satisfying R ≅ S → P(A × (R × S)). Justification for this use of a recursively defined domain R of resumptions can be given if we interpret P as a powerdomain construct, and the interested reader should consult [22] for details.
Note that according to this definition we have ℛ[[null]]s = ∅ for all states s. Note also that for any state s, ℛ[[Γ]]s will be a finite set. This can be represented as a tree structure as follows, with a branch for each member of the set, labelled by the corresponding atomic action label, with a son consisting of a resumption-state pair.

The tree structure suggests a class of assertions with components representing the branch structure of trees. We therefore introduce a class of assertions of the form

φ ::= {P}(α₁{P₁}φ₁ + ··· + αₙ{Pₙ}φₙ),    n ≥ 0,

where as before P and the Pᵢ are drawn from some condition language, and where the αᵢ are labels. This notation obviously corresponds with Milner's linear notation for synchronization trees [24]; in addition to labelling the arcs with action labels, we also incorporate conditions at nodes. We make no distinction between assertions which differ only in the order in which their branches are written. A tree representation of such a φ will often be preferable to the linear notation; for example, the assertion {P} Σᵢ αᵢ{Pᵢ}φᵢ may be represented as a tree with root condition P and, for each i, a branch labelled αᵢ leading to the node condition Pᵢ and the subtree for φᵢ. We will feel free to use set braces to delimit conditions as an aid to the eye, and we use NIL for the tree with no branches (this corresponds to termination, since in this language inability to perform any action coincides with termination). Thus, an assertion in which n = 0 will be written {P}NIL; we also introduce the special notation • to stand for the assertion {true}NIL. Finally, it will be convenient to adopt the convention that {P}α{Q} (which does not conform to the syntax above) abbreviates the assertion {P}α{Q}{Q}NIL (which does).
Note that there is an obvious definition of the depth of an assertion φ, and that all assertions have finite depth. The terminal assertions are those with zero depth.
In order to express the property that a command Γ satisfies an assertion φ we write Γ sat φ. This type of formal property will be the subject of our proof system, and we will see later that we have a generalization of conventional Hoare-style assertions.
When φ is the assertion {P} Σᵢ αᵢ{Pᵢ}φᵢ we interpret Γ sat φ in the following way. If the command is started in a state satisfying P, then its initial action must be an αᵢ drawn from the set of initial labels of the assertion, and these labels are precisely the initial actions possible for the command. If the command starts with an αᵢ-action it reaches a state where Pᵢ is true and where the remaining command satisfies φᵢ. Specifically, we write ⊨ Γ sat φ to indicate that Γ satisfies φ. This means that, with the above notation, whenever s ⊨ P and (Γ, s) -a-> (Γ′, s′), then a = αᵢ for some i with s′ ⊨ Pᵢ and ⊨ Γ′ sat φᵢ; and, in addition, whenever s ⊨ P each αᵢ labels some transition from (Γ, s), so that all of the actions specified in φ are indeed possible for Γ when the initial state satisfies P. These definitions can be rephrased in terms of the semantic function ℛ.
Note that we always have ⊨ null sat •, and indeed (non-trivial) terminal assertions can only be satisfied by null.
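The satisfaction relation just described can be checked mechanically over a finite universe of states. The sketch below is ours: assertions are encoded as (P, branches) pairs with predicate-valued conditions, commands as tuples as in the earlier sketch, and we additionally assume that branch labels within an assertion are distinct. Both halves of the definition are tested: every possible move is covered by a branch, and every branch label is possible.

```python
# Assumed encodings: commands ('skip', a) / ('assign', a, I, E) / ('seq', ..)
# / ('par', ..); assertions (P, [(label, P_i, phi_i), ...]); (P, []) is {P}NIL.

def step(cmd, s):
    kind = cmd[0]
    if kind == 'skip':
        yield (cmd[1], None, dict(s))
    elif kind == 'assign':
        _, a, ident, expr = cmd
        s2 = dict(s); s2[ident] = expr(s)
        yield (a, None, s2)
    elif kind == 'seq':
        for a, c1p, s2 in step(cmd[1], s):
            yield (a, cmd[2] if c1p is None else ('seq', c1p, cmd[2]), s2)
    elif kind == 'par':
        _, c1, c2 = cmd
        for a, cp, s2 in step(c1, s):
            yield (a, c2 if cp is None else ('par', cp, c2), s2)
        for a, cp, s2 in step(c2, s):
            yield (a, c1 if cp is None else ('par', c1, cp), s2)

def sat(cmd, phi, states):
    P, branches = phi
    by_label = {a: (Pi, sub) for a, Pi, sub in branches}
    for s in states:
        if not P(s):
            continue                          # phi promises nothing outside P
        moves = [] if cmd is None else list(step(cmd, s))
        if {a for a, _, _ in moves} != set(by_label):
            return False                      # labels = exactly the possible actions
        for a, cp, s2 in moves:
            Pi, sub = by_label[a]
            if not Pi(s2) or not sat(cp, sub, states):
                return False
    return True

states = [{'x': n} for n in range(4)]
inc = ('assign', 'alpha', 'x', lambda s: s['x'] + 1)
phi = (lambda s: s['x'] == 0, [('alpha', lambda s: s['x'] == 1, (lambda s: True, []))])
print(sat(inc, phi, states))   # -> True: alpha: x:=x+1 sat {x=0}alpha{x=1}
```

Note that sat(None, (P, []), states) is always True, matching the remark that terminal assertions are satisfied by null.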
Let φ be the assertion {P} Σᵢ αᵢ{Pᵢ}φᵢ. Tree structure suggests the use of the following notation. Define the root and leaf conditions for φ as follows:

root(φ) = P;
leaf(φ) = P if n = 0, and leaf(φ) = leaf(φ₁) ∨ ··· ∨ leaf(φₙ) otherwise.
The root condition characterizes the state at the root of a computation tree, and the leaf condition characterizes the leaf nodes, i.e. the terminal states: it is just the disjunction of the conditions at the leaves of the assertion. Using the conventional abbreviations introduced earlier, we see for example that the assertion {x = 0}α{x = 1}{x = 99}β{x = 100} has root condition x = 0 and leaf condition x = 100. Note that in the syntactic definition of the class of assertions, we have not required that any logical connection exist between adjacent "intermediate" conditions inside an assertion.
Although in Example 4 the condition x = 1 appears as an intermediate condition, we do not insist that the "following" condition x = 99 be a logical consequence. Assertions in which this constraint is satisfied correspond very closely with computation trees and proof outlines. There are good semantic reasons for not making this constraint on the syntax of our assertion language, since assertions satisfying the constraint describe the behaviour of a command in isolation, and we know that in general this information is insufficient to characterize the behaviour of a command in all parallel contexts. Now that we have designed an assertion language for our programming language, let us build a proof system. We will find that we can give a set of syntax-directed proof rules, by constructing syntactic operations on assertions to correspond to the syntactic operations of the programming language. The important point is that we are going to use the semantics directly to suggest how to design our rules.
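In the assumed (P, branches) encoding of assertions used in the earlier sketch, the root and leaf conditions can be computed directly, exactly as defined above:

```python
# Assertions are (P, branches) pairs: P a state predicate, branches a list
# of (label, P_i, phi_i); (P, []) is the terminal assertion {P}NIL.

def root(phi):
    return phi[0]

def leaf(phi):
    P, branches = phi
    if not branches:
        return P                                  # leaf({P}NIL) = P
    subs = [leaf(sub) for _, _, sub in branches]
    return lambda s: any(L(s) for L in subs)      # disjunction over the leaves

# {x = 0}alpha{x = 1}{x = 99}beta{x = 100}, the assertion from the text:
phi = (lambda s: s['x'] == 0,
       [('alpha', lambda s: s['x'] == 1,
         (lambda s: s['x'] == 99,
          [('beta', lambda s: s['x'] == 100,
            (lambda s: s['x'] == 100, []))]))])
print(root(phi)({'x': 0}), leaf(phi)({'x': 100}))   # -> True True
```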

Atomic assertions.
A terminal assertion {P}NIL represents termination. An atomic assertion has the form {P}α{Q}{R}NIL, and the special abbreviated forms {P}α{Q} and {P}α{Q}• are thus atomic. Atomic commands satisfy atomic assertions, and the axioms expressing this fact for skip and assignment are simple:

a: skip sat {P}a{P}•    (B1)
a: I:=E sat {[E\I]P}a{P}•    (B2)

We use the notation [E\I]P for the result of replacing every free occurrence of I in P by E, with suitable name changes to avoid clashes.
A critical region also creates an atomic action out of a command. In order to axiomatize this construct we need to single out a class of assertions which state properties of a command when run in isolation as an indivisible atomic action, since the effect of the critical region construct is to run a command without allowing interruption. Define safe(φ), for φ of the form {P} Σᵢ αᵢ{Pᵢ}φᵢ, by

safe(φ) ⟺ ⋀ᵢ (Pᵢ ⟹ root(φᵢ)) & ⋀ᵢ safe(φᵢ).
This is precisely the constraint mentioned earlier: at each node of the tree the post-condition established by the previous atomic action is required to imply the root condition of the remaining subtree. When n = 0 this is trivially true, and the two abbreviated forms of atomic assertion {P}α{Q} and {P}α{Q}• are always safe.
Intuitively, if Γ satisfies φ and φ is safe, then φ describes a possible execution of Γ in which no non-trivial interruption is allowed or assumed. Thus, a safe assertion gives information about the command's behaviour in isolation. We can therefore use safe assertions in the proof rule for critical regions:

Γ sat φ,  safe(φ)
------------------------------
a:(Γ) sat {root(φ)}a{leaf(φ)}    (B3)

The soundness of this rule is easy to establish.
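The safe predicate can likewise be checked pointwise. Since implication between arbitrary conditions is not decidable in general, this sketch (in the same assumed encoding) takes an explicit finite universe of states as a parameter and tests each implication Pᵢ ⟹ root(φᵢ) over that universe:

```python
# Assertions are (P, branches) pairs as in the earlier sketches.

def safe(phi, states):
    _, branches = phi
    for _, Pi, sub in branches:
        if any(Pi(s) and not sub[0](s) for s in states):
            return False                 # P_i must imply root(phi_i)
        if not safe(sub, states):
            return False
    return True

states = [{'x': n} for n in range(0, 101)]
NIL = lambda P: (P, [])

# {x=0}alpha{x=1}{x=1}beta{x=2}: each post-condition implies the next root.
ok = (lambda s: s['x'] == 0,
      [('alpha', lambda s: s['x'] == 1,
        (lambda s: s['x'] == 1,
         [('beta', lambda s: s['x'] == 2, NIL(lambda s: s['x'] == 2))]))])

# {x=0}alpha{x=1}{x=99}beta{x=100}: x=1 does not imply x=99, so not safe.
gap = (lambda s: s['x'] == 0,
       [('alpha', lambda s: s['x'] == 1,
         (lambda s: s['x'] == 99,
          [('beta', lambda s: s['x'] == 100, NIL(lambda s: s['x'] == 100))]))])

print(safe(ok, states), safe(gap, states))   # -> True False
```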

Parallel composition.
It is possible to define a parallel composition for assertions. The definition is given inductively on depth. For the base case, when one of the assertions has zero depth, we specify that

[{P}NIL ∥ {Q} Σⱼ βⱼ{Qⱼ}ψⱼ] = {P & Q} Σⱼ βⱼ{Qⱼ}[{true}NIL ∥ ψⱼ],

and similarly when the two terms are exchanged. In particular, it follows that

[• ∥ ψ] = ψ and [φ ∥ •] = φ

(strictly speaking, these are logical equivalences rather than syntactic identities). The inductive clause is an extension of the well known interleaving operation on synchronization trees [6,24,28] which handles the node conditions in an appropriate manner. For assertions φ = {P} Σᵢ αᵢ{Pᵢ}φᵢ and ψ = {Q} Σⱼ βⱼ{Qⱼ}ψⱼ of non-zero depth we put

[φ ∥ ψ] = {P & Q}( Σᵢ αᵢ{Pᵢ}[φᵢ ∥ ψ⁰] + Σⱼ βⱼ{Qⱼ}[φ⁰ ∥ ψⱼ] ),

where φ⁰ and ψ⁰ denote φ and ψ with their root conditions replaced by true: the two root conditions are conjoined once, at the root, and each branch of the result interleaves a step of one assertion with the remaining behaviour of both. Thus we are led to the proof rule:

Γ₁ sat φ,  Γ₂ sat ψ
------------------------------
[Γ₁ ∥ Γ₂] sat [φ ∥ ψ]    (B4)

We will return to this point later.
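The interleaving operation can be sketched as follows. Note that this implements our reconstruction of the definition above (roots conjoined once, then erased in the recursive calls), under the assumed (P, branches) encoding of the earlier sketches:

```python
# Assertions are (P, branches) pairs; TRUE is the condition true.

TRUE = lambda s: True

def par(phi, psi):
    (P, bs), (Q, cs) = phi, psi
    branches = ([(a, Pi, par(sub, (TRUE, cs))) for a, Pi, sub in bs] +
                [(b, Qj, par((TRUE, bs), sub)) for b, Qj, sub in cs])
    return (lambda s: P(s) and Q(s), branches)   # roots conjoined once

# [ {true}alpha{x=1}NIL || {true}beta{y=1}NIL ] interleaves to
# {true}( alpha{x=1} beta{y=1} + beta{y=1} alpha{x=1} ).
phi = (TRUE, [('alpha', lambda s: s['x'] == 1, (TRUE, []))])
psi = (TRUE, [('beta',  lambda s: s['y'] == 1, (TRUE, []))])
both = par(phi, psi)
print(sorted(a for a, _, _ in both[1]))   # -> ['alpha', 'beta']
```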

Sequential composition.
We may also define a sequential composition for assertions. The definition is straightforward, again by induction on depth. The operation grafts ψ on to the leaf nodes of the tree corresponding to φ. In the base case, we put

({P}NIL); ψ = {P & Q} Σⱼ βⱼ{Qⱼ}ψⱼ,    where ψ = {Q} Σⱼ βⱼ{Qⱼ}ψⱼ.

As an example, we can now prove that the command α: x:=x+1; β: x:=x+1 satisfies the assertion {x = 0}α{x = 1}{x = 99}β{x = 100}, by forming the sequential composition of the assertions {x = 0}α{x = 1}• and {x = 99}β{x = 100}•, which are obtained from axiom (B2). In summary, the rules so far introduced are: the axioms (B1) and (B2) for skip and assignment, the critical region rule (B3), the parallel rule (B4), and the sequential composition rule (B5).

The system presented above is sound but not complete. One reason for incompleteness is rather trivial: every command satisfies an assertion φ whose root condition is false, but we have no way of proving this from the above rules. One solution is to add a rule to this effect:

Γ sat φ, provided root(φ) ≡ false.    (B0)

Even this does not guarantee completeness by itself. We saw earlier (Examples 2 and 3) that we were unable to prove some assertions about parallel commands. Example 2, for instance, provides a command and an assertion for which there is no proof from these rules alone. Rule (B0) does not help in these examples. Essentially, the reason for this is that we really need to use two assertions about each component command here: we need to be able to say that x:=x+1 will change the value of x from 0 to 1, and that it will equally well change the value of x from 1 to 2. Of course, in general the number of separate assertions required may be more than two. We will therefore allow conjunction of assertions and include a natural rule which expresses an appropriate notion of implication for our assertions. For conjunction we simply add to the syntax of our assertion language the clause φ ::= (φ₁ ∧ φ₂).
We use ∧ rather than & merely to keep a distinction between conjunction at this level and conjunction in the condition language. The interpretation is simple: Γ satisfies (φ₁ ∧ φ₂) iff Γ satisfies φ₁ and Γ satisfies φ₂. Conjunction is clearly associative, and we may therefore omit parentheses and write φ₁ ∧ φ₂ ∧ ··· ∧ φₙ. For sequential composition we merely put

(φ₁ ∧ φ₂); ψ = (φ₁; ψ) ∧ (φ₂; ψ),

and similarly when we have a conjunction in the second place: in other words, sequential composition distributes over conjunction. With these additions, the axioms and rules given earlier remain sound, with (B3) applicable for conjunction-free assertions, as we have not specified a definition of safe(φ) when φ is a conjunction.
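A sketch of sequential composition in the same assumed encoding, with conjunctions modelled as ('and', φ1, φ2) nodes so that the distribution law can be exercised:

```python
# Assertions are (P, branches) pairs; ('and', phi1, phi2) is a conjunction.

def seq(phi, psi):
    if phi[0] == 'and':
        return ('and', seq(phi[1], psi), seq(phi[2], psi))   # distribution
    P, bs = phi
    if not bs:                                   # graft psi at a leaf {P}NIL
        Q, cs = psi
        return (lambda s: P(s) and Q(s), cs)
    return (P, [(a, Pi, seq(sub, psi)) for a, Pi, sub in bs])

# {x=0}alpha{x=1}(dot) ; {x=99}beta{x=100}(dot), the example from the text:
phi = (lambda s: s['x'] == 0,
       [('alpha', lambda s: s['x'] == 1, (lambda s: True, []))])
psi = (lambda s: s['x'] == 99,
       [('beta', lambda s: s['x'] == 100, (lambda s: True, []))])
glued = seq(phi, psi)
# Root x=0; after alpha the condition x=1; then a subtree rooted at x=99.
inner = glued[1][0][2]
print(inner[0]({'x': 99}), [b for b, _, _ in inner[1]])   # -> True ['beta']
```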
We add rules for conjunction introduction and elimination:

Γ sat φ,  Γ sat ψ ⊢ Γ sat (φ ∧ ψ)    (B6)
Γ sat (φ ∧ ψ) ⊢ Γ sat φ,    Γ sat (φ ∧ ψ) ⊢ Γ sat ψ    (B7)

Implication between assertions is defined as follows for simple assertions without conjunction; the definition extends in the obvious way to conjunctions: we certainly want to have (φ ∧ ψ) ⟹ φ and (φ ∧ ψ) ⟹ ψ, for example. For φ = {P}(Σᵢ αᵢ{Pᵢ}φᵢ) and ψ = {Q}(Σᵢ αᵢ{Qᵢ}ψᵢ), we take φ ⟹ ψ iff Q ⟹ P and, for each i, Pᵢ ⟹ Qᵢ and φᵢ ⟹ ψᵢ. In the case when n = 0 this merely requires that Q ⟹ P. Also, when φ is {P}α{Q} and ψ is {P′}α{Q′} we have φ ⟹ ψ iff P′ ⟹ P and Q ⟹ Q′; this is analogous to the usual Rule of Consequence of conventional Hoare logic [1,12]. Our rule for implication is a form of modus ponens:

Γ sat φ,  φ ⟹ ψ ⊢ Γ sat ψ    (B8)

From the definitions above it follows, for example, that {P}α{Q}• ⟹ {P}α{Q}, because Q ⟹ true. This means, in particular, that we may derive the following assertion schemas for assignment and skip, by using the axioms (B1) and (B2) together with (B8):

a: skip sat {P}a{P}    (B1′)
a: I:=E sat {[E\I]P}a{P}    (B2′)

These forms resemble the usual Hoare axioms for these constructs [12].

Examples.
Consider again the problematic examples introduced earlier.

Example 1.
We wish to prove that Γ sat φ, where Γ = [α: x:=x+1 ∥ β: x:=x+1]. We have the following assertions (by rules (B2) and (B6)):

α: x:=x+1 sat ({x = 0}α{x = 1}• ∧ {x = 1}α{x = 2}•),
β: x:=x+1 sat ({x = 0}β{x = 1}• ∧ {x = 1}β{x = 2}•).

The parallel composition of these assertions implies the desired assertion.

For the command [α: x:=1 ∥ β: y:=1] of Example 3 a similar proof applies. Let φ and ψ be suitable conjunctions of atomic assertions about x:=1 and y:=1 respectively. Then we have α: x:=1 sat φ and β: y:=1 sat ψ, and

[φ ∥ ψ] ⟹ {true}(α{x = 1}ψ + β{y = 1}φ).

By choosing the appropriate conjuncts in φ and ψ we see that this assertion implies the desired one.
That completes the proof. ∎

Soundness and Completeness.
Although we do not provide proofs in this paper, the proof system formed by (B0)-(B8) is sound: all provable assertions are valid. The system is also relatively complete in the sense of Cook [8]: every true assertion of the form Γ sat φ is provable, given that we can prove all of the conditions necessary in applications of the critical region rule and of modus ponens. Both of these rules require assumptions which take the form of implications between conditions. Let Th be the set of valid conditions (including implications between conditions). Write Th ⊢ Γ sat φ if this can be proved from (B0)-(B8) using assumptions from Th. The soundness result is: if Th ⊢ Γ sat φ then ⊨ Γ sat φ. ∎ Relative completeness is expressed as follows: if ⊨ Γ sat φ then Th ⊢ Γ sat φ. We omit the proof of this result.
In Owicki's proof system, conventional Hoare-style assertions of the form {P}Γ{Q} are used, although the parallel composition rule requires the use of a proof outline above the inference line. A proof outline is a command text annotated with conditions, one before and one after each syntactic occurrence of an atomic action. At least for sequential commands, safe assertions in our assertion language correspond precisely with such proof outlines, because computations of sequential commands follow the syntactic structure of the command. The analogy can be extended to parallel commands too, although the syntactic structure of a proof outline is no longer so close to that of the corresponding safe assertion. The following proof rule forms a connection between our proof system and that of Owicki. Above the line, we have a safe assertion of our form, and below we have a Hoare-style partial correctness assertion. The rule states that a safe assertion implies the partial correctness of the command with respect to its root and leaf conditions. The rule is:

Γ sat φ,  safe(φ)
------------------------------
{root(φ)} Γ {leaf(φ)}    (C)

To see why Owicki's proof rule for parallel composition required an extra constraint, that of interference-freedom, let us see how to model her rule in our notation.

For assertions φ = {P} Σᵢ αᵢ{Pᵢ}φᵢ and ψ = {Q} Σⱼ βⱼ{Qⱼ}ψⱼ of non-zero depth, we put

[φ ∥ₒ ψ] = {P & Q}( Σᵢ αᵢ{Pᵢ & Q}[φᵢ ∥ₒ ψ] + Σⱼ βⱼ{P & Qⱼ}[φ ∥ₒ ψⱼ] ).

We also specify that

[{P}NIL ∥ₒ ψ] = {P & Q} Σⱼ βⱼ{P & Qⱼ}[{P}NIL ∥ₒ ψⱼ],

and a similar definition applies when the terms are exchanged. In particular,

[• ∥ₒ φ] = [φ ∥ₒ •] = φ.

The essential difference between this operation and our earlier one is that this one carries pre-conditions through into post-conditions: after a step of one component, the pre-condition of the other component is asserted still to hold. Unfortunately, this form of composition does not always produce an assertion which correctly describes the behaviour of a parallel composition of commands. We need the notion of interference-freedom to guarantee this.
Define the set atoms(φ) of atomic sub-assertions of φ by induction on the depth of φ. For the assertion φ = {P} Σᵢ αᵢ{Pᵢ}φᵢ we put

atoms(φ) = { {P}αᵢ{Pᵢ} | 1 ≤ i ≤ n } ∪ atoms(φ₁) ∪ ··· ∪ atoms(φₙ).

A terminal assertion {P}NIL has no atomic sub-assertions. The interference-freedom condition is defined as follows: two assertions φ and ψ are interference-free, written int-free(φ, ψ), iff for every pair of atomic assertions {P}α{P′} ∈ atoms(φ), {Q}β{Q′} ∈ atoms(ψ), the (ordinary Hoare-style) assertions {P & Q}β{P}, {P′ & Q}β{P′}, {Q & P}α{Q} and {Q′ & P}α{Q′} are valid: no atomic action of either assertion, executed from its own pre-condition, can falsify a condition of the other. A theorem to this effect can be proved, and in view of it we may include the following rule in our system:

Γ₁ sat φ,  Γ₂ sat ψ,  int-free(φ, ψ)
------------------------------
[Γ₁ ∥ Γ₂] sat [φ ∥ₒ ψ]    (B9)

Note that this theorem and the proof rule are stated in a form applicable to all assertions, not just to safe assertions. This can, therefore, be regarded as a slight extension of Owicki's ideas to encompass a more expressive assertion language. The following result shows that interference-freedom guarantees the preservation of safeness.
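Interference-freedom can also be checked over a finite universe of states. The exact shape of the test below is one possible reading of the definition (each atomic action, run from its own pre-condition, must preserve every condition of the other assertion); the encodings, and the representation of atomic actions as state transformers, are assumptions of this sketch:

```python
# Assertions are (P, branches) pairs; `act` maps a label to a state transformer.

def conds(phi):                      # all conditions occurring in phi
    P, branches = phi
    out = [P]
    for _, Pi, sub in branches:
        out.append(Pi)
        out += conds(sub)
    return out

def atoms(phi):                      # atomic sub-assertions {P}a{P'}
    P, branches = phi
    out = []
    for a, Pi, sub in branches:
        out.append((P, a, Pi))
        out += atoms(sub)
    return out

def int_free(phi, psi, act, states):
    def preserved(R, Q, b):          # is {R & Q} b {R} valid over `states`?
        return all(R(act[b](s)) for s in states if R(s) and Q(s))
    return (all(preserved(R, Q, b) for R in conds(phi) for Q, b, _ in atoms(psi)) and
            all(preserved(R, P, a) for R in conds(psi) for P, a, _ in atoms(phi)))

states = [{'x': n, 'y': m} for n in range(3) for m in range(3)]
act = {'alpha': lambda s: {**s, 'x': s['x'] + 1}, 'beta': lambda s: {**s, 'y': 1}}
NIL = (lambda s: True, [])
phi = (lambda s: True, [('alpha', lambda s: s['x'] == 1, NIL)])   # about x:=x+1
psi = (lambda s: True, [('beta',  lambda s: s['y'] == 1, NIL)])   # about y:=1
print(int_free(phi, psi, act, states))   # -> True
```

Two assertions about separate copies of x:=x+1, as in Example 1, fail this check, which is why Owicki's rule alone does not handle that example.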

Theorem 6.
If φ and ψ are safe and interference-free, then [φ ∥ₒ ψ] is safe. ∎

Without further rules, however, the Owicki-style system cannot prove the assertion discussed in Example 1 earlier. Owicki achieved completeness by adding "auxiliary variables" to programs and adding new proof rules to allow their use. We can formalise this as follows. We say that a set X of identifiers is auxiliary for a command Γ if all free occurrences of identifiers from this set in Γ are inside assignments to identifiers also in X. Thus, for instance, for the command x:=x+1; y:=z; a:=x the sets {y}, {y, z}, {a, x} and {x, y, z, a} are auxiliary, but {x} is not. Let us write Γ aux X when X is an auxiliary set of identifiers for Γ.
Given any set X of identifiers and any command Γ, we can define a command Γ\X resulting from the deletion in Γ of all assignments to identifiers in X. The definition is syntax-directed:

skip\X = skip;
(I:=E)\X = skip if I ∈ X, and (I:=E)\X = I:=E otherwise;
(Γ₁; Γ₂)\X = (Γ₁\X); (Γ₂\X), and similarly for the other constructs.

With this definition, it is clear (and provable) that if X is auxiliary for Γ then Γ\X has the same partial correctness effect on identifiers outside X as Γ does, and Γ\X leaves the values of all identifiers in X fixed.
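The deletion operation is easy to make concrete over the tuple encoding of commands assumed in the earlier sketches (expressions are left opaque here, since deletion never inspects them):

```python
# Commands: ('skip', a), ('assign', a, I, E), ('seq', c1, c2),
# ('par', c1, c2), ('crit', a, body). Deletion is purely syntax-directed.

def delete(cmd, X):
    kind = cmd[0]
    if kind == 'skip':
        return cmd
    if kind == 'assign':                  # (a: I:=E)\X = a: skip if I in X
        _, a, ident, e = cmd
        return ('skip', a) if ident in X else cmd
    if kind == 'seq':
        return ('seq', delete(cmd[1], X), delete(cmd[2], X))
    if kind == 'par':
        return ('par', delete(cmd[1], X), delete(cmd[2], X))
    if kind == 'crit':
        return ('crit', cmd[1], delete(cmd[2], X))

# x:=x+1; y:=z; a:=x with the auxiliary set {y}: the middle assignment goes.
prog = ('seq', ('assign', 'l1', 'x', 'x+1'),
        ('seq', ('assign', 'l2', 'y', 'z'),
                ('assign', 'l3', 'a', 'x')))
print(delete(prog, {'y'}))
# -> ('seq', ('assign', 'l1', 'x', 'x+1'), ('seq', ('skip', 'l2'), ('assign', 'l3', 'a', 'x')))
```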
Let free[P, Q] stand for the set of identifiers having a free occurrence in either P or Q. Owicki's auxiliary variables rule is:

{P} Γ {Q},  Γ aux X,  X ∩ free[P, Q] = ∅
------------------------------
{P} Γ\X {Q}    (AV)

In addition to this rule, for completeness of the Owicki proof system we also need a rule for eliminating "unnecessary" critical regions and irrelevant atomic actions which have been introduced. Owicki's proof system uses a rule based on natural program equivalences of this kind, which we may formalise as follows:

{P} Γ′ {Q},  Γ ≡ Γ′
------------------------------
{P} Γ {Q}    (EQ)
As an example, we can now prove (as in [20]) the assertion {x = 0} [x:=x+1 ∥ x:=x+1] {x = 2}. The Owicki-Gries proof system can, then, be thought of as built from the rules (B0), (B1), (B2), (B3), (B9), (B5), (AV), (EQ) and (C). It is arguable whether or not our proof system, which does not require the use of auxiliary variables in proofs, is preferable to Owicki's. The reader might like to compare the styles of proof in the two systems for the example above. Just as it is necessary to exercise skill in the choice and use of auxiliary variables in Owicki's system, our system requires a judicious choice of conjunctions.
However, the details of auxiliary variables and reasoning about their values can be ignored in our system.At least we are able to demonstrate that there are alternatives to the earlier proof rules of [18,19] which do not explicitly require the manipulation of variables purely for proof-theoretical purposes and which do not require a notion of interference-freedom to guarantee soundness.
Other authors have proposed compositional proof systems for concurrent programs in which the underlying assertions are temporal in nature. In particular, we refer to [4] and [19]. In contrast to these methods, we have avoided temporal assertions at the expense of using conjunction and implication as operations on more highly structured assertions built from conventional pre- and post-conditions. We still obtained a compositional proof system. In fact, our assertions do have some similarity with temporal logic, in the sense that an assertion has built into it a specification of the possible atomic actions and the behaviour of the command after each of them, so that one might be able to represent one of our assertions φ in a more conventional temporal or dynamic logic. We also believe that similar ideas to those used in this paper may be adopted in an axiomatic treatment of other forms of parallel programming. In particular, CSP [13] may be axiomatized if we modify the class of assertions to represent the potential for communication and if we design a suitable parallel composition of assertions. In CSP, the inclusion of guarded commands will necessitate a distinction between deadlock (a stuck configuration) and successful termination, but this may be handled by an appropriate choice of assertion language. We plan to investigate this topic in a future paper, and we hope that some connections with earlier work [2,18,27] will become apparent when this is done.
Another possibility for future development is to investigate an appropriate generalization of predicate transformers, weakest pre-conditions and strongest post-conditions (see [10], for example) for parallel commands, using our more general assertions instead of Hoare-style assertions. For instance, there is a reasonable notion of strongest safe assertion for a (labelled) command and an initial condition, provided we have strongest post-conditions of conventional type for atomic actions. If sp[[a]](P) is the strongest post-condition of atomic action a with respect to the pre-condition P, we may build a safe assertion Φ(Γ, P) as follows. If the initial actions for Γ (from states satisfying P) are {α₁, …, αₙ}, and if Γᵢ is the remaining command after αᵢ, we put

Φ(Γ, P) = {P} Σᵢ αᵢ {sp[[αᵢ]](P)} Φ(Γᵢ, sp[[αᵢ]](P)).

For convenience we put Φ(null, P) = {P}NIL. For example, the assertion built in this way from the command [α: x:=x+1 ∥ β: x:=x+1] and the initial condition x = 0 is:

{x = 0}( α{x = 1}({x = 1}β{x = 2}{x = 2}NIL) + β{x = 1}({x = 1}α{x = 2}{x = 2}NIL) ).

Of course, when we include loops and conditionals we should be more careful in our definitions, but at least for finite commands this type of strongest safe assertion seems to be of interest. We plan to investigate this topic further.
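The construction of Φ can be sketched by forward execution, representing conditions extensionally as lists of states so that sp of an atomic action is computed simply by running it; as the text notes, in this language the residual command after a given label does not depend on the state, an assumption the sketch relies on (encodings assumed as before):

```python
# Commands as tuples, states as dicts; conditions as lists of states.

def step(cmd, s):
    kind = cmd[0]
    if kind == 'skip':
        yield (cmd[1], None, dict(s))
    elif kind == 'assign':
        _, a, ident, expr = cmd
        s2 = dict(s); s2[ident] = expr(s)
        yield (a, None, s2)
    elif kind == 'seq':
        for a, cp, s2 in step(cmd[1], s):
            yield (a, cmd[2] if cp is None else ('seq', cp, cmd[2]), s2)
    elif kind == 'par':
        _, c1, c2 = cmd
        for a, cp, s2 in step(c1, s):
            yield (a, c2 if cp is None else ('par', cp, c2), s2)
        for a, cp, s2 in step(c2, s):
            yield (a, c1 if cp is None else ('par', c1, cp), s2)

def strongest(cmd, pre):
    """Phi(cmd, pre): pre is the list of states satisfying the pre-condition."""
    if cmd is None:
        return (pre, [])                      # Phi(null, P) = {P}NIL
    by = {}
    for s in pre:
        for a, cp, s2 in step(cmd, s):
            _, posts = by.setdefault(a, (cp, []))
            if s2 not in posts:
                posts.append(s2)              # sp[[a]](P), computed by running a
    return (pre, [(a, posts, strongest(cp, posts)) for a, (cp, posts) in by.items()])

inc = lambda lab: ('assign', lab, 'x', lambda s: s['x'] + 1)
phi = strongest(('par', inc('alpha'), inc('beta')), [{'x': 0}])
# Root x=0, branches alpha and beta, each with sp "x=1", then "x=2".
print(sorted(a for a, _, _ in phi[1]), phi[1][0][1])   # -> ['alpha', 'beta'] [{'x': 1}]
```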
Γ ::= a: skip | a: I:=E | Γ₁; Γ₂ | [Γ₁ ∥ Γ₂] | a:(Γ)

The notation is fairly standard. The command skip is an atomic action having no effect on program variables. An assignment, denoted I:=E, is also an atomic action; it sets the value of I to the (execution-time) value of E. Sequential composition is represented by Γ₁; Γ₂. A parallel composition [Γ₁ ∥ Γ₂] is executed by interleaving the atomic actions of the component commands Γ₁ and Γ₂. A command of the form (Γ) is a critical region; this construct converts a command into an atomic action, and corresponds to a special case of an await statement.

(a: skip, s) -a-> (null, s)
(a: I:=E, s) -a-> (null, s + [I ↦ ℰ[[E]]s])
(Γ₁, s) -a-> (Γ₁′, s′) implies (Γ₁; Γ₂, s) -a-> (Γ₁′; Γ₂, s′)
(Γ₁, s) -a-> (Γ₁′, s′) implies ([Γ₁ ∥ Γ₂], s) -a-> ([Γ₁′ ∥ Γ₂], s′), and symmetrically
(Γ, s) ->* (null, s′) implies (a:(Γ), s) -a-> (null, s′)
When φ is {P}(Σᵢ αᵢ{Pᵢ}φᵢ) and n > 0 we put

φ; ψ = {P} Σᵢ αᵢ{Pᵢ}(φᵢ; ψ).

Again we can show that the operation has the desired effect: if Γ₁ satisfies φ and Γ₂ satisfies ψ then Γ₁; Γ₂ satisfies φ; ψ.

Theorem 2. If ⊨ Γ₁ sat φ and ⊨ Γ₂ sat ψ then ⊨ (Γ₁; Γ₂) sat (φ; ψ). ∎

This suggests the proof rule:

Γ₁ sat φ,  Γ₂ sat ψ
------------------------------
(Γ₁; Γ₂) sat (φ; ψ)    (B5)