Effective Synthesis of Asynchronous Systems from GR(1) Specifications

Abstract. We consider automatic synthesis from linear temporal logic specifications for asynchronous systems. We aim for the produced reactive systems to be used as software in a multi-threaded environment. We extend the previous reduction of asynchronous synthesis to synchronous synthesis to the setting of multiple input and multiple output variables. Much like synthesis for synchronous designs, this solution is not practical, as it requires determinization of automata on infinite words and the solution of complicated games. We follow advances in the synthesis of synchronous designs, which restrict the handled specifications but achieve scalability and efficiency. We propose a heuristic that, in some cases, maintains scalability for asynchronous synthesis. Our heuristic can prove that specifications are realizable and extract designs. This is done by a reduction to synchronous synthesis that is inspired by the theoretical reduction.


Introduction
One of the most ambitious and challenging problems in reactive systems design is the automatic synthesis of programs from logical specifications. It was suggested by Church [3] and subsequently solved by two techniques [2,19]. In [15] the problem was set in a modern context of synthesis of reactive systems from Linear Temporal Logic (ltl) specifications. The synthesis algorithm converts an ltl specification ϕ to a Büchi automaton, which is then determinized [15]. This double translation may be doubly exponential in the size of ϕ. Once the deterministic automaton is obtained, it is converted to a Rabin game that can be solved in time n^{O(k)}, where n is the number of states of the automaton (doubly exponential in ϕ) and k is a measure of topological complexity (exponential in ϕ). This algorithm is tight, as the problem is 2EXPTIME-hard [15]. This unfortunate situation led to extensive research on ways to bypass the complexity of synthesis (e.g., [11,7,13]). The work in [13] is of particular interest to us. It achieves scalability by restricting the type of handled specifications. This led to many applications of synthesis in various fields [1,5,24,8,10,6]. So, in some cases, synthesis of designs from their temporal specifications is feasible.
These results relate to the case of synchronous synthesis, where the synthesized system is synchronized with its environment. At every step, the environment generates new inputs and the system senses all of them and computes a response. This is the standard computational model for hardware designs.
Here, we are interested in synthesis of asynchronous systems. Namely, the system may not sense all the changes in its inputs, and its responses may become visible to the external world (including the environment) with an arbitrary delay. Furthermore, the system accesses one variable at a time, while in the synchronous model all inputs are observed and all outputs are changed in a single step. The asynchronous model is the most appropriate for representing reactive software systems that communicate via shared variables on a multi-threaded platform.
In [16], Pnueli and Rosner reduce asynchronous synthesis to synchronous synthesis. Their technique, which we call the Rosner reduction, converts a specification ϕ(x; y) with single input x and single output y to a specification X(x, r; y). The new specification relates to an additional input r. They show that ϕ is asynchronously realizable iff X is synchronously realizable, and how to translate a synchronous implementation of X to an asynchronous implementation of ϕ.
Our first result is an extension of the Rosner reduction to specifications with multiple input and output variables. Pnueli and Rosner assumed that the system alternates between reading its input and writing its output. For multiple variables, we assume cyclic access to variables: first reading all inputs, then writing all outputs (each in a fixed order). We show that this interaction mode is not restrictive, as it is equivalent (w.r.t. synthesis) to the model in which the system chooses its next action (whether to read or to write, and which variable).
Combined with [15], the reduction from asynchronous to synchronous synthesis presents a complete solution to the multiple-variables asynchronous synthesis problem. Unfortunately, much like in the synchronous case, it is not 'effective'. Furthermore, even if ϕ is relatively simple (for example, belongs to the class of GR(1) formulae that is handled in [13]), the formula X is considerably more complex and requires the full treatment of [15].
Consequently, we propose a method to bypass this full reduction. In the invited paper [14] we outlined the principles of an approach to bypass the complexity of asynchronous synthesis. Our approach applied to specifications that relate to one input and one output, both Boolean. We presented heuristics that can be used to prove unrealizability and to prove realizability. It called for the construction of a weakening that could prove unrealizability through a simpler reduction to synchronous synthesis. This result is naturally extended to multiple variables, based on the extended Rosner reduction presented here, and appears in an extended version [9]. In [14] we also outlined an approach to strengthen specifications and an alternative reduction to synchronous synthesis for such strengthened specifications. Here we substantiate these conjectured ideas by completing and correcting the details of that approach and extending it to multi-valued variables and multiple outputs. We show that the ideas portrayed in [14] require even further restricting the type of specifications, and a more elaborate reduction to synchronous synthesis (even for the Boolean one-input one-output case of [14]). We show that when the system has access to the 'entire state' of the environment (this is like the environment having one multi-valued variable) there are cases where a simpler reduction to synchronous synthesis can be applied. We give a conversion from the synchronous implementation to an asynchronous implementation realizing the original specification.
To our knowledge, this is the first 'easy' case of asynchronous synthesis identified.With connections to partial-information games and synthesis with nondeterministic environments, we find this to be a very important research direction.
Proofs, which are omitted due to lack of space, are available in [9].

Preliminaries
Temporal Logic. We describe an extension of Quantified Propositional Temporal Logic (QPTL) [21] with stuttering quantification. We refer to this extended logic as QPTL. Let X be a set of variables ranging over the same finite domain D. The syntax of QPTL is defined according to the following grammar.
Ltl does not allow the ∃ and ∃≈ operators. We stress that a formula ϕ is written over the variables in a set X by writing ϕ(X). If variables are partitioned into inputs X and outputs Y, we write ϕ(X; Y). We call such formulae specifications. We sometimes list the variables in X and Y, e.g., ϕ(x_1, x_2; y).
The semantics of QPTL is given with respect to computations and locations in them. A computation σ is an infinite sequence a_0, a_1, ..., where for every i ≥ 0 we have a_i ∈ D^X. That is, a computation is an infinite sequence of value assignments to the variables in X. For an assignment a ∈ D^X and a variable x ∈ X we write a[x] for the value assigned to x by a. If X = {x_1, ..., x_n}, we freely use the notation (a[x_1], ..., a[x_n]) for a.

The computation squeeze(σ) is obtained from σ as follows. If for all i ≥ 0 we have a_i = a_0, then squeeze(σ) = σ. Otherwise, if a_0 ≠ a_1 then squeeze(σ) = a_0, squeeze(a_1, a_2, ...). Finally, if a_0 = a_1 then squeeze(σ) = squeeze(a_1, a_2, ...). That is, by removing repeated assignments, squeeze returns a computation in which every two adjacent assignments are different, unless the computation ends in an infinite suffix of one assignment. A computation σ′ is a stuttering variant of σ if squeeze(σ) = squeeze(σ′).
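The squeeze operation and stuttering variants are easy to make concrete. The following is a minimal Python sketch over finite prefixes of computations (the paper's definition is over infinite sequences; the function names are ours, not the paper's):

```python
def squeeze(trace):
    """Collapse runs of repeated assignments in a (finite prefix of a)
    computation.  A trailing run collapses to a single element, mirroring
    the treatment of a computation that ends in an infinite suffix of
    one assignment."""
    out = []
    for a in trace:
        if not out or out[-1] != a:
            out.append(a)
    return out

def stuttering_variant(t1, t2):
    """Two computations are stuttering variants iff they squeeze to the
    same sequence."""
    return squeeze(t1) == squeeze(t2)
```

For instance, `squeeze([1, 1, 2, 2, 3])` yields `[1, 2, 3]`, so `[1, 2, 2, 3]` and `[1, 1, 2, 3, 3]` are stuttering variants of one another.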
Satisfaction of a QPTL formula ϕ over a computation σ at location i ≥ 0, denoted σ, i |= ϕ, is defined as usual. We define here only the case of quantification.
Realizability of Temporal Specifications. We define synchronous and asynchronous programs. While the programs themselves are not very different, the definition of a program's interaction makes the distinction clear.
Let X and Y be the sets of inputs and outputs. We stress the different roles of the system and the environment by specializing computations to interactions. In an interaction, we treat each assignment to X ∪ Y as separate assignments to X and Y. Thus, instead of using c ∈ D^{X∪Y}, we use a pair (a, b), where a ∈ D^X and b ∈ D^Y. Formally, an interaction is σ = (a_0, b_0), (a_1, b_1), ...
A synchronous program P_s from X to Y is a function P_s : (D^X)^+ → D^Y. In every step of the computation (including the initial one) the program reads its inputs and updates the values of all outputs (based on the entire history). An interaction σ is called a synchronous interaction of P_s if, at each step i = 0, 1, ... of the interaction, the program outputs (assigns to Y) the value P_s(a_0, a_1, ..., a_i), i.e., b_i = P_s(a_0, ..., a_i). In such interactions both the environment, which updates input values, and the system, which updates output values, 'act' at each step (where the system responds in each step to an environment action).
A synchronous program is finite state if it can be induced by a Labeled Transition System (LTS). An LTS is T = ⟨S, I, R, X, Y, L⟩, where S is a finite set of states, I ⊆ S is a set of initial states, R ⊆ S × S is a transition relation, X and Y are disjoint sets of input and output variables, respectively, and L : S → D^{X∪Y} is a labeling function. For a state s ∈ S and for Z ⊆ X ∪ Y, we define L(s)|_Z to be the restriction of L(s) to the variables of Z. The LTS has to be receptive, i.e., able to accept all inputs. Formally, for every a ∈ D^X there is some s_0 ∈ I such that L(s_0)|_X = a, and for every a ∈ D^X and s ∈ S there is some s_a ∈ S such that R(s, s_a) and L(s_a)|_X = a. The LTS T is deterministic if for every a ∈ D^X there is a unique s_0 ∈ I such that L(s_0)|_X = a, and for every a ∈ D^X and every s ∈ S there is a unique s_a ∈ S such that R(s, s_a) and L(s_a)|_X = a. Otherwise, it is nondeterministic. A deterministic LTS T induces the synchronous program P_T : (D^X)^+ → D^Y as follows. For every a ∈ D^X let T(a) be the unique state s_0 ∈ I such that L(s_0)|_X = a. For every n > 1 and a_1 ... a_n ∈ (D^X)^+ let T(a_1, ..., a_n) be the unique s ∈ S such that R(T(a_1, ..., a_{n−1}), s) and L(s)|_X = a_n. For every a_1 ... a_n ∈ (D^X)^+ we set P_T(a_1, ..., a_n) = L(T(a_1, ..., a_n))|_Y. We note that nondeterministic LTS do not induce programs. As nondeterministic LTS can always be pruned to deterministic LTS, we find it acceptable to produce nondeterministic LTS as a representation of a set of possible programs.
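To make the induced program concrete, here is a small Python sketch of a deterministic LTS and the synchronous program P_T it induces. The class layout and the one-bit 'echo' example (y mirrors x synchronously) are ours, chosen only for illustration:

```python
class LTS:
    """A deterministic LTS T = <S, I, R, X, Y, L> over a finite domain,
    encoded with explicit maps so determinism is immediate."""
    def __init__(self, states, init, trans, label_x, label_y):
        self.states = states    # finite set S
        self.init = init        # map: input value a -> the unique initial state
        self.trans = trans      # map: (state, a) -> the unique successor
        self.label_x = label_x  # L(s)|X
        self.label_y = label_y  # L(s)|Y

    def run(self, inputs):
        """The induced program P_T: given a non-empty input history,
        return the output labeling L(s)|Y of the state reached."""
        s = self.init[inputs[0]]
        for a in inputs[1:]:
            s = self.trans[(s, a)]
        return self.label_y[s]

# Example: a one-bit "echo" system over D = {0, 1}.
echo = LTS(
    states={"s0", "s1"},
    init={0: "s0", 1: "s1"},
    trans={("s0", 0): "s0", ("s0", 1): "s1",
           ("s1", 0): "s0", ("s1", 1): "s1"},
    label_x={"s0": 0, "s1": 1},
    label_y={"s0": 0, "s1": 1},
)
```

With this encoding, `echo.run([0, 1, 1, 0])` returns 0, i.e., the last input, as the echo specification demands.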
An asynchronous program P_a from X to Y is a function P_a : (D^X)^* → D^Y. Note that the first value of the outputs is set before seeing any inputs. As before, the program receives all inputs and updates all outputs. However, the definition of an interaction takes into account that this may not happen instantaneously.
A schedule is a pair (R, W) of sequences R = r^1_1, ..., r^n_1, r^1_2, ..., r^n_2, ... and W = w^1_1, ..., w^m_1, w^1_2, ..., w^m_2, ... of reading points and writing points such that r^1_1 > 0 and for every i > 0 we have r^1_i < ... < r^n_i < w^1_i < ... < w^m_i < r^1_{i+1}. It identifies the points where each of the input variables is read and the points where each of the output variables is written. The order establishes that reading and writing points occur cyclically. When the distinction is not important, we call reading points and writing points I/O-points.
An interaction is called an asynchronous interaction of P_a for (R, W) if b_0 = P_a(ε), and for every i > 0, every j ∈ {1, ..., m}, and every w^j_i ≤ k < w^j_{i+1}, we have b_k[j] = P_a(a_{r^1_1}, ..., a_{r^n_1}, ..., a_{r^1_i}, ..., a_{r^n_i})[j]. Also, for every j ∈ {1, ..., m} and every 0 < k < w^j_1, we have that b_k[j] = b_0[j]. In asynchronous interactions, the environment may update the input values at each step. However, the system is only aware of the values of inputs at reading points and responds by updating the appropriate variables at writing points. In particular, the system is not even aware of the amount of time that passes between two adjacent time points (read-read, read-write, or write-read). That is, output values depend only on the values of inputs at earlier reading points.
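The following Python sketch simulates this definition for the single-input single-output case (n = m = 1). A schedule is given as explicit increasing lists of reading and writing points; the function name and encoding are ours:

```python
def async_interaction(P, inputs, reads, writes):
    """Compute the output trace of an asynchronous interaction for
    n = m = 1.  `P` maps a tuple of sampled input values to an output
    value; `inputs` is the environment's input trace; `reads`/`writes`
    are the schedule, with 0 < reads[0] < writes[0] < reads[1] < ..."""
    outputs = [P(())]  # b_0 is fixed before any input is read
    for k in range(1, len(inputs)):
        # number of writing points that occurred at or before step k
        done = sum(1 for w in writes if w <= k)
        # the output only changes at writing points; elsewhere it keeps
        # the value written at the last writing point (or b_0 before w_1)
        sampled = tuple(inputs[r] for r in reads[:done])
        outputs.append(P(sampled) if done else outputs[0])
    return outputs
```

For example, with a program that copies its most recently read input, `async_interaction(lambda s: s[-1] if s else 0, [1, 0, 1, 1, 1], reads=[1, 3], writes=[2, 4])` produces `[0, 0, 0, 0, 1]`: the input value 1 at step 0 is never observed, because step 0 is not a reading point.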
An asynchronous program is finite state if it can be asynchronously induced by an Initialized LTS (ILTS). An ILTS is T = ⟨T_s, i⟩, where T_s = ⟨S, I, R, X, Y, L⟩ is an LTS and i ∈ D^Y is an initial assignment. We sometimes abuse notation and write T = ⟨S, I, R, X, Y, L, i⟩. Determinism is defined just as for LTS. Similarly, given a_1, ..., a_n ∈ (D^X)^+ we define T(a_1, ..., a_n) as before. A deterministic ILTS T asynchronously induces the program P_T : (D^X)^* → D^Y as follows. Let P_T(ε) = i, and for every a_1 ... a_n ∈ (D^X)^+ define P_T(a_1, ..., a_n) as before. As i is a unique initial assignment, we force ILTS to induce only asynchronous programs that deterministically assign a single initial value to the outputs. All our results also work with a definition that allows a nondeterministic choice of initial output values (that do not depend on the unavailable inputs).

Definition 1 (realizability).
An ltl specification ϕ(X; Y) is synchronously realizable if there exists a synchronous program P_s such that all synchronous interactions of P_s satisfy ϕ(X; Y). Such a program P_s is said to synchronously realize ϕ(X; Y). Synchronous realizability is often shortened simply to realizability. Asynchronous realizability is defined similarly, with asynchronous programs and all asynchronous interactions for all schedules.
Synthesis is the process of automatically constructing a program P that (synchronously/asynchronously) realizes a specification ϕ(X; Y). We freely write that an LTS realizes a specification when the induced program satisfies it.

Theorem 1 ([15]). Deciding whether a specification ϕ(X; Y) is synchronously realizable is 2EXPTIME-complete. Furthermore, if ϕ(X; Y) is synchronously realizable, the same decision procedure can extract an LTS that realizes ϕ(X; Y).
Normal Form of Specifications. We give a normal form of specifications describing an interplay between a system s and an environment e. Let X and Y be disjoint sets of input and output variables, respectively. For α ∈ {e, s}, the formula ϕ_α(X; Y), which defines the allowed actions of α, is a conjunction of:
1. I_α (initial condition) - a Boolean formula (equally, an assertion) over X ∪ Y, describing the initial state of α. The formula I_s may refer to all variables and I_e may refer only to the variables in X.
2. □S_α (safety component) - a formula describing the transition relation of α, where S_α describes the update of the locally controlled state variables (identified by being primed, e.g., x′ for x ∈ X) as related to the current state (unprimed, e.g., x), except that s can observe the next values of X.

3. L_α (liveness component) - a conjunction of formulae of the form □◇p, where p is a Boolean formula.
In the case that a specification includes past temporal formulae instead of the Boolean formulae in any of the three conjuncts mentioned above, we assume that a pre-processing of the specification was done to translate it into one that has the same structure but without the use of past formulae. This can always be achieved through the introduction of fresh Boolean variables that implement temporal testers for past formulae [18]. Therefore, without loss of generality, we discuss in this work only such past-formulae-free specifications.
We abuse notation and write ϕ_α also as a triplet ⟨I_α, S_α, L_α⟩. Consider a pair of formulae ϕ_α(X; Y), for α ∈ {e, s}, as above. We define the specification Imp(ϕ_e, ϕ_s) to be (I_e ∧ □S_e ∧ L_e) → (I_s ∧ □S_s ∧ L_s). For such specifications, the winning condition is the formula L_e → L_s, which we call GR(1). Synchronous synthesis of such specifications was considered in [13].
The Rosner Reduction. In [16], Pnueli and Rosner show how to use synchronous realizability to solve asynchronous realizability. They define what we call the Rosner reduction. It translates a specification ϕ(X; Y), where X = {x} and Y = {y} are singletons, into a specification X(x, r; y) that has an additional Boolean input variable r. The new variable r is called the Boolean scheduling variable. Intuitively, the Boolean scheduling variable defines all possible schedules for one-input one-output systems. When it changes from zero to one it signals a reading point, and when it changes from one to zero it signals a writing point. Given a specification ϕ(X; Y), we define the kernel formula X(x, r; y). According to α(r), the first I/O-point, where r changes from zero to one, is a reading point, and there are infinitely many reading and writing points. Then, β(x, r; y) includes three parts: (a) the original formula ϕ(x; y) must hold, (b) the outputs obey the scheduling variable, i.e., at all points that are not writing points the value of y does not change, and (c) if we replace all the inputs except at reading points, then the same output still satisfies the original formula.
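The decoding of the Boolean scheduling variable is mechanical: a change from zero to one marks a reading point, a change from one to zero a writing point. A small Python sketch (the function name is ours):

```python
def io_points(r):
    """Decode the Boolean scheduling variable of the Rosner reduction.
    `r` is a finite prefix of the variable's value sequence.
    Returns (reads, writes): positions where r changes 0 -> 1 and
    1 -> 0, respectively."""
    reads, writes = [], []
    for i in range(1, len(r)):
        if r[i - 1] == 0 and r[i] == 1:
            reads.append(i)    # reading point
        elif r[i - 1] == 1 and r[i] == 0:
            writes.append(i)   # writing point
    return reads, writes
```

For instance, `io_points([0, 0, 1, 1, 0, 1, 0])` yields reading points `[2, 5]` and writing points `[4, 6]`, which alternate as α(r) requires.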

Theorem 2 ([16]). The specification ϕ(x; y) is asynchronously realizable iff the specification X(x, r; y) is synchronously realizable. Given a program that synchronously realizes X(x, r; y), it can be converted in linear time to a program asynchronously realizing ϕ(x; y).
Pnueli and Rosner also show how the standard techniques for realizability of ltl [15] can handle stuttering quantification of the form appearing in X(x, r; y).

Expanding the Rosner Reduction to Multiple Variables
In this section we describe an expansion of the Rosner reduction to handle specifications with multiple input and output variables. The reduction reduces asynchronous synthesis to synchronous synthesis. Without loss of generality, fix an ltl specification ϕ(X; Y), where X = {x_1, ..., x_n} and Y = {y_1, ..., y_m}.
We propose the generalized Rosner reduction, which translates ϕ(X; Y) into X_{n,m}(X ∪ {r}; Y). The specification uses an additional input variable r, called the scheduling variable, that ranges over {1, ..., n + m} and defines all reading and writing points. Variable x_i may be read by the system whenever r changes its value to i. Variable y_i may be modified whenever r changes to n + i. Initially, r = n + m and it is incremented cyclically by 1 (hence, the first change of r is to 1, a reading point for x_1). We also denote ¬⊖[r = (n + i)] ∧ [r = (n + i)] by write_n(i), indicating a state that is a writing point for y_i; ¬⊖(r = i) ∧ (r = i) by read(i), indicating a state that is a reading point for x_i; ⋀_{d∈D} [(z = d) ↔ ⊖(z = d)] by unchanged(z), indicating a state where z did not change its value; and ¬⊖true by first, indicating a state that is the first one in the computation (⊖ is the past operator 'previously').
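Decoding the generalized scheduling variable can be sketched in the same way as in the Boolean case. In this Python fragment (names ours), r ranges over {1, ..., n+m}; a change of r to i ≤ n marks a reading point for x_i and a change to n + j marks a writing point for y_j:

```python
def decode_schedule(r_vals, n, m):
    """Decode a finite prefix of the generalized scheduling variable.
    Returns (reads, writes): reads[i] lists the reading points of x_i,
    writes[j] lists the writing points of y_j."""
    reads = {i: [] for i in range(1, n + 1)}
    writes = {j: [] for j in range(1, m + 1)}
    for k in range(1, len(r_vals)):
        v = r_vals[k]
        if v != r_vals[k - 1]:      # an I/O-point is a change of r
            if v <= n:
                reads[v].append(k)       # reading point for x_v
            else:
                writes[v - n].append(k)  # writing point for y_{v-n}
    return reads, writes
```

With n = 2 and m = 1, the proper cyclic behavior starting at r = n + m gives `decode_schedule([3, 1, 2, 3, 1, 2, 3], 2, 1)`, i.e., reading points `{1: [1, 4], 2: [2, 5]}` and writing points `{1: [3, 6]}`, matching the cyclic access order x_1, x_2, y_1.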
The kernel formula X_{n,m} generalizes the parts of X(x, r; y) to this cyclic schedule. There is a 1-1 correspondence between sequences of assignments to r and schedules (R, W). As r is an input variable, the program has to handle all possible assignments to it. This implies that the program handles all possible schedules.

Theorem 3. The specification ϕ(X; Y) is asynchronously realizable iff the specification X_{n,m}(X ∪ {r}; Y) is synchronously realizable. Furthermore, given a program that synchronously realizes X_{n,m}(X ∪ {r}; Y), it can be converted to a program asynchronously realizing ϕ(X; Y).

Proof (Sketch):
Suppose we have a synchronous program realizing X_{n,m}(X ∪ {r}; Y) and we want an asynchronous program realizing ϕ(X; Y). An input to the asynchronous program is stretched in order to be fed to the synchronous program. Essentially, every new input to the asynchronous program is stretched so that one variable changes at a time, and in addition the new valuation of all input variables is repeated enough times to allow the synchronous program to update all the output variables. This is forced to happen immediately by increasing the scheduling variable r (cyclically) in every input for the synchronous program. This forces the synchronous program to update all output variables, and this is the value we use for the asynchronous program. Then, the stuttering quantification over the synchronous interaction shows that an asynchronous interaction that matches these outputs does in fact satisfy ϕ(X; Y).
In the other direction, we have an asynchronous program realizing ϕ(X; Y) and have to construct a synchronous program realizing X_{n,m}(X ∪ {r}; Y). The reply of the synchronous program to every input in which the scheduling variable behaves other than increasing (cyclically) is set arbitrarily. For inputs where the scheduling variable behaves properly, we can contract the inputs to the reading points indicated by r and feed the resulting input sequence to the asynchronous program. We then change the output variables one by one as indicated by r according to the output of the asynchronous program. To see that the resulting synchronous program satisfies X_{n,m}, we note that the stuttering quantification relates precisely to the possible asynchronous interactions.
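The 'stretching' in the first direction of the proof can be sketched as follows; the encoding of assignments and the function name are ours. Each asynchronous input is repeated n + m times while r cycles through 1, ..., n + m, forcing the synchronous program to read all inputs and then update all outputs before the next asynchronous input arrives:

```python
def stretch(async_inputs, n, m):
    """Stretch an asynchronous input sequence for the synchronous
    program realizing X_{n,m}: each input assignment `a` is repeated
    n + m times, paired with the scheduling variable r running
    cyclically through 1, ..., n + m.  Returns the list of (a, r)
    pairs fed to the synchronous program."""
    sync = []
    for a in async_inputs:
        for r in range(1, n + m + 1):
            sync.append((a, r))
    return sync
```

For n = m = 1, `stretch(["a", "b"], 1, 1)` yields `[("a", 1), ("a", 2), ("b", 1), ("b", 2)]`: every asynchronous step becomes one read step followed by one write step.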
In principle, this theorem provides a complete solution to the problem of asynchronous synthesis (with multiple inputs and outputs). However, it requires constructing a deterministic automaton for X_{n,m} and solving complex parity games. In particular, when combining determinization with the treatment of the ∀≈ quantification, even relatively simple specifications may lead to very complex deterministic automata and (as a result) games that are complicated to solve.
Since the publication of the original Rosner reduction, several alternative approaches to asynchronous synthesis have been suggested. Vardi suggests an automata-theoretic solution that shows how to embed the scheduling variable directly in the tree automaton [22]. Schewe and Finkbeiner extend these ideas to the case of branching-time specifications [20]. Both approaches require determinization and the solution of general parity games. Unlike the generalized Rosner reduction, they obfuscate the relation between the asynchronous and synchronous synthesis problems. In particular, the simple cases of asynchronous synthesis identified in the following sections rely on this relation between the two types of synthesis. None of the three approaches offers a practical solution to asynchronous synthesis, as they have proven impossible to implement.

A More General Asynchronous Interaction Model
The reader may object to the model of asynchronous interaction as oversimplified. Here, we justify this model by showing that it is practically equivalent (from a synthesis point of view) to a model that is more akin to a software thread implementation. Specifically, we introduce a model in which the environment chooses the times at which the system can read or write, and the system chooses whether to read or write and which variable to access. We formally define this model and show that the two asynchronous models are equivalent. We call our original asynchronous interaction model round-robin and this new model by-demand.
That is, for a given history of values read/written by the program (and the program knows which variables it read/wrote), the program decides on the next variable to read/write. In case the decision is to write at the next I/O-point, the program also chooses the value to write. Furthermore, the program starts by writing all the output variables according to their order y_1, y_2, ..., y_m.
We define when an interaction matches a by-demand program. Recall that an interaction over X and Y is σ = (a_0, b_0), (a_1, b_1), ... An I/O sequence C = c_1, c_2, ... identifies the I/O-points of the interaction. The type t(P_b(d_1, ..., d_k)) tells us which variable the program P_b is going to access at the next I/O-point. Given an interaction σ, an I/O sequence C, and an index i ≥ 0, we define the view of P_b, denoted v(P_b, σ, C, i), as follows.
That is, the view of the program is the part of the interaction that is observable by the program. The view starts with the values of all outputs at time zero. Then, the view at c_i extends the view at c_{i−1} by adding the value of the variable that the program decides to read/write based on its view at point c_{i−1}. The interaction σ is a by-demand asynchronous interaction of P_b for I/O sequence C if for every 1 ≤ j ≤ m we have P_b(b_0[1], ..., b_0[j−1]) = (b_0[j], (n+j)), and for every i > 1 and every k > 0 such that c_i ≤ k < c_{i+1}, we have:
- If t(P_b(v(P_b, σ, C, i − 1))) ≤ n, then for all j ∈ {1, ..., m} we have b_k[j] = b_{k−1}[j].
- If t(P_b(v(P_b, σ, C, i − 1))) > n, then for all j ≠ t(P_b(v(P_b, σ, C, i − 1))) − n we have b_k[j] = b_{k−1}[j], and for j = t(P_b(v(P_b, σ, C, i − 1))) − n we have P_b(v(P_b, σ, C, i − 1)) = (b_k[j], n + j).
Also, for every j ∈ {1, ..., m} and every 0 < k < c_1, we have b_k[j] = b_0[j]. That is, the interaction matches a by-demand program if (a) the interaction starts with the right values of all outputs (as the program starts by initializing them) and (b) the outputs do not change in the interaction except at I/O-points where the program chooses to update a specific output (based on the program's view of the intermediate state of the interaction).

Definition 2 (by-demand realizability). An ltl specification ϕ(X; Y) is by-demand asynchronously realizable if there exists a by-demand program P_b such that all by-demand asynchronous interactions of P_b (for all I/O sequences) satisfy ϕ(X; Y).
Theorem 4. An ltl specification ϕ(X; Y) is asynchronously realizable iff it is by-demand asynchronously realizable. Furthermore, given a program that asynchronously realizes ϕ(X; Y), it can be converted in linear time to a program that by-demand asynchronously realizes ϕ(X; Y), and vice versa.
Showing that if a specification is by-demand realizable then it is also round-robin realizable is more complicated. Given a by-demand program, a round-robin program can simulate it by waiting until it has access to the variable required by the by-demand program. This means that the round-robin program may idle when it has the opportunity to write outputs and may ignore inputs that it has the option to read. However, the resulting interactions are still interactions of the by-demand program and as such must satisfy the specification.
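The idling simulation in this direction can be quantified with a small sketch: a round-robin program cycles through x_1, ..., x_n, y_1, ..., y_m, so serving a by-demand access request costs at most n + m round-robin I/O-points of waiting. This Python fragment (names and encoding ours) computes that waiting cost:

```python
def round_robin_steps(requests, n, m):
    """Simulate a by-demand program on a round-robin schedule.
    `requests` lists the variables the by-demand program wants to
    access, as indices in {1, ..., n+m} (x_1..x_n, then y_1..y_m).
    Returns, per request, how many round-robin I/O-points elapse
    before the round-robin program can serve it (idling in between)."""
    cycle = n + m
    pos = cycle            # the round-robin cycle starts just before x_1
    costs = []
    for want in requests:
        steps = (want - pos) % cycle
        if steps == 0:
            steps = cycle  # a full cycle to return to the same slot
        pos = want
        costs.append(steps)
    return costs
```

With n = 2, m = 1, the request sequence [1, 3, 1] costs [1, 2, 1] I/O-points: each request is served within one cycle, so the simulation overhead is bounded by n + m per access.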

Proving Realizability of a Specification, and Synthesis
As mentioned, the formula X_{n,m} does not lead to a practical solution for asynchronous synthesis. Here we show that in some cases a simpler synchronous realizability test can still imply the realizability of an asynchronous specification. We show that when a certain strengthening can be found, and certain conditions hold with respect to the specification, we can apply a simpler realizability test that maintains the structure of the specification. In particular, this simpler realizability test does not require stuttering quantification. When the original formula's winning condition is a GR(1) formula, the synthesis algorithm of [13] can be applied, bypassing much of the complexity involved in synthesis.
We start with the definition of a strengthening, which is a formula of the type ψ(X, r; Y). Intuitively, the strengthening refers explicitly to a scheduling variable r; it should imply the truth of the original specification and ignore the input except at reading points, so that the stuttering quantification can be removed.

is valid.
Checking the conditions in Def. 3 requires checking the identity of propositional formulae and the validity of ltl formulae, which is supported, e.g., by jtlv [17].
The formula needs to satisfy two more conditions, which are needed to show that the simpler synchronous realizability test (introduced below) is sufficient. Stuttering robustness is very natural for asynchronous specifications, as we expect the system to be completely unaware of the passage of time. Memory-lessness requires that the system know the entire 'state' of the environment.
We can test stuttering robustness by converting a specification to a nondeterministic Büchi automaton [23], adding to it transitions that capture all stuttering options [16], and then checking that it does not intersect the automaton for the negation of the specification. In our case, when handling formulae with GR(1) winning conditions, all parts of the specification are often relatively simple, and stuttering robustness can be easily checked.
Specifications of the form ϕ_e = ⟨I_e, S_e, L_e⟩ are always memory-less. The syntactic structure of S_e forces a relation between possible current and next states that does not depend on the past. Furthermore, L_e is a conjunction of properties of the form □◇p, where p is a Boolean formula. If the specification includes past temporal operators, these are embedded into the variables of the environment (cf. [18]), and must be accessible by the system as well.
In the general case, memory-lessness of a specification ϕ(X; Y) can be checked as follows. We convert both ξ and ¬ξ to nondeterministic Büchi automata N^+ and N^−. Then, we create a nondeterministic Büchi automaton A that runs two copies of N^+ and one copy of N^− simultaneously. The two copies of N^+ 'guess' two computations that satisfy ϕ(X; Y) and the copy of N^− checks that the two computations can be combined in a way that does not satisfy ϕ(X; Y). Thus, the language of A is empty iff ϕ(X; Y) is memory-less.
Note that if ϕ(X; Y ) has a memory-less environment then every asynchronous strengthening of it has a memory-less environment.This follows from the two sharing the initial and safety parts of the specification.
The kernel formula defined in Fig. 2 under-approximates the original. The formula declare_{n,m} ensures that the declared outputs are updated only at reading points. Indeed, for every i, ỹ_i is allowed to change only when r changes to a value in {1, ..., n}. Furthermore, the outputs themselves copy the values of the declared outputs (and only when they are allowed to change). Thus, the system 'ignores' inputs that are not at reading points in its next update of outputs.
If ψ(x, r; Y) is a stutteringly robust asynchronous strengthening of ϕ(x; Y) and X^{1,m}_ψ(x, r; Y ∪ Ỹ) is synchronously realizable, then ϕ(x; Y) is asynchronously realizable. Furthermore, given a program that synchronously realizes X^{1,m}_ψ, it can be converted in linear time to a program that asynchronously realizes ϕ.

Proof (Sketch):
The algorithm takes a program T_s that realizes ψ and converts it to a program T_a. The program T_a 'jumps' from reading point to reading point in T_s. By using the declared outputs in Ỹ, the asynchronous program does not have to commit to which reading point in T_s it moves until the next input is actually read. By ψ being a strengthening of ϕ, we get that the computation of T_s satisfies ϕ. Then, we use stuttering robustness to make sure that the time that passes between reading points is not important for the satisfaction of ϕ. Memory-lessness and the single input are used to justify that prefixes of the computation of T_s can be extended with suffixes of other computations, essentially allowing us to 'copy-and-paste' segments of computations of T_s in order to construct one computation of T_a.
We note that restricting to one input is similar to allowing the system to read multiple inputs simultaneously.
In the case that ϕ has a GR(1) winning condition, then so does X^{1,m}_ψ. It follows that in such cases we can use the algorithm of [13] to check whether X_ψ is synchronously realizable and to extract a program that realizes it. We show how to convert an LTS realizing X_ψ to an ILTS realizing ϕ.
For an LTS T_s = ⟨S_s, I_s, R_s, {x, r}, Y, L_s⟩, a state st_es ∈ S_s is an eventual successor of a state st ∈ S_s if there exist m ≤ |S_s| and states {s_1, ..., s_m} ⊆ S_s such that the following hold: s_1 = st and s_m = st_es; for all 0 < i < m, (s_i, s_{i+1}) ∈ R_s. If st_es is a reading state we call it an eventual read successor, and otherwise an eventual write successor. Note that the way the scheduling variable r updates its values is uniform across all eventual successors of a given state. Given an LTS T_s = ⟨S_s, I_s, R_s, {x, r}, Y, L_s⟩ such that Y = {y_1, ..., y_m}, the algorithm in Fig. 3 extracts from it an ILTS T_a = ⟨S_a, I_a, R_a, {x}, Y, L_a, i_a⟩. In the first part of the algorithm, following its initialization (lines 5 to 15), all reading states reachable from I_s are found and used to build I_a (as part of S_a). In the second part (lines 16 to 43), the (m+1)-th eventual successors of each reading state are added to S_a. This second part ensures that all writing states are 'skipped', so that the transitions of R_a connect only consecutive reading states.
As T_s is receptive, so is T_a. In particular, the algorithm transfers to T_a the sink states that handle violations of the environment's safety or initial conditions.
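We cannot reproduce Fig. 3 here, but its central step, connecting each reading state directly to the next reachable reading states while skipping writing states, can be sketched as a graph search. The following Python fragment is our simplified rendering (it assumes the given initial states are already reading states and ignores labels and the (m+1)-counting of eventual successors):

```python
from collections import deque

def skip_write_states(init_states, succ, is_read):
    """Connect each reading state directly to its next reachable
    reading states, 'skipping' intermediate writing states.
    `succ(s)` yields the successors of s; `is_read(s)` marks reading
    states.  Returns (reading_states, transitions)."""
    def next_reads(s):
        # BFS from s that stops at the first reading state on each path
        seen, frontier, result = {s}, deque(succ(s)), set()
        while frontier:
            t = frontier.popleft()
            if t in seen:
                continue
            seen.add(t)
            if is_read(t):
                result.add(t)          # a consecutive reading state
            else:
                frontier.extend(succ(t))  # keep skipping writing states
        return result

    states, trans, work = set(), set(), deque(init_states)
    while work:
        s = work.popleft()
        if s in states:
            continue
        states.add(s)
        for t in next_reads(s):
            trans.add((s, t))
            work.append(t)
    return states, trans
```

On the cycle r1 → w1 → w2 → r2 → w3 → r1 (reading states r1, r2), the sketch produces exactly the two reading states with the transitions (r1, r2) and (r2, r1), i.e., all writing states are skipped.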

Applying the Realizability Test
We illustrate the application of the realizability test presented in Section 5. To come up with an asynchronous strengthening we propose the following heuristic.
Heuristic 1. In order to derive an asynchronous strengthening ψ(X ∪ {r}; Y) for a specification ϕ(X; Y), replace one or more occurrences of atomic formulae over the inputs by formulae that refer to the values of the inputs only at reading points. The rationale here is to encode the essence of the stuttering quantification into the strengthening. Since this quantification requires indifference towards input values outside reading points, we state this explicitly.
In [14] we showed how to strengthen the specification □(x ↔ y) to an asynchronously realizable specification with the same idea: a Boolean output y copies the value of an input x.
This specification, ϕ_1(x; y), has a GR(1) winning condition and is stutteringly robust with a memory-less environment; it is therefore potentially a good candidate for our heuristic. As suggested, we obtain the strengthening ψ_1(x, r; y). We establish that ψ_1 satisfies all our requirements. We then apply the synchronous realizability test of [13] to the kernel formula X_{ψ_1}(x, r; y). This formula is realizable and we get an LTS S_1 with 30 states and 90 transitions, which is then minimized, using a variant of the Myhill-Nerode minimization, to a smaller LTS. Our algorithm extracts from it an ILTS A_{S_1}, and by model checking with [4] we ensure that all asynchronous interactions of A_{S_1} satisfy ϕ_1(x; y).
We devise similar specifications that copy the value of a Boolean input to one of several outputs according to the choice of the environment. Thus, we have a multi-valued input variable, encoding both the value and the target output variable, and several output variables. The specification ϕ_2(x; y_0, y_1) is given below. Using the same idea, we strengthen ϕ_2 to ψ_2(x, r; y_0, y_1), which passes all the required tests. We then apply the synchronous realizability test of [13] to X_{ψ_2}(x, r; y_0, y_1) and get an LTS S_2 with 340 states and 1544 transitions, which is then minimized to 196 states and 1056 transitions. Our algorithm extracts an ILTS A_{S_2}, which, as model checking confirms, asynchronously realizes ϕ_2.

Conclusions and Future Work
In this paper we extended the reduction of asynchronous synthesis to synchronous synthesis proposed in [16] to multiple input and output variables. We identified cases in which asynchronous synthesis can be done efficiently, bypassing the well-known 'problematic' aspects of synthesis.
One of the drawbacks of this synthesis technique is the large size of the resulting designs. However, we note that the size of asynchronous designs is bounded from above by that of synchronous designs. Thus, improvements to synchronous synthesis will also result in smaller asynchronous designs. We did not attempt to minimize or choose more effective synchronous programs, and we did not attempt to extract deterministic subsets of the nondeterministic controllers we worked with.
We believe that there is still room to explore additional cases in which asynchronous synthesis can be approximated. In particular, the restrictions imposed by our heuristic (namely, a one-input environment and memory-less behavior) seem quite severe. Trying to remove some of these restrictions is left for future work.
Finally, asynchronous synthesis is related to solving games with partial information.There may be a connection between the cases in which synchronous synthesis offers a solution to asynchronous synthesis and partial information games that can be solved efficiently.
