Verifying Opacity of Discrete-Timed Automata Artifact
Opacity is a powerful confidentiality property that holds if a system cannot leak secret information through its observable behavior. In recent years, time has become an increasingly popular attack vector. The notion of opacity has therefore been extended to timed automata (TA). However, verifying opacity of TA has been proven undecidable for the commonly used dense-time model. To make the problem decidable, state-of-the-art approaches consider weaker notions of opacity or heavily restrict the class of considered TA, resulting in unrealistic threat models.
We address the problem of verifying opacity of TA without such restrictions. For this purpose, we consider a discrete-time setting. We present a novel algorithm that transforms TA into equivalent finite automata (FA) and then uses known methods to verify opacity of the resulting FA. To improve efficiency, we use a novel time abstraction that significantly reduces the state space of the resulting FA and thus improves the scalability of our approach. We validate our method on randomized systems as well as four case studies from the literature, showing that our approach is applicable in practice.
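To give a rough idea of the first step, the sketch below unfolds a tiny, made-up single-clock TA into an FA in discrete time: FA states pair a location with an integer clock valuation, delays advance clocks by one time unit, and clock values above the largest constant in any guard are collapsed to a sentinel since no guard can distinguish them. This is only an illustrative sketch; the artifact's actual transformation additionally applies the time abstraction mentioned above, and all names here are invented for the example.

```python
from itertools import product

LOCATIONS = ["l0", "l1"]
CLOCKS = ["x"]
MAX_CONST = 2          # largest constant appearing in any clock constraint
CAP = MAX_CONST + 1    # sentinel meaning "greater than every constant"

# Edges of the example TA: (source, action, guard, clocks-to-reset, target).
EDGES = [
    ("l0", "a", lambda v: v["x"] >= 2, ["x"], "l1"),
]

def unfold():
    """Build an FA whose states pair a location with a clock valuation and
    whose transitions are either a one-unit delay ('tick') or a TA edge."""
    valuations = [dict(zip(CLOCKS, vs))
                  for vs in product(range(CAP + 1), repeat=len(CLOCKS))]
    states = [(loc, tuple(sorted(v.items())))
              for loc in LOCATIONS for v in valuations]
    transitions = []
    for loc, frozen in states:
        v = dict(frozen)
        # Delay: advance every clock by one time unit, saturating at CAP.
        delayed = {c: min(v[c] + 1, CAP) for c in CLOCKS}
        transitions.append(((loc, frozen), "tick",
                            (loc, tuple(sorted(delayed.items())))))
        # Discrete step: take an enabled edge and reset the listed clocks.
        for src, act, guard, resets, tgt in EDGES:
            if src == loc and guard(v):
                succ = {c: (0 if c in resets else v[c]) for c in CLOCKS}
                transitions.append(((loc, frozen), act,
                                    (tgt, tuple(sorted(succ.items())))))
    return states, transitions

states, transitions = unfold()
print(len(states), "states,", len(transitions), "transitions")
```

Opacity of the resulting FA can then be checked with standard automata-based methods (e.g., via determinization of the observable projection).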
We provide a VM with all software pre-installed to run our evaluation.
The code is also available in our GitLab repository: https://gitlab.com/julianklein/opacity-verification-of-discrete-timed-automata
The VM has commit b3b18a0f8bb071a6c0e0b29be5c11e5a1caa4c84 (on the main branch) installed.
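To work with the same code outside the VM, the repository can be cloned and pinned to that commit, for example:

```
git clone https://gitlab.com/julianklein/opacity-verification-of-discrete-timed-automata.git
cd opacity-verification-of-discrete-timed-automata
git checkout b3b18a0f8bb071a6c0e0b29be5c11e5a1caa4c84
```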