
Model Checking as Program Verification by Abstract Interpretation

Paolo Baldan, Department of Mathematics, University of Padua, Italy; Roberto Bruni, Department of Computer Science, University of Pisa, Italy; Francesco Ranzato, Department of Mathematics, University of Padua, Italy; Diletta Rigo, Department of Mathematics, University of Padua, Italy
Abstract

Abstract interpretation offers a powerful toolset for static analysis, tackling precision, complexity and state-explosion issues. In the literature, state partitioning abstractions based on (bi)simulation and property-preserving state relations have been successfully applied to abstract model checking. Here, we pursue a different track in which model checking is seen as an instance of program verification. To this purpose, we introduce a suitable language – called 𝖬𝖮𝖪𝖠 (for 𝖬𝖮del checking as abstract interpretation of 𝖪leene 𝖠lgebras) – which is used to encode temporal formulae as programs. In particular, we show that (universal fragments of) temporal logics, such as ACTL or, more generally, universal μ-calculus can be transformed into 𝖬𝖮𝖪𝖠 programs. Such programs return all and only the initial states which violate the formula. By applying abstract interpretation to 𝖬𝖮𝖪𝖠 programs, we pave the way for reusing more general abstractions than partitions as well as for tuning the precision of the abstraction to remove or avoid false alarms. We show how to perform model checking via a program logic that combines under-approximation and abstract interpretation analysis to avoid false alarms. The notion of locally complete abstraction is used to dynamically improve the analysis precision via counterexample-guided domain refinement.

Keywords and phrases:
ACTL, μ-calculus, model checking, abstract interpretation, program analysis, local completeness, abstract interpretation repair, domain refinement, Kleene algebra with tests
Copyright and License:
© Paolo Baldan, Roberto Bruni, Francesco Ranzato, and Diletta Rigo; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Theory of computation → Modal and temporal logics; Theory of computation → Verification by model checking; Theory of computation → Program verification; Theory of computation → Programming logic; Theory of computation → Abstraction
Related Version:
Extended Version: https://arxiv.org/abs/2506.05525 [1]
Funding:
This research was partially funded by the Italian MUR, under the PRIN 2022 PNRR project no. P2022HXNSC on “Resource Awareness in Programming: Algebra, Rewriting, and Analysis”, and the PNRR project SEcurity and RIghts In the CyberSpace (SERICS, PE00000014 – CUP H73C2200089001), by the INdAM-GNCS Projects RISICO (CUP E53C22001930001) and MARQ (CUP E53C24001950001), by a WhatsApp Research Award and by an Amazon Research Award for AWS Automated Reasoning.
Editors:
Patricia Bouyer and Jaco van de Pol

1 Introduction

Abstraction is a fundamental craft for mastering complexity. In model checking, abstraction-guided space reduction makes it possible to mitigate the well-known state explosion problem [15]. Abstract interpretation [16, 17] is the de facto standard framework for designing sound analyses. The idea of applying abstract interpretation to model checking has been widely investigated, e.g., [3, 23, 50, 33, 30, 28, 14, 10, 4, 46, 21, 11, 12, 49, 25, 26, 44, 22, 19, 38, 45, 20, 29, 2, 37, 47, 35, 5, 40, 55] (§ 7 gives an account of the most closely related work). These approaches often rely on over-approximations that preserve some logical properties, like state partitioning abstractions or simulation-preserving relations.

Objective.

In this work, we pursue a different approach, which consists in recasting model checking of temporal formulae directly in terms of program verification, for which the whole tool-suite of abstract interpretation is readily available. To this aim, we exploit an instance of a Kleene algebra with tests and fixpoints (𝖪𝖠𝖥) [36], whose primitives allow us to map each temporal formula to a program that can single out all and only the counterexamples to the formula. This program can then be analysed through a sound abstract interpretation that over-approximates the concrete behaviour, so that all possible counterexamples are exposed. However, this over-approximation might not faithfully model the program behaviour in the abstract domain, thus possibly mixing true and false alarms. This lack of precision cannot happen when the abstract interpretation is complete [18]. However, completeness arises quite seldom, and even if, in principle, it can be achieved by refining the abstract domain [27], the most abstract domain refinement ensuring completeness is often way too concrete (it might well coincide with the concrete domain itself).

To remove false alarms, the validity of temporal formulae can be analysed by deriving certain judgements for the corresponding program in a suitable program logic. This enables the use of techniques borrowed from locally complete abstract interpretation [9, 7], where the over-approximation provided by the abstract domain is paired with an under-approximating specification, in the style of O’Hearn’s incorrectness logic [43]. This way, any alarm raised by an incorrectness logic proof corresponds to a true counterexample, and local completeness guarantees that derivable judgements either expose some true counterexamples (if any) or prove that there are none. Moreover, the failure of a proof obligation during inference can point out how to dynamically refine the underlying abstraction to enhance the precision and expressiveness of the analysis, a technique called abstract interpretation repair [8].

Methodology (and a toy example).

Our main insight is to design a meta-programming language, called 𝖬𝖮𝖪𝖠 (𝖬𝖮del checking as abstract interpretation of 𝖪leene 𝖠lgebras), into which suitable temporal formulae can be mapped. We show how this can be done for ACTL or, more generally, for the single variable μ-calculus without the existential diamond modality. 𝖬𝖮𝖪𝖠 is a language of regular commands, based on Kleene algebra with tests (𝖪𝖠𝖳) [34] augmented with fixpoint operators (𝖪𝖠𝖥) [36]. 𝖬𝖮𝖪𝖠 leverages a small set of primitives to extend and filter computation paths. They can be combined with the usual 𝖪𝖠𝖥 operators of sequential composition, join (i.e., nondeterministic choice), Kleene iteration, and least fixpoint computation. The corresponding programs are then analysed by abstract interpretation.

The language 𝖬𝖮𝖪𝖠 operates on computation paths ⟨σ,Δ⟩, where σ ∈ Σ represents the current system state, and Δ ∈ 𝒫(Σ) represents the set of traversed states. In the following, we informally overload the symbol σ to denote ⟨σ,∅⟩. Several computation paths can be stacked and unstacked through the 𝖬𝖮𝖪𝖠 primitives 𝗉𝗎𝗌𝗁 and 𝗉𝗈𝗉 for dealing with nested temporal formulae. A generic stack is denoted ⟨σ,Δ⟩::S and we call σ its current state. Each temporal formula φ is mapped to a 𝖬𝖮𝖪𝖠 program φ¯ that intuitively computes counterexamples to φ, whence the bar over φ in the notation. Letting ⟦·⟧ denote the usual (collecting) denotational semantics, a key result is, therefore, that a state σ satisfies the formula φ iff the execution of φ¯ on σ returns the empty set. Of course, the results can be generalised additively to sets of stacks, for which φ¯ filters out exactly those stacks whose current state satisfies φ. We can thus summarise the semantic correspondence between formula satisfaction and execution of the associated 𝖬𝖮𝖪𝖠 programs by ⟦φ¯⟧P = {σ ∈ P ∣ σ ⊧̸ φ}. As a special case, it follows that σ ⊧ φ iff ⟦φ¯⟧{σ} = ∅. For example, for an atomic proposition p, we let p¯ ≜ ¬𝗉?, where ¬𝗉? is a 𝖬𝖮𝖪𝖠 primitive that filters out those states where p is valid. As a further example, the 𝖬𝖮𝖪𝖠 program for the formula 𝖠𝖦φ (stating that φ always holds along every possible path) is 𝖠𝖦φ¯ = 𝗉𝗎𝗌𝗁;𝗇𝖾𝗑𝗍*;φ¯;𝗉𝗈𝗉.

The intuition is quite simple: 𝗇𝖾𝗑𝗍* generates all the reachable states by iterating the 𝗇𝖾𝗑𝗍 operation, and they are filtered by φ¯, to expose those which fail to satisfy φ. The operations 𝗉𝗎𝗌𝗁 and 𝗉𝗈𝗉 manage the stack structure. In the example, to attain the semantic correspondence above, whenever a state δ such that δ⊧̸φ is reachable from some initial state σ, we want ⟦𝖠𝖦φ¯⟧{σ} to return σ and not δ: the operation 𝗉𝗎𝗌𝗁 serves to stash σ when looking for δ, and the operation 𝗉𝗈𝗉 to discard δ and unstash σ, to conclude σ⊧̸𝖠𝖦φ. To make the next introductory examples easier to follow, we suggest that the reader ignore the 𝗉𝗎𝗌𝗁 and 𝗉𝗈𝗉 symbols, hence their faded rendering in 𝖬𝖮𝖪𝖠 programs.

To get a flavour of the proposed approach, we sketch an easy example, which is a variation of [12, Examples 3.4, 3.7]. Consider the transition system in Figure 1(a), which models the behaviour of cars at a US traffic light. State names consist of two letters, denoting, respectively, the traffic light status, i.e. red, yellow, green, and the car behaviour, i.e. stop, drive. We model check the validity in the initial state 𝗋𝗌 of the safety property φ = 𝖠𝖦(¬𝗋𝖽), stating that it will never happen that the light is red and the car is driving. This is obviously true for the system, since 𝗋𝖽 is an unreachable state. We therefore consider the 𝖬𝖮𝖪𝖠 program φ¯ = 𝗉𝗎𝗌𝗁;𝗇𝖾𝗑𝗍*;𝗋𝖽?;𝗉𝗈𝗉, and compute its semantics ⟦φ¯⟧{𝗋𝗌}, which turns out to be the empty set, thus allowing us to conclude that 𝗋𝗌 ⊧ φ. In fact, after a few concrete iteration steps, ⟦𝗋𝖽?⟧⟦𝗇𝖾𝗑𝗍*⟧{𝗋𝗌} = ⟦𝗋𝖽?⟧{𝗋𝗌,𝗀𝗌,𝗀𝖽,𝗒𝖽,𝗒𝗌} = ∅.

(a) Behaviour of cars at US traffic light.
(b) Abstract system.
(c) Abstract domain.
Figure 1: Variations on the traffic light example from [12].
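To make the computation above concrete, the following Python sketch (ours, not from the paper) reproduces the check ⟦𝗉𝗎𝗌𝗁;𝗇𝖾𝗑𝗍*;𝗋𝖽?;𝗉𝗈𝗉⟧{𝗋𝗌} = ∅. Since Figure 1(a) is not reproduced here, the transition relation TRANS is a hypothetical reconstruction consistent with the reachable states mentioned in the text.

```python
# Hypothetical transition relation for Figure 1(a): rd is unreachable from rs.
TRANS = {"rs": {"gs"}, "gs": {"gd", "yd"}, "gd": {"yd"},
         "yd": {"ys"}, "ys": {"rs"}, "rd": {"rd"}}

def post(states):
    """One application of next, lifted additively to sets of states."""
    return set().union(*(TRANS[s] for s in states)) if states else set()

def next_star(states):
    """Semantics of next*: all states reachable in zero or more steps."""
    reach = set(states)
    while True:
        new = post(reach) - reach
        if not new:
            return reach
        reach |= new

# push and pop only manage the stack of frames, so this flat query can ignore them.
reachable = next_star({"rs"})
print(reachable)                            # {'rs', 'gs', 'gd', 'yd', 'ys'}
print({s for s in reachable if s == "rd"})  # set(): no counterexample, rs satisfies AG(not rd)
```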

How the abstraction works.

We show how any sound abstraction of state properties in 𝒫(Σ) of the original system can be lifted to a range of sound abstractions on stacks with varying degrees of precision, in such a way that 𝖬𝖮𝖪𝖠 programs can be analysed by an abstract interpreter, denoted by ⟦·⟧♯. The analysis output is always sound, meaning that the set of counterexamples is over-approximated. In particular, letting α denote the abstraction map to an abstract lattice, and ⊥ the bottom element of the corresponding abstract computational domain, if ⟦φ¯⟧♯α({σ}) = ⊥, then σ ⊧ φ holds. Vice versa, if ⟦φ¯⟧♯α({σ}) ≠ ⊥, then the abstract analysis might raise a false alarm because, in general, abstractions are not complete.

Back to the previous example, let us consider the (non-partitioning) abstract domain A ≜ {⊥A, a⊓c, b⊓c, a, b, c, a⊔c, b⊔c, ⊤A} in Figure 1(c), induced by the abstract properties a, b and c, whose concretizations are represented as dotted boxes in Figure 1(b). The abstract interpreter computes ⟦𝖠𝖦(¬𝗋𝖽)¯⟧♯α({𝗋𝗌}) = ⟦𝗉𝗎𝗌𝗁;𝗇𝖾𝗑𝗍*;𝗋𝖽?;𝗉𝗈𝗉⟧♯ a = ⊥, thus allowing us to conclude that 𝗋𝗌 ⊧ 𝖠𝖦(¬𝗋𝖽). Since 𝗋𝗌 has a transition to 𝗀𝗌, we have ⟦𝗋𝖽?⟧♯⟦𝗇𝖾𝗑𝗍*⟧♯ a = ⟦𝗋𝖽?⟧♯(a⊔c) = ⊥. It is worth remarking that in the abstract domain A, computing ⟦𝗇𝖾𝗑𝗍*⟧♯ a requires two iterations instead of four, as happens in the concrete computation. Consider now the formula ψ = 𝖠𝖦(𝗀 → 𝖠𝖷𝖽), namely, “whenever the semaphore is green, then at the next state the car is driving”. Spelling out the 𝖬𝖮𝖪𝖠 encoding, we have: ψ¯ = 𝗉𝗎𝗌𝗁;𝗇𝖾𝗑𝗍*;𝗀?;𝗉𝗎𝗌𝗁;𝗇𝖾𝗑𝗍;¬𝖽?;𝗉𝗈𝗉;𝗉𝗈𝗉. Again, ψ holds for the concrete system in Figure 1(a), but here the abstract interpretation is imprecise, because ⟦𝗉𝗎𝗌𝗁;𝗇𝖾𝗑𝗍*;𝗀?⟧♯α({𝗋𝗌}) = c::a and ⟦𝗉𝗎𝗌𝗁;𝗇𝖾𝗑𝗍;¬𝖽?⟧♯(c::a) = (a⊓c)::c::a, so that ⟦ψ¯⟧♯α({𝗋𝗌}) = a, thus raising a false alarm. As explained above, we exploit the 𝗉𝗎𝗌𝗁/𝗉𝗈𝗉 operators to manage the stack structure of domain elements and to recover the (abstract) states from which the computation started when property violations are found. For instance, in the case illustrated above, once the stack (a⊓c)::c::a is obtained, representing an abstract trace where the property fails, its initial abstract state, a, is recovered by applying two successive 𝗉𝗈𝗉 operations.

To eliminate false alarms, we leverage the concept of local completeness in abstract interpretation. Roughly, the idea consists in focusing on the (abstract) computation path produced by some input of interest, and then refining the abstraction only when needed to make it complete locally to such a computation. More precisely, we apply a variation of local completeness logic (LCL) [9, 7], which is here extended to deal with fixpoint operators. The LCL proof system, parametrised by a generic state abstraction A, works with O’Hearn-like judgements ⊢A [P] 𝗋 [Q], where 𝗋 is a program and P,Q denote state properties or, equivalently, the underlying sets of states satisfying those properties. The triple ⊢A [P] 𝗋 [Q] is valid when Q is an under-approximation of the states reachable by 𝗋 from P, the abstraction of Q over-approximates such reachable states, and the abstract computation of 𝗋 on the precondition P is locally complete. Roughly, this can be expressed as Q ⊆ ⟦𝗋⟧P ⊆ γα(Q). In our setting, where the program φ¯ associated with a formula φ “returns” all the counterexamples to the validity of φ, an inference of ⊢A [P] φ¯ [Q] shows that Q ⊆ {σ ∈ P ∣ σ⊧̸φ} ⊆ γα(Q), thus bounding the set of counterexamples to φ in P. Hence, if Q ≠ ∅ then φ does not hold for each σ ∈ Q ⊆ P, while φ holds for all states σ ∈ P ∖ γα(Q). If, instead, Q = ∅, then α(Q) = ⊥, and φ holds in the whole of P. As LCL derivations cannot succeed with locally incomplete abstractions, if some proof obligation fails, the abstract domain needs to be “fixed” to achieve local completeness. This can be accomplished, e.g., by applying the abstraction repair techniques defined in [8].

For the toy traffic light example, we can derive in LCL the program triple ⊢A [{𝗋𝗌}] φ¯ [∅], which confirms the validity of the formula φ. Vice versa, an attempt to derive the triple ⊢A [{𝗋𝗌}] ψ¯ [∅] fails because of local incompleteness. Roughly, we can successfully derive ⊢A [{𝗋𝗌}] 𝗇𝖾𝗑𝗍*;𝗀? [{𝗀𝗌,𝗀𝖽}], but then the execution of 𝗇𝖾𝗑𝗍 on {𝗀𝗌,𝗀𝖽} is not locally complete, because α(⟦𝗇𝖾𝗑𝗍⟧{𝗀𝗌,𝗀𝖽}) = α({𝗀𝖽,𝗒𝖽}) = b⊓c, while ⟦𝗇𝖾𝗑𝗍⟧♯α({𝗀𝗌,𝗀𝖽}) = ⟦𝗇𝖾𝗑𝗍⟧♯ c = a⊔c. The abstraction repair procedure of [8] would then lead to refine the abstract domain by adding a new abstract element c1 to represent the concrete set γ(c1) = {𝗀𝗌,𝗀𝖽}. Since abstract domains must be closed under meets, the addition of c1 also entails the addition of b⊓c1, with γ(b⊓c1) = {𝗀𝖽}. Letting A1 ≜ A ∪ {b⊓c1, c1}, we can still derive ⊢A1 [{𝗋𝗌}] 𝗇𝖾𝗑𝗍*;𝗀? [{𝗀𝗌,𝗀𝖽}], then ⊢A1 [{𝗀𝗌,𝗀𝖽}] 𝗇𝖾𝗑𝗍 [{𝗀𝖽,𝗒𝖽}], and finally ⊢A1 [{𝗀𝖽,𝗒𝖽}] ¬𝖽? [∅], which can be composed together to conclude ⊢A1 [{𝗋𝗌}] ψ¯ [∅]. In fact, in A1 we just have ⟦ψ¯⟧♯α1({𝗋𝗌}) = ⊥.

Original contribution.

We set up a theoretical framework for the systematic reduction of model checking of temporal logics to program verification, thereby enabling the re-use of abstract interpretation techniques and verifiers either directly or with minimal effort. This framework consists of:

  • 𝖬𝖮𝖪𝖠, a meta-programming language in which different temporal logics can be encoded so that the 𝖬𝖮𝖪𝖠 code for a formula φ computes exactly the counterexamples to φ;

  • a systematic technique for lifting abstractions over state properties to abstractions suitable for the static analysis of 𝖬𝖮𝖪𝖠 programs;

  • an extension of the LCL program logic to handle least fixpoints, introducing a form of “false alarm-guided” abstraction refinement loop in the analysis of 𝖬𝖮𝖪𝖠 programs.

Synopsis.

In § 2 we provide some basics about abstract interpretation and the (fragments of) temporal logics considered in the paper. In § 3 we introduce the language 𝖬𝖮𝖪𝖠, and then prove in § 4 the key results that relate formula satisfaction with program execution. In § 5 we define a general technique for deriving abstract domains for the static analysis of 𝖬𝖮𝖪𝖠 programs. In § 6 we showcase how local completeness logic reasoning can be exploited in our framework. In § 7 we discuss related work. Finally, in § 8 we draw some conclusions and sketch future avenues of research. Full proofs of our results and additional material are available in the extended version of this paper [1].

2 Background

A complete lattice is a poset (L, ≤L) where every subset X ⊆ L has both a least upper bound (lub) and a greatest lower bound (glb), denoted by ⋁L X and ⋀L X, respectively, with ⊤L ≜ ⋁L L and ⊥L ≜ ⋀L L. When no ambiguities can arise, a lattice will be denoted by L and subscripts will be omitted.

Given two complete lattices L1 and L2, a function f : L1 → L2 is monotone if x ≤1 y implies f(x) ≤2 f(y), and additive (resp., co-additive) if it preserves arbitrary lubs (resp., glbs). Any monotone function f : L → L on a complete lattice has both a least and a greatest fixpoint, denoted by lfp(f) and gfp(f), respectively. The set of functions f : S → L from a set S to a complete lattice L, denoted Lˢ, forms a complete lattice when endowed with the pointwise order s.t. f ≤ g if for all s ∈ S, f(s) ≤L g(s). If L1, L2 are complete lattices, then L1×L2 is their product lattice endowed with the componentwise order s.t. (x1,x2) ≤ (y1,y2) if x1 ≤1 y1 and x2 ≤2 y2. We write Lⁿ for the n-ary product of L with itself and L+ ≜ {⊤,⊥} ∪ ⋃n≥1 Lⁿ for the complete lattice of non-empty finite sequences in L, ordered by x1⋯xn ≤ y1⋯ym if n = m and xi ≤L yi for all i ∈ [1,n], with top ⊤ and bottom ⊥.

2.1 Abstract Interpretation

Let us recall the basics of abstract interpretation [17] (see [16] for a thorough account).

Given two complete lattices C and A, called the concrete and the abstract domain, respectively, a Galois connection (GC) ⟨α,γ⟩ : C ⇄ A is a pair of functions α : C → A and γ : A → C s.t. α(c) ≤A a ⟺ c ≤C γ(a) for any c ∈ C and a ∈ A.

The function α is referred to as the abstraction map and turns out to be additive, while γ is the concretization map, which is always co-additive. Intuitively, any abstract element a ∈ A such that c ≤C γ(a) is a sound over-approximation of the concrete value c, while the abstraction α(c) is the most precise over-approximation of c in the abstract domain A, i.e., α(c) = ⋀{a ∈ A ∣ c ≤C γ(a)} holds. The notation Aα,γ denotes an abstract domain endowed with its underlying GC, and we will omit subscripts when α and γ are clear from the context.

Example 2.1 (Image adjunction).

Given any function f : X → Y, let f> and f< denote the direct and inverse image of f, respectively. The pair ⟨f>, f<⟩ : 𝒫(X) ⇄ 𝒫(Y) is a GC that we refer to as the image adjunction (this is an instance of [24, Exercise 7.18]).
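For concreteness, here is a minimal Python sketch (ours, not from the paper) of the image adjunction over finite sets; the printed check illustrates the GC condition f>(S) ⊆ T ⟺ S ⊆ f<(T).

```python
def direct_image(f):
    """f> : P(X) -> P(Y), the abstraction map of the image adjunction."""
    return lambda xs: {f(x) for x in xs}

def inverse_image(f, domain):
    """f< : P(Y) -> P(X), the concretization map (the domain is kept explicit so the set is computable)."""
    return lambda ys: {x for x in domain if f(x) in ys}

f = lambda n: n % 3
X = set(range(10))
fwd, bwd = direct_image(f), inverse_image(f, X)
S, T = {1, 4}, {0, 1}
# GC condition: fwd(S) <= T  iff  S <= bwd(T)
print(fwd(S) <= T, S <= bwd(T))   # True True
```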

The class of abstract domains on C, denoted by Abs(C) ≜ {Aα,γ ∣ ⟨α,γ⟩ : C ⇄ A}, can be preordered by the domain refinement relation: a domain A′ refines A when γA(A) ⊆ γA′(A′), i.e., when A′ represents at least all the concrete values represented by A.

Example 2.2 (Control flow graphs and predicate abstraction).

Any program can be represented by its control flow graph (CFG). Let 𝑉𝑎𝑟 be a set of variables valued in 𝕍 and denote by 𝐸𝑛𝑣 = 𝕍^𝑉𝑎𝑟 the set of environments. A CFG is a graph (N,E,s,e), where N is a finite set of nodes, representing program points, s,e ∈ N are the start and end nodes, respectively, and E ⊆ N×F×N is a set of edges, labelled over a set F ⊆ 𝒫(𝐸𝑛𝑣) → 𝒫(𝐸𝑛𝑣) of (additive) transfer functions. For example, the program c in Figure 2(a) is decorated with program points n ∈ N = {s,1,2,3,e}, has variables 𝑉𝑎𝑟 = {x,y,z,w}, and values ranging in the finite domain 𝕍 = ℤk of integers modulo a given k > 0. The CFG of c is depicted in Figure 2(b).

(a) Program c.
(b) Control flow graph of c.
(c) Predicate abstraction .
Figure 2: Program from [4, Figure 1], with control flow graph and predicate abstraction domain.

Predicate abstraction makes it possible to approximate program invariants [32]. Given a set of predicates 𝑃𝑟𝑒𝑑 ⊆ 𝒫(𝐸𝑛𝑣) (where any p ∈ 𝑃𝑟𝑒𝑑 is identified with its extension p = {ρ ∈ 𝐸𝑛𝑣 ∣ ρ ⊧ p}), the predicate abstraction domain is defined by adding to 𝑃𝑟𝑒𝑑 the complement predicates 𝑃𝑟𝑒𝑑̄ ≜ {p̄ ∣ p ∈ 𝑃𝑟𝑒𝑑}, and then by closing 𝑃𝑟𝑒𝑑 ∪ 𝑃𝑟𝑒𝑑̄ under logical conjunction. We define a function π that associates to each set of environments the strongest predicate of the domain it satisfies. For example, given the two predicates p ≜ (z=0) and q ≜ (x=y), the predicate abstraction domain induced by the set 𝑃𝑟𝑒𝑑 = {p,q} is depicted in Figure 2(c). Correspondingly, the product abstraction (one copy of the predicate domain for each program point in N) allows us to represent the abstract state of the program as a function that associates to each program point n the strongest predicate that holds at n. Hence, for instance, the set of all possible initial states of c, which is {(s,xyzw) ∣ x,y,z,w ∈ 𝕍}, is represented by the function mapping s to ⊤ and every other program point to ⊥, often written (s ↦ ⊤), omitting the program points which are mapped to ⊥.
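A minimal sketch (ours) of the map π for this example, assuming the two predicates p ≜ (z=0) and q ≜ (x=y) from the text and a hypothetical small modulus k; environments are plain dictionaries and the result is the strongest conjunction of literals, with the empty set of literals playing the role of ⊤ and None standing for ⊥.

```python
from itertools import product

K = 4                                          # hypothetical choice of the modulus k
PRED = {"p": lambda e: e["z"] == 0,            # p = (z = 0)
        "q": lambda e: e["x"] == e["y"]}       # q = (x = y)

def pi(envs):
    """Strongest conjunction of literals over PRED satisfied by every environment in envs."""
    if not envs:
        return None                            # bottom: no environment at all
    lits = set()
    for name, pred in PRED.items():
        if all(pred(e) for e in envs):
            lits.add(name)
        elif all(not pred(e) for e in envs):
            lits.add("not " + name)
    return frozenset(lits)

# All initial environments at program point s: any values for x, y, z, w.
init_envs = [dict(zip("xyzw", v)) for v in product(range(K), repeat=4)]
print(pi(init_envs))                              # frozenset(): no predicate holds on all of them (top)
print(pi([{"x": 1, "y": 1, "z": 0, "w": 3}]))     # frozenset({'p', 'q'})
```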

An abstract interpreter computes in the underlying abstract domain through correct (and effective) abstract approximations of concrete functions. Given Aα,γ ∈ Abs(C) and a function f : C → C, an abstract function f♯ : A → A is a correct approximation of f if α∘f ≤ f♯∘α, and it is a complete approximation of f when α∘f = f♯∘α. The best correct approximation (bca) of f in A is defined as fᴬ ≜ α∘f∘γ : A → A, and turns out to be the most precise correct approximation, i.e., fᴬ ≤ f♯ for any other correct approximation f♯ of f. When a complete approximation f♯ exists, then the bca fᴬ is itself complete, thus making completeness an abstract domain property, defined by the equation α∘f = α∘f∘γ∘α. In program analysis, abstract domains are commonly endowed with correct but incomplete abstract transfer functions. When completeness holds, the abstract interpreter is as precise as possible for the given abstract domain and cannot raise false alarms when verifying properties that are expressible in the abstract domain [27].
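The following Python sketch (ours, not from the paper) computes the best correct approximation α∘f∘γ over a small hypothetical sign-like domain and checks the completeness equation α∘f = α∘f∘γ∘α by brute force; both the domain and the concrete functions are illustrative assumptions.

```python
from itertools import chain, combinations

UNIV = {-2, -1, 0, 1, 2}
ABS = ["bot", "neg", "zero", "pos", "top"]      # a small sign-like abstract domain (hypothetical)
GAMMA = {"bot": set(), "neg": {-2, -1}, "zero": {0}, "pos": {1, 2}, "top": UNIV}

def alpha(c):
    """Most precise abstract element over-approximating the concrete set c."""
    return min((a for a in ABS if c <= GAMMA[a]), key=lambda a: len(GAMMA[a]))

def bca(f):
    """Best correct approximation of f: alpha . f . gamma."""
    return lambda a: alpha(f(GAMMA[a]))

def is_complete(f):
    """Completeness as a domain property: alpha.f == alpha.f.gamma.alpha on every concrete value."""
    subsets = chain.from_iterable(combinations(UNIV, n) for n in range(len(UNIV) + 1))
    return all(alpha(f(set(c))) == alpha(f(GAMMA[alpha(set(c))])) for c in subsets)

neg = lambda X: {-x for x in X}                 # concrete transfer functions (hypothetical)
succ = lambda X: {min(x + 1, 2) for x in X}

print(bca(neg)("pos"))     # 'neg'
print(is_complete(neg))    # True: negation maps abstract elements to abstract elements exactly
print(is_complete(succ))   # False: e.g. alpha(succ({-2})) = 'neg' but succ(gamma('neg')) also hits 0
```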

2.2 Transition Systems and Logics

Temporal logics are typically interpreted over unlabeled, finite, directed graphs, whose nodes and edges model states and transitions between them (i.e., state changes), respectively.

A transition system T is a tuple (Σ, I, 𝐏, →, ⊧), where Σ is a finite set of states ranged over by σ, I ⊆ Σ is the set of initial states, → ⊆ Σ×Σ is the transition relation, 𝐏 is a (finite) set of atomic propositions, ranged over by p, and ⊧ ⊆ Σ×𝐏 is the satisfaction relation.

We write σ → σ′ instead of (σ,σ′) ∈ → and let post(σ) = {σ′ ∣ σ → σ′} denote the set of direct successors of σ ∈ Σ. As is common in model checking [13], we consider systems whose transition relation is total, i.e., for all σ ∈ Σ there exists σ′ ∈ Σ such that σ → σ′.

A path is an infinite denumerable state sequence (σi)i∈ℕ such that σi → σi+1 for all i ∈ ℕ.

Example 2.3 (Control flow graphs as transition systems).

A CFG (N,E,s,e) can be viewed as a transition system with states Σ = N×𝐸𝑛𝑣 and transition relation defined by (n,ρ) → (n′,ρ′) if there is an edge (n,f,n′) ∈ E such that ρ′ ∈ f({ρ}). Additionally, in order to make the transition relation total, we add self-loops to all the states (e,ρ) involving the end node e. For instance, by using the shorthand (n, xyzw) for the states in Example 2.2, the transition (s,0111) → (1,1101) is induced by the edge (s, x:=y; z:=0, 1) in Figure 2(b), while (3,1111) → (e,1111) by the edge (3, x=y, e) in Figure 2(b).

Let us point out that the predicate abstraction in Example 2.2 naturally lifts to the powerset of states, with the N-indexed product of the predicate domain as codomain A, and with abstraction map α(X)(n) ≜ π({ρ ∣ (n,ρ) ∈ X}) for X ∈ 𝒫(Σ) = 𝒫(N×𝐸𝑛𝑣). Given σ ∈ A, we define 𝑠𝑢𝑝𝑝(σ) ≜ {n ∈ N ∣ σ(n) ≠ ⊥}.

ACTL.

ACTL is the fragment of CTL whose temporal formulae are universally quantified over all paths leaving the current state. Thus, given a set of atomic propositions p ∈ 𝐏:

ACTL ∋ φ ::= p ∣ ¬p ∣ φ1∧φ2 ∣ φ1∨φ2 ∣ 𝖠𝖷φ1 ∣ 𝖠𝖥φ1 ∣ 𝖠𝖦φ1 ∣ φ1 𝖠𝖴 φ2
Definition 2.4 (ACTL semantics).

Given a transition system T = (Σ, I, 𝐏, →, ⊧), the semantics ⟦φ⟧ ⊆ Σ of ACTL formulae over T is as follows:

⟦p⟧ ≜ {σ ∈ Σ ∣ σ ⊧ p}    ⟦¬p⟧ ≜ {σ ∈ Σ ∣ σ ⊭ p}
⟦φ1∧φ2⟧ ≜ ⟦φ1⟧ ∩ ⟦φ2⟧    ⟦φ1∨φ2⟧ ≜ ⟦φ1⟧ ∪ ⟦φ2⟧
⟦𝖠𝖷φ1⟧ ≜ {σ ∈ Σ ∣ ∀σ′. σ → σ′ ⟹ σ′ ∈ ⟦φ1⟧}
⟦𝖠𝖥φ1⟧ ≜ {σ0 ∈ Σ ∣ for all paths (σi)i∈ℕ, ∃k. σk ∈ ⟦φ1⟧}
⟦𝖠𝖦φ1⟧ ≜ {σ0 ∈ Σ ∣ for all paths (σi)i∈ℕ, ∀j. σj ∈ ⟦φ1⟧}
⟦φ1 𝖠𝖴 φ2⟧ ≜ {σ0 ∈ Σ ∣ for all paths (σi)i∈ℕ, ∃k. (σk ∈ ⟦φ2⟧ ∧ ∀j<k. σj ∈ ⟦φ1⟧)}

Universal fragment of single variable 𝝁-calculus.

The modal μ-calculus is a well-known extension of propositional modal logic with least and greatest fixed point operators. We will focus on its universal fragment, only allowing the modal operator □ that quantifies over all transitions. Moreover, for the sake of simplicity, we restrict to the single variable fragment where, roughly speaking, nested fixpoints cannot have mutual dependencies.

Given a set of atomic propositions p ∈ 𝐏, the μ-calculus is defined as follows:

φ ::= p ∣ ¬p ∣ φ1∧φ2 ∣ φ1∨φ2 ∣ □φ1 ∣ x ∣ μx.φx ∣ νx.φx
Definition 2.5 (μ-calculus semantics).

Given T = (Σ, I, 𝐏, →, ⊧) and a valuation 𝒱 : Var → 𝒫(Σ), the semantics ⟦φ⟧𝒱 ⊆ Σ of μ-calculus formulae over T is as follows:

⟦p⟧𝒱 ≜ {σ ∈ Σ ∣ σ ⊧ p}    ⟦¬p⟧𝒱 ≜ {σ ∈ Σ ∣ σ ⊭ p}
⟦φ1∧φ2⟧𝒱 ≜ ⟦φ1⟧𝒱 ∩ ⟦φ2⟧𝒱    ⟦φ1∨φ2⟧𝒱 ≜ ⟦φ1⟧𝒱 ∪ ⟦φ2⟧𝒱
⟦□φ1⟧𝒱 ≜ {σ ∈ Σ ∣ ∀σ′. σ → σ′ ⟹ σ′ ∈ ⟦φ1⟧𝒱}    ⟦x⟧𝒱 ≜ 𝒱(x)
⟦μx.φx⟧𝒱 ≜ lfp(λS.⟦φx⟧𝒱[x↦S])    ⟦νx.φx⟧𝒱 ≜ gfp(λS.⟦φx⟧𝒱[x↦S])

where 𝒱[x↦S] is the usual notation for function update.

We write ⟦φ⟧ instead of ⟦φ⟧𝒱 when the valuation is inessential, and σ ⊧ φ when σ ∈ ⟦φ⟧.

3 The Language 𝗠𝗢𝗞𝗔

We define a meta-language, called 𝖬𝖮𝖪𝖠 (for 𝖬𝖮del checking as abstract interpretation of 𝖪leene 𝖠lgebras), as a (generalised) Kleene Algebra with a set of basic expressions suited for identifying counterexamples to the validity of temporal formulae.

𝗞𝗔𝗙.

We rely on Kozen’s Kleene Algebra with tests [34] with a Fixpoint operator [36], 𝖪𝖠𝖥 for short. Given a set 𝖤𝗑𝗉 of basic expressions 𝖾, 𝖪𝖠𝖥 is defined below:

𝖪𝖠𝖥 ∋ 𝗋 ::= 𝟣 ∣ 𝟢 ∣ 𝖾 ∣ 𝗋1;𝗋2 ∣ 𝗋1 ⊕ 𝗋2 ∣ 𝗋1* ∣ 𝖷 ∣ μ𝖷.𝗋1

The term 𝟣 represents the identity, i.e. no action, 𝟢 represents divergence, 𝗋1;𝗋2 represents sequential composition, 𝗋1 ⊕ 𝗋2 represents non-deterministic choice, 𝗋1* represents the Kleene iteration of 𝗋1, i.e. 𝗋1 performed zero or any finite number of times, 𝖷 is a variable ranging over a set 𝖵𝖺𝗋, and μ𝖷.𝗋1 represents the least fixpoint operator with respect to the variable 𝖷.

Commands are interpreted as functions over a complete lattice C. Given a semantics (|·|) : 𝖤𝗑𝗉 → C → C for basic expressions, the semantics of regular expressions ⟦·⟧η : 𝖪𝖠𝖥 → C → C is inductively defined as follows, where η : 𝖵𝖺𝗋 → (C → C) is an environment:

⟦𝟣⟧η ≜ λx.x    ⟦𝟢⟧η ≜ λx.⊥
⟦𝖾⟧η ≜ (|𝖾|)    ⟦𝗋1;𝗋2⟧η ≜ ⟦𝗋2⟧η ∘ ⟦𝗋1⟧η
⟦𝗋1 ⊕ 𝗋2⟧η ≜ ⟦𝗋1⟧η ⊔ ⟦𝗋2⟧η    ⟦𝗋1*⟧η ≜ ⨆{(⟦𝗋1⟧η)ᵏ ∣ k ∈ ℕ}
⟦𝖷⟧η ≜ η(𝖷)    ⟦μ𝖷.𝗋1⟧η ≜ lfp(λf : C→C. ⟦𝗋1⟧η[𝖷↦f])

It can be seen that the Kleene star 𝗋* can be encoded as the fixpoint μ𝖷.(𝟣 ⊕ 𝗋;𝖷) and, when the semantics of basic expressions (|·|) is additive, also as 𝗋* = μ𝖷.(𝟣 ⊕ 𝖷;𝗋). Although redundant, the inclusion of the Kleene star in our syntax is very convenient, as it enables significant simplifications in some encodings, particularly when the full expressiveness of least fixpoint calculations is not necessary (see the remark at the end of Section 4). The same applies to the proof logic studied in Section 6: while basic LCL suffices for the 𝖪𝖠𝖥 fragment without the least fixpoint operator, its extension μLCL is otherwise needed. For closed 𝖪𝖠𝖥 terms the environment is inessential and we write just ⟦𝗋⟧ instead of ⟦𝗋⟧η.
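As an illustration of these clauses, here is a small Python sketch (ours, not from the paper) of the ⟦·⟧η semantics, instantiated on the powerset of a tiny hypothetical state space; semantic functions are tabulated over all subsets so that least fixpoints can be detected by plain equality, which is only viable because the universe is small.

```python
from itertools import combinations

U = frozenset({0, 1, 2, 3})                  # tiny hypothetical state space
SUBSETS = [frozenset(c) for n in range(len(U) + 1) for c in combinations(sorted(U), n)]

def ext(f):
    """Tabulate a function P(U) -> P(U) as a dict, so that functions can be compared for equality."""
    return {s: frozenset(f(s)) for s in SUBSETS}

def evalf(t, env):
    """Semantics of a KAF term t as a tabulated function; env maps variables to such functions."""
    k = t[0]
    if k == "one":    return ext(lambda s: s)
    if k == "zero":   return ext(lambda s: frozenset())
    if k == "exp":    return ext(t[1])                       # basic expression, assumed additive
    if k == "var":    return env[t[1]]
    if k == "seq":
        f, g = evalf(t[1], env), evalf(t[2], env)
        return ext(lambda s: g[f[s]])                        # [[r1;r2]] = [[r2]] o [[r1]]
    if k == "choice":
        f, g = evalf(t[1], env), evalf(t[2], env)
        return ext(lambda s: f[s] | g[s])
    if k == "star":                                          # join of all finite iterates of t[1]
        f = evalf(t[1], env)
        def star(s):
            acc = s
            while True:
                nxt = acc | f[acc]
                if nxt == acc:
                    return acc
                acc = nxt
        return ext(star)
    if k == "mu":                                            # Kleene iteration of the approximants
        var, body = t[1], t[2]
        cur = ext(lambda s: frozenset())                     # bottom of the function lattice
        while True:
            nxt = evalf(body, {**env, var: cur})
            if nxt == cur:
                return cur
            cur = nxt
    raise ValueError(k)

# A hypothetical basic expression: one transition step over a fixed relation, extended to sets.
REL = {0: {1}, 1: {2}, 2: {3}, 3: {3}}
step = ("exp", lambda s: frozenset().union(*(REL[x] for x in s)) if s else frozenset())

star_term = ("star", step)
mu_term = ("mu", "X", ("choice", ("one",), ("seq", step, ("var", "X"))))  # r* = muX.(1 ⊕ r;X)
print(evalf(star_term, {})[frozenset({0})])   # frozenset({0, 1, 2, 3})
print(evalf(mu_term, {})[frozenset({0})])     # the same set, as expected from the encoding of star
```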

The Language 𝗠𝗢𝗞𝗔.

Given a transition system T = (Σ, I, 𝐏, →, ⊧), 𝖬𝖮𝖪𝖠 is an instance of 𝖪𝖠𝖥 interpreted over stacks of frames, each frame representing a computation path.

Definition 3.1 (Frame, stack).

A frame is a pair ⟨σ,Δ⟩ ∈ Σ×𝒫(Σ). We denote by FΣ the set of frames. A stack is a finite non-empty sequence of frames, i.e. an element of FΣ+, denoted by ⟨σ,Δ⟩::S, where S is a stack or the empty sequence ε.

A frame ⟨σ,Δ⟩ represents a computation in T where σ is the current state and Δ is the set of traversed states, used for loop-detection. The order of the traversed states and their possible multiple occurrences are abstracted away, as they are irrelevant when checking the satisfaction of a formula. Frames are stacked to deal with formulae with nested operators.

The language 𝖬𝖮𝖪𝖠 is defined as an instance of 𝖪𝖠𝖥 with basic expressions for extending and filtering frames, for (de)constructing stacks, and for keeping track of fixed-point equations.

Definition 3.2 (𝖬𝖮𝖪𝖠 language).

The language 𝖬𝖮𝖪𝖠 is an instance of 𝖪𝖠𝖥 with the following basic expressions, where 𝗉 ranges over atomic propositions:

𝖾 ::= 𝗉? ∣ ¬𝗉? ∣ 𝗅𝗈𝗈𝗉? ∣ 𝗇𝖾𝗑𝗍 ∣ 𝖺𝖽𝖽 ∣ 𝗋𝖾𝗌𝖾𝗍 ∣ 𝗉𝗎𝗌𝗁 ∣ 𝗉𝗈𝗉

The (concrete) semantics of 𝖬𝖮𝖪𝖠 is given over the powerset of the set of stacks, C=𝒫(FΣ+), ordered by subset inclusion. Although we could focus on stacks of uniform length, we use a larger domain to simplify the notation.

The semantics of 𝖬𝖮𝖪𝖠 commands ⟦𝗋⟧η : 𝒫(FΣ+) → 𝒫(FΣ+) follows from the general definition for 𝖪𝖠𝖥, once we specify the semantics of its basic expressions.

Definition 3.3 (Basic expression semantics).

Given T = (Σ, I, 𝐏, →, ⊧), the semantics of 𝖬𝖮𝖪𝖠 basic expressions is the additive extension of the functions below, where ⟨σ,Δ⟩::S ∈ FΣ+:

(|𝗉?|){⟨σ,Δ⟩::S} ≜ {⟨σ,Δ⟩::S ∣ σ ⊧ p}    (|¬𝗉?|){⟨σ,Δ⟩::S} ≜ {⟨σ,Δ⟩::S ∣ σ ⊭ p}
(|𝗅𝗈𝗈𝗉?|){⟨σ,Δ⟩::S} ≜ {⟨σ,Δ⟩::S ∣ σ ∈ Δ}    (|𝗇𝖾𝗑𝗍|){⟨σ,Δ⟩::S} ≜ {⟨σ′,Δ⟩::S ∣ σ → σ′}
(|𝖺𝖽𝖽|){⟨σ,Δ⟩::S} ≜ {⟨σ,Δ∪{σ}⟩::S}    (|𝗋𝖾𝗌𝖾𝗍|){⟨σ,Δ⟩::S} ≜ {⟨σ,∅⟩::S}
(|𝗉𝗎𝗌𝗁|){⟨σ,Δ⟩::S} ≜ {⟨σ,Δ⟩::⟨σ,Δ⟩::S}    (|𝗉𝗈𝗉|){⟨σ,Δ⟩::S} ≜ {S ∣ S ≠ ε}

The basic expressions 𝗉?, ¬𝗉?, 𝗅𝗈𝗈𝗉?, 𝗇𝖾𝗑𝗍, 𝖺𝖽𝖽, 𝗋𝖾𝗌𝖾𝗍 operate on the top frame. The filter 𝗉? (resp. ¬𝗉?) checks the validity of the proposition p (resp. ¬p), 𝗅𝗈𝗈𝗉? checks if the current state loops back to some state in the trace, 𝗇𝖾𝗑𝗍 extends the trace by one step in all possible ways, 𝖺𝖽𝖽 adds the current state to the trace and 𝗋𝖾𝗌𝖾𝗍 empties the trace. Instead, 𝗉𝗎𝗌𝗁 and 𝗉𝗈𝗉 change the stack length: 𝗉𝗎𝗌𝗁 extends the stack to start a new trace, while 𝗉𝗈𝗉 restores the previous trace from the stack. Given a set of states X ⊆ Σ, we write ⟦𝗋⟧ηX as a shorthand for ⟦𝗋⟧η({⟨σ,∅⟩ ∣ σ ∈ X}).
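These definitions translate almost literally into code. Below is a small Python sketch (ours, not from the paper) of the additive extension of the basic expressions, with frames as pairs (state, frozenset of traversed states), stacks as tuples whose first element is the top frame, and a hypothetical transition relation and satisfaction relation supplied as parameters.

```python
def lift(f):
    """Additive extension: apply a per-stack function (returning a set of stacks) to a set of stacks."""
    return lambda stacks: frozenset().union(*(f(s) for s in stacks)) if stacks else frozenset()

def moka_basics(rel, sat):
    """Semantics of the basic expressions, given rel : state -> set of successor states
    and sat : (state, proposition) -> bool."""
    return {
        "p?":     lambda p: lift(lambda s: {s} if sat(s[0][0], p) else set()),
        "not_p?": lambda p: lift(lambda s: set() if sat(s[0][0], p) else {s}),
        "loop?":  lift(lambda s: {s} if s[0][0] in s[0][1] else set()),
        "next":   lift(lambda s: {((t, s[0][1]),) + s[1:] for t in rel(s[0][0])}),
        "add":    lift(lambda s: {((s[0][0], s[0][1] | {s[0][0]}),) + s[1:]}),
        "reset":  lift(lambda s: {((s[0][0], frozenset()),) + s[1:]}),
        "push":   lift(lambda s: {(s[0],) + s}),
        "pop":    lift(lambda s: {s[1:]} if len(s) > 1 else set()),
    }

# A hypothetical 3-state system: states "a", "b", "c"; proposition "ok" holds only in "c".
B = moka_basics(rel=lambda s: {"a": {"b"}, "b": {"c"}, "c": {"c"}}[s],
                sat=lambda s, p: (s == "c" and p == "ok"))
init = frozenset({(("a", frozenset()),)})
print(B["next"](init))                       # the stacks reached after one step from a
print(B["not_p?"]("ok")(B["next"](init)))    # kept, since "b" does not satisfy "ok"
```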

4 Formulae as 𝗠𝗢𝗞𝗔 programs

We show that ACTL and μ-calculus formulae φ can be transformed into 𝖬𝖮𝖪𝖠 programs φ¯ so that for any transition system T, the semantics of φ¯ over T consists exactly of the counterexamples to the validity of φ, that is, the states which do not satisfy φ.

Theorem 4.1 (Model checking as program verification).

Given T = (Σ, I, 𝐏, →, ⊧), for any ACTL or μ-calculus formula φ and set of stacks P ⊆ FΣ+, it holds that ⟦φ¯⟧ηP = {⟨σ,Δ⟩::S ∈ P ∣ σ ⊭ φ}.

It follows that ⟦φ¯⟧ηP ⊆ P and that ⟦φ¯⟧η{σ} = ∅ iff σ ⊧ φ. It is also worth noting that the semantics of 𝖬𝖮𝖪𝖠 programs encoding formulae is a lower closure on sets of states, being monotone, reductive and idempotent (and it preserves arbitrary unions) [39, Section 3.2.3]. In this respect, it behaves analogously to the collecting semantics of Boolean tests, which filter out memory states, keeping only those that satisfy the condition. As a consequence, similarly to what happens when abstracting Boolean tests, in any abstract domain A approximating sets of states the identity λa∈A.a is a correct (over-)approximation of ⟦φ¯⟧η.

ACTL as 𝗠𝗢𝗞𝗔.

To each ACTL formula φ we assign the 𝖬𝖮𝖪𝖠 program φ¯ as follows:

p¯ ≜ ¬𝗉?    ¬p¯ ≜ 𝗉?
φ1∧φ2¯ ≜ φ1¯ ⊕ φ2¯    φ1∨φ2¯ ≜ φ1¯;φ2¯
𝖠𝖷φ¯ ≜ 𝗉𝗎𝗌𝗁;𝗇𝖾𝗑𝗍;φ¯;𝗉𝗈𝗉
𝖠𝖥φ¯ ≜ φ¯;𝗉𝗎𝗌𝗁;𝗋𝖾𝗌𝖾𝗍;(𝖺𝖽𝖽;𝗇𝖾𝗑𝗍;φ¯)*;𝗅𝗈𝗈𝗉?;𝗉𝗈𝗉
𝖠𝖦φ¯ ≜ 𝗉𝗎𝗌𝗁;𝗇𝖾𝗑𝗍*;φ¯;𝗉𝗈𝗉
φ1𝖠𝖴φ2¯ ≜ φ2¯;𝗉𝗎𝗌𝗁;𝗋𝖾𝗌𝖾𝗍;(𝖺𝖽𝖽;𝗇𝖾𝗑𝗍;φ2¯)*;(𝗅𝗈𝗈𝗉? ⊕ φ1¯);𝗉𝗈𝗉

The intuition is that 𝖬𝖮𝖪𝖠 programs φ¯ act as (negative) filters: applied to a set of stacks ⟨σ,Δ⟩::S, each representing a computation that reached state σ, they filter out states where φ is satisfied, keeping only candidate counterexamples to the validity of φ. In general, when the check requires exploring the future of the current state, a new frame is started with 𝗉𝗎𝗌𝗁; if the search is successful, a closing 𝗉𝗈𝗉 recovers the starting state violating the formula. We explain the main clauses defining φ¯ (see [1, Theorem B.1] for the proof of the correctness of this transform); a small code sketch of the encoding is given after the list below. It turns out that ⟨σ,Δ⟩::S is a counterexample for

  • 𝖠𝖷φ if φ is violated in at least one successor state of σ (as computed by 𝗇𝖾𝗑𝗍;φ¯).

  • 𝖠𝖥φ if there is an infinite trace where φ never holds. This is encoded by saying that φ does not hold in the current state, i.e. φ¯, and after that, there is an infinite trace traversing only states (collected through the command (𝖺𝖽𝖽;𝗇𝖾𝗑𝗍;φ¯)*) that do not satisfy φ. Since we deal with finite state systems, infinite traces can be identified with looping traces (whence the check 𝗅𝗈𝗈𝗉?); for this to work, after the 𝗉𝗎𝗌𝗁 we 𝗋𝖾𝗌𝖾𝗍 the past.

  • 𝖠𝖦φ if we can reach, by repeatedly applying 𝗇𝖾𝗑𝗍, a counterexample to φ.

  • φ1𝖠𝖴φ2 if φ2 does not hold in the current state, i.e. φ2¯, and, progressing through states that do not satisfy φ2 with (𝖺𝖽𝖽;𝗇𝖾𝗑𝗍;φ2¯)*, either we detect a maximal (looping) trace or a state where φ1 does not hold.
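The clauses above can be transcribed directly as a recursive translation. The sketch below (ours, not from the paper) builds 𝖬𝖮𝖪𝖠 terms as nested tuples with purely symbolic tags; to actually run the resulting programs one would plug the tags into an interpreter such as the one sketched after the 𝖪𝖠𝖥 semantics.

```python
def seq(*terms):
    """Right-nested sequential composition r1; r2; ...; rn."""
    return terms[0] if len(terms) == 1 else ("seq", terms[0], seq(*terms[1:]))

def bar(phi):
    """The encoding of an ACTL formula into a MOKA term (a sketch of the bar translation).
    Formulas: ("p", a), ("not", a), ("and", f, g), ("or", f, g),
              ("AX", f), ("AF", f), ("AG", f), ("AU", f, g)."""
    op = phi[0]
    if op == "p":   return ("not_p?", phi[1])
    if op == "not": return ("p?", phi[1])
    if op == "and": return ("choice", bar(phi[1]), bar(phi[2]))   # violate either conjunct
    if op == "or":  return seq(bar(phi[1]), bar(phi[2]))          # violate both disjuncts
    if op == "AX":  return seq("push", "next", bar(phi[1]), "pop")
    if op == "AF":  return seq(bar(phi[1]), "push", "reset",
                               ("star", seq("add", "next", bar(phi[1]))), "loop?", "pop")
    if op == "AG":  return seq("push", ("star", "next"), bar(phi[1]), "pop")
    if op == "AU":  return seq(bar(phi[2]), "push", "reset",
                               ("star", seq("add", "next", bar(phi[2]))),
                               ("choice", "loop?", bar(phi[1])), "pop")
    raise ValueError(op)

# AG(not rd) becomes push; next*; rd?; pop, as in the traffic light example:
print(bar(("AG", ("not", "rd"))))
```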

Example 4.2 (Control flow graphs and ACTL encoding).

In our running example (see Examples 2.2 and 2.3), we check the property stating that if the program c terminates then the variable z is zero, which in the CFG means “when the exit node e is reached, z=0 holds”. This is expressed by the ACTL formula φ = 𝖠𝖦(n=e → z=0), where we use p → φ as syntactic sugar for ¬p ∨ φ. The corresponding 𝖬𝖮𝖪𝖠 program is: 𝗆φ ≜ φ¯ = 𝖠𝖦(n=e → z=0)¯ = 𝗉𝗎𝗌𝗁;𝗇𝖾𝗑𝗍*;𝗇=𝖾?;𝗓≠𝟢?;𝗉𝗈𝗉.

𝝁-calculus as 𝗠𝗢𝗞𝗔.

To each μ-formula φ we assign the 𝖬𝖮𝖪𝖠 program φ¯ as follows:

p¯ ≜ ¬𝗉?    ¬p¯ ≜ 𝗉?
φ1∧φ2¯ ≜ φ1¯ ⊕ φ2¯    φ1∨φ2¯ ≜ φ1¯;φ2¯
□φ¯ ≜ 𝗉𝗎𝗌𝗁;𝗇𝖾𝗑𝗍;φ¯;𝗉𝗈𝗉    x¯ ≜ 𝖷
μx.φx¯ ≜ 𝗉𝗎𝗌𝗁;𝗋𝖾𝗌𝖾𝗍;μ𝖷.(𝗅𝗈𝗈𝗉? ⊕ (𝖺𝖽𝖽;φx¯));𝗉𝗈𝗉    νx.φx¯ ≜ μ𝖷.φx¯

The second component of a frame ⟨σ,Δ⟩ is still used to identify looping computations. This intervenes in the encoding of least fixpoints μx.φx: when checking the formula, the current state is first logged to the current frame (the 𝖺𝖽𝖽;φx¯ branch); a counterexample is found when we try to verify the fixpoint property in a state where the check has already been tried (filtered by 𝗅𝗈𝗈𝗉?). The encoding of greatest fixpoints νx.φx is instead simpler: searching for counterexamples naturally translates into a least fixpoint, which is offered natively by 𝖬𝖮𝖪𝖠.

The correctness of this transform (see [1, Theorem B.2 and Corollary B.3]) leverages a small variation of the tableau construction in [53], with judgements roughly of the form σ,Δ ⊢ φ, meaning that the formula φ holds in a state σ assuming that all the states in Δ have been visited while checking the current fixpoint subformula. Then, it can be shown that, given a formula φ and a stack ⟨σ,Δ⟩::S, if there is no successful tableau for σ,Δ ⊢ φ then ⟦φ¯⟧η{⟨σ,Δ⟩::S} = {⟨σ,Δ⟩::S} holds, while ⟦φ¯⟧η{⟨σ,Δ⟩::S} = ∅ otherwise.

Example 4.3 (Control flow graphs and μ-calculus encoding).

Consider again Examples 2.2 and 2.3. We now check the property “if after three steps we end up in program point 3 then we will loop forever, every 4 steps, on this program point 3”. It is known that regular properties of this kind cannot be expressed in ACTL (see, e.g., [54]). This property can be expressed in the μ-calculus as ψ ≜ □³(n=3 → νx.(n=3 ∧ □⁴x)), where □ᵏ is a shorthand for □ repeated k times. Letting 𝗋ᵏ be a shorthand for the k-fold composition 𝗋;⋯;𝗋 of a 𝖬𝖮𝖪𝖠 program 𝗋, the 𝖬𝖮𝖪𝖠 program encoding ψ is ψ¯ = (𝗉𝗎𝗌𝗁;𝗇𝖾𝗑𝗍)³;(𝗇=𝟥?;νx.(n=3 ∧ □⁴x)¯);𝗉𝗈𝗉³, with νx.(n=3 ∧ □⁴x)¯ = μ𝖷.(𝗇≠𝟥? ⊕ ((𝗉𝗎𝗌𝗁;𝗇𝖾𝗑𝗍)⁴;𝖷;𝗉𝗈𝗉⁴)).

It is well known that all ACTL formulae can be expressed as μ-calculus formulae. For example, 𝖠𝖷φ = □φ, 𝖠𝖥φ = μx.(φ ∨ □x), and 𝖠𝖦φ = νx.(φ ∧ □x). Hence one could obtain programs generating counterexamples for ACTL by encoding ACTL in the μ-calculus and then generating the corresponding program. However, this in general produces programs which are (unnecessarily) more complex than those produced for ACTL formulae. For instance, the program for 𝖠𝖥φ obtained through the μ-calculus encoding above would be 𝗉𝗎𝗌𝗁;𝗋𝖾𝗌𝖾𝗍;μ𝖷.(𝗅𝗈𝗈𝗉? ⊕ (𝖺𝖽𝖽;φ¯;𝗉𝗎𝗌𝗁;𝗇𝖾𝗑𝗍;𝖷;𝗉𝗈𝗉));𝗉𝗈𝗉.

5 Abstract Interpretation of 𝗠𝗢𝗞𝗔

We show how to lift an abstraction over the states of a transition system to an abstraction over the domain of stacks. This is achieved stepwise by first considering an abstraction applied to each single stack, and then by merging classes of stacks through a suitable equivalence. The resulting stack abstraction will induce the abstract interpretation of 𝖬𝖮𝖪𝖠 programs.

Lifting the abstraction.

Let (Σ, I, →) be a fixed transition system and let ⟨α,γ⟩ : 𝒫(Σ) ⇄ A be an abstraction of state properties. We consider the lattice FA ≜ A×A, with componentwise order, whose elements ⟨σ,δ⟩ are called abstract frames, and, in turn, FA+ whose elements ⟨σ,δ⟩::S are called abstract stacks. As in the concrete case, we abbreviate ⟨σ,⊥A⟩ as σ.

The abstraction map α on state properties can be extended to a frame abstraction that, abusing notation, we still denote by α, defined by α(⟨σ,Δ⟩) ≜ ⟨α({σ}), α(Δ)⟩. In turn, the abstraction is inductively defined on stacks as α(⟨σ,Δ⟩::S) ≜ α(⟨σ,Δ⟩)::α(S).

Now, a set of stacks is abstracted to a set of abstract stacks by first applying α pointwise using the image adjunction (see Example 2.1) and then joining classes of abstract stacks in a way which is parameterised by a suitable equivalence. Given an equivalence ≈ ⊆ L×L, we let [x]≈ denote the equivalence class of x ∈ L w.r.t. ≈. Given x ∈ L we let ↓x ≜ {y ∈ L ∣ y ≤ x}.

Definition 5.1 (Equivalence adjunction).

Given a complete lattice L, a compatible equivalence is an equivalence ≈ ⊆ L×L such that, for all x ∈ L, the class [x]≈ is closed under joins of non-empty subsets. Let 𝒫≈(L) ≜ {X ∈ 𝒫(L) ∣ ∀x ∈ X. [x]≈ ∩ X = {x}}, ordered as follows: X ⊑ Y if for all x ∈ X there is y ∈ Y such that x ≈ y and x ≤ y.
Then, the pair ⟨α≈, γ≈⟩ : 𝒫(L) ⇄ 𝒫≈(L) defined, for X ∈ 𝒫(L) and Y ∈ 𝒫≈(L), by

α≈(X) ≜ {⋁(X ∩ [x]≈) ∣ x ∈ X}        γ≈(Y) ≜ ⋃{[y]≈ ∩ ↓y ∣ y ∈ Y}

is a Galois connection, called the equivalence adjunction.

In words, 𝒫≈(L) ⊆ 𝒫(L) consists of the subsets of L containing at most one representative for each equivalence class, and α≈(X) abstracts X into the subset (in 𝒫≈(L)) consisting of the least upper bounds of ≈-equivalent elements of X.

We can now define the stack abstraction, parameterised by an equivalence on the lattice of abstract frames. Given an equivalence ≈ ⊆ X×X on any set X, ≈+ denotes the equivalence on X+ induced by ≈ as follows: for all x1⋯xn, y1⋯ym ∈ X+, x1⋯xn ≈+ y1⋯ym if n = m and xi ≈ yi for all i ∈ {1,…,n}.

Definition 5.2 (Stack abstraction).

Let ≈ be a compatible equivalence on the abstract domain A and let us extend it to the lattice of abstract frames FA by ⟨σ1,δ1⟩ ≈ ⟨σ2,δ2⟩ if σ1 ≈ σ2. The ≈-stack abstraction is the domain As ≜ 𝒫≈+(FA+) along with the Galois connection ⟨αs,γs⟩ : 𝒫(FΣ+) ⇄ As defined by αs ≜ α≈+ ∘ α> and γs ≜ α< ∘ γ≈+, where ⟨α>,α<⟩ is the image adjunction of the stack abstraction α and ⟨α≈+,γ≈+⟩ is the equivalence adjunction for ≈+.

In words, given a set X of stacks, its abstraction αs(X) first applies pointwise the underlying α to each stack in X, and then joins equivalent abstract stacks. As corner cases we can have the identity relation ≈ = id, which joins abstract frames with identical first component, and the trivial relation ≈ = A×A, which joins all abstract stacks into one.
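The two-step construction of αs can be rendered directly in code. The sketch below (ours, not from the paper) assumes an abstraction alpha on sets of states, a binary join on the abstract domain, and a compatible equivalence given as a predicate; the representation choices (stacks as tuples, frames as pairs) are ours.

```python
from functools import reduce

def stack_alpha(alpha, join, equiv, stacks):
    """alpha^s: abstract a set of concrete stacks (a sketch of Definition 5.2).
    alpha : set of states -> abstract element, join : binary join on A,
    equiv : compatible equivalence on A as a predicate A x A -> bool."""
    # Step 1: pointwise frame abstraction (sigma, Delta) |-> (alpha({sigma}), alpha(Delta)).
    abstract_stacks = {tuple((alpha(frozenset({s})), alpha(frozenset(d))) for (s, d) in st)
                       for st in stacks}
    # Step 2: join abstract stacks that are equivalent (same length, equivalent first components).
    def equivalent(s1, s2):
        return len(s1) == len(s2) and all(equiv(f1[0], f2[0]) for f1, f2 in zip(s1, s2))
    classes = []
    for st in abstract_stacks:
        for cl in classes:
            if equivalent(cl[0], st):
                cl.append(st)
                break
        else:
            classes.append([st])
    def join_stacks(s1, s2):
        return tuple((join(f1[0], f2[0]), join(f1[1], f2[1])) for f1, f2 in zip(s1, s2))
    return {reduce(join_stacks, cl) for cl in classes}
```

With equiv taken as equality the abstraction keeps one abstract stack per distinct tuple of current-state abstractions, while with the always-true predicate all abstract stacks of the same length collapse into their join, matching the two corner cases mentioned above.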

An abstract interpreter for counterexamples.

Given an abstraction for state properties ⟨α,γ⟩ : 𝒫(Σ) ⇄ A and a compatible equivalence ≈ on A, following the standard approach in abstract interpretation [16], we consider an abstract semantics ⟦𝗋⟧♯η : As → As, defined inductively as explained in § 3 and using as abstract semantics for the basic expressions 𝖾 their bcas on As, denoted by (|𝖾|)As. We write ⟦𝗋⟧♯ instead of ⟦𝗋⟧♯η when the abstract environment is inessential. Notably, the bcas of the basic expressions that are independent from the underlying system can be effectively defined, and some of them turn out to be complete.

Theorem 5.3 (Basic abstract operations).

Let ≈ be a compatible equivalence on the abstract domain A. The bcas of the basic expressions 𝖺𝖽𝖽, 𝗋𝖾𝗌𝖾𝗍, 𝗉𝗎𝗌𝗁, 𝗉𝗈𝗉 for the ≈-stack abstraction are as follows: for all ⟨σ,δ⟩::S ∈ FA+

(|𝖺𝖽𝖽|)As{⟨σ,δ⟩::S} = {⟨σ, δ⊔σ⟩::S}    (|𝗋𝖾𝗌𝖾𝗍|)As{⟨σ,δ⟩::S} = {⟨σ,⊥A⟩::S}
(|𝗉𝗎𝗌𝗁|)As{⟨σ,δ⟩::S} = {⟨σ,δ⟩::⟨σ,δ⟩::S}    (|𝗉𝗈𝗉|)As{⟨σ,δ⟩::S} = {S}

and the operations 𝗋𝖾𝗌𝖾𝗍, 𝗉𝗎𝗌𝗁, 𝗉𝗈𝗉 are globally complete. Moreover

(|𝗉?|)As{⟨σ,δ⟩::S} ⊑ {⟨σ ⊓ 𝗉?As, δ⟩::S}
(|¬𝗉?|)As{⟨σ,δ⟩::S} ⊑ {⟨σ ⊓ ¬𝗉?As, δ⟩::S}
(|𝗅𝗈𝗈𝗉?|)As{⟨σ,δ⟩::S} ⊑ {⟨σ,δ⟩::S ∣ σ ⊓ δ ≠ ⊥A}

where 𝗉?As ≜ α(⟦𝗉?⟧Σ) is the abstraction of the set of concrete states satisfying p (and similarly for ¬𝗉?As).

For 𝗅𝗈𝗈𝗉? one can get (|𝗅𝗈𝗈𝗉?|)As{⟨σ,δ⟩::S} = {⟨σ⊓δ, δ⟩::S ∣ σ⊓δ ≠ ⊥A} under additional conditions on ≈ (cf. [1, Definition C.6]).

The soundness-by-design of this abstract semantics entails a sound program verification.

Proposition 5.4 (Program verification for satisfaction).

Given a transition system T = (Σ, I, 𝐏, →, ⊧), for all ACTL or μ-calculus formulae φ, abstract domains A, and compatible equivalences ≈ on A, if ⟦φ¯⟧♯η(αs(I)) = ⊥ then, for all σ ∈ I, we have σ ⊧ φ.

As in the concrete case, the abstract semantics of programs encoding formulae is a lower closure. This, together with the observation that the semantics only depends on the top frame of a stack, is relevant for the concrete implementation.

 Remark 5.5.

We recalled in § 4 that for any Boolean test b and abstract domain A, the identity function λa∈A.a is a correct approximation of the filtering semantics ⟦b⟧ of b. This function can be enhanced to λa∈A. α(⟦b⟧Σ) ⊓A a, which is also a correct approximation of ⟦b⟧ [39, Section 4.1]. If A is a partitioning abstraction, then one can show that λa∈A. α(⟦b⟧Σ) ⊓A a is indeed the best correct approximation of ⟦b⟧. However, this property does not hold in general, even assuming that A is a disjunctive abstraction, i.e., that γ is additive (this is substantiated in [1, Example C.10]).

Example 5.6 (Control flow graphs and abstract frames).

In our running example, abstract frames are of the shape A×A, where A is the N-indexed product of the predicate domain. The abstraction of singletons is α({(n,ρ)}) = (n ↦ π({ρ})), i.e., the function mapping n to π({ρ}) and every other program point to ⊥. With the aim of joining past states when they correspond to the same program point, we consider the equivalence on abstract states defined by σ1 ≈ σ2 when 𝑠𝑢𝑝𝑝(σ1) = 𝑠𝑢𝑝𝑝(σ2). The property ψ = □³(n=3 → νx.(n=3 ∧ □⁴x)) from Example 4.3 holds in the system, and we can prove it with this abstraction. Letting σ = (s ↦ ⊤) = α({(s,xyzw) ∣ x,y,z,w ∈ 𝕍}), we can compute ⟦ψ¯⟧♯η(σ) = ⊥, implying that the formula holds in any initial state (convergence is reached after a single full iteration). Instead, the abstract computation for φ = 𝖠𝖦(n=e → z=0) from Example 4.2 yields a false positive.

6 Locally Complete Analyses

If the abstract interpretation of a 𝖬𝖮𝖪𝖠 program raises an alarm, i.e., ⟦ψ¯⟧♯αs(I) ≠ ⊥, then any initial state in the concretisation of ⟦ψ¯⟧♯αs(I) is a candidate counterexample to the validity of ψ. However, due to over-approximation, we cannot distinguish spurious counterexamples from true ones. Here, we discuss how to combine under- and over-approximation in the analysis of 𝖬𝖮𝖪𝖠 programs ψ¯ to overcome this problem. In particular, we leverage Local Completeness Logic (LCL) [9, 7], possibly paired with Abstract Interpretation Repair (AIR) strategies [8], to improve the analysis precision.

LCL.

Most abstract domains are not globally complete for program analysis, so that the corresponding analyses may well yield false alarms. Accordingly, [9, 7] study how completeness can be locally weakened to an analysis of interest: given a function f : C → C, an abstract domain Aα,γ ∈ Abs(C) is locally complete on a value c ∈ C, denoted by ℂ^A_c(f), when αf(c) = αfγα(c) (hence global completeness amounts to ℂ^A_c(f) for all c ∈ C). Intuitively, the absence of false alarms in an abstract computation comes as a consequence of the local completeness of the abstract transfer functions on the traversed concrete states.
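The definition can be checked mechanically on finite examples. The sketch below (ours, not from the paper) tests the local completeness equation on a toy parity-like abstraction of small sets of integers; the domain and the transfer function are illustrative assumptions.

```python
def is_locally_complete(alpha, gamma, f, c):
    """Local completeness of (alpha, gamma) for f on input c: alpha(f(c)) == alpha(f(gamma(alpha(c))))."""
    return alpha(f(c)) == alpha(f(gamma(alpha(c))))

# A toy abstract domain over subsets of {0,...,4}: bottom, even, odd, top.
GAMMA = {"bot": set(), "even": {0, 2, 4}, "odd": {1, 3}, "top": {0, 1, 2, 3, 4}}
alpha = lambda c: min((a for a in GAMMA if c <= GAMMA[a]), key=lambda a: len(GAMMA[a]))
gamma = lambda a: GAMMA[a]
succ = lambda c: {min(x + 1, 4) for x in c}      # a concrete transfer function (clipped successor)

print(is_locally_complete(alpha, gamma, succ, {1}))   # True:  both sides abstract to 'even'
print(is_locally_complete(alpha, gamma, succ, {0}))   # False: gamma(alpha({0})) = {0,2,4} pushes the result to 'top'
```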

Moreover, local completeness is a convex property, and this allows one to check local completeness on suitable under-approximations u ≤ c such that α(u) = α(c), as ℂ^A_u(f) implies ℂ^A_c(f). The works [9, 7] implement this idea through LCL, an under-approximating program logic (in the style of incorrectness logic [43]), parameterised by the abstract domain A which provides an over-approximation. The LCL proof rules for 𝖪𝖠𝖳 programs are recalled in Table 1. A provable LCL triple ⊢A [P] 𝗋 [Q] ensures that each state satisfying Q is reachable from some state satisfying P, and also guarantees that Q and ⟦𝗋⟧P have the same abstraction in A. This means that the LCL program logic is sound w.r.t. the following notion of validity.

Definition 6.1 (Valid LCL triples).

Let P, Q ∈ C and let 𝗋 be a 𝖪𝖠𝖳 program. A program triple ⊢A [P] 𝗋 [Q] is valid if ℂ^A_P(⟦𝗋⟧), Q ⊆ ⟦𝗋⟧P and α(⟦𝗋⟧P) = α(Q).

The condition ℂ^A_P(⟦𝗋⟧) states that A is locally complete for ⟦𝗋⟧ on P, while Q ⊆ ⟦𝗋⟧P states that Q under-approximates ⟦𝗋⟧P, and α(⟦𝗋⟧P) = α(Q) that γ(α(Q)) over-approximates ⟦𝗋⟧P.

Table 1: Rules of the Local Completeness Logic [9].

LCL for 𝗠𝗢𝗞𝗔.

The LCL proof system has been defined for 𝖪𝖠𝖳 programs only, hence it is enough for the 𝖬𝖮𝖪𝖠 programs induced by ACTL formulae. For analysing general 𝖪𝖠𝖥, and thus 𝖬𝖮𝖪𝖠, programs, we extend it to μLCL, including the inference rules in Table 2, where 𝗋[𝗌/𝖷] denotes the capture-avoiding substitution of 𝗌 for the free occurrences of 𝖷 in 𝗋, and the term μn𝖷.𝗋 represents the n-th fixpoint approximant with the expected semantics (⟦μ0𝖷.𝗋⟧η ≜ λx.⊥ and ⟦μn+1𝖷.𝗋⟧η ≜ ⟦𝗋⟧η[𝖷↦⟦μn𝖷.𝗋⟧η]). The syntax μn𝖷.𝗋 is used only in μLCL derivations. Intuitively, since μ0𝖷.𝗋 behaves as 𝟢, the unique valid judgement is ⊢A [P] μ0𝖷.𝗋 [∅]. The rule (μ+) serves just to unfold μn+1𝖷.𝗋 as many times as needed. The key rule is (fix), which can be used whenever the n-th approximant provides enough information: in case the premise holds, by local completeness we will have ⟦μn𝖷.𝗋⟧♯α(P) = α(Q). Since it is always the case that ⟦μn𝖷.𝗋⟧♯α(P) ≤A ⟦μ𝖷.𝗋⟧♯α(P), if the side condition ⟦μ𝖷.𝗋⟧♯α(P) ≤A α(Q) holds, then local completeness for the fixpoint term μ𝖷.𝗋 can be inferred. These novel rules are sound, in the sense of preserving validity (Definition 6.1).

Table 2: Novel rules for μLCL.
 Remark 6.2.

When a least fixpoint appears in φ¯, we can exploit the following heuristic for choosing the value of n: if the abstract domain satisfies the Ascending Chain Condition (ACC) then, for every P ∈ C, there is nP ∈ ℕ such that ⟦μnP𝖷.𝗋⟧♯α(P) = ⟦μ𝖷.𝗋⟧♯α(P). Then, by taking any n ≥ nP such that ⊢A [P] μn𝖷.𝗋 [Q] is provable, the condition ⟦μ𝖷.𝗋⟧♯α(P) ≤A α(Q) is readily satisfied, so that ⊢A [P] μ𝖷.𝗋 [Q] is valid. More precisely, the rule below is sound:

The following result relating μLCL proofs with validity of formulae follows as an easy consequence of [9, Corollary 5.6] and Theorem 4.1.

Corollary 6.3 (Precision).

Let T = (Σ, I, 𝐏, →, ⊧) be a transition system. For all ACTL or μ-calculus formulae φ, abstract domains A and compatible equivalences ≈ on A, if the triple ⊢As [I] φ¯ [Q] is derivable, then Q ⊆ I and

  • Q = ∅ if and only if for all σ ∈ I we have σ ⊧ φ;

  • if Q ≠ ∅ then for all σ ∈ Q we have σ ⊭ φ.

Example 6.4 (Control flow graphs and LCL derivations).

As pointed out in Example 5.6, the computation of the program 𝗆φ, which encodes in 𝖬𝖮𝖪𝖠 the property φ = 𝖠𝖦(n=e → z=0) (see Example 4.2), yields a false alarm when we use the abstract domain depicted in Figure 2(c). Consider as set of initial states I0 = {(s,0100), …, (s,0011), …}, whose abstraction covers the set of all possible initial states {(s,xyzw) ∣ x,y,z,w ∈ ℤk}, because αs(I0) = (s ↦ ⊤). Since the property holds, trying to derive a triple ⊢As [I0] 𝗆φ [∅] leads to the failure of some local completeness proof obligation, which can drive the refinement of the abstract domain. The imprecision is found in the fourth iteration of the 𝗇𝖾𝗑𝗍 operator, in which the set {(3,1100),(3,0111)} is abstracted to (3 ↦ ⊤), losing the information that p and q are either both valid or both invalid. Several domain repairs are possible. Following [8] we can repair the domain by adding an abstract point representing the equivalence of p and q (a more abstract repair than the element addition proposed in [4]). Here we can also exploit a different route, by refining the equivalence so that σ1 ≈ σ2 when 𝑠𝑢𝑝𝑝(σ1) = 𝑠𝑢𝑝𝑝(σ2) and 3 ∉ 𝑠𝑢𝑝𝑝(σ1). This intuitively corresponds to abstracting separately the states at program point n=3. In both cases, it holds that ⟦φ¯⟧♯αs(I0) = ⊥ in the refined abstract domain.

Example 6.5 (Traffic light example, LCL derivation and repair, μ-calculus version).

Consider the toy traffic light example from Figure 1(a) and the property ψ = 𝖠𝖦(𝗀 → 𝖠𝖷𝖽), briefly discussed in the introduction. We give some additional details in the light of the results presented. We first translate the formula into the μ-calculus as ψμ = νx.((𝗀 → □𝖽) ∧ □x), whose encoding is ψμ¯ = μ𝖷.(𝗋1 ⊕ 𝗉𝗎𝗌𝗁;𝗇𝖾𝗑𝗍;𝖷;𝗉𝗈𝗉) ≜ μ𝖷.𝗋x, where 𝗋1 ≜ 𝗀?;𝗉𝗎𝗌𝗁;𝗇𝖾𝗑𝗍;¬𝖽?;𝗉𝗈𝗉. Then we try to derive the triple ⊢As [{𝗋𝗌}] μ𝖷.𝗋x [∅] in the abstract domain A (see Figure 1(c)) by applying the rule (afix) with n=2 (the number of iterations needed for convergence of the abstract fixpoint in A). By definition, μ0𝖷.𝗋x = 𝟢; then the first approximant is μ1𝖷.𝗋x = 𝗋1 ⊕ 𝗉𝗎𝗌𝗁;𝗇𝖾𝗑𝗍;𝟢;𝗉𝗈𝗉, but we can omit the branch that contains 𝟢, so μ1𝖷.𝗋x = 𝗋1; and the second approximant is 𝗋2 ≜ μ2𝖷.𝗋x = 𝗋1 ⊕ 𝗉𝗎𝗌𝗁;𝗇𝖾𝗑𝗍;𝗋1;𝗉𝗈𝗉. The attempt to derive the triple ⊢As [{𝗋𝗌}] 𝗋2 [∅] is sketched below, where the mid row gives an indication of the triples labelling the leaves of the derivation, reporting for each basic command involved the pre-condition and post-condition of the triple, while the top row reports their abstractions.

While we can derive ⊢As [{𝗋𝗌}] 𝗋1 [∅], the source of local incompleteness is found in the second branch, at the ! position, where the proof obligation for ℂ^As_{{𝗀𝗌}}(𝗇𝖾𝗑𝗍) fails, since αs⟦𝗇𝖾𝗑𝗍⟧({𝗀𝗌}) = αs({𝗀𝖽,𝗒𝖽}) = b⊓c, while αs⟦𝗇𝖾𝗑𝗍⟧γsαs({𝗀𝗌}) = αs⟦𝗇𝖾𝗑𝗍⟧{𝗀𝗌,𝗀𝖽,𝗒𝗌} = αs({𝗀𝗌,𝗀𝖽,𝗒𝗌,𝗋𝗌}) = a⊔c. We can thus repair the abstract domain A. Following the procedure in [8], we add the new abstract point c1:

c1 ≜ ⋃{T ⊆ Σ ∣ T ⊆ γα({𝗀𝗌}), ⟦𝗇𝖾𝗑𝗍⟧T ⊆ γα(⟦𝗇𝖾𝗑𝗍⟧({𝗀𝗌}))} = {𝗀𝗌,𝗀𝖽}

The repaired domain is A1 ≜ A ∪ {c1, b⊓c1}, where b⊓c1 is also added because the domain must be closed under meets. Now, in A1, the proof obligation for ℂ^{A1s}_{{𝗀𝗌}}(𝗇𝖾𝗑𝗍) holds true: α1s⟦𝗇𝖾𝗑𝗍⟧({𝗀𝗌}) = α1s⟦𝗇𝖾𝗑𝗍⟧γ1sα1s({𝗀𝗌}) = b⊓c. In fact, in A1 we just have ⟦ψ¯⟧♯α1s({𝗋𝗌}) = ⊥ (we omit the details for the first branch 𝗋1, which are as above).

7 Related Work

Abstract interpretation and model checking have been applied to and combined with each other in several ways [50, 33, 30, 28, 14, 10, 4, 46, 21, 11, 12, 49, 25, 26, 44, 22, 19, 38, 45, 20, 29, 2, 37, 47, 35, 5] (see the survey [23] for further references). Abstract model checking broadly refers to checking a (temporal) specification in an abstract rather than a concrete model. On the one hand, in state partition-based approaches, the abstract model is itself a transition system, possibly induced by a behavioural state equivalence such as (bi)simulation, so that the verification method for the abstract model remains unaltered, e.g., [11, 12, 14, 37, 10]. On the other hand, if the abstract model derives from any, possibly nonpartitioning, state abstraction then the verification on this abstract model relies on abstract interpretation, e.g., [25, 5, 45, 46, 21, 38]. Hybrid abstraction techniques, in between these two approaches, have also been studied, e.g., predicate abstraction [33, 28], and the mixed transition systems with several universal and/or existential abstract transition relations in [22, 3]. Moreover, several works investigated the role of complete/exact/strongly preserving abstractions in model checking, e.g., [37, 20, 26, 44], and how to refine abstract models, e.g., [30, 29, 11, 12, 19, 21, 25, 45, 47, 50]. Let us also mention that [2] develops a theory of approximation for systems of fixpoint equations in the style of abstract interpretation, showing that up-to techniques can be interpreted as abstractions.

Our refinement technique somewhat resembles CounterExample-Guided Abstraction Refinement (CEGAR) [12], a popular method for automatising the abstraction generation in model checking. CEGAR deals with state partition abstractions, thus merging sets of equivalent states: one starts from a rough abstraction which is iteratively refined on the basis of spurious counterexample traces arising due to over-approximation. The approach is sound for safety properties, i.e., no false positives can be found, and complete for a significant fragment of ACTL. It turns out that state partition abstractions are a specific instance of our approach. The attempt to prove the absence of counterexamples in LCL using an initial coarse abstraction yields a computation that is in some ways similar to CEGAR: failing LCL proof obligations lead to abstraction refinements, which can be more general than partitions.

The general idea that model checking can be expressed as static analysis was first investigated in [40]. This work shows that model checking of ACTL can be reduced to a logic-based static analysis formulated within the flow logic approach [42], which is then computed through a solver for the alternation-free least fixed point logic designed in [41]. This reduction technique has been later extended to the μ-calculus in [55]. One major goal of [40] was to show the close relationship and interplay between model checking and static analysis, coupled with early work in [51] and, later, in [48, 49, 52], proving that static program analysis can be reduced to model checking of modal formulae. Let us remark that the reduction of [40, 55] to a flow logic-based static analysis is given for concrete model checking only and does not encompass abstract model checking and the related abstraction refinement techniques, such as CEGAR, which are instead achieved by design in our approach.

8 Conclusion and future work

We have introduced a framework where model checking of temporal formulae in ACTL or in the universal fragment of the modal μ-calculus can be reduced to program verification, paving the way to reusing the full range of abstract interpretation techniques. Formulae are mapped to programs of the 𝖬𝖮𝖪𝖠 language, which are then analysed through a sound-by-construction abstract interpretation. This exposes all the possible counterexamples, although false alarms can arise. We show how false alarms can be removed by inspecting the derivability of suitable judgements in LCL, a program logic exploiting under- and over-approximations and leveraging the notion of locally complete abstract interpretation. We expect that our approach, relying on 𝖪𝖠𝖥, naturally applies also to logics including operators based on regular expressions (see, e.g., [31, 6]). A first candidate is Propositional Dynamic Logic (PDL) [31], a modal logic closely connected to 𝖪𝖠𝖳. Its distinctive feature is a modal operator [𝗋]φ, where 𝗋 is a 𝖪𝖠𝖳 expression, which is satisfied by a state σ when all the computations of 𝗋 starting from σ end up in a state satisfying φ. For instance, the property expressed in the μ-calculus in Example 4.3 can be equivalently written in PDL as [𝗇𝖾𝗑𝗍³](n=3 → [(𝗇𝖾𝗑𝗍⁴)*](n=3)), where, since we work in an unlabelled setting, “𝗇𝖾𝗑𝗍” stands for the only action in the system. Indeed, PDL smoothly fits in our setting (we refer the interested reader to [1, Appendix E]).

Future Work.

A number of interesting avenues of future research remain open. Our results are limited to universal fragments of temporal logics and finite state domains. A dual theory can be easily developed for existential fragments, focusing on the generation of witnesses rather than counterexamples. It would be interesting to combine the two approaches for dealing with universal and existential operators at the same time. Some ideas could come from [22], which, to solve this problem, works with two different abstract transition relations. The restriction to finite state domains is due to the fact that our encoding of logical formulae into 𝖬𝖮𝖪𝖠 programs relies on the 𝗅𝗈𝗈𝗉? operator for detecting infinite traces, which are identified with looping traces. Further work could overcome this restriction by considering an encoding that captures non-looping infinite traces in the concrete domain and by exploiting ACC domains for the abstraction.

The use of LCL allows us to track the presence of false alarms back to the failure of local completeness proof obligations, which can be resolved by refining the abstract domain. This can be done at different levels: either refining the abstraction over states or refining the equivalence on abstract frames. A proper theory of refinements, possibly identifying optimal ways of patching the domain, is a matter of future investigations.

We point out that providing a general bound for the complexity of the abstract model checking procedure is not straightforward, as it crucially depends on the choice of the abstract domain. Different domains may induce significantly different behaviors, especially for non-partitioning abstractions or when local completeness and domain refinement techniques are applied. A precise complexity analysis tailored to specific domains is an interesting subject for future work.

Concerning the automatisation, the abstract interpreter could be easily implemented by leveraging the standard toolset of abstract interpretation. The abstract interpreter should be instrumented to report, when the result is not ⊥, an abstract counterexample trace, so as to check whether the counterexample is a false or a true alarm. Making refinements effective requires working in a class of domains where local refinements are representable, e.g., by predicate abstractions or state partition abstractions. Developing a theory of refinements within specific subclasses of domains is a direction of future work.

References

  • [1] Paolo Baldan, Roberto Bruni, Francesco Ranzato, and Diletta Rigo. Model checking as program verification by abstract interpretation (extended version). CoRR, abs/2506.05525, 2025. URL: https://arxiv.org/abs/2506.05525.
  • [2] Paolo Baldan, Barbara König, and Tommaso Padoan. Abstraction, up-to techniques and games for systems of fixpoint equations. In Proceedings of CONCUR 2020, volume 171 of LIPIcs, pages 25:1–25:20, 2020. doi:10.4230/LIPICS.CONCUR.2020.25.
  • [3] Thomas Ball, Orna Kupferman, and Greta Yorsh. Abstraction for falsification. In Proceedings of CAV 2005, volume 3576 of LNCS, pages 67–81. Springer, 2005. doi:10.1007/11513988_8.
  • [4] Thomas Ball, Andreas Podelski, and Sriram K. Rajamani. Boolean and cartesian abstraction for model checking C programs. In Proceedings of TACAS 2001, volume 2031 of LNCS, pages 268–283. Springer, 2001. doi:10.1007/3-540-45319-9_19.
  • [5] Gourinath Banda and John P. Gallagher. Constraint-based abstract semantics for temporal logic: A direct approach to design and implementation. In Proceedings of LPAR 2016, volume 6355 of LNCS, pages 27–45. Springer, 2010. doi:10.1007/978-3-642-17511-4_3.
  • [6] Ilan Beer, Shoham Ben-David, and Avner Landver. On-the-fly model checking of RCTL formulas. In Proceedings of CAV 1998, volume 1427 of LNCS, pages 184–194. Springer, 1998. doi:10.1007/BFB0028744.
  • [7] Roberto Bruni, Roberto Giacobazzi, Roberta Gori, and Francesco Ranzato. A logic for locally complete abstract interpretations. In Proceedings of LICS 2021. IEEE, 2021. doi:10.1109/LICS52264.2021.9470608.
  • [8] Roberto Bruni, Roberto Giacobazzi, Roberta Gori, and Francesco Ranzato. Abstract interpretation repair. In Proceedings of PLDI 2022, pages 426–441. ACM, 2022. doi:10.1145/3519939.3523453.
  • [9] Roberto Bruni, Roberto Giacobazzi, Roberta Gori, and Francesco Ranzato. A correctness and incorrectness program logic. Journal of the ACM, 70(2):15:1–15:45, 2023. doi:10.1145/3582267.
  • [10] Doron Bustan and Orna Grumberg. Simulation-based minimization. ACM Transactions on Computational Logic, 4(2):181–206, 2003. doi:10.1145/635499.635502.
  • [11] Edmund M. Clarke, Orna Grumberg, Somesh Jha, Yuan Lu, and Helmut Veith. Counterexample-guided abstraction refinement. In Proceedings of CAV 2000, volume 1855 of LNCS, pages 154–169. Springer, 2000. doi:10.1007/10722167_15.
  • [12] Edmund M. Clarke, Orna Grumberg, Somesh Jha, Yuan Lu, and Helmut Veith. Counterexample-guided abstraction refinement for symbolic model checking. Journal of the ACM, 50(5):752–794, 2003. doi:10.1145/876638.876643.
  • [13] Edmund M. Clarke, Orna Grumberg, Daniel Kroening, Doron A. Peled, and Helmut Veith. Model Checking, 2nd Edition. MIT Press, 2018. URL: https://mitpress.mit.edu/books/model-checking-second-edition.
  • [14] Edmund M. Clarke, Orna Grumberg, and David E. Long. Model checking and abstraction. ACM Transactions on Programming Languages and Systems, 16(5):1512–1542, 1994. doi:10.1145/186025.186051.
  • [15] Edmund M. Clarke, William Klieber, Milos Novácek, and Paolo Zuliani. Model checking and the state explosion problem. In Tools for Practical Software Verification, LASER, International Summer School 2011, volume 7682 of LNCS, pages 1–30. Springer, 2011. doi:10.1007/978-3-642-35746-6_1.
  • [16] Patrick Cousot. Principles of Abstract Interpretation. MIT Press, 2021. URL: https://mitpress.mit.edu/9780262044905/.
  • [17] Patrick Cousot and Radhia Cousot. Abstract interpretation: A unified lattice model for static analysis of programs by construction or approximation of fixpoints. In Proceedings of POPL 1977, pages 238–252. ACM, 1977. doi:10.1145/512950.512973.
  • [18] Patrick Cousot and Radhia Cousot. Systematic design of program analysis frameworks. In Proceedings of POPL 1979, pages 269–282. ACM Press, 1979. doi:10.1145/567752.567778.
  • [19] Patrick Cousot and Radhia Cousot. Refining model checking by abstract interpretation. Automated Software Engineering, 6(1):69–95, 1999. doi:10.1023/A:1008649901864.
  • [20] Patrick Cousot and Radhia Cousot. Temporal abstract interpretation. In Proceedings of POPL 2000, pages 12–25. ACM, 2000. doi:10.1145/325694.325699.
  • [21] Patrick Cousot, Pierre Ganty, and Jean-François Raskin. Fixpoint-guided abstraction refinements. In Proceedings of SAS 2007, volume 4634 of LNCS, pages 333–348. Springer, 2007. doi:10.1007/978-3-540-74061-2_21.
  • [22] Dennis Dams, Rob Gerth, and Orna Grumberg. Abstract interpretation of reactive systems. ACM Transactions on Programming Languages and Systems, 19(2):253–291, 1997. doi:10.1145/244795.244800.
  • [23] Dennis Dams and Orna Grumberg. Abstraction and abstraction refinement. In Edmund M. Clarke, Thomas A. Henzinger, Helmut Veith, and Roderick Bloem, editors, Handbook of Model Checking, pages 385–419. Springer, 2018. doi:10.1007/978-3-319-10575-8_13.
  • [24] Brian A. Davey and Hilary A. Priestley. Introduction to Lattices and Order, Second Edition. Cambridge University Press, 2002. doi:10.1017/CBO9780511809088.
  • [25] Roberto Giacobazzi and Elisa Quintarelli. Incompleteness, counterexamples, and refinements in abstract model-checking. In Proceedings of SAS 2001, volume 2126 of LNCS, pages 356–373. Springer, 2001. doi:10.1007/3-540-47764-0_20.
  • [26] Roberto Giacobazzi and Francesco Ranzato. Incompleteness of states w.r.t. traces in model checking. Information and Computation, 204(3):376–407, 2006. doi:10.1016/J.IC.2006.01.001.
  • [27] Roberto Giacobazzi, Francesco Ranzato, and Francesca Scozzari. Making abstract interpretations complete. Journal of the ACM, 47(2):361–416, 2000. doi:10.1145/333979.333989.
  • [28] Susanne Graf and Hassen Saïdi. Construction of abstract state graphs with PVS. In Proceedings of CAV 1997, volume 1254 of LNCS, pages 72–83. Springer, 1997. doi:10.1007/3-540-63166-6_10.
  • [29] Orna Grumberg, Martin Lange, Martin Leucker, and Sharon Shoham. When not losing is better than winning: Abstraction and refinement for the full mu-calculus. Information and Computation, 205(8):1130–1148, 2007. doi:10.1016/J.IC.2006.10.009.
  • [30] Anubhav Gupta and Ofer Strichman. Abstraction refinement for bounded model checking. In Proceedings of CAV 2005, volume 3576 of LNCS, pages 112–124. Springer, 2005. doi:10.1007/11513988_11.
  • [31] David Harel, Jerzy Tiuryn, and Dexter Kozen. Dynamic Logic. MIT Press, Cambridge, MA, USA, 2000. URL: https://mitpress.mit.edu/9780262527668/.
  • [32] Ranjit Jhala and Rupak Majumdar. Software model checking. ACM Computing Surveys, 41(4):21:1–21:54, 2009. doi:10.1145/1592434.1592438.
  • [33] Ranjit Jhala, Andreas Podelski, and Andrey Rybalchenko. Predicate abstraction for program verification. In Edmund M. Clarke, Thomas A. Henzinger, Helmut Veith, and Roderick Bloem, editors, Handbook of Model Checking, pages 447–491. Springer, 2018. doi:10.1007/978-3-319-10575-8_15.
  • [34] Dexter Kozen. Kleene algebra with tests. ACM Transactions on Programming Languages and Systems, 19(3):427–443, 1997. doi:10.1145/256167.256195.
  • [35] Kim Guldstrand Larsen and Bent Thomsen. A modal process logic. In Proceedings of LICS 1988, pages 203–210. IEEE Computer Society, 1988. doi:10.1109/LICS.1988.5119.
  • [36] Hans Leiß. Towards Kleene algebra with recursion. In Proceedings of CSL 1991, volume 626 of LNCS, pages 242–256. Springer, 1991. doi:10.1007/BFB0023771.
  • [37] Claire Loiseaux, Susanne Graf, Joseph Sifakis, Ahmed Bouajjani, and Saddek Bensalem. Property preserving abstractions for the verification of concurrent systems. Formal Methods in System Design, 6(1):11–44, 1995. doi:10.1007/BF01384313.
  • [38] Damien Massé. Semantics for abstract interpretation-based static analyzes of temporal properties. In Proceedings of SAS 2002, volume 2477 of LNCS, pages 428–443. Springer, 2002. doi:10.1007/3-540-45789-5_30.
  • [39] Antoine Miné. Tutorial on static inference of numeric invariants by abstract interpretation. Foundations and Trends in Programming Languages, 4(3-4):120–372, 2017. doi:10.1561/2500000034.
  • [40] Flemming Nielson and Hanne Riis Nielson. Model checking is static analysis of modal logic. In Proceedings of FoSSaCS 2010, volume 6014 of LNCS, pages 191–205. Springer, 2010. doi:10.1007/978-3-642-12032-9_14.
  • [41] Flemming Nielson, Helmut Seidl, and Hanne Riis Nielson. A succinct solver for ALFP. Nordic Journal of Computing, 9(4):335–372, 2002.
  • [42] Hanne Riis Nielson and Flemming Nielson. Flow logic: A multi-paradigmatic approach to static analysis. In The Essence of Computation, Complexity, Analysis, Transformation. Essays Dedicated to Neil D. Jones [on occasion of his 60th birthday], volume 2566 of LNCS, pages 223–244. Springer, 2002. doi:10.1007/3-540-36377-7_11.
  • [43] Peter W. O’Hearn. Incorrectness logic. Proceedings of the ACM on Programming Languages, 4(POPL):10:1–10:32, 2020. doi:10.1145/3371078.
  • [44] Francesco Ranzato. On the completeness of model checking. In Proceedings of ESOP 2001, volume 2028 of LNCS, pages 137–154. Springer, 2001. doi:10.1007/3-540-45309-1_10.
  • [45] Francesco Ranzato and Francesco Tapparo. Making abstract model checking strongly preserving. In Proceedings of SAS 2002, volume 2477 of LNCS, pages 411–427. Springer, 2002. doi:10.1007/3-540-45789-5_29.
  • [46] Francesco Ranzato and Francesco Tapparo. Generalized strong preservation by abstract interpretation. Journal of Logic and Computation, 17(1):157–197, 2007. doi:10.1093/LOGCOM/EXL035.
  • [47] David Schmidt. Binary relations for abstraction and refinement. Technical report, Kansas State University, 2001.
  • [48] David A. Schmidt. Data flow analysis is model checking of abstract interpretations. In Proceedings of POPL 1998, pages 38–48. ACM, 1998. doi:10.1145/268946.268950.
  • [49] David A. Schmidt and Bernhard Steffen. Program analysis as model checking of abstract interpretations. In Proceedings of SAS 1998, volume 1503 of LNCS, pages 351–380. Springer, 1998. doi:10.1007/3-540-49727-7_22.
  • [50] Sharon Shoham and Orna Grumberg. Monotonic abstraction-refinement for CTL. In Proceedings of TACAS 2004, volume 2988 of LNCS, pages 546–560. Springer, 2004. doi:10.1007/978-3-540-24730-2_40.
  • [51] Bernhard Steffen. Data flow analysis as model checking. In Proceedings of TACS 1991, volume 526 of LNCS, pages 346–365. Springer, 1991. doi:10.1007/3-540-54415-1_54.
  • [52] Bernhard Steffen. Generating data flow analysis algorithms from modal specifications. Science of Computer Programming, 21(2):115–139, 1993. doi:10.1016/0167-6423(93)90003-8.
  • [53] Colin Stirling and David Walker. Local model checking in the modal mu-calculus. In Proceedings of TAPSOFT 1989, volume 351 of LNCS, pages 369–383. Springer, 1989. doi:10.1007/3-540-50939-9_144.
  • [54] Pierre Wolper. Temporal logic can be more expressive. Information and Control, 56(1/2):72–99, 1983. doi:10.1016/S0019-9958(83)80051-5.
  • [55] Fuyuan Zhang, Flemming Nielson, and Hanne Riis Nielson. Model checking as static analysis: Revisited. In Proceedings of IFM 2012, volume 7321 of LNCS, pages 99–112. Springer, 2012. doi:10.1007/978-3-642-30729-4_8.