On relative and probabilistic finite counterability

A counterexample to the satisfaction of a linear property ψ in a system S is an infinite computation of S that violates ψ. When ψ is a safety property, a counterexample to its satisfaction need not be infinite. Rather, it is a bad-prefix for ψ: a finite word all of whose extensions violate ψ. The existence of finite counterexamples is very helpful in practice. Liveness properties do not have bad-prefixes and thus do not have finite counterexamples. We extend the notion of finite counterexamples to non-safety properties. We study counterable languages: ones that have at least one bad-prefix. Thus, a language is counterable iff it is not liveness. Three natural problems arise: (1) given a language, decide whether it is counterable, (2) study the length of minimal bad-prefixes for counterable languages, and (3) develop algorithms for detecting bad-prefixes for counterable languages. We solve these problems for languages given by means of LTL formulas or nondeterministic Büchi automata. In particular, our EXPSPACE-completeness proof for the problem of deciding whether a given LTL formula is counterable, and hence also for deciding liveness, settles a long-standing open problem. In addition, we make finite counterexamples more relevant and helpful by introducing two variants of the traditional definition of bad-prefixes. The first adds a probabilistic component to the definition. There, a prefix is bad if almost all its extensions violate the property. The second makes it relative to the system. There, a prefix is bad if all its extensions in the system violate the property. We also study the combination of the probabilistic and relative variants.
Our framework suggests new variants also for safety and liveness languages. We solve the above three problems for the different variants. Interestingly, the probabilistic variant not only increases the chances to return finite counterexamples, but also makes the solution of the three problems exponentially easier.

We extend the notion of finite counterexamples to non-safety specifications. We also make finite counterexamples more relevant and helpful by introducing two variants of the traditional definition of bad-prefixes. The first adds a probabilistic component to the definition. The second makes it relative to the system. We also consider the combination of the probabilistic and relative variants. Before we describe our contribution in detail, let us demonstrate the idea with the following example. Consider a system S and a specification ψ stating that every request is eventually followed by a response. There might be some input sequence that leads S to an error state in which it stops sending responses. While ψ is not safety, the system S has a computation with a prefix that is bad with respect to S: all its extensions in S do not satisfy ψ. Returning this prefix to the user, with its identification as bad with respect to S, is more helpful than returning a lasso-shaped counterexample. Consider now a specification ϕ stating that the system eventually stops allocating memory. There might be some input sequence that leads S to a state in which every request is followed by a memory allocation. A computation that reaches this state almost surely violates the specification. Indeed, it is possible that requests eventually stop arriving and the specification would be satisfied, but the probability of this behavior of the input is 0. Thus, the system S has a computation with a prefix that is bad with respect to S in a probabilistic sense: almost all of its extensions in S do not satisfy ϕ. Again, we want to return this prefix to the user, identified as bad with high probability.
Recall that a language L is liveness if every finite word can be extended to an infinite word in L. Equivalently, L has no bad-prefixes. We say that L is counterable if it has a bad-prefix. That is, L is counterable iff it is not liveness. Note that a language, for example a*·b·(a+b+c)^ω, may be counterable and not safety. When a system does not satisfy a counterable specification ψ, it may contain a bad-prefix for ψ, which we would like to return to the user. Three natural problems arise: (1) Given a language, decide whether it is counterable, (2) study the length of minimal bad-prefixes for counterable languages, and (3) develop algorithms for detecting bad-prefixes for counterable languages. In fact, the last two problems are open also for safety languages. Deciding whether a given language is safety is known to be PSPACE-complete for languages given by LTL formulas or nondeterministic Büchi word automata (NBWs, for short) [37]. For the problem of deciding whether a language is counterable, an EXPSPACE upper bound for languages given by LTL formulas is not difficult [37], yet the tight complexity is open. This is surprising: recall that a language is counterable iff it is not liveness, so one could expect the complexity of deciding liveness to be settled by now. As it turns out, the problem was studied in [31], where it is stated to be PSPACE-complete. The proof in [31], however, is not convincing, and indeed efforts to solve the problem have continued, and the problem was declared open in [4] (see also [28]). Our first contribution is an EXPSPACE lower bound, implying that the long-standing open problem of deciding liveness (and hence, also counterability) of a given LTL formula is EXPSPACE-complete. In a recent communication with Diekert, Muscholl, and Walukiewicz, we have learned that they recently came up with an independent EXPSPACE lower bound, in the context of monitoring of infinite computations [14].
For languages given by means of an NBW, the problem is PSPACE-complete [31,37]. Thus, interestingly, while in deciding safety the exponential succinctness of LTL with respect to NBWs does not make the problem more complex, in deciding liveness it makes the problem exponentially more complex. This phenomenon is reflected also in the solutions to the problems about the length and the detection of bad-prefixes, as detailed in the table in Fig. 1, which summarizes our results. We also show that when a language given by an LTL formula is safety, the solutions for the three problems become exponentially easier.

                                          decidability           length of a shortest bad-prefix
                                          LTL        NBW         LTL                  NBW
  safety                                  PSPACE                 exponential [37]
  counterability                          EXPSPACE   PSPACE      doubly exponential   exponential
  probabilistic counterability            PSPACE                 exponential
  relative counterability                 EXPSPACE   PSPACE      doubly exponential   exponential
  probabilistic relative counterability   PSPACE                 exponential

Fig. 1 A summary of our results. All the complexity classes are tight. In all cases, the complexity of finding a bad-prefix is equal to the complexity of decidability. In the relative cases, the computational bottleneck is the length of the specification, thus the described complexities are with respect to it. The space complexity is polylogarithmic and the time complexity is linear in the size of the Kripke structure, and the length of the shortest bad-prefix is linear in the size of the Kripke structure.

Let us return to our primary interest, of finding finite counterexamples. Consider a system modelled by a Kripke structure K over a set AP of atomic propositions. Let Σ = 2^AP, and consider an ω-regular language L ⊆ Σ^ω. We say that a finite computation x ∈ Σ* of K is a K-bad-prefix for L if x cannot be extended to an infinite computation of K that is in L. Formally, for all y ∈ Σ^ω, if x·y is a computation of K, then it is not in L.
Once we define K-bad-prefixes, the definitions of safety and counterability are naturally extended to the relative setting: a language L is K-counterable if it has a K-bad-prefix, and is K-safety if every computation of K that is not in L has a K-bad-prefix for L. Using a product of K with an NBW for L, we are able to show that the solutions we suggest for the three problems in the non-relative setting apply also to the relative one, with an additional NLOGSPACE or linear-time dependency on the size of K. We also study K-safety, and the case in which L is K-safety.
We note that relative safety and liveness properties have already been considered in the literature, with different motivation and results. In [20], the notion of relative safety and liveness is defined. The goal there is to lift the practical advantages of safety to liveness properties. The idea is that a liveness property may be safety relative to some other property that states a reasonable assumption about the behavior of the system. For example, when the behavior of a system is described by a sequence of pairs ⟨σ_i, τ_i⟩, where σ_i is a state of the system and τ_i is a time, the bounded-response property stating that every request is followed by a response within some time δ is a liveness property. However, the bounded-response property is safety relative to some timing assumptions. The notion of lifting the practical advantages of safety to liveness properties is also studied in [5,36], where the finiteness of the system is taken into account. In [31], the notion of liveness relative to a system is re-interpreted as satisfaction within fairness. The idea there is that if a property is liveness relative to a system, then every prefix of a behavior of the system can be extended to an infinite behavior that satisfies the property, and therefore, in crude terms, the system almost satisfies the property: it just needs the "help of some fairness". They also show that satisfaction within fairness is preserved by some abstraction mappings, and therefore can be used also in a setting where an abstract model of the system is used.
We continue to the probabilistic view. A random word over Σ is a word in which all letters are drawn from Σ uniformly at random. In particular, when Σ = 2^AP, the probability of each atomic proposition to hold in each position is 1/2. Consider a language L ⊆ Σ^ω. We say that a finite word x ∈ Σ* is a prob-bad-prefix for L if the probability of an infinite word with prefix x to be in L is 0. Formally, Pr({y ∈ Σ^ω : x·y ∈ L}) = 0. Then, L is prob-counterable if it has a prob-bad-prefix. Now, given a Kripke structure K, we combine the relative and probabilistic views in the expected way: a finite computation x ∈ (2^AP)* of K is a K-prob-bad-prefix for L if a computation of K obtained by continuing x with some random walk on K is almost surely not in L. Thus, a computation of K that starts with x and continues according to some random walk on K is in L with probability 0. We show that this definition is independent of the probabilities of the transitions in the random walk on K. Again, L is K-prob-counterable if it has a K-prob-bad-prefix. We note that a different approach to probabilistic counterexamples is taken in [1]. There, the focus is on reachability properties, namely properties of the form "the probability of reaching a set T of states starting from state s is at most p". Accordingly, a counterexample is a set of paths from s to T such that the probability of the event of taking some path in the set is greater than p. We, on the other hand, cover all ω-regular languages, and a counterexample is a single finite path: one whose extensions result in a counterexample with high probability.
We study the theoretical properties of the probabilistic setting and show that an ω-regular language L is prob-counterable iff the probability of a random word to be in L is less than 1. We also show that ω-regular languages have a "safety-like" behavior, in the sense that the probability of a word not to be in L and not to have a prob-bad-prefix is 0. Similar properties hold in the relative setting and suggest that attempts to return to the user prob-bad-prefixes and K-prob-bad-prefixes are likely to succeed.
From a practical point of view, we show that the probabilistic setting not only increases our chances to return finite counterexamples, but also makes the solution of our three basic problems easier: as specified in the table in Fig. 1, deciding prob-counterability and K-prob-counterability for LTL formulas is exponentially easier than deciding counterability and K-counterability! Moreover, the length of bad-prefixes is exponentially smaller, and finding them is exponentially easier. Our results involve a careful analysis of the product of K with an automaton for L. Now, the product is defined as a Markov chain, and we also need the automaton to be deterministic. Our construction also suggests a simpler proof of the known probabilistic NBW model-checking result of [11]. While the blow-up that determinization involves is legitimate in the case L is given by an NBW, it leads to a doubly-exponential blow-up in the case L is given by an LTL formula ψ. We show that in this case, we can avoid the construction of a product Markov chain and, adopting an idea from [11], generate instead a sequence of Markov chains, each obtained from its predecessor by refining the states according to the probability of the innermost temporal subformula of ψ.
It is easy to see that there is a trade-off between the length of a counterexample and its "precision", in the sense that the longer a finite prefix of an erroneous computation is, the larger the probability with which it is a K-prob-bad-prefix. We let the user play with this trade-off and study two problems: one in which the user provides, in addition to K and ψ, a probability 0 < γ < 1, and gets back a shortest finite computation x of K such that the probability of a computation of K that starts with x to satisfy ψ is less than γ; and one in which the user provides a length m ≥ 1 and gets back a finite computation x of K of length at most m such that the probability of a computation of K that starts with x to satisfy ψ is minimal.

Automata and LTL
A nondeterministic automaton on infinite words is a tuple A = ⟨Σ, Q, Q_0, δ, α⟩, where Σ is a finite alphabet, Q is a finite set of states, Q_0 ⊆ Q is a set of initial states, δ : Q × Σ → 2^Q is a transition function, and α is an acceptance condition whose type depends on the class of A. A run of A on a word w = σ_0·σ_1··· ∈ Σ^ω is a sequence of states r = q_0, q_1, ... such that q_0 ∈ Q_0 and q_{i+1} ∈ δ(q_i, σ_i) for all i ≥ 0. The run is accepting if it satisfies the condition α. We consider here Büchi and parity automata. In a Büchi automaton, α ⊆ Q, and the run r satisfies α if it visits some state in α infinitely often. Formally, let inf(r) = {q : q = q_i for infinitely many i's} be the set of states that r visits infinitely often. Then, r satisfies α iff inf(r) ∩ α ≠ ∅ [6]. In a parity automaton, α : Q → {0, ..., k} maps each state to a color in {0, ..., k}. We refer to k as the index of A. A run r satisfies α if the minimal color that is visited infinitely often is even. Formally, r satisfies α iff the minimal color c such that inf(r) ∩ α^{−1}(c) ≠ ∅ is even. A word w is accepted by A if A has an accepting run on w. The language of A, denoted L(A), is the set of words that A accepts. When |Q_0| = 1 and |δ(q, σ)| = 1 for all q ∈ Q and σ ∈ Σ, then A is deterministic. When a state q ∈ Q is such that no word is accepted from q (equivalently, the language of A with initial state q is empty), we say that q is empty. We use the acronyms NBW, DBW, NPW, and DPW to denote nondeterministic and deterministic Büchi and parity word automata, respectively. We also refer to the standard nondeterministic and deterministic automata on finite words, abbreviated NFW and DFW, respectively. We define the size of A, denoted |A|, as the size of δ, namely Σ_{q∈Q,σ∈Σ} |δ(q, σ)|. For every NBW there is an equivalent DPW, and the translation from an NBW to a DPW involves an exponential blow-up [33].
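To make the parity condition concrete, the following sketch (a hypothetical helper, not from the paper) decides whether a complete deterministic parity automaton accepts an ultimately periodic word u·v^ω: the run eventually loops, and the word is accepted iff the minimal color on the loop is even.

```python
def dpw_accepts_lasso(delta, color, q0, u, v):
    """Decide whether a complete DPW accepts the lasso word u·v^ω.

    delta: dict mapping (state, letter) -> state (deterministic, complete)
    color: dict mapping state -> parity color; v must be nonempty.
    """
    q = q0
    for a in u:                          # consume the finite prefix u
        q = delta[(q, a)]
    seen = {}                            # (state, position in v) -> index in trace
    trace = []
    i = 0
    while (q, i % len(v)) not in seen:
        seen[(q, i % len(v))] = len(trace)
        trace.append(q)
        q = delta[(q, v[i % len(v)])]
        i += 1
    # states on the loop are exactly the states visited infinitely often
    loop = trace[seen[(q, i % len(v))]:]
    return min(color[s] for s in loop) % 2 == 0
```

For example, for a two-state DPW over {a, b} recognizing "infinitely many a's", the word (ab)^ω is accepted while a·b^ω is not.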
An automaton A induces a graph G_A = ⟨Q, E⟩, where (q, q′) ∈ E iff there is σ ∈ Σ such that q′ ∈ δ(q, σ). When we refer to the strongly connected sets (SCSs) of A, we refer to the SCSs of this graph. Formally, a set C ⊆ Q of states is an SCS of A if for all q, q′ ∈ C, there is a path from q to q′ in G_A. An SCS C is maximal if for all sets C′ such that C′ ⊈ C, the set C ∪ C′ is no longer an SCS. A maximal SCS is termed a strongly connected component (SCC). An SCC C is accepting if a run that visits exactly all the states in C infinitely often satisfies α. For example, when α is a parity condition, then C is accepting if the minimal color c such that C ∩ α^{−1}(c) ≠ ∅ is even. An SCC C is ergodic iff for all (q, q′) ∈ E, if q ∈ C then q′ ∈ C. That is, an SCC is ergodic if no edge leaves it.
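The ergodic (bottom) SCCs used throughout can be computed with any standard SCC algorithm; the following sketch uses Kosaraju's algorithm and then keeps the SCCs that no edge leaves (function names are ours, not the paper's).

```python
from collections import defaultdict

def sccs(nodes, edges):
    """Strongly connected components via Kosaraju's two-pass algorithm."""
    adj, radj = defaultdict(list), defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        radj[v].append(u)
    order, seen = [], set()
    for root in nodes:                       # first pass: finishing order on G
        if root in seen:
            continue
        seen.add(root)
        stack = [(root, iter(adj[root]))]
        while stack:
            node, it = stack[-1]
            advanced = False
            for v in it:
                if v not in seen:
                    seen.add(v)
                    stack.append((v, iter(adj[v])))
                    advanced = True
                    break
            if not advanced:
                order.append(node)
                stack.pop()
    comp = {}
    for u in reversed(order):                # second pass: DFS on reversed G
        if u in comp:
            continue
        comp[u] = u
        stack = [u]
        while stack:
            x = stack.pop()
            for y in radj[x]:
                if y not in comp:
                    comp[y] = u
                    stack.append(y)
    groups = defaultdict(set)
    for u, c in comp.items():
        groups[c].add(u)
    return list(groups.values())

def ergodic_sccs(nodes, edges):
    """SCCs with no outgoing edge, i.e. the ergodic ones."""
    return [c for c in sccs(nodes, edges)
            if all(v in c for u, v in edges if u in c)]
```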
The logic LTL is a linear temporal logic [34]. Formulas of LTL are constructed from a set AP of atomic propositions using the usual Boolean operators and the temporal operators G ("always"), F ("eventually"), X ("next time"), and U ("until"). The semantics of LTL is defined with respect to infinite computations over AP. We use w ⊨ ψ to indicate that the computation w ∈ (2^AP)^ω satisfies the LTL formula ψ. The language of an LTL formula ψ, denoted L(ψ), is the set of infinite computations that satisfy ψ. For the full syntax and semantics of LTL, see [34]. We define the size of an LTL formula ψ, denoted |ψ|, to be the number of its Boolean and temporal operators. Given an LTL formula ψ, one can construct an NBW A_ψ that accepts exactly all the computations that satisfy ψ. The size of A_ψ is, in the worst case, exponential in |ψ| [41].
We model systems by Kripke structures. A Kripke structure is a tuple of the form K = ⟨AP, W, W_0, R, l⟩, where W is the set of states, R ⊆ W × W is a total transition relation (that is, for every w ∈ W, there is at least one state w′ such that R(w, w′)), W_0 ⊆ W is a set of initial states, and l : W → 2^AP maps each state to the set of atomic propositions that hold in it. A path in K is a (finite or infinite) sequence w_0, w_1, ... of states in W such that w_0 ∈ W_0 and for all i ≥ 0 we have R(w_i, w_{i+1}). A computation of K is a (finite or infinite) sequence l(w_0), l(w_1), ... of assignments in 2^AP for a path w_0, w_1, ... in K. We assume that different states of K are labeled differently. That is, for all states w, w′ ∈ W such that w ≠ w′, we have l(w) ≠ l(w′). The assumption makes our setting cleaner, as it amounts to working with deterministic systems, so all the nondeterminism and probabilistic choices are linked to the specification and the distribution of the inputs, which is our focus. The simplest way to adjust nondeterministic systems to our setting is to add atomic propositions that resolve nondeterminism. The language of K, denoted L(K), is the set of its infinite computations. We say that K satisfies an LTL formula ψ, denoted K ⊨ ψ, if all the computations of K satisfy ψ, thus L(K) ⊆ L(ψ). We define the size of a Kripke structure K, denoted |K|, as |W| + |R|.
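A minimal representation of such Kripke structures, enforcing the totality of R and the distinct-labels assumption, might look as follows (a sketch with hypothetical names):

```python
class Kripke:
    """A Kripke structure with distinctly labeled states, as assumed in the text.

    states: iterable of state names; init: set of initial states;
    trans:  set of (state, state) pairs (must be total);
    label:  dict state -> set of atomic propositions.
    """
    def __init__(self, states, init, trans, label):
        self.states = list(states)
        self.init = set(init)
        self.trans = set(trans)
        self.label = {s: frozenset(label[s]) for s in self.states}
        assert all(any(u == s for (u, _) in self.trans) for s in self.states), \
            "transition relation must be total"
        assert len(set(self.label.values())) == len(self.states), \
            "different states must be labeled differently"

    def computations(self, n):
        """All label sequences of length n starting in an initial state."""
        paths = [(s,) for s in self.init]
        for _ in range(n - 1):
            paths = [p + (t,) for p in paths
                     for (u, t) in self.trans if u == p[-1]]
        return {tuple(self.label[s] for s in p) for p in paths}
```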
For a set AP of atomic propositions, we define the Kripke structure K_AP = ⟨AP, 2^AP, 2^AP, 2^AP × 2^AP, l⟩, where l(w) = w for every state w ∈ 2^AP. That is, K_AP is the clique over all possible labels, so L(K_AP) = (2^AP)^ω.

Safety, liveness, and counterable languages
Consider an alphabet Σ, a language L ⊆ Σ^ω, and a finite word u ∈ Σ*. We say that u is a prefix for L if it can be extended to an infinite word in L, thus there is v ∈ Σ^ω such that uv ∈ L. Then, u is a bad-prefix for L if it cannot be extended to an infinite word in L, thus for every v ∈ Σ^ω, we have that uv ∉ L. Note that if u is a bad-prefix, so are all its finite extensions. We denote by pref(L) the set of all prefixes for L. Hence, a finite word u is a bad-prefix for L iff u ∉ pref(L). The following classes of languages have been extensively studied (c.f., [2,3]). A language L ⊆ Σ^ω is a safety language if every infinite word not in L has a bad-prefix. For example, {a^ω} over Σ = {a, b, c} is safety, as every word not in L has a bad-prefix: one that contains the letter b or c. A language L is a liveness language if every finite word can be extended to a word in L, thus pref(L) = Σ*. For example, the language (a+b+c)*·a^ω is a liveness language: by concatenating a^ω to every word in Σ*, we end up with a word in the language. When L is not liveness, namely pref(L) ≠ Σ*, we say that L is counterable. Note that while a liveness language has no bad-prefix, a counterable language has at least one bad-prefix. For example, L = a*·b·(a+b+c)^ω is a counterable language. Indeed, c is a bad-prefix for L. It is not hard to see that if L is safety and L ≠ Σ^ω, then L is counterable. The other direction does not hold. For example, L above is not safety, as the word a^ω has no bad-prefix.
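For instance, for the counterable language L = a*·b·(a+b+c)^ω above, membership in pref(L) is tracked by a three-state DFA, so bad-prefixes can be detected on the fly (a sketch; the state names are ours):

```python
# States: 'A' (still inside the leading a*), 'OK' (the b was read, so every
# extension can be completed into L), 'DEAD' (u has left pref(L): no infinite
# extension is in L, i.e. u is a bad-prefix).

def dfa_step(state, letter):
    if state in ('DEAD', 'OK'):
        return state                 # both are absorbing
    if letter == 'a':
        return 'A'
    if letter == 'b':
        return 'OK'
    return 'DEAD'                    # a 'c' before the first 'b' kills all extensions

def is_bad_prefix(word):
    """True iff word is a bad-prefix for L = a*·b·(a+b+c)^ω."""
    state = 'A'
    for letter in word:
        state = dfa_step(state, letter)
    return state == 'DEAD'
```

Note that, as stated above, bad-prefixes are closed under extension: once the DFA enters 'DEAD', it stays there.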
We extend the definitions and classes above to specifications given by LTL formulas or by NBWs. For example, an LTL formula ψ is counterable iff L(ψ) is counterable.

Probabilistic and relative counterability
In this section we introduce, and make some observations about, two variants of counterability. The first variant adds a probabilistic component to the definitions. The second makes them relative to a Kripke structure. We also consider the combination of the probabilistic and relative variants.

Probabilistic counterability
For a finite or countable set X, a probability distribution on X is a function Pr : X → [0, 1] assigning a probability to each element in X, so that Σ_{x∈X} Pr(x) = 1. A finite Markov chain is a tuple M = ⟨V, p_in, p⟩, where V is a finite set of states, p_in : V → [0, 1] is a probability distribution on V that describes the probability of a path to start in each state, and p : V × V → [0, 1] is a function describing a distribution over the transitions. Formally, for every v ∈ V, the function p_v : V → [0, 1], with p_v(v′) = p(v, v′), is a probability distribution on V. The Markov chain M induces a graph G = ⟨V, E⟩, in which (v, v′) ∈ E iff p(v, v′) > 0. Thus, G includes the transitions that have a positive probability in M. When we talk about the SCCs of M, we refer to those of G.
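As a small concrete sketch of this definition, the probability that a random walk on M starts with a given finite state sequence (a basic cylinder set, used below) is the initial probability of the first state times the product of the traversed transition probabilities:

```python
def cylinder_prob(p_in, p, path):
    """Probability that a random walk on M = (V, p_in, p) starts with `path`.

    p_in: dict state -> initial probability; p: dict (state, state) -> probability.
    This is the measure of the basic cylinder set of `path`.
    """
    prob = p_in[path[0]]
    for u, v in zip(path, path[1:]):
        prob *= p.get((u, v), 0.0)   # missing entries are zero-probability edges
    return prob
```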
A random walk on M is an infinite path v_0, v_1, ... in G such that v_0 is drawn at random according to p_in and, for every i ≥ 1, the ith state v_i is drawn at random according to p_{v_{i−1}}. More formally, there is a probability space ⟨V^ω, F, Pr_M⟩ defined on the set V^ω of infinite sequences of states. The family of measurable sets F is the σ-algebra (also called Borel field) generated by the set C of basic cylinder sets: for a finite sequence v_0, ..., v_n of states, the cylinder set C_{v_0,...,v_n} ⊆ V^ω consists of all infinite sequences that start with v_0, ..., v_n. The measure Pr_M is defined on C (and can be extended uniquely to the rest of F) as follows: Pr_M[C_{v_0,...,v_n}] = p_in(v_0) · Π_{1≤i≤n} p(v_{i−1}, v_i). For more background on the construction of this probability space, see, for example, [22]. A random walk on M from a state v ∈ V is a random walk on the Markov chain M_v, obtained from M by replacing p_in by the distribution that assigns probability 1 to v. The following lemma states two fundamental properties of Markov chains (see, for example, [22]).

Lemma 1 Consider a Markov chain M and a state v of M.
1. An infinite random walk on M_v reaches some ergodic SCC with probability 1.
2. Once a random walk on M_v reaches an ergodic SCC, it visits all its states infinitely often with probability 1.
A labeled finite Markov chain is a tuple S = ⟨Σ, V, p_in, p, τ⟩, where Σ is a finite alphabet, M = ⟨V, p_in, p⟩ is a finite Markov chain, and τ : V → Σ maps each state in V to a letter in Σ. We extend τ to paths in the expected way, thus for π = v_0, v_1, ... ∈ V^ω, we define τ(π) = τ(v_0), τ(v_1), .... A random walk on S is a random walk on M. The chain S induces a probability space on Σ^ω, induced from M. That is, for L ⊆ Σ^ω, we have Pr_S[L] = Pr_M[{π ∈ V^ω : τ(π) ∈ L}]. It is known that ω-regular languages are measurable in the probability space induced by S (c.f., [40]).
Consider an alphabet Σ. A random word over Σ is a word in which for all indices i, the ith letter is drawn from Σ uniformly at random. We denote by Pr[L] the probability of a measurable language L ⊆ Σ^ω in this uniform distribution. For a finite word u ∈ Σ*, we denote by Pr^u[L] the probability that a word obtained by concatenating an infinite random word to u is in L. Consider a language L ⊆ Σ^ω. We say that a finite word u ∈ Σ* is a prob-bad-prefix for L if Pr^u[L] = 0. That is, u is a prob-bad-prefix if an infinite word obtained by continuing u randomly is almost surely not in L. We say that L is prob-counterable if it has a prob-bad-prefix. Consider, for example, the language L = a·(a+b)^ω + b^ω over Σ = {a, b}. All the words u ∈ b^+ are not bad-prefixes for L, but are prob-bad-prefixes, as Pr^u[L] = Pr[b^ω] = 0. As another example, consider the LTL formula ψ = (req ∧ GF grant) ∨ (¬req ∧ FG¬grant). The formula ψ is a liveness formula and does not have a bad-prefix. Thus, ψ is not counterable. All finite computations in which a request is not sent in the beginning of the computation are, however, prob-bad-prefixes for ψ, as the probability of satisfying FG¬grant is 0. Hence, ψ is prob-counterable.
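Under stated assumptions (L given by a complete DPW; all names are ours), Pr^u[L] can be computed by running u to a state and then computing the acceptance probability from that state: it is 1 on accepting ergodic SCCs, 0 on rejecting ones, and satisfies an averaging fixpoint over the uniformly drawn next letter elsewhere. This sketch approximates the fixpoint by value iteration rather than solving the linear system exactly.

```python
def acceptance_prob(states, sigma, delta, color, s, rounds=200):
    """Probability that a run over a uniformly random suffix, started in state
    s of a complete DPW, is accepting. delta: (state, letter) -> state."""
    def reach(q):
        seen, stack = {q}, [q]
        while stack:
            x = stack.pop()
            for a in sigma:
                t = delta[(x, a)]
                if t not in seen:
                    seen.add(t)
                    stack.append(t)
        return seen
    fixed = {}
    for q in states:
        r = reach(q)
        scc = {x for x in r if q in reach(x)}
        if r == scc:                      # q lies in an ergodic SCC
            fixed[q] = 1.0 if min(color[x] for x in scc) % 2 == 0 else 0.0
    p = {q: fixed.get(q, 0.0) for q in states}
    for _ in range(rounds):               # Jacobi-style value iteration
        p = {q: fixed[q] if q in fixed
             else sum(p[delta[(q, a)]] for a in sigma) / len(sigma)
             for q in states}
    return p[s]
```

For the example language L = a·(a+b)^ω + b^ω, every u ∈ b^+ drives a DPW for L to a state with acceptance probability 0, matching the discussion above, while the empty word gives probability 1/2.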
Prob-counterability talks about prefixes after which the probability of being in L is 0. We can relate such prefixes to words that lead to rejecting ergodic SCCs in DPWs that recognize L, giving rise to the following alternative characterizations:

Theorem 1 Consider an ω-regular language L. The following are equivalent:
1. L is prob-counterable.
2. Pr[L] < 1. That is, the probability of an infinite random word to be in L is strictly smaller than 1.
3. Every DPW that recognizes L has a rejecting ergodic SCC.
Proof For 1 ⇒ 2, let x be a prob-bad-prefix for L, say of length n. Let u be a random word and let u_i be the suffix of u obtained by removing the first i letters. Since u_n is a random word that is independent of the first n letters of u, we have Pr[L] = Pr[u ∈ L] ≤ Pr[u does not start with x] + Pr[u starts with x]·Pr^x[L] = (1 − |Σ|^{−n}) + |Σ|^{−n}·0 < 1. For 2 ⇒ 3, we assume, by way of contradiction, that there is a DPW that recognizes L and all of its ergodic SCCs are accepting. We denote the DPW by D and its initial state by s_0. Note that a random word induces a random walk on a Markov chain on D from s_0. According to Lemma 1, this random walk visits all the states in some ergodic SCC of D infinitely often with probability 1. Therefore, if all of the ergodic SCCs are accepting, then the random walk is an accepting run in D with probability 1. Thus, Pr[L] = 1, and we have reached a contradiction.
Finally, for 3 ⇒ 1, if a DPW that recognizes L has a rejecting ergodic SCC, we can choose a word u ∈ Σ* whose traversal in the DPW reaches the rejecting ergodic SCC. According to Lemma 1, if we start a random walk from there, it visits every state in the rejecting ergodic SCC infinitely often with probability 1. Therefore Pr^u[L] = 0, that is, the word u is a prob-bad-prefix for L. Thus L is prob-counterable, and we are done.
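Theorem 1 thus reduces prob-counterability to a pure graph test on a DPW for L: look for a rejecting ergodic SCC. A sketch (assuming all states of the DPW are reachable from its initial state; names are ours):

```python
def is_prob_counterable(states, sigma, delta, color):
    """True iff the complete DPW (all states assumed reachable) has a
    rejecting ergodic SCC, i.e. iff its language is prob-counterable."""
    def reach(q):
        seen, stack = {q}, [q]
        while stack:
            x = stack.pop()
            for a in sigma:
                t = delta[(x, a)]
                if t not in seen:
                    seen.add(t)
                    stack.append(t)
        return seen
    for q in states:
        r = reach(q)
        scc = {x for x in r if q in reach(x)}
        if r == scc and min(color[x] for x in scc) % 2 == 1:
            return True        # rejecting ergodic SCC found
    return False
```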
Analyzing the SCCs of DPWs for L also implies that ω-regular languages have a "safety-like" behavior in the following probabilistic sense:

Theorem 2 Consider an ω-regular language L ⊆ Σ^ω. The probability of a random word not to be in L and not to have a prob-bad-prefix for L is 0.

Proof If a word v ∈ Σ^ω is not in L but does not have a prob-bad-prefix, then the run of v in a DPW that recognizes L either (1) does not reach an ergodic SCC or (2) reaches an ergodic SCC but does not visit all its states infinitely often. This is because a word whose run visits all the states of an ergodic SCC infinitely often is either in L (if the ergodic SCC is accepting) or has a prob-bad-prefix (namely, a prefix that reaches the rejecting ergodic SCC). A random word induces a random walk on a Markov chain on the DPW that recognizes L, and therefore, according to Lemma 1, the probability of each of the events (1) and (2) is 0.

Relative counterability
Recall that the standard definitions of bad-prefixes consider extensions in Σ^ω. When Σ = 2^AP and the language L is examined with respect to a Kripke structure K over AP, it is interesting to restrict attention to extensions that are feasible in K. Consider a finite word u ∈ Σ*. We say that u is a bad-prefix for L with respect to K (K-bad-prefix, for short) if u is a finite computation of K that cannot be extended to a computation of K that is in L. Thus, u ∈ pref(L(K)) \ pref(L(K) ∩ L). We say that L is safety with respect to K (K-safety, for short) if every computation of K that is not in L has a K-bad-prefix. We say that L is counterable with respect to K (K-counterable, for short) if L has a K-bad-prefix.
Theorem 3 Consider an ω-regular language L ⊆ (2^AP)^ω.
1. L is safety iff L is K-safety for every Kripke structure K over AP.
2. For every Kripke structure K over AP, we have that L is K-safety iff L(K) ∩ L is safety.
Proof We start with the first claim. Clearly, if L is safety, then L is K-safety for every K. Indeed, every computation of K that is not in L has a bad-prefix for L, which is a K-bad-prefix. For the other direction, since L is K-safety for every Kripke structure K, it is K_AP-safety. Thus, every w ∉ L has a bad-prefix for L, which implies that L is safety. We proceed to the second claim. Consider a Kripke structure K. Assume first that L is K-safety. Let w be an infinite computation over 2^AP that is not in L(K) ∩ L. If w is not in L(K), then w has a prefix that is not a finite computation of K, and therefore it is a bad-prefix for L(K). This prefix is also a bad-prefix for L(K) ∩ L. If w is in L(K) but not in L, then since L is K-safety, the computation w has a K-bad-prefix for L, and this prefix is a bad-prefix for L(K) ∩ L.
Assume now that L(K) ∩ L is safety. Let w be a computation of K that is not in L, and therefore w ∉ L(K) ∩ L. Thus, the computation w has a bad-prefix w′ for L(K) ∩ L. Every computation of K that extends w′ is in L(K) but not in L(K) ∩ L, and therefore it is not in L. Thus, w′ is a K-bad-prefix for L.
Recall that if L ⊆ Σ^ω is safety and L ≠ Σ^ω, then L is counterable. Also, if L is K-safety and L(K) ⊈ L, then L is K-counterable. Note that it is possible that L(K) ∩ L is counterable but L is not K-counterable. For example, we can choose K and L such that L(K) ⊊ L = (2^AP)^ω. Then, L is not K-counterable, but a word u that is not a computation of K is a bad-prefix for L(K), making it also a bad-prefix for L(K) ∩ L. Hence, L(K) ∩ L is counterable.

Probabilistic relative counterability
We now combine the probabilistic and relative definitions. Consider a Kripke structure K = ⟨AP, W, W_0, R, l⟩. A K-walk-distribution is a pair P = ⟨p_in, p⟩ such that ⟨W, p_in, p⟩ is a Markov chain in which p_in(w) > 0 only if w ∈ W_0, and p(w, w′) > 0 only if R(w, w′). A random walk on K with respect to P is a random walk on the Markov chain ⟨W, p_in, p⟩, and we denote by M_{K,P} the labeled Markov chain ⟨2^AP, W, p_in, p, l⟩. We define the probability of an ω-regular language L ⊆ (2^AP)^ω with respect to K and P as Pr_{K,P}[L] = Pr_{M_{K,P}}[L]. Namely, Pr_{K,P}[L] is the probability that a computation obtained by a random walk on K with respect to P is in L. Let u be a finite computation of K and let w_0, ..., w_k ∈ W* be such that u = l(w_0), ..., l(w_k). We say that an infinite computation u′ is a continuation of u with a random walk on K with respect to P if u′ = l(w_0), ..., l(w_{k−1}), l(w′_0), l(w′_1), ..., where w′_0, w′_1, ... is obtained by a random walk on K from w_k with respect to P (recall that we assume that different states in K are labeled differently). We define Pr^u_{K,P}[L] as the probability that a computation obtained by continuing u with a random walk on K with respect to P is in L. We say that u is a prob-bad-prefix for L with respect to K (K-prob-bad-prefix, for short) if Pr^u_{K,P}[L] = 0 for some K-walk-distribution P. Thus, a computation obtained by continuing u with some random walk on K is almost surely not in L. As we show in Lemma 2 below, the existential quantification on the K-walk-distribution P can be replaced by a universal one, or by the specific K-walk-distribution that traverses K uniformly at random. We say that L is prob-counterable with respect to K (K-prob-counterable, for short) if L has a K-prob-bad-prefix. As we have seen above, a language may be prob-counterable but not counterable. Taking K = K_AP, this implies that a language may be K-prob-counterable but not K-counterable. As an example with an explicit dependency on K, consider the counterable LTL formula ψ = G(req → X ack) ∧ FG(open). Let K be a Kripke structure over AP = {req, ack, open} as shown in Fig. 2, such that the atomic propositions in AP are mutually exclusive and L(K) contains exactly the computations that start in req and in which every req is immediately followed by ack, or computations in which open is always valid.
Note that while ψ is not K -counterable, it is K -prob-counterable, as every finite computation of K that starts with req is a K -prob-bad-prefix for ψ.
Consider an NBW A. By [11,40], whether Pr_{K,P}[L(A)] = 0, and whether Pr_{K,P}[L(A)] = 1, is independent of the K-walk-distribution P. Consequently, we have the following.
Lemma 2 [11,40] Let u be a finite computation of a Kripke structure K over AP, and let L ⊆ (2^AP)^ω be an ω-regular language. For all pairs P and P′ of K-walk-distributions, we have that Pr^u_{K,P}[L] = 0 iff Pr^u_{K,P′}[L] = 0.

Recall that the analysis of probabilistic counterability involved reasoning about the SCCs of a DPW that recognizes the language. For probabilistic relative counterability, we need to consider the product of this DPW with the system, and to define it as a labeled Markov chain.
Let D = ⟨2^AP, S, s_0, δ_D, α⟩ be a DPW for a language L and let K = ⟨AP, W, W_0, R, l⟩ be a Kripke structure. We define the product D_{K×D} of K and D as follows. Its state space is W × S, its initial states are the pairs ⟨w, s_0⟩ with w ∈ W_0, and it has a transition from ⟨w, s⟩ to ⟨w′, s′⟩ iff R(w, w′) and s′ = δ_D(s, l(w)). Thus, a path of D_{K×D} tracks a path of K together with the run of D on the computation it induces, and the path is accepting iff its S-components satisfy the parity condition α. Also, note that all of the successors of a state in D_{K×D} share the same second component.
We now define the product as a labeled Markov chain. Let D and K be as above and let P = ⟨p_in, p⟩ be a K-walk-distribution. We define a labeled Markov chain M′ over the state space of D_{K×D}, in which p′_in(⟨w, s⟩) = p_in(w) if s = s_0 and is 0 otherwise, and p′(⟨w, s⟩, ⟨w′, s′⟩) = p(w, w′) if s′ = δ_D(s, l(w)) and is 0 otherwise. Thus, p′_in and p′ attribute the states of M = M_{K,P} by the deterministic behavior of D. In particular, let X be the projection on W of a random walk on M′, and let Y be a random walk on M. Note that X and Y both take values in W^ω and that they have the same distribution. Therefore, we have Pr_{M′}[L] = Pr_{M}[L] = Pr_{K,P}[L]. Note that w_0, …, w_n induces a single finite path ⟨w_0, s_0⟩, …, ⟨w_n, s_n⟩ in D_{K×D}, and that every infinite path in K induces a single infinite path in D_{K×D}. For a finite computation u, let reach(u) be the state reached in D_{K×D} after traversing u. Thus, reach(u) = ⟨w_n, s_n⟩. By the above, Pr^u_{K,P}[L] is the probability that a random walk in M′ from reach(u) is an accepting run in D_{K×D}. For a state x of M′, we denote by γ_x the probability that a random walk from x in M′ is an accepting run in D_{K×D}. Note that a finite computation u is a K-prob-bad-prefix iff γ_{reach(u)} = 0.
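To make the determinism of the product concrete, the following minimal Python sketch computes reach(u) by following a finite computation u letter by letter; since the states of K are distinctly labeled and D is deterministic, both components are determined. The dictionary encodings and state names are illustrative assumptions, not the paper's notation.

```python
# Sketch: computing reach(u), the unique product state reached after
# reading a finite computation u. Encodings (dicts for the DPW transition
# function and the Kripke labeling) are illustrative assumptions.

def reach(u, dpw_delta, dpw_init, label_to_state):
    """Follow u = l(w_0), ..., l(w_n) through the product.

    dpw_delta:      dict mapping (dpw_state, letter) -> dpw_state
    dpw_init:       initial state s_0 of the DPW
    label_to_state: dict mapping a letter to the unique Kripke state
                    carrying that label (states are distinctly labeled)
    """
    s = dpw_init
    w = None
    for letter in u:
        w = label_to_state[letter]   # W-component determined by the letter
        s = dpw_delta[(s, letter)]   # S-component follows the DPW
    return (w, s)

# Toy DPW over letters {'a', 'b'} with states 0, 1.
delta = {(0, 'a'): 0, (0, 'b'): 1, (1, 'a'): 1, (1, 'b'): 1}
labels = {'a': 'wa', 'b': 'wb'}
print(reach(['a', 'a', 'b'], delta, 0, labels))  # ('wb', 1)
```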
We can now point to equivalent definitions of K -prob-counterability.

Theorem 4
Consider an ω-regular language L ⊆ (2^AP)^ω and a Kripke structure K over AP. The following are equivalent:

1. L is K-prob-counterable.
2. There is a K-walk-distribution P and a finite computation u of K such that Pr^u_{K,P}[L] < 1.
3. For every K-walk-distribution P, there is a finite computation u of K such that Pr^u_{K,P}[L] < 1.
4. There is a K-walk-distribution P such that Pr_{K,P}[L] < 1.
5. For every K-walk-distribution P, we have Pr_{K,P}[L] < 1.

Proof The proof for 2 ⇔ 3 and 4 ⇔ 5 follows from Lemma 2.
For 1 ⇒ 4, assume that L is K-prob-counterable, and let u be a K-prob-bad-prefix, with a K-walk-distribution P such that Pr^u_{K,P}[L] = 0. Then, Pr_{K,P}[comp(L)] ≥ Pr_{K,P}[the computation starts with u and is not in L] = Pr_{K,P}[the computation starts with u] > 0.
For 4 ⇒ 2, assume that Pr_{K,P}[L] < 1. We choose u = ε. Clearly, Pr^u_{K,P}[L] < 1. For 2 ⇒ 1, assume that Pr^u_{K,P}[L] < 1. According to the definitions of D_{K×D} and γ_{reach(u)}, we have γ_{reach(u)} = Pr^u_{K,P}[L] < 1. Therefore, by Lemma 1, D_{K×D} has a rejecting ergodic SCC that is reachable from reach(u). Let v be a finite computation of K that reaches this ergodic SCC in D_{K×D}; then v is a K-prob-bad-prefix.
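The 2 ⇒ 1 direction is constructive: once a rejecting ergodic SCC of the product is known, any finite computation reaching it is a K-prob-bad-prefix. A minimal sketch, assuming the product graph is given as an adjacency dictionary and the rejecting-ergodic states as a set (both hypothetical encodings):

```python
from collections import deque

# Sketch: find a shortest path from the initial product state into a
# rejecting ergodic SCC; the labels along it form a K-prob-bad-prefix.
# Graph encoding and state names are assumptions for illustration.

def shortest_path_to(graph, init, targets):
    """BFS over the product graph; returns the first path from init
    into `targets` (states lying in rejecting ergodic SCCs)."""
    queue, seen = deque([[init]]), {init}
    while queue:
        path = queue.popleft()
        if path[-1] in targets:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no rejecting ergodic SCC reachable: no K-prob-bad-prefix

# Toy product: state 'r' forms a rejecting ergodic (bottom) SCC.
graph = {'s': ['a', 'b'], 'a': ['s'], 'b': ['r'], 'r': ['r']}
print(shortest_path_to(graph, 's', {'r'}))  # ['s', 'b', 'r']
```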
We can also generalize Theorem 2, and show that ω-regular languages have "safety-like" behavior also with respect to Kripke structures, in the following probabilistic sense.

Theorem 5 Consider an ω-regular language L ⊆ (2^AP)^ω, a Kripke structure K over AP, and a K-walk-distribution P. Then, Pr_{K,P}[{u ∈ (2^AP)^ω : u ∉ L and u does not have a K-prob-bad-prefix for L}] = 0.
Proof Recall the labeled Markov chain M′ defined on the states of D_{K×D}. According to its construction, a computation obtained by a random walk on M′ has the same distribution as a computation obtained by a random walk on M = M_{K,P}. Therefore, Pr_{M}[the computation is not in L and does not have a K-prob-bad-prefix] = Pr_{M′}[the computation is not in L and does not have a K-prob-bad-prefix]. This equals the probability that a random walk on M′ is a rejecting run of D_{K×D} whose computation does not have a K-prob-bad-prefix (and therefore does not reach a rejecting ergodic SCC of D_{K×D}). According to Lemma 1, this probability is 0.
If L is also K-prob-counterable, then we also have Pr_{K,P}[u has a K-prob-bad-prefix for L | u ∉ L] = 1 for every K-walk-distribution P. Conceptually, Theorem 4 implies that if an error has a positive probability of occurring in a random execution of the system, then the specification is prob-counterable with respect to the system. Theorem 5 then suggests that in this case, a computation of the system that does not satisfy the specification almost surely has a prob-bad-prefix with respect to the system. Thus, almost all the computations that violate the specification start with a prob-bad-prefix with respect to the system. Hence, attempts to find and return to the user such bad-prefixes are very likely to succeed.

Deciding liveness
Recall that a language L is counterable iff L is not liveness. As discussed in Sect. 1, the complexity of the problem of deciding whether a given LTL formula is liveness is open [4]. In this section we solve this problem and prove that it is EXPSPACE-complete. The result will be handy also for our study of the probabilistic and relative variants.

Theorem 6 The problem of deciding whether a given LTL formula is liveness is EXPSPACE-complete.
Proof The upper bound is known [37], and follows from the fact that every LTL formula ψ can be translated to an NBW A_ψ with an exponential blow-up [41]. By removing empty states from A_ψ and making all other states accepting, we get an NFW for pref(L(ψ)), which is universal iff ψ is liveness. Checking NFW universality can be done in PSPACE by complementing the NFW on-the-fly and checking emptiness, implying the EXPSPACE upper bound.
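The universality check behind the upper bound can be sketched as follows: an NFW is universal iff the subset construction never reaches a set of states containing no accepting state. In PSPACE one explores these subsets on-the-fly, storing one subset at a time; the eager search below (with assumed dictionary encodings) is for illustration only.

```python
from collections import deque

# Sketch: NFW universality via the subset construction. The NFW is
# universal iff no reachable subset of states (including the empty
# subset) is disjoint from the accepting states.

def nfw_universal(alphabet, delta, init, accepting):
    """delta: dict (state, letter) -> set of successor states."""
    start = frozenset(init)
    queue, seen = deque([start]), {start}
    while queue:
        subset = queue.popleft()
        if not (subset & accepting):   # reachable "all-rejecting" subset:
            return False               # some word is not accepted
        for letter in alphabet:
            succ = frozenset(q for s in subset
                             for q in delta.get((s, letter), ()))
            if succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return True

# An NFW over {a, b} accepting every word: a self-loop on an accepting state.
delta = {(0, 'a'): {0}, (0, 'b'): {0}}
print(nfw_universal(['a', 'b'], delta, {0}, {0}))  # True
```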
For the lower bound, we show a reduction from an exponential version of the tiling problem, defined as follows. We are given a finite set T of tiles, two relations H ⊆ T × T and V ⊆ T × T, an initial tile t_0, a final tile t_f, and a bound n > 0. We have to decide whether there is some m > 0 and a tiling f of a 2^n × m-grid such that (1) f(0, 0) = t_0, (2) f(0, m − 1) = t_f, (3) every pair of horizontally adjacent tiles is in H, that is, (f(i, j), f(i + 1, j)) ∈ H for all 0 ≤ i ≤ 2^n − 2 and 0 ≤ j ≤ m − 1, and (4) every pair of vertically adjacent tiles is in V, that is, (f(i, j), f(i, j + 1)) ∈ V for all 0 ≤ i ≤ 2^n − 1 and 0 ≤ j ≤ m − 2. When n is given in unary, the problem is known to be EXPSPACE-complete [39].
We reduce this problem to the problem of deciding whether an LTL formula is not liveness. Given a tiling problem τ = ⟨T, H, V, t_0, t_f, n⟩, we construct a formula ϕ such that τ admits a tiling iff ϕ has a good-prefix: a finite word all of whose extensions satisfy ϕ. Formally, x ∈ Σ* is a good-prefix for ϕ iff for all y ∈ Σ^ω, we have that x · y satisfies ϕ. Therefore, for ψ = ¬ϕ, we have that τ admits a tiling iff ψ is not liveness. The idea is to encode a tiling as a word over T, consisting of a sequence of rows (each row is of length 2^n). Such a word represents a proper tiling if it starts with t_0, has a last row that starts with t_f, every pair of adjacent tiles in a row is in H, and every pair of tiles that are 2^n tiles apart is in V. The difficulty is in relating tiles that are far apart. To do that, we represent every tile by a block of length n, which encodes the tile's position in the row. Even with such an encoding, we have to specify a property of the form "for every i, if we meet a block with position counter i, then the next time we meet a block with position counter i, the tiles in the two blocks are in V". Such a property can be expressed by an LTL formula of polynomial length, but there are exponentially many i's to check. The way to use liveness in order to mimic the universal quantification on all i's is essentially the following: the good-prefix for ϕ encodes the tiling. The set of atomic propositions in ϕ includes a proposition $ that is not restricted beyond this prefix. The property that checks V then has to hold in blocks whose counter equals the counter of the block that starts at the last $ in the computation. Thus, universal quantification on i is replaced by the explicit universal quantification on suffixes in the definition of good-prefixes.
We proceed to describe the reduction in detail. A block of length n that represents a single tile contains the tile itself in the first letter (each tile has an atomic proposition t_i ∈ T). The block also contains the position of the tile in the row. This position is a number between 0 and 2^n − 1, and we use an atomic proposition c_1 to encode it as an n-bit vector. For simplicity we denote ¬c_1 by c_0 (that is, c_0 is not an atomic proposition). The vector is stored in the letters of the block, with the least significant bit stored at the first letter. The position is increased by one from one block to the next. We require that the first letter of each block is marked with the atomic proposition #, that the first letter in the first block of the last row (the m-th row) is marked with the atomic proposition @, and that the first letter after the last row is marked with the atomic proposition $.
- The first block is marked by a single # and the position counter value is 0.
- Until the first $, the atomic proposition # marks the beginning of a block of length n (the block of the first $ is the last block that must be marked with this pattern).
- The following four formulas make sure that the position (encoded by c_1) is increased by one at every #. We use an additional atomic proposition z that represents the carry. Thus, we add 1 to the least significant bit and then propagate the carry to the other bits. Note that the requirements hold until $ (the block of the first $ still has a position counter).
- Each letter has at most one tile. This requirement holds until $.
- The tiling starts with t_0, thus f(0, 0) = t_0: ϕ_8 = t_0.
- The requirement that f(0, m − 1) = t_f is translated to the requirement that there is a row that starts with t_f and the letter right after this row is marked with the first $. We use @ to mark a t_f that appears at the beginning of a row (that is, a block with a position counter of 0). By using the formula θ_row-start = # ∧ (⋀_{0≤i<n} X^i c_0), we require that the first @ appears on a tile t_f at the beginning of a row: ϕ_9 = (¬@) U (@ ∧ t_f ∧ θ_row-start).
- The following requires that the first $ appears right after the row that starts with the first @: ϕ_10 = (¬$ ∧ ¬@) U (@ ∧ ¬$ ∧ X((¬θ_row-start ∧ ¬$) U (θ_row-start ∧ $))).
- The horizontal condition, namely that for every 0 ≤ i ≤ 2^n − 2 and 0 ≤ j ≤ m − 1 we have (f(i, j), f(i + 1, j)) ∈ H, is specified by ϕ_11.
- The vertical condition, namely that for every 0 ≤ i ≤ 2^n − 1 and 0 ≤ j ≤ m − 2 we have (f(i, j), f(i, j + 1)) ∈ V, is the challenging one. We want to relate tiles with the same position counter value in consecutive rows. We first define a formula θ_eq that holds iff the atomic proposition $ appears finitely many times and the current position counter agrees with the position counter in the block of length n that begins at the last appearance of $. Then, the formula θ_next-t_i requires that a block that satisfies θ_eq in the next row must have the tile t_i. Using θ_next-t_i we can define the formula θ_V that requires that if θ_eq is satisfied, then the tile with the same position counter in the next row satisfies the vertical condition. By requiring θ_V to hold until @, we specify the vertical condition. Thus, ϕ_12 = θ_V U @. Note that this formula checks the vertical condition only for positions that agree with the position counter of the block of the last $ (when $ appears finitely often).
- Finally, the LTL formula ϕ requires all the above requirements to be satisfied.

We now prove that indeed τ admits a tiling iff ϕ has a good-prefix. Assume first that τ admits a tiling. We claim that a finite word of length (2^n · m + 1) · n that represents the tiling as explained above is a good-prefix for ϕ. This prefix includes the block of length n right after the m-th row (which has $ in its first letter and a position counter of 0 … 0; the atomic propositions t_i for this block can be defined arbitrarily). Note that ϕ does not restrict $ beyond requiring that its first appearance is in the letter that is right after the m-th row. The formulas ϕ_1, …, ϕ_11 depend only on this prefix and are therefore satisfied in every extension of the prefix with an infinite suffix. If the atomic proposition $ appears infinitely often in the suffix, then ϕ_12 is also satisfied. Otherwise, there is a position that is encoded by the atomic proposition c_1 in the block of length n that starts at the last $. In this case, the formula ϕ_12 checks the vertical condition for tiles with this position. Since the prefix represents a valid tiling, the vertical condition holds and therefore the formula ϕ_12 holds. Note that although ϕ has a good-prefix, it is not necessarily co-safety.

Now, assume that ϕ has a good-prefix. By the formulas ϕ_1, …, ϕ_11, this prefix must start with the tile t_0, have m > 0 such that the m-th row starts with t_f, and satisfy the horizontal condition in the first m rows. Note that m is the index of the first row that starts with @, and that since the prefix is a good-prefix for ϕ, it must contain at least 2^n · m + 1 blocks.
Indeed, ϕ requires that such an @ appears, and that the first block in the (m+1)-th row starts with $ and has a position counter of 0 … 0. We now show that the vertical condition also holds for every position in the tiling that this prefix represents. Since ϕ_12 holds in every extension of the prefix with an infinite suffix, it holds in particular for suffixes in which the atomic proposition $ appears finitely often. In this case, the block of length n that starts at the last $ encodes a position, and by ϕ_12 the vertical condition must hold for this position. Since for every possible position there is a suffix in which this position is encoded by the block of the last $, the vertical condition must hold for every position.
Therefore, we have: τ admits a tiling iff ϕ has a good-prefix iff ψ is not liveness. Since the size of ψ is polynomial in the size of τ, the reduction is polynomial, and we are done.
When a language is given by means of an NBW, deciding its liveness is much easier:

Theorem 7 The problem of deciding whether a given NBW is liveness is PSPACE-complete.
Proof The upper bound is described in [37]. For the lower bound, we describe a reduction from the acceptance problem for a PSPACE Turing machine. As shown in [30], given a PSPACE Turing machine T and an input x to it, we can generate an NBW A that rejects a word w iff w starts with an encoding of a legal and accepting computation of T on the input x. Therefore, T accepts the input x iff L(A) is not liveness, and we are done.

Counterability
In this section we study the problem of deciding whether a given language is counterable as well as the length of short bad-prefixes and their detection. In order to complete the picture, we also compare the results to those of safety languages.
We start with the complexity of deciding safety and counterability. The results for safety are from [37]. Those for counterability follow from Theorems 6 and 7 and the fact that L is counterable iff it is not liveness.

Theorem 8 The problem of deciding whether L is counterable is EXPSPACE-complete for L given by an LTL formula and is PSPACE-complete for L given by an NBW.
We find Theorem 8 surprising: both safety and counterability ask about the existence of bad-prefixes. In safety, a bad-prefix should exist for all bad words. In counterability, not all bad words need have a bad-prefix, but at least one should. Theorem 8 implies that there is something in LTL, yet not in NBWs, that makes the second type of existence condition much more complex.
We now turn to study the length of shortest bad-prefixes. Both (non-valid) safety and counterable languages have bad-prefixes. As we show, however, the complexity of counterability carries over, and a tight bound on shortest bad-prefixes for counterable languages is exponentially bigger than that for safety languages. As detailed in the proof, the gap follows from our ability to construct a fine automaton A^fine_ψ for all safety LTL formulas ψ [23]. The NFW A^fine_ψ is exponential in |ψ|, it accepts only bad-prefixes for ψ, and each computation that does not satisfy ψ has at least one bad-prefix accepted by A^fine_ψ. A shortest witness to the nonemptiness of A^fine_ψ can serve as a bad-prefix. On the other hand, nothing is guaranteed about the behavior of A^fine_ψ when constructed for a non-safety formula ψ, thus it is of no help in the case of counterable languages that are not safety. In particular, for the LTL formula used in the proof of Theorem 6, neither the formula nor its complement is safety, thus its doubly-exponential shortest bad-prefix does not contradict the upper bounds known for these classes of languages.

Theorem 9 The length of a shortest bad-prefix for a language given by an LTL formula ψ is tightly exponential in |ψ| in case ψ is safety, and is tightly doubly-exponential in |ψ| in case ψ is counterable.

Proof We start with the case of safety formulas. In [23], the authors associated with each safety LTL formula ψ an NFW A^fine_ψ that accepts a trap for ψ. That is, it is guaranteed that A^fine_ψ accepts only bad-prefixes for ψ and that each computation that does not satisfy ψ has at least one bad-prefix accepted by A^fine_ψ. Unlike an NFW that accepts exactly all the bad-prefixes for ψ, which is doubly-exponential in |ψ|, it is possible to define A^fine_ψ so that it is only exponential in |ψ|. Now, a shortest witness to the nonemptiness of A^fine_ψ is linear in its size, and it bounds the length of a shortest bad-prefix for ψ. Hence the exponential upper bound. For the lower bound, consider the family L_1, L_2, … of safety languages, where L_n contains exactly all words over the alphabet {0, 1, #} that do not start with a description of an n-bit counter that counts from 0 to 2^n − 1. For example, L_2 contains all words that do not start with 00011011. It is not hard to see that L_n is a safety language, that it can be specified by an LTL formula of length polynomial in n, and that the shortest bad-prefix for L_n is the description of an n-bit counter that counts from 0 to 2^n − 1, which is of length exponential in n. Formally, the LTL formula is over the atomic propositions AP = {b, c, d}, where b represents the value of a bit, c represents a carry, and d represents the start of an n-bit block.
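The counter lower bound can be sketched numerically. The plain 0/1 string encoding below, with each counter value written most significant bit first (as in the "00011011" example), is an assumption:

```python
# Sketch: the unique bad-prefix shape for L_n is an n-bit counter
# counting from 0 to 2^n - 1, so its length is n * 2^n, exponential
# in n, while the LTL formula for L_n is only polynomial in n.

def counter_word(n):
    return ''.join(format(i, '0' + str(n) + 'b') for i in range(2 ** n))

print(counter_word(2))        # 00011011
print(len(counter_word(3)))   # 24, i.e. n * 2^n for n = 3
```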
We proceed to counterable languages. In [25], the authors associated with each LTL formula ψ an NFW A^tight_ψ that accepts exactly all the bad-prefixes for ψ. The size of A^tight_ψ is doubly-exponential in |ψ|. A shortest witness to the nonemptiness of A^tight_ψ is linear in its size, and it bounds from above the length of a shortest bad-prefix for ψ. Hence the doubly-exponential upper bound. For the lower bound, we consider the formula ψ generated in the EXPSPACE-hardness proof of Theorem 6. It is known that there are inputs to the tiling problem described there for which the shortest legal tiling has doubly-exponentially many rows. Indeed, such tilings can encode all problems in EXPSPACE, including those whose running time is doubly-exponential. A shortest bad-prefix for the LTL formula ψ constructed in the proof describes the tiling with its doubly-exponentially many rows, and is thus doubly-exponential in |ψ|.
When the specification formalism is automata, the difference between safety and counterable languages disappears.

Theorem 10 The length of shortest bad-prefixes for a safety or a counterable language given by an NBW A is tightly exponential in |A|.
Proof In [25], the authors associated with each NBW A an NFW A^tight that accepts exactly all the bad-prefixes for A. The size of A^tight is exponential in |A|. A shortest witness to the nonemptiness of A^tight is linear in its size, and it bounds from above the length of a shortest bad-prefix for A. Hence the exponential upper bound.
For the lower bound, let Σ = {a, b}, and let p_1, …, p_n be the first n prime numbers. For k = 1, …, n, let A_k be an NBW of size O(p_k) such that L(A_k) = {w : w = a^ω or w starts with a^m b, for m = 0 or m ≢ 0 mod p_k}. Let A be an NBW of size O(p_1 + ⋯ + p_n) such that L(A) = ⋂_{k=1}^n L(A_k). Note that L(A) is safety. Since p_n = O(n log n), the size of A is polynomial in n. On the other hand, a bad-prefix for A is of length at least 1 + ∏_{k=1}^n p_k, which is greater than n!, and hence exponential in n.
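The arithmetic behind this gap can be sketched directly: the automaton size grows like the sum of the first n primes, while the shortest bad-prefix grows like their product. The helper `first_primes` is our own:

```python
import math

# Sketch: automaton size ~ p_1 + ... + p_n (polynomial in n) versus
# bad-prefix length >= 1 + p_1 * ... * p_n > n! (exponential in n).

def first_primes(n):
    primes, cand = [], 2
    while len(primes) < n:
        if all(cand % p for p in primes):
            primes.append(cand)
        cand += 1
    return primes

n = 5
primes = first_primes(n)             # [2, 3, 5, 7, 11]
size = sum(primes)                   # automaton size ~ 28
prefix_len = 1 + math.prod(primes)   # 2311
print(size, prefix_len, prefix_len > math.factorial(n))  # 28 2311 True
```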

Relative counterability
In this section we add a Kripke structure K to the setting and study K-counterability and shortest K-bad-prefixes. Our results use variants of the product of automata for K and L, and we first define this product below. Consider a Kripke structure K = ⟨AP, W, W_0, R, l⟩ and an NBW A = ⟨2^AP, Q, Q_0, δ, α⟩. Essentially, the states of the product A_{K×A} are pairs in W × Q. Recall that the states in K are differently labeled. Thus, we can define the product so that whenever it reads a letter in 2^AP, its next W-component is determined. Formally, we define the NBW A_{K×A} = ⟨2^AP, ({s_0} ∪ W) × Q, {s_0} × Q_0, ρ, ({s_0} ∪ W) × α⟩, where s_0 ∉ W is a new initial W-component, and where for all σ ∈ 2^AP, we have ⟨w′, q′⟩ ∈ ρ(⟨s_0, q⟩, σ) iff w′ ∈ W_0 and l(w′) = σ and q′ ∈ δ(q, σ), and for w ∈ W we have ⟨w′, q′⟩ ∈ ρ(⟨w, q⟩, σ) iff R(w, w′) and l(w′) = σ and q′ ∈ δ(q, σ). Thus, when the product A_{K×A} proceeds from state ⟨w, q⟩ with σ, its new W-component is the single successor of w that is labeled σ, paired with the σ-successors of q. It is easy to see that L(A_{K×A}) = L(K) ∩ L(A). When A is an NBW that corresponds to an LTL formula ψ (that is, L(A) = L(ψ)), we denote the product by A_{K×ψ}.
We start with the problem of deciding relative safety. By Theorem 3, a language L is K-safety iff L(K) ∩ L is safety. Thus, the check can be reduced to checking the safety of A_{K×A} (respectively, A_{K×ψ}). This check, however, if done naively, is PSPACE in |A_{K×A}| (respectively, |A_{K×ψ}|), and hence PSPACE in |K|. The technical challenge is to find a more efficient way to do the check, and the one we describe in the proof is based on decomposing A_{K×A} so that the complementation that its safety check involves is circumvented. As for the lower bound, note that using the Kripke structure K_AP, one can reduce traditional safety to relative safety. Our reduction, however, shows that the complexity of deciding K-safety coincides with that of model checking in both its parameters.

Theorem 11 Consider a Kripke structure K over AP and a language L ⊆ (2^AP)^ω. The problem of deciding whether L is K-safety is PSPACE-complete for L given by an NBW or by an LTL formula. In both cases, it can be done in time linear and space polylogarithmic in |K|.
Proof For the upper bound, we describe an efficient way for checking whether A_{K×A} (or A_{K×ψ}) is safety. By [37], an NBW S is safety iff L(S_loop) ⊆ L(S), where S_loop is the NBW obtained from S by removing all empty states and making all other states accepting.

For the lower bound, we describe a reduction from the model-checking problem. Given a Kripke structure K and an LTL formula ψ, let q ∉ AP be a fresh atomic proposition, and let K′ be a Kripke structure obtained from K by duplicating each state so that w ∈ (2^{AP∪{q}})^ω is a computation of K′ iff the projection of w on AP is a computation of K. Note that K′ is only twice as large as K. We claim that K ⊨ ψ iff the LTL formula ψ ∨ Fq is K′-safety. First, if K ⊨ ψ, then every computation of K satisfies ψ, implying that all the computations of K′ satisfy ψ, and thus also satisfy ψ ∨ Fq. Hence, ψ ∨ Fq is K′-safety. For the other direction, observe that every finite computation of K′ can be continued with a suffix that reaches a state in which q holds. Hence, ψ ∨ Fq does not have a K′-bad-prefix. Therefore, ψ ∨ Fq is K′-safety only when all the computations of K′ satisfy ψ ∨ Fq. By the definition of K′, the latter holds only when K ⊨ ψ. The reduction for the case of an NBW is similar.
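The S_loop construction from [37] can be sketched as follows: remove the empty states of S (states whose language is empty, i.e. from which no accepting state lying on a cycle is reachable) and make all remaining states accepting. The graph encoding below is an illustrative assumption, and the simple reachability-based cycle test is adequate only for small NBWs:

```python
# Sketch of the S_loop construction: S is safety iff L(S_loop) ⊆ L(S).
# A state is nonempty iff it can reach an accepting state on a cycle;
# S_loop keeps exactly the nonempty states, all of them accepting.

def reach_set(succ, sources):
    seen, stack = set(sources), list(sources)
    while stack:
        for r in succ[stack.pop()]:
            if r not in seen:
                seen.add(r)
                stack.append(r)
    return seen

def nonempty_states(states, edges, accepting):
    succ = {q: set() for q in states}
    for q, r in edges:
        succ[q].add(r)
    # accepting states lying on a cycle (reachable from themselves)
    on_cycle = {f for f in accepting if f in reach_set(succ, succ[f])}
    return {q for q in states if reach_set(succ, [q]) & on_cycle}

# State 2 is empty (no accepting cycle reachable) and is removed.
states, edges, acc = {0, 1, 2}, [(0, 1), (1, 1), (0, 2)], {1}
print(sorted(nonempty_states(states, edges, acc)))  # [0, 1]
```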
We continue to relative counterability. We first show that the complexity of deciding counterability is carried over to the relative setting. For the upper bound, note that a language L is K-counterable iff pref(L(K)) ∩ comp(pref(L(K) ∩ L)) ≠ ∅. Again, this check, if done naively, is PSPACE in |K|, and the challenge in the proof is to use the deterministic behavior of A_{K×A} with respect to the W-component of its states in order to avoid a blow-up in |K| in its complementation.

Theorem 12 Consider a Kripke structure K over AP and a language L ⊆ (2^AP)^ω. The problem of deciding whether L is K-counterable and finding a shortest K-bad-prefix is PSPACE-complete for L given by an NBW and is EXPSPACE-complete for L given by an LTL formula. In both cases, it can be done in time linear and space polylogarithmic in |K|.
Proof We start with the upper bounds. By definition, L is K-counterable iff pref(L(K)) ⊈ pref(L(K) ∩ L). Equivalently, pref(L(K)) ∩ comp(pref(L(K) ∩ L)) ≠ ∅. We construct an NFW U′ for comp(pref(L(K) ∩ L)) as follows. Let A be an NBW such that L = L(A). First, we remove from A_{K×A} empty states and make all its states accepting. This results in an NFW U for pref(L(K) ∩ L). The NFW U′ is obtained by determinizing and dualizing U. As explained above, A_{K×A} has a deterministic behavior with respect to the W-component of its states. Hence, determinizing it with the subset construction results in the state space W × 2^Q. It follows that U′ is linear in |K| and exponential in |A|. Also, an NFW S for pref(L(K)) is clearly linear in |K|. Now, checking whether L is K-counterable is reduced to checking the nonemptiness of S × U′. The states of S × U′ are of the form W × (W × 2^Q). Since both S and the W-component of U′ proceed according to the transition relation of K, they coincide, until possibly one of them gets stuck. Hence, the size of the product S × U′ is linear in |K| and exponential in |A|. The nonemptiness check can be done on-the-fly and in NLOGSPACE. Thus, it can be done in PSPACE for L given by an NBW and in EXPSPACE for L given by an LTL formula. Also, it can be done in time linear and space polylogarithmic in |K|. A shortest witness to the nonemptiness is a shortest K-bad-prefix.
We proceed to the lower bounds. By the reduction of Theorem 6, the problem of deciding whether an LTL formula ψ over a fixed set of atomic propositions AP is liveness is EXPSPACE-hard. Note that ψ is liveness iff ψ is not K_AP-counterable. Since |AP| is independent of |ψ|, this gives a polynomial reduction showing that the problem of deciding whether ψ is K-counterable is EXPSPACE-hard in |ψ|. Similarly, by Theorem 7, deciding whether L(A) is K-counterable is PSPACE-hard in |A|.
We now study the length of K-bad-prefixes.

Theorem 13 The length of a shortest K-bad-prefix for a K-counterable language L is tightly doubly-exponential in |ψ| in case L is given by means of an LTL formula ψ, and is tightly exponential in |A| in case L is given by an NBW A. In both cases, it is also tightly linear in |K|.
Proof The upper bounds follow from the proof of Theorem 12. For the lower bounds with respect to |ψ| and |A|, note that when we use the Kripke structure K AP , the constructions described in the proofs of Theorems 9 and 10 also apply here.
Finally, for the lower bound with respect to |K|, let AP = {0, …, n}, fix ψ to the safety formula G¬0, and consider the family of Kripke structures K_1, K_2, …, with K_n = ⟨AP, AP, {n}, R, l⟩, where l(w) = {w}, and the transitions are such that R(k, k − 1) for k ≥ 2, as well as R(2, 0), R(1, 1), and R(0, 0). It is easy to see that ψ is K_n-counterable and that the length of a shortest K_n-bad-prefix is n. Indeed, every computation of K_n starts with the word n · (n − 1) · … · 2 and then continues either with 1^ω or with 0^ω. Therefore, the shortest K_n-bad-prefix is n · (n − 1) · … · 2 · 0.

Interestingly, when the LTL formula is K-safety, deciding its K-counterability and finding a K-bad-prefix can be done more efficiently, and its K-bad-prefixes are shorter. Formally, we have the following.
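The K_n family above can be sketched and searched directly; since ψ = G¬0, a K_n-bad-prefix is just a computation reaching state 0. The adjacency encoding is an illustrative assumption:

```python
from collections import deque

# Sketch of K_n: states {0, ..., n}, initial state n, transitions
# n -> n-1 -> ... -> 2, then 2 -> 1 or 2 -> 0, with self loops on 1 and 0.
# For psi = G(not 0), a K_n-bad-prefix is a computation reaching state 0,
# so the shortest one is n . (n-1) . ... . 2 . 0, of length n.

def shortest_bad_prefix(n):
    succ = {k: [k - 1] for k in range(3, n + 1)}
    succ.update({2: [1, 0], 1: [1], 0: [0]})
    queue, seen = deque([[n]]), {n}
    while queue:
        path = queue.popleft()
        if path[-1] == 0:            # state 0 violates G(not 0)
            return path
        for nxt in succ[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])

print(shortest_bad_prefix(4))  # [4, 3, 2, 0]
```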

Theorem 14 Let K be a Kripke structure and let ψ be a K -safety LTL formula. Deciding whether ψ is K -counterable and finding a K -bad-prefix is PSPACE-complete in |ψ|. The length of shortest K -bad-prefixes is tightly exponential in |ψ|.
Proof Since ψ is K-safety, we have that ψ is K-counterable iff K ⊭ ψ. From the standard reduction showing the PSPACE-hardness of LTL model checking [38], it follows that LTL model checking is PSPACE-complete already for the safety fragment of LTL.
We now show an algorithm for finding a K-bad-prefix and study its length. In [23] it is shown that if A is a safety NBW with n states and A′ is an NBW with n′ states, m of them accepting, such that L(A′) = comp(L(A)), then there exists an NFW A″ with n · (m · n′ + 1) states that is fine for L(A). That is, A″ recognizes only bad-prefixes for L(A), and every word that is not in L(A) has at least one bad-prefix that is recognized by A″. In Theorem 3 we showed that if ψ is K-safety, then A_{K×ψ} is safety. Let Ã_{K×ψ} be an NBW of size exponential in |ψ| and linear in |K| such that L(Ã_{K×ψ}) = comp(L(K)) ∪ comp(L(ψ)) = comp(L(A_{K×ψ})). Therefore, by [23], there exists an NFW A of size exponential in |ψ| that is fine for L(A_{K×ψ}).
Let A^pref_K be an NFW that recognizes the finite computations of K, and let A′ be the product of A and A^pref_K. If L(A′) is not empty, then every word in L(A′) is a K-bad-prefix. If there is a K-bad-prefix for ψ, then there is a computation π of K (one that starts with this K-bad-prefix) that does not satisfy ψ. Since π is not in L(A_{K×ψ}), it has a bad-prefix for L(A_{K×ψ}) that is recognized by A. This prefix is recognized also by A^pref_K, and therefore by A′, and it is a K-bad-prefix. The size of A′ is exponential in |ψ|, and therefore the task of finding a word in L(A′) can be done on-the-fly in space polynomial in |ψ|, and the K-bad-prefix found is of length exponential in |ψ|.
The lower bound for the length of a shortest K -bad-prefix follows from the construction described in the proof of Theorem 9 by using the Kripke structure K AP .
Note that the space complexity of the algorithm described in the proof of Theorem 14 is also polylogarithmic in |K|. Finally, when the specification formalism is automata, the difference between K-safety and K-counterable languages disappears, and the length of a shortest K-bad-prefix for a K-safety or K-counterable language given by an NBW A is tightly exponential in |A| and linear in |K|.

Probabilistic relative counterability
In this section we study K-prob-counterability. We start with the corresponding decision problem.

Theorem 15 Consider a language L ⊆ (2^AP)^ω and a Kripke structure K over AP. Deciding whether L is K-prob-counterable can be done in time O(|K| · 2^{O(|L|)}) or in space polynomial in |L| and polylogarithmic in |K|, for L given by an LTL formula ψ, in which case |L| = |ψ|, or by an NBW A, in which case |L| = |A|. In both cases, the problem is PSPACE-complete.
Proof Let P be a K-walk-distribution. By Theorem 4, the language L is K-prob-counterable iff Pr_{K,P}[L] < 1. By Theorems 3.1.2.1 and 4.1.7 in [11], the latter can be checked within the required complexity bounds. By [40], the problem is PSPACE-hard.
Thus, deciding whether an LTL formula is K -prob-counterable is exponentially easier than in the non-probabilistic case.
We turn to the problem of finding a K -prob-bad-prefix for an ω-regular language. Handling a language given by an NBW can proceed by an exponential translation to a DPW [33]. For languages given by LTL formulas, going to a DPW involves a doubly-exponential blow-up. We show that in order to find a K -prob-bad-prefix for an LTL formula, we can carefully proceed according to the syntax of the formula and do exponentially better than an algorithm that translates the formula to automata. We note that the PSPACE-hardness for NBWs in Theorem 15 implies that we cannot hope to obtain a PSPACE algorithm for LTL by translating LTL formulas to NBWs, unless the structure of the latter is analyzed to a level in which it essentially follows the structure of the LTL formula (see, for example [12] for probabilistic LTL model checking).

Probabilistic relative counterability of NBWs
Recall that in Sect. 3.3 we constructed the product parity automaton D_{K×D} and a labeled Markov chain M′ on its states. We also defined γ_x as the probability that a random walk from a state x of M′ is an accepting run in D_{K×D}, and showed that for a finite computation u we have γ_{reach(u)} = Pr^u_{K,P}[L]. That construction, together with the following lemma, is useful for finding K-prob-bad-prefixes.

Lemma 3 Determining, for every state x, whether γ_x is 0, 1, or in (0, 1) can be done in time linear or in space polylogarithmic in |D_{K×D}|, and γ_x can be calculated in time polynomial in |D_{K×D}|.

Proof First we show that determining for every x whether γ_x is 0, 1, or in (0, 1) can be done in time linear in |D_{K×D}| or in space polylogarithmic in |D_{K×D}|.
We first find the ergodic SCCs in D_{K×D}. The partition of D_{K×D} into SCCs can be done in time linear in |D_{K×D}| [10]. Then, we partition the ergodic SCCs of D_{K×D} into accepting and rejecting ones. Note that for each ergodic SCC C we can find, in time linear in |D_{K×D}|, the minimal i such that C ∩ α^{-1}(i) ≠ ∅, and therefore the partition of the ergodic SCCs can be done in time linear in |D_{K×D}|. By Lemma 1, for each state x in an ergodic SCC of D_{K×D}, we have γ_x = 1 iff the ergodic SCC is accepting and γ_x = 0 iff the ergodic SCC is rejecting. Also, for each state x in D_{K×D}, we have γ_x = 1 iff there is no path in D_{K×D} from x to a rejecting ergodic SCC, and γ_x = 0 iff there is no path in D_{K×D} from x to an accepting ergodic SCC. We can check for each SCC whether it can reach an accepting ergodic SCC and whether it can reach a rejecting ergodic SCC (by reverse reachability) in time linear in |D_{K×D}|.
We now show how to determine, for every x, whether γ_x is 0, 1, or in (0, 1) in space polylogarithmic in |D_{K×D}|. We need to check whether x can reach a state in an accepting ergodic SCC and whether x can reach a state in a rejecting ergodic SCC. Therefore, we check for every state y in D_{K×D} the following conditions: first, we check whether x can reach y; if so, we check whether y is in an ergodic SCC and whether that ergodic SCC is accepting or rejecting. In order to check whether a state y is in an ergodic SCC, we cycle over all possible states z, and for each z we check whether there is a path from y to z and a path from z to y. The state y is not in an ergodic SCC iff for some z there is a path from y to z but not from z to y. In order to check whether the ergodic SCC of a state y is accepting or rejecting, we cycle over all possible states z, and for each z we check whether there is a path from y to z (thus they are in the same ergodic SCC) and check the value of α(z). All of the above reachability problems can be solved in space polylogarithmic in |D_{K×D}|.
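All the space-bounded reachability queries above can be implemented with the classical Savitch midpoint trick: reachability within 2k steps is decided by guessing a midpoint and recursing on two subproblems of k steps each, giving recursion depth O(log n) and hence polylogarithmic space when states are represented in O(log n) bits. The following Python sketch illustrates the idea only; the successor predicate `succ` is an interface we assume, not part of the paper's construction.

```python
def reachable(succ, n, x, y, steps=None):
    """Savitch-style reachability: decide whether state y is reachable
    from state x in a graph with n states given by a successor predicate
    succ(u, v). Recursion depth is O(log n), so the space used is
    polylogarithmic in n when states fit in O(log n) bits."""
    if steps is None:
        # Smallest power of two that bounds the length of a simple path.
        steps = 1
        while steps < n:
            steps *= 2
    if x == y:
        return True
    if steps == 1:
        return succ(x, y)
    # y is reachable from x in `steps` moves iff some midpoint z is
    # reachable from x, and reaches y, in `steps // 2` moves each.
    return any(
        reachable(succ, n, x, z, steps // 2) and reachable(succ, n, z, y, steps // 2)
        for z in range(n)
    )
```

Note that the midpoint z may coincide with x or y, so shorter paths are covered as well.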
For calculating γ_x, we first find the ergodic SCCs in D_{K×D}, and check for every ergodic SCC whether it is accepting or rejecting. By Lemma 1, the probability γ_x of a state x in M is the probability of a random walk in M from x reaching an accepting ergodic SCC. This probability can be calculated for every x in M in time polynomial in |D_{K×D}| (see, for example, Chapter 11.2 in [17]).
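As an illustration, the linear-time classification of γ_x into {0, 1, (0, 1)} can be sketched in Python as follows. The graph interface (state count, edge list, parity-index array) is our own didactic choice, not the paper's implementation; ergodic (bottom) SCCs are the SCCs with no outgoing edges, and an ergodic SCC is accepting iff the minimal parity index occurring in it is even.

```python
from collections import defaultdict

def classify_gamma(n, edges, index):
    """Classify each state x (0..n-1) of the product as gamma_x = 0,
    gamma_x = 1, or None (meaning 0 < gamma_x < 1), using Kosaraju SCCs,
    bottom-SCC detection, and backward reachability."""
    succ, pred = defaultdict(list), defaultdict(list)
    for u, v in edges:
        succ[u].append(v)
        pred[v].append(u)

    # Kosaraju: order states by DFS finish time, then sweep the reverse graph.
    order, seen = [], [False] * n
    for s in range(n):
        if seen[s]:
            continue
        stack, seen[s] = [(s, iter(succ[s]))], True
        while stack:
            u, it = stack[-1]
            for v in it:
                if not seen[v]:
                    seen[v] = True
                    stack.append((v, iter(succ[v])))
                    break
            else:
                order.append(u)
                stack.pop()
    comp, c = [-1] * n, 0
    for s in reversed(order):
        if comp[s] != -1:
            continue
        stack, comp[s] = [s], c
        while stack:
            u = stack.pop()
            for v in pred[u]:
                if comp[v] == -1:
                    comp[v] = c
                    stack.append(v)
        c += 1

    # An SCC is ergodic (bottom) iff no edge leaves it.
    bottom = [True] * c
    for u, v in edges:
        if comp[u] != comp[v]:
            bottom[comp[u]] = False
    min_index = [None] * c
    for x in range(n):
        ci = comp[x]
        if min_index[ci] is None or index[x] < min_index[ci]:
            min_index[ci] = index[x]
    # An ergodic SCC is accepting iff its minimal parity index is even.
    accepting = [bottom[ci] and min_index[ci] % 2 == 0 for ci in range(c)]

    def can_reach(target):
        # Backward reachability from the ergodic SCCs of the given kind.
        good = [False] * n
        stack = [x for x in range(n)
                 if bottom[comp[x]] and accepting[comp[x]] == target]
        for x in stack:
            good[x] = True
        while stack:
            u = stack.pop()
            for v in pred[u]:
                if not good[v]:
                    good[v] = True
                    stack.append(v)
        return good

    reach_acc, reach_rej = can_reach(True), can_reach(False)
    # gamma_x = 1 iff no rejecting ergodic SCC is reachable from x;
    # gamma_x = 0 iff no accepting ergodic SCC is reachable from x.
    return {x: (1 if not reach_rej[x] else 0 if not reach_acc[x] else None)
            for x in range(n)}
```

For example, a state whose only reachable bottom SCC is accepting is classified as 1, and a state that can reach bottom SCCs of both kinds is classified as None, i.e., γ_x ∈ (0, 1).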
We now show an algorithm for finding a shortest K-prob-bad-prefix for a language given by an NBW A.

Theorem 16 Consider an NBW A and a Kripke structure K such that L(A) is K-prob-counterable. Finding a shortest K-prob-bad-prefix for L(A) can be done in time exponential in |A| and linear in |K|, or in space polynomial in |A| and polylogarithmic in |K|. Moreover, the length of a shortest K-prob-bad-prefix is tightly exponential in |A| and tightly linear in |K|.

Proof Let D be a DPW equivalent to A and let M be the labeled Markov chain induced by D and K as described in Sect. 3.3. We need to find a shortest word whose path in M reaches a state x for which γ_x = 0. By Lemma 3, we can decide, in space polynomial in |A| and polylogarithmic in |K| or in time exponential in |A| and linear in |K|, for every state x, whether γ_x = 0. Thus, the problem can be reduced to the problem of finding shortest paths in a graph of size exponential in |A| and linear in |K|. The required complexities then follow from the fact that a shortest path is simple, together with the complexity of known shortest-path algorithms [10]. Finally, the lower bounds with respect to |A| and |K| for the length of a shortest K-prob-bad-prefix follow from the constructions described in the proofs of Theorem 10 (by using the Kripke structure K_AP) and Theorem 13, respectively.

Consider a Kripke structure K over AP and a language L ⊆ (2^AP)^ω that is K-prob-counterable. In practice, the user of the model-checking tool often has some estimation of the likelihood of every transition in the system. That is, we assume that the user knows what the typical K-walk-distribution P in a typical behavior of the system is. Clearly, there is a trade-off between the length of a counterexample and its "precision", in the sense that the longer a finite prefix of an erroneous computation is, the larger is the probability with which it is a K-prob-bad-prefix. We want to allow the user to play with this trade-off and thus define the following two problems:

- The shortest bounded-prob-K-bad-prefix problem is to return, given K, L, and 0 < γ < 1, a shortest finite computation u of K such that Pr^u_{K,P}[L] < γ.
- The bounded-length prob-K-bad-prefix problem is to return, given K, L, and m ≥ 1, a finite computation u of K such that |u| ≤ m and Pr^u_{K,P}[L] is minimal.

Using Lemma 3, we can carefully reduce both problems to classical problems in graph algorithms, applied to D_{K×D}.

Theorem 17
The shortest bounded-prob-K-bad-prefix and the bounded-length prob-K-bad-prefix problems, for a language given by an NBW A, can be solved in time exponential in |A| and polynomial in |K|.
Proof Let D be a DPW equivalent to A. Recall that |D| is exponential in |A|. According to Lemma 3, the probability γ_x for every state x in M can be calculated in time exponential in |A| and polynomial in |K|. For a probability 0 < γ < 1, we need to find a shortest finite computation u of K such that Pr^u_M[L(A)] < γ; that is, a shortest finite computation whose path in M reaches a state x such that γ_x < γ. For a length m > 0, we need to find a finite computation u of K of length at most m such that Pr^u_M[L(A)] is minimal; that is, a finite computation of length at most m whose path in M reaches a state x such that γ_x is minimal. Both tasks can be done using BFS in D_{K×D} from its initial states, within the required complexity.
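The first task, for instance, amounts to a plain BFS over the product states, as the following sketch shows. The interfaces (`initials`, `succ`, `label`, and a precomputed `gamma` map, e.g., from the analysis of Lemma 3) are assumptions of ours, not the paper's API.

```python
from collections import deque

def shortest_bounded_prob_bad_prefix(initials, succ, label, gamma, threshold):
    """BFS in the product D_{K x D} for a shortest finite computation of K
    whose run reaches a state x with gamma_x < threshold.

    initials : iterable of initial product states
    succ     : succ(x) -> iterable of successor product states
    label    : label(x) -> the letter (set of atomic propositions) at x
    gamma    : dict mapping a product state x to gamma_x
    threshold: the bound gamma from the problem statement

    Returns the shortest such computation as a list of letters, or None.
    """
    queue = deque((x, (label(x),)) for x in initials)
    visited = set(initials)
    while queue:
        x, word = queue.popleft()
        if gamma[x] < threshold:
            return list(word)  # shortest, by BFS order
        for y in succ(x):
            if y not in visited:
                visited.add(y)
                queue.append((y, word + (label(y),)))
    return None
```

The bounded-length variant is the same traversal truncated at depth m, keeping the state with minimal γ_x seen so far.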
We note that using our construction of M, together with Lemma 3, we can reduce the calculation of Pr_{K,P}[L(A)], or the problem of its classification as 1, 0, or in (0, 1), to a sequence of calculations in M, simplifying the known result of [11] (Theorem 4.1.7 there). Formally, we have the following.
Theorem 18 [11] Given K, P, and A, we can calculate Pr_{K,P}[L(A)] in time exponential in |A| and polynomial in |K|. Furthermore, we can determine whether this probability is 1, 0, or in (0, 1) in time exponential in |A| and linear in |K|, or in space polynomial in |A| and polylogarithmic in |K|.
Proof Let D be a DPW equivalent to A. We need to compute the probability that a random walk in M is an accepting run in D_{K×D}. According to Lemma 3, we can calculate γ_x in time exponential in |A| and polynomial in |K|, for every state x in M. Therefore, we can calculate Pr_{K,P}[L(A)] = Σ_{w∈W_0} p_in(⟨s_0, w⟩) · γ_{⟨s_0,w⟩} in the required complexity.
In addition, recall that Pr_{K,P}[L(A)] = 0 iff γ_x = 0 for every initial state x of D_{K×D}, and that Pr_{K,P}[L(A)] = 1 iff γ_x = 1 for every initial state x of D_{K×D}. By Lemma 3, we can determine for every x whether γ_x is 0, 1, or in (0, 1) in the required complexity.
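In code-level terms, the reduction boils down to the following small sketch, where the `gamma` values are assumed to have been computed already (Lemma 3) and all names are ours:

```python
def classify_probability(initials, p_in, gamma):
    """Compute Pr_{K,P}[L(A)] as the weighted sum of gamma over the
    initial states of D_{K x D}, and classify it as '0', '1', or '(0,1)'.
    Illustrative sketch; the interfaces are assumptions, not the paper's."""
    # Pr_{K,P}[L(A)] = sum over initial states of p_in(x) * gamma_x.
    prob = sum(p_in[x] * gamma[x] for x in initials)
    if all(gamma[x] == 0 for x in initials):
        kind = '0'          # no initial state can yield acceptance
    elif all(gamma[x] == 1 for x in initials):
        kind = '1'          # every initial state yields acceptance a.s.
    else:
        kind = '(0,1)'
    return prob, kind
```

Note that the classification step inspects only the {0, 1, (0, 1)} classes of the initial states, which is why it admits the cheaper complexity bound.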

Probabilistic relative counterability of LTL formulas
We now show an algorithm for finding a K-prob-bad-prefix for an LTL formula θ. Since the algorithm is based on the construction of [11], we start with an overview of that construction.
Let K = ⟨AP, W, W_0, R, l⟩ be a Kripke structure and let θ be an LTL formula. We assume that the only temporal operators in θ are X and U. Since F and G can be expressed with U, this does not restrict attention. Let P = ⟨p_in, p⟩ be a K-walk-distribution and let M = M_{K,P} be the labeled Markov chain induced by K and P. The algorithm of [11] checks whether Pr_M(L(θ)) = 0 (or whether Pr_M(L(θ)) = 1) in time O(|K| · 2^{|θ|}). It proceeds by iteratively replacing the innermost temporal subformula of θ by a fresh atomic proposition, and adjusting M so that the probability of satisfying θ is maintained. More formally, in each iteration we transform (M, θ) to (M′, θ′), where θ′ replaces an innermost temporal subformula of θ by a new atomic proposition ξ, and Pr_M(L(θ)) = Pr_{M′}(L(θ′)). There are two transformations, denoted C_U and C_X, corresponding to the two temporal operators U and X. We now describe these transformations.
We start with the transformation C_U. Let ϕUψ be an innermost temporal subformula of θ; that is, ϕ and ψ are Boolean assertions over the atomic propositions and can be evaluated on each state of K. The new labeled Markov chain M′ = M_{K′,P′} is defined over a Kripke structure K′ with a new atomic proposition ξ, which is going to replace ϕUψ in θ. The states of K′ are of the form ⟨w, ξ⟩ or ⟨w, ¬ξ⟩, for states w of K. The function l′ is defined by l′(⟨w, ξ⟩) = l(w) ∪ {ξ} and l′(⟨w, ¬ξ⟩) = l(w). The set of initial states is W′_0 = W′ ∩ (W_0 × {ξ, ¬ξ}), and the transition relation R′ is contained in {(⟨w, ξ_1⟩, ⟨w′, ξ_2⟩) : (w, w′) ∈ R and ξ_1, ξ_2 ∈ {ξ, ¬ξ}}. The transformation C_X follows similar lines. Let Xϕ be an innermost temporal subformula of θ. The partition of the states in W is now such that a state in W_YES (respectively, W_NO) has transitions only to states satisfying ϕ (respectively, ¬ϕ), and a state in W_? has transitions both to states satisfying ϕ and to states satisfying ¬ϕ.
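For concreteness, the three-way partition used by C_X can be sketched as follows. The interfaces (`states`, an edge list `R`, and a predicate `sat_phi` evaluating the Boolean assertion ϕ at a state) are ours, and we assume, as is standard for Kripke structures, that every state has at least one successor.

```python
def partition_for_CX(states, R, sat_phi):
    """Partition the states of K with respect to X phi: W_YES has only
    successors satisfying phi, W_NO has none, and W_Q (i.e., W_?) has
    successors of both kinds. Didactic sketch; assumes R is total."""
    succ = {w: [v for (u, v) in R if u == w] for w in states}
    W_YES = {w for w in states if all(sat_phi(v) for v in succ[w])}
    W_NO = {w for w in states if not any(sat_phi(v) for v in succ[w])}
    W_Q = set(states) - W_YES - W_NO
    return W_YES, W_NO, W_Q
```

States in W_YES satisfy Xϕ with probability 1 and states in W_NO with probability 0, regardless of the transition probabilities; only states in W_? depend on them.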
The K′-walk-distribution P′ is defined so that the transition probability of moving from state ⟨w, β⟩ to state ⟨w′, β′⟩, for β, β′ ∈ {ξ, ¬ξ}, is the probability that M, starting from state w, moves to state w′ and from state w′ onward satisfies β′, conditioned on the event that in state w it satisfies β. The detailed definition of P′ can be found in [11].
Let g be the mapping of states in K′ into states in K obtained by projecting onto the first component of the states of K′. Among the properties of M′ established in [11], the one important for our algorithm is Property 4: Pr_M(L(θ)) = Pr_{M′}(L(θ′)).
If θ has k temporal operators, then, by Property 4, we can compute Pr_M(L(θ)) as follows. We apply k times the appropriate transformations C_U and C_X in order to get the sequence (M_1, θ_1), ..., (M_k, θ_k), where θ_k does not contain temporal operators. Then, Pr_M(L(θ)) = Pr_{M_k}(L(θ_k)), which is simply the sum of the initial probabilities in M_k over all states satisfying θ_k. In order to check whether Pr_M(L(θ)) > 0 (or whether Pr_M(L(θ)) < 1), it is enough to construct the Kripke structures K_1, ..., K_k corresponding to M_1, ..., M_k and check whether K_k has an initial state satisfying θ_k (respectively, ¬θ_k). Thus, according to Theorem 4, the LTL formula θ is K-prob-counterable iff K_k has an initial state that does not satisfy θ_k. For every i, the construction of the Kripke structure K_{i+1} is based only on K_i and is independent of the probabilities in M_i. Thus, we can construct K_1, ..., K_k from K without constructing M_1, ..., M_k. The construction of (K_1, θ_1), ..., (K_k, θ_k) can be done in time O(|K| · 2^{|θ|}) or in space polynomial in |θ| and polylogarithmic in |K|.
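The formula side of this iteration (locating an innermost temporal subformula and replacing it by a fresh proposition) can be sketched on a tuple-based AST as follows. The representation is our own, and the accompanying Markov-chain transformations C_U and C_X are deliberately elided.

```python
import itertools

TEMPORAL = {'X', 'U'}  # nodes: ('U', f, g), ('X', f), ('and', ...), ('not', f), or an atom string

def has_temporal(f):
    """Check whether the formula f contains a temporal operator."""
    if isinstance(f, str):
        return False
    return f[0] in TEMPORAL or any(has_temporal(g) for g in f[1:])

def replace_innermost(f, fresh):
    """Replace one innermost temporal subformula of f by the fresh atom.
    Returns (new_formula, replaced_subformula), or (f, None) if f has none."""
    if isinstance(f, str):
        return f, None
    if f[0] in TEMPORAL and not any(has_temporal(g) for g in f[1:]):
        return fresh, f  # f itself is an innermost temporal subformula
    for i, g in enumerate(f[1:], start=1):
        g2, replaced = replace_innermost(g, fresh)
        if replaced is not None:
            return f[:i] + (g2,) + f[i + 1:], replaced
    return f, None

def eliminate_all(theta):
    """Iteratively strip temporal operators, yielding the pairs
    (theta_i, replaced subformula) for i = 1..k."""
    fresh_atoms = ('xi%d' % i for i in itertools.count(1))
    seq = []
    while has_temporal(theta):
        theta, replaced = replace_innermost(theta, next(fresh_atoms))
        seq.append((theta, replaced))
    return seq
```

For example, θ = p U (X q) first becomes p U ξ_1 (replacing X q) and then ξ_2, so k = 2 iterations suffice, one per temporal operator.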
We now use a variant of this idea in order to find K-prob-bad-prefixes.

Theorem 19 Finding a K-prob-bad-prefix for a K-prob-counterable LTL formula θ can be done in time O(|K| · 2^{O(|θ|)}) or in space polynomial in |θ| and polylogarithmic in |K|. Furthermore, the K-prob-bad-prefix that is found is of length exponential in |θ| and linear in |K|.
Proof Let M = M_{K,P} be a labeled Markov chain over K with respect to some K-walk-distribution P, and let (M_1, θ_1), ..., (M_k, θ_k) be the sequence obtained by the construction described above. Let K_1, ..., K_k be the Kripke structures corresponding to M_1, ..., M_k, respectively. Recall that the construction of (K_1, θ_1), ..., (K_k, θ_k) is independent of the K-walk-distribution P. We first show how to find a K_k-prob-bad-prefix for θ_k. Then, we show how to find a K_i-prob-bad-prefix for θ_i given a K_{i+1}-prob-bad-prefix for θ_{i+1}, with (K_0, θ_0) = (K, θ). Thus, we end up with the required K-prob-bad-prefix for θ. Let K_k = ⟨AP_k, W_k, W_{k,0}, R_k, l_k⟩. Recall that θ_k does not contain temporal operators, and thus the formula θ is K-prob-counterable iff K_k has an initial state that does not satisfy θ_k. Therefore, if θ is K-prob-counterable, then there is a K_k-prob-bad-prefix for θ_k of length 1, obtained by a state x_1 ∈ W_{k,0} such that l_k(x_1) does not satisfy θ_k. Let K_i = ⟨AP, W, W_0, R, l⟩ and K_{i+1} = ⟨AP′, W′, W′_0, R′, l′⟩ be the Kripke structures corresponding to M_i and M_{i+1}, respectively. Let x′_1, ..., x′_m ∈ W′ be such that l′(x′_1), ..., l′(x′_m) is a K_{i+1}-prob-bad-prefix for θ_{i+1}, and let x_1, ..., x_m ∈ W be its projection on the W-element; that is, x_j = g(x′_j), for 1 ≤ j ≤ m. We construct a K_i-prob-bad-prefix for θ_i.
We define an LTL formula τ as follows. If M_{i+1} is constructed with transformation C_U, we proceed as follows: if x′_m = ⟨x_m, ξ⟩, then τ = ϕUψ, where ϕUψ is the subformula that is replaced in the transformation, and if x′_m = ⟨x_m, ¬ξ⟩, then τ = ¬(ϕUψ). Likewise, if M_{i+1} is constructed with transformation C_X, then if x′_m = ⟨x_m, ξ⟩, then τ = Xϕ, where Xϕ is the subformula that is replaced in the transformation, and otherwise τ = ¬Xϕ.
We now study the length of shortest K -prob-bad-prefixes.

Theorem 20
The length of a shortest K-prob-bad-prefix for a K-prob-counterable language given by an LTL formula ψ is tightly exponential in |ψ| and tightly linear in |K|.
Proof The upper bounds follow from Theorem 19. The lower bound with respect to |ψ| follows from the construction described in the proof of Theorem 9, using the Kripke structure K_AP. The lower bound with respect to |K| follows from the construction described in the proof of Theorem 13.
Note that the K-prob-bad-prefix that our algorithm finds is not necessarily a shortest one; its length, however, matches the lower bound from Theorem 20.
Thus, we showed that the probabilistic approach to relative bad-prefixes for LTL formulas is exponentially better than the non-probabilistic approach, both in its complexity and in the length of the prefixes it detects.

Probabilistic counterability
In this section we study prob-counterability. The solutions to the three basic problems are specified in Theorems 21, 22, and 23 below.

Theorem 21
The problem of deciding whether a language L is prob-counterable is PSPACE-complete for L given by an LTL formula or by an NBW.
Proof According to Theorem 1, an ω-regular language L is prob-counterable iff Pr[L] < 1. Let ψ be an LTL formula and let A be an NBW. By [11], deciding whether Pr[L(ψ)] < 1 and whether Pr[L(A)] < 1 can be done in PSPACE. A matching lower bound is described in [13] for LTL. The proof in [13] is by a generic reduction and uses the ability of LTL to encode the computations of a PSPACE Turing machine. The same idea works also for NBWs; thus, deciding whether Pr[L] < 1 is PSPACE-hard for L given by an LTL formula or by an NBW.

Theorem 22 Let A be an NBW. Finding a shortest prob-bad-prefix for L(A) can be done in time exponential in |A|, or in space polynomial in |A|. Furthermore, the length of a shortest prob-bad-prefix is tightly exponential in |A|.
Proof A finite computation is a prob-bad-prefix iff it is a K_AP-prob-bad-prefix. Hence, a shortest prob-bad-prefix can be found by applying the algorithm from Theorem 16 to K_AP. The lower bound for the length of a shortest prob-bad-prefix follows from the same construction as in the proof of Theorem 10.

Theorem 23
Finding a prob-bad-prefix for a prob-counterable LTL formula ψ can be done in time 2^{O(|ψ|)} or in space polynomial in |ψ|. Furthermore, the prob-bad-prefix that is found is of length 2^{O(|ψ|)}, and the length of a shortest prob-bad-prefix for ψ is tightly exponential in |ψ|.
Proof A prob-bad-prefix can be found by applying the algorithm from Theorem 19 to the Kripke structure K_AP. The lower bound for the length of a shortest prob-bad-prefix follows from the same construction as in the proof of Theorem 9.
Thus, the exponential advantage of the probabilistic approach in the case where the language is given by an LTL formula is carried over to the non-relative setting. When the specification is given by means of an NBW, the complexities of the probabilistic and non-probabilistic approaches coincide. The probabilistic approach, however, may return more bad-prefixes.

Discussion
We extended the applicability of finite counterexamples by introducing relative and probabilistic bad-prefixes. This lifts the advantage of safety properties, which always have bad-prefixes, to ω-regular languages that are not safety. We believe that K-bad-prefixes and K-prob-bad-prefixes may be very helpful in practice, as they describe a finite execution that leads the system to an error state. From a computational point of view, finding a K-bad-prefix for an LTL formula ψ is unfortunately EXPSPACE-complete in |ψ|. Experience shows that even highly complex algorithms often run surprisingly well in practice. Also here, the complexity originates from the blow-up in the translation of LTL to automata, which rarely happens in practice. In cases where the complexity is too high, we suggest the following two alternatives, which do not go beyond the PSPACE complexity of LTL model checking (and, like model checking, are NLOGSPACE in |K|): (1) Recall that when ψ is K-safety and K ⊭ ψ, then finding a K-bad-prefix can be done in PSPACE. Thus, we suggest to check ψ for K-safety with the algorithm from Theorem 11, and then apply the algorithm from Theorem 14.
(2) Recall that finding a K-prob-bad-prefix is only PSPACE-complete. Thus, we suggest to apply the algorithm from Theorem 19. Note that the probabilistic approach is not only exponentially less complex, but may also be essential, namely when ψ is K-prob-counterable but not K-counterable.
When a user gets a lasso-shaped counterexample, he can verify that it indeed does not satisfy the specification. For finite bad-prefixes, the user only knows that they lead the system to an error state, and it is desirable to accompany the prefix with information explaining why these states are erroneous. We suggest the following three types of explanations. (1) A K-bad-prefix leads the product K × A_ψ to states ⟨w, S⟩ that are empty. Recall that the states of A_ψ consist of subsets of subformulas of ψ, and that ⟨w, S⟩ being empty means that w does not satisfy the conjunction of the formulas in S [41]. Returning S to the user explains what makes w an error state. (2) Researchers have studied certified model checking [26], where a positive answer of model checking (that is, K ⊨ ψ) is accompanied by a certificate: a compact explanation as to why K × A_{¬ψ} is empty. In our setting, certificates can provide a compact explanation as to why K × A_ψ with initial state ⟨w, S⟩ is empty.
(3) When a K-prob-bad-prefix u that is not a K-bad-prefix is returned, it may be helpful to accompany u with an infinite lasso-shaped computation τ of the system that starts with u and does satisfy the specification. Thus, the user would get an exception: he would know that all computations that start with u, except for τ (and possibly more computations, whose overall probability is 0), violate ψ. The exceptional correct behavior would help the user understand why almost all other behaviors are incorrect.
Finally, our probabilistic approach has been extended in [15], which introduces a quantitative measure for safety. Essentially, the safety level of a language L measures the fraction of words not in L that have a bad-prefix. In particular, a safety language has safety level 1 and a liveness language has safety level 0; thus, the study in [15] uses our probabilistic approach in order to span the spectrum between safety and liveness.