
An FPRAS for Model Counting for Non-Deterministic Read-Once Branching Programs

Kuldeep S. Meel ⓡ (University of Toronto, Canada); Alexis de Colnet ⓡ (TU Wien, Austria)
Abstract

Non-deterministic read-once branching programs, also known as non-deterministic free binary decision diagrams (nFBDD), are a fundamental data structure in computer science for representing Boolean functions. In this paper, we focus on #nFBDD, the problem of model counting for non-deterministic read-once branching programs. The #nFBDD problem is #P-hard, and it is known that there exists a quasi-polynomial randomized approximation scheme for #nFBDD. In this paper, we provide the first FPRAS for #nFBDD. Our result relies on the introduction of new analysis techniques that focus on bounding the dependence of samples.

Keywords and phrases:
Approximate model counting, FPRAS, Knowledge compilation, nFBDD
Copyright and License:
© Kuldeep S. Meel ⓡ and Alexis de Colnet ⓡ; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Theory of computation → Approximation algorithms analysis
Related Version:
Full Version: https://arxiv.org/abs/2406.16515
Acknowledgements:
This research was initiated at Dagstuhl Seminar 24171 on “Automated Synthesis: Functional, Reactive and Beyond” (https://www.dagstuhl.de/24171). We gratefully acknowledge the Schloss Dagstuhl – Leibniz Center for Informatics for providing an excellent environment and support for scientific collaboration.
Funding:
Meel acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), funding reference number RGPIN-2024-05956; de Colnet is supported by the Austrian Science Fund (FWF), ESPRIT project FWF ESP 235. This work was done in part while de Colnet was visiting the University of Toronto.¹
¹ The authors decided to forgo the old convention of alphabetical ordering of authors in favor of a randomized ordering, denoted by ⓡ. The publicly verifiable record of the randomization is available at https://www.aeaweb.org/journals/policies/random-author-order/search
Editors:
Sudeepa Roy and Ahmet Kara

1 Introduction

Read-once branching programs or binary decision diagrams are fundamental data structures in computer science used to represent Boolean functions. Their variants have been discovered multiple times across various sub-fields of computer science, and consequently, they are referred to by many acronyms [15, 9, 23]. In this paper, we focus on non-deterministic read-once branching programs, also known as non-deterministic free binary decision diagrams (nFBDD).

We study the following computational problem:

#nFBDD.

Given a non-deterministic read-once branching program B over a set X of Boolean variables, compute the number of models of B, i.e., the number of assignments over X that B maps to 1.

From a database perspective, #nFBDD is an important problem owing to the recent connections between query evaluation and knowledge compilation [13, 14, 21, 20, 2, 1]. The field of knowledge compilation has its origins in the artificial intelligence community, where functions represented in input languages are compiled into target languages that can support queries tractably (often viewed as polynomial time) [10]. The typical queries of interest are satisfiability, entailment, enumeration, and counting.

The target languages in the context of databases have been variants of binary decision diagrams, also referred to as branching programs, and circuits in decomposable negation normal form (DNNF) [2, 1]. A binary decision diagram is a representation of a Boolean function as a directed acyclic graph where the nodes correspond to variables and the sinks correspond to values, i.e., 0 or 1. One of the most well-studied forms is the ordered binary decision diagram (OBDD), where the nodes correspond to variables and, along every path from root to leaf, the variables appear in the same order [9]. A generalization of OBDD is nOBDD, where internal nodes can also represent disjunction (∨) gates.

nFBDD are a natural generalization of nOBDDs, as they do not impose restrictions on the ordering of Boolean variables. Since nFBDD do not impose such restrictions, they are known to be exponentially more succinct than nOBDD; that is, there exist functions for which the smallest nOBDD is exponentially larger than the smallest nFBDD [4]. From this viewpoint, nFBDDs occupy a space between nOBDD and DNNF circuits, as they are exponentially more succinct than nOBDDs, while DNNFs are quasi-polynomially more succinct than nFBDD [8, 4].

In the context of databases, the connection between knowledge compilation and query evaluation has been fruitful, leading to the discovery of both tractable algorithms and lower bounds. Of particular note is the application of the knowledge compilation paradigm in query evaluation on probabilistic databases [14], Shapley value computation [11], the enumeration of query answers, probabilistic graph homomorphism [5], and counting answers to queries [7]. The knowledge compilation-based approach involves first representing the database task as a query over a Boolean function and then demonstrating that the corresponding Boolean function has a tractable representation in a given target language, which also supports the corresponding query in polynomial time [3]. For example, in the context of query evaluation over probabilistic databases, one can show that the problem of probabilistic query evaluation can be represented as a weighted model counting problem over nOBDD when the underlying query is a path query [5]. Since there is a fully polynomial-time randomized approximation scheme (FPRAS) for the problem of model counting over nOBDD [6], it follows that probabilistic query evaluation for regular path queries over tuple-independent databases admits an FPRAS [5]. In the context of aggregation tasks, the underlying query is often model counting and its variants [19].

The aforementioned strategy makes it desirable to have target languages that are as expressive as possible while still supporting queries such as counting in polynomial time. In this context, the recent flurry of results has been enabled by the breakthrough of Arenas, Croquevielle, Jayaram, and Riveros, who showed that the problem #nOBDD admits an FPRAS [6]. As mentioned earlier, nOBDD imposes a severe restriction on variable ordering, i.e., along every path from root to leaf, the variable ordering remains the same. nFBDD generalizes nOBDD by lifting this restriction, thereby enabling succinct representations of several functions that require exponentially large nOBDDs. Since nFBDDs generalize nOBDDs, the #P-hardness of #nFBDD, the problem of model counting over nFBDD, immediately follows. Accordingly, in light of the recent discovery of an FPRAS for #nOBDD, an important open question is whether there exists an FPRAS for #nFBDD. The best known prior result is a quasi-polynomial-time algorithm, obtained by reducing nFBDD to DNNF (and, accordingly, to (+,×)-programs) [12]. As noted in Section 2.1, the techniques developed for the FPRAS for #nOBDD do not extend to #nFBDD; therefore, the design of an FPRAS for #nFBDD requires the development of new techniques.

The primary contribution of our work is to answer the aforementioned question affirmatively, which is formalized in the following theorem.

Theorem 1.

Let B be an nFBDD over n variables, ε>0 and δ>0. Algorithm approxMCnFBDD(B,ε,δ) runs in time O(n⁵ · ε⁻⁴ · log(δ⁻¹) · |B|⁶ · log|B|) and returns 𝖾𝗌𝗍 with the guarantee that Pr[𝖾𝗌𝗍 ∈ (1±ε)·|B⁻¹(1)|] ≥ 1−δ.

Organization of the paper.

We start with background on nFBDD in Section 2 and give a technical overview in Section 3. The different components of the FPRAS are described in Section 4, and the analysis is split into three parts: in Section 5 we introduce the key concept of derivation paths, in Section 6 we describe the framework for the analysis, and in Section 7 we go through the proof of the FPRAS guarantees. For space reasons, the proofs of several intermediate results are deferred to the appendix.

2 Background

Given a positive integer n and an integer m less than n, [n] denotes the set {1,2,…,n} and [m,n] the set {m,m+1,…,n}. For three real numbers a, b and ε with ε>0, we use a ∈ (1±ε)·b to denote (1−ε)·b ≤ a ≤ (1+ε)·b; similarly, a ∈ b/(1±ε) stands for b/(1+ε) ≤ a ≤ b/(1−ε). We sometimes use the special value ∞, with the convention that 1/∞ equals 0.
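As a concrete reading of the interval notation, the predicate a ∈ (1±ε)·b can be written as the following small Python helper (our own illustration, for b ≥ 0; the function name is ours):

```python
def within(a, b, eps):
    # a ∈ (1±eps)·b  means  (1-eps)·b <= a <= (1+eps)·b  (for b >= 0)
    return (1 - eps) * b <= a <= (1 + eps) * b
```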

Boolean variables take value 0 (false) or 1 (true). An assignment α to a set X of Boolean variables is a mapping from X to {0,1}. We sometimes see α as a set {x↦α(x) ∣ x∈X}. We denote by α∅ the empty assignment, which corresponds to the empty set. The set of assignments to X is denoted {0,1}^X. A Boolean function f over X is a mapping from {0,1}^X to {0,1}. The models of f are the assignments mapped to 1 by f. When not explicit, the set of variables assigned by α is denoted by var(α). When var(α)∩var(α′)=∅, we denote by α∪α′ the assignment to var(α)∪var(α′) consistent with both α and α′. For two sets of assignments S and S′, we write S×S′ = {α∪α′ ∣ α∈S, α′∈S′}.

nBDD.

A binary decision diagram (BDD) is a directed acyclic graph (DAG) with a single source node qsource, two sinks labeled 0 and 1, and where each internal node is labeled by a Boolean variable x and has two outgoing edges: the 0-edge going to the 0-child q0 and the 1-edge going to the 1-child q1 (potentially q0=q1). Internal nodes are called decision nodes and are written ite(x,q1,q0) (if x=1 then go to q1 else go to q0). A path in the DAG contains a variable x if it contains a node ite(x,q1,q0). Every variable assignment α corresponds to the unique path that starts from the source and, on a decision node ite(x,q1,q0), follows the α(x)-edge. Non-deterministic BDDs (nBDD) also admit guess nodes: unlabeled nodes with arbitrarily many children. When a path for an assignment reaches a guess node, it can proceed to any child, so several paths can correspond to the same assignment in an nBDD. For q a node in an nBDD B, var(q) denotes the set of variables labeling decision nodes reachable from q (including q) in the usual sense of graph reachability. We note var(B)=var(qsource). B computes a Boolean function over var(B) whose models are the assignments for which at least one path reaches the 1-sink. Every node q of B is itself the source of an nBDD and therefore represents a function over var(q) whose set of models we denote 𝗆𝗈𝖽(q). So B⁻¹(1)=𝗆𝗈𝖽(qsource). The function computed by an nBDD is also that computed by the circuit obtained by replacing every decision node ite(x,q1,q0) by (¬x∧q0)∨(x∧q1) and every guess node with children q1,…,qk by q1∨⋯∨qk. Thus, we call the guess nodes ∨-nodes in this paper. The size of an nBDD B, denoted by |B|, is its number of edges.
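To make the acceptance semantics concrete, here is a small Python sketch (ours, not part of the paper's algorithm) that brute-forces the model count of a toy nBDD; the tuple encoding and all names are our own illustration assumptions:

```python
from itertools import product

# Toy encoding: True/False are the 1-sink and 0-sink, ("ite", x, q1, q0)
# is a decision node, and ("or", [children]) is a guess (∨-) node.

def accepts(node, alpha):
    """An assignment is accepted when some path reaches the 1-sink."""
    if node is True or node is False:
        return bool(node)
    if node[0] == "ite":
        _, x, q1, q0 = node
        return accepts(q1 if alpha[x] else q0, alpha)
    # guess node: non-determinism is an existential choice over children
    return any(accepts(child, alpha) for child in node[1])

def brute_force_count(node, variables):
    """|mod(node)| by enumerating all assignments (exponential, for intuition only)."""
    return sum(
        accepts(node, dict(zip(variables, bits)))
        for bits in product([0, 1], repeat=len(variables))
    )

# (x AND y) OR (NOT x AND y), which is equivalent to y: 2 models over {x, y}
leaf_y = ("ite", "y", True, False)
b = ("or", [("ite", "x", leaf_y, False), ("ite", "x", False, leaf_y)])
```

Running `brute_force_count(b, ["x", "y"])` counts the models of the toy diagram directly from the path semantics described above.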

nFBDD.

An nBDD is free (nFBDD) when every path contains every variable at most once. nFBDDs are also called read-once non-deterministic branching programs (1-NBP) in the literature. Note that in an nFBDD, variables may appear in different orders depending on the path. When the order of occurrence of the variables is consistent among all paths of the nFBDD, we say that we have an ordered nBDD (nOBDD). We call an nFBDD 1-complete when, along every path from the source to the 1-sink, every variable occurs exactly once. We call an nFBDD 0-reduced when it contains no decision node ite(x,0-sink,0-sink) and no ∨-node that has the 0-sink among its children. Technically, 0-reduced nFBDD cannot represent functions with no models, but such functions are not considered in this paper. An nFBDD is alternating when its source node is a ∨-node, every ∨-node has only decision nodes for children, and every decision node has only ∨-nodes and sinks for children.

Lemma 2.

Every nFBDD B over n variables can be made 1-complete, 0-reduced, and alternating in time O(n|B|2).

Proof sketch.

First, we make B 0-reduced by repeating the following until reaching a fixed point: replace all nodes ite(x,0-sink,0-sink) by the 0-sink, remove the 0-sink from the children of ∨-nodes, and replace all ∨-nodes with no child by the 0-sink. Doing these replacements bottom-up in B takes time O(|B|) and results in a 0-reduced nFBDD B′ with |B′| ≤ |B|.

Second, we make B′ alternating w.r.t. the ∨-nodes. Replace every ∨-node that has the 1-sink as a child by the 1-sink. At that point, no sink is a child of any ∨-node. Next, for every ∨-node q in B′, if q has a parent q′ that is also a ∨-node, then remove q from the children of q′ and add all of q's children to the children of q′. Doing this replacement bottom-up yields an nFBDD whose ∨-nodes all have only decision nodes as children. The number of children of each ∨-node increases by at most |B′|, so the running time is O(|B′|²)=O(|B|²). Let B′′ be the resulting nFBDD; B′′ is still 0-reduced.

Third, we make B′′ 1-complete. For every q ∈ B′′, let 𝖼𝗁𝗂𝗅𝖽𝗋𝖾𝗇(q)=(q1,…,qk). While there exists i ∈ [k] such that var(qi) is missing a variable that should occur below q, choose such a variable x ∈ var(q)∖var(qi) (for a decision node q, x is also distinct from the variable tested at q) and replace qi by the node ite(x,qi,qi) in the children of q. This adds at most n decision nodes per original child of q. So doing this for all nodes takes time O(n·|B′′|)=O(n·|B|²) and gives a 1-complete nFBDD B′′′ which is still 0-reduced and alternating w.r.t. the ∨-nodes.

Finally, to make B′′′ alternating, we just have to consider every ite(x,q1,q0) and, if qb is not a ∨-node or a sink, replace it by a ∨-node whose unique child is qb. This takes time O(|B′′′|). One ∨-node is added above the source of the nFBDD if needed.
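The first step of the proof sketch, 0-reduction, is easy to prototype. The following Python sketch applies it bottom-up on a toy tuple encoding (True/False sinks, ("ite", x, q1, q0) decision nodes, ("or", [children]) ∨-nodes); the encoding and function name are our illustration, not the paper's code:

```python
def zero_reduce(node):
    """Apply the 0-reduction rules bottom-up: replace ite(x, 0-sink, 0-sink)
    by the 0-sink, drop the 0-sink from the children of ∨-nodes, and replace
    ∨-nodes left with no child by the 0-sink."""
    if node is True or node is False:
        return node
    if node[0] == "ite":
        _, x, q1, q0 = node
        q1, q0 = zero_reduce(q1), zero_reduce(q0)
        if q1 is False and q0 is False:
            return False  # ite(x, 0-sink, 0-sink) becomes the 0-sink
        return ("ite", x, q1, q0)
    children = [zero_reduce(c) for c in node[1]]
    children = [c for c in children if c is not False]  # drop 0-sink children
    return ("or", children) if children else False
```

Since every node is rewritten once, a single bottom-up pass reaches the fixed point on this immutable encoding.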

The nodes of a 1-complete, 0-reduced, alternating nFBDD are organized in layers L0, L1, …, L2n. L0 contains the sinks and, for 1 ≤ i ≤ 2n, the layer Li contains all nodes whose children (except the 0-sink) are in Li−1. We write L≤i = L0 ∪ ⋯ ∪ Li, and similarly for L≥i, L<i and L>i. Note that for all 1 ≤ i ≤ n, L2i−1 contains only decision nodes whereas L2i contains only ∨-nodes. Importantly, we assume an arbitrary ordering on the children of the nodes; for every node q we have a sequence (not a set) 𝖼𝗁𝗂𝗅𝖽𝗋𝖾𝗇(q) of its children.

FPRAS.

For a counting problem that, given an input x of size |x|, aims at computing some integer value N(x), a fully polynomial-time randomized approximation scheme (FPRAS) is an algorithm that, given x, ε>0, and 0<δ<1, runs in time polynomial in |x|, 1/ε, and log(1/δ), and returns N~ with the guarantee that Pr[N~ ∈ (1±ε)·N(x)] ≥ 1−δ. In this paper we give an FPRAS for the problem #nFBDD.
#nFBDD. Input: an nFBDD B. Output: its number of models |B⁻¹(1)|.

2.1 Related Work

As noted in Section 1, the literature on binary decision diagrams is extensive; therefore, we will focus solely on related results in the context of the model counting problem. The problem #nFBDD is #P-complete: membership in #P is immediate as every assignment can be evaluated in PTIME, and the #P-hardness follows from the observation that #DNF, i.e., counting the satisfying assignments of Boolean formulas in Disjunctive Normal Form, is #P-hard [22], together with the fact that every DNF can be represented as an nFBDD of size polynomial in the size of the DNF. Furthermore, it is also known that the problem #nOBDD is SpanL-complete [6].

Given the #P-hardness, a natural direction of research is to understand the complexity of approximation. The discovery of polynomial-time randomized approximation schemes for #P-hard problems has been of long-standing interest and has led to several theoretically deep and elegant results. One such result was that of Karp and Luby [17] in the context of #DNF, relying on Monte Carlo sampling. Building on Monte Carlo sampling, Kannan, Sampath, and Mahaney [16] proposed a quasi-polynomial-time approximation scheme for #nOBDD.² In a follow-up work [12], this result was extended to handle context-free grammars by reducing the problem of counting the words of a context-free grammar to estimating the support size of multilinear (+,×)-programs. It is straightforward to see that the same reduction can be applied in the context of #DNNF, implying a quasi-polynomial-time approximation for #nFBDD. Since then, the question of the existence of a fully polynomial-time randomized approximation scheme for #nFBDD and its generalizations has remained open.
² The result of [16] was stated for regular languages, but the underlying techniques can handle #nOBDD.

In a major breakthrough, Arenas et al. [6] provided an FPRAS for #nOBDD. Their technique relied on union-of-sets estimation à la Karp–Luby and the generation of independent samples via the self-reducibility union property.³ The self-reducibility union property can be informally stated as follows: the set of models conditioned on a given partial assignment (according to the variable ordering of the given nOBDD) can be expressed as the union of the models of the states of the given nOBDD. In a follow-up work [7], Arenas et al. observed that the problem of model counting over structured DNNF (st-DNNF) circuits also admits an FPRAS. In this context, it is worth highlighting that the self-reducibility union property does not hold for nFBDD, and that there is an exponential separation between nFBDD and st-DNNF, i.e., there is a family of functions for which the smallest FBDDs are exponentially smaller than the smallest st-DNNFs; therefore, the question of whether there exists an FPRAS for #nFBDD remained open.
³ The term self-reducibility union property was coined by Meel, Chakraborty, and Mathur [18] to explain the high-level idea of [6].

3 Technical Overview

Our algorithm proceeds in a bottom-up manner and, for every node q of the given nFBDD B, we keep: (1) a number p(q) ∈ (0,1] which seeks to approximate 1/|𝗆𝗈𝖽(q)|, so that 1/p(q) can be used to estimate |𝗆𝗈𝖽(q)|, and (2) ns·nt sets of samples S1(q),…,Sns·nt(q), each a subset of 𝗆𝗈𝖽(q), where ns and nt are polynomial in n=|var(B)|, ε⁻¹ and log|B|. A few comments are in order: we keep many (ns·nt) independent sets of samples so as to rely on the median-of-means estimator. As mentioned earlier, our algorithm works in a bottom-up manner; in particular, for a node q, we compute (p(q),{Sr(q)}r) using the values of p(·) and {Sr(·)}r of its children.

Ideally, we would want every model of q to be in Sr(q) identically and independently with probability p(q), so that the expected size of Sr(q) is small. However, obtaining i.i.d. samples is computationally expensive, which resulted in quasi-polynomial runtimes in earlier studies [12]. Recent works on FPRAS for nOBDD achieved independence by leveraging the self-reducibility union property [6, 7], but, as remarked in Section 2.1, the self-reducibility union property does not hold for nFBDD, and therefore it is not known how to achieve independence among samples without giving up on the desideratum of polynomial time.

The key insight in our approach is to give up on independence altogether; in particular, we do not even ensure pairwise independence, i.e., even for α,α′ ∈ 𝗆𝗈𝖽(q), it may not be the case that Pr[α′∈Sr(q) ∣ α∈Sr(q)] = Pr[α′∈Sr(q)]. Of course, we do need to quantify the dependence. In order to discuss the dependence, we first discuss how we update p(q) and Sr(q) for decision nodes and ∨-nodes.

  • Let q=ite(x,q1,q0). Then we compute p(q) and Sr(q) as p(q) = (1/p(q0) + 1/p(q1))⁻¹ and Sr(q) = (reduce(Sr(q0), p(q)/p(q0)) × {x↦0}) ∪ (reduce(Sr(q1), p(q)/p(q1)) × {x↦1}), where reduce(S,p) is the operation that keeps each element of a set S with probability p.

  • Let q be a ∨-node with q=q1∨q2 (assuming two children for simplicity). Furthermore, for simplicity of exposition and for the sake of high-level intuition, assume p(q1)=p(q2). The technical difficulty for ∨-nodes arises from the fact that it is possible that, for a given α ∈ 𝗆𝗈𝖽(q), we have α ∈ 𝗆𝗈𝖽(q1)∩𝗆𝗈𝖽(q2). Therefore, in order to ensure that no α has an undue advantage even if it lies in the model sets of multiple children, we update p(q) and Sr(q) as follows: we first compute ρ = min(p(q1),p(q2)) and S^r(q) = Sr(q1) ∪ (Sr(q2)∖𝗆𝗈𝖽(q1)), then p(q) = 𝗆𝖾𝖽𝗂𝖺𝗇_{0≤j<nt} ( (1/(ρ·ns)) · Σ_{r=j·ns+1}^{(j+1)·ns} |S^r(q)| )⁻¹, followed by Sr(q) = reduce(S^r(q), p(q)/ρ).

Observe that the use of Sr(q2)∖𝗆𝗈𝖽(q1) ensures that, for every α ∈ 𝗆𝗈𝖽(q), there is exactly one child q′ ∈ 𝖼𝗁𝗂𝗅𝖽𝗋𝖾𝗇(q) through which α can enter S^r(q), namely α ∈ S^r(q) only if α ∈ Sr(q′); therefore, no α has an undue advantage.
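The first-child rule for ∨-nodes can be sketched as follows; this is a simplified Python illustration of ours, where sets of hashable assignments stand in for the Sr(qi) and 𝗆𝗈𝖽(qi), and the function name is our own:

```python
def union_first_child(children_samples, children_models):
    """Combine per-child sample sets so that each assignment is kept only
    through the first child whose model set contains it: this is the rule
    used above so that a model of several children gets no undue advantage."""
    out = set()
    for i, samples in enumerate(children_samples):
        for alpha in samples:
            # keep alpha only if no earlier child already owns it
            if all(alpha not in children_models[j] for j in range(i)):
                out.add(alpha)
    return out
```

For two children this computes exactly Sr(q1) ∪ (Sr(q2)∖𝗆𝗈𝖽(q1)).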

It is worth re-emphasizing that the crucial departure in our work from earlier efforts is the embrace of dependence. For instance, consider q=q1∨q2 and q^=q1∨q3; then Sr(q) and Sr(q^) will of course reuse the samples Sr(q1), and therefore are not even pairwise independent. Now, of course, we need to bound the dependence so as to retain any hope of computing p(q) from Sr(q). To this end, we focus on the following quantity of interest: Pr[α∈Sr(q) and α′∈Sr(q)] for α,α′ ∈ 𝗆𝗈𝖽(q), which determines the variance of the estimator. In this regard, for every (α,q), we can define a derivation path, denoted 𝗉𝖺𝗍𝗁(α,q), where for every ∨-node q, 𝗉𝖺𝗍𝗁(α,q) is 𝗉𝖺𝗍𝗁(α,q′) appended with q, where q′ is the first child of q such that α ∈ 𝗆𝗈𝖽(q′). The key observation is that our computations ensure that Pr[α∈Sr(q) and α′∈Sr(q)] depends on the first node q′ (starting from the 1-sink) where the derivation paths 𝗉𝖺𝗍𝗁(α,q) and 𝗉𝖺𝗍𝗁(α′,q) diverge. In particular, it turns out that:

Pr[α′∈Sr(q) ∣ α∈Sr(q)] ≲ p(q)/p(q′).

One might wonder whether the above expression suffices: it turns out it does, because the number of pairs (α,α′) whose derivation paths diverge for the first time at q′ can be shown to be bounded by |𝗆𝗈𝖽(q)|²/|𝗆𝗈𝖽(q′)|, which suffices to show that the variance of the estimator is bounded by a constant factor times the square of its mean, thus allowing us to use the median-of-means estimator.

We close by remarking that the situation is more nuanced than described above, as p(q) is itself a random variable. Although the high-level intuition remains valid, the technical analysis requires a coupling argument based on a carefully defined random process, detailed in Section 6. Simplifying this rigorous technical argument would be an interesting direction for future research.

4 Algorithm

The core of our FPRAS, approxMCnFBDD_core, takes in a 1-complete, 0-reduced and alternating nFBDD B. B's nodes are processed in bottom-up order, i.e., from the sinks to the source. For each node q, the algorithm computes p(q), which seeks to estimate |𝗆𝗈𝖽(q)|⁻¹, and polynomially many subsets of 𝗆𝗈𝖽(q) called sample sets: S1(q),…,SN(q). The algorithm stops after processing the source node qsource and returns p(qsource)⁻¹. The procedure that computes the content of the Sr(q) and the value of p(q) is estimateAndSample(q). Since this procedure uses randomness, the {Sr(q)}r and p(q) are random variables. Our FPRAS works towards ensuring that “Pr[α∈Sr(q)] = p(q)” holds for every q ∈ B and α ∈ 𝗆𝗈𝖽(q). Thus, if p(q) is a good estimate of |𝗆𝗈𝖽(q)|⁻¹, then Sr(q) should be small in expectation. We put the equality between quotes because, taken literally, it does not make sense: the left-hand side is a probability, so a fixed number, whereas the right-hand side is a random variable.

Algorithm 1 estimateAndSample(q) with q=ite(x,q1,q0).
1  p(q) = (1/p(q0) + 1/p(q1))⁻¹
2  for 1 ≤ r ≤ ns·nt do
3      Sr(q) = (reduce(Sr(q0), p(q)/p(q0)) × {x↦0}) ∪ (reduce(Sr(q1), p(q)/p(q1)) × {x↦1})

Decision nodes and ∨-nodes are handled differently by estimateAndSample. When q is a decision node ite(x,q1,q0), p(q) is computed deterministically from p(q0) and p(q1), and Sr(q) is obtained from (Sr(q0)×{x↦0}) ∪ (Sr(q1)×{x↦1}) using the reduce procedure.

Algorithm 2 reduce(S,p) with p ∈ [0,1].
1  S′ = ∅
2  for s ∈ S do
3      add s to S′ with probability p
4  return S′
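In Python, reduce is essentially a one-liner; this sketch of ours makes the probability-p filtering explicit (the function name is our own, to avoid clashing with the built-in functools.reduce):

```python
import random

def reduce_set(S, p, rng=random):
    """Keep each element of S independently with probability p,
    mirroring the reduce(S, p) operation of Algorithm 2."""
    return {s for s in S if rng.random() < p}
```

Since random.random() returns values in [0, 1), the set is kept intact for p = 1 and emptied for p = 0.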

estimateAndSample(q) is more complicated when q is a ∨-node. We explain it gradually. For starters, let q = q1∨⋯∨qk with its children ordered as (q1,…,qk), and consider the problem of approximating |𝗆𝗈𝖽(q)| when a sample set S(qi) ⊆ 𝗆𝗈𝖽(qi) is available for every i ∈ [k], with the guarantee that Pr[α∈S(qi)] = ρ holds for every α ∈ 𝗆𝗈𝖽(qi). Every S(qi) is a subset of 𝗆𝗈𝖽(q). We compute S^(q) = union(q, S(q1),…,S(qk)) as follows: for every α ∈ S(qi), α is added to S^(q) if and only if qi is the first child of q for which α is a model. Simple computations show that Pr[α∈S^(q)] = ρ holds for every α ∈ 𝗆𝗈𝖽(q), and therefore ρ⁻¹·|S^(q)| is an unbiased estimate of |𝗆𝗈𝖽(q)| (i.e., the expected value of the estimate is |𝗆𝗈𝖽(q)|).

Algorithm 3 union(q,S1,…,Sk) with 𝖼𝗁𝗂𝗅𝖽𝗋𝖾𝗇(q)=(q1,…,qk).
1  S = ∅
2  for 1 ≤ i ≤ k do
3      for α ∈ Si do
4          if α ∉ 𝗆𝗈𝖽(qj) for every j < i then add α to S
5  return S

Now suppose that, for every i, we only know that Pr[α∈S(qi)] = p(qi) for every α ∈ 𝗆𝗈𝖽(qi), for some number p(qi) independent of α. Then we normalize the probabilities before computing the union. This is done using the reduce procedure. Let ρ = min(p(q1),…,p(qk)) and S¯(qi) = reduce(S(qi), ρ/p(qi)). We have that Pr[α∈S¯(qi)] = ρ holds for every α ∈ 𝗆𝗈𝖽(qi). So S^(q) = union(q, S¯(q1),…,S¯(qk)), and ρ⁻¹·|S^(q)| is an unbiased estimate of |𝗆𝗈𝖽(q)|.

Algorithm 4 estimateAndSample(q) with q=q1∨⋯∨qk and 𝖼𝗁𝗂𝗅𝖽𝗋𝖾𝗇(q)=(q1,…,qk).
1  ρ = min(p(q1),…,p(qk))
2  for 1 ≤ r ≤ ns·nt do
3      S^r(q) = union(q, reduce(Sr(q1), ρ/p(q1)), …, reduce(Sr(qk), ρ/p(qk)))
4  for 0 ≤ j < nt do
5      Mj = (1/(ρ·ns)) · Σ_{r=j·ns+1}^{(j+1)·ns} |S^r(q)|
6  ρ^ = (𝗆𝖾𝖽𝗂𝖺𝗇_{0≤j<nt}(Mj))⁻¹
7  p(q) = min(ρ, ρ^)
8  for 1 ≤ r ≤ ns·nt do
9      Sr(q) = reduce(S^r(q), p(q)/ρ)

To obtain an estimate that is concentrated around |𝗆𝗈𝖽(q)|, we use the “median of means” technique. Suppose that instead of one sample set S(qi) we have several sample sets S1(qi), S2(qi),…,SN(qi), all satisfying Pr[α∈Sr(qi)] = p(qi). Then define S¯r(qi) similarly to S¯(qi) and S^r(q) similarly to S^(q). Each ρ⁻¹·|S^r(q)| is an estimate of |𝗆𝗈𝖽(q)|. Say N = ns·nt and partition S^1(q), S^2(q),…,S^N(q) into nt batches of ns sets. The median-of-means technique computes the average size Mj = (1/ns)·Σ_{r=j·ns+1}^{(j+1)·ns} |S^r(q)| over each batch and uses ρ⁻¹·𝗆𝖾𝖽𝗂𝖺𝗇(M1,…,Mnt) to estimate |𝗆𝗈𝖽(q)|. The mean computation aims at reducing the variance of the estimate. The parameter ns can be chosen so that Pr[ρ⁻¹·Mj ∈ (1±ε)·|𝗆𝗈𝖽(q)|] > 1/2 holds. With the appropriate value for nt, ρ⁻¹·𝗆𝖾𝖽𝗂𝖺𝗇(M1,…,Mnt) lies in (1±ε)·|𝗆𝗈𝖽(q)| with high probability (though the estimate is no longer unbiased).
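The batching just described is easy to state in code. The sketch below (our illustration, not the paper's implementation) turns the sizes |S^r(q)| of the N = ns·nt sample sets into the median-of-means estimate ρ⁻¹·𝗆𝖾𝖽𝗂𝖺𝗇(M1,…,Mnt):

```python
import statistics

def median_of_means(sizes, rho, ns, nt):
    """Estimate |mod(q)| from the sizes of the ns*nt sets S^_r(q):
    average the sizes within each of the nt batches of ns sets,
    take the median of the batch means, and rescale by 1/rho."""
    assert len(sizes) == ns * nt
    means = [sum(sizes[j * ns:(j + 1) * ns]) / ns for j in range(nt)]
    return statistics.median(means) / rho
```

The median step is what makes the estimate robust: a batch whose mean is far off (because of the variance of the |S^r(q)|) cannot drag the final value with it, at the price of losing unbiasedness.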

So this is how estimateAndSample works for ∨-nodes. The median of means serves to compute ρ^, the inverse of the estimate of |𝗆𝗈𝖽(q)|, which in turn is used to compute p(q). When q is not the source node qsource, we compute sample sets Sr(q) in preparation for the processing of q's ancestors. For this we reuse S^r(q) and compute Sr(q) = reduce(S^r(q), p(q)/ρ).

Algorithm 5 approxMCnFBDD_core(B,n,ns,nt,θ).
1  p(1-sink) = 1, p(0-sink) = ∞
2  for 1 ≤ r ≤ ns·nt do
3      Sr(1-sink) = {α∅}, Sr(0-sink) = ∅
4  for 1 ≤ i ≤ 2n do
5      for q ∈ Li do
6          estimateAndSample(q)
7          if Σr |Sr(q)| ≥ θ then return 0
8  return 1/p(qsource)

To ensure a polynomial running time, approxMCnFBDD_core terminates as soon as the number of samples at a node grows too large (the test against the threshold θ in Algorithm 5) and returns 0. This output is erroneous, but we will show that the probability of terminating this way is negligible. For carefully chosen parameters, approxMCnFBDD_core returns a good estimate of |B⁻¹(1)| with probability larger than 1/2. The full FPRAS approxMCnFBDD amplifies this probability to 1−δ by returning the median output of m independent runs of approxMCnFBDD_core.

Algorithm 6 approxMCnFBDD(B,ε,δ).
1  make B alternating, 1-complete and 0-reduced
2  n = |var(B)|, m = 8·ln(1/δ)
3  κ = ε/(1+ε), ns = 4n/κ², nt = 8·ln(16·|B|), θ = 16·ns·nt·(1+κ)·|B|
4  for 1 ≤ j ≤ m do
5      𝖾𝗌𝗍j = approxMCnFBDD_core(B,n,ns,nt,θ)
6  return 𝗆𝖾𝖽𝗂𝖺𝗇(𝖾𝗌𝗍1,…,𝖾𝗌𝗍m)

5 Derivation paths

Models can have several accepting paths in an nFBDD. For q a node of B and α ∈ 𝗆𝗈𝖽(q), we map (α,q) to a canonical accepting path, called the derivation path of α for q and denoted by 𝗉𝖺𝗍𝗁(α,q). A path 𝒫 is formally represented by a tuple (V(𝒫),E(𝒫)), with V(𝒫) a sequence of vertices and E(𝒫) a sequence of edges.

Definition 3.

For qB and α𝗆𝗈𝖽(q), the derivation path 𝗉𝖺𝗍𝗁(α,q) is defined as follows:

  • If q is the 1-sink, then α = α∅ and the only derivation path is 𝗉𝖺𝗍𝗁(α,q) = ((q), ∅).

  • If q = ite(x,q1,q0), let α′ be the restriction of α to var(α)∖{x}; then V(𝗉𝖺𝗍𝗁(α,q)) is V(𝗉𝖺𝗍𝗁(α′, q_{α(x)})) extended with q, and E(𝗉𝖺𝗍𝗁(α,q)) is E(𝗉𝖺𝗍𝗁(α′, q_{α(x)})) plus the α(x)-edge of q.

  • If q = q1∨⋯∨qk with the children ordering 𝖼𝗁𝗂𝗅𝖽𝗋𝖾𝗇(q) = (q1,…,qk), let i be the smallest integer between 1 and k such that α ∈ 𝗆𝗈𝖽(qi); then V(𝗉𝖺𝗍𝗁(α,q)) is V(𝗉𝖺𝗍𝗁(α,qi)) extended with q, and E(𝗉𝖺𝗍𝗁(α,q)) is E(𝗉𝖺𝗍𝗁(α,qi)) plus the edge between qi and q.
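Definition 3 translates directly into a recursion. The Python sketch below (our toy encoding, not the paper's code: True is the 1-sink, ("ite", x, q1, q0) a decision node, ("or", [children]) a ∨-node) returns the vertex sequence of 𝗉𝖺𝗍𝗁(α,q) from the 1-sink up to q:

```python
def accepts(node, alpha):
    """Does alpha belong to mod(node)?"""
    if node is True or node is False:
        return bool(node)
    if node[0] == "ite":
        _, x, q1, q0 = node
        return accepts(q1 if alpha[x] else q0, alpha)
    return any(accepts(c, alpha) for c in node[1])

def derivation_path(node, alpha):
    """Vertex sequence of path(alpha, node), listed from the 1-sink up to node."""
    if node is True:
        return [node]
    if node[0] == "ite":
        _, x, q1, q0 = node
        # follow the alpha(x)-edge, as in the decision-node case of Definition 3
        return derivation_path(q1 if alpha[x] else q0, alpha) + [node]
    for child in node[1]:  # first child whose model set contains alpha
        if accepts(child, alpha):
            return derivation_path(child, alpha) + [node]
    raise ValueError("alpha is not a model of node")
```

Note that the canonicity comes entirely from the fixed ordering of the children: the recursion always commits to the first accepting child.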

Our algorithm constructs sample sets in a way that respects derivation paths. That is, an assignment α ∈ 𝗆𝗈𝖽(q) may end up in Sr(q) only if it is derived through 𝗉𝖺𝗍𝗁(α,q).

Lemma 4.

Let q ∈ B and α ∈ 𝗆𝗈𝖽(q), and let V(𝗉𝖺𝗍𝗁(α,q)) = (q0,q1,…,qi−1,qi) with q0 the 1-sink and qi = q. For every j ∈ [0,i−1], let αj be the restriction of α to var(qj). In a run of approxMCnFBDD_core, α ∈ Sr(q) holds only if αj ∈ Sr(qj) holds for every j ∈ [0,i−1].

Proof.

α0 = α∅ ∈ Sr(1-sink) = Sr(q0) holds by construction. Now consider j > 0; it is sufficient to show that αj ∈ Sr(qj) only if αj−1 ∈ Sr(qj−1).

  • If qj = ite(x, qj,1, qj,0), then αj = αj−1 ∪ {x↦α(x)} and qj−1 = qj,α(x). Looking at estimateAndSample for decision nodes, one sees that αj ∈ Sr(qj) only if αj−1 ∈ reduce(Sr(qj,α(x)), p(qj)/p(qj−1)), so only if αj−1 ∈ Sr(qj,α(x)) = Sr(qj−1).

  • If qj is a ∨-node with 𝖼𝗁𝗂𝗅𝖽𝗋𝖾𝗇(qj) = (qj,1,…,qj,k), then αj = αj−1 and there is an i such that qj−1 = qj,i. Observe that αj ∈ Sr(qj) only if αj ∈ S^r(qj) – so only if αj is in reduce(Sr(qj,ℓ), ρ/p(qj,ℓ)) – for the smallest ℓ such that αj ∈ 𝗆𝗈𝖽(qj,ℓ); so only if αj = αj−1 ∈ Sr(qj,ℓ). The definition of 𝗉𝖺𝗍𝗁(α,q) implies that ℓ = i.

Given two derivation paths 𝒫 and 𝒫′, we call their last common prefix node, denoted by 𝗅𝖼𝗉𝗇(𝒫,𝒫′), the deepest node at which the two paths diverge, that is, the last node of the common prefix of the two paths, from which they follow different edges. Note that if 𝒫 and 𝒫′ are consistent up to node q, with q = ite(x,q1,q0), and 𝒫 follows the 0-edge while 𝒫′ follows the 1-edge, then the two paths diverge at q even though they both contain q.

Definition 5.

Let 𝒫 = ((q0,…,qk),(e1,…,ek)) and 𝒫′ = ((q′0,…,q′ℓ),(e′1,…,e′ℓ)) be two derivation paths. The last common prefix node, denoted by 𝗅𝖼𝗉𝗇(𝒫,𝒫′), is the node qi for the biggest i such that (q0,…,qi) = (q′0,…,q′i) and (e1,…,ei) = (e′1,…,e′i).

Note that every derivation path contains the 1-sink as its first node, so the last common prefix node is well-defined. Let V(𝗉𝖺𝗍𝗁(α,q)) = (q0,…,qi) with q0 the 1-sink and qi = q. For every 0 ≤ ℓ ≤ i, we define

I(α,q,ℓ) := {α′ ∈ 𝗆𝗈𝖽(q) ∣ 𝗅𝖼𝗉𝗇(𝗉𝖺𝗍𝗁(α,q), 𝗉𝖺𝗍𝗁(α′,q)) = qℓ}.

The following result will play a key role in bounding the variance of estimators in the analysis.

Lemma 6.

Let α ∈ 𝗆𝗈𝖽(q) and V(𝗉𝖺𝗍𝗁(α,q)) = (q0,…,qi) with q0 the 1-sink and qi = q. For every 0 ≤ ℓ ≤ i, |I(α,q,ℓ)| ≤ |𝗆𝗈𝖽(q)| / |𝗆𝗈𝖽(qℓ)|.

Proof.

Let I(α,q,ℓ) = {α1,α2,…}. Let αℓ be the restriction of α to var(qℓ). By definition, every αi is of the form αℓ ∪ βi for some assignment βi to var(q)∖var(qℓ). Consider α′ ∈ 𝗆𝗈𝖽(qℓ); then every α′ ∪ βi is in 𝗆𝗈𝖽(q). Since the αi's differ only on var(q)∖var(qℓ), the βi's are pairwise distinct, and therefore {α′∪βi}i is a set of |I(α,q,ℓ)| distinct assignments. Considering all |𝗆𝗈𝖽(qℓ)| possible choices for α′, we find that {α′∪βi}α′,i is a set of |𝗆𝗈𝖽(qℓ)|·|I(α,q,ℓ)| distinct assignments in 𝗆𝗈𝖽(q). Hence |𝗆𝗈𝖽(qℓ)|·|I(α,q,ℓ)| ≤ |𝗆𝗈𝖽(q)|.

6 The Framework for the Analysis

We introduce a random process that simulates approxMCnFBDD_core. Our intuition is that, for every α ∈ 𝗆𝗈𝖽(q), a statement in the vein of “Pr[α∈Sr(q)] = p(q)” should hold. The problem is that this equality makes no sense as such, because Pr[α∈Sr(q)] is a fixed real value whereas p(q) is a random variable. The variables Sr(q) and p(q) for different q are too dependent on each other, so we use a random process to work with new variables that behave more nicely. The random process simulates several runs of the algorithm, for all possible values of the p(q)'s. There, Sr(q) is simulated by a different variable for each possible run. A coupling argument then allows us to replace Sr(q) by one of these variables, assuming enough knowledge of the run of the algorithm up to q, encoded in what we call a history for q.

6.1 History

A history h for a set of nodes Q is a mapping h : Q → (0,1]∪{∞}. h is realizable when there exists a run of approxMCnFBDD_core that gives the value h(q) to p(q) for every q ∈ Q. Such a run is said to be compatible with h. Two histories h for Q and h′ for Q′ are compatible when h(q) = h′(q) for all q ∈ Q∩Q′. Compatible histories can be merged into a history h∪h′ for Q∪Q′. For q ∉ Q and a value t, we write h(q↦t) to refer to the history h augmented with h(q) = t. For q ∈ B, we define the set 𝖽𝖾𝗌𝖼(q) of its descendants by 𝖽𝖾𝗌𝖼(1-sink) = ∅ and 𝖽𝖾𝗌𝖼(q) = 𝖼𝗁𝗂𝗅𝖽𝗋𝖾𝗇(q) ∪ ⋃_{q′∈𝖼𝗁𝗂𝗅𝖽𝗋𝖾𝗇(q)} 𝖽𝖾𝗌𝖼(q′). Note that q ∉ 𝖽𝖾𝗌𝖼(q). We only study histories realizable for sets Q that are closed under 𝖽𝖾𝗌𝖼, that is, if q ∈ Q and q′ is a descendant of q, then q′ ∈ Q. Thus we abuse terminology and refer to a history for 𝖽𝖾𝗌𝖼(q) as a history for q. The only history for the sinks is the vacuous history h∅ for Q = ∅ (because sinks have no descendants).
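As a concrete reading of these definitions, histories can be modeled as dictionaries from nodes to values, with compatibility, merging, and augmentation as below (a Python sketch under our own encoding; the names are ours):

```python
def compatible(h1, h2):
    """Two histories are compatible when they agree on every node
    in the intersection of their domains."""
    return all(h1[q] == h2[q] for q in h1.keys() & h2.keys())

def merge(h1, h2):
    """Merge two compatible histories into a history for the union of domains."""
    assert compatible(h1, h2)
    return {**h1, **h2}

def augment(h, q, t):
    """h(q -> t): the history h augmented with the value t for a node q not in h."""
    assert q not in h
    return {**h, q: t}
```

The vacuous history h∅ is simply the empty dictionary in this reading.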

6.2 Random Process

The random process comprises nsnt independent copies identified by the superscript r. For q∈B, t∈(0,1]∪{∞} and h a realizable history for q, we have a random variable 𝔖h,tr(q) whose value is a subset of 𝗆𝗈𝖽(q). 𝔖h,tr(q) simulates Sr(q) in runs of the algorithm compatible with h in which the value t is assigned to p(q).

  • If q is the 0-sink, then 𝗆𝗈𝖽(q)=∅ and only 𝔖h∅,∞r(q)=∅ is defined.

  • If q is the 1-sink, then 𝗆𝗈𝖽(q)={α∅}, the set containing only the empty assignment, and only 𝔖h∅,1r(q)={α∅} is defined.

  • If q=ite(x,q1,q0), then for every 𝔖h0,t0r(q0), 𝔖h1,t1r(q1) with h1 and h0 realizable and compatible histories for q1 and q0, let h=(h1∪h0)(q0←t0,q1←t1) and t=(1/t0+1/t1)⁻¹. Then

    𝔖h,tr(q)=reduce(𝔖h0,t0r(q0)×{x↦0},t/t0) ∪ reduce(𝔖h1,t1r(q1)×{x↦1},t/t1).
  • If q=q1∨⋯∨qk, then for every 𝔖h1,t1r(q1), …, 𝔖hk,tkr(qk) with realizable and pairwise compatible histories, we define h=(h1∪⋯∪hk)(q1←t1,…,qk←tk), tmin=min(t1,…,tk) and the variable 𝔖^hr(q) that simulates S^r(q) when the history for q is h:

    𝔖^hr(q)=union(q,reduce(𝔖h1,t1r(q1),tmin/t1),…,reduce(𝔖hk,tkr(qk),tmin/tk)).

    For all t≤tmin we define 𝔖h,tr(q)=reduce(𝔖^hr(q),t/tmin).
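The only randomized primitive in the process is reduce. Its code is not spelled out in this section; in FPRASes of this kind it is Bernoulli subsampling, keeping each element of a sample set independently with the given probability. A toy Python sketch under that assumption (names ours):

```python
import random

def reduce(samples, p, rng=random):
    """Keep each element of `samples` independently with probability p.

    Under this reading, if an assignment belongs to the input set with
    probability t, it belongs to the output with probability t * p --
    the invariant behind the equality Pr[alpha in S_{h,t}^r(q)] = t of Lemma 9.
    """
    return {a for a in samples if rng.random() < p}
```

Chaining reduce(S, t/tmin) after a set in which every model appears with probability tmin thus leaves every model present with probability exactly t, as used in the ⋁-node case above.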

We now make two important observations that motivate the random process. Let H(q) be the random variable whose value is the history for q obtained when running approxMCnFBDD_core.

Fact 7.

It holds that (H(q),(S^r(q))r∈R)=(H(q),(𝔖^H(q)r(q))r∈R) and (H(q),p(q),(Sr(q))r∈R)=(H(q),p(q),(𝔖H(q),p(q)r(q))r∈R).

Fact 7 is explained in more detail in the full version of the paper. The equalities should be interpreted as follows: for every (Ar⊆𝗆𝗈𝖽(q))r∈R, the following holds:

Pr[H(q)=h, p(q)=t, and ∀r∈R: Sr(q)=Ar] = Pr[H(q)=h, p(q)=t, and ∀r∈R: 𝔖h,tr(q)=Ar]
Pr[H(q)=h and ∀r∈R: S^r(q)=Ar] = Pr[H(q)=h and ∀r∈R: 𝔖^hr(q)=Ar].

A second key observation is that the variables 𝔖h,tr(q) and 𝔖^hr(q) are independent of H(q′), p(q′), Sr(q′) and S^r(q′). This is because the latter variables come from the algorithm while the former are defined within the random process, and the two do not interact in any way.

Fact 8.

The variables 𝔖h,tr(q) and 𝔖^hr(q) are independent of any combination of the variables H(q′), p(q′), Sr(q′) and S^r(q′), for every q′ (including q′=q).

In the random process we have the correct variant of the equality “Pr[αSr(q)]=p(q)”.

Lemma 9.

For every 𝔖h,tr(q) and α∈𝗆𝗈𝖽(q), it holds that Pr[α∈𝔖h,tr(q)]=t. In addition, if q is a ⋁-node with 𝖼𝗁𝗂𝗅𝖽𝗋𝖾𝗇(q)=(q1,…,qk) then Pr[α∈𝔖^hr(q)]=min(h(q1),…,h(qk)).

Lemma 10.

For every 𝔖h,tr(q) with 𝖼𝗁𝗂𝗅𝖽𝗋𝖾𝗇(q)=(q1,…,qk) and α,α′∈𝗆𝗈𝖽(q) with α≠α′, let t″=h(𝗅𝖼𝗉𝗇(𝗉𝖺𝗍𝗁(α,q),𝗉𝖺𝗍𝗁(α′,q))); then we have that Pr[α∈𝔖h,tr(q) and α′∈𝔖h,tr(q)] ≤ t²/t″ and, if q is a ⋁-node, Pr[α∈𝔖^hr(q) and α′∈𝔖^hr(q)] ≤ min(h(q1),…,h(qk))²/t″.

Lemmas 9 and 10 are proved in the appendix. Compared to “Pr[α∈Sr(q)]=p(q)”, there is nothing ill-defined about these lemmas, since t, t″ and the h(qi)'s are fixed real numbers.

7 Analysis

We now conduct the analysis of approxMCnFBDD. The hardest part to analyze is the core algorithm approxMCnFBDD_core, for which we will prove the following.

Lemma 11.

Let B be a 1-complete, 0-reduced and alternating nFBDD over n variables. Let m=maxi|Li|, ε>0, and κ=ε/(1+ε). If ns ≥ 4n/κ², nt ≥ 8ln(16|B|) and θ = 16nsnt(1+κ)|B|, then approxMCnFBDD_core(B,n,ns,nt,θ) runs in time O(nsnt·log(nt)·θ·|B|²) and returns 𝖾𝗌𝗍 with the guarantee Pr[𝖾𝗌𝗍 ∉ (1±ε)|B⁻¹(1)|] ≤ 1/4.

Our main result is obtained by decreasing this 1/4 down to any δ>0 with the median technique.

See 1

Proof.

Let 𝖾𝗌𝗍1,…,𝖾𝗌𝗍m be the estimates from m independent calls to approxMCnFBDD_core. Let Xi be the indicator variable that takes value 1 if and only if 𝖾𝗌𝗍i ∉ (1±ε)|B⁻¹(1)|, and define X̄=X1+⋯+Xm. By Lemma 11, E[X̄] ≤ m/4. The Hoeffding bound gives

Pr[𝗆𝖾𝖽𝗂𝖺𝗇1≤i≤m(𝖾𝗌𝗍i) ∉ (1±ε)|B⁻¹(1)|] = Pr[X̄ > m/2] ≤ Pr[X̄−E[X̄] > m/4] ≤ e^(−m/8) ≤ δ.

The running time is O(log(1/δ)) times that of approxMCnFBDD_core(B′,n,ns,nt,θ), where B′ is B after it has been made 1-complete, 0-reduced, and alternating. By Lemma 2, |B′|=O(n|B|²) and B′ is constructed in time O(n|B|²). So each call to approxMCnFBDD_core(B′,n,ns,nt,θ) takes time O(n⁵·|B|⁶·log|B|·ε⁻⁴).
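The median technique used in this proof is generic: any estimator that lands in the desired range with probability at least 3/4 can be boosted to success probability 1−δ by taking the median of O(log(1/δ)) independent runs. A hedged Python sketch (the interface and the constant 8, mirroring e^(−m/8) ≤ δ above, are ours):

```python
import math
import statistics

def median_amplify(estimator, delta, rng):
    """Return the median of m = ceil(8 ln(1/delta)) independent estimates.

    If each call to `estimator` lands in the desired range with probability
    at least 3/4, the Hoeffding bound above shows that the median leaves
    that range with probability at most e**(-m/8) <= delta.
    """
    m = max(1, math.ceil(8 * math.log(1 / delta)))
    return statistics.median(estimator(rng) for _ in range(m))
```

The estimator itself is treated as a black box; in the paper it is one call to approxMCnFBDD_core.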

Recall that approxMCnFBDD_core∞ is approxMCnFBDD_core without the terminating condition of Line 5, that is, the variant where the sets Sr(q) can grow arbitrarily large. Analyzing approxMCnFBDD_core∞ is enough to prove Lemma 11, except for the running-time bound. In particular, it is enough to prove Lemmas 12 and 13. For these lemmas, the setting described in Lemma 11 is assumed.

Lemma 12.

The probability that approxMCnFBDD_core∞(B,n,ns,nt,θ) computes p(q) ∉ (1±κ)/|𝗆𝗈𝖽(q)| for some q∈B is at most 1/8.

Lemma 13.

The probability that approxMCnFBDD_core∞(B,n,ns,nt,θ) constructs sets Sr(q) such that |Sr(q)| ≥ θ for some q∈B is at most 1/8.

Proof of Lemma 11.

Let A = approxMCnFBDD_core(B,n,ns,nt,θ) and A∞ = approxMCnFBDD_core∞(B,n,ns,nt,θ).

Pr_A[𝖾𝗌𝗍 ∉ (1±ε)|B⁻¹(1)|]
= Pr_A[∃r,q: |Sr(q)| ≥ θ] + Pr_A[∀r,q: |Sr(q)| < θ and 𝖾𝗌𝗍 ∉ (1±ε)|B⁻¹(1)|]
≤ Pr_A∞[∃r,q: |Sr(q)| ≥ θ] + Pr_A∞[𝖾𝗌𝗍 ∉ (1±ε)|B⁻¹(1)|]
≤ Pr_A∞[∃r,q: |Sr(q)| ≥ θ] + Pr_A∞[∃q: p(q)⁻¹ ∉ (1±ε)|𝗆𝗈𝖽(q)|]

where q ranges over B's nodes and r ranges over [nsnt]. The parameter κ has been set so that p(q) ∈ (1±κ)/|𝗆𝗈𝖽(q)| implies p(q)⁻¹ ∈ (1±ε)|𝗆𝗈𝖽(q)|, so, using Lemmas 12 and 13:

Pr_A[𝖾𝗌𝗍 ∉ (1±ε)|B⁻¹(1)|] ≤ Pr_A∞[∃r,q: |Sr(q)| ≥ θ] + Pr_A∞[∃q: p(q) ∉ (1±κ)/|𝗆𝗈𝖽(q)|] ≤ 1/8 + 1/8 = 1/4
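The implication used here can be checked directly from the choice κ = ε/(1+ε):

```latex
\frac{1}{1+\kappa} \;=\; \frac{1+\varepsilon}{1+2\varepsilon} \;\ge\; 1-\varepsilon
\qquad\text{and}\qquad
\frac{1}{1-\kappa} \;=\; 1+\varepsilon ,
```

so p(q) ∈ (1±κ)/|𝗆𝗈𝖽(q)| forces p(q)⁻¹ to lie between (1−ε)|𝗆𝗈𝖽(q)| and (1+ε)|𝗆𝗈𝖽(q)|, that is, in (1±ε)|𝗆𝗈𝖽(q)|.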

The algorithm stops whenever the number of samples grows beyond θ, so for the worst-case running time we may assume that every sample set has size less than θ. Each node goes once through estimateAndSample. For a decision node, estimateAndSample takes time O(nsnt·θ). For a ⋁-node q, estimateAndSample calls union nsnt times, does a median of means where it computes the median of nt means of ns integers, and updates the sample sets. Updating the sample sets takes time O(nsnt·θ). Each mean costs O(ns) and the median requires sorting, so the whole median of means is done in O(nsnt·log(nt)) time. For each sample, union tests whether it is a model of the children of q; model checking is a linear-time operation on nFBDDs, so the total cost of one union is O(|𝖼𝗁𝗂𝗅𝖽𝗋𝖾𝗇(q)|·|B|·θ). So the total cost of estimateAndSample over all ⋁-nodes is at most O(nsnt·log(nt)·θ·|B|·Σ_q|𝖼𝗁𝗂𝗅𝖽𝗋𝖾𝗇(q)|) = O(nsnt·log(nt)·θ·|B|²).

It remains to prove Lemmas 12 and 13.

7.1 Proof of Lemma 12

Let Δ(q) denote the interval (1±κ)/|𝗆𝗈𝖽(q)|; note that p(q)∈Δ(q) if and only if p(q)⁻¹ lies in the reciprocal interval |𝗆𝗈𝖽(q)|/(1±κ).

Claim 14.

The event ∃q∈B: p(q)∉Δ(q) occurs if and only if the event
∃q∈L>0: (p(q)∉Δ(q) and for all q′∈𝖽𝖾𝗌𝖼(q), p(q′)∈Δ(q′)) occurs.

Proof.

The “if” direction is trivial. For the other direction, suppose that p(q)∉Δ(q) holds for some q. Let i be the smallest integer such that there is q∈Li with p(q)∉Δ(q). i cannot be 0 because the only node in L0 is the 1-sink and p(1-sink)=1=|𝗆𝗈𝖽(1-sink)|⁻¹∈Δ(1-sink). So q∈L>0 and, by minimality of i, we have that p(q′)∈Δ(q′) for all q′∈𝖽𝖾𝗌𝖼(q).

Pr[∃q∈B: p(q)∉Δ(q)] = Pr[∃q∈L>0: p(q)∉Δ(q) and ∀q′∈𝖽𝖾𝗌𝖼(q), p(q′)∈Δ(q′)]   (1)
≤ Σ_{q∈L>0} Pr[p(q)∉Δ(q) and ∀q′∈𝖽𝖾𝗌𝖼(q), p(q′)∈Δ(q′)], and we denote by P(q) the summand for q.

We bound P(q) from above. If q=ite(x,q1,q0) is a decision node then, by construction, p(q0)∈Δ(q0) and p(q1)∈Δ(q1) implies p(q)⁻¹ = p(q0)⁻¹+p(q1)⁻¹ ∈ (|𝗆𝗈𝖽(q0)|+|𝗆𝗈𝖽(q1)|)/(1±κ) = |𝗆𝗈𝖽(q)|/(1±κ) with probability 1. Thus P(q)=0 for decision nodes and only the case of ⋁-nodes remains.

Going to the random process

To bound P(q) when q is a ⋁-node, we move the analysis to the random process, whose variables we can analyze using Lemmas 9 and 10. Consider the set ℋq of realizable histories for q and denote by H(q)=h the event that the algorithm sets p(q′) to h(q′) for all q′∈𝖽𝖾𝗌𝖼(q).

P(q) = Pr[∃h∈ℋq: H(q)=h and p(q)∉Δ(q) and for all q′∈𝖽𝖾𝗌𝖼(q), p(q′)∈Δ(q′)]
≤ Σ_{h∈ℋq} Pr[H(q)=h and p(q)∉Δ(q) and for all q′∈𝖽𝖾𝗌𝖼(q), p(q′)∈Δ(q′)]

Let ℋ′q be the subset of ℋq where h(q′)∈Δ(q′) holds for every q′∈𝖽𝖾𝗌𝖼(q), then

P(q) ≤ Σ_{h∈ℋ′q} Pr[H(q)=h and p(q)∉Δ(q) and for all q′∈𝖽𝖾𝗌𝖼(q), p(q′)∈Δ(q′)]
= Σ_{h∈ℋ′q} Pr[H(q)=h and p(q)∉Δ(q)].

Let ρ=min(p(q1),…,p(qk)), ρh=min(h(q1),…,h(qk)) and Mj=(1/(ρ·ns))·Σ_{r=jns+1}^{(j+1)ns} |S^r(q)|. Note that ρ is a random variable whereas ρh is a constant. If H(q)=h then the events ρ=ρh and p(q)=min(ρh,ρ̂) both hold, where ρ̂=𝗆𝖾𝖽𝗂𝖺𝗇(M0,…,Mnt−1)⁻¹.

Claim 15.

If H(q)=h then ρ̂∈Δ(q) implies that p(q)∈Δ(q).

Proof.

H(q)=h implies p(q)=min(ρh,ρ̂). If p(q)=ρ̂ holds then, trivially, ρ̂∈Δ(q) implies that p(q)∈Δ(q). If p(q)=ρh occurs then ρ̂∈Δ(q) implies that p(q) ≤ ρ̂ ≤ (1+κ)/|𝗆𝗈𝖽(q)| and, since h∈ℋ′q guarantees that ρh ≥ (1−κ)/max_{j∈[k]}|𝗆𝗈𝖽(qj)| ≥ (1−κ)/|𝗆𝗈𝖽(q)|, we have that p(q)∈Δ(q).

Let 𝔐j,h=(1/(ρh·ns))·Σ_{r=jns+1}^{(j+1)ns} |𝔖^hr(q)|. Using that H(q)=h implies ρ=ρh, together with Fact 7, we find that (H(q),𝗆𝖾𝖽𝗂𝖺𝗇j(Mj))=(H(q),𝗆𝖾𝖽𝗂𝖺𝗇j(𝔐j,H(q))).

Pr[H(q)=h and p(q)∉Δ(q)] ≤ Pr[H(q)=h and ρ̂∉Δ(q)]   (Claim 15)
= Pr[H(q)=h and 𝗆𝖾𝖽𝗂𝖺𝗇0≤j<nt(Mj) ∉ |𝗆𝗈𝖽(q)|/(1±κ)]   (by definition of ρ̂)
= Pr[H(q)=h and 𝗆𝖾𝖽𝗂𝖺𝗇0≤j<nt(𝔐j,h) ∉ |𝗆𝗈𝖽(q)|/(1±κ)]   (Fact 7)
= Pr[H(q)=h]·Pr[𝗆𝖾𝖽𝗂𝖺𝗇0≤j<nt(𝔐j,h) ∉ |𝗆𝗈𝖽(q)|/(1±κ)]   (Fact 8)

We have reached our goal of replacing the algorithm's variables by their counterparts in the random process. Now we bound Pr[𝗆𝖾𝖽𝗂𝖺𝗇j(𝔐j,h) ∉ |𝗆𝗈𝖽(q)|/(1±κ)] using Chebyshev's inequality and the Hoeffding bound.

Variance upper bound

By Lemma 9, the expected value of |𝔖^hr(q)| is μ=ρh·|𝗆𝗈𝖽(q)|. Now for the variance,

Var[|𝔖^hr(q)|] ≤ E[|𝔖^hr(q)|²] = μ + Σ_{α,α′∈𝗆𝗈𝖽(q), α≠α′} Pr[α∈𝔖^hr(q) and α′∈𝔖^hr(q)]
= μ + Σ_{α,α′∈𝗆𝗈𝖽(q), α≠α′} Pr[α′∈𝔖^hr(q) | α∈𝔖^hr(q)]·Pr[α∈𝔖^hr(q)]
≤ μ + Σ_{α,α′∈𝗆𝗈𝖽(q), α≠α′} ρh²/h(𝗅𝖼𝗉𝗇(𝗉𝖺𝗍𝗁(α,q),𝗉𝖺𝗍𝗁(α′,q)))   (Lemmas 9 and 10)

Let 𝒫=𝗉𝖺𝗍𝗁(α,q) and V(𝒫)=(qα0,qα1,qα2,…,qαi−1,q), with qα0=1-sink. Let 𝒫′=𝗉𝖺𝗍𝗁(α′,q) for any α′∈𝗆𝗈𝖽(q) distinct from α. Then 𝗅𝖼𝗉𝗇(𝒫,𝒫′) is one of the qαj. Recall that I(α,q,j) is the set of α′∈𝗆𝗈𝖽(q) such that 𝗅𝖼𝗉𝗇(𝒫,𝒫′)=qαj.

Σ_{α,α′∈𝗆𝗈𝖽(q), α≠α′} ρh²/h(𝗅𝖼𝗉𝗇(𝗉𝖺𝗍𝗁(α,q),𝗉𝖺𝗍𝗁(α′,q))) = Σ_{α∈𝗆𝗈𝖽(q)} Σ_{j∈[0,i−1]} |I(α,q,j)|·ρh²/h(qαj)
≤ Σ_{α∈𝗆𝗈𝖽(q)} Σ_{j∈[0,i−1]} ρh²·|𝗆𝗈𝖽(q)|/(|𝗆𝗈𝖽(qαj)|·h(qαj))   (Lemma 6)

Because h∈ℋ′q and h(qαj) is in Δ(qαj), we have |𝗆𝗈𝖽(qαj)|·h(qαj) ≥ 1−κ.

Σ_{α∈𝗆𝗈𝖽(q)} Σ_{j∈[0,i−1]} ρh²·|𝗆𝗈𝖽(q)|/(|𝗆𝗈𝖽(qαj)|·h(qαj)) ≤ (ρh²/(1−κ))·Σ_{α∈𝗆𝗈𝖽(q)} Σ_{j∈[0,i−1]} |𝗆𝗈𝖽(q)| ≤ μ²·n/(1−κ)

Putting everything together, we conclude that Var[|𝔖^hr(q)|] ≤ μ + μ²·n/(1−κ).

Median of means

We have that E[𝔐j,h]=μ/ρh=|𝗆𝗈𝖽(q)| and, by independence of the variables {𝔖^hr(q)}r,

Var[𝔐j,h] = Σ_{r=jns+1}^{(j+1)ns} Var[|𝔖^hr(q)|]/(ρh²·ns²) ≤ (1/(ρh²·ns))·(μ + μ²·n/(1−κ)) = (1/ns)·(|𝗆𝗈𝖽(q)|/ρh + n·|𝗆𝗈𝖽(q)|²/(1−κ)).

𝔐j,h ∉ |𝗆𝗈𝖽(q)|/(1±κ) occurs if and only if 𝔐j,h−|𝗆𝗈𝖽(q)| < −κ|𝗆𝗈𝖽(q)|/(1+κ) or 𝔐j,h−|𝗆𝗈𝖽(q)| > κ|𝗆𝗈𝖽(q)|/(1−κ), which is subsumed by |𝔐j,h−|𝗆𝗈𝖽(q)|| > κ|𝗆𝗈𝖽(q)|/(1+κ). So Chebyshev's inequality gives

Pr[𝔐j,h ∉ |𝗆𝗈𝖽(q)|/(1±κ)] ≤ Pr[|𝔐j,h−|𝗆𝗈𝖽(q)|| > κ|𝗆𝗈𝖽(q)|/(1+κ)] ≤ ((1+κ)²/(κ²·|𝗆𝗈𝖽(q)|²))·Var[𝔐j,h]
≤ ((1+κ)²/(κ²·ns))·(1/(|𝗆𝗈𝖽(q)|·ρh) + n/(1−κ))
≤ ((1+κ)²/(κ²·ns))·(1/(1−κ) + n/(1−κ))   (because ρh ≥ (1−κ)/max_j|𝗆𝗈𝖽(qj)| ≥ (1−κ)/|𝗆𝗈𝖽(q)|)
≤ 2n/(κ²·ns) ≤ 1/4   ((1+κ)²/(1−κ) decreases to 1 and ns ≥ 4n/κ²)

By taking the median, we decrease this 1/4 upper bound to a much smaller value. Let Ej be the indicator variable taking value 1 if and only if 𝔐j,h ∉ |𝗆𝗈𝖽(q)|/(1±κ), and let Ē=Σ_{j=0}^{nt−1} Ej. We have E[Ē] ≤ nt/4, so the Hoeffding bound gives

Pr[𝗆𝖾𝖽𝗂𝖺𝗇0≤j<nt(𝔐j,h) ∉ |𝗆𝗈𝖽(q)|/(1±κ)] = Pr[Ē > nt/2] ≤ Pr[Ē−E[Ē] ≥ nt/4] ≤ e^(−nt/8) ≤ 1/(16|B|)

where the last inequality comes from nt ≥ 8ln(16|B|). Putting everything together we have

P(q) ≤ Σ_{h∈ℋ′q} (1/(16|B|))·Pr[H(q)=h] ≤ 1/(16|B|).

Used in (1), this gives Pr[∃q∈B: p(q)∉Δ(q)] ≤ 1/16, thus finishing the proof of Lemma 12. We have shown a 1/16 bound instead of the required 1/8 bound in preparation for the proof of Lemma 13.
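The median-of-means estimator analyzed above is easy to state in code. A small Python sketch (variable names ours; `values` plays the role of the ns·nt rescaled sample-set sizes behind the Mj's):

```python
import statistics

def median_of_means(values, ns, nt):
    """Split ns*nt values into nt blocks of ns, average each block,
    and return the median of the nt block averages.

    Chebyshev's inequality bounds each block average's failure probability
    by 1/4; the Hoeffding bound then drives the median's failure
    probability down to e**(-nt/8), as in the proof of Lemma 12.
    """
    assert len(values) == ns * nt
    means = [sum(values[j * ns:(j + 1) * ns]) / ns for j in range(nt)]
    return statistics.median(means)
```

The point of the median step is robustness: a few blocks corrupted by outliers do not move the median, whereas they would move a plain average.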

7.2 Proof of Lemma 13

We first bound Pr[∃r,q: |Sr(q)| ≥ θ] from above by

P1 + P2, where P1 = Pr[∃r,q: |Sr(q)| ≥ θ and ∀q′∈B: p(q′)∈(1±κ)/|𝗆𝗈𝖽(q′)|] and P2 = Pr[∃q∈B: p(q)∉(1±κ)/|𝗆𝗈𝖽(q)|].

We already have a 1/16 upper bound on P2, so we focus on P1.

P1 ≤ Σ_{r,q} Pr[|Sr(q)| ≥ θ and ∀q′∈B: p(q′)∈Δ(q′)]   (union bound)
≤ (1/θ)·Σ_{r,q} E[|Sr(q)|·Π_{q′∈B} 𝟙(p(q′)∈Δ(q′))]   (Markov's inequality)
≤ (1/θ)·Σ_{r,q} E[|Sr(q)|·𝟙(p(q)∈Δ(q))], and we write E(r,q) for E[|Sr(q)|·𝟙(p(q)∈Δ(q))].

To bound E(r,q) we introduce the history of q and move to the variables of the random process. Recall that ℋq is the set of all realizable histories for q.

E(r,q) = Σ_{t∈Δ(q)} E[|Sr(q)|·𝟙(p(q)=t)] = Σ_{h∈ℋq} Σ_{t∈Δ(q)} E[|Sr(q)|·𝟙(p(q)=t and H(q)=h)]
= Σ_{h∈ℋq} Σ_{t∈Δ(q)} E[|𝔖h,tr(q)|·𝟙(p(q)=t and H(q)=h)]   (Fact 7)
= Σ_{h∈ℋq} Σ_{t∈Δ(q)} E[|𝔖h,tr(q)|]·Pr[p(q)=t and H(q)=h]   (Fact 8)
= Σ_{h∈ℋq} Σ_{t∈Δ(q)} t·|𝗆𝗈𝖽(q)|·Pr[p(q)=t and H(q)=h]   (Lemma 9)
≤ Σ_{h∈ℋq} Σ_{t∈Δ(q)} (1+κ)·Pr[p(q)=t and H(q)=h] ≤ 1+κ   (because t ≤ (1+κ)/|𝗆𝗈𝖽(q)|)

It follows that P1 ≤ (1+κ)·nsnt·|B|/θ ≤ 1/16 (because θ ≥ 16(1+κ)nsnt|B|) and therefore Pr[∃r,q: |Sr(q)| ≥ θ] ≤ 1/16 + 1/16 = 1/8. This finishes the proof of Lemma 13.

8 Conclusion

In this paper, we resolved the open problem of designing an FPRAS for #nFBDD. Our work also introduces a new technique to quantify dependence, and it would be interesting to extend this technique to other languages that generalize nFBDD. Another promising direction for future work would be to improve the complexity of the proposed FPRAS to enable practical adoption.

References

  • [1] Antoine Amarilli, Marcelo Arenas, YooJung Choi, Mikaël Monet, Guy Van den Broeck, and Benjie Wang. A circus of circuits: Connections between decision diagrams, circuits, and automata. arXiv preprint arXiv:2404.09674, 2024. doi:10.48550/arXiv.2404.09674.
  • [2] Antoine Amarilli and Florent Capelli. Tractable circuits in database theory. SIGMOD Rec., 53(2):6–20, 2024. doi:10.1145/3685980.3685982.
  • [4] Antoine Amarilli, Florent Capelli, Mikaël Monet, and Pierre Senellart. Connecting knowledge compilation classes and width parameters. Theory Comput. Syst., 64(5):861–914, 2020. doi:10.1007/S00224-019-09930-2.
  • [5] Antoine Amarilli, Timothy van Bremen, and Kuldeep S Meel. Conjunctive queries on probabilistic graphs: The limits of approximability. In 27th International Conference on Database Theory, 2024.
  • [6] Marcelo Arenas, Luis Alberto Croquevielle, Rajesh Jayaram, and Cristian Riveros. Efficient logspace classes for enumeration, counting, and uniform generation. In Proceedings of the 38th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems, PODS 2019, pages 59–73. ACM, 2019. doi:10.1145/3294052.3319704.
  • [7] Marcelo Arenas, Luis Alberto Croquevielle, Rajesh Jayaram, and Cristian Riveros. When is approximate counting for conjunctive queries tractable? In STOC ’21: 53rd Annual ACM SIGACT Symposium on Theory of Computing, pages 1015–1027. ACM, 2021. doi:10.1145/3406325.3451014.
  • [8] Paul Beame and Vincent Liew. New limits for knowledge compilation and applications to exact model counting. arXiv preprint arXiv:1506.02639, 2015. arXiv:1506.02639.
  • [9] Randal E. Bryant. Graph-based algorithms for boolean function manipulation. IEEE Trans. Computers, 35(8):677–691, 1986. doi:10.1109/TC.1986.1676819.
  • [10] Adnan Darwiche and Pierre Marquis. A knowledge compilation map. J. Artif. Intell. Res., 17:229–264, 2002. doi:10.1613/JAIR.989.
  • [11] Daniel Deutch, Nave Frost, Benny Kimelfeld, and Mikaël Monet. Computing the shapley value of facts in query answering. In SIGMOD ’22: International Conference on Management of Data, pages 1570–1583. ACM, 2022. doi:10.1145/3514221.3517912.
  • [12] Vivek Gore, Mark Jerrum, Sampath Kannan, Z. Sweedyk, and Stephen R. Mahaney. A quasi-polynomial-time algorithm for sampling words from a context-free language. Inf. Comput., 134(1):59–74, 1997. doi:10.1006/INCO.1997.2621.
  • [13] Abhay Kumar Jha and Dan Suciu. On the tractability of query compilation and bounded treewidth. In 15th International Conference on Database Theory, ICDT ’12, pages 249–261. ACM, 2012. doi:10.1145/2274576.2274603.
  • [14] Abhay Kumar Jha and Dan Suciu. Knowledge compilation meets database theory: Compiling queries to decision diagrams. Theory Comput. Syst., 52(3):403–440, 2013. doi:10.1007/S00224-012-9392-5.
  • [15] Sheldon B. Akers Jr. Binary decision diagrams. IEEE Trans. Computers, 27(6):509–516, 1978. doi:10.1109/TC.1978.1675141.
  • [16] Sampath Kannan, Z. Sweedyk, and Stephen R. Mahaney. Counting and random generation of strings in regular languages. In Proceedings of the Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 551–557. ACM/SIAM, 1995. URL: http://dl.acm.org/citation.cfm?id=313651.313803.
  • [17] Richard M. Karp and Michael Luby. Monte-carlo algorithms for enumeration and reliability problems. In 24th Annual Symposium on Foundations of Computer Science, Tucson, Arizona, USA, 7-9 November 1983, pages 56–64. IEEE Computer Society, 1983. doi:10.1109/SFCS.1983.35.
  • [18] Kuldeep S. Meel, Sourav Chakraborty, and Umang Mathur. A faster FPRAS for #nfa. Proc. ACM Manag. Data, 2(2):112, 2024. doi:10.1145/3651613.
  • [19] Stefan Mengel. Counting, Knowledge Compilation and Applications. PhD thesis, Université d’Artois, 2021.
  • [20] Mikaël Monet. Solving a special case of the intensional vs extensional conjecture in probabilistic databases. In Proceedings of the 39th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems, PODS 2020, pages 149–163. ACM, 2020. doi:10.1145/3375395.3387642.
  • [21] Mikaël Monet and Dan Olteanu. Towards deterministic decomposable circuits for safe queries. In Proceedings of the 12th Alberto Mendelzon International Workshop on Foundations of Data Management, volume 2100 of CEUR Workshop Proceedings. CEUR-WS.org, 2018. URL: https://ceur-ws.org/Vol-2100/paper19.pdf.
  • [22] Leslie G. Valiant. The complexity of enumeration and reliability problems. SIAM J. Comput., 8(3):410–421, 1979. doi:10.1137/0208032.
  • [23] Ingo Wegener. Branching Programs and Binary Decision Diagrams. SIAM, 2000. URL: http://ls2-www.cs.uni-dortmund.de/monographs/bdd/.

Appendix A Appendix

See 9

Proof.

Let q∈Li. We proceed by induction on i. The base case i=0 is immediate since 𝔖h∅,1r(1-sink)={α∅} and 𝔖h∅,∞r(0-sink)=∅ are the only variables for L0. Now let i>0, q∈Li, and suppose that the statement holds for all 𝔖h′,t′r(q′) and α′∈𝗆𝗈𝖽(q′) with q′∈L<i. If i is odd then q is a decision node ite(x,q1,q0) with q0 and q1 in Li−1 (because B is alternating). Let b=α(x) and let α′ be the restriction of α to var(α)∖{x}. Then, by induction,

Pr[α∈𝔖h,tr(q)] = Pr[α∈reduce(𝔖hb,tbr(qb),t/tb)×{x↦b}]
= Pr[α′∈𝔖hb,tbr(qb)]·Pr[α′∈reduce(𝔖hb,tbr(qb),t/tb) | α′∈𝔖hb,tbr(qb)] = tb·(t/tb) = t

Now if i is even then q is a ⋁-node with 𝖼𝗁𝗂𝗅𝖽𝗋𝖾𝗇(q)=(q1,…,qk), all in Li−1. Let j be the smallest integer such that α∈𝗆𝗈𝖽(qj) and let tmin=min(h(q1),…,h(qk)). Then

Pr[α∈𝔖^hr(q)] = Pr[α∈reduce(𝔖hj,tjr(qj),tmin/tj)] = Pr[α∈𝔖hj,tjr(qj)]
·Pr[α∈reduce(𝔖hj,tjr(qj),tmin/tj) | α∈𝔖hj,tjr(qj)] = tj·(tmin/tj) = tmin

And for t ≤ tmin we have that Pr[α∈𝔖h,tr(q)] = Pr[α∈reduce(𝔖^hr(q),t/tmin)]

= Pr[α∈𝔖^hr(q)]·Pr[α∈reduce(𝔖^hr(q),t/tmin) | α∈𝔖^hr(q)] = tmin·(t/tmin) = t

See 10

Proof.

We are going to prove a stronger statement, namely, that for every i, for every q and q′ (potentially q=q′) in Li, every t and t′ such that t=t′ when q=q′, every α∈𝗆𝗈𝖽(q) and α′∈𝗆𝗈𝖽(q′), and every pair of compatible histories h and h′ for q and q′, respectively, we have that

Pr[α∈𝔖h,tr(q) and α′∈𝔖h′,t′r(q′)] ≤ t·t′/t″,   (2)

where t″=t if (q,α)=(q′,α′) and t″=h(𝗅𝖼𝗉𝗇(𝗉𝖺𝗍𝗁(α,q),𝗉𝖺𝗍𝗁(α′,q′))) otherwise. In addition, if i is even, so that q and q′ are ⋁-nodes, then

Pr[α∈𝔖^hr(q) and α′∈𝔖^h′r(q′)] ≤ tmin·t′min/t″,   (3)

where tmin=min(h(c) : c∈𝖼𝗁𝗂𝗅𝖽𝗋𝖾𝗇(q)), t′min=min(h′(c) : c∈𝖼𝗁𝗂𝗅𝖽𝗋𝖾𝗇(q′)), and t″=tmin if (q,α)=(q′,α′) and t″=h(𝗅𝖼𝗉𝗇(𝗉𝖺𝗍𝗁(α,q),𝗉𝖺𝗍𝗁(α′,q′))) otherwise.

The inequalities (2) and (3) are straightforward when (α,q)=(α′,q′) because then h=h′ (by compatibility) and t=t′=t″ or tmin=t′min=t″, so we can use Lemma 9. In particular, (2) holds when q=q′=1-sink. Now we assume (α,q)≠(α′,q′) and proceed by induction on i. The base case i=0 holds by the previous remark (note that neither q nor q′ can be the 0-sink because 𝗆𝗈𝖽(q) and 𝗆𝗈𝖽(q′) must be non-empty).

Case 𝒊 odd.

In this case q and q′ are decision nodes. Let q=ite(x,q1,q0) and q′=ite(y,q′1,q′0). Then h=h0∪h1∪{q0←t0,q1←t1} for some compatible histories h0 and h1 for q0 and q1, respectively, and t0 and t1 such that t=(1/t0+1/t1)⁻¹. Similarly, h′=h′0∪h′1∪{q′0←t′0,q′1←t′1}. Let b=α(x) and c=α′(y). Let also β be the restriction of α to var(α)∖{x} and β′ the restriction of α′ to var(α′)∖{y}. Then

Pr[α∈𝔖h,tr(q) and α′∈𝔖h′,t′r(q′)]
= Pr[β∈reduce(𝔖hb,tbr(qb),t/tb), β′∈reduce(𝔖h′c,t′cr(q′c),t′/t′c)
| β∈𝔖hb,tbr(qb), β′∈𝔖h′c,t′cr(q′c)] · Pr[β∈𝔖hb,tbr(qb) and β′∈𝔖h′c,t′cr(q′c)]

Now, because q and q′ are both in Li, neither is an ancestor of the other, and thus the two reduce operations are independent: the output of one reduce does not modify the set fed into the second reduce, nor its output. Thus the probability becomes

Pr[β∈𝔖hb,tbr(qb), β′∈𝔖h′c,t′cr(q′c)] · Pr[β∈reduce(𝔖hb,tbr(qb),t/tb) | β∈𝔖hb,tbr(qb)]
· Pr[β′∈reduce(𝔖h′c,t′cr(q′c),t′/t′c) | β′∈𝔖h′c,t′cr(q′c)]

which is equal to (t·t′/(tb·t′c))·Pr[β∈𝔖hb,tbr(qb) and β′∈𝔖h′c,t′cr(q′c)]. Now, if (β,qb)=(β′,q′c) then tb=t′c (because h and h′ are compatible) and Pr[β∈𝔖hb,tbr(qb) and β′∈𝔖h′c,t′cr(q′c)] = Pr[β∈𝔖hb,tbr(qb)] = tb = t′c by Lemma 9. So Pr[α∈𝔖h,tr(q) and α′∈𝔖h′,t′r(q′)] ≤ t·t′/tb = t·t′/h(qb). By assumption (α,q)≠(α′,q′): if q≠q′ then the two derivation paths 𝗉𝖺𝗍𝗁(α,q) and 𝗉𝖺𝗍𝗁(α′,q′) diverge for the first time at qb, and if q=q′ then x=y and α′(x)=1−α(x) (because (α,q)≠(α′,q′) by assumption). In this case the derivation paths still diverge for the first time at qb: one follows the 0-edge and the other follows the 1-edge. So in both cases 𝗅𝖼𝗉𝗇(𝗉𝖺𝗍𝗁(α,q),𝗉𝖺𝗍𝗁(α′,q′))=qb and we are done. We still have the case (β,qb)≠(β′,q′c) to consider. In this case the paths 𝗉𝖺𝗍𝗁(β,qb) and 𝗉𝖺𝗍𝗁(β′,q′c) diverge for the first time at some node q″ below qb and q′c, so by induction Pr[β∈𝔖hb,tbr(qb) and β′∈𝔖h′c,t′cr(q′c)] ≤ tb·t′c/hb(q″) = tb·t′c/h(q″). So Pr[α∈𝔖h,tr(q) and α′∈𝔖h′,t′r(q′)] ≤ t·t′/h(q″). But q″ is also the first node where 𝗉𝖺𝗍𝗁(α,q) and 𝗉𝖺𝗍𝗁(α′,q′) diverge, hence the result.

Case 𝒊 even.

In this case q and q′ are both ⋁-nodes. Say q=q1∨⋯∨qk and q′=q′1∨⋯∨q′m. Then h=h1∪⋯∪hk∪{q1←t1,…,qk←tk} and h′=h′1∪⋯∪h′m∪{q′1←t′1,…,q′m←t′m}. Let tmin=min(t1,…,tk) and t′min=min(t′1,…,t′m).

Pr[α∈𝔖h,tr(q) and α′∈𝔖h′,t′r(q′)]
= Pr[α∈reduce(𝔖^hr(q),t/tmin) and α′∈reduce(𝔖^h′r(q′),t′/t′min)]
= Pr[α∈reduce(𝔖^hr(q),t/tmin), α′∈reduce(𝔖^h′r(q′),t′/t′min) | α∈𝔖^hr(q), α′∈𝔖^h′r(q′)]
· Pr[α∈𝔖^hr(q) and α′∈𝔖^h′r(q′)]

The reduce events are independent because q and q′ both belong to Li, and thus neither is an ancestor of the other: the output of the first reduce does not influence the output of the second one, even with the knowledge that α∈𝔖^hr(q) and α′∈𝔖^h′r(q′). So the probability becomes

Pr[α∈reduce(𝔖^hr(q),t/tmin) | α∈𝔖^hr(q)] · Pr[α′∈reduce(𝔖^h′r(q′),t′/t′min) | α′∈𝔖^h′r(q′)] · Pr[α∈𝔖^hr(q) and α′∈𝔖^h′r(q′)]

which is (t·t′/(tmin·t′min))·Pr[α∈𝔖^hr(q), α′∈𝔖^h′r(q′)]. Now, there are a unique j and a unique ℓ such that α∈𝔖^hr(q) only if α∈𝔖hj,tjr(qj), and α′∈𝔖^h′r(q′) only if α′∈𝔖h′ℓ,t′ℓr(q′ℓ). Thus Pr[α∈𝔖^hr(q), α′∈𝔖^h′r(q′)] equals

Pr[α∈reduce(𝔖hj,tjr(qj),tmin/tj), α′∈reduce(𝔖h′ℓ,t′ℓr(q′ℓ),t′min/t′ℓ)].

qj and q′ℓ belong to the same layer, so with similar arguments we find that

Pr[α∈𝔖^hr(q) and α′∈𝔖^h′r(q′)] = (tmin·t′min/(tj·t′ℓ))·Pr[α∈𝔖hj,tjr(qj) and α′∈𝔖h′ℓ,t′ℓr(q′ℓ)].

It is possible that (α,qj)=(α′,q′ℓ), but then q≠q′, for otherwise we would have (α,q)=(α′,q′), contradicting our assumption. In the case (α,qj)=(α′,q′ℓ) we use Lemma 9 and find Pr[α∈𝔖hj,tjr(qj) and α′∈𝔖h′ℓ,t′ℓr(q′ℓ)] = Pr[α∈𝔖hj,tjr(qj)] = tj = t′ℓ. So

Pr[α∈𝔖^hr(q) and α′∈𝔖^h′r(q′)] = tmin·t′min/tj

and Pr[α∈𝔖h,tr(q) and α′∈𝔖h′,t′r(q′)] = t·t′/tj. When (α,qj)=(α′,q′ℓ), the paths 𝗉𝖺𝗍𝗁(α,q) and 𝗉𝖺𝗍𝗁(α′,q′) diverge for the first time at qj=q′ℓ. So tj=h(𝗅𝖼𝗉𝗇(𝗉𝖺𝗍𝗁(α,q),𝗉𝖺𝗍𝗁(α′,q′))) and we are done. Now let us assume that (α,qj)≠(α′,q′ℓ); then we use the induction hypothesis and, denoting q″=𝗅𝖼𝗉𝗇(𝗉𝖺𝗍𝗁(α,qj),𝗉𝖺𝗍𝗁(α′,q′ℓ)), we have

Pr[α∈𝔖^hr(q), α′∈𝔖^h′r(q′)] ≤ tmin·t′min/h(q″) and Pr[α∈𝔖h,tr(q), α′∈𝔖h′,t′r(q′)] ≤ t·t′/h(q″)

When (α,qj)≠(α′,q′ℓ), the first node where 𝗉𝖺𝗍𝗁(α,q) and 𝗉𝖺𝗍𝗁(α′,q′) diverge is also the first node where 𝗉𝖺𝗍𝗁(α,qj) and 𝗉𝖺𝗍𝗁(α′,q′ℓ) diverge, namely q″. This finishes the proof of the inductive case.