An FPRAS for Model Counting for Non-Deterministic Read-Once Branching Programs
Abstract
Non-deterministic read-once branching programs, also known as non-deterministic free binary decision diagrams (nFBDD), are a fundamental data structure in computer science for representing Boolean functions. In this paper, we focus on #nFBDD, the problem of model counting for non-deterministic read-once branching programs. The #nFBDD problem is #P-hard, and it is known that there exists a quasi-polynomial randomized approximation scheme for #nFBDD. In this paper, we provide the first FPRAS for #nFBDD. Our result relies on the introduction of new analysis techniques that focus on bounding the dependence of samples.
Keywords and phrases:
Approximate model counting, FPRAS, Knowledge compilation, nFBDD
2012 ACM Subject Classification:
Theory of computation → Approximation algorithms analysis
Acknowledgements:
This research was initiated at Dagstuhl Seminar 24171 on “Automated Synthesis: Functional, Reactive and Beyond” (https://www.dagstuhl.de/24171). We gratefully acknowledge the Schloss Dagstuhl – Leibniz Center for Informatics for providing an excellent environment and support for scientific collaboration.
Funding:
Meel acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), funding reference number RGPIN-2024-05956; de Colnet is supported by the Austrian Science Fund (FWF), ESPRIT project FWF ESP 235. This work was done in part while de Colnet was visiting the University of Toronto. The authors decided to forgo the old convention of alphabetical ordering of authors in favor of a randomized ordering, denoted by ⓡ. The publicly verifiable record of the randomization is available at https://www.aeaweb.org/journals/policies/random-author-order/search
Editors:
Sudeepa Roy and Ahmet Kara
Series and Publisher:

1 Introduction
Read-once branching programs or binary decision diagrams are fundamental data structures in computer science used to represent Boolean functions. Their variants have been discovered multiple times across various sub-fields of computer science, and consequently, they are referred to by many acronyms [15, 9, 23]. In this paper, we focus on non-deterministic read-once branching programs, also known as non-deterministic free binary decision diagrams (nFBDD).
We study the following computational problem:
#nFBDD.
Given a non-deterministic read-once branching program over a set of Boolean variables, compute the number of models of the program, i.e., the number of assignments to the variables that the program maps to 1.
From a database perspective, #nFBDD is an important problem owing to the recent connections between query evaluation and knowledge compilation [13, 14, 21, 20, 2, 1]. The field of knowledge compilation has its origins in the artificial intelligence community, where functions represented in input languages are compiled into target languages that can support queries tractably (often viewed as polynomial time) [10]. The typical queries of interest are satisfiability, entailment, enumeration, and counting.
The target languages in the context of databases have been variants of binary decision diagrams, also referred to as branching programs, and circuits in decomposable negation normal form (DNNF) [2, 1]. A binary decision diagram is a representation of a Boolean function as a directed acyclic graph where the internal nodes correspond to variables and the sinks correspond to the values 0 and 1. One of the most well-studied forms is the ordered binary decision diagram (OBDD), where, along every path from root to leaf, the variables appear in the same order [9]. A generalization of OBDD is nOBDD, where internal nodes can also represent disjunction (∨) gates.
nFBDD are a natural generalization of nOBDDs, as they do not impose restrictions on the ordering of Boolean variables. Since nFBDD do not impose such restrictions, they are known to be exponentially more succinct than nOBDD; that is, there exist functions for which the smallest nOBDD is exponentially larger than the smallest nFBDD [4]. From this viewpoint, nFBDDs occupy a space between nOBDD and DNNF circuits, as they are exponentially more succinct than nOBDDs, while DNNFs are quasi-polynomially more succinct than nFBDD [8, 4].
In the context of databases, the connection between knowledge compilation and query evaluation has been fruitful, leading to the discovery of both tractable algorithms and lower bounds. Of particular note is the application of the knowledge compilation paradigm in query evaluation over probabilistic databases [14], Shapley value computation [11], the enumeration of query answers, probabilistic graph homomorphism [5], and counting answers to queries [7]. The knowledge compilation-based approach involves first representing the database task as a query over a Boolean function and then demonstrating that the corresponding Boolean function has a tractable representation in a given target language, which also supports the corresponding query in polynomial time [3]. For example, in the context of query evaluation over probabilistic databases, one can show that the problem of probabilistic query evaluation can be represented as a weighted model counting problem over nOBDD when the underlying query is a path query [5]. Since there is a fully polynomial-time randomized approximation scheme (FPRAS) for the problem of model counting over nOBDD [6], it follows that probabilistic query evaluation for regular path queries over tuple-independent databases admits an FPRAS [5]. In the context of aggregation tasks, the underlying query is often model counting and its variants [19].
The aforementioned strategy makes it desirable to have target languages that are as expressive as possible while still supporting queries such as counting in polynomial time. In this context, the recent flurry of results has been enabled by the breakthrough of Arenas, Croquevielle, Jayaram, and Riveros, who showed that the problem of #nOBDD admits an FPRAS [6]. As mentioned earlier, nOBDD imposes a severe restriction on variable ordering, i.e., along every path from root to leaf, the variable ordering remains the same. nFBDD generalizes nOBDD by alleviating this restriction, thereby enabling succinct representations for several functions that require exponentially large nOBDD. Since nFBDD generalize nOBDD, the #P-hardness of #nFBDD, the problem of model counting over nFBDD, immediately follows. Accordingly, in light of the recent discovery of an FPRAS for #nOBDD, an important open question is whether there exists an FPRAS for #nFBDD. The best known prior result provides a quasi-polynomial time algorithm owing to a reduction of nFBDD to DNNF (and accordingly -programs) [12]. As noted in Section 2.1, the techniques developed in the design of the FPRAS for #nOBDD do not extend to the case of #nFBDD; therefore, the design of an FPRAS for #nFBDD requires the development of new techniques.
The primary contribution of our work is to answer the aforementioned question affirmatively, which is formalized in the following theorem.
Theorem 1.
Let be an nFBDD over variables, and . Algorithm runs in time and returns with the guarantee that
Organization of the paper.
We start with background on nFBDD in Section 2 and a technical overview in Section 3. The different components of the FPRAS are described in Section 4 and the analysis is split into three parts: in Section 5 we introduce the key concept of derivation paths, in Section 6 we describe the particular framework for the analysis, and in Section 7 we go through the proof of the FPRAS guarantees. For space reasons, the proofs of several intermediate results are deferred to the appendix.
2 Background
Given a positive integer and an integer less than , denotes the set and the set . For , and three real numbers with , we use to denote ; similarly, stands for . We sometimes use the special value and, in particular, that equals .
Boolean variables take value 0 (false) or 1 (true). An assignment to a set of Boolean variables is a mapping from to . We sometimes see as a set . We denote by the empty assignment, which corresponds to the empty set. The set of assignments to is denoted . A Boolean function over is a mapping to . The models of are the assignments mapped to by . When not explicit, the set of variables assigned by is denoted by . When , we denote by the assignment to consistent with both and . For and two sets of assignments, we write .
nBDD.
A binary decision diagram (BDD) is a directed acyclic graph (DAG) with a single source node , two sinks labeled and , and where each internal node is labeled by a Boolean variable and has two outgoing edges: the -edge going to the -child and the -edge going to the -child (potentially ). Internal nodes are called decision nodes and are written (if then go to else go to ). A path in the DAG contains a variable if it contains a node . Every variable assignment corresponds to the unique path that starts from the source and, on a decision node, follows the -edge. Non-deterministic BDD (nBDD) also admit guess nodes: unlabeled nodes with arbitrarily many children. When a path for an assignment reaches a guess node, it can proceed to any child, so several paths can correspond to the same assignment in an nBDD. For a node in an nBDD , denotes the set of variables labeling decision nodes reachable from (including ) in the usual sense of graph reachability. We note . computes a Boolean function over whose models are the assignments for which at least one path reaches the -sink. Every node of is itself the source of an nBDD and therefore represents a function over whose set of models we note . So . The function computed by an nBDD is also that computed by the circuit obtained by replacing every decision node by and every guess node with children by . Thus, we call the guess nodes -nodes in this paper. The size of an nBDD , denoted by , is its number of edges.
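For concreteness, the semantics just described can be sketched in a few lines. The encoding below is our own toy convention (tuples for decision nodes, guess nodes, and sinks), and the brute-force counter is exponential; it only serves as a correctness reference for the semantics, not as part of the algorithm of this paper.

```python
# A toy nBDD encoding for illustration; node shapes and names are our own.
# Decision node: ("dec", var, child0, child1); guess node: ("or", [children]);
# sinks: ("sink", 0) and ("sink", 1).
from itertools import product

def evaluates_to_one(node, assignment):
    """Return True if at least one path for `assignment` reaches the 1-sink."""
    kind = node[0]
    if kind == "sink":
        return node[1] == 1
    if kind == "dec":
        _, var, child0, child1 = node
        return evaluates_to_one(child1 if assignment[var] else child0, assignment)
    # guess node: non-determinism is an existential choice over children
    return any(evaluates_to_one(c, assignment) for c in node[1])

def brute_force_count(root, variables):
    """#nFBDD by enumeration -- exponential, only a correctness reference."""
    return sum(
        evaluates_to_one(root, dict(zip(variables, bits)))
        for bits in product([0, 1], repeat=len(variables))
    )

# (x AND y) OR (NOT x AND y) == y : the models over {x, y} are those with y = 1
one, zero = ("sink", 1), ("sink", 0)
dy = ("dec", "y", zero, one)
root = ("or", [("dec", "x", dy, dy)])
assert brute_force_count(root, ["x", "y"]) == 2
```

Note that both branches of the decision on `x` share the node `dy`, illustrating that a BDD is a DAG rather than a tree.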
nFBDD.
An nBDD is free (nFBDD) when every path contains every variable at most once. These are also called read-once non-deterministic branching programs (1-NBP) in the literature. Note that in an nFBDD, variables may appear in different orders depending on the path. When the order of occurrence of the variables is consistent among all paths of the nFBDD, we say that we have an ordered nBDD (nOBDD). We call an nFBDD -complete when, along every path from the source to the -sink, every variable occurs exactly once. We call an nFBDD -reduced when it contains no decision nodes and no -nodes that have the -sink among their children. Technically, -reduced nFBDD cannot represent functions with no models, but such functions are not considered in this paper. An nFBDD is alternating when its source node is a -node, when every -node only has decision nodes for children, and when every decision node has only -nodes and sinks for children.
Lemma 2.
Every nFBDD over variables can be made -complete, -reduced, and alternating in time .
Proof sketch.
First, we make -reduced by repeating the following until reaching a fixed point: replace all nodes by the -sink, remove the -sink from -nodes’ children, and replace all -nodes with no child by the -sink. Doing these replacements bottom-up in takes time and results in a -reduced nFBDD with .
Second, we make alternating w.r.t. the -nodes. Replace every -node that has the -sink as a child by the -sink. At that point, no sink is a child of any -node. Next, for every -node in , if has a parent that is a -node, then remove from ’s children and add all of ’s children to ’s children. Doing this replacement bottom-up yields an nFBDD whose -nodes all have only decision nodes as children. The number of children of each -node is increased by at most so the running time is . Let be the resulting nFBDD. is still -reduced.
Third, we make -complete. For every , let . While there exists such that , choose and replace by the node in the children of . This adds at most decision nodes per original child of . So doing this for all decision nodes takes time and gives a -complete nFBDD which is still -reduced and alternating w.r.t. the nodes.
Finally, to make alternating, we just have to consider every and, if is not a -node or a sink, to replace it by a -node whose unique child is . This takes time . One -node is added for the source of the nFBDD if needed.
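The first (reduction) pass of this proof sketch can be illustrated as a single bottom-up sweep. The dict-and-tuple encoding below is our own toy convention and only covers the reduction step, under the assumption that nodes are listed in a reverse topological order (children before parents, source last).

```python
# A sketch of the reduction pass of Lemma 2 on a toy encoding of our own:
# nodes[i] is ("dec", var, lo, hi), ("or", [children]) or ("sink", b),
# with children listed before parents, so one bottom-up sweep suffices.
def zero_reduce(nodes, zero_id):
    """Rewrite `nodes` so that no decision node has two 0-sink children and
    no OR node keeps the 0-sink among its children."""
    redirected = {}  # node id -> id it has been replaced by

    def resolve(i):
        while i in redirected:
            i = redirected[i]
        return i

    for i, node in enumerate(nodes):
        if node[0] == "dec":
            _, var, lo, hi = node
            lo, hi = resolve(lo), resolve(hi)
            if lo == zero_id and hi == zero_id:
                redirected[i] = zero_id      # a decision over two 0-sinks is 0
            else:
                nodes[i] = ("dec", var, lo, hi)
        elif node[0] == "or":
            kids = [c for c in map(resolve, node[1]) if c != zero_id]
            if not kids:
                redirected[i] = zero_id      # an empty disjunction is false
            else:
                nodes[i] = ("or", kids)
    return resolve(len(nodes) - 1)           # new id of the source
```

For example, on `[("sink", 0), ("sink", 1), ("dec", "x", 0, 0), ("or", [2, 1])]` with `zero_id=0`, the decision node collapses to the 0-sink and the OR node keeps only the 1-sink as child.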
The nodes of a -complete, -reduced, alternating nFBDD are organized in layers . contains the sinks and, for , the layer contains all nodes whose children (except the -sink) are in . We write , and similarly for , and . Note that for all , contains only decision nodes whereas contains only -nodes. Importantly, we assume an arbitrary ordering on the children of the nodes; for every node we have a sequence (not a set) of its children.
FPRAS.
For a counting problem that, given an input of size , aims at computing some integer value , a fully polynomial-time randomized approximation scheme (FPRAS) is an algorithm that, given , , and , runs in time polynomial in , , and , and returns an estimate with the guarantee that, with probability at least , it lies within a multiplicative factor of the value to compute.
In this paper we give an FPRAS for the problem #nFBDD.
#nFBDD
Input: an nFBDD
Output: its number of models
2.1 Related Work
As noted in Section 1, the literature on binary decision diagrams is extensive; therefore, we will focus solely on related results in the context of the model counting problem. The problem of #nFBDD is #P-complete: membership in #P is immediate as every assignment can be evaluated in PTIME, and the #P-hardness follows from the observation that the problem of #DNF, i.e., counting the number of satisfying assignments of Boolean formulas in Disjunctive Normal Form, is #P-hard [22]. Moreover, every DNF can be represented as an nFBDD such that the size of the resulting nFBDD is polynomial in the size of the DNF. Furthermore, it is also known that the problem of #nOBDD is SpanL-complete [6].
Given the #P-hardness, a natural direction of research is to understand the complexity of approximation. The discovery of polynomial-time randomized approximation schemes for #P-hard problems has been of long-standing interest and has led to several theoretically deep and elegant results. One such result was that of Karp and Luby [17] in the context of #DNF, relying on Monte Carlo sampling. Building on Monte Carlo sampling, Kannan, Sampath, and Mahaney [16] proposed a quasi-polynomial running time approximation scheme for #nOBDD. (The result of [16] was stated for regular languages, but the underlying techniques can handle #nOBDD.) In a follow-up work [12], this result was extended to handle context-free grammars by reducing the problem of counting words of a context-free grammar to estimating the support size of multilinear -programs. It is straightforward to see that the same reduction can be applied in the context of #DNNF, implying a quasi-polynomial runtime approximation for #nFBDD. Since then, the question of the existence of a fully polynomial-time randomized approximation scheme for #nFBDD and its generalizations has remained open.
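For intuition, the Karp-Luby Monte Carlo estimator for #DNF mentioned above can be sketched in a few lines. This is a textbook rendering under our own conventions (terms as literal dictionaries), not the presentation of [17] verbatim.

```python
import random

def karp_luby_dnf(terms, n, samples=20_000, rng=random):
    """Estimate the model count of a DNF over n variables 0..n-1.
    `terms` is a list of dicts var -> bool, each a conjunction of literals
    (assumed consistent, i.e., no contradictory literals)."""
    weights = [2 ** (n - len(t)) for t in terms]  # number of models per term
    total = sum(weights)
    hits = 0
    for _ in range(samples):
        # sample a term proportionally to its number of models,
        # then a uniform model of that term
        i = rng.choices(range(len(terms)), weights=weights)[0]
        assignment = dict(terms[i])
        for v in range(n):
            if v not in assignment:
                assignment[v] = rng.random() < 0.5
        # count the sample only if i is the FIRST term it satisfies,
        # so each model of the union is counted exactly once
        first = next(
            j for j, t in enumerate(terms)
            if all(assignment[v] == b for v, b in t.items())
        )
        hits += (first == i)
    return total * hits / samples

# (x0) OR (NOT x0 AND x1): exactly 3 models over 2 variables
est = karp_luby_dnf([{0: True}, {0: False, 1: True}], n=2)
assert abs(est - 3) < 0.3
```

The "first satisfied term" rule is the same canonical-representative idea that reappears, in a different guise, in the handling of ∨-nodes later in this paper.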
In a major breakthrough, Arenas et al. [6] provided an FPRAS for #nOBDD. Their technique relied on union of sets estimation à la Karp-Luby and the generation of independent samples via the self-reducibility union property. (The term self-reducibility union property was coined by Meel, Chakraborty, and Mathur [18] to explain the high-level idea of [6].) The self-reducibility union property can be informally stated as follows: the set of models conditioned on a given partial assignment (according to the variable ordering of the given nOBDD) can be expressed as the union of models of the states of the given nOBDD. In a follow-up work [7], Arenas et al. observed that the problem of model counting over structured DNNF (st-DNNF) circuits also admits an FPRAS. In this context, it is worth highlighting that the self-reducibility union property does not hold for nFBDD and that there exists an exponential separation between nFBDD and st-DNNF, i.e., there is a family of functions for which the smallest FBDD are exponentially smaller than the smallest st-DNNF; therefore, the problem of whether there exists an FPRAS for #nFBDD remained open.
3 Technical Overview
Our algorithm proceeds in a bottom-up manner and, for every node of the given nFBDD , keeps: (1) a number which seeks to approximate , and therefore can be used to estimate , and (2) sets of samples , each a subset of , where and are polynomial in , and . A few comments are in order: we keep many () independent sets of samples so as to rely on the median-of-means estimator. As mentioned earlier, our algorithm works in a bottom-up manner; in particular, for a node , we compute using the values of and of its children.
Ideally, we want every model of to be in identically and independently with probability , so that the expected size of is small. However, obtaining i.i.d. samples is computationally expensive, which resulted in quasi-polynomial runtimes in earlier studies [12]. Recent works on FPRAS for nOBDD achieved independence by leveraging the self-reducibility union property [6, 7], but, as remarked in Section 2.1, this property does not hold for nFBDD; therefore, it is not known how to achieve independence among samples without giving up on the desideratum of polynomial time.
The key insight of our approach is to give up the desire for independence altogether; in particular, we do not even ensure pairwise independence, i.e., even for , it may not be the case that . Of course, we do need to quantify the dependence. To discuss it, we first describe how we update and for decision nodes and -nodes.
-
Let . Then we compute and as and , where is the operation that keeps each element of a set with probability .
-
Let be a -node such that (assuming two children for simplicity). Furthermore, for simplicity of exposition and for the sake of high-level intuition, assume . The technical difficulty for -nodes arises from the fact that it is possible that, for a given , we have . Therefore, to ensure that no has an undue advantage even if it lies in the model sets of multiple children, we update and as follows: we first compute and , and then , followed by .
Observe that the usage of ensures that for every , there is exactly one child of such that if then , and therefore, no has undue advantage.
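The two updates just described can be sketched as follows. This is a simplified, self-contained version under our own names, with an assumed predicate `is_model(child, sample)` standing in for model checking against a child's function.

```python
import random

def reduce_set(samples, p, rng=random):
    """The reduce operation: keep each element independently with prob. p."""
    return {s for s in samples if rng.random() < p}

def or_node_update(children, sample_sets, is_model, p=0.5, rng=random):
    """Union update for an OR node, followed by subsampling.
    A sample drawn from child j survives only if j is the FIRST child it
    models, so no assignment gains an advantage from modelling several
    children. `sample_sets[j]` holds the samples kept for children[j];
    each such sample models children[j], so `next` below always succeeds."""
    union = set()
    for j, samples in zip(children, sample_sets):
        for s in samples:
            first = next(c for c in children if is_model(c, s))
            if first == j:
                union.add(s)
    return reduce_set(union, p, rng)
```

For instance, with two children that both model every sample, only the copies attributed to the first child survive the union, mirroring the deduplication described above.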
It is worth re-emphasizing that the crucial departure of our work from earlier efforts is the embrace of dependence. For instance, consider and ; then and will of course be reusing samples , and therefore do not even have pairwise independence. Now, of course, we need to bound the dependence so as to retain any hope of computing from . To this end, we focus on the following quantity of interest: for , which determines the variance of the estimator. In this regard, for every , we can define a derivation path, denoted , where for every -node, is appended with , where is the first child of q such that . The key observation is that our computations ensure that depends on the first node (starting from the -sink) where the derivation paths and diverge. In particular, it turns out:
One might wonder whether the above expression suffices: it does, because the number of pairs whose derivation paths diverge for the first time at can be shown to be bounded by , which suffices to show that the variance of the estimator is bounded by a constant factor of the square of its mean, thus allowing us to use the median-of-means estimator.
We close by remarking that the situation is more nuanced than described above, as is itself a random variable. Although the high-level intuition remains the same, the technical analysis requires a coupling argument based on a carefully defined random process, detailed in Section 6. Simplifying this rigorous technical argument would be an interesting direction for future research.
4 Algorithm
The core of our FPRAS, approxMCnFBDD_core, takes in a -complete, -reduced and alternating nFBDD . ’s nodes are processed in bottom-up order, from the sinks to the source. For each node , the algorithm computes , which seeks to estimate , and polynomially-many subsets of called sample sets: . The algorithm stops after processing the source node and returns . The procedure that computes the content of and the value for is estimateAndSample. Since this procedure uses randomness, the and are random variables. Our FPRAS works towards ensuring that “” holds true for every and . Thus, if is a good estimate of , then should be small in expectation. We put the equality in quotes because, as stated, it does not quite make sense: the left-hand side is a probability, so a fixed number, whereas the right-hand side is a random variable.
Decision nodes and -nodes are handled differently by estimateAndSample. When is a decision node , is computed deterministically from and , and is reduced from using the reduce procedure.
estimateAndSample is more complicated when is a -node. We explain it gradually. For starters, let with its children ordered as follows: , and consider the problem of approximating when a sample set is available for every , with the guarantee that holds for every . Every is a subset of . We compute as follows: for every , is added to if and only if is the first child of for which is a model. Simple computations show that holds for every , and therefore is an unbiased estimate of (i.e., the expected value of the estimate is ).
Now suppose that for every , we only know that for every for some number independent of . Then we normalize the probabilities before computing the union. This is done using the reduce procedure. Let and . We have that holds for every . So , and is an unbiased estimate of .
To find an estimate that is concentrated around , we use the “median of means” technique. Suppose that instead of one sample set we have several sample sets , all verifying . Then define similarly to and similarly to . Each is an estimate of . Say and partition into batches of sets. The median-of-means technique computes the average size over each batch and uses the median of these averages to estimate . The mean computation aims at reducing the variance of the estimate. The parameter can be chosen so that holds true. With the appropriate value for , lies in with high probability (though the estimate is no longer unbiased).
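The median-of-means combinator itself is a generic, well-known device; a minimal sketch (function name ours):

```python
import statistics

def median_of_means(values, num_batches):
    """Split `values` into `num_batches` equal batches, average each batch,
    and return the median of the batch averages. Averaging reduces the
    variance; taking the median makes the estimate robust to the few
    batches whose average lands far from the mean."""
    k = len(values) // num_batches
    means = [
        statistics.fmean(values[b * k:(b + 1) * k])
        for b in range(num_batches)
    ]
    return statistics.median(means)
```

For example, `median_of_means([0] * 9 + [900], 5)` returns `0`: a single outlier corrupts one batch average, but the median of the five averages ignores it, whereas the plain mean would be 90.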
So this is how estimateAndSample works for -nodes. The median of means serves to compute , the inverse of the estimate of , which in turn is used to compute . When is not the source node , we compute sample sets in preparation for the processing of ’s ancestors. For this we reuse and compute .
To ensure a polynomial running time, approxMCnFBDD_core terminates as soon as the number of samples grows too large (Line 5) and returns . This output is erroneous, but we will show that the probability of terminating this way is negligible. For carefully chosen parameters, approxMCnFBDD_core returns a good estimate of with probability larger than . The full FPRAS approxMCnFBDD amplifies this probability to by returning the median output of independent runs of approxMCnFBDD_core.
5 Derivation paths
Models can have several accepting paths in an nFBDD. For a node of and , we map to a canonical accepting path, called the derivation path of for , denoted by . A path is formally represented with a tuple , with a sequence of vertices and a sequence of edges.
Definition 3.
For and , the derivation path is defined as follows:
-
If is the -sink then and the only derivation path is .
-
If , let be the restriction of to , then and is plus the -edge of .
-
If with the children ordering , let be the smallest integer between and such that then and is plus the edge between and .
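Definition 3 translates directly into a recursive procedure. The sketch below uses a toy tuple encoding of nodes (our own) and returns only the node sequence of the path, listed from the 1-sink up to the starting node; `is_model(child, assignment)` is an assumed predicate, and the restriction of the assignment to a child's variables is implicit since extra variables in the dictionary are simply ignored.

```python
def derivation_path(node, assignment, is_model):
    """Node sequence of the derivation path of Definition 3, from the
    1-sink up to `node`. At a decision node follow the branch dictated by
    the assignment; at an OR node follow the FIRST child of which the
    (restricted) assignment is a model."""
    if node[0] == "sink":                  # base case: the 1-sink
        return [node]
    if node[0] == "dec":                   # ("dec", var, child0, child1)
        _, var, child0, child1 = node
        child = child1 if assignment[var] else child0
    else:                                  # ("or", [children]): first model
        child = next(c for c in node[1] if is_model(c, assignment))
    return derivation_path(child, assignment, is_model) + [node]
```

The "first child" rule is what makes the path canonical: even when several children of an OR node accept the assignment, exactly one derivation path is selected.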
Our algorithm constructs sample sets in a way that respects derivation paths. That is, an assignment may end up in only if it is derived through .
Lemma 4.
Let and , let with the -sink and . For every , let be the restriction of to . In a run of approxMCnFBDD_core, holds only if holds for every .
Proof.
holds by construction. Now consider , it is sufficient to show that only if .
-
If then and . Looking at estimateAndSample for decision nodes, one sees that only if , so only if .
-
If is a -node with then and there is an such that . Observe that only if – so only if is – for the smallest such that ; so only if . The definition of implies that .
Given two derivation paths and , we call their last common prefix node, denoted by , the deepest node where the two paths diverge, that is, the first node contained in both paths from which they follow different edges. Note that if and are consistent up to node , and , and follows the -edge while follows the -edge, then the two paths diverge at even though they both contain .
Definition 5.
Let and be two derivation paths. The last common prefix node, denoted by , is the node for the biggest such that and .
Note that every derivation path contains the -sink as its first node, so the last common prefix node is well-defined. Let with the -sink and . For every , we define
The following result will play a key role in bounding the variance of estimators in the analysis.
Lemma 6.
Let and with the -sink and . For every , .
Proof.
Let . Let be the restriction of to . By definition, every is of the form for some assignment to . Consider , then every is in . Since the s differ on , the s are pairwise distinct, and therefore is a set of distinct assignments. Considering all possible choices for , we find that is a set of distinct assignments in . Hence .
6 The Framework for the Analysis
We introduce a random process that simulates approxMCnFBDD_core. Our intuition is that, for every , a statement in the vein of “” should hold. The problem is that this equality makes no sense because is a fixed real value whereas is a random variable. The variables and for different are too dependent on each other, so we use a random process to work with new variables that behave more nicely. The random process simulates several runs of the algorithm for all possible values of the s. There, is simulated by a different variable for each possible run. A coupling argument then allows us to replace by one of these variables, assuming enough knowledge of the algorithm run up to , encoded in what we call a history for .
6.1 History
A history for a set of nodes is a mapping . A history is realizable when there exists a run of that gives the value to for every . Such a run is said to be compatible with . Two histories for and for are compatible when for all . Compatible histories can be merged into a history for . For and , we write to refer to the history augmented with . For , we define the set of its descendants by and . Note that . We only study histories realizable for sets that are closed for , that is, if and is a descendant of , then . Thus we abuse terminology and refer to a history for as a history for . The only history for the sinks is the vacuous history for (sinks have no descendants).
6.2 Random Process
The random process comprises independent copies identified by the superscript . For , and a realizable history for , we have a random variable whose domain is all possible subsets of . simulates in runs of compatible with and where the value is assigned to .
-
If is the -sink, then and only is defined.
-
If is the -sink, then and only is defined.
-
If , then for every , with and realizable and compatible histories for and , let and
-
If then for every , , with realizable and pairwise compatible histories, we define , and the variable that simulates when the history for is
For all we define
We now make a few observations that motivate the interest of the random process. Let be the random variable for the history for obtained when running .
Fact 7.
It holds that and
Fact 7 is explained in more detail in the full version of the paper. The equalities should be interpreted as follows: for every , the following holds:
A second key observation is that the variables and are independent of , , and . This is because the latter variables come from the algorithm while the former are defined within the random process, and the two do not interact in any way.
Fact 8.
The variables and are independent of any combination of , , and for every (including ).
In the random process we have the correct variant of the equality “”.
Lemma 9.
For every and , it holds that . In addition, if is a -node with then .
Lemma 10.
For every with and with , let ; then we have that and, if is a -node, then
7 Analysis
We now conduct the analysis of approxMCnFBDD. The hardest part to analyze is the core algorithm approxMCnFBDD_core, for which we will prove the following.
Lemma 11.
Let be a -complete -reduced and alternating nFBDD over variables. Let , , and . If , and then runs in time and returns with the guarantee
Our main result is obtained by decreasing this error probability down to any with the median technique.
See 1
Proof.
Let be the estimates from independent calls to approxMCnFBDD_core. Let be the indicator variable that takes value if and only if , and define . By Lemma 11, . Hoeffding bound gives
The running time is times that of where is after it has been made -complete, -reduced, and alternating. By Lemma 2, and is constructed in time . So each call to takes time
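The amplification used in this proof is the standard median trick; a generic sketch (function name ours, with `run_once` standing for one call to approxMCnFBDD_core):

```python
import statistics

def amplify_by_median(run_once, num_runs):
    """Boost an estimator that is within the desired error with constant
    probability > 1/2 (e.g., 3/4 as in Lemma 11) to success probability
    1 - delta, using num_runs = O(log(1/delta)) independent runs: the
    median is off only when at least half of the runs are off, which the
    Hoeffding bound makes exponentially unlikely."""
    return statistics.median(run_once() for _ in range(num_runs))
```

For instance, if three out of five runs return a good estimate, the median of the five outputs is guaranteed to be one of the good values, regardless of how wild the two bad outputs are.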
Recall that is approxMCnFBDD_core without the terminating condition of Line 5, that is, where the sets can grow large. Analyzing is enough to prove Lemma 11 without the running time requirements. In particular, it is enough to prove Lemmas 12 and 13. For these lemmas, the setting described in Lemma 11 is assumed.
Lemma 12.
The probability that computes for some is at most .
Lemma 13.
The probability that constructs sets such that for some is at most .
Proof of Lemma 11.
Let .
where ranges over ’s nodes and ranges in . The parameter has been set so that implies so, using Lemmas 12 and 13:
The algorithm stops whenever the number of samples grows beyond , so, for the worst-case running time, the number of samples is less than . Each node goes once through estimateAndSample. For a decision node, estimateAndSample takes time . For a -node , estimateAndSample calls union times, does a median of means where it computes the median of means of integers, and updates the sample sets. Updating the sample sets takes time . Each mean costs and the median requires sorting, so the whole median-of-means step is done in time. For each sample, union tests whether it is a model of the children of ; model checking is a linear-time operation on nFBDD, so the total cost of one union is . So the total cost of estimateAndSample for all -nodes is at most .
7.1 Proof of Lemma 12
Let be the interval and be the interval .
Claim 14.
The event
occurs if and only if the event
occurs.
Proof.
The “if” direction is trivial. For the other direction, suppose that holds for some . Let be the smallest integer such that there is and . cannot be because the only node in is the -sink and . So and, by minimality of , we have that for all .
(1) | ||||
We bound from above. If is a decision node, then by construction, and implies with probability . Thus for decision nodes and only the case of -nodes remains.
Going to the random process
To bound when is a -node, we move the analysis to the random process, whose variables we can analyze using Lemmas 9 and 10. Consider the set of realizable histories for and denote by the event that the algorithm sets to for all .
Let be the subset of where holds for every , then
Let , and . Note that is a random variable whereas is a constant. If then the events and both hold, where .
Claim 15.
If then implies that .
Proof.
implies . If holds then, trivially, implies that . If occurs then implies that and, since guarantees that , we have that . Let . Using that implies and Fact 7, we find that .
(Claim 15) | ||||
(by definition of ) | ||||
(Fact 7) | ||||
(Fact 8) |
We have reached our goal of replacing the algorithm's variables by their counterparts in the random process. Now we bound using Chebyshev's inequality and the Hoeffding bound.
Variance upper bound
Median of means
We have that and, by independence of the variables
occurs if and only if , which is subsumed by . So Chebyshev’s inequality gives
( decreases to 1 and )
By taking the median, we decrease the upper bound to a much smaller value. Let be the indicator variable taking value if and only if , and let . We have , so the Hoeffding bound gives
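The Chebyshev-then-Hoeffding amplification used here is the standard median-of-means scheme: Chebyshev's inequality makes each group mean accurate with constant probability, and a Hoeffding bound on the indicator variables "group mean is bad" makes the median fail with probability exponentially small in the number of groups. A minimal generic sketch (not the paper's estimator; `draw` is a hypothetical sampling oracle):

```python
import random
import statistics

def median_of_means(draw, num_groups, group_size):
    """Median-of-means estimator (generic sketch).

    Average `group_size` independent draws within each of `num_groups`
    groups, then return the median of the group means. Each group mean is
    accurate with constant probability (Chebyshev); the median is accurate
    unless at least half the groups fail, an event whose probability is
    exponentially small in num_groups (Hoeffding).
    """
    means = [
        statistics.fmean(draw() for _ in range(group_size))
        for _ in range(num_groups)
    ]
    return statistics.median(means)
```

The design choice mirrors the proof: increasing the group size shrinks the variance term that Chebyshev sees, while increasing the number of groups drives down the median's failure probability without inflating the variance requirement.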
7.2 Proof of Lemma 13
We first bound from above by
We already have an upper bound on , so we focus on .
(Union bound)
(Markov's inequality)
To bound we introduce the history of and move to the variables of the random process. Recall that is the set of all realizable histories for .
8 Conclusion
In this paper, we resolved the open problem of designing an FPRAS for #nFBDD. Our work also introduces a new technique to quantify dependence, and it would be interesting to extend this technique to other languages that generalize nFBDD. Another promising direction for future work would be to improve the complexity of the proposed FPRAS to enable practical adoption.
References
- [1] Antoine Amarilli, Marcelo Arenas, YooJung Choi, Mikaël Monet, Guy Van den Broeck, and Benjie Wang. A circus of circuits: Connections between decision diagrams, circuits, and automata. arXiv preprint arXiv:2404.09674, 2024. doi:10.48550/arXiv.2404.09674.
- [2] Antoine Amarilli and Florent Capelli. Tractable circuits in database theory. SIGMOD Rec., 53(2):6–20, 2024. doi:10.1145/3685980.3685982.
- [4] Antoine Amarilli, Florent Capelli, Mikaël Monet, and Pierre Senellart. Connecting knowledge compilation classes and width parameters. Theory Comput. Syst., 64(5):861–914, 2020. doi:10.1007/S00224-019-09930-2.
- [5] Antoine Amarilli, Timothy van Bremen, and Kuldeep S. Meel. Conjunctive queries on probabilistic graphs: The limits of approximability. In 27th International Conference on Database Theory, ICDT 2024, 2024.
- [6] Marcelo Arenas, Luis Alberto Croquevielle, Rajesh Jayaram, and Cristian Riveros. Efficient logspace classes for enumeration, counting, and uniform generation. In Proceedings of the 38th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems, PODS 2019, pages 59–73. ACM, 2019. doi:10.1145/3294052.3319704.
- [7] Marcelo Arenas, Luis Alberto Croquevielle, Rajesh Jayaram, and Cristian Riveros. When is approximate counting for conjunctive queries tractable? In STOC ’21: 53rd Annual ACM SIGACT Symposium on Theory of Computing, pages 1015–1027. ACM, 2021. doi:10.1145/3406325.3451014.
- [8] Paul Beame and Vincent Liew. New limits for knowledge compilation and applications to exact model counting. arXiv preprint arXiv:1506.02639, 2015. arXiv:1506.02639.
- [9] Randal E. Bryant. Graph-based algorithms for boolean function manipulation. IEEE Trans. Computers, 35(8):677–691, 1986. doi:10.1109/TC.1986.1676819.
- [10] Adnan Darwiche and Pierre Marquis. A knowledge compilation map. J. Artif. Intell. Res., 17:229–264, 2002. doi:10.1613/JAIR.989.
- [11] Daniel Deutch, Nave Frost, Benny Kimelfeld, and Mikaël Monet. Computing the shapley value of facts in query answering. In SIGMOD ’22: International Conference on Management of Data, pages 1570–1583. ACM, 2022. doi:10.1145/3514221.3517912.
- [12] Vivek Gore, Mark Jerrum, Sampath Kannan, Z. Sweedyk, and Stephen R. Mahaney. A quasi-polynomial-time algorithm for sampling words from a context-free language. Inf. Comput., 134(1):59–74, 1997. doi:10.1006/INCO.1997.2621.
- [13] Abhay Kumar Jha and Dan Suciu. On the tractability of query compilation and bounded treewidth. In 15th International Conference on Database Theory, ICDT ’12, pages 249–261. ACM, 2012. doi:10.1145/2274576.2274603.
- [14] Abhay Kumar Jha and Dan Suciu. Knowledge compilation meets database theory: Compiling queries to decision diagrams. Theory Comput. Syst., 52(3):403–440, 2013. doi:10.1007/S00224-012-9392-5.
- [15] Sheldon B. Akers Jr. Binary decision diagrams. IEEE Trans. Computers, 27(6):509–516, 1978. doi:10.1109/TC.1978.1675141.
- [16] Sampath Kannan, Z. Sweedyk, and Stephen R. Mahaney. Counting and random generation of strings in regular languages. In Proceedings of the Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 551–557. ACM/SIAM, 1995. URL: http://dl.acm.org/citation.cfm?id=313651.313803.
- [17] Richard M. Karp and Michael Luby. Monte-carlo algorithms for enumeration and reliability problems. In 24th Annual Symposium on Foundations of Computer Science, Tucson, Arizona, USA, 7-9 November 1983, pages 56–64. IEEE Computer Society, 1983. doi:10.1109/SFCS.1983.35.
- [18] Kuldeep S. Meel, Sourav Chakraborty, and Umang Mathur. A faster FPRAS for #nfa. Proc. ACM Manag. Data, 2(2):112, 2024. doi:10.1145/3651613.
- [19] Stefan Mengel. Counting, Knowledge Compilation and Applications. PhD thesis, Université d’Artois, 2021.
- [20] Mikaël Monet. Solving a special case of the intensional vs extensional conjecture in probabilistic databases. In Proceedings of the 39th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems, PODS 2020, pages 149–163. ACM, 2020. doi:10.1145/3375395.3387642.
- [21] Mikaël Monet and Dan Olteanu. Towards deterministic decomposable circuits for safe queries. In Proceedings of the 12th Alberto Mendelzon International Workshop on Foundations of Data Management, volume 2100 of CEUR Workshop Proceedings. CEUR-WS.org, 2018. URL: https://ceur-ws.org/Vol-2100/paper19.pdf.
- [22] Leslie G. Valiant. The complexity of enumeration and reliability problems. SIAM J. Comput., 8(3):410–421, 1979. doi:10.1137/0208032.
- [23] Ingo Wegener. Branching Programs and Binary Decision Diagrams. SIAM, 2000. URL: http://ls2-www.cs.uni-dortmund.de/monographs/bdd/.
Appendix A Appendix
See Lemma 9.
Proof.
Let . We proceed by induction on . The base case is immediate since and are the only variables for (and by definition). Now let , , and suppose that the statement holds for all and with . If is odd then is a decision node with and in (because is alternating). Let and let be the restriction of to . Then, by induction,
Now if is even then is a ∨-node with children all in . Let be the smallest integer such that and let . Then
And for we have that
See Lemma 10.
Proof.
We are going to prove a stronger statement, namely, that for every , for every and (potentially ) in , every and such that when , every and , and all compatible histories and for and , respectively, we have that
(2) |
where if and otherwise. In addition, if is even, so that and are ∨-nodes, then
(3) |
where and and if and otherwise.
The inequalities (2) and (3) are straightforward when because then (by compatibility) and or and we can use Lemma 9. In particular (2) holds when . Now we assume and proceed by induction on . The base case holds true by the previous remark (note that neither nor can be the -sink because and must not be empty).
Case odd.
In this case and are decision nodes. Let and . Then for some compatible histories and for and , respectively, and and such that . Similarly, . Let and . Let also be the restriction of to and be the restriction of to . Then
Now, because and are both in , neither is an ancestor of the other, and thus the two calls to reduce are independent: the output of one reduce does not modify the set fed into the second reduce, nor its output. Thus the probability becomes
which is equal to . Now, if then (because and are compatible) and by Lemma 9. So . By assumption, , if then the two derivation paths and diverge for the first time at , and if then and (because by assumption). In this case the derivation paths still diverge for the first time at : one follows the -edge and the other follows the -edge. So in both cases and we are done. We still have to consider the remaining case. In this case the paths and diverge for the first time at some node below , so by induction . So . But is also the first node where and diverge, hence the result.
Case even.
In this case and are both ∨-nodes. Say and . Then and . Let and .
The reduce events are independent because and both belong to and thus neither is an ancestor of the other: the output of the first reduce does not influence the output of the second one, even with the knowledge that and . So the probability becomes
which is . Now, there are a unique and a unique such that if and only if . Thus equals
and belong to the same layer so with similar arguments we find that
It is possible that , but then , for otherwise we would have , contradicting our assumption. In the case we use Lemma 9 and find . So
and . When , the paths and diverge for the first time at . So and we are done. Now let us assume that , then we use the induction hypothesis and, denoting , we have
When , the first node where and diverge is also the first node where and diverge, so . This finishes the proof of the inductive case.