
Sensitivity and Query Complexity Under Uncertainty

Deepu Benson, Institute of Science and Technology, Chinmaya Vishwa Vidyapeeth, Ernakulam, India; Balagopal Komarath, Department of Computer Science and Engineering, IIT Gandhinagar, Gujarat, India; Nikhil Mande, Department of Computer Science, University of Liverpool, UK; Sai Soumya Nalli, Microsoft Research India, Bangalore, India; Jayalal Sarma, Department of Computer Science and Engineering, IIT Madras, Chennai, India; Karteek Sreenivasaiah, Department of Computer Science, University of Liverpool, UK
Abstract

In this paper, we study the query complexity of Boolean functions in the presence of uncertainty, motivated by parallel computation with an unlimited number of processors where inputs are allowed to be unknown. We allow each query to return one of three results: zero, one, or unknown. The output may likewise be zero, one, or unknown, with the constraint that we output “unknown” only when the answer cannot be determined from the revealed input bits. Such an extension of a Boolean function is called its hazard-free extension.

  • We prove an analogue of Huang’s celebrated sensitivity theorem [Annals of Mathematics, 2019] in our model of query complexity with uncertainty.

  • We show that the deterministic query complexity of the hazard-free extension of a Boolean function is at most quadratic in its randomized query complexity and quartic in its quantum query complexity, improving upon the best-known bounds in the Boolean world.

  • We exhibit an exponential gap between the smallest depth (size) of decision trees computing a Boolean function, and those computing its hazard-free extension.

  • We present general methods to convert decision trees for Boolean functions to those for their hazard-free counterparts, and show optimality of this construction. We also parameterize this result by the maximum number of unknown values in the input.

  • We show lower bounds on size complexity of decision trees for hazard-free extensions of Boolean functions in terms of the number of prime implicants and prime implicates of the underlying Boolean function.

Keywords and phrases:
CREW-PRAM, query complexity, decision trees, sensitivity, hazard-free extensions
Funding:
Deepu Benson: Funded by SERB SRG Project SRG/2021/001339.
Copyright and License:
© Deepu Benson, Balagopal Komarath, Nikhil Mande, Sai Soumya Nalli, Jayalal Sarma, and Karteek Sreenivasaiah; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Theory of computation → Oracles and decision trees
Related Version:
This paper merges and extends results from [18] and [5].
Editors:
Paweł Gawrychowski, Filip Mazowiecki, and Michał Skrzypczak

1 Introduction

A single-tape Turing Machine needs at least n steps in the worst case to compute any n-bit function that depends on all of its inputs. One way to achieve faster computation is to use multiple processors in parallel. Parallel computation is modeled using Parallel Random Access Machines (PRAM), originally defined by Fortune and Wyllie [10], in which there are multiple processors and multiple memory cells shared among all processors. In a single time step, each processor can read a fixed number of memory cells, execute one instruction, and write to a fixed number of memory cells. In the real world, access to shared memory has to be mediated using some mechanism to avoid read-write conflicts. A popular mechanism to achieve this is called Concurrent-Read Exclusive-Write (CREW) in which concurrent reads of the same memory cell are allowed, but each cell can only be written to by at most one processor in a single time step. An algorithm that violates this restriction is invalid.

A fundamental problem in such a model of computation is to determine the number of processors and the amount of time required for computing Boolean functions. A Boolean function f : {0,1}ⁿ → {0,1} is pre-determined (the number of input bits is fixed, making this model non-uniform), and the input bits are presented in shared memory locations. The processors have to write the output bit f(x) to a designated shared memory location. For example, consider the Boolean disjunction (logical OR) of all input bits, which outputs 1 if and only if there is at least one 1 among the inputs. There is a simple divide-and-conquer algorithm that computes the OR of n input bits in O(log n) time using n processors by exploiting the associativity of OR. This is essentially a CREW-PRAM algorithm computing the OR of n bits in O(log n) time. Note that each processor in the above algorithm computes a trivial function at each step, namely the OR of two bits. Can we do better if we are allowed to use more complex computations at each step?

Cook and Dwork [7], and Cook, Dwork, and Reischuk [8] answered this question by showing that any CREW-PRAM algorithm that computes the logical OR of n bits needs Ω(log n) time, irrespective of the functions computed by the processors at each step. Their lower bound is independent of the functions allowed at each processor because it really applies to the number of accesses made to the shared memory. If we only care about analyzing the number of memory accesses by an algorithm running on an all-powerful processor, a neat way to think of the computation is to model it as a two-player interactive game between an all-powerful querier and an oracle. They want to compute a Boolean function f : {0,1}ⁿ → {0,1}, known beforehand, on an input x ∈ {0,1}ⁿ. However, the input x is known only to the oracle. The only interaction allowed is for the querier to ask a query i ∈ [n], to which the oracle replies with xᵢ. The query complexity of f is the maximum, over all x, of the number of queries required to determine the answer. The querier's aim is to minimize the number of queries needed in the worst case. This is exactly the computation model studied in an exciting area of computer science simply called “query complexity”.
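As a toy illustration of this game (a sketch of our own, not taken from the referenced works), consider the natural querier strategy for OR: query bits one at a time and stop at the first 1. Against an oracle that keeps answering 0, all n bits must be queried.

```python
# A querier for OR_n in the query model: ask for input bits one at a time and
# stop as soon as a 1 is seen; a worst-case (all-zero) oracle forces n queries.
def query_or(oracle, n):
    queries = 0
    for i in range(n):
        queries += 1
        if oracle(i) == 1:
            return 1, queries  # a single 1 already determines the OR
    return 0, queries          # every bit had to be queried

print(query_or(lambda i: 0, 4))  # -> (0, 4)
```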

The technique used in [7] and [8] involves defining a measure that is equivalent to what is now commonly known as sensitivity of a Boolean function, denoted 𝗌(f). More precisely, they show that the time needed by CREW-PRAM algorithms, irrespective of the instruction set, to compute a Boolean function is asymptotically lower bounded by the logarithm of the sensitivity of the function.

The question of whether a function can always be computed in time at most the logarithm of its sensitivity, which would characterize CREW-PRAM time complexity in terms of sensitivity, remained open. Nisan [20] introduced a generalization of sensitivity called block sensitivity, denoted 𝖻𝗌(f), which is lower bounded by sensitivity, and proved that CREW-PRAM time complexity is asymptotically the same as the logarithm of the block sensitivity of a function. Nisan further related decision tree depth complexity, denoted 𝖣(f), to block sensitivity, providing another characterization of CREW-PRAM time complexity as the logarithm of decision tree depth. However, the block sensitivity of a function can be much higher than its sensitivity, as shown by Rubinstein [22] with an explicit function that witnesses a quadratic gap between these two measures. After research spanning almost three decades, which saw extensive study of relationships between the above parameters and various other parameters (see, for example, [1] and the references therein), such as certificate complexity (denoted 𝖼𝖼(f)) and degree (denoted 𝖽𝖾𝗀(f)), Huang [13], in a breakthrough result, proved that sensitivity and block sensitivity are polynomially related. This is the celebrated sensitivity theorem:

Theorem 1 (Sensitivity theorem for Boolean functions [13]).

For all Boolean functions f : {0,1}ⁿ → {0,1}, 𝖻𝗌(f) = O(𝗌(f)⁴).

By earlier results [20, 21, 3], Huang’s result implies that the measures of sensitivity, block sensitivity, certificate complexity, degree, approximate degree, deterministic/randomized/quantum query complexity are all polynomially equivalent for Boolean functions. This strong connection makes the study of query complexity essentially equivalent to studying the time complexity of CREW-PRAMs. Thus henceforth, we shall predominantly use terminology from query complexity, and also express our results in the context of query complexity.

In this work, we initiate a systematic study to understand the effect of allowing uncertainty among the inputs in the setting of CREW-PRAM. Allowing uncertainty in this setting is easier to understand in the equivalent query model: When an input bit is queried by the querier, the oracle can reply with “uncertain”. If it is possible for the function value to be determined in the presence of such uncertainties, then we would like the querier to output such a value. In what follows, we make the setting more formal.

To model uncertainty we use a classic three-valued logic, namely Kleene's strong logic of indeterminacy [17], usually denoted K3. The logic K3 has three truth values 0, 1, and 𝗎. The behaviour of the third value 𝗎 with respect to the basic Boolean primitives, conjunction (∧), disjunction (∨), and negation (¬), is given in Table 1.

Table 1: Kleene's three-valued logic K3.

∧ | 0 𝗎 1
0 | 0 0 0
𝗎 | 0 𝗎 𝗎
1 | 0 𝗎 1

∨ | 0 𝗎 1
0 | 0 𝗎 1
𝗎 | 𝗎 𝗎 1
1 | 1 1 1

 x | 0 𝗎 1
¬x | 1 𝗎 0

The logic K3 has been used in several other contexts where there is a need to represent and work with unknowns, and it has found wide-ranging applications in computer science. For instance, in relational database theory, SQL implements K3 and uses 𝗎 to represent a NULL value [19]. Perhaps the oldest use of K3 is in modeling hazards that occur in real-world combinational circuits. Recently there has been a series of results studying constructions and complexity of hazard-free circuits [14, 15, 16]. There, 𝗎 is used to represent metastability, an unstable voltage that can resolve to 0 or 1 at a later time.

One way to interpret the basic operations ∧, ∨, and ¬ in K3 is as follows: for a bit b ∈ {0,1}, if the value of the function is guaranteed to be b regardless of all {0,1}-settings to the 𝗎-variables, then the output is b. Otherwise, the output is 𝗎. This interpretation generalizes in a nice way to n-variate Boolean functions. In the literature, this extension is typically called the hazard-free extension of f (see, for instance, [14]), and it is an important concept studied in circuit and switching network theory since the 1950s. The interested reader can refer to [14], and the references therein, for the history and applications of this particular way of extending f to K3. We define this extension formally below.

For a string x ∈ {0,𝗎,1}ⁿ, define the resolutions of x as follows:

𝖱𝖾𝗌(x) := { y ∈ {0,1}ⁿ : yᵢ = xᵢ for all i ∈ [n] with xᵢ ∈ {0,1} }.

That is, 𝖱𝖾𝗌(x) denotes the set of all strings in {0,1}n that are consistent with the {0,1}-valued bits of x. The hazard-free extension of a Boolean function is defined as follows:

Definition 2 (Hazard-free Extensions).

For a Boolean function f : {0,1}ⁿ → {0,1}, we define its hazard-free extension f~ : {0,𝗎,1}ⁿ → {0,𝗎,1} as follows. For an input y ∈ {0,𝗎,1}ⁿ,

f~(y) := b if f(y′) = b for all y′ ∈ 𝖱𝖾𝗌(y), and f~(y) := 𝗎 otherwise.

To understand the motivation behind this definition, consider the instability partial order ≼ defined on {0,𝗎,1} by the relations 𝗎 ≼ 0 and 𝗎 ≼ 1, with 0 and 1 incomparable. This partial order captures the intuition that 𝗎 is less certain than 0 and 1. It extends naturally to elements of {0,𝗎,1}ⁿ: x ≼ y iff xᵢ ≼ yᵢ for all 1 ≤ i ≤ n. A function f′ : {0,𝗎,1}ⁿ → {0,𝗎,1} is natural if f′(x) ≼ f′(y) for all x, y ∈ {0,𝗎,1}ⁿ with x ≼ y, and f′(z) ∈ {0,1} whenever z ∈ {0,1}ⁿ. Intuitively, this property says that the function cannot produce a less certain output on a more certain input, and that if there is no uncertainty in the input, there is no uncertainty in the output. A natural function f′ extends a Boolean function f if f′(x) = f(x) for all x ∈ {0,1}ⁿ. There can be many natural functions that extend a given Boolean function. For two natural functions f′ and f″ that extend f, we say f′ ≼ f″ if f′(x) ≼ f″(x) for all x ∈ {0,𝗎,1}ⁿ; this says that the output of f″ is always at least as certain as the output of f′. An alternative definition of the hazard-free extension of f is then the following: it is the unique function f~ such that f′ ≼ f~ for all natural functions f′ that extend f. That is, the hazard-free extension of a Boolean function is the best we could hope to compute in the presence of uncertainties in the inputs to f.
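Definition 2 is easy to evaluate by brute force, which is handy for small examples. The following sketch (our own, for illustration only) enumerates 𝖱𝖾𝗌(x) and checks whether f is constant on it.

```python
# Brute-force evaluation of the hazard-free extension f~: enumerate all
# resolutions of x and check whether f takes a single value on them.
from itertools import product

def resolutions(x):
    pos = [i for i, v in enumerate(x) if v == "u"]
    for bits in product("01", repeat=len(pos)):
        y = list(x)
        for i, b in zip(pos, bits):
            y[i] = b
        yield "".join(y)

def hazard_free(f, x):
    vals = {f(y) for y in resolutions(x)}
    return vals.pop() if len(vals) == 1 else "u"

xor = lambda y: str(y.count("1") % 2)
or3 = lambda y: "1" if "1" in y else "0"
print(hazard_free(xor, "1u0"))  # -> u: the resolutions 100 and 110 disagree
print(hazard_free(or3, "1u0"))  # -> 1: every resolution contains a 1
```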

We note here that even though we use the term “hazard-free”, there is a fundamental difference between our model of computation and the ones studied in results such as [14, 15, 16]. For hazard-free circuits, the value 𝗎 represents an unstable voltage, and the gates in a circuit are fundamentally unable to detect it. That is, there is no circuit that outputs 1 when the input is 𝗎 and 0 otherwise. However, in our setting, the value 𝗎 is simply another symbol just like 0 or 1, so we can indeed detect and read a 𝗎 value. The restriction that we have to compute the hazard-free extension is a semantic one in this paper, whereas Boolean circuits can only compute natural functions. In other words, we are interested in the query complexity of hazard-free extensions of Boolean functions per se, and we have no notion of metastability in our computation model.

There is a rich body of work on the query complexity of Boolean functions that has established best-possible polynomial relationships among various models such as deterministic, randomized, and quantum models of query complexity, and their interplay with analytical measures such as block sensitivity and certificate complexity (see, for example, [1] and the references therein). We study these relationships in the presence of uncertainty, in particular for hazard-free extensions of Boolean functions. A main goal is to characterize query complexity (equivalently CREW-PRAM time complexity) of hazard-free extensions of Boolean functions using these parameters.

1.1 Our Results

In this subsection, we discuss the results presented in this paper. The organization of the paper follows the structure of this subsection.

1.1.1 Sensitivity Theorem in the Presence of Uncertainty

We prove the sensitivity theorem for Boolean functions in the presence of uncertainties. We define analogues of query complexity, sensitivity, block sensitivity, and certificate complexity called 𝗎-query complexity (denoted 𝖣𝗎(f)), 𝗎-sensitivity (denoted 𝗌𝗎(f)), 𝗎-block sensitivity (denoted 𝖻𝗌𝗎(f)), and 𝗎-certificate complexity (denoted 𝖼𝖼𝗎(f)), respectively. We show that analogues of Cook, Dwork, and Reischuk's [8] lower bound and Nisan's [20] upper bound hold in the presence of uncertainties by adapting their proofs suitably (see the full version [4] for details). Therefore, we can now focus on proving that these parameters are polynomially equivalent in the presence of uncertainties. Huang's proof of the sensitivity theorem for Boolean functions crucially uses the parameter called the degree of a Boolean function. It is unclear how to define an analogue of degree in our setting. However, it turns out that a more classical parameter, the maximum of prime implicant size and prime implicate size, suffices.

Our proof of the sensitivity theorem in the presence of uncertainty is much simpler and more straightforward than the proof of the sensitivity theorem in the classical setting. It raises the question of whether we can find a simpler proof of the classical sensitivity theorem by generalizing our proof to handle an arbitrary upper bound on the number of uncertain values in the input. Note that the classical setting assumes that this number is 0 and we prove the sensitivity theorem by assuming that this number is n, the number of inputs.

Recall that a literal is an input variable or its negation. An implicant (implicate) of a Boolean function f is a subset S of literals such that f is 1 (is 0) on any input that sets all literals in S to 1 (to 0, respectively). A prime implicant (prime implicate) is an implicant (implicate) no proper subset of which is an implicant (implicate, respectively), i.e., one that is minimal w.r.t. set inclusion. The size of a prime implicant or prime implicate is the size of the set. For example, f = x₁ ∨ (x₂ ∧ x₃) has prime implicants {x₁} and {x₂, x₃}, and prime implicates {x₁, x₂} and {x₁, x₃}. Prime implicants and prime implicates of Boolean functions are widely studied in electronic circuit design and Boolean function analysis.

Theorem 3 (Sensitivity theorem for hazard-free extensions of Boolean functions).

Let f : {0,1}ⁿ → {0,1} be a Boolean function and let k₁ and k₂ be the sizes of a largest prime implicant and a largest prime implicate of f. Then, the parameters 𝗌𝗎(f), 𝖻𝗌𝗎(f), 𝖣𝗎(f), 𝖼𝖼𝗎(f), and max{k₁, k₂} are linearly equivalent.

We note here that while Huang [13] showed 𝖻𝗌(f) = O(𝗌(f)⁴) for all Boolean f, our result shows that, in the presence of uncertainty, block sensitivity and sensitivity are in fact linearly related to each other.

1.1.2 Relationships

Clearly, the query complexity of the hazard-free extension of a Boolean function f cannot be smaller than that of f itself. Can the query complexity of the hazard-free extension of a Boolean function be much more than the query complexity of the function itself? For monotone functions, we show that the answer is no. In fact, this also holds for randomized query complexity (denoted by 𝖱(f) and 𝖱𝗎(f)) and quantum query complexity (denoted by 𝖰(f) and 𝖰𝗎(f)).

Lemma 4.

Let f : {0,1}ⁿ → {0,1} be a monotone Boolean function. Then we have

𝖣𝗎(f) = Θ(𝖣(f)), 𝖱𝗎(f) = Θ(𝖱(f)), and 𝖰𝗎(f) = Θ(𝖰(f)).

A natural question to ask is whether the model we are considering is non-degenerate. In other words, do all hazard-free extensions of Boolean functions have large query complexity? Using Lemma 4, we can show that there are functions that are easy to compute even in the presence of uncertainties. The following monotone variant of the 𝖬𝖴𝖷 function (defined below) by Wegener [24] is sufficient.

Definition 5 ([24]).

For an even integer n > 0, define 𝗆𝖬𝖴𝖷n as follows. The function 𝗆𝖬𝖴𝖷n is defined on n + (n choose n/2) variables (which is Θ(2ⁿ/√n)), where the latter (n choose n/2) variables are indexed by the n-bit strings of Hamming weight exactly n/2. For (x, y) ∈ {0,1}^(n + (n choose n/2)), define

𝗆𝖬𝖴𝖷n(x, y) := 0 if |x| < n/2; 1 if |x| > n/2; and y_x otherwise,

where |x| denotes the Hamming weight of x and y_x is the data bit indexed by x.

It is easy to see that this function is monotone and has query complexity n + 1. We also exhibit a non-monotone, non-degenerate n-variate function whose hazard-free extension has O(log n) query complexity (see the full version [4] for details).
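For concreteness, here is a small sketch of 𝗆𝖬𝖴𝖷 (the ordering of the data-bit indices below is our own choice; any fixed ordering of the weight-n/2 strings works):

```python
# Wegener's monotone multiplexer: 0 or 1 when the selector x has Hamming weight
# below or above n/2, and the data bit indexed by x when the weight is exactly n/2.
from itertools import combinations

def mmux(n, x, y):
    index = ["".join("1" if i in c else "0" for i in range(n))
             for c in combinations(range(n), n // 2)]
    w = x.count("1")
    if w < n // 2:
        return "0"
    if w > n // 2:
        return "1"
    return y[index.index(x)]

print(mmux(2, "10", "01"))  # -> 0: the data bit indexed by "10"
```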

For general functions, we show that uncertainty can blow up query complexity exponentially. The Boolean function 𝖬𝖴𝖷n : {0,1}^(n+2ⁿ) → {0,1} defined by

𝖬𝖴𝖷n(s₀, s₁, …, s_{n−1}, (x_b)_{b ∈ {0,1}ⁿ}) := x_{(s₀, …, s_{n−1})}

is a function on n + 2ⁿ inputs that depends on all its inputs and has query complexity n + 1. The inputs sᵢ are called the selector bits and the inputs x_b are called data bits. It is easy to show that any function that depends on all its N input bits must have at least logarithmic (in N) query complexity. Therefore, 𝖬𝖴𝖷n is one of the easiest functions to compute in the query complexity model. We prove that its hazard-free extension is one of the hardest functions in the query complexity model, as the sketch below illustrates for a small case.
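The following brute-force check (our sketch; the convention that the selectors form a binary index is an assumption about bit order) shows the phenomenon behind the lower bound: when all selector bits are 𝗎, every data bit matters.

```python
# When all selectors are u, the hazard-free MUX is 1 iff every data bit is 1,
# so flipping any single data bit to 0 changes the output from 1 to u.
from itertools import product

def resolutions(w):
    pos = [i for i, v in enumerate(w) if v == "u"]
    for bits in product("01", repeat=len(pos)):
        y = list(w)
        for i, b in zip(pos, bits):
            y[i] = b
        yield "".join(y)

def hf_mux(n, s, x):  # s: n selector bits, x: 2^n data bits, over {0, u, 1}
    vals = {xr[int(sr, 2)] for sr in resolutions(s) for xr in resolutions(x)}
    return vals.pop() if len(vals) == 1 else "u"

print(hf_mux(2, "uu", "1111"))  # -> 1
print(hf_mux(2, "uu", "1011"))  # -> u
```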

Theorem 6.
𝖣𝗎(𝖬𝖴𝖷n) = 2ⁿ + n, 𝖱𝗎(𝖬𝖴𝖷n) = Θ(2ⁿ), and 𝖰𝗎(𝖬𝖴𝖷n) = Θ(2^{n/2}).

We also show the following relationships between deterministic, randomized, and quantum query complexities of hazard-free extensions of Boolean functions.

Theorem 7.

For f:{0,1}n{0,1}, we have

𝖣𝗎(f) = O(𝖱𝗎(f)²) and 𝖣𝗎(f) = O(𝖰𝗎(f)⁴).

We remark here that the deterministic-randomized relationship above is better than the best-known cubic relationship in the Boolean world, while the quartic deterministic-quantum relationship above matches the best-known relationship in the Boolean world (see [1]). The key reason we are able to obtain better relationships is that we show that sensitivity, block sensitivity, and certificate complexity are all linearly related in the presence of uncertainty (Theorem 3); this is not the case in the classical Boolean setting. Regarding best possible separations: while a linear relationship between deterministic and randomized query complexities in the presence of uncertainty remains open, a quadratic deterministic-quantum separation follows from Theorem 6 (or from the OR function, which has maximal deterministic query complexity, while Grover's search algorithm [12] offers a quadratic quantum speedup).

It is natural to model query algorithms using decision trees. A decision tree represents the strategy of the querier as a tree. Each internal node is labeled by an input variable and has two outgoing edges, labeled 0 and 1. Computation starts at the root and queries the variable labeling the current node; the outgoing edge labeled by the answer is then taken to reach the next node. The leaves of the tree are labeled with 0 or 1 and represent the answer output by the querier. A decision tree is said to compute a Boolean function f if the querier correctly outputs the value of f on every possible input by following the decision tree from the root to a leaf. The depth of the decision tree represents the worst-case time. The depth of a minimum-depth decision tree that computes f is called the decision tree depth complexity of f, and it is the same as the deterministic query complexity of f.

1.1.3 Decision Tree Size

Decision trees have played an important role in machine learning. The size of a decision tree is an important measure, as large decision trees often suffer from over-fitting. It has long been known that functions that admit small-size decision trees are efficiently PAC-learnable [9]. Moreover, the class of decision trees of large size is not efficiently PAC-learnable, as their VC-dimension is directly proportional to their size. Various ideas to prune decision trees and reduce their size while maintaining reasonable empirical error (error on the training set) are used in practice. For a detailed treatment of the role of decision tree size in learning, the interested reader may refer to [23, Chapter 18]. We denote the decision tree size of a function f by 𝗌𝗂𝗓𝖾(f) and the decision tree size of its hazard-free extension by 𝗌𝗂𝗓𝖾𝗎(f). We show that for the 𝖬𝖴𝖷 function, despite the exponential blow-up in depth from Theorem 6, the size blow-up is only polynomial.

Theorem 8.

2 · 4ⁿ ≤ 𝗌𝗂𝗓𝖾𝗎(𝖬𝖴𝖷n) ≤ 4ⁿ⁺¹ · 3n.

In contrast, we show that there are functions for which the size blow-up is exponential. In particular, the 𝖠𝖭𝖣n function has linear-size Boolean decision trees. However, its hazard-free extension needs exponential-size decision trees.

Theorem 9.

𝗌𝗂𝗓𝖾𝗎(𝖠𝖭𝖣n) = 2ⁿ⁺¹ − 1.

We also show that there are hazard-free extensions of Boolean functions that require trees of size Ω((n choose n/3) · (2n/3 choose n/3)) (see the full version [4] for details). Notice that a ternary tree of depth n can have at most 3ⁿ leaves; this lower bound is smaller than that worst case by only a polynomial factor.

We also show how to construct decision trees for hazard-free extensions of Boolean functions from a decision tree for the underlying Boolean function.

Theorem 10.

For any Boolean function f, we have 2 · 𝗌𝗂𝗓𝖾(f) − 1 ≤ 𝗌𝗂𝗓𝖾𝗎(f) ≤ 2^𝗌𝗂𝗓𝖾(f) − 1.

The tightness of the first inequality is witnessed by the PARITYn function and that of the second inequality is witnessed by the 𝖠𝖭𝖣n function.

We also show that, in the case of hazard-free extensions too, sensitivity plays an important role in the function learning problem. The problem is as follows: We are provided a few input-output pairs and a guarantee that the function is from some family. The goal is to learn the function from as few samples as possible. It is known that a function with sensitivity s is completely specified by its values on a Hamming ball of radius 2s [11]. We prove an analogue for hazard-free extensions of Boolean functions. We refer the reader to the full version [4] for details.

Theorem 11.

The hazard-free extension f~ of a Boolean function f with 𝗌𝗎(f) ≤ s is specified by its values on any Hamming ball of radius 4s in {0,𝗎,1}ⁿ.

1.1.4 Limited Uncertainty

We study computation in the presence of only a limited amount of uncertainty by introducing a parameter k that limits the number of bits for which the oracle can respond 𝗎. For metastability-containing electronic circuits, it is known that assuming only limited metastability allows constructing circuits that are significantly smaller [14]. We show a similar effect on decision tree size and query complexity when uncertainty is limited. See the full version [4] for details.

Theorem 12.

Let T be a Boolean decision tree of size s and depth d for f. Then, there exists a decision tree of size at most s^{2^{k+1}} − 1 and depth at most 2ᵏ · d for f~, provided that the input is guaranteed to have at most k positions with value 𝗎.

For settings in which k is a small constant, observe that the decision tree size is polynomial in the size of the Boolean decision tree. If k is considered a parameter, observe that the depth is fixed-parameter tractable, in the language of parameterized complexity theory.

2 Preliminaries

Definition 13 (𝘂-query complexity).

Let f : {0,1}ⁿ → {0,1} be a Boolean function, and let f~ be its hazard-free extension. A deterministic decision tree, also called a deterministic query algorithm, computing f~ is a ternary tree T whose leaf nodes are labeled by elements of {0,𝗎,1}, and whose internal nodes are each labeled by a variable xᵢ, i ∈ [n], with three outgoing edges labeled 0, 1, and 𝗎. On an input x ∈ {0,𝗎,1}ⁿ, the tree's computation proceeds from the root down to a leaf as follows: from a node labeled xᵢ, we take the outgoing edge labeled by the value of xᵢ, until we reach a leaf. The label of the leaf is the output of the tree, T(x).

We say that T computes f~ if T(x) = f~(x) for all x ∈ {0,𝗎,1}ⁿ. The deterministic 𝗎-query complexity of f, denoted 𝖣𝗎(f), is defined as

𝖣𝗎(f) := min_T depth(T),

where the minimization is over all deterministic decision trees T that compute f~.
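As a small concrete example (the tuple encoding of trees is our own), here is a depth-2 ternary decision tree computing the hazard-free extension of 𝖠𝖭𝖣 on two bits, together with the evaluation procedure just described:

```python
# Nodes are ("node", i, t0, t1, tu); leaves are ("leaf", v) with v in {0, u, 1}.
T = ("node", 0,
     ("leaf", "0"),                                             # x0 = 0
     ("node", 1, ("leaf", "0"), ("leaf", "1"), ("leaf", "u")),  # x0 = 1
     ("node", 1, ("leaf", "0"), ("leaf", "u"), ("leaf", "u")))  # x0 = u

def evaluate(t, x):
    while t[0] == "node":
        _, i, t0, t1, tu = t
        t = {"0": t0, "1": t1, "u": tu}[x[i]]  # follow the answered edge
    return t[1]

assert evaluate(T, "0u") == "0"  # a 0 forces AND to 0, despite the u
assert evaluate(T, "1u") == "u"  # both resolutions 10 and 11 are possible
```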

Definition 14 (𝘂-sensitivity).

Let f : {0,1}ⁿ → {0,1} be a Boolean function, and let f~ be its hazard-free extension. For an x ∈ {0,𝗎,1}ⁿ, we define the 𝗎-sensitivity of f at x as

𝗌𝗎(f, x) = |{ i ∈ [n] : there exists y ∈ {0,𝗎,1}ⁿ s.t. f~(y) ≠ f~(x) and yⱼ ≠ xⱼ only at j = i }|.

The elements i of this set are called the 𝗎-sensitive bits of x for f. The 𝗎-sensitivity of f, denoted 𝗌𝗎(f), is max_{x ∈ {0,𝗎,1}ⁿ} 𝗌𝗎(f, x).

Definition 15 (𝘂-block sensitivity).

Let f : {0,1}ⁿ → {0,1} be a Boolean function and let f~ be its hazard-free extension. For x ∈ {0,𝗎,1}ⁿ, the 𝗎-block sensitivity of f at x is defined as the maximum k such that there are disjoint subsets B₁, B₂, …, Bₖ ⊆ [n] and, for each i ∈ [k], a y ∈ {0,𝗎,1}ⁿ such that f~(y) ≠ f~(x) and y differs from x at exactly the positions in Bᵢ. Each Bᵢ is called a 𝗎-sensitive block of f on input x. The 𝗎-block sensitivity of f, denoted 𝖻𝗌𝗎(f), is then defined as the maximum 𝗎-block sensitivity of f over all x.

For b ∈ {0,1}, we use 𝖻𝗌𝗎,b(f) to denote the maximum 𝗎-block sensitivity of f at x over all x ∈ f~⁻¹(b). For a string x ∈ {0,𝗎,1}ⁿ and a set B ⊆ [n] that is a sensitive block of f~ at x, we abuse notation and use x^B to denote an arbitrary but fixed string y ∈ {0,𝗎,1}ⁿ that satisfies y_{[n]∖B} = x_{[n]∖B} and f~(y) ≠ f~(x).
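On tiny examples, 𝗎-block sensitivity can be computed by brute force, which may help intuition (a naive sketch of ours, with exponential running time in n):

```python
# Naive u-block sensitivity at x: search over families of disjoint blocks; a
# block B is sensitive if some y differing from x exactly on B changes f~.
from itertools import product, combinations

def u_block_sensitivity(hf, x):  # hf: the hazard-free extension, as a function
    fx = hf(x)
    def sensitive(block):
        for vals in product("01u", repeat=len(block)):
            if any(v == x[i] for v, i in zip(vals, block)):
                continue  # y must differ from x on every position of the block
            y = list(x)
            for v, i in zip(vals, block):
                y[i] = v
            if hf("".join(y)) != fx:
                return True
        return False
    def best(avail):
        k = 0
        for r in range(1, len(avail) + 1):
            for block in combinations(avail, r):
                if sensitive(block):
                    rest = [i for i in avail if i not in block]
                    k = max(k, 1 + best(rest))
        return k
    return best(list(range(len(x))))

hf_and = lambda w: "0" if "0" in w else ("1" if "u" not in w else "u")
print(u_block_sensitivity(hf_and, "uu"))  # -> 2 = max{k1, k2} for AND_2
```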

We now formally define certificate complexity for hazard-free extensions of Boolean functions. We first define partial assignments: a partial assignment on n bits is a string p ∈ {0, 1, 𝗎, ∗}ⁿ, representing partial knowledge of a string in {0,𝗎,1}ⁿ, where the ∗-entries are yet to be determined. We say a string y ∈ {0,𝗎,1}ⁿ is consistent with a partial assignment p ∈ {0, 1, 𝗎, ∗}ⁿ if yᵢ = pᵢ for all i ∈ [n] with pᵢ ≠ ∗.

Definition 16 (𝘂-certificate complexity).

Let f : {0,1}ⁿ → {0,1} and x ∈ {0,𝗎,1}ⁿ. A partial assignment p ∈ {0, 1, 𝗎, ∗}ⁿ is called a certificate for f~ at x if

  • x is consistent with p, and

  • f~(y)=f~(x) for all y consistent with p.

The size of this certificate is |p| := |{ i ∈ [n] : pᵢ ≠ ∗ }|. The domain of p is the set { i ∈ [n] : pᵢ ≠ ∗ }. The certificate complexity of f~ at x ∈ {0,𝗎,1}ⁿ, denoted 𝖼𝖼𝗎(f, x), is the minimum size of a certificate p for f~ at x. The 𝗎-certificate complexity of f~, denoted 𝖼𝖼𝗎(f), is the maximum value of 𝖼𝖼𝗎(f, x) over all x.

In other words, a certificate for f~ at x is a set of variables of x that, once revealed, guarantees that every string consistent with the revealed values has the same f~-value as x.

Definition 17 (Randomized 𝗎-query complexity).

A randomized decision tree is a distribution over deterministic decision trees. We say a randomized decision tree computes f~ with error 1/3 if for all x ∈ {0,𝗎,1}ⁿ, the probability that it outputs f~(x) is at least 2/3. The depth of a randomized decision tree is the maximum depth of a deterministic decision tree in its support. Define the randomized 𝗎-query complexity of f as follows.

𝖱𝗎(f) := min_T depth(T),

where the minimization is over all randomized decision trees T that compute f~ to error at most 1/3.

We refer the reader to [6] for basics of quantum query complexity.

Definition 18 (Quantum 𝗎-query Complexity).

A quantum query algorithm 𝒜 for f~ begins in a fixed initial state |ψ₀⟩ in a finite-dimensional Hilbert space, applies a sequence of unitaries U₀, O_x, U₁, O_x, …, U_T, and performs a measurement. Here, the initial state |ψ₀⟩ and the unitaries U₀, U₁, …, U_T are independent of the input. The unitary O_x represents the “query” operation and acts as follows on each basis state: it maps |i⟩|b⟩|w⟩ to |i⟩|b + xᵢ mod 3⟩|w⟩ for all i ∈ [n] (here xᵢ = 𝗎 is interpreted as xᵢ = 2, and the last register represents workspace that is not affected by the application of a query oracle).

The algorithm then performs a 3-outcome measurement on a designated output qutrit and outputs the observed value.

We say that 𝒜 is a bounded-error quantum query algorithm computing f~ if for all x ∈ {0,𝗎,1}ⁿ the probability that f~(x) is output is at least 2/3. The (bounded-error) quantum 𝗎-query complexity of f~, denoted by 𝖰𝗎(f), is the least number of queries required for a quantum query algorithm to compute f~ with error probability at most 1/3.

3 Sensitivity Theorem in the Presence of Uncertainty

We first show that sensitivity, block sensitivity, certificate complexity, and size of the largest prime implicant/prime implicate are all asymptotically the same. This is a more formal and more precise restatement of Theorem 3.

Theorem 19.

Let f : {0,1}ⁿ → {0,1} be a Boolean function. Let k₁ be the size of a largest prime implicant of f, and k₂ be the size of a largest prime implicate of f. Then, we have:

max{k₁, k₂} ≤ 𝗌𝗎(f) ≤ 𝖻𝗌𝗎(f) ≤ 𝖼𝖼𝗎(f) ≤ k₁ + k₂ − 1.
Proof.

The inequalities 𝗌𝗎(f) ≤ 𝖻𝗌𝗎(f) ≤ 𝖼𝖼𝗎(f) follow from the definitions, as for their Boolean counterparts (see, for example, [6, Proposition 1]).

To show the first inequality, we crucially use our three-valued domain. Let P be a prime implicant of f. Define the input y_P ∈ {0,𝗎,1}ⁿ to be 1 in those positions where P contains the corresponding positive literal, 0 where P contains the corresponding negated literal, and 𝗎 everywhere else. We claim that each index in { i ∈ [n] : xᵢ ∈ P or ¬xᵢ ∈ P } is sensitive for f~ at y_P. To see this, first observe that f~(y_P) = 1, since P being an implicant means f is 1 on all resolutions of y_P. Let xᵢ ∈ P be a positive literal. Observe that if setting the i'th bit of y_P to 0 does not change the value of f~, then changing the i'th bit to 𝗎 would also not change the value of f~. This means P ∖ {xᵢ} would also be an implicant of f, contradicting the fact that P is a prime implicant. The case when xᵢ appears as a negated literal in P is similar. Thus we have 𝗌𝗎(f) ≥ k₁. A similar argument for prime implicates shows that 𝗌𝗎(f) ≥ k₂. This proves the first inequality in the theorem.

To prove the last inequality, let x ∈ {0,𝗎,1}ⁿ be any input to f~. Observe that if f~(x) = 0, then the prover can pick an implicate (of size at most k₂) that has all literals set to 0 in x, and reveal those values to the verifier. If f~(x) = 1, then the prover reveals an implicant (of size at most k₁) whose literals are all set to 1. If f~(x) = 𝗎, then there must exist inputs x⁰, x¹ ∈ 𝖱𝖾𝗌(x) such that f(x⁰) = 0 and f(x¹) = 1. Since both x⁰ and x¹ are resolutions of x, it must hold that xᵢ⁰ = xᵢ¹ = xᵢ for all i ∈ [n] with xᵢ ∈ {0,1}; i.e., x⁰ and x¹ differ from x only in positions where x is 𝗎. Hence, a prime implicant satisfied by x¹ must contain a variable that is 𝗎 in x. Similarly, a prime implicate falsified by x⁰ must contain a variable that is 𝗎 in x. Thus, there exist a prime implicant and a prime implicate of f both of which contain a variable assigned 𝗎 in x. The prover reveals the bits in such a prime implicant and prime implicate. Since every prime implicant and every prime implicate share at least one common variable, the prover reveals at most k₁ + k₂ − 1 values. Why should this convince the verifier? Since x⁰, x¹, and x coincide on positions where x has 0 or 1, it is possible to set the 𝗎's in the revealed prime implicant to values that make the function output 1. Similarly, it is possible to make the function output 0 by filling in values for the 𝗎 positions in the revealed prime implicate. Thus, the verifier can check that there are indeed two valid resolutions that give different outputs.

 Remark 20.

Notice that Theorem 19 shows that in our setting, the parameters sensitivity, block sensitivity, and certificate complexity are equivalent (up to a multiplicative constant) to the largest prime implicant/prime implicate size. In the Boolean world, certificate complexity has a tighter characterization in terms of CNF/DNF width, which is analogous to prime implicant/prime implicate size here. On the other hand, in the Boolean world there is a quadratic separation between sensitivity and block sensitivity [22].

We now proceed to show that, similar to their Boolean counterparts, the deterministic 𝗎-query complexity is polynomially upper bounded by 𝗎-sensitivity, and the deterministic, randomized, and quantum 𝗎-query complexities are all polynomially related to each other. We do this in two parts:

  (i) 𝖣𝗎(f) ≤ 𝖼𝖼𝗎(f) · 𝖻𝗌𝗎(f).

  (ii) 𝖻𝗌𝗎(f) = O(𝖱𝗎(f)) and 𝖻𝗌𝗎(f) = O(𝖰𝗎(f)²).

We start by showing 𝖣𝗎(f) = O(𝖼𝖼𝗎(f) · 𝖻𝗌𝗎(f)). Algorithm 1 is a deterministic query algorithm that achieves this bound, as shown in Theorem 24. Following this, we derive (ii) in Lemma 27. The final relationships among the three 𝗎-query complexities are presented in Theorem 28.

Algorithm 1 𝗎-query algorithm.
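The pseudocode of Algorithm 1 appears in the full version [4]. The following brute-force sketch is our reconstruction of its structure from the line references used in the proofs below (the for loop on Line 4, the choice of a 𝗎-input on Line 5, the certificate checks on Lines 9-13, and the final output on Line 15); all subroutines are naive enumerations, and the helper names are ours.

```python
from itertools import product

STAR = "*"  # an unqueried position

def hf(f, x):  # hazard-free extension f~, by enumerating resolutions of x
    pos = [i for i, v in enumerate(x) if v == "u"]
    vals = set()
    for bits in product("01", repeat=len(pos)):
        y = list(x)
        for i, b in zip(pos, bits):
            y[i] = b
        vals.add(f("".join(y)))
    return vals.pop() if len(vals) == 1 else "u"

def consistent(p):  # all strings in {0,u,1}^n consistent with p
    opts = ["01u" if v == STAR else v for v in p]
    return ("".join(y) for y in product(*opts))

def forces(f, p, b):  # does p contain a b-certificate of f~?
    return all(hf(f, y) == b for y in consistent(p))

def minimal_certificate(f, z):  # greedily drop positions from the full input z
    b, p = hf(f, z), list(z)
    for i in range(len(z)):
        keep, p[i] = p[i], STAR
        if not forces(f, p, b):
            p[i] = keep
    return p

def algorithm1(f, x, rounds):  # rounds = max(bs_u0, bs_u1); x: the oracle's input
    p = [STAR] * len(x)
    for _ in range(rounds):                                    # Line 4
        # by Observation 21, such a z exists while no certificate is contained
        z = next(y for y in consistent(p) if hf(f, y) == "u")  # Line 5
        for i, v in enumerate(minimal_certificate(f, z)):
            if v != STAR:
                p[i] = x[i]  # query the positions of the u-certificate
        for b in "u10":                                        # Lines 9-13
            if forces(f, p, b):
                return b
    return "u"                                                 # Line 15

f_and = lambda y: "1" if set(y) == {"1"} else "0"
print(algorithm1(f_and, "1u", 2))  # -> u
```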

We will need the following observations to prove correctness of the algorithm:

Observation 21.

Let x be a partial assignment that does not contain a b-certificate of f~ for any b ∈ {0,𝗎,1}. Then there exists a z ∈ f~⁻¹(𝗎) that is consistent with x.

Proof.

Since x does not already contain a b-certificate of f~ for any b ∈ {0,𝗎,1}, for each b ∈ {0,𝗎,1} there exists a setting of the ∗-variables in the partial assignment that yields a b-input to f~. Specifically, there must exist some assignment to the undetermined ∗'s that sets an implicant of f to 1, and some assignment that sets an implicate of f to 0. Define the string z to be the same as x except with the ∗'s replaced by 𝗎. It can be observed that f~(z) = 𝗎.

Observation 22.

Let f : {0,1}ⁿ → {0,1}, b ∈ {0,1}, and x ∈ f~⁻¹(b). Let c be a minimal certificate of f~ at x, and let its domain be C. Then for all i ∈ C, we have cᵢ ≠ 𝗎.

Proof.

Assume towards a contradiction that there is an index i ∈ [n] in the domain of a minimal certificate c for f~ at x with cᵢ = 𝗎. By the definition of a certificate and of f~, the partial assignment c′ obtained by removing i from the domain of c is such that all strings x′ ∈ {0,𝗎,1}ⁿ consistent with c′ satisfy f~(x′) = b. This shows that c′ is also a certificate for f~ at x, contradicting the minimality of c.

Lemma 23.

If Algorithm 1 reaches Line 15, then every 1-input of f~ and every 0-input of f~ is inconsistent with the partial assignment x′ at Line 15.

Proof.

Let k denote max(𝖻𝗌𝗎,0(f), 𝖻𝗌𝗎,1(f)), the number of iterations of the for loop on Line 4. Assume the algorithm reached Line 15, and let x′ be the partial assignment constructed by the algorithm when it reaches Line 15.

Suppose, for the sake of contradiction, that there exists an input y ∈ f~⁻¹(1) that is consistent with x′. Since the algorithm reached Line 15, the partial assignment x′ does not contain a b-certificate for any b ∈ {0,𝗎,1}, because otherwise one of the if conditions on Lines 9-13 would have terminated the algorithm. Then, using Observation 21, there must also exist a 𝗎-input consistent with x′. Hence every time Line 5 was executed, there was indeed a z ∈ f~⁻¹(𝗎) consistent with the partial assignment at that point. Suppose the 𝗎-certificates used during the run of the for loop on Line 4 were c₁, …, cₖ, and their respective domains were C₁, …, Cₖ.

The fact that none of the if conditions fired means that every time the oracle was queried, the replies differed from the 𝗎-certificate being queried in at least one index. Let Bᵢ ⊆ Cᵢ be the set of positions in Cᵢ where x′ differs from cᵢ. By the observations above, each Bᵢ is non-empty. Observe that since cᵢ₊₁ is chosen to be consistent with x′, it must be the case that cᵢ₊₁ and x′ agree on all positions in Cᵢ. Hence Bᵢ₊₁ is disjoint from Bᵢ. By the same reasoning, Bᵢ₊₁ is disjoint from every Bⱼ with j ≤ i.

Observe that for each i ∈ [k], there is a setting of the bits in Bᵢ under which the input becomes a 𝗎-input: simply take the setting of these bits from cᵢ, which is a 𝗎-certificate. More formally, for each i ∈ [k], there exists a string αᵢ ∈ {0,𝗎,1}^{|Bᵢ|} such that f~(x′|_{Bᵢ ← αᵢ}) = 𝗎.

Since y is consistent with x′, it agrees with x′ on all positions where x′ ≠ ∗. This means the previous observation holds for y too: for each i ∈ [k], there exists a string αᵢ ∈ {0,𝗎,1}^{|Bᵢ|} such that f~(y|_{Bᵢ ← αᵢ}) = 𝗎. Recall that y ∈ f~⁻¹(1), and hence the sets B₁, …, Bₖ form a collection of disjoint sensitive blocks for f~ at y. Further, since the algorithm has not found a 1-certificate (or 0-certificate), there is some 𝗎-input consistent with x′ by Observation 21. This means there is yet another block Bₖ₊₁, disjoint from the others, that is sensitive for f~ at y. But this is a contradiction, since the maximum number of disjoint sensitive blocks over all 1-inputs of f~ is 𝖻𝗌𝗎,1(f) ≤ k < k + 1.

A nearly identical proof shows that every 0-input is inconsistent with x′.

Theorem 24.

Algorithm 1 correctly computes f~, and makes at most O(𝖼𝖼𝗎(f) · 𝖻𝗌𝗎(f)) queries. Thus 𝖣𝗎(f) = O(𝖼𝖼𝗎(f) · 𝖻𝗌𝗎(f)).

Proof.

If the algorithm outputs a value in {0,1}, then it must have passed the corresponding if condition (either on Line 11 or on Line 13) and is trivially correct. If the algorithm outputs 𝗎, then either the if condition on Line 9 passed, or Line 15 was reached. In the former case, correctness is trivial. In the latter case, from Lemma 23 we conclude that every 0-input and every 1-input of f~ is inconsistent with the partial assignment x′ constructed by the algorithm when it arrives at Line 15. This means that every y ∈ {0,𝗎,1}ⁿ consistent with x′ (in particular, the actual input x) satisfies f~(y) = 𝗎, which concludes the proof of correctness.

The for loop runs for max(𝖻𝗌𝗎,0(f),𝖻𝗌𝗎,1(f))=O(𝖻𝗌𝗎(f)) many iterations, and at most 𝖼𝖼𝗎(f) many bits are queried in each iteration.

We can now conclude that deterministic, randomized, and quantum 𝗎-query complexities are all polynomially related. We use Yao’s minimax principle [25] and the adversary method for lower bounds on quantum query complexity due to Ambainis [2]. We state them below for convenience.

Lemma 25 (Yao’s minimax principle).

For finite sets D, E and a function f : D → E, we have 𝖱(f) ≥ k if and only if there exists a distribution μ : D → [0,1] such that 𝖣μ(f) ≥ k. Here, 𝖣μ(f) is the minimum depth of a deterministic decision tree that computes f with error at most 1/3 when inputs are drawn from the distribution μ.

Theorem 26 ([2, Theorem 5.1]).

Let D, E be finite sets, let n be a positive integer, and let f : Dⁿ → E be a function. Let X, Y be two sets of inputs such that f(x) ≠ f(y) for all (x, y) ∈ X × Y. Let R ⊆ X × Y be such that

  • for every x ∈ X, there are at least m different y ∈ Y with (x, y) ∈ R,

  • for every y ∈ Y, there are at least m′ different x ∈ X with (x, y) ∈ R,

  • for every x ∈ X and i ∈ [n], there are at most ℓ different y ∈ Y such that (x, y) ∈ R and xᵢ ≠ yᵢ,

  • for every y ∈ Y and i ∈ [n], there are at most ℓ′ different x ∈ X such that (x, y) ∈ R and xᵢ ≠ yᵢ.

Then 𝖰(f) = Ω(√(m·m′ / (ℓ·ℓ′))).

We lower bound randomized and quantum 𝗎-query complexities polynomially by 𝗎-block sensitivity.

Lemma 27.

Let f : {0,1}ⁿ → {0,1}. Then,

𝖱𝗎(f) = Ω(𝖻𝗌𝗎(f)) and 𝖰𝗎(f) = Ω(√(𝖻𝗌𝗎(f))).
Proof.

We use Lemma 25 for the randomized lower bound. Let x ∈ {0,𝗎,1}ⁿ be such that 𝖻𝗌𝗎(f) = 𝖻𝗌𝗎(f, x) = k, with corresponding sensitive blocks B₁, …, Bₖ. Define a distribution μ on {0,𝗎,1}ⁿ as follows:

μ(x) = 1/2, and
μ(x^{Bᵢ}) = 1/(2k) for all i ∈ [k].

Towards a contradiction, let T be a deterministic decision tree of cost less than k/10 that computes f~ to error at most 1/3 under the input distribution μ. Let Lx denote the leaf of T reached by the input x. There are now two cases:

  • If the output at L_x is not equal to f~(x), then T errs on x, which contributes an error of 1/2 under μ, a contradiction.

  • If the output at L_x equals f~(x), then since the number of queries on this path is less than k/10, there must exist at least 9k/10 blocks Bᵢ such that no variable in Bᵢ is read along this path; for each such i, the input x^{Bᵢ} also reaches the leaf L_x. Since f~(x^{Bᵢ}) ≠ f~(x), T errs on each of these x^{Bᵢ}, contributing a total error of at least (9k/10) · 1/(2k) = 0.45 under μ, which is a contradiction.

This concludes the randomized lower bound.

For the quantum lower bound we use Theorem 26. Define X = {x}, Y = { x^{Bᵢ} : i ∈ [k] }, and R = X × Y. In the notation of Theorem 26 we have m = k, m′ = 1, ℓ = 1 (since the blocks are disjoint, each index appears in at most one block), and ℓ′ = 1. Theorem 26 then implies 𝖰𝗎(f) = Ω(√k) = Ω(√(𝖻𝗌𝗎(f))).

Theorem 28.

For f : {0,1}ⁿ → {0,1}, we have

𝖣𝗎(f) = O(𝖱𝗎(f)²) and 𝖣𝗎(f) = O(𝖰𝗎(f)⁴).
Proof.

Theorem 24 and Theorem 19 imply

𝖣𝗎(f) = O(𝖼𝖼𝗎(f) · 𝖻𝗌𝗎(f)) = O(𝖻𝗌𝗎(f)²) = O(𝖱𝗎(f)²),

where the final bound is from Lemma 27. Substituting the second bound from Lemma 27 into the middle expression above yields 𝖣𝗎(f) = O(𝖰𝗎(f)⁴).

4 Relationships

We start by proving that for monotone functions, the presence of uncertainty does not make computation significantly harder. Similar relationships are known for monotone functions in the setting of metastability-containing combinational circuits [14].

Proof of Lemma 4.

Any query algorithm for f~ also computes f with at most the same cost, and hence 𝖣(f) ≤ 𝖣𝗎(f). Similarly, we have 𝖱(f) ≤ 𝖱𝗎(f) and 𝖰(f) ≤ 𝖰𝗎(f).

We start with a best (deterministic/randomized/quantum) query algorithm A for f, and an oracle holding an input x ∈ {0,𝗎,1}ⁿ for f~. Now, for b ∈ {0,1}, we define A_b to be the same algorithm as A, except that whenever A queries the j'th bit of its input, it performs the following operation instead:

  1. Query the j'th bit of x; denote the outcome by xⱼ.

  2. If xⱼ ∈ {0,1}, return xⱼ.

  3. If xⱼ = 𝗎, return b.

In the case of quantum query complexity, note that this operation can indeed be implemented quantumly with 2 queries to O_x. The initial query performs the instructions described above, and the second query uncomputes the values from the tuple that we do not need for the remaining part of the computation. Note here that O_x³ = I by definition, and thus O_x² = O_x⁻¹, which is what we need to implement the uncomputation operations.

Let S𝗎 = { i : xᵢ = 𝗎 } be the positions that have 𝗎 in x. Recall that y₀ := x|_{S𝗎 ← 0} (and y₁ := x|_{S𝗎 ← 1}) is the input that has all the 𝗎's in x replaced by 0s (by 1s, respectively). Run A₀ and A₁ (possibly repeated constantly many times each to boost the success probability) to determine the values of f(y₀) and f(y₁) with high probability. If f(y₀) = 1, then we output 1. Else, if f(y₁) = 0, we output 0. Else we have f(y₀) = 0 and f(y₁) = 1, and we output 𝗎.

Correctness.

First observe that for b ∈ {0,1}, all answers to queries in the algorithm A_b are consistent with the input y_b. Next observe that in the poset (equivalently, “subcube”) formed by the resolutions of x, the inputs y₀ and y₁ form the bottom and top elements, respectively. Since f is monotone, if f(y₀) = 1, we can conclude that f is 1 on all resolutions of x. Similarly, when f(y₁) = 0, f must be 0 on all inputs in the poset. The remaining case is f(y₀) = 0 and f(y₁) = 1. In this case, the inputs y₀ and y₁ are themselves resolutions of x with different evaluations of f, and hence the algorithm correctly outputs 𝗎.

By standard boosting of the success probability, repeating A₀ (A₁) a constant number of times and taking a majority vote of the answers (via a Chernoff bound), we can ensure that the success probability of A₀ (A₁) is large enough, say at least 0.9. The algorithm described above then has success probability at least 0.9² = 0.81, and its cost is at most a constant times the cost of A.
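For the deterministic case, the reduction above can be sketched as follows (our illustration, with the query algorithm A modelled as a function that reads input bits through a callback):

```python
# Run a Boolean query algorithm A twice on an uncertain input: once resolving
# every queried u to 0 (computing f(y0)) and once to 1 (computing f(y1)).
def hazard_free_via_monotone(A, oracle):
    f0 = A(lambda j: "0" if oracle(j) == "u" else oracle(j))  # f(y0)
    f1 = A(lambda j: "1" if oracle(j) == "u" else oracle(j))  # f(y1)
    if f0 == "1":
        return "1"   # monotonicity: f is 1 on every resolution of x
    if f1 == "0":
        return "0"   # monotonicity: f is 0 on every resolution of x
    return "u"       # y0 and y1 are resolutions of x with different f-values

# A querier for OR_3: scan until a 1 is found.
A_or = lambda q: "1" if any(q(j) == "1" for j in range(3)) else "0"
x = "0u0"
print(hazard_free_via_monotone(A_or, lambda j: x[j]))  # -> u
```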

It is easy to prove using Theorem 19 that the 𝗎-query complexity of 𝖬𝖴𝖷n is exponentially larger than its query complexity: we claim that 𝗌𝗎(𝖬𝖴𝖷n) ≥ 2ⁿ. Consider the input where all selector bits are 𝗎 and all data bits are 1. Flipping any data bit to zero changes the output from 1 to 𝗎. In the following theorem, we prove a stronger statement. A function is said to be evasive if its query complexity is the maximum possible. We first prove Theorem 6, i.e., that the hazard-free extension of 𝖬𝖴𝖷n is evasive for all n. In fact, we prove a more general theorem: we consider decision trees for the hazard-free extension of 𝖬𝖴𝖷n that are guaranteed to produce the correct output when the number of unstable values in the input is at most k, for an arbitrary k ∈ [0, 2ⁿ + n]. We defer the proof of the following theorem to the full version [4].

Theorem 29.

Let k ∈ [0, 2ⁿ + n]. Any optimal-depth decision tree that correctly computes the hazard-free extension of 𝖬𝖴𝖷n on inputs with at most k unstable values has depth exactly min{2ᵏ + n, 2ⁿ + n}.

5 Decision Tree Size and Limited Uncertainty

We show that despite requiring exponentially more depth in the presence of uncertainty, we can compute the function 𝖬𝖴𝖷n using a decision tree that is only quadratically larger in size than the Boolean one (Theorem 8).

However, there are functions, such as 𝖠𝖭𝖣 (Theorem 9), that require exponentially larger decision tree size in the presence of uncertainty. We defer the proofs of these statements to the full version [4]. The size lower bound technique used for both functions involves constructing a set of inputs that must all lead to different leaves; hence any decision tree that correctly computes the hazard-free extension requires size at least the size of this input set. The upper bound is shown by an explicit construction.

We present a general construction of decision trees for hazard-free extensions of functions from a decision tree of the underlying Boolean function (Theorem 10) in the full version [4] of this paper. The main idea is to construct the 𝗎-subtree of the root node in the new decision tree from the 0- and 1-subtrees that have been constructed recursively. The 𝗎-subtree construction can be viewed as a product construction in which we replace each leaf in a copy of the 0-subtree with a modified copy of the 1-subtree. This product construction works because, for every input that reaches the 𝗎-subtree, the output value can be determined from the output values when that bit is 0 and when it is 1, which is exactly what the 0- and 1-subtrees compute.
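A sketch of this product construction (the tree encoding is ours, and we assume no variable repeats on a root-to-leaf path):

```python
# Convert a Boolean decision tree ("leaf", v) / ("node", i, t0, t1) into a
# ternary tree for f~. The u-subtree runs the converted 0-subtree and then, at
# each of its leaves, a copy of the converted 1-subtree; a pair of leaf values
# (a, b) becomes a if a == b, and u otherwise.
def to_hazard_free(t):
    if t[0] == "leaf":
        return t
    _, i, t0, t1 = t
    h0, h1 = to_hazard_free(t0), to_hazard_free(t1)
    return ("node", i, h0, h1, product_tree(h0, h1))

def product_tree(h0, h1):
    if h0[0] == "leaf":
        return relabel(h1, h0[1])
    _, i, a, b, c = h0
    return ("node", i, product_tree(a, h1), product_tree(b, h1),
            product_tree(c, h1))

def relabel(h1, v0):  # combine h1's leaf values with the fixed value v0
    if h1[0] == "leaf":
        return ("leaf", v0 if h1[1] == v0 else "u")
    _, i, a, b, c = h1
    return ("node", i, relabel(a, v0), relabel(b, v0), relabel(c, v0))

# AND_2 as a Boolean tree, converted and evaluated on an uncertain input:
and2 = ("node", 0, ("leaf", "0"), ("node", 1, ("leaf", "0"), ("leaf", "1")))
h = to_hazard_free(and2)

def evaluate(t, x):
    while t[0] == "node":
        t = {"0": t[2], "1": t[3], "u": t[4]}[x[t[1]]]
    return t[1]

print(evaluate(h, "u1"))  # -> u ; evaluate(h, "u0") gives 0
```

On the 𝗎-branch, the depth doubles and the leaf counts multiply, which reflects the kind of blow-up bounded in Theorems 10 and 12.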

We also prove, in Theorem 12, that the complexity increases only gradually with the amount of uncertainty in the inputs. More specifically, we prove that if the inputs are guaranteed to have at most k bits with value 𝗎, without any guarantee on where they occur, the exponential blow-up in query complexity is confined to the parameter k (see the full version [4] for a proof). This proof also makes use of the product construction mentioned above.

6 Discussion and Open Problems

In this paper we initiated a study of the query complexity of Boolean functions in the presence of uncertainty, by considering a natural generalization of Boolean functions based on Kleene's strong logic of indeterminacy, a three-valued logic.

We showed that an analogue of the celebrated sensitivity theorem [13] holds in the presence of uncertainty too. While Huang showed a fourth-power relationship between sensitivity and block sensitivity in the Boolean world, we were able to show that these measures are linearly related in the presence of uncertainty. The proof of the sensitivity theorem in our setting is considerably different from, and easier than, the proof in the Boolean world. We can parameterize 𝗎-sensitivity and 𝗎-query complexity by restricting our attention to inputs that have at most k unstable bits. The setting k = 0 gives the Boolean sensitivity theorem, and the setting k = n gives our sensitivity theorem. Can we unify these two proofs using this parameterization? That is, is there a single proof of the sensitivity theorem that works for all k?

We showed, using 𝗎-analogues of block sensitivity, sensitivity, and certificate complexity, that for every Boolean function f, its deterministic, randomized, and quantum 𝗎-query complexities are polynomially related to each other. An interesting research direction would be to determine the tightest possible separations between all of these measures. It is interesting to note that our quadratic relationship between deterministic and randomized 𝗎-query complexity improves upon the best-known cubic relationship in the usual query model. Moreover, our quartic deterministic-quantum relationship matches the best-known relationship in the Boolean world [1]. More generally, it would be interesting to determine the best possible relationships between combinatorial measures of Boolean functions in this model, and to see how they compare to the usual query model (see, for instance, [1, Table 1]). A linear relationship between deterministic and randomized query complexities in the presence of uncertainty remains open, but a quadratic deterministic-quantum separation follows from Theorem 6 or from the OR function (via Grover's search algorithm [12]).

While we studied an important extension of Boolean functions to a specific three-valued logic that has been extensively studied in various contexts, an interesting future direction is to consider query complexities of Boolean functions under other interesting logics. Our definition of f~ dictates that f~(x) = b ∈ {0,1} iff f(y) = b for all y ∈ 𝖱𝖾𝗌(x), and f~(x) = 𝗎 otherwise. A natural variant is a 0-1 valued function that outputs b ∈ {0,1} iff f(y) = b for a majority of y ∈ 𝖱𝖾𝗌(x). It is not hard to show that the complexity of this variant of 𝖬𝖴𝖷n is bounded from below by the usual query complexity of Majority on 2ⁿ variables, which is Ω(2ⁿ) in the deterministic, randomized, and quantum query models.

References

  • [1] Scott Aaronson, Shalev Ben-David, Robin Kothari, Shravas Rao, and Avishay Tal. Degree vs. approximate degree and quantum implications of Huang's sensitivity theorem. In Proceedings of the 53rd Annual Symposium on Theory of Computing (STOC 2021), pages 1330–1342, 2021. doi:10.1145/3406325.3451047.
  • [2] Andris Ambainis. Quantum Lower Bounds by Quantum Arguments. J. Comput. Syst. Sci., 64(4):750–767, 2002. doi:10.1006/JCSS.2002.1826.
  • [3] Robert Beals, Harry Buhrman, Richard Cleve, Michele Mosca, and Ronald de Wolf. Quantum Lower Bounds by Polynomials. J. ACM, 48(4):778–797, 2001. doi:10.1145/502090.502097.
  • [4] Deepu Benson, Balagopal Komarath, Nikhil Mande, Sai Soumya Nalli, Jayalal Sarma, and Karteek Sreenivasaiah. Sensitivity and Query Complexity under Uncertainty, 2025. arXiv:2507.00148.
  • [5] Deepu Benson, Balagopal Komarath, Jayalal Sarma, and Nalli Sai Soumya. Hazard-free decision trees. CoRR, abs/2501.00831, 2025. doi:10.48550/arXiv.2501.00831.
  • [6] Harry Buhrman and Ronald de Wolf. Complexity measures and decision tree complexity: a survey. Theor. Comput. Sci., 288(1):21–43, 2002. doi:10.1016/S0304-3975(01)00144-X.
  • [7] Stephen Cook and Cynthia Dwork. Bounds on the time for parallel RAM’s to compute simple functions. In Proceedings of the 14th Annual Symposium on Theory of Computing (STOC 1982), pages 231–233, 1982. doi:10.1145/800070.802196.
  • [8] Stephen Cook, Cynthia Dwork, and Rüdiger Reischuk. Upper and Lower Time Bounds for Parallel Random Access Machines without Simultaneous Writes. SIAM Journal on Computing, 15(1):87–97, 1986. doi:10.1137/0215006.
  • [9] Andrzej Ehrenfeucht and David Haussler. Learning Decision Trees from Random Examples. Inf. Comput., 82(3):231–246, 1989. doi:10.1016/0890-5401(89)90001-1.
  • [10] Steven Fortune and James Wyllie. Parallelism in Random Access Machines. In Proceedings of the 10th Annual Symposium on Theory of Computing (STOC 1978), pages 114–118, 1978. doi:10.1145/800133.804339.
  • [11] Parikshit Gopalan, Noam Nisan, Rocco A. Servedio, Kunal Talwar, and Avi Wigderson. Smooth Boolean Functions Are Easy: Efficient Algorithms for Low-Sensitivity Functions. In Proceedings of the Conference on Innovations in Theoretical Computer Science (ITCS 2016), pages 59–70, 2016. doi:10.1145/2840728.2840738.
  • [12] Lov K Grover. A fast quantum mechanical algorithm for database search. In Proceedings of the 28th Annual Symposium on Theory of Computing (STOC 1996), pages 212–219, 1996. doi:10.1145/237814.237866.
  • [13] Hao Huang. Induced subgraphs of hypercubes and a proof of the Sensitivity Conjecture. Annals of Mathematics, 190(3):949–955, 2019. doi:10.4007/annals.2019.190.3.6.
  • [14] Christian Ikenmeyer, Balagopal Komarath, Christoph Lenzen, Vladimir Lysikov, Andrey Mokhov, and Karteek Sreenivasaiah. On the Complexity of Hazard-free Circuits. Journal of the ACM (JACM), 66(4):1–20, 2019. doi:10.1145/3320123.
  • [15] Christian Ikenmeyer, Balagopal Komarath, and Nitin Saurabh. Karchmer-Wigderson Games for Hazard-Free Computation. In Proceedins of 14th Innovations in Theoretical Computer Science Conference (ITCS 2023), volume 251, pages 74:1–74:25, 2023. doi:10.4230/LIPIcs.ITCS.2023.74.
  • [16] Stasys Jukna. Notes on Hazard-Free Circuits. SIAM J. Discret. Math., 35(2):770–787, 2021. doi:10.1137/20M1355240.
  • [17] Stephen Cole Kleene. Introduction to Metamathematics. P. Noordhoff N.V., Groningen, 1952.
  • [18] Nikhil S. Mande and Karteek Sreenivasaiah. Query Complexity with Unknowns. CoRR, abs/2412.06395, 2024. doi:10.48550/arXiv.2412.06395.
  • [19] Ron van der Meyden. Logical Approaches to Incomplete Information: A Survey. In Logics for Databases and Information Systems, pages 307–356. Kluwer, 1998.
  • [20] Noam Nisan. CREW PRAMs and Decision Trees. SIAM J. Comput., 20(6):999–1007, 1991. doi:10.1137/0220062.
  • [21] Noam Nisan and Mario Szegedy. On the Degree of Boolean Functions as Real Polynomials. Comput. Complex., 4:301–313, 1994. doi:10.1007/BF01263419.
  • [22] David Rubinstein. Sensitivity vs. Block Sensitivity of Boolean functions. Combinatorica, 15(2):297–299, 1995. doi:10.1007/BF01200762.
  • [23] Shai Shalev-Shwartz and Shai Ben-David. Understanding Machine Learning - From Theory to Algorithms. Cambridge University Press, 2014. URL: http://www.cambridge.org/de/academic/subjects/computer-science/pattern-recognition-and-machine-learning/understanding-machine-learning-theory-algorithms.
  • [24] Ingo Wegener. The Critical Complexity of All (Monotone) Boolean Functions and Monotone Graph Properties. Inf. Control., 67(1-3):212–222, 1985. doi:10.1016/S0019-9958(85)80036-X.
  • [25] Andrew Chi-Chih Yao. Probabilistic computations: Toward a unified measure of complexity. In 18th Annual Symposium on Foundations of Computer Science (SFCS 1977), pages 222–227. IEEE Computer Society, 1977.