
Random Local Access for Sampling k-SAT Solutions

Dingding Dong (Harvard University, Cambridge, MA, USA) and Nitya Mani (Massachusetts Institute of Technology, Cambridge, MA, USA)
Abstract

We present a sublinear time algorithm that gives random local access to the uniform distribution over satisfying assignments to an arbitrary k-SAT formula Φ, at exponential clause density. Our algorithm provides memory-less query access to variable assignments, such that the output variable assignments consistently emulate a single global satisfying assignment whose law is close to the uniform distribution over satisfying assignments to Φ. Random local access and related models have been studied for a wide variety of natural Gibbs distributions and random graphical processes. Here, we establish feasibility of random local access models for one of the most canonical such sample spaces, the set of satisfying assignments to a k-SAT formula.

Our algorithm proceeds by leveraging the local uniformity of the uniform distribution over satisfying assignments to Φ. We randomly partition the variables into two subsets, so that each clause has sufficiently many variables from each set to preserve local uniformity. We then sample some variables by simulating a systematic scan Glauber dynamics backward in time, greedily constructing the necessary intermediate steps. We sample the other variables by first conducting a search for a polylogarithmic-sized local component, which we iteratively grow to identify a small subformula from which we can efficiently sample using the appropriate marginal distribution. This two-pronged approach enables us to sample individual variable assignments without constructing a full solution.

Keywords and phrases:
sublinear time algorithms, random generation, k-SAT, local computation
Copyright and License:
© Dingding Dong and Nitya Mani; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Theory of computation → Streaming, sublinear and near linear time algorithms
Related Version:
Full Version: https://arxiv.org/abs/2409.03951
Acknowledgements:
We would like to thank Kuikui Liu and Ronitt Rubinfeld for helpful discussions, suggestions, and feedback on a draft. We would like to thank the anonymous referees for identifying an issue with a previous version of this article.
Funding:
NM was supported by the Hertz Graduate Fellowship and the NSF Graduate Research Fellowship Program #2141064. This material includes work supported by the NSF Grant DMS-1928930, while the authors were in residence at SLMath during the Spring 2025 semester.
Editors:
Jeremias Berg and Jakob Nordström

1 Introduction

Efficiently sampling from an exponentially sized solution space is a fundamental problem in computation. The quintessential such problem is sampling a uniformly random satisfying assignment to a given k-SAT formula, the setting of this work. Throughout, we let Φ denote an n-variable (k,d)-formula, which is a k-SAT formula in conjunctive normal form (which we also call a k-CNF) such that each variable appears in at most d clauses. We let Ω = Ω_Φ denote the set of satisfying assignments to Φ, and let μ = μ_Φ denote the uniform distribution over Ω. We consider the regime where k is a (large) constant and take n → ∞.

For d ≤ (2ek)^{−1}·2^k, it has long been known (e.g., by the Lovász local lemma introduced in [8]) that any such k-SAT formula Φ has at least one satisfying assignment. Moreover, seminal work of Moser and Tardos [27] gave a simple algorithm that computes such a satisfying assignment in expected linear time. However, a similarly basic question in the sampling regime is far more poorly understood: when is it possible to efficiently sample an (approximately uniformly) random satisfying assignment to Φ?

The first breakthrough in this direction came from Moitra [25], who gave a deterministic, fixed-parameter tractable algorithm for approximately sampling from μ provided that d ≤ 2^{ck} for c ≈ 1/60. This was followed up by work of Feng, Guo, Yin and Zhang [10], who used a Markov chain approach to give a polynomial time algorithm for sampling from μ provided that d ≤ 2^{k/20}. Recent works [18, 31] have given state-of-the-art results for sampling from k-CNFs when d ≲ 2^{k/4.82} (in fact, for large domain sizes, [31] gives essentially tight bounds on sampling from atomic constraint satisfaction problems). There has also been a tremendous amount of work on sampling from random k-CNFs, recently essentially resolved by work of [6], building on previous progress of [13, 7, 12, 19].

A natural question about such algorithms to sample from k-CNFs is whether one can adapt them to more efficiently answer local questions about a random satisfying assignment in sublinear time. For a variable v, let μ_v denote the marginal distribution on v induced by μ. When n is large, one might wish to sample a small handful of variables S from their marginal distribution μ_S in o(n) time, without computing an entire Ω(n)-sized satisfying assignment σ ∼ μ. Further, many traditional algorithms for counting the number of satisfying assignments to a given k-SAT formula proceed by computing marginal probabilities of variable assignments, a task that can be completed given local access to a random assignment. Therefore, sublinear time algorithms for answering local questions can also yield speedups in more general counting algorithms.

Ideally, a random local access model should provide query access to variable assignments such that the output enjoys the following properties (a minimal interface sketch follows the list):

  (a) the model is consistent: queries for different variable assignments consistently emulate those of a single random satisfying assignment σ ∼ μ;

  (b) the model is sublinear: sampling variable assignments takes o(n) time in expectation;

  (c) the model is memory-less: given the same initialization of randomness, answers to multiple, possibly independent queries for variable assignments are consistent with each other.
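To make properties (a)–(c) concrete, here is a minimal interface sketch in Python; the class and method names are our own illustration, not part of the formal model. Consistency and memory-lessness come from deriving all randomness from a single public seed, so that repeated or reordered queries reproduce the same underlying sample.

```python
import random

class LocalAccessOracle:
    """Sketch of a memory-less random local access oracle: two oracles
    built from the same formula and public seed answer every query
    identically, regardless of query order (properties (a) and (c))."""

    def __init__(self, formula, seed):
        self.formula = formula
        self.seed = seed  # the public random tape R

    def _rng_for(self, key):
        # Derive per-query randomness from the public tape alone,
        # never from the history of earlier queries.
        return random.Random(hash((self.seed, key)))

    def query(self, v):
        # A real implementation would run the sublinear sampling routine
        # of Algorithm 1 here (property (b)); this stub only illustrates
        # that the answer is a deterministic function of (formula, R, v).
        return self._rng_for(v).randrange(2)  # placeholder bit in {0, 1}
```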

We give a more precise definition of random local access models in Section 2. Random local access models were formally introduced in work of Biswas, Rubinfeld, and Yodpinyanee [5], motivated by a long line of work over the past decades studying sublinear time algorithms for problems in theoretical computer science. The authors of [5] studied a different natural Gibbs distribution, the uniform distribution over proper q-colorings of a given n-vertex graph with maximum degree Δ. By adapting previously studied classical and parallel sampling algorithms for q-colorings, they were able to construct a local access algorithm for a random proper q-coloring when q ≥ 9Δ. The related problem of sampling partial information about huge random objects was pioneered much earlier in [15, 16]; further work in [28] considers the implementation of different random graph models. Random local access and related (e.g., parallel computation) models have also been recently studied for several other random graphical processes and Gibbs distributions (cf. [5, 4, 26, 23]).

The above model for local access to random samples extends a rich line of work studying local computation algorithms (LCAs), originally introduced in works of Rubinfeld, Tamir, Vardi, and Xie [29] and Alon, Rubinfeld, Vardi, and Xie [2]. Local computation algorithms are widely used in parallel and distributed computing, with notable practical success in areas such as graph sparsification [1], load balancing [24], and sublinear-time coloring [2]. A primary initial application of LCAs in [29] was locally constructing a satisfying assignment to a given (k,d)-formula Φ when d ≤ 2^{k/10}. Similar to the desired properties of random local access, an LCA implements query access to a large solution space in sublinear time using local probes. However, rather than sampling from a given distribution, an LCA only provides local access to some consistent valid solution in the desired solution space, which may otherwise be completely arbitrary in nature.

In this work, we present a polylogarithmic time algorithm that gives random local access to the uniform distribution over satisfying assignments of an arbitrary k-SAT formula Φ at exponential clause density (i.e., the number of occurrences of each variable is bounded by an exponential function of k). The following is an informal version of our main result; a more formal version can be found at Theorem 24.

Theorem 1 (Main theorem: informal).

There exists a random local access algorithm 𝒜 that satisfies the following conditions. For any (k,d)-formula Φ = (V, 𝒞) with maximum degree d ≤ 2^{k/400} and any variable v ∈ V, we can use 𝒜 to sample a single assignment 𝒜(v) ∈ {0,1} for v in expected polylogarithmic time (in |V|), such that the distribution of 𝒜(v) is the same as the marginal distribution of σ(v), where σ is a uniformly random satisfying assignment to Φ.

Similar to [5], the proof of Theorem 1 adapts some of the algorithmic tools used to study parallel and distributed sampling. The proof also builds upon the work of Moitra [25] and Feng–Guo–Yin–Zhang [10] on sampling from bounded degree k-CNFs in polynomial time. The authors of [25, 10] both critically took advantage of a global variable marking (see Definition 10) to better condition the marginal distributions of variables. Such an approach allows for a subset of the variable assignments to be sampled with ease; the resulting shattering of the solution space conditioned on such a partial assignment then allows one to efficiently complete the random satisfying assignment. These initial approaches have been extended and strengthened to nearly linear time algorithms that succeed for larger maximum variable degree in a flurry of recent work (cf. [25, 10, 20, 17, 13, 7, 12, 31, 6]).

Recently, Feng, Guo, Wang, Wang, and Yin [9] used a recursive sampling scheme to simulate the systematic scan projected Glauber dynamics via a strategy termed coupling towards the past, which they used to derandomise several Markov chain Monte Carlo (MCMC) algorithms for sampling from CSPs. Additionally, recent work of He, Wang and Yin [18] used a recursive sampling scheme to sample k-SAT solutions. Their work immediately gives a sublinear (in fact, expected constant time) algorithm for sampling the assignment of a single variable one time; however, this work does not immediately extend out of the box to a random local access model that enjoys the desired consistency and memory-less properties if multiple variables are sampled. Recursive sampling schemes have also been recently used to make progress on analyzing and designing fast sampling algorithms for a variety of Gibbs distributions (cf. [18, 7, 3]). As noted earlier, such schemes have been particularly useful for derandomising and constructing parallel and distributed versions of many popular MCMC algorithms for sampling solutions to CSPs [9, 11, 17, 18].

An immediate roadblock to adapting such global or parallel sampling strategies to random local access to k-SAT solutions is that the vast majority of the aforementioned existing algorithms critically either use some global information – like a variable marking, particular ordering of variables, or other global state compression – as an input to make algorithmic decisions or postpone sampling of certain problematic variable assignments until a linear number of other choices are made. Both these issues necessitate a substantive departure from these approaches for any hope of local access adaptation. In this work, we overcome these obstacles by adapting the coupling towards the past strategy used in [9] to derandomise MCMC in conjunction with a local implementation of the variable marking strategy introduced in [25].

We use these algorithms to carefully select and sample a small number of auxiliary variable assignments on an as-needed basis, showing that bad cases can be reduced to small calculations after a sublinear number of steps. Our proof of Theorem 1 requires a novel adaptation of sampling techniques to avoid requiring global information or complete passes of the variable set; we show that we can perform analogous operations locally or leverage local uniformity properties to circumvent them altogether in the course of locally sampling a variable assignment. Importantly, we demonstrate how to execute local sampling in an oblivious, consistent fashion, so that the algorithm need not retain any memory and that variables need not be sampled in any particular order.

2 Preliminaries

Notation

Throughout, let Φ = (V, 𝒞) be a k-CNF on variable set V = {v₁, …, v_n} with associated collection of clauses 𝒞. In this work we do not let the same variable appear multiple times in any clause of Φ, although our algorithm could be adapted to the more general scenario. We further assume that Φ is a (k,d)-formula; that is, each variable v_i appears in at most d clauses. For every clause C ∈ 𝒞, let vbl(C) denote the collection of variables in the clause C. We further define Δ := max_{C∈𝒞} |{C′ ∈ 𝒞 : vbl(C) ∩ vbl(C′) ≠ ∅}|, so in particular Δ ≤ kd.

In the regime we work in, we assume k is a large fixed constant and view n → ∞. We use f ≲ g to denote that there is some constant C (not depending on k) such that f ≤ Cg. We also use the standard asymptotic notation (O, o, Ω, ω, Θ), where when not specified, we assume these are in the n → ∞ limit. We use 𝖫𝖺𝗐(X) to denote the underlying distribution of a random variable X.

We let Ω=ΩΦ{0,1}n denote the set of satisfying assignments to Φ and let μ=μΦ denote the uniform distribution over Ω. We suppress the Φ subscripts when the formula is clear from context. We introduce a few more concepts associated to a k-SAT instance that will be used later.

Definition 2.

Given probability distributions ν₁, ν₂ over Ω, the total variation distance is

d_𝖳𝖵(ν₁, ν₂) := (1/2) ∑_{ω∈Ω} |ν₁(ω) − ν₂(ω)|.

If we have a random variable X with Law(X)=ν, we may write d𝖳𝖵(μ,X) instead of d𝖳𝖵(μ,ν) in a slight abuse of notation.
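For concreteness, Definition 2 translates directly into a few lines of Python when distributions on a small finite space are represented as dictionaries; this helper is our own illustration.

```python
def tv_distance(nu1, nu2):
    """Total variation distance (Definition 2) for distributions given
    as {outcome: probability} dictionaries over a finite space."""
    support = set(nu1) | set(nu2)
    return 0.5 * sum(abs(nu1.get(w, 0.0) - nu2.get(w, 0.0)) for w in support)

# Two distributions on {0, 1} at total variation distance 0.1.
assert abs(tv_distance({0: 0.5, 1: 0.5}, {0: 0.4, 1: 0.6}) - 0.1) < 1e-12
```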

Definition 3 (Dependency hypergraph).

Given a (k,d)-formula Φ, let H_Φ = (V, ℰ) be the dependency hypergraph (with multiple edges allowed), where V is the set of variables and ℰ = {vbl(C) : C ∈ 𝒞} is the collection of variable sets of clauses of Φ.

Definition 4 (Partial assignment).

Given a (k,d)-formula Φ = (V, 𝒞), let

𝒬 := ⋃_{Λ⊆V} {0,1,★}^Λ

denote the space of partial assignments, with the following conventions. Given a partial assignment σ ∈ {0,1,★}^Λ on some Λ ⊆ V, each variable v ∈ Λ is classified as follows:

  • σ(v) ∈ {0,1} means that v has been accessed by the algorithm and assigned the value σ(v) ∈ {0,1};

  • σ(v) = ★ means that v has been accessed by the algorithm but not yet assigned any value.

We sometimes use ∅ to denote the empty assignment (i.e., Λ is the empty set). We say that σ is feasible if it can be extended to a satisfying assignment to Φ.

Definition 5 (Reduced formula on partial assignment).

Let Φ be a (k,d)-formula. Given a partial assignment σ on Λ ⊆ V, let Φ^σ be the result of simplifying Φ under σ. That is, define Φ^σ := (V^σ, 𝒞^σ) where

  • V^σ = V ∖ {v ∈ Λ : σ(v) ∈ {0,1}},

  • 𝒞^σ is obtained from 𝒞 by removing all clauses that have been satisfied under σ, and removing any appearance of variables that are assigned 0 or 1 by σ.

Let H_{Φ^σ} be the associated (not necessarily k-uniform) hypergraph of Φ^σ. For a variable v ∈ V ∖ Λ, let Φ_v^σ denote the maximal connected component of Φ^σ to which v belongs.

Definition 6 (Marginal distribution).

For an arbitrary set of variables S ⊆ V, let μ_S be the marginal distribution on S induced by μ, so that

μ_S(τ) = ∑_{τ′∈{0,1}^n : τ′|_S = τ} μ(τ′)  for τ ∈ {0,1}^S.

When S = {v} is a single vertex, we let μ_v = μ_{{v}}. Furthermore, given some partial assignment σ ∈ {0,1,★}^Λ for Λ ⊆ V, if S ∩ Λ = ∅, we let μ_S^σ(·) := μ_S(· | σ) be the marginal distribution on S conditioned on the partial assignment σ. When S = V ∖ Λ, we simply use μ^σ to denote μ_{V∖Λ}^σ.
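On small instances, the marginal μ_S of Definition 6 can be computed by brute-force enumeration of Ω; the sketch below (our own illustration) represents a CNF as a list of clauses, each a list of signed literals (positive for v, negative for ¬v, variables numbered from 1).

```python
from itertools import product

def satisfies(clauses, sigma):
    """sigma is a tuple of 0/1 values with variable i at index i - 1."""
    return all(any((sigma[abs(l) - 1] == 1) == (l > 0) for l in clause)
               for clause in clauses)

def marginal(clauses, n, S):
    """Brute-force marginal mu_S (Definition 6) on the variables in S."""
    counts = {}
    for sigma in product((0, 1), repeat=n):
        if satisfies(clauses, sigma):
            key = tuple(sigma[v - 1] for v in S)
            counts[key] = counts.get(key, 0) + 1
    total = sum(counts.values())
    return {tau: c / total for tau, c in counts.items()}

# Marginal of x1 in (x1 or x2): 2 of 3 satisfying assignments set x1 = 1.
print(marginal([[1, 2]], 2, [1]))  # {(0,): 0.333..., (1,): 0.666...}
```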

2.1 The random local access model and local computation algorithms

One of the most widely studied models of local computation is the local computation algorithm (LCA), introduced in [2, 29]. Given a computation problem F, an LCA provides (in an online fashion) the i-th bit of some consistent solution to F in sublinear time. As originally defined, local computation algorithms need not be query-oblivious; in other words, the output solution can depend on the order of queried bits. However, several follow-up works have given query-oblivious analogues of LCAs for a variety of natural problems. Such models are the non-random-sampling version of the random local access models we study here.

In this work, we construct a query-oblivious LCA for an intermediate step in our construction of the random local access model (described in more detail in Section 2.2). We thus precisely define both LCAs and random local access models below.

Definition 7 (Local computation algorithm).

Given an object family (Π, Ω) with input Π and sample space Ω ⊆ Σ^n (for some alphabet Σ), a (t(n), δ(n))-local computation algorithm (LCA) provides an oracle 𝒜 that implements query access to some arbitrary σ ∈ Ω and satisfies the following conditions:

  • 𝒜 has query access to the input description Π and to a tape of public random bits 𝐑.

  • 𝒜 gets a sequence of queries i₁, …, i_q for any q > 0, and after each query i_j, it produces an output σ_{i_j} such that the outputs σ_{i₁}, …, σ_{i_q} are consistent with some σ ∈ Ω.

  • The probability of success over all q queries is at least 1 − δ(n) (where δ(n) < 1/3).

  • The expected running time of 𝒜 on any query is at most t(n), which is sublinear in n.

We further say that 𝒜 is query-oblivious if the outputs of 𝒜 do not depend on the order of the queries but depend only on Π and 𝐑.

Motivated by the above, we give a formal definition of the random local access model introduced in [5].

Definition 8 (Random local access).

Given a random object family (Π, Ω, μ) with input Π, sample space Ω ⊆ Σ^n (with alphabet Σ), and distribution μ supported on Ω, a (t(n), δ(n))-random local access implementation of a family of query functions {F₁, F₂, …} provides an oracle 𝒜 with the following properties:

  • 𝒜 has query access to the input description Π and to a tape of public random bits 𝐑.

  • For a given input Π, upon being queried with F_i, the oracle with probability 1 − o(1) uses at most t(n) resources (where t(n) is sublinear in n) to return the value 𝒜(Π, 𝐑, F_i(Y)) for some specific Y ∈ Ω.

  • The choice of Y ∈ Ω only depends on Π and 𝐑.

  • The distribution of Y over the randomness 𝐑 satisfies

    d_𝖳𝖵(Y, μ) = (1/2) ∑_{ω∈Ω} |ℙ(Y = ω) − μ(ω)| < δ(n),

    where δ(n) ≤ 1/n^c for a constant c > 1.

In other words, if 𝒜 is a random local access oracle for a set of queries, then when provided the same input Π and the same random bits 𝐑, it must provide outputs that are consistent with a single choice of Y, regardless of the order and content of these queries.

Remark 9.

In this work, we do not study or seek to optimize the memory usage of our algorithms. However, there is also a rich literature studying space-efficient and parallelizable local models (e.g., [2, 14, 21]).

2.2 Marking and a query-oblivious LCA

As noted in the introduction, the Lovász local lemma [30] guarantees the existence of a satisfying assignment to any (k,d)-formula Φ if d ≤ (2ek)^{−1}·2^k. Furthermore, the Moser–Tardos algorithm [27] gives a simple linear-time algorithm to construct such a satisfying assignment in the above (k,d)-regime (a minimal sketch of this loop follows the two steps below):

  • Sample v₁, …, v_n ∈_R {0,1} uniformly at random;

  • While there exists a clause C ∈ 𝒞 that is violated by the current assignment, resample the variables in C uniformly at random.
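A minimal sketch of this resampling loop, under the clause representation used in our earlier sketches (and with no attempt at the bookkeeping needed for the local implementation discussed below):

```python
import random

def moser_tardos(clauses, n, seed=0):
    """Moser-Tardos resampling: returns a satisfying assignment as a
    list of 0/1 values, with variable i stored at index i - 1."""
    rng = random.Random(seed)
    sigma = [rng.randrange(2) for _ in range(n)]

    def violated_clause():
        for clause in clauses:
            if not any((sigma[abs(l) - 1] == 1) == (l > 0) for l in clause):
                return clause
        return None

    while (bad := violated_clause()) is not None:
        for l in bad:                       # resample vbl(C) uniformly
            sigma[abs(l) - 1] = rng.randrange(2)
    return sigma

print(moser_tardos([[1, 2, -3], [-1, 3, 2]], n=3))
```

In the LLL regime above, the Moser–Tardos theorem guarantees that the expected number of resampling rounds is linear.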

The Lovász local lemma can be made quantitative, showing not only that there is some satisfying assignment to Φ if d ≤ (2ek)^{−1}·2^k, but both that there are exponentially many such satisfying assignments and that the marginal distributions μ_v are approximately uniform, with μ_v(1) ≥ (1/2)·exp(−1/k) (see [25, 10]). Such local uniformity is critical to the success of algorithms that approximately sample from μ. In his breakthrough work, Moitra [25] noted that this local uniformity continues to hold for conditional distributions μ^σ provided that each clause has a sufficiently large number of free variables under the partial assignment σ. This motivates the idea of a marking, as introduced in [25], which is a careful selection (via the Lovász local lemma) of a linear sized subset of variables ℳ ⊆ V that has the following properties:

  • For every clause C, |vbl(C) ∩ ℳ| ≳ k. Having a large number of marked variables in each clause would hopefully result in a desired shattering condition; namely, we can sample a partial assignment σ_ℳ ∈ {0,1}^ℳ on this marking that partitions the original formula into sufficiently small connected components.

  • For every clause C, |vbl(C) ∖ ℳ| ≳ k. This large non-intersection preserves the local uniformity of the marginal distributions of the as-yet-unsampled variables in V ∖ ℳ.

The general strategy of arguments following the marking approach is to show that it is “easy” to sample a partial assignment σ_ℳ ∈ {0,1}^ℳ, and moreover, conditioned on any such assignment, the reduced formula Φ^{σ_ℳ} is very likely to shatter into sufficiently small connected components such that the remaining variable assignments can be efficiently sampled from the conditional distribution by solving a smaller instance of the original problem. We now make this notion precise.

Definition 10 (α-marking).

Given a (k,d)-formula Φ and a constant α ∈ (0, 1/2), we say that ℳ ⊆ V is an α-marking if for every C ∈ 𝒞, |vbl(C) ∩ ℳ| ≥ αk and |vbl(C) ∖ ℳ| ≥ αk.
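Definition 10 is mechanical to check; a small verifier in the same representation as our earlier sketches (our own illustration):

```python
def is_alpha_marking(clauses, marked, alpha, k):
    """Check Definition 10: every clause has at least alpha*k marked
    and at least alpha*k unmarked variables. `marked` is a set of
    variable indices; clauses are lists of signed literals."""
    for clause in clauses:
        vbl = {abs(l) for l in clause}
        if len(vbl & marked) < alpha * k or len(vbl - marked) < alpha * k:
            return False
    return True
```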

In this work, we locally and greedily construct an α-marking using a local implementation of Moser–Tardos. We will further argue that, because of the shattering property, we can locally compute the connected component Φ_v^σ, for σ ∼ μ_ℳ and a given vertex v, without having to actually assign linearly many vertices.

Precisely, we construct a query-oblivious LCA, 𝖨𝗌𝖬𝖺𝗋𝗄𝖾𝖽(·), where for a (k,d)-formula Φ and a suitable range of α ∈ (0,1), 𝖨𝗌𝖬𝖺𝗋𝗄𝖾𝖽(·) can take in any variable v ∈ V and output either 0 or 1, indicating whether v is in some α-marking of V. Moreover, 𝖨𝗌𝖬𝖺𝗋𝗄𝖾𝖽(·) takes sublinear time and, when queried on all of V, gives a consistent α-marking of V.

Theorem 11.

Let Φ = (V, 𝒞) be a (k,d)-formula. Suppose there exist constants 1/2 < β₁ < β₂ < 1 − α that satisfy the following conditions:

4α < 2(1 − β₂) < 1 − β₁,
16k⁴d⁵ ≤ 2^{(β₁ − h(1−β₁))k},
16k⁴d⁵ ≤ 2^{(β₂−β₁)k − h((β₂−β₁)/(1−β₁))·(1−β₁)k}, (1)
δ ↦ (β₂ − δ) − h((β₂−δ)/(1−δ))·(1−δ) is decreasing on 0 ≤ δ ≤ β₁,
2e(kd + 1) < 2^{(1 − h(α/(1−β₂)))·(1−β₂)k}.

(Here h(x) := −x log₂(x) − (1−x) log₂(1−x) is the binary entropy function.)

Fix a constant c > 0. Then there exists a polylog(n) time local computation algorithm 𝖨𝗌𝖬𝖺𝗋𝗄𝖾𝖽(·) which, given any variable v ∈ V, returns an assignment s_v ∈ {0,1} denoting whether v is contained in an α-marking of Φ. Moreover, with probability at least 1 − n^{−c}, the responses for all v ∈ V yield a consistent α-marking of Φ.

The construction of 𝖨𝗌𝖬𝖺𝗋𝗄𝖾𝖽(·) and the verification that it is a query-oblivious LCA draw inspiration from the approach in [2] for obtaining an oblivious LCA for hypergraph 2-coloring. At a high level, 𝖨𝗌𝖬𝖺𝗋𝗄𝖾𝖽(·) determines s_v by randomly and greedily including variables in the marking and subsequently determining those that must/must not be in the marking, and finally (if needed) performing rejection sampling on a smaller connected component that contains v. The inequalities in Theorem 11 are technical conditions that guarantee that this process goes through. We refer the reader to Appendix E of the full version (arXiv:2409.03951) for a proof of Theorem 11.

3 A random local access algorithm for 𝒌-SAT solutions

In this section, we introduce the main algorithm (Algorithm 1) that locally accesses the assignment of a variable v ∈ V in a uniformly random satisfying assignment σ to Φ.

Recall from Theorem 1 that given a (k,d)-formula Φ = (V, 𝒞), a variable v ∈ V, and any constant c > 0, we wish to come up with a random local access algorithm 𝒜 such that (1) the expected running time of 𝒜 is polylog(n), and (2) the output μ̂_v of 𝒜 for every v ∈ V satisfies d_𝖳𝖵(𝖫𝖺𝗐(μ̂_v), μ_v) ≤ 1/n^c. At a high level, given the above inputs, Algorithm 1 performs the following (a code sketch of this dispatch appears after Algorithm 1):

  1. Locally decide whether v lies in an α-marking of Φ using 𝖨𝗌𝖬𝖺𝗋𝗄𝖾𝖽(v) (Theorem 11), such that the responses for all v ∈ V yield a consistent α-marking ℳ ⊆ V.

  2. Suppose σ ∼ μ is a uniformly random satisfying assignment to Φ. If v is marked, then we sample σ(v) by computing 𝖬𝖺𝗋𝗀𝗂𝗇𝖲𝖺𝗆𝗉𝗅𝖾(v) (adapted from [9, Algorithm 8]), which may make further recursive calls to sample other marked variables.

  3. If v is not marked, then we perform a depth-first search starting from v to compute σ restricted to ℳ. We start from σ = ∅; for every w ∈ ℳ we encounter that we have not yet sampled, we compute 𝖬𝖺𝗋𝗀𝗂𝗇𝖲𝖺𝗆𝗉𝗅𝖾(w) and update σ(w), to eventually obtain a (w.h.p. polylogarithmic in size) connected component Φ_v := Φ_v^σ that contains v. This part is carried out by the algorithm 𝖢𝗈𝗇𝗇(v).

     After obtaining the connected component Φ_v^σ, we call 𝖴𝗇𝗂𝖿𝗈𝗋𝗆𝖲𝖺𝗆𝗉𝗅𝖾 (Theorem 13) to sample a uniformly random satisfying assignment to Φ_v^σ and extend σ. We then output σ(v).

Algorithm 1 The sampling algorithm.

Input: k-CNF formula Φ = (V, 𝒞) and a variable v ∈ V
   Output: a random assignment σ(v) ∈ {0,1}
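The pseudocode body is deferred to the full version; the following Python sketch records the dispatch logic of steps 1–3 under our own simplifying interface, with `is_marked`, `margin_sample`, `conn`, and `uniform_sample` standing in for 𝖨𝗌𝖬𝖺𝗋𝗄𝖾𝖽, 𝖬𝖺𝗋𝗀𝗂𝗇𝖲𝖺𝗆𝗉𝗅𝖾, 𝖢𝗈𝗇𝗇, and 𝖴𝗇𝗂𝖿𝗈𝗋𝗆𝖲𝖺𝗆𝗉𝗅𝖾.

```python
def local_sample(Phi, v, is_marked, margin_sample, conn, uniform_sample, R):
    """Sketch of Algorithm 1's dispatch; R is the public random tape
    shared by all subroutines so that answers are consistent."""
    if is_marked(Phi, v, R):
        # Step 2: v is marked; simulate the systematic scan Glauber
        # dynamics backward in time to sample v's assignment.
        return margin_sample(Phi, v, R)
    # Step 3: grow the connected component of v in the reduced formula,
    # calling margin_sample on marked variables met along the way.
    component = conn(Phi, v, R)
    # Sample a uniformly random satisfying assignment of the (w.h.p.
    # polylogarithmic-sized) component and read off v's value.
    return uniform_sample(component, R)[v]
```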

To illustrate the workflow of Algorithm 1, we present a toy example on a small k-CNF formula, omitting some subroutine details for brevity.

Example 12.

Suppose k = 3, d = 2, and α = 1/3. Consider the following (k,d)-formula Φ = (V, 𝒞) with variables V = {x₁, x₂, x₃, x₄, x₅} and clauses 𝒞 = {C₁, C₂, C₃}, where

C₁ = (x₁ ∨ x₂ ∨ ¬x₃),  C₂ = (¬x₂ ∨ x₃ ∨ x₄),  C₃ = (¬x₄ ∨ x₅ ∨ ¬x₁).

Suppose we wish to approximately sample σ(x₁) using the local access algorithm 𝒜. We run 𝖨𝗌𝖬𝖺𝗋𝗄𝖾𝖽 on each variable; suppose the resulting marking is ℳ = {x₂, x₅}, so that x₁ is not marked.

Since x₁ ∉ ℳ, we call 𝖢𝗈𝗇𝗇(x₁) to explore the connected component of Φ containing x₁. We initialize 𝒮 := {C₁, C₃} and the partial assignment σ = ∅.

  • Process clause C₁. Since x₂ ∈ ℳ, we call 𝖬𝖺𝗋𝗀𝗂𝗇𝖲𝖺𝗆𝗉𝗅𝖾(x₂), which returns (say) σ(x₂) = 0. The clause becomes (x₁ ∨ 0 ∨ ¬x₃) = (x₁ ∨ ¬x₃), which is not yet satisfied under the current σ. We add the adjacent clause C₂ (via x₂ and x₃) to 𝒮.

  • Process clause C₃. Since x₅ ∈ ℳ, we call 𝖬𝖺𝗋𝗀𝗂𝗇𝖲𝖺𝗆𝗉𝗅𝖾(x₅), which returns (say) σ(x₅) = 1. The clause becomes (¬x₄ ∨ 1 ∨ ¬x₁), which is satisfied.

  • Process clause C₂. We already have σ(x₂) = 0, so ¬x₂ = 1, and the clause is satisfied. Remove C₂ from 𝒮.

Now the discovered component Φ_{x₁}^σ includes (1) the clause C₁, (2) the marked variable assignments σ(x₂) = 0, σ(x₅) = 1, and (3) the free variables x₁, x₃, x₄. We now run 𝖴𝗇𝗂𝖿𝗈𝗋𝗆𝖲𝖺𝗆𝗉𝗅𝖾 on the reduced subformula C₁ = (x₁ ∨ ¬x₃). The satisfying assignments (x₁, x₃) are {(1,0), (1,1), (0,0)}. We pick one uniformly at random, say (x₁, x₃) = (1,0). Then we return μ̂_{x₁} = 1.
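The final uniform-sampling step can be verified by brute force:

```python
from itertools import product

# Reduced subformula from Example 12: C1 = (x1 or not x3).
sat = [(x1, x3) for x1, x3 in product((0, 1), repeat=2) if x1 or not x3]
print(sat)  # [(0, 0), (1, 0), (1, 1)]: the three assignments listed above
```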

Algorithm 1 is our main routine; it has several subroutines 𝖨𝗌𝖬𝖺𝗋𝗄𝖾𝖽, 𝖬𝖺𝗋𝗀𝗂𝗇𝖲𝖺𝗆𝗉𝗅𝖾, 𝖢𝗈𝗇𝗇, and 𝖴𝗇𝗂𝖿𝗈𝗋𝗆𝖲𝖺𝗆𝗉𝗅𝖾, which we now describe. Recall that 𝖨𝗌𝖬𝖺𝗋𝗄𝖾𝖽 was discussed in Theorem 11. We now introduce the algorithm 𝖴𝗇𝗂𝖿𝗈𝗋𝗆𝖲𝖺𝗆𝗉𝗅𝖾 given by the work of He–Wang–Yin [18].

Theorem 13 ([18, Theorems 5.1 and 6.1]).

Suppose Φ is a (k,d)-CNF formula on n variables with k·2^k ≥ 1150e³·(dk)^{5.4}. Then there exists an algorithm 𝖴𝗇𝗂𝖿𝗈𝗋𝗆𝖲𝖺𝗆𝗉𝗅𝖾 that terminates in O(k³(dk)⁹·n) time in expectation, and outputs a uniformly random satisfying assignment to Φ.

As seen in Theorem 13, for an n-variable (k,d)-CNF formula, the algorithm has running time O(n). However, as we will only apply 𝖴𝗇𝗂𝖿𝗈𝗋𝗆𝖲𝖺𝗆𝗉𝗅𝖾 to connected components of size polylog(n) (that are shattered by the partial assignment on the marked variables), the execution of 𝖴𝗇𝗂𝖿𝗈𝗋𝗆𝖲𝖺𝗆𝗉𝗅𝖾 in Line 6 of Algorithm 1 will only take polylogarithmic time.

We will define subroutines 𝖬𝖺𝗋𝗀𝗂𝗇𝖲𝖺𝗆𝗉𝗅𝖾 and 𝖢𝗈𝗇𝗇 below and show that they satisfy the desired correctness and efficiency properties, beginning by verifying 𝖬𝖺𝗋𝗀𝗂𝗇𝖲𝖺𝗆𝗉𝗅𝖾 has the desired properties in Section 4.

Theorem 14.

Let Φ = (V, 𝒞) be a (k,d)-formula, α > 0, and ℳ ⊆ V an α-marking as in Definition 10. Fix a positive constant c > 0. Suppose θ := 1 − (1/2)·exp(2edk/2^{αk}) ≥ 0.4, 36ed³k⁴/2^{0.6αk} ≤ 1/2, and 2148·d³k⁴e²/(α·2^{αk}) ≤ 0.9. Then there exists a random local access algorithm 𝖬𝖺𝗋𝗀𝗂𝗇𝖲𝖺𝗆𝗉𝗅𝖾(·) such that for every u ∈ ℳ, 𝖬𝖺𝗋𝗀𝗂𝗇𝖲𝖺𝗆𝗉𝗅𝖾(u) returns a random value in {0,1} that satisfies the following:

  1. Let ν := μ_ℳ and let ν̂ be the joint distribution of the outputs (𝖬𝖺𝗋𝗀𝗂𝗇𝖲𝖺𝗆𝗉𝗅𝖾(u))_{u∈ℳ}. Then we have d_𝖳𝖵(ν̂, ν) < n^{−c}.

  2. For every u ∈ ℳ, the expected cost of 𝖬𝖺𝗋𝗀𝗂𝗇𝖲𝖺𝗆𝗉𝗅𝖾(u) is polylog(n).

We also require 𝖢𝗈𝗇𝗇 to be correct and have low expected cost.

Theorem 15.

Let Φ = (V, 𝒞) be a (k,d)-formula, α > 0, and ℳ ⊆ V an α-marking as in Definition 10. Suppose θ := 1 − (1/2)·exp(2edk/2^{αk}) ≥ 0.4 and d ≤ 2^{αk/4}. Then there exists a random local access algorithm 𝖢𝗈𝗇𝗇(·) such that for every v ∈ V, 𝖢𝗈𝗇𝗇(v) returns the connected component in Φ^σ that contains v, where σ is the partial assignment given by (𝖬𝖺𝗋𝗀𝗂𝗇𝖲𝖺𝗆𝗉𝗅𝖾(u))_{u∈ℳ}. Moreover, for every v ∈ V, the expected cost of 𝖢𝗈𝗇𝗇(v) is polylog(n).

At a high level, the algorithm 𝖢𝗈𝗇𝗇(v) explores the clauses and marked variables in the CNF formula that are reachable from v, greedily sampling the marked variables and expanding through unsatisfied clauses. It iteratively builds a partial assignment σ that “shatters” the formula into disconnected components, isolating the one containing v. We verify Theorem 15 in Appendix D of the full version (arXiv:2409.03951).

4 Proof of Theorem 14

In this section we show Theorem 14. Throughout, let Φ = (V, 𝒞) be a (k,d)-formula, α > 0, and ℳ ⊆ V an α-marking with |ℳ| = m. We introduce a local access algorithm 𝖬𝖺𝗋𝗀𝗂𝗇𝖲𝖺𝗆𝗉𝗅𝖾 that satisfies the key property that the joint distribution of the outputs (𝖬𝖺𝗋𝗀𝗂𝗇𝖲𝖺𝗆𝗉𝗅𝖾(u))_{u∈ℳ} consistently follows the marginal distribution μ_ℳ. In particular, (𝖬𝖺𝗋𝗀𝗂𝗇𝖲𝖺𝗆𝗉𝗅𝖾(u))_{u∈ℳ} will simulate the output of a systematic scan Glauber dynamics on the marked variables.

Definition 16.

Let ℳ = {u₁, …, u_m} denote the marked variables (so |ℳ| = m).

  • For every time t, define i(t) := (t mod m) + 1.

  • For every time t and u_i ∈ ℳ, define 𝗉𝗋𝖾𝖽_{u_i}(t) := max{s ≤ t : i(s) = i}.

In the systematic scan Glauber dynamics, we always resample vertex u_{i(t)} at time t (as opposed to randomly choosing a variable to resample at each step). For every u ∈ ℳ and time t, 𝗉𝗋𝖾𝖽_u(t) denotes the most recent time up to time t at which u got resampled. Observe that for all t and u, we have t − m < 𝗉𝗋𝖾𝖽_u(t) ≤ t. Moreover, for all w ≠ u_{i(t)}, we have 𝗉𝗋𝖾𝖽_w(t) < t.
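Both quantities in Definition 16 admit simple closed forms; a quick sketch (times are integers ≤ 0, variables indexed 1, …, m):

```python
def scan_index(t, m):
    """i(t) = (t mod m) + 1: the index of the variable resampled at t.
    Python's % is non-negative, matching the intended convention."""
    return (t % m) + 1

def pred(i, t, m):
    """pred_{u_i}(t) = max{s <= t : i(s) = i}: the latest s <= t with
    s congruent to i - 1 modulo m."""
    return t - ((t - (i - 1)) % m)

m = 3
assert [scan_index(t, m) for t in (-3, -2, -1, 0)] == [1, 2, 3, 1]
assert pred(2, 0, m) == -2 and pred(1, 0, m) == 0   # t - m < pred <= t
```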

Algorithm 2 Systematic scan projected Glauber dynamics.

Input: a k-CNF formula Φ = (V, 𝒞), a set of marked variables ℳ ⊆ V with ordering ℳ = {u₁, …, u_m}, and a time T
   Output: a random assignment X ∈ {0,1}^ℳ

We refer the reader to Appendix A of the full version (arXiv:2409.03951) for more background on the systematic scan Glauber dynamics and its comparison with the (original) projected Glauber dynamics Markov chain for sampling k-SAT solutions.

We have the following non-quantitative convergence result for the systematic scan Glauber dynamics.

Theorem 17 ([22]).

Let (X_t)_{t=0}^∞ denote the Glauber dynamics or the systematic scan Glauber dynamics with stationary distribution π. If (X_t)_{t=0}^∞ is irreducible over Ω ⊆ {0,1}^ℳ, then for every X₀ ∈ Ω, we have lim_{t→∞} d_𝖳𝖵(X_t, π) = 0.

Let ℳ ⊆ V be a fixed set of marked variables for a given k-CNF Φ. Let ν := μ_ℳ be the marginal distribution of μ on ℳ. We will crucially use the following local uniformity of Algorithm 2 (the proof in the systematic scan setting follows essentially identically to [10, Lemma 5.3]):

Lemma 18.

Suppose Φ is a k-CNF and 1 < s ≤ 2^{αk}/(2edk). Let X ∈ {0,1}^Λ be either X₀ or X_t(ℳ ∖ {u_{i(t)}}) for some t > 0 (so correspondingly, Λ is either ℳ or ℳ ∖ {u_{i(t)}} for some t > 0). Then for any S ⊆ Λ and σ : S → {0,1}, we have

ℙ(X(S) = σ) ≤ (1/2)^{|S|} · exp(|S|/s).

Specifically, for any v ∈ Λ and c ∈ {0,1}, we have ℙ(X(v) = c) ≥ 1 − (1/2)·exp(1/s) ≥ 1/2 − 1/s.

Definition 19.

Let π be a distribution on {0,1}^ℳ. We say that π is b-marginally lower bounded if for all v ∈ ℳ, Λ ⊆ ℳ ∖ {v}, and feasible partial assignments σ_Λ ∈ {0,1,★}^Λ, we have

π_v^{σ_Λ}(0), π_v^{σ_Λ}(1) ≥ b.

Let π be a b-marginally lower bounded distribution over {0,1}^ℳ. For every v ∈ ℳ, we define the following distributions:

  1. Lower bound distribution π_v^{𝖫𝖡} over {0,1,★}: we define π_v^{𝖫𝖡} := π^{𝖫𝖡}, with

    π^{𝖫𝖡}(0) = π^{𝖫𝖡}(1) = b,  π^{𝖫𝖡}(★) = 1 − 2b.

  2. Padding distribution π_v^{𝗉𝖺𝖽,σ_Λ} over {0,1}: for Λ ⊆ ℳ ∖ {v} and a feasible partial assignment σ_Λ ∈ {0,1,★}^Λ, we define

    π_v^{𝗉𝖺𝖽,σ_Λ}(·) := (π_v^{σ_Λ}(·) − b) / (1 − 2b).

Per Lemma 18, we have that ν = μ_ℳ is θ-marginally lower bounded for

θ := 1 − (1/2)·exp(2edk/2^{αk}) ≥ 1/2 − 2^{−(α − 2log₂(d)/k)k}. (2)

4.1 Systematic scan Glauber dynamics on marked variables

We adapt the approach of [9] to a local sampling algorithm by simulating the systematic scan projected Glauber dynamics on ℳ from time −T to 0, which is an aperiodic and irreducible Markov chain by results in [10, 9].

Let (X_t)_{−T ≤ t ≤ 0} be the output of Algorithm 2, where we relabel X₀, …, X_T by X_{−T}, …, X₀. We know from Theorem 17 that, as T → ∞, we have d_𝖳𝖵((X₀(v))_{v∈ℳ}, ν) → 0, where ν = μ_ℳ is the marginal distribution of μ on the marked variables. In particular, for every fixed n and γ > 0, there exists T_γ such that for all T > T_γ, the Markov chain (X_t)_{−T ≤ t ≤ 0} satisfies d_𝖳𝖵((X₀(v))_{v∈ℳ}, ν) < γ.

We know from Lemma 18 that ν is θ-marginally lower bounded, with the lower bound distribution ν^{𝖫𝖡} defined by ν^{𝖫𝖡}(0) = ν^{𝖫𝖡}(1) = θ and ν^{𝖫𝖡}(★) = 1 − 2θ. Thus, for every −T < t ≤ 0 and u = u_{i(t)}, sampling X_t(u) ∼ ν_u^{X_{t−1}(ℳ∖{u})} can be decomposed into the following process (see the sketch after the list):

  1. With probability 2θ, set X_t(u) to 0 or 1 uniformly at random;

  2. With probability 1 − 2θ, sample X_t(u) from the padding distribution ν_u^{𝗉𝖺𝖽, X_{t−1}(ℳ∖{u})}.
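This decomposition is exactly the mixture ν_u^σ = 2θ·Unif{0,1} + (1 − 2θ)·ν_u^{𝗉𝖺𝖽,σ} from Definition 19; a minimal sketch, with `padding_sample` a stand-in for sampling from the padding distribution:

```python
import random

def decomposed_update(theta, padding_sample, rng):
    """One marked-variable update, split as in steps 1-2 above: with
    probability 2*theta output a uniform bit (the local uniformity
    zone); otherwise defer to the padding distribution."""
    if rng.random() < 2 * theta:
        return rng.randrange(2)
    return padding_sample()

# Toy usage with a padding distribution that always returns 1.
print(decomposed_update(0.45, lambda: 1, random.Random(1)))
```

The point of the split is that the first branch requires no knowledge of X_{t−1}, which is what makes lazy backward simulation possible.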

Our goal is to obtain X₀(u) for u ∈ ℳ, which by Theorem 17 will closely follow the marginal distribution ν_u for T sufficiently large. It suffices to simulate the last update for u. The key observation here is that updates of the Glauber dynamics may depend only on a very small amount of extra information. When θ is close to 1/2, it is reasonably likely that we can determine X₀(u) = X_{𝗉𝗋𝖾𝖽_u(0)}(u) without any other information. Thus, we can deduce X₀(u) by recursively revealing only the necessary randomness backwards in time. This method was termed coupling towards the past and studied for a variety of constraint satisfaction problems in [9].

We now give a general algorithm 𝖦𝗅𝖺𝗎𝖻𝖾𝗋_{T,Y}(t, M, R) in Algorithm 5, whose output simulates the effect of Algorithm 2 at any particular time t by looking backwards in time at what happened over the course of running (X_t)_{−T ≤ t ≤ 0}. The eventual algorithm 𝖬𝖺𝗋𝗀𝗂𝗇𝖲𝖺𝗆𝗉𝗅𝖾(u) we give in Theorem 14 retrieves the most recent update of each variable u ∈ ℳ, i.e., the coordinate X₀(u).

Algorithm 3 𝖬𝖺𝗋𝗀𝗂𝗇𝖲𝖺𝗆𝗉𝗅𝖾(u).

Input: a k-CNF formula Φ = (V, 𝒞), a set of marked variables ℳ = {u₁, …, u_m} ⊆ V, and a marked variable u ∈ ℳ
   Output: a random value in {0,1}

The algorithm 𝖦𝗅𝖺𝗎𝖻𝖾𝗋_{T,Y}(t, M, R) contains another subroutine LB-Sample_{T,Y}(t, R), defined in Algorithm 4. For every time t, the output of LB-Sample_{T,Y}(t, R) follows the distribution ν^{𝖫𝖡} (see Definition 19). In other words, LB-Sample_{T,Y}(t, R) preliminarily decides which of the above two regimes we fall into while resampling X_t(u_{i(t)}) at time t.

Throughout Algorithm 5, we maintain two global data structures.

  • We let M : ℤ_{≤0} → {0,1,★} record the previously revealed outcomes of Algorithm 5. That is, for every t ≤ 0 such that 𝖦𝗅𝖺𝗎𝖻𝖾𝗋_{T,Y}(t, M, R) has already been executed, we set M(t) to be the outcome of 𝖦𝗅𝖺𝗎𝖻𝖾𝗋_{T,Y}(t, M, R).

  • We let R = {(s, r_s)} ⊆ ℤ_{≤0} × {0,1,★} record the previously revealed outcomes of Algorithm 4. That is, for every t ≤ 0 such that LB-Sample_{T,Y}(t, R) has already been executed and returned r_t ∈ {0,1,★}, we add (t, r_t) to R.

Since T, Y remain constant throughout Algorithms 4 and 5, and all recursive calls access and update the same M and R, we sometimes write 𝖦𝗅𝖺𝗎𝖻𝖾𝗋(t) = 𝖦𝗅𝖺𝗎𝖻𝖾𝗋_{T,Y}(t, M, R) and LB-Sample(t) = LB-Sample_{T,Y}(t, R) for short.

At the beginning of 𝖦𝗅𝖺𝗎𝖻𝖾𝗋(t), we first check a few early stopping conditions:

  • (Lines 1–2) If the variable u_{i(t)} retains its initial assignment Y(u_{i(t)}) at the end of time t (i.e., it is never resampled), we terminate and return Y(u_{i(t)}).

  • (Lines 3–4) If |R|, the number of stored outcomes of LB-Sample, has already reached 80dk⁴ log n, we terminate and return 1.

  • (Lines 5–6) If previous iterations have already computed 𝖦𝗅𝖺𝗎𝖻𝖾𝗋(t) and stored M(t) ∈ {0,1}, we terminate and return M(t).

If none of the above conditions occurs, we then resample, first by applying LB-Sample(t) (Algorithm 4) in Lines 7–8. If LB-Sample(t) ∈ {0,1} (which occurs with probability 2θ), we can update u = u_{i(t)} by choosing an assignment from {0,1} uniformly at random without investigating the assignments of any other variables at earlier time steps (i.e., we fall into the zone of local uniformity).

If LB-Sample(t) = ★ (which occurs with probability 1 − 2θ), then we fall into a zone of indecision and must resample u = u_{i(t)} from the padding distribution ν_u^{𝗉𝖺𝖽, X_{t−1}(ℳ∖{u})}. To resample its spin, we slowly proceed backwards in time, lazily computing previously resampled assignments, until we have determined enough information to compute the assignment of u at the desired time step. Verifying accuracy is somewhat involved, given our lazy computation strategy and the partitioning of ν into a locally uniform piece and an associated padding distribution. Thus, in Appendix A we show that Lines 9–19 correctly complete the desired task, proving the following bound on d_𝖳𝖵((𝖬𝖺𝗋𝗀𝗂𝗇𝖲𝖺𝗆𝗉𝗅𝖾(u))_{u∈ℳ}, ν).
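Before stating that bound, the following Python sketch (our own simplified rendering, not the paper's exact pseudocode) records the control flow just described; `padding_sample` is a stand-in for the recursive padding step of Lines 9–19, and `cap` plays the role of the 80dk⁴ log n threshold.

```python
import random

STAR = "*"

def lb_sample(t, R, theta, seed):
    """Algorithm 4 sketch: a memoized draw from nu^LB (Definition 19),
    derived only from the public seed and the timestamp t."""
    if t not in R:
        x = random.Random(hash((seed, "LB", t))).random()
        R[t] = 0 if x < theta else (1 if x < 2 * theta else STAR)
    return R[t]

def glauber(t, T, Y, M, R, m, theta, seed, cap, padding_sample):
    """Control-flow sketch of Algorithm 5 (coupling towards the past).
    Y: dict of initial bits for variables 1..m; M memoizes glauber
    outcomes; R memoizes lb_sample outcomes."""
    u = (t % m) + 1                          # u_{i(t)}
    if t <= -T:                              # Lines 1-2: u never got
        return Y[u]                          # resampled; keep Y(u)
    if len(R) >= cap:                        # Lines 3-4: truncate the
        return 1                             # (rare) deep recursion
    if t in M:                               # Lines 5-6: reuse the
        return M[t]                          # memoized outcome M(t)
    r = lb_sample(t, R, theta, seed)         # Lines 7-8: given r != *,
    if r != STAR:                            # r is uniform on {0, 1}
        M[t] = r
        return r
    # Lines 9-19: padding regime; padding_sample may recursively call
    # glauber(t', ...) with t' < t to reveal earlier updates.
    M[t] = padding_sample(t, u, M, R)
    return M[t]
```

In this rendering, 𝖬𝖺𝗋𝗀𝗂𝗇𝖲𝖺𝗆𝗉𝗅𝖾(u_i) amounts to calling glauber at time 𝗉𝗋𝖾𝖽_{u_i}(0) with fresh M and R (compare the proof of Theorem 14).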

Proposition 20.

For any c>0 and n sufficiently large, we have

d_𝖳𝖵((𝖬𝖺𝗋𝗀𝗂𝗇𝖲𝖺𝗆𝗉𝗅𝖾(u))_{u∈ℳ}, ν) < n^{−c}.

We next require that Algorithm 5 has expected polylogarithmic cost. This is largely a consequence of the local uniformity of μ and our lazy recursive computation of variable assignments in Algorithm 5.

Lemma 21.

Suppose 2148·d³k⁴e²/(α·2^{αk}) ≤ 0.9. For every t ≤ 0, the expected cost of 𝖦𝗅𝖺𝗎𝖻𝖾𝗋(t) is O(k^{17}d^{10}·log²n/α).

Algorithm 4 LB-Sample_{T,Y}(t, R).

Input: an integer T ≥ 0, an assignment Y ∈ {0,1}^ℳ, and an integer t ≤ 0
   Global variables: a set R ⊆ ℤ_{≤0} × {0,1,★} and an α-marking ℳ = {u₁, …, u_m}
   Output: a random value in {0,1,★} distributed as ν^{𝖫𝖡}

Algorithm 5 𝖦𝗅𝖺𝗎𝖻𝖾𝗋_{T,Y}(t, M, R).

Input: an integer T ≥ 0, an assignment Y ∈ {0,1}^ℳ, and an integer t ≤ 0
   Global variables: a (k,d)-CNF Φ = (V, 𝒞), an α-marking ℳ = {u₁, …, u_m}, M : ℤ_{≤0} → {0,1,★}, and R ⊆ ℤ_{≤0} × {0,1,★}
   Output: a random value in {0,1}

We prove Lemma 21 in Appendix B. These two results together allow us to prove Theorem 14.

Proof of Theorem 14.

The theorem directly follows from combining Proposition 20 and Lemma 21. By Proposition 20, we know that the joint distribution ν̂ := 𝖫𝖺𝗐((𝖬𝖺𝗋𝗀𝗂𝗇𝖲𝖺𝗆𝗉𝗅𝖾(u))_{u∈ℳ}) satisfies d_𝖳𝖵(ν̂, ν) < n^{−c}. By Lemma 21, we know that for every u ∈ ℳ, the expected cost of 𝖬𝖺𝗋𝗀𝗂𝗇𝖲𝖺𝗆𝗉𝗅𝖾(u) = 𝖦𝗅𝖺𝗎𝖻𝖾𝗋_{T,Y}(𝗉𝗋𝖾𝖽_u(0), M = ∅, R = ∅) is of order polylog(n).

5 Proof of the main theorem

We are finally able to state and prove the formal version of Theorem 1. Before picking the relevant parameters, we first collect the list of conditions required to apply Theorems 13, 14, and 15.

Condition 22.
k·2^{αk} ≥ 1150e³·(dk)^{5.4},
θ := 1 − (1/2)·exp(2edk/2^{αk}) ≥ 0.4, (3)
36ed³k⁴/2^{0.6αk} ≤ 1/2,
2148·d³k⁴e²/(α·2^{αk}) ≤ 0.9,
d ≤ 2^{αk/4}.

Recall that we also need the conditions (1) of Theorem 11 in order to apply Theorem 11. One can verify that for d ≤ 2^{k/400} and k sufficiently large, we can choose all the parameters appropriately so that both the conditions of Theorem 11 and Condition 22 are satisfied.

Lemma 23.

Let

α = 1/75,  β₁ = 0.778,  β₂ = 0.96.

If k is sufficiently large and d = d(k) ≤ 2^{k/400}, then the conditions of Theorem 11 and Condition 22 are satisfied.

We defer the proof of Lemma 23 to Appendix F of the full version (arXiv:2409.03951). We can now state and prove Theorem 24, the formal version of our main result.

Theorem 24.

Suppose Φ = (V, 𝒞) is a (k,d)-formula with d ≤ 2^{k/400} and k sufficiently large. Let μ be the uniform distribution over satisfying assignments to Φ, with marginal distributions μ_v for v ∈ V. Then for all c > 0, there is a (polylog(n), n^{−c})-random local access algorithm 𝒜 for sampling the variable assignment of each v ∈ V as μ̂_v, such that

d_𝖳𝖵((μ̂_v)_{v∈V}, μ) ≤ 1/n^c.

Here we remark that c>0 is any fixed constant, and the runtime of 𝒜 depends on it. As written, both the algorithmic runtime and correctness are random, since we give expected running time and bounds on the marginal distribution in total variation distance. However, our algorithm allows derandomising either correctness or running time at the expense of worse bounds on the other.

Proof.

Suppose Φ = (V, 𝒞) is a (k,d)-formula with d ≤ 2^{k/400} and k sufficiently large, and c > 0 is any constant. Choose parameters

α = 1/75,  β₁ = 0.778,  β₂ = 0.96.

By Lemma 23, we know that with these parameters, the conditions of Theorem 11 and Condition 22 are satisfied. Thus, by Theorem 11, there exists a polylog(n) time oblivious local computation algorithm 𝖨𝗌𝖬𝖺𝗋𝗄𝖾𝖽(·) that with probability at least 1 − n^{−2c} gives a consistent α-marking ℳ ⊆ V.

Suppose 𝖨𝗌𝖬𝖺𝗋𝗄𝖾𝖽(·) gives a consistent α-marking ℳ ⊆ V. By Theorem 14, we know that there is a random local access algorithm 𝖬𝖺𝗋𝗀𝗂𝗇𝖲𝖺𝗆𝗉𝗅𝖾(·) with expected cost polylog(n) such that the distribution ν̂ of (𝖬𝖺𝗋𝗀𝗂𝗇𝖲𝖺𝗆𝗉𝗅𝖾(u))_{u∈ℳ} satisfies d_𝖳𝖵(ν̂, μ_ℳ) < n^{−2c}.

Let τ = (𝖬𝖺𝗋𝗀𝗂𝗇𝖲𝖺𝗆𝗉𝗅𝖾(u))_{u∈ℳ}. By the proof of Theorem 15, we know that for every unmarked variable v ∈ V ∖ ℳ, with probability 1 − n^{−0.1 log n}, the number of clauses in Φ_v^τ is at most kd·log²n. We already proved in Theorem 15 that for every v ∈ V, the expected cost of 𝖢𝗈𝗇𝗇(v) is at most polylog(n).

Furthermore, since the reduced formula Φ_v^τ has at least αk and at most k variables in each clause, and every variable lies in at most d clauses with d ≤ 2^{αk/5.4}, by Theorem 13 the expected cost of 𝖴𝗇𝗂𝖿𝗈𝗋𝗆𝖲𝖺𝗆𝗉𝗅𝖾(Φ_v^τ) is asymptotically at most

k³(dk)⁹ · (kd·log²n + n^{−0.1 log n}·n) = polylog(n).

Since both 𝖢𝗈𝗇𝗇 and 𝖴𝗇𝗂𝖿𝗈𝗋𝗆𝖲𝖺𝗆𝗉𝗅𝖾 succeed with probability 1, we get that μ̂, the joint distribution of the outputs of Algorithm 1 over all v ∈ V, satisfies d_𝖳𝖵(μ̂, μ) < n^{−c}.

By construction, Algorithm 1 is memory-less, as it samples all necessary variable assignments in order to compute the assignment of a queried variable v. Furthermore, Algorithm 1 queried on different variables v ∈ V collectively returns an assignment σ ∼ μ̂ with d_𝖳𝖵(μ̂, μ) < n^{−c}. Since this holds for any constant c > 0, we obtain the desired result.

6 Concluding remarks

With more involved refinements and optimizations of the arguments in this work, the density constraint d ≤ 2^{k/400} of Theorem 1 can be substantially improved (to something closer to d ≤ 2^{k/50}). We omit these additional calculations in favor of expositional clarity to highlight our main result: random local access models exist for arbitrary bounded degree k-CNFs at exponential clause density. Furthermore, these arguments can also be adapted (in a similar fashion to [13, 19, 7]) to obtain a random local access model for random k-CNFs in a comparable density regime.

Nonetheless, the limit of the approaches in this work would still fall well short of obtaining random local access at, e.g., densities approaching d ≲ 2^{k/4.82}, the maximum density at which we currently know how to efficiently sample solutions to an arbitrary bounded degree k-CNF in nearly-linear time [18, 31]. This is because of our reliance on a query-oblivious LCA to construct a local marking and our application of weaker sampling results to a correspondingly reduced CNF.

The approach we take in this work is only one of many possible schemes for reducing from existing classical, parallel, and/or distributed algorithms to more local algorithms. Our approach involved using ideas and techniques from a variety of previous works (notably [25, 9, 10]), many of which were in the non-local setting, and adapting them in a novel way to obtain a sublinear sampler. Our approach bears some resemblance to work of [5], whose authors adapted a parallel Glauber dynamics algorithm to obtain random local access to proper q-colorings, and to work of [3], which used a recursive strategy to give perfect sampling algorithms for certain spin systems on amenable graphs. We expect that many other existing algorithms (including [18, 17, 19, 31]) can be adapted with some work to give random local access algorithms.

References

  • [1] Kook Jin Ahn, Sudipto Guha, and Andrew McGregor. Graph sketches: sparsification, spanners, and subgraphs. In Proceedings of the 31st ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems, PODS ’12, pages 5–14, New York, NY, USA, 2012. doi:10.1145/2213556.2213560.
  • [2] Noga Alon, Ronitt Rubinfeld, Shai Vardi, and Ning Xie. Space-efficient local computation algorithms. In Proceedings of the twenty-third annual ACM-SIAM symposium on Discrete Algorithms, pages 1132–1139. SIAM, 2012. doi:10.1137/1.9781611973099.89.
  • [3] Konrad Anand and Mark Jerrum. Perfect sampling in infinite spin systems via strong spatial mixing. SIAM Journal on Computing, 51(4):1280–1295, 2022. doi:10.1137/21M1437433.
  • [4] Amartya Shankha Biswas, Edward Pyne, and Ronitt Rubinfeld. Local access to random walks. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2022. doi:10.4230/LIPIcs.ITCS.2022.24.
  • [5] Amartya Shankha Biswas, Ronitt Rubinfeld, and Anak Yodpinyanee. Local access to huge random objects through partial sampling. 11th Innovations in Theoretical Computer Science (ITCS 2020), 2020. doi:10.4230/LIPIcs.ITCS.2020.27.
  • [6] Zongchen Chen, Aditya Lonkar, Chunyang Wang, Kuan Yang, and Yitong Yin. Counting random k-SAT near the satisfiability threshold. arXiv preprint, 2024. doi:10.48550/arXiv.2411.02980.
  • [7] Zongchen Chen, Nitya Mani, and Ankur Moitra. From algorithms to connectivity and back: finding a giant component in random k-SAT. In Proceedings of the 2023 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 3437–3470. SIAM, 2023.
  • [8] Paul Erdős and László Lovász. Problems and results on 3-chromatic hypergraphs and some related questions. Infinite and finite sets, 10(2):609–627, 1975.
  • [9] Weiming Feng, Heng Guo, Chunyang Wang, Jiaheng Wang, and Yitong Yin. Towards derandomising Markov chain Monte Carlo. In 2023 IEEE 64th Annual Symposium on Foundations of Computer Science (FOCS), pages 1963–1990, 2023. doi:10.1109/FOCS57990.2023.00120.
  • [10] Weiming Feng, Heng Guo, Yitong Yin, and Chihao Zhang. Fast sampling and counting k-SAT solutions in the local lemma regime. J. ACM, 68(6):Art. 40, 42, 2021. doi:10.1145/3469832.
  • [11] Weiming Feng and Yitong Yin. On local distributed sampling and counting. In Proceedings of the 2018 ACM Symposium on Principles of Distributed Computing, pages 189–198, 2018. doi:10.1145/3212734.3212757.
  • [12] Andreas Galanis, Leslie Ann Goldberg, Heng Guo, and Andrés Herrera-Poyatos. Fast sampling of satisfying assignments from random k-SAT. arXiv, 2022. arXiv:2206.15308.
  • [13] Andreas Galanis, Leslie Ann Goldberg, Heng Guo, and Kuan Yang. Counting solutions to random CNF formulas. SIAM Journal on Computing, 50(6):1701–1738, 2021. doi:10.1137/20M1351527.
  • [14] Mohsen Ghaffari and Jara Uitto. Sparsifying distributed algorithms with ramifications in massively parallel computation and centralized local computation. In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1636–1653. SIAM, 2019. doi:10.1137/1.9781611975482.99.
  • [15] Oded Goldreich, Shafi Goldwasser, and Silvio Micali. How to construct random functions. J. ACM, 33(4):792–807, August 1986. doi:10.1145/6490.6503.
  • [16] Oded Goldreich, Shafi Goldwasser, and Asaf Nussboim. On the implementation of huge random objects. SIAM Journal on Computing, 39(7):2761–2822, 2010. doi:10.1137/080722771.
  • [17] Kun He, Xiaoming Sun, and Kewen Wu. Perfect sampling for (atomic) Lovász local lemma. arXiv, 2021. arXiv:2107.03932.
  • [18] Kun He, Chunyang Wang, and Yitong Yin. Sampling Lovász local lemma for general constraint satisfaction solutions in near-linear time. In 2022 IEEE 63rd Annual Symposium on Foundations of Computer Science (FOCS), pages 147–158. IEEE, 2022. doi:10.1109/FOCS54457.2022.00021.
  • [19] Kun He, Kewen Wu, and Kuan Yang. Improved bounds for sampling solutions of random CNF formulas. In Proceedings of the 2023 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 3330–3361. SIAM, 2023. doi:10.1137/1.9781611977554.CH128.
  • [20] Vishesh Jain, Huy Tuan Pham, and Thuy-Duong Vuong. On the sampling Lovász local lemma for atomic constraint satisfaction problems. arXiv, 2021. arXiv:2102.08342.
  • [21] Fabian Kuhn, Thomas Moscibroda, and Roger Wattenhofer. Local computation: Lower and upper bounds. Journal of the ACM (JACM), 63(2):1–44, 2016. doi:10.1145/2742012.
  • [22] David A. Levin and Yuval Peres. Markov chains and mixing times. American Mathematical Society, Providence, RI, second edition, 2017. With contributions by Elizabeth L. Wilmer, With a chapter on “Coupling from the past” by James G. Propp and David B. Wilson. doi:10.1090/mbk/107.
  • [23] Hongyang Liu and Yitong Yin. Simple parallel algorithms for single-site dynamics. In Proceedings of the 54th Annual ACM SIGACT Symposium on Theory of Computing, pages 1431–1444, 2022. doi:10.1145/3519935.3519999.
  • [24] Yishay Mansour, Aviad Rubinstein, Shai Vardi, and Ning Xie. Converting online algorithms to local computation algorithms. In Automata, Languages, and Programming, pages 653–664, Berlin, Heidelberg, 2012. Springer Berlin Heidelberg. doi:10.1007/978-3-642-31594-7_55.
  • [25] Ankur Moitra. Approximate counting, the Lovász local lemma, and inference in graphical models. J. ACM, 66(2):Art. 10, 25, 2019. doi:10.1145/3268930.
  • [26] Peter Mörters, Christian Sohler, and Stefan Walzer. A sublinear local access implementation for the Chinese restaurant process. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022), 2022. doi:10.4230/LIPIcs.APPROX/RANDOM.2022.28.
  • [27] Robin A. Moser and Gábor Tardos. A constructive proof of the general Lovász local lemma. J. ACM, 57(2):Art. 11, 15, 2010. doi:10.1145/1667053.1667060.
  • [28] Moni Naor and Asaf Nussboim. Implementing huge sparse random graphs. In International Workshop on Approximation Algorithms for Combinatorial Optimization, pages 596–608, 2007. doi:10.1007/978-3-540-74208-1_43.
  • [29] Ronitt Rubinfeld, Gil Tamir, Shai Vardi, and Ning Xie. Fast local computation algorithms. arXiv, 2011. arXiv:1104.1377.
  • [30] Joel Spencer. Asymptotic lower bounds for Ramsey functions. Discrete Mathematics, 20:69–76, 1977. doi:10.1016/0012-365X(77)90044-9.
  • [31] Chunyang Wang and Yitong Yin. A sampling Lovász local lemma for large domain sizes. In 2024 IEEE 65th Annual Symposium on Foundations of Computer Science (FOCS), pages 129–150. IEEE, 2024. doi:10.1109/FOCS61266.2024.00019.

Appendix A Correctness of Algorithm 5

In this section, we show Proposition 20: that with high probability, (𝖦𝗅𝖺𝗎𝖻𝖾𝗋(𝗉𝗋𝖾𝖽_u(0)))_{u∈ℳ} faithfully returns the final outcome (X₀(u))_{u∈ℳ} of the systematic scan Glauber dynamics (X_t)_{−T ≤ t ≤ 0} initialized at X_{−T} = Y. We will later use Theorem 17 to show that, therefore, when T is set to be sufficiently large, the distribution of (𝖬𝖺𝗋𝗀𝗂𝗇𝖲𝖺𝗆𝗉𝗅𝖾(u))_{u∈ℳ} = (𝖦𝗅𝖺𝗎𝖻𝖾𝗋(𝗉𝗋𝖾𝖽_u(0)))_{u∈ℳ} is close to the marginal distribution ν = μ_ℳ.

Proposition 25.

Fix t ≤ 0 and let u = u_{i(t)}. Suppose |R| < 80dk⁴ log n after the execution of 𝖦𝗅𝖺𝗎𝖻𝖾𝗋(t) (i.e., Line 4 of Algorithm 5 is never triggered). Then 𝖦𝗅𝖺𝗎𝖻𝖾𝗋(t) faithfully returns X_t(u), where (X_t)_{−T ≤ t ≤ 0} is the systematic scan Glauber dynamics started at X_{−T} = Y.

Proof.

The statement clearly holds for t = −T and u = u_{i(−T)}. Since 𝗉𝗋𝖾𝖽_u(−T) = −T, by Lines 1–2 of Algorithm 5, we have 𝖦𝗅𝖺𝗎𝖻𝖾𝗋(−T) = Y(u) = X_{−T}(u).

Inductively, for −T < t ≤ 0, assume the proposition holds for all −T ≤ t′ < t. Let u = u_{i(t)}. Suppose |R| < 80dk⁴ log n after the execution of 𝖦𝗅𝖺𝗎𝖻𝖾𝗋(t). Observe that, since 𝗉𝗋𝖾𝖽_w(t) < t for all w ≠ u, 𝖦𝗅𝖺𝗎𝖻𝖾𝗋(t) only makes further calls to 𝖦𝗅𝖺𝗎𝖻𝖾𝗋(t′) with t′ < t. Thus, by the inductive hypothesis, all further calls 𝖦𝗅𝖺𝗎𝖻𝖾𝗋(t′) have correctly returned the outcomes X_{t′}(u_{i(t′)}).

We wish to show that the resampled outcome 𝖦𝗅𝖺𝗎𝖻𝖾𝗋(t) follows the marginal distribution ν_u^{X_{t−1}(ℳ∖{u})}. Per Lines 5–6, we may assume that 𝖦𝗅𝖺𝗎𝖻𝖾𝗋(t) has never been called before, in which case we directly go to Line 7 of Algorithm 5. Lines 7–8 guarantee that with probability 2θ, we assign X_t(u) to be 0 or 1 uniformly at random. It remains to show that in Lines 9–19, we are able to resample X_t(u) from the padding distribution ν_u^{𝗉𝖺𝖽, X_{t−1}(ℳ∖{u})}.

To show this, we first verify that the sets Λ ⊆ ℳ, V′ ⊆ V and the partial assignment σ_Λ ∈ {0,1,★}^Λ obtained in Line 19 satisfy the following four conditions:

  (1) u ∈ V′;

  (2) (ℳ ∩ V′) ⊆ Λ ∪ {u};

  (3) for all C ∈ 𝒞 such that vbl(C) ∩ V′ ≠ ∅ and vbl(C) ∩ (V ∖ V′) ≠ ∅, C is satisfied by σ_Λ;

  (4) for all marked variables w ∈ V′ ∖ {u}, we have σ_Λ(w) = X_{t−1}(w) ∈ {0,1}; for all variables w ∈ Λ ∖ V′, we either have σ_Λ(w) = X_{t−1}(w) ∈ {0,1}, or have σ_Λ(w) = ★.

Here, property (1) holds because u is added to V′ in the initialization, and V′ never removes variables. Property (2) holds because if the variables of some clause C are added to V′ in Line 15, then all marked variables in C are added to Λ in Line 14. When the while loop terminates, the negation of the condition in Line 10 holds, which is exactly property (3).

We now show property (4). For every w ∈ V′ ∖ {u}, we know that w is added to V′ in Line 15 due to some clause C; if w is marked, then by Line 14 and the inductive hypothesis, we know that we have assigned σ_Λ(w) ← 𝖦𝗅𝖺𝗎𝖻𝖾𝗋(𝗉𝗋𝖾𝖽_w(t)) = X_{t−1}(w) ∈ {0,1}. For every w ∈ Λ ∖ V′, by Line 18, we have assigned σ_Λ(w) ← LB-Sample(𝗉𝗋𝖾𝖽_w(t)). If LB-Sample(𝗉𝗋𝖾𝖽_w(t)) ≠ ★, then we have LB-Sample(𝗉𝗋𝖾𝖽_w(t)) = X_{𝗉𝗋𝖾𝖽_w(t)}(w) = X_{t−1}(w) ∈ {0,1}.

Let Ψ denote the connected component in Φ^{σ_Λ} that contains u, and let μ_Ψ denote the distribution of a uniformly random satisfying assignment to Ψ. By property (3), we know that the connected component in Φ^{σ_Λ} that contains u is supported on V′. By property (4), we know that X_{t−1}(ℳ∖{u}) is an extension of σ_Λ, which means that the connected component in Φ^{X_{t−1}(ℳ∖{u})} that contains u is also supported on V′. Moreover, property (4) implies that σ_Λ(ℳ ∩ V′ ∖ {u}) = X_{t−1}(ℳ ∩ V′ ∖ {u}), which means that the two marginal distributions (μ_Ψ)_u^{σ_Λ} and (μ_Ψ)_u^{X_{t−1}(ℳ∖{u})} are the same. Altogether, we get that

μ_u^{σ_Λ} = (μ_Ψ)_u^{σ_Λ} = (μ_Ψ)_u^{X_{t−1}(ℳ∖{u})} = μ_u^{X_{t−1}(ℳ∖{u})}.

Recall that ν is the marginal distribution of μ on ℳ. Let ν_Ψ denote the marginal distribution of μ_Ψ on ℳ. Since X_{t−1}(ℳ∖{u}) and σ_Λ are both supported on subsets of ℳ, the above gives

ν_u^{σ_Λ} = (ν_Ψ)_u^{σ_Λ} = (ν_Ψ)_u^{X_{t−1}(ℳ∖{u})} = ν_u^{X_{t−1}(ℳ∖{u})}.

Recall that we wish to sample from ν_u^{𝗉𝖺𝖽, X_{t−1}(ℳ∖{u})}. Observe that for any partial assignment σ, ν_u^{𝗉𝖺𝖽,σ} is a deterministic function of ν_u^σ (see Definition 19). Since ν_u^{X_{t−1}(ℳ∖{u})} = (ν_Ψ)_u^{σ_Λ}, we have ν_u^{𝗉𝖺𝖽, X_{t−1}(ℳ∖{u})} = (ν_Ψ)_u^{𝗉𝖺𝖽, σ_Λ} as well. Thus, it suffices to sample c ∼ (ν_Ψ)_u^{𝗉𝖺𝖽, σ_Λ} = ν_u^{𝗉𝖺𝖽, X_{t−1}(ℳ∖{u})}, which is performed in Line 19. This shows that Lines 9–19 draw X_t(u) from the padding distribution ν_u^{𝗉𝖺𝖽, X_{t−1}(ℳ∖{u})}, and finishes the proof.

We now show that the condition |R| < 80dk⁴ log n holds with high probability. To do this, we show that in a single instance of 𝖦𝗅𝖺𝗎𝖻𝖾𝗋(t), the “chain” of further recursions 𝖦𝗅𝖺𝗎𝖻𝖾𝗋(t′) is unlikely to be large. We build the following graph G to track these recursions.

Definition 26.

For every C ∈ 𝒞 and t ≤ 0, let

ϕ(C, t) := (vbl(C), {𝗉𝗋𝖾𝖽_w(t) : w ∈ vbl(C) ∩ ℳ}),

i.e., ϕ(C, t) is the pair comprising the variables of C and the most recent times that any marked variable in C was resampled up until time t. Consider an associated graph G_t defined by

V(G_t) = {ϕ(C, t′) : C ∈ 𝒞, −T ≤ t′ ≤ t},

such that ϕ(C₁, t₁) ∼ ϕ(C₂, t₂) in G_t if and only if the following holds (a code transcription follows the two conditions):

  1. vbl(C₁) ∩ vbl(C₂) ≠ ∅,

  2. for 𝒯 := {𝗉𝗋𝖾𝖽_w(t₁) : w ∈ vbl(C₁) ∩ ℳ} ∪ {𝗉𝗋𝖾𝖽_w(t₂) : w ∈ vbl(C₂) ∩ ℳ}, we have max 𝒯 − min 𝒯 < 2m (recall that m = |ℳ|).
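A direct transcription (our own, assuming the `pred` helper from Definition 16 and identifying each marked variable with its scan index):

```python
def phi(C, t, marked, m):
    """phi(C, t) of Definition 26: the variables of C together with the
    latest resampling times (up to t) of C's marked variables."""
    vbl = frozenset(abs(l) for l in C)
    times = frozenset(pred(i, t, m) for i in vbl & marked)
    return (vbl, times)

def adjacent(x, y, m):
    """Adjacency in G_t: intersecting variable sets, and all recorded
    timestamps falling within a window of length 2m."""
    (v1, t1), (v2, t2) = x, y
    ts = t1 | t2
    return bool(v1 & v2) and bool(ts) and max(ts) - min(ts) < 2 * m
```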

We also recall the notion of a 2-tree in a graph or hypergraph.

Definition 27.

Let G = (V(G), E(G)) be a graph or hypergraph. We say that Z ⊆ V(G) is a 2-tree if Z satisfies the following conditions (a brute-force checker follows the definition):

  1. for all u, v ∈ Z, dist_G(u, v) ≥ 2;

  2. the associated graph with vertex set Z and edge set {{u, v} ⊆ Z : dist_G(u, v) = 2} is connected.
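A brute-force checker for this definition on small graphs (our own illustration; adjacency is given as a dict of neighbor sets, distances computed by BFS):

```python
from collections import deque
from itertools import combinations

def dist(G, s, t):
    """BFS distance in an undirected graph {v: set of neighbors}."""
    seen, frontier = {s}, deque([(s, 0)])
    while frontier:
        v, dv = frontier.popleft()
        if v == t:
            return dv
        for w in G[v]:
            if w not in seen:
                seen.add(w)
                frontier.append((w, dv + 1))
    return float("inf")

def is_two_tree(G, Z):
    """Definition 27: pairwise distances >= 2, and the auxiliary graph
    {u ~ v : dist_G(u, v) = 2} on Z is connected."""
    Z = list(Z)
    if any(dist(G, u, v) < 2 for u, v in combinations(Z, 2)):
        return False
    if not Z:
        return True
    reach, stack = {Z[0]}, [Z[0]]
    while stack:
        u = stack.pop()
        for v in Z:
            if v not in reach and dist(G, u, v) == 2:
                reach.add(v)
                stack.append(v)
    return len(reach) == len(Z)
```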

There are not many 2-trees containing a fixed vertex in a graph of bounded maximum degree. We recall one such upper bound below, which will be sufficient for our purposes.

Observation 28 (see [10, Corollary 5.7]).

Let G = (V(G), E(G)) be a graph or hypergraph with maximum degree D. Then for every v ∈ V(G), the number of 2-trees Z ⊆ V(G) containing v with |Z| = ℓ is at most (eD²)^{ℓ−1}/2.

The following proposition shows that the size of R is unlikely to be large when we terminate 𝖦𝗅𝖺𝗎𝖻𝖾𝗋(t) for any t ≤ 0.

Proposition 29.

Fix t ≤ 0. Suppose θ := 1 − (1/2)·exp(2edk/2^{αk}) ≥ 0.4 and 36ed³k⁴/2^{0.6αk} ≤ 1/2. Upon the termination of 𝖦𝗅𝖺𝗎𝖻𝖾𝗋(t), for every η ≥ 1, the size of R satisfies

ℙ[|R| ≥ 24dk⁴(η + 1)] ≤ 2^{−η}.

Proof.

Fix t ≤ 0 and consider some instance of 𝖦𝗅𝖺𝗎𝖻𝖾𝗋(t). For t₁ < t₀ ≤ t, we say that 𝖦𝗅𝖺𝗎𝖻𝖾𝗋(t₁) is triggered by x := ϕ(C, t₀) if 𝖦𝗅𝖺𝗎𝖻𝖾𝗋(t₁) is called in Line 14 of 𝖦𝗅𝖺𝗎𝖻𝖾𝗋(t₀) with clause C. Let W = {x ∈ V(G_t) : x triggers recursive calls}. We begin by verifying a few basic properties that R and G := G_t enjoy.

Claim 30.

Upon the termination of 𝖦𝗅𝖺𝗎𝖻𝖾𝗋(t), we have |R| ≤ 2k²|W| + 2k.

Proof.

Observe that for every t₀ ≤ t, 𝖦𝗅𝖺𝗎𝖻𝖾𝗋(t₀) calls LB-Sample at most k + 1 times (in Line 7 and Line 12 of Algorithm 5) before possibly going into another subroutine 𝖦𝗅𝖺𝗎𝖻𝖾𝗋(t₁). Let A denote the set of timestamps t₀ such that 𝖦𝗅𝖺𝗎𝖻𝖾𝗋(t₀) was called at least once. Then we have |R| ≤ (k+1)|A|.

Observe that every 𝖦𝗅𝖺𝗎𝖻𝖾𝗋(t₀) with t₀ < t is triggered by some x ∈ W. Moreover, every x ∈ W triggers at most k subroutines 𝖦𝗅𝖺𝗎𝖻𝖾𝗋(t₀) in Line 14. Thus, we get that |A| ≤ k|W| + 1, which gives |R| ≤ (k+1)|A| ≤ (k²+k)|W| + k + 1 ≤ 2k²|W| + 2k.

Claim 31.

The maximum degree of G is at most 6k²d − 1.

Proof.

Fix a vertex ϕ(C₁, t₁) of G. There are at most kd clauses C₂ ∈ 𝒞 such that vbl(C₁) ∩ vbl(C₂) ≠ ∅. For any such C₂, we count the number of possible ϕ(C₂, t₂) so that 𝒯 = {𝗉𝗋𝖾𝖽_w(t₁) : w ∈ vbl(C₁) ∩ ℳ} ∪ {𝗉𝗋𝖾𝖽_w(t₂) : w ∈ vbl(C₂) ∩ ℳ} satisfies max 𝒯 − min 𝒯 < 2m.

Suppose
\[
\{\mathsf{pred}_w(t_1):w\in\mathrm{vbl}(C_1)\}=\{s_1,\dots,s_{k_1}\}\quad\text{with } s_1<\cdots<s_{k_1},
\]
\[
\{\mathsf{pred}_w(t_2):w\in\mathrm{vbl}(C_2)\}=\{s'_1,\dots,s'_{k_2}\}\quad\text{with } s'_1<\cdots<s'_{k_2}.
\]

Observe that $s_{k_1}\le t_1<s_1+m$ and $s'_{k_2}\le t_2<s'_1+m$. If $\max\mathcal{T}-\min\mathcal{T}<2m$, then we have

\[
s_{k_1}-2m<s'_1\le s'_{k_2}<s_1+2m,
\]

which gives $t_2<s'_1+m<s_1+3m$ and $t_2\ge s'_{k_2}>s_{k_1}-2m$. Thus, we have

\[
s_{k_1}-2m<t_2<s_1+3m.
\]

Let $S:=\{s_{k_1}-2m+1,\ s_{k_1}-2m+2,\ \dots,\ s_1+3m-1\}$. In particular, $S$ is an interval of size at most $5m$ determined by $\phi(C_1,t_1)$.

Observe that as $t_2$ increments from $s_{k_1}-2m+1$ to $s_1+3m-1$, we have $\{\mathsf{pred}_w(t_2):w\in\mathrm{vbl}(C_2)\}\ne\{\mathsf{pred}_w(t_2-1):w\in\mathrm{vbl}(C_2)\}$ only if $\mathsf{pred}_w(t_2)>\mathsf{pred}_w(t_2-1)$ for some $w\in\mathrm{vbl}(C_2)$, i.e., only if some marked variable of $C_2$ is resampled at time $t_2$. Moreover, since $|\mathrm{vbl}(C_2)|\le k$ and $|S|\le 5m$ (and each marked variable is resampled once in every window of $m$ consecutive steps under the systematic scan), there are at most $5k$ such numbers $t_2$ in $S$, and these numbers are completely determined by $\mathrm{vbl}(C_2)$ and $S$ (i.e., by $\phi(C_1,t_1)$ and $C_2$). These numbers partition $S$ into at most $5k+1$ intervals such that $\{\mathsf{pred}_w(t_2):w\in\mathrm{vbl}(C_2)\}$ is the same for all $t_2$ in each interval. Thus, for every fixed $\phi(C_1,t_1)$ and $C_2$, the set $\{\{\mathsf{pred}_w(t_2):w\in\mathrm{vbl}(C_2)\} : t_2\in S\}$ has size at most $5k+1$, as the toy computation below illustrates.
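The following toy computation is our sketch of the counting above, under the assumption (consistent with the systematic scan) that the variable `w` is resampled exactly at the times congruent to its scan position modulo the period `m`.

def pred(w, t, pos, m):
    """Most recent time t' <= t at which w was resampled."""
    return t - ((t - pos[w]) % m)

def distinct_pred_sets(C2, S, pos, m):
    """Number of distinct sets {pred_w(t2) : w in vbl(C2)} over t2 in S."""
    return len({frozenset(pred(w, t2, pos, m) for w in C2) for t2 in S})

# Example: k = 3 variables of C2 at scan positions 1, 4, 7, period m = 10.
pos = {"x": 1, "y": 4, "z": 7}
S = range(25, 25 + 50)  # an interval of length 5m = 50
assert distinct_pred_sets({"x", "y", "z"}, S, pos, m=10) <= 5 * 3 + 1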

Therefore, given any $\phi(C_1,t_1)$, we can pick a neighbor $\phi(C_2,t_2)\sim\phi(C_1,t_1)$ by first picking $C_2$ (which has at most $kd$ choices) and then picking an element of $\{\{\mathsf{pred}_w(t_2):w\in\mathrm{vbl}(C_2)\}:t_2\in S\}$ (which has at most $5k+1$ choices). So the number of possible $\phi(C_2,t_2)\sim\phi(C_1,t_1)$ in $G$ is at most $kd(5k+1)=5k^2d+kd\le 6k^2d-1$, where the last inequality holds since $kd(k-1)\ge 1$.

Claim 32.

Let $u=u_{i(t)}$ and $W'=\{\phi(C,t)\in W : u\in\mathrm{vbl}(C)\}$. Then $G[W']$ is a clique.

Proof.

Consider any two clauses $C,C'$ such that $u\in\mathrm{vbl}(C)\cap\mathrm{vbl}(C')$. Suppose $\phi(C,t),\phi(C',t)\in W'$. Clearly $\mathrm{vbl}(C)\cap\mathrm{vbl}(C')\ne\emptyset$. Moreover, since the timestamps $\mathsf{pred}_w(t)$ over all marked variables $w$ lie in the range $\{t-m+1,\dots,t\}$, we get that the maximum and minimum of $\{\mathsf{pred}_w(t):w\in\mathrm{vbl}(C)\}\cup\{\mathsf{pred}_w(t):w\in\mathrm{vbl}(C')\}$ differ by at most $m-1<2m$. Thus we have $\phi(C,t)\sim\phi(C',t)$ in $G$.

Claim 33.

Let $W'$ be as in Claim 32. For every $x\in W$, there exists a path $p_0\sim\cdots\sim p_\ell$ in $G[W]$ such that $p_0\in W'$ and $p_\ell=x$.

Proof.

Let $x=\phi(C,t_1)$ be any element of $W$. We perform a double induction, the outer on $t_1$ and the inner on $C$.

  • Base case: $t_1=t$.

    Suppose first that $t_1=t$, so $x=\phi(C,t)$ for some $C$. Let $u=u_{i(t)}$. If $u\in\mathrm{vbl}(C)$, then $x\in W'$ and we are done. Inductively, suppose Claim 33 holds for all $x'=\phi(C',t)$ that trigger a recursive call earlier than $x$ in Algorithm 5. If $u\notin\mathrm{vbl}(C)$, then by the while-loop condition in Line 10, there must exist some $x'=\phi(C',t)\in W$ such that $x'$ triggers a recursive call earlier than $x$ and $\mathrm{vbl}(C)\cap\mathrm{vbl}(C')\ne\emptyset$. By the inductive hypothesis, there exists a path $p_0\sim\cdots\sim p_\ell$ in $G[W]$ such that $p_0\in W'$ and $p_\ell=x'$. Since the maximum and minimum of $\{\mathsf{pred}_w(t):w\in\mathrm{vbl}(C)\}\cup\{\mathsf{pred}_w(t):w\in\mathrm{vbl}(C')\}$ differ by at most $m-1<2m$, we get that $x\sim x'$ in $G$. Therefore we can extend the path with $p_{\ell+1}=x$. This finishes the inner induction.

  • Inductive step: $t_1<t$.

    Now suppose $x=\phi(C,t_1)$ with $t_1<t$. By the inductive hypothesis, we may assume Claim 33 for all $t_0\in\{t_1+1,\dots,t\}$. Let $u_1=u_{i(t_1)}$.

    Suppose first that $u_1\in\mathrm{vbl}(C)$. Since $t_1<t$, there must exist $t_0\in\{t_1+1,\dots,t\}$ and $C'\in\mathcal{C}$ such that $\phi(C',t_0)$ triggers $\mathsf{Glauber}(t_1)$, with $u_1\in\mathrm{vbl}(C')$ and $t_1=\mathsf{pred}_{u_1}(t_0)$. Let $y=\phi(C',t_0)$. Clearly $\mathrm{vbl}(C)\cap\mathrm{vbl}(C')\ne\emptyset$. Since $t_0-m<t_1<t_0$, we also know that the maximum and minimum of $\{\mathsf{pred}_w(t_1):w\in\mathrm{vbl}(C)\}\cup\{\mathsf{pred}_w(t_0):w\in\mathrm{vbl}(C')\}$ differ by at most $2m-1<2m$. Thus we have $x\sim y$ in $G$. By the inductive hypothesis for $t_0$, there exists a path $p_0\sim\cdots\sim p_\ell$ in $G[W]$ such that $p_0\in W'$ and $p_\ell=y$. Since $x\sim y$ in $G$, we can extend the path by $p_{\ell+1}=x$.

    Inductively, suppose $u_1\notin\mathrm{vbl}(C)$, and Claim 33 holds for all $t_0\in\{t_1+1,\dots,t\}$ and for all $x'=\phi(C',t_1)$ that trigger a recursive call earlier than $x$. Then there must exist some $x'=\phi(C',t_1)\in W$ such that $x'$ triggers a recursive call earlier than $x$ and $\mathrm{vbl}(C)\cap\mathrm{vbl}(C')\ne\emptyset$. By the inductive hypothesis, there exists a path $p_0\sim\cdots\sim p_\ell$ in $G[W]$ such that $p_0\in W'$ and $p_\ell=x'$. Since the maximum and minimum of $\{\mathsf{pred}_w(t_1):w\in\mathrm{vbl}(C)\}\cup\{\mathsf{pred}_w(t_1):w\in\mathrm{vbl}(C')\}$ differ by at most $m-1<2m$, we get that $x\sim x'$ in $G$. Thus we can extend the path by $p_{\ell+1}=x$. This finishes the inductive step.

Claim 34.

For all $x=\phi(C,t)\in W$ and all marked $w\in\mathrm{vbl}(C)$, the output of LB-Sample$(\mathsf{pred}_w(t))$ does not satisfy $C$.

Proof.

This directly follows from Lines 12–14 of Algorithm 5. Since $\phi(C,t)$ triggers a recursion in Line 14, by the condition in Line 12, for all marked variables $w\in\mathrm{vbl}(C)$, the output of LB-Sample$(\mathsf{pred}_w(t))$ does not satisfy $C$.

With these claims, we can now prove the proposition. Fix $t\le 0$ and $\eta\ge 1$. Assume $|R|\ge 24dk^4(\eta+1)$, which by Claim 30 implies that $|W|\ge 6dk^2(\eta+1)$. By Claim 33, every element of $W$ is joined to $W'$ by a path in $G[W]$; by Claim 31, $G$ has maximum degree at most $6dk^2-1$. Thus, by a greedy selection, we can find a 2-tree $Z\subseteq W$ containing some element of $W'$ such that $|Z|=\eta+1$.

Fix any 2-tree $Z\subseteq V(G)$ such that $|Z|=\eta+1$. For every $x=\phi(C,t)\in Z$, if $x\in W$, then we know from Claim 34 that LB-Sample$(\mathsf{pred}_w(t))$ does not satisfy $C$ for all marked $w\in\mathrm{vbl}(C)$. Since the latter happens with probability at most $(1-\theta)^{\alpha k}$, we have
\[
\Pr[x\in W]\le(1-\theta)^{\alpha k}.
\]

Since $Z$ is a 2-tree, we know that for every two $x=\phi(C_1,t_1),y=\phi(C_2,t_2)\in Z$, we have $\{\mathsf{pred}_w(t_1):w\in\mathrm{vbl}(C_1)\}\cap\{\mathsf{pred}_w(t_2):w\in\mathrm{vbl}(C_2)\}=\emptyset$ (as otherwise $C_1$ and $C_2$ would share a variable, and the union of these two sets would span an interval of size at most $2(m-1)+1=2m-1$, meaning that $x\sim y$ in $G$, which contradicts the fact that $Z$ is an independent set in $G$). In particular, the sets

\[
\bigl\{\{\mathsf{pred}_w(t):w\in\mathrm{vbl}(C)\} : \phi(C,t)\in Z\bigr\}
\]

are pairwise disjoint. Thus, for every fixed 2-tree $Z\subseteq V(G)$ with $|Z|=\eta+1$, we have

\[
\Pr[Z\subseteq W]\le\Pr[x\in W\text{ for all }x\in Z]\le(1-\theta)^{\alpha k\eta}.
\]

Let $\mathcal{T}$ denote the set of 2-trees of $V(G)$ of size $\eta+1$ that intersect $W'$. Since $|W'|\le d$ and $G$ has maximum degree at most $6k^2d-1$, by Observation 28, we have
\[
|\mathcal{T}|\le d\cdot\frac{\bigl(e(6k^2d)^2\bigr)^{\eta}}{2}\le(36ed^3k^4)^{\eta}.
\]
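The last inequality in this display is the step where the extra factor of $d$ is absorbed; explicitly,
\[
d\cdot\frac{\bigl(e(6k^2d)^2\bigr)^{\eta}}{2}=\frac{d}{2}\,\bigl(36e\,k^4d^2\bigr)^{\eta}\le\bigl(36e\,d^3k^4\bigr)^{\eta},
\]
since $d\le 2d^{\eta}$ for every $\eta\ge 1$.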

Thus, we have
\[
\Pr\bigl[|W|\ge 6dk^2(\eta+1)\bigr]\le\sum_{Z\in\mathcal{T}}\Pr[Z\subseteq W]\le(36ed^3k^4)^{\eta}(1-\theta)^{\alpha k\eta}\le 2^{-\eta},
\]

where the last step used the assumptions that $\theta\ge 0.4$ and $36ed^3k^4\cdot 0.6^{\alpha k}\le 1/2$. This implies that
\[
\Pr\bigl[|R|\ge 24dk^4(\eta+1)\bigr]\le\Pr\bigl[|W|\ge 6dk^2(\eta+1)\bigr]\le 2^{-\eta}.
\]

Setting $\eta=(3+c)\log n$, we get the following correctness statement for Algorithm 5.

Corollary 35.

For every $t\le 0$, we have
\[
\Pr\bigl[\mathsf{Glauber}(t)\ne X_t(u)\bigr]
\le\Pr\bigl[|R|>80dk^4\log n\bigr]
\le\Pr\bigl[|R|>24dk^4((3+c)\log n+1)\bigr]
\le n^{-(3+c)},
\]
where the first inequality holds by the correctness argument established above (conditional on $|R|<80dk^4\log n$), and the last by Proposition 29 with $\eta=(3+c)\log n$.

In particular, for every $u\in\mathcal{M}$, we have
\[
\Pr\bigl[\mathsf{Glauber}(\mathsf{pred}_u(0))\ne X_0(u)\bigr]=\Pr\bigl[\mathsf{Glauber}(\mathsf{pred}_u(0))\ne X_{\mathsf{pred}_u(0)}(u)\bigr]\le n^{-(3+c)},
\]
since $u$ is not resampled between times $\mathsf{pred}_u(0)$ and $0$.

Taking a union bound over all $u\in\mathcal{M}$, we get that
\[
\Pr\bigl[(\mathsf{MarginSample}(u))_{u\in\mathcal{M}}\ne X_0\bigr]\le n^{-(2+c)}.
\]

Since we have picked $T$ in Algorithm 3 sufficiently large so that $d_{\mathsf{TV}}(X_0,\nu)\le n^{-(2+c)}$, we get that the joint output $(\mathsf{MarginSample}(u))_{u\in\mathcal{M}}$ satisfies $d_{\mathsf{TV}}\bigl((\mathsf{MarginSample}(u))_{u\in\mathcal{M}},\nu\bigr)\le n^{-c}$, proving Proposition 20. The arithmetic in these last two steps is spelled out below.
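Specifically (a routine verification): the union bound is over the at most $n$ marked variables, and the final step combines the two error terms via the triangle inequality for total variation distance. For $n\ge 2$,
\[
\Pr\bigl[(\mathsf{MarginSample}(u))_{u\in\mathcal{M}}\ne X_0\bigr]\le|\mathcal{M}|\cdot n^{-(3+c)}\le n\cdot n^{-(3+c)}=n^{-(2+c)},
\]
\[
d_{\mathsf{TV}}\bigl((\mathsf{MarginSample}(u))_{u\in\mathcal{M}},\nu\bigr)\le n^{-(2+c)}+d_{\mathsf{TV}}(X_0,\nu)\le 2n^{-(2+c)}\le n^{-c}.
\]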

Appendix B Efficiency of Algorithm 5

We now move on to show the efficiency of $\mathsf{Glauber}(t)$ for all $t\le 0$. Observe that for every $r\ge 48dk^4$, by Proposition 29, we have
\[
\Pr[|R|\ge r]\le 2^{-\frac{r}{48dk^4}}. \tag{4}
\]
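To see how Equation (4) follows from Proposition 29, substitute $\eta:=\frac{r}{24dk^4}-1$ (a worked derivation): for $r\ge 48dk^4$ we have $\eta\ge\frac{r}{48dk^4}\ge 1$, so
\[
\Pr[|R|\ge r]=\Pr\bigl[|R|\ge 24dk^4(\eta+1)\bigr]\le 2^{-\eta}\le 2^{-\frac{r}{48dk^4}}.
\]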

Moreover, we terminate $\mathsf{Glauber}(t)$ once we reach $|R|\ge 80dk^4\log n$. We will use this information to give an upper bound on the expected cost of $\mathsf{Glauber}(t)$.

We start by upper bounding the sizes of the final sets $V$ and $\Lambda$ in Algorithm 5 in terms of $|R|$.

Lemma 36.

The sets $V$ and $\Lambda$ in Line 19 of Algorithm 5 satisfy
\[
|\Lambda|\le kd|V|\le 2kd^2|R|/\alpha.
\]

Proof.

By Line 10 of Algorithm 5, we know that for all $u\in\Lambda$, there exists a clause $C\in\mathcal{C}$ such that $u\in\mathrm{vbl}(C)$ and $\mathrm{vbl}(C)\subseteq V$. This shows that $|\Lambda|\le kd|V|$. Observe that $V$ contains at least $|V|/(\Delta+1)\ge|V|/(2dk)$ clauses with pairwise disjoint variable sets. Since every marked variable in the clauses added to $V$ has gone through at least one round of LB-Sample (per Line 5 and Line 12), we get that $|R|\ge\alpha k\cdot|V|/(2dk)=\alpha|V|/(2d)$; rearranging gives $|V|\le 2d|R|/\alpha$.

To upper bound the expected cost of Line 19, we further need a result of He–Wang–Yin [18] on the existence of a “Bernoulli factory” algorithm $\mathsf{BF}(\cdot)$ that, for every locally uniform CNF and variable $v$, efficiently draws a $\{0,1\}$-valued random variable according to the padding distribution of $v$.

Proposition 37 ([18, Lemma 3.10 and Appendix A]).

Let $\Psi=(V_\Psi,\mathcal{C}_\Psi)$ be a $k$-CNF, $\sigma$ be a feasible partial assignment of $\Psi$, and $\Psi^\sigma=(V_{\Psi^\sigma},\mathcal{C}_{\Psi^\sigma})$ be the reduced CNF (see Definition 5). Suppose we have
\[
\Pr[\neg C^\sigma]\le\zeta\quad\text{for every }C\in\mathcal{C}_{\Psi^\sigma}.
\]

Then there exists an algorithm $\mathsf{BF}(\cdot)$ such that for every $v\in V_{\Psi^\sigma}$:

  • $\mathsf{BF}(v)$ returns, with probability 1, a sample $\xi\sim(\mu_\Psi)_v^{\mathrm{pad},\sigma}$;

  • $\mathsf{BF}(v)$ has expected cost $O\bigl(k^9d^6\,|\mathcal{C}_{\Psi^\sigma}|\,(1-e\zeta)^{-|\mathcal{C}_{\Psi^\sigma}|}\bigr)$.

We remark that in Algorithm 5, our partial assignment $\sigma_\Lambda$ is always supported on the marked variables. Since every clause has at least $\alpha k$ unmarked variables that are not assigned by $\sigma_\Lambda$, we get that $\Pr[\neg C^{\sigma_\Lambda}]\le 2^{-\alpha k}$ for every $C$. Thus, when applying Proposition 37, we will take $\zeta=2^{-\alpha k}$.
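The Bernoulli-factory paradigm — simulating a coin whose bias is a prescribed function of an unknown bias, using only flips of the given coin — can be illustrated with a classical toy example. The sketch below is our illustration only; it is not the construction of [18], whose factory targets the padding distribution of a variable in a locally uniform CNF.

import random

def coin(p):
    """The given p-coin; p is hidden from the factories below."""
    return random.random() < p

def squared_coin(flip):
    """A p^2-coin from a p-coin: heads iff two independent flips are heads."""
    return flip() and flip()

def fair_coin(flip):
    """Von Neumann's fair coin from a p-coin, for any 0 < p < 1."""
    while True:
        a, b = flip(), flip()
        if a != b:
            return a

# Example: a 0.49-coin from a 0.7-coin, without ever evaluating 0.7**2.
flip = lambda: coin(0.7)
sample = squared_coin(flip)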

We can now give an upper bound on the expected cost of Algorithm 5, proving Lemma 21.

Proof of Lemma 21.

Let $A$ be the set of all $t'$ such that $\mathsf{Glauber}(t')$ is executed at least once as a subroutine of $\mathsf{Glauber}(t)$. Since we run LB-Sample$(t')$ at the beginning (Line 7) of each call $\mathsf{Glauber}(t')$, we know that $|A|\le|R|$.

Observe that for every $t'$, once $\mathsf{Glauber}(t')$ has been computed, we permanently assign $M(t')=\mathsf{Glauber}(t')\in\{0,1\}$, and later executions of $\mathsf{Glauber}(t')$ terminate at Line 4. Thus, it suffices to upper bound the cost of the first execution of each $\mathsf{Glauber}(t')$ before it enters another subroutine $\mathsf{Glauber}(t'')$; multiplying this by $|A|$ gives an upper bound on the cost of $\mathsf{Glauber}(t)$.

Suppose we are at the first execution of $\mathsf{Glauber}(t')$ for some $t'$. We first estimate the cost of the while loop in Lines 10–18, which is a constant multiple of the number of executions of Line 14 and Line 18. Note that every time we execute Line 14 or Line 18, some $w$ is added to $\Lambda$ due to a clause $C$ chosen in Line 11, with $w\in\mathrm{vbl}(C)$. Moreover, each clause $C$ can be chosen in Line 11 at most once. Thus, the number of pairs $(w,C)$ that correspond to an execution of Line 14 or Line 18 is at most $d|\Lambda|$. Consequently, by Lemma 36, the cost of the while loop in Lines 10–18 is $O(d|\Lambda|)=O(kd^3|R|/\alpha)$.

We now estimate the cost of Line 19. Observe that in Line 19, we can apply the Bernoulli factory of Proposition 37 to sample from the padding distribution $(\nu_\Psi)_u^{\mathrm{pad},\sigma_\Lambda}$. Since the connected component $\Psi$ in Line 19 has at most $d|V|\le 2d^2|R|/\alpha$ clauses, by Proposition 37, the expected cost of Line 19 is at most
\[
O\Bigl(k^9d^8|R|/\alpha\cdot\bigl(1-e2^{-\alpha k}\bigr)^{-2d^2|R|/\alpha}\Bigr).
\]

Let $R_{\max}=80dk^4\log n$. Combining the above and applying Equation (4), we get that the expected cost of $\mathsf{Glauber}(t)$ is at most

\begin{align*}
&\mathbb{E}\Bigl[|A|\cdot O\bigl(kd^3|R|/\alpha+k^9d^8|R|/\alpha\cdot(1-e2^{-\alpha k})^{-2d^2|R|/\alpha}\bigr)\Bigr]\\
&\quad\le\mathbb{E}\Bigl[|R|\cdot O\bigl(kd^3|R|/\alpha+k^9d^8|R|/\alpha\cdot(1-e2^{-\alpha k})^{-2d^2|R|/\alpha}\bigr)\Bigr]\\
&\quad\le\sum_{r=0}^{R_{\max}}\Pr[|R|\ge r]\cdot r\cdot O\bigl(kd^3r/\alpha+k^9d^8r/\alpha\cdot(1-e2^{-\alpha k})^{-2d^2r/\alpha}\bigr)\\
&\quad\le\sum_{r=0}^{R_{\max}}O\Bigl(2^{-\frac{r}{48dk^4}}\cdot r\cdot\bigl(kd^3r/\alpha+k^9d^8r/\alpha\cdot(1-e2^{-\alpha k})^{-2d^2r/\alpha}\bigr)\Bigr)\\
&\quad\le\sum_{r=0}^{R_{\max}}O\Bigl(2^{-\frac{r}{48dk^4}}\cdot r\cdot\Bigl(\frac{kd^3r}{\alpha}+\frac{k^9d^8r}{\alpha}\,e^{2d^2r\,2^{-\alpha k}/\alpha}\Bigr)\Bigr)
=O\bigl(k^9d^8R_{\max}^2/\alpha\bigr)=O\bigl(k^{17}d^{10}\log^2 n/\alpha\bigr),
\end{align*}

where in the last step we used that
\[
2^{-\frac{1}{48dk^4}}\,e^{2d^2 2^{-\alpha k}/\alpha}\le 0.9,
\quad\text{so that}\quad
\sum_{r=0}^{R_{\max}}\Bigl(2^{-\frac{1}{48dk^4}}\,e^{2d^2 2^{-\alpha k}/\alpha}\Bigr)^{r}=O(1).
\]
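The $O(1)$ factor hides a standard series estimate. Writing $x:=2^{-\frac{1}{48dk^4}}e^{2d^2 2^{-\alpha k}/\alpha}\le 0.9$, each summand above is at most a constant times $(k^9d^8r^2/\alpha)\,x^r$, and
\[
\sum_{r=0}^{R_{\max}} r^2 x^r\ \le\ R_{\max}^2\sum_{r=0}^{\infty}(0.9)^r\ =\ 10\,R_{\max}^2,
\]
which gives the stated $O(k^9d^8R_{\max}^2/\alpha)$ bound.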