Hardness of Sampling for the Anti-Ferromagnetic Ising Model on Random Graphs
Abstract
We prove a hardness of sampling result for the anti-ferromagnetic Ising model on random graphs of average degree for large constant , proving that when the normalized inverse temperature satisfies (asymptotically corresponding to the condensation threshold), then w.h.p. over the random graph there is no stable sampling algorithm that can output a sample close in distance to the Gibbs measure. The results also apply to a fixed-magnetization version of the model, showing that there are no stable sampling algorithms for low but positive temperature max and min bisection distributions. These results show a gap in the tractability of search and sampling problems: while there are efficient algorithms to find near optimizers, stable sampling algorithms cannot access the Gibbs distribution concentrated on such solutions.
Our techniques involve extensions of the interpolation technique relating behavior of the mean field Sherrington-Kirkpatrick model to behavior of Ising models on random graphs of average degree for large . While previous interpolation arguments compared the free energies of the two models, our argument compares the average energies and average overlaps in the two models.
Keywords and phrases: Random graph, spin glass, sampling algorithm
Funding: Neng Huang: Work was primarily done when NH was affiliated with the University of Chicago, supported partly by the Institute for Data, Econometrics, Algorithms, and Learning (IDEAL) with NSF grant ECCS-2216912.
2012 ACM Subject Classification: Mathematics of computing → Probability and statistics; Theory of computation → Randomness, geometry and discrete structures
Editors: Raghu Meka
1 Introduction
The study of disordered systems is at the intersection of statistical physics, probability theory, and computer science. Models of disordered systems, such as the Edwards–Anderson model, Sherrington–Kirkpatrick model, random constraint satisfaction problems, and combinatorial optimization problems on random networks, have been studied from many different perspectives, including as toy models of complex physical systems and as a source of random computational problems on which to test algorithms and understand the sources of algorithmic intractability.
In the algorithmic context, disordered systems present at least two natural algorithmic challenges: the first is the search problem, that of finding a high quality solution to an optimization problem or finding a near ground state in the language of statistical physics; the second is the sampling problem, that of (approximately) sampling from a Gibbs measure that weights solutions exponentially by their objective value.
One striking phenomenon that has been discovered is a gap between the tractability of search and sampling. That is, for certain choices of parameters in some models of disordered systems, the search problem is tractable (there exist polynomial-time or even near-linear time algorithms to find high quality solutions), while no efficient sampling algorithms are known, and in fact broad classes of sampling algorithms can be proved to be ineffective. So while high quality solutions can be found efficiently, these solutions are not typical, in the sense that the distribution of solutions found by a given search algorithm is far from the equilibrium Gibbs measure.
A prime example of this phenomenon occurs in the Sherrington–Kirkpatrick (SK) model [36], an Ising model on the complete graph on vertices in which the coupling constants are independent Gaussian random variables (see e.g. [32] for a mathematical survey of the model). Here, after appropriate normalization, the key parameter is , the inverse temperature of the model. Parisi famously predicted a formula for the limiting free energy of the SK model as a function of [33, 34]; this formula is non-analytic at , indicating the existence of a phase transition at this point. The Parisi formula was proved rigorously by Guerra [23] and Talagrand [39], with methods that have proved very influential in mathematics, physics, and computer science in the past two decades. Along with the Parisi formula comes information about the SK Gibbs measure itself; in particular, Parisi identified the “average overlap” of two independent samples from the Gibbs measure as an order parameter for the phase transition [35] and developed the theory of a hierarchy of “replica symmetry breaking”. Thus the equilibrium picture of the SK model is fairly well understood, with the phase transition identified at .
Turning to the algorithmic search problem, the task is to find a solution that (approximately) maximizes the probability mass function of the SK model; that is, the task of finding a ground state or near ground state of the model. Montanari [30], partly inspired by the work of Subag on spherical spin glasses [38], proposed an algorithm for this task based on approximate message passing, which finds an approximate optimizer under the widely believed assumption that the SK model has “no overlap gap” (sometimes also referred to as “full replica symmetry breaking”) for large enough . This approach was later extended to mixed -spin models [3], which can be thought of as the generalization of the Ising model to hypergraphs.
For the sampling problem, however, the picture is less optimistic. It was predicted by physicists that the simple Glauber dynamics should converge fast as long as [37], but progress on a rigorous proof of this statement has only come recently. A series of recent works [7, 22, 18, 5] showed that Glauber dynamics does indeed converge quickly if (more specifically, in steps). A more recent work [6] has further extended the range of rapid convergence to values beyond . Using a different approach based on stochastic localization, El Alaoui, Montanari, and Sellke [4] gave an algorithm that approximately samples from the SK Gibbs measure for , with the approximation guarantee in Wasserstein distance instead of the total variation distance used in previous works. This result was later improved to the entire replica-symmetric phase by Celentano [11]. On the other hand, obstacles seem to arise after the phase transition at . El Alaoui, Montanari, and Sellke showed that when , the onset of disorder chaos naturally obstructs stable sampling algorithms [4]. Similar ideas were used recently by El Alaoui and Gamarnik to rule out stable sampling algorithms for the symmetric binary perceptron problem [1].
One can ask whether this gap in tractability for the search and sampling problems is universal for certain classes of disordered systems. The SK model is an example of a mean-field model: all spins (variables) interact with each other. Mean-field models have the advantage of being mathematically tractable to some degree, but the disadvantage of being unrealistic from the physics perspective as well as the perspective of large real-world networks. Seeking a trade-off between these two aspects, physicists have studied diluted mean-field models: that is, statistical physics models on random graphs or hypergraphs of constant average degree [29]. These models inherit some of the symmetries of mean-field models while also having some non-trivial local geometry. The study of diluted mean field models in physics led to rich and surprising predictions for the behavior of optimization problems on random graphs and random constraint satisfaction problems [27, 25, 20]. Rigorously proving these predictions is a major task in mathematics and theoretical computer science.
The specific diluted mean-field model we address here is the anti-ferromagnetic Ising model on sparse random graphs. The Hamiltonian of this model is the number of edges in the random graph cut by the given configuration, and the ground states correspond to the maximum cuts in the graph. Finding a maximum cut in a graph is one of the best-known constraint satisfaction problems, and due to this connection this model has been studied extensively in computer science as well as statistical physics. It is known that as the average degree of the random graph increases, many aspects of this model can be understood via the SK model. Dembo, Montanari, and Sen showed that the typical size of a maximum cut in both Erdős-Rényi random graphs and random regular graphs can be related to the ground state energy of SK model [21]. They proved this by interpolating the free energy between the two models (described in more detail below). El Alaoui, Montanari, and Sellke then gave an algorithm for finding a near maximum cut in a random regular graph by adapting Montanari’s message-passing algorithm for the SK model [2]. Similar results have also been shown for the more general mixed -spin models and their diluted counterparts [24, 13].
It is natural to ask if this connection between the anti-ferromagnetic Ising model and the SK model can be extended further. In particular, does the search vs. sampling phenomenon also arise in the anti-ferromagnetic Ising model on random graphs? In this paper, we give an affirmative answer to this question.
1.1 Main Results
Before stating our main results we introduce some necessary definitions and notation. We refer to an element in as a configuration. Given a graph and its adjacency matrix , we define the Ising Hamiltonian . Note that is exactly the number of edges in the graph whose endpoints are assigned the same spin by . In particular, a ground state (i.e., a configuration that minimizes the Hamiltonian) of corresponds to a maximum cut in the graph. For any inverse temperature parameter , induces the Gibbs distribution
(1.1) |
where
(1.2) |
is the partition function, the normalizing constant of the Gibbs distribution.
In this paper, as is common in prior works, the random graph model we consider will be the Poissonized random multigraph model , where we first sample the number of edges from a Poisson distribution with mean (so that the expected average degree is ), and then for each edge sample both of its endpoints independently from .
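As a toy illustration of this graph model, here is a minimal sketch of one draw from the Poissonized multigraph distribution. The function names and the Knuth-style Poisson sampler are our own choices, not from the paper:

```python
import math
import random

def sample_poisson(mean, rng):
    # Knuth's multiplicative method; adequate for the small means used here.
    threshold, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def sample_multigraph(n, d, rng):
    """One draw from the Poissonized multigraph model: the number of
    edges m is Poisson(d * n / 2), so the expected average degree is d,
    and each edge picks both endpoints uniformly at random (self-loops
    and parallel edges are allowed)."""
    m = sample_poisson(d * n / 2, rng)
    return [(rng.randrange(n), rng.randrange(n)) for _ in range(m)]
```

Averaged over many draws with `n = 10` and `d = 3`, the edge count concentrates around `d * n / 2 = 15`, matching the stated expected average degree.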
Given any two measures over , we define their normalized Wasserstein distance to be
(1.3) |
where is the set of all couplings of and ; that is, measures over whose marginal on the first argument is and the second .
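To make the coupling definition concrete, consider the one-spin case n = 1, where a coupling of two two-point measures has a single free parameter and the infimum can be scanned directly. The closed form 2·sqrt(|p − q|) is our own elementary derivation for this toy case, not a formula from the paper:

```python
import math

def w2_one_spin(p, q, grid=2001):
    """Normalized Wasserstein distance between two measures mu, nu on
    {-1, +1} (the n = 1 case of the definition), with mu(+1) = p and
    nu(+1) = q.  A coupling of two two-point measures has one free
    parameter c = P(x = +1, y = +1); we scan its feasible range and
    keep the cheapest coupling.  Closed form: 2 * sqrt(|p - q|)."""
    lo, hi = max(0.0, p + q - 1.0), min(p, q)  # feasible range for c
    best = float("inf")
    for i in range(grid):
        c = lo + (hi - lo) * i / (grid - 1)
        # unmatched mass (p - c) + (q - c) travels squared distance
        # (1 - (-1))^2 = 4; matched mass costs nothing
        cost = 4.0 * ((p - c) + (q - c))
        best = min(best, cost)
    return math.sqrt(best)
```

Since the cost is linear in the free parameter, the optimal coupling sits at the boundary c = min(p, q), i.e., it matches as much mass as possible before transporting the remainder.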
Our main result is a hardness of sampling theorem in terms of the distance for the measure against stable sampling algorithms where . Informally, a sampling algorithm is stable if its output is insensitive to small perturbations of the input. See Definition 12 for a formal definition.
Theorem 1.
Let be a family of randomized sampling algorithms where takes as input an -vertex (multi-)graph and an inverse temperature parameter and produces an output law over . For every and sufficiently large, if is stable at inverse temperature parameter , then
(1.4) |
That is, stable algorithms cannot sample from the model with vanishing error in Wasserstein distance.
It is known that the replica symmetry breaking threshold for occurs at the Kesten-Stigum bound [31, 19] (unless otherwise specified, all logarithms in this paper are natural logarithms), so the threshold corresponds exactly to the normalized limit . Note that is also the replica symmetry breaking threshold of the Sherrington-Kirkpatrick model.
As it turns out, the same proof for Theorem 1 yields sampling hardness for near-maximum and near-minimum bisections as well. Let be the set of configurations in which the numbers of s and s differ by at most one, which we refer to as bisections. The Gibbs distribution when restricted to gives
(1.5) |
where
(1.6) |
is the bisection partition function. Note that prefers bisections that cut more edges when , and bisections with fewer edges when . This model is also known as the zero-magnetization Ising model and has been studied both in the statistical physics literature on disordered systems [28] and recently in computer science [10, 8, 26].
Theorem 2.
Let be a family of randomized sampling algorithms where takes as input an -vertex (multi-)graph and an inverse temperature parameter and produces an output law over . For every such that and sufficiently large, if is stable at inverse temperature parameter , then
(1.7) |
Theorems 1 and 2 exhibit a gap between search and sampling for the maximum cut and maximum/minimum bisection problems: while a search algorithm is known under the widely believed “No Overlap Gap” conjecture [2] ([2] stated the result for random regular graphs, but it can be transferred to other graph models using e.g. the argument in [13]), no stable algorithm can sample from the Gibbs distribution with arbitrary precision in the Wasserstein metric. Put another way, the algorithm of [2] finds solutions of value for any fixed ; Theorems 1 and 2 rule out stable algorithms that sample solutions of these values approximately uniformly, since a standard reduction of counting to sampling (e.g. [9]) reduces the task of sampling from the Gibbs distribution to sampling uniformly from solutions of a given value.
1.2 Techniques
To show that sampling from the anti-ferromagnetic Ising Gibbs distribution on random graphs is hard for stable sampling algorithms, we use the framework that was used by [4] to show that sampling from the Sherrington-Kirkpatrick Gibbs distribution is hard. The key quantity in this approach is the overlap between two configurations , defined by
The framework consists of the following two steps:
1. Show that w.h.p. over the disorder, the average of the squared overlap with respect to two independent samples and from the Gibbs distribution is at least some positive constant independent of . In particular, this is expected to hold when the Gibbs distribution exhibits replica symmetry breaking, which explains the threshold .
2. Show that the Gibbs distribution exhibits disorder chaos. That is, if we perturb the input (the random coupling constants in the case of the SK model or the random graph in the case of the Ising model) slightly (see Section 2 for a formal definition of this perturbation) and then take a sample from the Gibbs distribution for the original system and a sample from the Gibbs distribution for the perturbed system, with high probability is very close to zero.
Combining these two steps and using a connection between and distance (see Lemma 15), we have that for arbitrarily small perturbations, the distance between the original Gibbs distribution and the perturbed Gibbs distribution is at least some positive constant. However, for stable sampling algorithms, if we make the perturbation sufficiently small then the distance between the old and new output distributions can also be arbitrarily small. This implies that the output distribution of any stable sampling algorithm will have strictly positive distance from the target Gibbs distribution.
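The first quantity in this framework, the Gibbs average of the squared overlap, can be computed exactly on toy instances by brute-force enumeration. The monochromatic-edge Hamiltonian and sign convention below are our own choices matching the informal description, and may differ from the paper's normalization:

```python
import itertools
import math

def avg_squared_overlap(edges, n, beta):
    """Gibbs average <R(s, t)^2> of the squared overlap
    R(s, t) = (1/n) * sum_i s_i * t_i over two independent samples from
    mu(s) proportional to exp(-beta * H(s)), where H(s) counts
    monochromatic edges.  Brute force over all 2^n configurations,
    so tiny n only."""
    configs = list(itertools.product([-1, 1], repeat=n))
    weights = [math.exp(-beta * sum(1 for (u, v) in edges if s[u] == s[v]))
               for s in configs]
    Z = sum(weights)
    probs = [w / Z for w in weights]
    return sum(p * q * (sum(si * ti for si, ti in zip(s, t)) / n) ** 2
               for p, s in zip(probs, configs)
               for q, t in zip(probs, configs))
```

On a 4-cycle, at beta = 0 the measure is uniform and the average squared overlap is exactly 1/n, while at large beta the mass concentrates on the two max-cut bipartitions and the average squared overlap approaches 1 — a caricature of the positive-overlap regime in step 1.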
In order to carry out this framework for sparse random graph models, we prove new structural properties of the Gibbs measures and and connect them to those of the Gibbs measures and of the SK model (see Section 2 for the formal definition of and ). This further extends the close connection between the behavior of the SK model and the sparse random graph model. Previously, it was shown by Dembo, Montanari, and Sen [21] that the free energy of the sparse model converges to that of the SK model as the average degree increases to infinity. We first extend this to the average Hamiltonian of a sample drawn from the Gibbs distribution, showing that this quantity also converges in the same manner, regardless of the bisection restriction. This is done by using convexity of the free energy and the well-known observation that the average Hamiltonian can be obtained by taking the derivative of the free energy (see (3.1) and (3.2)). The main difficulty here is establishing that the limiting free energy is the same regardless of the bisection restriction, the details of which can be found in the full version of this paper.
Proposition 3.
(a) For every , we have (1.8)
(b) For every , we have (1.9)
Our main technical contribution is the following proposition which says that as the average degree goes to infinity, the expected average squared overlap between two independent samples from the Gibbs measure converges to those sampled from the Gibbs measure of the SK model, again regardless of the bisection restriction. This gives the first ingredient in the sampling hardness framework. For the SK Hamiltonian, there is a known connection between the average Hamiltonian and the average squared overlap, which can be established using the Gaussian integration by parts trick (see e.g. [32] for more details). In this paper, we show that one can use the Stein-Chen identity for the Poisson distribution (see Lemma 24) in place of the Gaussian integration by parts trick to establish a similar connection for the sparse model in the large degree limit.
Proposition 4.
(a) For any , (1.10)
(b) For any , (1.11)
Finally, to obtain disorder chaos for the sparse model which is the second component in the sampling hardness framework, we use the fact that two configurations sampled from the coupled system for the SK model have nontrivial overlap with only exponentially small probability (see Theorem 29). This causes a gap in free energy between the coupled system and uncoupled system, which can then be transferred to the sparse model using the interpolation result by [17, 16]. We can then translate this gap back to a disorder chaos statement.
1.3 Organization
The paper is organized as follows. In Section 2 we set up the notation and give a detailed overview of the proof. In Section 3 and Section 4, we establish the correspondence of average energy and average overlap between the two models. Finally in Section 5, we transfer the disorder chaos property from the SK model to the sparse random graph models.
2 Overview of the proof
Gibbs distributions
For , we define the Hamiltonian
For any , it induces the following Gibbs distribution on :
where is the normalizing factor commonly referred to as the partition function. is commonly referred to as the inverse temperature parameter in statistical physics. We also consider the restriction of the above distribution to bisections , with
We refer to such restricted versions as bisection models. In statistical physics literature, this is sometimes also called the model with zero magnetization and the version without the bisection restriction is called the model with non-fixed magnetization. Here, the term magnetization refers to the quantity . We sometimes also write and instead of and if we wish to stress the dependence on and .
is a random matrix drawn from some distribution, and in this paper we consider two important cases. In the first case, , where independently. This is known as the Sherrington-Kirkpatrick model. In the second case we have , where each entry is an independent random variable for some parameter . This case corresponds to sparse random (multi-)graphs with average degree , where the vertex set is and the multiplicity of the edge is if and if (note that here is not the adjacency matrix of the graph and can be asymmetric). Any configuration can be thought of as the indicator vector for a cut, where the vertices assigned 1 by are on one side and those assigned are on the other, and is equal to the difference between the number of edges crossing the cut and the number of edges not crossing the cut. When , the Gibbs measure prefers those configurations that cut more edges, and this corresponds to the Maximum Cut problem for the non-fixed magnetization model and the Maximum Bisection problem for the zero-magnetization model. This case is known as the anti-ferromagnetic Ising model in the statistical physics literature. When , the non-fixed magnetization model corresponds to the Minimum Cut problem while the zero-magnetization model corresponds to the Minimum Bisection problem. This case is known as the ferromagnetic Ising model in the statistical physics literature.
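The cut interpretation above can be sketched in one line. The helper name is ours, and we use the convention that the returned value is (#edges crossing the cut) − (#edges not crossing), matching the "difference" described in the text:

```python
def cut_minus_noncut(sigma, edges):
    """For spins sigma in {-1, +1} and a multigraph given as an edge
    list, return (#edges crossing the cut) - (#edges not crossing).
    Since sigma[u] * sigma[v] is +1 on a non-crossing edge and -1 on a
    crossing edge, this equals -sum over edges of sigma[u] * sigma[v]."""
    return sum(-sigma[u] * sigma[v] for (u, v) in edges)
```

For example, on the path 0–1–2 the configuration (+1, −1, +1) cuts both edges and the difference is 2, while the all-(+1) configuration cuts none and the difference is −2.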
For notational convenience, we sometimes drop the random matrix and write and . We sometimes refer to these two models as dense or sparse models respectively.
Gibbs average
For any function , we can define its average with respect to or :
We can extend this definition to functions which take multiple configurations as inputs, in which case we assume that the configurations are sampled independently from the Gibbs distribution:
and the average over is defined similarly. We sometimes drop the subscripts if they are clear from context. One function of particular interest to us is the overlap between two configurations, defined as
gives the normalized inner product between two configurations. If is close to zero, then two configurations independently sampled from the Gibbs distribution are nearly orthogonal.
Free energy
The free energy for these models is defined as follows.
We can also define free energy in a similar way when restricted to bisections.
The following proposition is a simple consequence of Cauchy-Schwarz.
Proposition 5.
Both and are convex in .
Proof.
For any we have
Taking the logarithm of both sides, we get that is convex in . The convexity of can be shown in the same way.
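The convexity in Proposition 5 can be checked numerically on a toy instance. The brute-force log-partition function below uses our own monochromatic-edge convention for the Hamiltonian; the paper's normalization may differ, but convexity in the inverse temperature holds either way:

```python
import itertools
import math

def log_partition(edges, n, beta):
    """log Z(beta) for mu(s) proportional to exp(-beta * H(s)), with
    H(s) = # monochromatic edges, by brute force over all 2^n
    configurations (toy sizes only)."""
    return math.log(sum(
        math.exp(-beta * sum(1 for (u, v) in edges if s[u] == s[v]))
        for s in itertools.product([-1, 1], repeat=n)))
```

On a triangle, midpoint convexity 2·logZ(0.5) ≤ logZ(0) + logZ(1) holds, and logZ(0) = n·log 2 as it must, since at infinite temperature every configuration gets weight 1.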
By taking expectation over the randomness of we immediately obtain the following corollary.
Corollary 6.
are all convex in .
Correspondence between dense and sparse models
It is known that as the average degree gets larger, the sparse model will “converge” to the dense model in some sense. Dembo, Montanari, and Sen proved this for the free energy of these two models using a clever interpolation argument:
Lemma 7 ([21]).
There exist constants independent of such that for any ,
(2.1) |
In this paper, we further extend this correspondence to the average energy as well as the average overlap .
Proposition 8 (Restatement of Proposition 3).
(a) For every , we have (2.2)
(b) For every , we have (2.3)
Proposition 9 (Restatement of Proposition 4).
(a) For any , (2.4)
(b) For any , (2.5)
Disorder chaos and stable algorithms
Disorder chaos is a well-studied phenomenon in spin glass theory. The term “disorder” refers to the random matrix , and “chaos” describes what happens to the Gibbs distribution if we slightly perturb . For the SK model, consider the following notion of perturbation. Let be two independent copies of Gaussian matrices where independently for each . Given any perturbation parameter , we can consider the two measures and where . Define to be the average of where and sampled independently from these two distributions respectively, i.e.,
It is known that two such samples will likely have some nontrivial overlap when ; when , there is no perturbation and is simply .
Theorem 10 (See e.g. [4]).
If , then there exists such that
(2.6) |
However, when , then the overlap will be zero in the limit.
Theorem 11 ([12]).
For all , we have
(2.7) |
If we think of the expression on the left hand side of (2.6) and (2.7) as a function of , then (2.6) and (2.7) together suggest that this function is not right-continuous at if . It was shown in [4] that this property very naturally obstructs a class of sampling algorithms for .
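A standard way to realize such a perturbation of Gaussian disorder is the interpolation G_t = sqrt(1 − t)·G + sqrt(t)·G′. This parametrization is an assumption on our part and may differ from the paper's exact convention, but it illustrates the key property: every interpolated entry is again a standard Gaussian, with correlation sqrt(1 − t) to the original:

```python
import math

def interpolate_disorder(G, Gp, t):
    """Couple two Gaussian disorder matrices via
    G_t = sqrt(1 - t) * G + sqrt(t) * G'.  If G and G' have i.i.d.
    standard Gaussian entries, so does G_t, with
    Corr(G_t, G) = sqrt(1 - t); t = 0 recovers G exactly and t = 1
    gives an independent disorder."""
    a, b = math.sqrt(1.0 - t), math.sqrt(t)
    return [[a * g + b * gp for g, gp in zip(rg, rgp)]
            for rg, rgp in zip(G, Gp)]
```

Disorder chaos is the statement that the Gibbs measure reacts discontinuously to this interpolation: the overlap between samples drawn at t = 0 and at any fixed t > 0 collapses to zero, even though the disorders remain highly correlated.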
Definition 12 (Definition 2.2 in [4]).
Let be a family of sampling algorithms for the Gibbs measure , where takes an matrix and as input and outputs an assignment in . Let be the output law of . We say that is stable (with respect to disorder at ), if
(2.8) |
Intuitively, (2.8) means that stable algorithms are not able to make the leap at the discontinuity implied by (2.6) and (2.7), and therefore must be producing a distribution that is bounded away from the Gibbs distribution in terms of the distance. This intuition is formalized in [4] as the following theorem.
Theorem 13 (Theorem 2.6 in [4]).
Fix such that . Let be a family of sampling algorithms for the Gibbs measure that is stable with respect to disorder at , then
(2.9) |
In this paper, we obstruct stable sampling algorithms for sparse models as well. For the sparse models, we consider a slightly different notion of perturbation. Fix the average degree and the perturbation parameter ; we take three independent random matrices such that , independently for all . We then define and . Similarly to Definition 12, we say that a family of sampling algorithms with inputs and is stable at inverse temperature if
(2.10) |
As an example, it was recently shown that algorithms implemented by Boolean circuits of bounded depth are stable [1].
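One natural way to realize a three-matrix perturbation of this kind in code is the following sketch. The specific Poisson rates are our own reading of the construction (a shared part plus two independent refreshed parts whose marginals recover the original disorder), and may differ from the paper's exact convention:

```python
import math
import random

def sample_poisson(mean, rng):
    # Knuth's multiplicative method (fine for small means).
    threshold, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def perturbed_pair(n, d, eps, rng):
    """An eps-correlated pair of sparse disorders: A = A0 + A1 and
    A_eps = A0 + A2, with A0 ~ Poisson((1 - eps) * d / (2n)) and
    A1, A2 ~ Poisson(eps * d / (2n)) entrywise, all independent.
    Both marginals are then the original Poisson(d / (2n)) disorder,
    and the two matrices share the common part A0."""
    base, delta = (1 - eps) * d / (2 * n), eps * d / (2 * n)
    A0 = [[sample_poisson(base, rng) for _ in range(n)] for _ in range(n)]
    A1 = [[sample_poisson(delta, rng) for _ in range(n)] for _ in range(n)]
    A2 = [[sample_poisson(delta, rng) for _ in range(n)] for _ in range(n)]
    add = lambda X, Y: [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    return add(A0, A1), add(A0, A2)
```

Setting eps = 0 removes the refreshed parts entirely and returns two identical disorders, matching the intuition that the perturbation vanishes continuously, which is exactly what stability of an algorithm is measured against.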
When , we have by Proposition 9 and Theorem 10 that for ,
(2.11) |
and for with
(2.12) |
On the other hand, if , we show in the following lemma that the overlap tends to zero as goes to infinity.
Lemma 14.
Fix .
(a) For any , (2.13)
(b) For any , (2.14)
The following lemma establishes the connection between the overlap and the distance.
Lemma 15 (Lemma 5.3 in [4]).
Fix . Let be distributions over . We have
(2.15) |
We now have the ingredients needed for proving Theorem 1. Theorem 2 can be proved in the same way and we omit the details here.
Proof of Theorem 1.
We drop the inverse temperature parameter in the notation for brevity. For any arbitrary , by choosing and in Lemma 15, we get that
(2.16) |
Taking expectation on both sides, we obtain
In the second inequality we used the fact that for any . Taking on both sides, we obtain
By (2.11) and (2.13), if is sufficiently large, for some independent of ,
(2.17) |
By the triangle inequality, we have
Since and have the same distribution, we have
(2.18) |
Since is stable at any inverse temperature, by (2.10) and taking sufficiently small we have
(2.19) |
It follows that
Free energy of coupled bisection models
In subsequent proofs, we will need the following results on the free energy for coupled bisection models. More specifically, we show that the limiting free energy does not change under the bisection constraint for both dense and sparse models (for the sparse model we require ). Intuitively, in the case of the SK model this holds because the symmetry of its disorder implies that there is nothing special about the all-one direction. The sparse case requires more care since it no longer has this symmetry, but we overcome the difficulty using Poisson concentration arguments. Due to space constraints, proofs are omitted here and we refer interested readers to the full version of this paper.
Let us define ( is omitted from the notation for simplicity)
and
Similarly for the sparse model, let us define
(2.20) |
(2.21) |
Theorem 16.
Let . For any , we have
By letting and , we immediately obtain the following.
Corollary 17.
We have
Lemma 18.
Fix and . For every , there exists a constant such that for all sufficiently large
(2.22) |
and
(2.23) |
Theorem 19.
Let . For every , if is sufficiently large, then for every ,
(2.24) |
Again, by letting and , we immediately obtain the following.
Theorem 20.
Let . For every , if is sufficiently large, then for every ,
(2.25) |
3 Interpolating the average energy between SK and the diluted model
In this section we prove Proposition 8 and establish the interpolation of average energy between sparse and dense models. By Lemma 7, we already know that the free energy of the sparse model converges to that of the dense model as the average degree goes to infinity. The following connection between free energy and average energy is well-known:
(3.1) |
Similarly,
(3.2) |
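The relation between the free energy and the average energy can be sanity-checked by a finite difference on a toy instance. The sign and normalization conventions below (Gibbs weight exp(−β·H) with H counting monochromatic edges) are assumptions chosen for the sketch, not the paper's exact normalization:

```python
import itertools
import math

def log_Z_and_avg_H(edges, n, beta):
    """For mu(s) proportional to exp(-beta * H(s)) with
    H(s) = # monochromatic edges, return (log Z(beta), <H>_beta).
    The identity d/dbeta [log Z] = -<H> can then be checked by a
    finite difference.  Brute force over 2^n configurations,
    toy sizes only."""
    configs = list(itertools.product([-1, 1], repeat=n))
    Hs = [sum(1 for (u, v) in edges if s[u] == s[v]) for s in configs]
    Z = sum(math.exp(-beta * H) for H in Hs)
    avg_H = sum(H * math.exp(-beta * H) for H in Hs) / Z
    return math.log(Z), avg_H
```

This is the elementary mechanism behind the argument in this section: once the free energies converge, convexity upgrades the convergence to their derivatives, i.e., to the average energies.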
To obtain convergence of the partial derivatives, we use the following elementary fact, sometimes known as Griffiths' lemma in statistical physics.
Proposition 21 (See e.g. [41]).
Let be a family of convex and differentiable functions that converges pointwise on an open interval to . Then at every where exists.
Proof of Proposition 8.
Proposition 8 says that
(a) For every ,
(b) For every ,
We first prove part (b). Assume for the sake of contradiction that for some
(3.3) |
It follows that we can choose a sequence of pairs such that
(3.4) |
Note that for every , can be chosen to be sufficiently large so that the term in Lemma 7 is , and under this assumption we have that , as a family of functions of , converges pointwise (in fact uniformly on any bounded interval) to . Here we remark that it is known that the limit exists and is differentiable in [39, 40], and by Corollary 17 we have
(3.5) |
By Proposition 21, we have
(3.6) |
for every . By (3.2), we have
(3.7) |
and
(3.8) |
(3.6), (3.7), and (3.8) imply that
(3.9) |
However, this contradicts (3.4), so we obtain part (b).
For part (a), note that by Theorem 20, for any bounded open interval , we can choose the sequence to satisfy the additional requirement that
(3.10) |
for all . In particular, we can choose to include any desired (when the Gibbs measure is uniform and part (a) trivially holds). We can then invoke Proposition 21 again and proceed with the same proof.
4 Interpolating the average overlap between SK and the diluted model
In this section we prove Proposition 9. Since the proofs are mostly the same, we will only prove item (a) in Proposition 9. We first state the following well-known fact which relates the average energy to the average overlap of the Gibbs distribution in the SK model.
Lemma 22 (See e.g. [32]).
For every , we have
(4.1) |
The proof of Lemma 22 uses the following proposition.
Proposition 23.
Let . We have
(4.2) |
Proof.
For every and , we have
(4.3) |
Summing over all , we obtain
Lemma 22 can be proved using Proposition 23 and the Gaussian integration by parts formula. We prove a statement similar to Lemma 22, but for the sparse model. We will use the following lemma, known as the Stein-Chen identity for the Poisson distribution, in place of Gaussian integration by parts.
Lemma 24 ([14]).
Let . For any bounded function , we have
(4.4) |
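The Stein-Chen identity — for Z ~ Poisson(λ), E[Z·f(Z)] = λ·E[f(Z + 1)] — is easy to verify numerically by direct (truncated) summation of the Poisson mass function:

```python
import math

def stein_chen_gap(lam, f, terms=80):
    """Difference E[Z * f(Z)] - lam * E[f(Z + 1)] for Z ~ Poisson(lam),
    computed by direct summation truncated at `terms`.  The Stein-Chen
    identity says this difference is zero."""
    pk = math.exp(-lam)  # P(Z = 0)
    lhs = rhs = 0.0
    for k in range(terms):
        lhs += k * f(k) * pk
        rhs += lam * f(k + 1) * pk
        pk *= lam / (k + 1)  # Poisson pmf recurrence
    return lhs - rhs
```

The gap vanishes (up to truncation error) for any reasonable test function, which is exactly the mechanism used in this section as a discrete substitute for Gaussian integration by parts.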
Lemma 25.
Let be the matrix with 1 in the -th entry and 0 everywhere else. Then there exists such that
(4.5) |
Proof.
Since is infinitely differentiable in each entry of , by Taylor’s Theorem, there exists such that
(4.6) |
It remains to bound . By (4.3), we have (omitting from the notation for simplicity)
Since
(4.7) |
and similarly
(4.8) |
it follows that
(4.9) |
Proposition 26.
Let be the matrix with 1 in the -th entry and 0 everywhere else. For any , we have
(4.10) |
Proof.
We have
(4.11) |
which implies that
(4.12) |
Summing over , we obtain
(4.13) |
Since these quantities are all strictly positive, we have
(4.14) |
and similarly
Lemma 27.
For every , we have
Proof.
For simplicity, we drop the subscripts from . We have
(4.15) |
Here in the second equality we invoked Lemma 24. By Lemma 25, for some we have
(4.16) |
where the last inequality is due to Proposition 26. We observe that
(4.17) |
By Proposition 23, we also have
(4.18) |
The lemma follows by combining (4.15), (4.16), (4.17), and (4.18).

The next proposition deals with the first term in the above lemma. Note that this term is trivially bounded in the bisection models for all .
Proposition 28.
For every , we have
(4.19) |
Proof.
Fix an arbitrary and let . We have
(4.20) |
Recall that . Given , take a such that . By Poisson concentration properties (see full version for details), we have that there exists a constant such that with probability we have , and consequently
(4.21) |
By taking expectations, we obtain
(4.22) |
Here in the last inequality we used the fact that is the same for all .
Combining (4.20) and (4.22), we obtain
(4.23) |
If is sufficiently large, the above gives
(4.24) |
The proposition follows since is chosen arbitrarily.
We are now ready to prove Proposition 9.
Proof of Proposition 9.
5 Disorder chaos for sparse models
Let be random matrices as defined in Section 2 and let . Recall that we also defined the following short-hand notation:
(5.1) |
(5.2) |
(5.3) |
(5.4) |
It is known that the SK model exhibits disorder chaos at any temperature.
Theorem 29 (Theorem 9 in [15]).
Let , . Fix an arbitrary . Let . There exists some constant such that for every ,
(5.5)
Theorem 29 immediately implies that the overlap of two configurations sampled from the coupled system is nearly zero.
Corollary 30.
If , then
(5.6)
Proof.
For any , we have
Here we used Theorem 29 and the fact that . It follows that
for arbitrary , and the corollary follows.
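The passage from the exponential tail bound to a vanishing expected overlap is a generic first-moment computation; spelled out with illustrative symbols (an overlap $R$ with $|R|\le 1$), it reads:

```latex
% If |R| \le 1 and \Pr[|R| \ge \epsilon] \le C e^{-cn}, then
\mathbb{E}\,|R|
  \le \epsilon \cdot \Pr\bigl[|R| < \epsilon\bigr]
    + 1 \cdot \Pr\bigl[|R| \ge \epsilon\bigr]
  \le \epsilon + C e^{-cn},
% which tends to \epsilon as n \to \infty; since \epsilon > 0 is
% arbitrary, \mathbb{E}\,|R| \to 0.
```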
The exponentially small fraction in Theorem 29 can be translated into a constant gap between the free energies of the coupled and uncoupled systems.
Proposition 31.
For every , there exists some constant such that
(5.7)
Proof.
The following theorem, which is a generalized version of Lemma 7, allows us to transfer this free energy gap to sparse models.
We are now ready to prove Lemma 14.
Proof of Lemma 14.
As before, we only prove part (a). Fix some arbitrary . By Theorem 16, for both and we have
(5.9)
By Proposition 31, for some constant , we have
(5.10)
We can then use Theorem 32 to transfer this gap to the sparse model and obtain
(5.11)
By Theorem 19, we then have
(5.12)
This means that if is sufficiently large, then for all sufficiently large we have
(5.13)
By Lemma 18, we have that with probability at least
(5.14)
which rearranges to
(5.15)
It follows that
(5.16)
By taking to infinity, we have . This completes the proof.
References
- [1] Ahmed El Alaoui and David Gamarnik. Hardness of sampling solutions from the Symmetric Binary Perceptron, July 2024. arXiv:2407.16627 [cs, math]. doi:10.48550/arXiv.2407.16627.
- [2] Ahmed El Alaoui, Andrea Montanari, and Mark Sellke. Local algorithms for Maximum Cut and Minimum Bisection on locally treelike regular graphs of large degree, November 2021. arXiv:2111.06813 [math-ph]. doi:10.48550/arXiv.2111.06813.
- [3] Ahmed El Alaoui, Andrea Montanari, and Mark Sellke. Optimization of mean-field spin glasses. The Annals of Probability, 49(6):2922–2960, November 2021. Publisher: Institute of Mathematical Statistics. doi:10.1214/21-AOP1519.
- [4] Ahmed El Alaoui, Andrea Montanari, and Mark Sellke. Sampling from the Sherrington-Kirkpatrick Gibbs measure via algorithmic stochastic localization, March 2022. arXiv:2203.05093 [cond-mat]. doi:10.48550/arXiv.2203.05093.
- [5] Nima Anari, Vishesh Jain, Frederic Koehler, Huy Tuan Pham, and Thuy-Duong Vuong. Entropic independence I: Modified log-Sobolev inequalities for fractionally log-concave distributions and high-temperature Ising models. arXiv preprint arXiv:2106.04105, 2021.
- [6] Nima Anari, Frederic Koehler, and Thuy-Duong Vuong. Trickle-down in localization schemes and applications. In Proceedings of the 56th Annual ACM Symposium on Theory of Computing, STOC 2024, pages 1094–1105, New York, NY, USA, 2024. Association for Computing Machinery. doi:10.1145/3618260.3649622.
- [7] Roland Bauerschmidt and Thierry Bodineau. A very simple proof of the LSI for high temperature spin systems. Journal of Functional Analysis, 276(8):2582–2588, 2019.
- [8] Roland Bauerschmidt, Thierry Bodineau, and Benoit Dagallier. Kawasaki dynamics beyond the uniqueness threshold. arXiv preprint arXiv:2310.04609, 2023. doi:10.48550/arXiv.2310.04609.
- [9] Ivona Bezáková, Daniel Štefankovič, Vijay V Vazirani, and Eric Vigoda. Accelerating simulated annealing for the permanent and combinatorial counting problems. SIAM Journal on Computing, 37(5):1429–1454, 2008. doi:10.1137/050644033.
- [10] Charlie Carlson, Ewan Davies, Alexandra Kolla, and Will Perkins. Computational thresholds for the fixed-magnetization Ising model. In Proceedings of the 54th Annual ACM SIGACT Symposium on Theory of Computing, pages 1459–1472, 2022. doi:10.1145/3519935.3520003.
- [11] Michael Celentano. Sudakov–Fernique post-AMP, and a new proof of the local convexity of the TAP free energy. The Annals of Probability, 52(3):923–954, 2024. doi:10.1214/23-AOP1675.
- [12] Sourav Chatterjee. Disorder chaos and multiple valleys in spin glasses. arXiv preprint, 2009. arXiv:0907.3381.
- [13] Antares Chen, Neng Huang, and Kunal Marwaha. Local algorithms and the failure of log-depth quantum advantage on sparse random CSPs, October 2023. arXiv:2310.01563 [quant-ph]. doi:10.48550/arXiv.2310.01563.
- [14] Louis H. Y. Chen. Poisson Approximation for Dependent Trials. The Annals of Probability, 3(3):534–545, June 1975. Publisher: Institute of Mathematical Statistics. doi:10.1214/aop/1176996359.
- [15] Wei-Kuo Chen. Variational representations for the Parisi functional and the two-dimensional Guerra–Talagrand bound. The Annals of Probability, 45(6A):3929–3966, November 2017. Publisher: Institute of Mathematical Statistics. doi:10.1214/16-AOP1154.
- [16] Wei-Kuo Chen, David Gamarnik, Dmitry Panchenko, and Mustazee Rahman. Suboptimality of local algorithms for a class of max-cut problems. The Annals of Probability, 47(3):1587–1618, May 2019. Publisher: Institute of Mathematical Statistics. doi:10.1214/18-AOP1291.
- [17] Wei-Kuo Chen and Dmitry Panchenko. Disorder chaos in some diluted spin glass models. The Annals of Applied Probability, 28(3):1356–1378, June 2018. doi:10.1214/17-AAP1331.
- [18] Yuansi Chen and Ronen Eldan. Localization schemes: A framework for proving mixing bounds for Markov chains. In 2022 IEEE 63rd Annual Symposium on Foundations of Computer Science (FOCS), pages 110–122. IEEE, 2022.
- [19] Amin Coja-Oghlan, Philipp Loick, Balázs F. Mezei, and Gregory B. Sorkin. The Ising Antiferromagnet and Max Cut on Random Regular Graphs. SIAM Journal on Discrete Mathematics, 36(2):1306–1342, June 2022. Publisher: Society for Industrial and Applied Mathematics. doi:10.1137/20M137999X.
- [20] Aurelien Decelle, Florent Krzakala, Cristopher Moore, and Lenka Zdeborová. Asymptotic analysis of the stochastic block model for modular networks and its algorithmic applications. Physical Review E—Statistical, Nonlinear, and Soft Matter Physics, 84(6):066106, 2011.
- [21] Amir Dembo, Andrea Montanari, and Subhabrata Sen. Extremal cuts of sparse random graphs. The Annals of Probability, 45(2):1190–1217, March 2017. Publisher: Institute of Mathematical Statistics. doi:10.1214/15-AOP1084.
- [22] Ronen Eldan, Frederic Koehler, and Ofer Zeitouni. A spectral condition for spectral gap: fast mixing in high-temperature Ising models. Probability Theory and Related Fields, 182(3):1035–1051, 2022.
- [23] Francesco Guerra. Broken replica symmetry bounds in the mean field spin glass model. Communications in Mathematical Physics, 233:1–12, 2003.
- [24] Chris Jones, Kunal Marwaha, Juspreet Singh Sandhu, and Jonathan Shi. Random Max-CSPs Inherit Algorithmic Hardness from Spin Glasses. In Yael Tauman Kalai, editor, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023), volume 251 of Leibniz International Proceedings in Informatics (LIPIcs), pages 77:1–77:26, Dagstuhl, Germany, 2023. Schloss Dagstuhl – Leibniz-Zentrum für Informatik. doi:10.4230/LIPIcs.ITCS.2023.77.
- [25] Florent Krzakała, Andrea Montanari, Federico Ricci-Tersenghi, Guilhem Semerjian, and Lenka Zdeborová. Gibbs states and the set of solutions of random constraint satisfaction problems. Proceedings of the National Academy of Sciences, 104(25):10318–10323, 2007. doi:10.1073/PNAS.0703685104.
- [26] Aiya Kuchukova, Marcus Pappik, Will Perkins, and Corrine Yap. Fast and slow mixing of the Kawasaki dynamics on bounded-degree graphs. arXiv preprint, 2024. doi:10.48550/arXiv.2405.06209.
- [27] Marc Mézard, Giorgio Parisi, and Riccardo Zecchina. Analytic and algorithmic solution of random satisfiability problems. Science, 297(5582):812–815, 2002.
- [28] Marc Mézard and Giorgio Parisi. Mean-field theory of randomly frustrated systems with finite connectivity. Europhysics Letters, 3(10):1067, 1987.
- [29] Marc Mézard and Giorgio Parisi. The Bethe lattice spin glass revisited. The European Physical Journal B-Condensed Matter and Complex Systems, 20:217–233, 2001.
- [30] Andrea Montanari. Optimization of the Sherrington–Kirkpatrick hamiltonian. SIAM Journal on Computing, 0(0):FOCS19–1–FOCS19–38, 2021. doi:10.1137/20M132016X.
- [31] Elchanan Mossel, Joe Neeman, and Allan Sly. Reconstruction and estimation in the planted partition model. Probability Theory and Related Fields, 162(3):431–461, August 2015. doi:10.1007/s00440-014-0576-6.
- [32] Dmitry Panchenko. The Sherrington-Kirkpatrick Model. Springer Monographs in Mathematics. Springer New York, NY, 2013.
- [33] Giorgio Parisi. Infinite number of order parameters for spin-glasses. Physical Review Letters, 43(23):1754, 1979.
- [34] Giorgio Parisi. A sequence of approximated solutions to the SK model for spin glasses. Journal of Physics A: Mathematical and General, 13(4):L115, 1980.
- [35] Giorgio Parisi. Order parameter for spin-glasses. Physical Review Letters, 50(24):1946, 1983.
- [36] David Sherrington and Scott Kirkpatrick. Solvable model of a spin-glass. Physical Review Letters, 35(26):1792, 1975.
- [37] H. Sompolinsky and Annette Zippelius. Dynamic theory of the spin-glass phase. Phys. Rev. Lett., 47:359–362, August 1981. doi:10.1103/PhysRevLett.47.359.
- [38] Eliran Subag. Following the ground states of full-RSB spherical spin glasses. Communications on Pure and Applied Mathematics, 74(5):1021–1044, 2021.
- [39] Michel Talagrand. The Parisi Formula. Annals of Mathematics, 163(1):221–263, 2006. Publisher: Annals of Mathematics. URL: https://www.jstor.org/stable/20159953.
- [40] Michel Talagrand. Parisi measures. Journal of Functional Analysis, 231(2):269–286, February 2006. doi:10.1016/j.jfa.2005.03.001.
- [41] Michel Talagrand. Mean Field Models for Spin Glasses: Volume I: Basic Examples. Springer, Berlin, Heidelberg, 2011. doi:10.1007/978-3-642-15202-3.