
Algorithmic Contiguity from Low-Degree Conjecture and Applications in Correlated Random Graphs

Zhangsong Li, School of Mathematical Sciences, Peking University, Beijing, China
Abstract

In this paper, assuming a natural strengthening of the low-degree conjecture, we provide evidence of computational hardness for two problems: (1) the (partial) matching recovery problem in sparse correlated Erdős-Rényi graphs 𝒢(n,q;ρ) when the edge density q = n^{−1+o(1)} and the correlation ρ < √α lies below Otter's threshold, which resolves a problem left open in [15]; (2) the detection problem between a pair of correlated sparse stochastic block models 𝒮(n,λ/n;k,ϵ;s) and a pair of independent stochastic block models 𝒮(n,λs/n;k,ϵ) when ϵ²λs < 1 lies below the Kesten-Stigum (KS) threshold and s < √α lies below Otter's threshold, which resolves a problem left open in [9].

One of the main ingredients in our proof is to derive certain forms of algorithmic contiguity between two probability measures based on bounds on their low-degree advantage. To be more precise, consider the high-dimensional hypothesis testing problem between two probability measures ℙ and ℚ based on the sample 𝖸. We show that if the low-degree advantage 𝖠𝖽𝗏_{≤D}(dℙ/dℚ) = O(1), then (assuming the low-degree conjecture) there is no efficient algorithm 𝒜 such that ℚ(𝒜(𝖸)=0) = 1−o(1) and ℙ(𝒜(𝖸)=1) = Ω(1). This framework provides a useful tool for performing reductions between different inference tasks.

Keywords and phrases:
Algorithmic Contiguity, Low-degree Conjecture, Correlated Random Graphs
Category:
RANDOM
Funding:
Zhangsong Li: Partially supported by National Key R&D program of China (Project No. 2023YFA1010103) and NSFC Key Program (Project No. 12231002)
Copyright and License:
© Zhangsong Li; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Theory of computation → Random network models; Mathematics of computing → Random graphs
Related Version:
Full Version: https://arxiv.org/abs/2502.09832 [41]
Acknowledgements:
Z.L. thanks Jian Ding and Jingqiu Ding for helpful discussions, Alexander S. Wein for helpful comments on the revised low-degree conjecture, and Hang Du for pointing him to the reference [56].
Editors:
Alina Ene and Eshan Chattopadhyay

1 Introduction

Graph matching, also referred to as network alignment, is the problem of identifying a bijection between the vertex sets of two graphs that maximizes the number of common edges. When the two graphs are exactly isomorphic to each other, this problem reduces to the classical graph isomorphism problem, for which the best known algorithm runs in quasi-polynomial time [1]. In general, graph matching is an instance of the quadratic assignment problem [7], which is known to be NP-hard to solve or even to approximate [42]. Motivated by real-world applications (such as social network de-anonymization [49] and computational biology [55]) as well as the need to understand average-case computational complexity, recent research has focused on developing theoretical foundations and efficient algorithms for graph matching under statistical models. These models assume that the two graphs are randomly generated with correlated edges under a hidden vertex correspondence, and a canonical model among them is the following correlated random graph model. For any integer n, denote by U = U_n the set of unordered pairs (i,j) with 1 ≤ i < j ≤ n.

Definition 1 (Correlated random graph model).

Given an integer n ≥ 1, for (i,j) ∈ U let J_{i,j} and K_{i,j} be independent Bernoulli variables with parameter s. In addition, let π be an independent uniform permutation on [n] = {1,…,n}. Then, we define a triple of correlated random graphs (G,A,B) as follows: first we generate G independently of {J_{i,j}, K_{i,j}} and π from a specific probability distribution over all graphs on [n], and then (conditioned on G) we define for each (i,j) ∈ U that (note that we identify a graph with its adjacency matrix)

A_{i,j} = G_{i,j}J_{i,j},  B_{i,j} = G_{π^{−1}(i),π^{−1}(j)}K_{π^{−1}(i),π^{−1}(j)}.

In short, we will subsample A,B from G with subsampling probability s and then permute the vertices of B by a uniform permutation.

Of particular interest in our paper are the following two special cases, namely the correlated Erdős-Rényi model and the correlated stochastic block model (SBM).

Definition 2 (Correlated Erdős-Rényi graph model).

Given an integer n ≥ 1 and two parameters p,s ∈ (0,1), we generate a triple of correlated random graphs (G,A,B) as follows: first generate G according to the Erdős-Rényi distribution 𝒢(n,p) (i.e., for each (i,j) ∈ U we connect (i,j) in G independently with probability p), and then generate (A,B) from G according to Definition 1. For ease of presentation, we reparameterize by setting q = ps and ρ = s(1−p)/(1−ps). We will denote the marginal law of (A,B) by 𝒢(n,q;ρ).
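To make Definitions 1 and 2 concrete, the following is a minimal sampling sketch (our own illustration, not part of the paper; it assumes numpy and represents graphs by adjacency matrices). It also checks empirically that A has edge density q = ps and that corresponding edges of A and B have correlation ρ = s(1−p)/(1−ps).

```python
import numpy as np

def correlated_er(n, p, s, rng):
    """Sample (G, A, B) as in Definitions 1-2: G ~ G(n,p); A and B are
    independent edge-subsamples of G with probability s, and the vertices
    of B are relabeled by a uniform permutation pi."""
    def sym(M):  # keep the strict upper triangle and symmetrize
        M = np.triu(M, 1)
        return M + M.T
    G = sym((rng.random((n, n)) < p).astype(int))
    A = G * sym((rng.random((n, n)) < s).astype(int))   # A_{ij} = G_{ij} J_{ij}
    Bp = G * sym((rng.random((n, n)) < s).astype(int))  # G_{ij} K_{ij}, before relabeling
    pi = rng.permutation(n)
    inv = np.argsort(pi)                                # inv[i] = pi^{-1}(i)
    B = Bp[np.ix_(inv, inv)]                            # B_{ij} = (GK)_{pi^{-1}(i), pi^{-1}(j)}
    return G, A, B, Bp, pi

rng = np.random.default_rng(0)
n, p, s = 3000, 0.01, 0.6
G, A, B, Bp, pi = correlated_er(n, p, s, rng)
iu = np.triu_indices(n, 1)
print(A[iu].mean(), p * s)                                          # edge density ~ q = ps
print(np.corrcoef(A[iu], Bp[iu])[0, 1], s * (1 - p) / (1 - p * s))  # correlation ~ rho
```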

Definition 3 (Stochastic block model).

Given an integer n ≥ 1 and three parameters k ∈ ℕ, λ > 0 and ϵ ∈ (0,1), we define a random graph G as follows: (1) sample a labeling σ ∈ [k]^n = {1,…,k}^n uniformly at random; (2) for every pair (i,j) ∈ U_n, let G_{i,j} be an independent Bernoulli variable such that G_{i,j} = 1 (which represents that there is an undirected edge between i and j) with probability (1+(k−1)ϵ)λ/n if σ(i) = σ(j) and with probability (1−ϵ)λ/n if σ(i) ≠ σ(j). In this case, we say that G is sampled from the stochastic block model 𝒮(n,λ/n;k,ϵ).

Definition 4 (Correlated stochastic block model).

Given an integer n ≥ 1 and four parameters k ∈ ℕ, λ > 0 and ϵ,s ∈ (0,1), we define a triple of correlated random graphs (G,A,B) as follows: we first sample G according to the law of the stochastic block model 𝒮(n,λ/n;k,ϵ) and then generate (A,B) from G according to Definition 1. We will denote the marginal law of (A,B) by 𝒮(n,λ/n;k,ϵ;s).
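Analogously, a sketch of Definitions 3 and 4 (again our own illustration under the same assumptions; the function names are ours):

```python
import numpy as np

def correlated_sbm(n, lam, k, eps, s, rng):
    """Sample (G, A, B) with G ~ S(n, lam/n; k, eps) (Definition 3) and
    (A, B) obtained from G as in Definitions 1 and 4."""
    sigma = rng.integers(0, k, size=n)                  # uniform labeling in [k]^n
    same = sigma[:, None] == sigma[None, :]
    P = np.where(same, (1 + (k - 1) * eps) * lam / n, (1 - eps) * lam / n)
    def sym(M):
        M = np.triu(M, 1)
        return M + M.T
    G = sym((rng.random((n, n)) < P).astype(int))
    A = G * sym((rng.random((n, n)) < s).astype(int))
    B = G * sym((rng.random((n, n)) < s).astype(int))
    pi = rng.permutation(n)
    inv = np.argsort(pi)
    return sigma, pi, G, A, B[np.ix_(inv, inv)]

rng = np.random.default_rng(1)
n, lam = 4000, 5.0
sigma, pi, G, A, B = correlated_sbm(n, lam, k=2, eps=0.4, s=0.5, rng=rng)
print(G.sum() / n)   # average degree; concentrates around lam = 5
```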

Two fundamental problems in the study of correlated random graph models are as follows: (1) the detection problem, which involves determining whether a given pair of graphs (A,B) is sampled from a pair of correlated random graphs or from a pair of independent random graphs; (2) the matching problem, which focuses on recovering the latent matching π from a sample (A,B) drawn from the distribution of correlated random graphs. In recent years, significant progress has been made in understanding these problems for both the correlated Erdős-Rényi model and correlated stochastic block models. Through the collective efforts of the community, the information-theoretic thresholds for detection and matching have been fully characterized for the correlated Erdős-Rényi model [10, 31, 58, 59, 27, 13, 14, 21] and partially characterized for correlated SBMs [51, 30]. Additionally, various efficient detection and matching algorithms have been developed with performance guarantees [11, 4, 18, 22, 23, 26, 28, 29, 43, 44, 46, 45, 16, 17, 8, 9, 40]. Notably, like many other inference tasks in high-dimensional statistics [61, 52, 39, 24], these problems appear to exhibit information-computation gaps. Specifically, for certain ranges of the correlation strength, detection or matching is information-theoretically possible, but no efficient algorithm is known to achieve these tasks. We now focus on the algorithmic side of these problems, as it is more relevant to our work. Indeed, it has been shown that many inference tasks in correlated random graph models exhibit sharp algorithmic phase transitions, as we elaborate below:

  • For the detection problem between a pair of correlated Erdős-Rényi graphs 𝒢(n,q;ρ) and two independent Erdős-Rényi graphs 𝒢(n,q), we focus on the sparse regime where q = n^{−1+o(1)}. In this regime, on the one hand, it was shown in [46] that when ρ > √α, where α ≈ 0.338 is Otter's constant [50], there is an efficient algorithm that strongly distinguishes the two models; on the other hand, it was shown in [15] that when ρ < √α there is evidence suggesting that all algorithms based on low-degree polynomials fail to strongly distinguish the two models.

  • For the detection problem between a pair of correlated SBMs 𝒮(n,λ/n;k,ϵ;s) and two independent Erdős-Rényi graphs 𝒢(n,λs/n), we focus on the constant-degree regime where λ = O(1). In this regime, on the one hand, it was shown in [9] that when s > min{√α, 1/(ϵ²λ)}, where α ≈ 0.338 is Otter's constant and 1/(ϵ²λ) is the Kesten-Stigum threshold [35], there is an efficient algorithm that strongly distinguishes the two models; on the other hand, it was also shown in [9] that when s < min{√α, 1/(ϵ²λ)} there is evidence suggesting that all algorithms based on low-degree polynomials fail to strongly distinguish the two models.

The lower bounds in the aforementioned results explore inherent computational barriers from the perspective of the low-degree polynomial framework. Indeed, the class of low-degree polynomial algorithms has proven to be a useful proxy for computationally efficient algorithms, in the sense that the best-known polynomial-time algorithms for a wide variety of high-dimensional inference problems are captured by the low-degree class; see e.g. [33, 32, 54, 39]. However, these results suffer from two significant limitations, which we now discuss. Firstly, they cannot provide evidence that partial matching recovery (that is, recovering a positive fraction of the coordinates of the latent matching π) is impossible for efficient algorithms, even though efficient detection has already been ruled out. Secondly, (in the case of correlated SBMs) they only establish the computational threshold for the detection problem between the correlated model and a pair of independent Erdős-Rényi graphs. These limitations motivate two natural questions:

  1. (1)

    Can we provide evidence that partial matching recovery is impossible (for efficient algorithms) in the same parameter regime where there is evidence suggesting that detection is impossible (for efficient algorithms)?

  2. (2)

    What can we say about the (arguably more natural) detection problem between a pair of correlated SBMs 𝒮(n,λ/n;k,ϵ;s) and a pair of independent SBMs 𝒮(n,λs/n;k,ϵ)?

The aim of this paper is to (partially) answer these two questions. Our main result can be informally summarized as follows:

Theorem 5 (Informal).

We have the following results.

  1. (1)

    Assuming a natural strengthening of the low-degree conjecture (see Conjecture 12 for its precise meaning), for the correlated Erdős-Rényi model 𝒢(n,q;ρ) with q = n^{−1+o(1)} and ρ < √α, it is impossible to recover a positive fraction of the coordinates of π with constant probability by efficient algorithms.

  2. (2)

    Assuming a natural strengthening of the low-degree conjecture (see Conjecture 12 for its precise meaning), for the correlated stochastic block models 𝒮(n,λ/n;k,ϵ;s) with λ = O(1) and s < min{√α, 1/(ϵ²λ)}, it is impossible to recover a positive fraction of the coordinates of π with probability tending to 1 as n → ∞ by efficient algorithms.

  3. (3)

    Assuming the low-degree conjecture (see Conjecture 9 for its precise meaning), for the correlated stochastic block models 𝒮(n,λ/n;k,ϵ;s) with s < min{√α, 1/(ϵ²λ)}, it is impossible to strongly distinguish this model from a pair of independent SBMs 𝒮(n,λs/n;k,ϵ) by efficient algorithms, provided that the average degree λs is sufficiently large.

 Remark 6.

Note that when ρ > √α, the results in [45, 26, 28, 29] show that there exists an efficient algorithm that achieves partial recovery of π in a pair of correlated Erdős-Rényi graphs with probability tending to 1 as n → ∞. Thus, Item (1) in Theorem 5 is tight in some sense, and the algorithmic partial recovery threshold is indeed given by ρ = √α.

Similarly, the results in [45, 26, 28, 29] naturally extend to show that there exists an efficient algorithm that achieves partial recovery of π in a pair of correlated SBMs when s > √α with probability tending to 1 as n → ∞. Thus, Item (2) in Theorem 5 is also tight in the subcritical regime (i.e., when marginally the two graphs are below the KS threshold).

 Remark 7.

From Item (3) in Theorem 5 we see that for a pair of correlated SBMs (A,B), when marginally both A and B are below the KS threshold (i.e., when ϵ²λs < 1), there is evidence suggesting that no efficient algorithm can strongly distinguish (A,B) from a pair of independent stochastic block models when the correlation s < √α. Since the result in [46] extends naturally to the case of stochastic block models and provides an efficient algorithm that strongly distinguishes the two models when s > √α, we see that in the subcritical regime (i.e., when marginally both A and B are below the KS threshold) the algorithmic correlation detection threshold is given by √α. On the contrary, in the supercritical regime where A and B are above the KS threshold (i.e., when ϵ²λs > 1), we believe that the algorithmic correlation detection threshold should be strictly lower than √α.

1.1 Key challenges and innovations

In this subsection, we briefly discuss our approach to proving Theorem 5 and some conceptual innovations behind it. Our ideas can be informally summarized as follows:

  1. (1)

    As for Items (1) and (2), for simplicity we take the correlated Erdős-Rényi model as an example. Denote by ℙ the law of 𝒢(n,q;ρ) and by ℚ the law of two independent copies of 𝒢(n,q). We argue by contradiction and assume that there is a partial recovery algorithm. We then show that we can use this algorithm to efficiently construct a family of statistics {g_i : 1 ≤ i ≤ n} such that g_i approximates 𝟏{π(1)=i} under ℙ in a certain sense.

  2. (2)

    We will show that the low-degree advantage (see (1) for its precise meaning) between ℙ(·|π(1)=i) and ℚ is bounded by an absolute constant. Then, from the standard low-degree conjecture (see Conjecture 9 for details), these two measures cannot be strongly distinguished by efficient algorithms. Thus, since g_i is "of positive constant order" under the measure ℙ(·|π(1)=i) (as it should approximate 𝟏{π(1)=i} in some sense), we expect that g_i should also be "of positive constant order" under ℚ.

  3. (3)

    Define g = g_1 + ⋯ + g_n. We then have a statistic g that can be efficiently computed, and (I) under ℚ we expect that g is "large", as it is the sum of n terms each "of positive constant order"; (II) under ℙ we expect that g is "not large", as each g_i should approximate 𝟏{π(1)=i} in some sense. Thus, the statistic g accumulates more signal than all low-degree polynomials, which (non-rigorously speaking) violates the low-degree conjecture.

  4. (4)

    As for Item (3), denote by ℙ the law of a pair of correlated SBMs and by ℚ the law of a pair of independent SBMs. Also denote by ℚ̃ the law of a pair of independent Erdős-Rényi graphs. As it was already shown in [9] that ℙ and ℚ̃ cannot be strongly distinguished, (non-rigorously speaking) we only need to show that ℚ and ℚ̃ are also "indistinguishable" in some sense.

However, there are certain obstacles when implementing the above ideas, as we discuss below. In Step (2) we need to "transfer" the behavior of a statistic g_i under ℙ(·|π(1)=i) to its behavior under ℚ. If the low-degree advantage between ℙ(·|π(1)=i) and ℚ were bounded by 1+o(1), then the standard low-degree conjecture would imply that we cannot distinguish between ℙ(·|π(1)=i) and ℚ better than random guessing by efficient algorithms. Consequently, we would expect the behavior of g_i to be "almost identical" under both ℙ(·|π(1)=i) and ℚ (as g_i can be efficiently computed). However, in our case the low-degree advantage is only bounded by a large (but fixed) constant. This weaker condition means that the standard low-degree conjecture only rules out the possibility of distinguishing these two measures efficiently with vanishing errors, leaving room for non-negligible distinctions. Similar issues arise in Step (4), in which simply showing that the low-degree advantage between ℙ and ℚ̃ and the low-degree advantage between ℚ and ℚ̃ are bounded is not strong enough for our goal.

To address these issues, one of the main conceptual contributions of this work is a more refined characterization of the limitations of efficient algorithms when the low-degree advantage is O(1). Specifically, we show that (assuming the low-degree conjecture) a bounded low-degree advantage not only excludes all algorithms that strongly distinguish ℙ and ℚ, but also implies a certain kind of algorithmic contiguity. To be more precise, if the low-degree advantage 𝖠𝖽𝗏_{≤D}(dℙ/dℚ) is bounded, then there is no efficient algorithm 𝒜 such that ℚ(𝒜=0) = 1−o(1) and ℙ(𝒜=1) = Ω(1). This framework allows us to transfer the behavior of an efficiently computable statistic g between different probability measures more easily. For example, if we know that ℙ(g ≥ c) = Ω(1), it immediately follows that ℚ(g ≥ c) = Ω(1).

Another difficulty is that reducing between different inference tasks requires a framework that better constrains the behavior of efficiently computable estimators. To address this, we introduce a natural refinement of the low-degree conjecture: we posit that low-degree polynomials achieve signal-to-noise ratios that are at least as good as those of any efficient algorithm (see Conjecture 12 for a more precise description). While similar modifications of the low-degree conjecture have been explored in [47], our motivations differ significantly: that work focused on characterizing the precise error rates achievable by all computationally feasible algorithms, whereas our goal is to enable reductions between distinct inference tasks.

1.2 Comparison to concurrent work

In the concurrent work [19], the authors investigate the computational hardness of the weak recovery problem in the stochastic block model 𝒮(n,λ/n;k,ϵ) when k = n^{o(1)} and ϵ²λ < 1 lies below the KS threshold. Notably, their proof is also based on a "recovery-to-detection reduction" approach, which argues that if there exists an efficient algorithm that achieves weak recovery, then this algorithm can be used to efficiently construct a statistic for detection (in a certain sense). Their result also relies on a strengthening of the low-degree conjecture, similar to the assumption made in this paper.

However, the reduction technique employed in this paper differs significantly from that of [19]. The authors of [19] construct their statistic for detection using a correlation-preserving projection technique established in [34]. This technique, which is designed specifically for weak recovery in block models, enables them to "regularize" the statistic and thus bound its moments under the null hypothesis. In contrast, our construction is more straightforward, as we leverage the universal framework described in Section 1.1 to "transfer" the behavior of statistics across different probability measures. In comparison to [19], our approach appears easier to implement, less dependent on problem-specific details, and potentially applicable to a broader range of problems.

1.3 Notations

In this subsection, we record a list of notations that we shall use throughout the paper. Denote by 𝔖_n the set of permutations of [n] and by μ the uniform distribution on 𝔖_n. In addition, denote by ν the uniform distribution on [k]^n. We will use the following notational conventions for graphs.

  • Labeled graphs. Denote by 𝒦_n the complete graph with vertex set [n] and edge set U_n. For any graph H, let V(H) denote the vertex set of H and let E(H) denote the edge set of H. We say H is a subgraph of G, denoted by H ⊂ G, if V(H) ⊂ V(G) and E(H) ⊂ E(G). Define the excess of a graph H by τ(H) = |E(H)| − |V(H)|.

  • Isolated vertices. For u ∈ V(H), we say u is an isolated vertex of H if there is no edge in E(H) incident to u. Denote by ℐ(H) the set of isolated vertices of H. For two graphs H,S, we write H ⋐ S if H ⊂ S and ℐ(S) ⊂ ℐ(H), and we write H ⊑ S if H ⊂ S and ℐ(H) = ∅. For any graph H ⊂ 𝒦_n, let H̃ be the subgraph of H induced by V(H) ∖ ℐ(H).

  • Graph intersections and unions. For H,S ⊂ 𝒦_n, denote by H ∩ S the graph with vertex set V(H) ∩ V(S) and edge set E(H) ∩ E(S), and denote by S ∪ H the graph with vertex set V(H) ∪ V(S) and edge set E(H) ∪ E(S). In addition, denote by S ⋒ H, S ⋓ H and S ∖ H the graphs induced by the edge sets E(S) ∩ E(H), E(S) ∪ E(H) and E(S) ∖ E(H), respectively (in particular, these induced graphs have no isolated vertices).

  • Paths. We say a subgraph H ⊂ 𝒦_n is a path with endpoints u,v (possibly with u = v) if there exist distinct w_1,…,w_m ∉ {u,v} such that V(H) = {u,v,w_1,…,w_m} and E(H) = {(u,w_1),(w_1,w_2),…,(w_m,v)}. We say H is a simple path if its endpoints satisfy u ≠ v. Denote by EndP(P) the set of endpoints of a path P.

  • Cycles and independent cycles. We say a subgraph H is an m-cycle if V(H) = {v_1,…,v_m} and E(H) = {(v_1,v_2),…,(v_{m−1},v_m),(v_m,v_1)}. For a subgraph K ⊂ H, we say K is an independent m-cycle of H if K is an m-cycle and no edge in E(H) ∖ E(K) is incident to V(K). Denote by 𝙲_m(H) the set of m-cycles of H and by 𝒞_m(H) the set of independent m-cycles of H. For H ⊂ S, we define 𝒞_m(S,H) to be the set of independent m-cycles in S whose vertex set is disjoint from V(H). Define 𝒞(S,H) = ∪_{m≥3} 𝒞_m(S,H).

  • Leaves. A vertex u ∈ V(H) is called a leaf of H if the degree of u in H is 1; denote by ℒ(H) the set of leaves of H.

  • Graph isomorphisms and unlabeled graphs. Two graphs H and H′ are isomorphic, denoted by H ≅ H′, if there exists a bijection π : V(H) → V(H′) such that (π(u),π(v)) ∈ E(H′) if and only if (u,v) ∈ E(H). Denote by [H] the isomorphism class of H; it is customary to refer to these isomorphism classes as unlabeled graphs. Let Aut(H) be the number of automorphisms of H (graph isomorphisms from H to itself).

For two real numbers a and b, we let a ∨ b = max{a,b} and a ∧ b = min{a,b}. We use standard asymptotic notation: for two sequences a_n and b_n of positive numbers, we write a_n = O(b_n) if a_n ≤ C b_n for an absolute constant C and for all n (similarly, we use the notation O_h if the constant C is not absolute but depends only on h); we write a_n = Ω(b_n) if b_n = O(a_n); we write a_n = Θ(b_n) if a_n = O(b_n) and a_n = Ω(b_n); and we write a_n = o(b_n) or b_n = ω(a_n) if a_n/b_n → 0 as n → ∞. In addition, we write a_n ∼ b_n if a_n = [1+o(1)]b_n. For a set 𝖠, we will use both #𝖠 and |𝖠| to denote its cardinality. For two probability measures ℙ and ℚ, we denote the total variation distance between them by TV(ℙ,ℚ).

1.4 Organization of this paper

The rest of this paper is organized as follows: in Section 2 we state the precise framework of low-degree polynomials and relate it to the notion of algorithmic contiguity (see Theorem 11). In Section 3 we use this framework to deduce the hardness of partial matching in correlated Erdős-Rényi models and correlated SBMs, thus verifying Items (1) and (2) in Theorem 5 (see Theorem 14). In Section 4 we again use this framework to deduce the hardness of testing correlated SBMs against independent SBMs, thus verifying Item (3) in Theorem 5 (see Corollary 23).

2 Low-degree framework and algorithmic contiguity

The low-degree polynomial framework first emerged from the works of [5, 34, 33, 32] and has since been refined and extended in various directions. It has found applications across a broad spectrum of problems, including detection tasks such as planted clique, planted dense subgraph, community detection, sparse PCA, and tensor PCA (see [34, 33, 32, 39, 54, 12, 3, 20, 48, 15, 38]), optimization problems like finding large independent sets in sparse random graphs [25, 57], and constraint satisfaction problems such as random k-SAT [6]; see also the survey [39]. Additionally, it is conjectured in [32] that the failure of degree-D polynomials implies the failure of all "robust" algorithms with running time n^{Õ(D)} (here Õ means having at most this order up to a polylog n factor). In the remainder of this paper, we will focus on applying this framework in the context of high-dimensional hypothesis testing problems.

To be more precise, consider the hypothesis testing problem between two probability measures ℙ and ℚ based on the sample 𝖸 ∈ ℝ^N. We will be especially interested in asymptotic settings where N = N_n, ℙ = ℙ_n, ℚ = ℚ_n and 𝖸 = 𝖸_n scale with n as n → ∞ in some prescribed way. The standard low-degree polynomial framework primarily focuses on the following notions of strong and weak detection.

Definition 8 (Strong/weak detection).

We say an algorithm 𝒜 that takes 𝖸 as input and outputs either 0 or 1 achieves

  • strong detection, if the sum of the type-I and type-II errors ℚ(𝒜(𝖸)=1) + ℙ(𝒜(𝖸)=0) tends to 0 as n → ∞;

  • weak detection, if the sum of the type-I and type-II errors is uniformly bounded above by 1 − ϵ for some fixed ϵ > 0.

Roughly speaking, the low-degree polynomial approach focuses on understanding the capabilities and limitations of algorithms that can be expressed as low-degree polynomial functions of the input variables (in our case, the entries of 𝖸). To be more precise, let 𝒫_{≤D} = 𝒫_{n,≤D} denote the set of polynomials from ℝ^N to ℝ with degree at most D. With a slight abuse of notation, we will often refer to "a polynomial" to mean a sequence of polynomials f = f_n ∈ 𝒫_{n,≤D}, where each f_n corresponds to problem size n; the degree D = D_n of such a polynomial may scale with n. As suggested by [32], the key quantity is the low-degree advantage

𝖠𝖽𝗏_{≤D}(dℙ/dℚ) := sup_{f∈𝒫_{≤D}} 𝔼_ℙ[f] / √(𝔼_ℚ[f²]). (1)

Note that if we denote the likelihood ratio L(𝖸) = (dℙ/dℚ)(𝖸), it is well known (see, e.g., [32]) that the right-hand side of (1) equals the L²-norm of the projection of L(𝖸) onto the subspace spanned by all polynomials of degree at most D (the norm being induced by the natural inner product under ℚ), which is why we write it in terms of dℙ/dℚ. The low-degree conjecture, proposed in [32], can be summarized as follows.
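As a toy illustration of (1) (ours, not one of the graph problems studied in this paper), consider testing ℙ = 𝒩(μ,1)^⊗N against ℚ = 𝒩(0,1)^⊗N with the degree-1 polynomial f(𝖸) = Σ_i 𝖸_i, for which 𝔼_ℙ[f]/√(𝔼_ℚ[f²]) = √N·μ; a Monte Carlo sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
N, mu, T = 100, 0.3, 200_000
f = lambda Y: Y.sum()   # a degree-1 polynomial of the sample

fP = np.array([f(mu + rng.standard_normal(N)) for _ in range(T)])   # samples under P
fQ = np.array([f(rng.standard_normal(N)) for _ in range(T)])        # samples under Q
print(fP.mean() / np.sqrt((fQ ** 2).mean()))   # Monte Carlo estimate of E_P[f]/sqrt(E_Q[f^2])
print(np.sqrt(N) * mu)                         # exact value: sqrt(N)*mu = 3.0
```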

Conjecture 9 (Low-degree conjecture).

For "natural" high-dimensional hypothesis testing problems between ℙ and ℚ, the following statements hold.

  1. (1)

    If 𝖠𝖽𝗏_{≤D}(dℙ/dℚ) = O(1) as n → ∞, then there exists a constant C such that there is no algorithm with running time n^{D/(log n)^C} that achieves strong detection between ℙ and ℚ.

  2. (2)

    If 𝖠𝖽𝗏_{≤D}(dℙ/dℚ) = 1+o(1) as n → ∞, then there exists a constant C such that there is no algorithm with running time n^{D/(log n)^C} that achieves weak detection between ℙ and ℚ.

Motivated by [32, Hypothesis 2.1.5 and Conjecture 2.2.4], as well as the fact that low-degree polynomials capture the best known algorithms for a wide variety of statistical inference tasks, Conjecture 9 appears to hold for distributions of a specific form that frequently arises in high-dimensional statistics. For further discussion of what types of distributions are suitable for this framework, we refer readers to [32, 39, 37, 60]. In addition, we point out that although in most applications (and in the statement of [32, Hypothesis 2.1.5]) it is typical to take ℚ to be a "null" measure and ℙ to be a "planted" measure (which makes (1) more tractable), several recent works [53, 36] showed that this framework may also be applicable to many "planted-versus-planted" problems. Nevertheless, in this paper we adopt a more conservative view and will explicitly indicate whenever a planted measure plays the role of the null.

The framework in Conjecture 9 provides a useful tool for probing the computational feasibility of strong or weak detection. However, as discussed in Section 1.1, it turns out that ruling out strong detection is not enough in our setting, especially when we hope to perform reductions between statistical models in a regime where weak (but not strong) detection is possible. Thus, in this regime, we aim to establish a stronger framework that rules out all one-sided tests. This motivates us to propose the following notion of algorithmic contiguity.

Definition 10.

For "natural" high-dimensional hypothesis testing problems between ℙ and ℚ, we say an algorithm 𝒜 that takes 𝖸 as input and outputs either 0 or 1 is a ℚ-based one-sided test if

ℚ(𝒜(𝖸)=0) = 1−o(1) and ℙ(𝒜(𝖸)=1) = Ω(1). (2)

We say that ℙ is time-n^D algorithmically contiguous with respect to ℚ, denoted by ℙ ⊴_D ℚ, if there is no ℚ-based one-sided test with running time n^D. We say that ℙ and ℚ are time-n^D algorithmically mutually contiguous, denoted by ℙ ≍_D ℚ, if both ℙ ⊴_D ℚ and ℚ ⊴_D ℙ hold.

Recall that in probability theory we say a sequence of probability measures ℙ = ℙ_n is contiguous with respect to ℚ = ℚ_n if for every sequence of events {A_n} we have that ℚ_n(A_n) → 0 implies ℙ_n(A_n) → 0. Thus, our definition can be regarded as an algorithmic analogue of contiguity. Our main result in this section can be stated as follows.
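For a toy example of a ℚ-based one-sided test in the sense of (2) (our own illustration in a Gaussian location model, where algorithmic contiguity of course fails because even strong detection is easy): under ℚ the sample is 𝒩(0,I_N) and under ℙ it is 𝒩(μ𝟏,I_N).

```python
import numpy as np

rng = np.random.default_rng(3)
N, mu, T = 400, 0.25, 20_000
t = 3.0   # threshold for the standardized mean

def test(Y):                                      # A(Y) in {0,1}
    return int(Y.mean() * np.sqrt(len(Y)) > t)

outQ = [test(rng.standard_normal(N)) for _ in range(T)]
outP = [test(mu + rng.standard_normal(N)) for _ in range(T)]
print(1 - np.mean(outQ), np.mean(outP))
# Q(A=0) ~ 0.999 = 1-o(1), while sqrt(N)*mu = 5 gives P(A=1) ~ 0.98 = Omega(1),
# so this test witnesses (2): here P is NOT algorithmically contiguous w.r.t. Q.
```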

Theorem 11.

Assuming Conjecture 9, for the high-dimensional hypothesis testing problem between ℙ and ℚ, if 𝖠𝖽𝗏_{≤D}(dℙ′/dℚ′) = O(1) for some ℙ′, ℚ′ such that TV(ℙ,ℙ′) = o(1) and TV(ℚ,ℚ′) = o(1), then we have ℙ ⊴_{D/(log n)^C} ℚ for some constant C.

In the remaining part of this work we will also need a strengthening of Conjecture 9, as introduced in Section 1.1. As discussed at the beginning of this section, the low-degree conjecture asserts that (for certain testing problems) low-degree polynomials are at least as powerful as all algorithms of the corresponding runtime (where the correspondence is described in Conjecture 9). For our purposes, we introduce a natural refinement of the low-degree conjecture, which posits that low-degree polynomials perform at least as well as all algorithms of the corresponding runtime in terms of the value of the ratio on the right-hand side of (1).

Conjecture 12 (Revised low-degree conjecture).

For "natural" high-dimensional hypothesis testing problems between ℙ and ℚ, the following statements hold:

  1. (1)

    If 𝖠𝖽𝗏_{≤D}(dℙ′/dℚ′) = O(1) as n → ∞ for some ℙ′, ℚ′ such that TV(ℙ,ℙ′), TV(ℚ,ℚ′) = o(1), then there is a constant C such that there is no algorithm with running time n^{D/(log n)^C} that achieves strong detection between ℙ and ℚ.

  2. (2)

    If there is no algorithm with running time n^D that achieves strong detection between ℙ and ℚ, then for every statistic f = f(𝖸) that can be computed in running time n^D, there exist some ℙ′, ℚ′ (we allow ℙ′, ℚ′ to depend on f) with TV(ℙ,ℙ′), TV(ℚ,ℚ′) = o(1) such that 𝖠𝖽𝗏(f) = O(1), where

    𝖠𝖽𝗏(f) = 𝔼_{ℙ′}[f] / √(𝔼_{ℚ′}[f²]).
  3. (3)

    If 𝖠𝖽𝗏_{≤D}(dℙ′/dℚ′) = 1+o(1) as n → ∞ for some ℙ′, ℚ′ such that TV(ℙ,ℙ′), TV(ℚ,ℚ′) = o(1), then there is a constant C such that there is no algorithm with running time n^{D/(log n)^C} that achieves weak detection between ℙ and ℚ.

  4. (4)

    If there is no algorithm with running time n^D that achieves weak detection between ℙ and ℚ, then for every statistic f = f(𝖸) that can be computed in running time n^D, there exist some ℙ′, ℚ′ (we allow ℙ′, ℚ′ to depend on f) with TV(ℙ,ℙ′), TV(ℚ,ℚ′) = o(1) such that 𝖠𝖽𝗏(f) = 1+o(1).

Note that in Conjecture 12 we are allowed to replace ℙ, ℚ with some ℙ′, ℚ′ that are statistically indistinguishable from ℙ, ℚ. This modification enables us to avoid situations where the low-degree advantage explodes due to certain "rare events" (see [2, 15, 12] for example). In fact, as pointed out by Alexander S. Wein (who learned the following example from Ansh Nagda) in personal communication with the author, such truncation appears necessary for the conjecture to hold. Consider the sparse-PCA problem with observation Y = λvv^⊤/‖v‖₂² + W, where W is an n×n GOE matrix and v ∈ ℝ^n has coordinates drawn i.i.d. from the sparse Rademacher prior with parameter ρ = o(1). In this case, when 0 < λ < 1 we have 𝔼_ℙ[f]/√(𝔼_ℚ[f²]) = O(1) for all low-degree polynomials f, but 𝔼_ℙ[f]/√(𝔼_ℚ[f²]) = ω(1) if f is a thresholding function of v₀^⊤Yv₀ for a suitable fixed sparse {0,1}-vector v₀. However, our truncated conjecture still holds, as we can take ℙ′ to be the conditional measure of ℙ given that v₀^⊤Yv₀ is small. This revised low-degree conjecture was first proposed in [47] (in a slightly different manner) for a different purpose: there the authors aimed to study the precise error rates achievable by all computationally feasible algorithms. However, we will show in the next two sections that this conjecture is also useful for performing reductions between different inference tasks.

3 Partial recovery in correlated random graphs

In this section, we use the framework established in Section 2 to show the hardness of partial matching in correlated random graphs, thus justifying Items (1) and (2) in Theorem 5. To this end, we first state the precise meaning of an algorithm achieving partial matching.

Definition 13 (Partial recovery algorithm in correlated Erdős-Rényi model).

Given a sample (A,B) from the law of a pair of correlated random graphs as in Definition 1 (we denote this law by ℙ), we say an algorithm 𝒜 achieves strong partial matching if it takes (A,B) as input and outputs a family of estimators {h_{i,j} : 1 ≤ i,j ≤ n} such that

  1. (1)

    h_{i,j} ∈ {0,1} for all 1 ≤ i,j ≤ n, a.s. under ℙ;

  2. (2)

    h_{i,1} + ⋯ + h_{i,n} = 1 for all 1 ≤ i ≤ n, a.s. under ℙ;

  3. (3)

    There exists a fixed constant ι>0 such that

    ℙ(h_{1,π(1)} + ⋯ + h_{n,π(n)} ≥ ιn) = 1−o(1).

We say an algorithm 𝒜 achieves weak partial matching if it takes (A,B) as input and outputs a family of estimators {h_{i,j} : 1 ≤ i,j ≤ n} satisfying Items (1) and (2) above and

ℙ(h_{1,π(1)} + ⋯ + h_{n,π(n)} ≥ ιn) = Ω(1).

We point out that in the above definition h_{i,j} can be thought of as an estimator of 𝟏{π(i)=j}: Item (3) states that the algorithm correctly matches a positive fraction of the π(i) with probability 1−o(1) (or with positive probability in the weak case), and Items (1) and (2) are regularity requirements. Our result in this section can be stated as follows:
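The interface of Definition 13 can be visualized with a small sketch (a toy "algorithm" of ours that simply knows π on 10% of the coordinates; the point is only Items (1)-(3), not how the estimators are produced):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
pi = rng.permutation(n)                       # the latent matching

guess = np.arange(n)                          # identity guess everywhere ...
known = rng.choice(n, size=n // 10, replace=False)
guess[known] = pi[known]                      # ... except on 10% "recovered" coordinates
h = np.zeros((n, n), dtype=int)
h[np.arange(n), guess] = 1                    # rows are one-hot: Items (1) and (2)

overlap = h[np.arange(n), pi].sum()           # sum_i h_{i, pi(i)}
print(overlap / n)                            # >= iota ~ 0.1, cf. Item (3)
```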

Theorem 14.

Assuming Conjecture 12, we have the following:

  1. (1)

    For the correlated Erdős-Rényi graphs 𝒢(n,q;ρ), where q = n^{−1+o(1)} and ρ < √α − δ for a fixed constant δ > 0, there exists a constant C such that no algorithm with running time n^{D/(log n)^C} achieves weak partial matching, provided that

    D = exp(o((log n / log(nq)) ∧ log n)). (3)
  2. (2)

    For the correlated SBMs 𝒮(n,λ/n;k,ϵ;s), where λ = O(1) and ϵ²λs < 1−δ, s < √α − δ for a fixed constant δ > 0, there exists a constant C such that no algorithm with running time n^{D/(log n)^C} achieves strong partial matching, provided that

    D = n^{o(1)}. (4)

The main step of proving Theorem 14 is to show the following proposition:

Proposition 15.

Assuming Conjecture 12, we have the following:

  1. (1)

    Let (G,A,B) ∼ 𝒢(n,q;ρ) and let ℙ be the joint law of (π,G,A,B), where π is the latent matching. Suppose that q, ρ, D satisfy the assumptions in Item (1) of Theorem 14. Then there exists a constant C such that for all {f_{i,j} : 1 ≤ i,j ≤ n} that can be computed in time n^{D/(log n)^C}, we have 𝔼_ℙ[f_{i,π(i)}] = o(1) for all 1 ≤ i ≤ n.

  2. (2)

    Let (G,A,B) ∼ 𝒮(n,λ/n;k,ϵ;s) and let ℙ be the joint law of (π,G,A,B), where π is the latent matching. Suppose that λ, k, ϵ, s satisfy the assumptions in Item (2) of Theorem 14. Then there exist a constant C and an event ℰ with ℙ(ℰ) = Ω(1) such that for all {f_{i,j} : 1 ≤ i,j ≤ n} that can be computed in time n^{D/(log n)^C} we have 𝔼_ℙ[f_{i,π(i)} | ℰ] = o(1) for all 1 ≤ i ≤ n.

Clearly, based on Proposition 15, we can deduce Theorem 14 via a simple Markov inequality. The rest of this section is devoted to the proof of Proposition 15. In the following subsections, our main focus is on proving Item (1) of Proposition 15. Given the similarity between the two proofs, for Item (2) we only outline the main differences, adapting the arguments used for Item (1) without presenting full details.
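To spell out the Markov step (a sketch: it only uses h_{i,j} ≥ 0 and the bound from Proposition 15 applied with f_{i,j} = h_{i,j}):

\[
\mathbb{P}\Big(\sum_{i=1}^{n} h_{i,\pi(i)} \ge \iota n\Big) \le \frac{1}{\iota n}\sum_{i=1}^{n}\mathbb{E}_{\mathbb{P}}\big[h_{i,\pi(i)}\big] = \frac{n\cdot o(1)}{\iota n} = o(1),
\]

which contradicts the Ω(1) (respectively 1−o(1)) lower bound in Definition 13; in the SBM case the same computation is carried out on the event ℰ.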

3.1 Proof of Item (1) in Proposition 15

This subsection is devoted to the proof of Item (1) in Proposition 15. Throughout this subsection, we denote by ℙ the law of (π,G,A,B), where (G,A,B) ∼ 𝒢(n,q;ρ); with a slight abuse of notation, we also write ℙ for the marginal law of (A,B). In addition, we denote by ℚ the law of a pair of independent Erdős-Rényi graphs 𝒢(n,q). We assume throughout this subsection that there exists a small constant 0 < δ < 0.01 such that

ρ² < α − δ, q = n^{−1+o(1)}, log D = o((log n / log(nq)) ∧ log n). (5)

We first introduce some notation used in [15].

Definition 16.

Given a graph H=H(V,E), define

Φ(H) = (n^{1+4/D} D^{−20})^{|V(H)|} (q D^6)^{|E(H)|}, (6)

and the graph H is said to be bad if Φ(H) < (log n)^{−1}. Furthermore, we say a graph is admissible if it contains no bad subgraph, and we say it is inadmissible otherwise.

Denote by ℰ the event that G does not contain any bad subgraph with at most D² vertices. In addition, let ℙ̄ be the conditional version of ℙ given ℰ; with a slight abuse of notation, we also write ℙ̄ for the corresponding marginal distribution of (A,B) under ℙ̄.

We remark here that our definition of "bad" amounts to an atypically large edge density, with a carefully chosen quantitative threshold for "large". Roughly speaking, we expect that any subgraph with at most D² = n^{o(1)} vertices of a sparse Erdős-Rényi graph has edge-to-vertex ratio at most 1+o(1). In the definition of Φ, the term n^{1+4/D} D^{−20} should be thought of as n^{1+o(1)}, and q D^6 as n^{−1+o(1)}. The o(1) terms are tuned carefully so that for a typical subgraph H of a sparse Erdős-Rényi graph, Φ(H) is much bigger than 1. The choice of (log n)^{−1} as the Φ-threshold for bad graphs is somewhat arbitrary; we only need it to vanish as n → ∞. In [15], the authors showed that, on the one hand, ℙ(ℰ) = 1−o(1) and thus TV(ℙ,ℙ̄) = o(1) (for both the joint and the marginal laws); on the other hand, 𝖠𝖽𝗏_{≤D}(dℙ̄/dℚ) = O_δ(1), thus verifying the low-degree hardness of the detection problem. The first step of our proof is to generalize the result of [15], as incorporated in the following lemma.
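A quick numeric sanity check of this heuristic (our own sketch, using Φ as reconstructed in (6); the asymptotics only kick in for enormous n, so we work with logarithms and a cartoonishly large n):

```python
import math

def log_phi(vH, eH, log_n, log_q, D):
    """log Phi(H) for a graph H with vH vertices and eH edges:
    per-vertex factor n^{1+4/D} D^{-20}, per-edge factor q D^6."""
    log_vertex = (1 + 4 / D) * log_n - 20 * math.log(D)
    log_edge = log_q + 6 * math.log(D)
    return vH * log_vertex + eH * log_edge

log_n = 120 * math.log(10)        # n = 10^120, so that the o(1) corrections are small
log_q = -0.9 * log_n              # q = n^{-0.9} = n^{-1+o(1)}
D = 40
print(log_phi(5, 4, log_n, log_q, D) > 0)   # True: a tree (|E| = |V| - 1) has Phi >> 1
print(log_phi(4, 6, log_n, log_q, D) < 0)   # True: K_4 (excess 2) has Phi << 1, i.e. bad
```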

Lemma 17.

For all 1 ≤ i,j ≤ n, we have 𝖠𝖽𝗏_{≤D}(dℙ̄(·|π(i)=j)/dℚ) = O_δ(1).

Now, based on Lemma 17, we establish the following result, which basically suggests that it is impossible to obtain a “good approximation” of 𝟏{π(i)=j} in some sense.

Lemma 18.

Assuming Conjecture 12, there exists a constant C such that for all 1 ≤ i ≤ n and all statistics {g_{i,j} = g_{i,j}(A,B) : 1 ≤ j ≤ n} that can be computed in time n^{D/(log n)^C}, it holds that

Σ_{j=1}^n 𝔼_ℙ[(𝟏{π(i)=j} − g_{i,j})²] ≥ 1−o(1). (7)

Now we can finish the proof of Item (1) in Proposition 15.

Proof of Item (1) in Proposition 15.

We choose C as in the statement of Lemma 18. Suppose on the contrary that there are statistics {f_j : 1 ≤ j ≤ n} computable in time n^{D/(log n)^C} with 𝔼_ℙ[f_{π(i)}] ≥ c for some fixed constant 0 < c < 0.01. Using Items (1) and (2) in Definition 13, we see that f_j f_k = 0 for all k ≠ j, and thus

1 = 𝔼_ℙ[(f_1 + ⋯ + f_n)²] = Σ_{j=1}^n 𝔼_ℙ[f_j²]. (8)

In addition, we have

𝔼_ℙ[f_{π(i)}] = (1/n) Σ_{j=1}^n 𝔼_ℙ[f_j | π(i)=j] ≥ c. (9)

Thus, for all λ ∈ [0,1] we have

Σ_{j=1}^n 𝔼_ℙ[(𝟏{π(i)=j} − (1−λ)/n − λf_j)²] ≤ 1 + λ² − 2cλ + O(1/n), where the inequality uses (8) and (9).

Thus, by choosing λ=λ(c) to be a sufficiently small positive constant we get that

Σ_{j=1}^n 𝔼_ℙ[(𝟏{π(i)=j} − g_j)²] = 1 − Ω(1), where g_j = (1−λ)/n + λf_j,

contradicting Lemma 18. This yields the desired result.
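For the reader's convenience, the expansion behind the displayed bound (using Σ_j 𝔼_ℙ[𝟏{π(i)=j}] = 1, Σ_j 𝔼_ℙ[f_j] = 1, (8), and Σ_j 𝔼_ℙ[𝟏{π(i)=j}f_j] = 𝔼_ℙ[f_{π(i)}] ≥ c from (9)):

\begin{align*}
\sum_{j=1}^n \mathbb{E}_{\mathbb{P}}\Big[\Big(\mathbf{1}\{\pi(i)=j\} - \tfrac{1-\lambda}{n} - \lambda f_j\Big)^2\Big] &= 1 + \lambda^2\sum_{j=1}^n\mathbb{E}_{\mathbb{P}}[f_j^2] - 2\lambda\,\mathbb{E}_{\mathbb{P}}[f_{\pi(i)}] + O(1/n) \\ &\le 1 + \lambda^2 - 2c\lambda + O(1/n),
\end{align*}

where the O(1/n) collects all cross terms involving the (1−λ)/n shift. Note that the right-hand side is minimized at λ = c, giving 1 − c² + O(1/n), so any fixed 0 < λ < 2c yields the claimed 1 − Ω(1).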

3.2 Proof of Item (2) in Proposition 15

This subsection is devoted to the proof of Item (2) in Proposition 15. Recall Definitions 3 and 4. Throughout this subsection, we denote by ℙ the joint law of (π,σ,G,A,B), where (G,A,B) ∼ 𝒮(n,λ/n;k,ϵ;s); with a slight abuse of notation, we also write ℙ for the marginal law of (A,B). In addition, we denote by ℚ the law of a pair of independent Erdős-Rényi graphs 𝒢(n,λs/n). Also, we assume throughout this subsection that there exists a small constant 0 < δ < 0.01 such that

s < √α − δ, ϵ²λs < 1 − δ. (10)

We start again by recalling some notation introduced in [9]. We choose a sufficiently large constant N = N(k,λ,δ,ϵ,s) ≥ 2/δ such that

(√α − δ)(1 + ϵ^N k) ≤ √α − δ/2; 10k(1−δ)^N ≤ (1−δ/2)^N; (11)
(√α − δ/2)(1 + (1−δ/2)^N)² ≤ √α − δ/4; (1−δ/2)^N (N+1) ≤ 1.

We first show how to construct the event in Item (2) in Proposition 15.

Definition 19.

Denote λ̃ = λ ∨ 1. Given a graph H = H(V,E), define

Υ(H) = (2λ̃²k²n / D^{50})^{|V(H)|} (1000λ̃^{20}k^{20}D^{50} / n)^{|E(H)|}. (12)

Then we say the graph H is bad if Υ(H) < (log n)^{−1}, and we say H is self-bad if H is bad and Υ(H) < Υ(K) for all K ⊊ H. Furthermore, we say that a graph H is admissible if it contains no bad subgraph and 𝙲_j(H) = ∅ for all j ≤ N; we say H is inadmissible otherwise. Denote ℰ = ℰ₁ ∩ ℰ₂, where ℰ₁ is the event that G does not contain any bad subgraph with at most D³ vertices, and ℰ₂ is the event that G does not contain any cycle of length at most N.

Definition 20.

List all self-bad subgraphs of 𝒦_n with at most D³ vertices and all cycles of 𝒦_n with length at most N in an arbitrary but prefixed order (B_1, …, B_𝙼). Define a stochastic block model with "bad graphs" removed as follows: (1) sample G ∼ 𝒮(n,λ/n;k,ϵ); (2) for each 1 ≤ 𝚒 ≤ 𝙼 such that B_𝚒 ⊂ G, independently and uniformly remove one edge of B_𝚒. The unremoved edges of G constitute a graph G′, which is the output of our modified stochastic block model. Clearly, from this definition G′ contains neither a cycle of length at most N nor a bad subgraph with at most D³ vertices. Conditioned on G′ and π, we define

A_{i,j} = G′_{i,j}J_{i,j},  B_{i,j} = G′_{π^{−1}(i),π^{−1}(j)}K_{i,j},

where J and K are independent Bernoulli variables with parameter s. Let ℙ̃ = ℙ̃_n be the law of (σ,π,G,G′,A,B); with a slight abuse of notation, we also write ℙ̃ for the marginal law of (A,B).

It was shown in [9, Lemmas 4.2 and 4.4] that

ℙ(ℰ) = Ω(1) and TV(ℙ̃, ℙ(·|ℰ)) = o(1).

As in Section 3.1, our first step is to show the following lemma.

Lemma 21.

We have 𝖠𝖽𝗏_{≤D}(dℙ̃(·|π(i)=j)/dℚ) = O_{δ,k}(1).

Based on Lemma 21, we can deduce our main result just as we deduced Theorem 14 from Lemma 17. The only difference is that we replace all ℙ̄ with ℙ̃ and all ℙ with ℙ(·|ℰ), so we omit further details here.

4 Detection in correlated SBMs

In this section, we use the framework established in Section 2 to study stochastic block models below the KS threshold. The main result of this section is the following.

Theorem 22.

For any constant K, denote by ℙ the law of K independent stochastic block models 𝒮(n,λ/n;k,ϵ) and by ℚ the law of K independent Erdős-Rényi graphs 𝒢(n,λ/n). Then, assuming Conjecture 9, for any δ > 0 there exists a sufficiently large constant λ₀ = λ₀(δ,k) such that when ϵ²λ < 1−δ and λ > λ₀, we have ℙ ≍_D ℚ for any D = n^{o(1)}.

Our result has an immediate corollary for the detection problem between a pair of correlated SBMs and a pair of independent SBMs, as incorporated in the following corollary.

Corollary 23.

Assuming Conjecture 9, when ϵ²λs < 1−δ, s < √α − δ and λs > λ₀(δ,k), there is no algorithm with polynomial running time that can strongly distinguish 𝒮(n,λ/n;k,ϵ;s) from a pair of independent 𝒮(n,λs/n;k,ϵ).

The rest of this section is devoted to the proof of Theorem 22. For notational simplicity, in the following we only prove the case K = 1; the proof for general K is similar. Note that ϵ²λ < 1−δ and λ > λ₀ imply that ϵ < ϵ₀ = λ₀^{−1/2}. We choose λ₀ = λ₀(δ,k) to be a sufficiently large constant such that

(1−δ)^{−1} ≥ (1/k)((k−1)/(1−ϵ) + 1/(1+ϵ(k−1))) · (1−δ/2)^{−1} for all ϵ < ϵ₀ = λ₀^{−1/2}. (13)

In the rest of this section we will always assume that

D = n^{o(1)}, λ > λ₀ and ϵ²λ < 1−δ for some constant 0 < δ < 0.01. (14)

Clearly, using Theorem 11, it suffices to show that under (14), with λ₀ chosen as in (13), we have

𝖠𝖽𝗏_{≤D}(dℙ/dℚ) = O_{δ,k}(1) and 𝖠𝖽𝗏_{≤D}(dℚ/dℙ) = O_{δ,k}(1).

Indeed, it has been shown in [34] that 𝖠𝖽𝗏_{≤D}(dℙ/dℚ) = O_{δ,k}(1) under (14). It remains to show that under (14) we have (note that now the planted measure ℙ plays the role of the null in (1))

𝖠𝖽𝗏_{≤D}(dℚ/dℙ) = sup_{f∈𝒫_{≤D}} 𝔼_ℚ[f] / √(𝔼_ℙ[f²]) = O_{δ,k}(1). (15)

We point out that our approach to proving (15) is based on the work [56]. To this end, define ω(σ_i,σ_j) = k−1 if σ_i = σ_j and ω(σ_i,σ_j) = −1 otherwise. In addition, for all S ⊂ 𝒦_n define

ϕ_S({G_{i,j}}) = ∏_{(i,j)∈E(S)} (G_{i,j} − λ/n) / √((λ/n)(1 − λ/n)). (16)

It is well known (see [34]) that {ϕ_S : S ⊂ 𝒦_n, |E(S)| ≤ D} constitutes a standard orthonormal basis of 𝒫_{≤D} under ℚ. Thus, each f ∈ 𝒫_{≤D} can be written as

f(G) = Σ_{S⊂𝒦_n: |E(S)|≤D} f̂_S ϕ_S(G), (17)

which means that f is uniquely characterized by a vector f̂ indexed by {S ⊂ 𝒦_n : |E(S)| ≤ D}. In addition, a direct calculation yields that 𝔼_ℚ[ϕ_S(G)] = 𝟏{S=∅}. Thus, we have 𝔼_ℚ[f] = f̂_∅ = ⟨f̂,c⟩, where c is the vector indexed by {S ⊂ 𝒦_n : |E(S)| ≤ D} with c_S = 𝟏{S=∅}. We now turn to 𝔼_ℙ[f²]. For any σ ∈ [k]^n and S ⊂ 𝒦_n, define (writing σ* for the latent labeling under ℙ)

ψ_{σ,S}({G_{i,j}}) = k^{n/2} 𝟏{σ*=σ} ∏_{(i,j)∈E(S)} (G_{i,j} − (1+ϵω(σ_i,σ_j))λ/n) / √((1+ϵω(σ_i,σ_j))(λ/n)(1 − (1+ϵω(σ_i,σ_j))λ/n)). (18)

We can check that {ψ_{σ,S} : σ ∈ [k]^n, S ⊂ 𝒦_n} is orthonormal under ℙ, i.e., we have

𝔼_ℙ[ψ_{σ,S} ψ_{σ′,S′}] = 𝟏{σ=σ′, S=S′}. (19)
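A Monte Carlo sanity check (ours, not from the paper) that the ϕ_S of (16) are indeed orthonormal under ℚ = 𝒢(n,λ/n), i.e., 𝔼_ℚ[ϕ_S ϕ_{S′}] = 𝟏{S=S′}:

```python
import numpy as np

rng = np.random.default_rng(5)
n, lam, T = 6, 3.0, 200_000
p = lam / n

def phi(S, G):
    out = 1.0
    for (i, j) in S:   # product of standardized edge indicators, cf. (16)
        out *= (G[i, j] - p) / np.sqrt(p * (1 - p))
    return out

S1 = [(0, 1), (0, 2)]
S2 = [(0, 1), (1, 2)]
m11 = m12 = 0.0
for _ in range(T):
    G = (rng.random((n, n)) < p).astype(float)   # only entries with i < j are read
    m11 += phi(S1, G) ** 2
    m12 += phi(S1, G) * phi(S2, G)
print(m11 / T, m12 / T)   # ~ 1.0 and ~ 0.0
```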
Lemma 24.

We have

𝔼_ℙ[ϕ_S(G) ψ_{σ,H}(G)] = 𝟏{H⊂S} k^{−n/2} ∏_{(i,j)∈E(H)} 𝚑(σ_i,σ_j) ∏_{(i,j)∈E(S)∖E(H)} ω(σ_i,σ_j) √(ϵ²λ/n), (20)

where

𝚑(σ_i,σ_j) = √((1+ϵω(σ_i,σ_j))(1 − (1+ϵω(σ_i,σ_j))λ/n) / (1 − λ/n)). (21)

Based on Lemma 24, we define a matrix M with rows indexed by {S : S ⊂ 𝒦_n, |E(S)| ≤ D} and columns indexed by {(σ,H) : σ ∈ [k]^n, H ⊂ 𝒦_n, |E(H)| ≤ D}, and entries given by

M_{S;(σ,H)} = 𝟏{H⊂S} k^{−n/2} ∏_{(i,j)∈E(H)} 𝚑(σ_i,σ_j) ∏_{(i,j)∈E(S)∖E(H)} ω(σ_i,σ_j) √(ϵ²λ/n). (22)

From Parseval's inequality, we see that

𝔼_ℙ[f²] ≥ Σ_{σ∈[k]^n, H⊂𝒦_n: |E(H)|≤D} 𝔼_ℙ[fψ_{σ,H}]² = Σ_{σ∈[k]^n, H⊂𝒦_n: |E(H)|≤D} (Σ_{S⊂𝒦_n: |E(S)|≤D} f̂_S 𝔼_ℙ[ϕ_S ψ_{σ,H}])² (by (17))
= Σ_{σ∈[k]^n, H⊂𝒦_n: |E(H)|≤D} (Σ_{S⊂𝒦_n: |E(S)|≤D} f̂_S M_{S;(σ,H)})² = ‖f̂M‖². (23)

Thus, we have

𝖠𝖽𝗏_{≤D}(dℚ/dℙ) = sup_{f∈𝒫_{≤D}} 𝔼_ℚ[f]/√(𝔼_ℙ[f²]) ≤ sup_{f̂} ⟨f̂,c⟩/‖f̂M‖ ≤ inf_{u: Mu=c} ‖u‖, (24)

where the last inequality follows from the fact that for any u₀ with Mu₀ = c we have

⟨f̂,c⟩ = ⟨f̂,Mu₀⟩ = ⟨f̂M,u₀⟩ ≤ ‖u₀‖·‖f̂M‖.

Regarding (24), it suffices to show that there exists u with Mu = c and ‖u‖ = O_{δ,k}(1). We now construct the solution {u_{σ,H} : σ ∈ [k]^n, H ⊂ 𝒦_n, |E(H)| ≤ D} as follows: let u_{σ,H} = k^{−n/2} Ξ(H), where

Ξ(∅) = 1 and Ξ(S) = 0 whenever ℐ(S) ≠ ∅, (25)

and then iteratively define, for all S ≠ ∅ with ℐ(S) = ∅,

Ξ(S) = −(𝔼_{σ∼ν}[∏_{(i,j)∈E(S)} 𝚑(σ_i,σ_j)])^{−1} Σ_{H⊊S: ℐ(H)=∅} (ϵ²λ/n)^{(|E(S)|−|E(H)|)/2} Ξ(H)
× 𝔼_{σ∼ν}[∏_{(i,j)∈E(H)} 𝚑(σ_i,σ_j) ∏_{(i,j)∈E(S)∖E(H)} ω(σ_i,σ_j)]. (26)
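To see why (25) and (26) formally give Mu = c (under the formulas as reconstructed above): since u_{σ,H} = k^{−n/2}Ξ(H) does not depend on σ, the k^{−n/2} factors in M and u combine with the sum over σ ∈ [k]^n into an expectation over σ ∼ ν, so that

\[
(Mu)_S = \sum_{H\subset S,\ \mathcal{I}(H)=\emptyset}\Big(\frac{\epsilon^2\lambda}{n}\Big)^{\frac{|E(S)|-|E(H)|}{2}}\,\Xi(H)\; \mathbb{E}_{\sigma\sim\nu}\Big[\prod_{(i,j)\in E(H)}\mathtt{h}(\sigma_i,\sigma_j)\prod_{(i,j)\in E(S)\setminus E(H)}\omega(\sigma_i,\sigma_j)\Big].
\]

For S = ∅ only H = ∅ contributes, so (Mu)_∅ = Ξ(∅) = 1 = c_∅; for S ≠ ∅, (26) is precisely the statement that the H = S term cancels the sum over H ⊊ S, so (Mu)_S = 0 = c_S.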

To prove Theorem 22, it suffices to show the following lemma.

Lemma 25.

The vector {u_{σ,H} : σ ∈ [k]^n, H ⊂ 𝒦_n, |E(H)| ≤ D} satisfies Mu = c and ‖u‖ = O_{δ,k}(1).

References

  • [1] László Babai. Graph isomorphism in quasipolynomial time. In Proceedings of the 48th Annual ACM Symposium on Theory of Computing (STOC), pages 684–697. ACM, 2016.
  • [2] Afonso S. Bandeira, Ahmed El Alaoui, Samuel B. Hopkins, Tselil Schramm, Alexander S. Wein, and Ilias Zadik. The Franz-Parisi criterion and computational trade-offs in high dimensional statistics. In Advances in Neural Information Processing Systems (NIPS), volume 35, pages 33831–33844. Curran Associates, Inc., 2022.
  • [3] Afonso S. Bandeira, Dmitriy Kunisky, and Alexander S. Wein. Computational hardness of certifying bounds on constrained PCA problems. In Proceedings of the 11th Innovations in Theoretical Computer Science Conference (ITCS). Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2020. doi:10.4230/LIPIcs.ITCS.2020.78.
  • [4] Boaz Barak, Chi-Ning Chou, Zhixian Lei, Tselil Schramm, and Yueqi Sheng. (Nearly) efficient algorithms for the graph matching problem on correlated random graphs. In Advances in Neural Information Processing Systems (NIPS), volume 32, pages 9190–9198. Curran Associates, Inc., 2019.
  • [5] Boaz Barak, Samuel B. Hopkins, Jonathan Kelner, Pravesh K. Kothari, Ankur Moitra, and Aaron Potechin. A nearly tight sum-of-squares lower bound for the planted clique problem. SIAM Journal on Computing, 48(2):687–735, 2019. doi:10.1137/17M1138236.
  • [6] Guy Bresler and Brice Huang. The algorithmic phase transition of random k-SAT for low degree polynomials. In Proceedings of the IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS), pages 298–309. IEEE, 2022. doi:10.1109/FOCS52979.2021.00038.
  • [7] Rainer E. Burkard, Eranda Cela, Panos M. Pardalos, and Leonidas S. Pitsoulis. The quadratic assignment problem. In Handbook of Combinatorial Optimization, pages 1713–1809, 1998.
  • [8] Shuwen Chai and Miklós Z. Rácz. Efficient graph matching for correlated stochastic block models. In Advances in Neural Information Processing Systems (NIPS), volume 38, pages 116388–116461. Curran Associates, Inc., 2024.
  • [9] Guanyi Chen, Jian Ding, Shuyang Gong, and Zhangsong Li. A computational transition for detecting correlated stochastic block models by low-degree polynomials. arXiv preprint, 2024. arXiv:2409.00966.
  • [10] Daniel Cullina and Negar Kiyavash. Exact alignment recovery for correlated Erdős-Rényi graphs. arXiv preprint, 2017. arXiv:1711.06783.
  • [11] Osman E. Dai, Daniel Cullina, Negar Kiyavash, and Matthias Grossglauser. Analysis of a canonical labeling algorithm for the alignment of correlated Erdős-Rényi graphs. Proceedings of the ACM on Measurement and Analysis of Computing Systems, 3(2):1–25, 2019. doi:10.1145/3341617.3326151.
  • [12] Abhishek Dhawan, Cheng Mao, and Alexander S. Wein. Detection of dense subhypergraphs by low-degree polynomials. arXiv preprint, 2024. to appear in Random Structures and Algorithms. arXiv:2304.08135.
  • [13] Jian Ding and Hang Du. Detection threshold for correlated Erdős-Rényi graphs via densest subgraph. IEEE Transactions on Information Theory, 69(8):5289–5298, 2023. doi:10.1109/TIT.2023.3265009.
  • [14] Jian Ding and Hang Du. Matching recovery threshold for correlated random graphs. Annals of Statistics, 51(4):1718–1743, 2023.
  • [15] Jian Ding, Hang Du, and Zhangsong Li. Low-degree hardness of detection for correlated Erdős-Rényi graphs. arXiv preprint, 2023. to appear in Annals of Statistics. arXiv:2311.15931.
  • [16] Jian Ding and Zhangsong Li. A polynomial time iterative algorithm for matching Gaussian matrices with non-vanishing correlation. arXiv preprint, 2023. to appear in Foundations of Computational Mathematics. arXiv:2212.13677.
  • [17] Jian Ding and Zhangsong Li. A polynomial-time iterative algorithm for random graph matching with non-vanishing correlation. arXiv preprint, 2023. arXiv:2306.00266.
  • [18] Jian Ding, Zongming Ma, Yihong Wu, and Jiaming Xu. Efficient random graph matching via degree profiles. Probability Theory and Related Fields, 179(1-2):29–115, 2021.
  • [19] Jingqiu Ding, Yiding Hua, Lucas Slot, and David Steurer. Low degree conjecture implies sharp computational thresholds in stochastic block model. arXiv preprint, 2025. arXiv:2502.15024.
  • [20] Yunzi Ding, Dmitriy Kunisky, Alexander S. Wein, and Afonso S. Bandeira. Subexponential-time algorithms for sparse PCA. Foundations of Computational Mathematics, 22(1):1–50, 2022.
  • [21] Hang Du. Optimal recovery of correlated Erdős-Rényi graphs. arXiv preprint, 2025. arXiv:2502.12077.
  • [22] Zhou Fan, Cheng Mao, Yihong Wu, and Jiaming Xu. Spectral graph matching and regularized quadratic relaxations I: Algorithm and Gaussian analysis. Foundations of Computational Mathematics, 23(5):1511–1565, 2023. doi:10.1007/S10208-022-09570-Y.
  • [23] Zhou Fan, Cheng Mao, Yihong Wu, and Jiaming Xu. Spectral graph matching and regularized quadratic relaxations II: Erdős-Rényi graphs and universality. Foundations of Computational Mathematics, 23(5):1567–1617, 2023.
  • [24] David Gamarnik. The overlap gap property: A topological barrier to optimizing over random structures. Proceedings of the National Academy of Sciences of the United States of America, 118(41):e2108492118, 2021.
  • [25] David Gamarnik, Aukosh Jagannath, and Alexander S. Wein. Low-degree hardness of random optimization problems. In Proceedings of the IEEE 61st Annual Symposium on Foundations of Computer Science (FOCS), pages 131–140. IEEE, 2020. doi:10.1109/FOCS46700.2020.00021.
  • [26] Luca Ganassali and Laurent Massoulié. From tree matching to sparse graph alignment. In Proceedings of the 33rd Conference on Learning Theory (COLT), pages 1633–1665. PMLR, 2020. URL: http://proceedings.mlr.press/v125/ganassali20a.html.
  • [27] Luca Ganassali, Laurent Massoulié, and Marc Lelarge. Impossibility of partial recovery in the graph alignment problem. In Proceedings of the 34th Conference on Learning Theory (COLT), pages 2080–2102. PMLR, 2021. URL: http://proceedings.mlr.press/v134/ganassali21a.html.
  • [28] Luca Ganassali, Laurent Massoulié, and Marc Lelarge. Correlation detection in trees for planted graph alignment. Annals of Applied Probability, 34(3):2799–2843, 2024.
  • [29] Luca Ganassali, Laurent Massoulié, and Guilhem Semerjian. Statistical limits of correlation detection in trees. Annals of Applied Probability, 34(4):3701–3734, 2024.
  • [30] Julia Gaudio, Miklós Z. Rácz, and Anirudh Sridhar. Exact community recovery in correlated stochastic block models. In Proceedings of the 35th Conference on Learning Theory (COLT), pages 2183–2241. PMLR, 2022. URL: https://proceedings.mlr.press/v178/gaudio22a.html.
  • [31] Georgina Hall and Laurent Massoulié. Partial recovery in the graph alignment problem. Operations Research, 71(1):259–272, 2023. doi:10.1287/OPRE.2022.2355.
  • [32] Samuel B. Hopkins. Statistical inference and the sum of squares method. Cornell University, 2018. PHD thesis.
  • [33] Samuel B. Hopkins, Pravesh K. Kothari, Aaron Potechin, Prasad Raghavendra, Tselil Schramm, and David Steurer. The power of sum-of-squares for detecting hidden structures. In Proceedings of the IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), pages 720–731. IEEE, 2017. doi:10.1109/FOCS.2017.72.
  • [34] Samuel B. Hopkins and David Steurer. Efficient Bayesian estimation from few samples: community detection and related problems. In Proceedings of the IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), pages 379–390. IEEE, 2017. doi:10.1109/FOCS.2017.42.
  • [35] Harry Kesten and Bernt P. Stigum. Additional limit theorems for indecomposable multidimensional Galton-Watson processes. Annals of Mathematical Statistics, 37:1463–1481, 1966.
  • [36] Pravesh K. Kothari, Santosh S. Vempala, Alexander S. Wein, and Jeff Xu. Is planted coloring easier than planted clique? In Proceedings of the 36th Conference on Learning Theory (COLT), pages 5343–5372. PMLR, 2023. URL: https://proceedings.mlr.press/v195/kothari23a.html.
  • [37] Dmitriy Kunisky. Hypothesis testing with low-degree polynomials in the Morris class of exponential families. In Proceedings of the 34th Conference on Learning Theory (COLT), pages 2822–2848. PMLR, 2021. URL: http://proceedings.mlr.press/v134/kunisky21a.html.
  • [38] Dmitriy Kunisky, Cristopher Moore, and Alexander S. Wein. Tensor cumulants for statistical inference on invariant distributions. In Proceedings of the IEEE 65th Annual Symposium on Foundations of Computer Science (FOCS), pages 1007–1026. IEEE, 2024. doi:10.1109/FOCS61266.2024.00067.
  • [39] Dmitriy Kunisky, Alexander S. Wein, and Afonso S. Bandeira. Notes on computational hardness of hypothesis testing: Predictions using the low-degree likelihood ratio. In Mathematical Analysis, its Applications and Computation: ISAAC, pages 1–50. Springer, 2022.
  • [40] Zhangsong Li. Robust random graph matching in dense graphs via vector approximate message passing. arXiv preprint, 2024. arXiv:2412.16457.
  • [41] Zhangsong Li. Algorithmic contiguity from low-degree conjecture and applications in correlated random graphs. arXiv preprint, 2025. arXiv:2502.09832.
  • [42] Konstantin Makarychev, Rajsekar Manokaran, and Maxim Sviridenko. Maximum quadratic assignment problem: Reduction from maximum label cover and lp-based approximation algorithm. In Proceedings of the International Colloquium on Automata, Languages, and Programming (ICALP), pages 594–604. Springer, 2010. doi:10.1007/978-3-642-14165-2_50.
  • [43] Cheng Mao, Mark Rudelson, and Konstantin Tikhomirov. Random graph matching with improved noise robustness. In Proceedings of the 34th Conference on Learning Theory (COLT), pages 3296–3329. PMLR, 2021. URL: http://proceedings.mlr.press/v134/mao21a.html.
  • [44] Cheng Mao, Mark Rudelson, and Konstantin Tikhomirov. Exact matching of random graphs with constant correlation. Probability Theory and Related Fields, 186(2):327–389, 2023.
  • [45] Cheng Mao, Yihong Wu, Jiaming Xu, and Sophie H. Yu. Random graph matching at Otter’s threshold via counting chandeliers. In Proceedings of the 55th Annual ACM Symposium on Theory of Computing (STOC), pages 1345–1356. ACM, 2023. doi:10.1145/3564246.3585156.
  • [46] Cheng Mao, Yihong Wu, Jiaming Xu, and Sophie H. Yu. Testing network correlation efficiently via counting trees. Annals of Statistics, 52(6):2483–2505, 2024.
  • [47] Ankur Moitra and Alexander S. Wein. Precise error rates for computationally efficient testing. Annals of Statistics, 53(2):854–878, 2023.
  • [48] Andrea Montanari and Alexander S. Wein. Equivalence of approximate message passing and low-degree polynomials in rank-one matrix estimation. Probability Theory and Related Fields, 191(1-2):181–233, 2025.
  • [49] Arvind Narayanan and Vitaly Shmatikov. Robust de-anonymization of large sparse datasets. In IEEE Symposium on Security and Privacy, pages 111–125. IEEE, 2008. doi:10.1109/SP.2008.33.
  • [50] Richard Otter. The number of trees. Annals of Mathematics, 49(3):583–599, 1948.
  • [51] Miklós Z. Rácz and Anirudh Sridhar. Correlated stochastic block models: exact graph matching with applications to recovering communities. In Advances in Neural Information Processing Systems (NIPS), volume 34, pages 22259–22273. Curran Associates, Inc., 2021. URL: https://proceedings.neurips.cc/paper/2021/hash/baf4f1a5938b8d520b328c13b51ccf11-Abstract.html.
  • [52] Prasad Raghavendra, Tselil Schramm, and David Steurer. High-dimensional estimation via sum-of-squares proofs. In Proceedings of the International Congress of Mathematicians (ICM 2018), pages 3389–3423, 2019.
  • [53] Cynthia Rush, Fiona Skerman, Alexander S. Wein, and Dana Yang. Is it easier to count communities than find them? In Proceedings of the 14th Innovations in Theoretical Computer Science Conference (ITCS). Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2023. doi:10.4230/LIPIcs.ITCS.2023.94.
  • [54] Tselil Schramm and Alexander S. Wein. Computational barriers to estimation from low-degree polynomials. Annals of Statistics, 50(3):1833–1858, 2022.
  • [55] Rohit Singh, Jinbo Xu, and Bonnie Berger. Global alignment of multiple protein interaction networks with application to functional orthology detection. Proceedings of the National Academy of Sciences of the United States of America, 105(35):12763–12768, 2008. doi:10.1073/PNAS.0806627105.
  • [56] Youngtak Sohn and Alexander S. Wein. Sharp phase transitions in estimation with low-degree polynomials. In Proceedings of the 57th Annual ACM Symposium on Theory of Computing (STOC), pages 891–902. ACM, 2025. doi:10.1145/3717823.3718294.
  • [57] Alexander S. Wein. Optimal low-degree hardness of maximum independent set. Mathematical Statistics and Learning, 4(3/4):221–251, 2022.
  • [58] Yihong Wu, Jiaming Xu, and Sophie H. Yu. Settling the sharp reconstruction thresholds of random graph matching. IEEE Transactions on Information Theory, 68(8):5391–5417, 2022. doi:10.1109/TIT.2022.3169005.
  • [59] Yihong Wu, Jiaming Xu, and Sophie H. Yu. Testing correlation of unlabeled random graphs. Annals of Applied Probability, 33(4):2519–2558, 2023.
  • [60] Ilias Zadik, Min Jae Song, Alexander S. Wein, and Joan Bruna. Lattice-based methods surpass sum-of-squares in clustering. In Proceedings of the 35th Conference on Learning Theory (COLT), pages 1247–1248. PMLR, 2022. URL: https://proceedings.mlr.press/v178/zadik22a.html.
  • [61] Lenka Zdeborová and Florent Krzakala. Statistical physics of inference: thresholds and algorithms. Advances in Physics, 65(5):453–552, 2016.