
Mixing Time of Quantum Gibbs Sampling for Random Sparse Hamiltonians

Akshar Ramkumar, Institute for Quantum Information and Matter, California Institute of Technology, Pasadena, CA, USA
Mehdi Soleimanifar, Institute for Quantum Information and Matter, California Institute of Technology, Pasadena, CA, USA
Abstract

Providing evidence that quantum computers can efficiently prepare low-energy or thermal states of physically relevant interacting quantum systems is a major challenge in quantum information science. A newly developed quantum Gibbs sampling algorithm [11] provides an efficient simulation of the detailed-balanced dissipative dynamics of non-commutative quantum systems. The running time of this algorithm depends on the mixing time of the corresponding quantum Markov chain, which has not been rigorously bounded except in the high-temperature regime. In this work, we establish a polylog(n) upper bound on its mixing time for various families of random n×n sparse Hamiltonians at any constant temperature. We further analyze how the choice of the jump operators for the algorithm and the spectral properties of these sparse Hamiltonians influence the mixing time. Our result places this method for Gibbs sampling on par with other efficient algorithms for preparing low-energy states of quantumly easy Hamiltonians.

Keywords and phrases:
Quantum algorithms, quantum Gibbs sampling, mixing time analysis
Funding:
Akshar Ramkumar: AR acknowledges the Caltech SURF program and Zhuang Tang and Gebing Yi for their generous support.
Mehdi Soleimanifar: MS is supported by AWS Quantum Postdoctoral Scholarship and the National Science Foundation NSF CAREER award CCF-2048204.
Copyright and License:
© Akshar Ramkumar and Mehdi Soleimanifar; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Theory of computation → Quantum information theory; Theory of computation → Design and analysis of algorithms
Related Version:
Extended Version: https://arxiv.org/abs/2411.04454 [32]
Supplementary Material:
Software (Source Code): https://github.com/countableclouds/lindbladian_gap
  archived at Software Heritage: swh:1:dir:9db70c0155cd182d780b7562eca062cc64a051ad
Acknowledgements:
We thank Jiaoyang Huang, Luca Nashabeh, John Preskill, and Yongtao Zhan for valuable discussions. We are grateful to Chi-Fang Chen for insightful discussions and proposing this project in its early stages. We also thank Hongrui Chen and the authors of [12] for informing us about the overlap between our results and those in Section IV of their work. Institute for Quantum Information and Matter is an NSF Physics Frontiers Center.
Editor:
Bill Fefferman

1 Introduction

One of the main anticipated applications of quantum computers is the simulation and characterization of quantum systems in condensed matter physics [40], quantum chemistry [29], and high-energy physics [30, 4]. The problem of simulating the dynamics (time evolution) of an interacting quantum system under a local or sparse Hamiltonian 𝑯 has largely been addressed, with efficient algorithms [22, 27, 5, 28, 20] that scale well with the number of particles, simulation time, and required precision. However, the ability of quantum computers to evaluate the static features of quantum systems, such as their ground state or thermal properties, is less understood.

In this work, we focus on preparing the Gibbs (thermal) state 𝝆β = e^{−β𝑯}/Tr(e^{−β𝑯}) of a quantum system, which represents the equilibrium state when the system is in contact with a thermal bath at a fixed temperature β⁻¹. This computational problem, known as Gibbs sampling or “cooling,” is valuable not only for simulating thermodynamic properties but also as a subroutine in quantum algorithms for optimization and learning [7, 2, 6]. Preparing the Gibbs state, however, poses challenges for quantum computers. In general, estimating the low-temperature properties of quantum systems is not believed to be efficiently solvable by a quantum computer in the worst case [24]. Fortunately, it has been hypothesized that this worst-case hardness, implied by arguments from complexity theory, is due to pathological Hamiltonians that do not arise in the physical systems normally occurring in nature. This hypothesis is substantiated by the empirical success of natural cooling, such as refrigeration, in reaching thermal equilibrium.
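For small systems, the Gibbs state defined above can be computed directly by exponentiating the Hamiltonian. A minimal numpy/scipy sketch; the 2×2 toy Hamiltonian is our own illustrative choice, not an example from this work:

```python
import numpy as np
from scipy.linalg import expm

def gibbs_state(H, beta):
    """Gibbs state rho_beta = exp(-beta H) / Tr(exp(-beta H))."""
    M = expm(-beta * H)
    return M / np.trace(M)

# Toy example: a single two-level system with energy splitting 1.
H = np.diag([0.0, 1.0])
rho = gibbs_state(H, beta=2.0)

# For a diagonal H, the populations are proportional to exp(-beta * E_i).
p0 = 1.0 / (1.0 + np.exp(-2.0))
assert np.allclose(np.diag(rho), [p0, 1 - p0])
assert np.isclose(np.trace(rho), 1.0)
```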

Quantum Gibbs sampling.

Aiming to mimic nature’s cooling processes, a series of recent works have introduced quantum Markov Chain Monte Carlo (MCMC) algorithms, or quantum Gibbs samplers [11, 10, 36, 42, 31, 23, 43, 16, 19], as promising alternatives for tackling a range of classically intractable low-temperature simulation tasks on quantum computers. These algorithms are designed to replicate the success of classical Markov chains in preparing Gibbs states for classical Hamiltonians. The analysis of classical MCMC algorithms relies on the principle of detailed balance; however, achieving this in the quantum setting has been challenging and was only recently addressed by an algorithm in [11]. Part of the difficulty arises from a conflict between the finite energy resolution σE achievable by efficient quantum algorithms and the seemingly strict requirement to precisely distinguish energy levels to satisfy detailed balance. In this work, we focus primarily on this algorithm, referring to it as the CKG algorithm or the quantum Gibbs sampler when the context is clear. We give a detailed review of this algorithm in Section 4.1.3 and Appendix 4.2.1.

The Gibbs sampling algorithm provides a fully general method for preparing Gibbs states by evolving an initial state 𝝆0 under a Lindbladian ℒβ, which is efficiently implementable on a quantum computer and produces the state 𝝆t = e^{t ℒβ}[𝝆0] after time t. The runtime of the quantum Gibbs sampler is governed by the mixing time of the corresponding quantum Markov chain, which is roughly the time required for 𝝆t to approach the Gibbs state 𝝆β. This in turn is bounded in terms of the spectral gap λgap(ℒβ) of the Lindbladian by

tmix(β) ≤ 𝒪((β‖𝑯‖ + log(n)) / λgap(ℒβ)).

The spectral gap is defined here to be λmin, the smallest magnitude among the eigenvalues of ℒβ over eigenvectors other than the fixed point 𝝆β. Bounding the spectral gap therefore proves not only that ℒβ has a unique fixed point, but also quantifies the rate of convergence. The mixing time varies with the quantum system in question. Bounding it is challenging without access to fault-tolerant quantum computers, as we cannot run and benchmark the algorithm directly, making theoretical analysis essential. However, such analysis is hindered by a lack of technical tools, for two key reasons. First, the theory of convergence of quantum Markov chains is young, unlike the very mature twin field for classical Markov chains. Second, the Markov chain described by the algorithm is considerably complex and depends on several parameters that we will discuss in more detail shortly: an energy resolution σE, a series of jump operators 𝑨a for a ∈ [M], and the inverse temperature β. This space of possibilities makes the algorithm's performance more difficult to characterize.
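As a toy illustration of the quantity being bounded, the sketch below computes the spectral gap of a generator as the smallest magnitude among its nonzero eigenvalues. A classical 3-state rate matrix stands in for the Lindbladian here (our own illustrative choice); the CKG Lindbladian itself is a much larger superoperator:

```python
import numpy as np

def spectral_gap(L, tol=1e-9):
    """Smallest nonzero eigenvalue magnitude of a generator L.
    (Here L is a classical rate matrix, a stand-in for the Lindbladian.)"""
    ev = np.linalg.eigvals(L)
    return min(abs(e.real) for e in ev if abs(e) > tol)

# Classical 3-state generator with uniform stationary distribution.
Q = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -2.0,  1.0],
              [ 1.0,  1.0, -2.0]])
# Q = J - 3I with J the all-ones matrix, so its eigenvalues are {0, -3, -3}.
gap = spectral_gap(Q)
assert np.isclose(gap, 3.0)
```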

This motivates the identification of quantum systems whose mixing times are tractable for analysis yet exhibit rich features that provide insights into the performance of the quantum Gibbs sampler for more general non-commuting Hamiltonians. In line with this, the mixing time of the CKG algorithm has recently been bounded for local Hamiltonians, showing a polynomial scaling with system size at high enough temperatures [33].

Mixing time of sparse Hamiltonians.

In this work, we consider an alternative approach by characterizing the mixing time of a family of sparse Hamiltonians of the form

𝑯 = ∑i,j∈[n] Hij |ei⟩⟨ej|. (1)

Such an operator can be understood as the Hamiltonian on a graph G=(V,E) with n=|V| vertices indexed by basis states |ei⟩, i ∈ [n], and a set of edges E connecting vertices with Hij ≠ 0. When the non-zero entries Hij are all equal to 1, the Hamiltonian 𝑯 corresponds to the n×n adjacency matrix of the n-vertex graph. We define the degree d of the graph G as the sparsity of the underlying Hamiltonian and refer to Hamiltonians with constant or slowly increasing degrees d = polylog(n) as sparse. Note that any log(n)-qubit Hamiltonian that consists of m = polylog(n) terms, each acting locally on κ = O(1) qubits, is a sparse Hamiltonian with degree d ≤ m·2^κ = polylog(n). However, not all sparse Hamiltonians admit local qubit encodings.
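For concreteness, the sketch below builds a Hamiltonian of this form for the cycle graph (an illustrative choice) and checks its well-known spectrum:

```python
import numpy as np

def cycle_hamiltonian(n):
    """Adjacency matrix of the n-vertex cycle graph as a sparse Hamiltonian:
    H_ij = 1 iff vertices i and j are neighbors on the cycle."""
    H = np.zeros((n, n))
    for i in range(n):
        H[i, (i + 1) % n] = 1.0
        H[(i + 1) % n, i] = 1.0
    return H

n = 8
H = cycle_hamiltonian(n)
assert np.allclose(H, H.T)                       # Hermitian
assert all(H[i].sum() == 2 for i in range(n))    # degree d = 2

# Known spectrum of the cycle: 2*cos(2*pi*k/n), k = 0, ..., n-1.
evals = np.sort(np.linalg.eigvalsh(H))
expected = np.sort(2 * np.cos(2 * np.pi * np.arange(n) / n))
assert np.allclose(evals, expected)
```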

Having defined sparse Hamiltonians, we now consider the dissipative dynamics of the system induced by a set of M jump operators 𝑨a = ∑i,j∈[n] A^a_ij |ei⟩⟨ej|, a ∈ [M]. We will soon explain how the jump operators 𝑨a relate to the Lindbladian ℒβ. Briefly, the resulting dynamics can be understood as a combination of two processes: a continuous-time quantum walk of a single particle on the graph of states due to the coherent evolution of the Hamiltonian 𝑯, combined with stochastic jumps on the graph determined by the jump operators 𝑨a.

Our interest in bounding the mixing time of the sparse Hamiltonians is multifaceted:

  1. Single-particle dynamics. As stated earlier, bounding the mixing time of general interacting multipartite Hamiltonians is a challenging task. However, for simple choices of graphs G, the mixing time of the quantum Gibbs sampler may be easier to analyze, potentially leading to relevant techniques for tackling the case of interacting particles. In fact, we can think of the dynamics induced by the Hamiltonian 𝑯 in (1) as the dynamics of a single particle hopping on the graph G. This single-particle evolution on path graphs or grids is commonly analyzed in the tight-binding model in condensed matter physics. That being said, even in the simplified case of a single particle, the Hamiltonian 𝑯 is non-commuting, characterizing a continuous-time quantum walk that can yield exponential quantum advantage for certain oracular problems on graphs such as the glued trees [13].

  2. Chaotic Hamiltonians. Our additional motivation for studying random sparse Hamiltonians stems from the fact that their spectra exhibit many of the same characteristics as chaotic Hamiltonians, such as the SYK model [34, 25, 26] and random p-spin models [37, 41]. Understanding whether chaotic Hamiltonians have a fast mixing time as they approach their thermal and low-energy states is a fundamental question in the study of quantum chaos [8, 1]. As a concrete step toward addressing this problem, we identify key spectral properties of random sparse Hamiltonians that can ensure a fast mixing time.

  3. Algorithmic applications. Preparing quantum Gibbs states, and more broadly computing the matrix exponential of sparse matrices such as the adjacency or Laplacian of a graph, is a fundamental subroutine in solving various graph and optimization problems. For instance, the Estrada index – defined as the trace of the matrix exponential of a graph’s adjacency matrix – measures subgraph centrality and provides structural insights [18]. Computing the matrix exponential is also related to matrix inversion and linear system solvers [35]. Moreover, quantum Gibbs sampling has been applied to solving semidefinite programs (SDPs) in optimization problems [21, 7, 6, 3], offering quantum speedups for these problems.

2 Our main results

Motivated by these considerations, we investigate the mixing time of quantum Gibbs samplers for sparse Hamiltonians and different choices of jump operators. Our study addresses two key questions regarding the performance of the quantum Gibbs sampler for sparse Hamiltonians. First, we ask

What choices of jump operators lead to a fast mixing time?

After exploring the effects of different jump operators 𝑨a, we then focus on the spectral properties of sparse Hamiltonians to understand:

What spectral property of the Hamiltonian determines its mixing time?

Answering these questions allows us to provide broad and intuitive insights on how the quantum Gibbs sampler operates for general families of sparse Hamiltonians.

2.1 Choice of jumps: graph-local vs unitary design

A natural set of jump operators for a given n×n Hamiltonian on a graph G are 𝑨a = (1/√n)|ea⟩⟨ea| or similar operators supported on a few neighboring vertices of G. Importantly, these are not “local” in the sense of multi-particle Hamiltonians, which refers to being composed of terms that act on a small number of qubits – often also geometrically close to one another. Utilizing graph-local jump operators also significantly simplifies the structure of the Lindbladian and the analysis of mixing times for certain graph families.

Moving beyond graph-local jumps, the Lindbladian ℒβ of the quantum Gibbs sampler can still be efficiently implemented on a quantum computer with a much broader class of jumps. This is possible as long as each jump 𝑨a is efficiently implementable, the set of jumps includes both 𝑨a and its adjoint 𝑨a†, and ∑a∈[M] 𝑨a†𝑨a = 𝟙 (due to this normalization condition, we will sometimes speak of the jumping distribution 𝒜, from which the jump operators 𝑨a are sampled with probability proportional to Tr(𝑨a†𝑨a)). This raises the question of whether there is an advantage in using non-local jumps that have a bounded spectral norm, or if more structured local jumps are sufficient to achieve a fast-mixing quantum MCMC. After all, classical continuous-time random walks are typically considered with local jumps on the graph vertices. However, in the context of graphs, we will see that the structured nature of graph-local jumps offers no advantage, but rather seems to cause a slowdown of the resulting algorithm.
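The normalization condition above is easy to verify numerically. The sketch below checks ∑a 𝑨a†𝑨a = 𝟙 for an illustrative adjoint-closed set of M unitaries, each scaled by 1/√M (the QR-based unitary sampling is our own choice, not part of the algorithm):

```python
import numpy as np

def is_normalized(jumps, tol=1e-9):
    """Check the normalization condition sum_a A_a^dag A_a = identity."""
    n = jumps[0].shape[0]
    S = sum(A.conj().T @ A for A in jumps)
    return np.allclose(S, np.eye(n), atol=tol)

# Example: M scaled unitaries, included together with their adjoints.
rng = np.random.default_rng(0)
n, M = 4, 6
jumps = []
for _ in range(M // 2):
    # Random unitary via QR decomposition of a complex Gaussian matrix.
    Q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    jumps += [Q / np.sqrt(M), Q.conj().T / np.sqrt(M)]

# Each scaled unitary contributes I/M, so the M of them sum to the identity.
assert is_normalized(jumps)
```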

Graph-local jumps.

To this end, in the next theorem, we establish tight bounds on the spectral gap of the Lindbladian for cyclic graphs with graph-local jumps 𝑨a, following an approach similar to the one used in [38] to bound the spectral gap of a Davies generator.

Theorem 2.1 (Spectral gap of cyclic graphs with local jumps).

Fix a constant temperature β⁻¹. There exists some constant energy resolution σE for which the spectral gap of the CKG Lindbladian ℒβ for a cyclic graph with n vertices, with jump operators 𝑨a = (1/√n)|ea⟩⟨ea|, is asymptotically Θ(n⁻³).

In addition to theoretical analysis, we also generated data for cyclic graphs, path graphs, and random d-regular graphs with n vertices, as shown in Figure 1. These numerical results suggest spectral gaps of o(n⁻¹) for generic sparse graphs with graph-local jumps. We observed that increasing the constant d does improve the spectral gap decay, though it never improved past the asymptotic decay O(n⁻¹).

These results are all suboptimal: for an n×n Hamiltonian 𝑯, an efficient result should be polynomial in the number of qubits, i.e. polylog(n) rather than poly(n). The poor performance can be attributed to two factors. (1) The operators 𝑨a†𝑨a have L1 norm 1/n. (2) In the energy basis, many entries of 𝑨a are highly correlated.

The first drawback effectively scales the Lindbladian down by 1/n, since the L1 norm of 𝑨†𝑨 can be as high as 1 when the operator norm ‖𝑨†𝑨‖ = 1/n. However, the chosen jump operators are scaled projectors, so their L1 and operator norms are equal. In both Theorem 2.1 and in the data, a spectral gap even worse than 1/n is observed. This is due to the second drawback: the aforementioned correlations lead to off-diagonal terms in the Lindbladian, which in general have the potential to dampen the spectral gap, and in the case of the cyclic graph provably do so. More generally, the biases of an ensemble of local jumps can introduce off-diagonal terms into the Lindbladian that decrease the spectral gap. The same harmful correlations appear in higher-degree graphs as well, though to a lesser extent, as evidenced by the improved spectral gap.

Figure 1: Linear (above) and log-log (below) plots of the CKG Lindbladian spectral gap with respect to system size. Gaps of ten random 4-regular graphs were averaged for each data point. For the cyclic graphs (one-dimensional lattices), the proven asymptotic decay aligns closely with the data.
Unitary design jumps.

To address some of these shortcomings, we next consider non-local jump operators, each independently drawn (along with its adjoint pair) according to a unitary 1-design 𝒟(U(n)) on n vertices. More precisely, we define

Definition 2.2.

A set of jump operators {𝑨a : a ∈ [M]} is drawn from a 1-design jumping distribution if it is obtained by sampling M/2 jump operators i.i.d. from a unitary 1-design 𝒟(U(n)), normalizing each by 1/√M, and including these operators along with their adjoints.

We include the adjoint of each randomly chosen jump since the CKG Lindbladian requires the set of jump operators to be closed under adjoints, {𝑨a : a ∈ [M]} = {𝑨a† : a ∈ [M]}. When n is a power of 2, the unitary 1-design can be constructed as a tensor product of random Pauli operators on log2(n) qubits, in which case the jumps are self-adjoint and can be sampled and implemented efficiently. The efficiency of our results on a general system relies on the ability to efficiently implement some unitary 1-design.
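A minimal sketch of this Pauli-string construction, checking self-adjointness and the normalization condition (the qubit count and M are illustrative choices):

```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = [I2, X, Y, Z]

def random_pauli_jump(num_qubits, rng):
    """Tensor product of uniformly random single-qubit Paulis: an element
    of a unitary 1-design that is also self-adjoint."""
    return reduce(np.kron, [PAULIS[rng.integers(4)] for _ in range(num_qubits)])

rng = np.random.default_rng(1)
num_qubits, M = 3, 8                 # n = 2^3 = 8
jumps = [random_pauli_jump(num_qubits, rng) / np.sqrt(M) for _ in range(M)]

# Self-adjoint jumps, so the set is closed under adjoints automatically.
assert all(np.allclose(A, A.conj().T) for A in jumps)
# Normalization: sum_a A_a^dag A_a = 1.
S = sum(A.conj().T @ A for A in jumps)
assert np.allclose(S, np.eye(2**num_qubits))
```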

As we will see, in our application, this 1-design sampling is effectively equivalent to sampling from a Haar-random distribution. This approach improves on the results for graph-local jumps and achieves an algorithm for Gibbs sampling that is efficient in the number of qubits of a graph (running time polylog(n)). This improved performance is in part because all the eigenvalues of a Haar random unitary have magnitude 1. Hence, it avoids the problem of 𝑨†𝑨 having a relatively small L1 norm given the constraint on its operator norm ‖𝑨†𝑨‖. These jumps also avoid the second problem encountered for the graph-local jumps: since the number of degrees of freedom of randomness is very large over the ensemble, any form of bias is mitigated. Indeed, the resulting Lindbladian over the full ensemble has no off-diagonal terms resulting from correlated elements of the jump operator.

Our results extend beyond cyclic graphs to any graph of bounded degree d = O(1), where ‖𝑯‖ ≤ d, at constant temperature, or more generally whenever β‖𝑯‖ = O(1). We refer to these sparse Hamiltonians as bounded degree and formally define them as:

Definition 2.3.

A bounded degree system is a sequence of temperatures β(n)⁻¹ and Hamiltonians 𝑯(n) for which β(n)‖𝑯(n)‖ is bounded from above by a constant independent of system size.

Theorem 2.4 (Constant spectral gap of Lindbladian in bounded degree systems).

Let β(n)⁻¹ be a sequence of temperatures and 𝑯(n) a sequence of n×n Hamiltonians such that β(n)‖𝑯(n)‖ = O(1).

With any constant probability 1−ξ, the spectral gap of a Lindbladian ℒβ with σE = β⁻¹ and M jump operators sampled from a 1-design jumping distribution, for some M = Θ(log(n)), is bounded below by a constant, i.e. λgap = Ω(1).

Assume access to an efficient block-encoding of 𝑯(n). Then, as a consequence and in the same setup, the Gibbs state of 𝑯 can be prepared with error ϵ in trace distance in time poly(log(n), log(ϵ⁻¹)).

While the examples of bounded degree Hamiltonians we consider are mostly graphs, the above theorem applies to preparing the Gibbs state of any Hamiltonian at a temperature β⁻¹ such that ‖𝑯‖ = O(β⁻¹).

2.2 Mixing time from spectral profile

Theorem 2.4 demonstrates that bounded-degree Hamiltonians with non-local jumps exhibit fast mixing times. However, it leaves open the case of Hamiltonians with unbounded degrees, such as those with d = polylog(n). More formally, we define

Definition 2.5.

An unbounded degree system is a sequence of temperatures β(n)⁻¹ and Hamiltonians 𝑯(n) for which lim_{n→∞} β(n)‖𝑯(n)‖ = ∞. However, we still assume that β‖𝑯‖ = polylog(n), polynomial in the number of qubits.

In our next result, we show that again selecting the jumping distribution (including adjoints) to be M samples from a unitary 1-design for sufficiently large M, and choosing the energy resolution σE = β⁻¹, yields an algorithm whose efficiency depends on the Hamiltonian's low-energy spectrum. In particular, the runtime scales inverse polynomially with the fraction δ(n) of eigenvalues of 𝑯 that lie within O(β⁻¹) of the minimum eigenvalue.

Theorem 2.6 (Spectral gap of Lindbladian in unbounded degree systems).

Let β(n)⁻¹ be a sequence of temperatures and 𝑯(n) a sequence of n×n Hamiltonians, and let δ(n) be the fraction of eigenvalues of 𝑯(n) within O(β⁻¹) of λmin.

With any constant probability 1−ξ, the spectral gap of a Lindbladian ℒβ with σE = β⁻¹ and M jump operators sampled from a 1-design jumping distribution, for some M = Θ(δ(n)⁻² log(n) log(β‖𝑯‖)), is lower bounded by Ω(δ(n)).

Assume access to an efficient block-encoding of 𝑯(n). As a consequence, and in the same setup, the Gibbs state of 𝑯 can be prepared with error ϵ in trace distance in time poly(δ(n)⁻¹, log(n), log(ϵ⁻¹)).

Note that if β‖𝑯‖ is bounded, then δ(n) = 1. This result therefore generalizes Theorem 2.4.

2.3 Explicit examples of random sparse Hamiltonians

Having established a sufficient spectral condition for the fast mixing of random ensembles of sparse Hamiltonians, we now give explicit examples that satisfy this criterion. We also give one example, the hypercube, which does not, and for which local jumps in place of unitary design jumps achieve an exponential speedup. This example elucidates the potential of structured local jumps for speedups, in contrast to the case of the cyclic graph in which structured graph-local jumps yielded a slowdown.

Random regular graphs.

The first example is when 𝑯 is the adjacency matrix of a randomly selected log(n)-regular graph, with polylog(n) random 1-design jumps. In Section 4.4.1, we prove using Theorem 2.6 that this ensemble has a Lindbladian spectral gap of Ω(log(n)⁻³ᐟ⁴) at constant temperature. This yields a polynomial algorithm to prepare the Gibbs state, given access to an efficient block-encoding of 𝑯.

Random signed Pauli ensemble.

The second example is the family of sparse Hamiltonians considered in [9], composed of random Pauli strings with random sign coefficients, given by 𝑯PS = ∑_{j=1}^{m} (rj/√m) 𝝈j, where each 𝝈j is a random Pauli string on n₀ qubits (so that the size of the Hamiltonian is n×n for n = 2^{n₀}), each rj is sampled uniformly from {−1, 1}, and m = O(n₀⁵ϵ⁻⁴) for a parameter 1 ≥ ϵ ≥ 2^{−o(n₀)}. We show in Section 4.4.2 that the CKG Lindbladian has a spectral gap of Ω(ϵ³ᐟ²) when we choose M = Õ(n₀²ϵ⁻³) unitary 1-design jumps, inverse temperature β = O(ϵ⁻¹), and σE = β⁻¹.
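The ensemble 𝑯PS is straightforward to sample for small n₀. A sketch with illustrative parameters (not those of the theorem):

```python
import numpy as np
from functools import reduce

PAULIS = [np.eye(2, dtype=complex),
          np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def random_signed_pauli_hamiltonian(n0, m, rng):
    """H_PS = sum_j (r_j / sqrt(m)) * sigma_j, with sigma_j a uniformly
    random Pauli string on n0 qubits and r_j uniform in {-1, +1}."""
    n = 2 ** n0
    H = np.zeros((n, n), dtype=complex)
    for _ in range(m):
        sigma = reduce(np.kron, [PAULIS[rng.integers(4)] for _ in range(n0)])
        r = rng.choice([-1.0, 1.0])
        H += (r / np.sqrt(m)) * sigma
    return H

rng = np.random.default_rng(2)
H = random_signed_pauli_hamiltonian(n0=3, m=20, rng=rng)
assert H.shape == (8, 8)
# Pauli strings are self-adjoint and the coefficients are real, so H is Hermitian.
assert np.allclose(H, H.conj().T)
```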

Hypercubes.

The final example is the family of hypercubes. A hypercube with 2^d vertices and degree d can be interpreted as a Hamiltonian ∑i 𝑿i on d qubits. At constant temperature, only an exponentially small fraction of eigenvalues lie near the minimum eigenvalue. As a result, the spectral profile implies a poor mixing time with unitary design jumps.

However, we show in Theorem 4.5 that by choosing local jumps (1/√d)𝒁i, the spectral gap is Ω(1/d), yielding an efficient algorithm for Gibbs sampling. Hypercubes therefore provide an example where not graph-local jumps, but rather local jumps on the qubits, ensure fast mixing. The crucial feature of local jumps that improves mixing time is that a local jump 𝑨a on a local Hamiltonian 𝑯 satisfies ‖[𝑨a, 𝑯]‖ = O(1) – i.e., the jump operator only jumps between nearby eigenstates. Graph-local jumps generally lack this property, which is why they displayed no improvement in the studied cases. In general, local jumps are the strongest candidates for fast mixing on local Hamiltonians, though for which classes of local Hamiltonians fast mixing can be achieved is still largely open. For most interesting classes of local Hamiltonians, the condition of Theorem 2.6 is unlikely to hold.
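Both claims about the hypercube are easy to check numerically for small d: the spectrum d − 2k with binomial multiplicities, so that the minimum −d is reached by only one of the 2^d eigenvalues, and the O(1) commutator of a single-qubit jump with 𝑯 (the 1/√d normalization only shrinks it further). A sketch with d = 5, our own illustrative choice:

```python
import numpy as np
from math import comb
from functools import reduce

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
I = np.eye(2)

def embed(op, i, d):
    """op acting on qubit i of a d-qubit system."""
    return reduce(np.kron, [op if j == i else I for j in range(d)])

d = 5
H = sum(embed(X, i, d) for i in range(d))   # hypercube Hamiltonian sum_i X_i

# Spectrum: eigenvalues d - 2k with multiplicity C(d, k); the minimum -d is unique.
evals = np.sort(np.linalg.eigvalsh(H))
expected = np.sort(np.concatenate(
    [[d - 2 * k] * comb(d, k) for k in range(d + 1)]).astype(float))
assert np.allclose(evals, expected)
assert np.sum(np.isclose(evals, -d)) == 1

# Local jump almost commutes with H: [Z_0, sum_j X_j] = [Z_0, X_0] = 2i Y_0,
# so ||[A, H]|| = 2 = O(1), independent of d.
A = embed(Z, 0, d)
C = A @ H - H @ A
assert np.isclose(np.linalg.norm(C, 2), 2.0)
```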

3 Proof sketch

3.1 Graph-local jumps

The proof of the mixing time for graph-local jumps in the cyclic graph involves two steps: First, in Appendix A, we derive a general expression for the CKG Lindbladian in the energy basis given by equation (5).

We then utilize this expression along with the fully known spectrum of the cyclic graph to show that the Lindbladian is block-diagonal in the energy basis of this graph, as demonstrated in Appendix B. One of these blocks corresponds to a classical Markov chain on the diagonal entries of the state in the energy basis, for which we establish a spectral gap lower bound using the canonical path method. For the remaining n1 blocks, we apply the Gershgorin circle theorem to bound their eigenvalues.

3.2 Unitary design jumps

Bounded-degree systems.

To establish a lower bound on the spectral gap of the Lindbladian ℒβ with unitary 1-design jumps, we consider a decomposition of the form ℒβ = ℒμ + ℒδ, where ℒμ = 𝔼_{𝑨∼𝒟(U(n))}[ℒβ] is the expected Lindbladian, with the expectation taken over a single jump operator sampled from a unitary 1-design distribution 𝒟(U(n)), and ℒδ represents the remainder term. Due to the quadratic form of the Lindbladian given in expression (3), we see that 𝔼_{𝑨∼Haar(U(n))}[ℒβ] = 𝔼_{𝑨∼𝒟(U(n))}[ℒβ], where Haar(U(n)) denotes the Haar-random distribution over jump operators.

Note that a CKG Lindbladian must have jump operators in adjoint pairs, so a Lindbladian with a single jump operator will not satisfy detailed balance. However, the expected Lindbladian over one Haar random jump operator is equal to the expected Lindbladian over the adjoint pair of a Haar random jump operator, by linearity of expectation.

The proof of Theorem 2.4 proceeds by first showing that this expected Lindbladian 𝔼_{𝑨∼Haar(U(n))}[ℒβ] has a constant spectral gap, as long as β(n)‖𝑯(n)‖ is bounded by a constant as a function of system size. As before, we call such systems bounded degree, since for constant β, bounded degree graphs satisfy the required property. Indeed, if such a system has degree bounded by d, it must have spectrum in [−d, d] by the Gershgorin circle theorem, since every row consists of zeros and at most d ones. Adding phases to the edges of these bounded degree graphs remains feasible by a similar argument, so there is no constraint of stoquasticity.
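The Gershgorin argument can be checked directly. The sketch below uses the 3-regular hypercube graph on 8 vertices as an illustrative bounded degree example:

```python
import numpy as np

def gershgorin_radius(H):
    """Max over rows of sum_j |H_ij|. For a Hermitian matrix with zero
    diagonal, all eigenvalues lie in [-r, r]."""
    return max(np.sum(np.abs(H), axis=1))

# Adjacency matrix of the 3-dimensional hypercube graph (3-regular, 8 vertices):
# vertices are bit strings, edges flip one bit.
n = 8
H = np.zeros((n, n))
for v in range(n):
    for b in range(3):
        H[v, v ^ (1 << b)] = 1.0

r = gershgorin_radius(H)
evals = np.linalg.eigvalsh(H)
assert r == 3                                 # each row has exactly d = 3 ones
assert np.all(np.abs(evals) <= r + 1e-9)      # spectrum contained in [-d, d]
```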

The result is stated formally as follows:

Lemma 3.1 (Constant spectral gap of average Lindbladian for bounded degree systems).

Let β(n)⁻¹ be a sequence of temperatures and 𝑯(n) a sequence of n×n Hamiltonians such that β‖𝑯‖ = O(1). The spectral gap of ℒμ = 𝔼_{𝑨∼𝒟(U(n))}[ℒβ], the expected CKG Lindbladian with energy resolution σE = β⁻¹ over the ensemble of one jump operator sampled from a unitary 1-design, is asymptotically Ω(1).

To establish Lemma 3.1, we show that for any system, the average Lindbladian over a Haar random ensemble of jump operators decomposes as ℒμ = ℒclassical + ℒdephasing. For a density matrix in the energy basis, ℒclassical evolves the diagonal as a classical continuous-time Markov chain, which in the limit maps the diagonal to the Gibbs distribution. The spectral gap of ℒclassical can be analyzed with the large suite of techniques for classical Markov chains.

Meanwhile, ℒdephasing damps the off-diagonal terms of the density matrix. In the limit t → ∞, the state therefore converges to a classical distribution on the diagonal in the energy basis, with no off-diagonal terms, as desired. The operator ℒdephasing is diagonal in the energy basis of density matrices, with each off-diagonal element decaying at an independent rate, and is therefore simple to analyze as well. In summary, Lemma 3.1 establishes that λgap(ℒμ) = min(λgap(ℒclassical), λmin(ℒdephasing)) = Ω(1).
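The classical part of this decomposition can be illustrated with a small stand-in: a continuous-time Markov chain on the energy levels with Metropolis-style rates (our own illustrative choice, not the actual ℒclassical), whose evolution converges to the Gibbs distribution on the diagonal:

```python
import numpy as np
from scipy.linalg import expm

# Toy energy levels and inverse temperature (illustrative choices).
energies = np.array([0.0, 0.5, 1.0, 2.0])
beta = 1.5
n = len(energies)

# Metropolis rates: rate(i -> j) = min(1, exp(-beta * (E_j - E_i))).
# These satisfy classical detailed balance with respect to the Gibbs weights.
Q = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            Q[j, i] = min(1.0, np.exp(-beta * (energies[j] - energies[i])))
for i in range(n):
    Q[i, i] = -Q[:, i].sum()          # columns sum to zero: a valid generator

# Any initial distribution converges to the Gibbs distribution.
p0 = np.array([0.0, 0.0, 0.0, 1.0])
p_inf = expm(100.0 * Q) @ p0
gibbs = np.exp(-beta * energies) / np.sum(np.exp(-beta * energies))
assert np.allclose(p_inf, gibbs, atol=1e-6)
```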

However, this result does not imply that any given jump sampled from the unitary 1-design (with its adjoint pair) would yield a gapped Lindbladian. As a result, it does not yet yield an efficient Gibbs sampling algorithm. To obtain such a result in Theorem 2.4, we demonstrate that the remainder term ℒδ = ℒβ − ℒμ has a small spectral norm when the Lindbladian is constructed from a sufficiently large number of jumps M, rather than just one. In particular, a Lindbladian sampled with Θ(log(n)) normalized jumps from any 1-design concentrates closely around its expectation, thereby establishing a spectral gap lower bound. Since this lower bound applies to any bounded degree graph at constant temperature β⁻¹, it applies to the periodic lattices, path graphs, and k-regular graphs discussed in the previous section.

Unbounded degree systems.

In the context of unbounded degree systems, 1-design unitaries can no longer, in general, achieve an algorithm that is efficient in log(n) at constant temperature. Indeed, ℒμ = 𝔼_{𝑨∼𝒟(U(n))}[ℒβ] = ℒclassical + ℒdephasing does not necessarily have a constant spectral gap, as it did in the case of bounded degree systems. However, we may establish a condition on the spectrum of 𝑯 with which we can recover a lower bound on the spectral gap:

Lemma 3.2 (Spectral gap of average Lindbladian for unbounded systems).

Let 𝑯(n) be a sequence of n×n Hamiltonians. For some constant C, let δ(n) be the proportion of eigenvalues λj of 𝑯 such that β(λj − λmin) ≤ C. The spectral gap of ℒμ = 𝔼_{𝑨∼𝒟(U(n))}[ℒβ], the expected CKG Lindbladian over the Haar random unitary ensemble of its jump operator at temperature β⁻¹ with σE = Θ(β⁻¹), is asymptotically Ω(δ(n)).

The lemma expresses that if a δ(n) fraction of the eigenvalues lie within O(β⁻¹) of λmin, the spectral gap is Ω(δ(n)). Similarly to Theorem 2.4, using this result to obtain an efficient Gibbs sampling algorithm amounts to showing, via a concentration bound, that a Lindbladian with enough independently sampled jump operators shares a similar asymptotic spectral gap with the average Lindbladian. When the average spectral gap from the above lemma is δ(n), the number of jump operators needed to concentrate around the expectation increases to δ(n)⁻², along with an overhead of log(n)·log(β‖𝑯‖)². This result is captured in Theorem 2.6 and yields an algorithm with runtime poly(δ(n)⁻¹, log(n), log(ϵ⁻¹)) for Gibbs sampling, where ϵ is the error in trace distance. This runtime bound relies on the standing assumption in this paper that log(β‖𝑯‖) = poly(log(n)).

4 Technical details

4.1 The quantum Gibbs sampler

4.1.1 Lindbladian evolution

The recently proposed CKG quantum MCMC algorithm addresses the problem of finding thermal states by imitating thermodynamic processes [11, 10]. In this process, a system of particles evolves in contact with a thermal bath at some fixed temperature β⁻¹. Due to interactions with the bath, the system is described by a probabilistic mixture of quantum states 𝝆. This state evolves in time, to a good approximation, under Markovian dissipative dynamics, d𝝆/dt = ℒβ[𝝆], given in terms of an operator ℒβ known as the Lindbladian. This operator involves a coherent term 𝑩 that describes the interaction among the particles in the system, as well as a term specified by a series of Lindblad operators 𝑳j that drive the dissipative transitions. The dynamics of the coherent term 𝑩 are reversible, while the dissipative transitions drive all states toward some “stationary state.” These transitions can be understood as perturbations from the bath; as all states converge to the Gibbs state, the information of the system leaks to the bath via these perturbations. The expression, in terms of 𝑩 and 𝑳j, is:

ℒβ[·] = −i[𝑩, ·] + ∑j (𝑳j (·) 𝑳j† − ½{𝑳j†𝑳j, ·}).

The summands 𝑳j (·) 𝑳j† are termed the transition part of the Lindbladian, and the summands −½{𝑳j†𝑳j, ·} the decay part. The choice of the Lindbladian ℒβ can vary depending on the precise nature of interactions between the system and the bath. However, to prepare the Gibbs (thermal) state at temperature β⁻¹, the Lindbladian should satisfy

d𝝆β/dt = ℒβ[𝝆β] = 0  where  𝝆β := e^{−β𝑯}/Tr(e^{−β𝑯}), (2)

and moreover 𝝆β should be the unique stationary state of the Lindbladian. The long-term evolution of the system under this Lindbladian, as a result, would converge to the Gibbs state of the Hamiltonian 𝑯 at temperature 1/β.
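The defining algebraic properties of a Lindbladian of this form, preservation of trace and Hermiticity, can be checked numerically for random choices of 𝑩 and 𝑳j. A minimal sketch (the random 4-dimensional instance is our own illustrative choice):

```python
import numpy as np

def lindbladian(rho, B, Ls):
    """L[rho] = -i[B, rho] + sum_j (L_j rho L_j^dag - 1/2 {L_j^dag L_j, rho})."""
    out = -1j * (B @ rho - rho @ B)
    for L in Ls:
        LdL = L.conj().T @ L
        out += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return out

rng = np.random.default_rng(3)
n = 4
B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
B = (B + B.conj().T) / 2                       # coherent term is Hermitian
Ls = [rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)) for _ in range(3)]
rho = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
rho = rho @ rho.conj().T
rho /= np.trace(rho)                           # a valid density matrix

drho = lindbladian(rho, B, Ls)
assert abs(np.trace(drho)) < 1e-9              # trace is preserved
assert np.allclose(drho, drho.conj().T)        # Hermiticity is preserved
```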

4.1.2 Detailed balance

To ensure that the Lindbladian ℒ_β converges to the state ρ_β, [11] designs a Lindbladian that satisfies Kubo–Martin–Schwinger (KMS) detailed balance with respect to ρ_β. KMS detailed balance is one of several ways of quantizing the notion of classical detailed balance for Markov chains. KMS detailed balance of ℒ_β is self-adjointness with respect to the inner product

\langle \sigma_1, \sigma_2 \rangle_{\rho_\beta^{-1}} = \operatorname{Tr}\!\big(\sigma_1^\dagger \rho_\beta^{-1/2} \sigma_2 \rho_\beta^{-1/2}\big). (KMS Inner Product)

In particular, it is equivalent to the relation that

\mathcal{L}_\beta^\dagger[\cdot] = \rho_\beta^{-1/2}\, \mathcal{L}_\beta\big[\rho_\beta^{1/2}(\cdot)\rho_\beta^{1/2}\big]\, \rho_\beta^{-1/2} (Detailed Balance)

where ℒ_β† is the adjoint Lindbladian with respect to the Hilbert–Schmidt inner product ⟨σ_1, σ_2⟩ = Tr(σ_1† σ_2). The adjoint operator ℒ_β†, in the Heisenberg picture, describes the dynamics of observables under evolution by ℒ_β. The Lindbladian evolution is described by a quantum channel, and therefore the observable I must always be fixed by exp(ℒ_β† t). This implies that ℒ_β†[I] = 0. The detailed balance formula thereby implies that ℒ_β[ρ_β] = 0, as desired. Note that KMS detailed balance can be dually described as the self-adjointness of ℒ_β† with respect to the inner product ⟨σ_1, σ_2⟩_{ρ_β} = Tr(σ_1† ρ_β^{1/2} σ_2 ρ_β^{1/2}).
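The following toy example (our own illustration, not code from [11]) builds a Metropolis-weighted Davies-type dissipator for a single qubit with H = Z, and numerically checks both consequences of KMS detailed balance: the Gibbs state is a fixed point, and the generator is self-adjoint under the ρ_β^{-1}-weighted inner product:

```python
import numpy as np

beta = 1.3
H = np.diag([1.0, -1.0])                       # H = Z, written in its energy eigenbasis
rho_beta = np.diag(np.exp(-beta * np.diag(H)))
rho_beta /= np.trace(rho_beta)

# dissipator for the jump A = X, split into Bohr-frequency components:
# |1><0| lowers the energy (omega = -2), |0><1| raises it (omega = +2); Metropolis weights
gamma = lambda w: min(1.0, np.exp(-beta * w))
jumps = [np.sqrt(gamma(-2.0)) * np.array([[0, 0], [1, 0]], dtype=complex),
         np.sqrt(gamma(+2.0)) * np.array([[0, 1], [0, 0]], dtype=complex)]

def L(rho):
    out = np.zeros((2, 2), dtype=complex)
    for Lj in jumps:
        LdL = Lj.conj().T @ Lj
        out += Lj @ rho @ Lj.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return out

# (i) the Gibbs state is stationary: L[rho_beta] = 0
assert np.allclose(L(rho_beta), 0)

# (ii) KMS self-adjointness: <s1, L[s2]> = <L[s1], s2> in the rho_beta^{-1} inner product
inv_sqrt = np.diag(np.diag(rho_beta) ** -0.5)
kms = lambda s1, s2: np.trace(s1.conj().T @ inv_sqrt @ s2 @ inv_sqrt)
rng = np.random.default_rng(1)
s1, s2 = [rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)) for _ in range(2)]
assert np.isclose(kms(s1, L(s2)), kms(L(s1), s2))
```

The specific jump splitting here is the σ_E → 0 (Davies) limit described in Section 4.2; it is the simplest setting in which both checks can be run exactly.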

4.1.3 Construction and parameters

The quantum Gibbs sampler in [11] constructs a Lindbladian that satisfies two properties:

  1. ℒ_β satisfies detailed balance with respect to ρ_β, and therefore ℒ_β[ρ_β] = 0.

  2. The dynamics of ℒ_β can be efficiently implemented.

Their Lindbladian, which we term the CKG Lindbladian, can be simulated on a quantum computer with a cost per unit time t=1 roughly equal to that of simulating the Hamiltonian dynamics of H. The CKG Lindbladian is closely related to the Davies generator, which is a physically motivated Lindbladian that satisfies detailed balance but is not efficiently implementable in general. A full description of both Lindbladians is given in the Appendix.

The Gibbs sampling algorithm evolves an initial state ρ_0 according to the efficiently implemented Lindbladian ℒ_β, and produces the state ρ_t = e^{ℒ_β t}[ρ_0] after time period t. The mixing time is roughly the time that it takes for the state ρ_t to approach the Gibbs state ρ_β; that is, e^{ℒ_β t_mix}[ρ_0] ≈ ρ_β. The efficiency of the algorithm therefore scales linearly with the unit-time simulation cost and the mixing time. The algorithm has several parameters in the Lindbladian’s construction. In addition to the inverse temperature β, the algorithm specifies an energy resolution σ_E. A salient feature of [11]’s construction is that it can achieve detailed balance even though the algorithm only probes the energies of the Hamiltonian H with approximate precision; σ_E quantifies this level of precision. The cost of the Lindbladian simulation depends linearly on σ_E^{-1}, but increasing the precision may also improve the mixing time. Taking σ_E → 0 for absolute precision recovers the Davies generator – when distinguishing the energies of the system exactly is infeasible, this Lindbladian cannot be simulated efficiently.

A set of jump operators A_a must also be specified for the Lindbladian. These operators are decomposed by frequency and reassembled in a particular way to construct the Lindblad operators that let ℒ_β satisfy detailed balance. They must appear in adjoint pairs: i.e., if A ∈ {A_a}, then A† ∈ {A_a}. The cost of simulation scales with the cost of implementing a block-encoding of Σ_a |a⟩⟨a| ⊗ A_a. In particular, the jump operators must be normalized when implemented for the algorithm, satisfying Σ_a A_a† A_a ≼ 1. CKG Lindbladians are linear in their jump operators – if ℒ_1 has the single jump operator A_1 and ℒ_2 has the single jump operator A_2, then the Lindbladian with jump operators A_1 and A_2 satisfies ℒ = ℒ_1 + ℒ_2. If ℒ_β was constructed from jumps A_a, then the scaled jump operators √s·A_a produce the Lindbladian s·ℒ_β, scaling the mixing time by s^{-1}. We may therefore assume that Σ_a A_a† A_a = 1 exactly, since renormalizing can only improve the spectral gap. In its normalized form, the set of jump operators can be understood as a jumping distribution over the A_a, which we will notate a ∼ 𝒜, where each a is sampled with probability weight A_a† A_a.

4.2 Mathematical description

We begin with a description of the Davies generator, which is the limit of the CKG Lindbladian as σ_E → 0. This generator was developed from a physical approximation of an open thermalizing quantum system, but at low temperatures it is unphysical and can be hard to implement. We then generalize these notions to the implementable CKG Lindbladian.

4.2.1 Davies generator

In the description of the Davies generator for a given system H, there is a coherent term and there are jump operators A_a, which must appear in adjoint pairs. The dissipative part of the Lindbladian is expressed as follows:

\mathcal{L}_\beta[\cdot] = \sum_{a\in[M]} \int \gamma(\omega)\Big( A^a_\omega(\cdot)A^{a\dagger}_\omega - \tfrac{1}{2}\{A^{a\dagger}_\omega A^a_\omega, \cdot\}\Big)\, d\omega, (3)

where 𝑨ωa is the Operator Fourier Transform (OFT) of jump operator 𝑨a:

A^a_\omega = \frac{1}{\sqrt{2\pi}} \int e^{iHt} A_a e^{-iHt}\, e^{-i\omega t}\, dt.

The Davies generator chooses Lindblad operators √γ(ω)·A^a_ω, each of which selects the energy transitions, or Bohr frequencies, in A_a that are precisely ω. Because it requires certainty in energy, by the energy–time Heisenberg uncertainty principle, simulating the evolution of the Davies generator efficiently is infeasible in general. In the above, γ is some function satisfying γ(ω) = \overline{γ(ω)} = e^{-βω} γ(-ω). The Lindblad operators are scaled by √γ(ω) precisely to satisfy KMS detailed balance. Since A^a_ω represents jumps with Bohr frequency ω, the functional equation for γ ensures the desired ratio between jumps with Bohr frequencies ω and −ω. We choose the Metropolis filter, γ(ω) = min(1, e^{-βω}), though another common filter, γ(ω) = 1/(1+e^{βω}) for “Glauber dynamics”, could also be used to obtain the same results.
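A minimal sketch of this frequency decomposition (the σ_E → 0 limit, with an arbitrary 4-level spectrum of our own choosing): group the matrix entries of a jump operator by Bohr frequency, then verify that the components resolve A exactly and that the Metropolis filter satisfies the functional equation above:

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(2)
E = np.array([0.0, 0.7, 1.5, 2.1])        # energies of H in its eigenbasis (arbitrary)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

# A_nu keeps the entries <l|A|m> with Bohr frequency nu = E_l - E_m
components = defaultdict(lambda: np.zeros((4, 4), dtype=complex))
for l in range(4):
    for m in range(4):
        components[round(E[l] - E[m], 9)][l, m] = A[l, m]

# the frequency components resolve the jump operator exactly: A = sum_nu A_nu
assert np.allclose(sum(components.values()), A)

# the Metropolis filter satisfies the KMS condition gamma(w) = e^{-beta w} gamma(-w)
beta = 0.8
gamma = lambda w: min(1.0, np.exp(-beta * w))
for nu in components:
    assert np.isclose(gamma(nu), np.exp(-beta * nu) * gamma(-nu))
```

The Davies Lindblad operators are then √γ(ν)·A_ν for each Bohr frequency ν appearing in this dictionary.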

The Davies generator satisfies detailed balance with respect to the thermal state ρ_β. Some presentations of the Davies generator include a coherent term −i[H, ·]. If this term is included, the generator does not satisfy detailed balance, so we do not follow this convention. However, the term does not affect the fixed point of the generator, since ρ_β commutes with H and therefore −i[H, ρ_β] = 0.

4.2.2 CKG Lindbladian

The CKG Lindbladian is defined almost identically to the Davies generator, but is altered slightly so that it still obeys detailed balance while being efficiently implementable.

\mathcal{L}_\beta[\cdot] = \underbrace{-i[B,\cdot]}_{\text{coherent term}} + \sum_{a\in[M]} \int \gamma(\omega)\Big( \underbrace{\hat{A}_a(\omega)(\cdot)\hat{A}_a(\omega)^\dagger}_{\text{transition term}} - \underbrace{\tfrac{1}{2}\{\hat{A}_a(\omega)^\dagger \hat{A}_a(\omega), \cdot\}}_{\text{decay term}} \Big)\, d\omega,

where 𝑨^a(ω) is now the Gaussian-supported OFT of jump operator 𝑨a:

\hat{A}_a(\omega) = \frac{1}{\sqrt{2\pi}} \int e^{iHt} A_a e^{-iHt}\, e^{-i\omega t} f(t)\, dt.

To ensure that the jump operators do not have infinite precision in energy, a Gaussian-supported OFT is performed instead to obtain \hat{A}_a(ω), which selects a Gaussian band of energies around ω.

Here, f(t) = e^{-σ_E² t²} (σ_E √(2/π))^{1/2}, with Fourier transform f̂(ω) = (σ_E √(2π))^{-1/2} exp(−ω²/(4σ_E²)). As a result, the operator \hat{A}_a(ω) can be shown to be equal to Σ_ν f̂(ω−ν) A^a_ν. The function f(t) was chosen so that its squared Fourier transform f̂²(ω) is a Gaussian with standard deviation σ_E, which features prominently in the Lindbladian (since the Lindbladian consists of terms quadratic in \hat{A}_a(ω)). Taking σ_E = Θ(β^{-1}) yields an efficient simulation algorithm, assuming a block-encoding of H and a block-encoding of the jump operators Σ_{a∈[M]} |a⟩⟨a| ⊗ A_a, so σ_E is taken to be on the order of β^{-1} in this paper.
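These normalizations can be checked numerically. The sketch below (the quadrature grid and tolerances are our own choices) computes f̂ by direct numerical integration, compares it against the closed form, and confirms that f̂² is the Gaussian density with standard deviation σ_E:

```python
import numpy as np

sigma_E = 0.7
t = np.linspace(-30, 30, 300001)
dt = t[1] - t[0]
f = np.exp(-sigma_E**2 * t**2) * np.sqrt(sigma_E * np.sqrt(2 / np.pi))

def fhat(w):
    # fhat(w) = (2*pi)^{-1/2} * integral f(t) e^{-iwt} dt, by direct quadrature
    return np.sum(f * np.exp(-1j * w * t)) * dt / np.sqrt(2 * np.pi)

for w in [0.0, 0.5, 1.3]:
    closed_form = np.exp(-w**2 / (4 * sigma_E**2)) / np.sqrt(sigma_E * np.sqrt(2 * np.pi))
    assert abs(fhat(w).real - closed_form) < 1e-6
    # fhat^2 is the Gaussian probability density with standard deviation sigma_E
    gaussian_pdf = np.exp(-w**2 / (2 * sigma_E**2)) / (sigma_E * np.sqrt(2 * np.pi))
    assert abs(fhat(w).real**2 - gaussian_pdf) < 1e-6
```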

Since \hat{A}_a(ω) is a noisy decomposition of A_a into frequencies, it is not immediately clear whether there is a choice of function γ(ω) for which they can be recombined to achieve detailed balance. Indeed, as shown in [11], there is! The choice of γ(ω) is such that the transition part of ℒ_β, the summand Σ_{a∈[M]} ∫ γ(ω) \hat{A}_a(ω)(·)\hat{A}_a(ω)† dω, still satisfies KMS detailed balance. [11] proved that there is a unique choice of B, up to translation by a scalar, such that −i[B, ·] − (1/2) Σ_{a∈[M]} ∫ γ(ω) {\hat{A}_a(ω)† \hat{A}_a(ω), ·} dω also satisfies detailed balance. For the Davies generator, this coherent term B is simply 0 (or corresponds to a Lamb shift that commutes with the Hamiltonian), and the decay term by itself already satisfies detailed balance. In general, B can be expressed as:

B = \sum_{a\in[M]} \sum_{\nu_1,\nu_2} \frac{\tanh(\beta(\nu_1-\nu_2)/4)}{2i}\, (A^a_{\nu_2})^\dagger A^a_{\nu_1}.

The choice of γ for this algorithm, for which the filter is efficiently implementable, is γ(ω) = exp(−β max(ω + βσ_E²/2, 0)). As σ_E → 0 it converges to the Metropolis filter of the Davies generator; in particular, this γ is precisely the Metropolis filter shifted in frequency by βσ_E²/2. The CKG Lindbladian requires a choice of γ for which α = γ ∗ g, the convolution of γ with a certain Gaussian g, satisfies the functional equation that was originally satisfied by γ in the Davies generator.
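The relation between the two filters is easy to verify numerically; the snippet below (parameters arbitrary) checks that the CKG filter is exactly the Metropolis filter with frequencies shifted by βσ_E²/2, and hence converges to it as σ_E → 0:

```python
import numpy as np

beta = 2.0
metropolis = lambda w: np.minimum(1.0, np.exp(-beta * w))
ckg_gamma = lambda w, sE: np.exp(-beta * np.maximum(w + beta * sE**2 / 2, 0.0))

w = np.linspace(-3, 3, 601)

# exactly the Metropolis filter with the frequency shifted by beta*sE^2/2 ...
sE = 0.3
assert np.allclose(ckg_gamma(w, sE), metropolis(w + beta * sE**2 / 2))

# ... and hence it converges to the Metropolis filter as sE -> 0
assert np.max(np.abs(ckg_gamma(w, 0.01) - metropolis(w))) < 1e-3
```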

We also note that β is bounded in operator norm.

Lemma 4.1.

Consider the CKG Lindbladian ℒ_β with temperature β^{-1}, using the Metropolis filter, and with jump operators A_a for which Σ_a A_a† A_a ≼ 1. This Lindbladian satisfies ‖ℒ_β‖ = O(log(β‖H‖)), where ‖·‖ is the operator norm of ℒ_β induced by the operator norms on the input and output vector spaces.

Proof.

The result follows from citing Proposition B.2 in [11] to bound the operator norm of the coherent term, and bounding the transition and decay terms manually. This proof is described in the arXiv version [32].

4.3 Spectral gap

Since the quantum MCMC algorithm was proposed only recently, numerical and analytic characterizations of the algorithm are limited. As for classical Markov chains, it has been shown that the mixing time of the algorithm can be characterized by the spectral gap λ_gap(ℒ_β) of the Lindbladian. If the first eigenvalue λ_1 = 0 corresponds to the eigenvector ρ_β, then the spectral gap is λ_gap(ℒ_β) = min_{j>1} |λ_j| [11]. Lindbladians of this type are, like classical Markov chain generators, negative semidefinite, so λ_gap(ℒ_β) = min_{j>1}(−λ_j). More precisely, it holds that

\frac{\Omega(1)}{\lambda_{\mathrm{gap}}(\mathcal{L}_\beta)} \le t_{\mathrm{mix}}(\mathcal{L}) \le \frac{\log\|\rho_\beta^{-1}\|}{\lambda_{\mathrm{gap}}(\mathcal{L}_\beta)} \le \frac{\mathcal{O}(\beta\|H\| + \log\dim(H))}{\lambda_{\mathrm{gap}}(\mathcal{L}_\beta)}. (4)

In particular, analytically bounding this spectral gap from below is sufficient to prove that ρβ is the unique fixed point, and for obtaining an upper bound on the mixing time. For so-called rapid mixing, in which the mixing time is logarithmic in the number of qubits, the spectral gap bound often does not suffice. For our purposes of proving efficiency in the number of qubits, however, this issue is moot.
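To illustrate the role of the gap, the following sketch (a single-qubit Metropolis–Davies toy model with H = Z and energy-raising/lowering jumps, our own construction rather than a genuine CKG Lindbladian) vectorizes the generator into a 4×4 matrix, reads off λ_gap as the smallest nonzero eigenvalue magnitude, and watches the trace distance to ρ_β decay exponentially:

```python
import numpy as np
from scipy.linalg import expm

beta = 1.0
p = np.exp(-beta * np.array([1.0, -1.0]))     # Boltzmann weights for H = Z
rho_beta = np.diag(p / p.sum())

# jump operators of the single-qubit Metropolis-Davies generator (sigma_E -> 0 limit)
gm, gp = 1.0, np.exp(-2 * beta)               # gamma(-2) and gamma(+2)
Ls = [np.sqrt(gm) * np.array([[0.0, 0.0], [1.0, 0.0]]),
      np.sqrt(gp) * np.array([[0.0, 1.0], [0.0, 0.0]])]

# vectorize L as a 4x4 matrix acting on rho.reshape(-1) (row-major convention:
# vec(A X B) = kron(A, B.T) vec(X)); the jumps here are real matrices
I = np.eye(2)
Lmat = sum(np.kron(L, L.conj()) - 0.5 * (np.kron(L.T @ L, I) + np.kron(I, (L.T @ L).T))
           for L in Ls)

evals = np.linalg.eigvals(Lmat)
gap = min(abs(v) for v in evals if abs(v) > 1e-9)

# trace-distance error to the Gibbs state decays exponentially; the gap controls the rate
rho0 = np.diag([1.0, 0.0])
errs = []
for t in [1.0, 2.0, 4.0]:
    rho_t = (expm(Lmat * t) @ rho0.reshape(-1)).reshape(2, 2)
    errs.append(0.5 * np.abs(np.linalg.eigvalsh(rho_t - rho_beta)).sum())
print(gap, errs)
```

For this generator the gap is (γ(−2)+γ(+2))/2, the dephasing rate of the off-diagonal block.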

4.4 Unbounded degree systems

We now prove efficient Gibbs sampling results for certain sparse Hamiltonians of unbounded degree.

4.4.1 Random log(𝐧)-regular graphs

With high probability at constant temperature, a randomly selected d-regular graph with d = log(n), together with poly(d) random 1-design jumps, has a Lindbladian spectral gap of Ω(d^{-3/4}). This gives a polynomial-time algorithm to prepare the Gibbs state for most such graphs at constant temperature.

The gap of Ω(d^{-3/4}) arises because a random d-regular graph, for d → ∞, has one eigenvalue at d and the rest distributed between −2√(d−1) and 2√(d−1) in a distribution that converges to a (normalized) semicircle. This semicircular distribution frequently appears in random matrix theory, for instance in the Gaussian unitary ensemble (GUE), which models the spectrum of many chaotic quantum systems. When the spectrum of a quantum system indeed follows this distribution, a δ(n) = Ω(d^{-3/4}) fraction of the eigenvalues lie within a constant of the minimum eigenvalue.

Theorem 4.2.

With any constant probability 1−ξ, for a randomly selected d-regular graph with d = log(n), a δ = Ω(d^{-3/4}) fraction of the eigenvalues lie within O(1) of the minimum eigenvalue.

As an immediate consequence of Theorem 4.2 and Theorem 2.6, we obtain an algorithm polynomial in d to prepare the Gibbs state of a random d-regular graph. To prove Theorem 4.2, we use Theorem 2 in [17]. For this context, the following statement suffices.

Theorem 4.3.

Let the degree be d = log(n). For sufficiently large n, there exists some D > 0 such that for any interval I and any 0 < ϵ < α < 1 with |I| > D d^{−α+ϵ}, with probability 1 − o(n^{-1}) over random d-regular graphs, |δ(n) − μ| < d^{-ϵ}|I|, where μ = ∫_I ρ_sc(x) dx and δ(n) is the fraction of eigenvalues of the d-regular graph in I ⊆ [−2√(d−1), 2√(d−1)].

In the above, ρ_sc is the semicircular distribution of radius 2√(d−1), ρ_sc(x) = (1/(2π(d−1)))√(4(d−1) − x²), which is the asymptotic spectral distribution of a random d-regular graph as d → ∞. From this result, Theorem 4.2 follows: as described more explicitly in [32], the density ρ_sc near the bottom of the semicircle can be estimated, and the theorem then relates this estimate to the fraction of eigenvalues near the minimum eigenvalue, proving the result.
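The d^{-3/4} edge scaling can be read off directly from the semicircle density. The sketch below (the window C and the grid are our own choices) integrates ρ_sc over an O(1) window above the minimum eigenvalue and checks that quadrupling d shrinks the edge fraction by roughly 4^{3/4} ≈ 2.83:

```python
import numpy as np

def edge_fraction(d, C=1.0, n_pts=200000):
    """Semicircle mass within C of the minimum eigenvalue -2*sqrt(d-1)."""
    R = 2 * np.sqrt(d - 1)
    # midpoint rule on [-R, -R + C]
    x = np.linspace(-R, -R + C, n_pts, endpoint=False) + C / (2 * n_pts)
    rho = np.sqrt(np.maximum(4 * (d - 1) - x**2, 0.0)) / (2 * np.pi * (d - 1))
    return np.sum(rho) * C / n_pts

ds = [100, 400, 1600]
fracs = [edge_fraction(d) for d in ds]
ratios = [fracs[i] / fracs[i + 1] for i in range(2)]
print(fracs, ratios)  # each ratio is close to 4**0.75 ~ 2.83
```

Near the edge the density behaves like √u (with u the distance from the edge), so the mass of a constant window scales as R^{-3/2} ∝ d^{-3/4}.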

4.4.2 Pauli String Ensemble

We now mention another ensemble of Hamiltonians studied by [9] in the context of low-energy state preparation. In [9], efficient low-energy state preparation with phase estimation is demonstrated under the same conditions as our efficient Gibbs sampling in Theorem 2.6. Indeed, if many eigenvectors are close to the ground-state energy, as we require, then performing phase estimation on the maximally mixed state has a high probability of measuring a low-energy state, so low-energy state preparation is possible as well. They study the following ensemble of Hamiltonians on n_0 qubits, H_PS = Σ_{j=1}^m (r_j/√m) σ_j, where each σ_j is a random Pauli string on n_0 qubits, each r_j is sampled uniformly from {−1, 1}, and m = c_2 n_0^5 ϵ^{-4}. The parameter ϵ satisfies ϵ ≥ 2^{-n_0/c_1}, and c_1, c_2 are absolute constants. The resulting spectrum is again close enough to a semicircular distribution to obtain an efficient Gibbs sampler for certain temperatures that depend on ϵ. As ϵ decreases, Gibbs sampling becomes efficient for even larger values of β (lower temperatures), since the ensemble’s spectrum converges closer to a perfect semicircular distribution at the edge of the spectrum.
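For intuition, one can sample a small instance of this ensemble (with far fewer terms m than the theorem requires, so this is only an illustration, not a verification of the theorem): the spectrum already concentrates on an interval of width about 4, consistent with an approximate semicircle:

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(7)
n0, m = 6, 4000        # illustration only: far smaller than the m = c2*n0^5*eps^-4 of [9]
paulis = [np.eye(2, dtype=complex),
          np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def random_pauli_string():
    # Kronecker product of n0 uniformly random single-qubit Paulis
    return reduce(np.kron, (paulis[k] for k in rng.integers(0, 4, size=n0)))

H = sum(rng.choice([-1.0, 1.0]) * random_pauli_string() for _ in range(m)) / np.sqrt(m)
assert np.allclose(H, H.conj().T)          # each signed Pauli string is Hermitian

evals = np.linalg.eigvalsh(H)
print(evals.min(), evals.max(), evals.std())   # bulk roughly semicircular on [-2, 2]
```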

Using the results in their paper, we establish that Gibbs sampling is efficient in n0 for certain values of ϵ and corresponding temperatures β1.

Theorem 4.4.

Say that H(n_0) is sampled from the ensemble H_PS on n_0 qubits, with ϵ = 2^{-o(n_0)} and ϵ ≤ 1. With any constant probability 1−ξ, for sufficiently large n_0, a δ = Ω(ϵ^{3/2}) fraction of the eigenvalues lie within O(ϵ) of the minimum eigenvalue.

Proof.

We utilize two results from [9]. First, they argue that Pr[‖H(n_0)‖ ≥ 2(1+ϵ)] ≤ exp(−c n_0) for an absolute constant c when m ≥ n_0^3 ϵ^{-4}, which is satisfied in this case. Therefore, with an arbitrary constant probability for sufficiently large n_0, ‖H(n_0)‖ ≤ 4, since ϵ ≤ 1. The second result is that, with probability 1 − exp(−c_3 n_0^{1/3}), at least an Ω(ϵ^{3/2}) fraction of the eigenvalues satisfy λ_i ≤ (1−ϵ)λ_min, where c_3 is an absolute constant. With any large constant probability, we therefore have |λ_i − λ_min| ≤ ϵ|λ_min| ≤ 4ϵ = O(ϵ) for an Ω(ϵ^{3/2}) fraction of the eigenvalues.

By Theorem 2.6, we obtain a Gibbs sampling algorithm with runtime poly(ϵ^{-1}, n_0) to prepare the Gibbs state at inverse temperature ϵ^{-1}. We may rephrase this result in terms of β. For any polynomially large β, it provides a Pauli string ensemble of Hamiltonians H_PS with ϵ = β^{-1} for which, with high likelihood, preparing the Gibbs state is efficient in n_0, assuming access to a block-encoding of the Hamiltonian of interest.

4.4.3 Hypercube graphs

For hypercubes of varying dimension at a constant temperature, using unitary 1-design jumps would yield an exponentially large runtime. The spectrum of a hypercube of dimension d, with 2^d vertices, consists of the integers −d, −d+2, …, d−2, d. The eigenvalue j has multiplicity \binom{d}{(d+j)/2}. In particular, for any constant C, only an exponentially small fraction δ(d) of the eigenvalues lie below −d + C. This leads to a naive algorithm with at worst exponential complexity in d.

However, a better result can be obtained by considering the hypercube as a system of d qubits. The graph of dimension d has 2^d vertices, which can be regarded as length-d bitstrings. The adjacency matrix is then the sum of Pauli X operators on each qubit, Σ_{i=1}^d X_i, since the hypercube has an edge between any two bitstrings of Hamming distance 1. Choosing the d jump operators A_a = d^{-1/2} Z_a, the mixing time can be improved to poly(d, log(ϵ^{-1})):
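The spectral claims above are easy to confirm for a small hypercube; the sketch below builds Σ_i X_i for d = 6 and checks the eigenvalues and their binomial multiplicities:

```python
import numpy as np
from functools import reduce
from math import comb

d = 6
X = np.array([[0.0, 1.0], [1.0, 0.0]])
I = np.eye(2)

# adjacency matrix of the d-dimensional hypercube as sum_i X_i on d qubits
def x_on(i):
    return reduce(np.kron, [X if k == i else I for k in range(d)])

H = sum(x_on(i) for i in range(d))
evals = np.round(np.linalg.eigvalsh(H)).astype(int)

# spectrum is {-d, -d+2, ..., d}, with eigenvalue j of multiplicity C(d, (d+j)/2)
for j in range(-d, d + 1, 2):
    assert np.sum(evals == j) == comb(d, (d + j) // 2)
```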

Theorem 4.5 (Spectral Gap for Hypercube with Local Jumps).

For a fixed inverse temperature β, there exists some energy resolution σ_E such that the spectral gap of the CKG Lindbladian ℒ_β for a d-dimensional hypercube with jump operators A_a = d^{-1/2} Z_a is asymptotically Ω(d^{-1}).

Proof.

The proof of this statement is given in the arXiv version [32] by showing that the Lindbladian, with this choice of jump operators, is the product of independent Lindbladians on each qubit. Each of their spectral gaps can then be calculated explicitly.

In the case of the hypercube, the local jump operators d^{-1/2} Z_i only drive transitions between adjacent energy levels. This vastly improves the performance of the classical Markov chain and the dephasing Markov chain within the Lindbladian. However, the Lindbladian does not consist only of these two terms, as it did in the limit of independently sampled 1-design jumps. Off-diagonal terms do exist, and the presence of Z_a for every index a is necessary to ensure that these off-diagonal terms do not completely eliminate the spectral gap. In some way, there must be “enough uncorrelated” local energy jumps to dampen these off-diagonal terms. For more complicated local Hamiltonians, it is not clear how such correlations can be suppressed while still maintaining locality.

5 Connection to previous work

Our results show that for a Hamiltonian H at temperature β^{-1} such that some δ(n) fraction of the eigenstates are within O(β^{-1}) of the ground-state energy, the CKG quantum Gibbs sampler with 1-design jumps efficiently prepares the Gibbs state to within trace distance ϵ. The running time scales polynomially with δ(n)^{-1}, β‖H‖, log(ϵ^{-1}), and the complexity of the block-encoding of H. This result is a baseline test showing that the CKG algorithm performs as well as other methods for preparing low-energy states of Hamiltonians. Indeed, our spectral condition is precisely the same as a condition that ensures easy quantum phase estimation of a near-ground state. Namely, performing quantum phase estimation on the maximally mixed state prepares a random eigenstate, which with probability δ(n) is within O(β^{-1}) of the minimum eigenvalue. Obtaining O(δ(n)^{-1}) samples and taking the minimum energy can therefore prepare a near-ground-state eigenvector. This approach is the basis of the previous analysis of random sparse Hamiltonians in [9].

Moreover, in [14], a quantum algorithm is presented that prepares the Gibbs state with a complexity that scales as poly(√(n/𝒵(β)), log(ϵ^{-1})). If a δ(n) fraction of the eigenstates are within O(β^{-1}) of the ground-state energy, then n/𝒵(β) = O(δ(n)^{-1}), and therefore under such conditions this algorithm efficiently prepares the Gibbs state as well. Effectively, the CKG Gibbs sampler with “generic” 1-design jumps performs the same as previously developed algorithms – the algorithm is at least as powerful, but a potential advantage in cooling must arise from a smart (i.e., local and unbiased) choice of jump operators.

After completing our work, we also became aware of [12], where, among other contributions, the authors derive a new, efficient quantum Gibbs sampler algorithm that utilizes jump operators sampled from a Clifford-random circuit. This Gibbs sampler is shown to exhibit a spectral gap bound under the same condition on the spectral density considered in Theorem 2.6. In comparison, we show that under the conditions of Theorem 2.6, the spectral gap of the CKG Lindbladian with an ensemble of 1-design jumps is bounded with high probability.

Finally, our conditions on the spectrum and the structure of random unitary design jumps resemble previous works on chaotic Hamiltonians that apply the Eigenstate Thermalization Hypothesis (ETH) to prove the fast mixing of dissipative dynamics [8, 15]. In particular, in [8], the proposed algorithm implements a “rounded” Davies generator, yielding a physical Lindbladian that block-diagonalizes into components consisting of small-energy transitions. They propose their own version of ETH that relies on the jump operators, for small Bohr frequencies ω, having independent Gaussian-distributed entries. This independence assumption is very strong, allowing them to conclude both that their jump operators are local and that distinct energy transitions are completely uncorrelated.

Our work shows fast mixing unconditionally for quantumly easy Hamiltonians, replacing the local jumps and ETH assumption for the rounded Davies generator with 1-design jump operators for the CKG Lindbladian. A similar ETH assumption to [8] would also yield fast-mixing for the CKG Lindbladian with local jumps, but more generally some approach must be taken to provably mitigate the correlations induced by implementing local jumps, in contrast with 1-design jump operators.

References

  • [1] Eric R. Anschuetz, Chi-Fang Chen, Bobak T. Kiani, and Robbie King. Strongly interacting fermions are non-trivial yet non-glassy. arXiv preprint arXiv:2408.15699, 2024. doi:10.48550/arXiv.2408.15699.
  • [2] Joran van Apeldoorn and András Gilyén. Improvements in quantum SDP-solving with applications. In Proceedings of the 46th International Colloquium on Automata, Languages, and Programming (ICALP), pages 99:1–99:15, 2019. arXiv: 1804.05058 doi:10.4230/LIPIcs.ICALP.2019.99.
  • [3] Joran van Apeldoorn, András Gilyén, Sander Gribling, and Ronald de Wolf. Quantum SDP-solvers: Better upper and lower bounds. Quantum, 4:230, 2020. Earlier version in FOCS’17. arXiv: 1705.01843 doi:10.22331/q-2020-02-14-230.
  • [4] Christian W. Bauer, Zohreh Davoudi, A. Baha Balantekin, Tanmoy Bhattacharya, Marcela Carena, Wibe A. de Jong, Patrick Draper, Aida El-Khadra, Nate Gemelke, Masanori Hanada, Dmitri Kharzeev, Henry Lamm, Ying-Ying Li, Junyu Liu, Mikhail Lukin, Yannick Meurice, Christopher Monroe, Benjamin Nachman, Guido Pagano, John Preskill, Enrico Rinaldi, Alessandro Roggero, David I. Santiago, Martin J. Savage, Irfan Siddiqi, George Siopsis, David Van Zanten, Nathan Wiebe, Yukari Yamauchi, Kübra Yeter-Aydeniz, and Silvia Zorzetti. Quantum simulation for high-energy physics. PRX Quantum, 4:027001, May 2023. doi:10.1103/PRXQuantum.4.027001.
  • [5] Dominic W. Berry, Andrew M. Childs, Richard Cleve, Robin Kothari, and Rolando D. Somma. Simulating Hamiltonian dynamics with a truncated Taylor series. Physical Review Letters, 114(9):090502, 2015. arXiv: 1412.4687 doi:10.1103/PhysRevLett.114.090502.
  • [6] Fernando G. S. L. Brandão, Amir Kalev, Tongyang Li, Cedric Yen-Yu Lin, Krysta M. Svore, and Xiaodi Wu. Quantum SDP solvers: Large speed-ups, optimality, and applications to quantum learning. In Proceedings of the 46th International Colloquium on Automata, Languages, and Programming (ICALP), pages 27:1–27:14, 2019. arXiv: 1710.02581 doi:10.4230/LIPIcs.ICALP.2019.27.
  • [7] Fernando G. S. L. Brandão and Krysta M. Svore. Quantum speed-ups for solving semidefinite programs. In Proceedings of the 58th IEEE Symposium on Foundations of Computer Science (FOCS), pages 415–426, 2017. arXiv: 1609.05537 doi:10.1109/FOCS.2017.45.
  • [8] Chi-Fang Chen and Fernando G. S. L. Brandão. Fast thermalization from the eigenstate thermalization hypothesis, 2021. doi:10.48550/arXiv.2112.07646.
  • [9] Chi-Fang Chen, Alexander M. Dalzell, Mario Berta, Fernando G. S. L. Brandão, and Joel A. Tropp. Sparse random Hamiltonians are quantumly easy. Phys. Rev. X, 14:011014, February 2024. doi:10.1103/PhysRevX.14.011014.
  • [10] Chi-Fang Chen, Michael J. Kastoryano, Fernando G. S. L. Brandão, and András Gilyén. Quantum thermal state preparation. arXiv: 2303.18224, 2023.
  • [11] Chi-Fang Chen, Michael J Kastoryano, and András Gilyén. An efficient and exact noncommutative quantum Gibbs sampler. arXiv preprint arXiv:2311.09207, 2023. doi:10.48550/arXiv.2311.09207.
  • [12] Hongrui Chen, Bowen Li, Jianfeng Lu, and Lexing Ying. A randomized method for simulating Lindblad equations and thermal state preparation. arXiv preprint arXiv:2407.06594, 2024. doi:10.48550/arXiv.2407.06594.
  • [13] Andrew M. Childs, Richard Cleve, Enrico Deotto, Edward Farhi, Sam Gutmann, and Daniel A. Spielman. Exponential algorithmic speedup by a quantum walk. In Proceedings of the 35th ACM Symposium on the Theory of Computing (STOC), pages 59–68, 2003. arXiv: quant-ph/0209131 doi:10.1145/780542.780552.
  • [14] Anirban Narayan Chowdhury and Rolando D. Somma. Quantum algorithms for Gibbs sampling and hitting-time estimation. Quantum Information and Computation, 17(1&2):41–64, 2017. arXiv: 1603.02940 doi:10.26421/QIC17.1-2.
  • [15] Zhiyan Ding, Chi-Fang Chen, and Lin Lin. Single-ancilla ground state preparation via Lindbladians. Phys. Rev. Res., 6:033147, August 2024. doi:10.1103/PhysRevResearch.6.033147.
  • [16] Zhiyan Ding, Bowen Li, and Lin Lin. Efficient quantum Gibbs samplers with Kubo–Martin–Schwinger detailed balance condition. arXiv preprint arXiv:2404.05998, 2024. doi:10.48550/arXiv.2404.05998.
  • [17] Ioana Dumitriu and Soumik Pal. Sparse regular random graphs: Spectral density and eigenvectors. The Annals of Probability, 40(5), September 2012. doi:10.1214/11-aop673.
  • [18] Ernesto Estrada and Juan A. Rodríguez-Velázquez. Subgraph centrality in complex networks. Phys. Rev. E, 71:056103, May 2005. doi:10.1103/PhysRevE.71.056103.
  • [19] András Gilyén, Chi-Fang Chen, Joao F Doriguello, and Michael J Kastoryano. Quantum generalizations of Glauber and Metropolis dynamics. arXiv preprint arXiv:2405.20322, 2024. doi:10.48550/arXiv.2405.20322.
  • [20] András Gilyén, Yuan Su, Guang Hao Low, and Nathan Wiebe. Quantum singular value transformation and beyond: Exponential improvements for quantum matrix arithmetics [full version], 2018. arXiv: 1806.01838
  • [21] Fernando G. S. L. Brandão, Richard Kueng, and Daniel Stilck França. Faster quantum and classical SDP approximations for quadratic binary optimization. Quantum, 6:625, January 2022. doi:10.22331/q-2022-01-20-625.
  • [22] Jeongwan Haah, Matthew B. Hastings, Robin Kothari, and Guang Hao Low. Quantum algorithm for simulating real time evolution of lattice Hamiltonians. In Proceedings of the 59th IEEE Symposium on Foundations of Computer Science (FOCS), pages 350–360, 2018. arXiv: 1801.03922 doi:10.1109/FOCS.2018.00041.
  • [23] Jiaqing Jiang and Sandy Irani. Quantum Metropolis sampling via weak measurement. arXiv preprint arXiv:2406.16023, 2024. doi:10.48550/arXiv.2406.16023.
  • [24] Julia Kempe, Alexei Kitaev, and Oded Regev. The complexity of the local Hamiltonian problem. In Kamal Lodaya and Meena Mahajan, editors, FSTTCS 2004: Foundations of Software Technology and Theoretical Computer Science, pages 372–383, Berlin, Heidelberg, 2005. Springer Berlin Heidelberg. doi:10.48550/arXiv.quant-ph/0406180.
  • [25] A Kitaev. Hidden correlations in the Hawking radiation and thermal noise, talk given at KITP. Santa Barbara, California, 12, 2015.
  • [26] Alexei Kitaev. A simple model of quantum Holography (part 2). Entanglement in strongly-correlated quantum matter, page 38, 2015.
  • [27] Guang Hao Low and Isaac L. Chuang. Optimal Hamiltonian simulation by quantum signal processing. Physical Review Letters, 118(1):010501, 2017. arXiv: 1606.02685 doi:10.1103/PhysRevLett.118.010501.
  • [28] Guang Hao Low and Isaac L. Chuang. Hamiltonian simulation by qubitization. Quantum, 3:163, 2019. arXiv: 1610.06546 doi:10.22331/q-2019-07-12-163.
  • [29] Sam McArdle, Suguru Endo, Alán Aspuru-Guzik, Simon C. Benjamin, and Xiao Yuan. Quantum computational chemistry. Reviews of Modern Physics, 92(1):015003, 2020. arXiv: 1808.10402 doi:10.1103/RevModPhys.92.015003.
  • [30] John Preskill. Simulating quantum field theory with a quantum computer. arXiv: 1811.10085, 2018.
  • [31] Patrick Rall, Chunhao Wang, and Pawel Wocjan. Thermal state preparation via rounding promises. Quantum, 7:1132, October 2023. doi:10.22331/q-2023-10-10-1132.
  • [32] Akshar Ramkumar and Mehdi Soleimanifar. Mixing time of quantum Gibbs sampling for random sparse Hamiltonians. arXiv: 2411.04454, 2024.
  • [33] Cambyse Rouzé, Daniel Stilck França, and Álvaro M Alhambra. Efficient thermalization and universal quantum computing with quantum Gibbs samplers. arXiv preprint arXiv:2403.12691, 2024. doi:10.48550/arXiv.2403.12691.
  • [34] Subir Sachdev and Jinwu Ye. Gapless spin-fluid ground state in a random quantum Heisenberg magnet. Physical Review Letters, 70(21):3339–3342, May 1993. doi:10.1103/physrevlett.70.3339.
  • [35] Sushant Sachdeva and Nisheeth K Vishnoi. Matrix inversion is as easy as exponentiation. arXiv preprint arXiv:1305.0526, 2013. doi:10.48550/arXiv.1305.0526.
  • [36] Oles Shtanko and Ramis Movassagh. Preparing thermal states on noiseless and noisy programmable quantum processors. arXiv preprint arXiv:2112.14688, 2021. doi:10.48550/arXiv.2112.14688.
  • [37] Brian Swingle and Mike Winer. Bosonic model of quantum Holography. Physical Review B, 109(9):094206, 2024. doi:10.48550/arXiv.2311.01516.
  • [38] Kristan Temme. Lower bounds to the spectral gap of Davies generators. Journal of Mathematical Physics, 54(12), December 2013. doi:10.1063/1.4850896.
  • [39] Joel A. Tropp. An introduction to matrix concentration inequalities. Foundations and Trends in Machine Learning, 8(1-2):1–230, 2015. arXiv: 1501.01571 doi:10.1561/2200000048.
  • [40] Dave Wecker, Matthew B. Hastings, Nathan Wiebe, Bryan K. Clark, Chetan Nayak, and Matthias Troyer. Solving strongly correlated electron models on a quantum computer. Phys. Rev. A, 92:062318, December 2015. doi:10.1103/PhysRevA.92.062318.
  • [41] Michael Winer, Richard Barney, Christopher L Baldwin, Victor Galitski, and Brian Swingle. Spectral form factor of a quantum spin glass. Journal of High Energy Physics, 2022(9):1–47, 2022. doi:10.48550/arXiv.2203.12753.
  • [42] Pawel Wocjan and Kristan Temme. Szegedy walk unitaries for quantum maps. Communications in Mathematical Physics, 402(3):3201–3231, 2023. doi:10.48550/arXiv.2107.07365.
  • [43] Daniel Zhang, Jan Lukas Bosse, and Toby Cubitt. Dissipative quantum Gibbs sampling. arXiv preprint arXiv:2304.04526, 2023. doi:10.48550/arXiv.2304.04526.

Appendix A CKG Lindbladian in the energy basis

We consider a quantum system with basis states |e_i⟩ and a Hamiltonian H. Notate the energy eigenstates as |j⟩, with energies E_j, and choose some jump operators A_a, denoting A^a_{lm} = ⟨l|A_a|m⟩ in the energy eigenbasis. We independently calculate the three parts of the Lindbladian, the transition term ℒ_t, the decay term ℒ_d, and the coherent term ℒ_c, so that ℒ_β = ℒ_c + ℒ_t + ℒ_d. First, as mentioned above, the Gaussian-supported OFT of the jump operator A_a is \hat{A}_a(ω) = Σ_{lm} A^a_{lm} f̂(ω − ν_{lm}) |l⟩⟨m|, where ν_{lm} = E_l − E_m. To represent superoperators as linear maps, we vectorize operators with respect to the basis of operators |m_1⟩⟨m_2|. In particular, |𝐦⟩ with 𝐦 = (m_1, m_2) will notate the basis operator |m_1⟩⟨m_2|. Now we may expand ℒ_t, ℒ_d, and ℒ_c. The full calculations are performed in the arXiv version of this paper [32], yielding the following expressions:

\langle\mathbf{l}|\mathcal{L}_t|\mathbf{m}\rangle = \sum_a A^a_{l_1 m_1}\,\overline{A^a_{l_2 m_2}}\,\theta(\nu_{l_1 m_1},\nu_{l_2 m_2}),
\langle\mathbf{l}|\mathcal{L}_d|\mathbf{m}\rangle = -\frac{1}{2}\Big(\delta_{l_1 m_1}\sum_{a,j}\overline{A^a_{j m_2}}\,A^a_{j l_2}\,\theta(\nu_{j m_2},\nu_{j l_2}) + \delta_{l_2 m_2}\sum_{a,j}\overline{A^a_{j l_1}}\,A^a_{j m_1}\,\theta(\nu_{j l_1},\nu_{j m_1})\Big),
\langle\mathbf{l}|\mathcal{L}_c|\mathbf{m}\rangle = \frac{1}{2}\Big(\delta_{l_1 m_1}\tanh(\beta\nu_{m_2 l_2}/4)\sum_{a,j}\overline{A^a_{j m_2}}\,A^a_{j l_2}\,\theta(\nu_{j m_2},\nu_{j l_2})
- \delta_{l_2 m_2}\tanh(\beta\nu_{l_1 m_1}/4)\sum_{a,j}\overline{A^a_{j l_1}}\,A^a_{j m_1}\,\theta(\nu_{j l_1},\nu_{j m_1})\Big). (5)

Note that they can all be expressed in terms of α(ν) = θ(ν, ν) using the identity θ(ν_1, ν_2) = α((ν_1+ν_2)/2)·exp(−(ν_1−ν_2)²/(8σ_E²)).

Appendix B Graph-Local Jumps for Cyclic Graphs

In this section we prove Theorem 2.1. Consider a cyclic graph with n vertices, adjacency matrix H, and eigenvectors |j⟩.

The eigenbasis of a cyclic graph consists of the vectors |j⟩ = n^{-1/2} Σ_a ζ_n^{aj} |e_a⟩, with eigenvalues 2cos(2πj/n). The jump operators on the graph are chosen to be graph-local, A_a = n^{-1/2} |e_a⟩⟨e_a|, and therefore have coefficients A^a_{lm} = n^{-3/2} ζ_n^{a(l−m)}. Now we observe that Σ_a ζ_n^{a(i−j)} = n δ_{ij}. We therefore have the relation

\sum_a A^a_{l_1 m_1}\,\overline{A^a_{l_2 m_2}} = n^{-3}\sum_a \zeta_n^{a((l_1-m_1)-(l_2-m_2))} = n^{-2}\,\delta_{(l_1-m_1),(l_2-m_2)}.
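This orthogonality relation can be verified directly; the snippet below (n = 8 as an arbitrary choice) forms the coefficients A^a_{lm} and checks the displayed identity, with the Kronecker delta understood mod n:

```python
import numpy as np

n = 8
zeta = np.exp(2j * np.pi / n)

# coefficients A^a_{lm} = n^{-3/2} * zeta^{a(l-m)} of the graph-local jumps
coef = np.array([[[n**-1.5 * zeta**(a * (l - m)) for m in range(n)]
                  for l in range(n)] for a in range(n)])

# check: sum_a A^a_{l1 m1} * conj(A^a_{l2 m2}) = n^{-2} * delta_{(l1-m1),(l2-m2) mod n}
for l1, m1, l2, m2 in [(3, 1, 5, 3), (3, 1, 4, 3), (0, 2, 6, 0), (2, 2, 5, 5)]:
    s = np.sum(coef[:, l1, m1] * np.conj(coef[:, l2, m2]))
    expected = n**-2 if (l1 - m1) % n == (l2 - m2) % n else 0.0
    assert np.isclose(s, expected)
```

This delta is exactly what forces the block-diagonal structure of the Lindbladian discussed next.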

As computed more explicitly in the arXiv version [32], the components of the Lindbladian with these jump operators are:

\langle\mathbf{l}|\mathcal{L}_t|\mathbf{m}\rangle = n^{-2}\,\theta(\nu_{l_1 m_1},\nu_{l_2 m_2})\,\delta_{(l_1-m_1),(l_2-m_2)},
\langle\mathbf{l}|\mathcal{L}_d|\mathbf{m}\rangle = -\frac{n^{-2}}{2}\,\delta_{l_1 m_1}\,\delta_{l_2 m_2}\sum_j\big(\alpha(\nu_{j m_2}) + \alpha(\nu_{j m_1})\big),
\langle\mathbf{l}|\mathcal{L}_c|\mathbf{m}\rangle = 0.

The above formulae imply that the Lindbladian is block diagonal in the eigenbasis. The coherent term vanishes and the decay term is fully diagonal. Setting k = m_1 − m_2 and k′ = l_1 − l_2, the transition term ⟨𝐥|ℒ_t|𝐦⟩ is nonzero only if k = k′, due to the factor δ_{(l_1−m_1),(l_2−m_2)} = δ_{(l_1−l_2),(m_1−m_2)}. There is therefore one block corresponding to each k, which we will denote ℒ^k. The block for k = 0 is the classical block of the Markov chain on the diagonal entries of the state. Finding its spectral gap, and then lower bounding the eigenvalues of the remaining n−1 blocks, yields a bound for the spectral gap of the Lindbladian.

B.1 Spectral Gap Lower Bound

We will first show that the spectral gap of the classical block is $\Omega(n^{-1})$. We have by explicit calculation that

$$\langle\langle l|\mathcal{L}_t^0|m\rangle\rangle = \frac{1}{n^2}\,\alpha(\nu_{lm}),$$

$$\langle\langle l|\mathcal{L}_d^0|m\rangle\rangle = -\frac{1}{n^2}\,\delta_{lm}\sum_j \alpha(\nu_{jm}).$$
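As a quick sanity check, the classical block defined by these matrix elements is a valid generator: its columns sum to zero (trace preservation), and it fixes the Gibbs diagonal whenever $\alpha$ satisfies the detailed-balance condition $\alpha(\nu) = e^{-\beta\nu}\alpha(-\nu)$. The sketch below uses the stand-in weight $\alpha(\nu) = e^{-\beta\nu/2-\nu^2/8}$, an illustrative choice rather than the paper's exact $\alpha$:

```python
import numpy as np

n, beta = 16, 0.7
E = 2 * np.cos(2 * np.pi * np.arange(n) / n)  # cycle-graph eigenvalues
nu = E[:, None] - E[None, :]                  # nu_{lm} = E_l - E_m

# Stand-in weight satisfying alpha(nu) = exp(-beta*nu) * alpha(-nu)
alpha = np.exp(-beta * nu / 2 - nu**2 / 8)

# Classical block: off-diagonal rate alpha(nu_lm)/n^2; the j = m terms of the
# decay sum cancel against the l = m transition term, so we drop both.
L0 = alpha / n**2
np.fill_diagonal(L0, 0.0)
L0 -= np.diag(L0.sum(axis=0))  # columns sum to zero (trace preservation)

rho = np.exp(-beta * E)
rho /= rho.sum()               # Gibbs diagonal

col_sums = L0.sum(axis=0)
stationarity_err = np.max(np.abs(L0 @ rho))  # generator annihilates the Gibbs state
```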

We will use the canonical path bound, a standard technique in the theory of classical Markov chains, to establish lower bounds on spectral gaps. The canonical path method fixes a "canonical" path between each pair of vertices of a graph and obtains a corresponding spectral gap bound. For our purposes we let the canonical path between any two vertices be the edge joining them, which yields the following statement:

Lemma B.1.

Let $\mathcal{L}^0$ be a Markov chain generator with stationary state $\sigma$. Then the spectral gap satisfies the following bound:

$$\lambda \ge \min_{l \neq m} \frac{\mathcal{L}^0_{lm}}{\sigma_l}.$$
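A quick numerical illustration of Lemma B.1 on a small reversible generator (a synthetic example, not the paper's Lindbladian): reversibility means $\sigma_m \mathcal{L}_{lm} = \sigma_l \mathcal{L}_{ml}$, the gap is computed from the symmetrized matrix, and the length-1 canonical-path bound $\min_{l\neq m} \mathcal{L}_{lm}/\sigma_l$ indeed lower bounds it.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 6

# Random stationary distribution and symmetric edge weights Q
sigma = rng.uniform(0.5, 1.5, d)
sigma /= sigma.sum()
Q = rng.uniform(0.2, 1.0, (d, d))
Q = (Q + Q.T) / 2

# Reversible generator: L[l, m] = Q[l, m] / sigma[m] off-diagonal, columns sum to 0
L = Q / sigma[None, :]
np.fill_diagonal(L, 0.0)
L -= np.diag(L.sum(axis=0))

# True spectral gap via the similarity transform D^{-1/2} L D^{1/2}, D = diag(sigma)
s = np.sqrt(sigma)
Lsym = (1 / s)[:, None] * L * s[None, :]
eigs = np.sort(np.linalg.eigvalsh((Lsym + Lsym.T) / 2))
gap = -eigs[-2]  # eigenvalues are <= 0; the gap is the second largest in magnitude

# Canonical-path bound with length-1 paths: min over l != m of L[l, m] / sigma[l]
off = ~np.eye(d, dtype=bool)
bound = np.min((L / sigma[:, None])[off])
```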

Applying this bound in this case, and noting that the stationary state of this Markov chain is the Gibbs diagonal $\rho_{ll}$, we obtain the lower bound

$$\lambda \ge \min_{l\neq m} \frac{\alpha(\nu_{lm})}{n^2\,\rho_{ll}}. \quad (6)$$

This bound holds because every canonical path has length 1, so the only path containing the edge $(l,m)$ is $\gamma_{lm}$. We may upper bound $\rho_{ll} = e^{-\beta E_l}/\sum_i e^{-\beta E_i} \le e^{2\beta}/(n\,e^{-2\beta}) = n^{-1}e^{4\beta}$, since all energies lie in $[-2,2]$. Moreover, $|\nu_{lm}| \le 4$, so $\alpha$ is bounded below by a positive constant $C$ that is independent of $n$. We conclude that $\lambda \ge \min_{l\neq m} \frac{\alpha(\nu_{lm})}{n^2\rho_{ll}} \ge \frac{C e^{-4\beta}}{n} = \Omega(n^{-1})$, as desired.
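The $\Omega(n^{-1})$ behavior of the classical-block gap can be observed numerically. The sketch below builds the classical block for the cycle spectrum with a stand-in detailed-balance weight $\alpha(\nu) = e^{-\beta\nu/2-\nu^2/8}$ (an illustrative choice, not the paper's exact $\alpha$), computes the exact gap, and checks it against the canonical-path bound (6) and the roughly $1/n$ decay:

```python
import numpy as np


def classical_block_gap(n, beta=0.7):
    E = 2 * np.cos(2 * np.pi * np.arange(n) / n)  # cycle-graph eigenvalues
    nu = E[:, None] - E[None, :]
    alpha = np.exp(-beta * nu / 2 - nu**2 / 8)    # stand-in weight (assumption)

    # Classical block: off-diagonal rate alpha(nu_lm)/n^2, columns sum to zero
    L0 = alpha / n**2
    np.fill_diagonal(L0, 0.0)
    L0 -= np.diag(L0.sum(axis=0))

    rho = np.exp(-beta * E)
    rho /= rho.sum()
    s = np.sqrt(rho)
    Lsym = (1 / s)[:, None] * L0 * s[None, :]     # symmetrize via detailed balance
    gap = -np.sort(np.linalg.eigvalsh((Lsym + Lsym.T) / 2))[-2]

    off = ~np.eye(n, dtype=bool)
    bound = np.min(alpha[off]) / (n**2 * np.max(rho))  # a valid relaxation of (6)
    return gap, bound


g40, b40 = classical_block_gap(40)
g80, b80 = classical_block_gap(80)
```

Doubling $n$ roughly halves the gap, consistent with the $\Theta(n^{-1})$ scaling of the classical block.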

Now, for the blocks with $k \neq 0$, we utilize the Gershgorin bound on the columns of the $k$th block $\mathcal{L}^k$ of $\mathcal{L}_\beta$, which states that the eigenvalues of $-\mathcal{L}^k$ must be at least $\min_{\bm{m}}\big[-\langle\langle\bm{m}|\mathcal{L}^k|\bm{m}\rangle\rangle - \sum_{\bm{l}\neq\bm{m}}\big|\langle\langle\bm{l}|\mathcal{L}^k|\bm{m}\rangle\rangle\big|\big]$. This approach is very similar to the one outlined in [38] for the Davies generator. By explicitly calculating and bounding this term in the arXiv version of this paper [32], we obtain that the eigenvalues of $-\mathcal{L}^k$ are $\Omega(n^{-3})$. This completes the lower bound, showing that the spectral gap of the full Lindbladian is $\Omega(n^{-3})$.

B.2 Spectral Gap Upper Bound

To prove the upper bound on the spectral gap, we consider the row vector $v$ of length $n^2$ that is 1 on the indices corresponding to the block $k=1$ and 0 elsewhere. As an operator, $v$ takes the value 1 on the single off-diagonal with fixed $l_1 - l_2 = k$. When calculating $(v\mathcal{L}_\beta)_{\bm{m}} = (\mathcal{L}_\beta v)_{\bm{m}}$, we obtain the same formula as found in the Gershgorin bound calculation above; indeed, the diagonal entries of $-\mathcal{L}_\beta$ are all positive, while its off-diagonal entries are negative:

$$(-\mathcal{L}_\beta v)_{\bm{m}} = \frac{1}{2n^2}\sum_{\bm{l}}\Big(\alpha(\nu_{l_1m_1}) + \alpha(\nu_{l_2m_2}) - 2\,\alpha\Big(\tfrac{\nu_{l_1m_1}+\nu_{l_2m_2}}{2}\Big)\Big) + \frac{1}{n^2}\sum_{\bm{l}}\alpha\Big(\tfrac{\nu_{l_1m_1}+\nu_{l_2m_2}}{2}\Big)\Big(1 - \exp\Big(-\tfrac{(\nu_{l_1m_1}-\nu_{l_2m_2})^2}{8\sigma_E^2}\Big)\Big). \quad (7)$$

The previous lower bound shows that each of these values is nonnegative. Since $l_1 - l_2 = m_1 - m_2 = 1$, we have $\nu_{l_1m_1} - \nu_{l_2m_2} = \nu_{l_1l_2} - \nu_{m_1m_2} = O(n^{-1})$. The first term, whose summands are second differences of the smooth function $\alpha$, consists of $n$ summands of size $O(n^{-2})$ scaled by an $O(n^{-2})$ prefactor; it is therefore $O(n^{-3})$. The summands of the second term can be similarly estimated to be $O(n^{-2})$, so it is also $O(n^{-3})$. The entries of $(-\mathcal{L}_\beta v)_{\bm{m}}$ are therefore nonnegative and at most $Cn^{-3}$ for some constant $C$.

To prove the upper bound, we make use of the inner product $\langle\cdot,\cdot\rangle_{\rho_\beta^{-1}}$ with respect to which $\mathcal{L}_\beta$ is self-adjoint. Note that $\langle |i_1\rangle\langle i_2|, |j_1\rangle\langle j_2|\rangle_{\rho_\beta^{-1}} = \delta_{i_2j_1}\delta_{i_1j_2}\,(\rho_\beta)^{1/2}_{i_1i_1}(\rho_\beta)^{1/2}_{i_2i_2} \ge 0$. Hence, when $\langle\cdot,\cdot\rangle_{\rho_\beta^{-1}}$ is expressed in the energy basis as $vMw$ for a matrix $M$, $M$ has nonnegative entries. We may therefore upper bound $\langle -\mathcal{L}_\beta v, v\rangle_{\rho_\beta^{-1}}$ by $Cn^{-3}\langle v, v\rangle_{\rho_\beta^{-1}}$, since the coefficients of $-\mathcal{L}_\beta v$ and $v$ are nonnegative and $-\mathcal{L}_\beta v$ is dominated entrywise by $Cn^{-3}\,v$ for some $C > 0$. We conclude that $\frac{\langle -\mathcal{L}_\beta v, v\rangle_{\rho_\beta^{-1}}}{\langle v, v\rangle_{\rho_\beta^{-1}}}$ is $O(n^{-3})$. Since $\mathcal{L}_\beta$ is self-adjoint with respect to this inner product, we obtain that $v$ has Rayleigh quotient $O(n^{-3})$. Moreover, $v$ is orthogonal to $I$, since $\langle v, I\rangle_{\rho_\beta^{-1}} = \mathrm{Tr}(v\,\rho_\beta^{1/2}\,I\,\rho_\beta^{1/2}) = \mathrm{Tr}(v\rho_\beta) = 0$, where the last equality holds because $v$, as an operator, vanishes on the diagonal while $\rho_\beta$ is diagonal. Since $v$ has no overlap with $I$, the fixed point of $\mathcal{L}_\beta$, its Rayleigh quotient is an upper bound on the spectral gap. The spectral gap must therefore also be $O(n^{-3})$. This completes the proof of Theorem 2.1.
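The final step uses a standard variational fact (Courant-Fischer): for a self-adjoint operator with known fixed point, the Rayleigh quotient of any test vector orthogonal to the fixed point upper bounds the spectral gap. A generic illustration on a synthetic symmetric generator, not the Lindbladian itself:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 10

# Symmetric generator with uniform fixed point: -Lgen is PSD with kernel = all-ones
W = rng.uniform(0.1, 1.0, (d, d))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)
Lgen = W - np.diag(W.sum(axis=0))

eigs = np.sort(np.linalg.eigvalsh(-Lgen))
gap = eigs[1]  # smallest nonzero eigenvalue of -Lgen

# Any test vector orthogonal to the fixed point gives an upper bound on the gap
v = rng.normal(size=d)
v -= v.mean()  # project out the all-ones fixed point
rayleigh = v @ (-Lgen) @ v / (v @ v)
```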

Appendix C Bounded Degree Systems with 1-Design Jumps

In this section we prove Lemma 3.1 and Theorem 2.4, demonstrating an improvement over the result in Theorem 2.1 for local jumps in cyclic graphs.

Proof of Lemma 3.1.

To prove Lemma 3.1, we make use of the expressions (5) for the transition, decay, and coherent parts of a general Lindbladian, but with a single Haar-random jump (or equivalently any 1-design, since the second moments of the operators are equal). The transition term is

$$\langle\langle\bm{l}|\mathcal{L}_t|\bm{m}\rangle\rangle = A_{l_1m_1}\overline{A_{l_2m_2}}\;\theta(\nu_{l_1m_1},\nu_{l_2m_2}).$$

The expectation of the product $A_{l_1m_1}\overline{A_{l_2m_2}}$ is zero if $l_1 \neq l_2$ or $m_1 \neq m_2$. The expectation of the squared magnitude of an entry, on the other hand, is $n^{-1}$. By explicit calculation, we therefore obtain

$$\mathbb{E}[\langle\langle\bm{l}|\mathcal{L}_t|\bm{m}\rangle\rangle] = \frac{\delta_{l_1l_2}\delta_{m_1m_2}}{n}\,\alpha(\nu_{l_1m_1}),$$

$$\mathbb{E}[\langle\langle\bm{l}|\mathcal{L}_d|\bm{m}\rangle\rangle] = -\frac{1}{2}\,\delta_{l_1m_1}\delta_{l_2m_2}\,\frac{1}{n}\sum_j\big(\alpha(\nu_{jm_2}) + \alpha(\nu_{jm_1})\big),$$

$$\mathbb{E}[\langle\langle\bm{l}|\mathcal{L}_c|\bm{m}\rangle\rangle] = 0.$$

The expected Lindbladian $\mathcal{L}_\mu$ is therefore completely diagonal except for a "classical block" $\mathcal{L}^0$ on the indices $|(m,m)\rangle\rangle$, whose off-diagonal entries are populated by the elements of $\mathcal{L}_t$. The spectral gap of this Lindbladian is therefore the minimum of the magnitudes of the diagonal decay terms and the spectral gap of the classical block.
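The second-moment computation above, $\mathbb{E}[A_{l_1m_1}\overline{A_{l_2m_2}}] = \delta_{l_1l_2}\delta_{m_1m_2}/n$ for a Haar-random unitary, can be spot-checked by sampling. This is a Monte Carlo sketch with loose statistical tolerances; the QR-based sampler is a standard construction, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
n, samples = 4, 4000


def haar_unitary(n, rng):
    # QR of a complex Gaussian matrix, with a phase fix on R's diagonal,
    # yields a Haar-distributed unitary
    Z = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    phases = np.diag(R) / np.abs(np.diag(R))
    return Q * phases[None, :]


diag_moment = 0.0 + 0j   # estimates E[A_00 * conj(A_00)] = 1/n
cross_moment = 0.0 + 0j  # estimates E[A_00 * conj(A_11)] = 0
for _ in range(samples):
    U = haar_unitary(n, rng)
    diag_moment += U[0, 0] * np.conj(U[0, 0])
    cross_moment += U[0, 0] * np.conj(U[1, 1])
diag_moment /= samples
cross_moment /= samples
```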

As in Appendix B, we use Lemma B.1 to bound the spectral gap of the classical block, taking the canonical path between each pair of vertices to be the edge joining them. This yields the bound

$$\lambda \ge \min_{l\neq m} \frac{\alpha(\nu_{lm})}{n\,\rho_{ll}}. \quad (8)$$

We may upper bound $\rho_{ll} = \frac{e^{-\beta E_l}}{\sum_i e^{-\beta E_i}} \le \frac{e^{-\beta E_{\min}}}{n\,e^{-\beta E_{\max}}} = n^{-1}e^{O(1)} = O(n^{-1})$, using the fact that $\beta\|\bm{H}\| = O(1)$. Similarly, since $\alpha$ is the convolution of a Gaussian of width $\sigma_E = \beta^{-1}$ with $\gamma(\omega) = \exp\!\big(-\beta\max(\omega + \frac{\beta\sigma_E^2}{2}, 0)\big)$, the assumption that $\beta\|\bm{H}\| = O(1)$ again yields that $\alpha(\nu_{lm}) = \Omega(1)$: within $O(\beta^{-1})$ of any value of $\nu_{lm}$, $\gamma(\omega)$ is $\Omega(1)$, and as a consequence $\alpha(\nu_{lm}) = \Omega(1)$. This yields the lower bound $\lambda \ge \frac{\Omega(1)}{n\cdot O(n^{-1})} = \Omega(1)$.

Now, we lower bound the magnitudes of the diagonal elements outside of the classical block. Each such element is of the form

$$\mathbb{E}[\langle\langle\bm{m}|\mathcal{L}_d|\bm{m}\rangle\rangle] = -\frac{1}{2}\Big(\frac{1}{n}\sum_j\alpha(\nu_{jm_2}) + \frac{1}{n}\sum_j\alpha(\nu_{jm_1})\Big),$$

and we have already established that each $\alpha$ term is $\Omega(1)$, so the corresponding decay rates are all $\Omega(1)$. We conclude that the spectral gap of the Lindbladian is $\Omega(1)$.
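Lemma 3.1 can also be probed numerically: assemble the expected Lindbladian's classical block and coherence decay rates from the expressions above, for a bounded spectrum with $\beta\|\bm{H}\| = O(1)$, and confirm a gap that stays of order one as $n$ grows. The weight $\alpha(\nu) = e^{-\beta\nu/2-\nu^2/8}$ below is a detailed-balance-compatible stand-in, not the paper's exact $\alpha$:

```python
import numpy as np


def expected_lindbladian_gap(n, beta=0.5, seed=4):
    rng = np.random.default_rng(seed)
    E = np.sort(rng.uniform(-1, 1, n))  # bounded spectrum: beta * ||H|| = O(1)
    nu = E[:, None] - E[None, :]
    alpha = np.exp(-beta * nu / 2 - nu**2 / 8)  # stand-in weight (assumption)

    # Classical block on indices (m, m): off-diagonal rate alpha(nu_lm)/n
    L0 = alpha / n
    np.fill_diagonal(L0, 0.0)
    L0 -= np.diag(L0.sum(axis=0))

    rho = np.exp(-beta * E)
    rho /= rho.sum()
    s = np.sqrt(rho)
    Lsym = (1 / s)[:, None] * L0 * s[None, :]
    gap0 = -np.sort(np.linalg.eigvalsh((Lsym + Lsym.T) / 2))[-2]

    # Coherence indices (m1, m2) with m1 != m2: pure decay at the stated rate
    rates = np.array([(alpha[:, m2].sum() + alpha[:, m1].sum()) / (2 * n)
                      for m1 in range(n) for m2 in range(n) if m1 != m2])
    return min(gap0, rates.min())


g20 = expected_lindbladian_gap(20)
g80 = expected_lindbladian_gap(80)
```

Both gaps stay bounded away from zero as $n$ grows, in line with the $\Omega(1)$ claim.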

Proof of Theorem 2.4.

We construct our Lindbladian by sampling $M = \Theta(\log(n))$ unnormalized jumps $\bm{A}^a$ from the 1-design $\mathcal{D}(U(n))$ as in Definition 2.2, each with a corresponding Lindbladian $\mathcal{L}^a$ (which has one jump $\bm{A}^a$ along with its adjoint, normalized by 2). We then prove that, with high probability, $\mathcal{L}_\beta = \frac{2}{M}\sum_{a=1}^{M/2}\mathcal{L}^a$, the Lindbladian with all $M$ of these jumps now normalized by the number of jumps, has spectral gap bounded below by a constant.

To prove the result, we make use of the matrix Bernstein inequality for our concentration bound:

Lemma C.1 (cf. [39]).

Let $X_1,\ldots,X_N$ be independent random $d\times d$ Hermitian matrices such that $\mathbb{E}[X_i] = 0$ and $\|X_i\| \le R$. Define $Y = \frac{1}{N}\sum_{i=1}^N X_i$, and suppose that $N\,\|\mathbb{E}[Y^2]\| \le T$. Then

$$\Pr(\|Y\| \ge t) \le 2d\,\exp\!\Big(-\frac{3Nt^2}{2(3T + Rt)}\Big).$$
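A quick empirical illustration of Lemma C.1, on a synthetic example with Rademacher-weighted Hermitian matrices rather than the Lindbladian terms themselves: with $N = 2000$ samples and $R = T = 1$, the tail bound makes $\|Y\| \gtrsim 0.1$ very unlikely, and the sampled norm is indeed far smaller.

```python
import numpy as np

rng = np.random.default_rng(5)
d, N = 8, 2000

# Fixed Hermitian direction with operator norm 1
G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
B = (G + G.conj().T) / 2
B /= np.linalg.norm(B, 2)

# X_i = eps_i * B with Rademacher signs: E[X_i] = 0, ||X_i|| <= R = 1,
# and N * ||E[Y^2]|| = ||B^2|| <= T = 1, since E[Y^2] = B^2 / N
eps = rng.choice([-1.0, 1.0], size=N)
Y = eps.mean() * B
Y_norm = np.linalg.norm(Y, 2)

# Bernstein tail bound evaluated at t = 0.15 with R = T = 1
t, R, T = 0.15, 1.0, 1.0
tail = 2 * d * np.exp(-3 * N * t**2 / (2 * (3 * T + R * t)))
```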

Let $\delta\mathcal{L}^a = \mathcal{L}^a - \mathcal{L}_\mu$, where $\mathcal{L}_\mu = \mathbb{E}_{\bm{A}\sim\mathcal{D}}[\mathcal{L}_\beta]$. Each of these operators has zero expectation, and $\delta\mathcal{L} = \frac{2}{M}\sum_{a=1}^{M/2}\delta\mathcal{L}^a$ is precisely the discrepancy between our Lindbladian $\mathcal{L}_\beta = \frac{2}{M}\sum_{a=1}^{M/2}\mathcal{L}^a$ and the expected Lindbladian $\mathcal{L}_\mu$. We will apply Bernstein's inequality with $X_a = \delta\mathcal{L}^a$, $Y = \delta\mathcal{L}$, and $N = \frac{M}{2}$. By Lemma 4.1, the CKG Lindbladian has operator norm $O(\log(\beta\|\bm{H}\|))$, which is $O(1)$ in this regime; we denote this upper bound by $\frac{R}{2}$. We can now verify the condition $N\|\mathbb{E}[Y^2]\| \le T$ in the statement of Lemma C.1, since $\|\delta\mathcal{L}^a\| \le \|\mathcal{L}^a\| + \|\mathcal{L}_\mu\| \le R = \Theta(1)$, and

$$\frac{M}{2}\,\Big\|\mathbb{E}\Big[\Big(\frac{\sum_{a=1}^{M/2}\delta\mathcal{L}^a}{M/2}\Big)^{2}\Big]\Big\| \le \frac{2}{M}\sum_{a=1}^{M/2}\big\|\delta\mathcal{L}^a\big\|^2 \le R^2.$$

Every operator that satisfies detailed balance is Hermitian in some fixed basis, and therefore each $\mathcal{L}^a$, as well as $\mathcal{L}_\mu$, can be considered Hermitian. By linearity, the same holds for $\delta\mathcal{L}^a = \mathcal{L}^a - \mathcal{L}_\mu$. The operators $\delta\mathcal{L}^a$ therefore satisfy the conditions of the matrix Bernstein inequality, and so their average $\delta\mathcal{L}$ satisfies

$$\Pr(\|\delta\mathcal{L}\| \ge t) \le 2n^2\exp\!\Big(-\frac{3Mt^2}{4(3R^2 + Rt)}\Big),$$

where the $n^2$ accounts for the dimension of the Lindbladian as a superoperator.

For any constant $t$, there exists an $M = \Theta(\log(n))$ such that the exponent is at most $-3\log(n)$, since $R$ is a constant. The probability that $\|\delta\mathcal{L}\| \le t$ is then arbitrarily close to 1 for sufficiently large $n$ and a suitable choice of $M = \Theta(\log(n))$. By Lemma 3.1, $\mathcal{L}_\mu$ has a constant spectral gap bounded below by some $C$. By Weyl's theorem, the eigenvalues of $\mathcal{L}_\mu + \delta\mathcal{L}$ may differ by at most $t$ from those of $\mathcal{L}_\mu$. Choosing $t \le \frac{C}{2}$, it follows that $\mathcal{L}_\beta$, with any constant probability, has constant spectral gap.
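The last step relies on Weyl's perturbation inequality: the sorted eigenvalues of Hermitian $A$ and $A + E$ differ by at most $\|E\|$, so a spectral gap survives any perturbation of norm at most half the gap. A minimal check on synthetic Hermitian matrices:

```python
import numpy as np

rng = np.random.default_rng(6)
d = 12

G = rng.normal(size=(d, d))
A = (G + G.T) / 2
H = rng.normal(size=(d, d))
Epert = (H + H.T) / 2
Epert *= 0.05 / np.linalg.norm(Epert, 2)  # perturbation of operator norm 0.05

ev_A = np.sort(np.linalg.eigvalsh(A))
ev_AE = np.sort(np.linalg.eigvalsh(A + Epert))

max_shift = np.max(np.abs(ev_A - ev_AE))  # Weyl: at most ||Epert|| = 0.05
```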

Appendix D Unbounded Degree Systems with 1-Design Jumps

Proof of Lemma 3.2.

We follow the proof of Lemma 3.1. As in that proof, we obtain the following bound on the classical block:

$$\lambda \ge \min_{l\neq m}\frac{\alpha(\nu_{lm})}{n\,\rho_{ll}} = \min_{l\neq m}\ \alpha(\nu_{lm})\Big(\frac{1}{n}\sum_j \exp(-\beta E_j)\Big)\exp(\beta E_l). \quad (9)$$

An explicit calculation, as in [32], then yields

$$\lambda = \Omega(\delta(n)),$$

by the assumption that a $\delta(n)$ fraction of the eigenvalues lie within $O(\beta^{-1})$ of $\lambda_{\min}$. Now, bounding the diagonal elements outside of the classical block, we see that

$$\mathbb{E}[\langle\langle\bm{m}|\mathcal{L}_d|\bm{m}\rangle\rangle] = -\frac{1}{2}\Big(\frac{1}{n}\sum_j\alpha(\nu_{jm_2}) + \frac{1}{n}\sum_j\alpha(\nu_{jm_1})\Big).$$

Again, since a $\delta(n)$ fraction of the eigenvalues lie within $O(\beta^{-1})$ of $\lambda_{\min}$, the above sum is $\Omega(\delta(n))$, as desired.

Proof of Theorem 2.6.

We follow the proof of Theorem 2.4. Defining once again $\mathcal{L}^a$ to be the Lindbladian with one jump operator $\bm{A}^a$ and its adjoint (normalized by 2), and defining $\delta\mathcal{L}^a = \mathcal{L}^a - \mathcal{L}_\mu$, we apply the matrix Bernstein inequality to obtain

$$\Pr(\|\delta\mathcal{L}\| \ge t) \le 2n^2\exp\!\Big(-\frac{3Mt^2}{4(3R^2 + Rt)}\Big),$$

since all the conditions of the inequality are again satisfied.

By Lemma 3.2, $\mathcal{L}_\mu$ has spectral gap bounded below by $C\delta(n)$ for some constant $C$. Selecting $t = \frac{C}{2}\delta(n)$, there exists an $M = \Theta\big(\delta(n)^{-2}\log(\beta\|\bm{H}\|)^2\log(n)\big)$ for which the exponent is at most $-3\log(n)$. The probability that $\|\delta\mathcal{L}\| \le t$ is therefore above any constant for sufficiently large $n$. By Weyl's theorem, the eigenvalues of $\mathcal{L}_\mu + \delta\mathcal{L}$ may differ from those of $\mathcal{L}_\mu$ by at most $t \le \frac{C}{2}\delta(n)$. It follows that $\mathcal{L}_\beta$, with any constant probability, has spectral gap $\Omega(\delta(n))$.