Towards a Complexity-Theoretic Dichotomy for TQFT Invariants
Abstract
We show that for any fixed (2+1)-dimensional TQFT over $\mathbb{C}$ of either Turaev-Viro-Barrett-Westbury or Reshetikhin-Turaev type, the problem of (exactly) computing its invariants on closed 3-manifolds is either solvable in polynomial time, or else it is #P-hard to (exactly) contract certain tensors that are built from the TQFT's fusion category. Our proof is an application of a dichotomy result of Cai and Chen [J. ACM, 2017] concerning weighted constraint satisfaction problems over $\mathbb{C}$. We leave for future work the issue of reinterpreting the conditions of Cai and Chen that distinguish between the two cases (i.e. #P-hard tensor contractions vs. polynomial-time invariants) in terms of fusion categories. We expect that, with more effort, our reduction can be improved so that one obtains a dichotomy directly for TQFTs' invariants of 3-manifolds rather than for the more general tensors built from TQFTs' fusion categories.
Keywords and phrases: Complexity, topological quantum field theory, dichotomy theorems, constraint satisfaction problems, tensor categories
2012 ACM Subject Classification: Mathematics of computing → Geometric topology; Theory of computation → Problems, reductions and completeness; Theory of computation → Quantum complexity theory
Funding: Both authors supported in part by NSF CCF grant 2330130.
Event: 20th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2025)
Editor: Bill Fefferman
Series and Publisher: Leibniz International Proceedings in Informatics, Schloss Dagstuhl – Leibniz-Zentrum für Informatik
1 Introduction
1.1 Main results
Quantum computation – especially topological quantum computation – motivates a number of complexity-theoretic questions concerning TQFT invariants of manifolds, particularly in dimensions 2 and 3. One of the most central is to classify "anyonic systems" according to whether or not they are powerful enough to (approximately) encode arbitrary quantum circuits over qubits. Anyons that are powerful in this way are important because (in theory) it should be possible to build fault tolerant quantum computers using them [12, 10]. We refer the reader to [23, 19] for a broad review of the mathematical side of these matters and to Subsection 1.3 for more discussion. For now, we simply note that the Property F conjecture of Naidu and Rowell is currently the only concrete, published formulation of a proposed (partial) answer to this classification question that we know of. The conjecture is surprisingly easy to formulate: the possible braidings of $n$ copies of a simple anyon $X$ in a unitary modular tensor category generate only finitely many unitaries for each $n$ (and, hence, are not "braiding universal" for quantum computation) if and only if the square of the quantum dimension of $X$ is an integer [17].
In this work, we will not attack the Property F conjecture directly. However, our main result has a similar spirit and shares the same motivations. See Subsection 1.3 for some discussion of these points.
Theorem 1.
(a) Fix a spherical fusion category $\mathcal{C}$ over $\mathbb{C}$, presented skeletally with all data given as algebraic numbers over $\mathbb{Q}$. Then the problem of contracting tensor networks defined from $\mathcal{C}$ is either solvable in polynomial time or #P-hard. Moreover, if $M$ is a closed, oriented, triangulated 3-manifold (treated as computational input), then either the problem of computing the Turaev-Viro-Barrett-Westbury invariant $\mathrm{TV}_{\mathcal{C}}(M)$ is solvable in (classical) polynomial time, or else the aforementioned tensor contraction problem is #P-hard.
(b) Fix a modular fusion category $\mathcal{D}$ over $\mathbb{C}$, presented skeletally with all data given as algebraic numbers over $\mathbb{Q}$. Then the problem of contracting tensor networks built from $\mathcal{D}$ is either solvable in polynomial time or #P-hard. Moreover, if $M$ is a closed, oriented 3-manifold encoded via a surgery diagram (treated as computational input), then either the problem of computing the Reshetikhin-Turaev invariant $\mathrm{RT}_{\mathcal{D}}(M)$ is solvable in (classical) polynomial time, or else the aforementioned tensor contraction problem is #P-hard.
Two routine points of clarification are due.
First, we note that all fusion and modular categories over $\mathbb{C}$ admit finite skeletal presentations using algebraic numbers over $\mathbb{Q}$. This is because the defining equations for the skeletal data are all algebraic over $\mathbb{Q}$. In particular, since we are interested in how the complexity of $\mathrm{TV}_{\mathcal{C}}$ or $\mathrm{RT}_{\mathcal{D}}$ depends on the variable $M$ for fixed $\mathcal{C}$ or $\mathcal{D}$, there is no harm in assuming that $\mathcal{C}$ and $\mathcal{D}$ are encoded in this way.
Second, as usual in computational 3-manifold topology, to say that a problem whose input is a triangulated 3-manifold is solvable in polynomial time means that there exists an algorithm to solve the problem that runs in time polynomial in the size of the triangulation. For a 3-manifold presented via integral surgery on a link diagram in $S^3$, the algorithm must run in time jointly polynomial in the crossing number of the link diagram, the number of components of the link, and the absolute values of the surgery coefficients. (In particular, we might understand the surgery coefficients as being expressed in unary, not binary. If we used the latter, then we would not be able to build a triangulation from a surgery diagram in polynomial time.)
We refer the reader to [7] for further elaboration of both of these matters.
We now explain briefly the meaning and importance of dichotomy theorems within complexity theory. Of course, it is an infamous open problem to show that P ≠ NP (the two complexity classes might be equal, but most experts do not expect this to be the case). To establish this inequality it is necessary and sufficient to show that there exists an NP-complete problem with no polynomial-time algorithm. Intriguingly, Ladner showed that if P ≠ NP, then there exist problems in NP that are neither in P nor NP-complete [16]. These are usually referred to as NP-intermediate. In other words, an NP-intermediate problem is a problem in NP that is neither in P nor NP-hard. Intuitively, Ladner's theorem shows that if one considers a family of decision problems, then it need not be the case that every problem in the family is either "easy" (that is, in P) or "hard" (that is, NP-hard) – there could be problems that have intermediate complexity. When a given family of problems has the property that none of the problems has intermediate complexity, then one says that the family satisfies a dichotomy theorem. The archetypical dichotomy theorem was established by Schaefer, who showed that Boolean satisfiability problems in generalized conjunctive form (where the clauses are taken from a finite set of constraints) satisfy a dichotomy theorem (with respect to the set of constraints) [21].
In the case of our Theorem 1, we interpret (2+1)-d TQFT invariants as generalized (i.e. $\mathbb{C}$-valued instead of $\mathbb{N}$-valued) counting problems parametrized by spherical fusion categories $\mathcal{C}$ and modular fusion categories $\mathcal{D}$ (the categories are analogs of the allowed local constraints in Schaefer's dichotomy). Our results establish the dichotomy that either a function of the type $\mathrm{TV}_{\mathcal{C}}$ or $\mathrm{RT}_{\mathcal{D}}$ is "easy" to compute (polynomial time) or else it is "hard" to contract certain tensors built from the category $\mathcal{C}$ or $\mathcal{D}$ (#P-hard). In this way, there are no (2+1)-d TQFTs whose tensors are of "intermediate" complexity. In fact, we conjecture that the same can be said directly of the TQFTs' invariants of 3-manifolds per se. Let us expound on these points now.
1.2 Mapping the dichotomy
Whether or not $\mathrm{TV}_{\mathcal{C}}$ or $\mathrm{RT}_{\mathcal{D}}$ is computable in polynomial time depends on $\mathcal{C}$ and $\mathcal{D}$. Having established Theorem 1 – which only asserts the existence of a dichotomy – it is natural to wonder where one should draw the line between easy and hard. Better yet, ideally, one would like to be able to prove that the dichotomy of Theorem 1 is effective, meaning, given $\mathcal{C}$ or $\mathcal{D}$, there exists a polynomial-time algorithm to decide precisely when the category falls into the easy case (here "polynomial-time" means in the size of the skeletalization of $\mathcal{C}$ or $\mathcal{D}$). Our proof of Theorem 1 relies on the main result of Cai and Chen's work [2], which establishes a dichotomy theorem for a generalized type of "solution counting" for constraint satisfaction problems with a fixed "$\mathbb{C}$-weighted constraint family" $\mathcal{F}$. We carefully define $\#\mathrm{CSP}(\mathcal{F})$ in Subsection 2.1, but here we note that not only do Cai and Chen prove that for every choice of constraint family $\mathcal{F}$, $\#\mathrm{CSP}(\mathcal{F})$ is either #P-hard or computable in polynomial time – they also provide three necessary and sufficient conditions that characterize precisely which $\mathcal{F}$ allow for polynomial time solutions to $\#\mathrm{CSP}(\mathcal{F})$. These conditions are called "block orthogonality," "Mal'tsev" and "Type Partition". Our proof of Theorem 1 consists in converting a spherical fusion category $\mathcal{C}$ or modular fusion category $\mathcal{D}$ into an appropriate constraint family $\mathcal{F}_{\mathcal{C}}$ or $\mathcal{F}_{\mathcal{D}}$ such that computing $\mathrm{TV}_{\mathcal{C}}(M)$ or $\mathrm{RT}_{\mathcal{D}}(M)$ is equivalent to computing an instance (depending on $M$) of a problem in $\#\mathrm{CSP}(\mathcal{F}_{\mathcal{C}})$ or $\#\mathrm{CSP}(\mathcal{F}_{\mathcal{D}})$, respectively. In particular, for the constraint families $\mathcal{F}_{\mathcal{C}}$ and $\mathcal{F}_{\mathcal{D}}$ we shall build, it should be possible to interpret the three conditions of Cai and Chen directly in terms of the categories $\mathcal{C}$ and $\mathcal{D}$. It is beyond the scope of the present work to attempt to accomplish this. However, we believe this is an important problem, since it should shed light on variations of the Property F conjecture related to anyon classification, as we explain in the next subsection.
Let us now address the more important deficiency of Theorem 1, alluded to at the end of the previous subsection: it would be better to get an outright dichotomy for 3-manifold invariants, and not just general tensors derived from a fusion category. To this end, Theorem 1 can be understood as a first step towards proving the following more desirable result.
Conjecture 2.
(a) Fix a spherical fusion category $\mathcal{C}$ over $\mathbb{C}$, presented skeletally with all data given as algebraic numbers over $\mathbb{Q}$. If $M$ is a closed, oriented, triangulated 3-manifold (treated as computational input), then the problem of computing the Turaev-Viro-Barrett-Westbury invariant $\mathrm{TV}_{\mathcal{C}}(M)$ is either solvable in (classical) polynomial time or #P-hard.
(b) Fix a modular fusion category $\mathcal{D}$ over $\mathbb{C}$, presented skeletally with all data given as algebraic numbers over $\mathbb{Q}$. If $M$ is a closed, oriented 3-manifold encoded via a surgery diagram (treated as computational input), then the problem of computing the Reshetikhin-Turaev invariant $\mathrm{RT}_{\mathcal{D}}(M)$ is either solvable in (classical) polynomial time or #P-hard.
1.3 Implications for “anyon classification”
Theorem 1 and Conjecture 2 assert that for certain precise formulations of the problem of "anyon classification," whatever the "type" is for a given modular fusion category $\mathcal{D}$, it can only be one of two things, with no "intermediate" cases. In order to explain this more carefully, we pause to note the many ways one can make the problem of anyon classification precise, and situate our result exactly in this milieu. Moving from the more "purely mathematical" to the more "applied" end of the spectrum, "anyon classification" could mean any of the following precise problems:
1. Algebraic classification of unitary modular fusion categories (MFCs) up to ribbon tensor equivalence.
   - Much of the literature on fusion categories can be considered as contributing to this problem.
   - Presumably one would be satisfied with a solution to this problem "modulo finite group theory."
2. Algebraic classification of simple objects in unitary MFCs according to whether or not the braid group representations have finite image, dense image, or something else.
   - The Property F conjecture is of course directly related to this matter.
   - One can generalize this question to consider mapping class group representations of higher genus surfaces with different types of anyons on them.
3. Complexity-theoretic classification of MFCs according to how easy or hard it is to exactly compute their Reshetikhin-Turaev 3-manifold invariants (as algebraic numbers over $\mathbb{Q}$).
   - One can ask this question for restricted classes of 3-manifolds (such as "knots in $S^3$" or "links in $S^3$" or "integer homology spheres"), and the classification may change [7].
4. Complexity-theoretic classification of MFCs according to how easy or hard it is to "approximate" their Reshetikhin-Turaev 3-manifold invariants.
   - There are different types of approximations one might ask for. A priori, each type should be understood as giving a different version of this question.
   - "Exactly compute" is one way to "approximate."
   - Pioneering works of Freedman, Kitaev, Larsen and Wang show that for certain approximation schemes, there exist unitary MFCs for which the ability to approximate their 3-manifold invariants is equivalent in power to BQP (bounded-error quantum polynomial time) [9, 10]. In particular, their work established the original paradigm for topological quantum computation via anyon braiding.
   - Kuperberg showed that results for one type of approximation can have important implications for other types of approximations [13]. In particular, the kinds of approximations that a quantum computer can efficiently make for Reshetikhin-Turaev invariants are (in general/worst case) not precise enough to do anything useful for distinguishing 3-manifolds, even if their invariants are promised to be unequal by a large amount.
5. Complexity-theoretic classification of MFCs according to whether or not they support universal quantum computation with braiding and adaptive anyonic charge measurements.
   - Quantum computation using braidings and charge measurements of anyons in a fixed unitary MFC is often called "topological quantum computing with adaptive charge measurement."
   - Our entirely subjective opinion is that this is the most important flavor of anyon classification, at least when considered from the perspective of the goal of actually building a universal, fault-tolerant quantum computer.
   - Even unitary MFCs whose anyons all have Property F (and, hence, are not universal via braiding alone) can be universal when braiding is supplemented with charge measurements, see e.g. [6].
   - While adaptive charge measurement is generally considered fault-tolerant for topological reasons, unlike the case of braiding-only topological quantum computing, the amplitudes with which one performs a quantum computation in this paradigm are not (normalizations of) Reshetikhin-Turaev invariants of 3-manifolds.
There are known relations between these different classification problems. For example, on one hand, if $X$ is an anyon whose braid group representations have dense image, then the Solovay-Kitaev theorem implies that $X$ supports universal topological quantum computation via braiding (without needing adaptive charge measurement). On the other hand, if $X$ has Property F, then it is known that braiding with $X$ is never powerful enough to encode all of BQP in its braidings. This latter point was the main motivation for the Property F conjecture in the first place, since one would like to rule out the "obviously" un-useful anyons easily.
To understand the potential usefulness of Theorem 1 or Conjecture 2, it is perhaps helpful to pull on the thread of these motivations for the Property F conjecture a bit more so that we can compare and contrast.
On one hand, there is no “unconditional” implication known between classification problems (2) and (4) above in either direction, except if we condition on properties in a way we have already mentioned, namely: if an anyon has Property F, then it is definitely not braiding universal, while if an anyon has dense braidings, then it is universal. This is not “unconditional” in the sense that as far as problem (2) is concerned, there is a third case that remains to be addressed: anyons with braidings that are neither dense nor have Property F. Do they even exist? If so, what are we to make of them? Are they universal or not? Maybe sometimes they are and sometimes they are not? Conversely, if an anyon is braiding universal, does it necessarily have dense braid group representations? These are interesting questions worth pursuing, but it could require quite a bit of effort to resolve each of them.
On the other hand, there is an "unconditional" connection between (3) and (4) in at least one direction: (3) is simply the special case of (4) where the type of "approximation" is chosen to be "exact computation." So classification problem (3) might be understood as a warm-up to the version of problem (4) where the type of approximation is not "exact", but is instead the kind of approximation relevant to topological quantum computing. (For the sake of space, we refrain from precisely defining this type of approximation here; see the intro discussions of [13] or [20].) The key technical issue that this perspective highlights is the following: even categories whose anyons all have Property F (and thus are not braiding universal) can have #P-hard invariants [11, 14, 15]. Hence, more work needs to be done to properly understand the relationship between the BQP-universality of anyon braidings in a given modular fusion category $\mathcal{D}$ and #P-hardness of (exactly) computing $\mathrm{RT}_{\mathcal{D}}$ on 3-manifolds $M$. At the end of the day, this is not so different from the situation between (2) and (4).
However, we conclude this discussion by noting that it is conceivable there exists a very tight connection between anyon classification problems (3) and (5) (while there is essentially no way to relate (2) and (5)). Indeed, one might reasonably guess that #P-hardness for the exact calculation of invariants implies that topological quantum computing with adaptive charge measurements is always sufficient to generate BQP-universal topological gates. This guess would be consistent with all known examples. We plan to explore these matters in future work.
1.4 Outline
Subsection 2.1 briefly reviews the definition of $\#\mathrm{CSP}(\mathcal{F})$, as well as Cai and Chen's dichotomy theorem for these problems. Subsection 2.2 contains the proof of part (a) of Theorem 1. Subsection 2.3 contains some preliminary results about the graphical calculus in fusion categories needed for the proof of part (b). The proof of part (b) itself is relegated to Appendix A for the sake of space. Section 3 contains some further discussion.
2 Proof of Theorem 1
2.1 Cai and Chen’s dichotomy for weighted CSPs
Before proving either part of our main theorem, we review the definition of $\#\mathrm{CSP}(\mathcal{F})$, following [2]:
- We fix a finite set $D$ called the domain (which, by an abuse of notation, we will suppress from the notation $\#\mathrm{CSP}(\mathcal{F})$).
- We fix a ($\mathbb{C}$-valued) weighted constraint family $\mathcal{F}$, where each $f \in \mathcal{F}$ is a $\mathbb{C}$-valued function $f : D^{r} \to \mathbb{C}$ for some $r \in \mathbb{N}$ called the arity of $f$. We assume all the values that the $f \in \mathcal{F}$ assume are encoded as algebraic numbers over $\mathbb{Q}$.
- An instance $I$ of $\#\mathrm{CSP}(\mathcal{F})$ consists of a tuple $x = (x_1, \dots, x_n)$ of variables over $D$ (which will be suppressed in our notation) and a set of tuples $(f, i_1, \dots, i_r)$ in which $f$ is an $r$-ary function from $\mathcal{F}$ and $i_1, \dots, i_r$ are indices of the variables in $x$.
- The output of $\#\mathrm{CSP}(\mathcal{F})$ on instance $I$ is the algebraic number given by
$$Z(I) \;=\; \sum_{\sigma : \{x_1, \dots, x_n\} \to D} \; \prod_{(f, i_1, \dots, i_r) \in I} f\big(\sigma(x_{i_1}), \dots, \sigma(x_{i_r})\big),$$
where the sum runs over all assignments $\sigma$ of values in $D$ to the variables.
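To make the preceding definition concrete, the following minimal Python sketch evaluates $Z(I)$ by brute force. The names (`Constraint`, `evaluate_instance`) and the toy weights are illustrative inventions rather than notation from the paper, and exact algebraic-number arithmetic is replaced here by floating-point complex numbers.

```python
# A minimal brute-force evaluator for a C-weighted #CSP instance.
from itertools import product
from typing import Callable, List, Sequence

class Constraint:
    """A constraint function together with the indices of the variables it reads."""
    def __init__(self, f: Callable[..., complex], var_indices: Sequence[int]):
        self.f = f
        self.var_indices = tuple(var_indices)

def evaluate_instance(domain: Sequence, num_vars: int,
                      constraints: List[Constraint]) -> complex:
    """Sum, over all assignments of the variables to domain values, of the product
    of the constraint values -- exponential time, so only a sanity check, not the
    polynomial-time algorithm whose existence the dichotomy concerns."""
    total = 0j
    for sigma in product(domain, repeat=num_vars):
        term = 1 + 0j
        for c in constraints:
            term *= c.f(*(sigma[i] for i in c.var_indices))
            if term == 0:
                break                      # a single zero factor kills the term
        total += term
    return total

if __name__ == "__main__":
    # Toy example on domain {0, 1}: a binary equality weight and a unary weight.
    eq = lambda a, b: 1.0 if a == b else 0.0
    w = lambda a: 2.0 if a == 1 else 1.0
    # Two variables forced equal, each carrying the weight w: Z = w(0)**2 + w(1)**2 = 5.
    print(evaluate_instance([0, 1], 2,
                            [Constraint(eq, [0, 1]), Constraint(w, [0]), Constraint(w, [1])]))
```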
The main result of [2] is
Theorem 3 (Thm. 1, [2]).
Given any constraint set $\mathcal{F}$ as above, $\#\mathrm{CSP}(\mathcal{F})$ is either computable in polynomial time or #P-hard.
2.2 Proof of Theorem 1(a)
Proof.
Let $\mathcal{C}$ be a spherical fusion category over $\mathbb{C}$. To prove part (a) of Theorem 1, it suffices – thanks to Theorem 3 – to build a domain $D$ and weighted constraint set $\mathcal{F}_{\mathcal{C}}$ with the following property: there exists a polynomial time algorithm that converts a triangulated 3-manifold $M$ into an instance $I_M$ of $\#\mathrm{CSP}(\mathcal{F}_{\mathcal{C}})$ such that $Z(I_M) = \mathrm{TV}_{\mathcal{C}}(M)$.
Readers already familiar with the state-sum formula for TVBW invariants will notice that the definition of $\#\mathrm{CSP}(\mathcal{F})$ is quite similar in spirit. The goal of the present proof is simply to make this similarity precise.
To this end, let us recall the state-sum formula for $\mathrm{TV}_{\mathcal{C}}(M)$:
$$\mathrm{TV}_{\mathcal{C}}(M) \;=\; \mathcal{D}_{\mathcal{C}}^{-2|V|} \sum_{(\ell,\mu)} \; \prod_{e \in E} d_{\ell(e)} \; \prod_{f \in F} \Theta_f(\ell,\mu) \; \prod_{t \in T} |t|^{\pm}(\ell,\mu),$$
where the sum runs over all admissible labelings $\ell$ of the edges by simple objects and $\mu$ of the faces by trivalent Hom-space labels, and where the rest of our notation is as follows:
- $V$ is the ordered list of vertices in the triangulation $\mathcal{T}$ and $\mathcal{D}_{\mathcal{C}}$ is the total quantum dimension of $\mathcal{C}$.
- $E$ is the set of edges in the triangulation and $\mathrm{Irr}(\mathcal{C})$ is the set of simple objects in the given skeletalization of $\mathcal{C}$; $d_{\ell(e)}$ is the quantum dimension of the simple object labeling the edge $e$.
- $F$ is the set of faces in the triangulation; the faces are labeled by elements of the sets of labels of the trivalent Hom spaces determined by the simple objects on their boundary edges, and $\Theta_f(\ell,\mu)$ is the 3j+1k-symbol obtained by evaluating the face $f$ with a given labeling of the edges and faces of $\mathcal{T}$ (see Figure 1).
- $T$ is the set of tetrahedra in the triangulation $\mathcal{T}$, and $|t|^{\pm}(\ell,\mu)$ is the 6j+4k-symbol obtained by evaluating the tetrahedron $t$ with a given labeling of the edges and faces of $t$, where we take into account whether the orientation of $t$ given by the orientation of $M$ matches the induced orientation given by the ordering of the vertices. This will be made more precise below.
Since we do not assume that $\mathcal{C}$ is multiplicity-free, instead of 6j-symbols we will be using so-called 6j+4k-symbols, which are defined by the contraction of a specific colored graph.
This defines the "positive" 6j+4k-symbols. We also define a "negative" version of the 6j+4k-symbols, which we call negative 6j+4k-symbols since they correspond to negatively-oriented tetrahedra with respect to the standard orientation. In both the positive and negative symbols, the six edge labels are simple objects and the four face labels are elements of the corresponding sets of trivalent Hom-space labels.
We now have enough to identify our domain $D$ and weighted constraint set $\mathcal{F}_{\mathcal{C}}$. Define $D$ to be the disjoint union of the set $\mathrm{Irr}(\mathcal{C})$ of simple objects of $\mathcal{C}$ and the set of all labels of the trivalent Hom spaces appearing in the given skeletalization.
Now extend the positive and negative 6j+4k-symbols to 10-ary functions on our domain in the "trivial" way: a 10-tuple of domain elements is sent to the corresponding positive (respectively, negative) 6j+4k-symbol whenever its first six entries are simple objects and its last four entries are labels of the corresponding trivalent Hom spaces, and is sent to $0$ otherwise.
We similarly define a 4-ary function on our domain using the 3j+1k-symbols: a 4-tuple is sent to the corresponding 3j+1k-symbol when its first three entries are simple objects and its last entry is a label of the corresponding trivalent Hom space, and to $0$ otherwise.
And we define a 1-ary function using the quantum dimensions of simple objects: a domain element $x$ is sent to $d_x$ if $x$ is a simple object, and to $0$ otherwise.
Finally, we define a 1-ary function to encode the total quantum dimension of $\mathcal{C}$: it takes the value $\mathcal{D}_{\mathcal{C}}^{-2}$ on a single distinguished element of the domain (say the tensor unit $\mathbf{1} \in \mathrm{Irr}(\mathcal{C})$), and the value $0$ elsewhere.
Using these functions, we define our weighted constraint family $\mathcal{F}_{\mathcal{C}}$ to consist of the two 10-ary 6j+4k functions, the 4-ary 3j+1k function, the 1-ary quantum dimension function, and the 1-ary total quantum dimension function.
Our next goal is to describe how to convert a triangulation $\mathcal{T}$ of an oriented 3-manifold $M$ into an instance of $\#\mathrm{CSP}(\mathcal{F}_{\mathcal{C}})$.
The data of $\mathcal{T}$ consists of:
- An ordered list of vertices $v_1, \dots, v_{|V|}$.
- A list of oriented edges, where each edge is recorded as an ordered pair of vertices, meaning that it is an edge connecting the first vertex to the second. (Note that these orientations are chosen arbitrarily.)
- A list of oriented faces, where each face is recorded as a triple of edges whose union forms its boundary. (Note that the orientations of the faces need not be consistent with the edge orientations in any way.)
- A list of tetrahedra, where each tetrahedron is recorded as the 4-tuple of its faces.
- To encode the orientation of $M$, each tetrahedron is endowed with a sign $+1$ or $-1$ to indicate the local orientation inside that tetrahedron. (These signs must assemble to give a $\{\pm 1\}$-valued 0-cocycle on the dual cellulation. This condition could be easily checked, but for our purposes it is simply part of the data structure of $\mathcal{T}$, and so it can be assumed to be met as a promise. This condition is not necessary for our proof, although it is necessary for the proof that $\mathrm{TV}_{\mathcal{C}}(M)$ is an invariant of $M$.)
For a given triangulation $\mathcal{T}$ as described, define a tuple of variables that has one variable for each vertex, edge and face in $\mathcal{T}$. We now describe how to build the desired instance $I_M$ of $\#\mathrm{CSP}(\mathcal{F}_{\mathcal{C}})$. It will be clear from the construction that the mapping $M \mapsto I_M$ can be done in time polynomial in the size of $\mathcal{T}$.
First, we put the 1-ary total quantum dimension function and the 1-ary quantum dimension function in $I_M$ for every vertex and edge of $\mathcal{T}$, respectively, each applied to the corresponding variable. For each face, we include the 4-ary 3j+1k function applied to the variables of its three boundary edges and of the face itself. Finally, for each tetrahedron, we include either the positive or the negative 10-ary 6j+4k function applied to the variables of its six edges and four faces. To determine which of the two we should include for a tetrahedron $t$, we check whether the orientation of $t$ given by the orientation of $M$ matches the induced orientation given by the ordering of the vertices; if they match, then we use the positive function, and otherwise we use the negative one.
This defines an instance $I_M$ of $\#\mathrm{CSP}(\mathcal{F}_{\mathcal{C}})$ whose output computes the Turaev-Viro invariant of $M$. Indeed, plugging in the definitions of our constraint functions, we see that the non-vanishing terms of $Z(I_M)$ sum to exactly
$$\mathcal{D}_{\mathcal{C}}^{-2|V|} \sum_{(\ell,\mu)} \; \prod_{e \in E} d_{\ell(e)} \; \prod_{f \in F} \Theta_f(\ell,\mu) \; \prod_{t \in T} |t|^{\epsilon(t)}(\ell,\mu),$$
where $\epsilon(t) = +$ if the orientation of $t$ given by the orientation of $M$ matches the induced orientation given by the ordering of the vertices, and $\epsilon(t) = -$ otherwise. A priori, $Z(I_M)$ includes a sum over more types of labelings than the state-sum formula for $\mathrm{TV}_{\mathcal{C}}(M)$. However, because of how we have chosen to define the functions in the constraint family $\mathcal{F}_{\mathcal{C}}$, all of these additional terms in the sum vanish. To see this, first note that when any vertex variable is not assigned the distinguished element $\mathbf{1}$, the entire term is 0. In particular, this means all non-zero terms have a common factor of the total quantum dimension raised to the power $-2|V|$, and hence we can pull it out as the normalizing factor. Similarly, when an edge is labeled by an element of the domain that is not a simple object of $\mathcal{C}$, or a face is not labeled by a trivalent Hom-space label, the term is zero. We have furthermore arranged so that when the labeling of the edges and faces is not admissible, the 6j+4k-symbol for that term vanishes. Therefore, the only surviving terms in the sum are those which define admissible labelings of the edges and faces of $\mathcal{T}$. It is then straightforward to see that $Z(I_M)$ recovers the Turaev-Viro invariant $\mathrm{TV}_{\mathcal{C}}(M)$.
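The bookkeeping in this reduction is entirely combinatorial; the following Python sketch records it, assuming the triangulation is given by the data structure described above. The names (`Triangulation`, `build_instance`) and the constraint labels (`"dim_total"`, `"dim"`, `"theta"`, `"tet+"`, `"tet-"`) are hypothetical stand-ins for the functions in the constraint family $\mathcal{F}_{\mathcal{C}}$, not the paper's notation; the category data itself never needs to be touched here.

```python
# Convert triangulation data into the list of (constraint-name, variable-indices)
# pairs that make up the #CSP instance, mirroring the proof above.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Triangulation:
    num_vertices: int
    edges: List[Tuple[int, int]]                   # oriented edges as vertex pairs
    faces: List[Tuple[int, int, int]]              # faces as triples of edge indices
    tets: List[Tuple[Tuple[int, ...], Tuple[int, ...], int]]  # (6 edge idxs, 4 face idxs, +-1)

def build_instance(T: Triangulation) -> List[Tuple[str, Tuple[int, ...]]]:
    """One variable per vertex, edge and face; constraints exactly as in the proof.
    Runs in time linear in the size of the triangulation."""
    v_var = lambda i: i                                        # vertex variables first,
    e_var = lambda i: T.num_vertices + i                       # then edge variables,
    f_var = lambda i: T.num_vertices + len(T.edges) + i        # then face variables
    instance: List[Tuple[str, Tuple[int, ...]]] = []
    for v in range(T.num_vertices):                # total-quantum-dimension weight
        instance.append(("dim_total", (v_var(v),)))
    for e in range(len(T.edges)):                  # quantum-dimension weight
        instance.append(("dim", (e_var(e),)))
    for f, (ea, eb, ec) in enumerate(T.faces):     # 3j+1k weight on each face
        instance.append(("theta", (e_var(ea), e_var(eb), e_var(ec), f_var(f))))
    for (edge6, face4, sign) in T.tets:            # 6j+4k weight on each tetrahedron,
        name = "tet+" if sign == +1 else "tet-"    # sign read off from the orientation data
        variables = tuple(e_var(e) for e in edge6) + tuple(f_var(f) for f in face4)
        instance.append((name, variables))
    return instance
```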
2.3 Graphical calculus preliminaries
Before proving part (b) of Theorem 1, we establish a technical result about the graphical calculus in a spherical fusion category. The result is likely well-known to experts, but does not appear in the literature anywhere that we are aware of. We begin by reviewing what we need of closed trivalent graphs in $\mathbb{R}^2$.
For our purposes, a (closed) trivalent graph in $\mathbb{R}^2$ (or $S^2$) has:
- A finite collection of vertices $V \subset \mathbb{R}^2$, and
- A collection $E$ of directed edges in $\mathbb{R}^2$,
subject to the conditions that
- Each edge is either a loop disjoint from $V$ or an arc connecting two (not necessarily distinct) vertices, with an interior that is disjoint from all vertices in $V$.
- If $v$ is a vertex, then there is an open disk neighborhood $U$ of $v$ so that $U$ contains three arcs (coming from intersections of $U$ with $E$, not necessarily distinct) emanating from $v$, with the three arcs parallel to three fixed vectors (one pointing upward and two pointing downward, say).
Such a graph is closed in the sense that there are no vertices that are involved in precisely one half-edge.
We will also need to consider closed trivalent graphs which have crossings. We will call these crossed (closed) trivalent graphs. Crossed trivalent graphs are trivalent graphs that are allowed to have finitely many double points in the interiors of their edges. These double points must indicate which segment of the edge crosses "over" or "under" the other.
Recall that computing the Reshetikhin-Turaev invariant of a 3-manifold presented by a surgery diagram involves a process where we interpret a coloring of the components of the diagram by simple objects of $\mathcal{D}$ as an endomorphism of the tensor unit $\mathbf{1}$ of $\mathcal{D}$. Since $\mathbf{1}$ is itself simple, such a coloring gives rise to an endomorphism of $\mathbf{1}$, which in turn can be identified with a complex number because $\mathrm{End}(\mathbf{1}) \cong \mathbb{C}$. The invariant is then (roughly) the sum of all of these numbers over all choices of colorings of the surgery diagram of $M$. For any single coloring, the complex number associated to it can generally be understood as the result of a sequence of tensor contractions on a tensor induced by that coloring. Moreover, this sequence of tensor contractions can be represented diagrammatically, using a small number of standard diagrammatic operations that are determined from the data of the modular fusion category $\mathcal{D}$. We call these operations Circle Removal, Tadpole Trim, Bubble Pop, $F$-Move, and Vertex Spiral (aka "bending" moves); see Figures 3-5. We also allow ourselves to reverse edge orientations. To compute the complex number associated to a colored surgery link, one must identify a sequence of these operations that simplifies the diagram to the empty diagram. The desired number will then be a product of numbers determined from the operations in the sequence and the given coloring.
Our proof of Theorem 1(b) essentially revolves around two key observations.
First, a kind of uniformity: given a surgery description of $M$, there exists a single sequence of diagrammatic operations that can be used to evaluate all colorings of the surgery link to complex numbers. This uniformity is, more or less, what will make it possible to encode $\mathrm{RT}_{\mathcal{D}}(M)$ as an instance of $\#\mathrm{CSP}(\mathcal{F}_{\mathcal{D}})$ for an appropriately chosen $\mathcal{F}_{\mathcal{D}}$.
Second: such a uniform sequence of operations can be identified in polynomial time from the surgery description of $M$. This will imply that the reduction can be performed in polynomial time. We make this point more precise now.
Lemma 4. Given a closed trivalent graph $\Gamma$ in $\mathbb{R}^2$, one can identify, in time polynomial in the number of vertices and edges of $\Gamma$, a sequence of Circle Removals, Tadpole Trims, Bubble Pops, $F$-moves, Vertex Spirals, and edge orientation reversals that reduces $\Gamma$ to the empty diagram.
Proof.
A simple greedy algorithm suffices. We sketch the idea.
Begin by greedily choosing a complementary region $R$ of $\Gamma$, i.e. a connected component of $\mathbb{R}^2 \setminus \Gamma$. Note that we may identify the boundary edges and vertices of $R$ in polynomial time. Suppose $R$ has $k$ distinct edges and $m$ distinct vertices on its boundary. Now simplify $\Gamma$ and update $R$ according to the following cases.
1. If $k = 0$, then $\Gamma = \emptyset$, and so we terminate.
2. If $k = 1$ and $m = 0$, then the boundary of $R$ is a circle in $\Gamma$, which we remove as in Figure 3.
3. If $k = 1$ and $m = 1$, then the boundary of $R$ is the loop of a tadpole, which we trim as in Figure 3.
4. If $k = 2$, then the boundary of $R$ is part of a bubble, which we pop as in Figure 3, but possibly only after first applying an appropriate set of vertex spirals and edge orientation reversals.
5. Otherwise, $k \geq 3$. Greedily pick an edge $e$ on the boundary of $R$. After perhaps first applying up to two vertex spirals and 5 edge orientation reversals, we can arrange so that around $e$, $\Gamma$ looks like one of the four diagrams in Figure 4, with the edge $e$ in the middle. Apply the available $F$-move around $e$. The complementary regions of the resulting graph are naturally in bijection with the regions of the previous graph (see Figure 6 for an example). Let $R'$ be the region of the new graph associated with $R$. If $R'$ has at least $3$ edges on its boundary, then repeat what we just did, but with $R'$ and the new graph instead of $R$ and $\Gamma$; otherwise, $R'$ has $2$ edges, and we pop the bubble as in case (4).
Repeat this process of greedily picking a region and proceeding as in the above cases. Each step of identifying a region $R$ and carrying through the appropriate case takes polynomial time, and, moreover, reduces the number of complementary regions of the graph by 1. Since there are at most a polynomial number of complementary regions to begin with, the entire procedure takes place in polynomial time.
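For illustration, here is a Python sketch of the control flow of the greedy procedure just described. It tracks only the number of boundary edges of each complementary region and the operations emitted; it does not model the actual planar rewriting or the bijection between regions before and after an $F$-move, and all names are invented for this sketch.

```python
# A control-flow sketch of the greedy procedure in the proof of Lemma 4.
from typing import List

def greedy_operation_sequence(region_edge_counts: List[int]) -> List[str]:
    """Emit one plausible operation sequence, region by region, following the
    case analysis above: k = 0 terminate, k = 1 circle removal or tadpole trim,
    k = 2 bubble pop, k >= 3 repeated F-moves until only a bubble remains."""
    ops: List[str] = []
    for k in region_edge_counts:              # one outer iteration per region
        if k == 0:
            ops.append("terminate")
            break
        if k == 1:
            ops.append("circle removal or tadpole trim")
            continue
        while k >= 3:                         # case 5: shrink the region with F-moves
            ops.append("F-move (after at most 2 spirals and 5 reversals)")
            k -= 1
        ops.append("bubble pop")              # case 4: the region is now a bigon
    return ops

if __name__ == "__main__":
    # Regions with boundary sizes 5, 3, 2, 1, 0 need O(total boundary size) operations.
    for op in greedy_operation_sequence([5, 3, 2, 1, 0]):
        print(op)
```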
Lemma 4 only involves planar trivalent graphs, while the proof of part (b) of Theorem 1 needs crossings. Indeed, when we compute Reshetikhin-Turaev invariants, a 3-manifold is encoded by a framed link diagram (which can be assumed to be in plat position, as in Figure 8). Fortunately, for modular fusion categories we have "$R$-moves" that – together with a "resolution of the identity" trick shown in Figure 7 – allow us to reduce to the planar case, and thereby use Lemma 4. The basic strategy is then not so different from the proof of part (a), but it requires that we introduce new variables for every crossing and then for every step of the algorithm of the lemma. The details are found in Appendix A.
3 Discussion
3.1 Unitarity
We note that none of our results depend on the unitarity of the spherical fusion category $\mathcal{C}$ or the modular fusion category $\mathcal{D}$. This is not surprising, since a choice of unitary structure is not necessary to define the TQFT invariants of closed 3-manifolds from $\mathcal{C}$ or $\mathcal{D}$, and so, as far as exact calculation of invariants is concerned, such a choice will not affect any dichotomies. Nevertheless, a unitary structure is certainly needed in order to do topological quantum computation with the TQFT determined by $\mathcal{C}$ or $\mathcal{D}$, since, for such applications, one needs the TQFT to be unitary. A priori, a specific choice of unitary structure might affect the BQP-universality of braiding (with or without adaptive measurement); however, a posteriori, this is not the case because Reutter showed that unitarizable fusion categories admit unique unitary structures [18] (we thank Milo Moses for reminding us of this reference).
3.2 TQFTs in other dimensions
We furthermore note that the same strategy we used for the proof of Theorem 1(a) should work more generally to prove that any fully-extended $n$-dimensional TQFT in any dimension $n$ will satisfy a similar dichotomy involving its invariants of closed $n$-dimensional manifolds, so long as the TQFT is defined using a state-sum formula based on finite combinatorial-algebraic data. In particular, similar dichotomies should be possible for (3+1)-dimensional TQFTs based on spherical fusion 2-categories [8] or lattice gauge theories based on finite groups (sometimes called Dijkgraaf-Witten theories) in arbitrary dimension.
3.3 Alternative proof strategies
Building on the previous point, one might try to give an alternative proof of Theorem 1(b) by using the (3+1)-dimensional Crane-Yetter TQFT based on the modular fusion category $\mathcal{D}$ [5]. To put it more carefully, it is known that the Reshetikhin-Turaev invariant $\mathrm{RT}_{\mathcal{D}}(M)$ can be computed by choosing a triangulated 4-manifold $W$ whose boundary is $M$, and computing an appropriate state-sum invariant of $W$ (similar to the Turaev-Viro invariant of a triangulated 3-manifold) [4]. So one could try to prove a dichotomy for the (2+1)-dimensional surgery-invariant case of Reshetikhin-Turaev in a simpler way by instead proving a dichotomy for the (3+1)-dimensional triangulation-invariant case of Crane-Yetter. Accomplishing this requires using the fact that given a surgery diagram for a 3-manifold $M$, one can build a triangulated 4-manifold $W$ with $\partial W = M$ in polynomial time (for example, cf. [1]).
In the opposite direction, it is known that for a spherical fusion category $\mathcal{C}$, $\mathrm{TV}_{\mathcal{C}}(M) = \mathrm{RT}_{Z(\mathcal{C})}(M)$, where $Z(\mathcal{C})$ is the Drinfeld center of $\mathcal{C}$. If one were able to efficiently convert a triangulation of a 3-manifold into a surgery presentation of the same manifold, then part (b) of Theorem 1 (or Conjecture 2) would immediately imply part (a) of the same.
3.4 Towards Conjecture 2
The current gap between Theorem 1 and Conjecture 2 is explained by a rather simple and undesirable property of our proof of the former: our reductions are not "surjective" from 3-manifold encodings to instances of $\#\mathrm{CSP}(\mathcal{F}_{\mathcal{C}})$ or $\#\mathrm{CSP}(\mathcal{F}_{\mathcal{D}})$. For example, it would be consistent with our results for there to exist a spherical fusion category $\mathcal{C}$ such that $\mathrm{TV}_{\mathcal{C}}$ is computable in polynomial time, and yet the problem $\#\mathrm{CSP}(\mathcal{F}_{\mathcal{C}})$ is still #P-hard (although we consider this unlikely).
To establish an outright dichotomy theorem for TQFT invariants via Cai and Chen's Theorem 3, we would need to arrange our choices of $\mathcal{F}_{\mathcal{C}}$ and $\mathcal{F}_{\mathcal{D}}$ with more care so that every instance $I$ of $\#\mathrm{CSP}(\mathcal{F}_{\mathcal{C}})$ or $\#\mathrm{CSP}(\mathcal{F}_{\mathcal{D}})$ that is not of the form $I_M$ satisfies two properties: first, it can be identified in polynomial time as an instance that is not of the form $I_M$, and, second, $Z(I)$ can be computed in polynomial time. This seems difficult to arrange, as it is not clear how to choose the constraint families $\mathcal{F}_{\mathcal{C}}$ and $\mathcal{F}_{\mathcal{D}}$ so that instances can "self-report" as not being of the form $I_M$.
It is instructive to compare TQFT invariants with "holant problems" as defined in [3] and inspired by the "holographic reductions" of [22]. Holant problems are a kind of generalization of counting CSPs that impose more structure on the way in which the individual functions comprising an instance are "wired together." Intuitively, an instance of $\#\mathrm{CSP}(\mathcal{F})$ has no locality constraints on its variables, other than that the constraint functions have bounded arity (assuming $\mathcal{F}$ is finite). An instance of a holant problem, on the other hand, has a set of variables that are determined by the edges of a graph with constraint functions assigned to the vertices. The Turaev-Viro-Barrett-Westbury invariant of closed 3-manifolds determined by a spherical fusion category $\mathcal{C}$ can be seen as a generalization of this idea, with variables assigned to both the edges and faces of a 3-dimensional triangulation, and constraints assigned to the tetrahedra. We expect it should be possible to formulate TVBW invariants of triangulated 3-manifolds directly as instances of holant problems using a construction similar to the one in the proof of Theorem 1(a). Of course, even if one could achieve this, such a reduction from TVBW invariants to holant problems would – a priori – suffer in the same way as our current reduction to $\#\mathrm{CSP}(\mathcal{F}_{\mathcal{C}})$. Thus, it seems likely that proving Conjecture 2 will require substantially new ideas. Nevertheless, it appears that there could be much to gain by attempting to import what has been learned about holant problem dichotomies to TQFT invariants of 3-manifolds.
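To make the structural contrast concrete, here is a minimal brute-force holant evaluator in the same style as the $\#\mathrm{CSP}$ sketch from Subsection 2.1: variables live on the edges of a graph and one function sits at each vertex. The names and the toy signature are illustrative inventions, not an implementation from [3] or [22].

```python
# A minimal brute-force holant evaluator: edge variables, vertex functions.
from itertools import product
from typing import Callable, List, Sequence, Tuple

def holant_value(domain: Sequence, num_edges: int,
                 vertex_functions: List[Tuple[Callable[..., complex], Tuple[int, ...]]]) -> complex:
    """Sum over all edge assignments of the product of the vertex functions, each
    evaluated on the variables of its incident edges (listed by edge index)."""
    total = 0j
    for sigma in product(domain, repeat=num_edges):
        term = 1 + 0j
        for f, incident_edges in vertex_functions:
            term *= f(*(sigma[e] for e in incident_edges))
        total += term
    return total

if __name__ == "__main__":
    # Toy example: two vertices joined by two parallel edges, with the binary
    # "equality" signature at both vertices; the value is |domain| = 2.
    eq = lambda a, b: 1.0 if a == b else 0.0
    print(holant_value([0, 1], 2, [(eq, (0, 1)), (eq, (0, 1))]))
```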
References
- [1] Rhuaidi Antonio Burke. Practical Software for Triangulating and Simplifying 4-Manifolds. In Wolfgang Mulzer and Jeff M. Phillips, editors, 40th International Symposium on Computational Geometry (SoCG 2024), volume 293 of Leibniz International Proceedings in Informatics (LIPIcs), pages 29:1–29:23, Dagstuhl, Germany, 2024. Schloss Dagstuhl – Leibniz-Zentrum für Informatik. doi:10.4230/LIPIcs.SoCG.2024.29.
- [2] Jin-Yi Cai and Xi Chen. Complexity of counting CSP with complex weights. Journal of the ACM (JACM), 64(3):1–39, 2017.
- [3] Jin-Yi Cai, Pinyan Lu, and Mingji Xia. Computational complexity of holant problems. SIAM Journal on Computing, 40(4):1101–1132, 2011.
- [4] Louis Crane and Louis H Kauffman. Evaluating the Crane-Yetter invariant. Quantum Topology, 3:131–8, 1993.
- [5] Louis Crane and David N Yetter. A categorical construction of 4D TQFTs. arXiv preprint hep-th/9301062, 1993.
- [6] Shawn X Cui and Zhenghan Wang. Universal quantum computation with metaplectic anyons. Journal of Mathematical Physics, 56(3), 2015.
- [7] Colleen Delaney, Clément Maria, and Eric Samperton. An Algorithm for Tambara-Yamagami Quantum Invariants of 3-Manifolds, Parameterized by the First Betti Number. In Oswin Aichholzer and Haitao Wang, editors, 41st International Symposium on Computational Geometry (SoCG 2025), volume 332 of Leibniz International Proceedings in Informatics (LIPIcs), pages 38:1–38:15, Dagstuhl, Germany, 2025. Schloss Dagstuhl – Leibniz-Zentrum für Informatik. doi:10.4230/LIPIcs.SoCG.2025.38.
- [8] Christopher L Douglas and David J Reutter. Fusion 2-categories and a state-sum invariant for 4-manifolds. arXiv preprint arXiv:1812.11933, 2018.
- [9] Michael H Freedman, Alexei Kitaev, and Zhenghan Wang. Simulation of topological field theories by quantum computers. Communications in Mathematical Physics, 227:587–603, 2002.
- [10] Michael H Freedman, Michael Larsen, and Zhenghan Wang. A modular functor which is universal for quantum computation. Communications in Mathematical Physics, 227:605–622, 2002.
- [11] Robion Kirby and Paul Melvin. Local surgery formulas for quantum invariants and the Arf invariant. Geometry & Topology Monographs, 7, 2004.
- [12] Alexei Kitaev. Anyons in an exactly solved model and beyond. Annals of Physics, 321(1):2–111, 2006.
- [13] Greg Kuperberg. How hard is it to approximate the Jones polynomial? Theory of Computing, 11(6):183–219, 2015.
- [14] Greg Kuperberg and Eric Samperton. Computational complexity and 3–manifolds and zombies. Geometry & Topology, 22(6):3623–3670, 2018.
- [15] Greg Kuperberg and Eric Samperton. Coloring invariants of knots and links are often intractable. Algebraic & Geometric Topology, 21(3):1479–1510, 2021.
- [16] Richard E Ladner. On the structure of polynomial time reducibility. Journal of the ACM (JACM), 22(1):155–171, 1975.
- [17] Deepak Naidu and Eric C Rowell. A finiteness property for braided fusion categories. Algebras and representation theory, 14:837–855, 2011.
- [18] David Reutter. Uniqueness of unitary structure for unitarizable fusion categories. Comm. Math. Phys., 397(1):37–52, 2023. doi:10.1007/s00220-022-04425-7.
- [19] Eric Rowell and Zhenghan Wang. Mathematics of topological quantum computing. Bulletin of the American Mathematical Society, 55(2):183–238, 2018.
- [20] Eric Samperton. Topological quantum computation is hyperbolic. Communications in Mathematical Physics, 402(1):79–96, 2023.
- [21] Thomas J Schaefer. The complexity of satisfiability problems. In Proceedings of the tenth annual ACM symposium on Theory of computing, pages 216–226, 1978.
- [22] Leslie G Valiant. Holographic algorithms. SIAM Journal on Computing, 37(5):1565–1594, 2008.
- [23] Zhenghan Wang. Topological quantum computation, volume 112 of CBMS Regional Conference Series in Mathematics. Conference Board of the Mathematical Sciences, Washington, DC; by the American Mathematical Society, Providence, RI, 2010. doi:10.1090/cbms/112.
Appendix A Proof of Theorem 1(b)
Proof.
Let $\mathcal{D}$ be a modular fusion category over $\mathbb{C}$. As in our proof of part (a) of Theorem 1, it suffices to build a domain $D$ and weighted constraint set $\mathcal{F}_{\mathcal{D}}$ for which there is a polynomial time algorithm to encode a surgery diagram $L$ for a 3-manifold $M$ into an instance $I_M$ of $\#\mathrm{CSP}(\mathcal{F}_{\mathcal{D}})$ such that $Z(I_M) = \mathrm{RT}_{\mathcal{D}}(M)$.
Let us recall the surgery formula for $\mathrm{RT}_{\mathcal{D}}(M)$: up to a normalizing factor built from the Gauss sums of $\mathcal{D}$ and the signature of the linking matrix of $L$, it is the sum, over all colorings of the components of $L$ by simple objects, of the evaluations of the resulting colored ribbon graphs, each weighted by the product of the quantum dimensions of the coloring objects. Our notation is as follows:
- $L$ is the surgery link diagram that defines the 3-manifold $M$. $L$ consists of $m$ components (labeled $L_1, \dots, L_m$) and, for convenience, is endowed with the blackboard framing. (To justify this convenience, simply apply a Reidemeister move of type 1 to each of the components of $L$ so that the blackboard framing agrees with the desired integral surgery coefficients. This can be done in polynomial time because we encode the surgery coefficients in unary.) We will also assume, for convenience, that $L$ is in "standard plat position." This means that all of the local minima of the diagram (which, recall, is a picture in the plane) occur below all crossings, all local maxima of the diagram occur above all crossings, and the sets of cups and caps have no "nesting". (This convenience can be justified by the fact that any link diagram can be put in standard plat position in polynomial time by simply applying a sequence of Reidemeister 2 moves.) This means $L$ is entirely determined by a braid word. See Figure 8.
- $\sigma(L)$ is the signature of $L$: the number of positive eigenvalues of the linking matrix minus the number of negative eigenvalues. (This can be computed in polynomial time from $L$; see the sketch following this list.)
- $\Delta_{\pm}$ are the Gauss sums of $\mathcal{D}$.
- $\langle L, c \rangle$ is the evaluation of the colored ribbon graph defined by coloring the components of $L$ by a function $c : \{L_1, \dots, L_m\} \to \mathrm{Irr}(\mathcal{D})$.
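The signature term can be computed exactly, for example as in the following Python sketch, which uses sympy so that eigenvalue signs are determined exactly rather than in floating point; the function name and the example matrix are illustrative only.

```python
# Signature of the symmetric integer linking matrix of the framed link L.
from sympy import Matrix

def linking_matrix_signature(linking_matrix) -> int:
    """Number of positive eigenvalues minus number of negative eigenvalues."""
    M = Matrix(linking_matrix)
    assert M == M.T, "a linking matrix is symmetric"
    signature = 0
    for eigenvalue, multiplicity in M.eigenvals().items():
        if eigenvalue.is_positive:
            signature += multiplicity
        elif eigenvalue.is_negative:
            signature -= multiplicity
    return signature

if __name__ == "__main__":
    # Linking matrix of a 2-component link with framings 1 and 1 and linking number 1:
    # eigenvalues are 0 and 2, so the signature is 1.
    print(linking_matrix_signature([[1, 1], [1, 1]]))
```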
We now define the domain $D$ to be the disjoint union of the set $\mathrm{Irr}(\mathcal{D})$ of simple objects and the set of labels of the trivalent Hom spaces, as in the proof of Theorem 1(a). As hinted at in Lemma 4, we need $\mathbb{C}$-valued constraint functions on the domain that implement bubble pops, tadpole removals, $F$-moves, etc. We define them as follows.
First, there are coefficients, indexed by simple objects of $\mathcal{D}$ and labels of the corresponding trivalent Hom spaces, that implement the bubble pop.
Next, there are coefficients, again indexed by simple objects and Hom-space labels, that implement tadpole trims. There is also an upside-down version of the tadpole trim, for which we include analogous coefficients.
We also define coefficients, indexed by simple objects and Hom-space labels, that implement the "left" vertex spiral, and we similarly define coefficients to implement the right vertex spiral. There are upside-down versions of these vertex spirals, for which we likewise include coefficients.
We need the $F$-matrices, whose coefficients (indexed by simple objects and Hom-space labels) satisfy the usual recoupling equation, as well as the coefficients of the inverse $F$-matrices. We also need to include the matrix coefficients implementing the upside-down versions of these pictures.
We then extend all of the above coefficients to functions on our domain in the same "trivial" way as in the proof of part (a): each function is set to $0$ whenever its arguments are not of the appropriate types.
We also need to implement braidings, which are described diagrammatically via $R$-moves. Recall that the $R$-symbols are the coefficients, indexed by simple objects and Hom-space labels, that express a crossing adjacent to a trivalent vertex in terms of uncrossed trivalent vertices.
The inverse $R$-symbols are similarly defined to describe the inverse braiding. We turn these $R$-symbols into 5-ary functions on the entire domain in the same trivial way as before, namely by setting them to $0$ on arguments of the wrong types.
Finally, we define 1-ary dimension functions, 1-ary Gauss sum functions (and their inverses), 1-ary dual functions, and 2-ary Kronecker delta functions, each extended to the whole domain in the same trivial way.
With all of this, we define our weighted constraint family $\mathcal{F}_{\mathcal{D}}$ to consist of all of the functions just described.
We reiterate that all of this is computed independently of $M$, and can simply be considered as part of what it means to "have the data" of $\mathcal{D}$.
We conclude our proof by describing how to encode a surgery presentation $L$ of $M$ into an instance $I_M$ of $\#\mathrm{CSP}(\mathcal{F}_{\mathcal{D}})$. In addition to the conveniences described above, we assume that the surgery link diagram $L$ is oriented, embedded in the plane, and given by the plat closure of a braid word, so that each crossing lies in a horizontal band of its own and the only maxima and minima lie above and below all of the crossings, respectively.
In order to describe the variables that will be involved in the instance we want, we first describe a polynomial time algorithm to replace $L$ with a planar trivalent graph $\Gamma$:
1. At each crossing, insert trivalent vertices to resolve the identity (see Figure 7) so that there is a vertex directly adjacent to the crossing, resulting in a crossed trivalent graph.
2. Perform an $R$-move for each crossing, resulting in a trivalent graph in $\mathbb{R}^2$ (after potentially rescaling the diagram).
We can now identify the tuple of variables (valued in the domain $D$) that will be involved in our desired instance $I_M$. Recall that $m$ is the number of components of $L$, $\sigma$ is the signature, and let $n$ denote the number of crossings of $L$. Let the sequence of graphs provided by Lemma 4 (applied to the planar trivalent graph $\Gamma$ constructed above) be fixed, and define $\mathcal{V}$ to be the sum of the numbers of vertices in all of these graphs. Similarly, define $\mathcal{E}$ to be the sum of the numbers of edges in the graphs. Then define the tuple of variables associated to the instance to consist of one variable for each of these $\mathcal{V}$ vertices and $\mathcal{E}$ edges.
To determine the functions involved in $I_M$, we simply need to keep appropriate account of the sequence of diagrammatic operations involved in taking $L$ to $\Gamma$ and then taking $\Gamma$ to the empty diagram. For each diagrammatic operation in the algorithms above, we will define a list of functions in $\mathcal{F}_{\mathcal{D}}$ which account for the contribution of those operations to $\mathrm{RT}_{\mathcal{D}}(M)$.
The first operations are those involved in step (1) of the process of taking $L$ to $\Gamma$, and involve the insertion of "resolutions of the identity" at every crossing of $L$. To account for these operations, we define a list of Kronecker delta functions, each of which pairs up edges which used to belong to the same link component before the operation. For example, in Figure 7, the top-right edge of the upper vertex and the bottom-right edge of the lower vertex were a part of the same link component before the operation, so we introduce a Kronecker delta function between the associated variables. The middle edge is free to vary, as in the original algorithm there is a sum over this edge.
We then account for the operations in step (2) of the process of taking $L$ to $\Gamma$, all of which are $R$-moves. Each operation happens locally on the diagram, so for each crossing we define a list containing the corresponding 5-ary $R$-symbol function (or inverse $R$-symbol function, depending on which strand crosses over), applied to the variables of the edges at the relevant trivalent vertex in the relevant order, the variable labeling the vertex before the $R$-move, and the variable labeling the vertex after the operation.
The next operations are given using the algorithm of Lemma 4. The algorithm provides an ordered list of operations, where upon completion of the final operation the graph is empty. Each operation here is local, so we need only consider the local changes when defining our lists. Consider an operation in this sequence:
- If the operation is a circle removal, define a list which contains the relevant dimension function.
- If the operation is a tadpole trim, define a list which contains the relevant tadpole-trim function (or its upside-down version).
- If the operation is a bubble pop, we define a list which contains the relevant bubble-pop function.
- If the operation is a vertex spiral, we define a list which contains the relevant left or right vertex-spiral function (or one of their upside-down versions).
- If the operation is an $F$-move, we define a list which contains the relevant $F$-matrix or inverse $F$-matrix functions.
- If the operation is an orientation reversal, then define a list which contains a Kronecker delta function pairing the variable associated to the edge before the operation with the variable associated to the edge after the operation.
For each list, we append Kronecker delta functions on all edges or vertices which are held constant before and after performing the given operation. E.g., if the edge associated to some variable will be held constant after an $F$-move, and will then be associated with a new variable in the trivalent graph obtained after the operation, then we introduce the Kronecker delta function applied to this pair of variables to the list associated to the operation. If $\sigma \geq 0$, we then define the instance $I_M$ to consist of $\sigma$ copies of the Gauss sum functions implementing the normalization, together with the dimension functions and the union of all of the lists which were defined by the algorithm above. If $\sigma < 0$, then replace the Gauss sum functions with the corresponding inverse Gauss sum functions. By virtue of Lemma 4, the construction of $I_M$ can be carried out in polynomial time in the size of $L$.
We now explain why this choice of instance defines an output $Z(I_M)$ which computes the Reshetikhin-Turaev invariant $\mathrm{RT}_{\mathcal{D}}(M)$. The Gauss sum (or inverse Gauss sum) functions at the beginning implement the normalizing factor, since these functions are constant. This guarantees they may be factored out of the sum as global factors. The dimension functions implement the product of quantum dimensions we see in the sum.
Notice that if at any point a coloring is not admissible, the term in $Z(I_M)$ is 0. It is thus clear that all circle removals, edge orientation reversals, bubble pops, and tadpole trims are correctly implemented, so we just need to check that the $F$-moves, $R$-moves, and vertex spirals are correct.
In the standard algorithm, when an $F$-move occurs, there are three additional variables introduced in the summation: one for the interior edge and one for each of the two interior vertices. These additional variables are introduced here as well: since we are summing over all edges and vertices that occur throughout the algorithm, the new edge and vertices created in the $F$-move will contribute to the sum, while the rest are held constant due to the inclusion of the Kronecker delta functions.
Similarly, we see that $R$-moves and vertex spirals are correctly implemented. Note also that there are no extraneous variables in the sum, since the algorithm guarantees that the graph will become empty, and our instance is completely determined by how the algorithm behaves.
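For illustration, the following Python sketch records the bookkeeping just described: given an ordered log of diagrammatic operations, it emits one weight function per operation together with the Kronecker deltas tying together the variables that are held constant from one diagram to the next. The `Operation` record and the constraint labels are hypothetical stand-ins, not the paper's notation.

```python
# Assemble constraint tuples from an ordered log of diagrammatic operations.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Operation:
    kind: str                          # e.g. "R-move", "F-move", "bubble pop", ...
    variables: Tuple[int, ...]         # variables that the operation's weight acts on
    carried: Dict[int, int] = field(default_factory=dict)  # old variable -> new variable

def constraints_from_log(log: List[Operation]) -> List[Tuple[str, Tuple[int, ...]]]:
    """One weight function per operation, plus a 2-ary Kronecker delta for every
    variable carried unchanged into the next diagram."""
    instance: List[Tuple[str, Tuple[int, ...]]] = []
    for op in log:
        instance.append((op.kind, op.variables))
        for old_var, new_var in op.carried.items():
            instance.append(("delta", (old_var, new_var)))
    return instance

if __name__ == "__main__":
    # Two toy steps: an R-move acting on variables 0..4 that carries variable 2 to
    # variable 7, followed by a bubble pop on variables 5, 6, 7.
    log = [Operation("R-move", (0, 1, 2, 3, 4), {2: 7}),
           Operation("bubble pop", (5, 6, 7))]
    for constraint in constraints_from_log(log):
        print(constraint)
```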
