Fully Characterizing Lossy Catalytic Computation
Abstract
A catalytic machine is a model of computation where a traditional space-bounded machine is augmented with an additional, significantly larger, “catalytic” tape, which, while being available as a work tape, has the caveat of being initialized with an arbitrary string, which must be preserved at the end of the computation. Despite this restriction, catalytic machines have been shown to have surprising additional power; a logspace machine with a polynomial length catalytic tape, known as catalytic logspace (CL), can compute problems which are believed to be impossible for L.
A fundamental question of the model is whether the catalytic condition, of leaving the catalytic tape in its exact original configuration, is robust to minor deviations. This study was initialized by Gupta et al. (2024), who defined lossy catalytic logspace (LCL[e]) as a variant of CL where we allow up to e errors when resetting the catalytic tape. They showed that LCL[e] = CL for any e = O(1), which remains the frontier of our understanding.
In this work we completely characterize lossy catalytic space (LCSPACE[e](s, c)) in terms of ordinary catalytic space (CSPACE(s, c)). We show that
LCSPACE[e](s, c) = CSPACE(Θ(s + e log c), Θ(c)).
In other words, allowing e errors on a catalytic tape of length c is equivalent, up to a constant stretch, to an errorless catalytic machine with an additional e log c bits of ordinary working memory.
As a consequence, we show that for any e(n) = ω(1), LCL[e(n)] = CL implies SPACE(e(n) · log n) ⊆ ZPP, thus giving a barrier to any improvement beyond e = O(1). We also show equivalent results for non-deterministic and randomized catalytic space.
Keywords and phrases: Space complexity, catalytic computation, error-correcting codes
Funding: Marten Folkertsma: Supported by the Dutch Ministry of Economic Affairs and Climate Policy (EZK), as part of the Quantum Delta NL program.
2012 ACM Subject Classification: Theory of computation → Complexity theory and logic; Theory of computation → Complexity classes
Acknowledgements: We acknowledge useful conversations with Swastik Kopparty and Geoffrey Mon about error-correcting codes and how to apply them.
Editors: Raghu Meka
Series and Publisher: Leibniz International Proceedings in Informatics, Schloss Dagstuhl – Leibniz-Zentrum für Informatik
1 Introduction
1.1 Catalytic computation
Within space-bounded computation, the catalytic computing framework, first introduced by Buhrman, Cleve, Koucký, Loff, and Speelman [4], models the question of whether or not full memory can be a computational resource. Their main object of study is a catalytic logspace (CL) machine, in which a traditional logspace-bounded Turing machine is given access to a second work tape, polynomial in length, called the catalytic tape; while this tape is exponentially longer than the logspace work tape, it is already full with some string at the outset, and this string must be preserved by the overall computation.
Surprisingly, [4] show that CL can be much more powerful than L, with the catalytic tape being at least as powerful a resource as non-determinism (NL ⊆ CL), randomness (BPL ⊆ CL), and more (TC1 ⊆ CL). They also showed that its power is nevertheless limited and falls far short of PSPACE, namely CL ⊆ ZPP.
This work spawned a long sequence of explorations of the power of catalytic space. Given the base model of CL there are many possible variations and structural questions to be asked, such as the power of randomness [11, 6], non-determinism [5], non-uniformity [21, 23, 9, 10], and other variants [15, 2]. There have also been many works connecting the catalytic framework to broader questions in complexity theory, such as space-bounded derandomization [22, 14, 19], as well as adaptations of catalytic techniques to solve longstanding open questions such as compositional upper bounds for space [7, 8, 10] (see [18, 20] for surveys on the topic).
1.2 Lossy catalytic computation
Besides these more standard structural questions, there are also catalytic variants which are more specific to the catalytic space restriction. In particular, Gupta et al. [16] initiated the study of lossy catalytic computing, wherein the catalytic tape need not be exactly reset to its initial configuration. This model, which we refer to as LCSPACE, essentially asks how robust the core definition of catalytic space is to seemingly small relaxations; for example, in the quantum setting some computation error (albeit of a different form) is necessary for converting between different definitions based on allowed operations.
To begin, note that CL with e errors trivially contains the class SPACE(e + log n), by simply erasing the first e bits of the catalytic tape and using them as free memory. Because we have not managed to prove that any space-bounded class beyond L is contained in ZPP, we should not expect to be able to prove that CL is the same as CL with ω(log n) errors. The question, then, is to understand where, in the range of O(1) to O(log n), is the acceptable number of errors that CL can provably tolerate.
As an initial answer to the previous question, [16] show that CL gains no additional power from allowing any constant number of errors on the catalytic tape, i.e., LCL[O(1)] = CL. This remains the frontier of our knowledge, and Mertz [20] posed it as an open question to improve this result to any superconstant number of errors, or, alternatively, to provide evidence against being able to prove such a collapse. (We cannot expect an unconditional separation between CL and any LCL[e], as even separating PSPACE from e.g. ZPP remains wide open.) Recently, Cook et al. [6] showed that a different error-prone model, namely randomized CL, is no more powerful than the base CL model.
1.3 Our results
In this work we completely characterize lossy catalytic space in terms of ordinary catalytic space. Let CSPACE(s, c) denote catalytic machines with free space s and catalytic space c, and let LCSPACE[e](s, c) be the same with up to e errors allowed in resetting the catalytic tape. We show that these errors are equivalent to an additional e log c free bits of memory, up to constant factor losses.
Theorem 1.
Let e, s, c be such that e ≤ c^{1−δ} for some constant δ > 0. Then
LCSPACE[e](s, c) = CSPACE(Θ(s + e log c), Θ(c)).
Besides characterizing LCSPACE[e](s, c), the main takeaway of Theorem 1 is that allowing seemingly minor (superconstant) errors in the resetting condition can give an LCSPACE machine surprising power. A concrete instantiation of this view is the following direct corollary.
Corollary 2.
For any e(n) = ω(1), LCL[e(n)] = CL implies SPACE(e(n) · log n) ⊆ ZPP.
If we revisit the assumption that we cannot hope to prove SPACE(s(n)) is in ZPP for any s(n) = ω(log n), then Corollary 2 implies the result of [16] is optimal with respect to e; any result of the form LCL[e] = CL for e = ω(1) is out of reach.
We also show that our proof extends to catalytic machines with additional power beyond errors, namely non-deterministic and randomized catalytic space.
Theorem 3.
Let X ∈ {N, BP}, and let e, s, c be such that e ≤ c^{1−δ} for some constant δ > 0. Then
XLCSPACE[e](s, c) = XCSPACE(Θ(s + e log c), Θ(c)).
We briefly remark that the restriction e ≤ c^{1−δ} in all our results is only needed to get the constant stretch in the catalytic tape; we discuss the unrestricted setting in Section 3.2.
1.4 Open problems
1.4.1 Errors in expectation
A related question asked in [20] is whether or not CL is equivalent to CL with e errors allowed in expectation over all starting catalytic tapes. This represents a different notion of distance between catalytic tapes, as opposed to Hamming distance, that may be more applicable to settings such as quantum computation. This question has received some attention in a related form by Bisoyi et al. [1], who introduce almost catalytic machines, which perfectly reset some catalytic tapes and are completely unrestricted on others.
However, no general results are known for expected errors – the results in [1] are very structured – and all techniques in our paper fail to restore the tape in pathological cases where a few starting tapes end up with potentially many errors. Furthermore, a barrier result was pointed out by an anonymous reviewer. (The main idea is that allowing virtually unlimited error in an exponentially small fraction of catalytic tapes gives us a strong form of the "compress-or-random" framework of previous papers; we can simulate a randomized algorithm using the catalytic tape as our source of randomness, and in the exponentially unlikely event the tape is not sufficiently entropic we simply erase it and run brute force. Formalizing this intuition and combining it with the results of [6] gives a derandomization barrier to showing even the expected-error analogue of our results, namely that randomized TC1, which is not known to even be in P, would reduce to the lossy code problem.)
1.4.2 Randomized error-prone catalytic space
Recent work of Cook et al. [6] shows that BPCL = CL, which, in conjunction with Theorem 3, seems to indicate that our theorems can be unified to show the connection between ordinary CSPACE and CSPACE which is both randomized and lossy, i.e. BPLCSPACE[e](s, c). This would characterize how deterministic catalytic space handles both natural kinds of "error", namely both error in the output from the randomness and error in resetting the catalytic tape.
However, the proof of [6] only works when c = 2^{Θ(s)}, and our connection to error-prone space incurs an e log c blowup in free space, putting us outside this regime. A generalization of their result, i.e. showing BPCSPACE(s, c) ⊆ CSPACE(O(s), poly(c)) for every s and c, would tie off this connection. Note that the polynomial blowup allowed in the catalytic tape means this result would not yield novel derandomization for ordinary space; even for c = s this would only show that derandomization overheads can be pushed into a polylogarithmic length catalytic tape, which was already shown by Pyne [22].
1.4.3 Lossy catalytic branching programs
Due to the flexibility in the conditions of Theorem 1, the results of Theorem 3 are likely to extend to other catalytic settings; for example, it is immediate to extend both results to CSPACE with advice. We focus on non-determinism and randomness simply because these are two of the most well-studied catalytic variants, and future works are free to adapt these proofs to their own settings.
In terms of notable omissions, however, one setting where one direction does not yet extend, and which is very related to advice, is the catalytic branching program model, which is a syntactic, and by extension non-uniform, way of capturing CSPACE. The issue here is simply that such machines can read and write their entire work tape in one step, which our simulation of CSPACE by LCSPACE is unequipped to handle. As we discuss in the full version of this paper, showing such branching programs are reversible would be sufficient to close this off.
1.4.4 Exact simulation space requirements
In the current simulation of e errors using clean space, we use O(e log c) clean space. By contrast, in our simulation of clean space using errors, we use only O(e) more errors. If e errors can be simulated in e log c + O(log c) clean space instead, then there is only very low overhead in switching between the two perspectives. This would tighten the correspondence between errors and space that we establish. However, since the distance between two codewords required to correct e errors is 2e + 1, a different error-correcting code would be necessary to reach clean space e log c + O(log c).
2 Preliminaries
We begin by defining catalytic machines as introduced by Buhrman et al. [4].
Definition 4 (Catalytic space).
A catalytic Turing machine is a space-bounded Turing machine with two work tapes: 1) a read-write work tape of length s(n), which is initialized to 0^{s(n)}, and 2) a read-write catalytic tape of length c(n), which is initialized to an arbitrary state τ ∈ {0,1}^{c(n)}. On any input x and initial catalytic state τ, a catalytic Turing machine has the property that at the end of the computation on input x, the catalytic tape will be in the initial state τ.
In this work we focus on a relaxation of catalytic space by Gupta, Jain, Sharma, and Tewari [16], where we are allowed to make some errors in resetting the catalytic tape.
Definition 5 (Lossy catalytic space).
A lossy catalytic Turing machine with e errors is a catalytic machine where at the end of the computation on any input x and initial catalytic state τ, instead of requiring that the catalytic tape be in state τ, the catalytic tape can be in any state τ' such that τ and τ' differ in at most e locations.
Lastly we specify the basic complexity classes arising from our two catalytic definitions, as well as their specialization to the "logspace" setting, where most research interest at the moment lies.
Definition 6.
We write
- CSPACE(s, c): the class of languages which can be recognized by catalytic Turing machines with work space s and catalytic space c.
- LCSPACE[e](s, c): the class of languages which can be recognized by lossy catalytic Turing machines with work space s, catalytic space c, and e errors.
We additionally write CL := CSPACE(log n, poly(n)) and LCL[e] := LCSPACE[e](log n, poly(n)).
We note that throughout this paper we write C(f) as a shorthand for C(O(f)) for a complexity class C and function f, and similarly for classes with multiple parameters or other asymptotics (e.g. Θ(f)).
3 Main theorem
In this section we will prove Theorem 1. We will do so via a simulation argument for each direction in turn.
3.1 Simulating errors with space
First, we show that LCSPACE[e](s, c) ⊆ CSPACE(O(s + e log c), c). In fact, we will not need any increase in the length of our catalytic tape.
Theorem 7.
Let e, s, c be arbitrary. Then
LCSPACE[e](s, c) ⊆ CSPACE(O(s + e log c), c).
We note that this was also proven in [16] for the case of e = O(1), but we will pursue a different proof, based on error-correcting codes, which will allow us to generalize to other catalytic models in Section 4.
Proof.
Let M be an LCSPACE[e](s, c) machine. We will devise a CSPACE(O(s + e log c), c) machine M' which simulates M. Note that in this section, we will not use our parameter restriction on e; this direction holds for every setting of e, s, and c. We will presume that e < c, as the inclusion becomes trivial otherwise.
Our simulation will go via an error-correcting code. In particular we will use BCH codes, named after Bose, Ray-Chaudhuri, and Hocquenghem [3, 17], which we define as per [13, 12]. (Technically, because of our parameters, they can even be considered Reed–Solomon codes, which are a special case of BCH codes; nevertheless we follow the presentation of the more general code form.) We define the mapping BCH and prove the following lemma in the appendix to the full version of our paper.
Lemma 8.
Let e < c. There exists a mapping BCH : {0,1}^c → {0,1}^{c + O(e log c)} with the following operations:
- Encoding: Enc takes as input a string w of length c, plus an additional O(e log c) bits initialized to 0, and outputs a codeword w':
Enc(w ∘ 0^{O(e log c)}) = w' = BCH(w).
Furthermore, all outputs generated this way have minimum distance 2e + 1 from one another.
- Decoding: Dec takes as input a string w'' of length c + O(e log c), with the promise that there exists a string w of length c such that w'' differs from BCH(w) in at most e locations, and outputs this string w:
Dec(w'') = w ∘ 0^{O(e log c)}.
Furthermore, both Enc and Dec are in-place replacements of the input strings; they require at most an additional O(e log c) bits of free memory.
We now move on to the simulation of our machine M. Our machine M' acts as follows:
1. Initialization: use the function Enc to encode the initial state w of the catalytic tape into a codeword, using O(e log c) additional bits from clean space:
w' = Enc(w ∘ 0^{O(e log c)}) = BCH(w).
2. Simulation: run M using s bits of clean space and the first c bits of w' as the catalytic tape. When M finishes the calculation, we record the answer in a bit of the free work tape. The catalytic tape is, at this point, in a state which differs in at most e locations from w, and so the whole string w'' in memory differs in at most e locations from w'.
3. Cleanup: use the function Dec to detect and correct the errors on our resulting catalytic tape:
Dec(w'') = w ∘ 0^{O(e log c)}.
Once we finish this process, we output our saved answer and halt.
The correctness of M' is clear, as it gives the same output as M. By our error guarantee on M and the correctness of Dec, our catalytic tape is successfully reset to w. Our catalytic memory is c as before, while for our free work space we require s bits to simulate M, an additional O(e log c) zero bits for our codewords, and O(e log c) space for Enc and Dec, for O(s + e log c) space in total.
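To make the Enc/Dec paradigm above concrete, here is a minimal, self-contained Python sketch of the three phases for the special case e = 1, substituting the Hamming syndrome (plus an overall parity bit) for the general BCH codes of Lemma 8; the names syndrome, encode, and decode are our own illustrative choices, not the paper's.

```python
# Toy model of Theorem 7's simulation for e = 1: the Hamming syndrome plays
# the role of the BCH parity bits, and it lives entirely in clean space.
from functools import reduce
import random

def syndrome(w):
    """XOR of the indices of the 1-entries of w, plus an overall parity bit."""
    return (reduce(lambda a, j: a ^ j, (j for j, b in enumerate(w) if b), 0),
            sum(w) % 2)

def encode(w):
    # "Enc": systematic encoding, so the tape w itself is untouched; only
    # O(log c) syndrome bits are produced, stored in clean space.
    return syndrome(w)

def decode(u, syn):
    # "Dec": correct at most one error on the tape u, in place.
    s, p = syndrome(u)
    if p != syn[1]:            # parity differs iff exactly one bit was flipped
        u[s ^ syn[0]] ^= 1     # a flip at position j changed the XOR by j
    assert syndrome(u) == syn

w = [random.randint(0, 1) for _ in range(16)]
orig, syn = list(w), encode(w)     # 1. Initialization
w[random.randrange(16)] ^= 1       # 2. Simulation leaves at most one error
decode(w, syn)                     # 3. Cleanup
assert w == orig                   # catalytic tape exactly restored
```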
Note 9.
There is an alternative proof of this point, one which gets better parameters and relies on an interesting characterization of space, namely the reversibility of space. This proof is a simplification and extension of the one originally provided in [16], and we provide it in the appendix to the full version of our paper for those interested.
3.2 Simulating space with errors
We now show the other direction of Theorem 1, i.e. that CSPACE(s + e log c, c) is contained in LCSPACE[O(e)](O(s), O(c)).
Theorem 10.
Let e, s, c, and δ > 0 be such that e ≤ c^{1−δ}. Then
CSPACE(s + e log c, c) ⊆ LCSPACE[O(e/δ)](O(s), O(c/δ)).
Since s = Ω(log c) by the definition of a catalytic machine, this achieves the reverse direction of Theorem 1 with very small blowups in s and c, and for e bounded by a small polynomial in c we get a negligible error blowup as well. Note that we allow δ to be subconstant, and so our proof is not limited to e = c^{1−Ω(1)}; however, we will pay for larger values of e in the error blowup, and for e = c^{1−o(1)} this factor 1/δ becomes superconstant.
To understand our construction, we will first prove a version with looser space parameters. This result is incomparable to Theorem 10; although we lose a factor of e + 1 in our catalytic space, in exchange we have no restrictions on e, and no blowup in the number of errors either.
Theorem 11.
Let e, s, c be such that c is a power of 2. Then
CSPACE(s + e log c, c) ⊆ LCSPACE[e](s + O(log(ec)), (e + 1) · c).
Proof.
Let M be a CSPACE(s + e log c, c) machine. We will devise an LCSPACE[e](s + O(log(ec)), (e + 1) · c) machine M' which simulates M. By the definition of a Turing machine, we will assume that in any time step M only reads and writes at most one bit on the work tape.
Throughout this proof, we will associate [2^k] with {0,1}^k in the usual manner, i.e. subtracting 1 and taking the binary representation, and so we will use them interchangeably. Our workhorse is the following folklore construction. (This construction is based on the solution to the so-called "almost impossible chessboard puzzle"; interested readers can find the setup and solution in videos on the YouTube channels 3Blue1Brown (https://www.youtube.com/watch?v=wTJI_WuZSwE) and Stand-up Maths (https://www.youtube.com/watch?v=as7Gkm7Y7h4). It can also be seen as the syndrome of the Hamming code.)
Lemma 12.
For every k ∈ ℕ, there exists a mapping mem : {0,1}^{2^k} → {0,1}^k, computable in space O(k), such that the following holds: for any x ∈ {0,1}^{2^k} and any y ∈ {0,1}^k,
mem(x ⊕ 1_{mem(x) ⊕ y}) = y,
where 1_j is the vector of length 2^k with a single 1 in position j.
Intuitively, Lemma 12 gives us an easily computable mapping where any transformation of the k-bit output string can be achieved by flipping one bit of the 2^k-bit input string, with the location of this single bitflip being determined only by the locations where the current and target output strings differ. (It is not particularly important that the location to flip in x to achieve y is exactly mem(x) ⊕ y, but only that there exists some such place to flip, depending only on the mask mem(x) ⊕ y, and that this location can be easily calculated given the mask.)
Proof of Lemma 12.
Let x ∈ {0,1}^{2^k} be indexed by bitstrings j ∈ {0,1}^k. We will define our mapping mem as the entrywise sum of all indices j where x_j = 1, i.e.
mem(x) = ⊕_{j : x_j = 1} j.
This is clearly computable in space O(k), as we need only store j and our current sum. Now note that for any z ∈ {0,1}^k, flipping the entry x_z, i.e. x ↦ x ⊕ 1_z, flips every entry of mem(x) where z is 1 and leaves every other entry unchanged, which gives mem(x ⊕ 1_z) = mem(x) ⊕ z; setting z := mem(x) ⊕ y then gives mem(x ⊕ 1_z) = y as claimed.
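The following short Python sketch illustrates the construction (mem and retarget are our own illustrative names, not the paper's): mem is the XOR of the indices of the 1-entries, and one flip at index mem(x) ⊕ y retargets the output to any desired y.

```python
# A minimal sketch of the mapping mem from Lemma 12.
from functools import reduce
import random

def mem(x):
    """XOR of all indices j with x[j] = 1, viewed as a k-bit string."""
    return reduce(lambda a, j: a ^ j, (j for j, b in enumerate(x) if b), 0)

def retarget(x, y):
    """Flip the single bit of x at index mem(x) XOR y; afterwards mem(x) == y."""
    x[mem(x) ^ y] ^= 1

k = 3
x = [random.randint(0, 1) for _ in range(2 ** k)]
for y in range(2 ** k):            # every k-bit target is reachable in one flip
    retarget(x, y)
    assert mem(x) == y
```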
At a high level, our machine M' works as follows. M' will use its s bits of free memory and the first c bits of catalytic memory to represent their equivalent blocks in M, i.e. s bits of free memory and c bits of catalytic memory. We will break the remaining e · c bits of our catalytic tape of M' into e blocks of size c each, and we denote by B_i the memory in block i. We apply Lemma 12 with 2^k = c – recall that we assume c is a power of 2 – and use each mem(B_i) to represent an additional log c bits of free memory, giving us an additional e log c bits of memory in total. Our additional workspace memory will be used to compute the mem mapping, serve as pointers, and other assorted tasks.
Before formally stating , we mention a special case of the construction of Lemma 12, which will allow us to use it in a reversible operational manner.
Claim 13.
Let x^0, x^1, …, x^T ∈ {0,1}^{2^k} be such that 1) x^t and x^{t+1} differ in exactly one coordinate for all t < T; 2) mem(x^t) and mem(x^{t+1}) differ in exactly one coordinate for all t < T; and 3) mem(x^0) = mem(x^T). Then x^0 = x^T.
Proof.
For all t < T, let j_t be the location where x^t and x^{t+1} differ, and let i_t be the coordinate where mem(x^t) and mem(x^{t+1}) differ. Since mem(x^0) = mem(x^T), each value i must appear an even number of times in the list i_0, …, i_{T−1}, and since flipping only the coordinate i in mem(x) can only be obtained from one flip in x, at the unique location e_i (the index with a single 1 in coordinate i), it follows that each value j must appear an even number of times in the list j_0, …, j_{T−1}. This means that x^T is x^0 with each bit flipped an even number of times, or in other words x^T = x^0.
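A quick property test of Claim 13, under the same illustrative mem as above: drive the represented value through an arbitrary walk of single-coordinate updates, return it to its starting value one coordinate at a time, and the whole block comes back exactly.

```python
# Sanity check for Claim 13: single-coordinate mem updates are reversible.
from functools import reduce
import random

def mem(x):
    return reduce(lambda a, j: a ^ j, (j for j, b in enumerate(x) if b), 0)

k = 4
x = [random.randint(0, 1) for _ in range(2 ** k)]
x0, y0 = list(x), mem(x)

y = y0
for _ in range(100):               # walk: each step flips one coordinate of mem
    y ^= 1 << random.randrange(k)
    x[mem(x) ^ y] ^= 1             # realized by a single flip in x
for i in range(k):                 # return to y0 one coordinate at a time
    if (y ^ y0) >> i & 1:
        y ^= 1 << i
        x[mem(x) ^ y] ^= 1

assert mem(x) == y0 and x == x0    # the whole block is restored
```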
We now concretely define our machine M':
1. Initialization: for each block B_i, calculate mem(B_i) and flip the element of B_i at that index:
B_i ← B_i ⊕ 1_{mem(B_i)}.
Define τ_init to be the catalytic memory after this process. Note that we now have exactly e errors on the tape, one in each B_i, and we are guaranteed that mem(B_i) = 0^{log c} for every i.
2. Simulation: run M using s free work bits and c catalytic bits, with the concatenation of the values mem(B_1), …, mem(B_e) as the other e log c free work bits. To do this, whenever we read or write a bit in these e log c bits of memory, we find the block B_i responsible for this bit, calculate mem(B_i), and update B_i using one bitflip to reflect how mem(B_i) changes to a new value y according to the operation of M:
B_i ← B_i ⊕ 1_{mem(B_i) ⊕ y}.
If mem(B_i) is unchanged, we make no updates to B_i.
3. Cleanup: when we reach the end of M's computation, record the answer on the free work tape and set each value mem(B_i) back to 0^{log c}, one bit at a time:
B_i ← B_i ⊕ 1_{mem(B_i) ⊕ y} for each intermediate target y.
Once we finish this process, we output our saved answer and halt.
The correctness of M' is clear, as we output the same value as M. Our catalytic space usage is (e + 1) · c by construction, while our free space usage is s to simulate M, one extra bit to save the output, and any additional space required to handle the simulation of the additional e log c work bits. In particular, we need O(log(e log c)) bits for a pointer into these bits and O(log c) bits for the computation of mem by Lemma 12, for a total space usage of s + O(log(ec)) as claimed.
We also claim that our lossy catalytic condition is satisfied. By the catalytic property of M, we make no errors on the c catalytic bits used for the simulation of M's catalytic space. The initialization step introduces at most one error per B_i, thus giving at most e errors on the catalytic tape. After the initialization step, each other update to B_i corresponds to changing a single bit in mem(B_i), with the final value 0^{log c} being the same as the value after initialization. Thus by Claim 13 we restore the catalytic tape to its position after the initialization phase, and so our end state corresponds to our original catalytic tape with at most e errors, i.e. those induced by the initialization phase, as required.
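Putting the pieces together, the following schematic Python rendering (our own illustration, not the paper's formalism) walks through the three phases of M' and checks both guarantees: cleanup restores the post-initialization state exactly, and at most one error per block remains relative to the original tape.

```python
# Schematic version of the Theorem 11 simulation: e blocks of c bits each
# represent log c "free" bits apiece via mem, at the price of one error each.
from functools import reduce
import random

def mem(x):
    return reduce(lambda a, j: a ^ j, (j for j, b in enumerate(x) if b), 0)

c, e = 16, 3                        # block length (a power of 2), block count
k = c.bit_length() - 1              # log c bits represented per block
tape = [[random.randint(0, 1) for _ in range(c)] for _ in range(e)]
initial = [list(B) for B in tape]

# 1. Initialization: one flip per block sets every mem(B_i) to 0^{log c}.
for B in tape:
    B[mem(B)] ^= 1
post_init = [list(B) for B in tape]
assert all(mem(B) == 0 for B in tape)

def write_bit(i, pos, val):
    """Write bit pos of the k bits represented by block i (at most one flip)."""
    B = tape[i]
    y = mem(B)
    target = (y & ~(1 << pos)) | (val << pos)
    if target != y:
        B[y ^ target] ^= 1          # y ^ target has a single 1, in coordinate pos

def read_bit(i, pos):
    return mem(tape[i]) >> pos & 1

# 2. Simulation: M freely reads and writes its e*k extra work bits.
for _ in range(200):
    write_bit(random.randrange(e), random.randrange(k), random.randint(0, 1))

# 3. Cleanup: reset each mem(B_i) to 0^{log c}, one bit at a time.
for i in range(e):
    for pos in range(k):
        write_bit(i, pos, 0)

assert tape == post_init            # Claim 13: post-initialization state restored
errors = sum(a != b for B1, B2 in zip(tape, initial) for a, b in zip(B1, B2))
assert errors <= e                  # at most one lingering error per block
```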
We now return to Theorem 10, which requires only a small modification of the above proof, namely to break the catalytic tape into more, smaller blocks, which reduces its required length at the cost of a few extra errors. This modification works because the number of pure bits represented is logarithmic in the length of the block, and so making the blocks smaller barely affects the number of bits represented; for example, 2^k bits in a block still let you represent k bits, so halving the block size only loses one bit per block.
Proof of Theorem 10.
Let M be a CSPACE(s + e log c, c) machine. We will devise an LCSPACE[e'](O(s), O(c/δ)) machine M' which simulates M, where e' satisfies e' = O(e/δ).
An easy manipulation of e ≤ c^{1−δ} gives us c^δ ≤ c/e, and so there exists a function k such that 2^k is a power of 2 with c^δ/2 ≤ 2^k ≤ c^δ. We will take the same approach as Theorem 11, but now we use blocks of length 2^k each, i.e., using Lemma 12 for this value of k. Since we introduce one error per block, the number of errors the machine makes is at most the number of blocks, namely
⌈e log c / k⌉ = O(e/δ),
while our total catalytic tape has length
c + ⌈e log c / k⌉ · 2^k ≤ c + O(e/δ) · c^δ = O(c/δ).
Lastly, we can use our additional catalytic memory to simulate a work tape of length
⌈e log c / k⌉ · k ≥ e log c,
thus giving us a simulation of e log c work bits as claimed. The correctness and lossy catalytic condition can then be argued as above, and our free space usage is s plus an additional O(log c) bits, for a total space usage of O(s) as claimed.
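For intuition on the bookkeeping, the snippet below checks the parameters numerically under the block length used above (roughly c^δ); the concrete constants here are our reconstruction of the proof's parameters, chosen purely for illustration.

```python
# Numeric sanity check of the Theorem 10 tradeoff: smaller blocks shrink the
# tape to a constant stretch while inflating the error count by only 1/delta.
from math import ceil, log2

c, delta = 2 ** 20, 0.5
e = int(c ** (1 - delta))          # the restriction e <= c^(1 - delta)
k = max(1, int(delta * log2(c)))   # bits represented per block
block = 2 ** k                     # block length, a power of 2, roughly c^delta
need = ceil(e * log2(c))           # e * log c extra work bits to provide
blocks = ceil(need / k)            # one lingering error per block

assert blocks * k >= need              # enough represented bits
assert blocks <= 2 * ceil(e / delta)   # error blowup O(e / delta)
assert c + blocks * block <= 4 * c     # constant stretch of the catalytic tape
print(f"e = {e}, errors = {blocks}, tape = {c + blocks * block}, c = {c}")
```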
4 Further consequences
With this, we have concluded our main theorem and proof. We now move to corollaries and extensions.
4.1 Lossy catalytic logspace with superconstant errors
As stated in the introduction, it immediately follows from Theorem 1 that proving LCL[e] = CL is likely difficult, if not false, for superconstant values of e.
Proof of Corollary 2.
Let e = e(n) = ω(1). By Theorem 1, LCL[e] = LCSPACE[e](O(log n), poly(n)) = CSPACE(Θ(e log n), poly(n)), which in particular contains SPACE(e log n). Thus LCL[e] = CL would give SPACE(e log n) ⊆ CL ⊆ ZPP, where the final containment is due to [4].
4.2 Lossy catalytic space with other resources
As mentioned in Section 1, there are many extensions of the base catalytic model besides LCSPACE, such as randomized, non-deterministic, and non-uniform CSPACE. So far, however, there has been little discussion of classes where more than one such external resource has been utilized. In this section, we will discuss two of the aforementioned models – randomized and non-deterministic CSPACE – in the presence of errors.
Definition 14.
Let f : {0,1}^n → {0,1} be a Boolean function on n inputs.
- A non-deterministic catalytic Turing machine M for f is a catalytic machine which, in addition to its usual input x, has access to a read-once witness string w, of length at most exponential in the total space of M, which has the following properties:
– if f(x) = 1, then there exists a witness w such that M(x, w) = 1;
– if f(x) = 0, then for every witness string w, M(x, w) = 0.
Furthermore, for every witness string w, M restores the catalytic tape to its original configuration.
- A randomized catalytic Turing machine M for f is a catalytic machine which, in addition to its usual input x, has access to a read-once uniformly random string r, of length at most exponential in the total space of M, such that
Pr_r[M(x, r) = f(x)] ≥ 2/3.
Furthermore, for every random string r, M restores the catalytic tape to its original configuration.
We write
- NCSPACE(s, c): the class of languages which can be recognized by non-deterministic catalytic Turing machines with work space s and catalytic space c.
- BPCSPACE(s, c): the class of languages which can be recognized by randomized catalytic Turing machines with work space s and catalytic space c.
Furthermore, for X ∈ {N, BP}, we define XLCSPACE[e](s, c) to be the class with the catalytic resetting definition replaced by the LCSPACE resetting definition, i.e., where e errors are allowed to remain on the catalytic tape at the end of any computation.
Note that we allow the errors to depend on the witness and randomness, respectively. Without delving too deep into these models, however, it is clear that our proof of Theorem 1 carries over to all the above definitions.
Proof sketch of Theorem 3.
As earlier, we need to show both directions. We will prove the same two equivalences as in Theorems 7 and 10, namely, for X ∈ {N, BP}:
1. XLCSPACE[e](s, c) ⊆ XCSPACE(O(s + e log c), c);
2. XCSPACE(s + e log c, c) ⊆ XLCSPACE[O(e/δ)](O(s), O(c/δ)) for e ≤ c^{1−δ}.
Both directions will follow immediately via the same simulation as before. For the forward direction, we simulate our lossy machine by an errorless machine as usual, and then at the last step we correct the changes using our BCH code as before; this works just as before because the code allows us to correct any e errors on the catalytic tape, regardless of how they came about.
For the reverse direction, again our machine will only read or write one bit per time step, and so we use the same approach to simulating our additional e log c bits of free memory, which does not change based on the operation of the rest of the machine. As in the forward direction, simulating the actual workings of the machine via the lossy machine, given our method of simulating the additional bits of memory, is straightforward, and our resetting step at the end again resets our extra catalytic tape regardless of the computation path the machine takes.
References
- [1] Sagar Bisoyi, Krishnamoorthy Dinesh, Bhyabya Deep Rai, and Jayalal Sarma. Almost-catalytic and property-catalytic computation, 2024.
- [2] Sagar Bisoyi, Krishnamoorthy Dinesh, and Jayalal Sarma. On pure space vs catalytic space. Theoretical Computer Science (TCS), 921:112–126, 2022. doi:10.1016/J.TCS.2022.04.005.
- [3] Raj Chandra Bose and Dwijendra K Ray-Chaudhuri. On a class of error correcting binary group codes. Information and control, 3(1):68–79, 1960. doi:10.1016/S0019-9958(60)90287-4.
- [4] Harry Buhrman, Richard Cleve, Michal Koucký, Bruno Loff, and Florian Speelman. Computing with a full memory: catalytic space. In ACM Symposium on Theory of Computing (STOC), pages 857–866, 2014. doi:10.1145/2591796.2591874.
- [5] Harry Buhrman, Michal Koucký, Bruno Loff, and Florian Speelman. Catalytic space: Non-determinism and hierarchy. Theory of Computing Systems (TOCS), 62(1):116–135, 2018. doi:10.1007/S00224-017-9784-7.
- [6] James Cook, Jiatu Li, Ian Mertz, and Edward Pyne. The structure of catalytic space: Capturing randomness and time via compression. Electronic Colloquium on Computational Complexity (ECCC), TR24-106, 2024. URL: https://eccc.weizmann.ac.il/report/2024/106.
- [7] James Cook and Ian Mertz. Catalytic approaches to the tree evaluation problem. In ACM Symposium on Theory of Computing (STOC), pages 752–760. ACM, 2020. doi:10.1145/3357713.3384316.
- [8] James Cook and Ian Mertz. Encodings and the tree evaluation problem. Electronic Colloquium on Computational Complexity (ECCC), TR21-054, 2021. URL: https://eccc.weizmann.ac.il/report/2021/054.
- [9] James Cook and Ian Mertz. Trading time and space in catalytic branching programs. In IEEE Conference on Computational Complexity (CCC), volume 234 of Leibniz International Proceedings in Informatics (LIPIcs), pages 8:1–8:21, 2022. doi:10.4230/LIPIcs.CCC.2022.8.
- [10] James Cook and Ian Mertz. Tree evaluation is in space O(log n log log n). In ACM Symposium on Theory of Computing (STOC), pages 1268–1278. ACM, 2024. doi:10.1145/3618260.3649664.
- [11] Samir Datta, Chetan Gupta, Rahul Jain, Vimal Raj Sharma, and Raghunath Tewari. Randomized and symmetric catalytic computation. In CSR, volume 12159 of Lecture Notes in Computer Science (LNCS), pages 211–223. Springer, 2020. doi:10.1007/978-3-030-50026-9_15.
- [12] Yevgeniy Dodis, Rafail Ostrovsky, Leonid Reyzin, and Adam Smith. Syndrome encoding and decoding of BCH codes in sublinear time, 2006. URL: https://www.cs.bu.edu/~reyzin/code/bch-excerpt.pdf.
- [13] Yevgeniy Dodis, Leonid Reyzin, and Adam Smith. Fuzzy extractors: How to generate strong keys from biometrics and other noisy data. In Advances in Cryptology - EUROCRYPT 2004, 2004.
- [14] Dean Doron, Edward Pyne, and Roei Tell. Opening up the distinguisher: A hardness to randomness approach for BPL = L that uses properties of BPL. In Proceedings of the 56th Annual ACM Symposium on Theory of Computing, STOC 2024, pages 2039–2049, 2024. doi:10.1145/3618260.3649772.
- [15] Chetan Gupta, Rahul Jain, Vimal Raj Sharma, and Raghunath Tewari. Unambiguous catalytic computation. In Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS), volume 150 of Leibniz International Proceedings in Informatics (LIPIcs), pages 16:1–16:13. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2019. doi:10.4230/LIPIcs.FSTTCS.2019.16.
- [16] Chetan Gupta, Rahul Jain, Vimal Raj Sharma, and Raghunath Tewari. Lossy catalytic computation. Computing Research Repository (CoRR), abs/2408.14670, 2024. doi:10.48550/arXiv.2408.14670.
- [17] Alexis Hocquenghem. Codes correcteurs d'erreurs. Chiffres, 2:147–156, 1959.
- [18] Michal Koucký. Catalytic computation. Bulletin of the EATCS (B.EATCS), 118, 2016. URL: http://eatcs.org/beatcs/index.php/beatcs/article/view/400.
- [19] Jiatu Li, Edward Pyne, and Roei Tell. Distinguishing, predicting, and certifying: On the long reach of partial notions of pseudorandomness. In IEEE Symposium on Foundations of Computer Science (FOCS), to appear, 2024.
- [20] Ian Mertz. Reusing space: Techniques and open problems. Bulletin of the EATCS (B.EATCS), 141:57–106, 2023. URL: http://eatcs.org/beatcs/index.php/beatcs/article/view/780.
- [21] Aaron Potechin. A note on amortized branching program complexity. In IEEE Conference on Computational Complexity (CCC), volume 79 of Leibniz International Proceedings in Informatics (LIPIcs), pages 4:1–4:12. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2017. doi:10.4230/LIPIcs.CCC.2017.4.
- [22] Edward Pyne. Derandomizing logspace with a small shared hard drive. In 39th Computational Complexity Conference, CCC 2024, volume 300 of LIPIcs, pages 4:1–4:20, 2024. doi:10.4230/LIPICS.CCC.2024.4.
- [23] Robert Robere and Jeroen Zuiddam. Amortized circuit complexity, formal complexity measures, and catalytic algorithms. In IEEE Symposium on Foundations of Computer Science (FOCS), pages 759–769. IEEE, 2021. doi:10.1109/FOCS52979.2021.00079.
