Let’s Try to Be More Tolerant: On Tolerant Property Testing and Distance Approximation
Abstract
This short paper accompanies an invited talk given at ICALP 2025. It is an informal, high-level presentation of tolerant testing and distance approximation. It includes some general results as well as a few specific ones, with the aim of providing a taste of this research direction within the area of sublinear algorithms.
Keywords and phrases:
Sublinear Algorithms, Tolerant Property Testing, Distance Approximation
Category: Invited Talk
2012 ACM Subject Classification: Theory of computation → Streaming, sublinear and near linear time algorithms
Editors: Keren Censor-Hillel, Fabrizio Grandoni, Joël Ouaknine, and Gabriele Puppis
1 Introduction
Property testing algorithms [10, 50, 31] are randomized approximate decision algorithms. Namely, such an algorithm should decide whether an object f (e.g., a binary string) has a prespecified property P (e.g., contains a majority of 1's), in which case the algorithm should accept, or whether f is ε-far from every object that has P, in which case the algorithm should reject. The distance measure depends on the problem and may vary, but it is usually the normalized Hamming distance, and ε is a given distance/approximation parameter. In order to perform this task, the testing algorithm is typically given query access to the object f, and is allowed a small constant failure probability.¹ The aim is to design testing algorithms whose query complexity is sublinear in the size of the tested object, and possibly does not depend at all on this size but only on ε.

¹ The failure probability is usually set to 1/3, and by a standard argument can be reduced to any δ > 0 at a multiplicative cost of O(log(1/δ)).
A natural variant of standard property testing (as defined above) is tolerant property testing, which was first explicitly defined and studied by Parnas, Ron and Rubinfeld [47]. In this variant, the algorithm should accept objects that may not perfectly have the property, but rather are ε₁-close to (some object that has) the property, and reject those that are ε₂-far from (every object that has) the property. Ideally, the algorithm should work for any given ε₁ < ε₂ and have query complexity that grows with 1/(ε₂ − ε₁) (in particular, polynomially). But we may allow the relation between ε₁ and ε₂ to be restricted, for example, ε₁ = ε₂/2.
Another, related, variant (also defined in [47]) is distance approximation. Let Δ(f, P) denote the distance of f to (the closest object) having P. The goal of a distance-approximation algorithm is to obtain an approximation of Δ(f, P), where this approximation may be additive, multiplicative, or a combination of the two.
In this short paper, accompanying an invited talk presented at ICALP 2025, we first discuss some general results concerning tolerant property testing and distance approximation. We then turn to shortly describe several specific results, so as to give a taste of work in this area. For the sake of the exposition, we sometimes opt for statements that are not completely formal, but hopefully intuitive.
2 Some general observations and results
Tolerant testing is (essentially) equivalent to distance approximation
The first observation is that tolerant testing and distance approximation are essentially equivalent. In order to be more precise and concrete, fixing a property P, we consider the following versions of the two notions. The tolerant-testing algorithm should work for every ε₁ < ε₂ (with a complexity dependence on 1/(ε₂ − ε₁)), and the distance-approximation algorithm should provide an estimate that is within an additive term of δ from Δ(f, P) (with a complexity dependence on 1/δ). As stated above, in both cases the algorithm is allowed a small constant failure probability, which we set to 1/3, and the distance measure is assumed to be normalized to the range [0, 1].
Suppose first that we have a distance-approximation algorithm A′ for P. In order to obtain a tolerant-testing algorithm A, given ε₁ < ε₂, we simply run A′ with δ set to (ε₂ − ε₁)/2. If the estimate of Δ(f, P) that A′ returns is at most (ε₁ + ε₂)/2, then A accepts, otherwise it rejects. The correctness of A follows directly from that of A′, and if the complexity of A′ grows with 1/δ, then the complexity of A grows with 1/(ε₂ − ε₁).
The other direction is slightly less straightforward. Suppose we have a tolerant-testing algorithm A for P. In order to obtain a distance-approximation algorithm A′, we perform an iterative geometric search for (an estimate of) Δ(f, P) using A. Initially we know that Δ(f, P) ∈ [0, 1], and we start by running A with ε₁ and ε₂ slightly below and above 1/2, respectively. Depending on the outcome, we continue the search in (roughly) either [0, 1/2] or [1/2, 1]. In general, in each iteration the interval we consider is cut by a factor of 2, until it reaches a size of O(δ), and the midpoint of the interval is returned. In order to bound the failure probability of A′, within each iteration we actually run A several times and take the majority outcome. Here too the correctness of A′ follows directly from that of A. As for the complexity, there is at most an extra multiplicative factor of O(log(1/δ) · log log(1/δ)) due to the number of iterations and the number of repetitions in each iteration.
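The search described above can be sketched in a few lines of code. This is only an illustration of the reduction, not the exact scheme of [47]: the interval-shrinking schedule, the repetition count, and the interface of `tolerant_test` (a callable that accepts with high probability when the distance is at most ε₁ and rejects when it is at least ε₂) are choices made here for concreteness.

```python
def approx_distance(tolerant_test, delta, reps=9):
    """Estimate dist(f, P) given a tolerant tester, via geometric search.

    tolerant_test(e1, e2) -> bool: True ("close") w.h.p. if dist <= e1,
    False ("far") w.h.p. if dist >= e2.  Each call is repeated `reps`
    times and the majority outcome is used, so that a union bound over
    the O(log(1/delta)) iterations keeps the failure probability small.
    """
    lo, hi = 0.0, 1.0                      # invariant: dist lies in (lo, hi)
    while hi - lo > 3 * delta:
        mid = (lo + hi) / 2
        votes = sum(tolerant_test(mid - delta / 2, mid + delta / 2)
                    for _ in range(reps))
        if votes > reps // 2:              # majority says "close"
            hi = mid + delta / 2           # so dist < mid + delta/2
        else:                              # majority says "far"
            lo = mid - delta / 2           # so dist > mid - delta/2
    return (lo + hi) / 2
```

Since the invariant Δ(f, P) ∈ (lo, hi) is maintained and the loop stops once hi − lo ≤ 3δ, the returned midpoint is within 1.5δ of the true distance (when all tester calls succeed).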
Transformations similar to the above can be applied to other (related) versions of tolerant testing and distance approximation, i.e., when the tolerant-testing algorithm works only for ε₁ and ε₂ that are constrained, and the distance-approximation algorithm is not purely additive.
Given the close relation between the two notions, we shall refer to the two interchangeably, and present results according to one of the two notions, where the choice depends on which is more natural/simpler to present.
Standard testing may provide some weak tolerance
The second observation is that if the queries of a standard testing algorithm are uniformly distributed (though not necessarily independent), then the algorithm provides some (weak) tolerance. Specifically, if the query complexity of the testing algorithm is q = q(ε, n) (where n is the size of the tested object), then it is a tolerant testing algorithm for ε₁ = Θ(1/q) and ε₂ = ε. This is because, by a union bound, the probability that any of the q uniformly distributed queries hits one of the at most ε₁·n modified locations is at most q·ε₁. Hence, if q = O(1/ε) (as is the case, for example, for testing linearity [10]), then we get a tolerant tester with ε₁ = Ω(ε₂), but the larger the query complexity, the weaker is the tolerance ensured.
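The union-bound calculation can be made concrete with a toy property not discussed in the paper: testing whether a binary string is all ones by querying q uniform, independent positions and accepting iff all answers are 1. On an input that is ε₁-close to all-ones, each query misses the modified part with probability at least 1 − ε₁:

```python
def accept_prob_on_close_input(eps1, q):
    """Acceptance probability of the toy 'all ones' tester (q independent
    uniform queries, accept iff all answers are 1) on an input that is
    eps1-close to all-ones.  By Bernoulli's inequality this is always
    at least 1 - q*eps1, the union-bound guarantee."""
    return (1 - eps1) ** q
```

With q = 30 queries and tolerance ε₁ = 1/(3q), the acceptance probability is at least 1 − q·ε₁ = 2/3, matching the observation that the tolerance one gets for free scales like 1/q.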
Agnostic learning implies tolerant testing
In the context of standard testing, a simple argument [31] shows that if we have a proper learning algorithm (using queries and working under the uniform distribution) for a family of functions F,² then we can use it to obtain a testing algorithm for the property of membership in F with essentially the same complexity. An analogous statement holds regarding agnostic learning³ and tolerant testing.

² Such an algorithm performs queries to an unknown function f that is promised to be in F, and with high probability returns a hypothesis h that belongs to F and is ε-close to f (with respect to the uniform distribution).

³ An agnostic learning algorithm (using queries and working under the uniform distribution) is given query access to an unknown function f that is not ensured to belong to F, and returns a hypothesis h whose distance to f is larger by at most ε than the distance to f of the closest function in F.
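To illustrate the agnostic-learning route in the simplest possible setting, consider the family F of constant Boolean functions (this toy choice, and the function and parameter names below, are ours, not the paper's). Agnostic learning here is trivial: the best constant hypothesis is the majority value of a sample, and the empirical disagreement with that hypothesis estimates the distance to F:

```python
import random

def tolerant_test_constant(f, n, eps1, eps2, seed=0):
    """Toy tolerant tester for 'f is a constant function', obtained from
    the (trivial) agnostic learner for the class of constant functions.

    f: query access to a {0,1}-valued function on {0,...,n-1}.
    Accepts if f is eps1-close to a constant function, rejects if it is
    eps2-far, with high probability (for a large enough sample)."""
    rng = random.Random(seed)
    gap = eps2 - eps1
    s = int(50 / gap ** 2) + 1            # enough samples for additive gap/2 error
    vals = [f(rng.randrange(n)) for _ in range(s)]
    # Agnostic learning step: the best constant hypothesis is the majority value,
    # and min(#ones, #zeros)/s estimates the distance of f to that hypothesis.
    ones = sum(vals)
    dist_est = min(ones, s - ones) / s
    return dist_est <= (eps1 + eps2) / 2
```

The same two-step pattern (learn a near-best hypothesis in F, then estimate the distance of f to it by sampling) underlies the general transformation.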
Tolerant testing may be much harder than standard testing
While, as we shall see in the next sections, for some properties tolerant testing is not (much) harder than standard testing (in terms of query complexity), in some cases it is much harder. This was initially shown by Fischer and Fortnow [26], and later strengthened by Ben-Eliezer, Fischer, Levi and Rothblum [4]. Both works provide properties of strings for which there is a standard tester whose complexity depends only on ε, while tolerant testing requires a number of queries that depends on the length, n, of the string. In [26] the latter is n^c for some constant c > 0, and in [4] it is n^(1−o(1)).
3 Monotonicity
Monotonicity over [n]
A function f : [n] → R, where [n] = {1, …, n} and R is a fully ordered range, is monotone if f(i) ≤ f(j) for every i < j. The distance of f to monotonicity, denoted Δ_mon(f), is the minimum fraction of elements in the domain on which the function should be modified so as to obtain a monotone function. In what follows, f is viewed as an array with values in the range R, where the i-th element of the array is f(i). The first testing algorithm for monotonicity, by Ergun, Kannan, Kumar, Rubinfeld and Viswanathan [22], is nice and simple, and works as follows (assuming the values of f are distinct, which can be assumed without loss of generality).
1. Select, uniformly and independently at random, a sample of Θ(1/ε) indices in [n], denoted S.
2. For each selected index i ∈ S, query f on i, set v = f(i), and perform a binary search on f for v.
3. If for some i ∈ S the binary search fails to return i, then Reject; otherwise, Accept.
Clearly, if f is monotone (and its values are distinct), then the algorithm always accepts. On the other hand, if f is ε-far from monotone, i.e., Δ_mon(f) > ε, then it can be proved that at least an ε-fraction of the indices i will fail the binary search (i.e., a search for f(i) does not end at i).
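A minimal, non-sublinear-aware sketch of this tester follows, with f given as a Python list of distinct values; the sample size 3/ε and the midpoint convention are illustrative choices:

```python
import random

def binary_search_index(f, v):
    """Binary-search the array f for value v, as if f were sorted;
    return the index where the search ends (-1 if it falls off)."""
    lo, hi = 0, len(f) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if f[mid] == v:
            return mid
        if v < f[mid]:
            hi = mid - 1
        else:
            lo = mid + 1
    return -1

def monotonicity_tester(f, eps, seed=0):
    """One-sided tester: always accepts a monotone f (with distinct values);
    rejects an f that is eps-far from monotone with high constant probability."""
    rng = random.Random(seed)
    for _ in range(int(3 / eps) + 1):       # Theta(1/eps) sampled indices
        i = rng.randrange(len(f))
        if binary_search_index(f, f[i]) != i:
            return False                     # a failed search certifies non-monotonicity
    return True
```

Note that the set of indices on which the binary search succeeds forms an increasing subsequence, which is the source of the soundness guarantee.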
A naive idea for modifying this algorithm in order to obtain a distance-approximation algorithm (and hence a tolerant-testing algorithm) is to estimate the fraction of indices i for which the binary search fails, and to use this estimate for Δ_mon(f). This approach unfortunately does not work. While there are cases in which this fraction is a fairly good estimate of Δ_mon(f), in others it might significantly differ from it. As an example of the former case, suppose (writing f as an array) f = (2, 1, 4, 3, …, n, n−1). Then Δ_mon(f) equals 1/2 (since it is both necessary and sufficient to modify half of the values of f in order to obtain a monotone function), and the binary search would indeed fail on half of the indices i. As an example of the latter case, consider f = (1, 2, …, n/2 − 1, 0, n/2 + 1, …, n). Then Δ_mon(f) = 1/n, while the binary search will, here too, fail on (roughly) half of the indices i.
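The gap between the failed-search fraction and the true distance can be checked exhaustively on small arrays. The script below uses two illustrative instances of the phenomenon: the pairwise-swapped array, where the failed fraction matches the distance, and a sorted array with a single corrupted value sitting where the first binary-search probe lands, where the failed fraction is about 1/2 while the distance is only 1/n (the distance is computed as 1 minus the normalized length of a longest increasing subsequence):

```python
def binary_search_end(f, v):
    # index where a binary search for v ends in (the not necessarily sorted) f
    lo, hi = 0, len(f) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if f[mid] == v:
            return mid
        lo, hi = (mid + 1, hi) if v > f[mid] else (lo, mid - 1)
    return -1

def fail_fraction(f):
    # fraction of indices i on which a binary search for f[i] does not end at i
    return sum(binary_search_end(f, f[i]) != i for i in range(len(f))) / len(f)

def dist_to_monotone(f):
    # 1 - (longest strictly increasing subsequence)/n, via O(n^2) DP
    n = len(f)
    lis = [1] * n
    for i in range(n):
        for j in range(i):
            if f[j] < f[i]:
                lis[i] = max(lis[i], lis[j] + 1)
    return 1 - max(lis) / n

n = 16
f1 = [i if i % 2 else i + 2 for i in range(n)]            # (2, 1, 4, 3, ..., 16, 15)
f2 = [0 if i == n // 2 - 1 else i + 1 for i in range(n)]  # (1, ..., 7, 0, 9, ..., 16)
```

For f1 both quantities equal 1/2, while for f2 the failed fraction is 7/16 but the distance is only 1/16.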
In [47] a tolerant-testing algorithm for monotonicity was presented, which works when ε₂ is (say) a constant factor larger than ε₁, and whose query complexity is polylogarithmic in n. Consider the violation graph induced by f, denoted G_f. For each index i ∈ [n] there is a vertex in G_f, and there is an edge between i and j for i < j if (and only if) f(i) > f(j). It is not hard to verify that n·Δ_mon(f) equals the size of a minimum vertex cover in G_f. The algorithm in [47] works by approximating the size of a maximal matching in G_f (which is at least half the size of a minimum vertex cover, and at most that size).
At the heart of the algorithm is the following observation. Suppose we knew the median value of f, denoted m. Then there is a perfect matching in G_f between the indices i in the first half of the array for which f(i) > m and the indices j in the second half for which f(j) ≤ m (each such pair constitutes a violation, and the two sets have the same size). The size of such a perfect matching can be estimated by sampling, and all remaining edges in G_f are between pairs of vertices that are either both in the first half of the array or both in the second half. This suggests continuing recursively, using the "median based" approach, taking care not to match vertices that were already matched in a higher level of the recursion. While a straightforward recursive implementation would be too costly (in particular due to the size of the recursion tree), as shown in [47], this can be overcome by defining a type of "approximate recursion tree" and sampling paths in it.
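Ignoring sublinearity altogether, the matching-based connection can be verified directly: any greedy maximal matching in the (explicitly built) violation graph 2-approximates n·Δ_mon(f). This quadratic-time sketch illustrates only the combinatorial fact, not the sublinear algorithm of [47]:

```python
def violation_edges(f):
    # edges (i, j), i < j, of the violation graph: pairs with f[i] > f[j]
    n = len(f)
    return [(i, j) for i in range(n) for j in range(i + 1, n) if f[i] > f[j]]

def greedy_maximal_matching(edges):
    # any maximal matching M satisfies |M| <= min vertex cover <= 2|M|,
    # and the minimum vertex cover of the violation graph equals n * dist(f)
    matched, m = set(), 0
    for (i, j) in edges:
        if i not in matched and j not in matched:
            matched.update((i, j))
            m += 1
    return m
```

For example, on f = (2, 1, 4, 3, 6, 5, 8, 7) the violation graph consists of four disjoint edges, so the matching size, the minimum vertex cover, and n·Δ_mon(f) all equal 4; on f = (1, 2, 3, 0) the violation graph is a star and all three quantities equal 1.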
The aforementioned algorithm is indeed sublinear, and furthermore has a polylogarithmic dependence on n, but the exponent of the polylog is larger than 1. Ailon, Chazelle, Comandur and Liu [1] were able to reduce the polylogarithmic dependence to logarithmic. To be precise, they provide an estimate that is within a multiplicative factor of 2 of Δ_mon(f), up to an additive term of δ, by performing O(log n) queries for constant δ. Saks and Seshadhri [51] showed how to get rid of the factor of 2 in the quality of the estimate. Their algorithm computes an estimate that is within a multiplicative factor of (1 + δ) of Δ_mon(f) by performing polylog(n) queries (for constant δ).
Monotonicity of Boolean functions over the n-dimensional hypercube
There is a large body of work on testing monotonicity of functions f : {0,1}ⁿ → {0,1} (with respect to the partial order over {0,1}ⁿ), as well as some more recent works on tolerant testing. A sequence of works, starting with [30, 21], culminated in the work of Khot, Minzer and Safra [38], who gave a (non-adaptive) testing algorithm with query complexity Õ(√n/ε²) (the square-root dependence on n is necessary for non-adaptive algorithms [27]).
Tolerant testing in this context turns out to be much harder. Chen, De, Li, Nadimpalli and Servedio [14] proved a lower bound of 2^(n^Ω(1)) queries for non-adaptive algorithms (an upper bound of 2^(Õ(√n)) follows directly from what is known for agnostic learning). On the positive side, Pallavoor, Raskhodnikova and Waingarten [45] give a factor-Õ(√n) approximation algorithm for the distance to monotonicity that is non-adaptive and performs poly(n, 1/ε) queries, where ε is a lower bound on the distance to monotonicity of f (they also show that this is essentially tight for non-adaptive algorithms). We note that Black, Kalemaj and Raskhodnikova [7] show that this result extends to real-valued functions over the Boolean hypercube (improving on [24]).
4 Connectivity and other graph properties
There are several models for testing properties of graphs, which differ in the type of queries allowed and the distance measure used. Let G = (V, E) denote an undirected graph over n vertices. In what is referred to as the sparse-graphs model (first studied in [46]), the graph is assumed to be represented by incidence lists (of possibly varying lengths), so that the algorithm may perform neighbor queries. Namely, for any vertex v and index i, the algorithm can query for the i-th neighbor of v (if i is larger than the degree of v, denoted deg(v), then a special symbol, e.g., ⊥, is returned). It is sometimes assumed that the algorithm may also perform degree queries, so as to obtain the degree of any vertex of its choice, but in what follows we shall not assume access to such queries. As for the distance to a property, it is defined as the number of edges that should be added/removed so as to obtain the property in question, divided by a given upper bound m on the number of edges. A related model is the bounded-degree model, where all incidence lists are of length at most d for a given maximum-degree parameter d, and the distance is normalized by d·n.
Recall that a graph is connected if there is a path between every pair of vertices. A graph is ε-far from being connected if it is necessary to add more than ε·m edges so as to make the graph connected. Goldreich and Ron [32] gave a (one-sided error) algorithm for testing connectivity in the bounded-degree model whose query complexity is polynomial in 1/ε (and independent of n). Parnas and Ron [46] adapted this algorithm to the sparse-graphs model, where the resulting query complexity also depends on the average degree m̄ = m/n (we assume m̄ ≥ 1). Berman, Raskhodnikova and Yaroslavtsev [6] improved the dependence on 1/ε, and Levi, Pallavoor, Raskhodnikova and Varma [41] gave an upper bound that is tight in (part of) the parameter range.
Let Δ_conn(G) denote the distance of a graph G to connectivity. As a central part of their algorithm for approximating the weight of a minimum-weight spanning tree, Chazelle, Rubinfeld and Trevisan [13] gave an algorithm that approximates Δ_conn(G) to within an additive term of δ by performing a number of queries that is polynomial in 1/δ. Here we describe a less efficient, but simple to analyze, algorithm of Marko and Ron [43].
For a vertex v, let n_v denote the number of vertices in the connected component of v, and for an integer t, let n_v^t = min{n_v, t}. The algorithm works as follows (in the bounded-degree model, with degree bound d and additive approximation parameter δ).
1. Select, uniformly and independently at random, s = Θ(1/δ²) vertices, denoted v₁, …, v_s, and set t = 2/δ.
2. For each vertex v_i in the sample, perform a BFS so as to either determine n_{v_i} (if it is at most t), or discover that n_{v_i} > t.
3. Output Y = (1/(d·s)) · Σ_{i=1}^{s} 1/n_{v_i}^t.
Let C denote the number of connected components in G. The correctness of the algorithm is based on the following observations.
1. Δ_conn(G) ≤ C/(d·n). This observation simply follows from the fact that it is possible to connect any unconnected graph by adding C − 1 < C edges (e.g., by selecting an arbitrary ordering over the connected components and adding an edge between every pair of consecutive components). Furthermore, since each added edge reduces the number of components by at most one, Δ_conn(G) is also not much smaller than C/(d·n).
2. Σ_{v∈V} 1/n_v = C. This equality holds by the definition of n_v, since for each connected component we have Σ_{v in the component} 1/n_v = 1.
3. C ≤ Σ_{v∈V} 1/n_v^t ≤ C + δ·n/2. This observation follows by combining the previous one with the fact that the total contribution of vertices residing in connected components with more than t vertices is at most n/t = δ·n/2, where t was set to 2/δ.
Consider the random variable Y output by the algorithm. By the second and third observations above, E[Y] ∈ [C/(d·n), C/(d·n) + δ/2]. By the first observation and the definition of Y in the algorithm, E[Y] is within an additive term of roughly δ/2 from Δ_conn(G). The proof is completed by applying an additive Chernoff bound to show that, based on the setting of the sample size s = Θ(1/δ²), the value of Y does not deviate from its expected value by more than an additive term of δ/2 (with high constant probability).
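The algorithm above can be sketched as follows; the constants in the sample size and the threshold, and the representation of the graph as a list of adjacency lists, are illustrative choices:

```python
import random
from collections import deque

def capped_component_size(adj, v, t):
    """BFS from v that stops early; returns min(n_v, t)."""
    seen, queue = {v}, deque([v])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
                if len(seen) >= t:
                    return t          # component has at least t vertices
    return len(seen)                  # whole component revealed

def estimate_conn_distance(adj, d, delta, seed=0):
    """Estimate the distance to connectivity, ~ C/(d*n), to within an
    additive delta w.h.p. (a simplified take on the Marko-Ron estimator)."""
    n = len(adj)
    rng = random.Random(seed)
    t = int(2 / delta) + 1
    s = int(4 / delta ** 2) + 1
    total = sum(1 / capped_component_size(adj, rng.randrange(n), t)
                for _ in range(s))
    return total / (d * s)
```

As a sanity check: on a graph consisting of 50 disjoint edges over 100 vertices (so C = 50, d = 2), every sampled vertex contributes exactly 1/2, and the output is exactly C/(d·n) = 1/4.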
Additional properties
Other properties of sparse graphs that were studied in the context of distance approximation include k-edge connectivity, subgraph-freeness, cycle-freeness and being Eulerian [43], strong connectivity in directed graphs and bounded diameter [11], as well as k-colorability in planar graphs [35]. A strong separation result between standard and tolerant testing for bounded-degree graphs was proved by Goldreich and Wigderson [33].
The adjacency-matrix (dense-graphs) model
Much of the focus in the property-testing literature has been on testing properties of graphs in the adjacency-matrix model. In this model, which is appropriate for studying properties of dense graphs, a graph G over n vertices is represented by its adjacency matrix, so that an algorithm may perform vertex-pair queries. Namely, for any choice of two vertices u and v, the algorithm can query whether there is an edge between u and v. The distance to a property is defined as the fraction of entries in the matrix that should be modified so as to obtain the property (i.e., the minimum number of edges that should be added/removed to obtain the property, divided by n²).
Fischer and Newman [28] proved that every graph property that has a (standard) testing algorithm in the adjacency-matrix model whose query complexity q = q(ε) is independent of n, also has an additive distance-approximation algorithm whose query complexity q′ = q′(δ) is independent of n, where δ is the additive approximation parameter. They showed that q′ is upper bounded by a function that is a tower of exponentials whose height is polynomial in 1/δ. Gishboliner, Kushnir and Shapira [29] significantly improved this upper bound on q′ to double exponential in 1/δ, and also showed that for hereditary properties the transformation loss is single-exponential (improving on [36]). Finally, we note that for some properties there are known additive distance-approximation algorithms with query complexity poly(1/δ). These include k-colorability (by approximating the size of the maximum k-way cut) [31], as well as more general hereditary partition properties, certain induced-freeness properties, and chordality [25].
5 Subsequence freeness: Sample-based and distribution-free
Up till now we only discussed standard and tolerant testing when the algorithm has query access to the object. However, there are settings in which the algorithm only has access to samples (as is usually assumed in many models of learning). The samples may be uniformly distributed, or distributed according to some other, possibly unknown, distribution. The distance measure is defined accordingly. Here we shortly discuss one result on distance approximation in this model.
Let Σ be an alphabet, T = t₁ … t_n ∈ Σⁿ a sequence and w = w₁ … w_k ∈ Σᵏ a subsequence. We say that T contains w if there exist indices 1 ≤ i₁ < i₂ < ⋯ < i_k ≤ n such that t_{i_j} = w_j for every j ∈ [k]. Otherwise, T is w-free. The algorithm is provided with w (which defines the property) and gets a sample of pairs (i, t_i). In the uniform sample-based model, each index i is selected uniformly, and the distance of T to being w-free is the minimum fraction of indices on which T must be modified in order to become w-free (i.e., the normalized Hamming distance). In the distribution-free sample-based model, there is an unknown distribution p over [n] according to which the sampled indices are selected (independently), and the distance to w-freeness is determined by p as well. Namely, it is the minimum total weight according to p of indices on which T must be modified in order to become w-free.
The sample complexity of testing subsequence-freeness with one-sided error in the sample-based distribution-free model is O(k/ε) [49]. (There is a matching lower bound of Ω(k/ε) that holds for every subsequence w even under the uniform distribution [49], and there exist w for which this lower bound holds also when queries are allowed [12].)
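The essence of the one-sided tester is simple to sketch in the uniform case: take O(k/ε) uniform index-symbol samples, sort them by index, and reject iff the sampled subsequence itself contains a copy of w. This is only a simplified illustration of the [49] tester (the constant in the sample size is an arbitrary choice here):

```python
import random

def contains_subsequence(seq, w):
    # greedy scan: does seq contain w as a subsequence?
    j = 0
    for c in seq:
        if j < len(w) and c == w[j]:
            j += 1
    return j == len(w)

def sample_based_test_w_free(T, w, eps, seed=0):
    """One-sided sample-based tester: accepts every w-free T; rejects a T
    that is eps-far from w-free with high constant probability, using
    O(k/eps) uniform samples."""
    n, k = len(T), len(w)
    rng = random.Random(seed)
    m = int(10 * k / eps) + 1
    idxs = sorted(rng.randrange(n) for _ in range(m))
    sampled = [T[i] for i in idxs]
    return not contains_subsequence(sampled, w)   # True means accept
```

One-sidedness is immediate: a copy of w among the sampled positions (taken in increasing index order) is in particular a copy of w in T, so a w-free T is never rejected.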
Turning to distance approximation, let Δ_w(T, p) denote the distance of T to being w-free with respect to the distribution p. There is an algorithm that outputs an estimate of Δ_w(T, p) to within an additive term of δ using a sample of size poly(k)/δ² [17]. The quadratic dependence on 1/δ is necessary [17], but interestingly, there are specific subsequences (in particular ones defining monotonicity-type properties) for which a smaller sample suffices [9].
Since the results for the distribution-free case are obtained by reducing it to the uniform case, we henceforth focus on the latter, and use Δ_w(T) as a shorthand for Δ_w(T, p) where p is the uniform distribution. A key notion that was introduced in [49], and is used in the design and analysis of the [17] distance-approximation algorithm, is that of role-disjoint copies of w in T. A copy of w in T is simply defined by k indices i₁ < ⋯ < i_k such that t_{i_j} = w_j for every j ∈ [k]. Two copies, (i₁, …, i_k) and (i′₁, …, i′_k), are role-disjoint if i_j ≠ i′_j for every j ∈ [k] (though it is possible that i_j = i′_{j′} for j ≠ j′).
Let R_w(T) denote the maximum number of (pairwise) role-disjoint copies of w in T. In [49] it was proved that Δ_w(T) = R_w(T)/n. In the context of testing, this equality was used to prove that if Δ_w(T) > ε, so that R_w(T) > ε·n, then a sample of size Θ(k/ε) will contain a copy of w with high constant probability. This unfortunately is not sufficient on its own for distance approximation. It is possible that Δ_w(T) is much smaller than ε, but still a sample of size Θ(k/ε) will contain several copies of w with high constant probability. Instead, in a nutshell, the [17] algorithm approximates R_w(T) in a more refined way, by expressing it (exactly) using a certain recursive formula, defining a "coarse" version of that formula, and then using a sample to estimate terms in the latter version.
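The equality Δ_w(T) = R_w(T)/n can be checked by brute force on tiny examples. The sketch below enumerates all copies and all modification sets (overwriting a position with a fresh symbol '#', assumed not to appear in w, which suffices when the alphabet contains such a spare symbol); it is exponential-time and intended only to illustrate the role-disjointness notion:

```python
from itertools import combinations

def all_copies(T, w):
    """All index tuples (i_1 < ... < i_k) with T[i_j] == w[j]."""
    res = []
    def extend(start, j, cur):
        if j == len(w):
            res.append(tuple(cur))
            return
        for i in range(start, len(T)):
            if T[i] == w[j]:
                extend(i + 1, j + 1, cur + [i])
    extend(0, 0, [])
    return res

def max_role_disjoint(T, w):
    """Largest set of copies that pairwise differ in every role (coordinate)."""
    cs = all_copies(T, w)
    for r in range(len(cs), 0, -1):
        for sub in combinations(cs, r):
            if all(all(a[j] != b[j] for j in range(len(w)))
                   for a, b in combinations(sub, 2)):
                return r
    return 0

def min_modifications(T, w):
    """Minimum number of positions to overwrite (with '#') to make T w-free."""
    n = len(T)
    for r in range(n + 1):
        for S in combinations(range(n), r):
            T2 = ['#' if i in S else T[i] for i in range(n)]
            if not all_copies(T2, w):
                return r
    return n
```

For instance, T = "aaa" and w = "aa" already show why sharing positions across roles matters: the copies (1,2) and (2,3) are role-disjoint even though they share position 2, and indeed both quantities equal 2.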
6 Other results
There are quite a few additional results on tolerant testing / distance approximation for other properties and models. These include: edit distance [2]; clustering [47]; linearity and codes [34, 39]; convexity [23, 15]; juntas [19, 8, 18, 42, 37, 16, 14, 44]; properties of distributions [3, 52]; properties of images [5]; the erasure-resilient model [20, 48]; and the Lipschitz property [40].
References
- [1] Nir Ailon, Bernard Chazelle, Seshadhri Comandur, and Ding Liu. Estimating the distance to a monotone function. Random Structures and Algorithms, 31(3):371–383, 2007. doi:10.1002/RSA.20167.
- [2] Tugkan Batu, Funda Ergun, Joe Kilian, Avner Magen, Sofya Raskhodnikova, Ronitt Rubinfeld, and Rahul Sami. A sublinear algorithm for weakly approximating edit distance. In Proceedings of the 35th Annual ACM Symposium on the Theory of Computing (STOC), pages 316–324, 2003. doi:10.1145/780542.780590.
- [3] Tugkan Batu, Lance Fortnow, Ronitt Rubinfeld, Warren D. Smith, and Patrick White. Testing closeness of discrete distributions. Journal of the ACM, 60(1):4:1–4:25, 2013. doi:10.1145/2432622.2432626.
- [4] Omri Ben-Eliezer, Eldar Fischer, Amit Levi, and Ron D. Rothblum. Hard properties with (very) short PCPPs and their applications. In Proceedings of the 11th Innovations in Theoretical Computer Science conference (ITCS), pages 9:1–9:27, 2020. doi:10.4230/LIPICS.ITCS.2020.9.
- [5] Piotr Berman, Meiram Murzabulatov, and Sofya Raskhodnikova. Tolerant testers of image properties. ACM Transactions on Algorithms, 18(4):1–39, 2022. Article number 37. doi:10.1145/3531527.
- [6] Piotr Berman, Sofya Raskhodnikova, and Grigory Yaroslavtsev. Lp-testing. In Proceedings of the 46th Annual ACM Symposium on the Theory of Computing (STOC), pages 164–173, 2014. doi:10.1145/2591796.2591887.
- [7] Hadley Black, Iden Kalemaj, and Sofya Raskhodnikova. Isoperimetric inequalities for real-valued functions with applications to monotonicity testing. In Automata, Languages and Programming: 50th International Colloquium (ICALP), pages 25:1–25:20, 2023. doi:10.4230/LIPICS.ICALP.2023.25.
- [8] Eric Blais, Clément Canonne, Talya Eden, Amit Levi, and Dana Ron. Tolerant junta testing and the connection to submodular optimization and function isomorphism. ACM Transactions on Computation Theory, 11(4):1–33, 2019. doi:10.1145/3337789.
- [9] Avrim Blum and Lunjia Hu. Active tolerant testing. In Proceedings of the 31st Conference on Computational Learning Theory (COLT), pages 474–497, 2018. URL: http://proceedings.mlr.press/v75/blum18a.html.
- [10] Manuel Blum, Michael Luby, and Ronitt Rubinfeld. Self-testing/correcting with applications to numerical problems. Journal of Computer and System Sciences, 47(3):549–595, 1993. doi:10.1016/0022-0000(93)90044-W.
- [11] Andrea Campagna, Alan Guo, and Ronitt Rubinfeld. Local reconstructors and tolerant testers for connectivity and diameter. In Proceedings of the 17th International Workshop on Randomization and Computation (RANDOM), pages 411–424, 2013. doi:10.1007/978-3-642-40328-6_29.
- [12] Clément Canonne, Elena Grigorescu, Siyao Guo, Akash Kumar, and Karl Wimmer. Testing k-monotonicity: The rise and fall of boolean functions. Theory of Computing, 15(1):1–55, 2019. doi:10.4086/TOC.2019.V015A001.
- [13] Bernard Chazelle, Ronitt Rubinfeld, and Luca Trevisan. Approximating the minimum spanning tree weight in sublinear time. SIAM Journal on Computing, 34(6):1370–1379, 2005. doi:10.1137/S0097539702403244.
- [14] Xi Chen, Anindya De, Yuhao Li, Shivam Nadimpalli, and Rocco Servedio. Mildly exponential lower bounds on tolerant testers for monotonicity, unateness, and juntas. In Proceedings of the 35th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 4321–4337, 2024. doi:10.1137/1.9781611977912.151.
- [15] Xi Chen, Anindya De, Shivam Nadimpalli, Rocco Servedio, and Erik Waingarten. Lower bounds for convexity testing. In Proceedings of the 36th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 446–488, 2025. doi:10.1137/1.9781611978322.13.
- [16] Xi Chen and Shyamal Patel. New lower bounds for adaptive tolerant junta testing. In Proceedings of the 64th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 1778–1786, 2023. doi:10.1109/FOCS57990.2023.00108.
- [17] Omer Cohen Sidon and Dana Ron. Sample-based distance-approximation for subsequence-freeness. Algorithmica, 86:2519–2556, 2024. doi:10.1007/S00453-024-01233-4.
- [18] Anindya De, Elchanan Mossel, and Joe Neeman. Robust testing of low dimensional functions. In Proceedings of the 53rd Annual ACM Symposium on the Theory of Computing (STOC), pages 584–597, 2021. doi:10.1145/3406325.3451115.
- [19] Ilias Diakonikolas, Homin Lee, Kevin Matulef, Krzysztof Onak, Ronitt Rubinfeld, Rocco Servedio, and Andrew Wan. Testing for concise representations. In Proceedings of the 48th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 549–557, 2007. doi:10.1109/FOCS.2007.32.
- [20] Kashyap Dixit, Sofya Raskhodnikova, Abhradeep Thakurta, and Nithin Varma. Erasure-resilient property testing. SIAM Journal on Computing, 47(2):295–329, 2018. doi:10.1137/16M1075661.
- [21] Yevgeny Dodis, Oded Goldreich, Eric Lehman, Sofya Raskhodnikova, Dana Ron, and Alex Samorodnitsky. Improved testing algorithms for monotonicity. In Proceedings of the 3rd International Workshop on Randomization and Approximation Techniques in Computer Science (RANDOM), pages 97–108, 1999. doi:10.1007/978-3-540-48413-4_10.
- [22] Funda Ergun, Sampath Kannan, S. Ravi Kumar, Ronitt Rubinfeld, and Mahesh Viswanathan. Spot-checkers. Journal of Computer and System Sciences, 60(3):717–751, 2000. doi:10.1006/JCSS.1999.1692.
- [23] Shahar Fattal. Approximating the distance to monotonicity and convexity in sublinear time. Master’s thesis, Tel Aviv University, 2006.
- [24] Shahar Fattal and Dana Ron. Approximating the distance to monotonicity in high dimensions. ACM Transactions on Algorithms, 6(3):1–37, 2010. doi:10.1145/1798596.1798605.
- [25] Nimrod Fiat and Dana Ron. On efficient distance approximation for graph properties. In Proceedings of the 32nd Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1618–1637, 2021. doi:10.1137/1.9781611976465.98.
- [26] Eldar Fischer and Lance Fortnow. Tolerant versus intolerant testing for boolean properties. Theory of Computing, 2:173–183, 2006. doi:10.4086/TOC.2006.V002A009.
- [27] Eldar Fischer, Eric Lehman, Ilan Newman, Sofya Raskhodnikova, Ronitt Rubinfeld, and Alex Samorodnitsky. Monotonicity testing over general poset domains. In Proceedings of the 34th Annual ACM Symposium on the Theory of Computing (STOC), pages 474–483, 2002. doi:10.1145/509907.509977.
- [28] Eldar Fischer and Ilan Newman. Testing versus estimation of graph properties. SIAM Journal on Computing, 37(2):482–501, 2007. doi:10.1137/060652324.
- [29] Lior Gishboliner, Nick Kushnir, and Asaf Shapira. Testing versus estimation of graph properties, revisited. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, APPROX/RANDOM, pages 46:1–46:18, 2023. doi:10.4230/LIPICS.APPROX/RANDOM.2023.46.
- [30] Oded Goldreich, Shafi Goldwasser, Eric Lehman, Dana Ron, and Alex Samorodnitsky. Testing monotonicity. Combinatorica, 20(3):301–337, 2000. doi:10.1007/S004930070011.
- [31] Oded Goldreich, Shafi Goldwasser, and Dana Ron. Property testing and its connections to learning and approximation. Journal of the ACM, 45:653–750, 1998. doi:10.1145/285055.285060.
- [32] Oded Goldreich and Dana Ron. Property testing in bounded degree graphs. Algorithmica, pages 302–343, 2002. doi:10.1007/S00453-001-0078-7.
- [33] Oded Goldreich and Avi Wigderson. Robustly self-ordered graphs: Constructions and applications to property testing. TheoretiCS, 1, 2022. Article number 1. doi:10.46298/THEORETICS.22.1.
- [34] Venkat Guruswami and Atri Rudra. Tolerant locally testable codes. In Proceedings of the 9th International Workshop on Randomization and Computation (RANDOM), pages 306–317, 2005.
- [35] Avinatan Hassidim, Jonathan A. Kelner, Huy N. Nguyen, and Krzysztof Onak. Local graph partitions for approximation and testing. In Proceedings of the 50th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 22–31, 2009. doi:10.1109/FOCS.2009.77.
- [36] Carlos Hoppen, Yoshiharu Kohayakawa, Richard Lang, Hanno Lefmann, and Henrique Stagni. Estimating parameters associated with monotone properties. Combinatorics, Probability and Computing, 29(4):616–632, 2020. doi:10.1017/S0963548320000048.
- [37] Vishnu Iyer, Avishay Tal, and Michael Whitmeyer. Junta distance approximation with sub-exponential queries. In Proceedings of the 36th Computational Complexity Conference (CCC), pages 24:1–24:38, 2021. doi:10.4230/LIPICS.CCC.2021.24.
- [38] Subhash Khot, Dor Minzer, and Muli Safra. On monotonicity testing and boolean isoperimetric-type theorems. SIAM Journal on Computing, 47(6):2238–2276, 2018. doi:10.1137/16M1065872.
- [39] Swastik Kopparty and Shubhangi Saraf. Tolerant linearity testing and locally testable codes. In Proceedings of the 13th International Workshop on Randomization and Computation (RANDOM), pages 601–614, 2009. doi:10.1007/978-3-642-03685-9_45.
- [40] Jane Lange, Ephraim Linder, Sofya Raskhodnikova, and Arsen Vasilyan. Local lipschitz filters for bounded-range functions with applications to arbitrary real-valued functions. In Proceedings of the 36th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 2881–2907, 2025. doi:10.1137/1.9781611978322.93.
- [41] Amit Levi, Ramesh Krishnan S. Pallavoor, Sofya Raskhodnikova, and Nithin Varma. Erasure-resilient sublinear-time graph algorithms. ACM Transactions on Computation Theory, 14(1):1:1–1:22, 2022. doi:10.1145/3488250.
- [42] Amit Levi and Erik Waingarten. Lower bounds for tolerant junta and unateness testing via rejection sampling of graphs. In Proceedings of the 10th Innovations in Theoretical Computer Science conference (ITCS), pages 52:1–52:20, 2019. doi:10.4230/LIPICS.ITCS.2019.52.
- [43] Sharon Marko and Dana Ron. Distance approximation in bounded-degree and general sparse graphs. Transactions on Algorithms, 5(2), 2009. Article number 22.
- [44] Shivam Nadimpalli and Shyamal Patel. Optimal non-adaptive tolerant junta testing via local estimators. In Proceedings of the 56th Annual ACM Symposium on the Theory of Computing (STOC), pages 1039–1050, 2024. doi:10.1145/3618260.3649687.
- [45] Ramesh Krishnan S. Pallavoor, Sofya Raskhodnikova, and Erik Waingarten. Approximating the distance to monotonicity of boolean functions. Random Structures and Algorithms, 60(2):233–260, 2022. doi:10.1002/RSA.21029.
- [46] Michal Parnas and Dana Ron. Testing the diameter of graphs. Random Structures and Algorithms, 20(2):165–183, 2002. doi:10.1002/RSA.10013.
- [47] Michal Parnas, Dana Ron, and Ronitt Rubinfeld. Tolerant property testing and distance approximation. Journal of Computer and System Sciences, 72(6):1012–1042, 2006. doi:10.1016/J.JCSS.2006.03.002.
- [48] Sofya Raskhodnikova, Noga Ron-Zewi, and Nithin Varma. Erasures vs. errors in local decoding and property testing. Random Structures and Algorithms, 59:640–670, 2021. doi:10.1002/RSA.21031.
- [49] Dana Ron and Asaf Rosin. Optimal distribution-free sample-based testing of subsequence-freeness with one-sided error. ACM Transactions on Computation Theory, 14(4):1–31, 2022. doi:10.1145/3512750.
- [50] Ronitt Rubinfeld and Madhu Sudan. Robust characterization of polynomials with applications to program testing. SIAM Journal on Computing, 25(2):252–271, 1996. doi:10.1137/S0097539793255151.
- [51] Michael Saks and C. Seshadhri. Local monotonicity reconstruction. SIAM Journal on Computing, 39(7):2897–2926, 2010. doi:10.1137/080728561.
- [52] Gregory Valiant and Paul Valiant. Estimating the unseen: an n/log(n)-sample estimator for entropy and support size, shown optimal via new CLTs. In Proceedings of the 43rd Annual ACM Symposium on the Theory of Computing (STOC), pages 685–694, 2011. doi:10.1145/1993636.1993727.