Courcelle’s Theorem for Lipschitz Continuity
Abstract
Lipschitz continuity of algorithms, introduced by Kumabe and Yoshida (FOCS'23), measures the stability of an algorithm against small input perturbations. Algorithms with small Lipschitz constants are desirable, as they ensure reliable decision-making and reproducible scientific research. Several studies have proposed Lipschitz continuous algorithms for various combinatorial optimization problems, but these algorithms are problem-specific, requiring a separate design for each problem.
To address this issue, we provide the first algorithmic meta-theorem in the field of Lipschitz continuous algorithms. Our result can be seen as a Lipschitz continuous analogue of Courcelle’s theorem, which offers Lipschitz continuous algorithms for problems on bounded-treewidth graphs. Specifically, we consider the problem of finding a vertex set in a graph that maximizes or minimizes the total weight, subject to constraints expressed in monadic second-order logic (MSO₂). We show that for any ε > 0, there exists a (1 ± ε)-approximation algorithm for the problem with a polylogarithmic Lipschitz constant on bounded-treewidth graphs. On such graphs, our result outperforms most existing Lipschitz continuous algorithms in terms of approximability and/or Lipschitz continuity. Further, we provide similar results for problems on bounded-clique-width graphs subject to constraints expressed in MSO₁. Additionally, we construct a Lipschitz continuous version of Baker’s decomposition using our meta-theorem as a subroutine.
Keywords and phrases: Fixed-Parameter Tractability, Algorithmic Meta-Theorem, Lipschitz Continuity
Funding: Tatsuya Gima: JSPS KAKENHI Grant Numbers JP24K23847, JP25K03077
2012 ACM Subject Classification: Theory of computation → Fixed parameter tractability; Theory of computation → Approximation algorithms analysis
Editors: Anne Benoit, Haim Kaplan, Sebastian Wild, and Grzegorz Herman
Series and Publisher: Leibniz International Proceedings in Informatics, Schloss Dagstuhl – Leibniz-Zentrum für Informatik
1 Introduction
Lipschitz continuity of algorithms, introduced by Kumabe and Yoshida [14], is a measure of an algorithm’s stability in response to errors or small perturbations in the input for weighted optimization problems. Roughly speaking, it is the maximum ratio of the (weighted) Hamming distance between the outputs of an algorithm for two different weights to the distance between those weights (see Section 2.4 for the precise definition). It is desirable for algorithms to have small Lipschitz constants, as large constants can undermine the reliability of decision-making and the reproducibility of scientific research.
Since its introduction, Lipschitz continuous algorithms have been proposed for various optimization problems [14, 16]. However, we need to design a different algorithm and analyze its Lipschitz continuity for each problem, which can be impractical. To address this limitation, we present an algorithmic meta-theorem for Lipschitz continuous algorithms, which can be seen as a Lipschitz continuity analogue of the celebrated Courcelle’s theorem [3].
To present our results, we first introduce some notation. Let G be a graph on n vertices with treewidth k, let φ(X) be an MSO₂ formula with a free vertex-set variable X (see Section 2.3 for details about logic), and let w ∈ ℝ≥0^V(G) be a weight vector. We consider the problem of finding a vertex subset S ⊆ V(G) such that G ⊨ φ(S) and w(S) is maximized, which we call the MSO₂ maximization problem. We also consider the problem of finding such an S with w(S) minimized, which we call the MSO₂ minimization problem. Our main results are the following.
Theorem 1.
For any ε > 0, there is a (1 − ε)-approximation algorithm for the MSO₂ maximization problem whose Lipschitz constant is bounded by f(φ, k) · poly(ε⁻¹) · polylog(n), where f is some computable function. The time complexity is bounded by f(φ, k) · poly(n, ε⁻¹).
Theorem 2.
For any ε > 0, there is a (1 + ε)-approximation algorithm for the MSO₂ minimization problem whose Lipschitz constant is bounded by f(φ, k) · poly(ε⁻¹) · polylog(n), where f is some computable function. The time complexity is bounded by f(φ, k) · poly(n, ε⁻¹).
We note that there is a trivial upper bound on the Lipschitz constant that is polynomial in n [16]; therefore, the bounds in the theorems above are significantly smaller for fixed ε and k. We also remark that our meta-theorems yield randomized approximation algorithms. This is necessary since, for most problems, it is known that exact or deterministic algorithms cannot be Lipschitz continuous [14].
When the treewidth of the input graph is bounded by a constant, Theorems 1 and 2 provide algorithms that outperform existing algorithms in terms of approximability and/or Lipschitz continuity:
-
For the maximum weight matching problem¹ (¹Although the output of this problem is an edge set rather than a vertex set, the problem can be expressed by an MSO formula on a suitable auxiliary graph.), Theorem 1 yields an algorithm with a better approximation ratio than the previous approximation algorithm of [14], at the sacrifice of a polylogarithmic factor in the Lipschitz constant.
For a fixed formula φ, by explicitly specifying the transitions in the dynamic programming within the algorithm, we can provide more precise bounds on the function that appears in Theorems 1 and 2. In particular, considering the case where φ is a formula representing the independent set constraint, we have the following.
Theorem 3.
For any ε > 0, there is a (1 − ε)-approximation algorithm for the maximum weight independent set problem on graphs of treewidth k whose Lipschitz constant is linear in ε⁻¹ (up to factors depending on k and polylogarithmic factors in n). The time complexity is bounded by a polynomial in n for fixed ε and k.
This result is surprising, as the dependence of the Lipschitz constant on ε⁻¹ is subexponential and, even more remarkably, linear. We note that through similar arguments, analogous results hold for several other problems, such as the minimum weight vertex cover problem and the minimum weight dominating set problem.
We further demonstrate that Theorems 1 and 2 lead to a Lipschitz continuous version of Baker’s technique [1]. As a representative example, we consider the maximum weight independent set problem on planar graphs, where a vertex subset S is an independent set of a graph G if no two vertices in S are adjacent in G. We prove the following.
Theorem 4.
For any ε > 0, there is a (1 − ε)-approximation algorithm for the maximum weight independent set problem on planar graphs whose Lipschitz constant is polylogarithmic in n for fixed ε. The time complexity is bounded by a polynomial in n for fixed ε.
Using similar algorithms, Lipschitz continuous PTASes can be obtained for many problems, including the minimum weight vertex cover problem and the minimum weight dominating set problem.
As a lower bound result, we prove that the Lipschitz constant of any constant-factor approximation algorithm for some MSO maximization problem must be large on general graphs, which justifies considering the maximization problem on a restricted graph class. Specifically, we consider the max ones problem [6, 9], where the instance is a 3CNF formula over a variable set V and a weight function w: V → ℝ≥0, and the goal is to find a satisfying assignment that maximizes the total weight of the variables assigned true. It is easy to see that there exists a fixed MSO formula φ such that the max ones problem can be reduced to a problem on bipartite graphs, where the task is to find a vertex set S that maximizes the weight w(S) subject to φ(S). We show the following.
Theorem 5.
There exist ε > 0 and c > 0 such that any (1 − ε)-approximation algorithm for the max ones problem has Lipschitz constant Ω(nᶜ), where n is the number of variables.
We note that it is possible to construct an algorithm for the max ones problem with a Lipschitz constant similar to that in Theorem 3, in terms of the treewidth of the “incidence graph” of the 3CNF instance (see Section 4.2 for details). Since this treewidth is upper bounded by n for any instance, Theorem 5 implies that the dependency of the Lipschitz constant on the treewidth cannot be substantially improved.
We also prove analogous meta-theorems when parameterizing by clique-width. Let the MSO₁ maximization (resp., minimization) problem be the variant of the MSO₂ problem in which the formula φ is restricted to MSO₁. Denote by cw(G) the clique-width of the graph G. We then obtain the following.
Theorem 6.
For any ε > 0, there is a (1 − ε)-approximation algorithm for the MSO₁ maximization problem whose Lipschitz constant is bounded by g(φ, cw(G)) · poly(ε⁻¹) · polylog(n), where g is some computable function. The time complexity is bounded by g(φ, cw(G)) · poly(n, ε⁻¹).
Theorem 7.
For any ε > 0, there is a (1 + ε)-approximation algorithm for the MSO₁ minimization problem whose Lipschitz constant is bounded by g(φ, cw(G)) · poly(ε⁻¹) · polylog(n), where g is some computable function. The time complexity is bounded by g(φ, cw(G)) · poly(n, ε⁻¹).
We note that the functions in Theorems 6 and 7 are much larger than those in Theorems 1 and 2. In particular, the bounds for Theorems 3 and 4 do not follow from Theorem 6.
1.1 Technical Overview
Now we provide a technical overview of our framework. Since the arguments for clique-width are similar, we focus here on the treewidth results and omit the clique-width case.
For simplicity, here we consider the maximum weight independent set problem on a full binary tree (a rooted tree in which every vertex has exactly 0 or 2 children). This corresponds to the case where the input graph is a tree, whose treewidth is 1. Let w be the weight vector.
If we do not care about Lipschitzness, this problem can be solved exactly by the following algorithm. For each vertex v, define OUT(v) to be a maximum weight independent set in the subtree rooted at v that does not include v. Similarly, define BEST(v) to be a maximum weight independent set in the subtree rooted at v. If v is a leaf, then OUT(v) = ∅ and BEST(v) = {v}. Otherwise, let c₁ and c₂ be the two children of v. We have OUT(v) = BEST(c₁) ∪ BEST(c₂), and BEST(v) is the one with the larger weight of OUT(v) and {v} ∪ OUT(c₁) ∪ OUT(c₂). By performing this dynamic programming in a bottom-up manner, the problem can be solved exactly.
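As a concrete illustration, the exact bottom-up dynamic programming described above can be sketched as follows (a minimal sketch; the tree encoding and function names are ours, not the paper’s):

```python
def max_weight_independent_set(children, w, root):
    """Exact bottom-up DP for maximum weight independent set on a rooted
    full binary tree.  children[v] lists the children of v (empty for a
    leaf) and w[v] is the nonnegative weight of v.
    Returns (weight of BEST(root), BEST(root))."""
    def weight(s):
        return sum(w[u] for u in s)

    out, best = {}, {}  # OUT(v): best set avoiding v; BEST(v): best set overall

    def solve(v):
        if not children[v]:          # leaf: OUT(v) = {}, BEST(v) = {v}
            out[v], best[v] = set(), {v}
            return
        c1, c2 = children[v]
        solve(c1)
        solve(c2)
        out[v] = best[c1] | best[c2]
        with_v = {v} | out[c1] | out[c2]   # take v, so skip both children
        best[v] = max(out[v], with_v, key=weight)

    solve(root)
    return weight(best[root]), best[root]
```

For example, on the tree with root r, children a and b of r, and children c and d of a, the optimum picks the two siblings a and b.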
However, this algorithm is not Lipschitz continuous. This is because, when we compute the best set for the subtree rooted at a vertex, which of the two candidate sets has the larger weight is affected by small changes in the weights. To address this issue, we use the exponential mechanism [7]. Specifically, for some parameter β > 0, we select each candidate with probability proportional to the exponential of β times its weight. This approach makes the algorithm Lipschitz continuous, with only a slight sacrifice in the approximation ratio. Specifically, we can prove that by appropriately choosing β for a given ε, the increase in the Lipschitz constant can be bounded, at the cost of reducing the approximation guarantee by a factor of 1 − ε at each vertex where the exponential mechanism is applied.
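The exponential-mechanism selection step can be sketched as follows (an illustrative sketch only: the name `soft_argmax` and the sampling details are ours; the paper’s softmax procedure in Section 2.4 includes additional steps):

```python
import math
import random

def soft_argmax(values, beta):
    """Exponential mechanism: return index i with probability proportional
    to exp(beta * values[i]).  As beta grows this approaches the hard
    argmax; a moderate beta makes the choice stable under small weight
    perturbations."""
    m = max(values)  # shift by the max for numerical stability
    weights = [math.exp(beta * (v - m)) for v in values]
    r = random.random() * sum(weights)
    for i, wt in enumerate(weights):
        r -= wt
        if r <= 0:
            return i
    return len(values) - 1
```

With a large gap between candidate weights, the better candidate is chosen with overwhelming probability, while candidates of nearly equal weight are chosen nearly uniformly, which is what makes the output distribution vary smoothly with the weights.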
While this approach makes the algorithm Lipschitz continuous, the Lipschitz constant is still too large when the height of the tree is large. Specifically, to achieve an approximation guarantee of 1 − ε overall, the parameter β must grow with the height h of the tree, leading to a Lipschitz constant that scales with h. This exceeds the trivial bound when h is large. We resolve this issue by using the fact that any tree has a tree decomposition of width 5 and height O(log n) [2]. By performing dynamic programming with the exponential mechanism on this tree decomposition, we can obtain a (1 − ε)-approximation algorithm with a Lipschitz constant polylogarithmic in n. This argument can be naturally extended to the case where the input is a bounded-treewidth graph. By following the proof of Courcelle’s theorem [3, 5], we further extend this argument to the case where φ is a general MSO₂ formula with a free vertex-set variable.
We prove Theorem 5 by leveraging a lower bound on a related notion of sensitivity for the maximum cut problem [8], where sensitivity measures the change in the output with respect to the Hamming distance under edge deletions [21]. Specifically, we reduce the maximum cut problem, where stability is defined with respect to edge deletions, to the max ones problem, where stability is measured with respect to weight changes.
1.2 Related Work
Lipschitz continuity of discrete algorithms was defined by Kumabe and Yoshida [14], who provided Lipschitz continuous approximation algorithms for the minimum spanning tree, shortest path, and maximum weight matching problems. In a subsequent work [16], they further provided such algorithms for the minimum weight vertex cover, minimum weight set cover, and minimum weight feedback vertex set problems. In another work [15], they defined Lipschitz continuity for allocations in cooperative games and provided Lipschitz continuous allocation schemes for the matching game and the minimum spanning tree game.
A variant known as pointwise Lipschitz continuity has also been studied, which is defined using the unweighted Hamming distance instead of the weighted Hamming distance. Kumabe and Yoshida [14] defined pointwise Lipschitz continuity and provided pointwise Lipschitz continuous algorithms for the minimum spanning tree and maximum weight bipartite matching problems. Liu et al. [18] proposed the proximal gradient method as a general technique for solving LP relaxations stably. Using this, they provided pointwise Lipschitz continuous approximation algorithms for the minimum vertex s–t cut, densest subgraph, maximum weight (b-)matching, and packing integer programming problems.
(Average) sensitivity, introduced by Varma and Yoshida [21] with preliminary ideas by Murai and Yoshida [19], is a notion similar to Lipschitz continuity. While Lipschitz continuity evaluates an algorithm’s stability against unit changes in the input weights, (average) sensitivity evaluates an algorithm’s stability against (random) deletions of elements from the input. Algorithms with small (average) sensitivity have been studied for several problems, such as maximum matching [21, 23], minimum cut [21], the knapsack problem [13], Euclidean k-means [22], spectral clustering [20], and dynamic programming problems [12]. Recently, Fleming and Yoshida [8] constructed a PCP framework to prove sensitivity lower bounds for constraint satisfaction problems.
1.3 Organization
The rest of this paper is organized as follows. In Section 2, we describe the necessary preliminaries on tree decomposition (Sections 2.1 and 2.2), logic (Section 2.3), and Lipschitz continuity (Section 2.4). In Section 3, we prove Theorem 1 by providing a Lipschitz continuous algorithm for the maximization problem. Since the algorithm and analysis for minimization are similar, we defer the proof of Theorem 2 to the full version. In Section 4, we give a more precise Lipschitzness analysis for specific formulas , which includes the proof of Theorem 3. Proofs of Theorems 4, 5, 6, and 7 are given in the full version.
2 Preliminaries
2.1 Tree Decomposition
Let G = (V, E) be a graph with n vertices. A pair (X, T) consisting of a family X of subsets of V (called bags) and a rooted tree T whose vertex set is X is a (rooted) tree decomposition of G if it satisfies the following three conditions.
-
The union of all bags is V.
-
For each edge {u, v} ∈ E, there is a bag B ∈ X such that u, v ∈ B.
-
For each vertex v ∈ V, the set of bags containing v induces a connected subgraph in T.
For a tree decomposition (X, T), we may refer to the root node, leaf nodes, and the height of T as those of (X, T), respectively. Moreover, (X, T) is binary if T is a binary tree. The width of a tree decomposition is the maximum size of a bag in X minus one. The treewidth of G is the minimum possible width among all possible tree decompositions of G. It is known that a tree decomposition of G of width at most O(k) can be computed in 2^O(k) · n time [10], where k is the treewidth of G. Moreover, any tree decomposition of G of width k can be transformed into a binary tree decomposition of width at most 3k + 2 and height O(log n) [2]. Thus, we obtain the following lemma.

Lemma 8. Given a graph G of treewidth k, one can compute a binary tree decomposition of G of width O(k) and height O(log n) in 2^O(k) · poly(n) time.
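To make the three conditions above concrete, here is a small checker (a hypothetical helper; the data encoding and name are ours):

```python
def is_tree_decomposition(bags, tree_edges, graph_edges, vertices):
    """Check the three tree-decomposition conditions.

    bags: dict mapping tree node -> set of graph vertices (its bag);
    tree_edges: edges of the decomposition tree; graph_edges: edges of G."""
    # (1) Every vertex of G appears in some bag.
    if set().union(*bags.values()) != set(vertices):
        return False
    # (2) Every edge of G is contained in some bag.
    for u, v in graph_edges:
        if not any({u, v} <= bag for bag in bags.values()):
            return False
    # (3) For each vertex, the bags containing it induce a connected subtree.
    for x in vertices:
        nodes = {t for t, bag in bags.items() if x in bag}
        adj = {t: set() for t in nodes}
        for a, b in tree_edges:
            if a in nodes and b in nodes:
                adj[a].add(b)
                adj[b].add(a)
        start = next(iter(nodes))
        seen, stack = {start}, [start]
        while stack:  # DFS restricted to bags containing x
            for s in adj[stack.pop()] - seen:
                seen.add(s)
                stack.append(s)
        if seen != nodes:
            return False
    return True
```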
2.2 HR-algebra
We introduce the notion of HR-algebras² (²The acronym HR stands for hyperedge replacement [4].), which gives one of the algebraic definitions of tree decomposition. A k-graph is a tuple (V, E, s) of a set of vertices V, a set of edges E, and a k-vector s of vertices. We write the i-th element of a vector s by sᵢ. If v = sᵢ for some i, we say that v is an i-source. We write the set of sources {s₁, …, s_k} by S(G).
Let G₁ and G₂ be k-graphs. The parallel composition of G₁ and G₂, denoted by G₁ // G₂, is the graph generated by the following procedure. First, create the disjoint union of G₁ and G₂, and identify the i-source vertices of G₁ and G₂ for each i ∈ {1, …, k}. Finally, remove all the self-loops and multi-edges. Let A be a non-empty subset of {1, …, k}. The source forgetting operation fg_A is the function that maps a k-graph G to the k-graph fg_A(G) with the same vertices and edges, in which the i-th source is turned into a non-source if i ∈ A, and kept otherwise.
Definition 9 (HR-algebra).
A term of an HR-algebra over k-graphs is either
-
a constant symbol denoting a k-graph with a single vertex, a k-graph with two vertices and a single edge, or the empty graph, respectively;
-
t₁ // t₂ for any terms t₁ and t₂;
-
fg_A(t) for any term t and any non-empty A ⊆ {1, …, k}.
We may associate a term with the graph it represents. A term of an HR-algebra can be decomposed into a parse tree in the usual way, called the (rooted) HR-parse tree of the term. The height of an HR-parse tree is the maximum distance from the root to a leaf. It is known that tree decompositions are equivalent to HR-parse trees in the following sense.
Proposition 10 (see e.g. [4]).
The treewidth of a graph is at most k if and only if the graph can be denoted by a term of an HR-algebra over (k+1)-graphs.
Moreover, a binary tree decomposition can be transformed into an HR-parse tree of approximately the same height.
Proposition 11 ().
Given a rooted binary tree decomposition of G with width k and height h, we can compute an HR-parse tree over (k+1)-graphs denoting G with height O(h).
The proof of Proposition 11 is essentially the same as that of Proposition 10, but we give it in the full version.
2.3 Monadic Second-Order Logic
We provide a slightly less formal definition of monadic second-order (MSO) logic over k-graphs for accessibility (see, e.g., [17] for a more formal definition). An atomic formula is a formula of the form x = y, x ∈ X, adj(x, y), or one of the Boolean constants true and false. Here, adj is the predicate that represents whether x and y are adjacent. A monadic second-order formula over k-graphs is a formula built from atomic formulas, the usual Boolean connectives, first-order quantifications ∃x and ∀x (quantifications over vertices), and second-order quantifications ∃X and ∀X (quantifications over sets of vertices). The notation φ(X) indicates that φ has exactly one free variable X. The quantifier height (or quantifier rank) of a formula φ, denoted qh(φ), is the maximum depth of quantifier nesting in φ (see, e.g., [17] for a formal definition). The counting monadic second-order (CMSO) logic is an expansion of MSO logic that has additional predicates Card_{p,q}(X) for a second-order variable X, meaning |X| ≡ p (mod q). An (h, q)-formula is a CMSO formula φ such that the quantifier height of φ is at most h and q′ ≤ q holds for any predicate Card_{p,q′} in φ. For a logical formula φ and a graph G, we write G ⊨ φ when the graph G satisfies the property expressed by φ.
There are two variants of MSO logic: MSO₁ (or simply MSO), which is described above, allows quantification over vertices and sets of vertices only, while MSO₂ also allows quantification over edges and sets of edges. It is well known that, for any graph G and MSO₂-formula φ, there exists an MSO₁ formula φ′ such that G ⊨ φ if and only if Inc(G) ⊨ φ′, where Inc(G) is the incidence graph, obtained by subdividing each edge of G (with two colors marking the original and the subdivision vertices) (see, e.g., [11]). It is easy to see that the treewidth of the incidence graph is at most the treewidth of the original graph. Thus, since this paper focuses on graphs with bounded treewidth, we mainly consider MSO₁. However, all the theorems also hold for MSO₂ and graphs of bounded treewidth.
We introduce some more notation. For a set V, two families 𝒜 and ℬ of subsets of V are separated if A ∩ B = ∅ for all A ∈ 𝒜 and B ∈ ℬ. We write 𝒜 ⊎ ℬ for the family {A ∪ B : A ∈ 𝒜, B ∈ ℬ} if 𝒜 and ℬ are separated. For a formula φ and a graph G, the set of vertex sets satisfying φ is denoted by sat(φ, G), that is, sat(φ, G) = {S ⊆ V(G) : G ⊨ φ(S)}.
Theorem 12 ([4]).
Let h, q, k be positive integers. Let φ(X) be an (h, q)-formula over k-graphs with a second-order free variable X. Then, the following hold.
-
1.
For any non-empty A ⊆ {1, …, k}, there exists an (h, q)-formula φ_A(X) such that for any k-graph G, we have sat(φ, fg_A(G)) = sat(φ_A, G).
-
2.
There exists a family {(ψᵢ, ψ′ᵢ)}ᵢ of tuples of (h, q)-formulas with a free variable X such that, for any k-graphs G₁ and G₂, sat(φ, G₁ // G₂) = ⋃ᵢ sat(ψᵢ, G₁) ⊎ sat(ψ′ᵢ, G₂).
It is known that, for any h, q, and k, up to logical equivalence, there are only finitely many different (h, q)-formulas over k-graphs (see, e.g., [17]). Thus, we can obtain a linear-time algorithm, based on dynamic programming over parse trees, for model checking over graphs of bounded treewidth. Courcelle and Mosbah [5] adjusted Theorem 12 to address optimization, counting, and other problems. We use a slightly modified version of the theorem of Courcelle and Mosbah.
Let w ∈ ℝ≥0^V, and denote w(S) = Σ_{v ∈ S} w(v) for a set S ⊆ V.
Observation 13.
Let be a -graph, and be a formula with a free set variable . Then, we have .
The proof of Corollary 14 is given in the full version.
Corollary 14 ().
Let h, q, k be positive integers. Let φ(X) be an (h, q)-formula over k-graphs with a second-order free variable X. Then, the following hold.
-
1.
For any non-empty A ⊆ {1, …, k}, there exists an (h, q)-formula φ_A(X) such that for any k-graph G, we have sat(φ, fg_A(G)) = sat(φ_A, G).
-
2.
There exists a family {(ψᵢ, ψ′ᵢ)}ᵢ of tuples of (h, q)-formulas with a free variable X such that, for any k-graphs G₁ and G₂, sat(φ, G₁ // G₂) = ⋃ᵢ sat(ψᵢ, G₁) ⊎ sat(ψ′ᵢ, G₂), where the sets sat(ψᵢ, G₁) ⊎ sat(ψ′ᵢ, G₂) are pairwise disjoint.
Moreover, the number of tuples in the family is bounded by a function of h, q, and k.
Then, we can design an algorithm for a (C)MSO maximization (minimization) problem over graphs of bounded treewidth. For simplicity, we assume that the given graph G has no sources, that is, S(G) = ∅. Let t be a term denoting G and φ be the (C)MSO formula describing the constraint of the problem. We recursively compute, for every subterm t′ of t and every (C)MSO formula ψ arising from Corollary 14, the optimal value over sat(ψ, G(t′)), where w is the given weight vector restricted to G(t′). If t′ is of the form fg_A(t″), the value for (t′, ψ) is computed from the value for (t″, ψ_A), where ψ_A is the formula obtained from Corollary 14. If t′ is of the form t₁ // t₂, the value for (t′, ψ) is computed by combining the values for (t₁, ψᵢ) and (t₂, ψ′ᵢ) over the tuples (ψᵢ, ψ′ᵢ) obtained from Corollary 14. Then, the value computed at the root for (t, φ) is the optimum for the problem since G has no sources. In the following, for simplicity, we may omit sources from the notation and consider only formulas φ such that no set in sat(φ, G) contains a source.
In the above setting, the graphs are considered without any special vertex sets (called colors) or special vertices (called labels). However, for k-graphs with a constant number of colors and/or labels, an appropriate HR-algebra and logic can be defined, and results similar to Corollary 14 hold (see, e.g., [4, 17]).
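The bottom-up evaluation over an HR-parse tree described in this section can be sketched generically (a schematic sketch with our own encoding; the paper’s algorithm additionally tracks one table entry per formula of Corollary 14):

```python
def evaluate_parse_tree(node, leaf_value, combine, forget):
    """Generic bottom-up evaluation over an HR-parse tree.

    node is ("leaf", data), ("par", left, right) for parallel composition,
    or ("fg", A, child) for forgetting the sources in A.  The three
    callbacks supply the problem-specific dynamic-programming tables."""
    tag = node[0]
    if tag == "leaf":
        return leaf_value(node[1])
    if tag == "par":
        return combine(
            evaluate_parse_tree(node[1], leaf_value, combine, forget),
            evaluate_parse_tree(node[2], leaf_value, combine, forget),
        )
    if tag == "fg":
        return forget(node[1],
                      evaluate_parse_tree(node[2], leaf_value, combine, forget))
    raise ValueError("unknown node tag: " + tag)
```

For instance, with `leaf_value` returning a leaf’s weight, `combine` adding values, and `forget` passing values through, the evaluation simply sums the leaf weights of the term.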
2.4 Lipschitz Continuity
We formally define Lipschitz continuity of algorithms.
As this is sufficient for this work, we only consider algorithms for vertex-weighted graph problems.
The definition can be naturally extended to other settings, such as edge-weighted problems.
Let G = (V, E) be a graph.
For vertex sets S, T ⊆ V and weight vectors w, w′ ∈ ℝ≥0^V, we define the weighted Hamming distance between (S, w) and (T, w′) by
d((S, w), (T, w′)) = Σ_{v ∈ V} | w(v) · 𝟙_S(v) − w′(v) · 𝟙_T(v) |,
where 𝟙_S denotes the characteristic vector of S, that is, the vector with 𝟙_S(v) = 1 if and only if v ∈ S. For two probability distributions 𝒳 and 𝒴 over subsets of V, we define
EM((𝒳, w), (𝒴, w′)) = min E[d((S, w), (T, w′))],
where the minimum is taken over couplings of 𝒳 and 𝒴, that is, distributions over pairs of sets (S, T) such that the marginal distributions on the first and second coordinates are equal to 𝒳 and 𝒴, respectively. Consider an algorithm A that takes a graph G and a weight vector w as an input and outputs a vertex subset. We denote the output distribution of A for weight w by A(w). The Lipschitz constant of A is defined as the supremum, over pairs of weight vectors w ≠ w′, of the ratio of EM((A(w), w), (A(w′), w′)) to ‖w − w′‖₁.
For two random variables X and Y, the total variation distance between them is given as
d_TV(X, Y) = min Pr[X ≠ Y],
where the minimum is taken over couplings between X and Y, that is, distributions over pairs such that the marginal distributions on the first and the second coordinates are equal to X and Y, respectively. For an element v ∈ V, we use e_v to denote the characteristic vector of {v}, that is, e_v(v) = 1 and e_v(u) = 0 for u ≠ v. The following lemma indicates that, to bound the Lipschitz constant, it suffices to consider pairs of weight vectors that differ in one coordinate.
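The weighted Hamming distance above can be transcribed directly (dict-based weights; the helper name is ours):

```python
def weighted_hamming(S, w_S, T, w_T):
    """d((S, w_S), (T, w_T)): the weighted Hamming distance between
    set/weight pairs, i.e. the l1-distance between the characteristic
    vectors scaled by their respective weight vectors."""
    vertices = set(w_S) | set(w_T)
    return sum(abs(w_S.get(v, 0) * (v in S) - w_T.get(v, 0) * (v in T))
               for v in vertices)
```

Note that the distance is positive even for identical sets when the weights differ, which is what relates output stability to the `‖w − w′‖₁` term in the Lipschitz constant.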
Lemma 15 ([14]).
Suppose that there exists some L > 0 such that
EM((A(w), w), (A(w + δ · e_v), w + δ · e_v)) ≤ L · δ
holds for all w ∈ ℝ≥0^V, v ∈ V, and δ > 0. Then, A is L-Lipschitz.
In our algorithm, we use the following procedures softmax and softmin. These procedures are derived from the exponential mechanism in the literature of differential privacy [7] and frequently appear in the literature on Lipschitz continuity [14, 16]. Here, we organize them into a more convenient form for our use. Let , , and . The softmax of is taken as follows. If , we set to be an arbitrary index with and . Assume otherwise. First, we sample uniformly from . Let be a probability distribution over such that
holds for all . Then, we define and . We have the following. The proofs of Lemmas 16 and 17 are given in the full version.
Lemma 16.
We have .
Lemma 17.
Let and assume . Let be numbers such that holds for all . Then, .
Similarly, the softmin of is taken as follows. If , we set be an arbitrary index with and . Assume otherwise. First, we sample uniformly from . Let be a probability distribution over such that
holds for all . Then, we define and . We have the following. The proofs of Lemmas 18 and 19 are given in the full version.
Lemma 18.
We have .
Lemma 19.
Let and assume . Let be numbers such that holds for all . Then, .
3 Lipschitz Continuous Algorithms for MSO2 Optimization Problems
In this section, we prove Theorems 1, 2, and 3. Since the proof of Theorem 2 is similar to that of Theorem 1, we provide most of it in the full version. In Section 3.1, we define the notation and give a brief overview of the algorithm. Section 3.2 handles the base case, where the graph consists of a single vertex. Sections 3.3 and 3.4 analyze the impact of the parallel composition and the forget operations, respectively, on approximability and the Lipschitz constant. In Section 3.5, we put everything together to prove Theorem 1. Finally, in Section 4.1, we provide a more refined analysis for the special case of the maximum weight independent set problem to prove Theorem 3.
3.1 Definitions
For a graph G, a weight vector w ∈ ℝ≥0^V(G), and an MSO formula φ(X), we define
OPTmax_φ(G, w) = max { w(S) : S ∈ sat(φ, G) } | (1)
for maximization problems and
OPTmin_φ(G, w) = min { w(S) : S ∈ sat(φ, G) } | (2)
for minimization problems. If sat(φ, G) = ∅, we define OPTmax_φ(G, w) = −∞ and OPTmin_φ(G, w) = +∞. For graphs G and formulas φ with sat(φ, G) ≠ ∅, our algorithm recursively computes a vertex set that approximately achieves the maximum in Equation (1) or the minimum in Equation (2). When it is clear from the context, we omit the subscript φ and simply write OPTmax and OPTmin. We perform the dynamic programming algorithm using the formulas defined in Corollary 14 over the parse tree of a term obtained from Lemma 8 and Proposition 11. Specifically, at every stage of the algorithm, it is guaranteed that each output set S satisfies S ∈ sat(φ, G). Furthermore, our algorithm is randomized. Thus, the output can be considered as a probability distribution over vertex sets in sat(φ, G).
To bound the approximation ratio, for a weight vector w, we bound the ratio between the (expected) weight of the output and the optimal value. To bound the Lipschitz constant, for a weight vector w, a vertex v, and δ > 0, we bound the earth mover’s distance between the outputs for w and w + δ · e_v. From now on, we concentrate on maximization problems. The algorithm for minimization problems is similar to that for maximization problems, while the details of the analysis differ slightly. We discuss the minimization version in the full version.
3.2 Base Case
Here we consider the base case. Let G be a graph with a single vertex v and no edges, where v is a source of G, and let φ be an MSO formula such that sat(φ, G) is nonempty and contains no set containing v. In particular, we have sat(φ, G) = {∅}. We set the output to ∅. Since sat(φ, G) = {∅}, it is clear that the output attains the optimum. Furthermore, it is obvious that the output does not depend on w, and hence this step does not affect the Lipschitz constant.
3.3 Parallel Composition
Next, we consider the parallel composition. Let
| (3) |
Assume we have already computed and for each . We compute . For each , we denote . By definition, we have . Then, we take and define . First we analyze the approximation ratio.
Lemma 20.
Let and suppose and hold for all . Then, we have .
Proof.
We have
Now we analyze the Lipschitz constant. We denote the variable corresponding to the weight and by and , respectively. We note that and are the same as a distribution unless . The same also holds for . Since and are disjoint, without loss of generality, we can assume the and are the same as a distribution. We have the following.
Lemma 21.
Let and suppose holds for all . Then, we have .
Proof.
We have
where the last inequality is from and .
3.4 Forget
Here we consider the forget operation. Let
| (4) |
Assume we have already computed for each . We compute .
We first sample uniformly from . For each , we denote . By definition, we have . Then, we take and define . First we analyze the approximation ratio.
Lemma 22.
Let and suppose holds for all . Then, we have .
Proof.
We have
Now we analyze the Lipschitz constant.
Lemma 23.
Let and suppose holds for all . Then, we have .
Proof.
We have
3.5 Putting Together
Let G be a graph with treewidth k. From Lemma 8 and Proposition 11, we can compute an HR-parse tree over O(k)-graphs denoting G with height O(log n). We perform the dynamic programming algorithm on this parse tree. We have the following.
Lemma 24.
Proof.
The approximability bound is obtained by repeatedly applying Lemmas 20 and 22 along the parse tree. The Lipschitzness bound is obtained by repeatedly applying Lemmas 21 and 23.
Since is bounded by a function of and , we have the following.
Proof of Theorem 1.
The claim follows by substituting , , and in Lemma 24 with , , and , respectively.
4 Special Cases
4.1 Independent Set
From the proof of Theorem 1, in Theorem 1 is bounded by .
Particularly, if is bounded by , the Lipschitz constant depends linearly on .
Here, we prove Theorem 3 for the maximum weight independent set problem by showing that .
Similar arguments can be applied to several other problems, such as the minimum weight vertex cover problem and the minimum weight dominating set problem.
For a -graph and , let
In words, is the family of subsets of such that is disjoint from and is an independent set of .
Then, we have
Particularly, we have and therefore Theorem 3 is proved.
4.2 Max Ones
Recall that in the max ones problem, we are given a 3CNF formula over a variable set V and a weight function w : V → ℝ≥0, and the goal is to find a satisfying assignment that maximizes the total weight of the variables assigned true. We reduce the problem to a graph problem. Let G be the bipartite graph whose vertex set is the union of the variable set and the clause set of the formula, where a variable x and a clause C are joined by a positive edge iff C contains x as a positive literal, and by a negative edge iff C contains x as a negative literal. The weight of a variable vertex is its weight under w, and the weight of a clause vertex is 0. Note that the treewidth of G is at most n.
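The reduction’s variable–clause graph can be built as follows (a hypothetical encoding of the construction; names are ours):

```python
def incidence_graph(clauses):
    """Build the bipartite variable-clause graph of a 3CNF formula.

    clauses: list of clauses, each a list of signed literals such as
    [("x", True), ("y", False)] meaning (x OR NOT y).  Returns
    (variables, pos_edges, neg_edges), where (x, j) is in pos_edges iff
    clause j contains x positively and in neg_edges iff negatively."""
    variables, pos_edges, neg_edges = set(), set(), set()
    for j, clause in enumerate(clauses):
        for var, positive in clause:
            variables.add(var)
            (pos_edges if positive else neg_edges).add((var, j))
    return variables, pos_edges, neg_edges
```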
Here, the notations , , , , and are syntactic sugar defined in the usual sense.
For a -graph and , let
The first and the second rows say that the two set variables represent the sets of variables assigned true and false by an assignment, respectively. The third row says that the graph has no sources, and all clauses except the designated ones are satisfied by the assignment defined by the two sets. Then, the max ones problem is equivalent to finding a vertex set that maximizes the weight and satisfies the formula. Here, we have
Particularly, we have .
References
- [1] Brenda Baker. Approximation algorithms for NP-complete problems on planar graphs. Journal of the ACM (JACM), 41(1):153–180, 1994. doi:10.1145/174644.174650.
- [2] Hans Bodlaender and Torben Hagerup. Parallel algorithms with optimal speedup for bounded treewidth. SIAM Journal on Computing, 27(6):1725–1746, 1998. doi:10.1137/S0097539795289859.
- [3] Bruno Courcelle. The monadic second-order logic of graphs. I. recognizable sets of finite graphs. Information and Computation, 85(1):12–75, 1990. doi:10.1016/0890-5401(90)90043-H.
- [4] Bruno Courcelle and Joost Engelfriet. Graph Structure and Monadic Second-Order Logic – A Language-Theoretic Approach, volume 138 of Encyclopedia of mathematics and its applications. Cambridge University Press, 2012. doi:10.1017/CBO9780511977619.
- [5] Bruno Courcelle and Mohamed Mosbah. Monadic second-order evaluations on tree-decomposable graphs. Theoretical Computer Science, 109(1&2):49–82, 1993. doi:10.1016/0304-3975(93)90064-Z.
- [6] Nadia Creignou, Sanjeev Khanna, and Madhu Sudan. Complexity classifications of Boolean constraint satisfaction problems. SIAM, 2001.
- [7] Cynthia Dwork. Differential privacy. In International Colloquium on Automata, Languages and Programming (ICALP), volume 4052, pages 1–12, 2006. doi:10.1007/11787006_1.
- [8] Noah Fleming and Yuichi Yoshida. Sensitivity lower bounds for approximation algorithms. arXiv preprint arXiv:2411.02744, 2024. doi:10.48550/arXiv.2411.02744.
- [9] Sanjeev Khanna, Madhu Sudan, Luca Trevisan, and David Williamson. The approximability of constraint satisfaction problems. SIAM Journal on Computing, 30(6):1863–1920, 2001. doi:10.1137/S0097539799349948.
- [10] Tuukka Korhonen. Lower bounds on dynamic programming for maximum weight independent set. In Nikhil Bansal, Emanuela Merelli, and James Worrell, editors, International Colloquium on Automata, Languages, and Programming (ICALP), volume 198 of LIPIcs, pages 87:1–87:14, 2021. doi:10.4230/LIPICS.ICALP.2021.87.
- [11] Stephan Kreutzer. Algorithmic meta-theorems. In Finite and Algorithmic Model Theory, volume 379 of London Mathematical Society Lecture Note Series, pages 177–270. Cambridge University Press, 2011.
- [12] Soh Kumabe and Yuichi Yoshida. Average sensitivity of dynamic programming. In ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1925–1961, 2022. doi:10.1137/1.9781611977073.77.
- [13] Soh Kumabe and Yuichi Yoshida. Average sensitivity of the knapsack problem. In European Symposium on Algorithms (ESA), volume 244, pages 75:1–75:14, 2022. doi:10.4230/LIPICS.ESA.2022.75.
- [14] Soh Kumabe and Yuichi Yoshida. Lipschitz continuous algorithms for graph problems. In Symposium on Foundations of Computer Science (FOCS), pages 762–797, 2023. doi:10.1109/FOCS57990.2023.00051.
- [15] Soh Kumabe and Yuichi Yoshida. Lipschitz continuous allocations for optimization games. In International Colloquium on Automata, Languages, and Programming (ICALP), volume 297, pages 102:1–102:16, 2024. doi:10.4230/LIPICS.ICALP.2024.102.
- [16] Soh Kumabe and Yuichi Yoshida. Lipschitz continuous algorithms for covering problems. In Symposium on Discrete Algorithms (SODA), pages 1057–1093, 2025. doi:10.1137/1.9781611978322.31.
- [17] Leonid Libkin. Elements of Finite Model Theory. Texts in Theoretical Computer Science. An EATCS Series. Springer, 2004. doi:10.1007/978-3-662-07003-1.
- [18] Quanquan Liu, Grigoris Velegkas, Yuichi Yoshida, and Felix Zhou. Pointwise lipschitz continuous graph algorithms via proximal gradient analysis. arXiv preprint, 2024. doi:10.48550/arXiv.2405.08938.
- [19] Shogo Murai and Yuichi Yoshida. Sensitivity analysis of centralities on unweighted networks. In The world wide web conference, pages 1332–1342, 2019. doi:10.1145/3308558.3313422.
- [20] Pan Peng and Yuichi Yoshida. Average sensitivity of spectral clustering. In ACM SIGKDD international conference on knowledge discovery & data mining (KDD), pages 1132–1140, 2020. doi:10.1145/3394486.3403166.
- [21] Nithin Varma and Yuichi Yoshida. Average sensitivity of graph algorithms. SIAM Journal on Computing, 52(4):1039–1081, 2023. doi:10.1137/21M1399592.
- [22] Yuichi Yoshida and Shinji Ito. Average sensitivity of Euclidean k-clustering. Advances in Neural Information Processing Systems (NeurIPS), 35:32487–32498, 2022.
- [23] Yuichi Yoshida and Samson Zhou. Sensitivity analysis of the maximum matching problem. In Innovations in Theoretical Computer Science Conference (ITCS), volume 185, pages 58:1–58:20, 2021. doi:10.4230/LIPIcs.ITCS.2021.58.
