
Courcelle’s Theorem for Lipschitz Continuity

Tatsuya Gima, Hokkaido University, Sapporo, Japan; Soh Kumabe, CyberAgent, Tokyo, Japan; Yuichi Yoshida, National Institute of Informatics, Tokyo, Japan
Abstract

Lipschitz continuity of algorithms, introduced by Kumabe and Yoshida (FOCS’23), measures the stability of an algorithm against small input perturbations. Algorithms with small Lipschitz constants are desirable, as they ensure reliable decision-making and reproducible scientific research. Several studies have proposed Lipschitz continuous algorithms for various combinatorial optimization problems, but these algorithms are problem-specific, requiring a separate design for each problem.

To address this issue, we provide the first algorithmic meta-theorem in the field of Lipschitz continuous algorithms. Our result can be seen as a Lipschitz continuous analogue of Courcelle’s theorem, which offers Lipschitz continuous algorithms for problems on bounded-treewidth graphs. Specifically, we consider the problem of finding a vertex set in a graph that maximizes or minimizes the total weight, subject to constraints expressed in monadic second-order logic (MSO2). We show that for any ε>0, there exists a (1±ε)-approximation algorithm for the problem with a polylogarithmic Lipschitz constant on bounded treewidth graphs. On such graphs, our result outperforms most existing Lipschitz continuous algorithms in terms of approximability and/or Lipschitz continuity. Further, we provide similar results for problems on bounded-clique-width graphs subject to constraints expressed in MSO1. Additionally, we construct a Lipschitz continuous version of Baker’s decomposition using our meta-theorem as a subroutine.

Keywords and phrases:
Fixed-Parameter Tractability, Algorithmic Meta-Theorem, Lipschitz Continuity
Funding:
Tatsuya Gima: JSPS KAKENHI Grant Numbers JP24K23847, JP25K03077
Yuichi Yoshida: JSPS KAKENHI Grant Number JP24K02903
Copyright and License:
© Tatsuya Gima, Soh Kumabe, and Yuichi Yoshida; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Theory of computation → Fixed parameter tractability; Theory of computation → Approximation algorithms analysis
Related Version:
Full Version: http://arxiv.org/abs/2506.21118
Editors:
Anne Benoit, Haim Kaplan, Sebastian Wild, and Grzegorz Herman

1 Introduction

Lipschitz continuity of algorithms, introduced by Kumabe and Yoshida [14], is a measure of an algorithm’s stability in response to errors or small perturbations in the input for weighted optimization problems. Roughly speaking, it is the maximum ratio of the (weighted) Hamming distance between the outputs of an algorithm for two different weights to the $\ell_1$ distance between those weights (see Section 2.4 for the precise definition). It is desirable for algorithms to have small Lipschitz constants, as large constants can undermine the reliability of decision-making and the reproducibility of scientific research.

Since its introduction, Lipschitz continuous algorithms have been proposed for various optimization problems [14, 16]. However, we need to design a different algorithm and analyze its Lipschitz continuity for each problem, which can be impractical. To address this limitation, we present an algorithmic meta-theorem for Lipschitz continuous algorithms, which can be seen as a Lipschitz continuity analogue of the celebrated Courcelle’s theorem [3].

To present our results, we first introduce some notation. Let $G=(V,E)$ be a graph on $n$ vertices with treewidth $\mathrm{tw}$, let $\varphi(X)$ be an MSO2 formula with a free vertex-set variable $X$ (see Section 2.3 for details about logic), and let $w \in \mathbb{R}_{\ge 0}^V$ be a weight vector. We consider the problem of finding a vertex subset $X \subseteq V$ such that $G \models \varphi(X)$ and $w(X)$ is maximized, which we call the MSO2 maximization problem. We also consider the problem of finding such an $X$ with $w(X)$ minimized, which we call the MSO2 minimization problem. Our main results are the following.

Theorem 1.

For any $\varepsilon \in (0,1]$, there is a $(1-\varepsilon)$-approximation algorithm for the MSO2 maximization problem with Lipschitz constant $O((f(\mathrm{tw},|\varphi|)+\log\varepsilon^{-1}+\log\log n)\,\varepsilon^{-1}\log^2 n)$, where $f$ is some computable function. The time complexity is bounded by $O(f(\mathrm{tw},|\varphi|)\,n)$.

Theorem 2.

For any $\varepsilon \in (0,1]$, there is a $(1+\varepsilon)$-approximation algorithm for the MSO2 minimization problem with Lipschitz constant $O((f(\mathrm{tw},|\varphi|)+\log\varepsilon^{-1}+\log\log n)\,\varepsilon^{-1}\log^2 n)$, where $f$ is some computable function. The time complexity is bounded by $O(f(\mathrm{tw},|\varphi|)\,n)$.

We note that the trivial upper bound on the Lipschitz constant is $n$ [16]; therefore, the bounds in the theorems above are significantly smaller for fixed $\mathrm{tw}$ and $\varphi$. We also remark that our meta-theorems yield randomized approximation algorithms. This is necessary since, for most problems, it is known that exact or deterministic algorithms cannot be Lipschitz continuous [14].

When the treewidth of the input graph is bounded by a constant, Theorems 1 and 2 provide algorithms that outperform existing algorithms in terms of approximability and/or Lipschitz continuity:

  • For the minimum weight vertex cover problem, Theorem 2 yields an algorithm with a better approximation ratio than the previous 2-approximation algorithm with Lipschitz constant 4 [16], at the cost of a polylogarithmic factor in the Lipschitz constant.

  • For the minimum weight feedback vertex set problem, Theorem 2 outperforms the previous $O(\log n)$-approximation algorithm with Lipschitz constant $O(\sqrt{n}\log^{3/2} n)$ [16] in terms of both approximability and Lipschitz continuity.

  • For the maximum weight matching problem (although the output of this problem is an edge set rather than a vertex set, it can be expressed by an MSO2 formula on the new graph $G'=(V \cup E, \{(v,e) : e \text{ is incident to } v\})$), Theorem 1 yields an algorithm with a better approximation ratio than the previous $(1/8-\varepsilon)$-approximation algorithm with Lipschitz constant $O(\varepsilon^{-1})$ [14], at the cost of a polylogarithmic factor in the Lipschitz constant.

  • For the shortest path problem, Theorem 2 slightly improves the Lipschitz continuity compared to the previous $(1+\varepsilon)$-approximation algorithm with Lipschitz constant $O(\varepsilon^{-1}\log^3 n)$ [14], without losing approximability.

For a fixed $\varphi$, by explicitly specifying the transitions in the dynamic programming within the algorithm, we can provide more precise bounds on the function $f$ that appears in Theorems 1 and 2. In particular, considering the case where $\varphi$ is an MSO2 formula representing the independent set constraint, we obtain the following.

Theorem 3.

For any $\varepsilon \in (0,1]$, there is a $(1-\varepsilon)$-approximation algorithm for the maximum weight independent set problem with Lipschitz constant $O((\mathrm{tw}+\log\varepsilon^{-1}+\log\log n)\,\varepsilon^{-1}\log^2 n)$. The time complexity is bounded by $2^{O(\mathrm{tw})}n$.

This result is surprising, as the dependence of the Lipschitz constant on $\mathrm{tw}$ is subexponential and, even more remarkably, linear. We note that, through similar arguments, analogous results hold for several other problems, such as the minimum weight vertex cover problem and the minimum weight dominating set problem.

We further demonstrate that Theorems 1 and 2 lead to a Lipschitz continuous version of Baker’s technique [1]. As a representative example, we consider the maximum weight independent set problem on planar graphs, where a vertex subset $X$ is an independent set of a graph $G$ if no two vertices in $X$ are adjacent in $G$. We prove the following.

Theorem 4.

For any $\varepsilon \in (0,1]$, there is a $(1-\varepsilon)$-approximation algorithm for the maximum weight independent set problem on planar graphs with Lipschitz constant $O((\varepsilon^{-1}+\log\log n)\,\varepsilon^{-1}\log^2 n)$. The time complexity is bounded by $2^{O(\varepsilon^{-1})}n$.

Using similar algorithms, Lipschitz continuous PTASes can be obtained for many problems, including the minimum weight vertex cover problem and the minimum weight dominating set problem.

As a lower bound result, we prove that the Lipschitz constant of any $(1-\varepsilon)$-approximation algorithm for some MSO2 maximization problem is large on general graphs, which justifies considering the MSO2 maximization problem on a restricted graph class. Specifically, we consider the max ones problem [6, 9], where the instance is a 3CNF formula over a variable set $X$ and a weight function $w \colon X \to \mathbb{R}_{\ge 0}$, and the goal is to find a satisfying assignment $\sigma \colon X \to \{0,1\}$ that maximizes the weight $\sum_{x \in \sigma^{-1}(1)} w(x)$. It is easy to see that there exists a fixed MSO2 formula $\varphi(X)$ such that the max ones problem can be reduced to a problem on bipartite graphs, where the task is to find a vertex set $X$ that maximizes the weight $\sum_{x \in X} w(x)$ subject to $\varphi$. We show the following.

Theorem 5.

There exist $\varepsilon,\delta>0$ such that any $(1-\varepsilon)$-approximation algorithm for the max ones problem has Lipschitz constant $\Omega(n^{\delta})$, where $n$ is the number of variables.

We note that it is possible to construct an algorithm for the max ones problem with a Lipschitz constant similar to that in Theorem 3, where $\mathrm{tw}$ denotes the treewidth of the “incidence graph” of the 3CNF instance (see Section 4.2 for details). Since $\mathrm{tw}$ is upper bounded by $n$ for any instance, Theorem 5 implies that the dependency of the Lipschitz constant on $\mathrm{tw}$ cannot be improved to $\mathrm{tw}^{o(1)}$.

We also prove analogous meta-theorems when parameterizing by clique-width. Let an MSO1 maximization (resp., minimization) problem be the variant of an MSO2 problem in which the formula $\varphi$ is restricted to MSO1. Denote by $\mathrm{cw}$ the clique-width of the graph $G$. We then obtain the following.

Theorem 6.

For any $\varepsilon \in (0,1]$, there is a $(1-\varepsilon)$-approximation algorithm for the MSO1 maximization problem with Lipschitz constant $O((f(\mathrm{cw},|\varphi|)+\log\varepsilon^{-1}+\log\log n)\,\varepsilon^{-1}\log^2 n)$, where $f$ is some computable function. The time complexity is bounded by $O(f(\mathrm{cw},|\varphi|)\,n)$.

Theorem 7.

For any $\varepsilon \in (0,1]$, there is a $(1+\varepsilon)$-approximation algorithm for the MSO1 minimization problem with Lipschitz constant $O((f(\mathrm{cw},|\varphi|)+\log\varepsilon^{-1}+\log\log n)\,\varepsilon^{-1}\log^2 n)$, where $f$ is some computable function. The time complexity is bounded by $O(f(\mathrm{cw},|\varphi|)\,n)$.

We note that the functions f in Theorems 6 and 7 are much larger than those in Theorems 1 and 2. In particular, the bounds for Theorems 3 and 4 do not follow from Theorem 6.

1.1 Technical Overview

Now we provide a technical overview of our framework. Since the arguments for clique-width are similar, we focus here on the treewidth results and omit the clique-width case.

For simplicity, here we consider the maximum weight independent set problem on a full binary tree (a rooted tree in which every vertex has exactly 0 or 2 children). This corresponds to the case where $\varphi(X)=\forall x \forall y\,((x \in X \land y \in X) \to \lnot\mathsf{adj}(x,y))$. Let $w \in \mathbb{R}_{\ge 0}^V$ be the weight vector.

If we do not care about Lipschitzness, this problem can be solved exactly by the following algorithm. For each vertex $v \in V$, define $\mathsf{DP}[v][0]$ to be a maximum-weight independent set in the subtree rooted at $v$ that does not include $v$. Similarly, define $\mathsf{DP}[v][1]$ to be a maximum-weight independent set in the subtree rooted at $v$ (with no restriction on $v$). If $v$ is a leaf, then $\mathsf{DP}[v][0]=\emptyset$ and $\mathsf{DP}[v][1]=\{v\}$. Otherwise, let $u_1$ and $u_2$ be the two children of $v$. We have $\mathsf{DP}[v][0]=\mathsf{DP}[u_1][1] \cup \mathsf{DP}[u_2][1]$, and $\mathsf{DP}[v][1]$ is whichever of $X_{v,0}:=\mathsf{DP}[u_1][0] \cup \mathsf{DP}[u_2][0] \cup \{v\}$ and $X_{v,1}:=\mathsf{DP}[u_1][1] \cup \mathsf{DP}[u_2][1]$ has the larger weight. By performing this dynamic programming in a bottom-up manner, the problem can be solved exactly.
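As a concrete illustration, the recursion above can be transcribed directly into code. The following is a minimal Python sketch (the adjacency encoding and the function name are ours, not from the paper):

```python
def mwis_binary_tree(children, w, root):
    """Exact bottom-up DP for maximum-weight independent set on a rooted
    full binary tree. children[v] lists the 0 or 2 children of v; w[v] >= 0.
    Returns (DP[root][0], DP[root][1]) as Python sets."""
    def weight(s):
        return sum(w[x] for x in s)

    def solve(v):
        if not children[v]:                    # leaf: DP[v][0], DP[v][1]
            return set(), {v}
        u1, u2 = children[v]
        a0, a1 = solve(u1)
        b0, b1 = solve(u2)
        dp0 = a1 | b1                          # v excluded, children free
        x_v0 = a0 | b0 | {v}                   # v included, children excluded
        x_v1 = a1 | b1                         # v excluded
        dp1 = x_v0 if weight(x_v0) >= weight(x_v1) else x_v1
        return dp0, dp1

    return solve(root)
```

For instance, on the 5-vertex tree with root 0, children (1, 2) of the root, children (3, 4) of vertex 1, and weights (5, 1, 2, 3, 4), this returns the optimal set {0, 3, 4} of weight 12.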

However, this algorithm is not Lipschitz continuous. This is because, when we compute $\mathsf{DP}[v][1]$, which of $X_{v,0}$ and $X_{v,1}$ has the larger weight is affected by small changes in the weights. To address this issue, we use the exponential mechanism [7]. Specifically, for some constant $c>0$, we select $X_{v,i}$ with probability proportional to $\exp(c \cdot w(X_{v,i}))$. This approach makes the algorithm Lipschitz continuous, with only a slight sacrifice in the approximation ratio. Specifically, we can prove that, by appropriately choosing $c$ for a given $\varepsilon \in (0,1]$, the increase in the Lipschitz constant can be bounded by $\widetilde{O}(\varepsilon^{-1})$ while the approximation guarantee decreases by a factor of $1-\varepsilon$ at each vertex where the exponential mechanism is applied.

While this approach makes the algorithm Lipschitz continuous, the Lipschitz constant is still too large when the height $h$ of the tree is large. Specifically, to achieve an approximation guarantee of $1-\varepsilon$, we need to set $\varepsilon'=\varepsilon/h$, leading to a Lipschitz constant of $h \cdot \widetilde{O}(\varepsilon'^{-1})=\widetilde{O}(\varepsilon^{-1}h^2)$. This is larger than the trivial bound $n$ when $h=\omega(\sqrt{n})$. We resolve this issue by using the fact that any tree has a tree decomposition of width 5 and height $O(\log n)$ [2]. By performing dynamic programming with the exponential mechanism on this tree decomposition, we obtain a $(1-\varepsilon)$-approximation algorithm with Lipschitz constant $\widetilde{O}(\varepsilon^{-1})$. This argument naturally extends to the case where $G$ is a bounded-treewidth graph. By following the proof of Courcelle’s theorem [3, 5], we further extend this argument to the case where $\varphi$ is a general MSO2 formula with a free vertex-set variable.

We prove Theorem 5 by leveraging a lower bound on a related notion of sensitivity for the maximum cut problem [8], where sensitivity measures the change in the output with respect to the Hamming distance under edge deletions [21]. Specifically, we reduce the maximum cut problem, where stability is defined with respect to edge deletions, to the max ones problem, where stability is measured with respect to weight changes.

1.2 Related Work

Lipschitz continuity of discrete algorithms was defined by Kumabe and Yoshida [14], who provided Lipschitz continuous approximation algorithms for the minimum spanning tree, shortest path, and maximum weight matching problems. In subsequent work [16], they further provided such algorithms for the minimum weight vertex cover, minimum weight set cover, and minimum weight feedback vertex set problems. In another work [15], they defined Lipschitz continuity for allocations in cooperative games and provided Lipschitz continuous allocation schemes for the matching game and the minimum spanning tree game.

A variant known as pointwise Lipschitz continuity has also been studied, which is defined using the unweighted Hamming distance instead of the weighted Hamming distance. Kumabe and Yoshida [14] defined pointwise Lipschitz continuity and provided pointwise Lipschitz continuous algorithms for the minimum spanning tree and maximum weight bipartite matching problems. Liu et al. [18] proposed the proximal gradient method as a general technique for solving LP relaxations stably. Using this, they provided pointwise Lipschitz continuous approximation algorithms for the minimum vertex (S,T)-cut, densest subgraph, maximum weight (b-)matching, and packing integer programming problems.

(Average) sensitivity, introduced by Varma and Yoshida [21] with preliminary ideas by Murai and Yoshida [19], is a notion similar to Lipschitz continuity. While Lipschitz continuity evaluates an algorithm’s stability against unit changes in the input weights, (average) sensitivity evaluates an algorithm’s stability against (random) deletions of elements from the input. Algorithms with small (average) sensitivity have been studied for several problems, such as maximum matching [21, 23], minimum cut [21], knapsack problem [13], Euclidean k-means [22], spectral clustering [20], and dynamic programming problems [12]. Recently, Fleming and Yoshida [8] constructed a PCP framework to prove the sensitivity lower bound for the constraint satisfaction problem.

1.3 Organization

The rest of this paper is organized as follows. In Section 2, we describe the necessary preliminaries on tree decomposition (Sections 2.1 and 2.2), logic (Section 2.3), and Lipschitz continuity (Section 2.4). In Section 3, we prove Theorem 1 by providing a Lipschitz continuous algorithm for the MSO2 maximization problem. Since the algorithm and analysis for MSO2 minimization are similar, we defer the proof of Theorem 2 to the full version. In Section 4, we give a more precise Lipschitzness analysis for specific formulas φ, which includes the proof of Theorem 3. Proofs of Theorems 4, 5, 6, and 7 are given in the full version.

2 Preliminaries

2.1 Tree Decomposition

Let $G$ be a graph with $n$ vertices. A pair $(\mathcal{B},\mathcal{T})$ consisting of a family $\mathcal{B}$ of subsets (called bags) of $V(G)$ and a rooted tree $\mathcal{T}$ whose vertex set is $\mathcal{B}$ is a (rooted) tree decomposition of $G$ if it satisfies the following three conditions.

  • $\bigcup_{B \in \mathcal{B}} B = V(G)$.

  • For each edge $e \in E(G)$, there is a bag $B \in \mathcal{B}$ such that $e \subseteq B$.

  • For each vertex $v \in V(G)$, the set of bags $\{B \in \mathcal{B} : v \in B\}$ induces a connected subgraph in $\mathcal{T}$.
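The three conditions are straightforward to check mechanically. The following Python sketch (the data encoding is our own) verifies them for a candidate decomposition, using a BFS restricted to the bags containing each vertex for the third condition:

```python
def is_tree_decomposition(bags, tree_edges, vertices, edges):
    """bags: dict node -> set of graph vertices; tree_edges: edges of the
    decomposition tree over bag nodes (assumed to form a tree);
    vertices/edges: the graph itself."""
    # Condition 1: the bags cover all vertices.
    if set().union(*bags.values()) != set(vertices):
        return False
    # Condition 2: every edge lies inside some bag.
    if not all(any({u, v} <= set(b) for b in bags.values()) for u, v in edges):
        return False
    # Condition 3: for each vertex, the bags containing it are connected in T.
    adj = {t: set() for t in bags}
    for s, t in tree_edges:
        adj[s].add(t)
        adj[t].add(s)
    for v in vertices:
        nodes = {t for t, b in bags.items() if v in b}
        start = next(iter(nodes))
        seen, stack = {start}, [start]
        while stack:
            for nxt in (adj[stack.pop()] & nodes) - seen:
                seen.add(nxt)
                stack.append(nxt)
        if seen != nodes:
            return False
    return True
```

For example, the path on vertices 1, 2, 3 admits the decomposition with bags {1, 2} and {2, 3} joined by a single tree edge.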

For a tree decomposition $(\mathcal{B},\mathcal{T})$, we may refer to the root node, leaf nodes, and the height of $\mathcal{T}$ as those of $(\mathcal{B},\mathcal{T})$, respectively. Moreover, $(\mathcal{B},\mathcal{T})$ is binary if $\mathcal{T}$ is a binary tree. The width of a tree decomposition $(\mathcal{B},\mathcal{T})$ is the maximum size of a bag in $\mathcal{B}$ minus one. The treewidth of $G$ is the minimum possible width among all tree decompositions of $G$. It is known that a tree decomposition of $G$ of width at most $2k+1$ can be computed in $2^{O(k)}n$ time [10], where $k$ is the treewidth of $G$. Moreover, any tree decomposition of $G$ of width $k$ can be transformed into a binary tree decomposition of width at most $3k+2$ and height $O(\log n)$ [2]. Thus, we obtain the following lemma.

Lemma 8 ([2, 10]).

Let $G$ be a graph with $n$ vertices and let $k$ be the treewidth of $G$. Then, a binary tree decomposition of $G$ with width $O(k)$ and height $O(\log n)$ can be computed in $2^{O(k)}n$ time.

2.2 HR-algebra

We introduce the notion of HR-algebra (the acronym HR stands for hyperedge replacement [4]), which is one of the algebraic formulations of tree decomposition. A $k$-graph is a tuple $G=\langle V,E,\mathsf{src}\rangle$ of a set of vertices $V$, a set of edges $E$, and a $k$-vector $\mathsf{src} \in (V \cup \{\mathtt{nil}\})^k$. We write the $i$-th element of a vector $\mathsf{src}$ as $\mathsf{src}(i)$. If $\mathsf{src}(i)=u$, we say that $u$ is an $i$-source. We write the set $\{\mathsf{src}(i) : i \in [k]\} \setminus \{\mathtt{nil}\}$ as $\mathsf{src}(G)$.

Let $G$ and $H$ be $k$-graphs. The parallel composition of $G$ and $H$, denoted by $G /\!/ H$, is the graph generated by the following procedure. First, create the disjoint union of $G$ and $H$, and identify the (non-$\mathtt{nil}$) vertices $\mathsf{src}_G(i)$ and $\mathsf{src}_H(i)$ for each $i \in [k]$. Finally, remove all self-loops and multi-edges. Let $B$ be a non-empty subset of $[k]$. The source forgetting operation $\mathsf{fg}_B$ is the function that maps a $k$-graph $G$ to a $k$-graph $G'$ such that $V_{G'}=V_G$, $E_{G'}=E_G$, and $\mathsf{src}_{G'}(i)=\mathtt{nil}$ if $i \in B$ and $\mathsf{src}_{G'}(i)=\mathsf{src}_G(i)$ otherwise.
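Both operations are easy to state on a concrete representation. The following Python sketch uses an encoding of our own choosing: sources as a tuple with None playing the role of $\mathtt{nil}$, and edges as frozensets, so multi-edges and self-loops disappear automatically:

```python
def parallel_composition(G, H):
    """G // H for k-graphs (V, E, src), assuming V(G) and V(H) are disjoint.
    Identifies the i-sources of H with those of G where both are non-nil."""
    VG, EG, sG = G
    VH, EH, sH = H
    rename = {sH[i]: sG[i] for i in range(len(sG))
              if sG[i] is not None and sH[i] is not None}
    f = lambda v: rename.get(v, v)
    V = VG | {f(v) for v in VH}
    E = {e for e in EG | {frozenset(map(f, e)) for e in EH}
         if len(e) == 2}                       # drop self-loops
    src = tuple(sG[i] if sG[i] is not None else f(sH[i])
                for i in range(len(sG)))
    return V, E, src

def forget(G, B):
    """fg_B: set src(i) to nil (None) for every i in B (1-based indices)."""
    V, E, src = G
    return V, E, tuple(None if i + 1 in B else s for i, s in enumerate(src))
```

Composing two single-edge 2-graphs with matching source patterns yields a single edge, since the duplicate edge is removed.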

Definition 9 (HR-algebra).

A term of an HR-algebra over $k$-graphs is either

  • a constant symbol $\mathtt{i}$, $\mathtt{ij}$, or $\varnothing$, denoting a $k$-graph $\langle\{v\},\emptyset,\mathsf{src}\rangle$ with $\mathsf{src}(i)=v$, a $k$-graph $\langle\{u,v\},\{\{u,v\}\},\mathsf{src}\rangle$ with $\mathsf{src}(i)=u$ and $\mathsf{src}(j)=v$, or the empty graph, respectively;

  • $t /\!/ s$ for any terms $t$ and $s$;

  • $\mathsf{fg}_B(t)$ for any term $t$ and any non-empty $B \subseteq [k]$.

We may associate a term with the graph it represents. A term $t$ of an HR-algebra can be decomposed into a parse tree in the usual way, called the (rooted) HR-parse tree of $t$. The height of an HR-parse tree is the maximum distance from the root to a leaf. It is known that tree decompositions are equivalent to HR-parse trees in the following sense.

Proposition 10 (see e.g. [4]).

The treewidth of a graph is at most $k$ if and only if the graph can be denoted by a term of an HR-algebra over $(k+1)$-graphs.

Moreover, a binary tree decomposition can be transformed into an HR-parse tree of approximately the same height.

Proposition 11 ().

Given a rooted binary tree decomposition $(\mathcal{B},\mathcal{T})$ of $G$ with width $k$ and height $h$, we can compute an HR-parse tree over $(k+1)$-graphs denoting $G$ with height $O(\log k + h)$.

The proof of Proposition 11 is essentially the same as that of Proposition 10, but we give it in the full version.

2.3 Monadic Second-Order Logic

We provide a slightly less formal definition of monadic second-order (MSO) logic over $k$-graphs for accessibility (see, e.g., [17] for a more formal definition). An atomic formula is a formula of the form $s=t$, $\mathsf{adj}(s,t)$, or $t \in X$, or a Boolean constant $\mathtt{True}$ or $\mathtt{False}$. Here, $\mathsf{adj}(s,t)$ is the predicate that represents whether $s$ and $t$ are adjacent. A monadic second-order formula over $k$-graphs is a formula built from atomic formulas, the usual Boolean connectives $\land,\lor,\lnot,\to$, first-order quantifications $\exists x$ and $\forall x$ (quantifications over vertices), and second-order quantifications $\exists X$ and $\forall X$ (quantifications over sets of vertices). The notation $\varphi(X)$ indicates that $\varphi$ has exactly one free variable $X$. The quantifier height (or quantifier rank) of a formula $\varphi$, denoted by $|\varphi|$, is the maximum depth of quantifier nesting in $\varphi$ (see, e.g., [17] for a formal definition). The counting monadic second-order (CMSO) logic is an extension of MSO logic that has additional predicates $\mathsf{Card}_{m,r}(X)$ for a second-order variable $X$, meaning $|X| \equiv m \pmod{r}$. A $\mathrm{C}_r\mathrm{MSO}_q$-formula is a CMSO formula $\varphi$ such that the quantifier height of $\varphi$ is at most $q$ and $r' \le r$ holds for any predicate $\mathsf{Card}_{m,r'}$ in $\varphi$. For a logical formula $\varphi$ and a graph $G$, we write $G \models \varphi$ when the graph $G$ satisfies the property expressed by $\varphi$.

There are two variants of MSO logic: MSO1 (or simply MSO), described above, allows quantification over vertices and sets of vertices only, while MSO2 also allows quantification over edges and sets of edges. It is well known that, for any graph $G$ and MSO2-formula $\varphi$, there exists an MSO1 formula $\varphi'$ such that $G \models \varphi \Leftrightarrow G' \models \varphi'$, where $G'$ is the incidence graph, obtained by subdividing each edge of $G$ (with two colors distinguishing the original and the subdivision vertices) (see, e.g., [11]). It is easy to see that the treewidth of the incidence graph is at most the treewidth of the original graph. Thus, since this paper focuses on graphs of bounded treewidth, we mainly consider MSO1. However, all the theorems hold for MSO2 and graphs of bounded treewidth.

We introduce some more notation. For a set $V$, two families $\mathcal{A} \subseteq 2^V$ and $\mathcal{B} \subseteq 2^V$ are separated if $A \cap B = \emptyset$ for all $A \in \mathcal{A}$ and $B \in \mathcal{B}$. We write $A \cup B$ as $A \sqcup B$ if $A \cap B = \emptyset$, and $\mathcal{A} \sqcup \mathcal{B} := \{A \sqcup B : A \in \mathcal{A},\, B \in \mathcal{B}\}$ if $\mathcal{A}$ and $\mathcal{B}$ are separated. For a formula $\varphi(X)$ and a graph $G$, the set of vertex sets satisfying $\varphi$ is denoted by $\mathsf{sat}(G,\varphi)$, that is, $\mathsf{sat}(G,\varphi)=\{A : G \models \varphi(A)\}$.

The following theorem is key to the MSO model-checking algorithm [3, 4].

Theorem 12 ([4]).

Let $r,q,k$ be positive integers, and let $\varphi(X)$ be a $\mathrm{C}_r\mathrm{MSO}_q$-formula over $k$-graphs with a second-order free variable $X$. Then, the following hold.

  1. For any $B \subseteq [k]$, there exists a $\mathrm{C}_r\mathrm{MSO}_q$-formula $\psi(X)$ such that for any $k$-graph $G$, we have
\[\mathsf{sat}(\mathsf{fg}_B(G),\varphi)=\mathsf{sat}(G,\psi).\]
  2. There exists a family of pairs $\{(\theta_i(X),\psi_i(X))\}_{i \in [p]}$ of $\mathrm{C}_r\mathrm{MSO}_q$-formulas with a free variable $X$ such that, for any $k$-graphs $G$ and $H$,
\[\mathsf{sat}(G /\!/ H,\varphi)=\bigcup_{i \in [p]}\{S \sqcup P : S \in \mathsf{sat}(G,\theta_i),\, P \in \mathsf{sat}(H,\psi_i)\}.\]

It is known that, for any $r,q,k$, up to logical equivalence, there are only finitely many different $\mathrm{C}_r\mathrm{MSO}_q$-formulas $\varphi(X)$ over $k$-graphs (see, e.g., [17]). Thus, we can obtain a linear-time algorithm, based on dynamic programming over trees, for $\mathrm{C}_r\mathrm{MSO}_q$ model checking over graphs of bounded treewidth. Courcelle and Mosbah [5] adapted Theorem 12 to address optimization, counting, and other problems. We use a slightly modified version of the theorem of Courcelle and Mosbah.

Let $\mathsf{nosrc}(X) \equiv \bigwedge_{i \in [k]} (\mathsf{src}(i) \notin X)$, and let $\varphi^S(X)$ denote $\varphi(X \cup S) \land \mathsf{nosrc}(X)$ for a set $S \subseteq \mathsf{src}(G)$.

Observation 13.

Let $G$ be a $k$-graph, and let $\varphi(X)$ be a formula with a free set variable $X$. Then, we have $\mathsf{sat}(G,\varphi)=\bigcup_{S \subseteq \mathsf{src}(G)} \mathsf{sat}(G,\varphi^S) \sqcup \{S\}$.

The proof of Corollary 14 is given in the full version.

Corollary 14 ().

Let $r,q,k$ be positive integers, and let $\varphi(X)$ be a $\mathrm{C}_r\mathrm{MSO}_q$-formula over $k$-graphs with a second-order free variable $X$. Then, for all $S \subseteq \mathsf{src}(G)$, the following hold.

  1. For any $B \subseteq [k]$, there exists a $\mathrm{C}_r\mathrm{MSO}_q$-formula $\psi(X)$ such that for any $k$-graph $G$, we have
\[\mathsf{sat}(\mathsf{fg}_B(G),\varphi^S)=\bigcup_{S' \subseteq \{\mathsf{src}(i)\,:\,i \in B\}} \mathsf{sat}(G,\psi^{S \sqcup S'}) \sqcup \{S'\}.\]
  2. There exists a family of pairs $\{(\theta_i^{S_i}(X),\psi_i^{S'_i}(X))\}_{i \in [p]}$ of $\mathrm{C}_r\mathrm{MSO}_q$-formulas with a free variable $X$ such that, for any $k$-graphs $G$ and $H$,
\[\mathsf{sat}(G /\!/ H,\varphi^S)=\bigcup_{i \in [p]} \mathsf{sat}(G,\theta_i^{S_i}) \sqcup \mathsf{sat}(H,\psi_i^{S'_i}).\]
Moreover, $S_i \cup S'_i = S$ for all $i \in [p]$.

Then, we can design an algorithm for a (C)MSO maximization (minimization) problem over graphs of bounded treewidth. For simplicity, we assume that the given graph $G$ has no sources, that is, $\mathsf{src}(G)=\emptyset$. Let $t$ be a term denoting $G_t$ and let $\varphi(X)$ be the (C)MSO formula describing the constraint of the problem. We recursively compute the value $\mathsf{opt}(t',\varphi^S)=\max\{w(A) : G_{t'} \models \varphi^S(A)\}$ for every subterm $t'$ of $t$ and MSO formula $\varphi^S$ as follows, where $w \in \mathbb{R}_{\ge 0}^V$ is the given weight vector and $w(X)=\sum_{x \in X} w_x$ for any $X \subseteq V(G)$. If $t'$ is of the form $\mathsf{fg}_B(t'')$, then $\mathsf{opt}(\mathsf{fg}_B(t''),\varphi^S)=\max_{S'}\{\mathsf{opt}(t'',\psi^{S \sqcup S'})+w(S')\}$, where $S'$ ranges over subsets of the forgotten sources and $\psi$ is the formula obtained from Corollary 14. If $t'$ is of the form $t_1 /\!/ t_2$, then $\mathsf{opt}(t',\varphi^S)=\max_{i \in [p]}\{\mathsf{opt}(t_1,\theta_i^{S_i})+\mathsf{opt}(t_2,\psi_i^{S'_i})\}$, where $\{(\theta_i^{S_i}(X),\psi_i^{S'_i}(X))\}_{i \in [p]}$ are the formulas obtained from Corollary 14. Then, $\mathsf{opt}(t,\varphi)$ is the maximum value for the problem since $t$ has no sources. In the following, for simplicity, we may omit the superscript $S$ and consider only formulas $\varphi$ such that no set in $\mathsf{sat}(G,\varphi)$ contains a source.

In the above setting, the graphs are considered without any special vertex sets (called colors) or special vertices (called labels). However, for $k$-graphs with a constant number of colors and/or labels, an appropriate HR-algebra and logic can be defined, and results similar to Corollary 14 hold (see, e.g., [4, 17]).

2.4 Lipschitz Continuity

We formally define Lipschitz continuity of algorithms. As this is sufficient for this work, we only consider algorithms for vertex-weighted graph problems. The definition can be naturally extended to other settings, such as edge-weighted problems. Let $G=(V,E)$ be a graph. For vertex sets $X,X' \subseteq V$ and weight vectors $w,w' \in \mathbb{R}_{\ge 0}^V$, we define the weighted Hamming distance between $(X,w)$ and $(X',w')$ by
\[d_{\mathrm{w}}((X,w),(X',w')) := \Bigl\lVert \sum_{v \in X} \mathbf{1}_v w_v - \sum_{v \in X'} \mathbf{1}_v w'_v \Bigr\rVert_1 = \sum_{v \in X \cap X'} |w_v - w'_v| + \sum_{v \in X \setminus X'} w_v + \sum_{v \in X' \setminus X} w'_v,\]

where $\mathbf{1}_v \in \{0,1\}^V$ denotes the characteristic vector of $v$, that is, $\mathbf{1}_{v,u}=1$ holds if and only if $u=v$. For two probability distributions $\mathcal{X}$ and $\mathcal{X}'$ over subsets of $V$, we define
\[\mathsf{EM}((\mathcal{X},w),(\mathcal{X}',w')) := \inf_{\mathcal{D}} \mathop{\mathbb{E}}_{(X,X') \sim \mathcal{D}}\bigl[d_{\mathrm{w}}((X,w),(X',w'))\bigr],\]
where the infimum is taken over couplings of $\mathcal{X}$ and $\mathcal{X}'$, that is, distributions over pairs of sets whose marginal distributions on the first and second coordinates are equal to $\mathcal{X}$ and $\mathcal{X}'$, respectively. Consider an algorithm $\mathcal{A}$ that takes a graph $G=(V,E)$ and a weight vector $w \in \mathbb{R}_{\ge 0}^V$ as input and outputs a vertex subset $X \subseteq V$. We denote the output distribution of $\mathcal{A}$ for weight $w$ by $\mathcal{A}(w)$. The Lipschitz constant of $\mathcal{A}$ is defined by
\[\sup_{w,w' \in \mathbb{R}_{\ge 0}^V,\, w \neq w'} \frac{\mathsf{EM}((\mathcal{A}(w),w),(\mathcal{A}(w'),w'))}{\lVert w-w' \rVert_1}.\]
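The weighted Hamming distance itself is immediate to compute; here is a direct Python transcription (the function name is ours):

```python
def weighted_hamming(X, w, Xp, wp):
    """d_w((X, w), (X', w')) for sets X, X' and weight dicts w, w' over the
    same vertex set: vertices in both sets contribute the weight difference,
    vertices in only one set contribute their full weight."""
    return (sum(abs(w[v] - wp[v]) for v in X & Xp)
            + sum(w[v] for v in X - Xp)
            + sum(wp[v] for v in Xp - X))
```

For instance, with $X=\{1,2\}$, $X'=\{2,3\}$, and weights differing only on vertex 2 by 0.5, the distance is $0.5 + w_1 + w'_3$.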

For two random variables $\bm{X}$ and $\bm{X}'$, the total variation distance between them is given by
\[\mathsf{TV}(\bm{X},\bm{X}') := \inf_{\mathcal{D}} \Pr_{(X,X') \sim \mathcal{D}}[X \neq X'],\]
where the infimum is taken over couplings between $\bm{X}$ and $\bm{X}'$, that is, distributions over pairs whose marginal distributions on the first and second coordinates are equal to $\bm{X}$ and $\bm{X}'$, respectively. For an element $u \in V$, we use $\mathbf{1}_u \in \mathbb{R}_{\ge 0}^V$ to denote the characteristic vector of $u$, that is, $\mathbf{1}_u(u)=1$ and $\mathbf{1}_u(v)=0$ for $v \in V \setminus \{u\}$. The following lemma indicates that, to bound the Lipschitz constant, it suffices to consider pairs of weight vectors that differ in one coordinate.

Lemma 15 ([14]).

Suppose that there exist some $c>0$ and $L>0$ such that
\[\mathsf{EM}((\mathcal{A}(G,w),w),(\mathcal{A}(G,w+\delta\mathbf{1}_u),w+\delta\mathbf{1}_u)) \le \delta L\]
holds for all $w \in \mathbb{R}_{\ge 0}^V$, $u \in V$, and $0<\delta\le c$. Then, $\mathcal{A}$ is $L$-Lipschitz.

In our algorithm, we use the following procedures softmax and softmin. These procedures are derived from the exponential mechanism in the differential privacy literature [7] and frequently appear in the literature on Lipschitz continuity [14, 16]. Here, we organize them into a form more convenient for our use. Let $p \ge 1$, $x_1,\dots,x_p \ge 0$, and $\epsilon \in (0,1]$. The softmax of $x_1,\dots,x_p$ is taken as follows. If $\max_{i \in [p]} x_i = 0$, we set $\mathsf{softargmax}_{i \in [p]}^{\epsilon}\, x_i$ to be an arbitrary index $i$ with $x_i=0$ and $\mathsf{softmax}_{i \in [p]}^{\epsilon}\, x_i = 0$. Assume otherwise. First, we sample $\bm{c}$ uniformly from $\Bigl[\frac{2\epsilon^{-1}\log(2p\epsilon^{-1})}{\max_{i \in [p]} x_i}, \frac{4\epsilon^{-1}\log(2p\epsilon^{-1})}{\max_{i \in [p]} x_i}\Bigr]$. Let $\bm{i}$ be a probability distribution over $[p]$ such that
\[\Pr[\bm{i}=i]=\frac{\exp(\bm{c}x_i)}{\sum_{i' \in [p]}\exp(\bm{c}x_{i'})}\]
holds for all $i \in [p]$. Then, we define $\mathsf{softargmax}_{i \in [p]}^{\epsilon}\, x_i := \bm{i}$ and $\mathsf{softmax}_{i \in [p]}^{\epsilon}\, x_i := x_{\bm{i}}$. We have the following. The proofs of Lemmas 16 and 17 are given in the full version.

Lemma 16.

We have $\mathbb{E}[\mathsf{softmax}_{i \in [p]}^{\epsilon}\, x_i] \ge (1-\epsilon)\max_{i \in [p]} x_i$.

Lemma 17.

Let $\delta>0$ and assume $\delta \le \frac{\max_{i \in [p]} x_i}{4\epsilon^{-1}\log(2p\epsilon^{-1})}$. Let $x'_1,\dots,x'_p \ge 0$ be numbers such that $x_i \le x'_i \le x_i+\delta$ holds for all $i \in [p]$. Then, $\mathsf{TV}(\mathsf{softargmax}_{i \in [p]}^{\epsilon}\, x_i, \mathsf{softargmax}_{i \in [p]}^{\epsilon}\, x'_i) \le 10\epsilon^{-1}\log(2p\epsilon^{-1})\frac{\delta}{\max_{i \in [p]} x_i}$.
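The sampling procedure can be sketched in code as follows. This is our reading of the definition above (with the standard max-subtraction trick for numerical stability); only the distributional guarantees of Lemmas 16 and 17 matter for the analysis:

```python
import math
import random

def soft_argmax(xs, eps):
    """Sample softargmax^eps over xs (all >= 0): draw c uniformly from
    [2t/M, 4t/M] with t = eps^{-1} * log(2p/eps) and M = max(xs), then
    return index i with probability proportional to exp(c * x_i)."""
    p, M = len(xs), max(xs)
    if M == 0:
        return xs.index(0)
    t = (1 / eps) * math.log(2 * p / eps)
    c = random.uniform(2 * t / M, 4 * t / M)
    weights = [math.exp(c * (x - M)) for x in xs]   # rescaled for stability
    r = random.uniform(0, sum(weights))
    for i, wt in enumerate(weights):
        r -= wt
        if r <= 0:
            return i
    return p - 1

def soft_max(xs, eps):
    return xs[soft_argmax(xs, eps)]
```

With xs = [1.0, 10.0] and eps = 0.5, the sampled c is at least 2·2·log(8)/10 ≈ 0.83, so the larger value is selected with overwhelming probability, in line with Lemma 16.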

Similarly, the softmin of $x_1,\dots,x_p$ is taken as follows. If $\min_{i \in [p]} x_i = 0$, we set $\mathsf{softargmin}_{i \in [p]}^{\epsilon}\, x_i$ to be an arbitrary index $i$ with $x_i=0$ and $\mathsf{softmin}_{i \in [p]}^{\epsilon}\, x_i = 0$. Assume otherwise. First, we sample $\bm{c}$ uniformly from $\Bigl[\frac{2\epsilon^{-1}\log(2p\epsilon^{-1})}{\min_{i \in [p]} x_i}, \frac{4\epsilon^{-1}\log(2p\epsilon^{-1})}{\min_{i \in [p]} x_i}\Bigr]$. Let $\bm{i}$ be a probability distribution over $[p]$ such that
\[\Pr[\bm{i}=i]=\frac{\exp(-\bm{c}x_i)}{\sum_{i' \in [p]}\exp(-\bm{c}x_{i'})}\]
holds for all $i \in [p]$. Then, we define $\mathsf{softargmin}_{i \in [p]}^{\epsilon}\, x_i := \bm{i}$ and $\mathsf{softmin}_{i \in [p]}^{\epsilon}\, x_i := x_{\bm{i}}$. We have the following. The proofs of Lemmas 18 and 19 are given in the full version.

Lemma 18.

We have $\mathbb{E}[\mathsf{softmin}_{i \in [p]}^{\epsilon}\, x_i] \le (1+\epsilon)\min_{i \in [p]} x_i$.

Lemma 19.

Let $\delta>0$ and assume $\delta \le \frac{\min_{i \in [p]} x_i}{4\epsilon^{-1}\log(2p\epsilon^{-1})}$. Let $x'_1,\dots,x'_p \ge 0$ be numbers such that $x_i \le x'_i \le x_i+\delta$ holds for all $i \in [p]$. Then, $\mathsf{TV}(\mathsf{softargmin}_{i \in [p]}^{\epsilon}\, x_i, \mathsf{softargmin}_{i \in [p]}^{\epsilon}\, x'_i) \le 10\epsilon^{-1}\log(2p\epsilon^{-1})\frac{\delta}{\min_{i \in [p]} x_i}$.

3 Lipschitz Continuous Algorithms for MSO2 Optimization Problems

In this section, we prove Theorems 1, 2, and 3. Since the proof of Theorem 2 is similar to that of Theorem 1, we provide most of it in the full version. In Section 3.1, we define the notation and give a brief overview of the algorithm. Section 3.2 handles the base case, where the graph consists of a single vertex. Sections 3.3 and 3.4 analyze the impact of the parallel composition and forget operations, respectively, on approximability and the Lipschitz constant. In Section 3.5, we put everything together to prove Theorem 1. Finally, in Section 4.1, we provide a more refined analysis for the special case of the maximum weight independent set problem to prove Theorem 3.

3.1 Definitions

For a graph G, a weight vector w, and an MSO2 formula φ, we define

\[\mathsf{opt}_w[G,\varphi]=\max_{S \subseteq V(G),\, G \models \varphi(S)} w(S) \tag{1}\]

for maximization problems and

\[\mathsf{opt}_w[G,\varphi]=\min_{S \subseteq V(G),\, G \models \varphi(S)} w(S) \tag{2}\]

for minimization problems. If $\mathsf{sat}(G,\varphi)=\emptyset$, we define $\mathsf{opt}_w[G,\varphi]=\mathtt{nil}$. For graphs $G$ and MSO2 formulas $\varphi$ with $\mathsf{sat}(G,\varphi) \neq \emptyset$, our algorithm recursively computes a vertex set $\mathsf{DP}_w[G,\varphi]$ that approximately achieves the maximum in Equation (1) or the minimum in Equation (2). When it is clear from the context, we omit the subscript $w$ and simply write $\mathsf{opt}[G,\varphi]$ and $\mathsf{DP}[G,\varphi]$. We perform the dynamic programming using the formulas defined in Corollary 14 over the parse tree of a term obtained from Lemma 8 and Proposition 11. Specifically, at every stage of the algorithm, it is guaranteed that each $X \in \mathsf{sat}(G,\varphi)$ satisfies $X \subseteq V(G) \setminus \mathsf{src}(G)$. Furthermore, our algorithm is randomized. Thus, $\mathsf{DP}[G,\varphi]$ can be considered as a probability distribution over vertex sets in $\mathsf{sat}(G,\varphi)$.

To bound the approximation ratio, for a weight vector $w \in \mathbb{R}_{\ge 0}^V$, we bound the ratio between $\mathbb{E}[w(\mathsf{DP}[G,\varphi])]$ and $\mathsf{opt}[G,\varphi]$. To bound the Lipschitz constant, for a weight vector $w \in \mathbb{R}_{\ge 0}^V$, $u \in V$, and $\delta>0$, we bound $\mathsf{EM}(\mathsf{DP}_w[G,\varphi],\mathsf{DP}_{w+\delta\mathbf{1}_u}[G,\varphi])$. From now on, we concentrate on maximization problems. The algorithm for minimization problems is similar, while the details of the analysis differ slightly. We discuss the minimization version in the full version.

3.2 Base Case

Here we consider the base case. Let $G$ be a graph with a single vertex $v$ and no edges, where $v$ is a source of $G$, and let $\varphi$ be an MSO2 formula such that $\mathsf{sat}(G,\varphi)$ is nonempty and contains no set containing $v$. In particular, we have $\mathsf{sat}(G,\varphi)=\{\emptyset\}$. We set $\mathsf{DP}[G,\varphi]:=\emptyset$. Since $w(\emptyset)=0$, it is clear that $w(\mathsf{DP}[G,\varphi])=\mathsf{opt}[G,\varphi]=0$. Furthermore, it is obvious that

\[\mathsf{EM}(\mathsf{DP}_w[G,\varphi],\mathsf{DP}_{w+\delta\mathbf{1}_u}[G,\varphi])=\mathsf{EM}(\emptyset,\emptyset)=0.\]

3.3 Parallel Composition

Next, we consider the parallel composition. Let

\[\mathsf{sat}(G /\!/ H,\varphi)=\bigcup_{i \in [p]} \mathsf{sat}(G,\theta_i) \sqcup \mathsf{sat}(H,\psi_i). \tag{3}\]

Assume we have already computed $\mathsf{DP}[G,\theta_i]$ and $\mathsf{DP}[H,\psi_i]$ for each $i \in [p]$. We compute $\mathsf{DP}[G /\!/ H,\varphi]$. For each $i \in [p]$, we denote $\mathsf{opt}_i := \mathsf{opt}[G,\theta_i]+\mathsf{opt}[H,\psi_i]$. By definition, we have $\max_{i \in [p]} \mathsf{opt}_i = \mathsf{opt}[G /\!/ H,\varphi]$. Then, we take $\bm{i}=\mathsf{softargmax}_{i \in [p]}^{\varepsilon}\,\mathsf{opt}_i$ and define $\mathsf{DP}[G /\!/ H,\varphi] := \mathsf{DP}[G,\theta_{\bm{i}}] \sqcup \mathsf{DP}[H,\psi_{\bm{i}}]$. First, we analyze the approximation ratio.
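Concretely, the composition step can be sketched as follows (our own illustrative encoding: each DP table is a list of (vertex set, optimal value) pairs indexed by $i$, with the softargmax sampler of Section 2.4 inlined):

```python
import math
import random

def soft_argmax(xs, eps):
    """Exponential-mechanism index sampler, as in Section 2.4."""
    M = max(xs)
    if M == 0:
        return xs.index(0)
    t = (1 / eps) * math.log(2 * len(xs) / eps)
    c = random.uniform(2 * t / M, 4 * t / M)
    weights = [math.exp(c * (x - M)) for x in xs]
    r = random.uniform(0, sum(weights))
    for i, wt in enumerate(weights):
        r -= wt
        if r <= 0:
            return i
    return len(xs) - 1

def compose(dp_G, dp_H, eps):
    """DP[G // H, phi]: sample i by softargmax over opt_i = opt[G, theta_i]
    + opt[H, psi_i], then return DP[G, theta_i] ∪ DP[H, psi_i]."""
    opts = [g_val + h_val for (_, g_val), (_, h_val) in zip(dp_G, dp_H)]
    i = soft_argmax(opts, eps)
    return dp_G[i][0] | dp_H[i][0]
```

The randomized index choice, rather than a hard argmax, is exactly what makes the combined output stable under small weight perturbations.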

Lemma 20.

Let $0<\alpha\le 1$ and suppose that $\mathbb{E}[w(\mathsf{DP}[G,\theta_i])] \ge \alpha\,\mathsf{opt}[G,\theta_i]$ and $\mathbb{E}[w(\mathsf{DP}[H,\psi_i])] \ge \alpha\,\mathsf{opt}[H,\psi_i]$ hold for all $i \in [p]$. Then, we have $\mathbb{E}[w(\mathsf{DP}[G /\!/ H,\varphi])] \ge (1-\varepsilon)\alpha\,\mathsf{opt}[G /\!/ H,\varphi]$.

Proof.

We have
\begin{align*}
\mathbb{E}[w(\mathsf{DP}[G /\!/ H,\varphi])] &= \mathbb{E}[w(\mathsf{DP}[G,\theta_{\bm{i}}] \sqcup \mathsf{DP}[H,\psi_{\bm{i}}])] \ge \alpha\,\mathbb{E}\bigl[\mathsf{opt}[G,\theta_{\bm{i}}]+\mathsf{opt}[H,\psi_{\bm{i}}]\bigr] \\
&= \alpha\,\mathbb{E}[\mathsf{softmax}_{i \in [p]}^{\varepsilon}\,\mathsf{opt}_i] \ge (1-\varepsilon)\alpha \max_{i \in [p]} \mathsf{opt}_i = (1-\varepsilon)\alpha\,\mathsf{opt}[G /\!/ H,\varphi].
\end{align*}

Now we analyze the Lipschitz constant. We denote the random index 𝒊 corresponding to the weights w and w+δ𝟏u by 𝒊w and 𝒊w+δ𝟏u, respectively. We note that 𝖣𝖯w[G,θi] and 𝖣𝖯w+δ𝟏u[G,θi] are the same as distributions unless u∈V(G)∖𝗌𝗋𝖼(G). The same also holds for H. Since V(G)∖𝗌𝗋𝖼(G) and V(H)∖𝗌𝗋𝖼(H) are disjoint, without loss of generality, we can assume that 𝖣𝖯w[H,ψi] and 𝖣𝖯w+δ𝟏u[H,ψi] are the same as distributions. We have the following.

Lemma 21.

Let β≥0 and suppose that 𝖤𝖬(𝖣𝖯w[G,θi],𝖣𝖯w+δ𝟏u[G,θi]) ≤ β holds for all i∈[p]. Then, we have 𝖤𝖬(𝖣𝖯w[G//H,φ],𝖣𝖯w+δ𝟏u[G//H,φ]) ≤ 30ε⁻¹log(2pε⁻¹)δ + β.

Proof.

We have
𝖤𝖬(𝖣𝖯w[G//H,φ], 𝖣𝖯w+δ𝟏u[G//H,φ])
≤ 𝖳𝖵(𝒊w, 𝒊w+δ𝟏u)·(𝗈𝗉𝗍w[G//H,φ] + 𝗈𝗉𝗍w+δ𝟏u[G//H,φ]) + maxi∈[p] 𝖤𝖬(𝖣𝖯w[G,θi], 𝖣𝖯w+δ𝟏u[G,θi])
≤ (10ε⁻¹log(2pε⁻¹)δ / 𝗈𝗉𝗍w[G//H,φ])·(𝗈𝗉𝗍w[G//H,φ] + 𝗈𝗉𝗍w+δ𝟏u[G//H,φ]) + β
≤ 30ε⁻¹log(2pε⁻¹)δ + β,

where the last inequality follows from ε≤1 and δ≤𝗈𝗉𝗍w[G//H,φ].

3.4 Forget

Here we consider the forget operation. Let B⊆𝗌𝗋𝖼(G) and

𝗌𝖺𝗍(𝖿𝗀B(G),φ) = ⋃i∈[p] { X ∪ Si : X ∈ 𝗌𝖺𝗍(G,φi) }. (4)

Assume we have already computed 𝖣𝖯[G,φi] for each i∈[p]. We compute 𝖣𝖯[𝖿𝗀B(G),φ].

We first sample 𝒄>0 uniformly from the interval [2log(2pε⁻¹)/(ε·𝗈𝗉𝗍[G,φ,S]), 4log(2pε⁻¹)/(ε·𝗈𝗉𝗍[G,φ,S])]. For each i∈[p], let 𝗈𝗉𝗍i := 𝗈𝗉𝗍[G,φi] + w(Si). By definition, we have maxi∈[p] 𝗈𝗉𝗍i = 𝗈𝗉𝗍[𝖿𝗀B(G),φ]. Then, we take 𝒊 = 𝗌𝗈𝖿𝗍𝖺𝗋𝗀𝗆𝖺𝗑εi∈[p] 𝗈𝗉𝗍i and define 𝖣𝖯[𝖿𝗀B(G),φ] := 𝖣𝖯[G,φ𝒊] ∪ S𝒊. First we analyze the approximation ratio.

Lemma 22.

Let 0<α≤1 and suppose that 𝔼[w(𝖣𝖯[G,φi])] ≥ α·𝗈𝗉𝗍[G,φi] holds for all i∈[p]. Then, we have 𝔼[w(𝖣𝖯[𝖿𝗀B(G),φ])] ≥ (1−ε)α·𝗈𝗉𝗍[𝖿𝗀B(G),φ].

Proof.

We have
𝔼[w(𝖣𝖯[𝖿𝗀B(G),φ])] = 𝔼[w(𝖣𝖯[G,φ𝒊] ∪ S𝒊)] ≥ 𝔼[α·𝗈𝗉𝗍[G,φ𝒊] + w(S𝒊)] ≥ α·𝔼[𝗈𝗉𝗍[G,φ𝒊] + w(S𝒊)]
= α·𝔼[𝗌𝗈𝖿𝗍𝗆𝖺𝗑εi∈[p] 𝗈𝗉𝗍i] ≥ (1−ε)α·maxi∈[p] 𝗈𝗉𝗍i = (1−ε)α·𝗈𝗉𝗍[𝖿𝗀B(G),φ].

Now we analyze the Lipschitz constant.

Lemma 23.

Let β≥0 and suppose that 𝖤𝖬(𝖣𝖯w[G,φi],𝖣𝖯w+δ𝟏u[G,φi]) ≤ β holds for all i∈[p]. Then, we have 𝖤𝖬(𝖣𝖯w[𝖿𝗀B(G),φ],𝖣𝖯w+δ𝟏u[𝖿𝗀B(G),φ]) ≤ 31ε⁻¹log(2pε⁻¹)δ + β.

Proof.

We have

𝖤𝖬(𝖣𝖯w[𝖿𝗀B(G),φ], 𝖣𝖯w+δ𝟏u[𝖿𝗀B(G),φ])
≤ 𝖳𝖵(𝒊w, 𝒊w+δ𝟏u)·(𝗈𝗉𝗍w[𝖿𝗀B(G),φ] + 𝗈𝗉𝗍w+δ𝟏u[𝖿𝗀B(G),φ])
+ maxi∈[p] 𝖤𝖬(𝖣𝖯w[G,φi] ∪ Si, 𝖣𝖯w+δ𝟏u[G,φi] ∪ Si)
≤ (10ε⁻¹log(2pε⁻¹)δ / 𝗈𝗉𝗍w[𝖿𝗀B(G),φ])·(𝗈𝗉𝗍w[𝖿𝗀B(G),φ] + 𝗈𝗉𝗍w+δ𝟏u[𝖿𝗀B(G),φ])
+ maxi∈[p] 𝖤𝖬(𝖣𝖯w[G,φi], 𝖣𝖯w+δ𝟏u[G,φi]) + maxi∈[p] dw((Si,w),(Si,w+δ𝟏u))
≤ 30ε⁻¹log(2pε⁻¹)δ + β + δ ≤ 31ε⁻¹log(2pε⁻¹)δ + β.

3.5 Putting Together

Let G=(V,E) be a graph of treewidth k. From Lemma 8 and Proposition 11, we can compute an HR-parse tree t over (k+1)-graphs denoting G with height O(log k + log n) = O(log n). We perform the dynamic programming algorithm on this parse tree. We have the following.

Lemma 24.

Let ε∈(0,1] and let h be the height of t. Our algorithm outputs a solution X∈𝗌𝖺𝗍(G,φ) such that 𝔼[w(X)] ≥ (1−hε)·𝗈𝗉𝗍[G,φ]. The Lipschitz constant is bounded by 31hε⁻¹log(2pmaxε⁻¹), where pmax is the maximum of p among all update formulas (3) or (4) that the algorithm uses.

Proof.

The approximability bound is obtained by repeatedly applying Lemmas 20 and 22 together with the inequality (1−ε)^h ≥ 1−hε. The Lipschitzness bound is obtained by repeatedly applying Lemmas 21 and 23.
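The inequality (1−ε)^h ≥ 1−hε used here is Bernoulli's inequality; a quick numerical sanity check over a grid of values:

```python
# Bernoulli's inequality: (1 - eps)**h >= 1 - h*eps for eps in [0, 1], integer h >= 1
for eps in [0.0, 0.01, 0.1, 0.5, 1.0]:
    for h in [1, 2, 5, 10, 100]:
        assert (1 - eps) ** h >= 1 - h * eps
```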

Since pmax is bounded by a function of k and |φ|, we have the following.

Proof of Theorem 1.

The claim follows by substituting ε, h, and k in Lemma 24 with ε/Θ(log n), O(log n), and 3k+2, respectively.
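To see why this substitution yields a polylogarithmically bounded Lipschitz constant, one can evaluate the bound of Lemma 24 numerically; the sketch below takes every constant hidden in Θ(·) and O(·) to be 1, purely for illustration, and the function name is ours.

```python
import math

def lipschitz_bound(n, eps, p_max):
    """Bound from Lemma 24, 31*h*(1/eps')*log(2*p_max/eps'), after
    substituting eps' = eps / log n and h = log n; all constants hidden
    in Theta(log n) and O(log n) are taken to be 1 for illustration."""
    h = math.log2(n)
    eps_step = eps / h
    return 31 * (h / eps_step) * math.log(2 * p_max / eps_step)
```

For fixed ε and p_max the bound grows like (log n)²·log log n, i.e., polylogarithmically in n.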

4 Special Cases

4.1 Independent Set

From the proof of Theorem 1, f(k,φ) in Theorem 1 is bounded by log(2pmax). In particular, if pmax is bounded by 2^O(k), the Lipschitz constant depends linearly on k. Here, we prove Theorem 3 for the maximum weight independent set problem by showing that pmax ≤ 2^O(k). Similar arguments apply to several other problems, such as the minimum weight vertex cover problem and the minimum weight dominating set problem. For a k-graph G and S⊆𝗌𝗋𝖼(G), let
φS(X) ≡ ∀x (x∈𝗌𝗋𝖼(G) → ¬(x∈X)) ∧ ∀x∀y (((x∈X ∨ x∈S) ∧ (y∈X ∨ y∈S)) → ¬𝖺𝖽𝗃(x,y)).

In words, 𝗌𝖺𝗍(G,φS) is the family of subsets X of V such that X is disjoint from 𝗌𝗋𝖼(G) and X∪S is an independent set of G. Then, we have
𝗌𝖺𝗍(𝖿𝗀B(G),φS) = ⋃S′⊆B { X ∪ S′ : X ∈ 𝗌𝖺𝗍(G,φS∪S′) },  𝗌𝖺𝗍(G//H,φS) = { X ∪ Y : X ∈ 𝗌𝖺𝗍(G,φS), Y ∈ 𝗌𝖺𝗍(H,φS) }.

In particular, we have pmax ≤ 2^O(k), and therefore Theorem 3 is proved.
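The forget identity above can be verified by brute force on a tiny example. The encoding of a k-graph as a (V, E, src) triple and the helper names below are our own:

```python
from itertools import combinations

def subsets(s):
    """All subsets of s, as frozensets."""
    s = sorted(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def sat(V, E, src, S):
    """sat(G, phi_S): all X subseteq V \\ src such that X | S is independent
    in G, computed by brute force."""
    out = set()
    for X in subsets(V - src):
        Y = X | S
        if all(not (u in Y and v in Y) for u, v in E):
            out.add(X)
    return out

# tiny example: path a-b-c with sources {a, b}; forget B = {b}
V, E = frozenset("abc"), [("a", "b"), ("b", "c")]
src, B = frozenset("ab"), frozenset("b")

lhs = sat(V, E, src - B, frozenset())            # sat(fg_B(G), phi_empty)
rhs = set()
for Sp in subsets(B):                            # union over S' subseteq B
    rhs |= {X | Sp for X in sat(V, E, src, Sp)}  # sat(G, phi_S') with S' added back
assert lhs == rhs
```

Here both sides evaluate to {∅, {b}, {c}}: once b is no longer a source, it may enter the solution, and on the right-hand side this is accounted for by the choice S′={b}.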

4.2 Max Ones

Recall that in the max ones problem, we are given a 3CNF formula Φ over a variable set X and a weight function w : X → ℝ≥0, and the goal is to find a satisfying assignment σ : X → {0,1} that maximizes the weight ∑x∈σ⁻¹(1) w(x). We reduce the problem to a graph problem. Let GΦ = (X ∪ X̄ ∪ 𝒞, E) be the graph such that X is the variable set of Φ, X̄ = {x̄i : xi∈X}, 𝒞 is the clause set of Φ, {xi, x̄i}∈E for all xi∈X, {Cj, xi}∈E iff clause Cj contains xi as a positive literal, and {Cj, x̄i}∈E iff clause Cj contains xi as a negative literal. Let w′(x) = w(x) if x∈X, and w′(x) = 0 otherwise. Note that the treewidth of GΦ is at most 2|X|.
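The construction of GΦ can be sketched directly in code; the clause encoding (a list of (variable, sign) pairs) and the negation marker '~' below are our own conventions:

```python
def build_reduction_graph(clauses, variables):
    """Build G_Phi: vertices are variables x, negations ('~', x), and clause
    indices; edges join each variable to its negation, and each clause to the
    vertices of the literals it contains."""
    neg = lambda x: ("~", x)
    V = set(variables) | {neg(x) for x in variables} | set(range(len(clauses)))
    E = {frozenset({x, neg(x)}) for x in variables}
    for j, clause in enumerate(clauses):
        for x, positive in clause:  # literal = (variable, sign)
            E.add(frozenset({j, x if positive else neg(x)}))
    return V, E

# Phi = (x1 or not x2): clause 0 touches x1 positively and x2 negatively
V, E = build_reduction_graph([[("x1", True), ("x2", False)]], ["x1", "x2"])
assert frozenset({"x1", ("~", "x1")}) in E
assert frozenset({0, "x1"}) in E and frozenset({0, ("~", "x2")}) in E
assert frozenset({0, "x2"}) not in E
```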

Here, the notations a∈A∪B, a∈A∩B, a∈A∖B, ∀a∈A ψ, and ∃a∈A ψ are syntactic sugar defined in the usual sense. For a k-graph G and S,D⊆𝗌𝗋𝖼(G), let
φS,D(A) ≡ ∃B [ ∀a∈A∪S ∀b∈B ((a∈X ∧ b∈X̄) → ¬𝖺𝖽𝗃(a,b))
∧ ∀x∈X∪X̄ ∀y∈X∪X̄ (𝖺𝖽𝗃(x,y) → (x∈A∪S∪B ∨ y∈A∪S∪B))
∧ ∀s∈𝗌𝗋𝖼(G) ¬(s∈A) ∧ ∀c∈𝒞∖D ∃x∈A∪S∪B 𝖺𝖽𝗃(x,c) ].

The first and second rows say that A∪S and B represent the sets σ⁻¹(1) and σ⁻¹(0) of an assignment σ, respectively. The third row says that A contains no sources and that all clauses except those in D are satisfied by the assignment σ defined by A∪S and B. Then, the max ones problem is equivalent to finding a vertex set Y that maximizes the weight ∑x∈Y w′(x) and satisfies GΦ ⊨ φ∅,∅(Y). Here, we have

𝗌𝖺𝗍(𝖿𝗀B(G),φS,D) = ⋃S′⊆B { X ∪ S′ : X ∈ 𝗌𝖺𝗍(G,φS∪S′,D) } if D∩B=∅, and 𝗌𝖺𝗍(𝖿𝗀B(G),φS,D) = ∅ otherwise;
𝗌𝖺𝗍(G//H,φS,D) = ⋃D1∩D2=D, D1,D2⊆𝗌𝗋𝖼(G) { X ∪ Y : X ∈ 𝗌𝖺𝗍(G,φS,D1), Y ∈ 𝗌𝖺𝗍(H,φS,D2) }.

In particular, we have pmax ≤ 2^O(k).
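The second identity also explains this bound: for a fixed D, the pairs (D1, D2) with D1∩D2=D number exactly 3^|𝗌𝗋𝖼(G)∖D|, since each source outside D lies in D1 only, in D2 only, or in neither; together with the 2^|B| choices of S′ in the forget formula, this keeps pmax ≤ 2^O(k). A small enumeration check (the set sizes are our own choice):

```python
from itertools import combinations

def subsets(s):
    """All subsets of s, as frozensets."""
    s = sorted(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

src, D = frozenset(range(4)), frozenset({0})
pairs = [(D1, D2) for D1 in subsets(src) for D2 in subsets(src) if D1 & D2 == D]
# each of the 3 elements outside D is in D1 only, D2 only, or neither
assert len(pairs) == 3 ** len(src - D)
```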

References

  • [1] Brenda Baker. Approximation algorithms for NP-complete problems on planar graphs. Journal of the ACM (JACM), 41(1):153–180, 1994. doi:10.1145/174644.174650.
  • [2] Hans Bodlaender and Torben Hagerup. Parallel algorithms with optimal speedup for bounded treewidth. SIAM Journal on Computing, 27(6):1725–1746, 1998. doi:10.1137/S0097539795289859.
  • [3] Bruno Courcelle. The monadic second-order logic of graphs. I. Recognizable sets of finite graphs. Information and Computation, 85(1):12–75, 1990. doi:10.1016/0890-5401(90)90043-H.
  • [4] Bruno Courcelle and Joost Engelfriet. Graph Structure and Monadic Second-Order Logic – A Language-Theoretic Approach, volume 138 of Encyclopedia of mathematics and its applications. Cambridge University Press, 2012. doi:10.1017/CBO9780511977619.
  • [5] Bruno Courcelle and Mohamed Mosbah. Monadic second-order evaluations on tree-decomposable graphs. Theoretical Computer Science, 109(1&2):49–82, 1993. doi:10.1016/0304-3975(93)90064-Z.
  • [6] Nadia Creignou, Sanjeev Khanna, and Madhu Sudan. Complexity classifications of Boolean constraint satisfaction problems. SIAM, 2001.
  • [7] Cynthia Dwork. Differential privacy. In International Colloquium on Automata, Languages and Programming (ICALP), volume 4052, pages 1–12, 2006. doi:10.1007/11787006_1.
  • [8] Noah Fleming and Yuichi Yoshida. Sensitivity lower bounds for approximation algorithms. arXiv preprint arXiv:2411.02744, 2024. doi:10.48550/arXiv.2411.02744.
  • [9] Sanjeev Khanna, Madhu Sudan, Luca Trevisan, and David Williamson. The approximability of constraint satisfaction problems. SIAM Journal on Computing, 30(6):1863–1920, 2001. doi:10.1137/S0097539799349948.
  • [10] Tuukka Korhonen. Lower bounds on dynamic programming for maximum weight independent set. In Nikhil Bansal, Emanuela Merelli, and James Worrell, editors, International Colloquium on Automata, Languages, and Programming (ICALP), volume 198 of LIPIcs, pages 87:1–87:14, 2021. doi:10.4230/LIPICS.ICALP.2021.87.
  • [11] Stephan Kreutzer. Algorithmic meta-theorems. In Finite and Algorithmic Model Theory, volume 379 of London Mathematical Society Lecture Note Series, pages 177–270. Cambridge University Press, 2011.
  • [12] Soh Kumabe and Yuichi Yoshida. Average sensitivity of dynamic programming. In ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1925–1961, 2022. doi:10.1137/1.9781611977073.77.
  • [13] Soh Kumabe and Yuichi Yoshida. Average sensitivity of the knapsack problem. In European Symposium on Algorithms (ESA), volume 244, pages 75:1–75:14, 2022. doi:10.4230/LIPICS.ESA.2022.75.
  • [14] Soh Kumabe and Yuichi Yoshida. Lipschitz continuous algorithms for graph problems. In Symposium on Foundations of Computer Science (FOCS), pages 762–797, 2023. doi:10.1109/FOCS57990.2023.00051.
  • [15] Soh Kumabe and Yuichi Yoshida. Lipschitz continuous allocations for optimization games. In International Colloquium on Automata, Languages, and Programming (ICALP), volume 297, pages 102:1–102:16, 2024. doi:10.4230/LIPICS.ICALP.2024.102.
  • [16] Soh Kumabe and Yuichi Yoshida. Lipschitz continuous algorithms for covering problems. In Symposium on Discrete Algorithms (SODA), pages 1057–1093, 2025. doi:10.1137/1.9781611978322.31.
  • [17] Leonid Libkin. Elements of Finite Model Theory. Texts in Theoretical Computer Science. An EATCS Series. Springer, 2004. doi:10.1007/978-3-662-07003-1.
  • [18] Quanquan Liu, Grigoris Velegkas, Yuichi Yoshida, and Felix Zhou. Pointwise lipschitz continuous graph algorithms via proximal gradient analysis. arXiv preprint, 2024. doi:10.48550/arXiv.2405.08938.
  • [19] Shogo Murai and Yuichi Yoshida. Sensitivity analysis of centralities on unweighted networks. In The World Wide Web Conference (WWW), pages 1332–1342, 2019. doi:10.1145/3308558.3313422.
  • [20] Pan Peng and Yuichi Yoshida. Average sensitivity of spectral clustering. In ACM SIGKDD international conference on knowledge discovery & data mining (KDD), pages 1132–1140, 2020. doi:10.1145/3394486.3403166.
  • [21] Nithin Varma and Yuichi Yoshida. Average sensitivity of graph algorithms. SIAM Journal on Computing, 52(4):1039–1081, 2023. doi:10.1137/21M1399592.
  • [22] Yuichi Yoshida and Shinji Ito. Average sensitivity of Euclidean k-clustering. Advances in Neural Information Processing Systems (NeurIPS), 35:32487–32498, 2022.
  • [23] Yuichi Yoshida and Samson Zhou. Sensitivity analysis of the maximum matching problem. In Innovations in Theoretical Computer Science Conference (ITCS), volume 185, pages 58:1–58:20, 2021. doi:10.4230/LIPIcs.ITCS.2021.58.