
Redundancy Rules for MaxSAT

Ilario Bonacina, UPC Universitat Politècnica de Catalunya, Barcelona, Spain. Maria Luisa Bonet, UPC Universitat Politècnica de Catalunya, Barcelona, Spain. Sam Buss, University of California, San Diego, CA, USA. Massimo Lauria, Sapienza Università di Roma, Rome, Italy.
Abstract

The concept of redundancy in SAT leads to more expressive and powerful proof search techniques, e.g., techniques able to express various inprocessing rules, and it gives rise to interesting hierarchies of proof systems [Heule et al. '20, Buss-Thapen '19]. Redundancy has also been integrated into MaxSAT [Ihalainen et al. '22, Berg et al. '23, Bonacina et al. '24].

In this paper, we define a structured hierarchy of redundancy proof systems for MaxSAT, with the goal of studying its proof complexity. We obtain MaxSAT variants of proof systems such as SPR, PR, SR, and others, previously defined for SAT.

All our rules are polynomially checkable, unlike those of [Ihalainen et al. '22]. Moreover, they are simpler and weaker than those of [Berg et al. '23], and possibly amenable to lower bounds. This work also complements the approach of [Bonacina et al. '24]: their proof systems use different rule sets for soft and hard clauses, while here we propose a system using only hard clauses and blocking variables. This is easier to integrate with current solvers and proof checkers.

We discuss the strength of the systems introduced, we show some of their limitations, and we give a short cost-SR proof that any assignment for the weak pigeonhole principle $\mathsf{PHP}^{m}_{n}$ falsifies at least $m-n$ clauses.

Keywords and phrases:
MaxSAT, Redundancy Rules, Pigeonhole Principle
Funding:
Ilario Bonacina: The author was supported by grant PID2022-138506NB-C22 (PROOFS BEYOND) funded by AEI.
Maria Luisa Bonet: The author was supported by grant PID2022-138506NB-C21 (PROOFS BEYOND) funded by AEI.
Massimo Lauria: The author has been supported by the project PRIN 2022 “Logical Methods in Combinatorics” N. 2022BXH4R5 of the Italian Ministry of University and Research (MIUR).
Copyright and License:
© Ilario Bonacina, Maria Luisa Bonet, Sam Buss, and Massimo Lauria; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Theory of computation → Proof complexity
Acknowledgements:
The authors would like to thank the Simons Institute for the Theory of Computing: part of this work has been done during the “Extended Reunion: Satisfiability” program (Spring 2023). Another part of this work has been done during the Oberwolfach workshop 2413 “Proof Complexity and Beyond” and during the 2023 Workshop on Proof Theory and its Applications organized by the Proof Society.
Editors:
Jeremias Berg and Jakob Nordström

1 Introduction

This paper investigates new proof systems for MaxSAT that incorporate redundancy inferences tailored to MaxSAT. Redundancy inferences were introduced as extensions to SAT solvers to allow non-implicational inferences that preserve satisfiability and unsatisfiability. For resolution and SAT solvers, the first redundancy inferences were based on blocked clauses (BC) [25] and Resolution Asymmetric Tautology (RAT) [22, 15]. Other work on redundancy reasoning includes [14, 16, 20, 23, 17]; of particular relevance to the present paper are the work of Heule, Kiesl, and Biere [19] and the work of Buss and Thapen [9]. Redundancy inferences formalize "without loss of generality" reasoning [30] and can substantially strengthen resolution and, in some cases, the effectiveness of SAT solvers on hard problems such as the pigeonhole principle (PHP) [20]. Indeed, in their full generality, redundancy inferences allow resolution to polynomially simulate extended resolution.

MaxSAT is a generalization of SAT; it is the problem of determining a truth assignment for a CNF formula that minimizes the number of falsified clauses. Although the MaxSAT problem is inherently more difficult than SAT, in some cases MaxSAT solvers can be adapted to be more efficient in practice than CDCL solvers on hard problems such as PHP [7]. There are several approaches to MaxSAT solving, including MaxSAT resolution [8, 26], core-guided MaxSAT [12, 1, 28, 27, 29], and maximum-hitting-set MaxSAT [2, 11, 31]; the present paper discusses only MaxSAT resolution. The MaxSAT resolution proof system was first defined by Larrosa and Heras [26] and proved complete by Bonet, Levy and Manyà [8].

We define cost-preserving redundancy rules for MaxSAT mirroring the redundancy rules for SAT, called "cost-BC", "cost-LPR", "cost-SPR", "cost-PR", and "cost-SR" (see Definition 3.2). The strongest of these is "cost-SR", based on substitution redundancy (SR) [9]. All five of these new inferences are sound for MaxSAT reasoning (Theorem 4.2). Furthermore, we prove that cost-SPR, cost-PR and cost-SR are complete for MaxSAT (Theorem 4.3). On the other hand, we prove that cost-LPR and cost-BC are incomplete for MaxSAT (Corollary 5.2). We illustrate the power of cost-SR by giving polynomial size proofs of the cost of the blocking-variable version of the weak pigeonhole principle $\mathsf{bPHP}^{m}_{n}$ for arbitrary numbers $m>n$ of pigeons and holes (Theorem 6.3).

Ours is not the first paper bringing redundancy reasoning to the context of optimization. For instance, the work of Ihalainen, Berg and Järvisalo [21], building on [4], introduced versions of redundancy inferences that work with MaxSAT. In contrast to the system CPR of [21], all of our "cost-" inferences are polynomially checkable for validity, and thus all give traditional Cook-Reckhow proof systems for MaxSAT.

Another system with redundancy rules for certifying unsatisfiability and optimization is veriPB [5]. Being rooted in cutting planes, veriPB is particularly apt at certifying optimality, and it can log the reasoning of MaxSAT solver strategies that are far out of reach of MaxSAT resolution [3]. The propositional fragment of veriPB is as strong as Extended Resolution and DRAT, and plausibly even stronger [24]. In contrast, our systems are plausibly weaker and simpler, yet strong enough to efficiently prove interesting formulas. Still, our systems might be amenable to proving lower bounds. Indeed, our explicit goal is to study the proof complexity of redundancy rules for MaxSAT in a similar vein to what [9] does for SAT, something that is beyond the scope of veriPB, which focuses on proof logging for actual solvers. As a starting point, we show that "cost-BC" and "cost-LPR" are not complete (Corollary 5.2), and we show width lower bounds on "cost-SPR" and an analogue of a width lower bound for "cost-SR" (Corollary 5.3). Finally, there is another work that modifies the redundancy rules to make them compatible with the MaxSAT model of soft and hard clauses [6]. That work has a similar spirit to this one, but departs in the redundancy test, which checks for inclusion rather than for reverse unit propagation. This choice makes the inference weaker but preserves the number of unsatisfied clauses, hence the rule applies directly to soft clauses.

Before proceeding with the description and analysis of our systems, we should highlight two aspects of redundancy inference that are somewhat orthogonal to the choice of the concrete inference rule: clause deletion and new variable introduction. The applicability of a redundancy rule depends on the clauses present in the database, and it seems that allowing deletion of past clauses makes the system stronger, and indeed collapses together the power of several types of inference rules (see [9]). Likewise, the possibility of introducing new variables in redundant clauses makes all such systems as powerful as extended resolution [25]. Consistently with the stated goal of having systems that are simple and amenable to proof complexity analysis, in this paper we allow neither clause deletion nor new variables.

Structure of the paper

Section 2 contains all the necessary preliminaries, including notation on MaxSAT and the blocking variables encoding of MaxSAT instances (blocking variables are also used by [21]). Section 3 introduces the redundancy rules for MaxSAT, proves their basic properties, and defines calculi based on those rules. Section 4 shows their soundness, and their completeness. Section 5 shows the incompleteness of cost-BC/cost-LPR and some limitations of cost-SPR and even cost-SR. Section 6 gives examples of applications of the redundancy rules, including a polynomial size proof of the optimal cost of the weak Pigeonhole Principle and a general result about the polynomial size provability of minimally unsatisfiable formulas. Section 7 gives some concluding remarks.

2 Preliminaries

For a natural number $n$, let $[n]$ be the set $\{1,\ldots,n\}$. Sets and multisets are denoted with capital Roman or Greek letters.

Propositional logic notation

A Boolean variable $x$ takes values in $\{0,1\}$. A literal is either a variable $x$ or its negation $\bar{x}$. A clause is a finite disjunction of literals, i.e., $C=\bigvee_i \ell_i$. The empty clause is $\square$. A formula in Conjunctive Normal Form (CNF) is a conjunction of clauses $\Gamma=\bigwedge_j C_j$. We identify a CNF with the multiset of its clauses, and denote as $|\Gamma|$ the number of its clauses (counted with multiplicity). We denote as $\mathrm{Var}(\Gamma)$ the set of variables in $\Gamma$.

Substitutions and assignments

A substitution $\sigma$ for a set of variables $X$ is a function such that $\sigma(x)$ is either $0$, $1$, or some literal defined on $X$. For convenience, we extend a substitution $\sigma$ to constants and literals, setting $\sigma(0)=0$, $\sigma(1)=1$, and $\sigma(\bar{x})=\overline{\sigma(x)}$ for any variable $x\in X$. The composition of two substitutions $\sigma,\tau$ is the substitution $\sigma\circ\tau$, where $\sigma\circ\tau(x)=\sigma(\tau(x))$ for $x\in X$. A substitution $\sigma$ is an assignment when $\sigma(x)\in\{0,1,x\}$ for every $x\in X$. The domain of an assignment $\sigma$ is $\mathrm{dom}(\sigma)=\sigma^{-1}(\{0,1\})$, and $\sigma$ is a total assignment over $X$ if $X$ is its domain, i.e., $\sigma$ maps all variables in $X$ to Boolean values. Given a clause $C=\bigvee_i \ell_i$ and a substitution $\sigma$, the clause $C$ restricted by $\sigma$ is $C\restriction_\sigma=\bigvee_i \sigma(\ell_i)$, simplified using the usual logic rules, i.e., $D\vee 0=D$, $D\vee 1=1$, and $D\vee\square=D$. Another notation for $C\restriction_\sigma$ is $\sigma(C)$. If $\sigma(C)=1$ or $\sigma(C)$ is tautological we say that $\sigma\models C$, i.e., $\sigma$ satisfies $C$.

The restriction of a CNF formula $\Gamma$ by $\sigma$, denoted $\Gamma\restriction_\sigma$, is the conjunction of all clauses $C\restriction_\sigma$ where $C\in\Gamma$ and $\sigma(C)\neq 1$. The CNF $\Gamma\restriction_\sigma$ is also a multiset. We say that $\sigma$ satisfies $\Gamma$ ($\sigma\models\Gamma$) if for every $C\in\Gamma$, $\sigma\models C$, i.e., $\Gamma\restriction_\sigma=\varnothing$. We say that $\Gamma\models C$ if for every substitution $\sigma$, if $\sigma\models\Gamma$ then $\sigma\models C$.

We identify a literal $\ell$ with the substitution that assigns $\ell$ to $1$ and leaves all other variables unassigned. Hence we use notations like $\Gamma\restriction_\ell$. Likewise, given a clause $C$ we denote as $\bar{C}$ the assignment that maps all literals in $C$ to false, and we use the notation $\Gamma\restriction_{\bar{C}}$.

Unit propagation

A unit clause is a clause with just one literal. Unit propagation works as follows. Start with a CNF $\Gamma$: if $\Gamma$ has no unit clauses, the process ends; otherwise pick some unit clause $\ell$ in $\Gamma$ arbitrarily, remove from $\Gamma$ all clauses containing $\ell$, and remove the literal $\bar{\ell}$ from all clauses containing it. Keep repeating until $\Gamma$ has no more unit clauses. Regardless of the choice of the unit clauses, the process always ends with the same formula.
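As a concrete illustration (not from the paper), the procedure above can be sketched in Python, representing literals as signed integers and clauses as frozensets; the helper names `unit_propagate` and `up_implies` are ours.

```python
def unit_propagate(clauses):
    """Apply unit propagation to a CNF until no unit clauses remain.

    Clauses are frozensets of signed-int literals (3 means x3, -3 its
    negation).  The empty frozenset stands for the empty clause.
    """
    clauses = [frozenset(c) for c in clauses]
    while True:
        unit = next((c for c in clauses if len(c) == 1), None)
        if unit is None:
            return clauses
        (lit,) = unit
        new_clauses = []
        for c in clauses:
            if lit in c:
                continue            # clause satisfied: remove it
            if -lit in c:
                c = c - {-lit}      # falsified literal: shrink the clause
            new_clauses.append(c)
        clauses = new_clauses

def up_implies(gamma, clause):
    """The relation Γ ⊢₁ C: propagating Γ ∧ C̄ yields the empty clause."""
    negated = [frozenset([-l]) for l in clause]
    return frozenset() in unit_propagate(list(gamma) + negated)
```

For example, `up_implies([[1], [-1, 2]], [2])` holds, since propagating the unit 1 in $\{x_1,\ \bar{x}_1\vee x_2,\ \bar{x}_2\}$ leads to the empty clause.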

We say that $\Gamma\vdash_1 C$ when the application of unit propagation to the formula $\Gamma\wedge\bar{C}$ produces the empty clause. For two CNF formulas $\Gamma,\Delta$ we say that $\Gamma\vdash_1\Delta$ if for every $D\in\Delta$, $\Gamma\vdash_1 D$. Clearly, if $\Gamma\supseteq\Delta$ then $\Gamma\vdash_1\Delta$, and if $\Gamma\vdash_1\Delta$, then $\Gamma\models\Delta$. It is important to stress that the $\vdash_1$ relation is efficiently checkable.

Observation 2.1 ([9, Fact 1.3]).

Let $\sigma$ be a substitution and $\Gamma,\Delta$ be CNF formulas. If $\Gamma\vdash_1\Delta$, then $\Gamma\restriction_\sigma\vdash_1\Delta\restriction_\sigma$.

Resolution

Resolution is a well-studied propositional deduction system with two inference rules: (i) from a clause $A$ we can deduce any $B$ such that $A\subseteq B$ (weakening); (ii) from clauses $A\vee x$ and $B\vee\bar{x}$ we can deduce $A\vee B$. A resolution proof from a set of clauses $\Gamma$ is a sequence of clauses $D_1,D_2,\ldots,D_t$ where each $D_i$ is either already in $\Gamma$ or is deduced from earlier clauses in the sequence using one of the two inference rules. Resolution is complete, thus deciding whether a clause $C$ can be deduced from $\Gamma$ is the same as deciding whether $\Gamma\models C$.
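On the signed-integer clause representation, rule (ii) is a one-liner (a sketch of ours, not from the paper; `resolve` is a hypothetical name):

```python
def resolve(c1, c2, x):
    """Resolve A∨x and B∨x̄ on variable x (x > 0), returning A∨B.
    Clauses are frozensets of signed-int literals."""
    assert x in c1 and -x in c2, "x must occur positively in c1, negatively in c2"
    return (c1 - {x}) | (c2 - {-x})
```

For instance, resolving $x_1\vee x_2$ with $\bar{x}_1\vee x_3$ on $x_1$ gives $x_2\vee x_3$.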

MaxSAT

Given a CNF formula $F$, MaxSAT asks to find the maximum number of clauses in $F$ which can be simultaneously satisfied. In applications, it is useful to consider a generalization in which we divide the clauses into hard and soft (partial MaxSAT). Hard clauses must be satisfied, while soft clauses can be falsified at a cost. Consider $F=H\wedge S$ where $H$ is the multiset of hard clauses and $S$ is the multiset of soft ones. In this model, MaxSAT asks to find the maximum number of clauses in $S$ that can be simultaneously satisfied by an assignment that satisfies all clauses in $H$. Observe that the optimization problem is not well defined if $H$ is not satisfiable.¹ It is not relevant whether $H$ is a set or a multiset. In $S$, on the other hand, the multiplicity of soft clauses must be accounted for.

¹ An even more general version is weighted MaxSAT, where we would consider a weighted set of clauses $(F,w)$, i.e., each clause $C\in F$ has an associated weight $w(C)$, where $w\colon F\to\mathbb{N}\cup\{\infty\}$. In this model the goal is to minimize the weight of the falsified clauses. The role of the weight $\infty$ is to model hard clauses. In this paper we do not focus on this model.

Proof systems for MaxSAT aim to show lower bounds on the cost of (partial) MaxSAT instances, one such system is MaxSAT resolution.

MaxSAT with blocking variables

Without loss of generality we can assume that all soft clauses in a MaxSAT instance are unit clauses; indeed, using a new variable b, a soft clause C can be replaced with a hard clause Cb and a soft clause b¯, without affecting the cost. The variable b is usually called a blocking variable. This appears in [13], but it might have been used even earlier.

Definition 2.2.

Let $F=H\wedge S$ with soft clauses $S=C_1\wedge\cdots\wedge C_m$. The blocking variables formulation of $F$ is $F'=H'\wedge S'$ where

  • $H'=H\wedge(C_1\vee b_1)\wedge\cdots\wedge(C_m\vee b_m)$,

  • $S'=\bar{b}_1\wedge\cdots\wedge\bar{b}_m$,

and $b_1,\ldots,b_m$ are new variables (blocking variables) not appearing in $F$. We say that $\Gamma$ is a MaxSAT instance encoded with blocking variables when it is given as a set of hard clauses of the form of $H'$ above. The soft clauses, then, are implicit.
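Definition 2.2 is straightforward to mechanize. The sketch below (function name ours, not the paper's) takes hard and soft clauses over variables $1,\ldots,n$ and returns the hard clauses of the blocking-variables formulation together with the fresh blocking variables; the implicit soft clauses are the negations of the returned blocking variables.

```python
def blocking_formulation(hard, soft, n_vars):
    """Blocking-variables formulation of F = H ∧ S (Definition 2.2).
    Literals are signed ints; the i-th soft clause C_i gets the fresh
    blocking variable n_vars + i, turning C_i into the hard clause
    C_i ∨ b_i.  The implicit soft clauses are the units ¬b_i."""
    blocking = [n_vars + i for i in range(1, len(soft) + 1)]
    new_hard = [list(c) for c in hard]
    new_hard += [list(c) + [b] for c, b in zip(soft, blocking)]
    return new_hard, blocking
```

For example, with hard clause $x_1\vee x_2$ and soft clauses $\bar{x}_1$ and $\bar{x}_2$, the call `blocking_formulation([[1, 2]], [[-1], [-2]], 2)` yields hard clauses $\{x_1\vee x_2,\ \bar{x}_1\vee b_1,\ \bar{x}_2\vee b_2\}$ with blocking variables numbered 3 and 4.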

Observation 2.3.

Let $F=H\wedge S$ be a MaxSAT instance and $F'=H'\wedge S'$ be the blocking variables formulation of $F$. Any assignment that satisfies $H$ and falsifies $k$ clauses in $S$ can be extended to an assignment that satisfies $H'$ and sets $k$ blocking variables to true. Vice versa, any assignment that satisfies $H'$ and sets $k$ blocking variables to true satisfies $H$ too and falsifies at most $k$ clauses in $S$.

Because of Observation 2.3, for the rest of this work we consider $\Gamma$ to be a MaxSAT instance encoded with blocking variables, usually named $\{b_1,\ldots,b_m\}$. The goal is to satisfy $\Gamma$ while setting to true the least number of blocking variables. More formally, given a total assignment $\alpha$ for $\Gamma$, we define

$$\mathrm{cost}(\alpha)=\sum_{i=1}^{m}\alpha(b_i)\qquad\text{and}\qquad\mathrm{cost}(\Gamma)=\min_{\alpha\,:\,\alpha\models\Gamma}\mathrm{cost}(\alpha)$$

and the goal is to find the value of $\mathrm{cost}(\Gamma)$. Notice that the notation $\mathrm{cost}(\alpha)$ is defined even for assignments not satisfying $\Gamma$.
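The two definitions can be checked by brute force on small instances (exponential in the number of variables, for illustration only; the names are ours):

```python
from itertools import product

def cost_of(alpha, blocking):
    """cost(α): number of blocking variables set to true."""
    return sum(alpha[b] for b in blocking)

def cost_of_instance(gamma, n_vars, blocking):
    """cost(Γ): minimum cost(α) over total assignments α ⊨ Γ (None if unsat)."""
    costs = []
    for bits in product([0, 1], repeat=n_vars):
        alpha = {v: bits[v - 1] for v in range(1, n_vars + 1)}
        # α ⊨ Γ iff every clause has a literal made true by α
        if all(any(alpha[abs(l)] == (1 if l > 0 else 0) for l in c) for c in gamma):
            costs.append(cost_of(alpha, blocking))
    return min(costs) if costs else None
```

For instance, the instance with hard clauses $x\vee b_1$ and $\bar{x}\vee b_2$ (variables numbered $x=1$, $b_1=2$, $b_2=3$) has cost 1: either choice of $x$ forces one blocking variable to true.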

3 Redundancy rules for MaxSAT

In the context of SAT, a clause $C$ is redundant w.r.t. a CNF instance $\Gamma$ if $\Gamma$ and $\Gamma\cup\{C\}$ are equisatisfiable, that is, either both are satisfiable or both are unsatisfiable [25]. The natural adaptation of this notion to MaxSAT is a clause $C$ that does not affect the cost of $\Gamma$.

Definition 3.1 (redundant clause, [21]).

A clause $C$ is redundant w.r.t. a MaxSAT instance $\Gamma$ when

$$\mathrm{cost}(\Gamma)=\mathrm{cost}(\Gamma\cup\{C\}).\tag{1}$$

Clauses that logically follow from Γ are obviously redundant, but there may be other useful clauses that do not follow logically, and yet do not increase the cost if added.

The condition in eq. (1) is not polynomially checkable (unless, say, P = NP). Therefore, we consider efficiently certifiable notions of redundancy, i.e., ways to add redundant clauses (in the sense of eq. (1)) while certifying their redundancy efficiently. This is done by showing how to extend in a systematic way the notions of efficiently certifiable redundancy already studied in the context of SAT (BC, RAT, LPR, SPR, PR, SR) [19, 9] to the context of MaxSAT. This is an alternative to the approach of [21]. This definition could also be seen as a very special case of veriPB [5] (see Section 3.1 for more details on the connections with veriPB).

Definition 3.2.

A clause $C$ is cost substitution redundant (cost-SR) w.r.t. $\Gamma$ if there exists a substitution $\sigma$ such that

  1. $\Gamma\wedge\bar{C}\vdash_1(\Gamma\cup\{C\})\restriction_\sigma$ (redundancy);

  2. for all total assignments $\tau\supseteq\bar{C}$, $\mathrm{cost}(\tau\circ\sigma)\leq\mathrm{cost}(\tau)$ (cost).

If the substitution σ has some additional structure, we have the following redundancy rules listed in decreasing order of generality:

Cost propagation redundant (cost-PR)

if $\sigma$ is a partial assignment.

Cost subset propagation redundant (cost-SPR)

if $\sigma$ is a partial assignment with the same domain as $\bar{C}$. In other words, $\sigma$ flips some variables in $\bar{C}$.

Cost literal propagation redundant (cost-LPR)

if σ is a partial assignment with the same domain as C¯, but differs from C¯ on exactly one variable.

Cost blocked clause (cost-BC)

if $\sigma$ is a partial assignment with the same domain as $\bar{C}$, which differs from $\bar{C}$ on exactly one variable $v$, and moreover, for every clause $D\in\Gamma$ containing the variable $v$, $\sigma\models D$.² In this case, the redundancy condition is always satisfied.

² The definition of blocked clause is written to match the previous definitions.

Item 1 in Definition 3.2 claims that adding C does not make Γ unsatisfiable, unless it was already the case. Together with Item 2, it ensures that any assignment that falsifies the new clause C can be patched with a substitution σ so that C is satisfied without increasing the minimum cost (see Lemma 3.4).

Indeed, Item 1 in Definition 3.2 and its variants correspond to the redundancy rules of proof systems SR, PR, SPR, and LPR from [9], adapted here to consider cost. Since LPR is the same as RAT (see [9, Theorem 1.10]), the notion of cost literal propagation redundancy could as well be called cost-RAT redundancy.

 Remark 3.3.

It is important to compare Definition 3.2 with [21, Definition 2]. The redundancy conditions are very similar; the main differences are in the cost conditions. Let us compare their CPR rule with our cost-PR. In cost-PR, the cost condition requires the witness $\sigma$ to be at least as good as all possible extensions of $\bar{C}$, while in CPR the requirement is enforced only on those extensions of $\bar{C}$ that satisfy $\Gamma$. The latter, more permissive, condition allows one to derive more clauses, but it is unlikely to be polynomially checkable, while the condition in cost-PR is polynomially checkable (see Lemma 3.5). In [21], the authors also define two polynomially checkable rules where the cost condition is not present, but implicitly enforced via restrictions on the type of assignments used. Those rules are special cases of cost-LPR and cost-SPR respectively.

Lemma 3.4.

If $C$ is cost-SR w.r.t. $\Gamma$, then $C$ is redundant w.r.t. $\Gamma$.

Proof.

It is enough to show that $\mathrm{cost}(\Gamma\cup\{C\})\leq\mathrm{cost}(\Gamma)$. Let $\mathrm{cost}(\Gamma)=k$. To show that adding $C$ to $\Gamma$ does not increase the cost, consider an optimal total assignment $\alpha$ that satisfies $\Gamma$ and sets to true exactly $k$ blocking variables. If $\alpha\models C$ we already have that $\alpha\models\Gamma\cup\{C\}$ and $\mathrm{cost}(\alpha)=k$. Otherwise, $\alpha$ extends $\bar{C}$ and, by assumption, there is a substitution $\sigma$ such that $\mathrm{cost}(\alpha\circ\sigma)\leq k$. To show that $\mathrm{cost}(\Gamma\cup\{C\})\leq k$, it remains to show that indeed $\alpha\circ\sigma\models\Gamma\cup\{C\}$. By assumption,

$$\Gamma\wedge\bar{C}\vdash_1(\Gamma\cup\{C\})\restriction_\sigma,$$

and, since $\alpha\models\Gamma$ and $\alpha$ extends $\bar{C}$, then $\alpha\models(\Gamma\cup\{C\})\restriction_\sigma$ too. Equivalently, $\alpha\circ\sigma\models\Gamma\cup\{C\}$.

Both Item 1 and Item 2 of Definition 3.2 are stronger than what is actually needed for Lemma 3.4 to hold. Indeed, for Item 1, it would be enough that $\Gamma\wedge\bar{C}\models(\Gamma\cup\{C\})\restriction_\sigma$, and, for Item 2, it would be sufficient to check it only for the $\tau\supseteq\bar{C}$ such that $\tau\models\Gamma$. Unfortunately, these latter versions of Item 1 and Item 2 are in general not polynomially checkable. Instead, our conditions are checkable in polynomial time.

Lemma 3.5.

Let Γ be a MaxSAT instance, C a clause and σ a substitution. There is a polynomial time algorithm to decide whether C is cost-SR w.r.t. Γ, given the substitution σ.

Proof (sketch).

The redundancy condition in Definition 3.2 is polynomially checkable, since it is a unit propagation check. To check the cost condition we need to check whether

$$\max_{\tau\supseteq\bar{C}}\big(\mathrm{cost}(\tau\circ\sigma)-\mathrm{cost}(\tau)\big)\tag{2}$$

is at most 0. This amounts to maximizing the function

$$\Big(\sum_{i=1}^{m}\sigma(b_i)-b_i\Big)\restriction_{\bar{C}}\tag{3}$$

over all assignments of the variables in $\mathrm{Var}(\Gamma)\setminus\mathrm{Var}(\bar{C})$. Observe that eq. (3) is a function of the form $c+\sum_{v\in\mathrm{Var}(\Gamma)\setminus\mathrm{Var}(\bar{C})}c_v v$, for suitable constants $c$ and $c_v$. Therefore its maximum can be easily determined in polynomial time and corresponds to assigning each $v$ to 1 when $c_v>0$ and to 0 otherwise.
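The maximization in the proof of Lemma 3.5 can be implemented directly. The sketch below (our naming, under the signed-integer literal convention, with the Boolean constants of a substitution represented as Python `True`/`False`) checks the cost condition of Definition 3.2 given $\sigma$: it accumulates the linear form of eq. (3) as a constant plus per-variable coefficients and compares its maximum with 0.

```python
def cost_condition_holds(sigma, c_bar, blocking):
    """Polynomial-time check of the cost condition of Definition 3.2:
    max over total τ ⊇ C̄ of cost(τ∘σ) - cost(τ) must be ≤ 0.

    sigma: dict var -> True/False (constants) or a literal (signed int);
           variables missing from the dict are mapped to themselves.
    c_bar: dict var -> 0/1, the assignment falsifying C.
    blocking: iterable of blocking variables.
    """
    const = 0
    coeff = {}          # coefficient c_v of each free variable v

    def add(term, sign):
        nonlocal const
        if isinstance(term, bool):              # constant 0 or 1
            const += sign * int(term)
        else:                                   # literal ±v
            v, positive = abs(term), term > 0
            if v in c_bar:                      # fixed by C̄
                val = c_bar[v] if positive else 1 - c_bar[v]
                const += sign * val
            elif positive:
                coeff[v] = coeff.get(v, 0) + sign
            else:                               # ¬v contributes 1 - v
                const += sign
                coeff[v] = coeff.get(v, 0) - sign

    for b in blocking:
        add(sigma.get(b, b), +1)                # + σ(b_i)
        add(b, -1)                              # - b_i
    # maximize the linear form: set v = 1 exactly when c_v > 0
    return const + sum(c for c in coeff.values() if c > 0) <= 0
```

For example, a $\sigma$ that swaps two blocking variables preserves the cost and passes the check, while a $\sigma$ setting a blocking variable to true fails it.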

Lemma 3.5 allows the definition of proof systems that extend resolution using cost redundancy rules in the sense of Cook-Reckhow [10].

Definition 3.6 (cost-SR calculus).

The cost-SR calculus is a proof system for MaxSAT. A derivation of a clause $C$ from a MaxSAT instance $\Gamma$ (encoded with blocking variables) is a sequence of clauses $D_1,D_2,\ldots,D_t$ where $C\in\Gamma\cup\{D_i\}_{i\in[t]}$ and each $D_i$ is either already in $\Gamma$, or is deduced from earlier clauses in the sequence using a resolution rule, or is cost-SR w.r.t. $\Gamma\cup\{D_1,\ldots,D_{i-1}\}$ with $\mathrm{Var}(D_i)\subseteq\mathrm{Var}(\Gamma)$.³ The length of such a derivation is $t$, i.e., the number of derived clauses. To check the validity of derivations in polynomial time, any application of the cost-SR rule comes accompanied by the corresponding substitution that witnesses its soundness.

³ We only consider the case where no new variables are added via cost-SR rules. To be coherent with [9], cost-SR should carry a superscript, following the notational convention used for SR, PR and SPR when the systems are not allowed to introduce new variables. We ignore that convention to ease an already rich notation.

If the goal is to certify that $\mathrm{cost}(\Gamma)\geq s$, we can accomplish this by deriving $s$ distinct unit clauses $b_{i_1},\ldots,b_{i_s}$ (see Theorem 4.2). If the goal is to certify that $\mathrm{cost}(\Gamma)=s$, we can accomplish this by deriving $s$ distinct unit clauses $b_{i_1},\ldots,b_{i_s}$ together with the unit clauses $\{\bar{b}_j : j\notin\{i_1,\ldots,i_s\}\}$ (see Theorem 4.2).

In a similar fashion we define cost-PR, cost-SPR, cost-LPR, and cost-BC calculi. A remarkable aspect of these calculi is that a proof must somehow identify the blocking variables to be set to true. When there are multiple optimal solutions, it is quite possible that none of the bi follows logically from Γ. Nevertheless, the redundancy rules, often used to model “without loss of generality” reasoning, can reduce the solution space.

3.1 Simulation by veriPB

We show that cost-SR and its subsystems are p-simulated by veriPB. This is actually not surprising since veriPB proof steps are based on cutting planes, which in turn easily simulates resolution. Furthermore veriPB includes the SAT redundancy rules equipped with some criteria of cost preservation. veriPB can argue within the proof whether some substitution is cost preserving. If veriPB can do that for the substitutions that respect the second condition in Definition 3.2, then it p-simulates cost-SR.

Since veriPB is a complex system, we describe the minimal fragment needed to simulate cost-SR. Consider a CNF formula F and some objective linear function f to be minimized. A veriPB derivation starts with F encoded as linear inequalities. At each step a linear inequality C is derived from the set Γ of previously derived inequalities in one of the following ways:

  1. $C$ is derived from $\Gamma$ using some cutting planes rule;

  2. there exists a substitution $\sigma$ so that $\Gamma\wedge\bar{C}\vdash(\Gamma\cup\{C\})\restriction_\sigma\cup\{f\restriction_\sigma\leq f\}$, where the symbol $\vdash$ means that there is a cutting planes proof that witnesses the implication.

In Item 2 the cutting planes proof is included in the veriPB proof to make it polynomially verifiable, unless it just consists of unit propagation steps or certain uses of Boolean axioms. In the latter cases the verifier can efficiently recover such derivation steps on its own.

The resolution inference rule of cost-SR can be simulated using Item 1. The redundancy inference rule of cost-SR is simulated by Item 2. The redundancy condition is syntactically the same, while the cost condition is argued via a cutting planes proof. The latter is the only non-trivial part. This argument gives a proof sketch of the following proposition.

Proposition 3.7.

Let $F$ be a MaxSAT instance over $n$ variables with blocking variables $b_1,b_2,\ldots,b_k$. If there is a $t$-step cost-SR derivation of a set of clauses $\Gamma$ from it, then there is a veriPB proof from $F$ and objective $f=\sum_{i=1}^{k}b_i$ of some $\Gamma'\supseteq\Gamma$ in $\mathrm{poly}(nt)$ steps.

4 Soundness and completeness

The calculi cost-SR/cost-PR/cost-SPR are sound and complete. Before proving soundness we show an auxiliary lemma: when the calculus certifies the lower bound for the optimal value, it can also certify optimality.

Lemma 4.1.

Let $\Gamma$ be a MaxSAT instance encoded with blocking variables $b_1,\ldots,b_m$, with $\mathrm{cost}(\Gamma)=k$, and suppose $\Gamma$ contains the unit clauses $b_{i_1},\ldots,b_{i_k}$. Then cost-PR can prove $\bar{b}_j$ for each $j\notin\{i_1,\ldots,i_k\}$ in $\mathcal{O}(km)$ steps.

Proof.

Let $\sigma$ be a total assignment satisfying $\Gamma$ with $\mathrm{cost}(\sigma)=k$, that is, $\sigma$ maps all the $b_{i_j}$s to 1 and the other blocking variables to 0. We derive all the clauses

$$C_j=\bar{b}_{i_1}\vee\cdots\vee\bar{b}_{i_k}\vee\bar{b}_j$$

with $j\notin\{i_1,\ldots,i_k\}$ using the cost-PR rule. For all clauses $C_j$, the substitution witnessing the validity of the cost-PR rule is always $\sigma$. The redundancy condition from Definition 3.2 is trivially true since $\Gamma$ union an arbitrary set of $C_j$s is mapped to 1 under $\sigma$. The cost condition is true because for every $\tau\supseteq\bar{C}_j$, $\mathrm{cost}(\tau)\geq k+1$ and $\mathrm{cost}(\tau\circ\sigma)=\mathrm{cost}(\sigma)=k$.

To conclude, by resolution, derive $\bar{b}_j$ from $C_j$ and the unit clauses $b_{i_1},\ldots,b_{i_k}$.

Theorem 4.2 (soundness of cost-SR).

Let $\Gamma$ be a MaxSAT instance encoded with blocking variables. If there is a cost-SR derivation of $k$ distinct unit clauses on blocking variables, then $\mathrm{cost}(\Gamma)\geq k$. If there is a cost-SR derivation of $k$ distinct unit clauses $b_{i_1},\ldots,b_{i_k}$ and of all the unit clauses $\bar{b}_j$ for $j\notin\{i_1,\ldots,i_k\}$, then $\mathrm{cost}(\Gamma)=k$.

Proof.

Let $b_1,\ldots,b_m$ be the blocking variables of $\Gamma$. Let $\Gamma'$ be the set of clauses in $\Gamma$ plus all the clauses derived in the proof of $\mathrm{cost}(\Gamma)\geq k$. That is, $\Gamma'$ contains $k$ distinct unit clauses $b_{i_1},\ldots,b_{i_k}$, hence $\mathrm{cost}(\Gamma')\geq k$. By Lemma 3.4, the cost is preserved along proof steps, therefore $\mathrm{cost}(\Gamma)=\mathrm{cost}(\Gamma')\geq k$. In the case where we also have all the $\bar{b}_j$s, then $\mathrm{cost}(\Gamma')=k$ and therefore $\mathrm{cost}(\Gamma)=k$.

As an immediate consequence of Theorem 4.2, the cost-PR, cost-SPR, and cost-LPR calculi are also sound. Moreover, we can always prove the optimal lower bound in the cost-SPR calculus.

Theorem 4.3 (completeness of cost-SPR).

Let $\Gamma$ be a MaxSAT instance encoded with blocking variables, with $\mathrm{cost}(\Gamma)=k$. There is a cost-SPR derivation of the unit clauses $b_{i_1},\ldots,b_{i_k}$, for some $k$ distinct blocking variables, and of all the $\bar{b}_j$ for $j\notin\{i_1,\ldots,i_k\}$.

Proof.

Let $b_1,\ldots,b_m$ be the blocking variables of $\Gamma$. Take $\alpha_{\mathrm{opt}}$ to be an optimal assignment, that is, $\alpha_{\mathrm{opt}}\models\Gamma$, $\mathrm{cost}(\alpha_{\mathrm{opt}})=k$, and for every total assignment $\beta$ that satisfies $\Gamma$, $\mathrm{cost}(\beta)\geq k$. Without loss of generality we can assume $\alpha_{\mathrm{opt}}$ sets variables $b_1,\ldots,b_k$ to 1 and the remaining $b_j$s to 0.

Given any assignment $\gamma$, let $\bar{\gamma}$ be the largest clause falsified by $\gamma$. Let $\Sigma$ be the set of all clauses $\bar{\gamma}$ where $\gamma$ is a total assignment that satisfies $\Gamma$ and is different from $\alpha_{\mathrm{opt}}$. We want to derive $\Sigma$ from $\Gamma$, essentially forbidding any satisfying assignment except for $\alpha_{\mathrm{opt}}$.

We can add all clauses in $\Sigma$ one by one by the cost-SPR rule. Indeed, for any clause $\bar{\gamma}\in\Sigma$ and any $\Sigma'\subseteq\Sigma$, the clause $\bar{\gamma}$ is cost-SPR w.r.t. $\Gamma\cup\Sigma'$, with $\alpha_{\mathrm{opt}}$ as the witnessing assignment. The redundancy condition

$$(\Gamma\cup\Sigma')\wedge\gamma\vdash_1(\Gamma\cup\Sigma'\cup\{\bar{\gamma}\})\restriction_{\alpha_{\mathrm{opt}}}$$

holds because $\alpha_{\mathrm{opt}}\models\Gamma\cup\Sigma'\cup\{\bar{\gamma}\}$, so the RHS is just true. The cost condition holds by optimality of $\alpha_{\mathrm{opt}}$. In the end, the only assignment that satisfies $\Gamma\cup\Sigma$ is $\alpha_{\mathrm{opt}}$. By the completeness of resolution we can prove all its literals, in particular the literals $b_i$ for $1\leq i\leq k$ and the literals $\bar{b}_i$ for $i>k$.

5 Incompleteness and Width Lower bounds

Theorem 4.3 shows the completeness of cost-SPR, and hence of cost-PR and cost-SR. It is not a coincidence that the proof does not apply to cost-LPR: indeed, the latter is not complete. We will see that, for some redundant clause $C$ derived according to Definition 3.2, the number of values that the witnessing substitution $\sigma$ flips with respect to a total assignment $\tau\supseteq\bar{C}$ may have to be large. In a cost-LPR proof this number is always at most one. For a redundant clause $C$ derived in some subsystem of cost-SR via a witness substitution $\sigma$ we define

$$\mathrm{flip}(C,\sigma)=\max_{\tau\supseteq\bar{C}}\mathrm{HammingDistance}(\tau,\tau\circ\sigma),$$

where HammingDistance is the number of different bits between two total assignments. The following result shows sufficient conditions for flip(C,σ) to be large.

Theorem 5.1.

Let $\Gamma$ be a MaxSAT instance encoded with blocking variables, with $\mathrm{cost}(\Gamma)=k$, and let $A$ be the set of optimal total assignments for $\Gamma$, i.e., $\alpha\in A$ when $\alpha\models\Gamma$ and $\mathrm{cost}(\alpha)=k$. If $A$ is such that

  1. all pairs of assignments in $A$ have Hamming distance at least $d$, and

  2. for every blocking variable $b$ there are $\alpha,\beta\in A$ s.t. $\alpha(b)=0$ and $\beta(b)=1$,

then to derive any blocking literal $b$, cost-SR must derive a redundant clause $C$ with $\mathrm{flip}(C,\sigma)\geq d$, where $\sigma$ is the witnessing substitution for $C$.

Proof.

Consider a cost-SR derivation from $\Gamma$ as a sequence $\Gamma_0,\Gamma_1,\ldots,\Gamma_s$ where each $\Gamma_{i+1}:=\Gamma_i\cup\{C\}$ with $C$ either derived by resolution from clauses in $\Gamma_i$, or cost-SR w.r.t. $\Gamma_i$. For $0\leq j\leq s$, let $\mu(j)$ be the number of optimal assignments for $\Gamma_j$.

At the beginning $\mu(0)=|A|$ by construction. If at some point $\Gamma_j$ contains some unit clause $b_i$, then the value $\mu(j)$ must be strictly smaller than $|A|$ because $A$ contains at least one assignment with $b_i\mapsto 0$ (by assumption 2).

Let $j$ be the first step where $\mu(j)$ drops below $|A|$. The clause $C$ introduced at that moment must be cost-SR w.r.t. $\Gamma_{j-1}$, because resolution steps do not change the set of optimal assignments. Let then $\sigma$ be the witnessing substitution used to derive $C$. We have

$$\Gamma_{j-1}\wedge\bar{C}\vdash_1(\Gamma_{j-1}\cup\{C\})\restriction_\sigma.\tag{4}$$

Since $\mu(j)$ dropped below $|A|$, the clause $C$ must be incompatible with some $\tau\in A$, that is, $\tau\supseteq\bar{C}$. Therefore, by cost preservation,

$$\mathrm{cost}(\tau\circ\sigma)\leq\mathrm{cost}(\tau)=k.\tag{5}$$

By Observation 2.1, eq. (4) implies that

$$\Gamma_{j-1}\restriction_\tau\vdash_1(\Gamma_{j-1}\cup\{C\})\restriction_{\tau\circ\sigma}.$$

Since $j$ was the first moment when $\mu(j)<|A|$, we have that $\tau\models\Gamma_{j-1}$ and therefore $\tau\circ\sigma\models\Gamma_{j-1}\cup\{C\}$. In particular, $\tau\circ\sigma\models\Gamma$. By eq. (5) it must then be that $\tau\circ\sigma\in A$. But then, by assumption 1, $\tau$ and $\tau\circ\sigma$ have Hamming distance at least $d$, which implies $\mathrm{flip}(C,\sigma)\geq d$.

We see an example application of this result to cost-LPR, where the number of values allowed to be flipped by the rule is at most 1.

The formula $F=\{x\vee y\vee b_1,\ \bar{x}\vee b_2,\ \bar{y}\vee b_3\}$ has cost 1 and its optimal assignments to the variables $x,y,b_1,b_2,b_3$ are $\{00100, 10010, 01001\}$. These assignments fulfil the premises of Theorem 5.1, with Hamming distance $>1$. Therefore cost-LPR cannot prove the cost of $F$ to be 1, and hence cost-LPR is incomplete.
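The claims about $F$ can be verified by exhaustive search (a throwaway check of ours, not from the paper; assignments are tuples in the variable order $x,y,b_1,b_2,b_3$):

```python
from itertools import product, combinations

# F = {x∨y∨b1, x̄∨b2, ȳ∨b3}; clauses as lists of (variable index, required value)
clauses = [[(0, 1), (1, 1), (2, 1)], [(0, 0), (3, 1)], [(1, 0), (4, 1)]]
blocking = [2, 3, 4]                      # tuple positions of b1, b2, b3

sat = [a for a in product([0, 1], repeat=5)
       if all(any(a[v] == val for v, val in c) for c in clauses)]
opt = min(sum(a[b] for b in blocking) for a in sat)
optimal = sorted(a for a in sat if sum(a[b] for b in blocking) == opt)
min_dist = min(sum(u != w for u, w in zip(a, b))
               for a, b in combinations(optimal, 2))
```

The search confirms `opt == 1`, exactly the three optimal assignments 00100, 10010, 01001, and a minimum pairwise Hamming distance of 3.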

Corollary 5.2.

Proof systems cost-LPR, cost-BC are incomplete.

Corollary 5.3.

There is a formula family $F_n$ with $O(n)$ variables, $O(n)$ clauses and $\mathrm{cost}(F_n)=\Omega(n)$ where, in order to prove $\mathrm{cost}(F_n)\geq 1$, any cost-SR proof derives a redundant clause $C$ with $\mathrm{flip}(C,\sigma)=\Omega(n)$, where $\sigma$ is the witnessing substitution for $C$.

Proof.

We define $F_n$ on variables $x_0,x_1,\ldots,x_n$ and $y_0,y_1,\ldots,y_n$. The formula contains hard clauses encoding the constraint $x_0\neq y_0$, and the constraints $x_0=x_i$, $y_0=y_i$ for $i\in[n]$. Furthermore $F_n$ has the soft clauses, encoded as hard clauses with blocking variables, $\neg x_i\vee b_i$ and $\neg y_i\vee b_{i+n}$ for $i\in[n]$.

To satisfy the formula an assignment must either set all x’s to true and y’s to false, or vice versa. Both such assignments set to true n of the blocking variables, and no blocking variable is fixed to a constant value. Therefore the claim follows from Theorem 5.1.

Corollary 5.4.

Any cost-SPR proof for a formula satisfying the hypotheses of Theorem 5.1 requires some redundant clause of width at least $d$.

Proof.

Any redundant clause $C$ derived in cost-SPR must have $|C|\geq\mathrm{flip}(C,\sigma)$ and, by Theorem 5.1, $\mathrm{flip}(C,\sigma)\geq d$. For instance, the formula $F_n$ from Corollary 5.3 requires cost-SPR refutations of width $\Omega(n)$.

6 Short proofs using redundancy rules

We show applications demonstrating the power of the redundancy rules on notable families of CNF formulas. In Section 6.1 we consider minimally unsatisfiable formulas, while in Section 6.2 we consider the weak Pigeonhole Principle.

 Remark 6.1.

Due to Theorem 4.2 and Theorem 4.3, we refer to a cost-SPR (resp. cost-PR, cost-SR) derivation from Γ of bi1,…,bik, for some k distinct blocking literals, and of all the b̄j for j∉{i1,…,ik}, as a proof of cost(Γ)=k in cost-SPR (resp. cost-PR, cost-SR).

6.1 Short proofs of minimally unsatisfiable formulas

Recall the definition of PR from [18] (see also [9, Definition 1.16]). A PR calculus refutation of a CNF formula Γ is a sequence of clauses D1,…,Dt where Dt=□, and each Di+1 is either a clause in Γ, or derived by resolution, or is PR w.r.t. Γi=Γ∪{D1,…,Di}, that is, Di+1 satisfies

Γi|D̄i+1 ⊢₁ (Γi∪{Di+1})|σ,

that is, Item 1 of Definition 3.2 for a σ which is a partial assignment. A PR refutation is a PR derivation of □. The size of a refutation is the number of clauses in it.

An unsatisfiable set Γ of clauses is minimally unsatisfiable if no proper subset of Γ is unsatisfiable.

Theorem 6.2.

If a minimally unsatisfiable CNF formula {C1,…,Cm} has a PR refutation of size s, then there is a cost-PR proof of cost({C1∨b1,…,Cm∨bm})=1 of at most 𝒪(s+m) many clauses.

Proof.

Let F={C1,…,Cm} and let Γ={C1∨b1,…,Cm∨bm} be the corresponding MaxSAT instance. Let π=(D1,…,Ds) be a PR refutation of F, so Ds=□. First we show that

π∨B=(D1∨B,…,Ds∨B),

with B=⋁i∈[m] bi, is a valid cost-PR derivation of B from Γ. In particular, assuming we have already derived the first i steps of π∨B, we show how to derive Di+1∨B.

When Di+1∈F, the clause Di+1∨B is a weakening of some clause in Γ. If Di+1 was derived by a resolution rule from some premises in π, then Di+1∨B can be derived in the same way from the corresponding premises in π∨B. The remaining case is when Di+1 is PR w.r.t. Fi=F∪{D1,…,Di}. Let α be the assignment that witnesses it. This assignment only maps variables from the original formula F, so we extend it to α′=α∪{b1↦0,…,bm↦0}, and then use α′ to witness that indeed Di+1∨B is cost-PR w.r.t. Γi=Γ∪{D1∨B,…,Di∨B}. For the cost condition in Definition 3.2, just observe that any extension of α′ has cost 0. For the redundancy condition, observe that, by construction, Γi|D̄i+1∪B̄ = Fi|D̄i+1, (Fi∪{Di+1})|α = (Γi∪{Di+1∨B})|α′, and Fi|D̄i+1 ⊢₁ (Fi∪{Di+1})|α.

The last clause of π∨B is B. Let αopt be an optimal assignment of Γ. Since F is minimally unsatisfiable, cost(αopt)=1. W.l.o.g. assume αopt sets bm=1 and bi=0 for all i<m.

Now, for each i<m, the clause Ei=b̄i∨bm is cost-PR w.r.t. π∨B∪{Ej : j<i}, using αopt itself as the witnessing assignment: redundancy holds since αopt satisfies every clause in π∨B and all clauses Ej. The cost condition follows since cost(τ)≥1 for any τ⊇Ēi and cost(τ∘αopt)=cost(αopt)=1.

Finally, we use O(m) steps to derive bm from B and E1,…,Em−1, and to derive in the cost-PR calculus all the units b̄1,…,b̄m−1 via Lemma 4.1.

Theorem 6.2 shows that the propositional refutations of the minimally unsatisfiable formulas in [9] translate immediately to MaxSAT certificates. In particular, as a corollary of Theorem 6.2, cost-PR proves in polynomial size that

  • the Pigeonhole Principle with n+1 pigeons and n holes [9, Theorem 4.3] and [19, Section 5],

  • the Bit-Pigeonhole Principle [9, Theorem 4.4],

  • the Parity Principle [9, Theorem 4.6],

  • the Tseitin Principle on a connected graph [9, Theorem 4.10],

all have cost 1, since they are all minimally unsatisfiable. In MaxSAT resolution this would require exponentially long derivations.

6.2 Short proofs of the minimum cost of 𝗣𝗛𝗣𝒏𝒎

Let m>n≥1. The pigeonhole principle from m pigeons to n holes, with blocking variables, has the following formulation, which we call 𝖻𝖯𝖧𝖯nm: the totality clauses ⋁j∈[n] pi,j ∨ bi for i∈[m], and the injectivity clauses p̄i,j∨p̄k,j∨bi,k,j for 1≤i<k≤m and j∈[n]. We use bk,i,j as an alias of the variable bi,k,j, given that i<k.
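The two clause families can be made explicit with a small generator sketch (a string-based encoding of ours, for illustration only):

```python
from itertools import combinations

def bphp(m, n):
    # clauses of bPHP_n^m as frozensets of string literals ("~" = negation);
    # the naming scheme is ours, not the paper's
    p = lambda i, j: f"p{i},{j}"
    totality = [frozenset([p(i, j) for j in range(1, n + 1)] + [f"b{i}"])
                for i in range(1, m + 1)]
    injectivity = [frozenset([f"~{p(i, j)}", f"~{p(k, j)}", f"b{i},{k},{j}"])
                   for i, k in combinations(range(1, m + 1), 2)
                   for j in range(1, n + 1)]
    return totality + injectivity

# m totality clauses plus n * m*(m-1)/2 injectivity clauses:
clauses = bphp(4, 2)  # 4 + 2*6 = 16 clauses
```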

Theorem 6.3.

cost-SR proves cost(𝖻𝖯𝖧𝖯nm)=m−n in polynomial size.

This is the main result of the section. Before proving it we show two useful lemmas. The first lemma is used to “clean up” the set of clauses during a derivation. For each new step in a cost-SR calculus derivation the redundancy condition must be checked against an ever increasing set of clauses. It turns out that, under some technical conditions, some already derived clauses can be completely ignored for the rest of the derivation. This makes up for the lack of a deletion rule, which in the context of SAT seems to give more power to the systems [9].

Lemma 6.4.

Let Γ and Σ be two sets of clauses. Any cost-SR derivation D1,…,Dt from Γ is also a valid derivation from Γ∪Σ if either of the following two cases applies:

  1. Variables in Σ do not occur in Γ∪{D1,…,Dt}.

  2. For every clause C∈Σ there is a clause C′∈Γ such that C′⊆C.

Proof.

The cost condition does not depend on the set of clauses, therefore we only need to check the validity of the redundancy condition. In the first case, the redundancy condition holds because the clauses of Σ are unaffected by the substitutions involved.

For the second case, consider the derivation of a clause Di witnessed by σi. The clauses in Σ|D̄i and Σ|σi are subsumed by clauses in Γ|D̄i and Γ|σi, respectively. Hence

(Γ∪{D1,…,Di−1})|D̄i ⊢₁ (Γ∪Σ∪{D1,…,Di})|σi,

which implies the validity of the redundancy condition.
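Condition 2 of the lemma is a plain subsumption check. A minimal sketch (clauses as frozensets of integer literals; the example clause sets are ours):

```python
def subsumed(c, d):
    # clause c subsumes clause d exactly when c is a subset of d
    return c <= d

def ignorable(gamma, sigma):
    # condition 2 of Lemma 6.4: every clause of Sigma is subsumed
    # by some clause of Gamma
    return all(any(subsumed(c, d) for c in gamma) for d in sigma)

gamma = [frozenset({-5}), frozenset({1, 2})]
sigma = [frozenset({-5, -3}), frozenset({1, 2, 7})]  # both subsumed by gamma
```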

The second lemma gives a general sufficient condition for a clause to be cost-SR.

Lemma 6.5.

Let C be a clause and Γ a set of clauses. If there exists a permutation π such that

  1. π maps the set of blocking variables to itself,

  2. the substitution C̄∘π satisfies C, and Γ|C̄ ⊇ Γ|C̄∘π,

then C is cost-SR w.r.t. Γ. Notice that the second condition in item (2) is automatically satisfied if π is a symmetry of Γ, i.e. Γ=Γπ.

Proof.

The cost condition follows from Item 1. The redundancy condition is immediate from Item 2, using as σ the substitution C̄∘π.
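The symmetry case of item (2) is easy to check mechanically. A sketch (integer literals; the example clauses and the encoding are ours):

```python
def apply_perm(clause, perm):
    # perm maps variables (positive ints) to variables; the literal's
    # sign is preserved, unmapped variables are fixed
    return frozenset((1 if l > 0 else -1) * perm.get(abs(l), abs(l))
                     for l in clause)

def is_symmetry(gamma, perm):
    # Gamma * pi == Gamma suffices for condition (2) of Lemma 6.5
    return {apply_perm(c, perm) for c in gamma} == set(gamma)

# injectivity clauses for 3 pigeons and one hole: variables p1=1, p2=2, p3=3
inj = [frozenset({-1, -2}), frozenset({-1, -3}), frozenset({-2, -3})]
swap13 = {1: 3, 3: 1}  # a pigeon-swap permutation, as in Step 0 below
```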

Proof of Theorem 6.3.

The proof is by induction. The goal is to reduce the formula to m−1 pigeons and n−1 holes. First we do some preprocessing: from 𝖻𝖯𝖧𝖯nm we derive a slightly more structured formula 𝖥nm. Then we show how to derive 𝖥n−1m−1 in a polynomial number of steps. The result follows because after n such derivations we obtain the formula 𝖥0m−n, which contains the unit clauses b1,…,bm−n. Moreover, we also derive b̄m−n+1,…,b̄m along the way.

We derive 𝖥n−1m−1 from 𝖥nm using the rules of the cost-SR calculus. We divide the argument into several steps, but first we show how to derive 𝖥nm from 𝖻𝖯𝖧𝖯nm.

Preprocessing 1.

“Make the bi full-fledged extension variables”. (This is true in general: if a MaxSAT instance contains a clause C∨b, then by cost-LPR it is possible to make the blocking variable b a full-fledged extension variable, i.e. to also enforce b→C̄.) Turn all the variables bi into full-fledged extension variables that satisfy bi↔¬(pi,1∨⋯∨pi,n), by adding the clauses

𝖤𝗑𝗍={p̄i,j∨b̄i : i∈[m], j∈[n]}

one by one in cost-LPR.

We need to derive the clause Dj=p̄1,j∨b̄1 for every j∈[n]. Assuming we already derived D1,…,Dj−1, we derive Dj as a cost-LPR clause w.r.t. 𝖻𝖯𝖧𝖯nm∪{D1,…,Dj−1}. The witnessing assignment is σj:={p1,j=1, b1=0}. Since D̄j={p1,j=1, b1=1}, the cost condition is satisfied. The redundancy condition follows from

𝖻𝖯𝖧𝖯nm|D̄j ⊇ (𝖻𝖯𝖧𝖯nm∪{D1,…,Dj})|σj.

Indeed, on clauses of 𝖻𝖯𝖧𝖯nm that do not contain the variable b1, the assignments D̄j and σj behave identically, while all the clauses containing b1 are satisfied by σj. Repeat the previous argument to get all the clauses p̄i,j∨b̄i. The current database of clauses is 𝖻𝖯𝖧𝖯nm∪𝖤𝗑𝗍.
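The restriction operation used in this check can be sketched concretely on a toy 𝖻𝖯𝖧𝖯12 instance (the integer variable numbering is ours): clauses not mentioning b1 restrict identically under D̄1 and σ1, and clauses mentioning b1 are satisfied by σ1.

```python
def restrict(clauses, alpha):
    # restrict a clause set by a partial assignment alpha (var -> 0/1):
    # satisfied clauses disappear, falsified literals are removed
    out = []
    for c in clauses:
        if any(abs(l) in alpha and (alpha[abs(l)] == 1) == (l > 0) for l in c):
            continue
        out.append(frozenset(l for l in c if abs(l) not in alpha))
    return out

# toy bPHP_1^2: p11=1, p21=2, b1=3, b2=4, b_{1,2,1}=5
gamma = [frozenset({1, 3}), frozenset({2, 4}), frozenset({-1, -2, 5})]
sigma = {1: 1, 3: 0}   # sigma_1 = {p11=1, b1=0}
dbar = {1: 1, 3: 1}    # the assignment falsifying D1 = ~p11 v ~b1
# the clauses surviving sigma are among those surviving dbar
```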

Preprocessing 2.

“Enforce injectivity”.  Optimal assignments for 𝖻𝖯𝖧𝖯nm can have unassigned pigeons or collisions between pigeons. It is more convenient to avoid collisions and focus on assignments that are partial matchings. A moment’s thought suffices to realize that this restriction does not change the optimal cost, but it simplifies the solution space. We enforce that collisions never occur by deriving all the unit clauses b̄i,k,j by cost-PR.

These clauses can be derived in any order: to show that b̄i,k,j is cost-SR w.r.t. Γ0 and the previously derived units b̄i,k,j, we pick one of the two pigeons involved (say k) and use σ={bi,k,j=0, bk=1, pk,1=⋯=pk,n=0} as the witnessing assignment. The cost is not increased, and to check the redundancy condition observe that σ satisfies all the clauses it touches, so the right side of the redundancy condition is a subset of 𝖻𝖯𝖧𝖯nm∪𝖤𝗑𝗍 with no occurrences of bi,k,j, while the left side has the same set of clauses, restricted with bi,k,j=0.

Now that we have all the clauses b̄i,k,j, we resolve them with the corresponding clauses p̄i,j∨p̄k,j∨bi,k,j to get the set of clauses

𝖨𝗇𝗃={p̄i,j∨p̄k,j : 1≤i<k≤m and j∈[n]},

for all holes j and pairs of pigeons i and k.

We do not need the variables bi,k,j anymore. By one application of Lemma 6.4, from now on we can ignore all the clauses p̄i,j∨p̄k,j∨bi,k,j. By another application, we can also ignore the clauses b̄i,k,j. We will do induction on the current database of clauses.

For clarity we list all its clauses again.

Formula 𝖥nm:
  • ⋁j∈[n] pi,j ∨ bi for i∈[m] (totality 1),
  • p̄i,j∨b̄i for i∈[m] and j∈[n] (totality 2),
  • p̄i,j∨p̄k,j for 1≤i<k≤m and j∈[n] (injectivity).

The core idea of the induction is that if a pigeon flies to a hole, we can assume without loss of generality that it is pigeon m that flies into hole n.

Step 0.

“If some pigeon i flies, we can assume it is pigeon m who flies”.  We want to derive, in this order, the set of clauses

Δ1={b̄m∨b1, b̄m∨b2, …, b̄m∨bm−1}

from 𝖥nm, to claim that if some pigeon is mapped, then pigeon m is mapped too. For each Ci=b̄m∨bi we apply Lemma 6.5 using as the witnessing permutation πi the permutation that swaps pigeons m and i.

Namely, πi(pm,j)=pi,j, πi(pi,j)=pm,j, πi(bm)=bi, πi(bi)=bm, and πi is the identity on all other variables; therefore πi satisfies the first requirement of the lemma. Likewise C̄i∘πi satisfies Ci, and we need to check that

(𝖥nm∪{C1,…,Ci−1})|C̄i ⊇ (𝖥nm∪{C1,…,Ci−1})|C̄i∘πi.

By symmetry 𝖥nm|C̄i = 𝖥nm|C̄i∘πi, and for 1≤i′<i, Ci′|C̄i∘πi=1, hence the inclusion holds. The current database of clauses is Γ1=𝖥nm∪Δ1.

Step 1.

“If pigeon m flies to some hole, we can assume it flies to hole n”.  Using cost-SR inferences, we derive from Γ1, in this order, the clauses

Δ2={p̄m,1∨pm,n, p̄m,2∨pm,n, …, p̄m,n−1∨pm,n}

expressing that if pigeon m flies to some hole, this hole is the last one.

For each Cj=p̄m,j∨pm,n we apply Lemma 6.5 with the witnessing permutation πj swapping holes n and j.

Namely, πj(pi,n)=pi,j and πj(pi,j)=pi,n for every i, and πj is the identity on all other variables. By construction πj satisfies the first requirement of the lemma; likewise C̄j∘πj satisfies Cj, and, again, we need to check

(Γ1∪{C1,…,Cj−1})|C̄j ⊇ (Γ1∪{C1,…,Cj−1})|C̄j∘πj.

By symmetry Γ1|C̄j = Γ1|C̄j∘πj, and for 1≤j′<j, Cj′|C̄j∘πj=1, hence the inclusion holds. The current database of clauses is Γ2=Γ1∪Δ2=𝖥nm∪Δ1∪Δ2.

Step 2.

“Obtain p̄k,n for every 1≤k<m via resolution”.  Resolve the clause pm,1∨pm,2∨⋯∨pm,n∨bm (totality 1) with the injectivity clause p̄m,n∨p̄k,n, and the resulting clause with all the clauses p̄m,j∨pm,n from Step 1, to get bm∨pm,n∨p̄k,n. Then resolve bm∨pm,n∨p̄k,n again with the injectivity clause p̄m,n∨p̄k,n, then the result with the clause b̄m∨bk (from Step 0), and finally this latter result with the clause b̄k∨p̄k,n (totality 2). The final result is p̄k,n.
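This resolution chain can be replayed mechanically. A sketch for m=3, n=2, k=1 (the integer encoding of the variables is ours):

```python
def resolve(c1, c2, v):
    # resolve c1 (containing v) with c2 (containing -v) on variable v
    assert v in c1 and -v in c2
    return (c1 - {v}) | (c2 - {-v})

# variable p_{i,j} is encoded as 10*i + j, and b_i as 100 + i
tot1_m = frozenset({31, 32, 103})   # p3,1 v p3,2 v b3   (totality 1)
inj    = frozenset({-32, -12})      # ~p3,2 v ~p1,2      (injectivity)
delta2 = frozenset({-31, 32})       # ~p3,1 v p3,2       (from Step 1)
delta1 = frozenset({-103, 101})     # ~b3 v b1           (from Step 0)
tot2_k = frozenset({-101, -12})     # ~b1 v ~p1,2        (totality 2)

c = resolve(tot1_m, inj, 32)        # p3,1 v b3 v ~p1,2
c = resolve(c, delta2, 31)          # b3 v p3,2 v ~p1,2
c = resolve(c, inj, 32)             # b3 v ~p1,2
c = resolve(c, delta1, 103)         # b1 v ~p1,2
c = resolve(c, tot2_k, 101)         # ~p1,2, i.e. the unit clause ~p_{k,n}
```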

The clauses p̄k,n subsume the clauses in 𝖨𝗇𝗃 of the form p̄m,n∨p̄k,n and all the intermediate clauses from the previous resolution steps. Therefore we can use Lemma 6.4 to ignore the subsumed clauses.

The current database of clauses Γ3 is equal to

𝖥nm∪Δ1∪Δ2∪{p̄k,n : 1≤k<m}∖{p̄m,n∨p̄k,n : 1≤k<m}.
Step 3.

“Assign pigeon m to hole n: derive the unit clauses pm,n and b̄m”.  The goal is to enforce pigeon m to be mapped to hole n, by deriving the clause pm,n using the cost-PR rule. Then we get b̄m immediately by resolving pm,n with p̄m,n∨b̄m (totality 2).

The unit clause pm,n is cost-PR w.r.t. Γ3, using the partial assignment σ={pm,n=1, bm=0} as witness.

Clearly σ satisfies the cost condition. To see that the redundancy condition holds as well, we need to show that Γ3|C̄ ⊢₁ D|σ for all D in Γ3 that contain p̄m,n; but the only such clause remaining in Γ3 is p̄m,n∨b̄m, which is satisfied by σ. The current database of clauses is Γ4=Γ3∪{pm,n, b̄m}.

Step 4.

“Derive p̄m,1,…,p̄m,n−1 by cost-SR”.  We can derive them in any order, using as the witnessing substitution of the cost-SR rule the assignment σ setting pm,n=1, pm,1=⋯=pm,n−1=0, and bm=0.

The cost condition is immediate, and the redundancy condition follows from the fact that Γ4|σ ⊆ Γ4.

Step 5.

“Reduction to m−1 pigeons and n−1 holes”.  First we derive by unit propagation all the totality clauses of 𝖥n−1m−1. That is, we remove hole n from the totality axioms of the pigeons 1,…,m−1 in the current database. Now the current database consists of 𝖥n−1m−1, the unit clauses b̄m, pm,n, p̄k,n for k≠m and p̄m,j for j≠n, and clauses that are subsumed by one of these unit clauses. Therefore, by Lemma 6.4, we can ignore all the unit clauses and all the clauses subsumed by them. That is, we can carry on the derivation using only 𝖥n−1m−1.

Thus Steps 0–5 are repeated n times, until we derive 𝖥0m−n.

The unit clauses derived in the whole process include

  • b1,…,bm−n (the totality clauses of 𝖥0m−n),

  • b̄m−n+1,…,b̄m (derived at each step of the induction),

  • b̄i,k,j for all i<k and j (derived in the preprocessing).

Therefore cost(𝖻𝖯𝖧𝖯nm)=m−n.
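For small parameters, the equality cost(𝖻𝖯𝖧𝖯nm)=m−n can be confirmed by brute force. A sketch with an integer encoding (our own, mirroring the string-based one above):

```python
from itertools import combinations, product

def bphp_int(m, n):
    # integer-literal encoding (ours): p_{i,j} -> (i-1)*n + j,
    # b_i -> m*n + i, each b_{i,k,j} -> a fresh variable after those
    p = lambda i, j: (i - 1) * n + j
    b = lambda i: m * n + i
    nxt, nblock, clauses = m * n + m, m, []
    for i in range(1, m + 1):
        clauses.append(frozenset([p(i, j) for j in range(1, n + 1)] + [b(i)]))
    for i, k in combinations(range(1, m + 1), 2):
        for j in range(1, n + 1):
            nxt += 1
            nblock += 1
            clauses.append(frozenset({-p(i, j), -p(k, j), nxt}))
    return clauses, nxt, nblock       # blocking vars are the last nblock

def brute_cost(clauses, nvars, nblock):
    # fewest blocking variables set to 1 over all satisfying assignments
    best = nvars
    for a in product((0, 1), repeat=nvars):
        if all(any((a[abs(l) - 1] == 1) == (l > 0) for l in c) for c in clauses):
            best = min(best, sum(a[nvars - nblock:]))
    return best
```

For instance `brute_cost(*bphp_int(3, 1))` evaluates to 2 = m−n: with one hole, at most one pigeon can be mapped, and no collision variable needs to be set.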

7 Conclusions and open problems

We proposed a way to extend redundancy rules, originally introduced for SAT, into polynomially verifiable rules for MaxSAT. We defined sound and complete calculi based on these rules; we showed the strength of some of the calculi by giving short derivations of notable principles, and we showed the incompleteness of the weaker ones and width lower bounds for the stronger ones. We conclude this article with a list of open problems:

  1. The cost constraint for the redundancy rules is very strict, for example compared with the rule CPR in [21]. Indeed, cost-PR enforces the check on the cost even on assignments falsifying the hard clauses of the formula. Is it possible to relax cost-PR without giving up on efficient verification, as in [21]?

  2. Does cost-SR simulate MaxSAT Resolution? That is, if we have a MaxSAT instance Γ with blocking variables and MaxSAT Resolution proves in size s that cost(Γ)=k, is there a proof of cost(Γ)=k in cost-SR of size 𝗉𝗈𝗅𝗒(s)?

  3. We proved a width lower bound for cost-SPR, and an analogue of a width lower bound for cost-SR, on formulas whose optimal assignments are far from each other in Hamming distance. We reiterate the open problem of proving size lower bounds for cost-SPR and stronger systems.

References

  • [1] Carlos Ansótegui, Maria Luisa Bonet, and Jordi Levy. SAT-based MaxSAT algorithms. Artif. Intell., 196:77–105, 2013. doi:10.1016/J.ARTINT.2013.01.002.
  • [2] Fahiem Bacchus, Matti Järvisalo, and Ruben Martins. Maximum satisfiability. In Armin Biere, Marijn Heule, Hans van Maaren, and Toby Walsh, editors, Handbook of Satisfiability - Second Edition, volume 336 of Frontiers in Artificial Intelligence and Applications, pages 929–991. IOS Press, 2021. doi:10.3233/FAIA201008.
  • [3] Jeremias Berg, Bart Bogaerts, Jakob Nordström, Andy Oertel, and Dieter Vandesande. Certified core-guided MaxSAT solving. In Proceedings of the 29th International Conference on Automated Deduction (CADE), volume 14132, pages 1–22, 2023. doi:10.1007/978-3-031-38499-8_1.
  • [4] Jeremias Berg and Matti Järvisalo. Unifying reasoning and core-guided search for maximum satisfiability. In Proceedings of the 16th European Conference on Logics in Artificial Intelligence (JELIA), volume 11468, pages 287–303, 2019. doi:10.1007/978-3-030-19570-0_19.
  • [5] Bart Bogaerts, Stephan Gocht, Ciaran McCreesh, and Jakob Nordström. Certified dominance and symmetry breaking for combinatorial optimisation. Journal of Artificial Intelligence Research, 77:1539–1589, 2023. doi:10.1613/JAIR.1.14296.
  • [6] Ilario Bonacina, Maria Luisa Bonet, and Massimo Lauria. MaxSAT Resolution with Inclusion Redundancy. In Proceedings of the 27th International Conference on Theory and Applications of Satisfiability Testing (SAT), volume 305, pages 7:1–7:15, 2024. doi:10.4230/LIPIcs.SAT.2024.7.
  • [7] Maria Luisa Bonet, Sam Buss, Alexey Ignatiev, João Marques-Silva, and António Morgado. MaxSAT resolution with the dual rail encoding. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence, (AAAI), the 30th innovative Applications of Artificial Intelligence (IAAI), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI), pages 6565–6572, 2018. doi:10.1609/AAAI.V32I1.12204.
  • [8] Maria Luisa Bonet, Jordi Levy, and Felip Manyà. Resolution for Max-SAT. Artificial Intelligence, 171(8-9):606–618, 2007. doi:10.1016/J.ARTINT.2007.03.001.
  • [9] Sam Buss and Neil Thapen. DRAT and propagation redundancy proofs without new variables. Logical Methods in Computer Science, 17(2), 2021. URL: https://lmcs.episciences.org/7400.
  • [10] Stephen A. Cook and Robert A. Reckhow. The relative efficiency of propositional proof systems. Journal of Symbolic Logic, 44(1):36–50, 1979. doi:10.2307/2273702.
  • [11] Jessica Davies and Fahiem Bacchus. Solving MAXSAT by solving a sequence of simpler SAT instances. In Proceedings of the 17th International Conference on Principles and Practice of Constraint Programming (CP), pages 225–239, 2011. doi:10.1007/978-3-642-23786-7_19.
  • [12] Zhaohui Fu and Sharad Malik. On solving the partial MAX-SAT problem. In Proceedings of the 9th International Conference on Theory and Applications of Satisfiability Testing (SAT), pages 252–265, 2006. doi:10.1007/11814948_25.
  • [13] Michel X. Goemans and David P. Williamson. New 34-approximation algorithms for the maximum satisfiability problem. SIAM Journal on Discrete Mathematics, 7(4):656–666, 1994. doi:10.1137/S0895480192243516.
  • [14] Marijn Heule, Warren A. Hunt Jr., and Nathan Wetzler. Trimming while checking clausal proofs. In Proceedings of the conference on Formal Methods in Computer-Aided Design (FMCAD), pages 181–188, 2013. URL: https://ieeexplore.ieee.org/document/6679408/.
  • [15] Marijn Heule, Warren A. Hunt Jr., and Nathan Wetzler. Verifying refutations with extended resolution. In Proceedings of the 24th International Conference on Automated Deduction (CADE), volume 7898, pages 345–359, 2013. doi:10.1007/978-3-642-38574-2_24.
  • [16] Marijn Heule, Warren A. Hunt Jr., and Nathan Wetzler. Expressing symmetry breaking in DRAT proofs. In Proceedings of the 25th International Conference on Automated Deduction (CADE), volume 9195, pages 591–606, 2015. doi:10.1007/978-3-319-21401-6_40.
  • [17] Marijn J. H. Heule and Armin Biere. What a difference a variable makes. In Proceedings of the 24th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS), Held as Part of the European Joint Conferences on Theory and Practice of Software (ETAPS), volume 10806, pages 75–92, 2018. doi:10.1007/978-3-319-89963-3_5.
  • [18] Marijn J. H. Heule, Benjamin Kiesl, and Armin Biere. Short proofs without new variables. In Proceedings of the 26th International Conference on Automated Deduction (CADE), volume 10395, pages 130–147, 2017. doi:10.1007/978-3-319-63046-5_9.
  • [19] Marijn J. H. Heule, Benjamin Kiesl, and Armin Biere. Strong extension-free proof systems. Journal of Automated Reasoning, 64(3):533–554, 2019. Extended version of [18]. doi:10.1007/s10817-019-09516-0.
  • [20] Marijn J. H. Heule, Benjamin Kiesl, Martina Seidl, and Armin Biere. Pruning through satisfaction. In Proceedings of the Hardware and Software: Verification and Testing - 13th International Haifa Verification Conference (HVC), volume 10629, pages 179–194, 2017. doi:10.1007/978-3-319-70389-3_12.
  • [21] Hannes Ihalainen, Jeremias Berg, and Matti Järvisalo. Clause redundancy and preprocessing in maximum satisfiability. In Proceedings of the 11th International Joint Conference on Automated Reasoning (IJCAR), volume 13385, pages 75–94, 2022. doi:10.1007/978-3-031-10769-6_6.
  • [22] Matti Järvisalo, Marijn Heule, and Armin Biere. Inprocessing rules. In Proceedings of the 6th International Joint Conference on Automated Reasoning (IJCAR), volume 7364, pages 355–370, 2012. doi:10.1007/978-3-642-31365-3_28.
  • [23] Benjamin Kiesl, Adrián Rebola-Pardo, and Marijn J. H. Heule. Extended resolution simulates DRAT. In Proceedings of the 9th International Joint Conference on Automated Reasoning (IJCAR), Held as Part of the Federated Logic Conference, FloC 2018, volume 10900, pages 516–531, 2018. doi:10.1007/978-3-319-94205-6_34.
  • [24] Leszek Aleksander Kołodziejczyk and Neil Thapen. The Strength of the Dominance Rule. In Proceedings of the 27th International Conference on Theory and Applications of Satisfiability Testing (SAT), volume 305, pages 20:1–20:22, 2024. doi:10.4230/LIPIcs.SAT.2024.20.
  • [25] Oliver Kullmann. On a generalization of extended resolution. Discrete Applied Mathematics, 96-97:149–176, 1999. doi:10.1016/S0166-218X(99)00037-2.
  • [26] Javier Larrosa and Federico Heras. Resolution in Max-SAT and its relation to local consistency in weighted CSPs. In Proceedings of the 19th International Joint Conference on Artificial Intelligence (IJCAI), pages 193–198, 2005. URL: http://ijcai.org/Proceedings/05/Papers/0360.pdf.
  • [27] António Morgado, Carmine Dodaro, and João Marques-Silva. Core-guided MaxSAT with soft cardinality constraints. In Proceedings of the 20th International Conference on Principles and Practice of Constraint Programming (CP), pages 564–573, 2014. doi:10.1007/978-3-319-10428-7_41.
  • [28] António Morgado, Federico Heras, Mark H. Liffiton, Jordi Planes, and João Marques-Silva. Iterative and core-guided MaxSAT solving: A survey and assessment. Constraints An Int. J., 18(4):478–534, 2013. doi:10.1007/S10601-013-9146-2.
  • [29] Nina Narodytska and Fahiem Bacchus. Maximum satisfiability using core-guided MaxSAT resolution. In Proceedings of the 28th Conference on Artificial Intelligence (AAAI), pages 2717–2723, 2014. doi:10.1609/AAAI.V28I1.9124.
  • [30] Adrián Rebola-Pardo and Martin Suda. A theory of satisfiability-preserving proofs in SAT solving. In Proceedings of the 22nd International Conference on Logic for Programming, Artificial Intelligence and Reasoning (LPAR), volume 57, pages 583–603, 2018. doi:10.29007/TC7Q.
  • [31] Paul Saikko, Jeremias Berg, and Matti Järvisalo. LMHS: a SAT-IP hybrid MaxSAT solver. In Proceedings of the 19th International Conference on Theory and Applications of Satisfiability Testing (SAT), pages 539–546, 2016. doi:10.1007/978-3-319-40970-2_34.