Explaining Reasoning Results for Description Logic Ontologies
Abstract
The Web Ontology Language (OWL), grounded in description logics, enables reasoning systems to infer implicit knowledge in a transparent manner. However, the expressivity of description logics and the complexity of large ontologies often result in reasoning outcomes that are hard to understand without additional tool support. Explanations of these outcomes are essential for users to understand ontology content, communicate its structure and behavior effectively, and debug undesired or missing inferences. This chapter provides an overview of the central explanation techniques that have been developed for explaining reasoning with description logic ontologies. We consider both explanations for positive entailments (explaining why something can be deduced) and for negative entailments (explaining why something cannot be deduced). More specifically, we discuss justifications, proofs and interpolation as means to explain positive entailments, and abduction for explaining negative entailments, where we also take a closer look at practical algorithms as well as at practical and theoretical challenges.
Keywords and phrases: Explanations, Justifications, Proofs, Craig Interpolation, Contrastive Explanations
Category: Invited Paper
2012 ACM Subject Classification: Computing methodologies → Description logics
Editors: Alessandro Artale, Meghyn Bienvenu, Yazmín Ibáñez García, and Filip Murlak
Series and Publisher: Open Access Series in Informatics (OASIcs), Schloss Dagstuhl – Leibniz-Zentrum für Informatik
1 Introduction
In a world where subsymbolic methods from AI are the main focus of research, symbolic reasoning with knowledge representations still offers a level of transparency and control that cannot be matched by such methods. The reason is that in knowledge representation, we represent our knowledge explicitly in a format that is both machine-processable and human-understandable. Through symbolic reasoning, we can make new inferences on these knowledge bases in a transparent manner. This is particularly true for logic-based systems, which come with a clearly defined semantics and allow for deductive reasoning methods for which we can mathematically prove that they only make correct inferences. If a symbolic system comes to a conclusion that is not true, then, assuming that the reasoning engine is bug-free, this can only be due to an error in the knowledge base, which can then be identified and fixed.
Description logics are arguably among the most important logic-based knowledge representation formalisms. They are the logic-based underpinning of the Web Ontology Language standard OWL 2 [33], and are used to describe ontologies. An ontology describes a conceptualization of some domain of interest – it defines and puts constraints on the terms that are relevant in that domain for a particular application, considering unary relations (concepts) and binary relations (roles). Automated reasoning systems for description logics can be used to solve a variety of tasks with an ontology: in addition to determining whether an ontology is logically consistent, they can infer implicit relationships between the defined concepts (a task called classification), they can classify nodes in knowledge graphs, and they can be used to evaluate complex queries on databases, which then also take into account the implicit knowledge in the data that can be inferred through use of the ontology [14].
Research on description logics in the late 1990s and 2000s led to two developments: on the one hand, the discovery of lightweight description logics such as $\mathcal{EL}$ [13] and DL-Lite [10], which allow for tractable and practically efficient reasoning with ontologies and databases that have 100,000s of entries, and on the other hand, the development of very expressive description logics such as $\mathcal{SROIQ}$ [40], the description logic underlying the OWL 2 DL standard. $\mathcal{SROIQ}$ pushes the limit of expressivity without losing decidability of the logic, and is a logic for which reasoning becomes challenging: standard reasoning tasks for $\mathcal{SROIQ}$ ontologies are N2ExpTime-complete [41]. Nonetheless, we now have highly optimized reasoning systems that can reason over large ontologies as they occur in the real world [61]. This is achieved by dedicated, highly optimized reasoning systems, and especially the latest developments, the reasoners Konclude [70] and Sequoia [26], are based on very complex algorithms that are hard to understand by non-experts.
A side effect of these developments is that preserving the transparency of reasoning with ontologies becomes more challenging. Understanding why a particular inference was performed on an ontology with 10,000s of statements, potentially due to complex interactions between expressive operators, is not possible without appropriate tool support. Motivated by this, a multitude of explanation methods for description logics has been developed, and the topic has recently gained increased attention again. The ontology editor Protégé comes with an in-built explanation functionality based on justifications [37], and recent developments such as Evee [5] and Evonne [59] offer more advanced explanation methods such as proofs, abduction and counterexamples.
The aim of this tutorial is to give an overview of the most important explanation methods for description logics, taking a closer look at a selection of methods and problems. In particular, we will look at methods for explaining positive entailments, that is, for explaining why a particular statement is logically entailed by a description logic knowledge base, and at methods for explaining negative entailments, that is, for explaining why a particular statement is not logically entailed.
We start with an introduction to the description logics that are relevant to the tutorial. Methods for explaining positive entailments include justifications, proofs and Craig interpolation. Justifications have been the focus of an earlier tutorial of the summer school [32]; in Section 3, we therefore give a briefer discussion of central algorithms and challenges. In Section 4, we then have a look at proofs based on inference rules as a more detailed type of explanation, providing a formal definition and a discussion of how to compute proofs that optimize different optimality criteria. How to generate proofs without a dedicated set of inference rules is discussed in Section 6. One of these techniques is based on the notion of interpolation, which provides a different type of explanation in its own right, and which we discuss first, in Section 5. Methods for explaining negative entailments include counterinterpretations and abduction. There are not a lot of works on counterinterpretations, which we mention briefly in Section 7, which otherwise focuses on abduction. Abduction in description logics is a topic that would fill a tutorial on its own, which is why we give a more high-level overview of the different challenges and variants.
Most of the methods discussed here are available in tools for end-users: justifications are integrated by default in the ontology editor Protégé [60], proofs are supported by the Elk Protégé plugin [42] as well as by the web application Evonne [59], and the Evee plugin for Protégé supports the different methods for proofs and abduction that are discussed here in more detail [5].
2 Description Logics
We recall the description logics (DLs) that are relevant to this tutorial. For a full introduction, we refer to the DL textbook [14].
The central constructs of DLs are concept descriptions, simply called concepts in the following. Those concepts are constructed from two disjoint, countably infinite sets of concept names ($N_C$) and role names ($N_R$), which correspond to unary and binary predicates in first-order logic. In this chapter, we often call role names just roles. Concept and role names are used to construct the following concepts, where $A \in N_C$, $r \in N_R$, and $C$ and $D$ are concepts:
- concept name $A$, top $\top$ and bottom $\bot$,
- complement $\neg C$,
- conjunction $C \sqcap D$,
- disjunction $C \sqcup D$,
- existential restriction $\exists r.C$,
- universal restriction $\forall r.C$.
While there are DLs with many more concept constructors, the preceding operators are exactly those available in the DL $\mathcal{ALC}$, which is why they are called $\mathcal{ALC}$ concepts. While many results in this paper apply to more expressive DLs, $\mathcal{ALC}$ is the most expressive DL we will use in this paper when the concrete DL is relevant. By restricting the set of operators, we can define two more DLs that will be relevant: $\mathcal{EL}$, which only has the operators concept name, top, conjunction and existential restriction, and $\mathcal{EL}_\bot$, which in addition allows for bottom. Concepts are used to build concept inclusions (CIs) $C \sqsubseteq D$ and equivalence axioms $C \equiv D$. A finite set of CIs and equivalence axioms is called a TBox, and intuitively specifies a terminology by putting different concepts in relation.
To also talk about specific facts, we use a countably infinite set $N_I$ of individual names disjoint from $N_C$ and $N_R$. We call concept names, role names and individual names jointly names. An ABox is then a finite set of concept assertions $C(a)$ and role assertions $r(a, b)$, where $C$ is a concept, $r \in N_R$, and $a, b \in N_I$. The union of a TBox and an ABox is called a knowledge base (KB). (In the literature, sometimes knowledge bases are called ontologies, and sometimes only TBoxes are called ontologies. To avoid this confusion, we avoid the term ontology in most of the following paper.) Observe that TBoxes and ABoxes are special cases of KBs. CIs, equivalence axioms, concept assertions and role assertions are collectively called axioms.
Example 1.
As an illustrating example, we define a KB $\mathcal{K} = \mathcal{T} \cup \mathcal{A}$, a simplified KB about lectures at a university:

$\mathcal{T} = \{$
$\quad \mathsf{OffCampusLecture} \equiv \mathsf{Lecture} \sqcap \exists \mathsf{hasLocation}.\mathsf{OffCampusLocation}$ (1)
$\quad \mathsf{BotanicalGarden} \sqsubseteq \neg\mathsf{OnCampusLocation}$ (2)
$\quad \mathsf{LectureHall} \sqsubseteq \mathsf{OnCampusLocation}$ (3)
$\quad \mathsf{OnCampusLocation} \sqcap \mathsf{OffCampusLocation} \sqsubseteq \bot$ (4)
$\quad \exists \mathsf{hasLectureTopic}.\mathsf{LectureTopic} \sqsubseteq \mathsf{Lecture}$ (5)
$\quad \mathsf{Lecture} \sqsubseteq \forall \mathsf{hasLocation}.(\mathsf{OnCampusLocation} \sqcup \mathsf{OffCampusLocation})$ (6)
$\quad \mathsf{Botany} \sqsubseteq \mathsf{LectureTopic} \,\}$ (7)

$\mathcal{A} = \{$
$\quad \mathsf{hasLocation}(\mathsf{xb\_214}, \mathsf{ernst\_garden})$ (8)
$\quad (\exists \mathsf{hasLectureTopic}.\mathsf{Botany})(\mathsf{xb\_214})$ (9)
$\quad \mathsf{BotanicalGarden}(\mathsf{ernst\_garden}) \,\}$ (10)
The TBox $\mathcal{T}$ specifies the relevant terminology. Axiom (1) defines OffCampusLecture as a lecture that has as location an OffCampusLocation. (2) specifies BotanicalGarden as not being an OnCampusLocation, and (3) specifies LectureHall as an OnCampusLocation. (4) states that no location can be both on campus and off campus. (5) states that whatever has a lecture topic is a lecture. (6) states that the locations of lectures are either OnCampusLocations or OffCampusLocations. (7) finally gives Botany as an example of a lecture topic. The ABox $\mathcal{A}$ gives information about one specific instance of a course, which has the identifier xb_214, is located in the Ernst garden (8) and has topic Botany (9). The Ernst garden is a botanical garden (10).
To define the meaning of concepts and axioms, we need to specify their semantics, which is usually done using interpretations. An interpretation is a tuple $\mathcal{I} = (\Delta^{\mathcal{I}}, \cdot^{\mathcal{I}})$. The domain $\Delta^{\mathcal{I}}$ is a non-empty set of elements, and the interpretation function $\cdot^{\mathcal{I}}$ gives meaning to names by assigning to each individual name $a \in N_I$ an element $a^{\mathcal{I}} \in \Delta^{\mathcal{I}}$, to each concept name $A \in N_C$ a set $A^{\mathcal{I}} \subseteq \Delta^{\mathcal{I}}$, and to each role $r \in N_R$ a binary relation $r^{\mathcal{I}} \subseteq \Delta^{\mathcal{I}} \times \Delta^{\mathcal{I}}$. The interpretation function is then lifted to concepts as follows:

$\top^{\mathcal{I}} = \Delta^{\mathcal{I}}$, $\bot^{\mathcal{I}} = \emptyset$, $(\neg C)^{\mathcal{I}} = \Delta^{\mathcal{I}} \setminus C^{\mathcal{I}}$,
$(C \sqcap D)^{\mathcal{I}} = C^{\mathcal{I}} \cap D^{\mathcal{I}}$, $(C \sqcup D)^{\mathcal{I}} = C^{\mathcal{I}} \cup D^{\mathcal{I}}$,
$(\exists r.C)^{\mathcal{I}} = \{ d \in \Delta^{\mathcal{I}} \mid \text{there is } e \in C^{\mathcal{I}} \text{ with } (d, e) \in r^{\mathcal{I}} \}$,
$(\forall r.C)^{\mathcal{I}} = \{ d \in \Delta^{\mathcal{I}} \mid e \in C^{\mathcal{I}} \text{ for all } (d, e) \in r^{\mathcal{I}} \}$.

The interpretation $\mathcal{I}$ then satisfies a CI $C \sqsubseteq D$ if $C^{\mathcal{I}} \subseteq D^{\mathcal{I}}$, an equivalence axiom $C \equiv D$ if $C^{\mathcal{I}} = D^{\mathcal{I}}$, a concept assertion $C(a)$ if $a^{\mathcal{I}} \in C^{\mathcal{I}}$, and a role assertion $r(a, b)$ if $(a^{\mathcal{I}}, b^{\mathcal{I}}) \in r^{\mathcal{I}}$. For an axiom $\alpha$, we write $\mathcal{I} \models \alpha$ to indicate that $\mathcal{I}$ satisfies $\alpha$. If $\mathcal{I}$ satisfies all axioms in a KB $\mathcal{K}$, we write $\mathcal{I} \models \mathcal{K}$ and call $\mathcal{I}$ a model of $\mathcal{K}$. The notion of a model now lets us define logical entailment. In particular, $\mathcal{K} \models \alpha$ if $\mathcal{I} \models \alpha$ for all models $\mathcal{I}$ of $\mathcal{K}$. If an axiom is satisfied in all interpretations, we call it a tautology. A KB that does not have any model is called inconsistent.
If $\mathcal{K} \models C \sqsubseteq D$, we say that $C$ is subsumed by $D$, and if $\mathcal{K} \models C(a)$, we also say that $a$ is an instance of $C$.
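To make the semantics concrete, the following is a minimal sketch of how concept extensions can be evaluated in a finite interpretation; the tuple-based concept encoding and the small demo interpretation are our own illustration and not part of the formal development.

```python
def extension(concept, domain, c_ext, r_ext):
    """Compute the extension C^I of a concept in a finite interpretation
    I = (domain, c_ext, r_ext), following the semantics above."""
    if concept == 'top':
        return set(domain)
    if concept == 'bot':
        return set()
    if isinstance(concept, str):                       # concept name
        return set(c_ext.get(concept, set()))
    op = concept[0]
    if op == 'not':
        return set(domain) - extension(concept[1], domain, c_ext, r_ext)
    if op in ('and', 'or'):
        left = extension(concept[1], domain, c_ext, r_ext)
        right = extension(concept[2], domain, c_ext, r_ext)
        return left & right if op == 'and' else left | right
    if op in ('exists', 'forall'):                     # ∃r.C and ∀r.C
        _, role, filler = concept
        f_ext = extension(filler, domain, c_ext, r_ext)
        succs = lambda d: {e for (x, e) in r_ext.get(role, set()) if x == d}
        if op == 'exists':
            return {d for d in domain if succs(d) & f_ext}
        return {d for d in domain if succs(d) <= f_ext}
    raise ValueError(f"unknown constructor: {concept}")

domain = {'xb_214', 'ernst_garden'}
c_ext = {'BotanicalGarden': {'ernst_garden'}}
r_ext = {'hasLocation': {('xb_214', 'ernst_garden')}}
print(extension(('exists', 'hasLocation', 'BotanicalGarden'),
                domain, c_ext, r_ext))                 # -> {'xb_214'}
```

Such an interpretation satisfies a CI $C \sqsubseteq D$ exactly if the extension computed for $C$ is a subset of the extension computed for $D$.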
Example 2.
From the KB $\mathcal{K}$ from Example 1, among others, the following axioms are entailed:

1. $\mathsf{Lecture}(\mathsf{xb\_214})$,
2. $\mathsf{OffCampusLecture}(\mathsf{xb\_214})$,
3. $\mathsf{OffCampusLocation}(\mathsf{ernst\_garden})$.

This means that xb_214 is an instance of Lecture and OffCampusLecture, and ernst_garden is an instance of OffCampusLocation. We will see in the following sections how to explain some of these entailments.
Given a KB $\mathcal{K}$, we use $\mathsf{ind}(\mathcal{K})$ to refer to the individual names occurring in it. For a KB/axiom/concept $X$, we use $\mathsf{sig}(X)$ to refer to its signature, i.e. the set of concept and role names occurring in $X$. If a KB/axiom/concept $X$ uses only names from a signature $\Sigma$, i.e. a set of concept and role names, then we call $X$ a $\Sigma$-KB/axiom/concept.
$\mathcal{EL}$ and $\mathcal{ALC}$ form the basis of two families of DLs that extend those logics by additional expressivity. Nominals are concepts of the form $\{a\}$ that refer to a set containing just the individual name $a$, inverse roles are inverses of roles ($r^-$) that can be used instead of role names, functionality constraints allow us to express that a role is functional, and role hierarchies are axioms of the form $r \sqsubseteq s$, where $r$ and $s$ are roles. To describe which additional features a DL supports, we use calligraphic letters, which are then appended to the name of the DL: $\mathcal{O}$ stands for nominals, $\mathcal{I}$ for inverse roles, $\mathcal{F}$ for functionality constraints, and $\mathcal{H}$ for role hierarchies. For example, $\mathcal{ALCI}$ is a logic that extends $\mathcal{ALC}$ with inverse roles, and $\mathcal{ELHO}$ is a logic that extends $\mathcal{EL}$ with role hierarchies and nominals. To refer to the extension of a logic with $\bot$, we use the subscript $\bot$. A prominent example is $\mathcal{EL}_\bot$, which extends $\mathcal{EL}$ with bottom. For more information about these operators, we refer to [14].
$\mathcal{EL}$ can be seen as the prototypical lightweight DL that admits tractable reasoning, while $\mathcal{ALC}$ is the prototypical more expressive DL, for which reasoning is ExpTime-complete [14]. In fact, many logics of the $\mathcal{EL}$ family allow for tractable reasoning: deciding entailment in any logic between $\mathcal{EL}$ and $\mathcal{ELHO}_\bot$ is PTime-complete [13]. Adding any of the operators disjunction, negation, value restriction or inverse roles on the other hand makes the logic ExpTime-hard [13]. Indeed, $\mathcal{EL}_\bot$ allows us to express a limited form of value restrictions, which are sometimes added to the language as syntactic sugar: specifically, $C \sqsubseteq \forall r.\bot$ can be expressed as $C \sqcap \exists r.\top \sqsubseteq \bot$.
3 Justifications
Justifications are the best investigated way of explaining entailments $\mathcal{K} \models \alpha$. As a means to understand and debug ontologies, they were first proposed in [66], and since then, many different approaches for computing them practically have been suggested [38, 16, 17, 15, 67].
The idea of justifications is quite simple:
Definition 3.
Let $\mathcal{K}$ be a KB and $\alpha$ an axiom s.t. $\mathcal{K} \models \alpha$. Then, a justification for $\mathcal{K} \models \alpha$ is a set $\mathcal{J} \subseteq \mathcal{K}$ of axioms s.t. $\mathcal{J} \models \alpha$ and for no $\mathcal{J}' \subsetneq \mathcal{J}$, $\mathcal{J}' \models \alpha$.
Example 4.
In our running example, for the entailment $\mathcal{K} \models \mathsf{Lecture}(\mathsf{xb\_214})$, the following axioms form a justification:

$\{\, \exists \mathsf{hasLectureTopic}.\mathsf{LectureTopic} \sqsubseteq \mathsf{Lecture},\ \mathsf{Botany} \sqsubseteq \mathsf{LectureTopic},\ (\exists \mathsf{hasLectureTopic}.\mathsf{Botany})(\mathsf{xb\_214}) \,\}$

For the entailment $\mathcal{K} \models \mathsf{OffCampusLocation}(\mathsf{ernst\_garden})$, the axioms (2), (5), (6), (7), (8), (9) and (10) would form a justification.
The justification helps in understanding why ernst_garden is an instance of OffCampusLocation. However, we already see that with rising complexity, more involved explanation methods may become necessary, which will be discussed in the coming sections.
Intuitively, a justification points to a minimal portion of the KB that is sufficient for the entailment, and can thus be considered ‘responsible’ for it. This is in particular relevant if we want to understand entailments from large and complex KBs. An example that is often given is that an older version of SNOMED CT entailed that an amputation of a finger is an amputation of a hand [17]. This is clearly a bug, but to fix it, one would need to find the responsible axioms in a set of around 300,000 axioms, for which justifications can be very helpful. Since justifications point to axioms in the KB responsible for the entailment, the process of finding justifications is also called axiom pinpointing. Justifications are now a standard functionality of the ontology editor Protégé and of the Java OWL API.
There is a range of different approaches for computing justifications, many of which involve modifying the DL reasoner itself to track which axioms are used for determining the entailment. An alternative is the black box approach, where an existing reasoner is used as a black box without further modifications. Indeed, the black box approach implemented in [38] provides comparable performance to implementations of glass box approaches such as the one presented in [69]. The advantage of black box approaches is that they can be used with any reasoning system, and thus benefit from future developments into more efficient reasoning systems.
For the remainder of this section, fix a KB $\mathcal{K}$ and an axiom $\alpha$ s.t. $\mathcal{K} \models \alpha$. It is not hard to see that using a black box approach, we can compute a justification for $\mathcal{K} \models \alpha$ using at most polynomially many entailment checks: specifically, we can go through the axioms in $\mathcal{K}$ one after the other, check for each whether removing it from the current KB would still preserve the entailment, and remove it if that is the case. The resulting set of axioms will be a justification (we cannot remove any axiom without breaking the entailment). From the perspective of computational complexity, we are often happy with such a polynomial procedure. However, from a practical perspective, we can do much better.
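The following minimal sketch illustrates this linear minimization with the reasoner treated as a black-box callback; the toy entailment check only handles chains of atomic CIs and is merely there to make the example executable – any actual DL reasoner could be plugged in instead.

```python
def toy_entails(kb, goal):
    """Toy black-box 'reasoner': decides axioms of the form ('sub', A, B)
    over KBs of atomic CIs by computing reachability."""
    _, start, target = goal
    reached, frontier = {start}, [start]
    while frontier:
        x = frontier.pop()
        for (_, c, d) in kb:
            if c == x and d not in reached:
                reached.add(d)
                frontier.append(d)
    return target in reached

def linear_justification(kb, alpha, entails):
    """Go through the axioms once; drop each axiom whose removal
    still preserves the entailment."""
    just = set(kb)
    for axiom in list(just):
        if entails(just - {axiom}, alpha):  # still entailed without it?
            just.remove(axiom)              # then it is not needed
    return just

kb = {('sub', 'A', 'B'), ('sub', 'B', 'C'), ('sub', 'A', 'C'), ('sub', 'C', 'D')}
print(linear_justification(kb, ('sub', 'A', 'D'), toy_entails))
# prints one of the two justifications of A ⊑ D, depending on iteration order
```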
Algorithm 1 shows a simplified version of the black box algorithm presented in [38] for computing a justification for $\mathcal{K} \models \alpha$. The algorithm uses a divide and conquer strategy. The main idea is to fix a part of the KB and to only consider the remaining axioms when trying to find a minimal subset. Once this part of the KB has been minimized, we set the minimized part as fixed, and minimize the part of the KB that was previously fixed. Consequently, our algorithm uses an internal procedure SingleJustification that takes as arguments a fixed KB $\mathcal{K}_f$ and a varied part $\mathcal{K}_v$ that is to be minimized (Line 6). This procedure is called unless $\alpha$ is a tautology (Line 3), and we start with a fixed part that is empty (Line 5). If the varied part $\mathcal{K}_v$ has only one element, there is nothing to minimize, since we know that $\alpha$ is not a tautology, and we can directly return $\mathcal{K}_v$ (Line 8). Otherwise, we split $\mathcal{K}_v$ into two subsets $\mathcal{K}_1$ and $\mathcal{K}_2$ (Line 9) and check whether either of those sets is, together with $\mathcal{K}_f$, alone sufficient to produce the entailment. If it is, we can ignore the other set and continue (Lines 11 and 13). If it is not (Line 14), we minimize $\mathcal{K}_1$ with fixed part $\mathcal{K}_f \cup \mathcal{K}_2$, obtaining the set $\mathcal{J}_1$, which is subsequently fixed to minimize $\mathcal{K}_2$, obtaining $\mathcal{J}_2$. The union of the two minimized sets is then returned.
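Since the pseudocode of Algorithm 1 is not reproduced here, the following sketch shows the divide-and-conquer procedure in runnable form, reusing the `entails` callback convention from the previous sketch; the variable names are ours.

```python
def single_justification(kb, alpha, entails):
    """Divide-and-conquer extraction of a single justification,
    following the strategy of Algorithm 1."""
    if entails(frozenset(), alpha):      # alpha is a tautology
        return set()

    def minimize(fixed, varied):
        # Invariant: fixed | varied entails alpha, fixed alone does not.
        if len(varied) == 1:
            return set(varied)
        axioms = list(varied)
        half = len(axioms) // 2
        k1, k2 = set(axioms[:half]), set(axioms[half:])
        if entails(fixed | k1, alpha):   # k2 can be ignored
            return minimize(fixed, k1)
        if entails(fixed | k2, alpha):   # k1 can be ignored
            return minimize(fixed, k2)
        j1 = minimize(fixed | k2, k1)    # minimize k1 with k2 fixed
        j2 = minimize(fixed | j1, k2)    # then minimize k2 with j1 fixed
        return j1 | j2

    return minimize(set(), set(kb))
```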
Theorem 5.
SingleJustification requires at most polynomially many steps and returns a justification for $\mathcal{K} \models \alpha$.
Proof.
We first consider the claim that the algorithm requires at most polynomially many steps. For this, we analyze the size of the recursion tree of SingleJustification. In Line 9, we split $\mathcal{K}_v$ into two disjoint, non-empty sets $\mathcal{K}_1$ and $\mathcal{K}_2$, which are then used in the recursive calls in Lines 11, 13, 15 and 16. Since the sets are disjoint, any neighbouring branches in the recursion tree consider disjoint sets of axioms. The recursion terminates once we use a set $\mathcal{K}_v$ that has exactly one element. It follows that the recursion tree has at most $|\mathcal{K}|$ leaves. At the same time, the depth of the recursion tree is bounded by $|\mathcal{K}|$, since we can only split at most that many times into two disjoint subsets. The size of such a tree is polynomially bounded in the size of $\mathcal{K}$.
To see that SingleJustification returns a justification, we need to show the following claims:

1. SingleJustification is only called with arguments $(\mathcal{K}_f, \mathcal{K}_v)$ s.t. $\mathcal{K}_f \cup \mathcal{K}_v \models \alpha$,
2. SingleJustification$(\mathcal{K}_f, \mathcal{K}_v)$ always returns a set $\mathcal{J} \subseteq \mathcal{K}_v$ s.t. $\mathcal{K}_f \cup \mathcal{J} \models \alpha$, and
3. SingleJustification$(\mathcal{K}_f, \mathcal{K}_v)$ always returns a subset-minimal set satisfying the previous condition.
Claim 1 can be checked by induction on the different calls of the procedure. For the first call in Line 5, the claim holds by our assumption that $\mathcal{K} \models \alpha$; in Lines 11 and 13, we checked the condition explicitly in the previous line, and in Lines 15 and 16, the claim follows from the inductive hypothesis and the fact that $\mathcal{K}_f \cup \mathcal{K}_1 \cup \mathcal{K}_2 \models \alpha$.
Using Claim 1, we can now prove Claim 2 by induction on the size of $\mathcal{K}_v$. We observe that $\mathcal{K}_v$ can never be empty: first, if the initial KB was empty, then $\alpha$ would be a tautology and we would directly return in Line 3. Second, in Line 9, we always create non-empty KBs $\mathcal{K}_1$ and $\mathcal{K}_2$. Thus, the induction base case is where $|\mathcal{K}_v| = 1$, which is covered in Line 8. Because of Claim 1, we have $\mathcal{K}_f \cup \mathcal{K}_v \models \alpha$, and $\mathcal{K}_v$ is directly returned in Line 8, so that the claim holds for this case. For $|\mathcal{K}_v| > 1$, the claim follows directly from the inductive hypothesis, since in each recursive call, we use a smaller set for $\mathcal{K}_v$.
Claim 3 can be shown by induction over $|\mathcal{K}_v|$ as well. For the case that $|\mathcal{K}_v| = 1$, we first observe that we never have $\mathcal{K}_f \models \alpha$: for the initial call in Line 5, this was checked before, and in the only places where $\mathcal{K}_f$ is extended, in Lines 15 and 16, we checked before that $\mathcal{K}_f \cup \mathcal{K}_2 \not\models \alpha$ and $\mathcal{K}_f \cup \mathcal{K}_1 \not\models \alpha$. Consequently, the returned set $\mathcal{K}_v$ cannot be reduced further, and since $|\mathcal{K}_v| = 1$, it must be subset-minimal. In all the other cases, the claim follows directly by induction, since we always call SingleJustification with a smaller set for $\mathcal{K}_v$. In particular, in Line 17, since both $\mathcal{J}_1$ and $\mathcal{J}_2$ are subset-minimal, removing any axiom from $\mathcal{J}_1$ or $\mathcal{J}_2$ would break the entailment.
It now follows from Claim 3 that the call in Line 5 always returns a justification for $\mathcal{K} \models \alpha$.
All DLs we consider in this paper have a reasoning complexity of at least PTime, so that a consequence of Theorem 5 is that computing a justification is not harder than deciding the entailment to begin with. In particular, for $\mathcal{EL}$ KBs, we can compute justifications in polynomial time. On the other hand, the number of different justifications for a given entailment can be exponentially large, as can be illustrated by a simple example.
Theorem 6.
There is a family of KBs $\mathcal{O}_n$ and axioms $\alpha_n$, $n \geq 1$, such that the size of each $\mathcal{O}_n$ is linear in $n$, and there are $2^n$ different justifications for $\mathcal{O}_n \models \alpha_n$.
Proof.
We define $\mathcal{O}_n$ as follows:

$\mathcal{O}_n = \{\, A_{i-1} \sqsubseteq B_i \sqcap C_i,\ B_i \sqsubseteq A_i,\ C_i \sqsubseteq A_i \mid 1 \leq i \leq n \,\}$

and set $\alpha_n = A_0 \sqsubseteq A_n$. We have $\mathcal{O}_n \models \alpha_n$, and $\mathcal{O}_n \models \alpha_n$ has $2^n$ justifications: for each $i \in [1, n]$, we have to include $A_{i-1} \sqsubseteq B_i \sqcap C_i$, and for each $i \in [1, n]$, we have to include either $B_i \sqsubseteq A_i$ or $C_i \sqsubseteq A_i$.
We can compute the entire set of justifications in exponential time using Reiter's hitting set tree algorithm from [32, 64]. We present a slightly simplified version of this algorithm, which essentially leaves out an optimization that does not affect the worst-case complexity, and refer the reader for the optimized version to [38]. The algorithm constructs a tree in which nodes are labeled with justifications, and edges with axioms that should be excluded when generating the next justification. Formally, such a hitting set tree (HST) for an entailment $\mathcal{K} \models \alpha$ is a labeled tree $T = (V, E, \mathcal{L}, \lambda)$, with

- node labeling $\mathcal{L} : V \to 2^{\mathcal{K}} \cup \{\checkmark\}$ and
- edge labeling $\lambda : E \to \mathcal{K}$.
The idea is that each node will be labeled with a different justification or the special symbol $\checkmark$. The edge labels indicate which axioms we should exclude when looking for a justification. Since $T$ is a tree, each node $v$ has a unique path of edges leading from the root to $v$, which are labeled with a set of axioms $H(v)$. The additional requirement on HSTs is that for each node $v$, one of the following is satisfied:

1. $\mathcal{L}(v)$ is a justification for $\mathcal{K} \setminus H(v) \models \alpha$, or
2. $\mathcal{L}(v) = \checkmark$ and $\mathcal{K} \setminus H(v) \not\models \alpha$.
The algorithm to compute all justifications for $\mathcal{K} \models \alpha$ builds up such an HST incrementally until all justifications have been found. We start with an HST consisting just of a root that is labeled with an arbitrary justification. Now in each step, we pick a node $v$ that is labeled with a justification $\mathcal{L}(v) = \{\beta_1, \ldots, \beta_k\}$, and add new successor nodes $v_1, \ldots, v_k$, labeling each edge $(v, v_i)$ with $\beta_i$. The edge labeling makes sure that along the same path, we never compute the same justification twice: specifically, we exclude one axiom from each justification occurring on this path when computing the next justification. This ensures that the algorithm terminates. In particular, the same axiom can never occur on a path twice, which means that the depth of the HST is bounded by $|\mathcal{K}|$, which bounds the size of the HST by $|\mathcal{K}|^{|\mathcal{K}|}$ and means that the algorithm stops after exponentially many steps. We can indeed show that the HST algorithm also computes all justifications.
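A minimal runnable sketch of this enumeration, again with the reasoner as a black-box callback and reusing `single_justification` from the sketch above; like the simplified algorithm described here, it omits the optimizations of [38].

```python
def all_justifications(kb, alpha, entails):
    """Enumerate all justifications via a hitting set tree. Each node of
    the tree is identified by the set of axioms excluded on the path
    from the root (the labels H(v) from the text)."""
    kb = frozenset(kb)
    found = set()

    def expand(excluded):
        remaining = kb - excluded
        if not entails(remaining, alpha):
            return                           # leaf labeled with a checkmark
        just = frozenset(single_justification(remaining, alpha, entails))
        found.add(just)
        for axiom in just:                   # one successor per axiom
            expand(excluded | {axiom})

    expand(frozenset())
    return found
```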
Theorem 7.
The HST algorithm returns the set of all justifications for $\mathcal{K} \models \alpha$.
Proof.
Let $T$ be an HST computed by the algorithm. For a node $v$, denote by $\mathsf{desc}(v)$ the set of descendants of $v$, including $v$ itself, and by $J(v)$ the set of justifications that were found in the descendants of $v$. Towards the theorem, we prove that for every $v$, $J(v)$ contains all justifications for $\mathcal{K} \setminus H(v) \models \alpha$. We prove this by induction over the size of $\mathsf{desc}(v)$. If $|\mathsf{desc}(v)| = 1$, then $v$ is a leaf, and $\mathcal{L}(v) = \checkmark$. This label gets assigned if $\mathcal{K} \setminus H(v) \not\models \alpha$, so that $J(v) = \emptyset$ indeed contains the set of all justifications for $\mathcal{K} \setminus H(v) \models \alpha$. Assume $|\mathsf{desc}(v)| > 1$. Then, $v$ has successors $v_1, \ldots, v_k$, and $\mathcal{L}(v) = \mathcal{J}$ is a justification for $\mathcal{K} \setminus H(v) \models \alpha$, where $\mathcal{J} = \{\beta_1, \ldots, \beta_k\}$ and for all $i \in [1, k]$, $\lambda(v, v_i) = \beta_i$. For every $i$, we have $H(v_i) = H(v) \cup \{\beta_i\}$, since one more axiom from $\mathcal{J}$ cannot be used below $v_i$. We obtain that $|\mathsf{desc}(v_i)| < |\mathsf{desc}(v)|$ for all $i$, so that indeed we can apply our inductive hypothesis and obtain that for all $i$, $J(v_i)$ contains all justifications for $\mathcal{K} \setminus H(v_i) \models \alpha$. Now let $\mathcal{J}'$ be a justification for $\mathcal{K} \setminus H(v) \models \alpha$ s.t. $\mathcal{J}' \notin \bigcup_i J(v_i)$. This means that for every $i$, $\mathcal{J}' \not\subseteq \mathcal{K} \setminus H(v_i)$, which means that $\mathcal{J}'$ must contain for every $i$ some axiom from $H(v_i) \setminus H(v)$. In each case, this axiom can only be $\beta_i$. We obtain that $\mathcal{J} \subseteq \mathcal{J}'$, and since $\mathcal{J}'$ is subset-minimal, we indeed have $\mathcal{J}' = \mathcal{J}$. By definition, thus $\mathcal{J}' \in J(v)$, and we obtain that $J(v)$ indeed contains all justifications for $\mathcal{K} \setminus H(v) \models \alpha$. Applying this observation to the root of the HST, we obtain that the algorithm computes all justifications for $\mathcal{K} \models \alpha$.
4 Proofs: Inference Rules and Optimality
Justifications show which axioms from the KB are responsible for the entailment, but not how they are responsible. Proofs are more detailed explanations. Roughly, a proof is an acyclic, directed graph that shows how the entailment in question can be inferred in small steps from the axioms in the KB. Typically, proofs are generated using a set of sound inference rules that describe on an abstract level how to infer a new axiom from an existing set of axioms. Indeed, there are reasoning methods for DLs that use such a set of rules for reasoning [36, 68, 25, 26], though many established reasoners for more expressive DLs are based on other reasoning methods such as tableaux [69, 31, 70]. Using such inference rules not only as a mechanism for reasoning but also as a means to compute explanations seems natural. Indeed, one of the earliest works on explanations in DLs, which was published well before the first works on justifications, used proofs [58]. Elk is an efficient and widely used reasoner that supports an extension of $\mathcal{EL}$, and that does use inference rules [43]. Figure 1 shows the inference rules as they are presented in the paper for the Elk reasoner, restricted to $\mathcal{EL}$. (In [43], the second premise of one of the rules is instead a side condition. In the context of this paper, this difference is not relevant.) We call such a set of rules also a calculus. The way to read these rules is as follows: if we have axioms as on top of the bar, then we can infer the axiom below the bar. Another calculus for $\mathcal{EL}$ can be found in [14], which also presents the calculus for $\mathcal{ALC}$ shown in Figure 2. Calculi for further DLs can be found in [68, 51], and for more expressive logics, rule sets can be found in [36, 25, 26, 57, 50].
Example 8.
We continue with the TBox $\mathcal{T}$ from our running example (Example 1). We have $\mathcal{T} \models \exists \mathsf{hasLectureTopic}.\mathsf{Botany} \sqsubseteq \mathsf{Lecture}$. A proof for this entailment using the Elk calculus can be given as follows:

1. $\exists \mathsf{hasLectureTopic}.\mathsf{Botany} \sqsubseteq \exists \mathsf{hasLectureTopic}.\mathsf{Botany}$ (initialization rule)
2. $\mathsf{Botany} \sqsubseteq \mathsf{LectureTopic}$ (axiom (7), from $\mathcal{T}$)
3. $\exists \mathsf{hasLectureTopic}.\mathsf{Botany} \sqsubseteq \exists \mathsf{hasLectureTopic}.\mathsf{LectureTopic}$ (existential rule applied to 1 and 2)
4. $\exists \mathsf{hasLectureTopic}.\mathsf{LectureTopic} \sqsubseteq \mathsf{Lecture}$ (axiom (5), from $\mathcal{T}$)
5. $\exists \mathsf{hasLectureTopic}.\mathsf{Botany} \sqsubseteq \mathsf{Lecture}$ (hierarchy rule applied to 3 and 4)

Here, steps 2 and 4 mark the axioms that are from $\mathcal{T}$.
As we have seen in Section 3, there can be exponentially many justifications for a given entailment. For proofs, there are even more degrees of freedom. To determine which proof is best used as an explanation, one can define quality criteria for proofs. The question is then how to compute a proof that is optimal with respect to such a criterion. As one would expect, some criteria are harder to implement than others. We present a polynomial algorithm for computing proofs that are optimal for one such criterion, and then indicate how this quality criterion can be generalized to a larger class of quality criteria that admit tractable proof generation. But before we can even speak of the complexity of generating good proofs, we need a formal account of what proofs are and how they are generated.
Formally, we define proofs as a special case of derivation structures, which are hypergraphs labeled with axioms.
Definition 9.
Let $\mathcal{K}$ be a KB. Then, a derivation structure for $\mathcal{K}$ is a labeled hypergraph $\mathcal{D} = (V, E, \ell)$ with vertices $V$, hyperedges $E \subseteq 2^V \times V$, and a labeling function $\ell$ that assigns to every vertex $v \in V$ an axiom $\ell(v)$, and that satisfies

1. for every $(S, v) \in E$, $\{\ell(u) \mid u \in S\} \models \ell(v)$, and
2. if for some $v \in V$, there is no $(S, v) \in E$, then $\ell(v) \in \mathcal{K}$.
The edges in the hypergraph correspond to the inference steps. Our conditions ensure that every inference is sound, and that if some axiom is not the result of an inference, then it has to be from the KB. Note that in this setting, tautological axioms can occur in edges $(\emptyset, v)$, and thus are also the result of an inference step.
Derivation structures are a generalization of proofs, and will help us in defining the problem of computing a proof. Proofs are defined as follows:
Definition 10.
Let $\mathcal{K}$ be a KB and $\alpha$ an axiom. Then, a proof for $\mathcal{K} \models \alpha$ is a derivation structure $\mathcal{P} = (V, E, \ell)$ that satisfies:

1. $\mathcal{P}$ has exactly one sink $v_\alpha$, which is labeled with the final conclusion $\alpha$,
2. $\mathcal{P}$ is acyclic,
3. every node is derived at most once, i.e. there are no two $(S_1, v), (S_2, v) \in E$ s.t. $S_1 \neq S_2$.
Note that the last condition does not require every axiom to occur only once in the proof, since different nodes can be labeled with the same axiom. The purpose of this condition is to make inferences in a proof unique, while a derivation structure may contain many possible inferences for the same node. Our definition is flexible enough to allow also for proofs that are not tree shaped, but instead correspond to directed acyclic graphs (DAGs). Indeed, this can be a more compact representation in many cases.
There are different ways in which proofs can be generated, and inference rules are just one possibility. In Section 6, we will see methods that work without a set of inference rules. In the following, we abstract away from the specific inference mechanism, and refer to a system that produces proofs as a deriver. A deriver $\mathfrak{D}$ is a method that takes as argument a pair $(\mathcal{K}, \alpha)$ of a KB and an entailment, and produces a derivation structure $\mathfrak{D}(\mathcal{K}, \alpha)$ that contains all the inferences the deriver could perform towards proving $\alpha$. From this derivation structure, we then extract different proofs that optimize some desired optimality criterion such as size. For convenience, we assume that the cardinality of any inference step is polynomially bounded, and that no two nodes have the same label. For instance, the reasoner Elk only infers CIs over concepts that already occur as sub-concepts in the input. This means that for the corresponding deriver $\mathfrak{D}_{\mathsf{Elk}}$, the derivation structure is always of a size that is polynomially bounded by the size of $\mathcal{K}$ and $\alpha$. For languages for which entailment is in ExpTime, such as $\mathcal{ALC}$ and $\mathcal{ALCI}$, derivers based on inference rules would instead produce derivation structures that can be exponential in the size of the input. This is for instance the case for the deriver $\mathfrak{D}_{\mathcal{ALC}}$ based on the calculus in Figure 2, whose derivation structures are exponentially bounded. In fact, derivers may even produce larger or even unbounded derivation structures, which are discussed in [8].
We consider the complexity of computing a good proof for a given deriver. This complexity will be influenced by whether the size of the derivation structure is polynomially bounded or larger – for simplicity, we will in the following distinguish polynomial and exponential derivers, based on whether the size of derivation structures is polynomial or exponential in the size of the input. Computing the entire derivation structure is inefficient, which is why we instead see derivation structures as functions that 1) can return, for a given axiom, the node labeled with that axiom, and 2) can verify whether a given edge $(S, v)$ is part of the derivation structure. In case the deriver is defined via a set of inference rules, these function calls correspond to checking whether an inference is valid according to this set of inference rules.
The first quality measure for proofs that we consider is the size $|E|$ of a proof, i.e. the number of edges or inference steps in it. Rather than considering the problem of computing an optimal proof w.r.t. that measure, it is more convenient to instead look at the corresponding decision problem, which asks for a given entailment and integer $n$ whether there exists a proof of size at most $n$. If $n$ is encoded in unary (e.g. using a sequence of $n$ symbols), this problem is clearly in NP, since we can always guess a proof of size at most $n$ in polynomial time and then just use the deriver to verify whether it is a valid proof. As shown in [2], we cannot do much better than that – computing optimal proofs is NP-hard already for the polynomial deriver $\mathfrak{D}_{\mathsf{Elk}}$. If we consider instead binary encoding of the bound $n$, and consider exponential derivers, we obtain NExpTime-completeness.
Theorem 11 ([2]).
We can decide the existence of a proof of size at most $n$ in $\mathfrak{D}(\mathcal{K}, \alpha)$ in NP if $\mathfrak{D}$ is a polynomial deriver. If $\mathfrak{D}$ is an exponential deriver, this can be decided in NP if $n$ is given in unary, and in NExpTime if $n$ is given in binary. The derivers $\mathfrak{D}_{\mathsf{Elk}}$ and $\mathfrak{D}_{\mathcal{ALC}}$ match the corresponding lower bounds.
We know from an analysis with the Elk deriver on a larger corpus of OWL ontologies that proofs rarely have a size beyond 20 [2, 1], which makes these complexities a bit less dramatic. Indeed, we are usually able to compute size-minimal proofs without a noticeable delay [1].
Finding proofs becomes tractable if we instead consider tree size. The tree size of a proof corresponds to the size of its tree unravelling, which is actually how proofs are represented, e.g., in the ontology editor Protégé. To avoid having to introduce tree unravellings, and since it will make our life easier when we later generalize to a larger class of quality measures, we define tree size directly on proofs. Given a proof $\mathcal{P}$ with the final conclusion $\alpha$ labeling the sink $v_\alpha$, we call the unique inference $(S, v_\alpha) \in E$ the final inference in $\mathcal{P}$. Given a node $v$, we refer by $\mathcal{P}(v)$ to the proof below $v$, which is the sub-proof of $\mathcal{P}$ that has $v$ as its sink.
Definition 12.
The tree size $\mathsf{ts}(\mathcal{P})$ of a proof $\mathcal{P}$ is defined recursively based on the final inference $(S, v)$:

$\mathsf{ts}(\mathcal{P}) = 1 + \sum_{u \in S} \mathsf{ts}(\mathcal{P}(u))$.

Note that the case of a proof consisting of a single node labeled with a KB axiom, which has no final inference, corresponds to the case where $\mathsf{ts}(\mathcal{P}) = 1$.
Theorem 13.
For polynomial derivers $\mathfrak{D}$, we can decide the existence of a proof of tree size at most $n$ in $\mathfrak{D}(\mathcal{K}, \alpha)$ in polynomial time.
Sketch.
We present an algorithm that computes a proof for $\mathcal{K} \models \alpha$ of minimal tree size, and which can thus be used to decide the existence of a proof of tree size at most $n$. The algorithm constructs the proof starting from its leaves, and keeps in memory a set of pairs $(v, \mathcal{P}_v)$, where $\mathcal{P}_v$ is the smallest proof found so far for the node $v$. Once the pair for a node $v$ s.t. $\ell(v) = \alpha$ is found, we return $\mathcal{P}_v$ as the proof for $\alpha$ of the smallest tree size. The algorithm repeats the following step until that pair is found:

- If there is an edge $(S, v) \in E$ s.t. for every $u \in S$, either $\ell(u) \in \mathcal{K}$ or we have a tuple $(u, \mathcal{P}_u)$, create a proof $\mathcal{P}$ for $v$ by merging the proofs for each $u \in S$ and adding $(S, v)$ as final inference step. Among all such candidates, pick one with minimal tree size and set $\mathcal{P}_v = \mathcal{P}$.

One can show by induction on the proof size that in each step, every added $\mathcal{P}_v$ is such that its tree size is minimal among all the proofs for $v$. (For a detailed proof, see [2].)
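The following sketch implements this bottom-up computation over an explicitly given derivation structure; instead of the priority-based processing order, it simply iterates until a fixpoint is reached, which yields the same minimal values and keeps the code short. The representation of inferences as (premises, conclusion) pairs is our own.

```python
import math

def minimal_tree_size(inferences, kb_axioms, goal):
    """Compute minimal tree sizes of proofs in a derivation structure
    given as a set of inferences (tuple_of_premises, conclusion).
    KB axioms are leaves of tree size 1. Returns the entry for `goal`
    as (tree size, final inference), or None if no proof exists; a
    minimal proof can be read off by following the stored inferences."""
    best = {ax: (1, None) for ax in kb_axioms}
    changed = True
    while changed:                      # relax inferences until fixpoint
        changed = False
        for premises, conclusion in inferences:
            if all(p in best for p in premises):
                size = 1 + sum(best[p][0] for p in premises)
                if size < best.get(conclusion, (math.inf, None))[0]:
                    best[conclusion] = (size, (premises, conclusion))
                    changed = True
    return best.get(goal)

# toy derivation structure with a direct and an indirect proof of A ⊑ D
infs = {(('A<B', 'B<D'), 'A<D'), (('A<C',), 'A<D')}
print(minimal_tree_size(infs, {'A<B', 'B<D', 'A<C'}, 'A<D'))  # size 2 wins
```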
Intuitively, the reason that the algorithm works is that tree size can be defined recursively. This idea was generalized in [3] to define a whole class of proof quality measures, the monotone recursive measures, that allow for tractable computation of optimal proofs. In the context of this paper, it is sufficient to see proof quality measures simply as functions that map proofs to rational numbers, with the idea that a lower value is better. Both size and tree size can be defined as such proof measures, but we can also define the depth of a proof, the maximal complexity of an involved axiom, or the number of axioms from the KB used (the justification size of a proof), as well as versions of size and tree size that are weighted based on the complexity of the involved axioms. If proof measures can be defined recursively as we did with tree size above, and satisfy some additional criteria (e.g. that adding new inference steps cannot decrease the value), we can use the procedure from the proof of Theorem 13 and obtain corresponding complexity results. Table 1 gives an overview of the complexities for different proof measures. For the monotone recursive measures tree size and depth, we can improve upon the generic bounds. Logarithmic depth does not have a practical motivation, and is included to match the upper bounds for the monotone recursive measures.
Table 1: Complexity of deciding the existence of a proof within a given bound, for different quality measures, derivers, and encodings of the bound.

| Deriver | polynomial (unary) | polynomial (binary) | exponential (unary) | exponential (binary) |
|---|---|---|---|---|
| Size | NP [2] | NP [2] | NP [2] | NExpTime [2] |
| Monotone recursive measures | PTime [3] | PTime [3] | ExpTime [3] | ExpTime [3] |
| Tree size | PTime [2] | PTime [2] | NP [2] | PSpace [3] |
| Depth | PTime [3] | PTime [3] | PSpace [3] | ExpTime [3] |
| Logarithmic depth | PTime [3] | PTime [3] | ExpTime [3] | ExpTime [3] |
5 Craig Interpolation
Proofs based on inference calculi provide very detailed explanations, which may not always be desired. Before we continue on the topic of proofs and consider less detailed variants, we discuss explanations using Craig interpolation, and start with entailments of CIs.
Craig interpolation has first been investigated in the context of classical logics [24, 52]. The classical definition is as follows: given two formulas $\varphi$ and $\psi$ s.t. $\varphi \models \psi$, a Craig interpolant for $\varphi \models \psi$ is a formula $\iota$ s.t. $\varphi \models \iota$, $\iota \models \psi$, and $\mathsf{sig}(\iota) \subseteq \mathsf{sig}(\varphi) \cap \mathsf{sig}(\psi)$. We can see $\iota$ as an explanation of the entailment using only the common terms of $\varphi$ and $\psi$. If we lift this idea to complex DL concepts, such interpolants can be used to explain positive entailments of CIs.
Example 14 (cf. [65]).
Consider, for illustration, the following concepts $C$ and $D$:

$C = A_1 \sqcap \exists r.(B \sqcap A_2) \qquad D = A_3 \sqcup \exists r.B$

We have $\models C \sqsubseteq D$. The concept $I = \exists r.B$ satisfies $\models C \sqsubseteq I$ and $\models I \sqsubseteq D$, and uses only names that occur in both $C$ and $D$. This way, $I$ gives a more precise account of why $C$ is subsumed by $D$.
In the presence of a TBox, using as signature the common terms of $C$ and $D$ makes less sense. Consider that we want to explain CIs of the form $A \sqsubseteq B$, where $A$ and $B$ are concept names. A meaningful concept to explain this entailment will have to use names other than $A$ and $B$. Consequently, concept interpolation is defined with the signature as an additional parameter.
Definition 15.
Let $\mathcal{L}$ be a DL, $C$, $D$ concepts, $\mathcal{T}$ an $\mathcal{L}$ TBox s.t. $\mathcal{T} \models C \sqsubseteq D$, and $\Sigma$ a signature. A concept $\Sigma$-interpolant (in $\mathcal{L}$) is then an $\mathcal{L}$ concept $I$ s.t. $\mathcal{T} \models C \sqsubseteq I$, $\mathcal{T} \models I \sqsubseteq D$, and $\mathsf{sig}(I) \subseteq \Sigma$.
Example 16.
The TBox $\mathcal{T}$ in our running example entails the following CI:

$\exists \mathsf{hasLectureTopic}.\mathsf{Botany} \sqcap \exists \mathsf{hasLocation}.\mathsf{BotanicalGarden} \sqsubseteq \mathsf{OffCampusLecture}$

Consider $\Sigma = \{\mathsf{Lecture}, \mathsf{hasLocation}, \mathsf{OnCampusLocation}\}$. A concept $\Sigma$-interpolant for the entailment is

$\mathsf{Lecture} \sqcap \exists \mathsf{hasLocation}.\neg\mathsf{OnCampusLocation}$

This interpolant provides an intuitive explanation without using any of the concept names occurring in the CI.
Whether a concept interpolant exists or not depends on the signature as well as on the DL. If a DL has the so-called Craig interpolation property (CIP), then interpolant existence can be polynomially reduced to entailment. A DL has the CIP if whenever for two TBoxes $\mathcal{T}_1$, $\mathcal{T}_2$ and concepts $C$, $D$, we have $\mathcal{T}_1 \cup \mathcal{T}_2 \models C \sqsubseteq D$, then there exists a concept $I$ using only names that occur in $\mathcal{T}_1$, $C$ as well as in $\mathcal{T}_2$, $D$, s.t. $\mathcal{T}_1 \cup \mathcal{T}_2 \models C \sqsubseteq I$ and $\mathcal{T}_1 \cup \mathcal{T}_2 \models I \sqsubseteq D$. If we consider the simple case $\mathcal{T}_1 = \mathcal{T}_2 = \emptyset$, with concepts $C$ and $D$, it is a simple exercise to verify that this property indeed holds for any $C$, $D$ s.t. $\models C \sqsubseteq D$. For example, setting $C$ and $D$ as in Example 14 yields $I = \exists r.B$. Indeed, this works for any CI entailed by ontologies formulated in $\mathcal{EL}$ or $\mathcal{ALC}$, as well as for their extensions with transitivity, inverse roles and functionality restrictions, because those are the logics that have the CIP.
To see how this property can be used to decide the existence of concept interpolants, assume a DL has the CIP and fix $\mathcal{T}$, $C$, $D$ and $\Sigma$ s.t. $\mathcal{T} \models C \sqsubseteq D$. We construct a new TBox $\mathcal{T}'$ and a new concept $D'$ from $\mathcal{T}$ and $D$ by replacing every non-$\Sigma$ symbol uniformly by a fresh symbol. It is shown in [11, Lemma 5.3] that then a concept $\Sigma$-interpolant for $\mathcal{T} \models C \sqsubseteq D$ exists iff we have $\mathcal{T} \cup \mathcal{T}' \models C \sqsubseteq D'$.
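The renaming step of this reduction is easy to implement; the following sketch applies it to axioms represented as nested tuples (our own encoding) – the resulting entailment check can then be handed to any DL reasoner.

```python
def rename_outside_signature(axioms, sigma):
    """Replace every concept/role name not in sigma uniformly by a
    primed copy, as in the reduction from [11, Lemma 5.3]."""
    KEYWORDS = {'top', 'bot', 'sub', 'and', 'or', 'not', 'exists', 'forall'}

    def ren(term):
        if isinstance(term, tuple):
            return tuple(ren(t) for t in term)
        if term in KEYWORDS:
            return term                  # operators stay as they are
        return term if term in sigma else term + "'"

    return {ren(ax) for ax in axioms}

tbox = {('sub', 'BotanicalGarden', ('not', 'OnCampusLocation'))}
print(rename_outside_signature(tbox, {'BotanicalGarden'}))
# {('sub', 'BotanicalGarden', ('not', "OnCampusLocation'"))}
```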
As soon as we add nominals or role hierarchies to our DL, it loses the CIP, and concept interpolation becomes much harder: for both $\mathcal{ALCO}$ and $\mathcal{ALCH}$, deciding the existence of interpolants is coNExpTime-complete; for $\mathcal{ALCHO}$, it is even 2ExpTime-complete [11].
While some papers describe algorithms [71, 65], currently, there are no dedicated implementations available that can compute concept interpolants in practice. Before we address this problem, we consider interpolation on the level of KBs.
Definition 17.
Let $\mathcal{L}$ be a DL, $\mathcal{K}_1$ and $\mathcal{K}_2$ ontologies s.t. $\mathcal{K}_1 \models \mathcal{K}_2$, and $\Sigma$ a signature. Then, a KB $\Sigma$-interpolant (in $\mathcal{L}$) is an $\mathcal{L}$ KB $\mathcal{K}$ that uses only names from $\Sigma$ and satisfies $\mathcal{K}_1 \models \mathcal{K}$ and $\mathcal{K} \models \mathcal{K}_2$.
$\mathcal{K}$ is a uniform $\Sigma$-interpolant for $\mathcal{K}_1$ if for every ontology $\mathcal{K}_2$ s.t. $\mathsf{sig}(\mathcal{K}_2) \subseteq \Sigma$ and $\mathcal{K}_1 \models \mathcal{K}_2$, $\mathcal{K}$ is a KB $\Sigma$-interpolant for $\mathcal{K}_1 \models \mathcal{K}_2$.
Similar to concept interpolants, KB interpolants can be used to explain entailments. Specifically, we can explain $\mathcal{K} \models \alpha$ by providing a KB $\Sigma$-interpolant for $\mathcal{K} \models \{\alpha\}$. Moreover, we can reduce concept interpolation to KB interpolation:
Theorem 18.
Concept interpolation can be polynomially reduced to KB interpolation.
Proof.
Fix a DL introduced in this paper. Let $\mathcal{T}$ be a TBox, $C$, $D$ concepts in that DL s.t. $\mathcal{T} \models C \sqsubseteq D$, and let $\Sigma$ be a signature. Let $a$ be a fresh individual name. We have $\mathcal{T} \cup \{C(a)\} \models D(a)$. If there exists a concept $\Sigma$-interpolant $I$ for $\mathcal{T} \models C \sqsubseteq D$, then $\{I(a)\}$ is a KB $\Sigma$-interpolant for $\mathcal{T} \cup \{C(a)\} \models \{D(a)\}$. Conversely, let $\mathcal{K}$ be a KB $\Sigma$-interpolant for $\mathcal{T} \cup \{C(a)\} \models \{D(a)\}$. Since $a$ does not occur in $\mathcal{T}$, unless $\mathcal{T} \cup \{C(a)\}$ is inconsistent or $D(a)$ is tautological, $\mathcal{K}$ must contain assertions $C_1(a), \ldots, C_n(a)$ s.t. $\mathcal{T} \models C \sqsubseteq C_1 \sqcap \ldots \sqcap C_n$ and $\mathcal{T} \models C_1 \sqcap \ldots \sqcap C_n \sqsubseteq D$. Hence, we can construct a concept $\Sigma$-interpolant by setting $I = C_1 \sqcap \ldots \sqcap C_n$.
Interestingly, KB interpolation has not really been investigated in the literature. On the other hand, there are several results on uniform KB interpolation [56, 45, 51], and also implemented systems [45, 55, 46, 74]. In general, uniform interpolants do not always exist, as can be illustrated already by the very simple TBox $\mathcal{T} = \{A \sqsubseteq \exists r.B, B \sqsubseteq \exists r.B\}$. Indeed, a uniform $\{A, r\}$-interpolant of $\mathcal{T}$ would have to capture all entailments of $\mathcal{T}$ that can be expressed using only $A$ and $r$, while at the same time using only the names $A$ and $r$. In particular, it would have to preserve all entailments of the form $A \sqsubseteq \underbrace{\exists r.\cdots\exists r.}_{n}\top$, where $n \geq 1$, which is not possible with the DLs discussed in this chapter. Consequently, a uniform $\{A, r\}$-interpolant does not exist in these DLs. For this reason, existing tools for uniform interpolation cannot guarantee success, but sometimes offer the possibility to approximate the uniform interpolant to an arbitrary degree. Formally, we say that a sequence $\mathcal{K}_1, \mathcal{K}_2, \ldots$ approximates the uniform $\Sigma$-interpolant for $\mathcal{K}$ if for every $i \geq 1$,

1. $\mathsf{sig}(\mathcal{K}_i) \subseteq \Sigma$,
2. $\mathcal{K} \models \mathcal{K}_i$,
3. $\mathcal{K}_{i+1} \models \mathcal{K}_i$, and
4. for every $\Sigma$-axiom $\alpha$ in the DL under consideration, $\mathcal{K} \models \alpha$ iff for some $i$, $\mathcal{K}_i \models \alpha$.
The approximation provided by the uniform interpolation system Lethe has this property [46], as does the system described in [55]. For computing Craig interpolants, such approximations are indeed sufficient.
Theorem 19.
Let $\mathcal{K}_1$, $\mathcal{K}_2$ be KBs s.t. $\mathcal{K}_1 \models \mathcal{K}_2$ and $\Sigma$ a signature. Let $\mathcal{K}^1, \mathcal{K}^2, \ldots$ be a sequence of KBs that approximates the uniform $\Sigma$-interpolant for $\mathcal{K}_1$. Then, there exists a KB $\Sigma$-interpolant for $\mathcal{K}_1 \models \mathcal{K}_2$ iff for some $i$, $\mathcal{K}^i$ is a KB $\Sigma$-interpolant for $\mathcal{K}_1 \models \mathcal{K}_2$.
Using the preceding results, we can thus use existing tools to compute KB and concept interpolants. By computing interpolants for different signatures, we can also provide step-wise explanations that work a bit like linear proofs. For instance, Figure 3 shows a sequence of interpolants that explains why, in our running example, $\mathcal{K} \models \mathsf{OffCampusLecture}(\mathsf{xb\_214})$. This sequence starts with a justification, and continues with KB $\Sigma_i$-interpolants where each $\Sigma_i$ has one symbol less than the preceding signature, starting with the signature of the justification and ending with the signature of the explained axiom.
6 Proofs Based on Justifications and Interpolation
Proofs based on calculi give a very fine-grained explanation, where each inference follows a systematic and transparent principle, the inference rule. In order to compute proofs for a DL $\mathcal{L}$, we need to implement a calculus for $\mathcal{L}$ that is sound and complete. While there now exist calculi for a range of DLs, the majority of implemented reasoning systems for expressive DLs is not based on such a set of rules.
There are other methods for computing proof-like, more detailed explanations that do not depend on a set of inference rules, and instead use reasoning methods as black boxes. One of these methods is justification-based proofs [39], and another is elimination proofs [5, 2], which combine KB interpolants with justifications.
Justification-based proofs use a very flexible definition, and can be computed with any reasoner as black box. The general idea is to incrementally refine a justification-based explanation by adding layers of intermediate steps. To create an intermediate layer for a justification $\mathcal{J}$ for an entailment $\mathcal{K} \models \alpha$, we generate a set $\mathcal{S}$ of axioms over the signature of $\mathcal{J}$, based on a set of predefined syntactic axiom patterns. An example of such a set of axiom patterns is $\{A \sqsubseteq B,\ A \sqsubseteq \exists r.B,\ \exists r.A \sqsubseteq B\}$, which is then instantiated by replacing $A$ and $B$ by all concept names in $\mathcal{J}$, and $r$ by all role names in $\mathcal{J}$. We remove from $\mathcal{S}$ all axioms that are not entailed by $\mathcal{J}$, resulting in a new set $\mathcal{S}'$, and compute for each $\beta \in \mathcal{S}'$ a justification $\mathcal{J}_\beta$ for $\mathcal{J} \models \beta$, which then constitutes an inference step $(\mathcal{J}_\beta, \beta)$. Furthermore, every justification $\mathcal{J}'$ for $\mathcal{S}' \cup \mathcal{J} \models \alpha$ constitutes an inference step $(\mathcal{J}', \alpha)$. We use some evaluation function to rank the inference steps generated in this way, and extract a proof for $\mathcal{K} \models \alpha$ by selecting one inference $(\mathcal{J}', \alpha)$, and for every intermediate axiom $\beta$ used in it, one inference step $(\mathcal{J}_\beta, \beta)$. This construction results in a proof in which every inference step corresponds to a pair of a justification and an entailment. We can further refine each such inference step by applying the same procedure again.
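A minimal sketch of one such refinement round; the two axiom patterns instantiated here, as well as the tuple-based axiom encoding, are our illustrative assumptions, and `single_justification` and `entails` are the black-box components from Section 3.

```python
from itertools import product

def intermediate_layer(justification, entails, concept_names, role_names):
    """Generate candidate intermediate axioms from syntactic patterns,
    keep the entailed ones, and justify each of them."""
    candidates = set()
    for a, b in product(concept_names, repeat=2):      # pattern A ⊑ B
        if a != b:
            candidates.add(('sub', a, b))
    for a, r, b in product(concept_names, role_names, concept_names):
        candidates.add(('sub', a, ('exists', r, b)))   # pattern A ⊑ ∃r.B
    steps = []
    for beta in candidates:
        if entails(frozenset(justification), beta):
            support = single_justification(justification, beta, entails)
            steps.append((frozenset(support), beta))   # one inference step
    return steps
```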
Of all the explanation techniques for positive entailments discussed in this chapter, justification-based proofs are the most flexible: they make no assumptions on the reasoner used, they can be equipped with any set of axiom patterns, and they can use arbitrary quality criteria to select the best inference step. Moreover, the incremental approach allows proofs to be refined flexibly on demand. This amount of freedom is at the same time the main limitation of this approach: since it essentially uses a brute-force method that considers all possible axioms that fit a defined shape, justification-based proofs are computationally expensive.
Elimination proofs follow a similar idea, but use KB interpolants as the intermediate layers. In particular, we use a sequence $\mathcal{K}_0, \ldots, \mathcal{K}_n$ of KBs s.t. $\mathcal{K}_0$ is a justification for $\mathcal{K} \models \alpha$, $\mathcal{K}_n = \{\alpha\}$, and each $\mathcal{K}_i$ is a KB $\Sigma_i$-interpolant for $\mathcal{K}_{i-1} \models \mathcal{K}_i$, where the signatures are such that for each $i$, $\Sigma_i = \Sigma_{i-1} \setminus \{x_i\}$ for some concept or role name $x_i$. Intuitively, each KB in the sequence ‘eliminates’ a name from the preceding one. We have already seen such a sequence of interpolants in Figure 3. To extract a proof from this sequence of interpolants, we again use justifications. Namely, for every $i$ and $\beta \in \mathcal{K}_i$, we produce an inference step $(\mathcal{J}_\beta, \beta)$, where $\mathcal{J}_\beta \subseteq \mathcal{K}_{i-1}$ is a justification for $\mathcal{K}_{i-1} \models \beta$.
Those proofs are called elimination proofs because, in principle, each inference step may eliminate a name from its premises. In particular, such an elimination step has the form $(S, \beta)$, where each axiom in $S$ contains a name $x$ that does not occur in $\beta$. The intuition is that $\beta$ is inferred through an inference on $x$. As described here, the method may still produce inference steps in which no name is eliminated. We can however easily avoid this by skipping over inference steps that do not eliminate a name.
7 Explaining Negative Entailments with Abduction
Having discussed several methods for explaining positive entailments, we now turn to the problem of explaining missing entailments, i.e. of explaining cases where $\mathcal{K} \not\models \alpha$. Here, one can follow two approaches: 1) using counterinterpretations and 2) using abduction.
The idea of counterinterpretations follows directly from the formal definition of non-entailment: if $\mathcal{K} \not\models \alpha$, then there exists a model $\mathcal{I}$ of $\mathcal{K}$ s.t. $\mathcal{I} \not\models \alpha$. The model $\mathcal{I}$ is a counterinterpretation that explains $\mathcal{K} \not\models \alpha$. Reasoning methods such as tableaux, but also the completion method for $\mathcal{EL}$ presented in [13], are indeed model construction methods, and can as such be used to create such counterinterpretations. To make them more useful as explanations, one can highlight elements in the model that are relevant to the non-entailment. [9] discusses different types of counterinterpretations, including also a contrastive variant that explains missing entailments of the form $C \sqsubseteq D$ by providing a counterinterpretation that contrasts an instance of $C$ with an instance of $D$, highlighting what is missing for the entailment to hold.
Because there is not a lot of literature on counterinterpretations, we focus in the following on the other way of explaining missing entailments, which is abduction. Abduction explains a missing entailment by pointing out what is missing in the KB for the entailment to hold. In particular, using abductive reasoning for a missing entailment $\mathcal{K} \not\models \alpha$, we compute a set $\mathcal{H}$ of axioms s.t. $\mathcal{K} \cup \mathcal{H} \models \alpha$. The set $\mathcal{H}$ can either be used directly as explanation, or together with an explanation method for positive entailments that connects $\mathcal{H}$ with $\mathcal{K}$ and $\alpha$. Abduction is not only useful for explaining missing entailments: its original motivation is to provide rational explanations for observations that cannot be logically deduced based on our knowledge. An example of abductive reasoning in daily life is that we observe that a street is wet, and come to the conclusion that it must have rained, since this would explain why the street is wet. Usually, abduction allows for more than one possible explanation: another one would be that someone spilled water on the street.
Formally, we can define abduction as follows:
Definition 20.
An abduction problem is a pair $(\mathcal{K}, \Phi)$ of an ontology $\mathcal{K}$ and an axiom or a set of axioms $\Phi$ called the observation. A hypothesis for such an abduction problem is a set $\mathcal{H}$ of axioms s.t. $\mathcal{K} \cup \mathcal{H}$ is consistent and $\mathcal{K} \cup \mathcal{H} \models \Phi$.
This definition captures the bare minimum of what we would require of a hypothesis, which is usually not enough to get a useful explanation. In addition to consistency, we often also find the following criteria (e.g. [30]):
Definition 21.
A hypothesis $\mathcal{H}$ for an abduction problem $(\mathcal{K}, \Phi)$ is

1. relevant if $\mathcal{H} \not\models \Phi$, and
2. explanatory if it is relevant and $\mathcal{K} \not\models \Phi$.
While these criteria have their motivation, we usually need further restrictions. Indeed, as the following example illustrates, we may often require additional user input to distinguish useful hypotheses from less useful ones.
Example 22.
Consider the following KB:

$\mathcal{K}_1 = \{\, \mathsf{Street} \sqcap \exists \mathsf{exposedTo}.\mathsf{Rain} \sqsubseteq \mathsf{WetStreet} \,\}$

A hypothesis for the observation $\mathsf{WetStreet}(\mathsf{street\_1})$ would be

$\mathcal{H}_1 = \{\, \mathsf{Street}(\mathsf{street\_1}),\ (\exists \mathsf{exposedTo}.\mathsf{Rain})(\mathsf{street\_1}) \,\}$

This hypothesis is relevant and explanatory according to Definition 21, and also seems intuitively like a reasonable explanation for the observation.
Now consider the following KB:

$\mathcal{K}_2 = \{\, \mathsf{Patient} \sqcap \exists \mathsf{suffersFrom}.\mathsf{Corona} \sqsubseteq \mathsf{CoronaPatient} \,\}$

and the observation $\mathsf{CoronaPatient}(\mathsf{p\_1})$. The following hypothesis is relevant and explanatory as well, but does not explain anything:

$\mathcal{H}_2 = \{\, \mathsf{Patient}(\mathsf{p\_1}),\ (\exists \mathsf{suffersFrom}.\mathsf{Corona})(\mathsf{p\_1}) \,\}$

The patient is a corona patient because he is a patient that has Corona, which is not helpful as an explanation. Observe that this example is structurally identical to the previous one, and only uses different names.
As the example illustrates, we cannot distinguish between useful and useless abductive explanations solely based on the logical structure, but may need additional user input. One solution is to simply accept that some hypotheses will not be helpful to the user, and to always compute several alternatives to choose from. Since there can be many options, we may still need to put restrictions or sort solutions based on optimality criteria. Another solution is to let the user specify beforehand what kind of hypotheses they expect, for instance by letting them restrict the signature to be used.
7.1 Restricting the Solution Space
While Definition 20 is quite generic, in the literature, one distinguishes between TBox, ABox and KB abduction, which already come with a first restriction to the solution space. (There is also research on the concept abduction problem (e.g. [18]), which operates on the level of concepts rather than axioms and is thus not compatible with our definition.)
Definition 23.
We distinguish the following types of abduction problems:
1. TBox abduction problems, in which the observation is a CI and the hypothesis a set of CIs,
2. ABox abduction problems, in which observation and hypothesis are sets of assertions, and
3. KB abduction problems, in which we have no restriction on the observation or hypothesis.
Furthermore, we distinguish
1. flat abduction problems, for which hypotheses should not use complex concepts but only concept names, and
2. signature-based abduction problems, which are tuples $(\mathcal{K}, \Phi, \Sigma)$ s.t. $(\mathcal{K}, \Phi)$ is an abduction problem and $\Sigma$ a set of concept and role names, and which require hypotheses to only use names from $\Sigma$.
TBox abduction is investigated in [72, 27, 34], ABox abduction in [44, 21, 35, 28, 63, 47], and KB abduction in [30, 49].
Using a signature is a quite direct way of restricting the solution space based on user input and of avoiding solutions such as the one in Example 22. Often, there is a natural difference between the vocabularies used for hypotheses and observations. For instance, if we want to understand why the definition of a concept $A$ is insufficient for it to be classified under a more abstract category $B$, we might want to exclude category names from the signature and include names relevant to a definition of $B$. To illustrate, to explain why a certain Pizza is not classified as Vegetarian based on its toppings, we might want to include in the signature toppings that belong to that pizza, and exclude categories such as VegetarianTopping. Another example where signatures make sense is when we use abduction for diagnosis, that is, to explain unexpected observations. Here, we might exclude names that describe symptoms, and instead include names that describe causes. This would be the case for Example 22.
If we allow hypotheses to use arbitrarily complex concepts, restricting the signature alone does not lead to a finite solution space – indeed, as we will see later, quite complex concepts may be required in order to solve a given abduction problem. If we restrict additionally to flat abduction problems, the situation changes. For instance, the number of axioms that can be used for flat TBox abduction is always quadratic in the signature size. Indeed, even without signature restriction, we only need to consider a finite search space, since we can usually restrict ourselves to the signature of the input. This means that flat TBox abduction problems can always be solved using a guess-and-check approach: we generate a hypothesis of polynomial size, and then we check whether it is consistent and entails the observation. If consistency is not an issue (for instance because we use a DL that cannot express inconsistencies), we can even directly generate subset-minimal hypotheses using the justification algorithm described in Section 3, where we simply start with an extension of the original ontology by all the axioms in the search space; a sketch of this approach is shown below. It follows that subset-minimal hypotheses in $\mathcal{EL}$ can be generated in polynomial time. Indeed, practical methods for computing hypotheses often use a hitting set algorithm similar to the one described in Section 3 (see e.g. [63]).
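A minimal sketch of this justification-based generation of a subset-minimal flat hypothesis, reusing `single_justification` and the `toy_entails` callback from Section 3; the atomic-CI candidate set and axiom encoding are our own illustration.

```python
from itertools import product

def flat_tbox_abduction(tbox, observation, sig_concepts, entails):
    """Extend the TBox by all candidate atomic CIs over the signature,
    extract one justification of the observation, and return its
    non-TBox part as a subset-minimal flat hypothesis (None if the
    observation cannot be made entailed this way)."""
    candidates = {('sub', a, b)
                  for a, b in product(sig_concepts, repeat=2) if a != b}
    candidates.discard(observation)    # keep the hypothesis relevant
    extended = set(tbox) | candidates
    if not entails(frozenset(extended), observation):
        return None
    just = single_justification(extended, observation, entails)
    return just - set(tbox)            # in EL, consistency never fails

kb = {('sub', 'Professor', 'FacultyMember')}
print(flat_tbox_abduction(kb, ('sub', 'Professor', 'Researcher'),
                          {'Professor', 'FacultyMember', 'Researcher'},
                          toy_entails))
# -> {('sub', 'FacultyMember', 'Researcher')}
```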
7.2 Optimality Criteria
As for justifications, a natural requirement for hypotheses is that they are subset-minimal, since we would not want to include axioms that are not needed. Other criteria typically considered are cardinality-minimality, semantic minimality and semantic maximality.
Definition 24.
A hypothesis $\mathcal{H}$ for a given abduction problem is

1. subset-minimal if for no other hypothesis $\mathcal{H}'$, we have $\mathcal{H}' \subsetneq \mathcal{H}$,
2. cardinality-minimal if for no other hypothesis $\mathcal{H}'$, we have $|\mathcal{H}'| < |\mathcal{H}|$,
3. semantically minimal if for every other hypothesis $\mathcal{H}'$ s.t. $\mathcal{H} \models \mathcal{H}'$, also $\mathcal{H}' \models \mathcal{H}$,
4. semantically maximal if for every other hypothesis $\mathcal{H}'$ s.t. $\mathcal{H}' \models \mathcal{H}$, also $\mathcal{H} \models \mathcal{H}'$.
Intuitively, semantically minimal hypotheses follow an Occam's razor approach in the sense that they only propose as little a change as required to the original ontology to make the observation become entailed. This way, they provide a more focused explanation of why the observation is not entailed. However, other authors argue that sometimes, one instead wants hypotheses that are as specific as possible, for which one may use semantically maximal solutions [72]. To illustrate the difference: for an explanation of why a certain dish is not classified as meat-based, a semantically weaker solution would only state that the dish contains some unspecified meat ingredient, while semantically stronger solutions would refer to specific meat ingredients. Depending on the use case, the user might prefer either one or the other.
Semantic minimality might still allow for hypotheses that are less related to the observation than others. A more refined minimality criterion, proposed in [34] for flat TBox abduction in $\mathcal{EL}$, is connection-minimality. The formal definition requires additional terminology that is out of the scope of this paper, which is why we only give the intuition. In case $\mathcal{T} \not\models C \sqsubseteq D$, we may look for concepts $E$ and $F$ s.t. $\mathcal{T} \models C \sqsubseteq E$ and $\mathcal{T} \models F \sqsubseteq D$, and now generate a hypothesis $\mathcal{H}$ that ensures $E \sqsubseteq F$. For $\mathcal{H}$ to be connection-minimal, $E$ and $F$ should have the same syntactic structure except for the concept names, so that we can construct the hypothesis by matching the pairs $(A, B)$ of concept names in which the concepts $E$ and $F$ differ. Intuitively, $\mathcal{H}$ then adds only as few connections between $C$ and $D$ as necessary, which may rule out candidates that are less connected to the observation.
Example 25.
We consider the following TBox:

$\mathcal{T} = \{\, \mathsf{Professor} \sqsubseteq \exists \mathsf{employedAs}.\mathsf{Chair},$
$\quad\ \exists \mathsf{employedAs}.\mathsf{ResearchPosition} \sqsubseteq \mathsf{Researcher},$
$\quad\ \mathsf{Poet} \sqsubseteq \exists \mathsf{employedAs}.\mathsf{ResearchPosition} \,\}$

This TBox has insufficient information to derive $\mathsf{Professor} \sqsubseteq \mathsf{Researcher}$, and we would like to understand why. The following are two possible hypotheses for the corresponding flat TBox abduction problem:

$\mathcal{H}_1 = \{\, \mathsf{Chair} \sqsubseteq \mathsf{ResearchPosition} \,\} \qquad \mathcal{H}_2 = \{\, \mathsf{Professor} \sqsubseteq \mathsf{Poet} \,\}$

If we only consider relevant and explanatory hypotheses, and thus rule out the observation itself, both hypotheses are subset-minimal, cardinality-minimal and semantically minimal. However, only the first one is also connection-minimal. This is because it can be constructed based on the following entailments:

$\mathcal{T} \models \mathsf{Professor} \sqsubseteq \exists \mathsf{employedAs}.\mathsf{Chair}$
$\mathcal{T} \models \exists \mathsf{employedAs}.\mathsf{ResearchPosition} \sqsubseteq \mathsf{Researcher}$

The hypothesis $\mathcal{H}_1$ adds the missing connection that is needed for these entailments to lead to the entailment of the observation. In contrast, the hypothesis $\mathcal{H}_2$ does not only rely on the subsumers of Professor and the subsumees of Researcher, but also on the subsumers of Poet, which is why it is not connection-minimal.
7.3 Size Bounds for Signature-Based Abduction
Table 2: Size bounds and complexity of deciding hypothesis existence for signature-based ABox abduction [47].

Without complex concepts:

| Description Logic | $\mathcal{EL}$ | $\mathcal{ALC}$ | $\mathcal{ALCOI}$ | with functionality |
|---|---|---|---|---|
| worst case size | polynomial | exponential | exponential | not computable |
| deciding existence | PTime-c | ExpTime-c | coNExpTime-c | undecidable |

With complex concepts:

| Description Logic | $\mathcal{EL}$ | $\mathcal{ALC}$ | $\mathcal{ALCOI}$ |
|---|---|---|---|
| worst case size | polynomial | 2-exponential | 3-exponential |
| worst case #individuals | polynomial | exponential | 2-exponential |
| deciding existence | PTime-c | ExpTime-c | see [47] |
Once we allow fresh individuals or complex concepts in hypotheses without putting further constraints, the solution space becomes unbounded, and indeed, depending on the DL, solutions may become very large or may not even have a computable size bound. For the case of signature-based ABox abduction, that is, the case where hypotheses are ABoxes that may only use names from a given signature, this has been investigated in [47]. In particular, the authors looked at size bounds, as well as at the complexity of deciding whether there exists a hypothesis for a given abduction problem. Table 2 gives an overview of the results: indeed, as was already observed in [12], once functional roles are allowed, already the flat abduction problem becomes undecidable, and correspondingly, we cannot even compute a size bound for a given abduction problem. If it were possible to compute a size bound, this would give us decidability, since we could simply iterate over all candidates within this size bound to determine whether a solution exists. But even without functional roles, the complexity is quite high: in the worst case, solutions may become triple-exponentially large and use double-exponentially many fresh individuals [48].
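To see why a computable size bound implies decidability, note that within a fixed bound, only finitely many candidate hypotheses exist up to renaming of fresh individuals, so they can be enumerated. The following sketch illustrates this argument; `assertion_pool` (the finitely many assertions a hypothesis may use) and the reasoner callback `is_viable` are assumptions of this illustration.

```python
from itertools import combinations

def hypothesis_exists(assertion_pool, size_bound, is_viable):
    """Decide hypothesis existence by brute-force enumeration.

    A hypothesis with at most `size_bound` axioms mentions at most
    2 * size_bound individuals, so `assertion_pool` is finite up to
    renaming of fresh individuals. The loop therefore terminates;
    without a computable size bound, no such stopping point exists.
    """
    for size in range(1, size_bound + 1):
        for candidate in combinations(assertion_pool, size):
            if is_viable(frozenset(candidate)):
                return True
    return False
```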
To understand why this happens, we look at the simplest case, namely flat ABox abduction for the $\mathcal{EL}$ family. A first observation is that in $\mathcal{EL}$, fresh individuals are not needed at all, and as a result, we here only have a polynomial bound. Indeed, if there is any hypothesis for a given signature-based abduction problem in $\mathcal{EL}$, then the following is one:
$\mathcal{H}_{\max} = \{A(a) \mid A \in \Sigma,\ a \in \mathsf{Ind}\} \cup \{r(a,b) \mid r \in \Sigma,\ a, b \in \mathsf{Ind}\}.$
Here, $\mathsf{Ind}$ is the set of individual names occurring in the input: $\mathcal{H}_{\max}$ instantiates all individuals in the input in all possible ways using the names in the signature $\Sigma$. Clearly, $\mathcal{H}_{\max}$ is in the required signature, and since one cannot express any inconsistencies in $\mathcal{EL}$, it is also consistent together with the KB. If $\mathcal{H}_{\max}$ does not entail the observation together with the KB, then neither does any other extension of the KB with assertions in the signature. Indeed, if there exists any hypothesis for the abduction problem, we can rename its fresh individuals so that it becomes a subset of $\mathcal{H}_{\max}$. This cannot introduce any inconsistencies, since those cannot be expressed in $\mathcal{EL}$, and it preserves the entailment of the observation. We obtain that if there is a solution to the abduction problem at all, then $\mathcal{H}_{\max}$ must be such a solution. Specifically, we obtain that in $\mathcal{EL}$, signature-based ABox abduction can always be performed in polynomial time. To serve as a meaningful explanation in practice, we may want to remove from $\mathcal{H}_{\max}$ assertions that are unrelated to the observation; this can be done by applying the justification algorithm from Section 3.
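The construction of $\mathcal{H}_{\max}$ is straightforward to implement. A minimal sketch, where assertions are encoded as tuples and `entails` is a hypothetical placeholder for an $\mathcal{EL}$ reasoner call:

```python
def maximal_hypothesis(concept_names, role_names, individuals):
    """Build the maximal signature-based hypothesis for EL:
    every individual is an instance of every concept name, and
    every pair of individuals is related by every role name."""
    concept_assertions = {(A, a) for A in concept_names
                          for a in individuals}
    role_assertions = {(r, a, b) for r in role_names
                       for a in individuals for b in individuals}
    return concept_assertions | role_assertions

# Usage sketch: the abduction problem is solvable iff the KB extended
# with the maximal hypothesis entails the observation.
# h_max = maximal_hypothesis({"A"}, {"r"}, {"a", "b"})
# solvable = entails(kb | h_max, observation)   # reasoner call
```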
Once we move beyond $\mathcal{EL}$, signature-based ABox abduction becomes harder. For instance, simply renaming fresh individuals in a given hypothesis can now invalidate disjointness axioms in the TBox. Moreover, it may be necessary to use a large number of fresh individuals in a hypothesis, as we will see.
To illustrate this, we define a sequence of signature-based abduction problems $\mathcal{P}_1$, $\mathcal{P}_2$, $\ldots$, where each $\mathcal{P}_n$ is of size polynomial in $n$, while any hypothesis for $\mathcal{P}_n$ requires at least $2^n - 1$ fresh individual names. The TBox $\mathcal{T}_n$ of $\mathcal{P}_n$ contains the following axioms, whose function we explain below.
$B_i^0 \sqcap B_i^1 \sqsubseteq \bot$ for $0 \le i < n$  (11)
$A \sqsubseteq B_0^0 \sqcap \ldots \sqcap B_{n-1}^0$  (12)
$\exists r.(B_0^1 \sqcap \ldots \sqcap B_{i-1}^1 \sqcap B_i^0) \sqsubseteq B_i^1$ for $0 \le i < n$  (13)
$\exists r.(B_0^1 \sqcap \ldots \sqcap B_{i-1}^1 \sqcap B_i^1) \sqsubseteq B_i^0$ for $0 \le i < n$  (14)
$\exists r.(B_j^0 \sqcap B_i^0) \sqsubseteq B_i^0$ for $0 \le j < i < n$  (15)
$\exists r.(B_j^0 \sqcap B_i^1) \sqsubseteq B_i^1$ for $0 \le j < i < n$  (16)
$B_0^1 \sqcap \ldots \sqcap B_{n-1}^1 \sqsubseteq B$  (17)
The signature is $\Sigma = \{A, r\}$, and the observation is $B(a)$. The construction uses a common technique for proving exponential complexity bounds: it implements a binary counter. Intuitively, the concept names $B_0^0$, $B_0^1$, $\ldots$, $B_{n-1}^0$, $B_{n-1}^1$ are used to represent numbers between $0$ and $2^n - 1$ using a binary encoding. Here, $B_i^1$ indicates that the $i$th bit of the encoding has the value $1$, and $B_i^0$ indicates that the $i$th bit has the value $0$. For example, for $n = 3$, the number $5$, which is $101$ in binary encoding, would be represented using the concept $B_0^1 \sqcap B_1^0 \sqcap B_2^1$.
Axiom 11 states that no bit can have both the value $0$ and the value $1$ at the same time. Axiom 12 states that instances of the concept $A$ get the value $0$ assigned. The next axioms state that if an individual has an $r$-successor with the value $k$ assigned, then it should get the value $k+1$ assigned. They do this by implementing the increment step of a binary counter. Specifically, Axioms 13 and 14 describe the situation where a bit has to be flipped (all lower bits are $1$), and Axioms 15 and 16 describe the situation where a bit should not be flipped (at least one lower bit is $0$). Finally, Axiom 17 states that, once an individual has the number $2^n - 1$ assigned, it is an instance of $B$.
What do hypotheses for this abduction problem look like? Recall that they are only allowed to use the concept name $A$ and the role name $r$. We can enforce the individual $a$ to become an instance of $B$ by using the assertion $A(b_{2^n-1})$, which assigns the value $0$ to the fresh individual $b_{2^n-1}$, and by linking $b_{2^n-1}$ to $a$ via a chain of $r$-successors, specifically:
$\mathcal{H}_n = \{A(b_{2^n-1}),\ r(a, b_1),\ r(b_1, b_2),\ \ldots,\ r(b_{2^n-2}, b_{2^n-1})\}.$
The TBox axioms then ensure that $\mathcal{T}_n \cup \mathcal{H}_n \models B(a)$: along the chain, the counter value increases by one with every step, so that $a$ gets assigned the value $2^n - 1$. Moreover, there is no other way to enforce this situation.
Theorem 26.
There exists a family $(\mathcal{P}_n)_{n \ge 1}$ of flat signature-based abduction problems such that each $\mathcal{P}_n$ is of size polynomial in $n$, but only admits hypotheses that have at least $2^n$ axioms.
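To make the counting in Theorem 26 concrete, the following sketch (an illustration under the notation chosen above, not code from [48]) generates the chain hypothesis $\mathcal{H}_n$ and simulates the value propagation enforced by Axioms 12 to 17:

```python
def counter_chain(n):
    """Build the hypothesis H_n of the counter construction and simulate
    how the TBox propagates counter values along the r-chain."""
    length = 2 ** n - 1              # fresh individuals b_1, ..., b_{2^n - 1}
    nodes = ["a"] + [f"b{i}" for i in range(1, length + 1)]
    hypothesis = [("A", nodes[-1])]  # Axiom 12: the A-instance gets value 0
    hypothesis += [("r", nodes[i], nodes[i + 1]) for i in range(length)]

    # Axioms 13 to 16: a node whose r-successor has value k gets value k + 1.
    value = {nodes[-1]: 0}
    for pred, succ in zip(reversed(nodes[:-1]), reversed(nodes[1:])):
        value[pred] = value[succ] + 1

    assert value["a"] == 2 ** n - 1  # hence Axiom 17 yields B(a)
    assert len(hypothesis) == 2 ** n # the bound stated in Theorem 26
    return hypothesis

counter_chain(3)  # 8 assertions, using 7 fresh individuals
```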
Using a similar argument, but with two roles, one can show that if complex concepts are admitted instead of fresh individual names, one may even require hypotheses of double-exponential size. For the most expressive logic considered, the situation becomes even worse: here, we can encode a counter with double-exponentially many bits and obtain that hypotheses may be triple-exponential in size, which is similar to a corresponding result for uniform interpolation in [56]. We can also show that these bounds are tight, i.e., larger hypotheses are in general not needed.
Despite these high complexity bounds, there is actually an implemented system that supports signature-based KB abduction with complex concepts, which generalizes signature-based ABox abduction, and that is able to compute hypotheses for realistic ontologies in many cases [49]. Essentially, the method computes a uniform interpolant, in an extended logic, of the conjunction of the KB with the negated observation, seen as formulas in first-order logic. By negating this uniform interpolant, we obtain a formula from which several hypotheses can be extracted.
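Schematically, and only as an outline of the pipeline just described, the method can be arranged as follows. All four helpers are hypothetical callbacks: in [49], a resolution-based calculus plays the role of `uniform_interpolant`, as implemented in forgetting tools such as LETHE [46].

```python
def abduce_via_forgetting(kb, observation, signature,
                          negate, conjoin, uniform_interpolant,
                          extract_hypotheses):
    """Outline of signature-based abduction via uniform interpolation.

    1. Conjoin the KB with the negated observation, viewed as
       first-order formulas.
    2. Forget all names outside the requested signature, which keeps
       exactly the relevant consequences over that signature.
    3. Negate the resulting interpolant; individual hypotheses can
       then be extracted from the negation.
    """
    theory = conjoin(kb, negate(observation))
    interpolant = uniform_interpolant(theory, signature)
    return extract_hypotheses(negate(interpolant))
```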
8 Outlook
We gave an overview of the central types of explanations for reasoning with DL ontologies: justifications, proofs, interpolation and abduction. Our focus here was on explaining (non-)entailments of axioms. Another relevant problem in the context of DLs is explaining query answers in ontology-mediated query answering (OMQA) [73]. Specifically, one may want to explain why a certain tuple is or is not an answer to a given ontology-mediated query. Justifications, proofs and abduction have also been investigated in this setting. We refer the interested reader to [23, 8, 21, 12].
Many of the methods we discussed are also relevant for formalisms other than description logics. The black-box methods for justifications and justification-based proofs can in general be applied to any formalism that has a monotone consequence relation. Our method for computing proofs of minimal tree size is likewise not restricted to DLs, but can be used with a deriver or calculus defined for any other logic. Abduction has also been investigated for existential rules [22], a formalism closely related to DLs.
Justifications were discussed in more detail in an earlier edition of the Reasoning Web Summer School [32]. In addition to the black-box approach discussed here, glass-box approaches try to trace the inferences performed by a reasoning system, for example a tableau-based method [16] or one using inference rules [17]. Those methods usually compute a pinpointing formula, a propositional formula that succinctly expresses the set of all justifications. To compute many justifications more efficiently, encodings into SAT and ASP have also been investigated [67, 62].
In addition to proofs and interpolation, there are other ways of explaining positive entailments that we did not discuss. Horn DLs, which include $\mathcal{EL}$ and DL-Lite, but also others [53], are a special type of DLs for which all entailments can be captured by a single model of the KB, the so-called universal model. This gives rise to another type of explanation of positive entailments that directly makes use of the universal model [8]. DLs with concrete domains make it possible to integrate numerical constraints into axioms. For the resulting KBs, we thus require hybrid explanations that combine explanations of numerical reasoning with logic-based explanations; this is discussed in [4]. In the context of ontology-mediated query answering, works on provenance [20] and Shapley values [19] are also relevant.
We did not discuss aspects of human-computer interaction and user studies on explanations, which are just as relevant for explaining reasoning with DLs. Most of the more advanced explanation techniques discussed here are implemented in Evee, which provides both a Java library [6] and a Protégé plugin [5]: proofs based on different sets of inference rules and optimized for different quality measures, elimination proofs, connection-minimal and signature-based abduction, and counterinterpretations. Evee’s component for signature-based abduction lets users interactively select different signatures based on previous explanations, and can also directly show a proof that further explains how the hypothesis interacts with the KB to entail the observation. A functionality we did not discuss here is that users can specify parts of the vocabulary that they are already familiar with; proofs are then generated in a way that takes this into account. The web application Evonne offers more advanced proof visualizations and interactions [59], and also supports explanations that integrate numerical with logical reasoning. Users can choose between different layouts and exploration modes for looking at proofs, explore a proof starting either from a justification, from a conclusion, or from both, and hide parts of the proof that they have already understood. Evonne can also be used to determine bugs in the ontology that lead to undesired entailments, and offers and visualizes different ways of fixing them in the ontology. There have also been different types of user studies that try to understand in general what types of explanations different users respond to best [7].
Current explanation methods still have limitations compared to the way humans explain things. One example is that humans usually explain by means of contrast: to explain an unexpected or surprising occurrence to someone, we would not give an account of all the reasons for it, but rather focus on the ones that distinguish this occurrence from others. Contrastive explanations [54] are a first step in this direction. A contrastive explanation explains both a positive and a negative entailment at the same time, and thus answers questions such as “Why A but not B?”. There is initial research on contrastive explanations for answer set programming [29], but research into contrastive explanations for DLs is still future work.
References
- [1] Christian Alrabbaa, Franz Baader, Stefan Borgwardt, Raimund Dachselt, Patrick Koopmann, and Julián Méndez. Evonne: Interactive proof visualization for description logics (system description). In Jasmin Blanchette, Laura Kovács, and Dirk Pattinson, editors, Automated Reasoning - 11th International Joint Conference, IJCAR 2022, volume 13385 of Lecture Notes in Computer Science, pages 271–280. Springer, 2022. doi:10.1007/978-3-031-10769-6_16.
- [2] Christian Alrabbaa, Franz Baader, Stefan Borgwardt, Patrick Koopmann, and Alisa Kovtunova. Finding small proofs for description logic entailments: Theory and practice. In Elvira Albert and Laura Kovács, editors, LPAR 2020: 23rd International Conference on Logic for Programming, Artificial Intelligence and Reasoning, volume 73 of EPiC Series in Computing, pages 32–67. EasyChair, 2020. doi:10.29007/NHPP.
- [3] Christian Alrabbaa, Franz Baader, Stefan Borgwardt, Patrick Koopmann, and Alisa Kovtunova. Finding good proofs for description logic entailments using recursive quality measures. In André Platzer and Geoff Sutcliffe, editors, Automated Deduction - CADE 28 - 28th International Conference on Automated Deduction, volume 12699, pages 291–308. Springer, 2021. doi:10.1007/978-3-030-79876-5_17.
- [4] Christian Alrabbaa, Franz Baader, Stefan Borgwardt, Patrick Koopmann, and Alisa Kovtunova. Combining proofs for description logic and concrete domain reasoning. In Anna Fensel, Ana Ozaki, Dumitru Roman, and Ahmet Soylu, editors, Rules and Reasoning - 7th International Joint Conference, RuleML+RR 2023, volume 14244 of Lecture Notes in Computer Science, pages 54–69. Springer, 2023. doi:10.1007/978-3-031-45072-3_4.
- [5] Christian Alrabbaa, Stefan Borgwardt, Tom Friese, Anke Hirsch, Nina Knieriemen, Patrick Koopmann, Alisa Kovtunova, Antonio Krüger, Alexej Popovic, and Ida S. R. Siahaan. Explaining reasoning results for OWL ontologies with evee. In Pierre Marquis, Magdalena Ortiz, and Maurice Pagnucco, editors, Proceedings of the 21st International Conference on Principles of Knowledge Representation and Reasoning, KR 2024, 2024. doi:10.24963/KR.2024/67.
- [6] Christian Alrabbaa, Stefan Borgwardt, Tom Friese, Patrick Koopmann, Julián Méndez, and Alexej Popovic. On the eve of true explainability for OWL ontologies: Description logic proofs with evee and evonne. In Ofer Arieli, Martin Homola, Jean Christoph Jung, and Marie-Laure Mugnier, editors, Proceedings of the 35th International Workshop on Description Logics (DL 2022) co-located with Federated Logic Conference (FLoC 2022), Haifa, Israel, August 7th to 10th, 2022, volume 3263 of CEUR Workshop Proceedings. CEUR-WS.org, 2022. URL: https://ceur-ws.org/Vol-3263/paper-2.pdf.
- [7] Christian Alrabbaa, Stefan Borgwardt, Anke Hirsch, Nina Knieriemen, Alisa Kovtunova, Anna Milena Rothermel, and Frederik Wiehr. In the head of the beholder: Comparing different proof representations. In Guido Governatori and Anni-Yasmin Turhan, editors, Rules and Reasoning - 6th International Joint Conference on Rules and Reasoning, RuleML+RR 2022, volume 13752 of Lecture Notes in Computer Science, pages 211–226. Springer, 2022. doi:10.1007/978-3-031-21541-4_14.
- [8] Christian Alrabbaa, Stefan Borgwardt, Patrick Koopmann, and Alisa Kovtunova. Explaining ontology-mediated query answers using proofs over universal models. In Guido Governatori and Anni-Yasmin Turhan, editors, Rules and Reasoning - 6th International Joint Conference on Rules and Reasoning, RuleML+RR 2022, volume 13752 of Lecture Notes in Computer Science, pages 167–182. Springer, 2022. doi:10.1007/978-3-031-21541-4_11.
- [9] Christian Alrabbaa and Willi Hieke. Explaining non-entailment by model transformation for the description logic $\mathcal{EL}$. In Alessandro Artale, Diego Calvanese, Haofen Wang, and Xiaowang Zhang, editors, Proceedings of the 11th International Joint Conference on Knowledge Graphs, IJCKG 2022, pages 1–9. ACM, 2022. doi:10.1145/3579051.3579060.
- [10] Alessandro Artale, Diego Calvanese, Roman Kontchakov, and Michael Zakharyaschev. The DL-Lite family and relations. J. Artif. Intell. Res., 36:1–69, 2009. doi:10.1613/JAIR.2820.
- [11] Alessandro Artale, Jean Christoph Jung, Andrea Mazzullo, Ana Ozaki, and Frank Wolter. Living without beth and craig: Definitions and interpolants in description logics with nominals and role inclusions. In Proceedings of the 35th AAAI Conference on Artificial Intelligence, AAAI 2021, pages 6193–6201. AAAI Press, 2021. doi:10.1609/AAAI.V35I7.16770.
- [12] Franz Baader, Meghyn Bienvenu, Carsten Lutz, and Frank Wolter. Query and predicate emptiness in ontology-based data access. J. Artif. Intell. Res., 56:1–59, 2016. doi:10.1613/JAIR.4866.
- [13] Franz Baader, Sebastian Brandt, and Carsten Lutz. Pushing the envelope. In Leslie Pack Kaelbling and Alessandro Saffiotti, editors, IJCAI-05, Proceedings of the Nineteenth International Joint Conference on Artificial Intelligence, pages 364–369. Professional Book Center, 2005. URL: http://ijcai.org/Proceedings/05/Papers/0372.pdf.
- [14] Franz Baader, Ian Horrocks, Carsten Lutz, and Ulrike Sattler. An Introduction to Description Logic. Cambridge University Press, 2017. URL: http://www.cambridge.org/de/academic/subjects/computer-science/knowledge-management-databases-and-data-mining/introduction-description-logic?format=PB#17zVGeWD2TZUeu6s.97.
- [15] Franz Baader and Rafael Peñaloza. Automata-based axiom pinpointing. J. Autom. Reason., 45(2):91–129, 2010. doi:10.1007/S10817-010-9181-2.
- [16] Franz Baader and Rafael Peñaloza. Axiom pinpointing in general tableaux. J. Log. Comput., 20(1):5–34, 2010. doi:10.1093/LOGCOM/EXN058.
- [17] Franz Baader and Boontawee Suntisrivaraporn. Debugging SNOMED CT using axiom pinpointing in the description logic $\mathcal{EL}^+$. In Ronald Cornet and Kent A. Spackman, editors, Proceedings of the Third International Conference on Knowledge Representation in Medicine, volume 410 of CEUR Workshop Proceedings. CEUR-WS.org, 2008. URL: https://ceur-ws.org/Vol-410/Paper01.pdf.
- [18] Meghyn Bienvenu. Complexity of abduction in the family of lightweight description logics. In Gerhard Brewka and Jérôme Lang, editors, Principles of Knowledge Representation and Reasoning: Proceedings of the Eleventh International Conference, KR 2008, pages 220–230. AAAI Press, 2008. URL: http://www.aaai.org/Library/KR/2008/kr08-022.php.
- [19] Meghyn Bienvenu, Diego Figueira, and Pierre Lafourcade. Shapley revisited: Tractable responsibility measures for query answers. Proc. ACM Manag. Data, 3(2):112:1–112:26, 2025. doi:10.1145/3725249.
- [20] Diego Calvanese, Davide Lanti, Ana Ozaki, Rafael Peñaloza, and Guohui Xiao. Enriching ontology-based data access with provenance. In Sarit Kraus, editor, Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, pages 1616–1623. ijcai.org, 2019. doi:10.24963/IJCAI.2019/224.
- [21] Diego Calvanese, Magdalena Ortiz, Mantas Simkus, and Giorgio Stefanoni. Reasoning about explanations for negative query answers in DL-Lite. J. Artif. Intell. Res., 48:635–669, 2013. doi:10.1613/jair.3870.
- [22] İsmail İlkan Ceylan, Thomas Lukasiewicz, Enrico Malizia, Cristian Molinaro, and Andrius Vaicenavicius. Explanations for negative query answers under existential rules. In Diego Calvanese, Esra Erdem, and Michael Thielscher, editors, Proceedings of the 17th International Conference on Principles of Knowledge Representation and Reasoning, KR 2020, pages 223–232, 2020. doi:10.24963/KR.2020/23.
- [23] İsmail İlkan Ceylan, Thomas Lukasiewicz, Enrico Malizia, and Andrius Vaicenavicius. Explanations for ontology-mediated query answering in description logics. In Giuseppe De Giacomo, Alejandro Catalá, Bistra Dilkina, Michela Milano, Senén Barro, Alberto Bugarín, and Jérôme Lang, editors, ECAI 2020 - 24th European Conference on Artificial Intelligence, 29 August-8 September 2020, Santiago de Compostela, Spain, August 29 - September 8, 2020 - Including 10th Conference on Prestigious Applications of Artificial Intelligence (PAIS 2020), volume 325 of Frontiers in Artificial Intelligence and Applications, pages 672–679. IOS Press, 2020. doi:10.3233/FAIA200153.
- [24] William Craig. Three uses of the herbrand-gentzen theorem in relating model theory and proof theory. J. Symb. Log., 22(3):269–285, 1957. doi:10.2307/2963594.
- [25] David Tena Cucala, Bernardo Cuenca Grau, and Ian Horrocks. 15 years of consequence-based reasoning. In Carsten Lutz, Uli Sattler, Cesare Tinelli, Anni-Yasmin Turhan, and Frank Wolter, editors, Description Logic, Theory Combination, and All That - Essays Dedicated to Franz Baader on the Occasion of His 60th Birthday, volume 11560 of Lecture Notes in Computer Science, pages 573–587. Springer, 2019. doi:10.1007/978-3-030-22102-7_27.
- [26] David Tena Cucala, Bernardo Cuenca Grau, and Ian Horrocks. Pay-as-you-go consequence-based reasoning for the description logic $\mathcal{SROIQ}$. Artif. Intell., 298:103518, 2021. doi:10.1016/J.ARTINT.2021.103518.
- [27] Jianfeng Du, Hai Wan, and Huaguan Ma. Practical TBox abduction based on justification patterns. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, pages 1100–1106, 2017. doi:10.1609/AAAI.V31I1.10683.
- [28] Jianfeng Du, Kewen Wang, and Yi-Dong Shen. A tractable approach to ABox abduction over description logic ontologies. In Carla E. Brodley and Peter Stone, editors, Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, pages 1034–1040. AAAI Press, 2014. doi:10.1609/AAAI.V28I1.8852.
- [29] Thomas Eiter, Tobias Geibinger, and Johannes Oetsch. Contrastive explanations for answer-set programs. In European Conference on Logics in Artificial Intelligence, pages 73–89. Springer, 2023. doi:10.1007/978-3-031-43619-2_6.
- [30] Corinna Elsenbroich, Oliver Kutz, and Ulrike Sattler. A case for abductive reasoning over ontologies. In Bernardo Cuenca Grau, Pascal Hitzler, Conor Shankey, and Evan Wallace, editors, Proceedings of the OWLED*06 Workshop on OWL: Experiences and Directions, volume 216 of CEUR Workshop Proceedings. CEUR-WS.org, 2006. URL: https://ceur-ws.org/Vol-216/submission_25.pdf.
- [31] Birte Glimm, Ian Horrocks, Boris Motik, Giorgos Stoilos, and Zhe Wang. HermiT: An OWL 2 reasoner. J. Autom. Reason., 53(3):245–269, 2014. doi:10.1007/S10817-014-9305-1.
- [32] Birte Glimm and Yevgeny Kazakov. Classical algorithms for reasoning and explanation in description logics. In Markus Krötzsch and Daria Stepanova, editors, Reasoning Web. Explainable Artificial Intelligence - 15th International Summer School 2019, Bolzano, Italy, September 20-24, 2019, Tutorial Lectures, volume 11810 of Lecture Notes in Computer Science, pages 1–64. Springer, 2019. doi:10.1007/978-3-030-31423-1_1.
- [33] Bernardo Cuenca Grau, Ian Horrocks, Boris Motik, Bijan Parsia, Peter F. Patel-Schneider, and Ulrike Sattler. OWL 2: The next step for OWL. J. Web Semant., 6(4):309–322, 2008. doi:10.1016/J.WEBSEM.2008.05.001.
- [34] Fajar Haifani, Patrick Koopmann, Sophie Tourret, and Christoph Weidenbach. Connection-minimal abduction in $\mathcal{EL}$ via translation to FOL. In Jasmin Blanchette, Laura Kovács, and Dirk Pattinson, editors, Automated Reasoning - 11th International Joint Conference, IJCAR 2022, volume 13385 of Lecture Notes in Computer Science, pages 188–207. Springer, 2022. doi:10.1007/978-3-031-10769-6_12.
- [35] Ken Halland and Katarina Britz. ABox abduction in $\mathcal{ALC}$ using a DL tableau. In 2012 South African Institute of Computer Scientists and Information Technologists Conference, SAICSIT ’12, pages 51–58, 2012. doi:10.1145/2389836.2389843.
- [36] Martin Hofmann. Proof-theoretic approach to description-logic. In 20th IEEE Symposium on Logic in Computer Science (LICS 2005), pages 229–237. IEEE Computer Society, 2005. doi:10.1109/LICS.2005.38.
- [37] Matthew Horridge, Bijan Parsia, and Ulrike Sattler. Laconic and precise justifications in OWL. In Amit P. Sheth, Steffen Staab, Mike Dean, Massimo Paolucci, Diana Maynard, Timothy W. Finin, and Krishnaprasad Thirunarayan, editors, The Semantic Web - ISWC 2008, volume 5318 of Lecture Notes in Computer Science, pages 323–338. Springer, 2008. doi:10.1007/978-3-540-88564-1_21.
- [38] Matthew Horridge, Bijan Parsia, and Ulrike Sattler. Explaining inconsistencies in OWL ontologies. In Lluís Godo and Andrea Pugliese, editors, Scalable Uncertainty Management, Third International Conference, SUM 2009, volume 5785 of Lecture Notes in Computer Science, pages 124–137. Springer, 2009. doi:10.1007/978-3-642-04388-8_11.
- [39] Matthew Horridge, Bijan Parsia, and Ulrike Sattler. Justification oriented proofs in OWL. In Peter F. Patel-Schneider, Yue Pan, Pascal Hitzler, Peter Mika, Lei Zhang, Jeff Z. Pan, Ian Horrocks, and Birte Glimm, editors, The Semantic Web - ISWC 2010 - 9th International Semantic Web Conference, ISWC 2010, volume 6496 of Lecture Notes in Computer Science, pages 354–369. Springer, 2010. doi:10.1007/978-3-642-17746-0_23.
- [40] Ian Horrocks, Oliver Kutz, and Ulrike Sattler. The even more irresistible $\mathcal{SROIQ}$. In Patrick Doherty, John Mylopoulos, and Christopher A. Welty, editors, Proceedings, Tenth International Conference on Principles of Knowledge Representation and Reasoning, KR 2006, pages 57–67. AAAI Press, 2006. URL: http://www.aaai.org/Library/KR/2006/kr06-009.php.
- [41] Yevgeny Kazakov. $\mathcal{RIQ}$ and $\mathcal{SROIQ}$ are harder than $\mathcal{SHOIQ}$. In Gerhard Brewka and Jérôme Lang, editors, Principles of Knowledge Representation and Reasoning: Proceedings of the Eleventh International Conference, KR 2008, pages 274–284. AAAI Press, 2008. URL: http://www.aaai.org/Library/KR/2008/kr08-027.php.
- [42] Yevgeny Kazakov, Pavel Klinov, and Alexander Stupnikov. Towards reusable explanation services in protege. In Alessandro Artale, Birte Glimm, and Roman Kontchakov, editors, Proceedings of the 30th International Workshop on Description Logics, volume 1879 of CEUR Workshop Proceedings. CEUR-WS.org, 2017. URL: http://ceur-ws.org/Vol-1879/paper31.pdf.
- [43] Yevgeny Kazakov, Markus Krötzsch, and Frantisek Simancik. The incredible ELK - from polynomial procedures to efficient reasoning with ontologies. J. Autom. Reason., 53(1):1–61, 2014. doi:10.1007/S10817-013-9296-3.
- [44] Szymon Klarman, Ulle Endriss, and Stefan Schlobach. ABox abduction in the description logic $\mathcal{ALC}$. Journal of Automated Reasoning, 46(1):43–80, 2011. doi:10.1007/s10817-010-9168-z.
- [45] Boris Konev, Dirk Walther, and Frank Wolter. Forgetting and uniform interpolation in large-scale description logic terminologies. In Proceedings of the 21st International Joint Conference on Artificial Intelligence, IJCAI 2009, pages 830–835, 2009. URL: http://ijcai.org/Proceedings/09/Papers/142.pdf.
- [46] Patrick Koopmann. LETHE: forgetting and uniform interpolation for expressive description logics. Künstliche Intell., 34(3):381–387, 2020. doi:10.1007/S13218-020-00655-W.
- [47] Patrick Koopmann. Signature-based abduction with fresh individuals and complex concepts for description logics. In Zhi-Hua Zhou, editor, Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI 2021, pages 1929–1935. ijcai.org, 2021. doi:10.24963/IJCAI.2021/266.
- [48] Patrick Koopmann. Signature-based ABox abduction in is hard. In Renate A. Schmidt, Christoph Wernhard, and Yizheng Zhao, editors, Proceedings of the Second Workshop on Second-Order Quantifier Elimination and Related Topics (SOQE 2021), volume 3009 of CEUR Workshop Proceedings, pages 61–74. CEUR-WS.org, 2021. URL: https://ceur-ws.org/Vol-3009/paper3.pdf.
- [49] Patrick Koopmann, Warren Del-Pinto, Sophie Tourret, and Renate A. Schmidt. Signature-based abduction for expressive description logics. In Diego Calvanese, Esra Erdem, and Michael Thielscher, editors, Proceedings of the 17th International Conference on Principles of Knowledge Representation and Reasoning, KR 2020, pages 592–602, 2020. doi:10.24963/KR.2020/59.
- [50] Patrick Koopmann and Renate A. Schmidt. Count and forget: Uniform interpolation of $\mathcal{SHQ}$-ontologies. In Stéphane Demri, Deepak Kapur, and Christoph Weidenbach, editors, Automated Reasoning - 7th International Joint Conference, IJCAR 2014, volume 8562 of Lecture Notes in Computer Science, pages 434–448. Springer, 2014. doi:10.1007/978-3-319-08587-6_34.
- [51] Patrick Koopmann and Renate A. Schmidt. Uniform interpolation and forgetting for ontologies with ABoxes. In Blai Bonet and Sven Koenig, editors, Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, pages 175–181. AAAI Press, 2015. doi:10.1609/AAAI.V29I1.9206.
- [52] Patrick Koopmann, Christoph Wernhard, and Frank Wolter. Interpolation in classical propositional logic, 2025. Final version to appear in Balder ten Cate, Jean Christoph Jung, Patrick Koopmann, Christoph Wernhard and Frank Wolter, editors, Theory and Applications of Craig Interpolation. Ubiquity Press, 2026. doi:10.48550/arXiv.2508.11449.
- [53] Markus Krötzsch, Sebastian Rudolph, and Pascal Hitzler. Complexities of Horn description logics. ACM Trans. Comput. Log., 14(1):2:1–2:36, 2013. doi:10.1145/2422085.2422087.
- [54] Peter Lipton. Contrastive explanation. Royal Institute of Philosophy Supplements, 27:247–266, 1990.
- [55] Michel Ludwig and Boris Konev. Practical uniform interpolation and forgetting for tboxes with applications to logical difference. In Principles of Knowledge Representation and Reasoning: Proceedings of the Fourteenth International Conference, KR 2014. AAAI Press, 2014. URL: http://www.aaai.org/ocs/index.php/KR/KR14/paper/view/7985.
- [56] Carsten Lutz and Frank Wolter. Foundations for uniform interpolation and forgetting in expressive description logics. In IJCAI 2011, Proceedings of the 22nd International Joint Conference on Artificial Intelligence, pages 989–995. IJCAI/AAAI, 2011. doi:10.5591/978-1-57735-516-8/IJCAI11-170.
- [57] Tim S. Lyon and Jonas Karge. Constructive interpolation and concept-based beth definability for description logics via sequents. In Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, IJCAI 2024, pages 3484–3492. ijcai.org, 2024. URL: https://www.ijcai.org/proceedings/2024/386.
- [58] Deborah L. McGuinness and Alexander Borgida. Explaining subsumption in description logics. In Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence, IJCAI 95, pages 816–821. Morgan Kaufmann, 1995. URL: http://ijcai.org/Proceedings/95-1/Papers/105.pdf.
- [59] Julián Méndez, Christian Alrabbaa, Patrick Koopmann, Ricardo Langner, Franz Baader, and Raimund Dachselt. Evonne: A visual tool for explaining reasoning with OWL ontologies and supporting interactive debugging. Comput. Graph. Forum, 42(6), 2023. doi:10.1111/CGF.14730.
- [60] Mark A. Musen. The Protégé project: A look back and a look forward. AI Matters, 1(4):4–12, 2015. doi:10.1145/2757001.2757003.
- [61] Bijan Parsia, Nicolas Matentzoglu, Rafael S. Gonçalves, Birte Glimm, and Andreas Steigmiller. The OWL reasoner evaluation (ORE) 2015 competition report. J. Autom. Reason., 59(4):455–482, 2017. doi:10.1007/S10817-017-9406-8.
- [62] Rafael Peñaloza and Francesco Ricca. Pinpointing axioms in ontologies via ASP. In Georg Gottlob, Daniela Inclezan, and Marco Maratea, editors, Logic Programming and Nonmonotonic Reasoning - 16th International Conference, LPNMR 2022, Genova, Italy, September 5-9, 2022, Proceedings, volume 13416 of Lecture Notes in Computer Science, pages 315–321. Springer, 2022. doi:10.1007/978-3-031-15707-3_24.
- [63] Júlia Pukancová and Martin Homola. Tableau-based ABox abduction for the $\mathcal{ALCHO}$ description logic. In Proceedings of the 30th International Workshop on Description Logics, 2017. URL: http://ceur-ws.org/Vol-1879/paper11.pdf.
- [64] Raymond Reiter. A theory of diagnosis from first principles. Artif. Intell., 32(1):57–95, 1987. doi:10.1016/0004-3702(87)90062-2.
- [65] Stefan Schlobach. Explaining subsumption by optimal interpolation. In José Júlio Alferes and João Alexandre Leite, editors, Logics in Artificial Intelligence, 9th European Conference, JELIA 2004, volume 3229 of Lecture Notes in Computer Science, pages 413–425. Springer, 2004. doi:10.1007/978-3-540-30227-8_35.
- [66] Stefan Schlobach and Ronald Cornet. Non-standard reasoning services for the debugging of description logic terminologies. In Georg Gottlob and Toby Walsh, editors, IJCAI-03, Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence, pages 355–362. Morgan Kaufmann, 2003. URL: http://ijcai.org/Proceedings/03/Papers/053.pdf.
- [67] Roberto Sebastiani and Michele Vescovi. Axiom pinpointing in lightweight description logics via horn-sat encoding and conflict analysis. In Renate A. Schmidt, editor, Automated Deduction - CADE-22, 22nd International Conference on Automated Deduction, Montreal, Canada, August 2-7, 2009. Proceedings, volume 5663 of Lecture Notes in Computer Science, pages 84–99. Springer, 2009. doi:10.1007/978-3-642-02959-2_6.
- [68] Frantisek Simancik, Yevgeny Kazakov, and Ian Horrocks. Consequence-based reasoning beyond Horn ontologies. In Toby Walsh, editor, IJCAI 2011, Proceedings of the 22nd International Joint Conference on Artificial Intelligence, pages 1093–1098. IJCAI/AAAI, 2011. doi:10.5591/978-1-57735-516-8/IJCAI11-187.
- [69] Evren Sirin, Bijan Parsia, Bernardo Cuenca Grau, Aditya Kalyanpur, and Yarden Katz. Pellet: A practical OWL-DL reasoner. J. Web Semant., 5(2):51–53, 2007. doi:10.1016/J.WEBSEM.2007.03.004.
- [70] Andreas Steigmiller, Thorsten Liebig, and Birte Glimm. Konclude: System description. J. Web Semant., 27-28:78–85, 2014. doi:10.1016/J.WEBSEM.2014.06.003.
- [71] Balder ten Cate, Enrico Franconi, and Inanç Seylan. Beth definability in expressive description logics. J. Artif. Intell. Res., 48:347–414, 2013. doi:10.1613/JAIR.4057.
- [72] Fang Wei-Kleiner, Zlatan Dragisic, and Patrick Lambrix. Abduction framework for repairing incomplete ontologies: Complexity results and algorithms. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, pages 1120–1127. AAAI Press, 2014. doi:10.1609/AAAI.V28I1.8858.
- [73] Guohui Xiao, Diego Calvanese, Roman Kontchakov, Domenico Lembo, Antonella Poggi, Riccardo Rosati, and Michael Zakharyaschev. Ontology-based data access: A survey. In Jérôme Lang, editor, Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden, pages 5511–5519. ijcai.org, 2018. doi:10.24963/IJCAI.2018/777.
- [74] Yizheng Zhao and Renate A. Schmidt. FAME(Q): an automated tool for forgetting in description logics with qualified number restrictions. In Automated Deduction - CADE 27 - 27th International Conference on Automated Deduction, volume 11716 of Lecture Notes in Computer Science, pages 568–579. Springer, 2019. doi:10.1007/978-3-030-29436-6_34.
