eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Dagstuhl Seminar Proceedings
1862-4405
2006-04-25
5431
1
23
10.4230/DagSemProc.05431.1
article
05431 Abstracts Collection – Deduction and Applications
Baader, Franz
Baumgartner, Peter
Nieuwenhuis, Robert
Voronkov, Andrei
From 23.10.05 to 28.10.05, the Dagstuhl Seminar 05431 "Deduction and Applications" was held
in the International Conference and Research Center (IBFI),
Schloss Dagstuhl.
During the seminar, several participants presented their current
research, and ongoing work and open problems were discussed. Abstracts of
the presentations given during the seminar as well as abstracts of
seminar results and ideas are put together in this paper. The first section
describes the seminar topics and goals in general.
Links to extended abstracts or full papers are provided, if available.
https://drops.dagstuhl.de/storage/16dagstuhl-seminar-proceedings/dsp-vol05431/DagSemProc.05431.1/DagSemProc.05431.1.pdf
Formal logic
deduction
artificial intelligence
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Dagstuhl Seminar Proceedings
1862-4405
2006-04-25
5431
1
3
10.4230/DagSemProc.05431.2
article
05431 Executive Summary – Deduction and Applications
Baader, Franz
Baumgartner, Peter
Nieuwenhuis, Robert
Voronkov, Andrei
Formal logic provides a mathematical foundation for many areas of
computer science. Logical languages are used as specification
languages in, e.g., program development and verification,
hardware design and verification, relational databases, and many
subfields of Artificial Intelligence. Automated Deduction is
concerned with the design and implementation of algorithms based on
logical deduction for solving problems in these areas.
Recent years have seen considerable improvements in both
basic automated deduction technology and its (real-world)
applications. Accordingly, the goal of the seminar was to bring
together researchers from both sides in order to survey the
state of the art and to gather ideas on how to advance automated
deduction from an application-oriented point of view.
https://drops.dagstuhl.de/storage/16dagstuhl-seminar-proceedings/dsp-vol05431/DagSemProc.05431.2/DagSemProc.05431.2.pdf
Formal logic
deduction
artificial intelligence
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Dagstuhl Seminar Proceedings
1862-4405
2006-04-25
5431
1
17
10.4230/DagSemProc.05431.3
article
Automatically Generating Loop Invariants Using Quantifier Elimination
Kapur, Deepak
An approach for automatically generating loop invariants using
quantifier elimination is proposed. An invariant of a loop is hypothesized as
a parameterized formula. Parameters in the invariant are discovered by generating
constraints on the parameters by ensuring that the formula is indeed
preserved by the execution path corresponding to every basic cycle of the loop.
The parameterized formula can be successively refined by considering execution
paths one by one; heuristics can be developed for determining the order in
which the paths are considered. Initialization of program variables as well as
the precondition and postcondition of the loop, if available, can also be used
to further refine the hypothesized invariant. Constraints on parameters generated
in this way are solved for possible values of parameters. If no solution is
possible, this means that an invariant of the hypothesized form does not exist
for the loop. Otherwise, if the parametric constraints are solvable, then under
certain conditions on methods for generating these constraints, the strongest
possible invariant of the hypothesized form can be generated from most general
solutions of the parametric constraints. The approach is illustrated using the
first-order theory of polynomial equations as well as Presburger arithmetic.
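The parameterized-template idea can be sketched in a few lines. This is an illustrative toy, not the paper's quantifier-elimination procedure: the loop, the template x == a*y + b, and the parameter search ranges are all invented for this example, and the parameters are found by checking the template against a bounded execution trace rather than by solving the constraints symbolically.

```python
from itertools import product

# Toy loop: x = 0; y = 0; while ...: x += 2; y += 1
# Hypothesized invariant template: x == a*y + b, with unknown parameters a, b.
# Initialization forces b = 0; preservation along the single basic cycle
# forces a = 2.

def template_holds(a, b, steps=10):
    x, y = 0, 0
    for _ in range(steps):
        if x != a * y + b:      # invariant template violated at this iteration
            return False
        x, y = x + 2, y + 1     # one basic cycle of the loop
    return True

# Search a small parameter space for values satisfying all constraints.
solutions = [(a, b) for a, b in product(range(-3, 4), repeat=2)
             if template_holds(a, b)]
print(solutions)                # [(2, 0)], i.e. the invariant x == 2*y
```

An empty `solutions` list would correspond to the case in the abstract where no invariant of the hypothesized form exists.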
https://drops.dagstuhl.de/storage/16dagstuhl-seminar-proceedings/dsp-vol05431/DagSemProc.05431.3/DagSemProc.05431.3.pdf
Program verification
loop invariants
inductive assertions
quantifier elimination
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Dagstuhl Seminar Proceedings
1862-4405
2006-04-25
5431
1
0
10.4230/DagSemProc.05431.4
article
On Algorithms and Complexity for Sets with Cardinality Constraints
Kuncak, Viktor
Rinard, Martin
Marnette, Bruno
The complexity of data structures in modern programs presents a
challenge for current analysis and verification tools,
forcing them to report false alarms or miss errors. I will
describe a new approach for verifying programs with complex
data structures. This approach builds on program analysis
techniques, as well as decision procedures and theorem
provers.
The approach is based on specifying interfaces of data
structures by writing procedure preconditions and
postconditions in terms of abstract sets and relations. Our
system then separately verifies that 1) each data structure
conforms to its interface, 2) each data structure interface
is used correctly, and 3) desired high-level
application-specific invariants hold. The system verifies
these conditions by combining decision procedures, theorem
provers, and static analyses, promising an unprecedented
tradeoff between precision and scalability. In the context
of this system, we have developed new decision procedures
for reasoning about sets and their cardinalities, approaches
for extending the applicability of existing decision
procedures, and techniques for modular analysis of
dynamically created data structure instances.
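A minimal illustration of deciding a quantifier-free constraint over two sets and their cardinalities, via the standard Venn-region encoding. The exhaustive search bound of 4 is an illustrative assumption for this sketch, not the small-model bound established by the authors' decision procedures.

```python
from itertools import product

# Each of the four Venn regions of sets A, B gets a non-negative
# cardinality: (only A, only B, both, neither). A quantifier-free
# cardinality constraint is valid iff no assignment of region sizes
# falsifies it; here we simply enumerate region sizes up to a bound.

def valid(formula, bound=4):
    for only_a, only_b, both, neither in product(range(bound + 1), repeat=4):
        a = only_a + both                  # |A|
        b = only_b + both                  # |B|
        union = only_a + only_b + both     # |A ∪ B|
        inter = both                       # |A ∩ B|
        if not formula(a, b, union, inter):
            return False                   # counterexample found
    return True

# Inclusion-exclusion, |A ∪ B| = |A| + |B| - |A ∩ B|, is valid:
print(valid(lambda a, b, u, i: u == a + b - i))   # True
# |A ∪ B| = |A| + |B| is not (fails when A ∩ B is nonempty):
print(valid(lambda a, b, u, i: u == a + b))       # False
```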
https://drops.dagstuhl.de/storage/16dagstuhl-seminar-proceedings/dsp-vol05431/DagSemProc.05431.4/DagSemProc.05431.4.pdf
Static analysis
data structure consistency
program verification
decision procedures
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Dagstuhl Seminar Proceedings
1862-4405
2006-04-25
5431
1
7
10.4230/DagSemProc.05431.5
article
Proof Presentation
Siekmann, Jörg
The talk is based on a book about the human-oriented presentation of mathematical proofs in natural
language, in a style such as we may find in a typical mathematical textbook.
How can a proof be other than human-oriented?
What we have in mind is a deduction system, implemented on a computer,
that proves – with some human interaction – the theorems of a mathematical textbook such as may be used
in an undergraduate course. The proofs generated by such systems today are far from
being human-oriented and can in general only be read by an expert in the respective
field: proofs of several hundred steps (for a common mathematical theorem), more
than a thousand steps (for an unusually difficult theorem), or more than ten thousand
deduction steps (in a program verification task) are not uncommon.
Although these proofs are provably correct, they are typically marred by many problems:
to start with, they are usually written in a highly specialised logic such as the
resolution calculus or in a matrix format, or, even worse, they may be generated by a
model checker. Moreover, they record every logical step that may be necessary for the
minute detail of some term transformation (such as, for example, the rearrangement of
brackets) alongside those arguments a mathematician would call important steps or
eureka steps, which capture the main idea of the proof. Only these would a mathematician be willing
to communicate to fellow mathematicians – provided they have a similar academic
background and work in the same mathematical discipline. If not, i.e. if the proof was
written, say, for an undergraduate textbook, the notion of an important step may be
viewed differently depending on the intended reader.
Now, even if we were able to isolate the ten important steps – out of those hundreds
of machine-generated proof steps – there would still be the startling problem
that they are usually written in the "wrong" order. A human reader might say "they do
not have a logical structure"; which is to say that they do of course follow a logical pattern
(as they are correctly generated by a machine), but, given the conventions of the respective
field and the way the trained mathematician in this field is used to communicating,
they are somewhat strange and ill-structured.
And finally, there is the problem that the proofs are purely formal and recorded in a
predicate logic that is very far from the usual presentation, which relies on a mixture
of natural-language arguments interspersed with some formalism.
The book (about 800 pages), which gives an answer to some of these problems, is to appear with Elsevier.
https://drops.dagstuhl.de/storage/16dagstuhl-seminar-proceedings/dsp-vol05431/DagSemProc.05431.5/DagSemProc.05431.5.pdf
Artificial intelligence
mathematics
proof presentation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Dagstuhl Seminar Proceedings
1862-4405
2006-04-25
5431
1
1
10.4230/DagSemProc.05431.6
article
Proving and Disproving Termination in the Dependency Pair Framework
Giesl, Jürgen
Thiemann, René
Schneider-Kamp, Peter
The dependency pair framework is a new general concept for integrating
arbitrary techniques for termination analysis of term rewriting.
In this way, the benefits of different techniques can be combined, and
their modularity and power are increased significantly. Moreover, this
framework facilitates the development of new methods for termination analysis.
Traditionally, research on termination focused on methods that
prove termination, and there were hardly any approaches for disproving
it. We show that, with the dependency pair framework, one
can combine the search for a proof and for a disproof of termination.
In this way, we obtain the first powerful method which can also
verify non-termination of term rewrite systems.
We implemented and evaluated our contributions in the automated termination
prover AProVE. Due to these results, AProVE was the winning tool in the
International Competition of Termination Provers 2005, both for proving and
for disproving termination of term rewriting.
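Non-termination proofs of this kind can be illustrated with a toy loop search. String rewriting stands in for term rewriting here to keep the sketch short; the rules, the breadth-first search, and the bounds are illustrative assumptions for this example, not AProVE's actual procedure.

```python
from collections import deque

# A derivation s ->+ u in which u contains s again witnesses a loop,
# and hence non-termination of the rewrite system. The length and step
# bounds are arbitrary cut-offs for this sketch.

def rewrites(s, rules):
    """All strings reachable from s in one rewrite step."""
    for lhs, rhs in rules:
        i = s.find(lhs)
        while i != -1:
            yield s[:i] + rhs + s[i + len(lhs):]
            i = s.find(lhs, i + 1)

def find_loop(start, rules, max_len=12, max_steps=10000):
    """Breadth-first search for a looping derivation start ->+ ...start..."""
    seen = {start}
    queue = deque([(start, [start])])
    while queue and max_steps > 0:
        max_steps -= 1
        s, path = queue.popleft()
        for t in rewrites(s, rules):
            if start in t:                  # loop found
                return path + [t]
            if len(t) <= max_len and t not in seen:
                seen.add(t)
                queue.append((t, path + [t]))
    return None                             # no loop within the bounds

print(find_loop("ab", [("ab", "ba")]))   # None: no loop found
print(find_loop("ab", [("b", "ab")]))    # ['ab', 'aab']: the rule b -> ab loops
```

Note that a `None` result only means no loop was found within the bounds; it is not a termination proof, which is exactly why combining the search for proofs and disproofs, as the framework does, is valuable.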
https://drops.dagstuhl.de/storage/16dagstuhl-seminar-proceedings/dsp-vol05431/DagSemProc.05431.6/DagSemProc.05431.6.pdf
Termination
non-termination
term rewriting
dependency pairs