Technical Communications of the 26th International Conference on Logic Programming, ICLP 2010, July 16-19, 2010, Edinburgh, Scotland, UK
ICLP 2010
July 16-19, 2010
Edinburgh, Scotland, UK
International Conference on Logic Programming
ICLP
https://www.cs.nmsu.edu/ALP/conferences/
https://dblp.org/db/conf/iclp
Leibniz International Proceedings in Informatics
LIPIcs
https://www.dagstuhl.de/dagpub/1868-8969
https://dblp.org/db/series/lipics
1868-8969
Manuel
Hermenegildo
Manuel Hermenegildo
Torsten
Schaub
Torsten Schaub
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
7
2010
978-3-939897-17-0
https://www.dagstuhl.de/dagpub/978-3-939897-17-0
Titlepage, Table of Contents, Conference Organization
Frontmatter including titlepage, table of contents and conference organization.
Titlepage
Table of Contents
Conference Organization
i-x
Front Matter
Manuel
Hermenegildo
Manuel Hermenegildo
Torsten
Schaub
Torsten Schaub
10.4230/LIPIcs.ICLP.2010.i
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Introduction to the Technical Communications of the 26th International Conference on Logic Programming
Introduction to the Technical Communications of the 26th International Conference on Logic Programming.
Introduction
xi-xiv
Regular Paper
Manuel
Hermenegildo
Manuel Hermenegildo
Torsten
Schaub
Torsten Schaub
10.4230/LIPIcs.ICLP.2010.XI
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Datalog for Enterprise Software: from Industrial Applications to Research (Invited Talk)
LogicBlox is a platform for the rapid development of enterprise applications in the domains of decision automation, analytics, and planning. Although the LogicBlox platform embodies several components and technology decisions (e.g., an emphasis on software-as-a-service), the key substrate and glue is an implementation of the Datalog language. All application development on the LogicBlox platform is done declaratively in Datalog: the language is used to query large data sets, but also to develop web and desktop GUIs (with the help of pre-defined libraries), to interface with solvers, statistics tools, and optimizers for complex analytics solutions, and to express the overall business logic of the application.
The goal of this talk is to present both the business case for Datalog and the fruitful interaction of research and industrial applications in the LogicBlox context.
Datalog
decision automation
analytics
planning
1-1
Invited Talk
Molham
Aref
Molham Aref
10.4230/LIPIcs.ICLP.2010.1
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
A Logical Paradigm for Systems Biology (Invited Talk)
Biologists use diagrams to represent complex systems of interaction between molecular species. These graphical notations encompass two types of information: interactions (e.g. protein complexation, modification, binding to a gene, etc.) and regulations (of an interaction or a transcription). Based on these structures, mathematical models can be developed by equipping such molecular interaction networks with kinetic expressions leading to quantitative models of mainly two kinds: ordinary differential equations for a continuous interpretation of the kinetics and continuous-time Markov chains for a stochastic interpretation of the kinetics.
Since 2002, we have been investigating the transposition of programming concepts and tools to the analysis of living processes at the cellular level.
temporal logic
model-checking
systems biology
hybrid systems
2-3
Invited Talk
François
Fages
François Fages
10.4230/LIPIcs.ICLP.2010.2
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Runtime Addition of Integrity Constraints in an Abductive Proof Procedure
Abductive Logic Programming (ALP) is a computationally founded representation of abductive reasoning. In most ALP frameworks, integrity constraints express domain-specific logical relationships that abductive answers are required to satisfy.
Integrity constraints are usually known a priori. However, in some applications (such as interactive abductive logic programming, multi-agent interactions, contracting) it makes sense to relax this assumption, in order to let the abductive reasoning start with incomplete knowledge of integrity constraints, and to continue without restarting when new integrity constraints become known.
In this paper, we propose a declarative semantics for abductive logic programming with addition of integrity constraints during the abductive reasoning process, an operational instantiation (with formal termination, soundness and completeness properties) and an implementation of such a framework based on the SCIFF language and proof procedure.
Abduction
semantics
interactive computation
proof procedure
4-13
Regular Paper
Marco
Alberti
Marco Alberti
Marco
Gavanelli
Marco Gavanelli
Evelina
Lamma
Evelina Lamma
10.4230/LIPIcs.ICLP.2010.4
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Learning Domain-Specific Heuristics for Answer Set Solvers
In spite of the recent improvements in the performance of Answer Set Programming (ASP) solvers, when the search space is sufficiently large, it is still possible for the search algorithm to mistakenly focus on areas of the search space that contain no solutions or very few. When that happens, performance degrades substantially, even to the point that the solver may need to be terminated before returning an answer. This prospect is a concern when one is considering using such a solver in an industrial setting, where users typically expect consistent performance. To overcome this problem, in this paper we propose a technique that allows learning domain-specific heuristics for ASP solvers. The learning is done off-line, on representative instances from the target domain, and the learned heuristics are then used for choice-point selection. In our experiments, the introduction of domain-specific heuristics improved performance on hard instances by up to 3 orders of magnitude (and 2 on average), nearly completely eliminating the cases in which the solver had to be terminated because the wait for an answer had become unacceptable.
Answer set programming
solvers
domain-specific heuristics
14-23
Regular Paper
Marcello
Balduccini
Marcello Balduccini
10.4230/LIPIcs.ICLP.2010.14
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
HEX Programs with Action Atoms
HEX programs were originally introduced as a general framework for extending declarative logic programming, under the stable model semantics, with the possibility of bidirectionally accessing external sources of knowledge and/or computation. The original framework, however, does not deal satisfactorily with stateful external environments: the possibility of predictably influencing external environments has thus not yet been considered explicitly. This paper lifts HEX programs to ACTHEX programs: ACTHEX programs introduce the notion of action atoms, which are associated with corresponding functions capable of actually changing the state of external environments. The execution of specific sequences of action atoms can be declaratively programmed. Furthermore, ACTHEX programs allow for selecting preferred actions, building on weights and corresponding cost functions. We introduce the syntax and semantics of ACTHEX programs; ACTHEX programs can successfully be exploited as a general-purpose language for the declarative implementation of executable specifications, which we illustrate by encodings of knowledge base updates, action languages, and an agent programming language. A system capable of executing ACTHEX programs has been implemented and is publicly available.
Answer Set Programming
Logic programming interoperability
Action languages
24-33
Regular Paper
Selen
Basol
Selen Basol
Ozan
Erdem
Ozan Erdem
Michael
Fink
Michael Fink
Giovambattista
Ianni
Giovambattista Ianni
10.4230/LIPIcs.ICLP.2010.24
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Communicating Answer Set Programs
Answer set programming is a form of declarative programming that has proven very successful in succinctly formulating and solving complex problems. Although mechanisms for representing and reasoning with the combined answer set programs of multiple agents have already been proposed, the actual gain in expressivity when adding communication has not been thoroughly studied. We show that allowing simple programs to talk to each other results in the same expressivity as adding negation-as-failure. Furthermore, we show that the ability to focus on one program in a network of simple programs results in the same expressivity as adding disjunction in the head of the rules.
34-43
Regular Paper
Kim
Bauters
Kim Bauters
Jeroen
Janssen
Jeroen Janssen
Steven
Schockaert
Steven Schockaert
Dirk
Vermeir
Dirk Vermeir
Martine
De Cock
Martine De Cock
10.4230/LIPIcs.ICLP.2010.34
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Implementation Alternatives for Bottom-Up Evaluation
Bottom-up evaluation is a central part of query evaluation / program execution in deductive databases. It is used after a source code optimization like magic sets or SLDmagic that ensures that only facts relevant for the query can be derived. Then bottom-up evaluation simply performs the iteration of the standard T_P operator to compute the minimal model. However, there are different ways to implement bottom-up evaluation efficiently. Since this is most critical for the performance of a deductive database system, and since performance is critical for the acceptance of deductive database technology, this question deserves a thorough analysis. In this paper we start this work by discussing several different implementation alternatives. In particular, we propose a new implementation of bottom-up evaluation called the "Push Method".
Deductive databases
bottom-up evaluation
implementation
44-53
Regular Paper
Stefan
Brass
Stefan Brass
10.4230/LIPIcs.ICLP.2010.44
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Inductive Logic Programming as Abductive Search
We present a novel approach to non-monotonic ILP and its implementation called TAL (Top-directed Abductive Learning). TAL overcomes some of the completeness problems of ILP systems based on Inverse Entailment and is the first top-down ILP system that allows background theories and hypotheses to be normal logic programs. The approach relies on mapping an ILP problem into an equivalent ALP one. This enables the use of established ALP proof procedures and the specification of richer language bias with integrity constraints. The mapping provides a principled search space for an ILP problem, over which an abductive search is used to compute inductive solutions.
Inductive Logic Programming
Abductive Logic Programming
Non-monotonic Reasoning
54-63
Regular Paper
Domenico
Corapi
Domenico Corapi
Alessandra
Russo
Alessandra Russo
Emil
Lupu
Emil Lupu
10.4230/LIPIcs.ICLP.2010.54
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Efficient Solving of Time-dependent Answer Set Programs
Answer set programs with time predicates are useful to model systems whose properties depend on time, such as gene regulatory networks. A state of such a system at time point t then corresponds to the literals of an answer set that are grounded with time constant t. An important task when modelling time-dependent systems is to find steady states from which the system's behaviour does not change anymore. This task is complicated by the fact that it is typically not known in advance at what time steps these steady states occur. A brute-force approach of estimating a time upper bound t_max and grounding and solving the program w.r.t. that upper bound leads to a suboptimal solving time when the estimate is too low or too high. In this paper we propose a more efficient algorithm for solving Markovian programs, which are time-dependent programs for which the next state depends only on the previous state. Instead of solving these Markovian programs for a long time interval {0,...,t_max}, we successively find answer sets of parts of the grounded program. Our approach guarantees the discovery of all steady states and cycles while avoiding unnecessary extra work.
Answer set programming
time-dependent programs
gene regulation networks
64-73
Regular Paper
Timur
Fayruzov
Timur Fayruzov
Jeroen
Janssen
Jeroen Janssen
Dirk
Vermeir
Dirk Vermeir
Chris
Cornelis
Chris Cornelis
Martine
De Cock
Martine De Cock
10.4230/LIPIcs.ICLP.2010.64
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Improving the Efficiency of Gibbs Sampling for Probabilistic Logical Models by Means of Program Specialization
There is currently a large interest in probabilistic logical models. A popular algorithm for approximate probabilistic inference with such models is Gibbs sampling. From a computational perspective, Gibbs sampling boils down to repeatedly executing certain queries on a knowledge base composed of a static part and a dynamic part. The larger the static part, the more redundancy there is in these repeated calls. This is problematic since inefficient Gibbs sampling yields poor approximations.
We show how to apply program specialization to make Gibbs sampling more efficient. Concretely, we develop an algorithm that specializes the definitions of the query-predicates with respect to the static part of the knowledge base. In experiments on real-world benchmarks we obtain speedups of up to an order of magnitude.
Probabilistic logical models
probabilistic logic programming
program specialization
Gibbs sampling
74-83
Regular Paper
Daan
Fierens
Daan Fierens
10.4230/LIPIcs.ICLP.2010.74
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Focused Proof Search for Linear Logic in the Calculus of Structures
The proof-theoretic approach to logic programming has benefited from the introduction of focused proof systems, through the non-determinism reduction and control they provide when searching for proofs in the sequent calculus. However, this technique was not available in the calculus of structures, known for inducing even more non-determinism than other logical formalisms. This work in progress aims at translating the notion of focusing into the presentation of linear logic in this setting, and at using some of its specific features, such as deep application of rules and fine granularity, in order to improve proof search procedures. The starting point for this research line is the multiplicative fragment of linear logic, for which a simple focused proof system can be built.
Proof theory
focusing
proof search
deep inference
linear logic
84-93
Regular Paper
Nicolas
Guenot
Nicolas Guenot
10.4230/LIPIcs.ICLP.2010.84
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Sampler Programs: The Stable Model Semantics of Abstract Constraint Programs Revisited
Abstract constraint atoms provide a general framework for the study of aggregates utilized in answer set programming. Such primitives suitably increase the expressive power of rules and enable more concise representation of various domains as answer set programs. However, it is non-trivial to generalize the stable model semantics for programs involving arbitrary abstract constraint atoms. For instance, a nondeterministic variant of the immediate consequence operator is needed, or the definition of stable models cannot be stated directly using primitives of logic programs. In this paper, we propose sampler programs as a relaxation of abstract constraint programs that better lend themselves to the program transformation involved in the definition of stable models. Consequently, the declarative nature of stable models can be restored for sampler programs and abstract constraint programs are also covered if decomposed into sampler programs. Moreover, we study the relationships of the classes of programs involved and provide a characterization in terms of abstract but essentially deterministic computations. This result indicates that all nondeterminism related with abstract constraint atoms can be resolved at the level of program reduct when sampler programs are used as the intermediate representation.
Stable models
abstract constraints
program reduction
translation
choice rules
94-103
Regular Paper
Tomi
Janhunen
Tomi Janhunen
10.4230/LIPIcs.ICLP.2010.94
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
A Framework for Verification and Debugging of Resource Usage Properties: Resource Usage Verification
We present a framework for (static) verification of general resource usage program properties. The framework extends the criterion of correctness to the conformance of a program to a specification expressing non-functional global properties, such as upper and lower bounds on execution time, memory, energy, or user-defined resources, given as functions on input data sizes. A given specification can include both lower and upper bound resource usage functions, i.e., it can express intervals within which the resource usage is supposed to lie. We have defined an abstract semantics for resource usage properties and operations to compare the (approximated) intended semantics of a program (i.e., the specification) with approximated semantics inferred by static analysis. These operations include the comparison of arithmetic functions (e.g., polynomial, exponential or logarithmic functions). A novel aspect of our framework is that the static checking of assertions generates answers that include conditions under which a given specification can be proved or disproved. For example, these conditions can express intervals of input data sizes such that a given specification can be proved for some intervals but disproved for others. We have implemented our techniques within the Ciao/CiaoPP system in a natural way, so that the novel resource usage verification blends in with the CiaoPP framework that unifies static verification and static debugging (as well as run-time verification and unit testing).
Program Verification and Debugging
Cost Analysis
Resource Usage Analysis
Complexity Analysis
104-113
Regular Paper
Pedro
Lopez-Garcia
Pedro Lopez-Garcia
Luthfi
Darmawan
Luthfi Darmawan
Francisco
Bueno
Francisco Bueno
10.4230/LIPIcs.ICLP.2010.104
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Contractibility and Contractible Approximations of Soft Global Constraints
We study contractibility and its approximation for two very general classes of soft global constraints. We introduce a general formulation of decomposition-based soft constraints and provide a sufficient condition for contractibility and an approach to approximation. For edit-based soft constraints, we establish that the tightest contractible approximation cannot be expressed in edit-based terms, in general.
Constraint logic programming
global constraints
open constraints
soft constraints
114-123
Regular Paper
Michael J.
Maher
Michael J. Maher
10.4230/LIPIcs.ICLP.2010.114
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Dedicated Tabling for a Probabilistic Setting
ProbLog is a probabilistic framework that extends Prolog with probabilistic facts. To compute the probability of a query, the complete SLD proof tree of the query is collected as a sum of products. ProbLog applies advanced techniques to make this feasible and to compute the correct probability. Tabling is a well-known technique for avoiding repeated subcomputations and for terminating loops. We investigate how tabling can be used in ProbLog. The challenge is that we have to reconcile tabling with the advanced ProbLog techniques. While standard tabling collects only the answers for the calls, we need the SLD proof tree itself. Finally, we discuss how to deal with loops in our probabilistic framework. By avoiding repeated subcomputations, our tabling approach not only improves the execution time of ProbLog programs, but also decreases memory consumption accordingly. We obtain promising results for ProbLog programs using exact probability inference.
Tabling
Loop Detection
Probabilistic Logical Programming
ProbLog
124-133
Regular Paper
Theofrastos
Mantadelis
Theofrastos Mantadelis
Gerda
Janssens
Gerda Janssens
10.4230/LIPIcs.ICLP.2010.124
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Tight Semantics for Logic Programs
We define the Tight Semantics (TS), a new semantics for all normal logic programs (NLPs) complying with the following requirements: a 2-valued semantics; preservation of the models of the Stable Model (SM) semantics; guarantee of model existence, even in the face of Odd Loops Over Negation (OLONs) or infinite chains; relevance; cumulativity; and compliance with the Well-Founded Model.
When complete models are unnecessary, and top-down querying (à la Prolog) is desired, TS provides the 2-valued option that guarantees model existence, as a result of its relevance property. Top-down querying with abduction by need is rendered available too by TS. The user need not pay the price of computing whole models, nor that of generating all possible abductions, only to filter irrelevant ones subsequently.
A TS model of an NLP P is any minimal model M of P that further satisfies P^ (the program remainder of P) in that each loop in P^ has a minimal model contained in M, whilst respecting the constraints imposed by the minimal models of the other loops so constrained too.
The applications afforded by TS are all those of Stable Models, which it generalizes, plus those requiring the resolution of OLONs for model existence, plus those employing OLONs to productively obtain problem solutions rather than just filter them (as Integrity Constraints do).
Normal Logic Programs
Relevance
Cumulativity
Stable Models
Well-Founded Semantics
Program Remainder
134-143
Regular Paper
Luis Moniz
Pereira
Luis Moniz Pereira
Alexandre Miguel
Pinto
Alexandre Miguel Pinto
10.4230/LIPIcs.ICLP.2010.134
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
From Relational Specifications to Logic Programs
This paper presents a compiler from expressive, relational specifications to logic programs. Specifically, the compiler translates the Imperative Alloy specification language to Prolog. Imperative Alloy is a declarative, relational specification language based on first-order logic and extended with imperative constructs; Alloy specifications are traditionally not executable. In spite of this theoretical limitation, the compiler produces useful prototype implementations for many specifications.
logic programming
specification languages
executable specifications
144-153
Regular Paper
Joseph P.
Near
Joseph P. Near
10.4230/LIPIcs.ICLP.2010.144
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Methods and Methodologies for Developing Answer-Set Programs - Project Description
Answer-set programming (ASP) is a well-known formalism for declarative problem solving, enjoying a continuously increasing number of diverse applications. However, arguably one of the main challenges for a wider acceptance of ASP is the need for tools, methods, and methodologies that support the actual programming process. In this paper, we review the main goals of a project, funded by the Austrian Science Fund (FWF), which aims to address this aspect in a systematic manner. The project is planned for a duration of three years and started in September 2009. Generally, the focus of research will be on methodologies for systematic program development, program testing, and debugging. In particular, in working on these areas, special emphasis shall be given to the ability of the developed techniques to respect the declarative nature of ASP. To support a sufficient level of usability, solutions are planned to be compatible not only with the core language of ASP but also with important extensions thereof that are commonly used and realised in various answer-set solvers. Ultimately, the methods resulting from the project shall form the basis of an integrated development environment (IDE) for ASP that is envisaged to combine straightforward as well as advanced techniques, realising a convenient tool for developing answer-set programs.
Answer-set programming
program development
testing
debugging
154-161
Regular Paper
Johannes
Oetsch
Johannes Oetsch
Jörg
Pührer
Jörg Pührer
Hans
Tompits
Hans Tompits
10.4230/LIPIcs.ICLP.2010.154
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Tabling and Answer Subsumption for Reasoning on Logic Programs with Annotated Disjunctions
The paper presents the algorithm "Probabilistic Inference with Tabling and Answer subsumption" (PITA) for computing the probability of queries from Logic Programs with Annotated Disjunctions. PITA is based on a program transformation technique that adds an extra argument to every atom. PITA uses tabling for saving intermediate results and answer subsumption for combining different answers for the same subgoal. PITA has been implemented in XSB and compared with the ProbLog, cplint and CVE systems. The results show that in almost all cases, PITA is able to solve larger problems and is faster than competing algorithms.
Probabilistic Logic Programming
Tabling
Answer Subsumption
Logic Programs with Annotated Disjunction
Program Transformation
162-171
Regular Paper
Fabrizio
Riguzzi
Fabrizio Riguzzi
Terrance
Swift
Terrance Swift
10.4230/LIPIcs.ICLP.2010.162
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Subsumer: A Prolog theta-subsumption engine
State-of-the-art theta-subsumption engines like Django (C) and Resumer2 (Java) are implemented in imperative languages. Since theta-subsumption is inherently a logic problem, in this paper we explore how to efficiently implement it in Prolog.
Theta-subsumption is an important problem in computational logic and is particularly relevant to the Inductive Logic Programming (ILP) community, as it is at the core of the hypothesis coverage test, which is often the bottleneck of an ILP system. Also, since most of those systems are implemented in Prolog, they can immediately take advantage of a Prolog-based theta-subsumption engine.
We present a relatively simple (~1000 lines in Prolog) but efficient and general theta-subsumption engine, Subsumer. Crucial to Subsumer's performance is the dynamic and recursive decomposition of a clause in sets of independent components. Also important are ideas borrowed from constraint programming that empower Subsumer to efficiently work on clauses with up to several thousand literals and several dozen distinct variables.
Using the notoriously challenging Phase Transition dataset we show that, CPU-time-wise, Subsumer clearly outperforms the Django subsumption engine and is competitive with the more sophisticated, state-of-the-art Resumer2. Furthermore, Subsumer's memory requirements are only a small fraction of those of these engines, and it can handle arbitrary Prolog clauses, whereas Django and Resumer2 can only handle Datalog clauses.
Theta-subsumption
Prolog
Inductive Logic Programming
172-181
Regular Paper
Jose
Santos
Jose Santos
Stephen
Muggleton
Stephen Muggleton
10.4230/LIPIcs.ICLP.2010.172
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Using Generalized Annotated Programs to Solve Social Network Optimization Problems
Reasoning about social networks (labeled, directed, weighted graphs) is becoming increasingly important and there are now models of how certain phenomena (e.g. adoption of products/services by consumers, spread of a given disease) "diffuse" through the network. Some of these diffusion models can be expressed via generalized annotated programs (GAPs). In this paper, we consider the following problem: suppose we have a given goal to achieve (e.g. maximize the expected number of adoptees of a product or minimize the spread of a disease) and suppose we have limited resources to use in trying to achieve the goal (e.g. give out a few free plans, provide medication to key people in the SN) - how should these resources be used so that we optimize a given objective function related to the goal? We define a class of social network optimization problems (SNOPs) that supports this type of reasoning. We formalize and study the complexity of SNOPs and show how they can be used in conjunction with existing economic and disease diffusion models.
Annotated logic programming
optimization queries
social networks
182-191
Regular Paper
Paulo
Shakarian
Paulo Shakarian
V.S.
Subrahmanian
V.S. Subrahmanian
Maria Luisa
Sapino
Maria Luisa Sapino
10.4230/LIPIcs.ICLP.2010.182
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Abductive Inference in Probabilistic Logic Programs
Action-probabilistic logic programs (ap-programs) are a class of probabilistic logic programs that have been extensively used during the last few years for modeling behaviors of entities. Rules in ap-programs have the form "If the environment in which entity E operates satisfies certain conditions, then the probability that E will take some action A is between L and U". Given an ap-program, we are interested in trying to change the environment, subject to some constraints, so that the probability that entity E takes some action (or combination of actions) is maximized. This is called the Basic Probabilistic Logic Abduction Problem (Basic PLAP). We first formally define and study the complexity of Basic PLAP and then provide an exact (exponential) algorithm to solve it, followed by more efficient algorithms for specific subclasses of the problem. We also develop appropriate heuristics to solve Basic PLAP efficiently.
Probabilistic Logic Programming
Imprecise Probabilities
Abductive Inference
192-201
Regular Paper
Gerardo
Simari
Gerardo Simari
V.S.
Subrahmanian
V.S. Subrahmanian
10.4230/LIPIcs.ICLP.2010.192
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Circumscription and Projection as Primitives of Logic Programming
We pursue a representation of logic programs as classical first-order sentences. Different semantics for logic programs can then be expressed by the way in which they are wrapped into semantically defined operators for circumscription and projection. (Projection is a generalization of second-order quantification.) We demonstrate this for the stable model semantics, Clark's completion, and a three-valued semantics based on the Fitting operator. To represent the latter, we utilize the polarity sensitivity of projection, in contrast to second-order quantification, and a variant of circumscription that allows predicate minimization to be expressed in parallel with maximization. In accord with the aim of an integrated view on different logic-based representation techniques, the material is worked out on the basis of first-order logic with a Herbrand semantics.
202-211
Regular Paper
Christoph
Wernhard
Christoph Wernhard
10.4230/LIPIcs.ICLP.2010.202
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Timed Definite Clause Omega-Grammars
We propose timed context-free grammars (TCFGs) and show how parsers for such grammars can be developed using definite clause grammars (DCGs) coupled with constraints over reals (CLP(R)). Timed context-free grammars describe timed context-free languages (TCFLs). We next extend timed context-free grammars to timed context-free omega-grammars (omega-TCFGs for brevity) and incorporate co-inductive logic programming in DCGs to obtain parsers for them. Timed context-free omega-grammars describe timed context-free languages containing infinite-sized words, and are a generalization of timed omega-regular languages recognized by timed automata. We show a practical application of omega-TCFGs to the well-known generalized railroad crossing problem.
Constraint Logic Programming over reals
Co-induction
Context-Free Grammars
Omega-Grammars
212-221
Regular Paper
Neda
Saeedloei
Neda Saeedloei
Gopal
Gupta
Gopal Gupta
10.4230/LIPIcs.ICLP.2010.212
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Towards a Parallel Virtual Machine for Functional Logic Programming
Functional logic programming is a multi-paradigm style of programming that combines the best features of functional programming and logic programming. Functional programming provides mechanisms for demand-driven evaluation, higher-order functions, and polymorphic typing. Logic programming deals with non-determinism, partial information, and constraints. Both paradigms fall under the umbrella of declarative programming. For the most part, current implementations of functional logic languages belong to one of two categories: (1) implementations that include logic programming features in a functional language; (2) implementations that extend logic languages with functional programming features. In this paper we describe ongoing research efforts to build a parallel virtual machine that performs functional logic computations. The virtual machine tackles several issues that other implementations do not: (1) sharing of sub-terms among different terms, especially when such sub-terms evaluate to more than one value (non-determinism); (2) exploitation of all forms of parallelism present in computations. The evaluation strategy used to evaluate functional logic terms is needed narrowing, which is a sound and complete strategy.
Functional logic programming
term rewriting system
non-determinism
needed narrowing
and-parallelism
or-parallelism
222-225
Regular Paper
Abdulla
Alqaddoumi
Abdulla Alqaddoumi
10.4230/LIPIcs.ICLP.2010.222
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Dynamic Magic Sets for Disjunctive Datalog Programs
Answer set programming (ASP) is a powerful formalism for knowledge representation and common sense reasoning that allows disjunction in rule heads and nonmonotonic negation in bodies. Magic Sets are a technique for optimizing query answering over logic programs and were originally defined for standard Datalog, that is, ASP without disjunction and negation. Essentially, the input program is rewritten in order to identify a subset of the program instantiation which is sufficient for answering the query.
Dynamic Magic Sets (DMS) are an extension of this technique to ASP. The optimization provided by DMS can also be exploited during the nondeterministic phase of ASP systems. In particular, after some assumptions have been made during the computation, parts of the program may become irrelevant to a query because of these assumptions. This allows for dynamic pruning of the search space, which may result in exponential performance gains.
DMS has been implemented in the DLV system, and experimental results confirm the effectiveness of the technique.
Answer set programming
decidability
magic sets
disjunctive logic programs
226-235
Regular Paper
Mario
Alviano
Mario Alviano
10.4230/LIPIcs.ICLP.2010.226
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Bisimilarity in Concurrent Constraint Programming
In this doctoral work we aim at developing a new approach to labelled semantics and equivalences for Concurrent Constraint Programming (CCP) which will enable a broader capture of processes' behavioural equivalence. Moreover, we work towards exploiting the strong connection between first-order logic and CCP, which will allow us to represent logical formulae in terms of CCP processes and verify their logical equivalence by means of our notion of bisimilarity. Finally, following the lines of the Concurrency Workbench, we plan to implement a CCP Workbench based on our theoretical framework.
Concurrent Constraint Programming
Concurrency
Behavioural Equivalence
Bisimilarity
Process Calculi
Operational Semantics
Labelled Semantics
236-240
Regular Paper
Andres A.
Aristizabal P.
Andres A. Aristizabal P.
10.4230/LIPIcs.ICLP.2010.236
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Program analysis for code duplication in logic programs
In this PhD project, we deal with the issue of code duplication in logic programs. Semantic duplication, or redundancy, is generally viewed as a possible source of problems in all phases of the program lifecycle, from development to maintenance. The core of this research is the elaboration of a theory of semantic duplication, and of an automated program analysis capable of detecting such duplication and of steering, to some extent, automatic refactoring of program code.
Logic programming
program comprehension
static program analysis
code duplication
code clone
software engineering
241-247
Regular Paper
Celine
Dandois
Celine Dandois
10.4230/LIPIcs.ICLP.2010.241
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Program Analysis to Support Concurrent Programming in Declarative Languages
In recent years, processor manufacturers have been focusing on parallel architectures in order to increase performance.
This shift in hardware evolution is provoking a fundamental turn towards concurrency in software development.
Unfortunately, developing concurrent programs which are correct and efficient is hard, as the underlying programming model is much more complex than it is for simple sequential programs.
The goal of this research is to study and develop program analyses to support and improve concurrent software development in declarative languages. The characteristics of these languages offer opportunities, as they are good candidates for building concurrent applications, while their simple and uniform data representation, together with a small and formally defined semantics, makes them well suited to automatic program analysis techniques. In our work, we focus primarily on developing static analysis techniques for detecting race conditions at the application level in Mercury and Prolog programs. A further step is to derive (semi-)automatically the location and granularity of the critical sections using a data-centric approach.
Program Analysis
Concurrent Programming
Logic Languages
Abstract Interpretation
248-254
Regular Paper
Romain
Demeyer
Romain Demeyer
10.4230/LIPIcs.ICLP.2010.248
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Constraint Answer Set Programming Systems
We present an integration of answer set programming and constraint processing as an interesting approach to constraint logic programming.
Although our research is at a very early stage, we motivate constraint answer set programming and report on related work, our research objectives, preliminary results, and future work.
Answer set programming
constraint logic programming
constraint processing
255-264
Regular Paper
Christian
Drescher
Christian Drescher
10.4230/LIPIcs.ICLP.2010.255
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Towards a General Argumentation System based on Answer-Set Programming
In recent years, especially since Dung's seminal work in 1995, argumentation has emerged as a central issue in Artificial Intelligence. So-called argumentation frameworks (AFs) make it possible to represent statements (arguments) together with a binary attack relation between them.
The conflicts between the statements are resolved on a semantic level by selecting acceptable sets of arguments. An increasing amount of data requires the automated computation of such solutions.
Logic programming, in particular Answer-Set Programming (ASP), has turned out to be adequate for solving problems associated with such AFs.
In this work we use ASP to design a sophisticated system for the evaluation of several types of argumentation frameworks.
Argumentation
Implementation
Answer-Set Programming
265-269
Regular Paper
Sarah Alice
Gaggl
Sarah Alice Gaggl
10.4230/LIPIcs.ICLP.2010.265
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Models for Trustworthy Service and Process Oriented Systems
Service and process-oriented systems promise to provide more effective business and work processes and more flexible and adaptable enterprise IT systems. However, the technologies and standards are still young and unstable, making research in their theoretical foundations increasingly important.
Our studies focus on two dichotomies: the global/local views of service interactions, and their imperative/declarative specification.
A global view of service interactions describes a process as a protocol for interactions, e.g. a UML sequence diagram or a WS-CDL choreography. A local view describes the system as a set of processes, e.g. specified as pi-calculus or WS-BPEL processes, implementing each participant in the process. While the global view is what is usually provided as a specification, the local view is a necessary step towards a distributed implementation.
If processes are defined imperatively, the control flow is defined explicitly, e.g. as a sequence or flow graph of interactions and commands. In a declarative approach, processes are described as a collection of conditions they should fulfill in order to be considered correct. The two approaches have evolved rather independently of each other. Our thesis is that we can provide a theoretical framework, based on typed concurrent process and concurrent constraint calculi, for the specification, analysis, and verification of service- and process-oriented system designs. Such a framework bridges the global and local views, combines the imperative and declarative specification approaches, and can be employed to increase trust in the developed systems.
This article describes our main motivations, results and future research directions.
Concurrent Constraint Calculi
Session Types
Logic
Service and Process oriented computing
270-276
Regular Paper
Hugo A.
Lopez
Hugo A. Lopez
10.4230/LIPIcs.ICLP.2010.270
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Design and Implementation of a Concurrent Logic Programming Language with Linear Logic Constraints
My thesis aims at designing a practical language as close as possible to the linear concurrent constraint (LCC) theory. The main contribution is a new operational semantics which behaves as an angelic scheduler with a tractable algorithmic complexity. This operational semantics is sound and complete with respect to the logical semantics and allows the construction of a rich language over a very simple kernel.
Concurrent logic programming
277-280
Regular Paper
Thierry
Martinez
Thierry Martinez
10.4230/LIPIcs.ICLP.2010.277
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Higher-order Logic Learning and lambda-Progol
We present our research on Higher-order Logic Learning (HOLL), which consists of adapting First-order Logic Learning (FOLL), such as Inductive Logic Programming (ILP), to a Higher-order Logic (HOL) context. We describe a first working implementation of lambda-Progol, a HOLL system adapting the ILP system Progol and the HOL formalism lambda-Prolog. We compare lambda-Progol and Progol on the learning of recursive theories, showing that HOLL can, in these cases, outperform FOLL.
Inductive Logic Programming
Progol
Higher-order Logic
Higher-order Logic Learning
lambda-Prolog
281-285
Regular Paper
Niels
Pahlavi
Niels Pahlavi
10.4230/LIPIcs.ICLP.2010.281
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Local Branching in a Constraint Programming Framework
We argue that integrating local branching into CP merges the advantages of the intensification and diversification mechanisms specific to local search methods with constraint propagation, which speeds up neighborhood exploration by removing infeasible variable-value assignments.
Local Branching
LDS
Local Search
Tree Search
Constraint Programming
286-288
Regular Paper
Fabio
Parisini
Fabio Parisini
10.4230/LIPIcs.ICLP.2010.286
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Logic Programming Foundations of Cyber-Physical Systems
Cyber-physical systems (CPS) are becoming ubiquitous. Almost every device today has a controller that reads inputs through sensors, does some processing, and then performs actions through actuators. These controllers are discrete digital systems whose inputs are continuous physical quantities and whose outputs control physical (analog) devices. Thus, CPS involve both digital and analog data. In addition, CPS are assumed to run forever, and many CPS may run concurrently with each other. We will develop techniques for faithfully and elegantly modeling CPS. Our approach is based on constraint logic programming over reals, co-induction, and coroutining.
Cyber-Physical Systems
Constraint Logic Programming over reals
Co-induction
Coroutining
289-293
Regular Paper
Neda
Saeedloei
Neda Saeedloei
10.4230/LIPIcs.ICLP.2010.289
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Realizing the Dependently Typed Lambda Calculus
Dependently typed lambda calculi such as the Edinburgh Logical Framework (LF) can encode relationships between terms in types and can naturally capture correspondences between formulas and their proofs. Such calculi can also be given a logic programming interpretation: the Twelf system is based on such an interpretation of LF. We have considered whether a conventional logic programming language can also provide the benefits of a Twelf-like system for encoding type and term dependencies through dependent typing, and whether it can do so in an efficient manner.
In particular, we have developed a simple mapping from LF specifications to a set of formulas in the higher-order hereditary Harrop (hohh) language that relates derivations and proof search between the two frameworks. We have shown that this encoding can be improved by exploiting knowledge of the well-formedness of the original LF specifications to elide much redundant type-checking information. The resulting logic program has a structure that closely follows the original specification, thereby allowing LF specifications to be viewed as meta-programs that generate hohh programs. We have proven that this mapping is correct and, using the Teyjus implementation of lambda-Prolog, we have shown that our translation provides an efficient means for executing LF specifications, complementing the ability the Twelf system provides for reasoning about them. In addition, the translation offers new avenues for reasoning about such specifications, via reasoning over the generated hohh programs.
Logical frameworks
logic programming
294-299
Regular Paper
Zachary
Snow
Zachary Snow
10.4230/LIPIcs.ICLP.2010.294
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Structured Interactive Musical Scores
Interactive Scores is a formalism for the design and performance of interactive scenarios that provides temporal relations (TRs) among the objects of the scenario. We can model TRs among objects in Time Stream Petri nets, but it is difficult to represent global constraints. This can be done explicitly in the Non-deterministic Timed Concurrent Constraint (ntcc) calculus. We want to formalize a heterogeneous system that controls in one subsystem the concurrent execution of the objects using ntcc, and audio and video processing in the other. We also plan to develop an automatic verifier for ntcc.
Ntcc
ccp
interactive scores
temporal relations
faust
ntccrt
heterogeneous systems
automatic verification
300-302
Regular Paper
Mauricio
Toro-Bermudez
Mauricio Toro-Bermudez
10.4230/LIPIcs.ICLP.2010.300
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
Cutting-Edge Timing Analysis Techniques
This text gives an overview of my current research in timing analysis at the Vienna University of Technology. After a short introduction to the topic, I describe an approach relying on CLP, the implicit path enumeration technique (IPET). This technique is also used in TuBound, a tool developed at the Institute of Computer Languages. Current timing analysis tools suffer from a few flaws that merit further investigation in order to improve on the state of the art.
Verification
timing analysis
hard real-time systems
static analysis
worst-case execution time
loop-invariants
nested loop
symbolic computation
303-305
Regular Paper
Jakob
Zwirchmayr
Jakob Zwirchmayr
10.4230/LIPIcs.ICLP.2010.303
Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode