61 Search Results for "Lang, Jérôme"


Document
Discovering Predictive Dependencies on Multi-Temporal Relations

Authors: Beatrice Amico, Carlo Combi, Romeo Rizzi, and Pietro Sala

Published in: LIPIcs, Volume 278, 30th International Symposium on Temporal Representation and Reasoning (TIME 2023)


Abstract
In this paper, we propose a methodology for deriving a new class of approximate temporal functional dependencies, called Approximate Predictive Functional Dependencies (APFDs), based on a three-window framework and on a multi-temporal relational model. Different features are proposed for the Observation Window (OW), where we observe predictive data, for the Waiting Window (WW), and for the Prediction Window (PW), where the predicted event occurs. We then discuss the concept of approximation for such APFDs and introduce two new error measures. We prove that the problem of deriving APFDs is intractable. Moreover, we discuss some preliminary results in deriving APFDs from real clinical data using the MIMIC III dataset, related to patients from Intensive Care Units.
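
As a rough illustration of the three-window framework (a hypothetical sketch: the window names OW, WW, and PW come from the abstract, but the anchor point, layout, and parameters below are assumed purely for illustration):

from datetime import datetime, timedelta

def split_windows(events, anchor, ow_len, ww_len, pw_len):
    # Partition timestamped events into the Observation Window (OW), the
    # Waiting Window (WW), and the Prediction Window (PW), laid out one after
    # another starting at `anchor`. `events` is an iterable of (timestamp, payload).
    ow_end = anchor + ow_len              # predictive data is observed here
    ww_end = ow_end + ww_len              # gap between observation and prediction
    pw_end = ww_end + pw_len              # the predicted event should fall here
    ow, ww, pw = [], [], []
    for ts, payload in events:
        if anchor <= ts < ow_end:
            ow.append((ts, payload))
        elif ow_end <= ts < ww_end:
            ww.append((ts, payload))
        elif ww_end <= ts < pw_end:
            pw.append((ts, payload))
    return ow, ww, pw

# Example: a one-day observation window, a 12-hour wait, a one-day prediction window.
anchor = datetime(2023, 1, 1)
events = [(anchor + timedelta(hours=h), f"obs{h}") for h in (2, 20, 30, 40)]
ow, ww, pw = split_windows(events, anchor,
                           timedelta(days=1), timedelta(hours=12), timedelta(days=1))
print(len(ow), len(ww), len(pw))          # prints: 2 1 1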

Cite as

Beatrice Amico, Carlo Combi, Romeo Rizzi, and Pietro Sala. Discovering Predictive Dependencies on Multi-Temporal Relations. In 30th International Symposium on Temporal Representation and Reasoning (TIME 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 278, pp. 4:1-4:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{amico_et_al:LIPIcs.TIME.2023.4,
  author =	{Amico, Beatrice and Combi, Carlo and Rizzi, Romeo and Sala, Pietro},
  title =	{{Discovering Predictive Dependencies on Multi-Temporal Relations}},
  booktitle =	{30th International Symposium on Temporal Representation and Reasoning (TIME 2023)},
  pages =	{4:1--4:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-298-3},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{278},
  editor =	{Artikis, Alexander and Bruse, Florian and Hunsberger, Luke},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.TIME.2023.4},
  URN =		{urn:nbn:de:0030-drops-190945},
  doi =		{10.4230/LIPIcs.TIME.2023.4},
  annote =	{Keywords: temporal databases, temporal data mining, functional dependencies}
}
Document
RANDOM
Testing Versus Estimation of Graph Properties, Revisited

Authors: Lior Gishboliner, Nick Kushnir, and Asaf Shapira

Published in: LIPIcs, Volume 275, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023)


Abstract
A graph G on n vertices is ε-far from a property P if one must add or delete at least ε n² edges to turn G into a graph satisfying P. A distance estimator for P is an algorithm that, given G and α, ε > 0, distinguishes between the case that G is (α-ε)-close to P and the case that G is α-far from P. If P has a distance estimator whose query complexity depends only on ε, then P is said to be estimable. Every estimable property is clearly also testable, since testing corresponds to estimating with α = ε. A central result in the area of property testing is the Fischer-Newman theorem, stating that the converse also holds: every testable property is in fact estimable. The proof of Fischer and Newman was highly ineffective, since it incurred a tower-type loss when transforming a testing algorithm for P into a distance estimator. This raised the natural problem, studied recently by Fiat-Ron and by Hoppen-Kohayakawa-Lang-Lefmann-Stagni, of whether one can find a transformation with only a polynomial loss. We obtain the following results.
- We show that if P is hereditary, then one can turn a tester for P into a distance estimator with an exponential loss. This is an exponential improvement over the result of Hoppen et al., who obtained a transformation with a double exponential loss.
- We show that for every P, one can turn a testing algorithm for P into a distance estimator with a double exponential loss. This improves over the transformation of Fischer-Newman, which incurred a tower-type loss.
Our main conceptual contribution in this work is that we manage to turn the approach of Fischer-Newman, which was inherently ineffective, into an efficient one. On the technical level, our main contribution is in establishing certain properties of Frieze-Kannan weak regular partitions that are of independent interest.
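
For reference, the abstract's definitions can be written out in display form (a restatement in the abstract's own notation; the symmetric-difference formulation of edit distance is an assumed but standard way to write "add/delete edges"):

% normalized edit distance of G to the property P, and the eps-far condition:
\mathrm{dist}(G, P) = \min_{G' \text{ satisfying } P} \frac{|E(G) \,\triangle\, E(G')|}{n^2},
\qquad G \text{ is } \varepsilon\text{-far from } P \iff \mathrm{dist}(G, P) \ge \varepsilon .
% A distance estimator for P, given alpha, eps > 0, distinguishes
%   dist(G, P) <= alpha - eps   from   dist(G, P) >= alpha ;
% property testing is the special case alpha = eps.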

Cite as

Lior Gishboliner, Nick Kushnir, and Asaf Shapira. Testing Versus Estimation of Graph Properties, Revisited. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 275, pp. 46:1-46:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{gishboliner_et_al:LIPIcs.APPROX/RANDOM.2023.46,
  author =	{Gishboliner, Lior and Kushnir, Nick and Shapira, Asaf},
  title =	{{Testing Versus Estimation of Graph Properties, Revisited}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023)},
  pages =	{46:1--46:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-296-9},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{275},
  editor =	{Megow, Nicole and Smith, Adam},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2023.46},
  URN =		{urn:nbn:de:0030-drops-188713},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2023.46},
  annote =	{Keywords: Testing, estimation, weak regularity, randomized algorithms, graph theory, Frieze-Kannan Regularity}
}
Document
Algebraic Replicated Data Types: Programming Secure Local-First Software

Authors: Christian Kuessner, Ragnar Mogk, Anna-Katharina Wickert, and Mira Mezini

Published in: LIPIcs, Volume 263, 37th European Conference on Object-Oriented Programming (ECOOP 2023)


Abstract
This paper is about programming support for local-first applications that manage private data locally, but still synchronize data between multiple devices. Typical use cases are synchronizing settings and data, and collaboration between multiple users. Such applications must preserve the privacy and integrity of the user’s data without impeding or interrupting the user’s normal workflow - even when the device is offline or has a flaky network connection. From the programming perspective, availability along with privacy and security concerns pose significant challenges, for which developers have to learn and use specialized solutions such as conflict-free replicated data types (CRDTs) or APIs for centralized data stores. This work relieves developers of this complexity by enabling the direct and automatic use of algebraic data types - which developers already use to express the business logic of the application - for synchronization and collaboration. Moreover, we use this approach to provide end-to-end encryption and authentication between multiple replicas (using a shared secret), which is suitable for a coordination-free setting. Overall, our approach combines all of the following advantages: it (1) allows developers to design custom data types, (2) provides data privacy and integrity when using untrusted intermediaries, (3) is coordination free, (4) guarantees eventual consistency by construction (i.e., independent of developer errors), (5) does not cause indefinite growth of metadata, and (6) has sufficiently efficient implementations for the local-first setting.
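
The paper derives such guarantees automatically from ordinary algebraic data types; as a much simpler point of comparison, a classic hand-written CRDT such as a grow-only counter (an illustrative sketch, unrelated to the paper's implementation) shows what coordination-free, convergent merging means in practice:

class GCounter:
    # Grow-only counter CRDT: each replica only increments its own slot;
    # merge takes the per-replica maximum, so replicas converge regardless
    # of the order in which states are exchanged.
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}              # replica_id -> local count

    def increment(self, n=1):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def value(self):
        return sum(self.counts.values())

    def merge(self, other):
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)

# Two replicas increment independently while offline, then exchange state.
a, b = GCounter("a"), GCounter("b")
a.increment(3); b.increment(2)
a.merge(b); b.merge(a)
assert a.value() == b.value() == 5    # convergence without coordination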

Cite as

Christian Kuessner, Ragnar Mogk, Anna-Katharina Wickert, and Mira Mezini. Algebraic Replicated Data Types: Programming Secure Local-First Software. In 37th European Conference on Object-Oriented Programming (ECOOP 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 263, pp. 14:1-14:33, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{kuessner_et_al:LIPIcs.ECOOP.2023.14,
  author =	{Kuessner, Christian and Mogk, Ragnar and Wickert, Anna-Katharina and Mezini, Mira},
  title =	{{Algebraic Replicated Data Types: Programming Secure Local-First Software}},
  booktitle =	{37th European Conference on Object-Oriented Programming (ECOOP 2023)},
  pages =	{14:1--14:33},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-281-5},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{263},
  editor =	{Ali, Karim and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2023.14},
  URN =		{urn:nbn:de:0030-drops-182076},
  doi =		{10.4230/LIPIcs.ECOOP.2023.14},
  annote =	{Keywords: local-first, data privacy, coordination freedom, CRDTs, AEAD}
}
Document
Type Theory as a Language Workbench

Authors: Jan de Muijnck-Hughes, Guillaume Allais, and Edwin Brady

Published in: OASIcs, Volume 109, Eelco Visser Commemorative Symposium (EVCS 2023)


Abstract
Language Workbenches offer language designers an expressive environment in which to create their Domain Specific Languages (DSLs). Similarly, research into mechanised meta-theory has shown how dependently typed languages provide expressive environments to formalise and study DSLs and their meta-theoretical properties. But can we claim that dependently typed languages qualify as language workbenches? We argue yes! We have developed an exemplar DSL called Vélo that not only showcases dependently typed techniques for realising and manipulating Intermediate Representations (IRs), but also demonstrates that dependently typed languages make fine language workbenches. Vélo is a simple verified language with well-typed holes and comes with a complete compiler pipeline: parser, elaborator, REPL, evaluator, and compiler passes. Specifically, we describe our design choices for a well-typed IR that includes support for well-typed holes, how CSE is achieved in a well-typed setting, and how the mechanised type-soundness proof for Vélo serves as the source of the evaluator.

Cite as

Jan de Muijnck-Hughes, Guillaume Allais, and Edwin Brady. Type Theory as a Language Workbench. In Eelco Visser Commemorative Symposium (EVCS 2023). Open Access Series in Informatics (OASIcs), Volume 109, pp. 9:1-9:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{demuijnckhughes_et_al:OASIcs.EVCS.2023.9,
  author =	{de Muijnck-Hughes, Jan and Allais, Guillaume and Brady, Edwin},
  title =	{{Type Theory as a Language Workbench}},
  booktitle =	{Eelco Visser Commemorative Symposium (EVCS 2023)},
  pages =	{9:1--9:13},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-267-9},
  ISSN =	{2190-6807},
  year =	{2023},
  volume =	{109},
  editor =	{L\"{a}mmel, Ralf and Mosses, Peter D. and Steimann, Friedrich},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.EVCS.2023.9},
  URN =		{urn:nbn:de:0030-drops-177797},
  doi =		{10.4230/OASIcs.EVCS.2023.9},
  annote =	{Keywords: dependent types, language workbenches, idris2, dsl, edsl, intrinsically scoped, well typed, co-De Bruijn}
}
Document
Towards Modular Compilation Using Higher-Order Effects

Authors: Jaro S. Reinders

Published in: OASIcs, Volume 109, Eelco Visser Commemorative Symposium (EVCS 2023)


Abstract
Compilers transform a human readable source language into a machine readable target language. Nanopass compilers simplify this approach by breaking up the transformation into small steps that are more understandable, maintainable, and extensible. We propose a semantics-driven variant of the nanopass compiler architecture, exploring the use of effects and handlers to model the intermediate languages and the transformation passes, respectively. Our approach is fully typed and ensures that all cases in the compiler are covered. Additionally, by using an effect system we abstract over the control flow of the intermediate language, making the compiler even more flexible. We apply this approach to a minimal compiler from a language with arithmetic and let-bound variables to a string of pretty printed X86 instructions. In the future, we hope to extend this work to compile a larger and more complicated language, and we envision a formal verification framework for compilers written in this style.
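
The paper's formulation uses effects and handlers; purely as an illustration of the nanopass idea it builds on, the following plain-Python sketch compiles the same kind of source language (integers, addition, let-bound variables) to pretty-printed pseudo-x86 strings in two small passes. All names here are illustrative, not the paper's API.

from dataclasses import dataclass

# Source AST: integer literals, let-bound variables, and addition.
@dataclass
class Lit: value: int
@dataclass
class Var: name: str
@dataclass
class Add: left: object; right: object
@dataclass
class Let: name: str; bound: object; body: object

# Pass 1: flatten nested expressions into a list of simple assignments,
# either (target, ("const", n)) or (target, ("add", left_tmp, right_tmp)).
def flatten(expr, env, out, fresh):
    if isinstance(expr, Lit):
        t = f"t{fresh[0]}"; fresh[0] += 1
        out.append((t, ("const", expr.value)))
        return t
    if isinstance(expr, Var):
        return env[expr.name]
    if isinstance(expr, Let):
        env2 = dict(env)
        env2[expr.name] = flatten(expr.bound, env, out, fresh)
        return flatten(expr.body, env2, out, fresh)
    if isinstance(expr, Add):
        l = flatten(expr.left, env, out, fresh)
        r = flatten(expr.right, env, out, fresh)
        t = f"t{fresh[0]}"; fresh[0] += 1
        out.append((t, ("add", l, r)))
        return t
    raise TypeError(expr)

# Pass 2: emit pretty-printed pseudo-x86, giving every temporary a stack slot.
def emit(assigns, result):
    slots = {t: 8 * (i + 1) for i, (t, _) in enumerate(assigns)}
    lines = []
    for t, rhs in assigns:
        if rhs[0] == "const":
            lines.append(f"movq ${rhs[1]}, -{slots[t]}(%rbp)")
        else:
            _, l, r = rhs
            lines.append(f"movq -{slots[l]}(%rbp), %rax")
            lines.append(f"addq -{slots[r]}(%rbp), %rax")
            lines.append(f"movq %rax, -{slots[t]}(%rbp)")
    lines.append(f"movq -{slots[result]}(%rbp), %rax")
    return "\n".join(lines)

# let x = 1 + 2 in x + 40
prog = Let("x", Add(Lit(1), Lit(2)), Add(Var("x"), Lit(40)))
assigns, fresh = [], [0]
print(emit(assigns, flatten(prog, {}, assigns, fresh)))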

Cite as

Jaro S. Reinders. Towards Modular Compilation Using Higher-Order Effects. In Eelco Visser Commemorative Symposium (EVCS 2023). Open Access Series in Informatics (OASIcs), Volume 109, pp. 22:1-22:9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{reinders:OASIcs.EVCS.2023.22,
  author =	{Reinders, Jaro S.},
  title =	{{Towards Modular Compilation Using Higher-Order Effects}},
  booktitle =	{Eelco Visser Commemorative Symposium (EVCS 2023)},
  pages =	{22:1--22:9},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-267-9},
  ISSN =	{2190-6807},
  year =	{2023},
  volume =	{109},
  editor =	{L\"{a}mmel, Ralf and Mosses, Peter D. and Steimann, Friedrich},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.EVCS.2023.22},
  URN =		{urn:nbn:de:0030-drops-177926},
  doi =		{10.4230/OASIcs.EVCS.2023.22},
  annote =	{Keywords: algebraic effects and handlers, higher-order effects, monadic semantics, modularity, compilation, nanopass}
}
Document
Beyond the Threaded Programming Model on Real-Time Operating Systems

Authors: Erling Rennemo Jellum, Shaokai Lin, Peter Donovan, Efsane Soyer, Fuzail Shakir, Torleiv Bryne, Milica Orlandic, Marten Lohstroh, and Edward A. Lee

Published in: OASIcs, Volume 108, Fourth Workshop on Next Generation Real-Time Embedded Systems (NG-RES 2023)


Abstract
The use of a real-time operating system (RTOS) raises the abstraction level for embedded systems design when compared to traditional bare-metal programming, resulting in simpler and more reusable application code. Modern RTOSes for resource-constrained platforms, like Zephyr and FreeRTOS, also offer threading support, but this kind of shared memory concurrency is a poor fit for expressing the reactive and interactive behaviors that are common in embedded systems. To address this, alternative concurrency models like the actor model or communicating sequential processes have been proposed. While those alternatives enable reactive design patterns, they fail to deliver determinism and do not address timing. This makes it difficult to verify that implemented behavior is as intended, and it makes it impossible to specify timing constraints in a portable way. It also makes it hard to create reusable library components out of common embedded design patterns, forcing developers to keep reinventing the wheel for each application and each platform. In this paper, we introduce the embedded target of Lingua Franca (LF) as a means to move beyond the threaded programming model provided by RTOSes and to improve the state of the art in embedded programming. LF is based on the reactor model of computation, which is reactive, deterministic, and timed, providing a means to express concurrency and timing in a platform-independent way. We compare the performance of LF versus threaded C code - both running on Zephyr - in terms of response time, timing precision, throughput, and memory footprint.

Cite as

Erling Rennemo Jellum, Shaokai Lin, Peter Donovan, Efsane Soyer, Fuzail Shakir, Torleiv Bryne, Milica Orlandic, Marten Lohstroh, and Edward A. Lee. Beyond the Threaded Programming Model on Real-Time Operating Systems. In Fourth Workshop on Next Generation Real-Time Embedded Systems (NG-RES 2023). Open Access Series in Informatics (OASIcs), Volume 108, pp. 3:1-3:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{jellum_et_al:OASIcs.NG-RES.2023.3,
  author =	{Jellum, Erling Rennemo and Lin, Shaokai and Donovan, Peter and Soyer, Efsane and Shakir, Fuzail and Bryne, Torleiv and Orlandic, Milica and Lohstroh, Marten and Lee, Edward A.},
  title =	{{Beyond the Threaded Programming Model on Real-Time Operating Systems}},
  booktitle =	{Fourth Workshop on Next Generation Real-Time Embedded Systems (NG-RES 2023)},
  pages =	{3:1--3:13},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-268-6},
  ISSN =	{2190-6807},
  year =	{2023},
  volume =	{108},
  editor =	{Terraneo, Federico and Cattaneo, Daniele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.NG-RES.2023.3},
  URN =		{urn:nbn:de:0030-drops-177348},
  doi =		{10.4230/OASIcs.NG-RES.2023.3},
  annote =	{Keywords: Real time, concurrency, reactors, Lingua Franca, RTOS}
}
Document
Kolmogorov Complexity Characterizes Statistical Zero Knowledge

Authors: Eric Allender, Shuichi Hirahara, and Harsha Tirumala

Published in: LIPIcs, Volume 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)


Abstract
We show that a decidable promise problem has a non-interactive statistical zero-knowledge proof system if and only if it is randomly reducible via an honest polynomial-time reduction to a promise problem for Kolmogorov-random strings, with a superlogarithmic additive approximation term. This extends recent work by Saks and Santhanam (CCC 2022). We build on this to give new characterizations of Statistical Zero Knowledge (SZK), as well as the related classes NISZK_L and SZK_L.

Cite as

Eric Allender, Shuichi Hirahara, and Harsha Tirumala. Kolmogorov Complexity Characterizes Statistical Zero Knowledge. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 3:1-3:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{allender_et_al:LIPIcs.ITCS.2023.3,
  author =	{Allender, Eric and Hirahara, Shuichi and Tirumala, Harsha},
  title =	{{Kolmogorov Complexity Characterizes Statistical Zero Knowledge}},
  booktitle =	{14th Innovations in Theoretical Computer Science Conference (ITCS 2023)},
  pages =	{3:1--3:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-263-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{251},
  editor =	{Tauman Kalai, Yael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2023.3},
  URN =		{urn:nbn:de:0030-drops-175063},
  doi =		{10.4230/LIPIcs.ITCS.2023.3},
  annote =	{Keywords: Kolmogorov Complexity, Interactive Proofs}
}
Document
Space-Bounded Unitary Quantum Computation with Postselection

Authors: Seiichiro Tani

Published in: LIPIcs, Volume 241, 47th International Symposium on Mathematical Foundations of Computer Science (MFCS 2022)


Abstract
Space-bounded computation has been a central topic in classical and quantum complexity theory. In the quantum case, every elementary gate must be unitary. This restriction makes it unclear whether the power of space-bounded computation changes by allowing intermediate measurements. In the bounded-error case, Fefferman and Remscrim [STOC 2021, pp.1343-1356] and Girish, Raz and Zhan [ICALP 2021, pp.73:1-73:20] recently provided the breakthrough result that the power does not change. This paper shows that a similar result holds for space-bounded quantum computation with postselection. Namely, it is proved that intermediate postselections and measurements can be eliminated from space-bounded quantum computation in the bounded-error setting. Our result strengthens the recent result by Le Gall, Nishimura and Yakaryilmaz [TQC 2021, pp.10:1-10:17] that logarithmic-space bounded-error quantum computation with intermediate postselections and measurements is equivalent in computational power to logarithmic-space unbounded-error probabilistic computation. As an application, it is shown that bounded-error space-bounded one-clean-qubit computation (DQC1) with postselection is equivalent in computational power to unbounded-error space-bounded probabilistic computation, and the computational supremacy of bounded-error space-bounded DQC1 is interpreted in complexity-theoretic terms.

Cite as

Seiichiro Tani. Space-Bounded Unitary Quantum Computation with Postselection. In 47th International Symposium on Mathematical Foundations of Computer Science (MFCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 241, pp. 81:1-81:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{tani:LIPIcs.MFCS.2022.81,
  author =	{Tani, Seiichiro},
  title =	{{Space-Bounded Unitary Quantum Computation with Postselection}},
  booktitle =	{47th International Symposium on Mathematical Foundations of Computer Science (MFCS 2022)},
  pages =	{81:1--81:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-256-3},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{241},
  editor =	{Szeider, Stefan and Ganian, Robert and Silva, Alexandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2022.81},
  URN =		{urn:nbn:de:0030-drops-168798},
  doi =		{10.4230/LIPIcs.MFCS.2022.81},
  annote =	{Keywords: quantum complexity theory, space-bounded computation, postselection}
}
Document
Determining a Slater Winner Is Complete for Parallel Access to NP

Authors: Michael Lampis

Published in: LIPIcs, Volume 219, 39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022)


Abstract
We consider the complexity of deciding the winner of an election under the Slater rule. In this setting we are given a tournament T = (V,A), where the vertices of V represent candidates and the direction of each arc indicates which of the two endpoints is preferable for the majority of voters. The Slater score of a vertex v ∈ V is defined as the minimum number of arcs that need to be reversed so that T becomes acyclic and v becomes the winner. We say that v is a Slater winner in T if v has minimum Slater score in T. Deciding if a vertex is a Slater winner in a tournament has long been known to be NP-hard. However, the best known complexity upper bound for this problem is the class Θ₂^p, which corresponds to polynomial-time Turing machines with parallel access to an NP oracle. In this paper we close this gap by showing that the problem is Θ₂^p-complete, and that this hardness applies to instances constructible by aggregating the preferences of 7 voters.
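
To make the definition concrete, the Slater score can be computed by brute force on tiny tournaments, using the standard reformulation that reversing the arcs pointing "backwards" with respect to a linear order that puts v first makes the tournament acyclic with v as the winner (an illustrative sketch, exponential in |V| and usable only on small instances):

from itertools import permutations

def slater_scores(vertices, arcs):
    # arcs is a set of ordered pairs (u, v) meaning the majority prefers u to v.
    # The Slater score of v is the minimum number of arc reversals that make the
    # tournament acyclic with v as the winner, i.e. the minimum number of arcs
    # pointing backwards over all linear orders that put v first.
    scores = {}
    for v in vertices:
        best = None
        rest = [u for u in vertices if u != v]
        for order in permutations(rest):
            ranking = (v,) + order
            pos = {u: i for i, u in enumerate(ranking)}
            backwards = sum(1 for (a, b) in arcs if pos[a] > pos[b])
            best = backwards if best is None else min(best, backwards)
        scores[v] = best
    return scores

# A 3-cycle: a beats b, b beats c, c beats a. Every vertex has Slater score 1,
# so all three vertices are Slater winners.
V = {"a", "b", "c"}
A = {("a", "b"), ("b", "c"), ("c", "a")}
print(slater_scores(V, A))   # {'a': 1, 'b': 1, 'c': 1}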

Cite as

Michael Lampis. Determining a Slater Winner Is Complete for Parallel Access to NP. In 39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 219, pp. 45:1-45:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{lampis:LIPIcs.STACS.2022.45,
  author =	{Lampis, Michael},
  title =	{{Determining a Slater Winner Is Complete for Parallel Access to NP}},
  booktitle =	{39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022)},
  pages =	{45:1--45:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-222-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{219},
  editor =	{Berenbrink, Petra and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2022.45},
  URN =		{urn:nbn:de:0030-drops-158555},
  doi =		{10.4230/LIPIcs.STACS.2022.45},
  annote =	{Keywords: Slater winner, Feedback Arc Set, Tournaments}
}
Document
Cryptographic Hardness Under Projections for Time-Bounded Kolmogorov Complexity

Authors: Eric Allender, John Gouwar, Shuichi Hirahara, and Caleb Robelle

Published in: LIPIcs, Volume 212, 32nd International Symposium on Algorithms and Computation (ISAAC 2021)


Abstract
A version of time-bounded Kolmogorov complexity, denoted KT, has received attention in the past several years, due to its close connection to circuit complexity and to the Minimum Circuit Size Problem MCSP. Essentially all results about the complexity of MCSP hold also for MKTP (the problem of computing the KT complexity of a string). Both MKTP and MCSP are hard for SZK (Statistical Zero Knowledge) under BPP-Turing reductions; neither is known to be NP-complete. Recently, some hardness results for MKTP were proved that are not (yet) known to hold for MCSP. In particular, MKTP is hard for DET (a subclass of P) under nonuniform ≤^{NC^0}_m reductions. In this paper, we improve this, to show that the complement of MKTP is hard for the (apparently larger) class NISZK_L under not only ≤^{NC^0}_m reductions but even under projections. Also, the complement of MKTP is hard for NISZK under ≤^{P/poly}_m reductions. Here, NISZK is the class of problems with non-interactive zero-knowledge proofs, and NISZK_L is the non-interactive version of the class SZK_L that was studied by Dvir et al. As an application, we provide several improved worst-case to average-case reductions to problems in NP, and we obtain a new lower bound on MKTP (which is currently not known to hold for MCSP).

Cite as

Eric Allender, John Gouwar, Shuichi Hirahara, and Caleb Robelle. Cryptographic Hardness Under Projections for Time-Bounded Kolmogorov Complexity. In 32nd International Symposium on Algorithms and Computation (ISAAC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 212, pp. 54:1-54:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{allender_et_al:LIPIcs.ISAAC.2021.54,
  author =	{Allender, Eric and Gouwar, John and Hirahara, Shuichi and Robelle, Caleb},
  title =	{{Cryptographic Hardness Under Projections for Time-Bounded Kolmogorov Complexity}},
  booktitle =	{32nd International Symposium on Algorithms and Computation (ISAAC 2021)},
  pages =	{54:1--54:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-214-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{212},
  editor =	{Ahn, Hee-Kap and Sadakane, Kunihiko},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2021.54},
  URN =		{urn:nbn:de:0030-drops-154875},
  doi =		{10.4230/LIPIcs.ISAAC.2021.54},
  annote =	{Keywords: Kolmogorov Complexity, Interactive Proofs, Minimum Circuit Size Problem, Worst-case to Average-case Reductions}
}
Document
Towards Learning Terminological Concept Systems from Multilingual Natural Language Text

Authors: Lennart Wachowiak, Christian Lang, Barbara Heinisch, and Dagmar Gromann

Published in: OASIcs, Volume 93, 3rd Conference on Language, Data and Knowledge (LDK 2021)


Abstract
Terminological Concept Systems (TCS) provide a means of organizing, structuring and representing domain-specific multilingual information and are important to ensure terminological consistency in many tasks, such as translation and cross-border communication. While several approaches to (semi-)automatic term extraction exist, learning the interrelations between terms is vastly underexplored. We propose an automated method to extract terms and relations across natural languages and specialized domains. To this end, we adapt pretrained multilingual neural language models, which we evaluate on standard term extraction datasets, achieving the best performance, and on a combination of standard relation extraction datasets, achieving competitive results. Code and dataset are publicly available.

Cite as

Lennart Wachowiak, Christian Lang, Barbara Heinisch, and Dagmar Gromann. Towards Learning Terminological Concept Systems from Multilingual Natural Language Text. In 3rd Conference on Language, Data and Knowledge (LDK 2021). Open Access Series in Informatics (OASIcs), Volume 93, pp. 22:1-22:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{wachowiak_et_al:OASIcs.LDK.2021.22,
  author =	{Wachowiak, Lennart and Lang, Christian and Heinisch, Barbara and Gromann, Dagmar},
  title =	{{Towards Learning Terminological Concept Systems from Multilingual Natural Language Text}},
  booktitle =	{3rd Conference on Language, Data and Knowledge (LDK 2021)},
  pages =	{22:1--22:18},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-199-3},
  ISSN =	{2190-6807},
  year =	{2021},
  volume =	{93},
  editor =	{Gromann, Dagmar and S\'{e}rasset, Gilles and Declerck, Thierry and McCrae, John P. and Gracia, Jorge and Bosque-Gil, Julia and Bobillo, Fernando and Heinisch, Barbara},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.LDK.2021.22},
  URN =		{urn:nbn:de:0030-drops-145586},
  doi =		{10.4230/OASIcs.LDK.2021.22},
  annote =	{Keywords: Terminologies, Neural Language Models, Multilingual Information Extraction}
}
Document
Exploring Maintainability Assurance Research for Service- and Microservice-Based Systems: Directions and Differences

Authors: Justus Bogner, Adrian Weller, Stefan Wagner, and Alfred Zimmermann

Published in: OASIcs, Volume 78, Joint Post-proceedings of the First and Second International Conference on Microservices (Microservices 2017/2019)


Abstract
To ensure sustainable software maintenance and evolution, a diverse set of activities and concepts like metrics, change impact analysis, or antipattern detection can be used. Special maintainability assurance techniques have been proposed for service- and microservice-based systems, but it is difficult to get a comprehensive overview of this publication landscape. We therefore conducted a systematic literature review (SLR) to collect and categorize maintainability assurance approaches for service-oriented architecture (SOA) and microservices. Our search strategy led to the selection of 223 primary studies from 2007 to 2018 which we categorized with a threefold taxonomy: a) architectural (SOA, microservices, both), b) methodical (method or contribution of the study), and c) thematic (maintainability assurance subfield). We discuss the distribution among these categories and present different research directions as well as exemplary studies per thematic category. The primary finding of our SLR is that, while very few approaches have been suggested for microservices so far (24 of 223, ∼11%), we identified several thematic categories where existing SOA techniques could be adapted for the maintainability assurance of microservices.

Cite as

Justus Bogner, Adrian Weller, Stefan Wagner, and Alfred Zimmermann. Exploring Maintainability Assurance Research for Service- and Microservice-Based Systems: Directions and Differences. In Joint Post-proceedings of the First and Second International Conference on Microservices (Microservices 2017/2019). Open Access Series in Informatics (OASIcs), Volume 78, pp. 3:1-3:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{bogner_et_al:OASIcs.Microservices.2017-2019.3,
  author =	{Bogner, Justus and Weller, Adrian and Wagner, Stefan and Zimmermann, Alfred},
  title =	{{Exploring Maintainability Assurance Research for Service- and Microservice-Based Systems: Directions and Differences}},
  booktitle =	{Joint Post-proceedings of the First and Second International Conference on Microservices (Microservices 2017/2019)},
  pages =	{3:1--3:22},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-137-5},
  ISSN =	{2190-6807},
  year =	{2020},
  volume =	{78},
  editor =	{Cruz-Filipe, Lu{\'\i}s and Giallorenzo, Saverio and Montesi, Fabrizio and Peressotti, Marco and Rademacher, Florian and Sachweh, Sabine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.Microservices.2017-2019.3},
  URN =		{urn:nbn:de:0030-drops-118255},
  doi =		{10.4230/OASIcs.Microservices.2017-2019.3},
  annote =	{Keywords: Maintainability, Software Evolution, Quality Assurance, Service-Based Systems, SOA, Microservices, Systematic Literature Review}
}
Document
APPROX
Improved Algorithms for Time Decay Streams

Authors: Vladimir Braverman, Harry Lang, Enayat Ullah, and Samson Zhou

Published in: LIPIcs, Volume 145, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019)


Abstract
In the time-decay model for data streams, elements of an underlying data set arrive sequentially, with the recently arrived elements being more important. A common approach for handling large data sets is to maintain a coreset, a succinct summary of the processed data that allows approximate recovery of a predetermined query. We provide a general framework that takes any offline coreset construction and gives a time-decay coreset for polynomial time decay functions. We also consider the exponential time decay model for k-median clustering, where we provide a constant factor approximation algorithm that utilizes the online facility location algorithm. Our algorithm stores O(k log(h Delta) + h) points, where h is the half-life of the decay function and Delta is the aspect ratio of the dataset. Our techniques extend to k-means clustering and M-estimators as well.
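
For reference, one common way to parameterize exponential time decay with half-life h is the following (an assumed but standard formulation; the paper's exact decay functions may differ):

% weight of an element that arrived at time t_i, seen from the current time T:
w_i = 2^{-(T - t_i)/h} \qquad \bigl(\text{so } w_i = \tfrac{1}{2} \text{ exactly when } T - t_i = h\bigr).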

Cite as

Vladimir Braverman, Harry Lang, Enayat Ullah, and Samson Zhou. Improved Algorithms for Time Decay Streams. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 145, pp. 27:1-27:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{braverman_et_al:LIPIcs.APPROX-RANDOM.2019.27,
  author =	{Braverman, Vladimir and Lang, Harry and Ullah, Enayat and Zhou, Samson},
  title =	{{Improved Algorithms for Time Decay Streams}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019)},
  pages =	{27:1--27:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-125-2},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{145},
  editor =	{Achlioptas, Dimitris and V\'{e}gh, L\'{a}szl\'{o} A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2019.27},
  URN =		{urn:nbn:de:0030-drops-112429},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2019.27},
  annote =	{Keywords: Streaming algorithms, approximation algorithms, facility location and clustering}
}
Document
RANDOM
Streaming Coreset Constructions for M-Estimators

Authors: Vladimir Braverman, Dan Feldman, Harry Lang, and Daniela Rus

Published in: LIPIcs, Volume 145, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019)


Abstract
We introduce a new method of maintaining a (k,epsilon)-coreset for clustering M-estimators over insertion-only streams. Let (P,w) be a weighted set (where w : P -> [0,infty) is the weight function) of points in a rho-metric space (meaning a set X equipped with a positive-semidefinite symmetric function D such that D(x,z) <= rho(D(x,y) + D(y,z)) for all x,y,z in X). For any set of points C, we define COST(P,w,C) = sum_{p in P} w(p) min_{c in C} D(p,c). A (k,epsilon)-coreset for (P,w) is a weighted set (Q,v) such that for every set C of k points, (1-epsilon)COST(P,w,C) <= COST(Q,v,C) <= (1+epsilon)COST(P,w,C). Essentially, the coreset (Q,v) can be used in place of (P,w) for all operations concerning the COST function. Coresets, as a method of data reduction, are used to solve fundamental problems in machine learning on streaming and distributed data. M-estimators are functions D(x,y) that can be written as psi(d(x,y)), where (X, d) is a true metric (i.e., 1-metric) space. Special cases of M-estimators include the well-known k-median (psi(x) = x) and k-means (psi(x) = x^2) functions. Our technique takes an existing offline construction for an M-estimator coreset and converts it into the streaming setting, where n data points arrive sequentially. To our knowledge, this is the first streaming construction for any M-estimator that does not rely on the merge-and-reduce tree. For example, our coreset for streaming metric k-means uses O(epsilon^{-2} k log k log n) points of storage. The previous state of the art required storing at least O(epsilon^{-2} k log k log^4 n) points.
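
Restated in display form, the definitions from the abstract are:

% rho-metric: D is positive semidefinite, symmetric, and satisfies a relaxed triangle inequality
D(x,z) \le \rho \bigl( D(x,y) + D(y,z) \bigr) \quad \text{for all } x, y, z \in X,
% clustering cost of a center set C for the weighted point set (P, w):
\mathrm{COST}(P, w, C) = \sum_{p \in P} w(p) \, \min_{c \in C} D(p, c),
% (k, eps)-coreset (Q, v): for every set C of k centers,
(1 - \varepsilon) \, \mathrm{COST}(P, w, C) \le \mathrm{COST}(Q, v, C) \le (1 + \varepsilon) \, \mathrm{COST}(P, w, C).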

Cite as

Vladimir Braverman, Dan Feldman, Harry Lang, and Daniela Rus. Streaming Coreset Constructions for M-Estimators. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 145, pp. 62:1-62:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{braverman_et_al:LIPIcs.APPROX-RANDOM.2019.62,
  author =	{Braverman, Vladimir and Feldman, Dan and Lang, Harry and Rus, Daniela},
  title =	{{Streaming Coreset Constructions for M-Estimators}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019)},
  pages =	{62:1--62:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-125-2},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{145},
  editor =	{Achlioptas, Dimitris and V\'{e}gh, L\'{a}szl\'{o} A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2019.62},
  URN =		{urn:nbn:de:0030-drops-112778},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2019.62},
  annote =	{Keywords: Streaming, Clustering, Coresets}
}
Document
Nearly Optimal Distinct Elements and Heavy Hitters on Sliding Windows

Authors: Vladimir Braverman, Elena Grigorescu, Harry Lang, David P. Woodruff, and Samson Zhou

Published in: LIPIcs, Volume 116, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018)


Abstract
We study the distinct elements and l_p-heavy hitters problems in the sliding window model, where only the most recent n elements in the data stream form the underlying set. We first introduce the composable histogram, a simple twist on the exponential (Datar et al., SODA 2002) and smooth histograms (Braverman and Ostrovsky, FOCS 2007) that may be of independent interest. We then show that the composable histogram, along with a careful combination of existing techniques to track either the identity or frequency of a few specific items, suffices to obtain algorithms for both distinct elements and l_p-heavy hitters that are nearly optimal in both n and epsilon. Applying our new composable histogram framework, we provide an algorithm that outputs a (1+epsilon)-approximation to the number of distinct elements in the sliding window model and uses O((1/epsilon^2) log n log(1/epsilon) log log n + (1/epsilon) log^2 n) bits of space. For l_p-heavy hitters, we provide an algorithm using space O((1/epsilon^p) log^2 n (log^2 log n + log(1/epsilon))) for 0 < p <= 2, improving upon the best-known algorithm for l_2-heavy hitters (Braverman et al., COCOON 2014), which has space complexity O((1/epsilon^4) log^3 n). We also show complementing nearly optimal lower bounds of Omega((1/epsilon) log^2 n + (1/epsilon^2) log n) for distinct elements and Omega((1/epsilon^p) log^2 n) for l_p-heavy hitters, both tight up to O(log log n) and O(log(1/epsilon)) factors.
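
As a point of contrast, the exact baseline that these algorithms approximate in small space is easy to state but stores the entire window (a hypothetical sketch, not one of the paper's algorithms):

from collections import deque, Counter

class ExactSlidingWindowDistinct:
    # Exact distinct-element count over the most recent n stream items.
    # Uses Theta(n) space; the algorithms in the paper instead give a
    # (1+eps)-approximation using space polylogarithmic in n.
    def __init__(self, n):
        self.n = n
        self.window = deque()
        self.freq = Counter()

    def insert(self, item):
        self.window.append(item)
        self.freq[item] += 1
        if len(self.window) > self.n:        # expire the oldest element
            old = self.window.popleft()
            self.freq[old] -= 1
            if self.freq[old] == 0:
                del self.freq[old]

    def distinct(self):
        return len(self.freq)

sw = ExactSlidingWindowDistinct(n=4)
for x in "abcaab":
    sw.insert(x)
print(sw.distinct())   # last 4 items are c, a, a, b -> 3 distinct elements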

Cite as

Vladimir Braverman, Elena Grigorescu, Harry Lang, David P. Woodruff, and Samson Zhou. Nearly Optimal Distinct Elements and Heavy Hitters on Sliding Windows. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 116, pp. 7:1-7:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{braverman_et_al:LIPIcs.APPROX-RANDOM.2018.7,
  author =	{Braverman, Vladimir and Grigorescu, Elena and Lang, Harry and Woodruff, David P. and Zhou, Samson},
  title =	{{Nearly Optimal Distinct Elements and Heavy Hitters on Sliding Windows}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018)},
  pages =	{7:1--7:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-085-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{116},
  editor =	{Blais, Eric and Jansen, Klaus and D. P. Rolim, Jos\'{e} and Steurer, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2018.7},
  URN =		{urn:nbn:de:0030-drops-94118},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2018.7},
  annote =	{Keywords: Streaming algorithms, sliding windows, heavy hitters, distinct elements}
}
  • Refine by Author
  • 9 Lang, Jérôme
  • 6 Delgrande, James
  • 6 Rott, Hans
  • 5 Bonanno, Giacomo
  • 5 Braverman, Vladimir
  • Show More...

  • Refine by Classification
  • 2 Software and its engineering → Compilers
  • 2 Theory of computation → Circuit complexity
  • 2 Theory of computation → Complexity classes
  • 2 Theory of computation → Facility location and clustering
  • 2 Theory of computation → Problems, reductions and completeness
  • Show More...

  • Refine by Keyword
  • 11 Belief revision
  • 5 Belief change
  • 5 iterated belief revision
  • 4 conditionals
  • 4 dynamic logic
  • Show More...

  • Refine by Type
  • 61 document

  • Refine by Publication Year
  • 24 2007
  • 12 2005
  • 7 2023
  • 3 2015
  • 3 2016
  • Show More...
