23 Search Results for "Pavan, A."


Document
Research
Mining Inter-Document Argument Structures in Scientific Papers for an Argument Web

Authors: Florian Ruosch, Cristina Sarasua, and Abraham Bernstein

Published in: Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 3 (2025)


Abstract
In Argument Mining, predicting argumentative relations between texts (or spans) remains one of the most challenging aspects, even more so in the cross-document setting. This paper makes three key contributions to advance research in this domain. We first extend an existing dataset, the Sci-Arg corpus, by annotating it with explicit inter-document argumentative relations, thereby allowing arguments to be distributed over several documents forming an Argument Web; these new annotations are published using Semantic Web technologies (RDF, OWL). Second, we explore and evaluate three automated approaches for predicting these inter-document argumentative relations, establishing critical baselines on the new dataset. We find that a simple classifier based on discourse indicators with access to context outperforms neural methods. Third, we conduct a comparative analysis of these approaches for both intra- and inter-document settings, identifying statistically significant differences in results that indicate the necessity of distinguishing between these two scenarios. Our findings highlight significant challenges in this complex domain and open crucial avenues for future research on the Argument Web of Science, particularly for those interested in leveraging Semantic Web technologies and knowledge graphs to understand scholarly discourse. With this, we provide the first stepping stones in the form of a benchmark dataset, three baseline methods, and an initial analysis for a systematic exploration of this field relevant to the Web of Data and Science.
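The new annotations are published using Semantic Web technologies (RDF, OWL). As a minimal, self-contained sketch of what one inter-document argumentative relation could look like when serialized as Turtle — the namespace, document IRIs, and the `attacks` property below are hypothetical illustrations, not the paper's actual vocabulary:

```python
# Sketch only: emit one inter-document argumentative relation as Turtle.
# The arg: namespace, IRIs, and relation name are invented for illustration.

def turtle_relation(source_claim, relation, target_claim):
    """Render one argumentative relation between claim spans in two
    different documents as a small Turtle snippet."""
    prefixes = "@prefix arg: <http://example.org/argweb#> .\n"
    triple = f"<{source_claim}> arg:{relation} <{target_claim}> ."
    return prefixes + triple

snippet = turtle_relation(
    "http://example.org/paperA#claim3",   # claim span in document A
    "attacks",                            # hypothetical relation type
    "http://example.org/paperB#claim7",   # claim span in document B
)
print(snippet)
```

Distributing such triples over several documents is what lets the individual papers' argument structures link up into a single Argument Web.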

Cite as

Florian Ruosch, Cristina Sarasua, and Abraham Bernstein. Mining Inter-Document Argument Structures in Scientific Papers for an Argument Web. In Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 3, pp. 4:1-4:33, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@Article{ruosch_et_al:TGDK.3.3.4,
  author =	{Ruosch, Florian and Sarasua, Cristina and Bernstein, Abraham},
  title =	{{Mining Inter-Document Argument Structures in Scientific Papers for an Argument Web}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{4:1--4:33},
  ISSN =	{2942-7517},
  year =	{2025},
  volume =	{3},
  number =	{3},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.3.3.4},
  URN =		{urn:nbn:de:0030-drops-252159},
  doi =		{10.4230/TGDK.3.3.4},
  annote =	{Keywords: Argument Mining, Large Language Models, Knowledge Graphs, Link Prediction}
}
Document
New Algorithmic Directions in Optimal Transport and Applications for Product Spaces

Authors: Salman Beigi, Omid Etesami, Mohammad Mahmoody, and Amir Najafi

Published in: LIPIcs, Volume 359, 36th International Symposium on Algorithms and Computation (ISAAC 2025)


Abstract
We consider the problem of optimal transport between two high-dimensional distributions μ,ν in ℝⁿ from a new algorithmic perspective, in which we are given a sample x ∼ μ and we have to find a close y ∼ ν while running in poly(n) time, where n is the size/dimension of x,y. In other words, we are interested in making the running time bounded in the dimension of the spaces rather than in the total size of the representations of the two distributions. Our main result is a general algorithmic transport result between any product distribution μ and an arbitrary distribution ν of total cost Δ + δ under 𝓁_p^p cost; here Δ is the cost of the so-called Knothe–Rosenblatt transport from μ to ν, while δ is a computational error that goes to zero for larger running time in the transport algorithm. For this result, we need ν to be "sequentially samplable" with a "bounded average sampling cost", which is a novel but natural notion of independent interest. In addition, we prove the following.
- We prove an algorithmic version of Talagrand’s celebrated inequality for transporting the standard Gaussian distribution Φⁿ to an arbitrary ν under the Euclidean-squared cost. When ν is Φⁿ conditioned on a set S of measure ε, we show how to implement the needed sequential sampler for ν in expected time poly(n/ε), using membership oracle access to S. Hence, we obtain an algorithmic transport that maps Φⁿ to Φⁿ|S in time poly(n/ε) and expected Euclidean-squared distance O(log 1/ε), which is optimal for a general set S of measure ε.
- As a corollary, we obtain the first computational concentration (Etesami et al., SODA 2020) result for the Gaussian measure under the Euclidean distance with a dimension-independent transportation cost, resolving a question of Etesami et al. More precisely, for any set S of Gaussian measure ε, we map most samples of Φⁿ to S with Euclidean distance O(√{log 1/ε}) in time poly(n/ε).
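In one dimension, the Knothe–Rosenblatt transport reduces to the classical monotone (inverse-CDF) rearrangement, which on equal-size empirical samples simply pairs sorted points. The sketch below estimates the 𝓁_p^p transport cost in this 1D base case; it is purely illustrative and is not the paper's high-dimensional poly(n)-time algorithm:

```python
# Illustration of the 1D base case: the Knothe-Rosenblatt / monotone map
# pairs the i-th smallest sample of mu with the i-th smallest sample of nu.

def monotone_transport_cost(xs, ys, p=2):
    """Empirical l_p^p transport cost between two equal-size sample sets."""
    if len(xs) != len(ys):
        raise ValueError("this sketch requires equal sample sizes")
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(x - y) ** p for x, y in zip(xs, ys)) / len(xs)

# Shifting every sample by 1 costs exactly 1 per point under squared distance.
cost = monotone_transport_cost([0.0, 1.0, 2.0], [1.0, 2.0, 3.0], p=2)
print(cost)  # 1.0
```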

Cite as

Salman Beigi, Omid Etesami, Mohammad Mahmoody, and Amir Najafi. New Algorithmic Directions in Optimal Transport and Applications for Product Spaces. In 36th International Symposium on Algorithms and Computation (ISAAC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 359, pp. 10:1-10:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{beigi_et_al:LIPIcs.ISAAC.2025.10,
  author =	{Beigi, Salman and Etesami, Omid and Mahmoody, Mohammad and Najafi, Amir},
  title =	{{New Algorithmic Directions in Optimal Transport and Applications for Product Spaces}},
  booktitle =	{36th International Symposium on Algorithms and Computation (ISAAC 2025)},
  pages =	{10:1--10:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-408-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{359},
  editor =	{Chen, Ho-Lin and Hon, Wing-Kai and Tsai, Meng-Tsung},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2025.10},
  URN =		{urn:nbn:de:0030-drops-249187},
  doi =		{10.4230/LIPIcs.ISAAC.2025.10},
  annote =	{Keywords: Optimal transport, Randomized algorithms, Concentration bounds}
}
Document
Short Paper
Temporal Considerations in DJ Mix Information Retrieval and Generation (Short Paper)

Authors: Alexander Williams, Gregor Meehan, Stefan Lattner, Johan Pauwels, and Mathieu Barthet

Published in: LIPIcs, Volume 355, 32nd International Symposium on Temporal Representation and Reasoning (TIME 2025)


Abstract
Music is the art of arranging sounds in time so as to produce a continuous, unified, and evocative composition. Electronic dance music (EDM) is a collection of musical sub-genres produced using computers and electronic instruments and often presented through the medium of DJing, where tracks are curated and mixed sequentially into a continuous stream of music to offer unique listening and dancing experiences over time periods ranging from several minutes to several hours. A DJ’s actions and decisions occur at several levels of temporal granularity, from real-time audio manipulation (e.g. of tempo) for smooth inter-track transitions to long-term planning of track selection and sequencing for mix content and flow. While human DJs can instinctively operate across these different temporal resolutions, replicating this capability in an end-to-end automated DJing system presents significant challenges. In this paper, we analyse existing works in DJ mix information retrieval and generation from this temporal perspective. We first explain the close link between DJing and the temporal notion of musical rhythm, then describe a framework for categorising DJing actions by temporal granularity. Using this framework, we summarise and contrast potential approaches for automating and augmenting sequential DJ decision making, and discuss the unique characteristics of DJ mix track selection as a sequential recommendation task. In doing so, we hope to facilitate the implementation of more robust and complete automated DJing systems in future research.
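At the finest temporal granularity mentioned above, tempo manipulation for a transition amounts to scaling the incoming track's playback rate by the ratio of the two tempi. A simplified sketch of that arithmetic follows (real DJ software additionally time-stretches to preserve pitch and aligns beat phase, which this toy function ignores):

```python
def beatmatch_rate(playing_bpm, incoming_bpm):
    """Playback-rate multiplier so the incoming track matches the
    currently playing tempo. Simplified: no pitch correction, no
    beat-phase alignment."""
    if incoming_bpm <= 0 or playing_bpm <= 0:
        raise ValueError("BPM values must be positive")
    return playing_bpm / incoming_bpm

# A 128 BPM track mixed into a 124 BPM set must be slowed slightly.
print(beatmatch_rate(124.0, 128.0))  # 0.96875
```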

Cite as

Alexander Williams, Gregor Meehan, Stefan Lattner, Johan Pauwels, and Mathieu Barthet. Temporal Considerations in DJ Mix Information Retrieval and Generation (Short Paper). In 32nd International Symposium on Temporal Representation and Reasoning (TIME 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 355, pp. 20:1-20:8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{williams_et_al:LIPIcs.TIME.2025.20,
  author =	{Williams, Alexander and Meehan, Gregor and Lattner, Stefan and Pauwels, Johan and Barthet, Mathieu},
  title =	{{Temporal Considerations in DJ Mix Information Retrieval and Generation}},
  booktitle =	{32nd International Symposium on Temporal Representation and Reasoning (TIME 2025)},
  pages =	{20:1--20:8},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-401-7},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{355},
  editor =	{Vidal, Thierry and Wa{\l}\k{e}ga, Przemys{\l}aw Andrzej},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.TIME.2025.20},
  URN =		{urn:nbn:de:0030-drops-244662},
  doi =		{10.4230/LIPIcs.TIME.2025.20},
  annote =	{Keywords: Music Information Retrieval, Computational Creativity, Recommender Systems, Electronic Dance Music, DJ}
}
Document
Hardness Amplification for Real-Valued Functions

Authors: Yunqi Li and Prashant Nalini Vasudevan

Published in: LIPIcs, Volume 339, 40th Computational Complexity Conference (CCC 2025)


Abstract
Given an integer-valued function f:{0,1}ⁿ → {0,1,… , m-1} that is mildly hard to compute on instances drawn from some distribution D over {0,1}ⁿ, we show that the function g(x_1, … , x_t) = f(x_1) + ⋯ + f(x_t) is strongly hard to compute on instances (x_1,… ,x_t) drawn from the product distribution D^t. We also show the same for the task of approximately computing real-valued functions f:{0,1}ⁿ → [0,m). Our theorems immediately imply hardness self-amplification for several natural problems, including Max-Clique, Max-SAT, Approximate #SAT, and Entropy Estimation.
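The amplified function g is a plain direct sum of t independent evaluations of f. The sketch below shows only the construction, with a toy stand-in for f; the theorem's content is the hardness transfer, which no small example can demonstrate:

```python
# Direct-sum construction from the abstract: g(x_1,...,x_t) = sum f(x_i).
# f below is a toy stand-in, purely to make the construction concrete.

def make_g(f, t):
    """Build g from f by summing f over t independent inputs."""
    def g(*xs):
        assert len(xs) == t, "g takes exactly t inputs"
        return sum(f(x) for x in xs)
    return g

f = lambda x: bin(x).count("1") % 3   # toy f: {0,1}^n -> {0,1,2}
g = make_g(f, 3)
print(g(0b101, 0b111, 0b1))           # f values 2, 0, 1 -> 3
```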

Cite as

Yunqi Li and Prashant Nalini Vasudevan. Hardness Amplification for Real-Valued Functions. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 2:1-2:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{li_et_al:LIPIcs.CCC.2025.2,
  author =	{Li, Yunqi and Vasudevan, Prashant Nalini},
  title =	{{Hardness Amplification for Real-Valued Functions}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{2:1--2:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.2},
  URN =		{urn:nbn:de:0030-drops-236967},
  doi =		{10.4230/LIPIcs.CCC.2025.2},
  annote =	{Keywords: Average-case complexity, hardness amplification}
}
Document
Counting Martingales for Measure and Dimension in Complexity Classes

Authors: John M. Hitchcock, Adewale Sekoni, and Hadi Shafei

Published in: LIPIcs, Volume 339, 40th Computational Complexity Conference (CCC 2025)


Abstract
This paper makes two primary contributions. First, we introduce the concept of counting martingales and use it to define counting measures and counting dimensions. Second, we apply these new tools to strengthen previous circuit lower bounds. Resource-bounded measure and dimension have traditionally focused on deterministic time and space bounds. We use counting complexity classes to develop resource-bounded counting measures and dimensions. Counting martingales are constructed using functions from the #𝖯, SpanP, and GapP complexity classes. We show that counting martingales capture many martingale constructions in complexity theory. The resulting counting measures and dimensions are intermediate in power between the standard time-bounded and space-bounded notions, enabling finer-grained analysis where space-bounded measures are known, but time-bounded measures remain open. For example, we show that BPP has #𝖯-dimension 0 and BQP has GapP-dimension 0, whereas the 𝖯-dimensions of these classes remain open. As our main application, we improve circuit-size lower bounds. Lutz (1992) strengthened Shannon’s classic (1-ε) 2ⁿ/n lower bound (1949) to PSPACE-measure, showing that almost all problems require circuits of size (2ⁿ/n)(1+(α log n)/n), for any α < 1. We extend this result to SpanP-measure, with a proof that uses a connection through the Minimum Circuit Size Problem (MCSP) to construct a counting martingale. Our results imply that the stronger lower bound holds within the third level of the exponential-time hierarchy, whereas previously, it was only known in ESPACE. Under a derandomization hypothesis, this lower bound holds within the second level of the exponential-time hierarchy, specifically in the class 𝖤^NP. We also study the #𝖯-dimension of classical circuit complexity classes and the GapP-dimension of quantum circuit complexity classes.
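A martingale in this resource-bounded sense is a non-negative betting function d on binary strings satisfying the fairness condition d(w) = (d(w0) + d(w1))/2. The sketch below checks that condition for a toy martingale that bets all of its capital on the next bit being 1; it is illustrative only, since counting martingales additionally require d to be computable within #P-, SpanP-, or GapP-style resource bounds:

```python
# A martingale d: {0,1}* -> [0, inf) must satisfy d(w) = (d(w0)+d(w1))/2.
# This toy martingale doubles its capital on each 1 and goes broke on 0,
# so it "succeeds" (capital -> infinity) on the all-ones sequence.

def d(w):
    """Capital after betting everything on 1 at each step, starting at 1."""
    capital = 1.0
    for bit in w:
        capital = 2 * capital if bit == "1" else 0.0
    return capital

# Fairness (averaging) condition holds at every prefix checked:
for w in ["", "0", "1", "10", "11"]:
    assert d(w) == (d(w + "0") + d(w + "1")) / 2
print(d("1111"))  # 16.0
```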

Cite as

John M. Hitchcock, Adewale Sekoni, and Hadi Shafei. Counting Martingales for Measure and Dimension in Complexity Classes. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 20:1-20:35, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{hitchcock_et_al:LIPIcs.CCC.2025.20,
  author =	{Hitchcock, John M. and Sekoni, Adewale and Shafei, Hadi},
  title =	{{Counting Martingales for Measure and Dimension in Complexity Classes}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{20:1--20:35},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.20},
  URN =		{urn:nbn:de:0030-drops-237145},
  doi =		{10.4230/LIPIcs.CCC.2025.20},
  annote =	{Keywords: resource-bounded measure, resource-bounded dimension, counting martingales, counting complexity, circuit complexity, Kolmogorov complexity, quantum complexity, Minimum Circuit Size Problem}
}
Document
Approximating Klee’s Measure Problem and a Lower Bound for Union Volume Estimation

Authors: Karl Bringmann, Kasper Green Larsen, André Nusser, Eva Rotenberg, and Yanheng Wang

Published in: LIPIcs, Volume 332, 41st International Symposium on Computational Geometry (SoCG 2025)


Abstract
Union volume estimation is a classical algorithmic problem. Given a family of objects O₁,…,O_n ⊂ ℝ^d, we want to approximate the volume of their union. In the special case where all objects are boxes (also called hyperrectangles) this is known as Klee’s measure problem. The state-of-the-art (1+ε)-approximation algorithm [Karp, Luby, Madras '89] for union volume estimation as well as Klee’s measure problem in constant dimension d uses a total of O(n/ε²) queries of three types: (i) determine the volume of O_i; (ii) sample a point uniformly at random from O_i; and (iii) ask whether a given point is contained in O_i. First, we show that if an algorithm learns about the objects only through these types of queries, then Ω(n/ε²) queries are necessary. In this sense, the complexity of [Karp, Luby, Madras '89] is optimal. Our lower bound holds even if the objects are equiponderous axis-aligned polygons in ℝ², if the containment query allows arbitrary (not necessarily sampled) points, and if the algorithm can spend arbitrary time and space examining the query responses. Second, we provide a more efficient approximation algorithm for Klee’s measure problem, which improves the running time from O(n/ε²) to O((n+1/ε²) ⋅ log^{O(d)} (n)). We circumvent our lower bound by exploiting the geometry of boxes in various ways: (1) We sort the boxes into classes of similar shapes after inspecting their corner coordinates. (2) With orthogonal range searching, we show how to sample points from the union of boxes in each class, and how to merge samples from different classes. (3) We bound the amount of wasted work by arguing that most pairs of classes have a small intersection.
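The [Karp, Luby, Madras '89] estimator uses exactly the three query types listed above. A minimal sketch for boxes in ℝ¹ (intervals) follows, in the standard "canonical index" form of the estimator; it is the classical O(n/ε²)-query baseline, not the paper's improved algorithm:

```python
import random

# Karp-Luby-Madras union-volume estimator, specialized to 1D intervals.
# Queries used: (i) volume of O_i, (ii) uniform sample from O_i,
# (iii) membership of a point in O_i -- matching the abstract's model.

def klm_union_volume(intervals, samples, rng):
    vols = [b - a for a, b in intervals]
    total = sum(vols)
    hits = 0
    for _ in range(samples):
        # Pick an interval with probability proportional to its volume,
        # then a uniform point inside it.
        i = rng.choices(range(len(intervals)), weights=vols)[0]
        x = rng.uniform(*intervals[i])
        # Count the pair only if i is the first interval containing x,
        # so each point of the union is counted exactly once.
        if i == min(j for j, (a, b) in enumerate(intervals) if a <= x <= b):
            hits += 1
    return total * hits / samples

rng = random.Random(0)
est = klm_union_volume([(0.0, 1.0), (0.5, 1.5), (3.0, 4.0)], 20000, rng)
print(est)  # close to the true union length 2.5
```

With 20000 samples the relative error is well under the Ω(n/ε²) lower bound's regime of interest; the point of the sketch is only the query pattern.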

Cite as

Karl Bringmann, Kasper Green Larsen, André Nusser, Eva Rotenberg, and Yanheng Wang. Approximating Klee’s Measure Problem and a Lower Bound for Union Volume Estimation. In 41st International Symposium on Computational Geometry (SoCG 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 332, pp. 25:1-25:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{bringmann_et_al:LIPIcs.SoCG.2025.25,
  author =	{Bringmann, Karl and Larsen, Kasper Green and Nusser, Andr\'{e} and Rotenberg, Eva and Wang, Yanheng},
  title =	{{Approximating Klee’s Measure Problem and a Lower Bound for Union Volume Estimation}},
  booktitle =	{41st International Symposium on Computational Geometry (SoCG 2025)},
  pages =	{25:1--25:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-370-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{332},
  editor =	{Aichholzer, Oswin and Wang, Haitao},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2025.25},
  URN =		{urn:nbn:de:0030-drops-231778},
  doi =		{10.4230/LIPIcs.SoCG.2025.25},
  annote =	{Keywords: approximation, volume of union, union of objects, query complexity}
}
Document
Extended Abstract
Cost over Content: Information Choice in Trade (Extended Abstract)

Authors: Kristóf Madarász and Marek Pycia

Published in: LIPIcs, Volume 329, 6th Symposium on Foundations of Responsible Computing (FORC 2025)


Abstract
How much would buyers pay to have some control over what a seller knows about them? When deciding what information to provide to her counterpart, a privately-informed trader chooses between options that may differ both in their contents and in their costs. For a large class of static and dynamic trading environments where buyers choose from arbitrary sets of signal processes that reveal or obfuscate information to a seller, we establish a "cost-over-content" theorem. In equilibrium, buyers only choose cheapest processes, regardless of the information content they provide. Pooling on any cheapest process is an equilibrium. Our paper uncovers a general source of market failure linked to the direct cost of information choice with consequences for the role of information defaults. We explore applications to bargaining, signaling, disclosure, consumer privacy, and data trade. At the FORC presentation we focus on the privacy application. Our cost-over-content results explain why consumers may choose little protection of their private information even when protecting it would improve their welfare; we show that this so-called privacy paradox is an equilibrium phenomenon driven by market power. Our results also lead to predictions on how an online-shopping platform provider designs the information flows between sellers and the buyers. While the platform can charge both the buyer and the seller for its offering of information structures, following the logic of our cost-over-content theorem, we show that the platform neither charges nor compensates the buyer for choosing an information structure. At the same time, the platform charges the seller and maximizes the value of the information passed to the seller, as measured in the seller’s direct profit from interacting with the buyer. 
This asymmetry is relevant for the impact of the widespread ability to track consumers on economic outcomes and the distribution of gains from trade, and it is relevant for the effectiveness of various consumer-protection policies.

Cite as

Kristóf Madarász and Marek Pycia. Cost over Content: Information Choice in Trade (Extended Abstract). In 6th Symposium on Foundations of Responsible Computing (FORC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 329, p. 13:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{madarasz_et_al:LIPIcs.FORC.2025.13,
  author =	{Madar\'{a}sz, Krist\'{o}f and Pycia, Marek},
  title =	{{Cost over Content: Information Choice in Trade}},
  booktitle =	{6th Symposium on Foundations of Responsible Computing (FORC 2025)},
  pages =	{13:1--13:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-367-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{329},
  editor =	{Bun, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2025.13},
  URN =		{urn:nbn:de:0030-drops-231409},
  doi =		{10.4230/LIPIcs.FORC.2025.13},
  annote =	{Keywords: Communication Costs, Default-based Regulation, Data Trade, Dynamic Pricing, Privacy Paradox, Signaling, Disclosure, Efficiency, Online Platforms}
}
Document
Fault Detection and Identification by Autonomous Mobile Robots

Authors: Stefano Clemente and Caterina Feletti

Published in: LIPIcs, Volume 330, 4th Symposium on Algorithmic Foundations of Dynamic Networks (SAND 2025)


Abstract
The Look-Compute-Move model (LCM) is adopted to study swarms of mobile robots that have to solve a given problem. Robots are generally assumed to be autonomous, indistinguishable, anonymous, homogeneous and to move on the Euclidean plane. Different LCM sub-models have been theorized to study different settings and their computational power. Notably, the literature has focused on four base models (i.e., OBLOT, FSTA, FCOM, LUMI) that differ in memory and communication capabilities, and in different synchronization modes (e.g., fully synchronous FSYNCH, semi-synchronous SSYNCH). In this paper, we consider fault-prone models where robots can suffer from crash faults: each robot may irremediably stop working after an unpredictable time. We study the general Fault Detection (FD) problem which is solved by a swarm if it correctly detects whether a faulty robot exists in the swarm. The Fault Identification (FI) problem additionally requires identifying which robots are faulty. We consider 12 LCM sub-models (OBLOT, FSTA, FCOM, LUMI, combined with FSYNCH, SSYNCH, and the round-robin RROBIN) and we study the (im)possibility of designing reliable procedures to solve FD or FI. In particular, we propose three distributed algorithms so that a swarm can collectively solve FD under the models LUMI^FSYNCH, FCOM^FSYNCH, and LUMI^RROBIN.

Cite as

Stefano Clemente and Caterina Feletti. Fault Detection and Identification by Autonomous Mobile Robots. In 4th Symposium on Algorithmic Foundations of Dynamic Networks (SAND 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 330, pp. 10:1-10:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{clemente_et_al:LIPIcs.SAND.2025.10,
  author =	{Clemente, Stefano and Feletti, Caterina},
  title =	{{Fault Detection and Identification by Autonomous Mobile Robots}},
  booktitle =	{4th Symposium on Algorithmic Foundations of Dynamic Networks (SAND 2025)},
  pages =	{10:1--10:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-368-3},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{330},
  editor =	{Meeks, Kitty and Scheideler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SAND.2025.10},
  URN =		{urn:nbn:de:0030-drops-230639},
  doi =		{10.4230/LIPIcs.SAND.2025.10},
  annote =	{Keywords: Autonomous mobile robots, Faulty robots, Look-Compute-Move, Fault detection, Fault identification, Round-robin}
}
Document
Quantum Data Sketches

Authors: Qin Zhang and Mohsen Heidari

Published in: LIPIcs, Volume 328, 28th International Conference on Database Theory (ICDT 2025)


Abstract
Recent advancements in quantum technologies, particularly in quantum sensing and simulation, have facilitated the generation and analysis of inherently quantum data. This progress underscores the necessity for developing efficient and scalable quantum data management strategies. This goal faces immense challenges due to the exponential dimensionality of quantum data and its unique quantum properties such as no-cloning and measurement stochasticity. Specifically, classical storage and manipulation of an arbitrary n-qubit quantum state require exponential space and time. Hence, there is a critical need to revisit foundational data management concepts and algorithms for quantum data. In this paper, we propose succinct quantum data sketches to support basic database operations such as search and selection. We view our work as an initial step towards the development of a quantum data management model, opening up many possibilities for future research in this direction.

Cite as

Qin Zhang and Mohsen Heidari. Quantum Data Sketches. In 28th International Conference on Database Theory (ICDT 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 328, pp. 16:1-16:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{zhang_et_al:LIPIcs.ICDT.2025.16,
  author =	{Zhang, Qin and Heidari, Mohsen},
  title =	{{Quantum Data Sketches}},
  booktitle =	{28th International Conference on Database Theory (ICDT 2025)},
  pages =	{16:1--16:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-364-5},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{328},
  editor =	{Roy, Sudeepa and Kara, Ahmet},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICDT.2025.16},
  URN =		{urn:nbn:de:0030-drops-229570},
  doi =		{10.4230/LIPIcs.ICDT.2025.16},
  annote =	{Keywords: quantum data representation, data sketching, query execution}
}
Document
The Randomness Complexity of Differential Privacy

Authors: Clément L. Canonne, Francis E. Su, and Salil P. Vadhan

Published in: LIPIcs, Volume 325, 16th Innovations in Theoretical Computer Science Conference (ITCS 2025)


Abstract
We initiate the study of the randomness complexity of differential privacy, i.e., how many random bits an algorithm needs in order to generate accurate differentially private releases. As a test case, we focus on the task of releasing the results of d counting queries, or equivalently all one-way marginals on a d-dimensional dataset with boolean attributes. While standard differentially private mechanisms for this task have randomness complexity that grows linearly with d, we show that, surprisingly, only log₂ d+O(1) random bits (in expectation) suffice to achieve an error that depends polynomially on d (and is independent of the size n of the dataset), and furthermore this is possible with pure, unbounded differential privacy and privacy-loss parameter ε = 1/poly(d). Conversely, we show that at least log₂ d-O(1) random bits are also necessary for nontrivial accuracy, even with approximate, bounded DP, provided the privacy-loss parameters satisfy ε,δ ≤ 1/poly(d). We obtain our results by establishing a close connection between the randomness complexity of differentially private mechanisms and the geometric notion of "deterministic rounding schemes" recently introduced and studied by Vander Woude et al. (2022, 2023).
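The baseline the abstract contrasts with is the standard Laplace mechanism, which draws one fresh Laplace variate per counting query, so its randomness grows linearly with d. A simplified sketch follows (the noise scale d/ε reflects the L1 sensitivity of the vector of d counting queries, where one record changes each count by at most 1; bounded vs. unbounded neighbouring and finite-precision issues are glossed over):

```python
import math
import random

# Baseline Laplace mechanism for d counting queries: one Laplace draw per
# query, hence randomness linear in d. The abstract shows that, in contrast,
# log2(d) + O(1) random bits in expectation can suffice.

def laplace_counts(true_counts, epsilon, rng):
    d = len(true_counts)
    scale = d / epsilon  # L1 sensitivity of d counting queries is d
    noisy = []
    for c in true_counts:
        u = rng.random() - 0.5                  # one fresh uniform per query
        # Inverse-CDF sampling of Lap(scale): X = -b * sgn(u) * ln(1 - 2|u|).
        noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
        noisy.append(c + noise)
    return noisy

rng = random.Random(1)
release = laplace_counts([10, 40, 25], epsilon=1.0, rng=rng)
print(len(release))  # 3 noisy one-way marginals
```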

Cite as

Clément L. Canonne, Francis E. Su, and Salil P. Vadhan. The Randomness Complexity of Differential Privacy. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 27:1-27:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{canonne_et_al:LIPIcs.ITCS.2025.27,
  author =	{Canonne, Cl\'{e}ment L. and Su, Francis E. and Vadhan, Salil P.},
  title =	{{The Randomness Complexity of Differential Privacy}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{27:1--27:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.27},
  URN =		{urn:nbn:de:0030-drops-226556},
  doi =		{10.4230/LIPIcs.ITCS.2025.27},
  annote =	{Keywords: differential privacy, randomness, geometry}
}
Document
Crash-Tolerant Perpetual Exploration with Myopic Luminous Robots on Rings

Authors: Fukuhito Ooshita, Naoki Kitamura, Ryota Eguchi, Michiko Inoue, Hirotsugu Kakugawa, Sayaka Kamei, Masahiro Shibata, and Yuichi Sudo

Published in: LIPIcs, Volume 324, 28th International Conference on Principles of Distributed Systems (OPODIS 2024)


Abstract
We investigate crash-tolerant perpetual exploration algorithms by myopic luminous robots on ring networks. Myopic robots can observe nodes only within a certain fixed distance ϕ, and luminous robots are equipped with light devices that can emit a color from a set of colors. The goal of perpetual exploration is to ensure that robots, starting from specific initial positions and colors, move in such a way that every node is visited by at least one robot infinitely often. As our main contribution, we establish the tight necessary and sufficient number of robots to realize perpetual exploration when at most f robots crash. In the fully synchronous model, we prove that f+2 robots are necessary and sufficient for any ϕ ≥ 1. In the semi-synchronous and asynchronous models, we prove that 3f+3 (resp., 2f+2) robots are necessary and sufficient if ϕ = 1 (resp., ϕ ≥ 2).
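The tight bounds stated in the abstract can be collected into a small lookup; the function below merely restates those bounds (the model names follow the abstract, and the function itself is an illustration, not part of the paper's algorithms):

```python
def robots_needed(f, phi, sync):
    """Tight robot count for crash-tolerant perpetual exploration on rings,
    per the abstract: f = max crashes tolerated, phi = visibility range."""
    if sync == "FSYNCH":                 # fully synchronous: f + 2, any phi >= 1
        return f + 2
    if sync in ("SSYNCH", "ASYNCH"):     # semi-synchronous / asynchronous
        return 3 * f + 3 if phi == 1 else 2 * f + 2  # phi >= 2 case
    raise ValueError("unknown synchronization model")

print(robots_needed(1, 1, "FSYNCH"))  # 3
print(robots_needed(1, 1, "SSYNCH"))  # 6
print(robots_needed(1, 2, "ASYNCH"))  # 4
```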

Cite as

Fukuhito Ooshita, Naoki Kitamura, Ryota Eguchi, Michiko Inoue, Hirotsugu Kakugawa, Sayaka Kamei, Masahiro Shibata, and Yuichi Sudo. Crash-Tolerant Perpetual Exploration with Myopic Luminous Robots on Rings. In 28th International Conference on Principles of Distributed Systems (OPODIS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 324, pp. 12:1-12:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{ooshita_et_al:LIPIcs.OPODIS.2024.12,
  author =	{Ooshita, Fukuhito and Kitamura, Naoki and Eguchi, Ryota and Inoue, Michiko and Kakugawa, Hirotsugu and Kamei, Sayaka and Shibata, Masahiro and Sudo, Yuichi},
  title =	{{Crash-Tolerant Perpetual Exploration with Myopic Luminous Robots on Rings}},
  booktitle =	{28th International Conference on Principles of Distributed Systems (OPODIS 2024)},
  pages =	{12:1--12:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-360-7},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{324},
  editor =	{Bonomi, Silvia and Galletta, Letterio and Rivi\`{e}re, Etienne and Schiavoni, Valerio},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.OPODIS.2024.12},
  URN =		{urn:nbn:de:0030-drops-225486},
  doi =		{10.4230/LIPIcs.OPODIS.2024.12},
  annote =	{Keywords: mobile robots, crash faults, LCM model, exploration}
}
Document
Position
Grounding Stream Reasoning Research

Authors: Pieter Bonte, Jean-Paul Calbimonte, Daniel de Leng, Daniele Dell'Aglio, Emanuele Della Valle, Thomas Eiter, Federico Giannini, Fredrik Heintz, Konstantin Schekotihin, Danh Le-Phuoc, Alessandra Mileo, Patrik Schneider, Riccardo Tommasini, Jacopo Urbani, and Giacomo Ziffer

Published in: TGDK, Volume 2, Issue 1 (2024): Special Issue on Trends in Graph Data and Knowledge - Part 2. Transactions on Graph Data and Knowledge, Volume 2, Issue 1


Abstract
In the last decade, there has been a growing interest in applying AI technologies to implement complex data analytics over data streams. To this end, researchers in various fields have been organising a yearly event called the "Stream Reasoning Workshop" to share perspectives, challenges, and experiences around this topic. In this paper, the previous organisers of the workshops and other community members provide a summary of the main research results that have been discussed during the first six editions of the event. These results can be categorised into four main research areas: The first is concerned with the technological challenges related to handling large data streams. The second area aims at adapting and extending existing semantic technologies to data streams. The third and fourth areas focus on how to implement reasoning techniques, either considering deductive or inductive techniques, to extract new and valuable knowledge from the data in the stream. This summary is written not only to provide a crystallisation of the field, but also to point out distinctive traits of the stream reasoning community. Moreover, it also provides a foundation for future research by enumerating a list of use cases and open challenges, to stimulate others to join this exciting research area.
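The continuous query processing mentioned above can be sketched with a minimal time-based sliding window over an RDF-like triple stream. This follows the general spirit of stream reasoning engines, but it is not any specific system's API; all class and predicate names here are illustrative.

```python
from collections import deque

class SlidingWindow:
    """A time-based sliding window over (timestamp, triple) pairs."""

    def __init__(self, width):
        self.width = width    # window width in time units
        self.items = deque()  # (timestamp, (s, p, o)) pairs, time-ordered

    def push(self, timestamp, triple):
        self.items.append((timestamp, triple))
        # Evict triples that have fallen out of the window.
        while self.items and self.items[0][0] <= timestamp - self.width:
            self.items.popleft()

    def query(self, predicate):
        # Continuous query: count matching triples currently in the window.
        return sum(1 for _, (s, p, o) in self.items if p == predicate)

w = SlidingWindow(width=10)
w.push(1, ("sensor1", "hasReading", 20))
w.push(5, ("sensor2", "hasReading", 21))
w.push(12, ("sensor1", "hasStatus", "ok"))
print(w.query("hasReading"))  # → 1 (the reading at t=1 has expired)
```

Real stream reasoning systems layer deductive or inductive inference on top of such windows; this sketch shows only the windowing substrate they share.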

Cite as

Pieter Bonte, Jean-Paul Calbimonte, Daniel de Leng, Daniele Dell'Aglio, Emanuele Della Valle, Thomas Eiter, Federico Giannini, Fredrik Heintz, Konstantin Schekotihin, Danh Le-Phuoc, Alessandra Mileo, Patrik Schneider, Riccardo Tommasini, Jacopo Urbani, and Giacomo Ziffer. Grounding Stream Reasoning Research. In Special Issue on Trends in Graph Data and Knowledge - Part 2. Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 1, pp. 2:1-2:47, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@Article{bonte_et_al:TGDK.2.1.2,
  author =	{Bonte, Pieter and Calbimonte, Jean-Paul and de Leng, Daniel and Dell'Aglio, Daniele and Della Valle, Emanuele and Eiter, Thomas and Giannini, Federico and Heintz, Fredrik and Schekotihin, Konstantin and Le-Phuoc, Danh and Mileo, Alessandra and Schneider, Patrik and Tommasini, Riccardo and Urbani, Jacopo and Ziffer, Giacomo},
  title =	{{Grounding Stream Reasoning Research}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{2:1--2:47},
  ISSN =	{2942-7517},
  year =	{2024},
  volume =	{2},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.2.1.2},
  URN =		{urn:nbn:de:0030-drops-198597},
  doi =		{10.4230/TGDK.2.1.2},
  annote =	{Keywords: Stream Reasoning, Stream Processing, RDF streams, Streaming Linked Data, Continuous query processing, Temporal Logics, High-performance computing, Databases}
}
Document
Vision
Multilingual Knowledge Graphs and Low-Resource Languages: A Review

Authors: Lucie-Aimée Kaffee, Russa Biswas, C. Maria Keet, Edlira Kalemi Vakaj, and Gerard de Melo

Published in: TGDK, Volume 1, Issue 1 (2023): Special Issue on Trends in Graph Data and Knowledge. Transactions on Graph Data and Knowledge, Volume 1, Issue 1


Abstract
There is a lack of multilingual data to support applications in a large number of languages, especially for low-resource languages. Knowledge graphs (KGs) could help close this language-support gap by providing easily accessible, machine-readable, multilingual linked data, which can be reused across applications. In this paper, we provide an overview of work in the domain of multilingual KGs with a focus on low-resource languages. We review the current state of multilingual KGs along with the different aspects that are crucial for creating KGs with language coverage in mind. Special consideration is given to challenges particular to low-resource languages in KGs. We further provide an overview of applications that yield multilingual KG information as well as downstream applications reusing such multilingual data. Finally, we explore open problems regarding multilingual KGs with a focus on low-resource languages.
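One aspect of the language-coverage question discussed here can be made concrete by measuring label coverage per language in a KG. The snippet below is a hypothetical sketch over toy data; the entity identifiers and labels are illustrative, not drawn from any real KG.

```python
# Toy multilingual KG: entity -> {language tag -> label}.
labels = {
    "Q1": {"en": "universe", "sw": "ulimwengu"},
    "Q2": {"en": "earth"},
    "Q3": {"en": "moon", "sw": "mwezi"},
}

def coverage(labels, lang):
    """Fraction of entities that have a label in the given language."""
    have = sum(1 for entity_labels in labels.values() if lang in entity_labels)
    return have / len(labels)

print(coverage(labels, "en"))  # 1.0
print(coverage(labels, "sw"))  # 0.666...
```

Gaps like the missing Swahili label for "Q2" are exactly the kind of asymmetry between high- and low-resource languages that such a review surveys at scale.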

Cite as

Lucie-Aimée Kaffee, Russa Biswas, C. Maria Keet, Edlira Kalemi Vakaj, and Gerard de Melo. Multilingual Knowledge Graphs and Low-Resource Languages: A Review. In Special Issue on Trends in Graph Data and Knowledge. Transactions on Graph Data and Knowledge (TGDK), Volume 1, Issue 1, pp. 10:1-10:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@Article{kaffee_et_al:TGDK.1.1.10,
  author =	{Kaffee, Lucie-Aim\'{e}e and Biswas, Russa and Keet, C. Maria and Vakaj, Edlira Kalemi and de Melo, Gerard},
  title =	{{Multilingual Knowledge Graphs and Low-Resource Languages: A Review}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{10:1--10:19},
  ISSN =	{2942-7517},
  year =	{2023},
  volume =	{1},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.1.1.10},
  URN =		{urn:nbn:de:0030-drops-194845},
  doi =		{10.4230/TGDK.1.1.10},
  annote =	{Keywords: knowledge graphs, multilingual, low-resource languages, review}
}
Document
Survey
How Does Knowledge Evolve in Open Knowledge Graphs?

Authors: Axel Polleres, Romana Pernisch, Angela Bonifati, Daniele Dell'Aglio, Daniil Dobriy, Stefania Dumbrava, Lorena Etcheverry, Nicolas Ferranti, Katja Hose, Ernesto Jiménez-Ruiz, Matteo Lissandrini, Ansgar Scherp, Riccardo Tommasini, and Johannes Wachs

Published in: TGDK, Volume 1, Issue 1 (2023): Special Issue on Trends in Graph Data and Knowledge. Transactions on Graph Data and Knowledge, Volume 1, Issue 1


Abstract
Openly available, collaboratively edited Knowledge Graphs (KGs) are key platforms for the collective management of evolving knowledge. The present work aims to provide an analysis of the obstacles related to investigating and processing specifically this central aspect of evolution in KGs. To this end, we discuss (i) the dimensions of evolution in KGs, (ii) the observability of evolution in existing, open, collaboratively constructed Knowledge Graphs over time, and (iii) possible metrics to analyse this evolution. We provide an overview of relevant state-of-the-art research, ranging from metrics developed for Knowledge Graphs specifically to potential methods from related fields such as network science. Additionally, we discuss technical approaches, and their current limitations, related to storing, analysing and processing large and evolving KGs in terms of handling typical KG downstream tasks.
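As a minimal example of the kind of evolution metric the survey considers, one can compare two KG snapshots as sets of triples. The metric choice below (Jaccard distance) is our own illustrative pick, not a metric the survey endorses.

```python
# Two hypothetical KG snapshots as sets of (subject, predicate, object).
v1 = {("a", "knows", "b"), ("b", "knows", "c")}
v2 = {("a", "knows", "b"), ("b", "knows", "d")}

def jaccard_distance(g1, g2):
    """Change between snapshots: 1 - |g1 ∩ g2| / |g1 ∪ g2|."""
    return 1 - len(g1 & g2) / len(g1 | g2)

print(jaccard_distance(v1, v2))  # ≈ 0.667
```

Set-based metrics like this ignore which triples changed and when; capturing those dimensions is precisely where the observability and tooling obstacles discussed above arise.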

Cite as

Axel Polleres, Romana Pernisch, Angela Bonifati, Daniele Dell'Aglio, Daniil Dobriy, Stefania Dumbrava, Lorena Etcheverry, Nicolas Ferranti, Katja Hose, Ernesto Jiménez-Ruiz, Matteo Lissandrini, Ansgar Scherp, Riccardo Tommasini, and Johannes Wachs. How Does Knowledge Evolve in Open Knowledge Graphs?. In Special Issue on Trends in Graph Data and Knowledge. Transactions on Graph Data and Knowledge (TGDK), Volume 1, Issue 1, pp. 11:1-11:59, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@Article{polleres_et_al:TGDK.1.1.11,
  author =	{Polleres, Axel and Pernisch, Romana and Bonifati, Angela and Dell'Aglio, Daniele and Dobriy, Daniil and Dumbrava, Stefania and Etcheverry, Lorena and Ferranti, Nicolas and Hose, Katja and Jim\'{e}nez-Ruiz, Ernesto and Lissandrini, Matteo and Scherp, Ansgar and Tommasini, Riccardo and Wachs, Johannes},
  title =	{{How Does Knowledge Evolve in Open Knowledge Graphs?}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{11:1--11:59},
  year =	{2023},
  volume =	{1},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.1.1.11},
  URN =		{urn:nbn:de:0030-drops-194855},
  doi =		{10.4230/TGDK.1.1.11},
  annote =	{Keywords: KG evolution, temporal KG, versioned KG, dynamic KG}
}
Document
Brief Announcement
Brief Announcement: Relations Between Space-Bounded and Adaptive Massively Parallel Computations

Authors: Michael Chen, A. Pavan, and N. V. Vinodchandran

Published in: LIPIcs, Volume 281, 37th International Symposium on Distributed Computing (DISC 2023)


Abstract
In this work, we study the class of problems solvable by (deterministic) Adaptive Massively Parallel Computations in constant rounds from a computational complexity theory perspective. A language L is in the class AMPC⁰ if, for every ε > 0, there is a deterministic AMPC algorithm running in constant rounds with a polynomial number of processors, where the local memory of each machine is s = O(N^ε). We prove that the space-bounded complexity class ReachUL is a proper subclass of AMPC⁰. The complexity class ReachUL lies between the well-known space-bounded complexity classes Deterministic Logspace (DLOG) and Nondeterministic Logspace (NLOG). In contrast, we establish that it is unlikely that PSPACE admits AMPC algorithms, even with polynomially many rounds. We also establish that showing PSPACE is a subclass of nonuniform-AMPC with polynomially many rounds would lead to a significant separation result in complexity theory, namely that PSPACE is a proper subclass of EXP^{Σ₂^{𝖯}}.

Cite as

Michael Chen, A. Pavan, and N. V. Vinodchandran. Brief Announcement: Relations Between Space-Bounded and Adaptive Massively Parallel Computations. In 37th International Symposium on Distributed Computing (DISC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 281, pp. 37:1-37:7, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{chen_et_al:LIPIcs.DISC.2023.37,
  author =	{Chen, Michael and Pavan, A. and Vinodchandran, N. V.},
  title =	{{Brief Announcement: Relations Between Space-Bounded and Adaptive Massively Parallel Computations}},
  booktitle =	{37th International Symposium on Distributed Computing (DISC 2023)},
  pages =	{37:1--37:7},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-301-0},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{281},
  editor =	{Oshman, Rotem},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2023.37},
  URN =		{urn:nbn:de:0030-drops-191634},
  doi =		{10.4230/LIPIcs.DISC.2023.37},
  annote =	{Keywords: Massively Parallel Computation, AMPC, Complexity Classes, LogSpace, NL, PSPACE}
}