28 Search Results for "Pavan, A."


Document
Survey
Temporal Modelling in Cultural Heritage Knowledge Graphs: Use Cases, Requirements, Evaluation, and Decision Support

Authors: Oleksandra Bruns, Jörg Waitelonis, Jeff Z. Pan, and Harald Sack

Published in: Transactions on Graph Data and Knowledge (TGDK), Volume 4, Issue 1 (2026)


Abstract
Our culture, history and world are in constant motion, continuously shaped by the flow of time, evolving narratives, and shifting relationships. Capturing this temporal complexity within cultural heritage (CH) knowledge graphs is essential for preserving the dynamic nature of human heritage. However, standard RDF predicates fail to effectively model the temporal aspects of cultural data, such as changing facts, evolving relationships, and temporal concepts. Over the past two decades, a variety of RDF-based approaches have been proposed to address this limitation, yet guidance is missing on which method best suits specific CH contexts. This paper presents a systematic evaluation of temporal RDF modelling approaches from a CH perspective. Based on an analysis of real-world CH use cases, core temporal requirements are identified that reflect both modelling expressivity and practical concerns. Six prominent approaches - RDF*, tRDF, Named Graphs, Singleton Property, N-ary Relations, and 4D Fluents - are assessed across these requirements. Our findings reveal that no single solution fits all scenarios, but suitable approaches can be selected based on project-specific priorities. To support practitioners, a decision-support tool is introduced to guide them in selecting the most suitable extension for their specific needs. This work provides practical guidance for CH modelling and contributes to the broader development of temporally aware Linked Data.

Cite as

Oleksandra Bruns, Jörg Waitelonis, Jeff Z. Pan, and Harald Sack. Temporal Modelling in Cultural Heritage Knowledge Graphs: Use Cases, Requirements, Evaluation, and Decision Support. In Transactions on Graph Data and Knowledge (TGDK), Volume 4, Issue 1, pp. 2:1-2:46, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@Article{bruns_et_al:TGDK.4.1.2,
  author =	{Bruns, Oleksandra and Waitelonis, J\"{o}rg and Pan, Jeff Z. and Sack, Harald},
  title =	{{Temporal Modelling in Cultural Heritage Knowledge Graphs: Use Cases, Requirements, Evaluation, and Decision Support}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{2:1--2:46},
  ISSN =	{2942-7517},
  year =	{2026},
  volume =	{4},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.4.1.2},
  URN =		{urn:nbn:de:0030-drops-256871},
  doi =		{10.4230/TGDK.4.1.2},
  annote =	{Keywords: Temporal Data Representation, RDF Extensions, Cultural Heritage, Knowledge Graphs}
}
Document
On Approximating the f-Divergence Between Two Ising Models

Authors: Weiming Feng and Yucheng Fu

Published in: LIPIcs, Volume 362, 17th Innovations in Theoretical Computer Science Conference (ITCS 2026)


Abstract
The f-divergence is a fundamental notion that measures the difference between two distributions. In this paper, we study the problem of approximating the f-divergence between two Ising models, which is a generalization of recent work on approximating the TV-distance. Given two Ising models ν and μ, which are specified by their interaction matrices and external fields, the problem is to approximate the f-divergence D_f (ν ‖ μ) within an arbitrary relative error e^{±ε}. For χ^α-divergence with a constant integer α, we establish both algorithmic and hardness results. The algorithm works in a parameter regime that matches the hardness result. Our algorithm can be extended to other f-divergences such as α-divergence, Kullback-Leibler divergence, Rényi divergence, Jensen-Shannon divergence, and squared Hellinger distance.

Cite as

Weiming Feng and Yucheng Fu. On Approximating the f-Divergence Between Two Ising Models. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 59:1-59:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{feng_et_al:LIPIcs.ITCS.2026.59,
  author =	{Feng, Weiming and Fu, Yucheng},
  title =	{{On Approximating the f-Divergence Between Two Ising Models}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{59:1--59:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.59},
  URN =		{urn:nbn:de:0030-drops-253469},
  doi =		{10.4230/LIPIcs.ITCS.2026.59},
  annote =	{Keywords: Ising model, f-divergence, approximation algorithms, randomized algorithms}
}
Document
Simplicial Covering Dimension of Extremal Concept Classes

Authors: Ari Blondal, Hamed Hatami, Pooya Hatami, Chavdar Lalov, and Sivan Tretiak

Published in: LIPIcs, Volume 362, 17th Innovations in Theoretical Computer Science Conference (ITCS 2026)


Abstract
Dimension theory is a branch of topology concerned with defining and analyzing dimensions of geometric and topological spaces in purely topological terms. In this work, we adapt the classical notion of topological dimension (Lebesgue covering) to binary concept classes. The topological space naturally associated with a concept class is its space of realizable distributions. The loss function and the class itself induce a simplicial structure on this space, with respect to which we define a simplicial covering dimension. We prove that for finite concept classes, this simplicial covering dimension exactly characterizes the list replicability number (equivalently, global stability) in PAC learning. This connection allows us to apply tools from classical dimension theory to compute the exact list replicability number of the broad family of extremal concept classes.

Cite as

Ari Blondal, Hamed Hatami, Pooya Hatami, Chavdar Lalov, and Sivan Tretiak. Simplicial Covering Dimension of Extremal Concept Classes. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 22:1-22:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{blondal_et_al:LIPIcs.ITCS.2026.22,
  author =	{Blondal, Ari and Hatami, Hamed and Hatami, Pooya and Lalov, Chavdar and Tretiak, Sivan},
  title =	{{Simplicial Covering Dimension of Extremal Concept Classes}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{22:1--22:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.22},
  URN =		{urn:nbn:de:0030-drops-253094},
  doi =		{10.4230/LIPIcs.ITCS.2026.22},
  annote =	{Keywords: PAC Learning, Extremal Concept Classes, Replicability, List Replicability, Topology, Geometry}
}
Document
Identity Check Problem for Shallow Quantum Circuits

Authors: Sergey Bravyi, Natalie Parham, and Minh Tran

Published in: LIPIcs, Volume 362, 17th Innovations in Theoretical Computer Science Conference (ITCS 2026)


Abstract
Verifying that a quantum circuit correctly implements a desired transformation is essential for validating quantum algorithms. We consider the closely related identity check problem: given a quantum circuit U, estimate the diamond-norm distance between U and the identity channel. Ji and Wu showed that estimating this distance to within an additive 1/poly error is QMA-hard, even when U is constant-depth and 1D local - ruling out efficient algorithms in this regime. We show that this hardness barrier disappears if one instead seeks a constant-factor multiplicative approximation. We present a classical algorithm that, for shallow geometrically local D-dimensional circuits, approximates the distance to the identity within a factor α = D+1, provided that the circuit is sufficiently close to the identity. The runtime of the algorithm scales linearly with the number of qubits for any constant circuit depth and spatial dimension. We also show that the operator-norm distance to the identity ‖U-I‖ can be efficiently approximated within a factor α = 5 for shallow 1D circuits and, under a certain technical condition, within a factor α = 2D+3 for shallow D-dimensional circuits. A numerical implementation of the identity check algorithm is reported for 1D Trotter circuits with up to 100 qubits.

Cite as

Sergey Bravyi, Natalie Parham, and Minh Tran. Identity Check Problem for Shallow Quantum Circuits. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 27:1-27:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{bravyi_et_al:LIPIcs.ITCS.2026.27,
  author =	{Bravyi, Sergey and Parham, Natalie and Tran, Minh},
  title =	{{Identity Check Problem for Shallow Quantum Circuits}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{27:1--27:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.27},
  URN =		{urn:nbn:de:0030-drops-253147},
  doi =		{10.4230/LIPIcs.ITCS.2026.27},
  annote =	{Keywords: Quantum computing, Identity check problem, quantum circuits, classical simulation of quantum computation, shallow circuits}
}
Document
Hardness of Range Avoidance and Proof Complexity Generators from Demi-Bits

Authors: Hanlin Ren, Yichuan Wang, and Yan Zhong

Published in: LIPIcs, Volume 362, 17th Innovations in Theoretical Computer Science Conference (ITCS 2026)


Abstract
Given a circuit G: {0, 1}ⁿ → {0, 1}^m with m > n, the range avoidance problem (Avoid) asks to output a string y ∈ {0, 1}^m that is not in the range of G. Besides its profound connection to circuit complexity and explicit construction problems, this problem is also related to the existence of proof complexity generators - circuits G: {0, 1}ⁿ → {0, 1}^m where m > n but for every y ∈ {0, 1}^m, it is infeasible to prove the statement "y ∉ Range(G)" in a given propositional proof system. This paper connects these two problems with the existence of demi-bits generators, a fundamental cryptographic primitive against nondeterministic adversaries introduced by Rudich (RANDOM '97).
- We show that the existence of demi-bits generators implies that Avoid is hard for nondeterministic algorithms. This resolves an open problem raised by Chen and Li (STOC '24). Furthermore, assuming the demi-hardness of certain LPN-style generators or Goldreich’s PRG, we prove the hardness of Avoid even when the instances are constant-degree polynomials over 𝔽₂.
- We show that the dual weak pigeonhole principle is unprovable in Cook’s theory PV₁ under the existence of demi-bits generators secure against AM/_{O(1)}, thereby separating Jeřábek’s theory APC₁ from PV₁. Previously, Ilango, Li, and Williams (STOC '23) obtained the same separation under different (and arguably stronger) cryptographic assumptions.
- We transform demi-bits generators into proof complexity generators that are pseudo-surjective in a certain parameter regime. Pseudo-surjectivity is the strongest form of hardness considered in the literature for proof complexity generators.
Our constructions are inspired by the recent breakthroughs on the hardness of Avoid by Ilango, Li, and Williams (STOC '23) and Chen and Li (STOC '24). We use randomness extractors to significantly simplify the construction and the proof.

Cite as

Hanlin Ren, Yichuan Wang, and Yan Zhong. Hardness of Range Avoidance and Proof Complexity Generators from Demi-Bits. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 111:1-111:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{ren_et_al:LIPIcs.ITCS.2026.111,
  author =	{Ren, Hanlin and Wang, Yichuan and Zhong, Yan},
  title =	{{Hardness of Range Avoidance and Proof Complexity Generators from Demi-Bits}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{111:1--111:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.111},
  URN =		{urn:nbn:de:0030-drops-253982},
  doi =		{10.4230/LIPIcs.ITCS.2026.111},
  annote =	{Keywords: Range Avoidance, Proof Complexity Generators}
}
Document
Research
Mining Inter-Document Argument Structures in Scientific Papers for an Argument Web

Authors: Florian Ruosch, Cristina Sarasua, and Abraham Bernstein

Published in: Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 3 (2025)


Abstract
In Argument Mining, predicting argumentative relations between texts (or spans) remains one of the most challenging aspects, even more so in the cross-document setting. This paper makes three key contributions to advance research in this domain. We first extend an existing dataset, the Sci-Arg corpus, by annotating it with explicit inter-document argumentative relations, thereby allowing arguments to be distributed over several documents forming an Argument Web; these new annotations are published using Semantic Web technologies (RDF, OWL). Second, we explore and evaluate three automated approaches for predicting these inter-document argumentative relations, establishing critical baselines on the new dataset. We find that a simple classifier based on discourse indicators with access to context outperforms neural methods. Third, we conduct a comparative analysis of these approaches for both intra- and inter-document settings, identifying statistically significant differences in results that indicate the necessity of distinguishing between these two scenarios. Our findings highlight significant challenges in this complex domain and open crucial avenues for future research on the Argument Web of Science, particularly for those interested in leveraging Semantic Web technologies and knowledge graphs to understand scholarly discourse. With this, we provide the first stepping stones in the form of a benchmark dataset, three baseline methods, and an initial analysis for a systematic exploration of this field relevant to the Web of Data and Science.

Cite as

Florian Ruosch, Cristina Sarasua, and Abraham Bernstein. Mining Inter-Document Argument Structures in Scientific Papers for an Argument Web. In Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 3, pp. 4:1-4:33, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@Article{ruosch_et_al:TGDK.3.3.4,
  author =	{Ruosch, Florian and Sarasua, Cristina and Bernstein, Abraham},
  title =	{{Mining Inter-Document Argument Structures in Scientific Papers for an Argument Web}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{4:1--4:33},
  ISSN =	{2942-7517},
  year =	{2025},
  volume =	{3},
  number =	{3},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.3.3.4},
  URN =		{urn:nbn:de:0030-drops-252159},
  doi =		{10.4230/TGDK.3.3.4},
  annote =	{Keywords: Argument Mining, Large Language Models, Knowledge Graphs, Link Prediction}
}
Document
New Algorithmic Directions in Optimal Transport and Applications for Product Spaces

Authors: Salman Beigi, Omid Etesami, Mohammad Mahmoody, and Amir Najafi

Published in: LIPIcs, Volume 359, 36th International Symposium on Algorithms and Computation (ISAAC 2025)


Abstract
We consider the problem of optimal transport between two high-dimensional distributions μ,ν in ℝⁿ from a new algorithmic perspective, in which we are given a sample x ∼ μ and we have to find a close y ∼ ν while running in poly(n) time, where n is the size/dimension of x,y. In other words, we are interested in making the running time bounded in the dimension of the spaces rather than bounded in the total size of the representations of the two distributions. Our main result is a general algorithmic transport result between any product distribution μ and an arbitrary distribution ν of total cost Δ + δ under the 𝓁_p^p cost; here Δ is the cost of the so-called Knothe–Rosenblatt transport from μ to ν, while δ is a computational error that goes to zero for larger running time in the transport algorithm. For this result, we need ν to be "sequentially samplable" with a "bounded average sampling cost", which is a novel but natural notion of independent interest. In addition, we prove the following.
- We prove an algorithmic version of the celebrated Talagrand’s inequality for transporting the standard Gaussian distribution Φⁿ to an arbitrary ν under the Euclidean-squared cost. When ν is Φⁿ conditioned on a set S of measure ε, we show how to implement the needed sequential sampler for ν in expected time poly(n/ε), using membership oracle access to S. Hence, we obtain an algorithmic transport that maps Φⁿ to Φⁿ|S in time poly(n/ε) with expected Euclidean-squared distance O(log 1/ε), which is optimal for a general set S of measure ε.
- As a corollary, we obtain the first computational concentration result (Etesami et al., SODA 2020) for the Gaussian measure under the Euclidean distance with a dimension-independent transportation cost, resolving a question of Etesami et al. More precisely, for any set S of Gaussian measure ε, we map most Φⁿ samples to S with Euclidean distance O(√{log 1/ε}) in time poly(n/ε).

Cite as

Salman Beigi, Omid Etesami, Mohammad Mahmoody, and Amir Najafi. New Algorithmic Directions in Optimal Transport and Applications for Product Spaces. In 36th International Symposium on Algorithms and Computation (ISAAC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 359, pp. 10:1-10:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{beigi_et_al:LIPIcs.ISAAC.2025.10,
  author =	{Beigi, Salman and Etesami, Omid and Mahmoody, Mohammad and Najafi, Amir},
  title =	{{New Algorithmic Directions in Optimal Transport and Applications for Product Spaces}},
  booktitle =	{36th International Symposium on Algorithms and Computation (ISAAC 2025)},
  pages =	{10:1--10:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-408-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{359},
  editor =	{Chen, Ho-Lin and Hon, Wing-Kai and Tsai, Meng-Tsung},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2025.10},
  URN =		{urn:nbn:de:0030-drops-249187},
  doi =		{10.4230/LIPIcs.ISAAC.2025.10},
  annote =	{Keywords: Optimal transport, Randomized algorithms, Concentration bounds}
}
Document
Short Paper
Temporal Considerations in DJ Mix Information Retrieval and Generation (Short Paper)

Authors: Alexander Williams, Gregor Meehan, Stefan Lattner, Johan Pauwels, and Mathieu Barthet

Published in: LIPIcs, Volume 355, 32nd International Symposium on Temporal Representation and Reasoning (TIME 2025)


Abstract
Music is the art of arranging sounds in time so as to produce a continuous, unified, and evocative composition. Electronic dance music (EDM) is a collection of musical sub-genres produced using computers and electronic instruments and often presented through the medium of DJing, where tracks are curated and mixed sequentially into a continuous stream of music to offer unique listening and dancing experiences over time periods ranging from several minutes to several hours. A DJ’s actions and decisions occur at several levels of temporal granularity, from real-time audio manipulation (e.g. of tempo) for smooth inter-track transitions to long-term planning of track selection and sequencing for mix content and flow. While human DJs can instinctively operate across these different temporal resolutions, replicating this capability in an end-to-end automated DJing system presents significant challenges. In this paper, we analyse existing works in DJ mix information retrieval and generation from this temporal perspective. We first explain the close link between DJing and the temporal notion of musical rhythm, then describe a framework for categorising DJing actions by temporal granularity. Using this framework, we summarise and contrast potential approaches for automating and augmenting sequential DJ decision making, and discuss the unique characteristics of DJ mix track selection as a sequential recommendation task. In doing so, we hope to facilitate the implementation of more robust and complete automated DJing systems in future research.

Cite as

Alexander Williams, Gregor Meehan, Stefan Lattner, Johan Pauwels, and Mathieu Barthet. Temporal Considerations in DJ Mix Information Retrieval and Generation (Short Paper). In 32nd International Symposium on Temporal Representation and Reasoning (TIME 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 355, pp. 20:1-20:8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{williams_et_al:LIPIcs.TIME.2025.20,
  author =	{Williams, Alexander and Meehan, Gregor and Lattner, Stefan and Pauwels, Johan and Barthet, Mathieu},
  title =	{{Temporal Considerations in DJ Mix Information Retrieval and Generation}},
  booktitle =	{32nd International Symposium on Temporal Representation and Reasoning (TIME 2025)},
  pages =	{20:1--20:8},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-401-7},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{355},
  editor =	{Vidal, Thierry and Wa{\l}\k{e}ga, Przemys{\l}aw Andrzej},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.TIME.2025.20},
  URN =		{urn:nbn:de:0030-drops-244662},
  doi =		{10.4230/LIPIcs.TIME.2025.20},
  annote =	{Keywords: Music Information Retrieval, Computational Creativity, Recommender Systems, Electronic Dance Music, DJ}
}
Document
Hardness Amplification for Real-Valued Functions

Authors: Yunqi Li and Prashant Nalini Vasudevan

Published in: LIPIcs, Volume 339, 40th Computational Complexity Conference (CCC 2025)


Abstract
Given an integer-valued function f:{0,1}ⁿ → {0,1,… , m-1} that is mildly hard to compute on instances drawn from some distribution D over {0,1}ⁿ, we show that the function g(x_1, … , x_t) = f(x_1) + ⋯ + f(x_t) is strongly hard to compute on instances (x_1,… ,x_t) drawn from the product distribution D^t. We also show the same for the task of approximately computing real-valued functions f:{0,1}ⁿ → [0,m). Our theorems immediately imply hardness self-amplification for several natural problems, including Max-Clique and Max-SAT, Approximate #SAT, and Entropy Estimation.

Cite as

Yunqi Li and Prashant Nalini Vasudevan. Hardness Amplification for Real-Valued Functions. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 2:1-2:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{li_et_al:LIPIcs.CCC.2025.2,
  author =	{Li, Yunqi and Vasudevan, Prashant Nalini},
  title =	{{Hardness Amplification for Real-Valued Functions}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{2:1--2:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.2},
  URN =		{urn:nbn:de:0030-drops-236967},
  doi =		{10.4230/LIPIcs.CCC.2025.2},
  annote =	{Keywords: Average-case complexity, hardness amplification}
}
Document
Counting Martingales for Measure and Dimension in Complexity Classes

Authors: John M. Hitchcock, Adewale Sekoni, and Hadi Shafei

Published in: LIPIcs, Volume 339, 40th Computational Complexity Conference (CCC 2025)


Abstract
This paper makes two primary contributions. First, we introduce the concept of counting martingales and use it to define counting measures and counting dimensions. Second, we apply these new tools to strengthen previous circuit lower bounds. Resource-bounded measure and dimension have traditionally focused on deterministic time and space bounds. We use counting complexity classes to develop resource-bounded counting measures and dimensions. Counting martingales are constructed using functions from the #𝖯, SpanP, and GapP complexity classes. We show that counting martingales capture many martingale constructions in complexity theory. The resulting counting measures and dimensions are intermediate in power between the standard time-bounded and space-bounded notions, enabling finer-grained analysis where space-bounded measures are known, but time-bounded measures remain open. For example, we show that BPP has #𝖯-dimension 0 and BQP has GapP-dimension 0, whereas the 𝖯-dimensions of these classes remain open. As our main application, we improve circuit-size lower bounds. Lutz (1992) strengthened Shannon’s classic (1-ε) 2ⁿ/n lower bound (1949) to PSPACE-measure, showing that almost all problems require circuits of size (2ⁿ/n)(1+(α log n)/n), for any α < 1. We extend this result to SpanP-measure, with a proof that uses a connection through the Minimum Circuit Size Problem (MCSP) to construct a counting martingale. Our results imply that the stronger lower bound holds within the third level of the exponential-time hierarchy, whereas previously, it was only known in ESPACE. Under a derandomization hypothesis, this lower bound holds within the second level of the exponential-time hierarchy, specifically in the class 𝖤^NP. We also study the #𝖯-dimension of classical circuit complexity classes and the GapP-dimension of quantum circuit complexity classes.

Cite as

John M. Hitchcock, Adewale Sekoni, and Hadi Shafei. Counting Martingales for Measure and Dimension in Complexity Classes. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 20:1-20:35, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{hitchcock_et_al:LIPIcs.CCC.2025.20,
  author =	{Hitchcock, John M. and Sekoni, Adewale and Shafei, Hadi},
  title =	{{Counting Martingales for Measure and Dimension in Complexity Classes}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{20:1--20:35},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.20},
  URN =		{urn:nbn:de:0030-drops-237145},
  doi =		{10.4230/LIPIcs.CCC.2025.20},
  annote =	{Keywords: resource-bounded measure, resource-bounded dimension, counting martingales, counting complexity, circuit complexity, Kolmogorov complexity, quantum complexity, Minimum Circuit Size Problem}
}
Document
Approximating Klee’s Measure Problem and a Lower Bound for Union Volume Estimation

Authors: Karl Bringmann, Kasper Green Larsen, André Nusser, Eva Rotenberg, and Yanheng Wang

Published in: LIPIcs, Volume 332, 41st International Symposium on Computational Geometry (SoCG 2025)


Abstract
Union volume estimation is a classical algorithmic problem. Given a family of objects O₁,…,O_n ⊂ ℝ^d, we want to approximate the volume of their union. In the special case where all objects are boxes (also called hyperrectangles) this is known as Klee’s measure problem. The state-of-the-art (1+ε)-approximation algorithm [Karp, Luby, Madras '89] for union volume estimation as well as Klee’s measure problem in constant dimension d uses a total of O(n/ε²) queries of three types: (i) determine the volume of O_i; (ii) sample a point uniformly at random from O_i; and (iii) ask whether a given point is contained in O_i. First, we show that if an algorithm learns about the objects only through these types of queries, then Ω(n/ε²) queries are necessary. In this sense, the complexity of [Karp, Luby, Madras '89] is optimal. Our lower bound holds even if the objects are equiponderous axis-aligned polygons in ℝ², if the containment query allows arbitrary (not necessarily sampled) points, and if the algorithm can spend arbitrary time and space examining the query responses. Second, we provide a more efficient approximation algorithm for Klee’s measure problem, which improves the running time from O(n/ε²) to O((n+1/ε²) ⋅ log^{O(d)} (n)). We circumvent our lower bound by exploiting the geometry of boxes in various ways: (1) We sort the boxes into classes of similar shapes after inspecting their corner coordinates. (2) With orthogonal range searching, we show how to sample points from the union of boxes in each class, and how to merge samples from different classes. (3) We bound the amount of wasted work by arguing that most pairs of classes have a small intersection.

Cite as

Karl Bringmann, Kasper Green Larsen, André Nusser, Eva Rotenberg, and Yanheng Wang. Approximating Klee’s Measure Problem and a Lower Bound for Union Volume Estimation. In 41st International Symposium on Computational Geometry (SoCG 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 332, pp. 25:1-25:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{bringmann_et_al:LIPIcs.SoCG.2025.25,
  author =	{Bringmann, Karl and Larsen, Kasper Green and Nusser, Andr\'{e} and Rotenberg, Eva and Wang, Yanheng},
  title =	{{Approximating Klee’s Measure Problem and a Lower Bound for Union Volume Estimation}},
  booktitle =	{41st International Symposium on Computational Geometry (SoCG 2025)},
  pages =	{25:1--25:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-370-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{332},
  editor =	{Aichholzer, Oswin and Wang, Haitao},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2025.25},
  URN =		{urn:nbn:de:0030-drops-231778},
  doi =		{10.4230/LIPIcs.SoCG.2025.25},
  annote =	{Keywords: approximation, volume of union, union of objects, query complexity}
}
Extended Abstract
Cost over Content: Information Choice in Trade (Extended Abstract)

Authors: Kristóf Madarász and Marek Pycia

Published in: LIPIcs, Volume 329, 6th Symposium on Foundations of Responsible Computing (FORC 2025)


Abstract
How much would buyers pay to have some control over what a seller knows about them? When deciding what information to provide to her counterpart, a privately informed trader chooses between options that may differ both in their contents and in their costs. For a large class of static and dynamic trading environments where buyers choose from arbitrary sets of signal processes that reveal or obfuscate information to a seller, we establish a "cost-over-content" theorem. In equilibrium, buyers choose only the cheapest processes, regardless of the information content they provide. Pooling on any cheapest process is an equilibrium. Our paper uncovers a general source of market failure linked to the direct cost of information choice, with consequences for the role of information defaults. We explore applications to bargaining, signaling, disclosure, consumer privacy, and data trade. At the FORC presentation we focus on the privacy application. Our cost-over-content results explain why consumers may choose little protection of their private information even when protecting it would improve their welfare; we show that this so-called privacy paradox is an equilibrium phenomenon driven by market power. Our results also lead to predictions on how an online-shopping platform provider designs the information flows between sellers and buyers. While the platform can charge both the buyer and the seller for its offering of information structures, following the logic of our cost-over-content theorem, we show that the platform neither charges nor compensates the buyer for choosing an information structure. At the same time, the platform charges the seller and maximizes the value of the information passed to the seller, as measured by the seller’s direct profit from interacting with the buyer.
This asymmetry is relevant for the impact of the widespread ability to track consumers on economic outcomes and the distribution of gains from trade, and it is relevant for the effectiveness of various consumer-protection policies.

Cite as

Kristóf Madarász and Marek Pycia. Cost over Content: Information Choice in Trade (Extended Abstract). In 6th Symposium on Foundations of Responsible Computing (FORC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 329, p. 13:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{madarasz_et_al:LIPIcs.FORC.2025.13,
  author =	{Madar\'{a}sz, Krist\'{o}f and Pycia, Marek},
  title =	{{Cost over Content: Information Choice in Trade}},
  booktitle =	{6th Symposium on Foundations of Responsible Computing (FORC 2025)},
  pages =	{13:1--13:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-367-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{329},
  editor =	{Bun, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2025.13},
  URN =		{urn:nbn:de:0030-drops-231409},
  doi =		{10.4230/LIPIcs.FORC.2025.13},
  annote =	{Keywords: Communication Costs, Default-based Regulation, Data Trade, Dynamic Pricing, Privacy Paradox, Signaling, Disclosure, Efficiency, Online Platforms}
}
Fault Detection and Identification by Autonomous Mobile Robots

Authors: Stefano Clemente and Caterina Feletti

Published in: LIPIcs, Volume 330, 4th Symposium on Algorithmic Foundations of Dynamic Networks (SAND 2025)


Abstract
The Look-Compute-Move model (LCM) is adopted to study swarms of mobile robots that have to solve a given problem. Robots are generally assumed to be autonomous, indistinguishable, anonymous, homogeneous, and to move on the Euclidean plane. Different LCM sub-models have been theorized to study different settings and their computational power. Notably, the literature has focused on four base models (i.e., OBLOT, FSTA, FCOM, LUMI) that differ in memory and communication capabilities, and on different synchronization modes (e.g., fully synchronous FSYNCH, semi-synchronous SSYNCH). In this paper, we consider fault-prone models where robots can suffer from crash faults: each robot may irremediably stop working after an unpredictable time. We study the general Fault Detection (FD) problem, which is solved by a swarm if it correctly detects whether a faulty robot exists in the swarm. The Fault Identification (FI) problem additionally requires identifying which robots are faulty. We consider 12 LCM sub-models (OBLOT, FSTA, FCOM, LUMI, combined with FSYNCH, SSYNCH, and the round-robin RROBIN) and we study the (im)possibility of designing reliable procedures to solve FD or FI. In particular, we propose three distributed algorithms so that a swarm can collectively solve FD under the models LUMI^FSYNCH, FCOM^FSYNCH, and LUMI^RROBIN.

Cite as

Stefano Clemente and Caterina Feletti. Fault Detection and Identification by Autonomous Mobile Robots. In 4th Symposium on Algorithmic Foundations of Dynamic Networks (SAND 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 330, pp. 10:1-10:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{clemente_et_al:LIPIcs.SAND.2025.10,
  author =	{Clemente, Stefano and Feletti, Caterina},
  title =	{{Fault Detection and Identification by Autonomous Mobile Robots}},
  booktitle =	{4th Symposium on Algorithmic Foundations of Dynamic Networks (SAND 2025)},
  pages =	{10:1--10:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-368-3},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{330},
  editor =	{Meeks, Kitty and Scheideler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SAND.2025.10},
  URN =		{urn:nbn:de:0030-drops-230639},
  doi =		{10.4230/LIPIcs.SAND.2025.10},
  annote =	{Keywords: Autonomous mobile robots, Faulty robots, Look-Compute-Move, Fault detection, Fault identification, Round-robin}
}
Quantum Data Sketches

Authors: Qin Zhang and Mohsen Heidari

Published in: LIPIcs, Volume 328, 28th International Conference on Database Theory (ICDT 2025)


Abstract
Recent advancements in quantum technologies, particularly in quantum sensing and simulation, have facilitated the generation and analysis of inherently quantum data. This progress underscores the necessity of developing efficient and scalable quantum data management strategies. This goal faces immense challenges due to the exponential dimensionality of quantum data and its unique quantum properties such as no-cloning and measurement stochasticity. Specifically, classical storage and manipulation of an arbitrary n-qubit quantum state requires exponential space and time. Hence, there is a critical need to revisit foundational data management concepts and algorithms for quantum data. In this paper, we propose succinct quantum data sketches to support basic database operations such as search and selection. We view our work as an initial step towards the development of a quantum data management model, opening up many possibilities for future research in this direction.

Cite as

Qin Zhang and Mohsen Heidari. Quantum Data Sketches. In 28th International Conference on Database Theory (ICDT 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 328, pp. 16:1-16:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{zhang_et_al:LIPIcs.ICDT.2025.16,
  author =	{Zhang, Qin and Heidari, Mohsen},
  title =	{{Quantum Data Sketches}},
  booktitle =	{28th International Conference on Database Theory (ICDT 2025)},
  pages =	{16:1--16:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-364-5},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{328},
  editor =	{Roy, Sudeepa and Kara, Ahmet},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICDT.2025.16},
  URN =		{urn:nbn:de:0030-drops-229570},
  doi =		{10.4230/LIPIcs.ICDT.2025.16},
  annote =	{Keywords: quantum data representation, data sketching, query execution}
}
The Randomness Complexity of Differential Privacy

Authors: Clément L. Canonne, Francis E. Su, and Salil P. Vadhan

Published in: LIPIcs, Volume 325, 16th Innovations in Theoretical Computer Science Conference (ITCS 2025)


Abstract
We initiate the study of the randomness complexity of differential privacy, i.e., how many random bits an algorithm needs in order to generate accurate differentially private releases. As a test case, we focus on the task of releasing the results of d counting queries, or equivalently all one-way marginals on a d-dimensional dataset with boolean attributes. While standard differentially private mechanisms for this task have randomness complexity that grows linearly with d, we show that, surprisingly, only log₂ d+O(1) random bits (in expectation) suffice to achieve an error that depends polynomially on d (and is independent of the size n of the dataset), and furthermore this is possible with pure, unbounded differential privacy and privacy-loss parameter ε = 1/poly(d). Conversely, we show that at least log₂ d-O(1) random bits are also necessary for nontrivial accuracy, even with approximate, bounded DP, provided the privacy-loss parameters satisfy ε,δ ≤ 1/poly(d). We obtain our results by establishing a close connection between the randomness complexity of differentially private mechanisms and the geometric notion of "deterministic rounding schemes" recently introduced and studied by Vander Woude et al. (2022, 2023).
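For contrast with the log₂ d + O(1) result, here is a minimal sketch of the standard baseline the abstract alludes to: releasing all d one-way marginals via the Laplace mechanism, which draws one independent noise value per coordinate and hence has randomness complexity linear in d. The function name and data layout are illustrative assumptions, not from the paper.

```python
import random

def laplace_one_way_marginals(dataset, eps=1.0, seed=0):
    """Standard eps-DP release of all d counting queries (one-way marginals)
    on a dataset of length-d boolean rows.  Changing one record changes each
    count by at most 1, so the L1 sensitivity of the vector of counts is d,
    and adding Lap(d/eps) noise to each coordinate yields eps-DP.  Note the
    mechanism consumes d independent noise draws: its randomness grows
    linearly with d, which is the baseline the paper improves on."""
    rng = random.Random(seed)
    d = len(dataset[0])
    counts = [sum(row[j] for row in dataset) for j in range(d)]
    scale = d / eps
    # A Laplace(scale) variate is the difference of two Exp(1/scale) variates.
    noise = [rng.expovariate(1 / scale) - rng.expovariate(1 / scale)
             for _ in range(d)]
    return [c + z for c, z in zip(counts, noise)]
```

Each call uses 2d exponential draws (so Θ(d) random bits at any fixed precision); the paper's surprise is that O(log d) bits in expectation already suffice for polynomial-in-d error.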

Cite as

Clément L. Canonne, Francis E. Su, and Salil P. Vadhan. The Randomness Complexity of Differential Privacy. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 27:1-27:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{canonne_et_al:LIPIcs.ITCS.2025.27,
  author =	{Canonne, Cl\'{e}ment L. and Su, Francis E. and Vadhan, Salil P.},
  title =	{{The Randomness Complexity of Differential Privacy}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{27:1--27:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.27},
  URN =		{urn:nbn:de:0030-drops-226556},
  doi =		{10.4230/LIPIcs.ITCS.2025.27},
  annote =	{Keywords: differential privacy, randomness, geometry}
}
