18 Search Results for "Cook, Joshua"


Document
PDQMA = DQMA = NEXP: QMA with Hidden Variables and Non-Collapsing Measurements

Authors: Scott Aaronson, Sabee Grewal, Vishnu Iyer, Simon C. Marshall, and Ronak Ramachandran

Published in: LIPIcs, Volume 360, 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)


Abstract
We define and study a variant of QMA (Quantum Merlin Arthur) in which Arthur can make multiple non-collapsing measurements to Merlin’s witness state, in addition to ordinary collapsing measurements. By analogy to the class PDQP defined by Aaronson, Bouland, Fitzsimons, and Lee (2014), we call this class PDQMA. Our main result is that PDQMA = NEXP; this result builds on the PCP theorem and complements the result of Aaronson (2018) that PDQP/qpoly = ALL. While the result has little to do with quantum mechanics, we also show a more "quantum" result: namely, that QMA with the ability to inspect the entire history of a hidden variable is equal to NEXP, under mild assumptions on the hidden-variable theory. We also observe that a quantum computer, augmented with quantum advice and the ability to inspect the history of a hidden variable, can solve any decision problem in polynomial time.

Cite as

Scott Aaronson, Sabee Grewal, Vishnu Iyer, Simon C. Marshall, and Ronak Ramachandran. PDQMA = DQMA = NEXP: QMA with Hidden Variables and Non-Collapsing Measurements. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 3:1-3:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{aaronson_et_al:LIPIcs.FSTTCS.2025.3,
  author =	{Aaronson, Scott and Grewal, Sabee and Iyer, Vishnu and Marshall, Simon C. and Ramachandran, Ronak},
  title =	{{PDQMA = DQMA = NEXP: QMA with Hidden Variables and Non-Collapsing Measurements}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{3:1--3:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.3},
  URN =		{urn:nbn:de:0030-drops-250828},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.3},
  annote =	{Keywords: quantum Merlin-Arthur, non-collapsing measurements, hidden-variable theories}
}
Document
Research
GraphRAG on Technical Documents - Impact of Knowledge Graph Schema

Authors: Henri Scaffidi, Melinda Hodkiewicz, Caitlin Woods, and Nicole Roocke

Published in: Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 2 (2025)


Abstract
Retrieval Augmented Generation (RAG) is seeing rapid adoption in industry to enable employees to query information captured in their organisation's proprietary data. In this work, we test the impact of domain-relevant knowledge graph schemas on the results of Microsoft’s GraphRAG pipeline. Our approach aims to address the poor quality of GraphRAG responses on technical reports rich in domain-specific terms. The use case involves technical reports about geology, chemistry and mineral processing published by the Minerals Research Institute of Western Australia (MRIWA). Four schemas are considered: a simple five-class schema developed by minerals-domain experts, an expanded minerals-domain schema, the Microsoft GraphRAG auto-generated schema, and a schema-less GraphRAG. These are compared to a conventional baseline RAG. Performance is evaluated using a scoring approach that accounts for the mix of correct, incorrect, additional, and missing content in RAG responses. The results show that the simple five-class minerals-domain schema extracts approximately 10% more entities from the MRIWA reports than the other schema options. Additionally, both the five-class and the expanded eight-class minerals-domain schemas produce the most factually correct answers and the fewest hallucinations. We attribute this to the minerals-specific schemas extracting more relevant, domain-specific information during the Indexing stage. As a result, the Query stage’s context window includes more high-value content, which contributes to the observed improvement in answer quality compared to the other pipelines. In contrast, pipelines with fewer domain-related entities in the KG retrieve less valuable information, leaving more room for irrelevant content in the context window. Baseline RAG responses were typically shorter, less complete, and contained more hallucinations than our GraphRAG pipelines. We provide a complete set of resources at https://github.com/nlp-tlp/GraphRAG-on-Minerals-Domain/tree/main. These resources include links to the MRIWA reports, a set of questions (from simple to challenging) along with domain-expert-curated answers, schemas, and evaluations of the pipelines.
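
For illustration only, the evaluation idea mentioned above (scoring a mix of correct, incorrect, additional, and missing content) can be sketched as a small function. This is a hedged sketch with placeholder weights and names (AnswerCounts, answer_score); the paper's actual scoring scheme lives in the linked repository and may differ.

from dataclasses import dataclass

@dataclass
class AnswerCounts:
    correct: int      # statements matching the domain-expert reference answer
    incorrect: int    # hallucinated or contradicted statements
    additional: int   # extra statements not required by the reference answer
    missing: int      # reference statements absent from the response

def answer_score(c: AnswerCounts) -> float:
    """Toy score rewarding correct content and penalising the rest (placeholder weights)."""
    total = c.correct + c.incorrect + c.additional + c.missing
    if total == 0:
        return 0.0
    return (c.correct - c.incorrect - 0.5 * (c.additional + c.missing)) / total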

Cite as

Henri Scaffidi, Melinda Hodkiewicz, Caitlin Woods, and Nicole Roocke. GraphRAG on Technical Documents - Impact of Knowledge Graph Schema. In Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 2, pp. 3:1-3:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@Article{scaffidi_et_al:TGDK.3.2.3,
  author =	{Scaffidi, Henri and Hodkiewicz, Melinda and Woods, Caitlin and Roocke, Nicole},
  title =	{{GraphRAG on Technical Documents - Impact of Knowledge Graph Schema}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{3:1--3:24},
  ISSN =	{2942-7517},
  year =	{2025},
  volume =	{3},
  number =	{2},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.3.2.3},
  URN =		{urn:nbn:de:0030-drops-248131},
  doi =		{10.4230/TGDK.3.2.3},
  annote =	{Keywords: RAG, minerals, local search, global search, entity extraction, competency questions}
}
Document
DiVerG: Scalable Distance Index for Validation of Paired-End Alignments in Sequence Graphs

Authors: Ali Ghaffaari, Alexander Schönhuth, and Tobias Marschall

Published in: LIPIcs, Volume 344, 25th International Conference on Algorithms for Bioinformatics (WABI 2025)


Abstract
Determining the distance between two loci within a genomic region is a recurrent operation in various tasks in computational genomics. A notable example of this task arises in paired-end read mapping as a form of validation of distances between multiple alignments. While straightforward for a single genome, graph-based reference structures render the operation considerably more involved. Given the sheer number of such queries in a typical read mapping experiment, an efficient algorithm for answering distance queries is crucial. In this paper, we introduce DiVerG, a compact data structure as well as a fast and scalable algorithm, for constructing distance indexes for general sequence graphs on multi-core CPU and many-core GPU architectures. DiVerG is based on PairG [Jain et al., 2019], but overcomes the limitations of PairG by exploiting the extensive potential for improvements in terms of scalability and space efficiency. As a consequence, DiVerG can process substantially larger datasets, such as whole human genomes, which are unmanageable by PairG. DiVerG offers faster index construction time and consistently faster query time, with gains proportional to the size of the underlying compact data structure. We demonstrate that our method performs favorably on multiple real datasets at various scales. DiVerG achieves superior performance over PairG; e.g., it yields a 2.5-4x speed-up in query time, a 44-340x smaller index size, and 3-50x faster construction time for the genome graph of the MHC region, a particularly variable region of the human genome. The implementation is available at: https://github.com/cartoonist/diverg
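
As background for the distance-index idea, the sketch below shows the naive boolean-semiring construction underlying PairG-style indexes: a sparse matrix recording, for every vertex pair, whether some path of length within [d_low, d_high] connects them. It is illustrative only, does not reflect DiVerG's compact data structure or GPU algorithm, and the function names are placeholders.

import numpy as np
from scipy.sparse import csr_matrix, identity

def build_distance_index(adj: csr_matrix, d_low: int, d_high: int) -> csr_matrix:
    """index[u, v] != 0 iff some path u -> v in the graph has length in [d_low, d_high]."""
    n = adj.shape[0]
    reach = identity(n, dtype=np.int64, format="csr")   # paths of length 0
    index = csr_matrix((n, n), dtype=np.int64)
    for d in range(1, d_high + 1):
        reach = reach @ adj          # extend all paths by one edge
        reach.data[:] = 1            # collapse path counts: boolean semiring
        if d >= d_low:
            index = index.maximum(reach)
    return index

def valid_pair(index: csr_matrix, u: int, v: int) -> bool:
    """Accept a paired-end alignment (u, v) iff its graph distance is admissible."""
    return bool(index[u, v] != 0)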

Cite as

Ali Ghaffaari, Alexander Schönhuth, and Tobias Marschall. DiVerG: Scalable Distance Index for Validation of Paired-End Alignments in Sequence Graphs. In 25th International Conference on Algorithms for Bioinformatics (WABI 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 344, pp. 10:1-10:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{ghaffaari_et_al:LIPIcs.WABI.2025.10,
  author =	{Ghaffaari, Ali and Sch\"{o}nhuth, Alexander and Marschall, Tobias},
  title =	{{DiVerG: Scalable Distance Index for Validation of Paired-End Alignments in Sequence Graphs}},
  booktitle =	{25th International Conference on Algorithms for Bioinformatics (WABI 2025)},
  pages =	{10:1--10:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-386-7},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{344},
  editor =	{Brejov\'{a}, Bro\v{n}a and Patro, Rob},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.WABI.2025.10},
  URN =		{urn:nbn:de:0030-drops-239369},
  doi =		{10.4230/LIPIcs.WABI.2025.10},
  annote =	{Keywords: Sequence graph, distance index, read mapping, sparse matrix}
}
Document
Better Extension Variables in DQBF via Independence

Authors: Leroy Chew and Tomáš Peitl

Published in: LIPIcs, Volume 341, 28th International Conference on Theory and Applications of Satisfiability Testing (SAT 2025)


Abstract
We show that extension variables in (D)QBF can be generalised by conditioning on universal assignments. The benefit of this is that the dependency sets of such conditioned extension variables can be made smaller to allow easier refutations. This simple modification instantly solves many challenges in p-simulating the QBF expansion rule, which cannot be p-simulated in proof systems that have strategy extraction [Leroy Chew and Judith Clymo, 2020]. Simulating expansion is even more crucial in DQBF, where other methods are incomplete. In this paper we provide an overview of the strength of this new independent extension rule. We find that a new version of Extended Frege called IndExtFrege + ∀red can p-simulate a multitude of difficult QBF and DQBF techniques, even techniques that are difficult to approach with eFrege + ∀red. We show five p-simulations: IndExtFrege + ∀red p-simulates QRAT, DQBF-IR-calc, IR(𝒟^rrs)-calc, Fork-Resolution and DQRAT, which together underpin most DQBF solving and preprocessing techniques. The p-simulations work despite these systems using complicated rules and our new extension rule being relatively simple. Moreover, unlike recent p-simulations by eFrege + ∀red we can simulate the proof rules line by line, which allows us to mix QBF rules more easily with other inference steps.

Cite as

Leroy Chew and Tomáš Peitl. Better Extension Variables in DQBF via Independence. In 28th International Conference on Theory and Applications of Satisfiability Testing (SAT 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 341, pp. 11:1-11:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{chew_et_al:LIPIcs.SAT.2025.11,
  author =	{Chew, Leroy and Peitl, Tom\'{a}\v{s}},
  title =	{{Better Extension Variables in DQBF via Independence}},
  booktitle =	{28th International Conference on Theory and Applications of Satisfiability Testing (SAT 2025)},
  pages =	{11:1--11:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-381-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{341},
  editor =	{Berg, Jeremias and Nordstr\"{o}m, Jakob},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SAT.2025.11},
  URN =		{urn:nbn:de:0030-drops-237453},
  doi =		{10.4230/LIPIcs.SAT.2025.11},
  annote =	{Keywords: DQBF, QBF, Proof Systems, Dependency Schemes, RAT, Extended Frege, Skolem functions}
}
Document
Space-Bounded Quantum Interactive Proof Systems

Authors: François Le Gall, Yupan Liu, Harumichi Nishimura, and Qisheng Wang

Published in: LIPIcs, Volume 339, 40th Computational Complexity Conference (CCC 2025)


Abstract
We introduce two models of space-bounded quantum interactive proof systems, QIPL and QIP_{U}L. The QIP_{U}L model, a space-bounded variant of quantum interactive proofs (QIP) introduced by Watrous (CC 2003) and Kitaev and Watrous (STOC 2000), restricts verifier actions to unitary circuits. In contrast, QIPL allows logarithmically many pinching intermediate measurements per verifier action, making it the weakest model that encompasses the classical model of Condon and Ladner (JCSS 1995). We characterize the computational power of QIPL and QIP_{U}L. When the message number m is polynomially bounded, QIP_{U}L ⊊ QIPL unless P = NP:
- QIPL^HC, a subclass of QIPL defined by a high-concentration condition on yes instances, exactly characterizes NP.
- QIP_{U}L is contained in P and contains SAC¹ ∪ BQL, where SAC¹ denotes problems solvable by classical logarithmic-depth, semi-unbounded fan-in circuits.
However, this distinction vanishes when m is constant. Our results further indicate that (pinching) intermediate measurements uniquely impact space-bounded quantum interactive proofs, unlike in space-bounded quantum computation, where BQL = BQ_{U}L. We also introduce space-bounded unitary quantum statistical zero-knowledge (QSZK_{U}L), a specific form of QIP_{U}L proof systems with statistical zero-knowledge against any verifier. This class is a space-bounded variant of quantum statistical zero-knowledge (QSZK) defined by Watrous (SICOMP 2009). We prove that QSZK_{U}L = BQL, implying that the statistical zero-knowledge property negates the computational advantage typically gained from the interaction.

Cite as

François Le Gall, Yupan Liu, Harumichi Nishimura, and Qisheng Wang. Space-Bounded Quantum Interactive Proof Systems. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 17:1-17:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{legall_et_al:LIPIcs.CCC.2025.17,
  author =	{Le Gall, Fran\c{c}ois and Liu, Yupan and Nishimura, Harumichi and Wang, Qisheng},
  title =	{{Space-Bounded Quantum Interactive Proof Systems}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{17:1--17:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.17},
  URN =		{urn:nbn:de:0030-drops-237115},
  doi =		{10.4230/LIPIcs.CCC.2025.17},
  annote =	{Keywords: Intermediate measurements, Quantum interactive proofs, Space-bounded quantum computation}
}
Document
Algebraic Pseudorandomness in VNC⁰

Authors: Robert Andrews

Published in: LIPIcs, Volume 339, 40th Computational Complexity Conference (CCC 2025)


Abstract
We study the arithmetic complexity of hitting set generators, which are pseudorandom objects used for derandomization of the polynomial identity testing problem. We give new explicit constructions of hitting set generators whose outputs are computable in VNC⁰, i.e., can be computed by arithmetic formulas of constant size. Unconditionally, we construct a VNC⁰-computable generator that hits arithmetic circuits of constant depth and polynomial size. We also give conditional constructions, under strong but plausible hardness assumptions, of VNC⁰-computable generators that hit arithmetic formulas and arithmetic branching programs of polynomial size, respectively. As a corollary of our constructions, we derive lower bounds for subsystems of the Geometric Ideal Proof System of Grochow and Pitassi. Constructions of such generators are implicit in prior work of Kayal on lower bounds for the degree of annihilating polynomials. Our main contribution is a construction whose correctness relies on circuit complexity lower bounds rather than degree lower bounds.

Cite as

Robert Andrews. Algebraic Pseudorandomness in VNC⁰. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 15:1-15:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{andrews:LIPIcs.CCC.2025.15,
  author =	{Andrews, Robert},
  title =	{{Algebraic Pseudorandomness in VNC⁰}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{15:1--15:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.15},
  URN =		{urn:nbn:de:0030-drops-237092},
  doi =		{10.4230/LIPIcs.CCC.2025.15},
  annote =	{Keywords: Polynomial identity testing, Algebraic circuits, Ideal Proof System}
}
Document
Track A: Algorithms, Complexity and Games
New Bounds for the Ideal Proof System in Positive Characteristic

Authors: Amik Raj Behera, Nutan Limaye, Varun Ramanathan, and Srikanth Srinivasan

Published in: LIPIcs, Volume 334, 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)


Abstract
In this work, we prove upper and lower bounds over fields of positive characteristics for several fragments of the Ideal Proof System (IPS), an algebraic proof system introduced by Grochow and Pitassi (J. ACM 2018). Our results extend the works of Forbes, Shpilka, Tzameret, and Wigderson (Theory of Computing 2021) and also of Govindasamy, Hakoniemi, and Tzameret (FOCS 2022). These works primarily focused on proof systems over fields of characteristic 0, and we are able to extend these results to positive characteristic. The question of proving general IPS lower bounds over positive characteristic is motivated by the important question of proving AC⁰[p]-Frege lower bounds. This connection was observed by Grochow and Pitassi (J. ACM 2018). Additional motivation comes from recent developments in algebraic complexity theory due to Forbes (CCC 2024) who showed how to extend previous lower bounds over characteristic 0 to positive characteristic. In our work, we adapt the functional lower bound method of Forbes et al. (Theory of Computing 2021) to prove exponential-size lower bounds for various subsystems of IPS. In order to establish these size lower bounds, we first prove a tight degree lower bound for a variant of Subset Sum over positive characteristic. This forms the core of all our lower bounds. Additionally, we derive upper bounds for the instances presented above. We show that they have efficient constant-depth IPS refutations. This demonstrates that constant-depth IPS refutations are stronger than the proof systems considered above even in positive characteristic. We also show that constant-depth IPS can efficiently refute a general class of instances, namely all symmetric instances, thereby further uncovering the strength of these algebraic proofs in positive characteristic.

Cite as

Amik Raj Behera, Nutan Limaye, Varun Ramanathan, and Srikanth Srinivasan. New Bounds for the Ideal Proof System in Positive Characteristic. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 22:1-22:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{behera_et_al:LIPIcs.ICALP.2025.22,
  author =	{Behera, Amik Raj and Limaye, Nutan and Ramanathan, Varun and Srinivasan, Srikanth},
  title =	{{New Bounds for the Ideal Proof System in Positive Characteristic}},
  booktitle =	{52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
  pages =	{22:1--22:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-372-0},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{334},
  editor =	{Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.22},
  URN =		{urn:nbn:de:0030-drops-233992},
  doi =		{10.4230/LIPIcs.ICALP.2025.22},
  annote =	{Keywords: Ideal Proof Systems, Algebraic Complexity, Positive Characteristic}
}
Document
Survey
Uncertainty Management in the Construction of Knowledge Graphs: A Survey

Authors: Lucas Jarnac, Yoan Chabot, and Miguel Couceiro

Published in: Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 1 (2025)


Abstract
Knowledge Graphs (KGs) are a major asset for companies thanks to their great flexibility in data representation and their numerous applications, e.g., vocabulary sharing, Q&A or recommendation systems. To build a KG, it is a common practice to rely on automatic methods for extracting knowledge from various heterogeneous sources. However, in a noisy and uncertain world, knowledge may not be reliable and conflicts between data sources may occur. Integrating unreliable data would directly impact the use of the KG, therefore such conflicts must be resolved. This could be done manually by selecting the best data to integrate. This first approach is highly accurate, but costly and time-consuming. That is why recent efforts focus on automatic approaches, which is a challenging task since it requires handling the uncertainty of extracted knowledge throughout its integration into the KG. We survey state-of-the-art approaches in this direction and present constructions of both open and enterprise KGs. We then describe different knowledge extraction methods and discuss downstream tasks after knowledge acquisition, including KG completion using embedding models, knowledge alignment, and knowledge fusion in order to address the problem of knowledge uncertainty in KG construction. We conclude with a discussion on the remaining challenges and perspectives when constructing a KG taking into account uncertainty.
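
As a toy illustration of the knowledge-fusion step discussed above, the sketch below resolves a conflict between sources by trust-weighted voting. It is a minimal, assumption-laden example (the source trust scores and the fuse_value helper are invented for illustration), not a method taken from the survey.

from collections import defaultdict

def fuse_value(claims: list[tuple[str, str]], trust: dict[str, float]) -> str:
    """Pick the claimed value whose supporting sources carry the most total trust.

    claims: (source_id, claimed_value) pairs for one (entity, property) slot.
    trust:  prior reliability score per source, e.g. estimated from past agreement."""
    support = defaultdict(float)
    for source, value in claims:
        support[value] += trust.get(source, 0.1)   # small default trust for unknown sources
    return max(support, key=support.get)

# Example: two sources claim the value "1998", one less-trusted source claims "2001".
claims = [("report_A", "1998"), ("report_B", "1998"), ("blog_C", "2001")]
print(fuse_value(claims, {"report_A": 0.9, "report_B": 0.7, "blog_C": 0.4}))  # -> "1998"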

Cite as

Lucas Jarnac, Yoan Chabot, and Miguel Couceiro. Uncertainty Management in the Construction of Knowledge Graphs: A Survey. In Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 1, pp. 3:1-3:48, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@Article{jarnac_et_al:TGDK.3.1.3,
  author =	{Jarnac, Lucas and Chabot, Yoan and Couceiro, Miguel},
  title =	{{Uncertainty Management in the Construction of Knowledge Graphs: A Survey}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{3:1--3:48},
  ISSN =	{2942-7517},
  year =	{2025},
  volume =	{3},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.3.1.3},
  URN =		{urn:nbn:de:0030-drops-233733},
  doi =		{10.4230/TGDK.3.1.3},
  annote =	{Keywords: Knowledge reconciliation, Uncertainty, Heterogeneous sources, Knowledge graph construction}
}
Document
Directed Hypercube Routing, a Generalized Lehman-Ron Theorem, and Monotonicity Testing

Authors: Deeparnab Chakrabarty and C. Seshadhri

Published in: LIPIcs, Volume 325, 16th Innovations in Theoretical Computer Science Conference (ITCS 2025)


Abstract
Motivated by applications to monotonicity testing, Lehman and Ron (JCTA, 2001) proved the existence of a collection of vertex disjoint paths between comparable sub-level sets in the directed hypercube. The main technical contribution of this paper is a new proof method that yields a generalization of their theorem: we prove the existence of two edge-disjoint collections of vertex disjoint paths. Our main conceptual contributions are conjectures on directed hypercube flows with simultaneous vertex and edge capacities of which our generalized Lehman-Ron theorem is a special case. We show that these conjectures imply directed isoperimetric theorems, and in particular, the robust directed Talagrand inequality due to Khot, Minzer, and Safra (SIAM J. on Comp, 2018). These isoperimetric inequalities, that relate the directed surface area (of a set in the hypercube) to its distance to monotonicity, have been crucial in obtaining the best monotonicity testers for Boolean functions. We believe our conjectures pave the way towards combinatorial proofs of these directed isoperimetry theorems.

Cite as

Deeparnab Chakrabarty and C. Seshadhri. Directed Hypercube Routing, a Generalized Lehman-Ron Theorem, and Monotonicity Testing. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 34:1-34:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{chakrabarty_et_al:LIPIcs.ITCS.2025.34,
  author =	{Chakrabarty, Deeparnab and Seshadhri, C.},
  title =	{{Directed Hypercube Routing, a Generalized Lehman-Ron Theorem, and Monotonicity Testing}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{34:1--34:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.34},
  URN =		{urn:nbn:de:0030-drops-226623},
  doi =		{10.4230/LIPIcs.ITCS.2025.34},
  annote =	{Keywords: Monotonicity testing, isoperimetric inequalities, hypercube routing}
}
Document
Catalytic Communication

Authors: Edward Pyne, Nathan S. Sheffield, and William Wang

Published in: LIPIcs, Volume 325, 16th Innovations in Theoretical Computer Science Conference (ITCS 2025)


Abstract
The study of space-bounded computation has drawn extensively from ideas and results in the field of communication complexity. Catalytic Computation (Buhrman, Cleve, Koucký, Loff and Speelman, STOC 2013) studies the power of bounded space augmented with a pre-filled hard drive that can be used non-destructively during the computation. Presently, many structural questions in this model remain open. Towards a better understanding of catalytic space, we define a model of catalytic communication complexity and prove new upper and lower bounds. In our model, Alice and Bob share a blackboard with a tiny number of free bits, and a larger section with an arbitrary initial configuration. They must jointly compute a function of their inputs, communicating only via the blackboard, and must always reset the blackboard to its initial configuration. We prove several upper and lower bounds:
1) We characterize the simplest nontrivial model, that of one bit of free space and three rounds, in terms of 𝔽₂ rank. In particular, we give natural problems that are solvable with a minimal-sized blackboard that require near-maximal (randomized) communication complexity, and vice versa.
2) We show that allowing constantly many free bits, as opposed to one, allows an exponential improvement on the size of the blackboard for natural problems. To do so, we connect the problem to existence questions in extremal graph theory.
3) We give tight connections between our model and standard notions of non-uniform catalytic computation. Using this connection, we show that with an arbitrary constant number of rounds and bits of free space, one can compute all functions in TC⁰.
We view this model as a step toward understanding the value of filled space in computation.

Cite as

Edward Pyne, Nathan S. Sheffield, and William Wang. Catalytic Communication. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 79:1-79:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{pyne_et_al:LIPIcs.ITCS.2025.79,
  author =	{Pyne, Edward and Sheffield, Nathan S. and Wang, William},
  title =	{{Catalytic Communication}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{79:1--79:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.79},
  URN =		{urn:nbn:de:0030-drops-227076},
  doi =		{10.4230/LIPIcs.ITCS.2025.79},
  annote =	{Keywords: Catalytic computation, Branching programs, Communication complexity}
}
Document
Explicit Time and Space Efficient Encoders Exist Only with Random Access

Authors: Joshua Cook and Dana Moshkovitz

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
We give the first explicit constant rate, constant relative distance, linear codes with an encoder that runs in time n^{1 + o(1)} and space polylog(n) provided random access to the message. Prior to this work, the only such codes were non-explicit, for instance repeat accumulate codes [Divsalar et al., 1998] and the codes described in [Gál et al., 2013]. To construct our codes, we also give explicit, efficiently invertible, lossless condensers with constant entropy gap and polylogarithmic seed length. In contrast to encoders with random access to the message, we show that encoders with sequential access to the message cannot run in almost linear time and polylogarithmic space. Our notion of sequential access is much stronger than streaming access.

Cite as

Joshua Cook and Dana Moshkovitz. Explicit Time and Space Efficient Encoders Exist Only with Random Access. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 5:1-5:54, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{cook_et_al:LIPIcs.CCC.2024.5,
  author =	{Cook, Joshua and Moshkovitz, Dana},
  title =	{{Explicit Time and Space Efficient Encoders Exist Only with Random Access}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{5:1--5:54},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.5},
  URN =		{urn:nbn:de:0030-drops-204015},
  doi =		{10.4230/LIPIcs.CCC.2024.5},
  annote =	{Keywords: Time-Space Trade Offs, Error Correcting Codes, Encoders, Explicit Constructions, Streaming Lower Bounds, Sequential Access, Time-Space Lower Bounds, Lossless Condensers, Invertible Condensers, Condensers}
}
Document
The Entangled Quantum Polynomial Hierarchy Collapses

Authors: Sabee Grewal and Justin Yirka

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
We introduce the entangled quantum polynomial hierarchy, QEPH, as the class of problems that are efficiently verifiable given alternating quantum proofs that may be entangled with each other. We prove QEPH collapses to its second level. In fact, we show that a polynomial number of alternations collapses to just two. As a consequence, QEPH = QRG(1), the class of problems having one-turn quantum refereed games, which is known to be contained in PSPACE. This is in contrast to the unentangled quantum polynomial hierarchy, QPH, which contains QMA(2). We also introduce DistributionQCPH, a generalization of the quantum-classical polynomial hierarchy QCPH where the provers send probability distributions over strings (instead of strings). We prove DistributionQCPH = QCPH, suggesting that only quantum superposition (not classical probability) increases the computational power of these hierarchies. To prove this equality, we generalize a game-theoretic result of Lipton and Young (1994) which says that, without loss of generality, the provers can send uniform distributions over a polynomial-size support. We also prove the analogous result for the polynomial hierarchy, i.e., DistributionPH = PH. Finally, we show that PH and QCPH are contained in QPH, resolving an open question of Gharibian et al. (2022).

Cite as

Sabee Grewal and Justin Yirka. The Entangled Quantum Polynomial Hierarchy Collapses. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 6:1-6:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{grewal_et_al:LIPIcs.CCC.2024.6,
  author =	{Grewal, Sabee and Yirka, Justin},
  title =	{{The Entangled Quantum Polynomial Hierarchy Collapses}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{6:1--6:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.6},
  URN =		{urn:nbn:de:0030-drops-204028},
  doi =		{10.4230/LIPIcs.CCC.2024.6},
  annote =	{Keywords: Polynomial hierarchy, Entangled proofs, Correlated proofs, Minimax}
}
Document
Vision
Machine Learning and Knowledge Graphs: Existing Gaps and Future Research Challenges

Authors: Claudia d'Amato, Louis Mahon, Pierre Monnin, and Giorgos Stamou

Published in: Transactions on Graph Data and Knowledge (TGDK), Volume 1, Issue 1 (2023): Special Issue on Trends in Graph Data and Knowledge


Abstract
The graph model is nowadays largely adopted to model a wide range of knowledge and data, spanning from social networks to knowledge graphs (KGs), representing a successful paradigm of how symbolic and transparent AI can scale on the World Wide Web. However, due to their unprecedented volume, they are generally tackled by Machine Learning (ML) and mostly numeric-based methods such as graph embedding models (KGE) and deep neural networks (DNNs). The latter methods have lately proved very efficient, leading the current AI spring. In this vision paper, we introduce some of the main existing methods for combining KGs and ML, divided into two categories: those using ML to improve KGs, and those using KGs to improve results on ML tasks. From this introduction, we highlight research gaps and perspectives that we deem promising and currently under-explored for the involved research communities, spanning KG support for LLM prompting, integration of KG semantics into ML models, symbol-based methods, interpretability of ML models, and the need for improved benchmark datasets. In our opinion, such perspectives are stepping stones in an ultimate view of KGs as central assets for neuro-symbolic and explainable AI.

Cite as

Claudia d'Amato, Louis Mahon, Pierre Monnin, and Giorgos Stamou. Machine Learning and Knowledge Graphs: Existing Gaps and Future Research Challenges. In Special Issue on Trends in Graph Data and Knowledge. Transactions on Graph Data and Knowledge (TGDK), Volume 1, Issue 1, pp. 8:1-8:35, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@Article{damato_et_al:TGDK.1.1.8,
  author =	{d'Amato, Claudia and Mahon, Louis and Monnin, Pierre and Stamou, Giorgos},
  title =	{{Machine Learning and Knowledge Graphs: Existing Gaps and Future Research Challenges}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{8:1--8:35},
  year =	{2023},
  volume =	{1},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.1.1.8},
  URN =		{urn:nbn:de:0030-drops-194824},
  doi =		{10.4230/TGDK.1.1.8},
  annote =	{Keywords: Graph-based Learning, Knowledge Graph Embeddings, Large Language Models, Explainable AI, Knowledge Graph Completion \& Curation}
}
Document
Survey
Knowledge Graph Embeddings: Open Challenges and Opportunities

Authors: Russa Biswas, Lucie-Aimée Kaffee, Michael Cochez, Stefania Dumbrava, Theis E. Jendal, Matteo Lissandrini, Vanessa Lopez, Eneldo Loza Mencía, Heiko Paulheim, Harald Sack, Edlira Kalemi Vakaj, and Gerard de Melo

Published in: Transactions on Graph Data and Knowledge (TGDK), Volume 1, Issue 1 (2023): Special Issue on Trends in Graph Data and Knowledge


Abstract
While Knowledge Graphs (KGs) have long been used as valuable sources of structured knowledge, in recent years, KG embeddings have become a popular way of deriving numeric vector representations from them, for instance, to support knowledge graph completion and similarity search. This study surveys advances as well as open challenges and opportunities in this area. For instance, the most prominent embedding models focus primarily on structural information. However, there has been notable progress in incorporating further aspects, such as semantics, multi-modal, temporal, and multilingual features. Most embedding techniques are assessed using human-curated benchmark datasets for the task of link prediction, neglecting other important real-world KG applications. Many approaches assume a static knowledge graph and are unable to account for dynamic changes. Additionally, KG embeddings may encode data biases and lack interpretability. Overall, this study provides an overview of promising research avenues to learn improved KG embeddings that can address a more diverse range of use cases.
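
For readers new to the area, the sketch below shows the scoring function of TransE, one of the structure-only embedding models the survey refers to: a triple (h, r, t) is plausible when the translated head h + r lies close to the tail t. The norm choice and the ranking helper are illustrative, not tied to any particular result in the survey.

import numpy as np

def transe_score(h: np.ndarray, r: np.ndarray, t: np.ndarray, p: int = 1) -> float:
    """TransE plausibility: -||h + r - t||_p; higher means a more plausible triple."""
    return -float(np.linalg.norm(h + r - t, ord=p))

def rank_tails(h: np.ndarray, r: np.ndarray, candidates: np.ndarray) -> np.ndarray:
    """Link prediction (h, r, ?): order candidate tail embeddings, best first."""
    scores = -np.linalg.norm(h + r - candidates, ord=1, axis=1)
    return np.argsort(-scores)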

Cite as

Russa Biswas, Lucie-Aimée Kaffee, Michael Cochez, Stefania Dumbrava, Theis E. Jendal, Matteo Lissandrini, Vanessa Lopez, Eneldo Loza Mencía, Heiko Paulheim, Harald Sack, Edlira Kalemi Vakaj, and Gerard de Melo. Knowledge Graph Embeddings: Open Challenges and Opportunities. In Special Issue on Trends in Graph Data and Knowledge. Transactions on Graph Data and Knowledge (TGDK), Volume 1, Issue 1, pp. 4:1-4:32, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@Article{biswas_et_al:TGDK.1.1.4,
  author =	{Biswas, Russa and Kaffee, Lucie-Aim\'{e}e and Cochez, Michael and Dumbrava, Stefania and Jendal, Theis E. and Lissandrini, Matteo and Lopez, Vanessa and Menc{\'\i}a, Eneldo Loza and Paulheim, Heiko and Sack, Harald and Vakaj, Edlira Kalemi and de Melo, Gerard},
  title =	{{Knowledge Graph Embeddings: Open Challenges and Opportunities}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{4:1--4:32},
  year =	{2023},
  volume =	{1},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.1.1.4},
  URN =		{urn:nbn:de:0030-drops-194783},
  doi =		{10.4230/TGDK.1.1.4},
  annote =	{Keywords: Knowledge Graphs, KG embeddings, Link prediction, KG applications}
}
Document
RANDOM
Efficient Interactive Proofs for Non-Deterministic Bounded Space

Authors: Joshua Cook and Ron D. Rothblum

Published in: LIPIcs, Volume 275, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023)


Abstract
The celebrated IP = PSPACE Theorem gives an efficient interactive proof for any bounded-space algorithm. In this work we study interactive proofs for non-deterministic bounded space computations. While Savitch’s Theorem shows that nondeterministic bounded-space algorithms can be simulated by deterministic bounded-space algorithms, this simulation has a quadratic overhead. We give interactive protocols for nondeterministic algorithms directly to get faster verifiers. More specifically, for any non-deterministic space S algorithm, we construct an interactive proof in which the verifier runs in time Õ(n+S²). This improves on the best previous bound of Õ(n+S³) and matches the result for deterministic space bounded algorithms, up to polylog(S) factors. We further generalize to alternating bounded space algorithms. For any language L decided by a time T, space S algorithm that uses d alternations, we construct an interactive proof in which the verifier runs in time Õ(n + S log(T) + S d) and the prover runs in time 2^O(S). For d = O(log(T)), this matches the best known interactive proofs for deterministic algorithms, up to polylog(S) factors, and improves on the previous best verifier time for nondeterministic algorithms by a factor of log(T). We also improve the best prior verifier time for unbounded alternations by a factor of S. Using known connections of bounded alternation algorithms to bounded depth circuits, we also obtain faster verifiers for bounded depth circuits with unbounded fan-in.

Cite as

Joshua Cook and Ron D. Rothblum. Efficient Interactive Proofs for Non-Deterministic Bounded Space. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 275, pp. 47:1-47:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{cook_et_al:LIPIcs.APPROX/RANDOM.2023.47,
  author =	{Cook, Joshua and Rothblum, Ron D.},
  title =	{{Efficient Interactive Proofs for Non-Deterministic Bounded Space}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023)},
  pages =	{47:1--47:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-296-9},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{275},
  editor =	{Megow, Nicole and Smith, Adam},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2023.47},
  URN =		{urn:nbn:de:0030-drops-188727},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2023.47},
  annote =	{Keywords: Interactive Proofs, Alternating Algorithms, AC0\lbrack2\rbrack, Doubly Efficient Proofs}
}
  • Refine by Type
  • 18 Document/PDF
  • 10 Document/HTML

  • Refine by Publication Year
  • 10 2025
  • 2 2024
  • 4 2023
  • 1 2022
  • 1 2020

  • Refine by Author
  • 5 Cook, Joshua
  • 2 Grewal, Sabee
  • 2 Moshkovitz, Dana
  • 1 Aaronson, Scott
  • 1 Andrews, Robert
  • Show More...

  • Refine by Series/Journal
  • 14 LIPIcs
  • 4 TGDK

  • Refine by Classification
  • 5 Theory of computation → Interactive proof systems
  • 4 Theory of computation → Complexity classes
  • 3 Theory of computation → Quantum complexity theory
  • 2 Theory of computation → Algebraic complexity theory
  • 2 Theory of computation → Circuit complexity
  • Show More...

  • Refine by Keyword
  • 2 Interactive Proofs
  • 1 AC0
  • 1 AC0[2]
  • 1 Algebraic Complexity
  • 1 Algebraic circuits
  • Show More...
