12 Search Results for "Yan, Chao"


Document
Shortest Path Separators in Unit Disk Graphs

Authors: Elfarouk Harb, Zhengcheng Huang, and Da Wei Zheng

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
We introduce a new balanced separator theorem for unit-disk graphs involving two shortest paths combined with the 1-hop neighbours of those paths and two other vertices. This answers an open problem of Yan, Xiang, and Dragan [CGTA '12] and improves their result that requires removing the 3-hop neighbourhood of two shortest paths. Our proof uses very different ideas, including Delaunay triangulations and a generalization of the celebrated balanced separator theorem of Lipton and Tarjan [SIAM J. Appl. Math. '79] to systems of non-intersecting paths.
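
For intuition, the following small Python sketch (ours, not the paper's construction; it assumes the networkx package) shows the kind of object the theorem produces: build a unit-disk graph, delete two shortest paths together with their 1-hop neighbourhoods, and look at the sizes of what remains.

    import itertools, math, random
    import networkx as nx

    def unit_disk_graph(points, radius=1.0):
        # Vertices are point indices; edge iff Euclidean distance <= radius.
        g = nx.Graph()
        g.add_nodes_from(range(len(points)))
        for i, j in itertools.combinations(range(len(points)), 2):
            if math.dist(points[i], points[j]) <= radius:
                g.add_edge(i, j)
        return g

    random.seed(0)
    pts = [(random.uniform(0, 6), random.uniform(0, 6)) for _ in range(300)]
    g = unit_disk_graph(pts)
    g = g.subgraph(max(nx.connected_components(g), key=len)).copy()

    # Delete two shortest paths plus all of their 1-hop neighbours,
    # mimicking the shape (not the balance guarantee) of the separator.
    nodes = sorted(g.nodes())
    p1 = nx.shortest_path(g, nodes[0], nodes[-1])
    p2 = nx.shortest_path(g, nodes[len(nodes) // 3], nodes[2 * len(nodes) // 3])
    core = set(p1) | set(p2)
    separator = set(core)
    for v in core:
        separator.update(g.neighbors(v))

    rest = g.copy()
    rest.remove_nodes_from(separator)
    sizes = sorted((len(c) for c in nx.connected_components(rest)), reverse=True)
    print("separator size:", len(separator))
    print("largest remaining components:", sizes[:5])

The theorem guarantees that a careful choice of the two paths (plus two extra vertices) leaves every component with at most a constant fraction of the graph; the toy above only illustrates the shape of the separator, not how to choose the paths.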

Cite as

Elfarouk Harb, Zhengcheng Huang, and Da Wei Zheng. Shortest Path Separators in Unit Disk Graphs. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 66:1-66:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{harb_et_al:LIPIcs.ESA.2024.66,
  author =	{Harb, Elfarouk and Huang, Zhengcheng and Zheng, Da Wei},
  title =	{{Shortest Path Separators in Unit Disk Graphs}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{66:1--66:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.66},
  URN =		{urn:nbn:de:0030-drops-211375},
  doi =		{10.4230/LIPIcs.ESA.2024.66},
  annote =	{Keywords: Balanced shortest path separators, unit disk graphs, crossings}
}
Document
RANDOM
Hilbert Functions and Low-Degree Randomness Extractors

Authors: Alexander Golovnev, Zeyu Guo, Pooya Hatami, Satyajeet Nagargoje, and Chao Yan

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
For S ⊆ 𝔽ⁿ, consider the linear space of restrictions of degree-d polynomials to S. The Hilbert function of S, denoted h_S(d,𝔽), is the dimension of this space. We obtain a tight lower bound on the smallest value of the Hilbert function of subsets S of arbitrary finite grids in 𝔽ⁿ with a fixed size |S|. We achieve this by proving that this value coincides with a combinatorial quantity, namely the smallest number of low Hamming weight points in a down-closed set of size |S|. Understanding the smallest values of Hilbert functions is closely related to the study of degree-d closure of sets, a notion introduced by Nie and Wang (Journal of Combinatorial Theory, Series A, 2015). We use bounds on the Hilbert function to obtain a tight bound on the size of degree-d closures of subsets of 𝔽_qⁿ, which answers a question posed by Doron, Ta-Shma, and Tell (Computational Complexity, 2022). We use the bounds on the Hilbert function and degree-d closure of sets to prove that a random low-degree polynomial is an extractor for samplable randomness sources. Most notably, we prove the existence of low-degree extractors and dispersers for sources generated by constant-degree polynomials and polynomial-size circuits. Until recently, even the existence of arbitrary deterministic extractors for such sources was not known.
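
To make the definition concrete, here is a self-contained Python sketch (our illustration, not the authors' code) that computes h_S(d, 𝔽_p) for a finite S ⊆ 𝔽_pⁿ: the span of degree-≤ d polynomials restricted to S is the column space of the monomial evaluation matrix, so its dimension is that matrix's rank over 𝔽_p.

    def monomials(n, d):
        # All exponent vectors e in N^n with total degree sum(e) <= d.
        if n == 0:
            yield ()
            return
        for e0 in range(d + 1):
            for rest in monomials(n - 1, d - e0):
                yield (e0,) + rest

    def eval_monomial(x, e, p):
        v = 1
        for xi, ei in zip(x, e):
            v = (v * pow(xi, ei, p)) % p
        return v

    def rank_mod_p(rows, p):
        # Plain Gaussian elimination over the prime field F_p.
        rows = [list(r) for r in rows]
        rank, col = 0, 0
        ncols = len(rows[0]) if rows else 0
        while rank < len(rows) and col < ncols:
            pivot = next((r for r in range(rank, len(rows)) if rows[r][col] % p), None)
            if pivot is None:
                col += 1
                continue
            rows[rank], rows[pivot] = rows[pivot], rows[rank]
            inv = pow(rows[rank][col], p - 2, p)  # p prime
            rows[rank] = [(x * inv) % p for x in rows[rank]]
            for r in range(len(rows)):
                if r != rank and rows[r][col] % p:
                    f = rows[r][col]
                    rows[r] = [(a - f * b) % p for a, b in zip(rows[r], rows[rank])]
            rank += 1
            col += 1
        return rank

    def hilbert_function(S, d, p):
        # h_S(d, F_p): rank of the |S| x #monomials evaluation matrix.
        n = len(next(iter(S)))
        mons = list(monomials(n, d))
        matrix = [[eval_monomial(x, e, p) for e in mons] for x in S]
        return rank_mod_p(matrix, p)

    S = {(0, 0), (0, 1), (1, 0), (1, 1), (2, 2)}   # a 5-point subset of F_5^2
    print(hilbert_function(S, 1, 5), hilbert_function(S, 2, 5))  # prints 3 5

The paper's lower bound concerns the minimum of this quantity over all S of a given size inside a finite grid, which it shows equals a purely combinatorial quantity about down-closed sets.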

Cite as

Alexander Golovnev, Zeyu Guo, Pooya Hatami, Satyajeet Nagargoje, and Chao Yan. Hilbert Functions and Low-Degree Randomness Extractors. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 41:1-41:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{golovnev_et_al:LIPIcs.APPROX/RANDOM.2024.41,
  author =	{Golovnev, Alexander and Guo, Zeyu and Hatami, Pooya and Nagargoje, Satyajeet and Yan, Chao},
  title =	{{Hilbert Functions and Low-Degree Randomness Extractors}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{41:1--41:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.41},
  URN =		{urn:nbn:de:0030-drops-210345},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.41},
  annote =	{Keywords: Extractors, Dispersers, Circuits, Hilbert Function, Randomness, Low Degree Polynomials}
}
Document
Cross Ledger Transaction Consistency for Financial Auditing

Authors: Vlasis Koutsos, Xiangan Tian, Dimitrios Papadopoulos, and Dimitris Chatzopoulos

Published in: LIPIcs, Volume 316, 6th Conference on Advances in Financial Technologies (AFT 2024)


Abstract
Auditing throughout a fiscal year is integral to organizations with transactional activity. Organizations transact with each other and record the details of all their economic activities so that a regulatory committee can verify the lawfulness and legitimacy of their activity. However, it is computationally infeasible for the committee to perform all necessary checks for each organization. To overcome this, auditors assist in the process: organizations give their auditors access to all their internal data, and the auditors then produce reports on the consistency of the organization’s data, alerting the committee to any inconsistencies. Despite this, numerous issues that result in fines annually revolve around such inconsistencies in bookkeeping across organizations. Notably, committees wishing to verify the correctness of auditor-provided reports need to redo all the calculations, a process whose cost grows in proportion to the number of organizations. In fact, it becomes prohibitive in real-world settings with thousands of organizations. In this work, we propose two protocols, CLOSC and CLOLC, whose goal is to enable auditors and a committee to verify the consistency of transactions across different ledgers. Both protocols ensure that for every transaction recorded in an organization’s ledger, there exists a dual one in the ledger of another organization, while safeguarding against other potential attacks. Importantly, we minimize the information leaked to auditors and other organizations and guarantee three crucial security and privacy properties that we propose: (i) transaction amount privacy, (ii) organization-auditor unlinkability, and (iii) transacting-organizations unlinkability. At the core of our protocols lies a two-tier ledger architecture alongside a suite of cryptographic tools. To demonstrate the practicality and scalability of our designs, we provide an extensive performance evaluation of both CLOSC and CLOLC. The numbers are promising: all computation and verification times lie in the range of seconds, even for millions of transactions, while the on-chain storage cost for an auditing epoch is in the range of GB for millions of transactions and thousands of organizations.
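
The core consistency predicate is easy to state in plaintext. A minimal Python sketch (ours; it ignores the privacy layer, commitments, and two-tier ledger that CLOSC/CLOLC add on top) checks that every record an organization keeps has a matching dual record in the counterparty's ledger:

    from collections import Counter

    def dual_key(tx):
        # A transaction and its dual agree on (sender, receiver, amount, txid).
        return (tx["from"], tx["to"], tx["amount"], tx["txid"])

    def cross_ledger_inconsistencies(ledgers):
        # ledgers: org name -> list of transaction dicts recorded by that org.
        # (Sketch only: checks existence, not multiplicities.)
        recorded = {org: Counter(map(dual_key, txs)) for org, txs in ledgers.items()}
        issues = []
        for org, txs in ledgers.items():
            for tx in txs:
                other = tx["to"] if tx["from"] == org else tx["from"]
                if recorded.get(other, Counter())[dual_key(tx)] < 1:
                    issues.append((org, tx))
        return issues

    ledgers = {
        "A": [{"from": "A", "to": "B", "amount": 100, "txid": 1}],
        "B": [{"from": "A", "to": "B", "amount": 100, "txid": 1},
              {"from": "B", "to": "A", "amount": 7, "txid": 2}],  # missing in A
    }
    print(cross_ledger_inconsistencies(ledgers))  # flags B's txid 2

The protocols in the paper verify exactly this kind of duality, but over committed and encrypted records, so that amounts stay private and neither auditor-organization nor organization-organization links are revealed.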

Cite as

Vlasis Koutsos, Xiangan Tian, Dimitrios Papadopoulos, and Dimitris Chatzopoulos. Cross Ledger Transaction Consistency for Financial Auditing. In 6th Conference on Advances in Financial Technologies (AFT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 316, pp. 4:1-4:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{koutsos_et_al:LIPIcs.AFT.2024.4,
  author =	{Koutsos, Vlasis and Tian, Xiangan and Papadopoulos, Dimitrios and Chatzopoulos, Dimitris},
  title =	{{Cross Ledger Transaction Consistency for Financial Auditing}},
  booktitle =	{6th Conference on Advances in Financial Technologies (AFT 2024)},
  pages =	{4:1--4:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-345-4},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{316},
  editor =	{B\"{o}hme, Rainer and Kiffer, Lucianna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.AFT.2024.4},
  URN =		{urn:nbn:de:0030-drops-209409},
  doi =		{10.4230/LIPIcs.AFT.2024.4},
  annote =	{Keywords: Financial auditing, Two-tier ledger architecture, Smart contracts, Transaction privacy, Financial entity unlinkability}
}
Document
DeFiAligner: Leveraging Symbolic Analysis and Large Language Models for Inconsistency Detection in Decentralized Finance

Authors: Rundong Gan, Liyi Zhou, Le Wang, Kaihua Qin, and Xiaodong Lin

Published in: LIPIcs, Volume 316, 6th Conference on Advances in Financial Technologies (AFT 2024)


Abstract
Decentralized Finance (DeFi) has witnessed a monumental surge, reaching 53.039 billion USD in total value locked. As this sector continues to expand, ensuring the reliability of DeFi smart contracts becomes increasingly crucial. While some users are adept at reading code or the compiled bytecode to understand smart contracts, many rely on documentation. Therefore, discrepancies between the documentation and the deployed code can pose significant risks, whether these discrepancies are due to errors or intentional fraud. To tackle these challenges, we developed DeFiAligner, an end-to-end system to identify inconsistencies between documentation and smart contracts. DeFiAligner incorporates a symbolic execution tool, SEVM, which explores execution paths of on-chain binary code, recording memory and stack states. It automatically generates symbolic expressions for token balance changes and branch conditions, which, along with related project documents, are processed by LLMs. Using structured prompts, the LLMs evaluate the alignment between the symbolic expressions and the documentation. Our tests across three distinct scenarios demonstrate DeFiAligner’s capability to automate inconsistency detection in DeFi, achieving recall rates of 92% and 90% on two public datasets, respectively.
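
As a rough illustration of the pipeline's final stage (our sketch; the paper's actual prompt format, SEVM output, and model are not shown here), one can assemble a structured prompt from a symbolic balance-change expression and a documentation excerpt, and hand it to any LLM client passed in as a plain callable:

    def build_alignment_prompt(symbolic_expr, branch_conditions, doc_excerpt):
        # Structured prompt asking a model to judge doc/code alignment.
        return (
            "You are auditing a DeFi smart contract.\n"
            f"Symbolic token balance change: {symbolic_expr}\n"
            f"Path conditions: {'; '.join(branch_conditions)}\n"
            f"Documentation claim: {doc_excerpt}\n"
            "Answer ALIGNED or INCONSISTENT, with a one-line reason."
        )

    def check_alignment(llm, symbolic_expr, branch_conditions, doc_excerpt):
        # llm: any callable str -> str; the model client is left abstract.
        answer = llm(build_alignment_prompt(symbolic_expr, branch_conditions, doc_excerpt))
        return answer.strip().upper().startswith("ALIGNED")

    # Example with a stub model; a real deployment would wire in an LLM client.
    stub = lambda prompt: "INCONSISTENT: fee is 0.5% in code but docs claim 0.3%."
    print(check_alignment(stub,
                          "balance[user] -= amount * 1005 // 1000",
                          ["amount > 0"],
                          "Users pay a 0.3% swap fee."))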

Cite as

Rundong Gan, Liyi Zhou, Le Wang, Kaihua Qin, and Xiaodong Lin. DeFiAligner: Leveraging Symbolic Analysis and Large Language Models for Inconsistency Detection in Decentralized Finance. In 6th Conference on Advances in Financial Technologies (AFT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 316, pp. 7:1-7:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{gan_et_al:LIPIcs.AFT.2024.7,
  author =	{Gan, Rundong and Zhou, Liyi and Wang, Le and Qin, Kaihua and Lin, Xiaodong},
  title =	{{DeFiAligner: Leveraging Symbolic Analysis and Large Language Models for Inconsistency Detection in Decentralized Finance}},
  booktitle =	{6th Conference on Advances in Financial Technologies (AFT 2024)},
  pages =	{7:1--7:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-345-4},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{316},
  editor =	{B\"{o}hme, Rainer and Kiffer, Lucianna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.AFT.2024.7},
  URN =		{urn:nbn:de:0030-drops-209431},
  doi =		{10.4230/LIPIcs.AFT.2024.7},
  annote =	{Keywords: Decentralized Finance Security, Large Language Models, Project Review, Symbolic Analysis, Smart Contracts}
}
Document
HOBBIT: Hashed OBject Based InTegrity

Authors: Matthias Bernad and Stefan Brunthaler

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
C vulnerabilities usually hold verbatim for C++ programs. The counterfeit-object-oriented programming attack demonstrated that this relation is asymmetric, i.e., it only applies to C++. The problem pinpointed by this COOP attack is that C++ does not validate the integrity of its objects. By injecting malicious objects with manipulated virtual function table pointers, attackers can hijack the control flow of programs. The software security community addressed the COOP problem in the years following its discovery, but together with the emergence of transient-execution attacks, such as Spectre, researchers also shifted their attention. We present Hobbit, a software-only solution to prevent COOP attacks by validating object integrity for virtual function pointer tables. Hobbit does not require any hardware-specific features, scales to multi-million lines of C++ source code, and our LLVM-based implementation offers a configurable performance impact between 121.63% and 2.80% on compute-intensive SPEC CPU C++ benchmarks. Hobbit’s security analysis indicates strong resistance to brute-force attacks and demonstrates additional benefits of using execute-only memory.
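
The guarded-dispatch idea can be mimicked in a few lines of Python (purely an expository analogy; Hobbit itself is an LLVM-level transformation over C++ vtable pointers): bind a keyed hash of the object's dispatch table to the object at construction and check it before every virtual call.

    import hmac, hashlib, os

    SECRET = os.urandom(32)  # stands in for key material kept in protected memory

    def table_tag(cls):
        # Keyed hash over the class's method table (the 'vtable' analogue).
        names = ",".join(sorted(k for k in vars(cls) if callable(vars(cls)[k])))
        return hmac.new(SECRET, f"{cls.__name__}:{names}".encode(), hashlib.sha256).digest()

    class Guarded:
        def __init__(self):
            self._tag = table_tag(type(self))  # stored alongside the object

        def call(self, name, *args):
            # Integrity check before dispatch: a counterfeit object whose tag
            # does not match its dispatch table is rejected.
            if not hmac.compare_digest(self._tag, table_tag(type(self))):
                raise RuntimeError("object integrity violation (COOP-style)")
            return getattr(self, name)(*args)

    class Account(Guarded):
        def withdraw(self, n):
            return f"withdraw {n}"

    a = Account()
    print(a.call("withdraw", 5))
    a._tag = b"\x00" * 32           # simulate an injected counterfeit object
    try:
        a.call("withdraw", 5)
    except RuntimeError as e:
        print("blocked:", e)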

Cite as

Matthias Bernad and Stefan Brunthaler. HOBBIT: Hashed OBject Based InTegrity. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 7:1-7:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bernad_et_al:LIPIcs.ECOOP.2024.7,
  author =	{Bernad, Matthias and Brunthaler, Stefan},
  title =	{{HOBBIT: Hashed OBject Based InTegrity}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{7:1--7:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.7},
  URN =		{urn:nbn:de:0030-drops-208566},
  doi =		{10.4230/LIPIcs.ECOOP.2024.7},
  annote =	{Keywords: software security, code-reuse attacks, language-based security, counterfeit-object-oriented programming, object integrity, compiler security}
}
Document
Simulation of the Abstract Tile Assembly Model Using Crisscross Slats

Authors: Phillip Drake, Daniel Hader, and Matthew J. Patitz

Published in: LIPIcs, Volume 314, 30th International Conference on DNA Computing and Molecular Programming (DNA 30) (2024)


Abstract
The abstract Tile Assembly Model (aTAM) provides an excellent foundation for the mathematical study of DNA-tile-based self-assembling systems, especially those wherein logic is embedded within the designs of the tiles so that they follow prescribed algorithms. While such algorithmic self-assembling systems are theoretically powerful, being computationally universal and capable of building complex shapes using information-theoretically optimal numbers of tiles, physical DNA-based implementations of these systems still encounter formidable error rates and undesired nucleation that hinder this theoretical potential. Slat-based self-assembly is a recent development wherein DNA forms long slats that combine together in 2 layers, rather than square tiles in a plane. In this approach, the length of the slats is key; while tiles typically only bind to 2 neighboring tiles at a time, slats may bind to dozens of other slats. This increased coordination between slats means that several mismatched slats must coincidentally meet in just the right way for errors to persist, unlike tiles, where only a few are required. Consequently, while still a novel technology, large slat-based DNA constructions have been successfully implemented in the lab with resilience to many tile-based construction problems. These improved error characteristics come at a cost, however, as slat-based systems are often more difficult to design and simulate than tile-based ones. Moreover, it has not been clear whether slats, with their larger sizes and different geometries, have the same theoretical capabilities as tiles. In this paper, we show that slats are capable of doing anything that tiles can, at least at scale. We demonstrate that any aTAM system may be converted to and simulated by an effectively equivalent system of slats. Furthermore, we show that these simulating slat systems can be made more efficiently, using shorter slats and a smaller scale factor, if the simulated tile system avoids certain uncommon growth patterns. Specifically, we consider 5 classes of aTAM systems with increasing complexity, from zig-zag systems which grow in a rigid pattern to the full class of all aTAM systems, and show how they may be converted to equivalent slat systems. We show that the simplest class may be simulated by slats at only a 2c × 2c scale, where c is the freely chosen coordination number of the slats, and further show that the full class of aTAM systems can be simulated at only a 5c × 5c scale. These results prove that slats have the full theoretical power of aTAM tiles while also providing constructions that are compact enough for potential DNA-based implementations of slat systems that are both capable of powerful algorithmic self-assembly and possessed of the strong error resilience of slats.
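
For readers unfamiliar with the aTAM, here is a minimal temperature-τ simulator sketch in Python (a standard textbook-style toy, not the paper's slat construction): a tile may attach at an empty site if the glue strengths it shares with already-placed neighbours sum to at least τ.

    # A tile type maps each side N, E, S, W to a glue (label, strength) or None.
    DIRS = {"N": (0, 1), "E": (1, 0), "S": (0, -1), "W": (-1, 0)}
    OPP = {"N": "S", "E": "W", "S": "N", "W": "E"}

    def binding_strength(tile, pos, assembly):
        # Sum of strengths of glues matching already-placed neighbours.
        total = 0
        for d, (dx, dy) in DIRS.items():
            nb = assembly.get((pos[0] + dx, pos[1] + dy))
            if nb is None:
                continue
            g, h = tile.get(d), nb.get(OPP[d])
            if g and h and g == h:   # glues match on label and strength
                total += g[1]
        return total

    def grow(assembly, tile_types, tau, steps=100):
        # Repeatedly attach any tile that binds with strength >= tau.
        for _ in range(steps):
            frontier = {(x + dx, y + dy)
                        for (x, y) in assembly for dx, dy in DIRS.values()} - set(assembly)
            attached = False
            for pos in sorted(frontier):
                for t in tile_types:
                    if binding_strength(t, pos, assembly) >= tau:
                        assembly[pos] = t
                        attached = True
                        break
                if attached:
                    break
            if not attached:
                return assembly
        return assembly

    seed = {"N": ("g", 2), "E": ("g", 2), "S": None, "W": None}
    filler = {"N": ("g", 2), "E": ("g", 2), "S": ("g", 2), "W": ("g", 2)}
    assembly = grow({(0, 0): seed}, [filler], tau=2, steps=50)
    print(len(assembly), "tiles placed")

The paper's contribution is to replace each such tile attachment with a gadget of crisscross slats at 2c × 2c up to 5c × 5c scale, so that the slat system's growth faithfully simulates the tile system's.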

Cite as

Phillip Drake, Daniel Hader, and Matthew J. Patitz. Simulation of the Abstract Tile Assembly Model Using Crisscross Slats. In 30th International Conference on DNA Computing and Molecular Programming (DNA 30). Leibniz International Proceedings in Informatics (LIPIcs), Volume 314, pp. 3:1-3:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{drake_et_al:LIPIcs.DNA.30.3,
  author =	{Drake, Phillip and Hader, Daniel and Patitz, Matthew J.},
  title =	{{Simulation of the Abstract Tile Assembly Model Using Crisscross Slats}},
  booktitle =	{30th International Conference on DNA Computing and Molecular Programming (DNA 30)},
  pages =	{3:1--3:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-344-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{314},
  editor =	{Seki, Shinnosuke and Stewart, Jaimie Marie},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.DNA.30.3},
  URN =		{urn:nbn:de:0030-drops-209315},
  doi =		{10.4230/LIPIcs.DNA.30.3},
  annote =	{Keywords: DNA origami, tile-assembly, self-assembly, aTAM, kinetic modeling, computational modeling}
}
Document
Learning Lagrangian Multipliers for the Travelling Salesman Problem

Authors: Augustin Parjadis, Quentin Cappart, Bistra Dilkina, Aaron Ferber, and Louis-Martin Rousseau

Published in: LIPIcs, Volume 307, 30th International Conference on Principles and Practice of Constraint Programming (CP 2024)


Abstract
Lagrangian relaxation is a versatile mathematical technique employed to relax constraints in an optimization problem, enabling the generation of dual bounds to prove the optimality of feasible solutions and the design of efficient propagators in constraint programming (such as the weighted circuit constraint). However, the conventional process of deriving Lagrangian multipliers (e.g., using subgradient methods) is often computationally intensive, limiting its practicality for large-scale or time-sensitive problems. To address this challenge, we propose an innovative unsupervised learning approach that harnesses the capabilities of graph neural networks to exploit the problem structure, aiming to generate accurate Lagrangian multipliers efficiently. We apply this technique to the well-known Held-Karp Lagrangian relaxation for the traveling salesman problem. The core idea is to predict accurate Lagrangian multipliers and to employ them as a warm start for generating Held-Karp relaxation bounds. These bounds are subsequently utilized to enhance the filtering process carried out by branch-and-bound algorithms. In contrast to much of the existing literature, which primarily focuses on finding feasible solutions, our approach operates on the dual side, demonstrating that learning can also accelerate the proof of optimality. We conduct experiments across various distributions of the metric traveling salesman problem, considering instances with up to 200 cities. The results illustrate that our approach can improve the filtering level of the weighted circuit global constraint, reduce the optimality gap by a factor of two for instances left unsolved at the timeout, and reduce the execution time for solved instances by 10%.
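
The classical subgradient baseline that the learned multipliers warm-start looks roughly like this in Python (a compact sketch of the standard Held-Karp 1-tree bound for symmetric costs; step sizes and iteration counts are illustrative):

    import math

    def mst_cost_and_degrees(cost, nodes):
        # Prim's algorithm on the complete graph over `nodes`.
        nodes = list(nodes)
        in_tree = {nodes[0]}
        deg = {v: 0 for v in nodes}
        best = {v: (cost[nodes[0]][v], nodes[0]) for v in nodes[1:]}
        total = 0.0
        while len(in_tree) < len(nodes):
            v = min((u for u in best if u not in in_tree), key=lambda u: best[u][0])
            c, parent = best[v]
            total += c
            deg[v] += 1
            deg[parent] += 1
            in_tree.add(v)
            for u in best:
                if u not in in_tree and cost[v][u] < best[u][0]:
                    best[u] = (cost[v][u], v)
        return total, deg

    def held_karp_bound(cost, iters=200, step=2.0):
        # Lagrangian dual of the degree-2 constraints: L(pi) = min 1-tree cost
        # under modified costs c_ij + pi_i + pi_j, minus 2 * sum(pi). Any pi
        # yields a valid lower bound; subgradient ascent improves it.
        nodes = list(cost)
        pi = {v: 0.0 for v in nodes}
        best_bound = -math.inf
        for k in range(iters):
            mod = {u: {v: cost[u][v] + pi[u] + pi[v] for v in cost[u]} for u in nodes}
            mst, deg = mst_cost_and_degrees(mod, nodes[1:])
            # 1-tree: MST on nodes[1:] plus the two cheapest edges at nodes[0].
            two = sorted(nodes[1:], key=lambda v: mod[nodes[0]][v])[:2]
            deg[nodes[0]] = 2
            for v in two:
                deg[v] += 1
            bound = mst + sum(mod[nodes[0]][v] for v in two) - 2 * sum(pi.values())
            best_bound = max(best_bound, bound)
            for v in nodes:
                pi[v] += step / (k + 1) * (deg[v] - 2)   # push degrees toward 2
        return best_bound

    pts = [(0, 0), (0, 3), (4, 0), (4, 3), (2, 5)]
    cost = {i: {j: math.dist(pts[i], pts[j]) for j in range(len(pts)) if j != i}
            for i in range(len(pts))}
    print(round(held_karp_bound(cost), 2))   # a lower bound on the optimal tour

The paper's GNN predicts good initial multipliers pi directly from the instance, replacing many of these subgradient iterations before the bound is handed to the propagator.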

Cite as

Augustin Parjadis, Quentin Cappart, Bistra Dilkina, Aaron Ferber, and Louis-Martin Rousseau. Learning Lagrangian Multipliers for the Travelling Salesman Problem. In 30th International Conference on Principles and Practice of Constraint Programming (CP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 307, pp. 22:1-22:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{parjadis_et_al:LIPIcs.CP.2024.22,
  author =	{Parjadis, Augustin and Cappart, Quentin and Dilkina, Bistra and Ferber, Aaron and Rousseau, Louis-Martin},
  title =	{{Learning Lagrangian Multipliers for the Travelling Salesman Problem}},
  booktitle =	{30th International Conference on Principles and Practice of Constraint Programming (CP 2024)},
  pages =	{22:1--22:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-336-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{307},
  editor =	{Shaw, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2024.22},
  URN =		{urn:nbn:de:0030-drops-207076},
  doi =		{10.4230/LIPIcs.CP.2024.22},
  annote =	{Keywords: Lagrangian relaxation, unsupervised learning, graph neural network}
}
Document
Memoization on Shared Subtrees Accelerates Computations on Genealogical Forests

Authors: Lukas Hübner and Alexandros Stamatakis

Published in: LIPIcs, Volume 312, 24th International Workshop on Algorithms in Bioinformatics (WABI 2024)


Abstract
The field of population genetics attempts to advance our understanding of evolutionary processes. It has applications, for example, in medical research, wildlife conservation, and, in conjunction with recent advances in ancient DNA sequencing technology, studying human migration patterns over the past few thousand years. The basic toolbox of population genetics includes genealogical trees, which describe the shared evolutionary history among individuals of the same species. They are calculated on the basis of genetic variations. However, in recombining organisms, a single tree is insufficient to describe the evolutionary history of the whole genome. Instead, a collection of correlated trees can be used, where each describes the evolutionary history of a consecutive region of the genome. The current state-of-the-art data structure, tree sequences, compresses these genealogical trees via edit operations when moving from one tree to the next along the genome instead of storing the full, often redundant, description for each tree. We propose a new data structure, genealogical forests, which compresses the set of genealogical trees into a DAG. In this DAG, identical subtrees that are shared across the input trees are encoded only once, thereby allowing for straightforward memoization of intermediate results. Additionally, we provide a C++ implementation of our proposed data structure, called gfkit, which is 2.1 to 11.2 (median 4.0) times faster than the state-of-the-art tool on empirical and simulated datasets at computing important population genetics statistics such as the Allele Frequency Spectrum, Patterson’s f, the Fixation Index, Tajima’s D, pairwise Lowest Common Ancestors, and others. On Lowest Common Ancestor queries with more than two samples as input, gfkit scales asymptotically better than the state-of-the-art, and is thus up to 990 times faster. In conclusion, our proposed data structure compresses genealogical trees by storing shared subtrees only once, thereby enabling straightforward memoization of intermediate results, yielding a substantial runtime reduction and a potentially more intuitive data representation over the state-of-the-art. Our improvements will boost the development of novel analyses and models in the field of population genetics and increase scalability to ever-growing genomic datasets.
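
The memoization trick is the familiar hash-consing pattern. A toy Python sketch (ours, far simpler than gfkit's DAG encoding): give identical subtrees one canonical id, then compute a statistic once per id rather than once per occurrence.

    # Trees as nested tuples: a leaf is a sample name, an internal node a
    # tuple of children. Identical subtrees shared across trees get one id.
    interned = {}          # canonical structure -> id
    children_of = {}       # id -> tuple of child ids (empty for leaves)

    def intern(tree):
        if isinstance(tree, tuple):
            key = ("node",) + tuple(intern(c) for c in tree)
        else:
            key = ("leaf", tree)
        if key not in interned:
            nid = len(interned)
            interned[key] = nid
            children_of[nid] = key[1:] if key[0] == "node" else ()
        return interned[key]

    memo = {}
    def num_samples(nid):
        # Samples below a node; computed once per *shared* subtree.
        if nid not in memo:
            kids = children_of[nid]
            memo[nid] = 1 if not kids else sum(num_samples(k) for k in kids)
        return memo[nid]

    # Two correlated genealogical trees sharing the subtree (("a","b"),"c"):
    t1 = intern(((("a", "b"), "c"), "d"))
    t2 = intern(((("a", "b"), "c"), ("d", "e")))
    print(num_samples(t1), num_samples(t2))       # 4 5
    print("unique nodes stored:", len(interned))  # fewer than in the two trees

Statistics such as the Allele Frequency Spectrum decompose over subtrees in the same way, which is why sharing subtrees across the whole forest pays off.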

Cite as

Lukas Hübner and Alexandros Stamatakis. Memoization on Shared Subtrees Accelerates Computations on Genealogical Forests. In 24th International Workshop on Algorithms in Bioinformatics (WABI 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 312, pp. 5:1-5:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{hubner_et_al:LIPIcs.WABI.2024.5,
  author =	{H\"{u}bner, Lukas and Stamatakis, Alexandros},
  title =	{{Memoization on Shared Subtrees Accelerates Computations on Genealogical Forests}},
  booktitle =	{24th International Workshop on Algorithms in Bioinformatics (WABI 2024)},
  pages =	{5:1--5:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-340-9},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{312},
  editor =	{Pissis, Solon P. and Sung, Wing-Kin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.WABI.2024.5},
  URN =		{urn:nbn:de:0030-drops-206499},
  doi =		{10.4230/LIPIcs.WABI.2024.5},
  annote =	{Keywords: bioinformatics, population genetics, algorithms}
}
Document
Pure-DP Aggregation in the Shuffle Model: Error-Optimal and Communication-Efficient

Authors: Badih Ghazi, Ravi Kumar, and Pasin Manurangsi

Published in: LIPIcs, Volume 304, 5th Conference on Information-Theoretic Cryptography (ITC 2024)


Abstract
We obtain a new protocol for binary counting in the ε-DP_shuffle model with error O(1/ε) and expected communication Õ((log n)/ε) messages per user. Previous protocols incur either an error of O(1/ε^1.5) with O_ε(log n) messages per user (Ghazi et al., ITC 2020) or an error of O(1/ε) with O_ε(n²) messages per user (Cheu and Yan, TPDP 2022). Using the new protocol, we obtain improved ε-DP_shuffle protocols for real summation and histograms.
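
For context on why O(1/ε) error is remarkable, here is the classic local-model baseline in Python (our sketch of plain randomized response, not the paper's shuffle protocol): each user randomizes their own bit, and the debiased sum has error on the order of √n/ε, growing with the population.

    import math, random

    def randomized_response(bit, eps):
        # Each user keeps their bit with prob e^eps/(e^eps+1): eps-DP locally.
        keep = math.exp(eps) / (math.exp(eps) + 1)
        return bit if random.random() < keep else 1 - bit

    def estimate_count(reports, eps):
        # Debias: E[report] = (2p-1)*bit + (1-p), with p = e^eps/(e^eps+1).
        p = math.exp(eps) / (math.exp(eps) + 1)
        return (sum(reports) - len(reports) * (1 - p)) / (2 * p - 1)

    random.seed(1)
    eps = 1.0
    bits = [1] * 30_000 + [0] * 70_000
    reports = [randomized_response(b, eps) for b in bits]
    est = estimate_count(reports, eps)
    print(round(est), "abs error:", round(abs(est - 30_000)))
    # Local error scales like sqrt(n)/eps; the shuffle protocol above gets
    # O(1/eps) error, independent of n, with ~(log n)/eps messages per user.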

Cite as

Badih Ghazi, Ravi Kumar, and Pasin Manurangsi. Pure-DP Aggregation in the Shuffle Model: Error-Optimal and Communication-Efficient. In 5th Conference on Information-Theoretic Cryptography (ITC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 304, pp. 4:1-4:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{ghazi_et_al:LIPIcs.ITC.2024.4,
  author =	{Ghazi, Badih and Kumar, Ravi and Manurangsi, Pasin},
  title =	{{Pure-DP Aggregation in the Shuffle Model: Error-Optimal and Communication-Efficient}},
  booktitle =	{5th Conference on Information-Theoretic Cryptography (ITC 2024)},
  pages =	{4:1--4:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-333-1},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{304},
  editor =	{Aggarwal, Divesh},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITC.2024.4},
  URN =		{urn:nbn:de:0030-drops-205127},
  doi =		{10.4230/LIPIcs.ITC.2024.4},
  annote =	{Keywords: Differential Privacy, Shuffle Model, Aggregation, Pure Differential Privacy}
}
Document
Track A: Algorithms, Complexity and Games
Non-Linear Paging

Authors: Ilan Doron-Arad and Joseph (Seffi) Naor

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
We formulate and study non-linear paging - a broad model of online paging where the size of subsets of pages is determined by a monotone non-linear set function of the pages. This model captures the well-studied classic weighted paging and generalized paging problems, and also submodular and supermodular paging, studied here for the first time, that have a range of applications from virtual memory to machine learning. Unlike classic paging, the cache threshold parameter k does not yield good competitive ratios for non-linear paging. Instead, we introduce a novel parameter 𝓁 that generalizes the notion of cache size to the non-linear setting. We obtain a tight deterministic 𝓁-competitive algorithm for general non-linear paging and a lower bound showing that no randomized algorithm is o(log²𝓁)-competitive. Our algorithm is based on a new generic LP for the problem that captures both submodular and supermodular paging, in contrast to LPs used for submodular cover settings. Finally, we focus on the supermodular paging problem, which is a variant of online set cover and online submodular cover, where sets are repeatedly requested to be removed from the cover. We obtain polylogarithmic lower and upper bounds and an offline approximation algorithm.
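
A toy Python sketch of the model (our illustration; the paper's 𝓁-competitive algorithm is LP-based and more subtle): cache feasibility is governed by a monotone set function f, and a naive policy evicts in LRU order until the requested page fits. A linear f recovers weighted paging; a coverage-style f gives a submodular instance.

    def weighted_size(cache, weight):
        # Linear f: total weight of cached pages (classic weighted paging).
        return sum(weight[p] for p in cache)

    def coverage_size(cache, groups):
        # Submodular f: number of distinct resource groups touched.
        return len({groups[p] for p in cache})

    def serve(requests, f, budget):
        # On a fault, evict least-recently-used pages until the page fits.
        cache, faults = [], 0   # list kept in LRU order, oldest first
        for page in requests:
            if page in cache:
                cache.remove(page)   # refresh recency
                cache.append(page)
                continue
            faults += 1
            while cache and f(cache + [page]) > budget:
                cache.pop(0)         # evict LRU
            if f(cache + [page]) <= budget:
                cache.append(page)
        return faults

    w = {"a": 1, "b": 2, "c": 2, "d": 3}
    reqs = list("abcdabcd")
    print("weighted faults:", serve(reqs, lambda c: weighted_size(c, w), budget=4))
    groups = {"a": 0, "b": 0, "c": 1, "d": 2}
    print("submodular faults:", serve(reqs, lambda c: coverage_size(c, groups), budget=2))

The parameter 𝓁 in the paper plays the role that the budget plays here, generalizing cache size so that competitive analysis is meaningful for non-linear f.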

Cite as

Ilan Doron-Arad and Joseph (Seffi) Naor. Non-Linear Paging. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 57:1-57:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{doronarad_et_al:LIPIcs.ICALP.2024.57,
  author =	{Doron-Arad, Ilan and Naor, Joseph (Seffi)},
  title =	{{Non-Linear Paging}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{57:1--57:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.57},
  URN =		{urn:nbn:de:0030-drops-202000},
  doi =		{10.4230/LIPIcs.ICALP.2024.57},
  annote =	{Keywords: paging, competitive analysis, non-linear paging, submodular and supermodular functions}
}
Document
Survey
Semantic Web: Past, Present, and Future

Authors: Ansgar Scherp, Gerd Groener, Petr Škoda, Katja Hose, and Maria-Esther Vidal

Published in: Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 1 (2024): Special Issue on Trends in Graph Data and Knowledge - Part 2


Abstract
Ever since the vision was formulated, the Semantic Web has inspired many generations of innovations. Semantic technologies have been used to share vast amounts of information on the Web, enhance them with semantics to give them meaning, and enable inference and reasoning on them. Throughout the years, semantic technologies, and in particular knowledge graphs, have been used in search engines, data integration, enterprise settings, and machine learning. In this paper, we recap the classical concepts and foundations of the Semantic Web as well as modern and recent concepts and applications, building upon these foundations. The classical topics we cover include knowledge representation, creating and validating knowledge on the Web, reasoning and linking, and distributed querying. We enhance this classical view of the so-called "Semantic Web Layer Cake" with an update of recent concepts that include provenance, security and trust, as well as a discussion of practical impacts from industry-led contributions. We conclude with an outlook on the future directions of the Semantic Web. This is a living document. If you would like to contribute, please contact the first author and visit: https://github.com/ascherp/semantic-web-primer

Cite as

Ansgar Scherp, Gerd Groener, Petr Škoda, Katja Hose, and Maria-Esther Vidal. Semantic Web: Past, Present, and Future. In Special Issue on Trends in Graph Data and Knowledge - Part 2. Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 1, pp. 3:1-3:37, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@Article{scherp_et_al:TGDK.2.1.3,
  author =	{Scherp, Ansgar and Groener, Gerd and \v{S}koda, Petr and Hose, Katja and Vidal, Maria-Esther},
  title =	{{Semantic Web: Past, Present, and Future}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{3:1--3:37},
  ISSN =	{2942-7517},
  year =	{2024},
  volume =	{2},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.2.1.3},
  URN =		{urn:nbn:de:0030-drops-198607},
  doi =		{10.4230/TGDK.2.1.3},
  annote =	{Keywords: Linked Open Data, Semantic Web Graphs, Knowledge Graphs}
}
Document
Necessary Conditions in Multi-Server Differential Privacy

Authors: Albert Cheu and Chao Yan

Published in: LIPIcs, Volume 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)


Abstract
We consider protocols where users communicate with multiple servers to perform a computation on the users' data. An adversary exerts semi-honest control over many of the parties, but its view is differentially private with respect to honest users. Prior work described protocols that required multiple rounds of interaction or offered privacy against a computationally bounded adversary. Our work presents limitations of non-interactive protocols that offer privacy against unbounded adversaries. We prove that these protocols require exponentially more samples than centrally private counterparts to solve some learning, testing, and estimation tasks. This means that sample efficiency demands interactivity, computational differential privacy, or both.

Cite as

Albert Cheu and Chao Yan. Necessary Conditions in Multi-Server Differential Privacy. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 36:1-36:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{cheu_et_al:LIPIcs.ITCS.2023.36,
  author =	{Cheu, Albert and Yan, Chao},
  title =	{{Necessary Conditions in Multi-Server Differential Privacy}},
  booktitle =	{14th Innovations in Theoretical Computer Science Conference (ITCS 2023)},
  pages =	{36:1--36:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-263-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{251},
  editor =	{Tauman Kalai, Yael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2023.36},
  URN =		{urn:nbn:de:0030-drops-175395},
  doi =		{10.4230/LIPIcs.ITCS.2023.36},
  annote =	{Keywords: Differential Privacy, Parity Learning, Multi-server}
}