450 Search Results for "Sánchez, César"


Document
Parameterized Approximation: Algorithms and Hardness (Dagstuhl Seminar 23291)

Authors: Karthik C. S., Parinya Chalermsook, Joachim Spoerhase, Meirav Zehavi, and Martin Herold

Published in: Dagstuhl Reports, Volume 13, Issue 7 (2024)


Abstract
Parameterization and approximation are two established approaches to coping with intractability in combinatorial optimization. In this Dagstuhl Seminar, we studied parameterized approximation as a relatively new algorithmic paradigm that combines these two popular research areas. In particular, we analyzed the solution quality (approximation ratio) as well as the running time of an algorithm in terms of a parameter that captures the "complexity" of a problem instance. While the field has grown and yielded some promising results, our understanding of the area is rather ad hoc compared to our knowledge of approximation or parameterized algorithms alone. In this seminar, we brought together researchers from both communities in order to bridge this gap by facilitating the exchange and unification of scientific knowledge.

Cite as

Karthik C. S., Parinya Chalermsook, Joachim Spoerhase, Meirav Zehavi, and Martin Herold. Parameterized Approximation: Algorithms and Hardness (Dagstuhl Seminar 23291). In Dagstuhl Reports, Volume 13, Issue 7, pp. 96-107, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@Article{c.s._et_al:DagRep.13.7.96,
  author =	{C. S., Karthik and Chalermsook, Parinya and Spoerhase, Joachim and Zehavi, Meirav and Herold, Martin},
  title =	{{Parameterized Approximation: Algorithms and Hardness (Dagstuhl Seminar 23291)}},
  pages =	{96--107},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2024},
  volume =	{13},
  number =	{7},
  editor =	{C. S., Karthik and Chalermsook, Parinya and Spoerhase, Joachim and Zehavi, Meirav and Herold, Martin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.13.7.96},
  URN =		{urn:nbn:de:0030-drops-197764},
  doi =		{10.4230/DagRep.13.7.96},
  annote =	{Keywords: approximation algorithms, Hardness of approximation, Parameterized algorithms}
}
Document
Invited Talk
High-Level Synthesis Developments in the Context of European Space Technology Research (Invited Talk)

Authors: Fabrizio Ferrandi, Michele Fiorito, Claudio Barone, Giovanni Gozzi, and Serena Curzel

Published in: OASIcs, Volume 116, 15th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 13th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2024)


Abstract
European efforts to boost competitiveness in the space services sector promote the research and development of advanced software and hardware solutions. The EU-funded HERMES project contributes to the effort by qualifying radiation-hardened, high-performance programmable microprocessors and developing a software ecosystem that facilitates the deployment of complex applications on such platforms. The main objectives of the project include reaching a technology readiness level of 6 (i.e., validated and demonstrated in a relevant environment) for the rad-hard NG-ULTRA FPGA with its ceramic hermetic package CGA 1752, developed within projects of the European Space Agency, the French National Centre for Space Studies, and the European Union. An equally important share of the project is dedicated to the development and validation of tools that support multicore software programming and FPGA acceleration. The HERMES project selected the Bambu High-Level Synthesis tool for its development ecosystem, integrating the capability to translate C/C++ code into Verilog/VHDL. In HERMES, Bambu has been and will be extended to support new FPGA targets, architectural models, model-based design, and input applications. The increased performance offered by FPGAs is thus also made available to software developers who do not have hardware design expertise.
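
For context, a high-level synthesis flow of this kind takes ordinary C/C++ functions and emits RTL. The sketch below is a hypothetical example (the function, array names, and sizes are illustrative and not taken from the HERMES project) of the kind of statically bounded loop kernel a tool such as Bambu can translate into Verilog/VHDL:

    #include <stdio.h>

    /* A small FIR-like kernel: fixed loop bounds and statically sized arrays
     * keep the mapping to a hardware module straightforward for an HLS tool. */
    #define N 64
    #define TAPS 4

    void fir(const int in[N], const int coeff[TAPS], int out[N]) {
        for (int i = TAPS - 1; i < N; i++) {
            int acc = 0;
            for (int t = 0; t < TAPS; t++) {
                acc += coeff[t] * in[i - t];   /* multiply-accumulate */
            }
            out[i] = acc;
        }
    }

    int main(void) {
        int in[N], out[N] = {0};
        int coeff[TAPS] = {1, 2, 2, 1};
        for (int i = 0; i < N; i++) in[i] = i;
        fir(in, coeff, out);    /* run in software; an HLS tool would synthesize fir() */
        printf("out[10] = %d\n", out[10]);
        return 0;
    }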

Cite as

Fabrizio Ferrandi, Michele Fiorito, Claudio Barone, Giovanni Gozzi, and Serena Curzel. High-Level Synthesis Developments in the Context of European Space Technology Research (Invited Talk). In 15th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 13th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2024). Open Access Series in Informatics (OASIcs), Volume 116, pp. 1:1-1:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{ferrandi_et_al:OASIcs.PARMA-DITAM.2024.1,
  author =	{Ferrandi, Fabrizio and Fiorito, Michele and Barone, Claudio and Gozzi, Giovanni and Curzel, Serena},
  title =	{{High-Level Synthesis Developments in the Context of European Space Technology Research}},
  booktitle =	{15th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 13th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2024)},
  pages =	{1:1--1:12},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-307-2},
  ISSN =	{2190-6807},
  year =	{2024},
  volume =	{116},
  editor =	{Bispo, Jo\~{a}o and Xydis, Sotirios and Curzel, Serena and Sousa, Lu{\'\i}s Miguel},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.PARMA-DITAM.2024.1},
  URN =		{urn:nbn:de:0030-drops-196951},
  doi =		{10.4230/OASIcs.PARMA-DITAM.2024.1},
  annote =	{Keywords: High-Level Synthesis, rad-hard FPGAs}
}
Document
Accelerator-Driven Data Arrangement to Minimize Transformers Run-Time on Multi-Core Architectures

Authors: Alireza Amirshahi, Giovanni Ansaloni, and David Atienza

Published in: OASIcs, Volume 116, 15th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 13th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2024)


Abstract
The increasing complexity of transformer models in artificial intelligence increases their computational cost, memory usage, and energy consumption. Hardware acceleration tackles the ensuing challenges by designing processors and accelerators tailored for transformer models, supporting their computation hotspots with high efficiency. However, memory bandwidth can hinder improvements in hardware accelerators. Against this backdrop, in this paper we propose a novel memory arrangement strategy, governed by the hardware accelerator’s kernel size, which effectively minimizes off-chip data access. This arrangement is particularly beneficial for end-to-end transformer model inference, where most of the computation is based on general matrix multiplication (GEMM) operations. Additionally, we address the overhead of non-GEMM operations in transformer models within the scope of this memory data arrangement. Our study explores the implementation and effectiveness of the proposed accelerator-driven data arrangement approach in both single- and multi-core systems. Our evaluation demonstrates that our approach can achieve up to a 2.7x speedup when executing inference with state-of-the-art transformers.
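
As a rough, hypothetical illustration of the general idea (the tile size, names, and layout below are placeholders, not the paper's actual arrangement), reordering a row-major matrix into kernel-sized tiles lets each accelerator invocation fetch one contiguous block instead of several strided rows:

    #include <stdio.h>

    /* Repack a ROWS x COLS row-major matrix into contiguous K x K tiles.
     * K stands in for the accelerator's kernel size; it is assumed here
     * that K divides both ROWS and COLS. */
    #define ROWS 8
    #define COLS 8
    #define K 4

    void tile_layout(const float *src, float *dst) {
        int idx = 0;
        for (int ti = 0; ti < ROWS; ti += K)         /* tile row */
            for (int tj = 0; tj < COLS; tj += K)     /* tile column */
                for (int i = 0; i < K; i++)          /* rows inside the tile */
                    for (int j = 0; j < K; j++)
                        dst[idx++] = src[(ti + i) * COLS + (tj + j)];
    }

    int main(void) {
        float src[ROWS * COLS], dst[ROWS * COLS];
        for (int i = 0; i < ROWS * COLS; i++) src[i] = (float)i;
        tile_layout(src, dst);
        /* the first 16 entries of dst are now the top-left 4 x 4 tile */
        printf("%.0f %.0f %.0f\n", dst[0], dst[3], dst[4]);
        return 0;
    }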

Cite as

Alireza Amirshahi, Giovanni Ansaloni, and David Atienza. Accelerator-Driven Data Arrangement to Minimize Transformers Run-Time on Multi-Core Architectures. In 15th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 13th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2024). Open Access Series in Informatics (OASIcs), Volume 116, pp. 2:1-2:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{amirshahi_et_al:OASIcs.PARMA-DITAM.2024.2,
  author =	{Amirshahi, Alireza and Ansaloni, Giovanni and Atienza, David},
  title =	{{Accelerator-Driven Data Arrangement to Minimize Transformers Run-Time on Multi-Core Architectures}},
  booktitle =	{15th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 13th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2024)},
  pages =	{2:1--2:13},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-307-2},
  ISSN =	{2190-6807},
  year =	{2024},
  volume =	{116},
  editor =	{Bispo, Jo\~{a}o and Xydis, Sotirios and Curzel, Serena and Sousa, Lu{\'\i}s Miguel},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.PARMA-DITAM.2024.2},
  URN =		{urn:nbn:de:0030-drops-196960},
  doi =		{10.4230/OASIcs.PARMA-DITAM.2024.2},
  annote =	{Keywords: Memory arrangement, Data layout, Hardware accelerators, Transformer models, Multi-core, System simulation}
}
Document
Precision Tuning the Rust Memory-Safe Programming Language

Authors: Gabriele Magnani, Lev Denisov, Daniele Cattaneo, Giovanni Agosta, and Stefano Cherubin

Published in: OASIcs, Volume 116, 15th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 13th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2024)


Abstract
Precision tuning is an increasingly common approach for exploiting the trade-off between accuracy and energy efficiency or speed. Its effectiveness is particularly strong whenever the maximum performance must be extracted from a computing system, such as an embedded platform. In these contexts, current engineering practice is dominated by memory-unsafe programming languages such as C and C++. However, the unsafe nature of these languages has come under great scrutiny as it leads to significant software vulnerabilities. Hence, safer programming languages that prevent memory-related bugs by design have been proposed as a replacement. Among these safer programming languages, one of the most popular is Rust. In this work we adapt a state-of-the-art precision tuning tool, TAFFO, to operate on Rust code. By porting the PolyBench/C benchmark suite to Rust, we show that the effectiveness of precision tuning is not affected by the use of a safer programming language, and moreover that the safety properties of the language can be preserved. Specifically, using TAFFO and Rust we achieved up to a 15× speedup over the base Rust code, thanks to the use of precision tuning.
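
At its core, precision tuning replaces floating-point arithmetic with narrower fixed-point arithmetic wherever the value ranges allow it. The fragment below is a minimal, language-agnostic sketch of that transformation, written in C purely for brevity; it is not TAFFO's actual rewrite, and the paper's contribution is performing this kind of tuning on Rust code while preserving the language's safety guarantees:

    #include <stdio.h>
    #include <stdint.h>

    /* Q16.16 fixed point: a value v is represented by the integer v * 2^16 (truncated). */
    #define FRAC_BITS 16
    typedef int32_t fix_t;

    static fix_t  to_fix(double v) { return (fix_t)(v * (1 << FRAC_BITS)); }
    static double to_dbl(fix_t v)  { return (double)v / (1 << FRAC_BITS); }
    static fix_t  fix_mul(fix_t a, fix_t b) {
        /* widen to 64 bits, then divide (portable for negative operands) */
        return (fix_t)(((int64_t)a * b) / (1 << FRAC_BITS));
    }

    int main(void) {
        double x[3] = {0.5, 1.25, -0.75}, y[3] = {2.0, 0.5, 4.0};

        /* original floating-point dot product */
        double acc_f = 0.0;
        for (int i = 0; i < 3; i++) acc_f += x[i] * y[i];

        /* "tuned" version: the same computation in Q16.16 fixed point */
        fix_t acc_q = 0;
        for (int i = 0; i < 3; i++) acc_q += fix_mul(to_fix(x[i]), to_fix(y[i]));

        printf("float: %f  fixed: %f\n", acc_f, to_dbl(acc_q));
        return 0;
    }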

Cite as

Gabriele Magnani, Lev Denisov, Daniele Cattaneo, Giovanni Agosta, and Stefano Cherubin. Precision Tuning the Rust Memory-Safe Programming Language. In 15th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 13th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2024). Open Access Series in Informatics (OASIcs), Volume 116, pp. 4:1-4:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{magnani_et_al:OASIcs.PARMA-DITAM.2024.4,
  author =	{Magnani, Gabriele and Denisov, Lev and Cattaneo, Daniele and Agosta, Giovanni and Cherubin, Stefano},
  title =	{{Precision Tuning the Rust Memory-Safe Programming Language}},
  booktitle =	{15th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 13th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2024)},
  pages =	{4:1--4:12},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-307-2},
  ISSN =	{2190-6807},
  year =	{2024},
  volume =	{116},
  editor =	{Bispo, Jo\~{a}o and Xydis, Sotirios and Curzel, Serena and Sousa, Lu{\'\i}s Miguel},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.PARMA-DITAM.2024.4},
  URN =		{urn:nbn:de:0030-drops-196989},
  doi =		{10.4230/OASIcs.PARMA-DITAM.2024.4},
  annote =	{Keywords: Approximate Computing, Memory Safety, Precision Tuning}
}
Document
Embedded Multi-Core Code Generation with Cross-Layer Parallelization

Authors: Oliver Oey, Michael Huebner, Timo Stripf, and Juergen Becker

Published in: OASIcs, Volume 116, 15th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 13th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2024)


Abstract
In this paper, we present a method for optimizing C code for embedded multi-core systems using cross-layer parallelization. The method has two phases: first, the algorithm is developed without any optimization for the target platform; second, the code is optimized and parallelized across four defined layers (algorithm, code, task, and data) for efficient execution on the target hardware. Each layer focuses on selected hardware characteristics. By using an iterative approach, individual kernels and composite algorithms can be adapted very well to execution on the hardware without further adaptation of the algorithm itself. The realization of this cross-layer parallelization consists of algorithm recognition, code transformations, task distribution, and insertion of synchronization and communication statements. The method is evaluated first on a common kernel and then on a sample image processing algorithm to showcase the benefits of the approach. Compared to other methods that rely on only two or three of these layers, an additional performance gain of 20 to 30% can be achieved.
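
The paper's code generator inserts task distribution, synchronization, and communication statements itself; as a very rough stand-in for the task and data layers only, the snippet below distributes the iterations of an elementwise kernel across cores using OpenMP (which is not the paper's toolflow; compile with -fopenmp):

    #include <stdio.h>
    #include <omp.h>

    /* Data-layer parallelization in miniature: loop iterations are split
     * across the available cores. The algorithm and code layers described
     * in the paper operate before this step. */
    #define N 1000000

    int main(void) {
        static double a[N], b[N];
        for (int i = 0; i < N; i++) a[i] = (double)i;

        #pragma omp parallel for        /* distribute iterations over cores */
        for (int i = 0; i < N; i++)
            b[i] = 2.0 * a[i] + 1.0;

        printf("b[42] = %.1f (threads available: %d)\n", b[42], omp_get_max_threads());
        return 0;
    }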

Cite as

Oliver Oey, Michael Huebner, Timo Stripf, and Juergen Becker. Embedded Multi-Core Code Generation with Cross-Layer Parallelization. In 15th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 13th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2024). Open Access Series in Informatics (OASIcs), Volume 116, pp. 5:1-5:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{oey_et_al:OASIcs.PARMA-DITAM.2024.5,
  author =	{Oey, Oliver and Huebner, Michael and Stripf, Timo and Becker, Juergen},
  title =	{{Embedded Multi-Core Code Generation with Cross-Layer Parallelization}},
  booktitle =	{15th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 13th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2024)},
  pages =	{5:1--5:13},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-307-2},
  ISSN =	{2190-6807},
  year =	{2024},
  volume =	{116},
  editor =	{Bispo, Jo\~{a}o and Xydis, Sotirios and Curzel, Serena and Sousa, Lu{\'\i}s Miguel},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.PARMA-DITAM.2024.5},
  URN =		{urn:nbn:de:0030-drops-196990},
  doi =		{10.4230/OASIcs.PARMA-DITAM.2024.5},
  annote =	{Keywords: Parallelization, multi-core Processors, model-based Development, Code Generation}
}
Document
Confluence of Conditional Rewriting Modulo

Authors: Salvador Lucas

Published in: LIPIcs, Volume 288, 32nd EACSL Annual Conference on Computer Science Logic (CSL 2024)


Abstract
We investigate confluence of rewriting with Equational Generalized Term Rewriting Systems R, consisting of Horn clauses, some of them defining conditional equations s = t ⇐ c and rewriting rules 𝓁 → r ⇐ c. In both cases, c is a sequence of atoms, possibly defined by using additional Horn clauses. Such systems include Equational Term Rewriting Systems and (join, oriented, and semi-equational) Conditional Term Rewriting Systems. A set of equations E defines an equivalence =_E and quotient set T(F,X)/=_E of terms, where reductions s →_{R/E}t using rules in R occur. For such systems, we obtain a finite set of conditional pairs π, which can be viewed as logical sentences, to prove and disprove confluence of →_{R/E} by (dis)proving joinability of such conditional pairs π.
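
For readers less familiar with the setting, the following is a small, standard example (not taken from the paper) of a conditional rewrite system of the kind covered, with the condition predicate defined by Horn clauses:

    % Illustrative only: conditional rules for min over naturals built from 0 and s,
    % with the atom x <= y defined by two Horn clauses.
    \[
      \min(x,y) \to x \Leftarrow x \le y, \qquad
      \min(x,y) \to y \Leftarrow y \le x,
    \]
    \[
      0 \le y, \qquad
      \mathsf{s}(x) \le \mathsf{s}(y) \Leftarrow x \le y.
    \]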

Cite as

Salvador Lucas. Confluence of Conditional Rewriting Modulo. In 32nd EACSL Annual Conference on Computer Science Logic (CSL 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 288, pp. 37:1-37:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{lucas:LIPIcs.CSL.2024.37,
  author =	{Lucas, Salvador},
  title =	{{Confluence of Conditional Rewriting Modulo}},
  booktitle =	{32nd EACSL Annual Conference on Computer Science Logic (CSL 2024)},
  pages =	{37:1--37:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-310-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{288},
  editor =	{Murano, Aniello and Silva, Alexandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CSL.2024.37},
  URN =		{urn:nbn:de:0030-drops-196809},
  doi =		{10.4230/LIPIcs.CSL.2024.37},
  annote =	{Keywords: Conditional rewriting, Confluence, Program analysis}
}
Document
Universal Matrix Sparsifiers and Fast Deterministic Algorithms for Linear Algebra

Authors: Rajarshi Bhattacharjee, Gregory Dexter, Cameron Musco, Archan Ray, Sushant Sachdeva, and David P. Woodruff

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
Let S ∈ ℝ^{n × n} be any matrix satisfying ‖1-S‖₂ ≤ εn, where 1 is the all ones matrix and ‖⋅‖₂ is the spectral norm. It is well-known that there exists S with just O(n/ε²) non-zero entries achieving this guarantee: we can let 𝐒 be the scaled adjacency matrix of a Ramanujan expander graph. We show that, beyond giving a sparse approximation to the all ones matrix, 𝐒 yields a universal sparsifier for any positive semidefinite (PSD) matrix. In particular, for any PSD A ∈ ℝ^{n×n} which is normalized so that its entries are bounded in magnitude by 1, we show that ‖A-A∘S‖₂ ≤ ε n, where ∘ denotes the entrywise (Hadamard) product. Our techniques also yield universal sparsifiers for non-PSD matrices. In this case, we show that if S satisfies ‖1-S‖₂ ≤ (ε²n)/(c log²(1/ε)) for some sufficiently large constant c, then ‖A-A∘S‖₂ ≤ ε⋅max(n,‖A‖₁), where ‖A‖₁ is the nuclear norm. Again letting 𝐒 be a scaled Ramanujan graph adjacency matrix, this yields a sparsifier with Õ(n/ε⁴) entries. We prove that the above universal sparsification bounds for both PSD and non-PSD matrices are tight up to logarithmic factors. Since 𝐀∘𝐒 can be constructed deterministically without reading all of A, our result for PSD matrices derandomizes and improves upon established results for randomized matrix sparsification, which require sampling a random subset of O((n log n)/ε²) entries and only give an approximation to any fixed A with high probability. We further show that any randomized algorithm must read at least Ω(n/ε²) entries to spectrally approximate general A to error εn, thus proving that these existing randomized algorithms are optimal up to logarithmic factors. We leverage our deterministic sparsification results to give the first deterministic algorithms for several problems, including singular value and singular vector approximation and positive semidefiniteness testing, that run in faster than matrix multiplication time. This partially addresses a significant gap between randomized and deterministic algorithms for fast linear algebraic computation. Finally, if A ∈ {-1,0,1}^{n × n} is PSD, we show that a spectral approximation à with ‖A-Ã‖₂ ≤ ε n can be obtained by deterministically reading Õ(n/ε) entries of A. This improves the 1/ε dependence of our result for general PSD matrices by a quadratic factor and is information-theoretically optimal up to a logarithmic factor.
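
For quick reference, the two universal-sparsification guarantees stated above can be written as follows (the entrywise normalization |A_{ij}| ≤ 1 is assumed in both cases):

    \[
      \|\mathbf{1} - S\|_2 \le \varepsilon n
      \;\Longrightarrow\; \|A - A \circ S\|_2 \le \varepsilon n
      \qquad \text{for every PSD } A,
    \]
    \[
      \|\mathbf{1} - S\|_2 \le \frac{\varepsilon^2 n}{c \log^2(1/\varepsilon)}
      \;\Longrightarrow\; \|A - A \circ S\|_2 \le \varepsilon \cdot \max\{n, \|A\|_1\}
      \qquad \text{for arbitrary } A,
    \]
    % where \circ is the Hadamard product and \|A\|_1 the nuclear norm.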

Cite as

Rajarshi Bhattacharjee, Gregory Dexter, Cameron Musco, Archan Ray, Sushant Sachdeva, and David P. Woodruff. Universal Matrix Sparsifiers and Fast Deterministic Algorithms for Linear Algebra. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 13:1-13:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{bhattacharjee_et_al:LIPIcs.ITCS.2024.13,
  author =	{Bhattacharjee, Rajarshi and Dexter, Gregory and Musco, Cameron and Ray, Archan and Sachdeva, Sushant and Woodruff, David P.},
  title =	{{Universal Matrix Sparsifiers and Fast Deterministic Algorithms for Linear Algebra}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{13:1--13:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.13},
  URN =		{urn:nbn:de:0030-drops-195415},
  doi =		{10.4230/LIPIcs.ITCS.2024.13},
  annote =	{Keywords: sublinear algorithms, randomized linear algebra, spectral sparsification, expanders}
}
Document
One-Way Functions vs. TFNP: Simpler and Improved

Authors: Lukáš Folwarczný, Mika Göös, Pavel Hubáček, Gilbert Maystre, and Weiqiang Yuan

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
Simon (1998) proved that it is impossible to construct collision-resistant hash functions from one-way functions using a black-box reduction. It is conjectured more generally that one-way functions do not imply, via a black-box reduction, the hardness of any total NP search problem (collision-resistant hash functions being just one such example). We make progress towards this conjecture by ruling out a large class of "single-query" reductions. In particular, we improve over the prior work of Hubáček et al. (2020) in two ways: our result is established via a novel simpler combinatorial technique and applies to a broader class of semi black-box reductions.

Cite as

Lukáš Folwarczný, Mika Göös, Pavel Hubáček, Gilbert Maystre, and Weiqiang Yuan. One-Way Functions vs. TFNP: Simpler and Improved. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 50:1-50:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{folwarczny_et_al:LIPIcs.ITCS.2024.50,
  author =	{Folwarczn\'{y}, Luk\'{a}\v{s} and G\"{o}\"{o}s, Mika and Hub\'{a}\v{c}ek, Pavel and Maystre, Gilbert and Yuan, Weiqiang},
  title =	{{One-Way Functions vs. TFNP: Simpler and Improved}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{50:1--50:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.50},
  URN =		{urn:nbn:de:0030-drops-195788},
  doi =		{10.4230/LIPIcs.ITCS.2024.50},
  annote =	{Keywords: TFNP, One-Way Functions, Oracle, Separation, Black-Box}
}
Document
Budget-Feasible Mechanism Design: Simpler, Better Mechanisms and General Payment Constraints

Authors: Rian Neogi, Kanstantsin Pashkovich, and Chaitanya Swamy

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
In budget-feasible mechanism design, a buyer wishes to procure a set of items of maximum value from self-interested rational players. We are given an item-set U and a nonnegative valuation function v: 2^U ↦ ℝ_+. Each item e is held by a player who incurs a private cost c_e for supplying item e. The goal is to devise a truthful mechanism such that the total payment made to the players is at most some given budget B, and the value of the set returned is a good approximation to OPT := max {v(S): c(S) ≤ B, S ⊆ U}. We call such a mechanism a budget-feasible mechanism. More generally, there may be additional side constraints requiring that the set returned lies in some downwards-monotone family ℐ ⊆ 2^U. Budget-feasible mechanisms have been widely studied, but there are still significant gaps in our understanding of these mechanisms, both in terms of what kind of oracle access to the valuation is required to obtain good approximation ratios, and the best approximation ratio that can be achieved. We substantially advance the state of the art of budget-feasible mechanisms by devising mechanisms that are simpler, and also better, both in terms of requiring weaker oracle access and the approximation factors they obtain. For XOS valuations, we devise the first polytime O(1)-approximation budget-feasible mechanism using only demand oracles, and also significantly improve the approximation factor. For subadditive valuations, we give the first explicit construction of an O(1)-approximation mechanism, where previously only an existential result was known. We also introduce a fairly rich class of mechanism-design problems that we dub using the umbrella term generalized budget-feasible mechanism design, which allow one to capture payment constraints that are much more nuanced than a single constraint on the total payment doled out. We demonstrate the versatility of our ideas by showing that our constructions can be adapted to yield approximation guarantees in such general settings as well. A prominent insight to emerge from our work is the usefulness of a property called nobossiness, which allows us to nicely decouple the truthfulness-and-approximation requirements from the budget-feasibility requirement. Some of our constructions can be viewed as reductions showing that an O(1)-approximation budget-feasible mechanism can be obtained provided we have a (randomized) truthful mechanism satisfying nobossiness that returns a (random) feasible set having (expected) value Ω(OPT).

Cite as

Rian Neogi, Kanstantsin Pashkovich, and Chaitanya Swamy. Budget-Feasible Mechanism Design: Simpler, Better Mechanisms and General Payment Constraints. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 84:1-84:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{neogi_et_al:LIPIcs.ITCS.2024.84,
  author =	{Neogi, Rian and Pashkovich, Kanstantsin and Swamy, Chaitanya},
  title =	{{Budget-Feasible Mechanism Design: Simpler, Better Mechanisms and General Payment Constraints}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{84:1--84:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.84},
  URN =		{urn:nbn:de:0030-drops-196128},
  doi =		{10.4230/LIPIcs.ITCS.2024.84},
  annote =	{Keywords: Algorithmic mechanism design, Approximation algorithms, Budget-feasible mechanisms}
}
Document
On Generalized Corners and Matrix Multiplication

Authors: Kevin Pratt

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
Suppose that S ⊆ [n]² contains no three points of the form (x,y), (x,y+δ), (x+δ,y'), where δ ≠ 0. How big can S be? Trivially, n ≤ |S| ≤ n². Slight improvements on these bounds are obtained from Shkredov’s upper bound for the corners problem [Shkredov, 2006], which shows that |S| ≤ O(n²/(log log n)^c) for some small c > 0, and a construction due to Petrov [Fedor Petrov, 2023], which shows that |S| ≥ Ω(n log n/√{log log n}). Could it be that for all ε > 0, |S| ≤ O(n^{1+ε})? We show that if so, this would rule out obtaining ω = 2 using a large family of abelian groups in the group-theoretic framework of [Cohn and Umans, 2003; Cohn et al., 2005] (which is known to capture the best bounds on ω to date), for which no barriers are currently known. Furthermore, an upper bound of O(n^{4/3 - ε}) for any fixed ε > 0 would rule out a conjectured approach to obtain ω = 2 of [Cohn et al., 2005]. Along the way, we encounter several problems that have much stronger constraints and that would already have these implications.

Cite as

Kevin Pratt. On Generalized Corners and Matrix Multiplication. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 89:1-89:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{pratt:LIPIcs.ITCS.2024.89,
  author =	{Pratt, Kevin},
  title =	{{On Generalized Corners and Matrix Multiplication}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{89:1--89:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.89},
  URN =		{urn:nbn:de:0030-drops-196174},
  doi =		{10.4230/LIPIcs.ITCS.2024.89},
  annote =	{Keywords: Algebraic computation, fast matrix multiplication, additive combinatorics}
}
Document
Stretching Demi-Bits and Nondeterministic-Secure Pseudorandomness

Authors: Iddo Tzameret and Lu-Ming Zhang

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
We develop the theory of cryptographic nondeterministic-secure pseudorandomness beyond the point reached by Rudich’s original work [S. Rudich, 1997], and apply it to draw new consequences in average-case complexity and proof complexity. Specifically, we show the following: Demi-bit stretch: Super-bits and demi-bits are variants of cryptographic pseudorandom generators which are secure against nondeterministic statistical tests [S. Rudich, 1997]. They were introduced to rule out certain approaches to proving strong complexity lower bounds beyond the limitations set out by the Natural Proofs barrier of Razborov and Rudich [A. A. Razborov and S. Rudich, 1997]. Whether demi-bits are stretchable at all had been an open problem since their introduction. We answer this question affirmatively by showing that: every demi-bit b:{0,1}ⁿ → {0,1}^{n+1} can be stretched into sublinear many demi-bits b':{0,1}ⁿ → {0,1}^{n+n^{c}}, for every constant 0 < c < 1. Average-case hardness: Using work by Santhanam [Rahul Santhanam, 2020], we apply our results to obtain new average-case Kolmogorov complexity results: we show that K^{poly}[n-O(1)] is zero-error average-case hard against NP/poly machines iff K^{poly}[n-o(n)] is, where for a function s(n):ℕ → ℕ, K^{poly}[s(n)] denotes the languages of all strings x ∈ {0,1}ⁿ for which there are (fixed) polytime Turing machines of description-length at most s(n) that output x. Characterising super-bits by nondeterministic unpredictability: In the deterministic setting, Yao [Yao, 1982] proved that super-polynomial hardness of pseudorandom generators is equivalent to ("next-bit") unpredictability. Unpredictability roughly means that given any strict prefix of a random string, it is infeasible to predict the next bit. We initiate the study of unpredictability beyond the deterministic setting (in the cryptographic regime), and characterise the nondeterministic hardness of generators from an unpredictability perspective. Specifically, we propose four stronger notions of unpredictability: NP/poly-unpredictability, coNP/poly-unpredictability, ∩-unpredictability and ∪-unpredictability, and show that super-polynomial nondeterministic hardness of generators lies between ∩-unpredictability and ∪-unpredictability. Characterising super-bits by nondeterministic hard-core predicates: We introduce a nondeterministic variant of hard-core predicates, called super-core predicates. We show that the existence of a super-bit is equivalent to the existence of a super-core of some non-shrinking function. This serves as an analogue of the equivalence between the existence of a strong pseudorandom generator and the existence of a hard-core of some one-way function [Goldreich and Levin, 1989; Håstad et al., 1999], and provides a first alternative characterisation of super-bits. We also prove that a certain class of functions, which may have hard-cores, cannot possess any super-core.

Cite as

Iddo Tzameret and Lu-Ming Zhang. Stretching Demi-Bits and Nondeterministic-Secure Pseudorandomness. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 95:1-95:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{tzameret_et_al:LIPIcs.ITCS.2024.95,
  author =	{Tzameret, Iddo and Zhang, Lu-Ming},
  title =	{{Stretching Demi-Bits and Nondeterministic-Secure Pseudorandomness}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{95:1--95:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.95},
  URN =		{urn:nbn:de:0030-drops-196234},
  doi =		{10.4230/LIPIcs.ITCS.2024.95},
  annote =	{Keywords: Pseudorandomness, Cryptography, Natural Proofs, Nondeterminism, Lower bounds}
}
Document
Advanced Composition Theorems for Differential Obliviousness

Authors: Mingxun Zhou, Mengshi Zhao, T-H. Hubert Chan, and Elaine Shi

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
Differential obliviousness (DO) is a privacy notion which mandates that the access patterns of a program satisfy differential privacy. Earlier works have shown that in numerous applications, differential obliviousness allows us to circumvent fundamental barriers pertaining to fully oblivious algorithms, resulting in asymptotic (and sometimes even polynomial) performance improvements. Although DO has been applied to various contexts, including the design of algorithms, data structures, and protocols, its compositional properties were not explored until the recent work of Zhou et al. (Eurocrypt'23). Specifically, Zhou et al. showed that the original DO notion is not composable. They then proposed a refinement of DO called neighbor-preserving differential obliviousness (NPDO), and proved a basic composition for NPDO. In Zhou et al.’s basic composition theorem for NPDO, the privacy loss is linear in k for k-fold composition. In comparison, for standard differential privacy, we can enjoy roughly √k loss for k-fold composition by applying the well-known advanced composition theorem given an appropriate parameter range. Therefore, a natural question left open by their work is whether we can also prove an analogous advanced composition for NPDO. In this paper, we answer this question affirmatively. As a key step in proving an advanced composition theorem for NPDO, we define a more operational notion called symmetric NPDO which we prove to be equivalent to NPDO. Using symmetric NPDO as a stepping stone, we also show how to generalize NPDO to more general notions of divergence, resulting in Rényi-NPDO, zero-concentrated-NPDO, Gaussian-NPDO, and g-NPDO notions. We also prove composition theorems for these generalized notions of NPDO.
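
For context, the classical advanced composition theorem for standard differential privacy (the √k behaviour referred to above, not the paper's NPDO statement) says that the k-fold adaptive composition of (ε, δ)-DP mechanisms is (ε', kδ + δ')-DP with

    \[
      \varepsilon' \;=\; \sqrt{2k \ln(1/\delta')}\,\varepsilon \;+\; k\,\varepsilon\,(e^{\varepsilon} - 1),
    \]

which is roughly √k · ε when ε is small.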

Cite as

Mingxun Zhou, Mengshi Zhao, T-H. Hubert Chan, and Elaine Shi. Advanced Composition Theorems for Differential Obliviousness. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 103:1-103:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{zhou_et_al:LIPIcs.ITCS.2024.103,
  author =	{Zhou, Mingxun and Zhao, Mengshi and Chan, T-H. Hubert and Shi, Elaine},
  title =	{{Advanced Composition Theorems for Differential Obliviousness}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{103:1--103:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.103},
  URN =		{urn:nbn:de:0030-drops-196315},
  doi =		{10.4230/LIPIcs.ITCS.2024.103},
  annote =	{Keywords: Differential Privacy, Oblivious Algorithms}
}
Document
Difference Determines the Degree: Structural Kernelizations of Component Order Connectivity

Authors: Sriram Bhyravarapu, Satyabrata Jana, Saket Saurabh, and Roohani Sharma

Published in: LIPIcs, Volume 285, 18th International Symposium on Parameterized and Exact Computation (IPEC 2023)


Abstract
We consider the question of polynomial kernelization of a generalization of the classical Vertex Cover problem parameterized by a parameter that is provably smaller than the solution size. In particular, we focus on the c-Component Order Connectivity problem (c-COC), where, given an undirected graph G and a non-negative integer t, the objective is to test whether there exists a set S of size at most t such that every component of G-S contains at most c vertices. Such a set S is called a c-coc set. It is known that c-COC admits a kernel with O(ct) vertices. Observe that for c = 1, this corresponds to the Vertex Cover problem. We study the c-Component Order Connectivity problem parameterized by the size of a d-coc set (c-COC/d-COC), where c,d ∈ ℕ with c ≤ d. In particular, the input is an undirected graph G, a positive integer t and a set M of at most k vertices of G, such that the size of each connected component in G - M is at most d. The question is to find a set S of vertices of size at most t, such that the size of each connected component in G - S is at most c. In this paper, we give a kernel for c-COC/d-COC with O(k^{d-c+1}) vertices and O(k^{d-c+2}) edges. Our result shows that the difference between d and c, and not their absolute values, determines the exact degree of the polynomial in the kernel size. When c = d = 1, the c-COC/d-COC problem is exactly the Vertex Cover problem parameterized by the solution size, which has a kernel with O(k) vertices and O(k²) edges, and this is asymptotically tight [Dell & Melkebeek, JACM 2014]. We also show that the dependence on d-c in the exponent of the kernel size cannot be avoided under reasonable complexity assumptions.
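
To make the kernel bound concrete, the degree of the polynomial depends only on the gap d-c; instantiating the stated bound gives, for example:

    \[
      \text{vertices: } O\!\bigl(k^{\,d-c+1}\bigr), \qquad
      \text{edges: } O\!\bigl(k^{\,d-c+2}\bigr);
    \]
    \[
      c = d = 1 \;\Rightarrow\; O(k) \text{ vertices, } O(k^2) \text{ edges (Vertex Cover)}, \qquad
      c = 1,\ d = 2 \;\Rightarrow\; O(k^2) \text{ vertices, } O(k^3) \text{ edges.}
    \]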

Cite as

Sriram Bhyravarapu, Satyabrata Jana, Saket Saurabh, and Roohani Sharma. Difference Determines the Degree: Structural Kernelizations of Component Order Connectivity. In 18th International Symposium on Parameterized and Exact Computation (IPEC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 285, pp. 5:1-5:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{bhyravarapu_et_al:LIPIcs.IPEC.2023.5,
  author =	{Bhyravarapu, Sriram and Jana, Satyabrata and Saurabh, Saket and Sharma, Roohani},
  title =	{{Difference Determines the Degree: Structural Kernelizations of Component Order Connectivity}},
  booktitle =	{18th International Symposium on Parameterized and Exact Computation (IPEC 2023)},
  pages =	{5:1--5:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-305-8},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{285},
  editor =	{Misra, Neeldhara and Wahlstr\"{o}m, Magnus},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2023.5},
  URN =		{urn:nbn:de:0030-drops-194241},
  doi =		{10.4230/LIPIcs.IPEC.2023.5},
  annote =	{Keywords: Kernelization, Component Order Connectivity, Vertex Cover, Structural Parameterizations}
}
Document
Treewidth Is NP-Complete on Cubic Graphs

Authors: Hans L. Bodlaender, Édouard Bonnet, Lars Jaffke, Dušan Knop, Paloma T. Lima, Martin Milanič, Sebastian Ordyniak, Sukanya Pandey, and Ondřej Suchý

Published in: LIPIcs, Volume 285, 18th International Symposium on Parameterized and Exact Computation (IPEC 2023)


Abstract
In this paper, we show that Treewidth is NP-complete for cubic graphs, thereby improving the result by Bodlaender and Thilikos from 1997 that Treewidth is NP-complete on graphs with maximum degree at most 9. We add a new and simpler proof of the NP-completeness of treewidth, and show that Treewidth remains NP-complete on subcubic induced subgraphs of the infinite 3-dimensional grid.

Cite as

Hans L. Bodlaender, Édouard Bonnet, Lars Jaffke, Dušan Knop, Paloma T. Lima, Martin Milanič, Sebastian Ordyniak, Sukanya Pandey, and Ondřej Suchý. Treewidth Is NP-Complete on Cubic Graphs. In 18th International Symposium on Parameterized and Exact Computation (IPEC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 285, pp. 7:1-7:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{bodlaender_et_al:LIPIcs.IPEC.2023.7,
  author =	{Bodlaender, Hans L. and Bonnet, \'{E}douard and Jaffke, Lars and Knop, Du\v{s}an and Lima, Paloma T. and Milani\v{c}, Martin and Ordyniak, Sebastian and Pandey, Sukanya and Such\'{y}, Ond\v{r}ej},
  title =	{{Treewidth Is NP-Complete on Cubic Graphs}},
  booktitle =	{18th International Symposium on Parameterized and Exact Computation (IPEC 2023)},
  pages =	{7:1--7:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-305-8},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{285},
  editor =	{Misra, Neeldhara and Wahlstr\"{o}m, Magnus},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2023.7},
  URN =		{urn:nbn:de:0030-drops-194263},
  doi =		{10.4230/LIPIcs.IPEC.2023.7},
  annote =	{Keywords: Treewidth, cubic graphs, degree, NP-completeness}
}
Document
Minimum Separator Reconfiguration

Authors: Guilherme C. M. Gomes, Clément Legrand-Duchesne, Reem Mahmoud, Amer E. Mouawad, Yoshio Okamoto, Vinicius F. dos Santos, and Tom C. van der Zanden

Published in: LIPIcs, Volume 285, 18th International Symposium on Parameterized and Exact Computation (IPEC 2023)


Abstract
We study the problem of reconfiguring one minimum s-t-separator A into another minimum s-t-separator B in some n-vertex graph G containing two non-adjacent vertices s and t. We consider several variants of the problem as we focus on both the token sliding and token jumping models. Our first contribution is a polynomial-time algorithm that computes (if one exists) a minimum-length sequence of slides transforming A into B. We additionally establish that the existence of a sequence of jumps (which need not be of minimum length) can be decided in polynomial time (by an algorithm that also outputs a witnessing sequence when one exists). In contrast, and somewhat surprisingly, we show that deciding if a sequence of at most 𝓁 jumps can transform A into B is an NP-complete problem. To complement this negative result, we investigate the parameterized complexity of what we believe to be the two most natural parameterized counterparts of the latter problem; in particular, we study the problem of computing a minimum-length sequence of jumps when parameterized by the size k of the minimum s-t-separators and when parameterized by the number 𝓁 of jumps. For the first parameterization, we show that the problem is fixed-parameter tractable, but does not admit a polynomial kernel unless NP ⊆ coNP/poly. We complete the picture by designing a kernel with 𝒪(𝓁²) vertices and edges for the length 𝓁 of the sequence as a parameter.

Cite as

Guilherme C. M. Gomes, Clément Legrand-Duchesne, Reem Mahmoud, Amer E. Mouawad, Yoshio Okamoto, Vinicius F. dos Santos, and Tom C. van der Zanden. Minimum Separator Reconfiguration. In 18th International Symposium on Parameterized and Exact Computation (IPEC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 285, pp. 9:1-9:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{c.m.gomes_et_al:LIPIcs.IPEC.2023.9,
  author =	{C. M. Gomes, Guilherme and Legrand-Duchesne, Cl\'{e}ment and Mahmoud, Reem and Mouawad, Amer E. and Okamoto, Yoshio and F. dos Santos, Vinicius and C. van der Zanden, Tom},
  title =	{{Minimum Separator Reconfiguration}},
  booktitle =	{18th International Symposium on Parameterized and Exact Computation (IPEC 2023)},
  pages =	{9:1--9:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-305-8},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{285},
  editor =	{Misra, Neeldhara and Wahlstr\"{o}m, Magnus},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2023.9},
  URN =		{urn:nbn:de:0030-drops-194288},
  doi =		{10.4230/LIPIcs.IPEC.2023.9},
  annote =	{Keywords: minimum separators, combinatorial reconfiguration, parameterized complexity, kernelization}
}
  • Refine by Author
  • 23 Oliveira, Bruno C. d. S.
  • 11 Gonçalo Oliveira, Hugo
  • 8 Huang, Xuejing
  • 7 C. S., Karthik
  • 7 Henriques, Pedro Rangel

  • Refine by Classification
  • 17 Mathematics of computing → Graph algorithms
  • 17 Theory of computation → Problems, reductions and completeness
  • 16 Theory of computation → Approximation algorithms analysis
  • 16 Theory of computation → Computational geometry
  • 16 Theory of computation → Design and analysis of algorithms

  • Refine by Keyword
  • 10 approximation algorithms
  • 10 parameterized complexity
  • 7 Approximation Algorithms
  • 6 Kernelization
  • 6 intersection types

  • Refine by Type
  • 450 document

  • Refine by Publication Year
  • 66 2023
  • 63 2019
  • 52 2021
  • 50 2020
  • 46 2022
