22 Search Results for "Li, Yao"


Document
A Technique for Hardness Amplification Against AC⁰

Authors: William M. Hoza

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
We study hardness amplification in the context of two well-known "moderate" average-case hardness results for AC⁰ circuits. First, we investigate the extent to which AC⁰ circuits of depth d can approximate AC⁰ circuits of some larger depth d + k. The case k = 1 is resolved by Håstad, Rossman, Servedio, and Tan’s celebrated average-case depth hierarchy theorem (JACM 2017). Our contribution is a significantly stronger correlation bound when k ≥ 3. Specifically, we show that there exists a linear-size AC⁰_{d + k} circuit h : {0, 1}ⁿ → {0, 1} such that for every AC⁰_d circuit g, either g has size exp(n^{Ω(1/d)}), or else g agrees with h on at most a (1/2 + ε)-fraction of inputs where ε = exp(-(1/d) ⋅ Ω(log n)^{k-1}). For comparison, Håstad, Rossman, Servedio, and Tan’s result has ε = n^{-Θ(1/d)}. Second, we consider the majority function. It is well known that the majority function is moderately hard for AC⁰ circuits (and stronger classes). Our contribution is a stronger correlation bound for the XOR of t copies of the n-bit majority function, denoted MAJ_n^{⊕ t}. We show that if g is an AC⁰_d circuit of size S, then g agrees with MAJ_n^{⊕ t} on at most a (1/2 + ε)-fraction of inputs, where ε = (O(log S)^{d - 1} / √n)^t. To prove these results, we develop a hardness amplification technique that is tailored to a specific type of circuit lower bound proof. In particular, one way to show that a function h is moderately hard for AC⁰ circuits is to (a) design some distribution over random restrictions or random projections, (b) show that AC⁰ circuits simplify to shallow decision trees under these restrictions/projections, and finally (c) show that after applying the restriction/projection, h is moderately hard for shallow decision trees with respect to an appropriate distribution. We show that (roughly speaking) if h can be proven to be moderately hard by a proof with that structure, then XORing multiple copies of h amplifies its hardness. Our analysis involves a new kind of XOR lemma for decision trees, which might be of independent interest.
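A rough intuition for why XORing copies amplifies hardness (a heuristic sketch, not the paper's argument): define the correlation of g with h as corr(g,h) = 2·Pr[g = h] − 1. If each copy were approximated independently, correlations would simply multiply:

\[
  \operatorname{corr}(g,h) \;=\; 2\Pr_{x}[g(x)=h(x)] - 1 \;=\; \mathbb{E}_{x}\big[(-1)^{g(x)+h(x)}\big],
\]
\[
  \operatorname{corr}\Big(\textstyle\bigoplus_{i=1}^{t} g_i,\; h^{\oplus t}\Big) \;=\; \prod_{i=1}^{t}\operatorname{corr}(g_i,h) \;\le\; \varepsilon^{t} \quad\text{when each } g_i \text{ reads only its own copy.}
\]

A single circuit g need not decompose across copies in this way, which is exactly why a genuine XOR lemma (here, the paper's new XOR lemma for decision trees) is required.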

Cite as

William M. Hoza. A Technique for Hardness Amplification Against AC⁰. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 1:1-1:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX
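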

@InProceedings{hoza:LIPIcs.CCC.2024.1,
  author =	{Hoza, William M.},
  title =	{{A Technique for Hardness Amplification Against AC⁰}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{1:1--1:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.1},
  URN =		{urn:nbn:de:0030-drops-203977},
  doi =		{10.4230/LIPIcs.CCC.2024.1},
  annote =	{Keywords: Bounded-depth circuits, average-case lower bounds, hardness amplification, XOR lemmas}
}
Document
A Strong Direct Sum Theorem for Distributional Query Complexity

Authors: Guy Blanc, Caleb Koch, Carmen Strassle, and Li-Yang Tan

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
Consider the expected query complexity of computing the k-fold direct product f^{⊗ k} of a function f to error ε with respect to a distribution μ^k. One strategy is to sequentially compute each of the k copies to error ε/k with respect to μ and apply the union bound. We prove a strong direct sum theorem showing that this naive strategy is essentially optimal. In particular, computing a direct product necessitates a blowup in both query complexity and error. Strong direct sum theorems contrast with results that only show a blowup in query complexity or error but not both. There has been a long line of such results for distributional query complexity, dating back to (Impagliazzo, Raz, Wigderson 1994) and (Nisan, Rudich, Saks 1994), but a strong direct sum theorem that holds for all functions in the standard query model had been elusive. A key idea in our work is the first use of the Hardcore Theorem (Impagliazzo 1995) in the context of query complexity. We prove a new resilience lemma that accompanies it, showing that the hardcore of f^{⊗k} is likely to remain dense under arbitrary partitions of the input space.
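The naive strategy is just a union bound; spelled out (with D^μ_ε(f) denoting the ε-error distributional query complexity of f under μ, notation assumed here for illustration):

\[
  \Pr_{x \sim \mu^{k}}\big[\text{some copy of } f \text{ is answered incorrectly}\big] \;\le\; \sum_{i=1}^{k} \frac{\varepsilon}{k} \;=\; \varepsilon, \qquad \text{at total cost } k \cdot \mathrm{D}^{\mu}_{\varepsilon/k}(f).
\]

The strong direct sum theorem says that this simultaneous blowup, a factor of k in queries and a factor of 1/k in the error parameter, is essentially unavoidable.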

Cite as

Guy Blanc, Caleb Koch, Carmen Strassle, and Li-Yang Tan. A Strong Direct Sum Theorem for Distributional Query Complexity. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 16:1-16:30, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{blanc_et_al:LIPIcs.CCC.2024.16,
  author =	{Blanc, Guy and Koch, Caleb and Strassle, Carmen and Tan, Li-Yang},
  title =	{{A Strong Direct Sum Theorem for Distributional Query Complexity}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{16:1--16:30},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.16},
  URN =		{urn:nbn:de:0030-drops-204123},
  doi =		{10.4230/LIPIcs.CCC.2024.16},
  annote =	{Keywords: Query complexity, direct product theorem, hardcore theorem}
}
Document
The Computational Advantage of MIP^∗ Vanishes in the Presence of Noise

Authors: Yangjing Dong, Honghao Fu, Anand Natarajan, Minglong Qin, Haochen Xu, and Penghui Yao

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
The class MIP^* of quantum multiprover interactive proof systems with entanglement is much more powerful than its classical counterpart MIP [Babai et al., 1991; Zhengfeng Ji et al., 2020; Zhengfeng Ji et al., 2020]: while MIP = NEXP, the quantum class MIP^* is equal to RE, a class including the halting problem. This is because the provers in MIP^* can share unbounded quantum entanglement. However, recent works [Qin and Yao, 2021; Qin and Yao, 2023] have shown that this advantage is significantly reduced if the provers' shared state contains noise. This paper attempts to exactly characterize the effect of noise on the computational power of quantum multiprover interactive proof systems. We investigate the quantum two-prover one-round interactive system MIP^*[poly,O(1)], where the verifier sends polynomially many bits to the provers and the provers send back constantly many bits. We show that noise completely destroys the computational advantage given by shared entanglement in this model. Specifically, we show that if the provers are allowed to share arbitrarily many EPR states, where each EPR state is affected by an arbitrarily small constant amount of noise, the resulting complexity class is equivalent to NEXP = MIP. This improves significantly on the previous best-known bound of NEEEXP (nondeterministic triply exponential time) [Qin and Yao, 2021]. We also show that this collapse in power is due to the noise, rather than the O(1) answer size, by showing that allowing for noiseless EPR states gives the class the full power of RE = MIP^*[poly, poly]. Along the way, we develop two technical tools of independent interest. First, we give a new, deterministic tester for the positivity of an exponentially large matrix, provided it has a low-degree Fourier decomposition in terms of Pauli matrices. Second, we develop a new invariance principle for smooth matrix functions that have bounded third-order Fréchet derivatives or are Lipschitz continuous.
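For concreteness, one standard way to model an EPR state affected by a constant amount of noise is a depolarizing channel applied to the pair (an illustrative instance; the paper's noise model may be stated more generally):

\[
  \rho_{p} \;=\; (1-p)\,|\Phi^{+}\rangle\langle\Phi^{+}| \;+\; p\,\frac{I_{4}}{4}, \qquad |\Phi^{+}\rangle = \tfrac{1}{\sqrt{2}}\big(|00\rangle + |11\rangle\big),
\]

with the provers sharing arbitrarily many independent copies of ρ_p for some constant p > 0.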

Cite as

Yangjing Dong, Honghao Fu, Anand Natarajan, Minglong Qin, Haochen Xu, and Penghui Yao. The Computational Advantage of MIP^∗ Vanishes in the Presence of Noise. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 30:1-30:71, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{dong_et_al:LIPIcs.CCC.2024.30,
  author =	{Dong, Yangjing and Fu, Honghao and Natarajan, Anand and Qin, Minglong and Xu, Haochen and Yao, Penghui},
  title =	{{The Computational Advantage of MIP^∗ Vanishes in the Presence of Noise}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{30:1--30:71},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.30},
  URN =		{urn:nbn:de:0030-drops-204263},
  doi =		{10.4230/LIPIcs.CCC.2024.30},
  annote =	{Keywords: Interactive proofs, Quantum complexity theory, Quantum entanglement, Fourier analysis, Matrix analysis, Invariance principle, Derandomization, PCP, Locally testable code, Positivity testing}
}
Document
Solving the Optimal Experiment Design Problem with Mixed-Integer Convex Methods

Authors: Deborah Hendrych, Mathieu Besançon, and Sebastian Pokutta

Published in: LIPIcs, Volume 301, 22nd International Symposium on Experimental Algorithms (SEA 2024)


Abstract
We tackle the Optimal Experiment Design Problem, which consists of choosing experiments to run or observations to select from a finite set to estimate the parameters of a system. The objective is to maximize some measure of information gained about the system from the observations, leading to a convex integer optimization problem. We leverage Boscia.jl, a recent algorithmic framework, which is based on a nonlinear branch-and-bound algorithm with node relaxations solved to approximate optimality using Frank-Wolfe algorithms. One particular advantage of the method is its efficient use of the polytope formed by the original constraints, which the method preserves, unlike alternative methods that rely on epigraph-based formulations. We assess our method against both generic and specialized convex mixed-integer approaches. Computational results highlight the performance of our proposed method, especially on large and challenging instances.
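As a rough illustration of the continuous subproblem involved (a minimal sketch in Python rather than the authors' Julia framework; Boscia.jl additionally handles integrality via branch-and-bound, and all names and constants below are made up for the example), here is Frank-Wolfe on the relaxed D-optimal design objective log det(Σ_i w_i a_i a_iᵀ):

import numpy as np

def logdet_grad(w, A):
    # Information matrix M(w) = sum_i w_i a_i a_i^T; the gradient of
    # log det M(w) with respect to w_i is a_i^T M(w)^{-1} a_i.
    M = A.T @ (w[:, None] * A)
    Minv = np.linalg.inv(M)
    return np.einsum("ij,jk,ik->i", A, Minv, A)

def frank_wolfe(A, budget, steps=200):
    m = A.shape[0]
    w = np.full(m, budget / m)            # strictly feasible start
    for t in range(steps):
        g = logdet_grad(w, A)
        v = np.zeros(m)
        v[np.argmax(g)] = budget          # linear maximizer over the scaled simplex
        gamma = 2.0 / (t + 3.0)           # damped step keeps every weight positive,
        w = (1 - gamma) * w + gamma * v   # so M(w) stays invertible
    return w

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 5))              # 30 candidate experiments in R^5
print(frank_wolfe(A, budget=5.0).round(3))

Rounding the resulting fractional weights does not give an optimal integer design in general, which is precisely the gap that the mixed-integer methods compared in the paper close.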

Cite as

Deborah Hendrych, Mathieu Besançon, and Sebastian Pokutta. Solving the Optimal Experiment Design Problem with Mixed-Integer Convex Methods. In 22nd International Symposium on Experimental Algorithms (SEA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 301, pp. 16:1-16:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{hendrych_et_al:LIPIcs.SEA.2024.16,
  author =	{Hendrych, Deborah and Besan\c{c}on, Mathieu and Pokutta, Sebastian},
  title =	{{Solving the Optimal Experiment Design Problem with Mixed-Integer Convex Methods}},
  booktitle =	{22nd International Symposium on Experimental Algorithms (SEA 2024)},
  pages =	{16:1--16:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-325-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{301},
  editor =	{Liberti, Leo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2024.16},
  URN =		{urn:nbn:de:0030-drops-203810},
  doi =		{10.4230/LIPIcs.SEA.2024.16},
  annote =	{Keywords: Mixed-Integer Non-Linear Optimization, Optimal Experiment Design, Frank-Wolfe, Boscia}
}
Document
Convex Relaxation for the Generalized Maximum-Entropy Sampling Problem

Authors: Gabriel Ponte, Marcia Fampa, and Jon Lee

Published in: LIPIcs, Volume 301, 22nd International Symposium on Experimental Algorithms (SEA 2024)


Abstract
The generalized maximum-entropy sampling problem (GMESP) is to select an order-s principal submatrix from an order-n covariance matrix, to maximize the product of its t greatest eigenvalues, 0 < t ≤ s < n. It is a problem that specializes to two fundamental problems in statistical design theory: (i) maximum-entropy sampling problem (MESP); (ii) binary D-optimality (D-Opt). In the general case, it is motivated by a selection problem in the context of PCA (principal component analysis). We introduce the first convex-optimization based relaxation for GMESP, study its behavior, compare it to an earlier spectral bound, and demonstrate its use in a branch-and-bound scheme. We find that such an approach is practical when s-t is very small.
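To make the objective concrete, here is a tiny brute-force reference in Python (exponential in n and purely illustrative; the paper's point is a convex relaxation plus branch-and-bound that avoids this enumeration):

import itertools
import numpy as np

def gmesp_bruteforce(C, s, t):
    # Maximize the product of the t greatest eigenvalues over all
    # order-s principal submatrices of the covariance matrix C.
    n = C.shape[0]
    best, best_S = -np.inf, None
    for S in itertools.combinations(range(n), s):
        eigs = np.linalg.eigvalsh(C[np.ix_(S, S)])  # ascending order
        val = float(np.prod(eigs[-t:]))             # product of t greatest
        if val > best:
            best, best_S = val, S
    return best, best_S

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 8))
C = np.cov(X, rowvar=False)            # order-8 sample covariance matrix
print(gmesp_bruteforce(C, s=4, t=2))   # per the abstract, t = s recovers MESP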

Cite as

Gabriel Ponte, Marcia Fampa, and Jon Lee. Convex Relaxation for the Generalized Maximum-Entropy Sampling Problem. In 22nd International Symposium on Experimental Algorithms (SEA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 301, pp. 25:1-25:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{ponte_et_al:LIPIcs.SEA.2024.25,
  author =	{Ponte, Gabriel and Fampa, Marcia and Lee, Jon},
  title =	{{Convex Relaxation for the Generalized Maximum-Entropy Sampling Problem}},
  booktitle =	{22nd International Symposium on Experimental Algorithms (SEA 2024)},
  pages =	{25:1--25:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-325-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{301},
  editor =	{Liberti, Leo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2024.25},
  URN =		{urn:nbn:de:0030-drops-203901},
  doi =		{10.4230/LIPIcs.SEA.2024.25},
  annote =	{Keywords: maximum-entropy sampling, D-optimality, convex relaxation, branch-and-bound, integer nonlinear optimization, principal component analysis}
}
Document
Improved Cut Strategy for Tensor Network Contraction Orders

Authors: Christoph Staudt, Mark Blacher, Julien Klaus, Farin Lippmann, and Joachim Giesen

Published in: LIPIcs, Volume 301, 22nd International Symposium on Experimental Algorithms (SEA 2024)


Abstract
In the field of quantum computing, simulating quantum systems on classical computers is crucial. Tensor networks are fundamental in simulating quantum systems. A tensor network is a collection of tensors that need to be contracted into a result tensor. Tensor contraction is a generalization of matrix multiplication to higher-order tensors. The contractions can be performed in different orders, and the order has a significant impact on the number of floating point operations (flops) needed to get the result tensor. It is known that finding an optimal contraction order is NP-hard. The current state-of-the-art approach for finding efficient contraction orders is to combine graph partitioning with a greedy strategy. Although heavily used in practice, the current approach ignores so-called free indices, chooses node weights without regard to previous computations, and requires numerous hyperparameters that need to be tuned at runtime. In this paper, we address these shortcomings by developing a novel graph cut strategy. The proposed modifications yield contraction orders that significantly reduce the number of flops in the tensor contractions compared to the current state of the art. Moreover, by removing the need for hyperparameter tuning at runtime, our approach converges to an efficient solution faster, which reduces the required optimization time by at least an order of magnitude.
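How much the order matters is visible already for three tensors; a minimal NumPy illustration (einsum_path merely reports flop estimates here, it is not the paper's graph cut method):

import numpy as np

# Chain of shapes (1 x 1000) · (1000 x 1) · (1 x 1000).
a = np.ones((1, 1000))
b = np.ones((1000, 1))
c = np.ones((1, 1000))

# Contracting (a·b) first costs about 2·10^3 scalar multiplications, while
# contracting (b·c) first materializes a 1000 x 1000 intermediate and costs
# about 2·10^6 -- a thousandfold difference from the order alone.
path, report = np.einsum_path("ij,jk,kl->il", a, b, c, optimize="optimal")
print(report)  # reports the flop estimate and the chosen contraction order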

Cite as

Christoph Staudt, Mark Blacher, Julien Klaus, Farin Lippmann, and Joachim Giesen. Improved Cut Strategy for Tensor Network Contraction Orders. In 22nd International Symposium on Experimental Algorithms (SEA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 301, pp. 27:1-27:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{staudt_et_al:LIPIcs.SEA.2024.27,
  author =	{Staudt, Christoph and Blacher, Mark and Klaus, Julien and Lippmann, Farin and Giesen, Joachim},
  title =	{{Improved Cut Strategy for Tensor Network Contraction Orders}},
  booktitle =	{22nd International Symposium on Experimental Algorithms (SEA 2024)},
  pages =	{27:1--27:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-325-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{301},
  editor =	{Liberti, Leo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2024.27},
  URN =		{urn:nbn:de:0030-drops-203924},
  doi =		{10.4230/LIPIcs.SEA.2024.27},
  annote =	{Keywords: tensor network, contraction order, graph partitioning, quantum simulation}
}
Document
Invited Talk
Abstraction-Based Decision Making for Statistical Properties (Invited Talk)

Authors: Filip Cano, Thomas A. Henzinger, Bettina Könighofer, Konstantin Kueffner, and Kaushik Mallik

Published in: LIPIcs, Volume 299, 9th International Conference on Formal Structures for Computation and Deduction (FSCD 2024)


Abstract
Sequential decision-making in probabilistic environments is a fundamental problem with many applications in AI and economics. In this paper, we present an algorithm for synthesizing sequential decision-making agents that optimize statistical properties such as maximum and average response times. In the general setting of sequential decision-making, the environment is modeled as a random process that generates inputs. The agent responds to each input, aiming to maximize rewards and minimize costs within a specified time horizon. The corresponding synthesis problem is known to be PSPACE-hard. We consider the special case where the input distribution, reward, and cost depend on input-output statistics specified by counter automata. For such problems, this paper presents the first PTIME synthesis algorithms. We introduce the notion of statistical abstraction, which clusters statistically indistinguishable input-output sequences into equivalence classes. This abstraction allows for a dynamic programming algorithm whose complexity grows polynomially with the considered horizon, making the statistical case exponentially more efficient than the general case. We evaluate our algorithm on three different application scenarios of a client-server protocol, where multiple clients compete via bidding to gain access to the service offered by the server. The synthesized policies optimize profit while guaranteeing that none of the server’s clients is disproportionately starved of the service.

Cite as

Filip Cano, Thomas A. Henzinger, Bettina Könighofer, Konstantin Kueffner, and Kaushik Mallik. Abstraction-Based Decision Making for Statistical Properties (Invited Talk). In 9th International Conference on Formal Structures for Computation and Deduction (FSCD 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 299, pp. 2:1-2:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{cano_et_al:LIPIcs.FSCD.2024.2,
  author =	{Cano, Filip and Henzinger, Thomas A. and K\"{o}nighofer, Bettina and Kueffner, Konstantin and Mallik, Kaushik},
  title =	{{Abstraction-Based Decision Making for Statistical Properties}},
  booktitle =	{9th International Conference on Formal Structures for Computation and Deduction (FSCD 2024)},
  pages =	{2:1--2:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-323-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{299},
  editor =	{Rehof, Jakob},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSCD.2024.2},
  URN =		{urn:nbn:de:0030-drops-203310},
  doi =		{10.4230/LIPIcs.FSCD.2024.2},
  annote =	{Keywords: Abstract interpretation, Sequential decision making, Counter machines}
}
Document
The Omnivisor: A Real-Time Static Partitioning Hypervisor Extension for Heterogeneous Core Virtualization over MPSoCs

Authors: Daniele Ottaviano, Francesco Ciraolo, Renato Mancuso, and Marcello Cinque

Published in: LIPIcs, Volume 298, 36th Euromicro Conference on Real-Time Systems (ECRTS 2024)


Abstract
Following the needs of industrial applications, virtualization has emerged as one of the most effective approaches for the consolidation of mixed-criticality systems while meeting tight constraints in terms of space, weight, power, and cost (SWaP-C). In embedded platforms with homogeneous processors, a wealth of work has proposed designs and techniques to enforce spatio-temporal isolation by leveraging well-understood virtualization support. Unfortunately, achieving the same goal on heterogeneous MultiProcessor Systems-on-Chip (MPSoCs) has been largely overlooked. Modern hypervisors are designed to operate exclusively on main cores, with little or no consideration given to other co-processors within the system, such as small microcontroller-level CPUs or soft-cores deployed on programmable logic (FPGA). Typically, hypervisors consider co-processors as I/O devices allocated to virtual machines that run on primary cores, yielding full control and responsibility over them. Nevertheless, inadequate management of these resources can lead to spatio-temporal isolation issues within the system. In this paper, we propose the Omnivisor model as a paradigm for the holistic management of heterogeneous platforms. The model generalizes the features of real-time static partitioning hypervisors to enable the execution of virtual machines on processors with different Instruction Set Architectures (ISAs) within the same MPSoC. Moreover, the Omnivisor ensures temporal and spatial isolation between virtual machines by integrating and leveraging a variety of hardware and software protection mechanisms. The presented approach not only expands the scope of virtualization in MPSoCs but also enhances the overall system reliability and real-time performance for mixed-criticality applications. A full open-source reference implementation of the Omnivisor based on the Jailhouse hypervisor is provided, targeting ARM real-time processing units and RISC-V soft-cores on FPGA. Experimental results on real hardware show the benefits of the solution, including enabling the seamless launch of virtual machines on different ISAs and extending spatial/temporal isolation to heterogeneous cores with enhanced regulation policies.

Cite as

Daniele Ottaviano, Francesco Ciraolo, Renato Mancuso, and Marcello Cinque. The Omnivisor: A Real-Time Static Partitioning Hypervisor Extension for Heterogeneous Core Virtualization over MPSoCs. In 36th Euromicro Conference on Real-Time Systems (ECRTS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 298, pp. 7:1-7:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{ottaviano_et_al:LIPIcs.ECRTS.2024.7,
  author =	{Ottaviano, Daniele and Ciraolo, Francesco and Mancuso, Renato and Cinque, Marcello},
  title =	{{The Omnivisor: A Real-Time Static Partitioning Hypervisor Extension for Heterogeneous Core Virtualization over MPSoCs}},
  booktitle =	{36th Euromicro Conference on Real-Time Systems (ECRTS 2024)},
  pages =	{7:1--7:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-324-9},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{298},
  editor =	{Pellizzoni, Rodolfo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECRTS.2024.7},
  URN =		{urn:nbn:de:0030-drops-203107},
  doi =		{10.4230/LIPIcs.ECRTS.2024.7},
  annote =	{Keywords: Mixed-Criticality, Embedded Virtualization, Real-Time Systems, MPSoCs}
}
Document
Crêpe: Clock-Reconfiguration-Aware Preemption Control in Real-Time Systems with Devices

Authors: Eva Dengler and Peter Wägemann

Published in: LIPIcs, Volume 298, 36th Euromicro Conference on Real-Time Systems (ECRTS 2024)


Abstract
The domain of energy-constrained real-time systems that are operated on modern embedded system-on-chip (SoC) platforms brings numerous novel challenges for optimal resource minimization. These modern hardware platforms offer a heterogeneous variety of features to configure the tradeoff between temporal performance and energy efficiency, which goes beyond the state-of-the-art of existing dynamic-voltage-frequency-scaling (DVFS) scheduling schemes. The control centers for configuring this tradeoff are the platforms’ complex clock subsystems, which are intertwined with the requirements of the SoC’s components (e.g., transceiver/memory/sensor devices). That is, several devices have precedence constraints with respect to specific clock sources and their settings. Dynamically adapting the various clock sources to select resource-optimal configurations becomes especially challenging in the presence of asynchronous preemptions, which are inherent to systems that use devices. In this paper, we present Crêpe, an approach to clock-reconfiguration-aware preemption control: Crêpe has an understanding of the target platform’s clock subsystem, its sleep states, and penalties to reconfigure clock sources for adapting clock frequencies. Crêpe’s hardware model is combined with an awareness of the application’s device requirements for each executed task, as well as possible interrupts that cause preemptions during runtime. Using these software/hardware constraints, Crêpe employs, in its offline phase, a mathematical formalization in order to select energy-minimal configurations while meeting given deadlines. This optimizing formalization, processed by standard mathematical solver tools, accounts for potentially occurring interrupts and the respective clock reconfigurations, which are then forwarded as alternative schedules to Crêpe’s runtime system. During runtime, the dispatcher assesses these offline-determined alternative schedules and reconfigures the clock sources for energy minimization. We developed an implementation based on a widely used SoC platform (the ESP32-C3) and an automated testbed for comprehensive energy-consumption evaluations to validate Crêpe’s claim of selecting resource-optimal settings under worst-case considerations.

Cite as

Eva Dengler and Peter Wägemann. Crêpe: Clock-Reconfiguration-Aware Preemption Control in Real-Time Systems with Devices. In 36th Euromicro Conference on Real-Time Systems (ECRTS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 298, pp. 10:1-10:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{dengler_et_al:LIPIcs.ECRTS.2024.10,
  author =	{Dengler, Eva and W\"{a}gemann, Peter},
  title =	{{Cr\^{e}pe: Clock-Reconfiguration-Aware Preemption Control in Real-Time Systems with Devices}},
  booktitle =	{36th Euromicro Conference on Real-Time Systems (ECRTS 2024)},
  pages =	{10:1--10:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-324-9},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{298},
  editor =	{Pellizzoni, Rodolfo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECRTS.2024.10},
  URN =		{urn:nbn:de:0030-drops-203135},
  doi =		{10.4230/LIPIcs.ECRTS.2024.10},
  annote =	{Keywords: energy-constrained real-time systems, time/energy tradeoff, system-on-chip, energy-aware real-time scheduling, resource minimization, preemption control, worst-case energy consumption (WCEC), worst-case execution time (WCET), static whole-system analysis}
}
Document
Autonomy Today: Many Delay-Prone Black Boxes

Authors: Sizhe Liu, Rohan Wagle, James H. Anderson, Ming Yang, Chi Zhang, and Yunhua Li

Published in: LIPIcs, Volume 298, 36th Euromicro Conference on Real-Time Systems (ECRTS 2024)


Abstract
Machine-learning (ML) technology has been a key enabler in the push towards realizing ever more sophisticated autonomous-driving features. In deploying such technology, the automotive industry has relied heavily on using "black-box" software and hardware components that were originally intended for non-safety-critical contexts, without a full understanding of their real-time capabilities. A prime example of such a component is CUDA, which is fundamental to the acceleration of ML algorithms using NVIDIA GPUs. In this paper, evidence is presented demonstrating that CUDA can cause unbounded task delays. Such delays are the result of CUDA’s usage of synchronization mechanisms in the POSIX thread (pthread) library, so the latter is implicated as a delay-prone component as well. Such synchronization delays are shown to be the source of a system failure that occurred in an actual autonomous vehicle system during testing at WeRide. Motivated by these findings, a broader experimental study is presented that demonstrates several real-time deficiencies in CUDA, the glibc pthread library, Linux, and the POSIX interface of the safety-certified QNX Operating System for Safety. Partial mitigations for these deficiencies are presented and further actions are proposed for real-time researchers and developers to integrate more complete mitigations.

Cite as

Sizhe Liu, Rohan Wagle, James H. Anderson, Ming Yang, Chi Zhang, and Yunhua Li. Autonomy Today: Many Delay-Prone Black Boxes. In 36th Euromicro Conference on Real-Time Systems (ECRTS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 298, pp. 12:1-12:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{liu_et_al:LIPIcs.ECRTS.2024.12,
  author =	{Liu, Sizhe and Wagle, Rohan and Anderson, James H. and Yang, Ming and Zhang, Chi and Li, Yunhua},
  title =	{{Autonomy Today: Many Delay-Prone Black Boxes}},
  booktitle =	{36th Euromicro Conference on Real-Time Systems (ECRTS 2024)},
  pages =	{12:1--12:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-324-9},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{298},
  editor =	{Pellizzoni, Rodolfo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECRTS.2024.12},
  URN =		{urn:nbn:de:0030-drops-203152},
  doi =		{10.4230/LIPIcs.ECRTS.2024.12},
  annote =	{Keywords: autonomous driving, CUDA programming, locking protocols, POSIX thread, operating systems, machine learning systems, real-time systems}
}
Document
Track A: Algorithms, Complexity and Games
NP-Hardness of Testing Equivalence to Sparse Polynomials and to Constant-Support Polynomials

Authors: Omkar Baraskar, Agrim Dewan, Chandan Saha, and Pulkit Sinha

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
An s-sparse polynomial has at most s monomials with nonzero coefficients. The Equivalence Testing problem for sparse polynomials (ETsparse) asks to decide if a given polynomial f is equivalent to (i.e., in the orbit of) some s-sparse polynomial. In other words, given f ∈ 𝔽[𝐱] and s ∈ ℕ, ETsparse asks to check if there exist A ∈ GL(|𝐱|, 𝔽) and 𝐛 ∈ 𝔽^|𝐱| such that f(A𝐱 + 𝐛) is s-sparse. We show that ETsparse is NP-hard over any field 𝔽, if f is given in the sparse representation, i.e., as a list of nonzero coefficients and exponent vectors. This answers a question posed by Gupta, Saha and Thankey (SODA 2023) and also, more explicitly, by Baraskar, Dewan and Saha (STACS 2024). The result implies that the Minimum Circuit Size Problem (MCSP) is NP-hard for a dense subclass of depth-3 arithmetic circuits if the input is given in sparse representation. We also show that approximating the smallest s₀ such that a given s-sparse polynomial f is in the orbit of some s₀-sparse polynomial to within a factor of s^{1/3 - ε} is NP-hard for any ε > 0; observe that s-factor approximation is trivial as the input is s-sparse. Finally, we show that for any constant σ ≥ 6, checking if a polynomial (given in sparse representation) is in the orbit of some support-σ polynomial is NP-hard. The support of a polynomial f is the maximum number of variables present in any monomial of f. These results are obtained via direct reductions from the 3-SAT problem.
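A toy instance of the orbit notion (illustrative only): written out, f = x² + 2xy + y² has three monomials, but it is the image of the 1-sparse polynomial x² under an invertible change of variables, so f is in the orbit of a 1-sparse polynomial.

import sympy as sp

x, y = sp.symbols("x y")
f = x**2 + 2*x*y + y**2            # 3-sparse as written
h = x**2                           # 1-sparse polynomial
# A = [[1, 1], [0, 1]] in GL(2, F), b = 0: substitute x -> x + y.
g = sp.expand(h.subs(x, x + y))
print(sp.simplify(f - g) == 0)     # True: f = h(Ax) lies in the orbit of h

ETsparse asks whether such a pair (A, 𝐛) exists for a target sparsity s, and the paper shows that deciding this is NP-hard.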

Cite as

Omkar Baraskar, Agrim Dewan, Chandan Saha, and Pulkit Sinha. NP-Hardness of Testing Equivalence to Sparse Polynomials and to Constant-Support Polynomials. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 16:1-16:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{baraskar_et_al:LIPIcs.ICALP.2024.16,
  author =	{Baraskar, Omkar and Dewan, Agrim and Saha, Chandan and Sinha, Pulkit},
  title =	{{NP-Hardness of Testing Equivalence to Sparse Polynomials and to Constant-Support Polynomials}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{16:1--16:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.16},
  URN =		{urn:nbn:de:0030-drops-201598},
  doi =		{10.4230/LIPIcs.ICALP.2024.16},
  annote =	{Keywords: Equivalence testing, MCSP, sparse polynomials, 3SAT}
}
Document
Track A: Algorithms, Complexity and Games
Constrained Level Planarity Is FPT with Respect to the Vertex Cover Number

Authors: Boris Klemz and Marie Diana Sieper

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
The problem Level Planarity asks for a crossing-free drawing of a graph in the plane such that vertices are placed at prescribed y-coordinates (called levels) and such that every edge is realized as a y-monotone curve. In the variant Constrained Level Planarity, each level y is equipped with a partial order ≺_y on its vertices and in the desired drawing the left-to-right order of vertices on level y has to be a linear extension of ≺_y. Constrained Level Planarity is known to be a remarkably difficult problem: previous results by Klemz and Rote [ACM Trans. Alg.'19] and by Brückner and Rutter [SODA'17] imply that it remains NP-hard even when restricted to graphs whose tree-depth and feedback vertex set number are bounded by a constant and even when the instances are additionally required to be either proper, meaning that each edge spans two consecutive levels, or ordered, meaning that all given partial orders are total orders. In particular, these results rule out the existence of FPT-time (even XP-time) algorithms with respect to these and related graph parameters (unless P=NP). However, the parameterized complexity of Constrained Level Planarity with respect to the vertex cover number of the input graph remained open. In this paper, we show that Constrained Level Planarity can be solved in FPT-time when parameterized by the vertex cover number. In view of the previous intractability statements, our result is best-possible in several regards: a speed-up to polynomial time or a generalization to the aforementioned smaller graph parameters is not possible, even if restricting to proper or ordered instances.

Cite as

Boris Klemz and Marie Diana Sieper. Constrained Level Planarity Is FPT with Respect to the Vertex Cover Number. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 99:1-99:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{klemz_et_al:LIPIcs.ICALP.2024.99,
  author =	{Klemz, Boris and Sieper, Marie Diana},
  title =	{{Constrained Level Planarity Is FPT with Respect to the Vertex Cover Number}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{99:1--99:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.99},
  URN =		{urn:nbn:de:0030-drops-202428},
  doi =		{10.4230/LIPIcs.ICALP.2024.99},
  annote =	{Keywords: Parameterized Complexity, Graph Drawing, Planar Poset Diagram, Level Planarity, Constrained Level Planarity, Vertex Cover, FPT, Computational Geometry}
}
Document
Position
Grounding Stream Reasoning Research

Authors: Pieter Bonte, Jean-Paul Calbimonte, Daniel de Leng, Daniele Dell'Aglio, Emanuele Della Valle, Thomas Eiter, Federico Giannini, Fredrik Heintz, Konstantin Schekotihin, Danh Le-Phuoc, Alessandra Mileo, Patrik Schneider, Riccardo Tommasini, Jacopo Urbani, and Giacomo Ziffer

Published in: Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 1 (2024): Special Issue on Trends in Graph Data and Knowledge - Part 2


Abstract
In the last decade, there has been a growing interest in applying AI technologies to implement complex data analytics over data streams. To this end, researchers in various fields have been organising a yearly event called the "Stream Reasoning Workshop" to share perspectives, challenges, and experiences around this topic. In this paper, the previous organisers of the workshops and other community members provide a summary of the main research results that have been discussed during the first six editions of the event. These results can be categorised into four main research areas: The first is concerned with the technological challenges related to handling large data streams. The second area aims at adapting and extending existing semantic technologies to data streams. The third and fourth areas focus on how to implement reasoning techniques, either considering deductive or inductive techniques, to extract new and valuable knowledge from the data in the stream. This summary is written not only to provide a crystallisation of the field, but also to point out distinctive traits of the stream reasoning community. Moreover, it also provides a foundation for future research by enumerating a list of use cases and open challenges, to stimulate others to join this exciting research area.

Cite as

Pieter Bonte, Jean-Paul Calbimonte, Daniel de Leng, Daniele Dell'Aglio, Emanuele Della Valle, Thomas Eiter, Federico Giannini, Fredrik Heintz, Konstantin Schekotihin, Danh Le-Phuoc, Alessandra Mileo, Patrik Schneider, Riccardo Tommasini, Jacopo Urbani, and Giacomo Ziffer. Grounding Stream Reasoning Research. In Special Issue on Trends in Graph Data and Knowledge - Part 2. Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 1, pp. 2:1-2:47, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@Article{bonte_et_al:TGDK.2.1.2,
  author =	{Bonte, Pieter and Calbimonte, Jean-Paul and de Leng, Daniel and Dell'Aglio, Daniele and Della Valle, Emanuele and Eiter, Thomas and Giannini, Federico and Heintz, Fredrik and Schekotihin, Konstantin and Le-Phuoc, Danh and Mileo, Alessandra and Schneider, Patrik and Tommasini, Riccardo and Urbani, Jacopo and Ziffer, Giacomo},
  title =	{{Grounding Stream Reasoning Research}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{2:1--2:47},
  ISSN =	{2942-7517},
  year =	{2024},
  volume =	{2},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.2.1.2},
  URN =		{urn:nbn:de:0030-drops-198597},
  doi =		{10.4230/TGDK.2.1.2},
  annote =	{Keywords: Stream Reasoning, Stream Processing, RDF streams, Streaming Linked Data, Continuous query processing, Temporal Logics, High-performance computing, Databases}
}
Document
Position
Standardizing Knowledge Engineering Practices with a Reference Architecture

Authors: Bradley P. Allen and Filip Ilievski

Published in: Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 1 (2024): Special Issue on Trends in Graph Data and Knowledge - Part 2


Abstract
Knowledge engineering is the process of creating and maintaining knowledge-producing systems. Throughout the history of computer science and AI, knowledge engineering workflows have been widely used given the importance of high-quality knowledge for reliable intelligent agents. Meanwhile, the scope of knowledge engineering, as apparent from its target tasks and use cases, has been shifting, together with its paradigms such as expert systems, semantic web, and language modeling. The intended use cases and supported user requirements across these paradigms have not been analyzed globally, as new paradigms often satisfy prior pain points while possibly introducing new ones. The recent abstraction of systemic patterns into a boxology provides an opening for aligning the requirements and use cases of knowledge engineering with the systems, components, and software that can satisfy them best; however, this direction has not been explored to date. This paper proposes a vision of harmonizing the best practices in the field of knowledge engineering by leveraging the software engineering methodology of creating reference architectures. We describe how a reference architecture can be iteratively designed and implemented to associate user needs with recurring systemic patterns, building on top of existing knowledge engineering workflows and boxologies. We provide a six-step roadmap that can enable the development of such an architecture, consisting of scope definition, selection of information sources, architectural analysis, synthesis of an architecture based on the information source analysis, evaluation through instantiation, and, ultimately, instantiation into a concrete software architecture. We provide an initial design and outcome of the definition of architectural scope, selection of information sources, and analysis. As the remaining steps of design, evaluation, and instantiation of the architecture are largely use-case specific, we provide a detailed description of their procedures and point to relevant examples. We expect that following through on this vision will lead to well-grounded reference architectures for knowledge engineering, will advance the ongoing initiatives of organizing the neurosymbolic knowledge engineering space, and will build new links to the software architectures and data science communities.

Cite as

Bradley P. Allen and Filip Ilievski. Standardizing Knowledge Engineering Practices with a Reference Architecture. In Special Issue on Trends in Graph Data and Knowledge - Part 2. Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 1, pp. 5:1-5:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@Article{allen_et_al:TGDK.2.1.5,
  author =	{Allen, Bradley P. and Ilievski, Filip},
  title =	{{Standardizing Knowledge Engineering Practices with a Reference Architecture}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{5:1--5:23},
  ISSN =	{2942-7517},
  year =	{2024},
  volume =	{2},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.2.1.5},
  URN =		{urn:nbn:de:0030-drops-198623},
  doi =		{10.4230/TGDK.2.1.5},
  annote =	{Keywords: knowledge engineering, knowledge graphs, quality attributes, software architectures, sociotechnical systems}
}
Document
Stretching Demi-Bits and Nondeterministic-Secure Pseudorandomness

Authors: Iddo Tzameret and Lu-Ming Zhang

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
We develop the theory of cryptographic nondeterministic-secure pseudorandomness beyond the point reached by Rudich’s original work [S. Rudich, 1997], and apply it to draw new consequences in average-case complexity and proof complexity. Specifically, we show the following: Demi-bit stretch: Super-bits and demi-bits are variants of cryptographic pseudorandom generators which are secure against nondeterministic statistical tests [S. Rudich, 1997]. They were introduced to rule out certain approaches to proving strong complexity lower bounds beyond the limitations set out by the Natural Proofs barrier of Razborov and Rudich [A. A. Razborov and S. Rudich, 1997]. Whether demi-bits are stretchable at all had been an open problem since their introduction. We answer this question affirmatively by showing that every demi-bit b:{0,1}ⁿ → {0,1}^{n+1} can be stretched into sublinearly many demi-bits b':{0,1}ⁿ → {0,1}^{n+n^{c}}, for every constant 0 < c < 1. Average-case hardness: Using work by Santhanam [Rahul Santhanam, 2020], we apply our results to obtain new average-case Kolmogorov complexity results: we show that K^{poly}[n-O(1)] is zero-error average-case hard against NP/poly machines iff K^{poly}[n-o(n)] is, where for a function s(n):ℕ → ℕ, K^{poly}[s(n)] denotes the language of all strings x ∈ {0,1}ⁿ for which there are (fixed) polytime Turing machines of description-length at most s(n) that output x. Characterising super-bits by nondeterministic unpredictability: In the deterministic setting, Yao [Yao, 1982] proved that super-polynomial hardness of pseudorandom generators is equivalent to ("next-bit") unpredictability. Unpredictability roughly means that given any strict prefix of a random string, it is infeasible to predict the next bit. We initiate the study of unpredictability beyond the deterministic setting (in the cryptographic regime), and characterise the nondeterministic hardness of generators from an unpredictability perspective. Specifically, we propose four stronger notions of unpredictability: NP/poly-unpredictability, coNP/poly-unpredictability, ∩-unpredictability and ∪-unpredictability, and show that super-polynomial nondeterministic hardness of generators lies between ∩-unpredictability and ∪-unpredictability. Characterising super-bits by nondeterministic hard-core predicates: We introduce a nondeterministic variant of hard-core predicates, called super-core predicates. We show that the existence of a super-bit is equivalent to the existence of a super-core of some non-shrinking function. This serves as an analogue of the equivalence between the existence of a strong pseudorandom generator and the existence of a hard-core of some one-way function [Goldreich and Levin, 1989; Håstad et al., 1999], and provides a first alternative characterisation of super-bits.
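For reference, the deterministic baseline that the four nondeterministic unpredictability notions generalize is Yao's next-bit unpredictability, stated here in its standard cryptographic form (assumed for illustration; the paper works in the nondeterministic regime):

\[
  \Pr_{x \sim \{0,1\}^{n}}\big[\,P\big(G(x)_{1}\cdots G(x)_{i-1}\big) = G(x)_{i}\,\big] \;\le\; \frac{1}{2} + \mathrm{negl}(n)
\]

for every efficient predictor P and every index i; Yao's theorem makes this equivalent to G being a pseudorandom generator against efficient distinguishers.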

Cite as

Iddo Tzameret and Lu-Ming Zhang. Stretching Demi-Bits and Nondeterministic-Secure Pseudorandomness. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 95:1-95:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{tzameret_et_al:LIPIcs.ITCS.2024.95,
  author =	{Tzameret, Iddo and Zhang, Lu-Ming},
  title =	{{Stretching Demi-Bits and Nondeterministic-Secure Pseudorandomness}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{95:1--95:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.95},
  URN =		{urn:nbn:de:0030-drops-196234},
  doi =		{10.4230/LIPIcs.ITCS.2024.95},
  annote =	{Keywords: Pseudorandomness, Cryptography, Natural Proofs, Nondeterminism, Lower bounds}
}