6 Search Results for "Wu, Yi"


Document
Track A: Algorithms, Complexity and Games
Quantum Algorithms and Lower Bounds for Linear Regression with Norm Constraints

Authors: Yanlin Chen and Ronald de Wolf

Published in: LIPIcs, Volume 261, 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)


Abstract
Lasso and Ridge are important minimization problems in machine learning and statistics. They are versions of linear regression with squared loss where the vector θ ∈ ℝ^d of coefficients is constrained in either 𝓁₁-norm (for Lasso) or in 𝓁₂-norm (for Ridge). We study the complexity of quantum algorithms for finding ε-minimizers for these minimization problems. We show that for Lasso we can get a quadratic quantum speedup in terms of d by speeding up the cost-per-iteration of the Frank-Wolfe algorithm, while for Ridge the best quantum algorithms are linear in d, as are the best classical algorithms. As a byproduct of our quantum lower bound for Lasso, we also prove the first classical lower bound for Lasso that is tight up to polylog-factors.
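
For orientation, the two norm-constrained least-squares problems referred to in this abstract can be written as follows. This is a standard textbook formulation with unit-radius constraints and n data points (x_i, y_i); the paper's exact normalization and constraint radius may differ.

\min_{\theta \in \mathbb{R}^d,\ \|\theta\|_1 \le 1} \ \frac{1}{n} \sum_{i=1}^{n} \bigl( \langle x_i, \theta \rangle - y_i \bigr)^2 \qquad \text{(Lasso)}

\min_{\theta \in \mathbb{R}^d,\ \|\theta\|_2 \le 1} \ \frac{1}{n} \sum_{i=1}^{n} \bigl( \langle x_i, \theta \rangle - y_i \bigr)^2 \qquad \text{(Ridge)}

An ε-minimizer is any feasible θ whose objective value exceeds the minimum by at most ε.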

Cite as

Yanlin Chen and Ronald de Wolf. Quantum Algorithms and Lower Bounds for Linear Regression with Norm Constraints. In 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 261, pp. 38:1-38:21, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{chen_et_al:LIPIcs.ICALP.2023.38,
  author =	{Chen, Yanlin and de Wolf, Ronald},
  title =	{{Quantum Algorithms and Lower Bounds for Linear Regression with Norm Constraints}},
  booktitle =	{50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)},
  pages =	{38:1--38:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-278-5},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{261},
  editor =	{Etessami, Kousha and Feige, Uriel and Puppis, Gabriele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2023.38},
  URN =		{urn:nbn:de:0030-drops-180907},
  doi =		{10.4230/LIPIcs.ICALP.2023.38},
  annote =	{Keywords: Quantum algorithms, Regularized linear regression, Lasso, Ridge, Lower bounds}
}
Document
HW-Flow: A Multi-Abstraction Level HW-CNN Codesign Pruning Methodology

Authors: Manoj-Rohit Vemparala, Nael Fasfous, Alexander Frickenstein, Emanuele Valpreda, Manfredi Camalleri, Qi Zhao, Christian Unger, Naveen-Shankar Nagaraja, Maurizio Martina, and Walter Stechele

Published in: LITES, Volume 8, Issue 1 (2022): Special Issue on Embedded Systems for Computer Vision. Leibniz Transactions on Embedded Systems, Volume 8, Issue 1


Abstract
Convolutional neural networks (CNNs) have produced unprecedented accuracy for many computer vision problems in recent years. On power- and compute-constrained embedded platforms, deploying modern CNNs can present many challenges. Most CNN architectures do not run in real-time due to the high number of computational operations involved during the inference phase. This emphasizes the role of CNN optimization techniques in early design space exploration. To estimate their efficacy in satisfying the target constraints, existing techniques are either hardware (HW) agnostic, pseudo-HW-aware by considering parameter and operation counts, or HW-aware through inflexible hardware-in-the-loop (HIL) setups. In this work, we introduce HW-Flow, a framework for optimizing and exploring CNN models based on three levels of hardware abstraction: Coarse, Mid and Fine. Through these levels, CNN design and optimization can be iteratively refined towards efficient execution on the target hardware platform. We present HW-Flow in the context of CNN pruning by augmenting a reinforcement learning agent with key metrics to understand the influence of its pruning actions on the inference hardware. With a 2× reduction in energy and latency, we prune ResNet56, ResNet50, and DeepLabv3 with minimal accuracy degradation on the CIFAR-10, ImageNet, and CityScapes datasets, respectively.
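
To illustrate what "augmenting a reinforcement learning agent with key metrics" can look like in practice, the sketch below folds coarse hardware estimates (energy, latency) into an accuracy-based pruning reward. All names, the linear cost model, and the reward shape are illustrative assumptions, not HW-Flow's actual interface or models.

# Illustrative sketch only: a pruning reward that combines accuracy with
# coarse-level hardware estimates, in the spirit of HW-aware pruning.
from dataclasses import dataclass

@dataclass
class CoarseHwModel:
    # Hypothetical accelerator parameters (not from the paper).
    energy_per_mac_nj: float = 0.003   # assumed energy per multiply-accumulate
    macs_per_cycle: int = 256          # assumed peak throughput
    clock_hz: float = 500e6

    def estimate(self, macs: int) -> tuple[float, float]:
        """Return (energy in mJ, latency in ms) for a workload with `macs` operations."""
        energy_mj = macs * self.energy_per_mac_nj * 1e-6
        latency_ms = macs / (self.macs_per_cycle * self.clock_hz) * 1e3
        return energy_mj, latency_ms

def pruning_reward(accuracy: float, kept_macs: int, baseline_macs: int,
                   hw: CoarseHwModel, weight: float = 0.5) -> float:
    """Reward = accuracy minus a penalty for missing the 2x energy/latency reduction target."""
    e, l = hw.estimate(kept_macs)
    e0, l0 = hw.estimate(baseline_macs)
    # Penalize only the part of the 0.5x budget that is still exceeded.
    overshoot = max(e / e0 - 0.5, 0.0) + max(l / l0 - 0.5, 0.0)
    return accuracy - weight * overshoot

# Example: a network pruned to 60% of its MACs, reaching 71.2% accuracy.
hw = CoarseHwModel()
print(pruning_reward(0.712, kept_macs=60_000_000, baseline_macs=100_000_000, hw=hw))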

Cite as

Manoj-Rohit Vemparala, Nael Fasfous, Alexander Frickenstein, Emanuele Valpreda, Manfredi Camalleri, Qi Zhao, Christian Unger, Naveen-Shankar Nagaraja, Maurizio Martina, and Walter Stechele. HW-Flow: A Multi-Abstraction Level HW-CNN Codesign Pruning Methodology. In LITES, Volume 8, Issue 1 (2022): Special Issue on Embedded Systems for Computer Vision. Leibniz Transactions on Embedded Systems, Volume 8, Issue 1, pp. 03:1-03:30, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2022)


BibTeX

@Article{vemparala_et_al:LITES.8.1.3,
  author =	{Vemparala, Manoj-Rohit and Fasfous, Nael and Frickenstein, Alexander and Valpreda, Emanuele and Camalleri, Manfredi and Zhao, Qi and Unger, Christian and Nagaraja, Naveen-Shankar and Martina, Maurizio and Stechele, Walter},
  title =	{{HW-Flow: A Multi-Abstraction Level HW-CNN Codesign Pruning Methodology}},
  booktitle =	{LITES, Volume 8, Issue 1 (2022): Special Issue on Embedded Systems for Computer Vision},
  pages =	{03:1--03:30},
  journal =	{Leibniz Transactions on Embedded Systems},
  ISSN =	{2199-2002},
  year =	{2022},
  volume =	{8},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LITES.8.1.3},
  doi =		{10.4230/LITES.8.1.3},
  annote =	{Keywords: Convolutional Neural Networks, Optimization, Hardware Modeling, Pruning}
}
Document
Randomization as Mitigation of Directed Timing Inference Based Attacks on Time-Triggered Real-Time Systems with Task Replication

Authors: Kristin Krüger, Nils Vreman, Richard Pates, Martina Maggio, Marcus Völp, and Gerhard Fohler

Published in: LITES, Volume 7, Issue 1 (2021): Special Issue on Embedded System Security. Leibniz Transactions on Embedded Systems, Volume 7, Issue 1


Abstract
Time-triggered real-time systems achieve deterministic behavior using schedules that are constructed offline, based on scheduling constraints. Their deterministic behavior makes time-triggered systems suitable for use in safety-critical environments such as avionics. However, this determinism also allows attackers, after studying the behavior of the system through side channels, to fine-tune attacks targeting safety-critical victim tasks. Replication -- i.e., the execution of task variants across different cores -- is inherently able to tolerate both accidental and malicious faults (i.e., attacks) as long as these faults are independent of one another. Yet, targeted attacks on the timing behavior of tasks that exploit information gained about the system's behavior violate the fault-independence assumption on which fault tolerance is based. This violation may give attackers the opportunity to compromise all replicas simultaneously, in particular if they can mount the attack from already compromised components. In this paper, we analyze vulnerabilities of time-triggered systems, focusing on safety-certified multicore real-time systems. We introduce two runtime mitigation strategies to withstand directed timing-inference-based attacks: (i) schedule randomization at slot level, and (ii) randomization within a set of offline-constructed schedules. We evaluate these mitigation strategies with synthetic experiments and a real case study to show their effectiveness and practicality.
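
As a rough illustration of mitigation (ii) above, the sketch below draws one of several equivalent, offline-verified time-triggered schedules at each hyperperiod, so an attacker can no longer predict in which slot a victim task will run. The schedule representation, the task names, and the use of Python's CSPRNG are illustrative assumptions, not the paper's implementation.

# Minimal sketch: randomization within a set of offline-constructed schedules.
import secrets

# Each schedule maps slot index -> task id; all schedules satisfy the same
# offline scheduling constraints (hypothetical example tables).
SCHEDULES = [
    {0: "sensor", 1: "control", 2: "victim", 3: "idle"},
    {0: "control", 1: "victim", 2: "sensor", 3: "idle"},
    {0: "victim", 1: "sensor", 2: "idle", 3: "control"},
]

def dispatch_hyperperiod(schedules):
    """Pick one pre-verified schedule uniformly at random and yield its slots in order."""
    table = secrets.choice(schedules)
    for slot in sorted(table):
        yield slot, table[slot]

for slot, task in dispatch_hyperperiod(SCHEDULES):
    print(f"slot {slot}: run {task}")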

Cite as

Kristin Krüger, Nils Vreman, Richard Pates, Martina Maggio, Marcus Völp, and Gerhard Fohler. Randomization as Mitigation of Directed Timing Inference Based Attacks on Time-Triggered Real-Time Systems with Task Replication. In LITES, Volume 7, Issue 1 (2021): Special Issue on Embedded System Security. Leibniz Transactions on Embedded Systems, Volume 7, Issue 1, pp. 01:1-01:29, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2021)


BibTeX

@Article{kruger_et_al:LITES.7.1.1,
  author =	{Kr\"{u}ger, Kristin and Vreman, Nils and Pates, Richard and Maggio, Martina and V\"{o}lp, Marcus and Fohler, Gerhard},
  title =	{{Randomization as Mitigation of Directed Timing Inference Based Attacks on Time-Triggered Real-Time Systems with Task Replication}},
  booktitle =	{LITES, Volume 7, Issue 1 (2021): Special Issue on Embedded System Security},
  pages =	{01:1--01:29},
  journal =	{Leibniz Transactions on Embedded Systems},
  ISSN =	{2199-2002},
  year =	{2021},
  volume =	{7},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LITES.7.1.1},
  doi =		{10.4230/LITES.7.1.1},
  annote =	{Keywords: real-time systems, time-triggered systems, security}
}
Document
The Most Parsimonious Reconciliation Problem in the Presence of Incomplete Lineage Sorting and Hybridization Is NP-Hard

Authors: Matthew LeMay, Yi-Chieh Wu, and Ran Libeskind-Hadas

Published in: LIPIcs, Volume 201, 21st International Workshop on Algorithms in Bioinformatics (WABI 2021)


Abstract
The maximum parsimony phylogenetic reconciliation problem seeks to explain incongruity between a gene phylogeny and a species phylogeny with respect to a set of evolutionary events. While the reconciliation problem is well-studied for species and gene trees subject to events such as duplication, transfer, loss, and deep coalescence, recent work has examined species phylogenies that incorporate hybridization and are thus represented by networks rather than trees. In this paper, we show that the problem of computing a maximum parsimony reconciliation for a gene tree and species network is NP-hard even when only considering deep coalescence. This result suggests that future work on maximum parsimony reconciliation for species networks should explore approximation algorithms and heuristics.

Cite as

Matthew LeMay, Yi-Chieh Wu, and Ran Libeskind-Hadas. The Most Parsimonious Reconciliation Problem in the Presence of Incomplete Lineage Sorting and Hybridization Is NP-Hard. In 21st International Workshop on Algorithms in Bioinformatics (WABI 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 201, pp. 1:1-1:10, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{lemay_et_al:LIPIcs.WABI.2021.1,
  author =	{LeMay, Matthew and Wu, Yi-Chieh and Libeskind-Hadas, Ran},
  title =	{{The Most Parsimonious Reconciliation Problem in the Presence of Incomplete Lineage Sorting and Hybridization Is NP-Hard}},
  booktitle =	{21st International Workshop on Algorithms in Bioinformatics (WABI 2021)},
  pages =	{1:1--1:10},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-200-6},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{201},
  editor =	{Carbone, Alessandra and El-Kebir, Mohammed},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.WABI.2021.1},
  URN =		{urn:nbn:de:0030-drops-143546},
  doi =		{10.4230/LIPIcs.WABI.2021.1},
  annote =	{Keywords: phylogenetics, reconciliation, deep coalescence, hybridization, NP-hardness}
}
Document
A Survey on Static Cache Analysis for Real-Time Systems

Authors: Mingsong Lv, Nan Guan, Jan Reineke, Reinhard Wilhelm, and Wang Yi

Published in: LITES, Volume 3, Issue 1 (2016). Leibniz Transactions on Embedded Systems, Volume 3, Issue 1


Abstract
Real-time systems are reactive computer systems that must produce their reaction to a stimulus within given time bounds. A vital verification requirement is to estimate the Worst-Case Execution Time (WCET) of programs. These estimates are then used to predict the timing behavior of the overall system. The execution time of a program depends heavily on the underlying hardware, and among the hardware features the cache has the biggest influence. Analyzing cache behavior is very challenging due to the versatile cache features and the complex execution environment. This article provides a survey on static cache analysis for real-time systems. We first present the challenges and static analysis techniques for independent programs with respect to different cache features. Then, the discussion is extended to cache analysis in complex execution environments, followed by a survey of existing tools based on static techniques for cache analysis. Finally, an outlook on future research is provided.
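
To make the flavor of the surveyed static techniques concrete, the sketch below shows one textbook ingredient of abstract-interpretation-based cache analysis: the "must" analysis for an A-way fully associative LRU cache, where abstract states map memory blocks to an upper bound on their age and a block is guaranteed to hit only if it survives the join of all incoming paths. The dict-based representation is an illustrative simplification, not any particular tool's data structure.

# Minimal sketch of LRU must-analysis (a standard static cache analysis building block).
ASSOCIATIVITY = 4

def must_join(s1: dict[str, int], s2: dict[str, int]) -> dict[str, int]:
    """Join at a control-flow merge: keep blocks cached on both paths, with the larger age bound."""
    return {b: max(s1[b], s2[b]) for b in s1.keys() & s2.keys()}

def must_access(state: dict[str, int], block: str) -> dict[str, int]:
    """Abstract LRU update: the accessed block becomes youngest; younger blocks age by one."""
    old_age = state.get(block, ASSOCIATIVITY)  # treat unknown blocks as (possibly) evicted
    aged = {b: a + 1 for b, a in state.items() if b != block and a < old_age}
    kept = {b: a for b, a in state.items() if b != block and a >= old_age}
    new_state = {**aged, **kept, block: 0}
    # Blocks whose age bound reaches the associativity may have been evicted.
    return {b: a for b, a in new_state.items() if a < ASSOCIATIVITY}

# Example: only block "x" is guaranteed to be cached on both paths after the join.
path_a = {"x": 1, "y": 2}
path_b = {"x": 2, "z": 0}
joined = must_join(path_a, path_b)
print(joined)                      # {'x': 2}
print(must_access(joined, "x"))    # {'x': 0} -> guaranteed cache hit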

Cite as

Mingsong Lv, Nan Guan, Jan Reineke, Reinhard Wilhelm, and Wang Yi. A Survey on Static Cache Analysis for Real-Time Systems. In LITES, Volume 3, Issue 1 (2016). Leibniz Transactions on Embedded Systems, Volume 3, Issue 1, pp. 05:1-05:48, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


BibTeX

@Article{lv_et_al:LITES-v003-i001-a005,
  author =	{Lv, Mingsong and Guan, Nan and Reineke, Jan and Wilhelm, Reinhard and Yi, Wang},
  title =	{{A Survey on Static Cache Analysis for Real-Time Systems}},
  booktitle =	{LITES, Volume 3, Issue 1 (2016)},
  pages =	{05:1--05:48},
  journal =	{Leibniz Transactions on Embedded Systems},
  ISSN =	{2199-2002},
  year =	{2016},
  volume =	{3},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LITES-v003-i001-a005},
  doi =		{10.4230/LITES-v003-i001-a005},
  annote =	{Keywords: Hard real-time, Cache analysis, Worst-case execution time}
}
Document
Computational Complexity of Certifying Restricted Isometry Property

Authors: Abhiram Natarajan and Yi Wu

Published in: LIPIcs, Volume 28, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)


Abstract
Given a matrix A with n rows, a number k < n, and 0 < δ < 1, A is (k,δ)-RIP (Restricted Isometry Property) if, for any vector x ∈ ℝ^n with at most k non-zero coordinates, (1−δ)‖x‖² ≤ ‖Ax‖² ≤ (1+δ)‖x‖². In many applications, such as compressed sensing and sparse recovery, it is desirable to construct RIP matrices with a large k and a small δ. Given the efficacy of random constructions in generating useful RIP matrices, the problem of certifying the RIP parameters of a matrix has become important. In this paper, we prove that it is hard to approximate the RIP parameters of a matrix assuming the Small-Set-Expansion Hypothesis. Specifically, we prove that for any arbitrarily large constant C > 0 and any arbitrarily small constant 0 < δ < 1, there exists some k such that, given a matrix M, it is SSE-hard to distinguish the following two cases: (i) (highly RIP) M is (k,δ)-RIP; (ii) (far from RIP) M is not (k/C, 1−δ)-RIP. Most of the previous results on the hardness of RIP certification only hold for certification when δ = o(1). In practice, it is of interest to understand the complexity of certifying a matrix with δ close to √2 − 1, as matrices with δ = √2 − 1 suffice for many real applications. Our hardness result holds for any constant δ. Specifically, our result proves that even if δ is indeed very small, i.e., the matrix is in fact strongly RIP, certifying that the matrix exhibits even weak RIP is SSE-hard. In order to prove the hardness result, we prove a variant of Cheeger's inequality for sparse vectors.
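
To see why certification is costly in the first place, the sketch below performs brute-force RIP certification: for each fixed support S of size k, the restricted isometry condition reduces to a bound on the extreme singular values of the column submatrix A_S, and a certificate must cover all C(n,k) supports. It follows the standard convention that the sparse vector ranges over the columns of A; numpy, the function name, and the random Gaussian example matrix are illustrative assumptions, not part of the paper.

# Minimal brute-force sketch: smallest delta for which A is (k, delta)-RIP.
from itertools import combinations
import numpy as np

def rip_violation(A: np.ndarray, k: int) -> float:
    """Return the smallest delta such that A is (k, delta)-RIP (exhaustive over supports)."""
    worst = 0.0
    for support in combinations(range(A.shape[1]), k):  # all C(n, k) column subsets
        sv = np.linalg.svd(A[:, list(support)], compute_uv=False)
        worst = max(worst, abs(sv[0] ** 2 - 1.0), abs(sv[-1] ** 2 - 1.0))
    return worst

# Example: a 20x40 Gaussian matrix with unit-norm columns, checked for k = 2.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 40))
A /= np.linalg.norm(A, axis=0)
print(rip_violation(A, k=2))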

Cite as

Abhiram Natarajan and Yi Wu. Computational Complexity of Certifying Restricted Isometry Property. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 371-380, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2014)


BibTeX

@InProceedings{natarajan_et_al:LIPIcs.APPROX-RANDOM.2014.371,
  author =	{Natarajan, Abhiram and Wu, Yi},
  title =	{{Computational Complexity of Certifying Restricted Isometry Property}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{371--380},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.371},
  URN =		{urn:nbn:de:0030-drops-47092},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.371},
  annote =	{Keywords: Restricted Isometry Property, RIP, RIP Certification, Sparse Cheeger Inequality, SSE Hard}
}