2 Search Results for "Le, Tien-Nam"


Document
Lossy Kernelization for (Implicit) Hitting Set Problems

Authors: Fedor V. Fomin, Tien-Nam Le, Daniel Lokshtanov, Saket Saurabh, Stéphan Thomassé, and Meirav Zehavi

Published in: LIPIcs, Volume 274, 31st Annual European Symposium on Algorithms (ESA 2023)


Abstract
We revisit the complexity of polynomial-time preprocessing (kernelization) for the d-Hitting Set problem. This is one of the most classic problems in Parameterized Complexity by itself, and, furthermore, it encompasses several other extensively studied problems in the field, such as Vertex Cover, Feedback Vertex Set in Tournaments (FVST) and Cluster Vertex Deletion (CVD). In fact, d-Hitting Set encompasses any deletion problem to a hereditary property that can be characterized by a finite set of forbidden induced subgraphs. With respect to bit size, the kernelization complexity of d-Hitting Set is essentially settled: there exists a kernel with 𝒪(k^d) bits (𝒪(k^d) sets and 𝒪(k^{d-1}) elements), and this is tight by the result of Dell and van Melkebeek [STOC 2010, JACM 2014]. Still, the question of whether there exists a kernel for d-Hitting Set with fewer elements has remained one of the major open problems in Kernelization. In this paper, we first show that if we allow the kernelization to be lossy, with a qualitatively better loss than the best possible approximation ratio of polynomial-time approximation algorithms, then one can obtain kernels in which the number of elements is linear for every fixed d. Building on this, we present our main result: there exist approximate Turing kernelizations for d-Hitting Set that even beat the established bit-size lower bounds for exact kernelizations; in fact, we use a constant number of oracle calls, each of "near linear" (𝒪(k^{1+ε})) bit size, which is almost the best one could hope for. Lastly, for two special cases of implicit 3-Hitting Set, namely FVST and CVD, we obtain "best of both worlds" results: (1+ε)-approximate kernelizations with a linear number of vertices. In terms of size, this substantially improves upon the exact kernels of Fomin et al. [SODA 2018, TALG 2019], with simpler arguments.
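
For orientation, the abstract's notion of a lossy ((1+ε)-approximate) kernelization can be made precise with the standard definition introduced by Lokshtanov et al. [STOC 2017]. The following LaTeX sketch states it for a parameterized minimization problem; it is included here as background and is not quoted from the paper itself:

% Sketch of the standard definition of an alpha-approximate kernel of size
% g(k): two polynomial-time procedures, a reduction and a solution lifting.
\[
  \textsf{reduce}:\ (I,k)\ \mapsto\ (I',k')
  \quad\text{with}\quad |I'| + k' \le g(k),
\]
\[
  \textsf{lift}:\ s' \ \mapsto\ s
  \quad\text{such that}\quad
  \frac{\mathrm{val}(I,s)}{\mathrm{OPT}(I)}
  \ \le\ \alpha \cdot \frac{\mathrm{val}(I',s')}{\mathrm{OPT}(I')}.
\]
% Hence a c-approximate solution of the reduced instance lifts, in polynomial
% time, to an (alpha * c)-approximate solution of the original instance;
% for FVST and CVD the paper achieves alpha = 1 + epsilon with a linear
% number of vertices.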

Cite as

Fedor V. Fomin, Tien-Nam Le, Daniel Lokshtanov, Saket Saurabh, Stéphan Thomassé, and Meirav Zehavi. Lossy Kernelization for (Implicit) Hitting Set Problems. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 49:1-49:14, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{fomin_et_al:LIPIcs.ESA.2023.49,
  author =	{Fomin, Fedor V. and Le, Tien-Nam and Lokshtanov, Daniel and Saurabh, Saket and Thomass\'{e}, St\'{e}phan and Zehavi, Meirav},
  title =	{{Lossy Kernelization for (Implicit) Hitting Set Problems}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{49:1--49:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.49},
  URN =		{urn:nbn:de:0030-drops-187020},
  doi =		{10.4230/LIPIcs.ESA.2023.49},
  annote =	{Keywords: Hitting Set, Lossy Kernelization}
}
Document
HW-Flow: A Multi-Abstraction Level HW-CNN Codesign Pruning Methodology

Authors: Manoj-Rohit Vemparala, Nael Fasfous, Alexander Frickenstein, Emanuele Valpreda, Manfredi Camalleri, Qi Zhao, Christian Unger, Naveen-Shankar Nagaraja, Maurizio Martina, and Walter Stechele

Published in: LITES, Volume 8, Issue 1 (2022): Special Issue on Embedded Systems for Computer Vision (Leibniz Transactions on Embedded Systems)


Abstract
Convolutional neural networks (CNNs) have achieved unprecedented accuracy on many computer vision problems in recent years. On power- and compute-constrained embedded platforms, however, deploying modern CNNs can present many challenges. Most CNN architectures do not run in real time due to the high number of computational operations involved in the inference phase. This emphasizes the role of CNN optimization techniques in early design space exploration. To estimate their efficacy in satisfying the target constraints, existing techniques are either hardware (HW) agnostic, pseudo-HW-aware (considering only parameter and operation counts), or HW-aware only through inflexible hardware-in-the-loop (HIL) setups. In this work, we introduce HW-Flow, a framework for optimizing and exploring CNN models based on three levels of hardware abstraction: Coarse, Mid and Fine. Through these levels, CNN design and optimization can be iteratively refined towards efficient execution on the target hardware platform. We present HW-Flow in the context of CNN pruning by augmenting a reinforcement learning agent with key metrics to understand the influence of its pruning actions on the inference hardware. With a 2× reduction in energy and latency, we prune ResNet56, ResNet50, and DeepLabv3 with minimal accuracy degradation on the CIFAR-10, ImageNet, and CityScapes datasets, respectively.
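
To make the hardware-aware pruning loop described above concrete, here is a minimal, hypothetical sketch: a search agent picks per-layer channel counts and receives a reward that combines estimated accuracy with a coarse latency model, so its pruning actions are tied to the target hardware. All names, numbers, and the toy hardware model are invented for illustration; this is not the HW-Flow implementation, and plain random search stands in for the paper's reinforcement learning agent.

import random

# Toy "coarse" hardware model: latency estimated from MAC counts alone.
def estimate_latency(channels, macs_per_channel, macs_per_cycle=64):
    macs = sum(c * m for c, m in zip(channels, macs_per_channel))
    return macs / macs_per_cycle  # cycles

def estimate_accuracy(channels, baseline):
    # Stand-in for a validation run: accuracy degrades as channels shrink.
    kept = sum(c / b for c, b in zip(channels, baseline)) / len(baseline)
    return 0.93 * kept ** 0.5

def reward(channels, baseline, macs_per_channel, lam=1e-7):
    # Hardware-aware reward: accuracy minus a latency penalty, so the agent
    # "sees" the effect of its pruning actions on the inference hardware.
    # lam scales the cycle count into the same range as accuracy.
    return (estimate_accuracy(channels, baseline)
            - lam * estimate_latency(channels, macs_per_channel))

def random_search_pruning(baseline, macs_per_channel, budget, steps=2000, seed=0):
    """Pick per-layer channel counts maximizing the reward under a latency
    budget; a trivial random-search stand-in for an RL pruning agent."""
    rng = random.Random(seed)
    best, best_r = baseline, float("-inf")
    for _ in range(steps):
        # Action: keep a random fraction of each layer's channels.
        cand = tuple(max(1, int(b * rng.uniform(0.3, 1.0))) for b in baseline)
        if estimate_latency(cand, macs_per_channel) > budget:
            continue  # violates the target constraint
        r = reward(cand, baseline, macs_per_channel)
        if r > best_r:
            best, best_r = cand, r
    return best, best_r

if __name__ == "__main__":
    baseline = (64, 128, 256)            # channels per layer (toy network)
    macs_per_channel = (9e5, 4.5e5, 2.2e5)
    budget = 0.5 * estimate_latency(baseline, macs_per_channel)  # 2x target
    cfg, r = random_search_pruning(baseline, macs_per_channel, budget)
    print("pruned channels:", cfg, "reward:", round(r, 4))

In this sketch, progressively replacing estimate_latency with more detailed models would play the role of moving between the Coarse, Mid, and Fine abstraction levels that HW-Flow describes.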

Cite as

Manoj-Rohit Vemparala, Nael Fasfous, Alexander Frickenstein, Emanuele Valpreda, Manfredi Camalleri, Qi Zhao, Christian Unger, Naveen-Shankar Nagaraja, Maurizio Martina, and Walter Stechele. HW-Flow: A Multi-Abstraction Level HW-CNN Codesign Pruning Methodology. In LITES, Volume 8, Issue 1 (2022): Special Issue on Embedded Systems for Computer Vision (Leibniz Transactions on Embedded Systems), pp. 03:1-03:30, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2022)


BibTeX

@Article{vemparala_et_al:LITES.8.1.3,
  author =	{Vemparala, Manoj-Rohit and Fasfous, Nael and Frickenstein, Alexander and Valpreda, Emanuele and Camalleri, Manfredi and Zhao, Qi and Unger, Christian and Nagaraja, Naveen-Shankar and Martina, Maurizio and Stechele, Walter},
  title =	{{HW-Flow: A Multi-Abstraction Level HW-CNN Codesign Pruning Methodology}},
  journal =	{Leibniz Transactions on Embedded Systems},
  pages =	{03:1--03:30},
  ISSN =	{2199-2002},
  year =	{2022},
  volume =	{8},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LITES.8.1.3},
  doi =		{10.4230/LITES.8.1.3},
  annote =	{Keywords: Convolutional Neural Networks, Optimization, Hardware Modeling, Pruning}
}