3 Search Results for "Chrétien, Stéphane"


Document
Lossy Kernelization for (Implicit) Hitting Set Problems

Authors: Fedor V. Fomin, Tien-Nam Le, Daniel Lokshtanov, Saket Saurabh, Stéphan Thomassé, and Meirav Zehavi

Published in: LIPIcs, Volume 274, 31st Annual European Symposium on Algorithms (ESA 2023)


Abstract
We revisit the complexity of polynomial-time preprocessing (kernelization) for the d-Hitting Set problem. This is one of the most classic problems in Parameterized Complexity in its own right, and, furthermore, it encompasses several other of the most well-studied problems in this field, such as Vertex Cover, Feedback Vertex Set in Tournaments (FVST) and Cluster Vertex Deletion (CVD). In fact, d-Hitting Set encompasses any deletion problem to a hereditary property that can be characterized by a finite set of forbidden induced subgraphs. With respect to bit size, the kernelization complexity of d-Hitting Set is essentially settled: there exists a kernel with 𝒪(k^d) bits (𝒪(k^d) sets and 𝒪(k^{d-1}) elements), and this is tight by the result of Dell and van Melkebeek [STOC 2010, JACM 2014]. Still, the question of whether there exists a kernel for d-Hitting Set with fewer elements has remained one of the major open problems in Kernelization. In this paper, we first show that if we allow the kernelization to be lossy, with a qualitatively better loss than the best possible approximation ratio of polynomial-time approximation algorithms, then one can obtain kernels where the number of elements is linear for every fixed d. Building on this, we present our main result: we show that there exist approximate Turing kernelizations for d-Hitting Set that even beat the established bit-size lower bounds for exact kernelizations; in fact, we use a constant number of oracle calls, each of "near linear" (𝒪(k^{1+ε})) bit size, that is, almost the best one could hope for. Lastly, for two special cases of implicit 3-Hitting Set, namely FVST and CVD, we obtain "best of both worlds" results: (1+ε)-approximate kernelizations with a linear number of vertices. In terms of size, this substantially improves on the exact kernels of Fomin et al. [SODA 2018, TALG 2019], with simpler arguments.
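As background for the approximation loss discussed in the abstract, the textbook greedy factor-d approximation for d-Hitting Set (not the paper's lossy-kernelization algorithm; the instance below is illustrative) can be sketched as:

```python
def greedy_hitting_set(sets):
    """Greedy d-approximation for d-Hitting Set: scan the sets and, for
    each set not yet hit, add all of its (at most d) elements.  Every
    processed set contains at least one element of an optimal solution,
    so the returned set is at most d times larger than the optimum."""
    solution = set()
    for s in sets:
        if not solution & s:   # this set is not yet hit
            solution |= s
    return solution

# Tiny 3-Hitting Set instance: an optimal solution is {3, 6}, of size 2.
instance = [{1, 2, 3}, {3, 4, 5}, {6, 7, 8}]
hit = greedy_hitting_set(instance)
assert all(hit & s for s in instance)   # every set is hit
assert len(hit) <= 3 * 2                # within factor d = 3 of optimum
```

The disjoint sets the greedy procedure collects also form a feasibility certificate, which is the starting point of many Hitting Set kernelizations.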

Cite as

Fedor V. Fomin, Tien-Nam Le, Daniel Lokshtanov, Saket Saurabh, Stéphan Thomassé, and Meirav Zehavi. Lossy Kernelization for (Implicit) Hitting Set Problems. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 49:1-49:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{fomin_et_al:LIPIcs.ESA.2023.49,
  author =	{Fomin, Fedor V. and Le, Tien-Nam and Lokshtanov, Daniel and Saurabh, Saket and Thomass\'{e}, St\'{e}phan and Zehavi, Meirav},
  title =	{{Lossy Kernelization for (Implicit) Hitting Set Problems}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{49:1--49:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.49},
  URN =		{urn:nbn:de:0030-drops-187020},
  doi =		{10.4230/LIPIcs.ESA.2023.49},
  annote =	{Keywords: Hitting Set, Lossy Kernelization}
}
Document
A New Proof-Theoretical Linear Semantics for CHR

Authors: Igor Stéphan

Published in: OASIcs, Volume 64, Technical Communications of the 34th International Conference on Logic Programming (ICLP 2018)


Abstract
Constraint Handling Rules form a committed-choice language consisting of multi-headed guarded rules that rewrite constraints into simpler ones until they are solved. We propose a new proof-theoretical declarative linear semantics for Constraint Handling Rules. We prove completeness and soundness of our semantics with respect to the operational ω_t semantics. We also propose a translation from this semantics to linear logic.
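To illustrate the committed-choice rewriting style the abstract refers to, here is a hypothetical Python simulation of the textbook CHR gcd program (not the semantics proposed in the paper): rules fire on a multiset of constraints until no rule applies.

```python
def chr_gcd(store):
    """Committed-choice multiset rewriting in the spirit of CHR
    (an illustrative Python simulation).  Repeatedly applies the
    textbook gcd rules to a constraint store of gcd/1 constraints:
        gcd(0)           <=> true.                       % remove zeros
        gcd(N) \\ gcd(M) <=> 0 < N, N =< M | gcd(M - N). % simplify
    until no rule fires; a single gcd(G) constraint remains."""
    store = list(store)
    changed = True
    while changed:
        changed = False
        # Rule 1: gcd(0) <=> true
        if 0 in store and len(store) > 1:
            store.remove(0)
            changed = True
            continue
        # Rule 2: gcd(N) \ gcd(M) <=> 0 < N, N =< M | gcd(M - N)
        for i in range(len(store)):
            for j in range(len(store)):
                if i != j and 0 < store[i] <= store[j]:
                    store[j] -= store[i]
                    changed = True
    return store

assert chr_gcd([12, 8]) == [4]
assert chr_gcd([9, 6, 3]) == [3]
```

Each firing either shrinks the store or strictly decreases the sum of its entries, so the rewriting terminates, mirroring the fixpoint behaviour of CHR's operational semantics.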

Cite as

Igor Stéphan. A New Proof-Theoretical Linear Semantics for CHR. In Technical Communications of the 34th International Conference on Logic Programming (ICLP 2018). Open Access Series in Informatics (OASIcs), Volume 64, pp. 4:1-4:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{stephan:OASIcs.ICLP.2018.4,
  author =	{St\'{e}phan, Igor},
  title =	{{A New Proof-Theoretical Linear Semantics for CHR}},
  booktitle =	{Technical Communications of the 34th International Conference on Logic Programming (ICLP 2018)},
  pages =	{4:1--4:17},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-090-3},
  ISSN =	{2190-6807},
  year =	{2018},
  volume =	{64},
  editor =	{Dal Palu', Alessandro and Tarau, Paul and Saeedloei, Neda and Fodor, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.ICLP.2018.4},
  URN =		{urn:nbn:de:0030-drops-98707},
  doi =		{10.4230/OASIcs.ICLP.2018.4},
  annote =	{Keywords: Constraint Handling Rules, Linear Logic}
}
Document
l1-Penalised Ordinal Polytomous Regression Estimators with Application to Gene Expression Studies

Authors: Stéphane Chrétien, Christophe Guyeux, and Serge Moulin

Published in: LIPIcs, Volume 113, 18th International Workshop on Algorithms in Bioinformatics (WABI 2018)


Abstract
Qualitative but ordered random variables, such as severity of a pathology, are of paramount importance in biostatistics and medicine. Understanding the conditional distribution of such qualitative variables as a function of other explanatory variables can be performed using a specific regression model known as ordinal polytomous regression. Variable selection in the ordinal polytomous regression model is a computationally difficult combinatorial optimisation problem which is however crucial when practitioners need to understand which covariates are physically related to the output and which covariates are not. One easy way to circumvent the computational hardness of variable selection is to introduce a penalised maximum likelihood estimator based on some well-chosen non-smooth penalisation function such as the l_1-norm. In the case of the Gaussian linear model, the l_1-penalised least-squares estimator, also known as the LASSO estimator, has attracted a lot of attention in the last decade, both from the theoretical and algorithmic viewpoints. However, even in the Gaussian linear model, accurate calibration of the relaxation parameter, i.e., the relative weight of the penalisation term in the estimation cost function, is still considered a difficult problem that has to be addressed with caution. In the present paper, we apply l_1-penalisation to the ordinal polytomous regression model and compare several hyper-parameter calibration strategies. Our main contributions are: (a) a useful and simple l_1-penalised estimator for ordinal polytomous regression and a thorough description of how to apply Nesterov's accelerated gradient and the online Frank-Wolfe methods to the problem of computing this estimator, (b) a new hyper-parameter calibration method for the proposed model, based on the QUT idea of Giacobino et al., and (c) freely available code implementing the proposed estimation procedure.
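As an illustration of the l_1-penalised machinery discussed in the abstract, here is a minimal pure-Python proximal-gradient (ISTA) sketch for the Gaussian LASSO; it is not the paper's ordinal polytomous estimator nor its Nesterov/Frank-Wolfe solvers, and all names are illustrative.

```python
def soft_threshold(z, t):
    """Proximal operator of t*|x|: argmin_x 0.5*(x - z)**2 + t*abs(x).
    Shrinks z toward 0 by t, exactly zeroing small values."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def lasso_ista(X, y, lam, steps=500, lr=0.01):
    """Plain proximal-gradient descent (ISTA) for the Gaussian LASSO
        min_b  0.5 * ||X b - y||^2  +  lam * ||b||_1
    using pure-Python lists; a minimal sketch for a small, fixed step."""
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(steps):
        r = [sum(X[i][j] * b[j] for j in range(p)) - y[i] for i in range(n)]  # residual X b - y
        g = [sum(X[i][j] * r[i] for i in range(n)) for j in range(p)]         # gradient X^T r
        b = [soft_threshold(b[j] - lr * g[j], lr * lam) for j in range(p)]    # gradient step + shrink
    return b

# With an orthonormal design the LASSO solution is soft_threshold(y, lam).
b = lasso_ista([[1.0, 0.0], [0.0, 1.0]], [3.0, 0.5], lam=1.0)
assert abs(b[0] - 2.0) < 0.05   # shrunk from 3.0 toward 0 by lam
assert b[1] == 0.0              # |0.5| <= lam: killed by the penalty
```

The second assertion shows why the l_1 penalty performs variable selection: coordinates whose signal is below the threshold are set exactly to zero, which is the behaviour the hyper-parameter calibration (e.g., QUT) has to tune.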

Cite as

Stéphane Chrétien, Christophe Guyeux, and Serge Moulin. l1-Penalised Ordinal Polytomous Regression Estimators with Application to Gene Expression Studies. In 18th International Workshop on Algorithms in Bioinformatics (WABI 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 113, pp. 17:1-17:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{chretien_et_al:LIPIcs.WABI.2018.17,
  author =	{Chr\'{e}tien, St\'{e}phane and Guyeux, Christophe and Moulin, Serge},
  title =	{{l1-Penalised Ordinal Polytomous Regression Estimators with Application to Gene Expression Studies}},
  booktitle =	{18th International Workshop on Algorithms in Bioinformatics (WABI 2018)},
  pages =	{17:1--17:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-082-8},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{113},
  editor =	{Parida, Laxmi and Ukkonen, Esko},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.WABI.2018.17},
  URN =		{urn:nbn:de:0030-drops-93199},
  doi =		{10.4230/LIPIcs.WABI.2018.17},
  annote =	{Keywords: LASSO, ordinal polytomous regression, Quantile Universal Threshold, Frank-Wolfe algorithm, Nesterov algorithm}
}
