Search Results

Documents authored by Popa, Alexandru


Document
Faster Algorithms for Computing the Hairpin Completion Distance and Minimum Ancestor

Authors: Itai Boneh, Dvir Fried, Adrian Miclăuş, and Alexandru Popa

Published in: LIPIcs, Volume 259, 34th Annual Symposium on Combinatorial Pattern Matching (CPM 2023)


Abstract
Hairpin completion is an operation on formal languages that is inspired by hairpin formation in DNA biochemistry and has many applications, especially in DNA computing. Consider a string s over the alphabet {A, C, G, T} such that a prefix/suffix of it matches the reversed complement of a substring of s. In a hairpin completion operation, the reversed complement of this prefix/suffix is added to the start/end of s, forming a new string. In this paper we study two problems related to hairpin completion. The first problem asks for the minimum number of hairpin completion operations necessary to transform one string into another, a number called the hairpin completion distance. For this problem we give an algorithm with running time O(n²), where n is the maximum length of the two strings. Our algorithm improves on the algorithm of Manea (TCS 2010), which has running time O(n² log n). In the minimum distance common hairpin completion ancestor problem we want to find, for two input strings x and y, a string w that minimizes the sum of the hairpin completion distances to x and y. Again, we present an algorithm with running time O(n²), improving the algorithm of Manea (TCS 2010) by an O(log n) factor.
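
To make the operation concrete, here is a minimal Python sketch of one right hairpin completion step under one standard formulation from the hairpin-completion literature (writing s = γ·α·β·revcomp(α) and extending it to s·revcomp(γ)). The function names and the minimum stem length are our own choices; this only illustrates the operation and is not the paper's O(n²) distance algorithm.

# One right hairpin completion step on a DNA string (illustration only).
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def revcomp(u: str) -> str:
    """Reverse complement over the alphabet {A, C, G, T}."""
    return u.translate(COMPLEMENT)[::-1]

def right_hairpin_completions(s: str, stem: int = 2) -> set[str]:
    """All strings obtainable from s by one right hairpin completion,
    writing s = gamma + alpha + beta + revcomp(alpha) with |alpha| >= stem
    and gamma non-empty, and returning s + revcomp(gamma)."""
    results = set()
    n = len(s)
    for a in range(stem, n):                 # a = |alpha|
        alpha = revcomp(s[n - a:])           # the suffix of s is revcomp(alpha)
        for g in range(1, n - 2 * a + 1):    # g = |gamma|, non-empty
            if s[g:g + a] == alpha:
                results.add(s + revcomp(s[:g]))
    return results

# Example: "GTACAGTA" = "G" + "TAC" + "A" + revcomp("TAC"), so one step
# appends revcomp("G") = "C".
print(right_hairpin_completions("GTACAGTA", stem=3))   # {'GTACAGTAC'}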

Cite as

Itai Boneh, Dvir Fried, Adrian Miclăuş, and Alexandru Popa. Faster Algorithms for Computing the Hairpin Completion Distance and Minimum Ancestor. In 34th Annual Symposium on Combinatorial Pattern Matching (CPM 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 259, pp. 5:1-5:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{boneh_et_al:LIPIcs.CPM.2023.5,
  author =	{Boneh, Itai and Fried, Dvir and Micl\u{a}u\c{s}, Adrian and Popa, Alexandru},
  title =	{{Faster Algorithms for Computing the Hairpin Completion Distance and Minimum Ancestor}},
  booktitle =	{34th Annual Symposium on Combinatorial Pattern Matching (CPM 2023)},
  pages =	{5:1--5:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-276-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{259},
  editor =	{Bulteau, Laurent and Lipt\'{a}k, Zsuzsanna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CPM.2023.5},
  URN =		{urn:nbn:de:0030-drops-179592},
  doi =		{10.4230/LIPIcs.CPM.2023.5},
  annote =	{Keywords: dynamic programming, incremental trees, exact algorithm}
}
Document
String Factorization via Prefix Free Families

Authors: Matan Kraus, Moshe Lewenstein, Alexandru Popa, Ely Porat, and Yonathan Sadia

Published in: LIPIcs, Volume 259, 34th Annual Symposium on Combinatorial Pattern Matching (CPM 2023)


Abstract
A factorization of a string S is a partition of S into substrings u_1, …, u_k such that S = u_1 u_2 ⋯ u_k. Such a partition is called equality-free if no two factors are equal: u_i ≠ u_j for all i ≠ j. The maximum equality-free factorization problem is to find, for a given string S, the largest integer k for which S admits an equality-free factorization with k factors. Equality-free factorizations have lately received attention because of their applications in DNA self-assembly. The best approximation algorithm known for the problem is the natural greedy algorithm, which iteratively chooses, from left to right, the shortest factor that has not appeared before. This algorithm has a √n approximation ratio (SOFSEM 2020), and it was an open problem whether a better guarantee is achievable. Our main result is that the natural greedy algorithm is a Θ(n^{1/4}) approximation algorithm for the maximum equality-free factorization problem. Thus, we disprove one of the conjectures of Mincu and Popa (SOFSEM 2020), according to which the greedy algorithm is a Θ(√n) approximation. The most challenging part of the proof is to show that the greedy algorithm is an O(n^{1/4}) approximation. We obtain this via prefix free factor families, i.e., sets of non-overlapping factors of the string that are pairwise non-prefixes of each other, and we show the relation between prefix free factor families and the maximum equality-free factorization. Moreover, as a byproduct we present another approximation algorithm that also achieves an O(n^{1/4}) approximation ratio, which we believe is of independent interest and may lead to improved algorithms. Finally, a careful analysis shows that the approximation ratio of the greedy algorithm is also Ω(n^{1/4}), which establishes the Θ(n^{1/4}) bound.
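
Since the greedy algorithm under analysis is fully specified above, here is a minimal Python sketch of it. The handling of the final leftover piece (merged into the last factor) is our own convention, and the function name is ours.

def greedy_equality_free_factorization(s: str) -> list[str]:
    """Left-to-right greedy: repeatedly take the shortest prefix of the
    remaining string that has not been used as a factor yet."""
    factors: list[str] = []
    used: set[str] = set()
    i, n = 0, len(s)
    while i < n:
        j = i + 1
        while j <= n and s[i:j] in used:
            j += 1
        if j > n:
            # every prefix of the remainder was already used; merge the
            # leftover into the previous factor (our convention)
            factors[-1] += s[i:]
            break
        factors.append(s[i:j])
        used.add(s[i:j])
        i = j
    return factors

# Example: all factors are pairwise distinct.
print(greedy_equality_free_factorization("abababab"))   # ['a', 'b', 'ab', 'abab']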

Cite as

Matan Kraus, Moshe Lewenstein, Alexandru Popa, Ely Porat, and Yonathan Sadia. String Factorization via Prefix Free Families. In 34th Annual Symposium on Combinatorial Pattern Matching (CPM 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 259, pp. 19:1-19:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{kraus_et_al:LIPIcs.CPM.2023.19,
  author =	{Kraus, Matan and Lewenstein, Moshe and Popa, Alexandru and Porat, Ely and Sadia, Yonathan},
  title =	{{String Factorization via Prefix Free Families}},
  booktitle =	{34th Annual Symposium on Combinatorial Pattern Matching (CPM 2023)},
  pages =	{19:1--19:10},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-276-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{259},
  editor =	{Bulteau, Laurent and Lipt\'{a}k, Zsuzsanna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CPM.2023.19},
  URN =		{urn:nbn:de:0030-drops-179738},
  doi =		{10.4230/LIPIcs.CPM.2023.19},
  annote =	{Keywords: string factorization, NP-hard problem, approximation algorithm}
}
Document
Efficient Algorithms for Counting Gapped Palindromes

Authors: Andrei Popa and Alexandru Popa

Published in: LIPIcs, Volume 191, 32nd Annual Symposium on Combinatorial Pattern Matching (CPM 2021)


Abstract
A gapped palindrome is a string of the form uvu^{R}, where u^{R} denotes the reverse of the string u. In this paper we present three efficient algorithms for counting the occurrences of gapped palindromes in a given string S of length N. First, we give an O(N) time algorithm for counting all gapped palindromes without additional constraints. Then, for the case where the length of v is constrained to lie in an interval [g, G], we give an algorithm with running time O(N log N). Finally, we give an O(N log² N) time algorithm for a more general case, in which we count the gapped palindromes uvu^{R} whose factor u^{R} starts at position i and whose gap satisfies g(i) ≤ |v| ≤ G(i), for all positions i.
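
To make the counting problem concrete, here is a brute-force Python counter of all ways to read a substring of S as u + v + reverse(u) with non-empty u and |v| in [g, G]. It only illustrates the definition (the counting convention may differ from the paper's) and is far slower than the paper's O(N), O(N log N) and O(N log² N) algorithms.

def count_gapped_palindromes(S: str, g: int, G: int) -> int:
    """Count triples (i, |u|, |v|) such that S[i:] starts with u + v + u[::-1],
    u is non-empty and g <= |v| <= G."""
    n, count = len(S), 0
    for i in range(n):                       # start of u
        for a in range(1, n - i + 1):        # a = |u|
            u = S[i:i + a]
            for b in range(g, G + 1):        # b = |v|
                j = i + a + b                # start of the reversed copy of u
                if j + a > n:
                    break
                if S[j:j + a] == u[::-1]:
                    count += 1
    return count

# Example: the three occurrences are "ab·xyz·ba", "a·bxyzb·a" and "b·xyz·b".
print(count_gapped_palindromes("abxyzba", g=1, G=5))   # 3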

Cite as

Andrei Popa and Alexandru Popa. Efficient Algorithms for Counting Gapped Palindromes. In 32nd Annual Symposium on Combinatorial Pattern Matching (CPM 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 191, pp. 23:1-23:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{popa_et_al:LIPIcs.CPM.2021.23,
  author =	{Popa, Andrei and Popa, Alexandru},
  title =	{{Efficient Algorithms for Counting Gapped Palindromes}},
  booktitle =	{32nd Annual Symposium on Combinatorial Pattern Matching (CPM 2021)},
  pages =	{23:1--23:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-186-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{191},
  editor =	{Gawrychowski, Pawe{\l} and Starikovskaya, Tatiana},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CPM.2021.23},
  URN =		{urn:nbn:de:0030-drops-139746},
  doi =		{10.4230/LIPIcs.CPM.2021.23},
  annote =	{Keywords: pattern matching, gapped palindromes, suffix tree}
}
Document
Complexity of Computing the Anti-Ramsey Numbers for Paths

Authors: Saeed Akhoondian Amiri, Alexandru Popa, Mohammad Roghani, Golnoosh Shahkarami, Reza Soltani, and Hossein Vahidi

Published in: LIPIcs, Volume 170, 45th International Symposium on Mathematical Foundations of Computer Science (MFCS 2020)


Abstract
The anti-Ramsey numbers are a fundamental notion in graph theory, introduced in 1978 by Erdős, Simonovits and Sós. For given graphs G and H, the anti-Ramsey number ar(G,H) is defined to be the maximum number k such that there exists an assignment of k colors to the edges of G in which every copy of H in G has at least two edges with the same color. Usually, combinatorialists study extremal values of anti-Ramsey numbers for various classes of graphs. There are works on the computational complexity of the problem when H is a star. Along this line of research, we study the complexity of computing the anti-Ramsey number ar(G,P_k), where P_k is a path of length k. First, we observe that when k is close to n, the problem is hard; hence, the challenging part is the computational complexity of the problem when k is a fixed constant. We provide a characterization of the problem for paths of constant length. Our first main contribution is to prove that computing ar(G,P_k) is NP-hard for every integer k > 2. We obtain this by establishing several structural properties of such colorings. We investigate further and show that approximating ar(G,P₃) to a factor of n^{-1/2 - ε} is hard already in 3-partite graphs, unless P = NP. We also study the exact complexity of the precolored version and show that, for any fixed constant k, there is no subexponential algorithm for the problem unless the ETH fails. Given the hardness of approximating and parameterizing the problem, it is natural to study it on restricted graph families. Along this line, we first introduce the notion of color connected coloring and, employing this structural property, we obtain a linear time algorithm to compute ar(G,P_k), for every integer k, when the host graph G is a tree.
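
To make the definition concrete: an edge coloring is feasible for ar(G, P_k) if G contains no "rainbow" copy of P_k (a path whose edges all receive distinct colors), and ar(G, P_k) is the largest number of colors used by a feasible coloring. The exhaustive Python checker below tests a coloring for a rainbow path; we assume "path of length k" means a path with k edges, the names are ours, and this brute-force check is unrelated to the paper's algorithms.

def has_rainbow_path(n: int, color: dict[frozenset[int], int], k: int) -> bool:
    """True iff the edge-colored graph on vertices 0..n-1 contains a simple
    path with k edges whose colors are pairwise distinct."""
    def extend(path: list[int], used: set[int]) -> bool:
        if len(path) == k + 1:               # the path already has k edges
            return True
        last = path[-1]
        for v in range(n):
            e = frozenset((last, v))
            if v in path or e not in color:
                continue
            c = color[e]
            if c in used:
                continue
            if extend(path + [v], used | {c}):
                return True
        return False

    return any(extend([s], set()) for s in range(n))

# Example: on a triangle, every coloring with two colors already contains a
# rainbow path with two edges, so ar(K_3, P_2) = 1 under this convention.
triangle = {frozenset((0, 1)): 1, frozenset((1, 2)): 1, frozenset((0, 2)): 2}
print(has_rainbow_path(3, triangle, k=2))   # True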

Cite as

Saeed Akhoondian Amiri, Alexandru Popa, Mohammad Roghani, Golnoosh Shahkarami, Reza Soltani, and Hossein Vahidi. Complexity of Computing the Anti-Ramsey Numbers for Paths. In 45th International Symposium on Mathematical Foundations of Computer Science (MFCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 170, pp. 6:1-6:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{akhoondianamiri_et_al:LIPIcs.MFCS.2020.6,
  author =	{Akhoondian Amiri, Saeed and Popa, Alexandru and Roghani, Mohammad and Shahkarami, Golnoosh and Soltani, Reza and Vahidi, Hossein},
  title =	{{Complexity of Computing the Anti-Ramsey Numbers for Paths}},
  booktitle =	{45th International Symposium on Mathematical Foundations of Computer Science (MFCS 2020)},
  pages =	{6:1--6:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-159-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{170},
  editor =	{Esparza, Javier and Kr\'{a}l', Daniel},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2020.6},
  URN =		{urn:nbn:de:0030-drops-126781},
  doi =		{10.4230/LIPIcs.MFCS.2020.6},
  annote =	{Keywords: Coloring, Anti-Ramsey, Approximation, NP-hard, Algorithm, ETH}
}
Document
The Use of a Pruned Modular Decomposition for Maximum Matching Algorithms on Some Graph Classes

Authors: Guillaume Ducoffe and Alexandru Popa

Published in: LIPIcs, Volume 123, 29th International Symposium on Algorithms and Computation (ISAAC 2018)


Abstract
We address the following general question: given a graph class C on which we can solve Maximum Matching in (quasi) linear time, does the same hold true for the class of graphs that can be modularly decomposed into C? As a way to answer this question for distance-hereditary graphs and some other superclasses of cographs, we study the combined effect of modular decomposition with a pruning process over the quotient subgraphs. We sequentially remove from all such subgraphs their so-called one-vertex extensions (i.e., pendant, anti-pendant, twin, universal and isolated vertices). Doing so, we obtain a "pruned modular decomposition" that can be computed in quasi-linear time. Our main result is that if all the pruned quotient subgraphs have bounded order, then a maximum matching can be computed in linear time. This result strictly extends a recent framework of (Coudert et al., SODA'18). Our work is the first to explain why the existence of a nice ordering over the modules of a graph, instead of just over its vertices, can help to speed up the computation of maximum matchings on some graph classes.
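
As an illustration of the pruning step only (the paper applies it to the quotient subgraphs of the modular decomposition, together with matching-related bookkeeping), here is a minimal Python sketch that repeatedly strips one-vertex extensions from a graph given as adjacency sets. The names, the twin-selection rule and the reading of "anti-pendant" as "adjacent to all but one other vertex" are our own simplifications.

def prune_one_vertex_extensions(adj: dict[int, set[int]]) -> dict[int, set[int]]:
    """Repeatedly remove isolated, pendant, universal, anti-pendant and twin
    vertices; for a twin pair, one representative is kept."""
    adj = {v: set(nb) for v, nb in adj.items()}   # work on a copy

    def remove(v: int) -> None:
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]

    changed = True
    while changed and adj:
        changed = False
        n = len(adj)
        for v in list(adj):
            deg = len(adj[v])
            if deg in (0, 1, n - 1, n - 2):
                # isolated, pendant, universal or anti-pendant vertex
                remove(v)
                changed = True
                break
        if changed:
            continue
        # look for a pair of (true or false) twins and drop one of them
        for v in list(adj):
            for u in list(adj):
                if u <= v:
                    continue
                if adj[v] - {u} == adj[u] - {v}:   # same neighborhood outside {u, v}
                    remove(u)
                    changed = True
                    break
            if changed:
                break
    return adj

# Example: a path on four vertices prunes down to nothing.
path4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(prune_one_vertex_extensions(path4))   # {}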

Cite as

Guillaume Ducoffe and Alexandru Popa. The Use of a Pruned Modular Decomposition for Maximum Matching Algorithms on Some Graph Classes. In 29th International Symposium on Algorithms and Computation (ISAAC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 123, pp. 6:1-6:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{ducoffe_et_al:LIPIcs.ISAAC.2018.6,
  author =	{Ducoffe, Guillaume and Popa, Alexandru},
  title =	{{The Use of a Pruned Modular Decomposition for Maximum Matching Algorithms on Some Graph Classes}},
  booktitle =	{29th International Symposium on Algorithms and Computation (ISAAC 2018)},
  pages =	{6:1--6:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-094-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{123},
  editor =	{Hsu, Wen-Lian and Lee, Der-Tsai and Liao, Chung-Shou},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2018.6},
  URN =		{urn:nbn:de:0030-drops-99549},
  doi =		{10.4230/LIPIcs.ISAAC.2018.6},
  annote =	{Keywords: maximum matching, FPT in P, modular decomposition, pruned graphs, one-vertex extensions, P\underline4-structure}
}
Document
The b-Matching Problem in Distance-Hereditary Graphs and Beyond

Authors: Guillaume Ducoffe and Alexandru Popa

Published in: LIPIcs, Volume 123, 29th International Symposium on Algorithms and Computation (ISAAC 2018)


Abstract
We make progress on the fine-grained complexity of Maximum-Cardinality Matching on graphs of bounded clique-width. Quasi-linear-time algorithms for this problem have recently been proposed for the important subclasses of bounded-treewidth graphs (Fomin et al., SODA'17) and graphs of bounded modular-width (Coudert et al., SODA'18). We present such an algorithm for bounded split-width graphs, a broad generalization of graphs of bounded modular-width, of which the distance-hereditary graphs are an interesting subclass. Specifically, we solve Maximum-Cardinality Matching in O((k log² k) · (m+n) · log n) time on graphs with split-width at most k. We stress that the existence of such an algorithm was not even known for distance-hereditary graphs until our work. In doing so, we improve the state of the art (Dragan, WG'97) and answer an open question of (Coudert et al., SODA'18). Our work brings new insights into the relationship between matchings and splits, a.k.a. join operations between two vertex subsets in different connected components. Furthermore, our analysis can be extended to the more general (unit cost) b-Matching problem. Along the way, we introduce new tools for b-Matching and for dynamic programming over split decompositions, which can be of independent interest.
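
For context only: on general graphs, the off-the-shelf baseline is the blossom-based matching algorithm available in standard libraries, which structure-specific algorithms such as the one above improve upon for graphs of bounded split-width. The networkx call below is our illustration and is not a tool used in the paper.

# Generic maximum-cardinality matching baseline via networkx's blossom-based
# max_weight_matching (edge weights default to 1, so maximum weight equals
# maximum cardinality). The paper's point is that on graphs of bounded
# split-width one can instead achieve quasi-linear time.
import networkx as nx

G = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)])
matching = nx.max_weight_matching(G, maxcardinality=True)
print(matching)   # e.g. {(1, 0), (3, 2)} -- a maximum matching of size 2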

Cite as

Guillaume Ducoffe and Alexandru Popa. The b-Matching Problem in Distance-Hereditary Graphs and Beyond. In 29th International Symposium on Algorithms and Computation (ISAAC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 123, pp. 30:1-30:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{ducoffe_et_al:LIPIcs.ISAAC.2018.30,
  author =	{Ducoffe, Guillaume and Popa, Alexandru},
  title =	{{The b-Matching Problem in Distance-Hereditary Graphs and Beyond}},
  booktitle =	{29th International Symposium on Algorithms and Computation (ISAAC 2018)},
  pages =	{30:1--30:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-094-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{123},
  editor =	{Hsu, Wen-Lian and Lee, Der-Tsai and Liao, Chung-Shou},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2018.30},
  URN =		{urn:nbn:de:0030-drops-99783},
  doi =		{10.4230/LIPIcs.ISAAC.2018.30},
  annote =	{Keywords: maximum-cardinality matching, b-matching, FPT in P, split decomposition, distance-hereditary graphs}
}
Document
Making "Fast" Atomic Operations Computationally Tractable

Authors: Antonio Fernández Anta, Nicolas Nicolaou, and Alexandru Popa

Published in: LIPIcs, Volume 46, 19th International Conference on Principles of Distributed Systems (OPODIS 2015)


Abstract
Communication overhead is the most commonly used performance metric for the operation complexity of distributed algorithms in message-passing environments. However, alongside communication, many distributed operations use complex computations to reach their desired outcomes. Therefore, a more accurate measure of operation latency should account for both computation and communication. In this paper we focus on the efficiency of read and write operations in an atomic read/write shared memory emulation in the message-passing environment. We examine the operation complexity of the best known atomic register algorithm, which allows all read and write operations to complete in a single communication round-trip. Such operations are called fast. At its heart, the algorithm uses a predicate that allows processes to compute their outcome. We show that the predicate used is computationally hard, by devising a computationally equivalent problem and reducing that to Maximum Biclique, a known NP-hard problem. To improve the computational complexity of the algorithm we derive a new predicate that leads to a new algorithm, which we call ccFast, with the following properties: (i) the predicate can be computed in polynomial time, rendering each read operation in ccFast tractable compared to the read operations in the original algorithm; (ii) the messages used in ccFast are smaller than those of the original algorithm by almost a linear factor; (iii) all operations in ccFast are fast; and (iv) ccFast preserves atomicity. A linear-time algorithm for the computation of the new predicate is presented, along with an analysis of the message complexity of the new algorithm. We believe that the new algorithm redefines the term "fast", capturing both the communication and the computation cost of each operation.

Cite as

Antonio Fernández Anta, Nicolas Nicolaou, and Alexandru Popa. Making "Fast" Atomic Operations Computationally Tractable. In 19th International Conference on Principles of Distributed Systems (OPODIS 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 46, pp. 19:1-19:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{fernandezanta_et_al:LIPIcs.OPODIS.2015.19,
  author =	{Fern\'{a}ndez Anta, Antonio and Nicolaou, Nicolas and Popa, Alexandru},
  title =	{{Making "Fast" Atomic Operations Computationally Tractable}},
  booktitle =	{19th International Conference on Principles of Distributed Systems (OPODIS 2015)},
  pages =	{19:1--19:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-98-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{46},
  editor =	{Anceaume, Emmanuelle and Cachin, Christian and Potop-Butucaru, Maria},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.OPODIS.2015.19},
  URN =		{urn:nbn:de:0030-drops-66108},
  doi =		{10.4230/LIPIcs.OPODIS.2015.19},
  annote =	{Keywords: atomicity, read/write objects, shared memory, computational complexity}
}