Search Results

Documents authored by Pettie, Seth


Document
Fraud Detection for Random Walks

Authors: Varsha Dani, Thomas P. Hayes, Seth Pettie, and Jared Saia

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
Traditional fraud detection is often based on finding statistical anomalies in data sets and transaction histories. A sophisticated fraudster, aware of the exact kinds of tests being deployed, might be difficult or impossible to catch. We are interested in paradigms for fraud detection that are provably robust against any adversary, no matter how sophisticated. In other words, the detection strategy should rely on signals in the data that are inherent in the goals the adversary is trying to achieve. Specifically, we consider a fraud detection game centered on a random walk on a graph. We assume this random walk is implemented by having a player at each vertex, who can be honest or not. In particular, when the random walk reaches a vertex owned by an honest player, it proceeds to a uniformly random neighbor at the next timestep. However, when the random walk reaches a dishonest player, it instead proceeds to an arbitrary neighbor chosen by an omniscient Adversary. The game is played between the Adversary and a Referee who sees the trajectory of the random walk. At any point during the random walk, if the Referee determines that a specific vertex is controlled by a dishonest player, the Referee accuses that player, and therefore wins the game. The Referee is allowed to make the occasional incorrect accusation, but must follow a policy that makes such mistakes with small probability. The goal of the Adversary is to make the cover time large, ideally infinite, i.e., the walk should never reach at least one vertex. We consider the following basic question: how much can the omniscient Adversary delay the cover time without getting caught? Our main result is a tight upper bound on this delay factor. We also discuss possible applications of our results to settings such as Rotor Walks, Leader Election, and Sybil Defense.
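
The game is easy to prototype. The following toy Python simulation is our own illustration of the setup only, not the paper's Referee policy or its bounds: the cycle graph, the single dishonest vertex, and its bounce-back strategy are all assumptions made for the example. It compares the empirical cover time of an honest walk with one where a dishonest vertex always routes the walk back where it came from.

import random

def cover_time(n, dishonest=None, trials=200):
    """Average cover time of a walk on the n-cycle. The optional
    dishonest vertex always sends the walk back where it came from
    (one arbitrary adversarial strategy, chosen for illustration)."""
    total = 0
    for _ in range(trials):
        pos, prev, seen, steps = 0, None, {0}, 0
        while len(seen) < n:
            if pos == dishonest and prev is not None:
                nxt = prev                       # adversary bounces the walk
            else:
                nxt = random.choice([(pos - 1) % n, (pos + 1) % n])
            prev, pos = pos, nxt
            seen.add(pos)
            steps += 1
        total += steps
    return total / trials

if __name__ == "__main__":
    n = 20
    print("honest cover time    :", cover_time(n))
    print("one dishonest vertex :", cover_time(n, dishonest=n // 2))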

Cite as

Varsha Dani, Thomas P. Hayes, Seth Pettie, and Jared Saia. Fraud Detection for Random Walks. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 36:1-36:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{dani_et_al:LIPIcs.ITCS.2024.36,
  author =	{Dani, Varsha and Hayes, Thomas P. and Pettie, Seth and Saia, Jared},
  title =	{{Fraud Detection for Random Walks}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{36:1--36:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.36},
  URN =		{urn:nbn:de:0030-drops-195645},
  doi =		{10.4230/LIPIcs.ITCS.2024.36},
  annote =	{Keywords: Fraud detection, random processes, Markov chains}
}
Document
Wake up and Join Me! An Energy-Efficient Algorithm for Maximal Matching in Radio Networks

Authors: Varsha Dani, Aayush Gupta, Thomas P. Hayes, and Seth Pettie

Published in: LIPIcs, Volume 209, 35th International Symposium on Distributed Computing (DISC 2021)


Abstract
We consider networks of small, autonomous devices that communicate with each other wirelessly. Minimizing energy usage is an important consideration in designing algorithms for such networks, as battery life is a crucial and limited resource. Working in a model where both sending and listening for messages deplete energy, we consider the problem of finding a maximal matching of the nodes in a radio network of arbitrary and unknown topology. We present a distributed randomized algorithm that produces, with high probability, a maximal matching. The maximum energy cost per node is O(log² n), and the time complexity is O(Δ log n). Here n is any upper bound on the number of nodes, and Δ is any upper bound on the maximum degree; n and Δ are parameters of our algorithm that we assume are known a priori to all the processors. We note that there exist families of graphs for which our bounds on energy cost and time complexity are simultaneously optimal up to polylog factors, so any significant improvement would need additional assumptions about the network topology. We also consider the related problem of assigning, for each node in the network, a neighbor to back up its data in case of eventual node failure. Here, a key goal is to minimize the maximum load, defined as the number of nodes assigned to a single node. We present an efficient decentralized low-energy algorithm that finds a neighbor assignment whose maximum load is at most a polylog(n) factor bigger than the optimum.
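
For intuition about the matching objective alone, here is a toy round-based randomized matching simulation in Python. It is emphatically not the paper's algorithm: it ignores the radio model and energy accounting entirely, and it assumes every node can see which neighbors are still unmatched, an assumption the paper's model does not grant.

import random

def toy_matching(adj, max_rounds=10_000):
    """Each round, every free node proposes to a random free neighbor;
    mutual proposals become matched pairs. Stops once the matching is
    maximal (no edge joins two free nodes)."""
    matched = {}
    for _ in range(max_rounds):
        free = [v for v in adj if v not in matched]
        if not any(u in adj[v] for v in free for u in free if u > v):
            break                                  # maximal: nothing to add
        proposal = {}
        for v in free:
            candidates = [u for u in adj[v] if u not in matched]
            if candidates:
                proposal[v] = random.choice(candidates)
        for v, u in proposal.items():
            if proposal.get(u) == v and v not in matched and u not in matched:
                matched[v], matched[u] = u, v
    return matched

if __name__ == "__main__":
    n = 30
    adj = {v: set() for v in range(n)}
    while sum(len(s) for s in adj.values()) < 2 * 60:   # ~60 random edges
        u, v = random.sample(range(n), 2)
        adj[u].add(v); adj[v].add(u)
    print("matched pairs:", len(toy_matching(adj)) // 2)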

Cite as

Varsha Dani, Aayush Gupta, Thomas P. Hayes, and Seth Pettie. Wake up and Join Me! An Energy-Efficient Algorithm for Maximal Matching in Radio Networks. In 35th International Symposium on Distributed Computing (DISC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 209, pp. 19:1-19:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{dani_et_al:LIPIcs.DISC.2021.19,
  author =	{Dani, Varsha and Gupta, Aayush and Hayes, Thomas P. and Pettie, Seth},
  title =	{{Wake up and Join Me! An Energy-Efficient Algorithm for Maximal Matching in Radio Networks}},
  booktitle =	{35th International Symposium on Distributed Computing (DISC 2021)},
  pages =	{19:1--19:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-210-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{209},
  editor =	{Gilbert, Seth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2021.19},
  URN =		{urn:nbn:de:0030-drops-148219},
  doi =		{10.4230/LIPIcs.DISC.2021.19},
  annote =	{Keywords: Distributed Algorithms, Energy-Aware Computation, Radio Networks, Maximal Matching, Sensor Networks}
}
Document
Incremental SCC Maintenance in Sparse Graphs

Authors: Aaron Bernstein, Aditi Dudeja, and Seth Pettie

Published in: LIPIcs, Volume 204, 29th Annual European Symposium on Algorithms (ESA 2021)


Abstract
In the incremental cycle detection problem, edges are added to a directed graph (initially empty), and the algorithm has to report the first cycle as soon as it is formed. A closely related problem is the incremental topological sort problem, where edges are added to an acyclic graph, and the algorithm is required to maintain a valid topological ordering. Since these problems arise naturally in many applications such as scheduling tasks, pointer analysis, and circuit evaluation, they have been studied extensively in the last three decades. Motivated by the fact that in many of these applications the presence of a cycle is not fatal, we study a generalization of these problems: incremental maintenance of strongly connected components (incremental SCC). Several incremental algorithms in the literature that do cycle detection and topological sort in directed acyclic graphs, such as those of [Michael A. Bender et al., 2016] and [Haeupler et al., 2012], also generalize to maintain strongly connected components and their topological sort in general directed graphs. The algorithms of [Haeupler et al., 2012] and [Michael A. Bender et al., 2016] have total update times of O(m^{3/2}) and O(m⋅min{m^{1/2}, n^{2/3}}) respectively, and this is the state of the art for incremental SCC. But the most recent algorithms for incremental cycle detection and topological sort ([Bernstein and Chechik, 2018] and [Bhattacharya and Kulkarni, 2020]), which yield total (randomized) update time Õ(min{m^{4/3}, n²}), do not extend to incremental SCC. Thus, there is a gap between the best known algorithms for these two closely related problems. In this paper, we bridge this gap by extending the framework of [Bhattacharya and Kulkarni, 2020] to general directed graphs. More concretely, we give a Las Vegas algorithm for incremental SCC with an expected total update time of Õ(m^{4/3}). A key ingredient in the algorithm of [Bhattacharya and Kulkarni, 2020] is a structural theorem (first introduced in [Bernstein and Chechik, 2018]) that bounds the number of "equivalent" vertices. Unfortunately, this theorem only applies to DAGs. We show a natural way to extend this structural theorem to general directed graphs, and along the way we develop a significantly simpler and more intuitive proof of this theorem.
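
To fix the problem statement, here is the naive Python baseline for incremental cycle detection: on inserting (u, v), search for a path from v back to u. Each insertion costs O(m) in the worst case, O(m²) in total, far from the bounds discussed above; for incremental SCC one would contract the detected cycle (e.g., with union-find) rather than merely report it.

class IncrementalCycleDetector:
    """Naive baseline: one O(m)-time DFS per edge insertion."""

    def __init__(self):
        self.adj = {}

    def insert(self, u, v):
        """Add edge u -> v; return True iff it closes a cycle."""
        if u == v or self._reaches(v, u):   # is u already reachable from v?
            return True                     # then u -> v closes a cycle
        self.adj.setdefault(u, set()).add(v)
        return False

    def _reaches(self, src, dst):
        stack, seen = [src], {src}
        while stack:
            x = stack.pop()
            if x == dst:
                return True
            for y in self.adj.get(x, ()):
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        return False

d = IncrementalCycleDetector()
print(d.insert(1, 2), d.insert(2, 3), d.insert(3, 1))   # False False True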

Cite as

Aaron Bernstein, Aditi Dudeja, and Seth Pettie. Incremental SCC Maintenance in Sparse Graphs. In 29th Annual European Symposium on Algorithms (ESA 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 204, pp. 14:1-14:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{bernstein_et_al:LIPIcs.ESA.2021.14,
  author =	{Bernstein, Aaron and Dudeja, Aditi and Pettie, Seth},
  title =	{{Incremental SCC Maintenance in Sparse Graphs}},
  booktitle =	{29th Annual European Symposium on Algorithms (ESA 2021)},
  pages =	{14:1--14:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-204-4},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{204},
  editor =	{Mutzel, Petra and Pagh, Rasmus and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2021.14},
  URN =		{urn:nbn:de:0030-drops-145950},
  doi =		{10.4230/LIPIcs.ESA.2021.14},
  annote =	{Keywords: Directed Graphs, Strongly Connected Components, Dynamic Graph Algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Non-Mergeable Sketching for Cardinality Estimation

Authors: Seth Pettie, Dingyu Wang, and Longhui Yin

Published in: LIPIcs, Volume 198, 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)


Abstract
Cardinality estimation is perhaps the simplest non-trivial statistical problem that can be solved via sketching. Industrially deployed sketches like HyperLogLog, MinHash, and PCSA are mergeable, which means that large data sets can be sketched in a distributed environment, and then merged into a single sketch of the whole data set. In the last decade a variety of sketches have been developed that are non-mergeable, but attractive for other reasons. They are simpler, their cardinality estimates are strictly unbiased, and they have substantially lower variance. We evaluate sketching schemes on a reasonably level playing field, in terms of their memory-variance product (MVP). E.g., a sketch that occupies 5m bits and whose relative variance is 2/m (standard error √(2/m)) has an MVP of 10. Our contributions are as follows. - Cohen [Edith Cohen, 2015] and Ting [Daniel Ting, 2014] independently discovered what we call the Martingale transform for converting a mergeable sketch into a non-mergeable sketch. We present a simpler way to analyze the limiting MVP of Martingale-type sketches. - Pettie and Wang proved that the Fishmonger sketch [Seth Pettie and Dingyu Wang, 2021] has the best MVP, H₀/I₀ ≈ 1.98, among a class of mergeable sketches called "linearizable" sketches. (H₀ and I₀ are precisely defined constants.) We prove that the Martingale transform is optimal in the non-mergeable world, and that Martingale Fishmonger in particular is optimal among linearizable sketches, with an MVP of H₀/2 ≈ 1.63. For example, this is circumstantial evidence that to achieve 1% standard error, we cannot do better than a 2-kilobyte sketch. - Martingale Fishmonger is neither simple nor practical. We develop a new mergeable sketch called Curtain that strikes a nice balance between simplicity and efficiency, and prove that Martingale Curtain has limiting MVP ≈ 2.31. It can be updated with O(1) memory accesses and it has lower empirical variance than Martingale LogLog, a practical non-mergeable version of HyperLogLog.
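
The Martingale transform itself is compact: alongside the mergeable sketch, keep a running estimate C, and whenever an incoming element is about to change the sketch's state, add 1/p to C, where p is the probability that a random unseen element would change the current state. A minimal Python rendering of Martingale LogLog follows; this is our own simplified sketch, not code from the paper, and the register count and toy hashing are arbitrary choices.

class MartingaleLogLog:
    """m registers; register j keeps the max 'rank' (geometric value with
    P(rank > k) = 2^-k) among elements hashed to j. The martingale
    estimator adds 1/p whenever the state changes, where
    p = (1/m) * sum_j 2^{-M[j]} is the current change probability."""

    def __init__(self, m=256):
        self.m = m
        self.M = [0] * m
        self.C = 0.0        # running cardinality estimate
        self.p = 1.0        # P(a random new element changes the state)

    def add(self, x):
        j = hash(('reg', x)) % self.m
        g = hash(('rank', x)) & (2**64 - 1)
        r = 1
        while g & 1:        # count trailing 1-bits: P(r > k) = 2^-k
            r += 1
            g >>= 1
        if r > self.M[j]:   # state changes: credit 1/p, then update p
            self.C += 1.0 / self.p
            self.p -= (2.0**-self.M[j] - 2.0**-r) / self.m
            self.M[j] = r

    def estimate(self):
        return self.C

s = MartingaleLogLog()
for x in range(100_000):
    s.add(x)                    # duplicates never change state, so no credit
print(round(s.estimate()))     # unbiased; typically within a few percent here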

Cite as

Seth Pettie, Dingyu Wang, and Longhui Yin. Non-Mergeable Sketching for Cardinality Estimation. In 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 198, pp. 104:1-104:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{pettie_et_al:LIPIcs.ICALP.2021.104,
  author =	{Pettie, Seth and Wang, Dingyu and Yin, Longhui},
  title =	{{Non-Mergeable Sketching for Cardinality Estimation}},
  booktitle =	{48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)},
  pages =	{104:1--104:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-195-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{198},
  editor =	{Bansal, Nikhil and Merelli, Emanuela and Worrell, James},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2021.104},
  URN =		{urn:nbn:de:0030-drops-141731},
  doi =		{10.4230/LIPIcs.ICALP.2021.104},
  annote =	{Keywords: Cardinality Estimation, Sketching}
}
Document
Track A: Algorithms, Complexity and Games
The Structure of Minimum Vertex Cuts

Authors: Seth Pettie and Longhui Yin

Published in: LIPIcs, Volume 198, 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)


Abstract
In this paper we continue a long line of work on representing the cut structure of graphs. We classify the types of minimum vertex cuts, and the possible relationships between multiple minimum vertex cuts. As a consequence of these investigations, we exhibit a simple O(κn)-space data structure that can quickly answer pairwise (κ+1)-connectivity queries in a κ-connected graph. We also show how to compute the "closest" κ-cut to every vertex in near-linear Õ(m + poly(κ)n) time.
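
For a concrete feel for the objects involved, the snippet below recomputes minimum vertex cuts from scratch with max-flow via networkx (assumed installed); the point of the paper's O(κn)-space structure is precisely to answer such pairwise queries without this recomputation.

import networkx as nx

G = nx.hypercube_graph(3)                  # the 3-cube: kappa = 3
kappa = nx.node_connectivity(G)
print("kappa =", kappa)

# Pick any non-adjacent pair and ask the (kappa+1)-connectivity question
# the data structure answers, here via an explicit min vertex cut:
s, t = next((u, v) for u in G for v in G if u != v and not G.has_edge(u, v))
cut = nx.minimum_node_cut(G, s, t)
print(s, t, "min vertex cut:", cut)
print("(kappa+1)-connected pair:", len(cut) >= kappa + 1)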

Cite as

Seth Pettie and Longhui Yin. The Structure of Minimum Vertex Cuts. In 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 198, pp. 105:1-105:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{pettie_et_al:LIPIcs.ICALP.2021.105,
  author =	{Pettie, Seth and Yin, Longhui},
  title =	{{The Structure of Minimum Vertex Cuts}},
  booktitle =	{48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)},
  pages =	{105:1--105:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-195-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{198},
  editor =	{Bansal, Nikhil and Merelli, Emanuela and Worrell, James},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2021.105},
  URN =		{urn:nbn:de:0030-drops-141746},
  doi =		{10.4230/LIPIcs.ICALP.2021.105},
  annote =	{Keywords: Graph theory, vertex connectivity, data structures}
}
Document
Simple Contention Resolution via Multiplicative Weight Updates

Authors: Yi-Jun Chang, Wenyu Jin, and Seth Pettie

Published in: OASIcs, Volume 69, 2nd Symposium on Simplicity in Algorithms (SOSA 2019)


Abstract
We consider the classic contention resolution problem, in which devices conspire to share some common resource, for which they each need temporary and exclusive access. To ground the discussion, suppose (identical) devices wake up at various times, and must send a single packet over a shared multiple-access channel. In each time step they may attempt to send their packet; they receive ternary feedback {0, 1, 2^+} from the channel: 0 indicating silence (no one attempted transmission), 1 indicating success (one device successfully transmitted), and 2^+ indicating noise. We prove that a simple strategy suffices to achieve a channel utilization rate of 1/e - O(ε), for any ε > 0. In each step, device i attempts to send its packet with probability p_i, then applies a rudimentary multiplicative weight-type update to p_i: p_i ← p_i · e^ε upon hearing silence (0); p_i unchanged upon hearing success (1); p_i · e^{-ε/(e-2)} upon hearing noise (2^+). This scheme works well even if the introduction of devices/packets is adversarial, and even if the adversary can jam time slots (make noise) at will. We prove that if the adversary jams J time slots, then this scheme will achieve channel utilization 1/e - ε, excluding O(J) wasted slots. Similar results were achieved previously (Bender, Fineman, Gilbert, Young, SODA 2016), but with a lower constant efficiency (less than 0.05) and a more complex algorithm.
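
The update rule is simple enough to simulate directly. The Python sketch below is our own toy (all devices wake at time 0; the parameter choices are arbitrary): it applies exactly the three-way update above and reports the empirical utilization, which should land near 1/e - O(ε).

import math
import random

def simulate(n=100, eps=0.1, max_steps=100_000):
    """n devices wake simultaneously with p_i = 1/n and follow the
    multiplicative-weight update until every packet is delivered."""
    up = math.exp(eps)                       # reaction to silence (0)
    down = math.exp(-eps / (math.e - 2))     # reaction to noise (2+)
    p = {i: 1.0 / n for i in range(n)}
    sent = 0
    for t in range(1, max_steps + 1):
        senders = [i for i in p if random.random() < p[i]]
        if len(senders) == 1:                # feedback 1: success
            del p[senders[0]]
            sent += 1
            if not p:
                return sent / t              # fraction of slots used
        elif not senders:                    # feedback 0: silence
            for i in p:
                p[i] = min(1.0, p[i] * up)
        else:                                # feedback 2+: noise
            for i in p:
                p[i] *= down
    return sent / max_steps

print("utilization ~", round(simulate(), 3))   # roughly 1/e - O(eps)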

Cite as

Yi-Jun Chang, Wenyu Jin, and Seth Pettie. Simple Contention Resolution via Multiplicative Weight Updates. In 2nd Symposium on Simplicity in Algorithms (SOSA 2019). Open Access Series in Informatics (OASIcs), Volume 69, pp. 16:1-16:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{chang_et_al:OASIcs.SOSA.2019.16,
  author =	{Chang, Yi-Jun and Jin, Wenyu and Pettie, Seth},
  title =	{{Simple Contention Resolution via Multiplicative Weight Updates}},
  booktitle =	{2nd Symposium on Simplicity in Algorithms (SOSA 2019)},
  pages =	{16:1--16:16},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-099-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{69},
  editor =	{Fineman, Jeremy T. and Mitzenmacher, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SOSA.2019.16},
  URN =		{urn:nbn:de:0030-drops-100426},
  doi =		{10.4230/OASIcs.SOSA.2019.16},
  annote =	{Keywords: Contention resolution, multiplicative weight update method}
}
Document
Fine-grained Lower Bounds on Cops and Robbers

Authors: Sebastian Brandt, Seth Pettie, and Jara Uitto

Published in: LIPIcs, Volume 112, 26th Annual European Symposium on Algorithms (ESA 2018)


Abstract
Cops and Robbers is a classic pursuit-evasion game played between a group of g cops and one robber on an undirected N-vertex graph G. We prove that deciding the winner of the game under optimal play requires Ω(N^{g-o(1)}) time on instances with O(N log² N) edges, conditioned on the Strong Exponential Time Hypothesis. Moreover, calculating the minimum number of cops needed to win the game requires 2^{Ω(√N)} time, conditioned on the weaker Exponential Time Hypothesis. Our conditional lower bound comes very close to a conditional upper bound: if Meyniel's conjecture holds then the cop number can be decided in 2^{O(√N log N)} time. In recent years, the Strong Exponential Time Hypothesis has been used to obtain many lower bounds on classic combinatorial problems, such as graph diameter, LCS, EDIT-DISTANCE, and REGEXP matching. To our knowledge, these are the first conditional (S)ETH-hard lower bounds on a strategic game.
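
On small instances the game can be solved verbatim by backward induction. The Python sketch below is our own compact rendering of the standard rules (cops choose start positions, then the robber does; either side may stay put); its Θ(N^{g+1}) state space is exactly where the Ω(N^{g-o(1)}) hardness bites.

from itertools import product

def cops_win(adj, g):
    """Backward induction over states (cop positions, robber position,
    side to move); side 0 = cops. Returns True iff g cops can force
    capture on the graph adj = {v: set(neighbors)}."""
    V = list(adj)
    moves = {v: [v] + sorted(adj[v]) for v in V}       # staying is allowed
    states = [(c, r, s) for c in product(V, repeat=g)
              for r in V for s in (0, 1)]
    win = {st: st[1] in st[0] for st in states}        # robber already caught
    changed = True
    while changed:                                     # attractor fixpoint
        changed = False
        for (c, r, s) in states:
            if win[(c, r, s)]:
                continue
            if s == 0:   # cops win if SOME joint move reaches a win state
                ok = any(win[(mc, r, 1)]
                         for mc in product(*(moves[x] for x in c)))
            else:        # robber is lost if EVERY move lands in a win state
                ok = all(win[(c, m, 0)] for m in moves[r])
            if ok:
                win[(c, r, s)] = changed = True
    return any(all(win[(c, r, 0)] for r in V) for c in product(V, repeat=g))

C4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(cops_win(C4, 1), cops_win(C4, 2))   # False True: cop number of C4 is 2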

Cite as

Sebastian Brandt, Seth Pettie, and Jara Uitto. Fine-grained Lower Bounds on Cops and Robbers. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 9:1-9:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{brandt_et_al:LIPIcs.ESA.2018.9,
  author =	{Brandt, Sebastian and Pettie, Seth and Uitto, Jara},
  title =	{{Fine-grained Lower Bounds on Cops and Robbers}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{9:1--9:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.9},
  URN =		{urn:nbn:de:0030-drops-94725},
  doi =		{10.4230/LIPIcs.ESA.2018.9},
  annote =	{Keywords: Cops and Robbers}
}
Document
Improved Bounds for Multipass Pairing Heaps and Path-Balanced Binary Search Trees

Authors: Dani Dorfman, Haim Kaplan, László Kozma, Seth Pettie, and Uri Zwick

Published in: LIPIcs, Volume 112, 26th Annual European Symposium on Algorithms (ESA 2018)


Abstract
We revisit multipass pairing heaps and path-balanced binary search trees (BSTs), two classical algorithms for data structure maintenance. The pairing heap is a simple and efficient "self-adjusting" heap, introduced in 1986 by Fredman, Sedgewick, Sleator, and Tarjan. In the multipass variant (one of the original pairing heap variants described by Fredman et al.) the minimum item is extracted via repeated pairing rounds in which neighboring siblings are linked. Path-balanced BSTs, proposed by Sleator (cf. Subramanian, 1996), are a natural alternative to Splay trees (Sleator and Tarjan, 1983). In a path-balanced BST, whenever an item is accessed, the search path leading to that item is re-arranged into a balanced tree. Despite their simplicity, both algorithms turned out to be difficult to analyse. Fredman et al. showed that operations in multipass pairing heaps take amortized O(log n · log log n / log log log n) time. For searching in path-balanced BSTs, Balasubramanian and Raman showed in 1995 the same amortized time bound of O(log n · log log n / log log log n), using a different argument. In this paper we show an explicit connection between the two algorithms and improve both bounds to O(log n · 2^{log* n} · log* n) and O(log n · 2^{log* n} · (log* n)²), respectively, where log* denotes the slowly growing iterated logarithm function. These are the first improvements in more than three, resp. two, decades, approaching the information-theoretic lower bound of Ω(log n).
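
The multipass extraction rule is only a few lines. Here is a minimal Python pairing heap supporting insert and deletemin via repeated pairing rounds, our own compact sketch just to make the variant under analysis concrete.

class Node:
    __slots__ = ('key', 'children')
    def __init__(self, key):
        self.key = key
        self.children = []

def link(a, b):
    """Pairing link: the root with the larger key becomes a child."""
    if b.key < a.key:
        a, b = b, a
    a.children.append(b)
    return a

class MultipassPairingHeap:
    def __init__(self):
        self.root = None

    def insert(self, key):
        node = Node(key)
        self.root = node if self.root is None else link(self.root, node)

    def delete_min(self):
        smallest = self.root.key
        trees = self.root.children
        while len(trees) > 1:          # multipass: keep running pairing rounds
            trees = [link(trees[i], trees[i + 1]) if i + 1 < len(trees)
                     else trees[i] for i in range(0, len(trees), 2)]
        self.root = trees[0] if trees else None
        return smallest

h = MultipassPairingHeap()
for k in [5, 3, 8, 1, 9, 2]:
    h.insert(k)
print([h.delete_min() for _ in range(6)])   # [1, 2, 3, 5, 8, 9]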

Cite as

Dani Dorfman, Haim Kaplan, László Kozma, Seth Pettie, and Uri Zwick. Improved Bounds for Multipass Pairing Heaps and Path-Balanced Binary Search Trees. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 24:1-24:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{dorfman_et_al:LIPIcs.ESA.2018.24,
  author =	{Dorfman, Dani and Kaplan, Haim and Kozma, L\'{a}szl\'{o} and Pettie, Seth and Zwick, Uri},
  title =	{{Improved Bounds for Multipass Pairing Heaps and Path-Balanced Binary Search Trees}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{24:1--24:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.24},
  URN =		{urn:nbn:de:0030-drops-94879},
  doi =		{10.4230/LIPIcs.ESA.2018.24},
  annote =	{Keywords: data structure, priority queue, pairing heap, binary search tree}
}
Document
Lower Bounds on Sparse Spanners, Emulators, and Diameter-reducing shortcuts

Authors: Shang-En Huang and Seth Pettie

Published in: LIPIcs, Volume 101, 16th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2018)


Abstract
We prove better lower bounds on additive spanners and emulators, which are lossy compression schemes for undirected graphs, as well as lower bounds on shortcut sets, which reduce the diameter of directed graphs. We show that any O(n)-size shortcut set cannot bring the diameter below Ω(n^{1/6}), and that any O(m)-size shortcut set cannot bring it below Ω(n^{1/11}). These improve Hesse's [Hesse, 2003] lower bound of Ω(n^{1/17}). By combining these constructions with Abboud and Bodwin's [Abboud and Bodwin, 2017] edge-splitting technique, we get additive stretch lower bounds of +Ω(n^{1/13}) for O(n)-size spanners and +Ω(n^{1/18}) for O(n)-size emulators. These improve Abboud and Bodwin's +Ω(n^{1/22}) lower bounds.

Cite as

Shang-En Huang and Seth Pettie. Lower Bounds on Sparse Spanners, Emulators, and Diameter-reducing shortcuts. In 16th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 101, pp. 26:1-26:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{huang_et_al:LIPIcs.SWAT.2018.26,
  author =	{Huang, Shang-En and Pettie, Seth},
  title =	{{Lower Bounds on Sparse Spanners, Emulators, and Diameter-reducing shortcuts}},
  booktitle =	{16th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2018)},
  pages =	{26:1--26:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-068-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{101},
  editor =	{Eppstein, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SWAT.2018.26},
  URN =		{urn:nbn:de:0030-drops-88521},
  doi =		{10.4230/LIPIcs.SWAT.2018.26},
  annote =	{Keywords: additive spanners, emulators, shortcutting directed graphs}
}
Document
Simultaneously Load Balancing for Every p-norm, With Reassignments

Authors: Aaron Bernstein, Tsvi Kopelowitz, Seth Pettie, Ely Porat, and Clifford Stein

Published in: LIPIcs, Volume 67, 8th Innovations in Theoretical Computer Science Conference (ITCS 2017)


Abstract
This paper investigates the task of load balancing where the objective function is to minimize the p-norm of loads, for p ≥ 1, in both static and incremental settings. We consider two closely related load balancing problems. In the bipartite matching problem we are given a bipartite graph G = (C ∪ S, E) and the goal is to assign each client c ∈ C to a server s ∈ S so that the p-norm of assignment loads on S is minimized. In the graph orientation problem the goal is to orient (direct) the edges of a given undirected graph while minimizing the p-norm of the out-degrees. The graph orientation problem is a special case of the bipartite matching problem, but less complex, which leads to simpler algorithms. For the graph orientation problem we show that the celebrated Chiba-Nishizeki peeling algorithm provides a simple linear-time load balancing scheme whose output is an orientation that is 2-competitive, in a p-norm sense, for all p ≥ 1. For the bipartite matching problem we first provide an offline algorithm that computes an optimal assignment. We then extend this solution to the online bipartite matching problem with reassignments, where vertices from C arrive in an online fashion together with their corresponding edges, and we are allowed to reassign an amortized O(1) vertices from C each time a new vertex arrives. In this online scenario we show how to maintain a single assignment that is 8-competitive, in a p-norm sense, for all p ≥ 1.
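
For the orientation problem, the peeling scheme is short enough to state in code. The Python sketch below is our own rendering: it repeatedly removes a minimum-degree vertex and orients its remaining edges outward, so every out-degree is at most the graph's degeneracy. It uses a binary heap for brevity, so it runs in O(m log n) rather than the linear time achievable with bucketed degree lists.

import heapq

def peel_orient(adj):
    """Orient an undirected graph {v: set(nbrs)} by min-degree peeling."""
    deg = {v: len(adj[v]) for v in adj}
    live = {v: set(adj[v]) for v in adj}
    heap = [(d, v) for v, d in deg.items()]
    heapq.heapify(heap)
    removed, orientation = set(), []
    while heap:
        d, v = heapq.heappop(heap)
        if v in removed or d != deg[v]:
            continue                        # stale heap entry; skip it
        removed.add(v)
        for u in live[v]:
            orientation.append((v, u))      # orient v -> u
            live[u].discard(v)
            deg[u] -= 1
            heapq.heappush(heap, (deg[u], u))
    return orientation

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
out_deg = {}
for u, v in peel_orient(adj):
    out_deg[u] = out_deg.get(u, 0) + 1
print(out_deg)   # every out-degree <= degeneracy (here 2)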

Cite as

Aaron Bernstein, Tsvi Kopelowitz, Seth Pettie, Ely Porat, and Clifford Stein. Simultaneously Load Balancing for Every p-norm, With Reassignments. In 8th Innovations in Theoretical Computer Science Conference (ITCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 67, pp. 51:1-51:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{bernstein_et_al:LIPIcs.ITCS.2017.51,
  author =	{Bernstein, Aaron and Kopelowitz, Tsvi and Pettie, Seth and Porat, Ely and Stein, Clifford},
  title =	{{Simultaneously Load Balancing for Every p-norm, With Reassignments}},
  booktitle =	{8th Innovations in Theoretical Computer Science Conference (ITCS 2017)},
  pages =	{51:1--51:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-029-3},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{67},
  editor =	{Papadimitriou, Christos H.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2017.51},
  URN =		{urn:nbn:de:0030-drops-82009},
  doi =		{10.4230/LIPIcs.ITCS.2017.51},
  annote =	{Keywords: Online Matching, Graph Orientation, Minimizing the p-norm}
}
Document
Structure and Hardness in P (Dagstuhl Seminar 16451)

Authors: Moshe Lewenstein, Seth Pettie, and Virginia Vassilevska Williams

Published in: Dagstuhl Reports, Volume 6, Issue 11 (2017)


Abstract
This document contains descriptions of the talks given at Dagstuhl Seminar 16451 "Structure and Hardness in P". The main goal of the seminar was to bring together researchers from several disciplines and to connect those who work on proving conditional lower bounds with those who may benefit from them. This resulted in an extensive list of open problems, which is also provided.

Cite as

Moshe Lewenstein, Seth Pettie, and Virginia Vassilevska Williams. Structure and Hardness in P (Dagstuhl Seminar 16451). In Dagstuhl Reports, Volume 6, Issue 11, pp. 1-34, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@Article{lewenstein_et_al:DagRep.6.11.1,
  author =	{Lewenstein, Moshe and Pettie, Seth and Vassilevska Williams, Virginia},
  title =	{{Structure and Hardness in P (Dagstuhl Seminar 16451)}},
  pages =	{1--34},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2017},
  volume =	{6},
  number =	{11},
  editor =	{Lewenstein, Moshe and Pettie, Seth and Vassilevska Williams, Virginia},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.6.11.1},
  URN =		{urn:nbn:de:0030-drops-70373},
  doi =		{10.4230/DagRep.6.11.1},
  annote =	{Keywords: Algorithmic equivalences, Classifying P, Hardness assumptions, Lower bounds}
}
Document
Mind the Gap: Essentially Optimal Algorithms for Online Dictionary Matching with One Gap

Authors: Amihood Amir, Tsvi Kopelowitz, Avivit Levy, Seth Pettie, Ely Porat, and B. Riva Shalom

Published in: LIPIcs, Volume 64, 27th International Symposium on Algorithms and Computation (ISAAC 2016)


Abstract
We examine the complexity of the online Dictionary Matching with One Gap problem (DMOG), which is the following: preprocess a dictionary D of d patterns, where each pattern contains a special gap symbol that can match any string, so that given a text that arrives online, a character at a time, we can report all of the patterns from D that are suffixes of the text that has arrived so far, before the next character arrives. In more general versions the gap symbols are associated with bounds determining the possible lengths of matching strings. Online DMOG captures the difficulty in a bottleneck procedure for cyber-security, as many digital signatures of viruses manifest themselves as patterns with a single gap. In this paper, we demonstrate that the difficulty in obtaining efficient solutions for the DMOG problem, even in the offline setting, can be traced back to the infamous 3SUM conjecture. We show a conditional lower bound of Ω(δ(G_D) + op) time per text character, where G_D is a bipartite graph that captures the structure of D, δ(G_D) is the degeneracy of this graph, and op is the output size. Moreover, we show a conditional lower bound in terms of the magnitude of gaps for the bounded case, thereby showing that some known offline upper bounds are essentially optimal. We also provide matching upper bounds (up to sub-polynomial factors), in terms of the degeneracy, for the online DMOG problem. In particular, we introduce algorithms whose time cost depends linearly on δ(G_D). Our algorithms make use of graph orientations, together with some additional techniques. These algorithms are of practical interest since although δ(G_D) can be as large as √d, and even larger if G_D is a multi-graph, it is typically a very small constant in practice. Finally, when δ(G_D) is large we are able to obtain even more efficient solutions.
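
As a baseline that pins down the problem (our own rendering, with bounded gaps written as (p1, lo, hi, p2); nothing here approaches the degeneracy-based bounds), consider the naive online matcher below, which rescans the text's suffix on every arriving character.

class NaiveDMOG:
    """Patterns are (p1, lo, hi, p2): p1, a gap of lo..hi characters,
    then p2. feed(ch) returns the indices of patterns whose occurrence
    ends exactly at the newly arrived character."""

    def __init__(self, patterns):
        self.patterns = patterns
        self.text = ''
        self.p1_ends = [[] for _ in patterns]   # where each p1 has ended

    def feed(self, ch):
        self.text += ch
        n = len(self.text)
        hits = []
        for k, (p1, lo, hi, p2) in enumerate(self.patterns):
            if self.text.endswith(p1):
                self.p1_ends[k].append(n)
            if self.text.endswith(p2):
                gap_end = n - len(p2)           # gap occupies (e, gap_end]
                if any(lo <= gap_end - e <= hi for e in self.p1_ends[k]):
                    hits.append(k)
        return hits

d = NaiveDMOG([("ab", 1, 3, "cd")])
for i, ch in enumerate("abxcd"):
    if d.feed(ch):
        print("pattern 0 ends at position", i)   # fires at position 4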

Cite as

Amihood Amir, Tsvi Kopelowitz, Avivit Levy, Seth Pettie, Ely Porat, and B. Riva Shalom. Mind the Gap: Essentially Optimal Algorithms for Online Dictionary Matching with One Gap. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 12:1-12:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{amir_et_al:LIPIcs.ISAAC.2016.12,
  author =	{Amir, Amihood and Kopelowitz, Tsvi and Levy, Avivit and Pettie, Seth and Porat, Ely and Shalom, B. Riva},
  title =	{{Mind the Gap: Essentially Optimal Algorithms for Online Dictionary Matching with One Gap}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{12:1--12:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.12},
  URN =		{urn:nbn:de:0030-drops-67841},
  doi =		{10.4230/LIPIcs.ISAAC.2016.12},
  annote =	{Keywords: Pattern matching, Dictionary matching, 3SUM, Triangle reporting}
}
Document
Faster Worst Case Deterministic Dynamic Connectivity

Authors: Casper Kejlberg-Rasmussen, Tsvi Kopelowitz, Seth Pettie, and Mikkel Thorup

Published in: LIPIcs, Volume 57, 24th Annual European Symposium on Algorithms (ESA 2016)


Abstract
We present a deterministic dynamic connectivity data structure for undirected graphs with worst-case update time O(√(n (log log n)² / log n)) and constant query time. This improves on the previous best deterministic worst-case algorithms of Frederickson (SIAM J. Comput., 1985) and Eppstein, Galil, Italiano, and Nissenzweig (J. ACM, 1997), which had update time O(√n). All other algorithms for dynamic connectivity are either randomized (Monte Carlo) or have only amortized performance guarantees.
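
To make the interface concrete, here is the trivial opposite trade-off in Python: O(1) edge updates and an O(n + m) BFS per query. This is a naive baseline for contrast only, nothing resembling the data structure above.

from collections import deque

class NaiveDynamicConnectivity:
    """O(1) insert/delete; each connectivity query runs a fresh BFS."""

    def __init__(self):
        self.adj = {}

    def insert(self, u, v):
        self.adj.setdefault(u, set()).add(v)
        self.adj.setdefault(v, set()).add(u)

    def delete(self, u, v):
        self.adj[u].discard(v)
        self.adj[v].discard(u)

    def connected(self, u, v):
        if u == v:
            return True
        seen, queue = {u}, deque([u])
        while queue:
            x = queue.popleft()
            for y in self.adj.get(x, ()):
                if y == v:
                    return True
                if y not in seen:
                    seen.add(y)
                    queue.append(y)
        return False

dc = NaiveDynamicConnectivity()
dc.insert(1, 2); dc.insert(2, 3)
print(dc.connected(1, 3))   # True
dc.delete(2, 3)
print(dc.connected(1, 3))   # False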

Cite as

Casper Kejlberg-Rasmussen, Tsvi Kopelowitz, Seth Pettie, and Mikkel Thorup. Faster Worst Case Deterministic Dynamic Connectivity. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 53:1-53:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{kejlbergrasmussen_et_al:LIPIcs.ESA.2016.53,
  author =	{Kejlberg-Rasmussen, Casper and Kopelowitz, Tsvi and Pettie, Seth and Thorup, Mikkel},
  title =	{{Faster Worst Case Deterministic Dynamic Connectivity}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{53:1--53:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.53},
  URN =		{urn:nbn:de:0030-drops-63953},
  doi =		{10.4230/LIPIcs.ESA.2016.53},
  annote =	{Keywords: dynamic graph, spanning tree}
}
Document
Sources of Superlinearity in Davenport-Schinzel Sequences

Authors: Seth Pettie

Published in: Dagstuhl Seminar Proceedings, Volume 8081, Data Structures (2008)


Abstract
A generalized Davenport-Schinzel sequence is one over a finite alphabet that contains no subsequences isomorphic to a fixed forbidden subsequence. One of the fundamental problems in this area is bounding (asymptotically) the maximum length of such sequences. Following Klazar, let Ex(σ, n) be the maximum length of a sequence over an alphabet of size n avoiding subsequences isomorphic to σ. It has been proved that for every σ, Ex(σ, n) is either linear or very close to linear; in particular it is O(n · 2^{α(n)^{O(1)}}), where α is the inverse-Ackermann function and the O(1) depends on σ. However, very little is known about the properties of σ that induce superlinearity of Ex(σ, n). In this paper we exhibit an infinite family of independent superlinear forbidden subsequences. To be specific, we show that there are 17 prototypical superlinear forbidden subsequences, some of which can be made arbitrarily long through a simple padding operation. Perhaps the most novel part of our constructions is a new succinct code for representing superlinear forbidden subsequences.
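
The containment relation underlying Ex(σ, n) is easy to make concrete: S contains σ if some subsequence of S equals σ after an injective renaming of σ's symbols. Below is a brute-force Python checker (exponential time, purely to illustrate the definition; a full Ex computation would additionally forbid immediate repetitions in S).

def contains(S, sigma):
    """True iff some subsequence of S maps onto sigma under an
    injective renaming of sigma's symbols."""
    def go(i, j, rename, used):
        if i == len(sigma):
            return True
        for k in range(j, len(S)):
            s, c = sigma[i], S[k]
            # c must agree with s's existing image, or be a fresh symbol
            if rename.get(s, c) == c and (s in rename or c not in used):
                if go(i + 1, k + 1, {**rename, s: c}, used | {c}):
                    return True
        return False
    return go(0, 0, {}, set())

print(contains("abcacb", "xyxy"))   # True:  a c a c matches x y x y
print(contains("abba", "xyxy"))     # False: no alternating subsequence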

Cite as

Seth Pettie. Sources of Superlinearity in Davenport-Schinzel Sequences. In Data Structures. Dagstuhl Seminar Proceedings, Volume 8081, pp. 1-14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{pettie:DagSemProc.08081.4,
  author =	{Pettie, Seth},
  title =	{{Sources of Superlinearity in Davenport-Schinzel Sequences}},
  booktitle =	{Data Structures},
  pages =	{1--14},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8081},
  editor =	{Arge, Lars and Sedgewick, Robert and Seidel, Raimund},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.08081.4},
  URN =		{urn:nbn:de:0030-drops-15291},
  doi =		{10.4230/DagSemProc.08081.4},
  annote =	{Keywords: Davenport-Schinzel Sequences, lower envelopes, splay trees}
}
Document
Towards a Final Analysis of Pairing Heaps

Authors: Seth Pettie

Published in: Dagstuhl Seminar Proceedings, Volume 6091, Data Structures (2006)


Abstract
Fredman, Sedgewick, Sleator, and Tarjan proposed the pairing heap as a self-adjusting, streamlined version of the Fibonacci heap. It provably supports all priority queue operations in logarithmic time and is known to be extremely efficient in practice. However, despite its simplicity and empirical superiority, the pairing heap is one of the few popular data structures whose basic complexity remains open. In this paper we prove that pairing heaps support the deletemin operation in optimal logarithmic time and all other operations (insert, meld, and decreasekey) in time O(2^{2√(log log n)}). This result gives the first sub-logarithmic time bound for decreasekey and comes close to the lower bound of Ω(log log n) established by Fredman. Pairing heaps have a well known but poorly understood relationship to splay trees and, to date, the transfer of ideas has flowed in one direction: from splaying to pairing. One contribution of this paper is a new analysis that reasons explicitly with information-theoretic measures. Whether these ideas could contribute to the analysis of splay trees is an open question.
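
For reference, the operations analyzed here are short to implement. Below is a minimal two-pass pairing heap with decreasekey in Python, our own compact sketch: a decreased node is cut from its parent and linked with the root, which is exactly the operation the O(2^{2√(log log n)}) bound concerns.

class PNode:
    __slots__ = ('key', 'children', 'parent')
    def __init__(self, key):
        self.key = key
        self.children = []
        self.parent = None

class PairingHeap:
    def __init__(self):
        self.root = None

    def _link(self, a, b):
        if b.key < a.key:
            a, b = b, a
        a.children.append(b)
        b.parent = a
        return a

    def insert(self, key):
        node = PNode(key)
        self.root = node if self.root is None else self._link(self.root, node)
        return node                      # keep as a handle for decrease_key

    def decrease_key(self, node, key):
        node.key = key
        if node.parent is not None:      # cut the subtree, relink at the top
            node.parent.children.remove(node)
            node.parent = None
            self.root = self._link(self.root, node)

    def delete_min(self):
        smallest = self.root.key
        kids = self.root.children
        for c in kids:
            c.parent = None
        paired = [self._link(kids[i], kids[i + 1]) if i + 1 < len(kids)
                  else kids[i] for i in range(0, len(kids), 2)]
        root = None                      # second pass: fold right to left
        for t in reversed(paired):
            root = t if root is None else self._link(root, t)
        self.root = root
        return smallest

h = PairingHeap()
handles = [h.insert(k) for k in [7, 4, 9, 2]]
h.decrease_key(handles[2], 1)            # 9 -> 1
print([h.delete_min() for _ in range(4)])   # [1, 2, 4, 7]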

Cite as

Seth Pettie. Towards a Final Analysis of Pairing Heaps. In Data Structures. Dagstuhl Seminar Proceedings, Volume 6091, pp. 1-10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2006)


BibTeX

@InProceedings{pettie:DagSemProc.06091.5,
  author =	{Pettie, Seth},
  title =	{{Towards a Final Analysis of Pairing Heaps}},
  booktitle =	{Data Structures},
  pages =	{1--10},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2006},
  volume =	{6091},
  editor =	{Arge, Lars and Sedgewick, Robert and Wagner, Dorothea},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.06091.5},
  URN =		{urn:nbn:de:0030-drops-7642},
  doi =		{10.4230/DagSemProc.06091.5},
  annote =	{Keywords: Data structure, heap, self-adjusting, amortized analysis}
}