Search Results

Documents authored by Yi, Wang



Yi, Wang

Document
Refinement of Workload Models for Engine Controllers by State Space Partitioning

Authors: Morteza Mohaqeqi, Jakaria Abdullah, Pontus Ekberg, and Wang Yi

Published in: LIPIcs, Volume 76, 29th Euromicro Conference on Real-Time Systems (ECRTS 2017)


Abstract
We study an engine control application in which the behavior of the engine controllers depends on the engine's rotational speed. For efficient and precise timing analysis, we use the Digraph Real-Time (DRT) task model to specify the workload of the control tasks, employing optimal control theory to faithfully calculate the respective minimum inter-release times. We show how DRT models can be refined by finer-grained partitioning of the engine's state space, up to a model that enables an exact timing analysis. Compared to previously proposed methods, which are either unsafe or pessimistic, our work provides both abstract and tight characterizations of the corresponding workload.
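
Illustration (not the paper's optimal-control formulation; all names and parameters below are hypothetical): the kinematic core of bounding minimum inter-release times for engine-triggered tasks is the shortest time in which the crankshaft can sweep a fixed release angle, assuming maximal angular acceleration up to the engine's speed limit.

import math

def min_inter_release_time(delta_theta, omega, alpha, omega_max):
    """Shortest time (s) for the crankshaft to sweep delta_theta revolutions,
    starting at angular speed omega (rev/s), accelerating at most alpha
    (rev/s^2) and never exceeding omega_max (rev/s). Hypothetical helper."""
    t_acc = (omega_max - omega) / alpha            # time until the rev limit
    d_acc = omega * t_acc + 0.5 * alpha * t_acc ** 2
    if d_acc >= delta_theta:
        # The whole angle is swept while still accelerating:
        # solve delta_theta = omega*t + 0.5*alpha*t^2 for t >= 0.
        return (-omega + math.sqrt(omega ** 2 + 2 * alpha * delta_theta)) / alpha
    # Otherwise accelerate to omega_max, then cruise for the remaining angle.
    return t_acc + (delta_theta - d_acc) / omega_max

# One release per revolution, starting at 3000 RPM, with at most
# 100 rev/s^2 of acceleration and a 6500 RPM rev limit.
print(min_inter_release_time(1.0, 3000 / 60, 100.0, 6500 / 60))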

Cite as

Morteza Mohaqeqi, Jakaria Abdullah, Pontus Ekberg, and Wang Yi. Refinement of Workload Models for Engine Controllers by State Space Partitioning. In 29th Euromicro Conference on Real-Time Systems (ECRTS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 76, pp. 11:1-11:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{mohaqeqi_et_al:LIPIcs.ECRTS.2017.11,
  author =	{Mohaqeqi, Morteza and Abdullah, Jakaria and Ekberg, Pontus and Yi, Wang},
  title =	{{Refinement of Workload Models for Engine Controllers by State Space Partitioning}},
  booktitle =	{29th Euromicro Conference on Real-Time Systems (ECRTS 2017)},
  pages =	{11:1--11:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-037-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{76},
  editor =	{Bertogna, Marko},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECRTS.2017.11},
  URN =		{urn:nbn:de:0030-drops-71598},
  doi =		{10.4230/LIPIcs.ECRTS.2017.11},
  annote =	{Keywords: Engine Control Tasks, Schedulability Analysis, Minimum-Time Problem, DRT Task Model}
}
Document
A Survey on Static Cache Analysis for Real-Time Systems

Authors: Mingsong Lv, Nan Guan, Jan Reineke, Reinhard Wilhelm, and Wang Yi

Published in: LITES (Leibniz Transactions on Embedded Systems), Volume 3, Issue 1 (2016)


Abstract
Real-time systems are reactive computer systems that must produce their reaction to a stimulus within given time bounds. A vital verification requirement is to estimate the Worst-Case Execution Time (WCET) of programs. These estimates are then used to predict the timing behavior of the overall system. The execution time of a program heavily depends on the underlying hardware, of which the cache has the biggest influence. Analyzing cache behavior is very challenging due to versatile cache features and complex execution environments. This article provides a survey on static cache analysis for real-time systems. We first present the challenges and static analysis techniques for independent programs with respect to different cache features. The discussion is then extended to cache analysis in complex execution environments, followed by a survey of existing tools based on static techniques for cache analysis. An outlook on future research concludes the article.
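
Illustration (a minimal sketch under simplifying assumptions, not any particular tool's implementation): one family of techniques surveyed here is abstract interpretation of LRU caches. The snippet below models a must-analysis for a single fully associative LRU cache set, where an abstract state maps each block to an upper bound on its age and a bounded age guarantees a hit.

A = 4  # associativity of the (fully associative) cache set, assumed

def update(state, b):
    """Abstract effect of accessing block b on a must-state: a dict mapping
    each block to an upper bound on its LRU age (only bounds < A are kept)."""
    old_age = state.get(b, A)          # A stands for "possibly not cached"
    new = {}
    for c, age in state.items():
        if c == b:
            continue
        new_age = age + 1 if age < old_age else age
        if new_age < A:                # ages >= A give no guarantee: drop
            new[c] = new_age
    new[b] = 0                         # the accessed block is youngest
    return new

def join(s1, s2):
    """Control-flow merge: keep blocks guaranteed in both predecessors,
    with the larger (safer) age bound."""
    return {b: max(s1[b], s2[b]) for b in s1.keys() & s2.keys()}

def classify(state, b):
    return "always hit" if state.get(b, A) < A else "not classified"

state = {}
for block in ["a", "b", "a", "c"]:
    print(block, classify(state, block))
    state = update(state, block)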

Cite as

Mingsong Lv, Nan Guan, Jan Reineke, Reinhard Wilhelm, and Wang Yi. A Survey on Static Cache Analysis for Real-Time Systems. In Leibniz Transactions on Embedded Systems (LITES), Volume 3, Issue 1, pp. 05:1-05:48, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@Article{lv_et_al:LITES-v003-i001-a005,
  author =	{Lv, Mingsong and Guan, Nan and Reineke, Jan and Wilhelm, Reinhard and Yi, Wang},
  title =	{{A Survey on Static Cache Analysis for Real-Time Systems}},
  journal =	{Leibniz Transactions on Embedded Systems},
  pages =	{05:1--05:48},
  ISSN =	{2199-2002},
  year =	{2016},
  volume =	{3},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LITES-v003-i001-a005},
  doi =		{10.4230/LITES-v003-i001-a005},
  annote =	{Keywords: Hard real-time, Cache analysis, Worst-case execution time}
}
Document
Towards the Implementation and Evaluation of Semi-Partitioned Multi-Core Scheduling

Authors: Yi Zhang, Nan Guan, and Wang Yi

Published in: OASIcs, Volume 18, Bringing Theory to Practice: Predictability and Performance in Embedded Systems (2011)


Abstract
Recent theoretical studies have shown that partitioning-based scheduling has better real-time performance than other scheduling paradigms, such as global scheduling, on multi-cores. In particular, a class of partitioning-based scheduling algorithms called semi-partitioned scheduling, which allows a small number of tasks to be split among different cores, offers very high resource utilization and appears to be a promising solution for scheduling real-time systems on multi-cores. The major concern about semi-partitioned scheduling is that, due to task splitting, some tasks migrate from one core to another at run time and might incur higher context-switch overhead than under partitioned scheduling. One may therefore suspect that the extra overhead caused by task splitting counteracts the theoretical performance gain of semi-partitioned scheduling. In this work, we implement a semi-partitioned scheduler in the Linux operating system and run experiments on an Intel Core i7 4-core machine to measure the real overhead of both partitioned and semi-partitioned scheduling. We then integrate the measured overhead into state-of-the-art partitioned and semi-partitioned scheduling algorithms and conduct an empirical comparison of their real-time performance. Our results show that the extra overhead caused by task splitting in semi-partitioned scheduling is very low, and its effect on system schedulability is very small. Semi-partitioned scheduling indeed outperforms partitioned scheduling in realistic systems.
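
Illustration (not the scheduler implemented in the paper; names are hypothetical): the idea of task splitting in semi-partitioned scheduling can be sketched with a toy utilization-based partitioner that assigns whole tasks first-fit and splits a task across several cores only when it fits on none; the split pieces are the ones that migrate at run time.

def semi_partition(utilizations, num_cores, capacity=1.0):
    """Toy semi-partitioned assignment by utilization (illustrative only).
    Returns, for each task, a list of (core, share-of-utilization) pairs."""
    load = [0.0] * num_cores
    assignment = []
    for u in sorted(utilizations, reverse=True):
        # First fit: put the whole task on one core if possible.
        core = next((c for c in range(num_cores)
                     if load[c] + u <= capacity), None)
        if core is not None:
            load[core] += u
            assignment.append([(core, u)])
            continue
        # Otherwise split the remaining utilization across cores;
        # each split piece migrates between cores at run time.
        pieces, remaining = [], u
        for c in range(num_cores):
            room = capacity - load[c]
            if room <= 0 or remaining <= 0:
                continue
            share = min(room, remaining)
            load[c] += share
            remaining -= share
            pieces.append((c, share))
        if remaining > 1e-9:
            raise ValueError("task set not feasible at this capacity")
        assignment.append(pieces)
    return assignment

print(semi_partition([0.6, 0.6, 0.5, 0.3], num_cores=2))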

Cite as

Yi Zhang, Nan Guan, and Wang Yi. Towards the Implementation and Evaluation of Semi-Partitioned Multi-Core Scheduling. In Bringing Theory to Practice: Predictability and Performance in Embedded Systems. Open Access Series in Informatics (OASIcs), Volume 18, pp. 42-46, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


BibTeX

@InProceedings{zhang_et_al:OASIcs.PPES.2011.42,
  author =	{Zhang, Yi and Guan, Nan and Yi, Wang},
  title =	{{Towards the Implementation and Evaluation of Semi-Partitioned Multi-Core Scheduling}},
  booktitle =	{Bringing Theory to Practice: Predictability and Performance in Embedded Systems},
  pages =	{42--46},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-28-6},
  ISSN =	{2190-6807},
  year =	{2011},
  volume =	{18},
  editor =	{Lucas, Philipp and Wilhelm, Reinhard},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.PPES.2011.42},
  URN =		{urn:nbn:de:0030-drops-30804},
  doi =		{10.4230/OASIcs.PPES.2011.42},
  annote =	{Keywords: real-time operating system, multi-core, semi-partitioned scheduling}
}

Wang, Yi

Document
Probabilistic Action Language pBC+

Authors: Yi Wang

Published in: OASIcs, Volume 64, Technical Communications of the 34th International Conference on Logic Programming (ICLP 2018)


Abstract
We present ongoing research on a probabilistic extension of the action language BC+. Just as BC+ is defined as a high-level notation of answer set programs for describing transition systems, the proposed language, which we call pBC+, is defined as a high-level notation of LP^{MLN} programs - a probabilistic extension of answer set programs. As preliminary results, we illustrate how probabilistic reasoning about transition systems, such as prediction, postdiction, and planning problems, as well as probabilistic diagnosis for dynamic domains, can be modeled in pBC+ and computed using an implementation of LP^{MLN}. For future work, we plan to develop a compiler that automatically translates pBC+ descriptions into LP^{MLN} programs, and to support parameter learning in probabilistic action domains through LP^{MLN} weight learning. We will work on defining useful extensions of pBC+ to facilitate hypothetical/counterfactual reasoning. We will also seek real-world applications, possibly in robotic domains, to empirically study the performance of this approach to probabilistic reasoning in action domains.

Cite as

Yi Wang. Probabilistic Action Language pBC+. In Technical Communications of the 34th International Conference on Logic Programming (ICLP 2018). Open Access Series in Informatics (OASIcs), Volume 64, pp. 15:1-15:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{wang:OASIcs.ICLP.2018.15,
  author =	{Wang, Yi},
  title =	{{Probabilistic Action Language pBC+}},
  booktitle =	{Technical Communications of the 34th International Conference on Logic Programming (ICLP 2018)},
  pages =	{15:1--15:12},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-090-3},
  ISSN =	{2190-6807},
  year =	{2018},
  volume =	{64},
  editor =	{Dal Palu', Alessandro and Tarau, Paul and Saeedloei, Neda and Fodor, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICLP.2018.15},
  URN =		{urn:nbn:de:0030-drops-98818},
  doi =		{10.4230/OASIcs.ICLP.2018.15},
  annote =	{Keywords: action language, probabilistic reasoning, LP^\{MLN\}}
}

Wang, Yuyi

Document
The k-Server Problem with Delays on the Uniform Metric Space

Authors: Predrag Krnetić, Darya Melnyk, Yuyi Wang, and Roger Wattenhofer

Published in: LIPIcs, Volume 181, 31st International Symposium on Algorithms and Computation (ISAAC 2020)


Abstract
In this paper, we present tight bounds for the k-server problem with delays in the uniform metric space. The problem is defined on n+k nodes in the uniform metric space, which can issue requests over time. These requests can be served directly or with some delay, using k servers, by moving a server to the corresponding node with an open request. The task is to find an online algorithm that serves the requests while minimizing the total moving and delay costs. We first provide a lower bound by showing that the competitive ratio of any deterministic online algorithm cannot be better than 2k+1 in the clairvoyant setting. We then show that conservative algorithms (without delay) can be equipped with an accumulative delay function such that all such algorithms become (2k+1)-competitive in the non-clairvoyant setting. Together, the two bounds establish a tight result for both the clairvoyant and the non-clairvoyant settings.
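
Illustration (a toy reading of the setting only, not the (2k+1)-competitive algorithm analyzed in the paper; all names are hypothetical): on the uniform metric every move costs 1, each open request accumulates delay cost over time, and a delay-augmented conservative algorithm moves a server only once a request has accumulated enough delay.

from collections import deque

def toy_k_server_with_delays(requests, k, delay_threshold, horizon):
    """Toy simulation on the uniform metric space (every move costs 1).
    requests: dict mapping time step -> list of requesting nodes.
    A request accumulates one unit of delay cost per step until a server
    is moved to it; a move is triggered once the accumulated delay reaches
    delay_threshold. Purely illustrative."""
    servers = deque(range(k))            # servers start on nodes 0 .. k-1
    waiting = {}                         # open requests: node -> delay so far
    cost = 0.0
    for t in range(horizon):
        for node in requests.get(t, []):
            if node not in servers:      # otherwise served for free
                waiting.setdefault(node, 0.0)
        for node in list(waiting):
            waiting[node] += 1.0         # delay cost of this time step
            cost += 1.0
            if waiting[node] >= delay_threshold:
                servers.popleft()        # evict the oldest-placed server
                servers.append(node)
                cost += 1.0              # unit movement cost
                del waiting[node]
    return cost

print(toy_k_server_with_delays({0: [5], 1: [6]}, k=2, delay_threshold=2.0, horizon=6))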

Cite as

Predrag Krnetić, Darya Melnyk, Yuyi Wang, and Roger Wattenhofer. The k-Server Problem with Delays on the Uniform Metric Space. In 31st International Symposium on Algorithms and Computation (ISAAC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 181, pp. 61:1-61:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{krnetic_et_al:LIPIcs.ISAAC.2020.61,
  author =	{Krneti\'{c}, Predrag and Melnyk, Darya and Wang, Yuyi and Wattenhofer, Roger},
  title =	{{The k-Server Problem with Delays on the Uniform Metric Space}},
  booktitle =	{31st International Symposium on Algorithms and Computation (ISAAC 2020)},
  pages =	{61:1--61:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-173-3},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{181},
  editor =	{Cao, Yixin and Cheng, Siu-Wing and Li, Minming},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2020.61},
  URN =		{urn:nbn:de:0030-drops-134056},
  doi =		{10.4230/LIPIcs.ISAAC.2020.61},
  annote =	{Keywords: Online k-Server, Paging, Delayed Service, Conservative Algorithms}
}
Document
Algorithmic Channel Design

Authors: Georgia Avarikioti, Yuyi Wang, and Roger Wattenhofer

Published in: LIPIcs, Volume 123, 29th International Symposium on Algorithms and Computation (ISAAC 2018)


Abstract
Payment networks, also known as channels, are among the most promising solutions to the throughput problem of cryptocurrencies. In this paper we study the design of capital-efficient payment networks, in offline as well as online variants. We want to know how to compute an efficient payment network topology, how capital should be assigned to the individual edges, and how to decide which transactions to accept. Towards this end, we present a range of results: basic but generally applicable insights on the one hand, and hardness results and approximation algorithms on the other.

Cite as

Georgia Avarikioti, Yuyi Wang, and Roger Wattenhofer. Algorithmic Channel Design. In 29th International Symposium on Algorithms and Computation (ISAAC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 123, pp. 16:1-16:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{avarikioti_et_al:LIPIcs.ISAAC.2018.16,
  author =	{Avarikioti, Georgia and Wang, Yuyi and Wattenhofer, Roger},
  title =	{{Algorithmic Channel Design}},
  booktitle =	{29th International Symposium on Algorithms and Computation (ISAAC 2018)},
  pages =	{16:1--16:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-094-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{123},
  editor =	{Hsu, Wen-Lian and Lee, Der-Tsai and Liao, Chung-Shou},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2018.16},
  URN =		{urn:nbn:de:0030-drops-99648},
  doi =		{10.4230/LIPIcs.ISAAC.2018.16},
  annote =	{Keywords: blockchain, payment channels, layer 2 solution, network design, payment hubs, routing}
}
Document
Impatient Online Matching

Authors: Xingwu Liu, Zhida Pan, Yuyi Wang, and Roger Wattenhofer

Published in: LIPIcs, Volume 123, 29th International Symposium on Algorithms and Computation (ISAAC 2018)


Abstract
We investigate the problem of Min-cost Perfect Matching with Delays (MPMD), in which requests are matched pairwise in an online fashion with the objective of minimizing the sum of space cost and time cost. Though linear-MPMD (i.e., time cost is linear in the delay) has been thoroughly studied in the literature, it does not model well the impatient requests that are common in practice. We therefore propose convex-MPMD, where time cost functions are convex, capturing situations in which the time cost increases faster and faster. Since the existing algorithms for linear-MPMD are no longer competitive, we devise a new deterministic algorithm for convex-MPMD problems. For a large class of convex time cost functions, our algorithm achieves a competitive ratio of O(k) on any k-point uniform metric space. Moreover, our deterministic algorithm is asymptotically optimal, which uncovers a substantial difference between convex-MPMD and linear-MPMD: the latter admits a deterministic algorithm with constant competitive ratio on any uniform metric space.
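
Illustration (the objective only, not the paper's algorithm; names are hypothetical): the sketch below evaluates the MPMD cost of a fixed matching, which makes the difference between linear and convex time cost concrete, e.g. time cost d versus d^2.

def mpmd_cost(matches, distance, time_cost):
    """Cost of a fixed matching: each entry is
    ((arrival_u, point_u), (arrival_v, point_v), match_time)."""
    total = 0.0
    for (t_u, p_u), (t_v, p_v), t_match in matches:
        total += distance(p_u, p_v)              # space cost
        total += time_cost(t_match - t_u)        # delay cost of u
        total += time_cost(t_match - t_v)        # delay cost of v
    return total

uniform = lambda p, q: 0.0 if p == q else 1.0    # uniform metric space

matches = [((0.0, "a"), (3.0, "b"), 3.0),
           ((1.0, "c"), (2.0, "c"), 4.0)]

print(mpmd_cost(matches, uniform, time_cost=lambda d: d))       # linear-MPMD
print(mpmd_cost(matches, uniform, time_cost=lambda d: d ** 2))  # convex, e.g. d^2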

Cite as

Xingwu Liu, Zhida Pan, Yuyi Wang, and Roger Wattenhofer. Impatient Online Matching. In 29th International Symposium on Algorithms and Computation (ISAAC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 123, pp. 62:1-62:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{liu_et_al:LIPIcs.ISAAC.2018.62,
  author =	{Liu, Xingwu and Pan, Zhida and Wang, Yuyi and Wattenhofer, Roger},
  title =	{{Impatient Online Matching}},
  booktitle =	{29th International Symposium on Algorithms and Computation (ISAAC 2018)},
  pages =	{62:1--62:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-094-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{123},
  editor =	{Hsu, Wen-Lian and Lee, Der-Tsai and Liao, Chung-Shou},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2018.62},
  URN =		{urn:nbn:de:0030-drops-100107},
  doi =		{10.4230/LIPIcs.ISAAC.2018.62},
  annote =	{Keywords: online algorithm, online matching, convex function, competitive analysis, lower bound}
}
Document
Min-Cost Bipartite Perfect Matching with Delays

Authors: Itai Ashlagi, Yossi Azar, Moses Charikar, Ashish Chiplunkar, Ofir Geri, Haim Kaplan, Rahul Makhijani, Yuyi Wang, and Roger Wattenhofer

Published in: LIPIcs, Volume 81, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017)


Abstract
In the min-cost bipartite perfect matching with delays (MBPMD) problem, requests arrive online at points of a finite metric space. Each request is either positive or negative and has to be matched to a request of opposite polarity. As opposed to traditional online matching problems, the algorithm does not have to serve requests as they arrive, and may choose to match them later at a cost. Our objective is to minimize the sum of the distances between matched pairs of requests (the connection cost) and the sum of the waiting times of the requests (the delay cost). This objective exhibits a natural tradeoff between minimizing the distances and the cost of waiting for better matches. This tradeoff appears in many real-life scenarios, notably, ride-sharing platforms. MBPMD is related to its non-bipartite variant, min-cost perfect matching with delays (MPMD), in which each request can be matched to any other request. MPMD was introduced by Emek et al. (STOC'16), who showed an O(log^2(n)+log(Delta))-competitive randomized algorithm on n-point metric spaces with aspect ratio Delta. Our contribution is threefold. First, we present a new lower bound construction for MPMD and MBPMD. We get a lower bound of Omega(sqrt(log(n)/log(log(n)))) on the competitive ratio of any randomized algorithm for MBPMD. For MPMD, we improve the lower bound from Omega(sqrt(log(n))) (shown by Azar et al., SODA'17) to Omega(log(n)/log(log(n))), thus, almost matching their upper bound of O(log(n)). Second, we adapt the algorithm of Emek et al. to the bipartite case, and provide a simplified analysis that improves the competitive ratio to O(log(n)). The key ingredient of the algorithm is an O(h)-competitive randomized algorithm for MBPMD on weighted trees of height h. Third, we provide an O(h)-competitive deterministic algorithm for MBPMD on weighted trees of height h. This algorithm is obtained by adapting the algorithm for MPMD by Azar et al. to the apparently more complicated bipartite setting.

Cite as

Itai Ashlagi, Yossi Azar, Moses Charikar, Ashish Chiplunkar, Ofir Geri, Haim Kaplan, Rahul Makhijani, Yuyi Wang, and Roger Wattenhofer. Min-Cost Bipartite Perfect Matching with Delays. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 81, pp. 1:1-1:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{ashlagi_et_al:LIPIcs.APPROX-RANDOM.2017.1,
  author =	{Ashlagi, Itai and Azar, Yossi and Charikar, Moses and Chiplunkar, Ashish and Geri, Ofir and Kaplan, Haim and Makhijani, Rahul and Wang, Yuyi and Wattenhofer, Roger},
  title =	{{Min-Cost Bipartite Perfect Matching with Delays}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017)},
  pages =	{1:1--1:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-044-6},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{81},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} D. P. and Williamson, David P. and Vempala, Santosh S.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2017.1},
  URN =		{urn:nbn:de:0030-drops-75509},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2017.1},
  annote =	{Keywords: online algorithms with delayed service, bipartite matching, competitive analysis}
}

Wang, Yipu

Document
Topologically Trivial Closed Walks in Directed Surface Graphs

Authors: Jeff Erickson and Yipu Wang

Published in: LIPIcs, Volume 129, 35th International Symposium on Computational Geometry (SoCG 2019)


Abstract
Let G be a directed graph with n vertices and m edges, embedded on a surface S, possibly with boundary, with first Betti number beta. We consider the complexity of finding closed directed walks in G that are either contractible (trivial in homotopy) or bounding (trivial in integer homology) in S. Specifically, we describe algorithms to determine whether G contains a simple contractible cycle in O(n+m) time, or a contractible closed walk in O(n+m) time, or a bounding closed walk in O(beta (n+m)) time. Our algorithms rely on subtle relationships between strong connectivity in G and in the dual graph G^*; our contractible-closed-walk algorithm also relies on a seminal topological result of Hass and Scott. We also prove that detecting simple bounding cycles is NP-hard. We also describe three polynomial-time algorithms to compute shortest contractible closed walks, depending on whether the fundamental group of the surface is free, abelian, or hyperbolic. A key step in our algorithm for hyperbolic surfaces is the construction of a context-free grammar with O(g^2L^2) non-terminals that generates all contractible closed walks of length at most L, and only contractible closed walks, in a system of quads of genus g >= 2. Finally, we show that computing shortest simple contractible cycles, shortest simple bounding cycles, and shortest bounding closed walks are all NP-hard.

Cite as

Jeff Erickson and Yipu Wang. Topologically Trivial Closed Walks in Directed Surface Graphs. In 35th International Symposium on Computational Geometry (SoCG 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 129, pp. 34:1-34:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{erickson_et_al:LIPIcs.SoCG.2019.34,
  author =	{Erickson, Jeff and Wang, Yipu},
  title =	{{Topologically Trivial Closed Walks in Directed Surface Graphs}},
  booktitle =	{35th International Symposium on Computational Geometry (SoCG 2019)},
  pages =	{34:1--34:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-104-7},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{129},
  editor =	{Barequet, Gill and Wang, Yusu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2019.34},
  URN =		{urn:nbn:de:0030-drops-104383},
  doi =		{10.4230/LIPIcs.SoCG.2019.34},
  annote =	{Keywords: computational topology, surface-embedded graphs, homotopy, homology, strong connectivity, hyperbolic geometry, medial axes, context-free grammars}
}

Wang, Yiqiu

Document
ParGeo: A Library for Parallel Computational Geometry

Authors: Yiqiu Wang, Rahul Yesantharao, Shangdi Yu, Laxman Dhulipala, Yan Gu, and Julian Shun

Published in: LIPIcs, Volume 244, 30th Annual European Symposium on Algorithms (ESA 2022)


Abstract
This paper presents ParGeo, a multicore library for computational geometry. ParGeo contains modules for fundamental tasks including kd-tree based spatial search, spatial graph generation, and algorithms in computational geometry. We focus on three new algorithmic contributions provided in the library. First, we present a new parallel convex hull algorithm based on a reservation technique to enable parallel modifications to the hull. We also provide the first parallel implementations of the randomized incremental convex hull algorithm as well as a divide-and-conquer convex hull algorithm in ℝ³. Second, for the smallest enclosing ball problem, we propose a new sampling-based algorithm to quickly reduce the size of the data set. We also provide the first parallel implementation of Welzl’s classic algorithm for smallest enclosing ball. Third, we present the BDL-tree, a parallel batch-dynamic kd-tree that allows for efficient parallel updates and k-NN queries over dynamically changing point sets. BDL-trees consist of a log-structured set of kd-trees which can be used to efficiently insert, delete, and query batches of points in parallel. On 36 cores with two-way hyper-threading, our fastest convex hull algorithm achieves up to 44.7x self-relative parallel speedup and up to 559x speedup against the best existing sequential implementation. Our smallest enclosing ball algorithm using our sampling-based algorithm achieves up to 27.1x self-relative parallel speedup and up to 178x speedup against the best existing sequential implementation. Our implementation of the BDL-tree achieves self-relative parallel speedup of up to 46.1x. Across all of the algorithms in ParGeo, we achieve self-relative parallel speedup of 8.1-46.61x.
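
Illustration (a sequential 2D sketch, not ParGeo's parallel implementation; degenerate inputs such as collinear triples are not handled): Welzl's classic recursive algorithm for the smallest enclosing ball, which the library parallelizes, can be stated in a few lines.

import math, random

def circle_from(boundary):
    """Smallest circle with all points of `boundary` (|boundary| <= 3) on it."""
    if not boundary:
        return ((0.0, 0.0), 0.0)
    if len(boundary) == 1:
        return (boundary[0], 0.0)
    if len(boundary) == 2:
        (x1, y1), (x2, y2) = boundary
        center = ((x1 + x2) / 2, (y1 + y2) / 2)
        return (center, math.dist(boundary[0], boundary[1]) / 2)
    (ax, ay), (bx, by), (cx, cy) = boundary                  # circumcircle
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
          + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
          + (cx*cx + cy*cy) * (bx - ax)) / d
    return ((ux, uy), math.dist((ux, uy), boundary[0]))

def welzl(points, boundary=()):
    """Smallest enclosing circle of `points` with `boundary` forced onto it."""
    if not points or len(boundary) == 3:
        return circle_from(list(boundary))
    p, rest = points[0], points[1:]
    center, radius = welzl(rest, boundary)
    if math.dist(p, center) <= radius + 1e-9:
        return (center, radius)
    return welzl(rest, boundary + (p,))

pts = [(random.random(), random.random()) for _ in range(200)]
random.shuffle(pts)          # random order keeps the expected work linear
print(welzl(pts))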

Cite as

Yiqiu Wang, Rahul Yesantharao, Shangdi Yu, Laxman Dhulipala, Yan Gu, and Julian Shun. ParGeo: A Library for Parallel Computational Geometry. In 30th Annual European Symposium on Algorithms (ESA 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 244, pp. 88:1-88:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{wang_et_al:LIPIcs.ESA.2022.88,
  author =	{Wang, Yiqiu and Yesantharao, Rahul and Yu, Shangdi and Dhulipala, Laxman and Gu, Yan and Shun, Julian},
  title =	{{ParGeo: A Library for Parallel Computational Geometry}},
  booktitle =	{30th Annual European Symposium on Algorithms (ESA 2022)},
  pages =	{88:1--88:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-247-1},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{244},
  editor =	{Chechik, Shiri and Navarro, Gonzalo and Rotenberg, Eva and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2022.88},
  URN =		{urn:nbn:de:0030-drops-170265},
  doi =		{10.4230/LIPIcs.ESA.2022.88},
  annote =	{Keywords: Computational Geometry, Parallel Algorithms, Libraries}
}
Document
A Parallel Batch-Dynamic Data Structure for the Closest Pair Problem

Authors: Yiqiu Wang, Shangdi Yu, Yan Gu, and Julian Shun

Published in: LIPIcs, Volume 189, 37th International Symposium on Computational Geometry (SoCG 2021)


Abstract
We propose a theoretically-efficient and practical parallel batch-dynamic data structure for the closest pair problem. Our solution is based on a serial dynamic closest pair data structure by Golin et al., and supports batches of insertions and deletions in parallel. For a data set of size n, our data structure supports a batch of insertions or deletions of size m in O(m(1+log ((n+m)/m))) expected work and O(log (n+m)log^*(n+m)) depth with high probability, and takes linear space. The key techniques for achieving these bounds are a new work-efficient parallel batch-dynamic binary heap, and careful management of the computation across sets of points to minimize work and depth. We provide an optimized multicore implementation of our data structure using dynamic hash tables, parallel heaps, and dynamic k-d trees. Our experiments on a variety of synthetic and real-world data sets show that it achieves a parallel speedup of up to 38.57x (15.10x on average) on 48 cores with hyper-threading. In addition, we implement and compare four parallel algorithms for the static closest pair problem, for which we are not aware of any existing practical implementations. On 48 cores with hyper-threading, the static algorithms achieve up to 51.45x (29.42x on average) speedup, and Rabin’s algorithm performs the best on average. Comparing our dynamic algorithm to the fastest static algorithm, we find that it is advantageous to use the dynamic algorithm for batch sizes of up to 20% of the data set. As far as we know, our work is the first to experimentally evaluate parallel closest pair algorithms, in both the static and the dynamic settings.
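
Illustration (the static sequential baseline only, not the parallel batch-dynamic structure contributed by the paper): the classical O(n log n) divide-and-conquer algorithm for the closest pair in the plane.

import math, random

def closest_pair(points):
    """Classical divide-and-conquer closest pair of 2D points (n >= 2)."""
    def rec(px):                       # px: points sorted by x-coordinate
        n = len(px)
        if n <= 3:                     # brute force on tiny subproblems
            return min(((math.dist(px[i], px[j]), px[i], px[j])
                        for i in range(n) for j in range(i + 1, n)),
                       key=lambda t: t[0])
        mid = n // 2
        mid_x = px[mid][0]
        best = min(rec(px[:mid]), rec(px[mid:]), key=lambda t: t[0])
        d = best[0]
        # Only points within d of the dividing line can beat `best`.
        strip = sorted((p for p in px if abs(p[0] - mid_x) < d),
                       key=lambda p: p[1])
        for i, p in enumerate(strip):
            for q in strip[i + 1:]:
                if q[1] - p[1] >= d:   # too far apart vertically: stop
                    break
                cand = math.dist(p, q)
                if cand < best[0]:
                    best, d = (cand, p, q), cand
        return best
    return rec(sorted(points))

pts = [(random.random(), random.random()) for _ in range(1000)]
print(closest_pair(pts))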

Cite as

Yiqiu Wang, Shangdi Yu, Yan Gu, and Julian Shun. A Parallel Batch-Dynamic Data Structure for the Closest Pair Problem. In 37th International Symposium on Computational Geometry (SoCG 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 189, pp. 60:1-60:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{wang_et_al:LIPIcs.SoCG.2021.60,
  author =	{Wang, Yiqiu and Yu, Shangdi and Gu, Yan and Shun, Julian},
  title =	{{A Parallel Batch-Dynamic Data Structure for the Closest Pair Problem}},
  booktitle =	{37th International Symposium on Computational Geometry (SoCG 2021)},
  pages =	{60:1--60:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-184-9},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{189},
  editor =	{Buchin, Kevin and Colin de Verdi\`{e}re, \'{E}ric},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2021.60},
  URN =		{urn:nbn:de:0030-drops-138594},
  doi =		{10.4230/LIPIcs.SoCG.2021.60},
  annote =	{Keywords: Closest Pair, Parallel Algorithms, Dynamic Algorithms, Experimental Algorithms}
}

Wang, Yishu

Document
A General Framework for Enumerating Equivalence Classes of Solutions

Authors: Yishu Wang, Arnaud Mary, Marie-France Sagot, and Blerina Sinaimeri

Published in: LIPIcs, Volume 204, 29th Annual European Symposium on Algorithms (ESA 2021)


Abstract
When a problem has more than one solution, it is often important, depending on the underlying context, to enumerate (i.e., to list) them all. Even when the enumeration can be done with polynomial delay, that is, spending no more than polynomial time to go from one solution to the next, this can be costly, as the number of solutions itself may be huge, sometimes even exponential. Furthermore, depending on the application, many of these solutions can be considered equivalent. The problem of efficiently enumerating the equivalence classes, or one representative per class, without generating all the solutions, although identified as a need in many areas, has been addressed only for very few specific cases. In this paper, we provide a general framework that solves this problem with polynomial delay for a wide variety of contexts, including optimization problems that can be addressed by dynamic programming algorithms, and for certain types of equivalence relations between solutions.

Cite as

Yishu Wang, Arnaud Mary, Marie-France Sagot, and Blerina Sinaimeri. A General Framework for Enumerating Equivalence Classes of Solutions. In 29th Annual European Symposium on Algorithms (ESA 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 204, pp. 80:1-80:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{wang_et_al:LIPIcs.ESA.2021.80,
  author =	{Wang, Yishu and Mary, Arnaud and Sagot, Marie-France and Sinaimeri, Blerina},
  title =	{{A General Framework for Enumerating Equivalence Classes of Solutions}},
  booktitle =	{29th Annual European Symposium on Algorithms (ESA 2021)},
  pages =	{80:1--80:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-204-4},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{204},
  editor =	{Mutzel, Petra and Pagh, Rasmus and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2021.80},
  URN =		{urn:nbn:de:0030-drops-146614},
  doi =		{10.4230/LIPIcs.ESA.2021.80},
  annote =	{Keywords: Enumeration algorithms, Equivalence relation, Dynamic programming}
}
Document
Making Sense of a Cophylogeny Output: Efficient Listing of Representative Reconciliations

Authors: Yishu Wang, Arnaud Mary, Marie-France Sagot, and Blerina Sinaimeri

Published in: LIPIcs, Volume 201, 21st International Workshop on Algorithms in Bioinformatics (WABI 2021)


Abstract
Cophylogeny reconciliation is a powerful method for analyzing host-parasite (or host-symbiont) co-evolution. It models co-evolution as an optimization problem where the set of all optimal solutions may represent different biological scenarios which thus need to be analyzed separately. Despite the significant research done in the area, few approaches have addressed the problem of helping the biologist deal with the often huge space of optimal solutions. In this paper, we propose a new approach to tackle this problem. We introduce three different criteria under which two solutions may be considered biologically equivalent, and then we propose polynomial-delay algorithms that enumerate only one representative per equivalence class (without listing all the solutions). Our results are of both theoretical and practical importance. Indeed, as shown by the experiments, we are able to significantly reduce the space of optimal solutions while still maintaining important biological information about the whole space.

Cite as

Yishu Wang, Arnaud Mary, Marie-France Sagot, and Blerina Sinaimeri. Making Sense of a Cophylogeny Output: Efficient Listing of Representative Reconciliations. In 21st International Workshop on Algorithms in Bioinformatics (WABI 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 201, pp. 3:1-3:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{wang_et_al:LIPIcs.WABI.2021.3,
  author =	{Wang, Yishu and Mary, Arnaud and Sagot, Marie-France and Sinaimeri, Blerina},
  title =	{{Making Sense of a Cophylogeny Output: Efficient Listing of Representative Reconciliations}},
  booktitle =	{21st International Workshop on Algorithms in Bioinformatics (WABI 2021)},
  pages =	{3:1--3:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-200-6},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{201},
  editor =	{Carbone, Alessandra and El-Kebir, Mohammed},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.WABI.2021.3},
  URN =		{urn:nbn:de:0030-drops-143564},
  doi =		{10.4230/LIPIcs.WABI.2021.3},
  annote =	{Keywords: Cophylogeny, Enumeration, Equivalence relation, Dynamic programming}
}

Wang, Yiyuan

Document
Improving Local Search for Pseudo Boolean Optimization by Fragile Scoring Function and Deep Optimization

Authors: Wenbo Zhou, Yujiao Zhao, Yiyuan Wang, Shaowei Cai, Shimao Wang, Xinyu Wang, and Minghao Yin

Published in: LIPIcs, Volume 280, 29th International Conference on Principles and Practice of Constraint Programming (CP 2023)


Abstract
Pseudo-Boolean optimization (PBO) is widely used to model combinatorial optimization problems, especially in real-world applications. Despite its significant importance in both theory and applications, there are few works on using local search to solve PBO. This paper develops a novel local search framework for PBO based on three main ideas. First, we design a two-level selection strategy to evaluate all candidate variables. Second, we propose a novel deep optimization strategy to perturb parts of the search space. Third, a sampling-based flipping method is applied to help the algorithm escape local optima. Experimental results show that the proposed algorithms outperform three state-of-the-art PBO algorithms on most instances.
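
Illustration (a generic local search skeleton, not the algorithm proposed in the paper; the two-level scoring, deep optimization, and sampling flipping strategies are not reproduced; all names are hypothetical): a plain penalty-based local search for pseudo-Boolean optimization over linear constraints.

import random

def pbo_local_search(n_vars, constraints, objective, steps=10000, seed=0):
    """Plain local search for PBO (illustration only).
    constraints: list of (coeffs, bound) meaning sum_i coeffs[i]*x[i] >= bound
    objective:   coefficients of a linear objective to minimize."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_vars)]

    def violation(assign):
        return sum(max(0, b - sum(c * v for c, v in zip(coeffs, assign)))
                   for coeffs, b in constraints)

    def cost(assign):
        # Infeasibility is penalized much more heavily than the objective.
        return 1000 * violation(assign) + sum(c * v for c, v in zip(objective, assign))

    best, best_cost = x[:], cost(x)      # may stay infeasible if none is found
    for _ in range(steps):
        flips = []
        for i in range(n_vars):          # score every single-variable flip
            x[i] ^= 1
            flips.append((cost(x), i))
            x[i] ^= 1
        new_cost, i = min(flips)
        if new_cost >= cost(x):          # stuck in a local optimum: random flip
            i = rng.randrange(n_vars)
        x[i] ^= 1
        if violation(x) == 0 and cost(x) < best_cost:
            best, best_cost = x[:], cost(x)
    return best, best_cost

# Tiny example: minimize x0 + 2*x1 + 3*x2 subject to x0 + x1 + x2 >= 2.
print(pbo_local_search(3, [([1, 1, 1], 2)], [1, 2, 3]))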

Cite as

Wenbo Zhou, Yujiao Zhao, Yiyuan Wang, Shaowei Cai, Shimao Wang, Xinyu Wang, and Minghao Yin. Improving Local Search for Pseudo Boolean Optimization by Fragile Scoring Function and Deep Optimization. In 29th International Conference on Principles and Practice of Constraint Programming (CP 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 280, pp. 41:1-41:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{zhou_et_al:LIPIcs.CP.2023.41,
  author =	{Zhou, Wenbo and Zhao, Yujiao and Wang, Yiyuan and Cai, Shaowei and Wang, Shimao and Wang, Xinyu and Yin, Minghao},
  title =	{{Improving Local Search for Pseudo Boolean Optimization by Fragile Scoring Function and Deep Optimization}},
  booktitle =	{29th International Conference on Principles and Practice of Constraint Programming (CP 2023)},
  pages =	{41:1--41:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-300-3},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{280},
  editor =	{Yap, Roland H. C.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2023.41},
  URN =		{urn:nbn:de:0030-drops-190784},
  doi =		{10.4230/LIPIcs.CP.2023.41},
  annote =	{Keywords: Local Search, Pseudo-Boolean Optimization, Deep Optimization}
}
Document
Improving Local Search for Minimum Weighted Connected Dominating Set Problem by Inner-Layer Local Search

Authors: Bohan Li, Kai Wang, Yiyuan Wang, and Shaowei Cai

Published in: LIPIcs, Volume 210, 27th International Conference on Principles and Practice of Constraint Programming (CP 2021)


Abstract
The minimum weighted connected dominating set (MWCDS) problem is an important variant of connected dominating set problems with wide applications, especially in heterogeneous networks and gene regulatory networks. In this paper, we develop a nested local search algorithm called NestedLS for solving MWCDS on classic benchmarks and massive graphs. Within this local search framework, we propose two novel ideas that exploit previous search information. First, we design a restart-based smoothing mechanism as a diversification method to escape from local optima. Second, we propose a novel inner-layer local search method to enlarge the candidate removal set, which can be modelled as an optimized version of the spanning tree problem. Moreover, the inner-layer local search method is a general technique for maintaining the connectivity constraint when dealing with massive graphs. Experimental results show that NestedLS outperforms state-of-the-art meta-heuristic algorithms on most instances.
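
Illustration (a feasibility helper, not the NestedLS algorithm; names are hypothetical): any local search for MWCDS repeatedly needs to check that a candidate set both dominates the graph and induces a connected subgraph, as sketched below.

from collections import deque

def is_connected_dominating_set(adj, weights, candidate):
    """Check feasibility of `candidate` for Minimum Weighted Connected
    Dominating Set and return (feasible, total_weight).
    adj: dict node -> set of neighbours; weights: dict node -> weight."""
    cand = set(candidate)
    if not cand:
        return (len(adj) == 0, 0)
    # Domination: every vertex is in the set or adjacent to a member.
    dominated = set(cand)
    for v in cand:
        dominated |= adj[v]
    if dominated != set(adj):
        return (False, None)
    # Connectivity of the induced subgraph, via BFS restricted to cand.
    start = next(iter(cand))
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for w in adj[u] & cand:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    if seen != cand:
        return (False, None)
    return (True, sum(weights[v] for v in cand))

adj = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2}, 4: {2, 5}, 5: {4}}
weights = {v: v for v in adj}
print(is_connected_dominating_set(adj, weights, {2, 4}))   # (True, 6)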

Cite as

Bohan Li, Kai Wang, Yiyuan Wang, and Shaowei Cai. Improving Local Search for Minimum Weighted Connected Dominating Set Problem by Inner-Layer Local Search. In 27th International Conference on Principles and Practice of Constraint Programming (CP 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 210, pp. 39:1-39:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{li_et_al:LIPIcs.CP.2021.39,
  author =	{Li, Bohan and Wang, Kai and Wang, Yiyuan and Cai, Shaowei},
  title =	{{Improving Local Search for Minimum Weighted Connected Dominating Set Problem by Inner-Layer Local Search}},
  booktitle =	{27th International Conference on Principles and Practice of Constraint Programming (CP 2021)},
  pages =	{39:1--39:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-211-2},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{210},
  editor =	{Michel, Laurent D.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2021.39},
  URN =		{urn:nbn:de:0030-drops-153304},
  doi =		{10.4230/LIPIcs.CP.2021.39},
  annote =	{Keywords: Operations Research, NP-hard Problem, Local Search, Weighted Connected Dominating Set Problem}
}

Wang, Huayi

Document
On Efficient Range-Summability of IID Random Variables in Two or Higher Dimensions

Authors: Jingfan Meng, Huayi Wang, Jun Xu, and Mitsunori Ogihara

Published in: LIPIcs, Volume 255, 26th International Conference on Database Theory (ICDT 2023)


Abstract
d-dimensional (for d > 1) efficient range-summability (dD-ERS) of random variables (RVs) is a fundamental algorithmic problem with applications to two important families of database problems, namely, fast approximate wavelet tracking (FAWT) on data streams and approximately answering range-sum queries over a data cube. Whether there are efficient solutions to the dD-ERS problem, or to the latter database problem, had been two long-standing open questions; both are solved in this work. Specifically, we propose a novel solution framework for dD-ERS on RVs that have a Gaussian or Poisson distribution. Our dD-ERS solutions are the first with polylogarithmic time complexities. Furthermore, we develop a novel k-wise independence theory that allows our dD-ERS solutions to have both high computational efficiency and strong provable independence guarantees. Finally, we show that, under a sufficient and likely necessary condition, certain existing solutions for 1D-ERS can be generalized to higher dimensions.

Cite as

Jingfan Meng, Huayi Wang, Jun Xu, and Mitsunori Ogihara. On Efficient Range-Summability of IID Random Variables in Two or Higher Dimensions. In 26th International Conference on Database Theory (ICDT 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 255, pp. 21:1-21:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{meng_et_al:LIPIcs.ICDT.2023.21,
  author =	{Meng, Jingfan and Wang, Huayi and Xu, Jun and Ogihara, Mitsunori},
  title =	{{On Efficient Range-Summability of IID Random Variables in Two or Higher Dimensions}},
  booktitle =	{26th International Conference on Database Theory (ICDT 2023)},
  pages =	{21:1--21:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-270-9},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{255},
  editor =	{Geerts, Floris and Vandevoort, Brecht},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICDT.2023.21},
  URN =		{urn:nbn:de:0030-drops-177624},
  doi =		{10.4230/LIPIcs.ICDT.2023.21},
  annote =	{Keywords: fast range-summation, multidimensional data streams, Haar wavelet transform}
}
Document
A Dyadic Simulation Approach to Efficient Range-Summability

Authors: Jingfan Meng, Huayi Wang, Jun Xu, and Mitsunori Ogihara

Published in: LIPIcs, Volume 220, 25th International Conference on Database Theory (ICDT 2022)


Abstract
Efficient range-summability (ERS) of a long list of random variables is a fundamental algorithmic problem with applications to three important database problems, namely, data stream processing, space-efficient histogram maintenance (SEHM), and approximate nearest neighbor searches (ANNS). In this work, we propose a novel dyadic simulation framework and use it to develop three novel ERS solutions, namely Gaussian-dyadic simulation tree (DST), Cauchy-DST, and Random Walk-DST. We also propose novel rejection sampling techniques to make these solutions computationally efficient. Furthermore, we develop a novel k-wise independence theory that allows our ERS solutions to have both high computational efficiency and strong provable independence guarantees.
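
Illustration (a minimal sketch of the dyadic-simulation idea for Gaussian variables, not the paper's Gaussian-DST implementation; the seeding scheme and names are assumptions): the sum over the whole universe is drawn once, each node splits its sum between its two children using the conditional law of a half-sum given the total, and per-node seeding keeps repeated queries consistent, so a range sum touches only logarithmically many nodes and never materializes individual variables.

import math, random

UNIVERSE = 1 << 20   # number of underlying i.i.d. N(0, 1) variables X_0..X_{n-1}

def range_sum(lo, hi, node=(0, UNIVERSE), total=None):
    """Sum of X_lo + ... + X_{hi-1}, materializing only O(log UNIVERSE)
    dyadic nodes per query."""
    a, b = node
    n = b - a
    if total is None:                    # root call: draw the grand total once
        total = random.Random(hash((a, b))).gauss(0.0, math.sqrt(n))
    if lo <= a and b <= hi:              # node fully inside the query range
        return total
    if hi <= a or b <= lo:               # node disjoint from the query range
        return 0.0
    mid = (a + b) // 2
    # Conditional law of the left half-sum given this node's total:
    # Normal(total / 2, variance n / 4); the right half is the remainder.
    # Seeding by the child interval makes repeated queries consistent.
    left = random.Random(hash((a, mid))).gauss(total / 2.0, math.sqrt(n / 4.0))
    right = total - left
    return (range_sum(lo, hi, (a, mid), left)
            + range_sum(lo, hi, (mid, b), right))

print(range_sum(10, 1000))
print(range_sum(10, 500) + range_sum(500, 1000))   # agrees up to rounding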

Cite as

Jingfan Meng, Huayi Wang, Jun Xu, and Mitsunori Ogihara. A Dyadic Simulation Approach to Efficient Range-Summability. In 25th International Conference on Database Theory (ICDT 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 220, pp. 17:1-17:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{meng_et_al:LIPIcs.ICDT.2022.17,
  author =	{Meng, Jingfan and Wang, Huayi and Xu, Jun and Ogihara, Mitsunori},
  title =	{{A Dyadic Simulation Approach to Efficient Range-Summability}},
  booktitle =	{25th International Conference on Database Theory (ICDT 2022)},
  pages =	{17:1--17:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-223-5},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{220},
  editor =	{Olteanu, Dan and Vortmeier, Nils},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICDT.2022.17},
  URN =		{urn:nbn:de:0030-drops-158915},
  doi =		{10.4230/LIPIcs.ICDT.2022.17},
  annote =	{Keywords: fast range-summation, locality-sensitive hashing, rejection sampling}
}