21 Search Results for "Liu, Paul"


Document
Invited Talk
Beyond Optimal Solutions for Real-World Problems (Invited Talk)

Authors: Maria Garcia de la Banda

Published in: LIPIcs, Volume 280, 29th International Conference on Principles and Practice of Constraint Programming (CP 2023)


Abstract
Combinatorial optimisation technology has come a long way. We now have mature high-level modelling languages in which to specify a model of the particular problem of interest [Nethercote et al., 2007; Frisch et al., 2008; Van Hentenryck, 1999; Fourer et al., 1990]; robust complete solvers in each major constraint paradigm, including Constraint Programming (CP), MaxSAT [Jessica Davies and Fahiem Bacchus, 2011; Alexey Ignatiev et al., 2019], and Mixed Integer Programming (MIP); effective incomplete search techniques, such as Large Neighbourhood Search [Paul Shaw, 1998], that can easily be combined with complete solvers to speed up the search; and enough general knowledge about modelling techniques to understand the need for our models to incorporate components such as global constraints [Willem-Jan van Hoeve and Irit Katriel, 2006], symmetry constraints [Ian P. Gent et al., 2006], and more. All this has significantly reduced the amount of knowledge required to apply this technology successfully to the many different combinatorial optimisation problems that permeate our society. And yet, not many organisations use such advanced optimisation technology; instead, they often rely on the solutions provided by problem-specific algorithms that are implemented in traditional imperative languages and lack any of the above advances. Further, while advanced optimisation technology is particularly suitable for the kind of complex human-in-the-loop decision-making problems that occur in critical sectors of our society, including health, transport, energy, disaster management, environment and finance, these decisions are often still made by people with little or no technological support. In this extended abstract I argue that to change this state of affairs, our research focus needs to change from improving the technology on its own to improving it so that users can better trust, use, and maintain the optimisation systems that we develop with it. The rest of this extended abstract discusses my personal experiences and opinions on these three points.

Trust. I highlight trust (which focuses on the user's point of view) rather than trustworthiness (which is a characteristic of the software itself) because I think it is the former, rather than the latter, that is at stake for the adoption of optimisation technology. One of the biggest hurdles I have found for trust in the context of optimisation systems is for the domain experts to (feel like they) understand the underlying model. While many users will never do so (or have to), I believe it is key for domain experts to have a high-level understanding of the constraints in the model, since their (dis)trust will likely spread through the organisation, impacting the adoption of the system. Thanks to the use of high-level modelling languages in CP, our group has achieved this [Matthias Klapperstueck et al., 2023] by documenting the constraints in a language the user knows (mathematics) and linking each constraint to the particular part of the model that implements it (via comments). While domain experts do not completely understand the model, the similarity between the format they understand (mathematics) and the model constraints has helped them verify our perception of their problem and improved their trust in the model. However, more needs to be done in this direction via the development of formal techniques. For example, our group is exploring the use of domain-specific languages [Hudak, 1997] as a bridge between domain experts and modellers that helps both trust and maintenance (see later). This [Sameela Suharshani Wijesundara et al., 2023] and other approaches need to be explored. A very significant source of trust for our domain experts (and of trustworthiness for the software) has been the development of two different models, implemented by two different people, for the same problem [Matthias Klapperstueck et al., 2023]. While this can be seen as a prohibitively expensive exercise, it did not take that long once the first model was mature, it is a good way to onboard new optimisation team members, and it has helped us detect not only bugs but also differences in the interpretation of domain expert information. For optimisation problems where it is not possible to verify the optimality (or even correctness) of the solution, we see such redundant modelling as the only solution for now. Interestingly, a significant step forward in obtaining the trust of our domain experts has been the generation of an optimality gap whenever an optimal solution could not be found due to time constraints. While explaining this concept took time, once understood it has boosted their trust, particularly when tackling problems where the solution is not easily verifiable or when approximate models/data are used (needed for speed, see later). This makes it difficult to work with CP and SAT solvers, as they usually lack tight lower bounds. Finally, trust is often developed through the use of the system, which I discuss below.

Use. Usability is known to be key for the deployment of software systems. By "system" in our context, I refer to the combination of the problem model(s), the associated solver(s) and, importantly, the User Interface (UI) that often integrates them and is fundamental to their success. In addition to the traditional usability characteristics of software systems, I believe an optimisation system requires particular care in the following areas. Interaction, i.e., the system must allow users to interact with the UI not only to provide and modify the input data, but also to modify the constraints (at the very least by turning some on/off) as well as to explore and compare solutions, as argued in [David Meignan et al., 2015; Jie Liu et al., 2021]. Incremental compilers and solvers would significantly help in making this easier, as would generic ways for the UIs to communicate with them. Conflict resolution, that is, ensuring the system can not only detect infeasible instances, but also support users in understanding the data/constraints that cause infeasibility and how to modify the instance to make it feasible. Any interactive optimisation system that has users will likely have conflicts. Thus, it is mandatory for CP to improve its conflict resolution technology which, while existent [João Marques-Silva and Alessandro Previti, 2014; Lauffer and Topcu, 2019; Ilankaikone Senthooran et al., 2023], is not widespread and is often still problem-dependent, overwhelming (in the number of constraints shown to the user) and slow. Without it, users will be "stumped" when (rather than if) infeasibility is reached. Solution diversity, that is, supporting users in obtaining a diverse set of (close-to-optimal) solutions, where diversity is measured by a user-provided metric that is somehow modelled. While some solver-independent technology has been developed and implemented for this [Emmanuel Hebrard et al., 2005; Thierry Petit and Andrew C. Trapp, 2015; Linnea Ingmar et al., 2020], it should be easier to use and more widespread. Further, it requires sophisticated solution comparison capabilities and, importantly, for optimal solutions to be found in seconds rather than hours. This brings me to speed, an area where CP solvers are falling behind. Most of our research group's applications now use MIP solvers due to the need for floats (which precludes us from using learning solvers such as Chuffed [Geoffrey Chu, 2013]), but also due to the lack of the effective warm-start processes that are available in MIP solvers. Interestingly, data and model approximations have been shown to achieve orders-of-magnitude speedups with small reductions in optimality [Matthias Klapperstueck et al., 2023]. Developing generic (i.e., problem-independent) accurate approximations would be extremely useful for complex decision systems. Other areas where I think generic CP methods are worth investigating more include dealing with uncertainty and online problems, ensuring solution fairness (even if it is over time), and studying predict + optimise approaches.

Maintain. I know of very few papers devoted to the issue of maintenance in optimisation technology. While this may be due to my lack of knowledge, I suspect it is also due to the limited adoption of optimisation technology. While the issues in this area are again common to other software systems, I believe the solutions for CP require special attention. For example, the issue of changes in user requirements (which our research group calls problem drift) seems particularly prevalent in decision-making systems, as such problems can evolve rapidly due to unforeseen circumstances. This can make optimisation systems obsolete faster than expected. Our research group has proposed to tackle problem drift by developing a requirements model, implemented in the above-mentioned domain-specific languages and created by both domain experts and modellers, that, when modified, re-generates parts of the model to support the modifications [Sameela Suharshani Wijesundara et al., 2023]. This and other approaches, such as the creation of reusable model components [Sophia Saller and Jana Koehler, 2022; Toby Walsh, 2003] or instantiatable classes for common problem domains, are worth investigating.
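
The optimality gap mentioned in the abstract is easy to surface to users whenever a solver reports both an incumbent objective value and a dual bound. A minimal sketch, assuming a minimisation problem and hypothetical incumbent/bound values taken from a solver log:

```python
def optimality_gap(incumbent: float, dual_bound: float) -> float:
    """Relative optimality gap for a minimisation problem.

    `incumbent` is the objective of the best solution found so far and
    `dual_bound` is the best proven lower bound. A gap of 0.0 means the
    incumbent is proven optimal; 0.05 means it is within 5% of optimal.
    """
    if incumbent == 0:
        return 0.0 if dual_bound == 0 else float("inf")
    return (incumbent - dual_bound) / abs(incumbent)

# Example: a MIP solver stopped at its time limit with these (made-up) bounds.
print(f"gap = {optimality_gap(1040.0, 1001.5):.1%}")  # -> gap = 3.7%
```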

Cite as

Maria Garcia de la Banda. Beyond Optimal Solutions for Real-World Problems (Invited Talk). In 29th International Conference on Principles and Practice of Constraint Programming (CP 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 280, pp. 1:1-1:4, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{garciadelabanda:LIPIcs.CP.2023.1,
  author =	{Garcia de la Banda, Maria},
  title =	{{Beyond Optimal Solutions for Real-World Problems}},
  booktitle =	{29th International Conference on Principles and Practice of Constraint Programming (CP 2023)},
  pages =	{1:1--1:4},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-300-3},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{280},
  editor =	{Yap, Roland H. C.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2023.1},
  URN =		{urn:nbn:de:0030-drops-190384},
  doi =		{10.4230/LIPIcs.CP.2023.1},
  annote =	{Keywords: Combinatorial optimisation systems, usability, trust, maintenance}
}
Document
Solving Directed Feedback Vertex Set by Iterative Reduction to Vertex Cover

Authors: Sebastian Angrick, Ben Bals, Katrin Casel, Sarel Cohen, Tobias Friedrich, Niko Hastrich, Theresa Hradilak, Davis Issac, Otto Kißig, Jonas Schmidt, and Leo Wendt

Published in: LIPIcs, Volume 265, 21st International Symposium on Experimental Algorithms (SEA 2023)


Abstract
In the Directed Feedback Vertex Set (DFVS) problem, one is given a directed graph G = (V,E) and wants to find a minimum-cardinality set S ⊆ V such that G-S is acyclic. DFVS is a fundamental problem in computer science and finds applications in areas such as deadlock detection. The problem was the subject of the 2022 PACE coding challenge. We develop a novel exact algorithm for the problem that is tailored to perform well on instances that are mostly bi-directed. For such instances, we adapt techniques from the well-researched vertex cover problem. Our core idea is an iterative reduction to vertex cover. To this end, we also develop a new reduction rule that reduces the number of edges that are not bi-directed. With the resulting algorithm, we achieved third place in the exact track of the PACE challenge. We perform computational experiments and compare the running time to other exact algorithms, in particular to the winning algorithm of the PACE challenge. Our experiments show that we outpace the other algorithms on instances that have a low density of uni-directed edges.
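
To make the core idea concrete, the following is a small, self-contained sketch of a generic lazy cycle-covering loop in the spirit of "iterative reduction to vertex cover": the bi-directed pairs seed a vertex-cover-style covering problem, and any directed cycle that survives removal of the current solution is added as a further constraint. This is a conceptual illustration only (brute-force covering, tiny instances), not the authors' engineered solver or their new reduction rule.

```python
from itertools import combinations

def find_cycle(nodes, edges, removed):
    """Return some directed cycle in G - removed, or None if that graph is acyclic."""
    adj = {u: [] for u in nodes if u not in removed}
    for u, v in edges:
        if u in adj and v in adj:
            adj[u].append(v)
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {u: WHITE for u in adj}
    parent = {}
    for start in adj:
        if colour[start] != WHITE:
            continue
        colour[start] = GREY
        stack = [(start, iter(adj[start]))]
        while stack:
            u, it = stack[-1]
            for v in it:
                if colour[v] == WHITE:
                    parent[v] = u
                    colour[v] = GREY
                    stack.append((v, iter(adj[v])))
                    break
                if colour[v] == GREY:        # back edge: v is an ancestor of u, so we found a cycle
                    cycle, w = [u], u
                    while w != v:
                        w = parent[w]
                        cycle.append(w)
                    return cycle
            else:                            # all out-edges of u processed
                colour[u] = BLACK
                stack.pop()
    return None

def min_cover(constraints, nodes):
    """Smallest vertex set hitting every constraint (brute force, fine for tiny examples only)."""
    for k in range(len(nodes) + 1):
        for cand in combinations(nodes, k):
            chosen = set(cand)
            if all(chosen & c for c in constraints):
                return chosen

def dfvs(nodes, edges):
    """Lazy-constraint loop: seed with the 2-cycle (vertex cover) constraints given by
    bi-directed pairs, then keep adding any directed cycle that survives vertex removal."""
    edge_set = set(edges)
    constraints = [{u, v} for (u, v) in edge_set if (v, u) in edge_set and u < v]
    while True:
        solution = min_cover(constraints, sorted(nodes))
        cycle = find_cycle(nodes, edges, solution)
        if cycle is None:
            return solution
        constraints.append(set(cycle))

# Tiny example: a fully bi-directed triangle plus one uni-directed cycle 3 -> 4 -> 5 -> 3.
nodes = [1, 2, 3, 4, 5]
edges = [(1, 2), (2, 1), (2, 3), (3, 2), (1, 3), (3, 1), (3, 4), (4, 5), (5, 3)]
print(sorted(dfvs(nodes, edges)))  # -> [1, 3]
```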

Cite as

Sebastian Angrick, Ben Bals, Katrin Casel, Sarel Cohen, Tobias Friedrich, Niko Hastrich, Theresa Hradilak, Davis Issac, Otto Kißig, Jonas Schmidt, and Leo Wendt. Solving Directed Feedback Vertex Set by Iterative Reduction to Vertex Cover. In 21st International Symposium on Experimental Algorithms (SEA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 265, pp. 10:1-10:14, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{angrick_et_al:LIPIcs.SEA.2023.10,
  author =	{Angrick, Sebastian and Bals, Ben and Casel, Katrin and Cohen, Sarel and Friedrich, Tobias and Hastrich, Niko and Hradilak, Theresa and Issac, Davis and Ki{\ss}ig, Otto and Schmidt, Jonas and Wendt, Leo},
  title =	{{Solving Directed Feedback Vertex Set by Iterative Reduction to Vertex Cover}},
  booktitle =	{21st International Symposium on Experimental Algorithms (SEA 2023)},
  pages =	{10:1--10:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-279-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{265},
  editor =	{Georgiadis, Loukas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2023.10},
  URN =		{urn:nbn:de:0030-drops-183602},
  doi =		{10.4230/LIPIcs.SEA.2023.10},
  annote =	{Keywords: directed feedback vertex set, vertex cover, reduction rules}
}
Document
A Graph-Theoretic Formulation of Exploratory Blockmodeling

Authors: Alexander Bille, Niels Grüttemeier, Christian Komusiewicz, and Nils Morawietz

Published in: LIPIcs, Volume 265, 21st International Symposium on Experimental Algorithms (SEA 2023)


Abstract
We present a new, simple graph-theoretic formulation of the exploratory blockmodeling problem on undirected and unweighted one-mode networks. Our formulation takes as input the network G and the maximum number t of blocks for the solution model. The task is to find a minimum-size set of edge insertions and deletions that transform the input graph G into a graph G' with at most t neighborhood classes. Herein, a neighborhood class is a maximal set of vertices with the same neighborhood. The neighborhood classes of G' directly give the blocks and block interactions of the computed blockmodel. We analyze the classical and parameterized complexity of the exploratory blockmodeling problem, and provide a branch-and-bound algorithm, an ILP formulation, and several heuristics. Finally, we compare our exact algorithms to previous ILP-based approaches and show that the new algorithms are faster for t ≥ 4.
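
The notion of a neighborhood class used above is easy to state in code: group vertices by their neighborhood. A small sketch, assuming the graph is given as an adjacency-set dictionary and using open neighborhoods (whether the closed neighborhood should be used instead is a modelling detail left to the paper):

```python
from collections import defaultdict

def neighborhood_classes(adj):
    """Group vertices by their (open) neighborhood N(v); each group is a
    'neighborhood class' in the sense of the abstract above."""
    classes = defaultdict(list)
    for v, nbrs in adj.items():
        classes[frozenset(nbrs)].append(v)
    return list(classes.values())

# Example: the complete bipartite graph K_{2,3} has exactly two neighborhood classes.
adj = {
    "a": {"x", "y", "z"}, "b": {"x", "y", "z"},
    "x": {"a", "b"}, "y": {"a", "b"}, "z": {"a", "b"},
}
print(neighborhood_classes(adj))  # -> [['a', 'b'], ['x', 'y', 'z']]
```

A graph G' with at most t such classes then directly yields the t blocks (and their interactions) of the computed blockmodel, as stated in the abstract.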

Cite as

Alexander Bille, Niels Grüttemeier, Christian Komusiewicz, and Nils Morawietz. A Graph-Theoretic Formulation of Exploratory Blockmodeling. In 21st International Symposium on Experimental Algorithms (SEA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 265, pp. 14:1-14:20, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{bille_et_al:LIPIcs.SEA.2023.14,
  author =	{Bille, Alexander and Gr\"{u}ttemeier, Niels and Komusiewicz, Christian and Morawietz, Nils},
  title =	{{A Graph-Theoretic Formulation of Exploratory Blockmodeling}},
  booktitle =	{21st International Symposium on Experimental Algorithms (SEA 2023)},
  pages =	{14:1--14:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-279-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{265},
  editor =	{Georgiadis, Loukas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2023.14},
  URN =		{urn:nbn:de:0030-drops-183648},
  doi =		{10.4230/LIPIcs.SEA.2023.14},
  annote =	{Keywords: Clustering, Exact Algorithms, ILP-Formulation, Branch-and-Bound, Social Networks}
}
Document
Greedy Heuristics for Judicious Hypergraph Partitioning

Authors: Noah Wahl and Lars Gottesbüren

Published in: LIPIcs, Volume 265, 21st International Symposium on Experimental Algorithms (SEA 2023)


Abstract
We investigate the efficacy of greedy heuristics for the judicious hypergraph partitioning problem. In contrast to balanced partitioning problems, the goal of judicious hypergraph partitioning is to minimize the maximum load over all blocks of the partition. We devise strategies for initial partitioning and FM-style post-processing. In combination with a multilevel scheme, they beat the previous state-of-the-art solver - based on greedy set covers - in both running time (two to four orders of magnitude) and solution quality (18% to 45%). A major challenge that makes local greedy approaches difficult to use for this problem is the high frequency of zero-gain moves, for which we present and evaluate counteracting mechanisms.
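
For readers unfamiliar with the objective, a hedged toy evaluation of the judicious criterion is shown below; it assumes that the load of a block counts the hyperedges with at least one pin in that block (the precise load definition is the paper's, not ours):

```python
def max_block_load(hyperedges, partition):
    """Judicious objective: the largest block load, where (as an assumption for this
    sketch) the load of a block is the number of hyperedges with at least one pin in it."""
    load = {b: 0 for b in set(partition.values())}
    for edge in hyperedges:
        for b in {partition[v] for v in edge}:  # each touched block is loaded once per hyperedge
            load[b] += 1
    return max(load.values())

# Placeholder instance: four hyperedges over five vertices split into two blocks.
hyperedges = [{"a", "b", "c"}, {"c", "d"}, {"d", "e"}, {"a", "e"}]
partition = {"a": 0, "b": 0, "c": 0, "d": 1, "e": 1}
print(max_block_load(hyperedges, partition))  # -> 3 (each block touches three of the four hyperedges)
```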

Cite as

Noah Wahl and Lars Gottesbüren. Greedy Heuristics for Judicious Hypergraph Partitioning. In 21st International Symposium on Experimental Algorithms (SEA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 265, pp. 17:1-17:16, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{wahl_et_al:LIPIcs.SEA.2023.17,
  author =	{Wahl, Noah and Gottesb\"{u}ren, Lars},
  title =	{{Greedy Heuristics for Judicious Hypergraph Partitioning}},
  booktitle =	{21st International Symposium on Experimental Algorithms (SEA 2023)},
  pages =	{17:1--17:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-279-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{265},
  editor =	{Georgiadis, Loukas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2023.17},
  URN =		{urn:nbn:de:0030-drops-183674},
  doi =		{10.4230/LIPIcs.SEA.2023.17},
  annote =	{Keywords: hypergraph partitioning, local search algorithms, load balancing, local search}
}
Document
Track A: Algorithms, Complexity and Games
Faster Submodular Maximization for Several Classes of Matroids

Authors: Monika Henzinger, Paul Liu, Jan Vondrák, and Da Wei Zheng

Published in: LIPIcs, Volume 261, 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)


Abstract
The maximization of submodular functions has found widespread application in areas such as machine learning, combinatorial optimization, and economics, where practitioners often wish to enforce various constraints; the matroid constraint has been investigated extensively due to its algorithmic properties and expressive power. Though tight approximation algorithms for general matroid constraints exist in theory, the running times of such algorithms typically scale quadratically, and are not practical for truly large scale settings. Recent progress has focused on fast algorithms for important classes of matroids given in explicit form. Currently, nearly-linear time algorithms only exist for graphic and partition matroids [Alina Ene and Huy L. Nguyen, 2019]. In this work, we develop algorithms for monotone submodular maximization constrained by graphic, transversal, or laminar matroids in time near-linear in the size of their representation. Our algorithms achieve an optimal approximation of 1-1/e-ε and both generalize and accelerate the results of Ene and Nguyen [Alina Ene and Huy L. Nguyen, 2019]. In fact, the running time of our algorithm cannot be improved within the fast continuous greedy framework of Badanidiyuru and Vondrák [Ashwinkumar Badanidiyuru and Jan Vondrák, 2014]. To achieve near-linear running time, we make use of dynamic data structures that maintain bases with approximate maximum cardinality and weight under certain element updates. These data structures need to support a weight-decrease operation and a novel Freeze operation that allows the algorithm to freeze elements (i.e., force them to be contained) in its basis regardless of future data structure operations. For the laminar matroid, we present a new dynamic data structure using the top tree interface of Alstrup, Holm, de Lichtenberg, and Thorup [Stephen Alstrup et al., 2005] that maintains the maximum weight basis under insertions and deletions of elements in O(log n) time. This data structure needs to support certain subtree query and path update operations that are performed on every insertion and deletion and are non-trivial to handle in conjunction. For the transversal matroid, the Freeze operation corresponds to requiring the data structure to keep a certain set S of vertices matched, a property that we call S-stability. While there is a large body of work on dynamic matching algorithms, none are S-stable and maintain an approximate maximum weight matching under vertex updates. We give the first such algorithm for bipartite graphs with total running time linear (up to log factors) in the number of edges.
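
As background for readers less familiar with the setting, the classical baseline that such near-linear algorithms accelerate is a plain greedy loop over value and matroid-independence oracles, which guarantees a 1/2-approximation for monotone submodular functions (the paper's algorithms reach 1-1/e-ε much faster). The sketch below shows only this textbook baseline with placeholder data, not the authors' algorithm or data structures:

```python
def greedy_matroid(elements, f, independent):
    """Classical greedy for monotone submodular maximisation subject to a matroid,
    given a value oracle f(S) and an independence oracle independent(S)."""
    S, remaining = set(), set(elements)
    while True:
        best, best_gain = None, 0.0
        for e in remaining:
            if independent(S | {e}):
                gain = f(S | {e}) - f(S)
                if gain > best_gain:
                    best, best_gain = e, gain
        if best is None:
            return S
        S.add(best)
        remaining.discard(best)

# Toy instance (placeholder data): a coverage function, which is monotone submodular,
# under a partition matroid allowing at most one element per colour class.
cover = {"e1": {1, 2, 3}, "e2": {3, 4}, "e3": {4, 5, 6}, "e4": {1, 6}}
colour = {"e1": "red", "e2": "red", "e3": "blue", "e4": "blue"}

def f(S):
    return len(set().union(*(cover[e] for e in S)))

def independent(S):
    counts = {}
    for e in S:
        counts[colour[e]] = counts.get(colour[e], 0) + 1
    return all(c <= 1 for c in counts.values())

print(greedy_matroid(cover, f, independent))  # -> {'e1', 'e3'}, covering all six items
```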

Cite as

Monika Henzinger, Paul Liu, Jan Vondrák, and Da Wei Zheng. Faster Submodular Maximization for Several Classes of Matroids. In 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 261, pp. 74:1-74:18, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{henzinger_et_al:LIPIcs.ICALP.2023.74,
  author =	{Henzinger, Monika and Liu, Paul and Vondr\'{a}k, Jan and Zheng, Da Wei},
  title =	{{Faster Submodular Maximization for Several Classes of Matroids}},
  booktitle =	{50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)},
  pages =	{74:1--74:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-278-5},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{261},
  editor =	{Etessami, Kousha and Feige, Uriel and Puppis, Gabriele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2023.74},
  URN =		{urn:nbn:de:0030-drops-181267},
  doi =		{10.4230/LIPIcs.ICALP.2023.74},
  annote =	{Keywords: submodular optimization, dynamic data structures, matching algorithms}
}
Document
Higher-Dimensional Timed and Hybrid Automata

Authors: Uli Fahrenberg

Published in: LITES, Volume 8, Issue 2 (2022): Special Issue on Distributed Hybrid Systems. Leibniz Transactions on Embedded Systems, Volume 8, Issue 2


Abstract
We introduce a new formalism of higher-dimensional timed automata, based on Pratt and van Glabbeek’s higher-dimensional automata and Alur and Dill’s timed automata. We prove that reachability for them is PSPACE-complete and can be decided using zone-based algorithms. We also extend the setting to higher-dimensional hybrid automata. The interest of our formalism is in modeling systems which exhibit both real-time behavior and concurrency. Other existing formalisms for real-time modeling identify concurrency and interleaving, which, as we shall argue, is problematic.

Cite as

Uli Fahrenberg. Higher-Dimensional Timed and Hybrid Automata. In LITES, Volume 8, Issue 2 (2022): Special Issue on Distributed Hybrid Systems. Leibniz Transactions on Embedded Systems, Volume 8, Issue 2, pp. 03:1-03:16, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2022)


BibTeX

@Article{fahrenberg:LITES.8.2.3,
  author =	{Fahrenberg, Uli},
  title =	{{Higher-Dimensional Timed and Hybrid Automata}},
  journal =	{Leibniz Transactions on Embedded Systems},
  pages =	{03:1--03:16},
  ISSN =	{2199-2002},
  year =	{2022},
  volume =	{8},
  number =	{2},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LITES.8.2.3},
  doi =		{10.4230/LITES.8.2.3},
  annote =	{Keywords: timed automaton, higher-dimensional automaton, precubical set, real time, non-interleaving concurrency, hybrid automaton}
}
Document
A Hybrid Programming Language for Formal Modeling and Verification of Hybrid Systems

Authors: Eduard Kamburjan, Stefan Mitsch, and Reiner Hähnle

Published in: LITES, Volume 8, Issue 2 (2022): Special Issue on Distributed Hybrid Systems. Leibniz Transactions on Embedded Systems, Volume 8, Issue 2


Abstract
Designing and modeling complex cyber-physical systems (CPS) faces the double challenge of combined discrete-continuous dynamics and concurrent behavior. Existing formal modeling and verification languages for CPS expose the underlying proof search technology. They lack high-level structuring elements and are not efficiently executable. The ensuing modeling gap renders formal CPS models hard to understand and to validate. We propose a high-level programming-based approach to formal modeling and verification of hybrid systems as a hybrid extension of an Active Objects language. Well-structured hybrid active programs and requirements allow automatic, reachability-preserving translation into differential dynamic logic, a logic for hybrid (discrete-continuous) programs. Verification is achieved by discharging the resulting formulas with the theorem prover KeYmaera X. We demonstrate the usability of our approach with case studies.

Cite as

Eduard Kamburjan, Stefan Mitsch, and Reiner Hähnle. A Hybrid Programming Language for Formal Modeling and Verification of Hybrid Systems. In LITES, Volume 8, Issue 2 (2022): Special Issue on Distributed Hybrid Systems. Leibniz Transactions on Embedded Systems, Volume 8, Issue 2, pp. 04:1-04:34, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2022)


BibTeX

@Article{kamburjan_et_al:LITES.8.2.4,
  author =	{Kamburjan, Eduard and Mitsch, Stefan and H\"{a}hnle, Reiner},
  title =	{{A Hybrid Programming Language for Formal Modeling and Verification of Hybrid Systems}},
  journal =	{Leibniz Transactions on Embedded Systems},
  pages =	{04:1--04:34},
  ISSN =	{2199-2002},
  year =	{2022},
  volume =	{8},
  number =	{2},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LITES.8.2.4},
  doi =		{10.4230/LITES.8.2.4},
  annote =	{Keywords: Active Objects, Differential Dynamic Logic, Hybrid Systems}
}
Document
Track A: Algorithms, Complexity and Games
Streaming Submodular Maximization Under Matroid Constraints

Authors: Moran Feldman, Paul Liu, Ashkan Norouzi-Fard, Ola Svensson, and Rico Zenklusen

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
Recent progress in (semi-)streaming algorithms for monotone submodular function maximization has led to tight results for a simple cardinality constraint. However, current techniques fail to give a similar understanding for natural generalizations, including matroid constraints. This paper aims at closing this gap. For a single matroid of rank k (i.e., any solution has cardinality at most k), our main results are:
- A single-pass streaming algorithm that uses Õ(k) memory and achieves an approximation guarantee of 0.3178.
- A multi-pass streaming algorithm that uses Õ(k) memory and achieves an approximation guarantee of (1-1/e - ε) by taking a constant (depending on ε) number of passes over the stream.
These improve on the previous best approximation guarantees of 1/4 and 1/2 for single-pass and multi-pass streaming algorithms, respectively. In fact, our multi-pass streaming algorithm is tight in that any algorithm with a better guarantee than 1/2 must make several passes through the stream, and any algorithm that beats our guarantee of 1-1/e must make linearly many passes (as well as an exponential number of value oracle queries). Moreover, we show how the approach we use for multi-pass streaming can be further strengthened if the elements of the stream arrive in uniformly random order, implying an improved result for p-matchoid constraints.
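
To fix ideas about what "single-pass, Õ(k) memory" means, below is the textbook thresholding rule for the much simpler special case of a cardinality constraint (keep an element if its marginal value clears a threshold and the buffer is not full). It is meant only to illustrate the streaming model; the matroid algorithms of the paper are substantially more involved.

```python
def threshold_stream(stream, f, k, tau):
    """One pass over the stream: keep an element whenever its marginal value with
    respect to the current buffer is at least tau and fewer than k elements are kept.
    Textbook cardinality-constrained thresholding, not the paper's matroid algorithms."""
    S = []
    for e in stream:
        if len(S) < k and f(S + [e]) - f(S) >= tau:
            S.append(e)
    return S

# Toy coverage instance (placeholder data).
cover = {"a": {1, 2}, "b": {2, 3}, "c": {4}, "d": {3, 4, 5}, "e": {1}}
f = lambda S: len(set().union(*(cover[x] for x in S))) if S else 0
print(threshold_stream("abcde", f, k=2, tau=2))  # -> ['a', 'd']
```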

Cite as

Moran Feldman, Paul Liu, Ashkan Norouzi-Fard, Ola Svensson, and Rico Zenklusen. Streaming Submodular Maximization Under Matroid Constraints. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 59:1-59:20, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{feldman_et_al:LIPIcs.ICALP.2022.59,
  author =	{Feldman, Moran and Liu, Paul and Norouzi-Fard, Ashkan and Svensson, Ola and Zenklusen, Rico},
  title =	{{Streaming Submodular Maximization Under Matroid Constraints}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{59:1--59:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.59},
  URN =		{urn:nbn:de:0030-drops-164007},
  doi =		{10.4230/LIPIcs.ICALP.2022.59},
  annote =	{Keywords: Submodular maximization, streaming, matroid, random order}
}
Document
Randomization as Mitigation of Directed Timing Inference Based Attacks on Time-Triggered Real-Time Systems with Task Replication

Authors: Kristin Krüger, Nils Vreman, Richard Pates, Martina Maggio, Marcus Völp, and Gerhard Fohler

Published in: LITES, Volume 7, Issue 1 (2021): Special Issue on Embedded System Security. Leibniz Transactions on Embedded Systems, Volume 7, Issue 1


Abstract
Time-triggered real-time systems achieve deterministic behavior using schedules that are constructed offline, based on scheduling constraints. Their deterministic behavior makes time-triggered systems suitable for usage in safety-critical environments, like avionics. However, this determinism also allows attackers to fine-tune attacks that can be carried out after studying the behavior of the system through side channels, targeting safety-critical victim tasks. Replication -- i.e., the execution of task variants across different cores -- is inherently able to tolerate both accidental and malicious faults (i.e., attacks) as long as these faults are independent of one another. Yet, targeted attacks on the timing behavior of tasks that utilize information gained about the system behavior violate the fault-independence assumption on which fault tolerance is based. This violation may give attackers the opportunity to compromise all replicas simultaneously, in particular if they can mount the attack from already compromised components. In this paper, we analyze vulnerabilities of time-triggered systems, focusing on safety-certified multicore real-time systems. We introduce two runtime mitigation strategies to withstand directed timing inference based attacks: (i) schedule randomization at slot level, and (ii) randomization within a set of offline constructed schedules. We evaluate these mitigation strategies with synthetic experiments and a real case study to show their effectiveness and practicality.
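
The second mitigation, randomization within a set of offline-constructed schedules, reduces at runtime to picking one precomputed (and already verified) schedule table unpredictably, for example at every hyperperiod boundary. A deliberately minimal sketch with hypothetical schedule tables, not the authors' implementation:

```python
import secrets

# Hypothetical, offline-verified schedule tables: slot index -> task id (None = idle).
SCHEDULES = [
    {0: "T1", 1: "T2", 2: "T3", 3: None},
    {0: "T2", 1: "T1", 2: None, 3: "T3"},
    {0: "T3", 1: None, 2: "T1", 3: "T2"},
]

def next_hyperperiod_schedule():
    """Pick one of the precomputed schedules unpredictably at each hyperperiod
    (a CSPRNG, since predictability is exactly what the attacker exploits)."""
    return secrets.choice(SCHEDULES)

dispatch_table = next_hyperperiod_schedule()
print(dispatch_table)
```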

Cite as

Kristin Krüger, Nils Vreman, Richard Pates, Martina Maggio, Marcus Völp, and Gerhard Fohler. Randomization as Mitigation of Directed Timing Inference Based Attacks on Time-Triggered Real-Time Systems with Task Replication. In LITES, Volume 7, Issue 1 (2021): Special Issue on Embedded System Security. Leibniz Transactions on Embedded Systems, Volume 7, Issue 1, pp. 01:1-01:29, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2021)


BibTeX

@Article{kruger_et_al:LITES.7.1.1,
  author =	{Kr\"{u}ger, Kristin and Vreman, Nils and Pates, Richard and Maggio, Martina and V\"{o}lp, Marcus and Fohler, Gerhard},
  title =	{{Randomization as Mitigation of Directed Timing Inference Based Attacks on Time-Triggered Real-Time Systems with Task Replication}},
  journal =	{Leibniz Transactions on Embedded Systems},
  pages =	{01:1--01:29},
  ISSN =	{2199-2002},
  year =	{2021},
  volume =	{7},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LITES.7.1.1},
  doi =		{10.4230/LITES.7.1.1},
  annote =	{Keywords: real-time systems, time-triggered systems, security}
}
Document
We know what you're doing! Application detection using thermal data

Authors: Philipp Miedl, Rehan Ahmed, and Lothar Thiele

Published in: LITES, Volume 7, Issue 1 (2021): Special Issue on Embedded System Security. Leibniz Transactions on Embedded Systems, Volume 7, Issue 1


Abstract
Modern mobile and embedded devices have high computing power, which allows them to be used for multiple purposes. Therefore, applications with low security restrictions may execute on the same device as applications handling highly sensitive information. In such a setup, a security risk occurs if it is possible that an application uses system characteristics to gather information about another application on the same device. In this work, we present a method to leak sensitive runtime information by using just the temperature sensor readings of a mobile device. We employ a Convolutional Neural Network, Long Short-Term Memory units and subsequent label sequence processing to identify the sequence of executed applications over time. To test our hypothesis, we collect data from two state-of-the-art smartphones and real user usage patterns. We show an extensive evaluation using laboratory data, where we achieve labelling accuracies up to 90% and negligible timing error. Based on our analysis we state that the thermal information can be used to compromise sensitive user data and increase the vulnerability of mobile devices. A study based on data collected outside of the laboratory opens up various future directions for research.
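
The model family described (a CNN feeding an LSTM that emits a label per time step) is easy to sketch in Keras. The layer sizes, sequence length, and the numbers of sensors and applications below are placeholders and not the values or exact architecture used in the paper:

```python
import tensorflow as tf

SEQ_LEN, N_SENSORS, N_APPS = 256, 8, 10  # placeholder dimensions

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, N_SENSORS)),           # thermal sensor traces
    tf.keras.layers.Conv1D(32, kernel_size=5, padding="same", activation="relu"),
    tf.keras.layers.Conv1D(32, kernel_size=5, padding="same", activation="relu"),
    tf.keras.layers.LSTM(64, return_sequences=True),              # temporal context
    tf.keras.layers.TimeDistributed(                               # one application label per time step
        tf.keras.layers.Dense(N_APPS, activation="softmax")),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```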

Cite as

Philipp Miedl, Rehan Ahmed, and Lothar Thiele. We know what you're doing! Application detection using thermal data. In LITES, Volume 7, Issue 1 (2021): Special Issue on Embedded System Security. Leibniz Transactions on Embedded Systems, Volume 7, Issue 1, pp. 02:1-02:28, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2021)


BibTeX

@Article{miedl_et_al:LITES.7.1.2,
  author =	{Miedl, Philipp and Ahmed, Rehan and Thiele, Lothar},
  title =	{{We know what you're doing! Application detection using thermal data}},
  journal =	{Leibniz Transactions on Embedded Systems},
  pages =	{02:1--02:28},
  ISSN =	{2199-2002},
  year =	{2021},
  volume =	{7},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LITES.7.1.2},
  doi =		{10.4230/LITES.7.1.2},
  annote =	{Keywords: Thermal Monitoring, Side Channel, Data Leak, Sequence Labelling}
}
Document
CG Challenge
Coordinated Motion Planning Through Randomized k-Opt (CG Challenge)

Authors: Paul Liu, Jack Spalding-Jamieson, Brandon Zhang, and Da Wei Zheng

Published in: LIPIcs, Volume 189, 37th International Symposium on Computational Geometry (SoCG 2021)


Abstract
This paper examines the approach taken by team gitastrophe in the CG:SHOP 2021 challenge. The challenge was to find a sequence of simultaneous moves of square robots between two given configurations that minimized either total distance travelled or makespan (total time). Our winning approach has two main components: an initialization phase that finds a good initial solution, and a k-opt local search phase which optimizes this solution. This led to a first place finish in the distance category and a third place finish in the makespan category.

Cite as

Paul Liu, Jack Spalding-Jamieson, Brandon Zhang, and Da Wei Zheng. Coordinated Motion Planning Through Randomized k-Opt (CG Challenge). In 37th International Symposium on Computational Geometry (SoCG 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 189, pp. 64:1-64:8, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{liu_et_al:LIPIcs.SoCG.2021.64,
  author =	{Liu, Paul and Spalding-Jamieson, Jack and Zhang, Brandon and Zheng, Da Wei},
  title =	{{Coordinated Motion Planning Through Randomized k-Opt}},
  booktitle =	{37th International Symposium on Computational Geometry (SoCG 2021)},
  pages =	{64:1--64:8},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-184-9},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{189},
  editor =	{Buchin, Kevin and Colin de Verdi\`{e}re, \'{E}ric},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2021.64},
  URN =		{urn:nbn:de:0030-drops-138635},
  doi =		{10.4230/LIPIcs.SoCG.2021.64},
  annote =	{Keywords: motion planning, randomized local search, path finding}
}
Document
Algorithms and Adaptivity Gaps for Stochastic k-TSP

Authors: Haotian Jiang, Jian Li, Daogao Liu, and Sahil Singla

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
Given a metric (V,d) and a root ∈ V, the classic k-TSP problem is to find a tour originating at the root of minimum length that visits at least k nodes in V. In this work, motivated by applications where the input to an optimization problem is uncertain, we study two stochastic versions of k-TSP. In Stoch-Reward k-TSP, originally defined by Ene-Nagarajan-Saket [Ene et al., 2018], each vertex v in the given metric (V,d) contains a stochastic reward R_v. The goal is to adaptively find a tour of minimum expected length that collects at least reward k; here "adaptively" means our next decision may depend on previous outcomes. Ene et al. give an O(log k)-approximation adaptive algorithm for this problem, and left open whether there is an O(1)-approximation algorithm. We fully resolve their open question, and even give an O(1)-approximation non-adaptive algorithm for Stoch-Reward k-TSP. We also introduce and obtain similar results for the Stoch-Cost k-TSP problem. In this problem each vertex v has a stochastic cost C_v, and the goal is to visit and select at least k vertices to minimize the expected sum of tour length and cost of selected vertices. Besides being a natural stochastic generalization of k-TSP, this problem is also interesting because it generalizes the Price of Information framework [Singla, 2018] from deterministic probing costs to metric probing costs. Our techniques are based on two crucial ideas: "repetitions" and "critical scaling". In general, replacing a random variable with its expectation leads to very poor results. We show that for our problems, if we truncate the random variables at an ideal threshold, then their expected values form a good surrogate. Here, we rely on running several repetitions of our algorithm with the same threshold, and then argue concentration using Freedman’s and Jogdeo-Samuels' inequalities. Unfortunately, this ideal threshold depends on how far we are from achieving our target k, which a non-adaptive algorithm does not know. To overcome this barrier, we truncate the random variables at various different scales and identify a "critical" scale.
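
The "truncate, then use expectations" idea can be made concrete with a few lines of Monte Carlo: truncating a reward variable at a threshold τ and averaging gives the surrogate E[min(R, τ)] that the algorithms work with at each scale. The reward distribution and thresholds below are hypothetical, chosen only to show how the surrogate changes across scales:

```python
import random

def truncated_expectation(sample_reward, tau, trials=100_000):
    """Monte Carlo estimate of E[min(R, tau)], the surrogate used after truncation."""
    return sum(min(sample_reward(), tau) for _ in range(trials)) / trials

# Hypothetical heavy-tailed reward: 0 w.p. 0.7, 1 w.p. 0.2, 20 w.p. 0.1 (so E[R] = 2.2).
def sample_reward():
    u = random.random()
    return 0 if u < 0.7 else (1 if u < 0.9 else 20)

for tau in (1, 5, 20):  # trying several scales mirrors the "critical scaling" idea
    print(tau, round(truncated_expectation(sample_reward, tau), 3))
# Exact values: E[min(R,1)] = 0.3, E[min(R,5)] = 0.7, E[min(R,20)] = 2.2 (up to sampling noise).
```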

Cite as

Haotian Jiang, Jian Li, Daogao Liu, and Sahil Singla. Algorithms and Adaptivity Gaps for Stochastic k-TSP. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 45:1-45:25, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{jiang_et_al:LIPIcs.ITCS.2020.45,
  author =	{Jiang, Haotian and Li, Jian and Liu, Daogao and Singla, Sahil},
  title =	{{Algorithms and Adaptivity Gaps for Stochastic k-TSP}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{45:1--45:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.45},
  URN =		{urn:nbn:de:0030-drops-117308},
  doi =		{10.4230/LIPIcs.ITCS.2020.45},
  annote =	{Keywords: approximation algorithms, stochastic optimization, travelling salesman problem}
}
Document
Jointly Embedding Multiple Single-Cell Omics Measurements

Authors: Jie Liu, Yuanhao Huang, Ritambhara Singh, Jean-Philippe Vert, and William Stafford Noble

Published in: LIPIcs, Volume 143, 19th International Workshop on Algorithms in Bioinformatics (WABI 2019)


Abstract
Many single-cell sequencing technologies are now available, but it is still difficult to apply multiple sequencing technologies to the same single cell. In this paper, we propose an unsupervised manifold alignment algorithm, MMD-MA, for integrating multiple measurements carried out on disjoint aliquots of a given population of cells. Effectively, MMD-MA performs an in silico co-assay by embedding cells measured in different ways into a learned latent space. In the MMD-MA algorithm, single-cell data points from multiple domains are aligned by optimizing an objective function with three components: (1) a maximum mean discrepancy (MMD) term to encourage the differently measured points to have similar distributions in the latent space, (2) a distortion term to preserve the structure of the data between the input space and the latent space, and (3) a penalty term to avoid collapse to a trivial solution. Notably, MMD-MA does not require any correspondence information across data modalities, either between the cells or between the features. Furthermore, MMD-MA’s weak distributional requirements for the domains to be aligned allow the algorithm to integrate heterogeneous types of single cell measures, such as gene expression, DNA accessibility, chromatin organization, methylation, and imaging data. We demonstrate the utility of MMD-MA in simulation experiments and using a real data set involving single-cell gene expression and methylation data.
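
The first of the three terms, the MMD between the embedded domains, is standard and easy to write down. The numpy sketch below uses an RBF kernel and placeholder embeddings; the distortion and penalty terms, and the optimisation over the embeddings themselves, are omitted and are described in the paper:

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

def mmd2(X, Y, sigma=1.0):
    """Squared maximum mean discrepancy (biased estimator) between two embedded point sets.
    In MMD-MA this term pushes the differently measured cells toward a shared
    distribution in the latent space."""
    return (rbf_kernel(X, X, sigma).mean()
            + rbf_kernel(Y, Y, sigma).mean()
            - 2 * rbf_kernel(X, Y, sigma).mean())

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(100, 2))  # latent embedding of domain 1 (placeholder)
Y = rng.normal(0.5, 1.0, size=(120, 2))  # latent embedding of domain 2 (placeholder)
print(mmd2(X, Y))                         # small but positive: the domains are close, not identical
```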

Cite as

Jie Liu, Yuanhao Huang, Ritambhara Singh, Jean-Philippe Vert, and William Stafford Noble. Jointly Embedding Multiple Single-Cell Omics Measurements. In 19th International Workshop on Algorithms in Bioinformatics (WABI 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 143, pp. 10:1-10:13, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{liu_et_al:LIPIcs.WABI.2019.10,
  author =	{Liu, Jie and Huang, Yuanhao and Singh, Ritambhara and Vert, Jean-Philippe and Noble, William Stafford},
  title =	{{Jointly Embedding Multiple Single-Cell Omics Measurements}},
  booktitle =	{19th International Workshop on Algorithms in Bioinformatics (WABI 2019)},
  pages =	{10:1--10:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-123-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{143},
  editor =	{Huber, Katharina T. and Gusfield, Dan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.WABI.2019.10},
  URN =		{urn:nbn:de:0030-drops-110401},
  doi =		{10.4230/LIPIcs.WABI.2019.10},
  annote =	{Keywords: Manifold alignment, single-cell sequencing}
}
Document
Speeding Up BigClam Implementation on SNAP

Authors: C. H. Bryan Liu and Benjamin Paul Chamberlain

Published in: OASIcs, Volume 66, 2018 Imperial College Computing Student Workshop (ICCSW 2018)


Abstract
We perform a detailed analysis of the C++ implementation of the Cluster Affiliation Model for Big Networks (BigClam) on the Stanford Network Analysis Project (SNAP). BigClam is a popular graph mining algorithm that is capable of finding overlapping communities in networks containing millions of nodes. Our analysis shows a key stage of the algorithm - determining if a node belongs to a community - dominates the runtime of the implementation, yet the computation is not parallelized. We show that by parallelizing computations across multiple threads using OpenMP we can speed up the algorithm by 5.3 times when solving large networks for communities, while preserving the integrity of the program and the result.
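
The actual change described in the abstract is an OpenMP parallel-for over the per-node membership computation in SNAP's C++ code; the Python stand-in below only mirrors the structure of that change (fan the per-node work out over a worker pool). The `belongs_to` function here is a made-up placeholder, not BigClam's membership test:

```python
from concurrent.futures import ProcessPoolExecutor

def belongs_to(node):
    """Placeholder for the per-node work: decide which communities a node belongs to."""
    return node, [c for c in range(4) if (node + c) % 3 == 0]

def assign_all(nodes, workers=4):
    """Run the per-node computation across a pool of workers, analogous in shape
    to the OpenMP parallelization of the hot loop described above."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(belongs_to, nodes))

if __name__ == "__main__":
    print(assign_all(range(10)))
```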

Cite as

C. H. Bryan Liu and Benjamin Paul Chamberlain. Speeding Up BigClam Implementation on SNAP. In 2018 Imperial College Computing Student Workshop (ICCSW 2018). Open Access Series in Informatics (OASIcs), Volume 66, pp. 1:1-1:13, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{liu_et_al:OASIcs.ICCSW.2018.1,
  author =	{Liu, C. H. Bryan and Chamberlain, Benjamin Paul},
  title =	{{Speeding Up BigClam Implementation on SNAP}},
  booktitle =	{2018 Imperial College Computing Student Workshop (ICCSW 2018)},
  pages =	{1:1--1:13},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-097-2},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{66},
  editor =	{Pirovano, Edoardo and Graversen, Eva},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICCSW.2018.1},
  URN =		{urn:nbn:de:0030-drops-101829},
  doi =		{10.4230/OASIcs.ICCSW.2018.1},
  annote =	{Keywords: BigClam, Community Detection, Parallelization, Networks}
}
Document
Submodular Optimization in the MapReduce Model

Authors: Paul Liu and Jan Vondrak

Published in: OASIcs, Volume 69, 2nd Symposium on Simplicity in Algorithms (SOSA 2019)


Abstract
Submodular optimization has received significant attention in both practice and theory, as a wide array of problems in machine learning, auction theory, and combinatorial optimization have submodular structure. In practice, these problems often involve large amounts of data, and must be solved in a distributed way. One popular framework for running such distributed algorithms is MapReduce. In this paper, we present two simple algorithms for cardinality constrained submodular optimization in the MapReduce model: the first is a (1/2-o(1))-approximation in 2 MapReduce rounds, and the second is a (1-1/e-ε)-approximation in (1+o(1))/ε MapReduce rounds.
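
To give a feel for the model, here is the generic two-round pattern that many MapReduce algorithms for cardinality-constrained submodular maximisation follow: partition the ground set, run greedy on each part, then run greedy once more on the union of the partial solutions. It is a sketch of that well-known pattern with placeholder data, not a faithful implementation of the paper's two algorithms or their guarantees:

```python
def greedy(elements, f, k):
    """Standard cardinality-constrained greedy on one machine."""
    S, pool = [], set(elements)
    for _ in range(k):
        best = max(pool, key=lambda e: f(S + [e]) - f(S), default=None)
        if best is None or f(S + [best]) - f(S) <= 0:
            break
        S.append(best)
        pool.discard(best)
    return S

def two_round_mapreduce(elements, f, k, machines=4):
    """Round 1 (map): split the ground set and run greedy on each part.
    Round 2 (reduce): run greedy on the union of the partial solutions."""
    parts = [elements[i::machines] for i in range(machines)]
    partial = [greedy(p, f, k) for p in parts]   # executed in parallel in a real MapReduce job
    candidates = [e for p in partial for e in p]
    return greedy(candidates, f, k)

# Toy coverage objective with placeholder data.
cover = {i: {i % 7, (3 * i) % 7} for i in range(40)}
f = lambda S: len(set().union(*(cover[e] for e in S))) if S else 0
print(two_round_mapreduce(list(range(40)), f, k=4))  # prints a 4-element solution
```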

Cite as

Paul Liu and Jan Vondrak. Submodular Optimization in the MapReduce Model. In 2nd Symposium on Simplicity in Algorithms (SOSA 2019). Open Access Series in Informatics (OASIcs), Volume 69, pp. 18:1-18:10, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{liu_et_al:OASIcs.SOSA.2019.18,
  author =	{Liu, Paul and Vondrak, Jan},
  title =	{{Submodular Optimization in the MapReduce Model}},
  booktitle =	{2nd Symposium on Simplicity in Algorithms (SOSA 2019)},
  pages =	{18:1--18:10},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-099-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{69},
  editor =	{Fineman, Jeremy T. and Mitzenmacher, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SOSA.2019.18},
  URN =		{urn:nbn:de:0030-drops-100447},
  doi =		{10.4230/OASIcs.SOSA.2019.18},
  annote =	{Keywords: mapreduce, submodular, optimization, approximation algorithms}
}
  • Refine by Author
  • 4 Liu, Paul
  • 2 Zheng, Da Wei
  • 1 Ahmed, Rehan
  • 1 Angrick, Sebastian
  • 1 Bals, Ben

  • Refine by Classification
  • 3 Computer systems organization → Real-time systems
  • 2 Mathematics of computing → Graph algorithms
  • 2 Mathematics of computing → Submodular optimization and polymatroids
  • 2 Theory of computation → Timed and hybrid models
  • 1 Applied computing → Computational biology

  • Refine by Keyword
  • 2 approximation algorithms
  • 1 (m
  • 1 Active Objects
  • 1 BigClam
  • 1 Blocking optimality

  • Refine by Type
  • 21 document

  • Refine by Publication Year
  • 5 2023
  • 3 2019
  • 3 2021
  • 3 2022
  • 2 2016
