12 Search Results for "Schröder, Simon"


Document
Threshold Testing and Semi-Online Prophet Inequalities

Authors: Martin Hoefer and Kevin Schewior

Published in: LIPIcs, Volume 274, 31st Annual European Symposium on Algorithms (ESA 2023)


Abstract
We study threshold testing, an elementary probing model with the goal of choosing a large value out of n i.i.d. random variables. An algorithm can test each variable X_i once for some threshold t_i, and the test returns binary feedback indicating whether X_i ≥ t_i or not. Thresholds can be chosen adaptively or non-adaptively by the algorithm. Given the results of the tests for each variable, we then select the variable with the highest conditional expectation. We compare the expected value obtained by the testing algorithm with the expected maximum of the variables. Threshold testing is a semi-online variant of the gambler’s problem and prophet inequalities. Indeed, the optimal performance of non-adaptive algorithms for threshold testing is governed by the standard i.i.d. prophet inequality of approximately 0.745 + o(1) as n → ∞. We show how adaptive algorithms can significantly improve upon this ratio. Our adaptive testing strategy guarantees a competitive ratio of at least 0.869 - o(1). Moreover, we show that there are distributions that admit only a constant ratio c < 1, even when n → ∞. Finally, when each box can be tested multiple times (with n tests in total), we design an algorithm that achieves a ratio of 1 - o(1).
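The test-then-select protocol can be sketched in a few lines. The following Monte Carlo snippet (function name hypothetical) assumes i.i.d. Uniform(0,1) variables and a single common non-adaptive threshold; it illustrates the probing model only, not the paper's 0.869-competitive adaptive strategy:

```python
import random

def threshold_test_run(n, t, trials=10000):
    """Monte Carlo sketch of non-adaptive threshold testing: n i.i.d.
    Uniform(0,1) variables, one common threshold t (a simplifying
    assumption; the model allows a different t_i per variable)."""
    # For Uniform(0,1): E[X | X >= t] = (1 + t) / 2 > E[X | X < t] = t / 2,
    # so any variable that passed its test beats every one that failed.
    ratio_sum = 0.0
    for _ in range(trials):
        xs = [random.random() for _ in range(n)]
        # Test each variable once; the only feedback is "X_i >= t" or not.
        passed = [i for i, x in enumerate(xs) if x >= t]
        # Select a variable with the highest conditional expectation
        # (ties broken arbitrarily by taking the first index).
        pick = passed[0] if passed else 0
        ratio_sum += xs[pick] / max(xs)
    return ratio_sum / trials
```

Running `threshold_test_run(5, 0.8)` estimates the per-instance ratio between the tested pick and the realized maximum for one particular threshold choice.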

Cite as

Martin Hoefer and Kevin Schewior. Threshold Testing and Semi-Online Prophet Inequalities. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 62:1-62:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{hoefer_et_al:LIPIcs.ESA.2023.62,
  author =	{Hoefer, Martin and Schewior, Kevin},
  title =	{{Threshold Testing and Semi-Online Prophet Inequalities}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{62:1--62:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.62},
  URN =		{urn:nbn:de:0030-drops-187159},
  doi =		{10.4230/LIPIcs.ESA.2023.62},
  annote =	{Keywords: Prophet Inequalities, Testing, Stochastic Probing}
}
Document
Managing Industrial Control Systems Security Risks for Cyber Insurance (Dagstuhl Seminar 21451)

Authors: Simon Dejung, Mingyan Liu, Arndt Lüder, and Edgar Weippl

Published in: Dagstuhl Reports, Volume 11, Issue 10 (2022)


Abstract
Industrial control systems (ICSs), such as production systems or critical infrastructures, are an attractive target for cybercriminals, since attacks against these systems may cause severe physical damage/material damage (PD/MD), resulting in business interruption (BI) and loss of profit (LOP). Besides financial loss, cyber-attacks against ICSs can also harm human health or the environment or even be used as a kind of weapon. Thus, it is of utmost importance to manage cyber risks throughout the ICS’s lifecycle (i.e., engineering, operation, decommissioning), especially in light of the ever-increasing threat level that accompanies the progressive digitization of industrial processes. However, asset owners may not be able to address security risks sufficiently, nor adequately quantify them in terms of their potential impact (physical and non-physical) and likelihood. Using insurance to transfer these risks and offload them from the balance sheet can be a self-deceptive solution, since the underlying problem remains unsolved: the exposure of asset owners remains, and mitigation measures may still not be implemented adequately, while the insurance industry takes on unassessed risks and covers them, often without an adequate premium and without managing the potential exposure of accumulated events. The Dagstuhl Seminar 21451 "Managing Industrial Control Systems Security Risks for Cyber Insurance" aimed to provide an interdisciplinary forum to analyze and discuss open questions and current topics of research in this area in order to gain in-depth insights into the security risks of ICSs and the quantification thereof.

Cite as

Simon Dejung, Mingyan Liu, Arndt Lüder, and Edgar Weippl. Managing Industrial Control Systems Security Risks for Cyber Insurance (Dagstuhl Seminar 21451). In Dagstuhl Reports, Volume 11, Issue 10, pp. 36-56, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@Article{dejung_et_al:DagRep.11.10.36,
  author =	{Dejung, Simon and Liu, Mingyan and L\"{u}der, Arndt and Weippl, Edgar},
  title =	{{Managing Industrial Control Systems Security Risks for Cyber Insurance (Dagstuhl Seminar 21451)}},
  pages =	{36--56},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2022},
  volume =	{11},
  number =	{10},
  editor =	{Dejung, Simon and Liu, Mingyan and L\"{u}der, Arndt and Weippl, Edgar},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagRep.11.10.36},
  URN =		{urn:nbn:de:0030-drops-159273},
  doi =		{10.4230/DagRep.11.10.36},
  annote =	{Keywords: industrial control systems, security, cyber insurance, cyber risk quantification, production systems engineering, risk engineering, SCADA, Industry 4.0}
}
Document
A Linear-Time Nominal μ-Calculus with Name Allocation

Authors: Daniel Hausmann, Stefan Milius, and Lutz Schröder

Published in: LIPIcs, Volume 202, 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)


Abstract
Logics and automata models for languages over infinite alphabets, such as Freeze LTL and register automata, serve the verification of processes or documents with data. They relate tightly to formalisms over nominal sets, such as nondeterministic orbit-finite automata (NOFAs), where names play the role of data. Reasoning problems in such formalisms tend to be computationally hard. Name-binding nominal automata models such as regular nondeterministic nominal automata (RNNAs) have been shown to be computationally more tractable. In the present paper, we introduce a linear-time fixpoint logic Bar-μTL for finite words over an infinite alphabet, which features full negation and freeze quantification via name binding. We show by a nontrivial reduction to extended regular nondeterministic nominal automata that even though Bar-μTL allows unrestricted nondeterminism and unboundedly many registers, model checking Bar-μTL over RNNAs and satisfiability checking both have elementary complexity. For example, model checking is in 2ExpSpace, more precisely in parametrized ExpSpace, effectively with the number of registers as the parameter.

Cite as

Daniel Hausmann, Stefan Milius, and Lutz Schröder. A Linear-Time Nominal μ-Calculus with Name Allocation. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 58:1-58:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{hausmann_et_al:LIPIcs.MFCS.2021.58,
  author =	{Hausmann, Daniel and Milius, Stefan and Schr\"{o}der, Lutz},
  title =	{{A Linear-Time Nominal \mu-Calculus with Name Allocation}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{58:1--58:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.58},
  URN =		{urn:nbn:de:0030-drops-144987},
  doi =		{10.4230/LIPIcs.MFCS.2021.58},
  annote =	{Keywords: Model checking, linear-time logic, nominal sets}
}
Document
Parallel Algorithms for Power Circuits and the Word Problem of the Baumslag Group

Authors: Caroline Mattes and Armin Weiß

Published in: LIPIcs, Volume 202, 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)


Abstract
Power circuits were introduced in 2012 by Myasnikov, Ushakov, and Won as a data structure for non-elementarily compressed integers supporting the arithmetic operations addition and (x,y) ↦ x⋅2^y. The same authors applied power circuits to give a polynomial-time solution to the word problem of the Baumslag group, which has a non-elementary Dehn function. In this work, we examine power circuits and the word problem of the Baumslag group from the perspective of parallel complexity. In particular, we establish that the word problem of the Baumslag group can be solved in NC - even though one of the essential steps is to compare two integers given by power circuits, which, in general, is shown to be 𝖯-complete. The key observation is that the depth of the occurring power circuits is logarithmic, and such power circuits can be compared in NC.
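The two supported operations, and the non-elementary compression they make necessary, can be illustrated on plain Python integers. This is a semantic sketch only (function names hypothetical): a real power circuit stores such values compressed as a graph of gates rather than as binary numbers.

```python
def pc_add(x, y):
    # Operation 1 of the power-circuit data structure: addition.
    return x + y

def pc_shift(x, y):
    # Operation 2: (x, y) -> x * 2**y (here on uncompressed ints, y >= 0).
    return x * 2 ** y

def tower(k):
    """Tower of twos 2^2^...^2 with k levels. A chain of k shift gates
    suffices to represent this value, while its binary expansion grows
    non-elementarily -- the reason compression is needed at all."""
    v = 1
    for _ in range(k):
        v = pc_shift(1, v)  # 1 * 2**v
    return v
```

For instance, `tower(4)` already equals 65536, and `tower(6)` no longer fits in any elementary-size binary representation of its tower height.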

Cite as

Caroline Mattes and Armin Weiß. Parallel Algorithms for Power Circuits and the Word Problem of the Baumslag Group. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 74:1-74:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{mattes_et_al:LIPIcs.MFCS.2021.74,
  author =	{Mattes, Caroline and Wei{\ss}, Armin},
  title =	{{Parallel Algorithms for Power Circuits and the Word Problem of the Baumslag Group}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{74:1--74:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.74},
  URN =		{urn:nbn:de:0030-drops-145148},
  doi =		{10.4230/LIPIcs.MFCS.2021.74},
  annote =	{Keywords: Word problem, Baumslag group, power circuit, parallel complexity}
}
Document
Syntactic Minimization Of Nondeterministic Finite Automata

Authors: Robert S. R. Myers and Henning Urbat

Published in: LIPIcs, Volume 202, 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)


Abstract
Nondeterministic automata may be viewed as succinct programs implementing deterministic automata, i.e. complete specifications. Converting a given deterministic automaton into a small nondeterministic one is known to be computationally very hard; in fact, the ensuing decision problem is PSPACE-complete. This paper stands in stark contrast to the status quo. We restrict attention to subatomic nondeterministic automata, whose individual states accept unions of syntactic congruence classes. They are general enough to cover almost all structural results concerning nondeterministic state-minimality. We prove that converting a monoid recognizing a regular language into a small subatomic acceptor corresponds to an NP-complete problem. The NP certificates are solutions of simple equations involving relations over the syntactic monoid. We also consider the subclass of atomic nondeterministic automata introduced by Brzozowski and Tamm. Given a deterministic automaton and another one for the reversed language, computing small atomic acceptors is shown to be NP-complete with analogous certificates. Our complexity results emerge from an algebraic characterization of (sub)atomic acceptors in terms of deterministic automata with semilattice structure, combined with an equivalence of categories leading to succinct representations.

Cite as

Robert S. R. Myers and Henning Urbat. Syntactic Minimization Of Nondeterministic Finite Automata. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 78:1-78:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{myers_et_al:LIPIcs.MFCS.2021.78,
  author =	{Myers, Robert S. R. and Urbat, Henning},
  title =	{{Syntactic Minimization Of Nondeterministic Finite Automata}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{78:1--78:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.78},
  URN =		{urn:nbn:de:0030-drops-145186},
  doi =		{10.4230/LIPIcs.MFCS.2021.78},
  annote =	{Keywords: Algebraic language theory, Nondeterministic automata, NP-completeness}
}
Document
Efficiently Testing Simon’s Congruence

Authors: Paweł Gawrychowski, Maria Kosche, Tore Koß, Florin Manea, and Stefan Siemer

Published in: LIPIcs, Volume 187, 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)


Abstract
Simon’s congruence ∼_k is a relation on words defined by Imre Simon in the 1970s and intensely studied since then. This congruence was initially used in connection with piecewise testable languages, but has also found many applications in, e.g., learning theory, database theory, and linguistics. The ∼_k-relation is defined as follows: two words are ∼_k-congruent if they have the same set of subsequences of length at most k. A long-standing open problem, stated already by Simon in his initial works on this topic, was to design an algorithm which computes, given two words s and t, the largest k for which s∼_k t. We propose the first algorithm solving this problem in linear time O(|s|+|t|) when the input words are over the integer alphabet {1,…,|s|+|t|} (or other alphabets which can be sorted in linear time). Our approach can be extended to an optimal algorithm in the case of general alphabets as well. To achieve these results, we introduce a novel data structure, called the Simon-Tree, which allows us to construct a natural representation of the equivalence classes induced by ∼_k on the set of suffixes of a word, for all k ≥ 1. We show that such a tree can be constructed for an input word in linear time. Then, when working with two words s and t, we compute their respective Simon-Trees and efficiently build a correspondence between the nodes of these trees. This correspondence, which can also be constructed in linear time O(|s|+|t|), allows us to retrieve the largest k for which s∼_k t.
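As a point of reference for the problem statement, the largest k with s ∼_k t can be computed naively by comparing subsequence sets directly. This brute-force sketch (function names hypothetical) takes exponential time and has nothing to do with the paper's linear-time Simon-Tree algorithm; it only pins down the definition:

```python
from itertools import combinations

def subseqs_up_to(w, k):
    """All subsequences of w of length at most k (naive, exponential)."""
    out = set()
    for length in range(k + 1):
        for idx in combinations(range(len(w)), length):
            out.add(''.join(w[i] for i in idx))
    return out

def largest_simon_k(s, t):
    """Largest k with s ~_k t by brute force. Returns max(|s|,|t|)+1 as a
    cap when the words are congruent for every meaningful k."""
    k = 0
    cap = max(len(s), len(t)) + 1
    # s ~_0 t always holds (only the empty subsequence); raise k while the
    # subsequence sets of length <= k+1 still agree.
    while k < cap and subseqs_up_to(s, k + 1) == subseqs_up_to(t, k + 1):
        k += 1
    return k
```

For example, "abab" and "abba" share all subsequences of length up to 2 but differ at length 3 ("aab" occurs only in the former), so the largest k is 2.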

Cite as

Paweł Gawrychowski, Maria Kosche, Tore Koß, Florin Manea, and Stefan Siemer. Efficiently Testing Simon’s Congruence. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 34:1-34:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{gawrychowski_et_al:LIPIcs.STACS.2021.34,
  author =	{Gawrychowski, Pawe{\l} and Kosche, Maria and Ko{\ss}, Tore and Manea, Florin and Siemer, Stefan},
  title =	{{Efficiently Testing Simon’s Congruence}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{34:1--34:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.34},
  URN =		{urn:nbn:de:0030-drops-136796},
  doi =		{10.4230/LIPIcs.STACS.2021.34},
  annote =	{Keywords: Simon’s congruence, Subsequence, Scattered factor, Efficient algorithms}
}
Document
On the Complexity of Multi-Pushdown Games

Authors: Roland Meyer and Sören van der Wall

Published in: LIPIcs, Volume 182, 40th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2020)


Abstract
We study the influence of parameters like the number of contexts, phases, and stacks on the complexity of solving parity games over concurrent recursive programs. Our first result shows that k-context games are b-EXPTIME-complete, where b = max{k-2, 1}. This means that up to three contexts do not increase the complexity beyond that of the sequential case. Our second result shows that for ordered k-stack as well as k-phase games the complexity jumps to k-EXPTIME-complete.

Cite as

Roland Meyer and Sören van der Wall. On the Complexity of Multi-Pushdown Games. In 40th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 182, pp. 52:1-52:35, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{meyer_et_al:LIPIcs.FSTTCS.2020.52,
  author =	{Meyer, Roland and van der Wall, S\"{o}ren},
  title =	{{On the Complexity of Multi-Pushdown Games}},
  booktitle =	{40th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2020)},
  pages =	{52:1--52:35},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-174-0},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{182},
  editor =	{Saxena, Nitin and Simon, Sunil},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2020.52},
  URN =		{urn:nbn:de:0030-drops-132930},
  doi =		{10.4230/LIPIcs.FSTTCS.2020.52},
  annote =	{Keywords: concurrency, complexity, games, infinite state, multi-pushdown}
}
Document
Worst-Case Energy-Consumption Analysis by Microarchitecture-Aware Timing Analysis for Device-Driven Cyber-Physical Systems

Authors: Phillip Raffeck, Christian Eichler, Peter Wägemann, and Wolfgang Schröder-Preikschat

Published in: OASIcs, Volume 72, 19th International Workshop on Worst-Case Execution Time Analysis (WCET 2019)


Abstract
Many energy-constrained cyber-physical systems require both timeliness and the execution of tasks within given energy budgets. That is, besides knowledge of the worst-case execution time (WCET), the worst-case energy consumption (WCEC) of operations is essential. Unfortunately, WCET analysis approaches are not directly applicable to deriving WCEC bounds in device-driven cyber-physical systems: For example, a single memory operation can lead to a significant power-consumption increase when it thereby switches on a device (e.g. transceiver, actuator) in the embedded system. However, as we demonstrate in this paper, existing approaches from microarchitecture-aware timing analysis (i.e. considering cache and pipeline effects) are beneficial for determining WCEC bounds: We extended our framework for whole-system analysis with microarchitecture-aware timing modeling to precisely account for how long devices are kept (in)active. Our evaluations, based on a benchmark generator that outputs benchmarks with known baselines (i.e. actual WCET and actual WCEC) and an ARM Cortex-M4 platform, validate that the approach significantly reduces analysis pessimism in whole-system WCEC analyses.

Cite as

Phillip Raffeck, Christian Eichler, Peter Wägemann, and Wolfgang Schröder-Preikschat. Worst-Case Energy-Consumption Analysis by Microarchitecture-Aware Timing Analysis for Device-Driven Cyber-Physical Systems. In 19th International Workshop on Worst-Case Execution Time Analysis (WCET 2019). Open Access Series in Informatics (OASIcs), Volume 72, pp. 4:1-4:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{raffeck_et_al:OASIcs.WCET.2019.4,
  author =	{Raffeck, Phillip and Eichler, Christian and W\"{a}gemann, Peter and Schr\"{o}der-Preikschat, Wolfgang},
  title =	{{Worst-Case Energy-Consumption Analysis by Microarchitecture-Aware Timing Analysis for Device-Driven Cyber-Physical Systems}},
  booktitle =	{19th International Workshop on Worst-Case Execution Time Analysis (WCET 2019)},
  pages =	{4:1--4:12},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-118-4},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{72},
  editor =	{Altmeyer, Sebastian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.WCET.2019.4},
  URN =		{urn:nbn:de:0030-drops-107699},
  doi =		{10.4230/OASIcs.WCET.2019.4},
  annote =	{Keywords: WCEC, WCRE, WCET, microarchitecture analysis, whole-system analysis}
}
Document
An O(n^2 log^2 n) Time Algorithm for Minmax Regret Minsum Sink on Path Networks

Authors: Binay Bhattacharya, Yuya Higashikawa, Tsunehiko Kameda, and Naoki Katoh

Published in: LIPIcs, Volume 123, 29th International Symposium on Algorithms and Computation (ISAAC 2018)


Abstract
We model evacuation in emergency situations by dynamic flow in a network. We want to minimize the aggregate evacuation time to an evacuation center (called a sink) on a path network with uniform edge capacities. The evacuees are initially located at the vertices, but their precise numbers are unknown and are given only by upper and lower bounds. Under this assumption, we compute a sink location that minimizes the maximum "regret." We present the first algorithm for this problem that is sub-cubic in n, the number of vertices. Although we cast our problem as evacuation, our result is exact if the "evacuees" are a fluid-like continuous material, and a good approximation for discrete evacuees.

Cite as

Binay Bhattacharya, Yuya Higashikawa, Tsunehiko Kameda, and Naoki Katoh. An O(n^2 log^2 n) Time Algorithm for Minmax Regret Minsum Sink on Path Networks. In 29th International Symposium on Algorithms and Computation (ISAAC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 123, pp. 14:1-14:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{bhattacharya_et_al:LIPIcs.ISAAC.2018.14,
  author =	{Bhattacharya, Binay and Higashikawa, Yuya and Kameda, Tsunehiko and Katoh, Naoki},
  title =	{{An O(n^2 log^2 n) Time Algorithm for Minmax Regret Minsum Sink on Path Networks}},
  booktitle =	{29th International Symposium on Algorithms and Computation (ISAAC 2018)},
  pages =	{14:1--14:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-094-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{123},
  editor =	{Hsu, Wen-Lian and Lee, Der-Tsai and Liao, Chung-Shou},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2018.14},
  URN =		{urn:nbn:de:0030-drops-99624},
  doi =		{10.4230/LIPIcs.ISAAC.2018.14},
  annote =	{Keywords: Facility location, minsum sink, evacuation problem, minmax regret, dynamic flow path network}
}
Document
(Social) Norm Dynamics

Authors: Giulia Andrighetto, Cristiano Castelfranchi, Eunate Mayor, John McBreen, Maite Lopez-Sanchez, and Simon Parsons

Published in: Dagstuhl Follow-Ups, Volume 4, Normative Multi-Agent Systems (2013)


Abstract
This chapter is concerned with the dynamics of social norms, that is the way that such norms change. In particular this chapter concentrates on the lifecycle that social norms go through, focusing on the generation of norms, the way that norms spread and stabilize, and the way that norms evolve. We also discuss the cognitive mechanisms behind norm compliance, the role of culture in norm dynamics, and the way that trust affects norm dynamics.

Cite as

Giulia Andrighetto, Cristiano Castelfranchi, Eunate Mayor, John McBreen, Maite Lopez-Sanchez, and Simon Parsons. (Social) Norm Dynamics. In Normative Multi-Agent Systems. Dagstuhl Follow-Ups, Volume 4, pp. 135-170, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)



@InCollection{andrighetto_et_al:DFU.Vol4.12111.135,
  author =	{Andrighetto, Giulia and Castelfranchi, Cristiano and Mayor, Eunate and McBreen, John and Lopez-Sanchez, Maite and Parsons, Simon},
  title =	{{(Social) Norm Dynamics}},
  booktitle =	{Normative Multi-Agent Systems},
  pages =	{135--170},
  series =	{Dagstuhl Follow-Ups},
  ISBN =	{978-3-939897-51-4},
  ISSN =	{1868-8977},
  year =	{2013},
  volume =	{4},
  editor =	{Andrighetto, Giulia and Governatori, Guido and Noriega, Pablo and van der Torre, Leendert W. N.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DFU.Vol4.12111.135},
  URN =		{urn:nbn:de:0030-drops-40023},
  doi =		{10.4230/DFU.Vol4.12111.135},
  annote =	{Keywords: Social norms, Norm generation, Norm spreading, Norm evolution, Trust, Culture}
}
Document
Feature-based Visualization of Dense Integral Line Data

Authors: Simon Schröder, Harald Obermaier, Christoph Garth, and Kenneth I. Joy

Published in: OASIcs, Volume 27, Visualization of Large and Unstructured Data Sets: Applications in Geospatial Planning, Modeling and Engineering - Proceedings of IRTG 1131 Workshop 2011


Abstract
Feature-based visualization of flow fields has proven to be an effective tool for flow analysis. While most flow visualization techniques operate on vector field data, our visualization techniques make use of a different simulation output: particle tracers. Our approach relies solely on integral lines that can be easily obtained from most simulation software. The task is the visualization of dense integral line data. We combine existing methods for streamline visualization, i.e. illumination, transparency, and halos, and add ambient occlusion for lines. However, this solves only one part of the problem: because of the high density of lines, the visualization has to contend with occlusion, high-frequency noise, and overlaps. As a solution we propose non-automated choices of transfer functions on curve properties that help highlight important flow features like vortices or turbulent areas. These curve properties resemble some of the original flow properties. With the new combination of existing line drawing methods and the addition of ambient occlusion we improve the visualization of lines by adding better shape and depth cues. The intelligent use of transfer functions on curve properties reduces visual clutter and helps focus on important features while still retaining context, as demonstrated in the examples given in this work.

Cite as

Simon Schröder, Harald Obermaier, Christoph Garth, and Kenneth I. Joy. Feature-based Visualization of Dense Integral Line Data. In Visualization of Large and Unstructured Data Sets: Applications in Geospatial Planning, Modeling and Engineering - Proceedings of IRTG 1131 Workshop 2011. Open Access Series in Informatics (OASIcs), Volume 27, pp. 71-87, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)



@InProceedings{schroder_et_al:OASIcs.VLUDS.2011.71,
  author =	{Schr\"{o}der, Simon and Obermaier, Harald and Garth, Christoph and Joy, Kenneth I.},
  title =	{{Feature-based Visualization of Dense Integral Line Data}},
  booktitle =	{Visualization of Large and Unstructured Data Sets: Applications in Geospatial Planning, Modeling and Engineering - Proceedings of IRTG 1131 Workshop 2011},
  pages =	{71--87},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-46-0},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{27},
  editor =	{Garth, Christoph and Middel, Ariane and Hagen, Hans},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.VLUDS.2011.71},
  URN =		{urn:nbn:de:0030-drops-37424},
  doi =		{10.4230/OASIcs.VLUDS.2011.71},
  annote =	{Keywords: flow simulation, feature-based visualization, dense lines, ambient occlusion}
}
Document
The Past, Present and Future of High Performance Computing

Authors: Ruud van der Pas

Published in: Dagstuhl Seminar Proceedings, Volume 9061, Combinatorial Scientific Computing (2009)


Abstract
In this overview paper we start by looking at the birth of what is called "High Performance Computing" today. It all began over 30 years ago when the Cray 1 and CDC Cyber 205 "supercomputers" were introduced. This had a huge impact on scientific computing. A very turbulent time at both the hardware and software level was to follow. Eventually the situation stabilized, but not for long. Today, two different trends in hardware architecture have created a bifurcation in the market. On the one hand, the GPGPU quickly found a place in the marketplace, but it is still the domain of the expert. In contrast to this, multicore processors make hardware parallelism available to the masses. Each has its own set of issues to deal with. In the last section we make an attempt to look into the future, but this is of course a highly personal opinion.

Cite as

Ruud van der Pas. The Past, Present and Future of High Performance Computing. In Combinatorial Scientific Computing. Dagstuhl Seminar Proceedings, Volume 9061, pp. 1-7, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2009)



@InProceedings{vanderpas:DagSemProc.09061.20,
  author =	{van der Pas, Ruud},
  title =	{{The Past, Present and Future of High Performance Computing}},
  booktitle =	{Combinatorial Scientific Computing},
  pages =	{1--7},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2009},
  volume =	{9061},
  editor =	{Naumann, Uwe and Schenk, Olaf and Simon, Horst D. and Toledo, Sivan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.09061.20},
  URN =		{urn:nbn:de:0030-drops-20836},
  doi =		{10.4230/DagSemProc.09061.20},
  annote =	{Keywords: High-Performance Scientific Computing}
}
