Volume

Dagstuhl Seminar Proceedings, Volume 9061



Publication Details

  • Published: 2009-07-24
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik


Documents

Document
09061 Abstracts Collection – Combinatorial Scientific Computing

Authors: Uwe Naumann, Olaf Schenk, Horst D. Simon, and Sivan Toledo


Abstract
From 01.02.2009 to 06.02.2009, the Dagstuhl Seminar 09061 ``Combinatorial Scientific Computing'' was held in Schloss Dagstuhl – Leibniz Center for Informatics. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

Cite as

Uwe Naumann, Olaf Schenk, Horst D. Simon, and Sivan Toledo. 09061 Abstracts Collection – Combinatorial Scientific Computing. In Combinatorial Scientific Computing. Dagstuhl Seminar Proceedings, Volume 9061, pp. 1-49, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2009)


BibTeX

@InProceedings{naumann_et_al:DagSemProc.09061.1,
  author =	{Naumann, Uwe and Schenk, Olaf and Simon, Horst D. and Toledo, Sivan},
  title =	{{09061 Abstracts Collection – Combinatorial Scientific Computing}},
  booktitle =	{Combinatorial Scientific Computing},
  pages =	{1--49},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2009},
  volume =	{9061},
  editor =	{Uwe Naumann and Olaf Schenk and Horst D. Simon and Sivan Toledo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.09061.1},
  URN =		{urn:nbn:de:0030-drops-21124},
  doi =		{10.4230/DagSemProc.09061.1},
  annote =	{Keywords: Graphs, combinatorics, high-performance scientific computing}
}
Document
A Model for Task Repartioning under Data Replication

Authors: Cevdet Aykanat, Erkan Okuyan, and B. Barla Cambazoglu


Abstract
We propose a two-phase model for solving the problem of task repartitioning under data replication with memory constraints. The hypergraph-partitioning-based model proposed for the first phase aims to minimize the total message volume that will be incurred due to the replication/migration of input data while maintaining balance on computational and receive-volume loads of processors. The network-flow-based model proposed for the second phase aims to minimize the maximum message volume handled by processors via utilizing the flexibility in assigning send-communication tasks to processors, which is introduced by data replication. The validity of our proposed model is verified on parallelization of a direct volume rendering algorithm.

Cite as

Cevdet Aykanat, Erkan Okuyan, and B. Barla Cambazoglu. A Model for Task Repartioning under Data Replication. In Combinatorial Scientific Computing. Dagstuhl Seminar Proceedings, Volume 9061, pp. 1-3, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2009)


BibTeX

@InProceedings{aykanat_et_al:DagSemProc.09061.2,
  author =	{Aykanat, Cevdet and Okuyan, Erkan and Cambazoglu, B. Barla},
  title =	{{A Model for Task Repartioning under Data Replication}},
  booktitle =	{Combinatorial Scientific Computing},
  pages =	{1--3},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2009},
  volume =	{9061},
  editor =	{Uwe Naumann and Olaf Schenk and Horst D. Simon and Sivan Toledo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.09061.2},
  URN =		{urn:nbn:de:0030-drops-20903},
  doi =		{10.4230/DagSemProc.09061.2},
  annote =	{Keywords: Task repartitioning, data replication, hypergraph partitioning with fixed vertices, assignment flow network}
}
Document
A Nearly-Linear Time Algorithm for Approximately Solving Linear Systems in a Symmetric M-Matrix

Authors: Samuel I. Daitch and Daniel A. Spielman


Abstract
We present an algorithm for solving a linear system in a symmetric M-matrix. In particular, for an $n \times n$ symmetric M-matrix $M$, we show how to find a diagonal matrix $D$ such that $DMD$ is diagonally-dominant. To compute $D$, the algorithm must solve $O(\log n)$ linear systems in diagonally-dominant matrices. If we solve these diagonally-dominant systems approximately using the Spielman-Teng nearly-linear time solver, then we obtain an algorithm for approximately solving linear systems in symmetric M-matrices, for which the expected running time is also nearly-linear.
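
As a brief reminder of the standard notions involved (generic definitions, not taken from the paper): a symmetric matrix $M$ with nonpositive off-diagonal entries is an M-matrix if, for instance, it can be written as $M = sI - B$ with $B \ge 0$ entrywise and $s \ge \rho(B)$; a matrix $C$ is diagonally dominant if $C_{ii} \ge \sum_{j \ne i} |C_{ij}|$ for all $i$; and the diagonal scaling acts entrywise as $(DMD)_{ij} = d_i M_{ij} d_j$ for $D = \mathrm{diag}(d_1, \ldots, d_n)$, so computing $D$ amounts to finding positive weights $d_i$ under which $M$ becomes diagonally dominant.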

Cite as

Samuel I. Daitch and Daniel A. Spielman. A Nearly-Linear Time Algorithm for Approximately Solving Linear Systems in a Symmetric M-Matrix. In Combinatorial Scientific Computing. Dagstuhl Seminar Proceedings, Volume 9061, pp. 1-4, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2009)


BibTeX

@InProceedings{daitch_et_al:DagSemProc.09061.3,
  author =	{Daitch, Samuel I. and Spielman, Daniel A.},
  title =	{{A Nearly-Linear Time Algorithm for Approximately Solving Linear Systems in a Symmetric M-Matrix}},
  booktitle =	{Combinatorial Scientific Computing},
  pages =	{1--4},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2009},
  volume =	{9061},
  editor =	{Uwe Naumann and Olaf Schenk and Horst D. Simon and Sivan Toledo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.09061.3},
  URN =		{urn:nbn:de:0030-drops-20803},
  doi =		{10.4230/DagSemProc.09061.3},
  annote =	{Keywords: M-matrix, diagonally-dominant matrix, linear system solver, iterative algorithm, randomized algorithm, network flow, gain graph}
}
Document
Algorithmic Differentiation Through Automatic Graph Elimination Ordering (ADTAGEO)

Authors: Jan Riehme and Andreas Griewank


Abstract
Algorithmic Differentiation Through Automatic Graph Elimination Ordering (ADTAGEO) is based on the principle of Instant Elimination: at runtime we dynamically maintain a DAG representing only active variables that are alive at any time. Whenever an active variable is deallocated or its value is overwritten, the corresponding vertex in the Live-DAG is eliminated immediately by the well-known vertex elimination rule [1]. Consequently, the total memory requirement is equal to that of the sparse forward mode. Assuming that local variables are destructed in the opposite order of their construction (as in C++), a single assignment code is in effect differentiated in reverse mode. If compiler-generated temporaries are destroyed in reverse order too, then Instant Elimination naturally yields the statement-level reverse mode of ADIFOR [2]. The user determines the elimination order intentionally (or unintentionally) by the order in which variables are declared, which makes hybrid modes of AD possible by combining forward- and reverse-differentiated parts. By annotating the Live-DAG with local Hessians and applying second-order elimination rules, Hessian-vector products can be computed efficiently, since the annotated Live-DAG stores only one half of the symmetric Hessian graph (as suggested in [1]). Nested automatic differentiation is done easily by subsequent propagations, since sensitivities between variables alive can be obtained at any point in time within the Live-DAG. The concept of maintaining a Live-DAG fits optimally into the strategy of overloaded operators for classes; it is a very natural example of object-oriented programming. A proof-of-concept implementation in C++ is available (contact the first author). References: 1. Griewank, A.: Evaluating Derivatives. Principles and Techniques of Algorithmic Differentiation. SIAM (2000). 2. Bischof, C.H., Carle, A., Khademi, P., Mauer, A.: ADIFOR 2.0: Automatic differentiation of Fortran 77 programs. IEEE Computational Science & Engineering 3 (1996) 18-32.
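
For reference, the vertex elimination rule from [1] that Instant Elimination applies, stated here in its standard form rather than quoted from the paper: to eliminate an intermediate vertex $j$ of the Live-DAG whose edges carry the local partial derivatives $c_{ji} = \partial v_j / \partial v_i$, one updates, for every predecessor $i$ and every successor $k$ of $j$,
\[
c_{ki} \mathrel{+}= c_{kj}\, c_{ji},
\]
creating a fill edge $(i,k)$ where none existed, and then removes $j$ together with its incident edges.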

Cite as

Jan Riehme and Andreas Griewank. Algorithmic Differentiation Through Automatic Graph Elimination Ordering (ADTAGEO). In Combinatorial Scientific Computing. Dagstuhl Seminar Proceedings, Volume 9061, pp. 1-2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2009)


BibTeX

@InProceedings{riehme_et_al:DagSemProc.09061.4,
  author =	{Riehme, Jan and Griewank, Andreas},
  title =	{{Algorithmic Differentiation Through Automatic Graph  Elimination Ordering (ADTAGEO)}},
  booktitle =	{Combinatorial Scientific Computing},
  pages =	{1--2},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2009},
  volume =	{9061},
  editor =	{Uwe Naumann and Olaf Schenk and Horst D. Simon and Sivan Toledo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.09061.4},
  URN =		{urn:nbn:de:0030-drops-20852},
  doi =		{10.4230/DagSemProc.09061.4},
  annote =	{Keywords: Automatic Differentiation, Instant Elimination, Live-DAG, symmetric Hessian-DAG, forward mode, reverse mode, checkpointing, ADTAGEO}
}
Document
Assessing an approximation algorithm for the minimum fill-in problem in practice

Authors: H. Martin Bücker, Michael Lülfesmann, and Arno Rasch


Abstract
We investigate an implementation of an approximation algorithm for the minimum fill-in problem. The algorithm has some degree of freedom since it is composed of several subtasks for which one can choose between different algorithms. The goal of the present work is to study the impact of these components and carefully examine the practicability of the overall approximation algorithm by a set of numerical examples.

Cite as

H. Martin Bücker, Michael Lülfesmann, and Arno Rasch. Assessing an approximation algorithm for the minimum fill-in problem in practice. In Combinatorial Scientific Computing. Dagstuhl Seminar Proceedings, Volume 9061, p. 1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2009)


BibTeX

@InProceedings{bucker_et_al:DagSemProc.09061.5,
  author =	{B\"{u}cker, H. Martin and L\"{u}lfesmann, Michael and Rasch, Arno},
  title =	{{Assessing an approximation algorithm for the minimum fill-in problem in practice}},
  booktitle =	{Combinatorial Scientific Computing},
  pages =	{1--1},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2009},
  volume =	{9061},
  editor =	{Uwe Naumann and Olaf Schenk and Horst D. Simon and Sivan Toledo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.09061.5},
  URN =		{urn:nbn:de:0030-drops-21191},
  doi =		{10.4230/DagSemProc.09061.5},
  annote =	{Keywords: Sparse linear algebra}
}
Document
Combinatorial Problems in High-Performance Computing: Partitioning

Authors: Rob Bisseling, Tristan van Leeuwen, and Umit V. Catalyurek


Abstract
This extended abstract presents a survey of combinatorial problems encountered in scientific computations on today's high-performance architectures, with sophisticated memory hierarchies, multiple levels of cache, and multiple processors on chip as well as off-chip. For parallelism, the most important problem is to partition sparse matrices, graphs, or hypergraphs into nearly equal-sized parts while trying to reduce inter-processor communication. Common approaches to such problems involve multilevel methods based on coarsening and uncoarsening (hyper)graphs, matching of similar vertices, searching for good separator sets and good splittings, dynamic adjustment of load imbalance, and two-dimensional matrix splitting methods.
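
As a concrete instance of the partitioning problems surveyed here (the standard hypergraph formulation, not specific to this abstract): partition the vertices of a hypergraph $H = (V, N)$ into $k$ parts $V_1, \ldots, V_k$ such that the balance constraint $w(V_p) \le (1+\varepsilon)\, w(V)/k$ holds for every part, while minimizing the connectivity metric
\[
\sum_{n \in N} c(n)\,\bigl(\lambda(n) - 1\bigr),
\]
where $c(n)$ is the cost of net $n$ and $\lambda(n)$ is the number of parts it spans. In the sparse matrix-vector multiplication setting this objective models the total communication volume exactly.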

Cite as

Rob Bisseling, Tristan van Leeuwen, and Umit V. Catalyurek. Combinatorial Problems in High-Performance Computing: Partitioning. In Combinatorial Scientific Computing. Dagstuhl Seminar Proceedings, Volume 9061, pp. 1-5, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2009)


BibTeX

@InProceedings{bisseling_et_al:DagSemProc.09061.6,
  author =	{Bisseling, Rob and van Leeuwen, Tristan and Catalyurek, Umit V.},
  title =	{{Combinatorial Problems in High-Performance Computing: Partitioning}},
  booktitle =	{Combinatorial Scientific Computing},
  pages =	{1--5},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2009},
  volume =	{9061},
  editor =	{Uwe Naumann and Olaf Schenk and Horst D. Simon and Sivan Toledo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.09061.6},
  URN =		{urn:nbn:de:0030-drops-20818},
  doi =		{10.4230/DagSemProc.09061.6},
  annote =	{Keywords: Partitioning, sparse matrix, hypergraph, parallel, HPC}
}
Document
Combinatorial Problems in OpenAD

Authors: Jean Utke and Uwe Naumann


Abstract
Computing derivatives using automatic differentiation methods entails a variety of combinatorial problems. The OpenAD tool implements automatic differentiation as source transformation of a program that represents a numerical model. We select three combinatorial problems and discuss the solutions implemented in OpenAD. Our intention is to explain the specific parts of the implementation so that readers can easily use OpenAD to investigate and develop their own solutions to these problems.

Cite as

Jean Utke and Uwe Naumann. Combinatorial Problems in OpenAD. In Combinatorial Scientific Computing. Dagstuhl Seminar Proceedings, Volume 9061, pp. 1-12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2009)


BibTeX

@InProceedings{utke_et_al:DagSemProc.09061.7,
  author =	{Utke, Jean and Naumann, Uwe},
  title =	{{Combinatorial Problems in OpenAD}},
  booktitle =	{Combinatorial Scientific Computing},
  pages =	{1--12},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2009},
  volume =	{9061},
  editor =	{Uwe Naumann and Olaf Schenk and Horst D. Simon and Sivan Toledo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.09061.7},
  URN =		{urn:nbn:de:0030-drops-20954},
  doi =		{10.4230/DagSemProc.09061.7},
  annote =	{Keywords: Automatic differentiation, combinatorial problem, tool tutorial}
}
Document
Combinatorial problems in solving linear systems

Authors: Iain S. Duff and Bora Ucar


Abstract
Numerical linear algebra and combinatorial optimization are vast subjects, as is their interaction. In virtually all cases there should be a notion of sparsity for a combinatorial problem to arise. Sparse matrices, therefore, form the basis of the interaction of these two seemingly disparate subjects. As the core of many of today's numerical linear algebra computations consists of sparse linear system solutions, we will cover combinatorial problems, notions, and algorithms relating to those computations. This talk is thus concerned with direct and iterative methods for sparse linear systems and their interaction with combinatorial optimization. On the direct methods side, we discuss matrix ordering; bipartite matching and matrix scaling for better pivoting; and task assignment and scheduling for parallel multifrontal solvers. On the iterative methods side, we discuss preconditioning techniques, including incomplete factor preconditioners (the notion of level of fill-in), support graph preconditioners (graph embedding concepts), and algebraic multigrid (independent sets in undirected graphs). In a separate part of the talk, we discuss methods that aim to exploit sparsity during linear system solution. These methods include block diagonalization of the matrix and efficient triangular system solutions for right-hand side vectors with a single nonzero entry. Towards the end, we mention, quite briefly as they are topics of other invited talks, some other areas whose interactions with combinatorial optimization are of great benefit to numerical linear algebra. These include graph and hypergraph partitioning for load balancing problems, and colouring problems in numerical optimization. On closing, we compile and list a set of open problems.
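
As one example of the combinatorial notions mentioned above, the level of fill-in used by ILU($k$) incomplete factor preconditioners is commonly defined by the recurrence (a standard definition, included here only for orientation)
\[
\mathrm{lev}(i,j) \leftarrow \min\bigl\{\mathrm{lev}(i,j),\ \mathrm{lev}(i,k) + \mathrm{lev}(k,j) + 1\bigr\},
\]
applied over the elimination steps $k$, starting from $\mathrm{lev}(i,j) = 0$ for original nonzeros $a_{ij} \ne 0$ and $\infty$ otherwise; ILU($k$) discards every fill entry whose final level exceeds $k$.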

Cite as

Iain S. Duff and Bora Ucar. Combinatorial problems in solving linear systems. In Combinatorial Scientific Computing. Dagstuhl Seminar Proceedings, Volume 9061, pp. 1-37, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2009)


BibTeX

@InProceedings{duff_et_al:DagSemProc.09061.8,
  author =	{Duff, Iain S. and Ucar, Bora},
  title =	{{Combinatorial problems in solving linear systems}},
  booktitle =	{Combinatorial Scientific Computing},
  pages =	{1--37},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2009},
  volume =	{9061},
  editor =	{Uwe Naumann and Olaf Schenk and Horst D. Simon and Sivan Toledo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.09061.8},
  URN =		{urn:nbn:de:0030-drops-20779},
  doi =		{10.4230/DagSemProc.09061.8},
  annote =	{Keywords: Combinatorial scientific computing, graph theory, combinatorial optimization, sparse matrices, linear system solution}
}
Document
Distillating knowledge about SCOTCH

Authors: Francois Pellegrini


Abstract
The design of the Scotch library for static mapping, graph partitioning and sparse matrix ordering is highly modular, so as to allow users and potential contributors to tweak it and to easily add new static mapping, graph bipartitioning, vertex separation or graph ordering methods to match their particular needs. The purpose of this tutorial is twofold. It will start with a description of the interface of Scotch, presenting its visible objects and data structures. Then, we will step into the API mirror and have a look at the inside: the internal representation of graphs, mappings and orderings, and the basic sequential and parallel building blocks, such as graph induction and graph coarsening, which can be reused by third-party software. As an example, we will show how to add a simple genetic algorithm routine to the graph bipartitioning methods.

Cite as

Francois Pellegrini. Distillating knowledge about SCOTCH. In Combinatorial Scientific Computing. Dagstuhl Seminar Proceedings, Volume 9061, pp. 1-12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2009)


BibTeX

@InProceedings{pellegrini:DagSemProc.09061.9,
  author =	{Pellegrini, Francois},
  title =	{{Distillating knowledge about SCOTCH}},
  booktitle =	{Combinatorial Scientific Computing},
  pages =	{1--12},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2009},
  volume =	{9061},
  editor =	{Uwe Naumann and Olaf Schenk and Horst D. Simon and Sivan Toledo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.09061.9},
  URN =		{urn:nbn:de:0030-drops-20914},
  doi =		{10.4230/DagSemProc.09061.9},
  annote =	{Keywords: Scotch, graph algorithms, data structures}
}
Document
Getting Started with ADOL-C

Authors: Andrea Walther


Abstract
The C++ package ADOL-C described in this paper facilitates the evaluation of first and higher derivatives of vector functions that are defined by computer programs written in C or C++. The numerical values of derivative vectors are obtained free of truncation errors, at a cost of typically a small multiple of the run time and a fixed small multiple of the random access memory required by the given function evaluation program. Derivative matrices are obtained by columns, by rows, or in sparse format. This tutorial describes the source code modifications required for the application of ADOL-C, the most frequently used drivers to evaluate derivatives, and some recent developments.
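
To make the taping-and-driver workflow described above concrete, here is a minimal C++ sketch with an invented example function; the general pattern (activate variables, record a tape, call a derivative driver) follows the tutorial, but exact headers and driver signatures should be checked against the ADOL-C documentation.

#include <adolc/adolc.h>

int main() {
  const int n = 2;
  double xp[n] = {1.0, 2.0};     // point at which derivatives are wanted
  double yp = 0.0;               // passive function value
  adouble x[n], y;               // active variables recorded by ADOL-C

  trace_on(1);                   // start recording tape with tag 1
  for (int i = 0; i < n; ++i)
    x[i] <<= xp[i];              // declare independents
  y = x[0] * x[0] * sin(x[1]);   // illustrative function, not from the paper
  y >>= yp;                      // declare dependent
  trace_off();                   // stop recording

  double g[n];
  gradient(1, n, xp, g);         // reverse-mode gradient driver on tape 1
  return 0;
}

The program is linked against the ADOL-C library (typically with -ladolc); other drivers such as jacobian or hessian follow the same tape-based pattern.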

Cite as

Andrea Walther. Getting Started with ADOL-C. In Combinatorial Scientific Computing. Dagstuhl Seminar Proceedings, Volume 9061, pp. 1-10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2009)


BibTeX

@InProceedings{walther:DagSemProc.09061.10,
  author =	{Walther, Andrea},
  title =	{{Getting Started with ADOL-C}},
  booktitle =	{Combinatorial Scientific Computing},
  pages =	{1--10},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2009},
  volume =	{9061},
  editor =	{Uwe Naumann and Olaf Schenk and Horst D. Simon and Sivan Toledo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.09061.10},
  URN =		{urn:nbn:de:0030-drops-20849},
  doi =		{10.4230/DagSemProc.09061.10},
  annote =	{Keywords: ADOL-C, algorithmic differentiation of C/C++ programs}
}
Document
Getting Started with Zoltan: A Short Tutorial

Authors: Karen D. Devine, Erik G. Boman, Lee Ann Riesen, Umit V. Catalyurek, and Cédric Chevalier


Abstract
The Zoltan library is a toolkit of parallel combinatorial algorithms for unstructured and/or adaptive computations. In this paper, we briefly describe the most significant tools in Zoltan: dynamic partitioning, graph coloring and ordering. We also describe how to obtain, build, and use Zoltan in parallel applications.

Cite as

Karen D. Devine, Erik G. Boman, Lee Ann Riesen, Umit V. Catalyurek, and Cédric Chevalier. Getting Started with Zoltan: A Short Tutorial. In Combinatorial Scientific Computing. Dagstuhl Seminar Proceedings, Volume 9061, pp. 1-10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2009)


BibTeX

@InProceedings{devine_et_al:DagSemProc.09061.11,
  author =	{Devine, Karen D. and Boman, Erik G. and Riesen, Lee Ann and Catalyurek, Umit V. and Chevalier, C\'{e}dric},
  title =	{{Getting Started with Zoltan: A Short Tutorial}},
  booktitle =	{Combinatorial Scientific Computing},
  pages =	{1--10},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2009},
  volume =	{9061},
  editor =	{Uwe Naumann and Olaf Schenk and Horst D. Simon and Sivan Toledo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.09061.11},
  URN =		{urn:nbn:de:0030-drops-20886},
  doi =		{10.4230/DagSemProc.09061.11},
  annote =	{Keywords: Parallel computing, load balancing, partitioning, coloring, ordering, software}
}
Document
Low-Memory Tour Reversal in Directed Graphs

Authors: Viktor Mosenkis, Uwe Naumann, and Elmar Peise


Abstract
We consider the problem of reversing a \emph{tour} $(i_1, i_2, \ldots, i_l)$ in a directed graph $G=(V,E)$ with positive integer vertices $V$ and edges $E \subseteq V \times V$, where $i_j \in V$ and $(i_j, i_{j+1}) \in E$ for all $j = 1, \ldots, l-1$. The tour can be processed in last-in-first-out order as long as the size of the corresponding stack does not exceed the available memory. This constraint is violated in most cases when considering control-flow graphs of large-scale numerical simulation programs. The tour reversal problem also arises in adjoint programs used, for example, in the context of derivative-based nonlinear optimization, sensitivity analysis, or other, often inverse, problems. The intention is to compress the tour in order not to run out of memory. As the general optimal compression problem has been proven to be NP-hard, and since large control-flow graphs result from loops in programs, we do not want to use general-purpose algorithms to compress the tour. Rather, we want to compress the tour by finding loops and replacing the redundant information with a proper representation of the loops.

Cite as

Viktor Mosenkis, Uwe Naumann, and Elmar Peise. Low-Memory Tour Reversal in Directed Graphs. In Combinatorial Scientific Computing. Dagstuhl Seminar Proceedings, Volume 9061, pp. 1-3, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2009)


BibTeX

@InProceedings{mosenkis_et_al:DagSemProc.09061.12,
  author =	{Mosenkis, Viktor and Naumann, Uwe and Peise, Elmar},
  title =	{{Low-Memory Tour Reversal in Directed Graphs}},
  booktitle =	{Combinatorial Scientific Computing},
  pages =	{1--3},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2009},
  volume =	{9061},
  editor =	{Uwe Naumann and Olaf Schenk and Horst D. Simon and Sivan Toledo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.09061.12},
  URN =		{urn:nbn:de:0030-drops-20924},
  doi =		{10.4230/DagSemProc.09061.12},
  annote =	{Keywords: Directed graph, tour reversal, offline algorithm, dynamic programming, online algorithm, control-flow reversal, adjoint program}
}
Document
Multifrontral multithreaded rank-revealing sparse QR factorization

Authors: Timothy Davis


Abstract
SuiteSparseQR is a sparse multifrontal QR factorization algorithm. Dense matrix methods within each frontal matrix enable the method to obtain high performance on multicore architectures. Parallelism across different frontal matrices is handled with Intel's Threading Building Blocks library. Rank-detection is performed within each frontal matrix using Heath's method, which does not require column pivoting. The resulting sparse QR factorization obtains a substantial fraction of the theoretical peak performance of a multicore computer.

Cite as

Timothy Davis. Multifrontral multithreaded rank-revealing sparse QR factorization. In Combinatorial Scientific Computing. Dagstuhl Seminar Proceedings, Volume 9061, pp. 1-3, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2009)


BibTeX

@InProceedings{davis:DagSemProc.09061.13,
  author =	{Davis, Timothy},
  title =	{{Multifrontral multithreaded rank-revealing sparse QR factorization}},
  booktitle =	{Combinatorial Scientific Computing},
  pages =	{1--3},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2009},
  volume =	{9061},
  editor =	{Uwe Naumann and Olaf Schenk and Horst D. Simon and Sivan Toledo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.09061.13},
  URN =		{urn:nbn:de:0030-drops-20781},
  doi =		{10.4230/DagSemProc.09061.13},
  annote =	{Keywords: Sparse matrix algorithms, QR factorization, multifrontal}
}
Document
Parallelization of Mapping Algorithms for Next Generation Sequencing Applications

Authors: Doruk Bozdag, Catalin C. Barbacioru, and Umit V. Catalyurek


Abstract
With the advent of next-generation high throughput sequencing instruments, large volumes of short sequence data are generated at an unprecedented rate. Processing and analyzing these massive data requires overcoming several challenges. A particular challenge addressed in this abstract is the mapping of short sequences (reads) to a reference genome by allowing mismatches. This is a significantly time-consuming combinatorial problem in many applications, including whole-genome resequencing, targeted sequencing, transcriptome/small RNA, DNA methylation and ChIP sequencing, and takes time on the order of days using existing sequential techniques on large scale datasets. In this work, we introduce six parallelization methods, each having different scalability characteristics, to speed up short sequence mapping. We also address an associated load balancing problem that involves grouping nodes of a tree from different levels. This problem arises due to a trade-off between computational cost and granularity while partitioning the workload. We comparatively present the proposed parallelization methods and give theoretical cost models for each of them. Experimental results on real datasets demonstrate the effectiveness of the methods and indicate that they are successful at reducing the execution time from the order of days to under just a few hours for large datasets. To the best of our knowledge this is the first study on parallelization of the short sequence mapping problem.

Cite as

Doruk Bozdag, Catalin C. Barbacioru, and Umit V. Catalyurek. Parallelization of Mapping Algorithms for Next Generation Sequencing Applications. In Combinatorial Scientific Computing. Dagstuhl Seminar Proceedings, Volume 9061, p. 1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2009)


BibTeX

@InProceedings{bozdag_et_al:DagSemProc.09061.14,
  author =	{Bozdag, Doruk and Barbacioru, Catalin C. and Catalyurek, Umit V.},
  title =	{{Parallelization of Mapping Algorithms for Next Generation Sequencing Applications}},
  booktitle =	{Combinatorial Scientific Computing},
  pages =	{1--1},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2009},
  volume =	{9061},
  editor =	{Uwe Naumann and Olaf Schenk and Horst D. Simon and Sivan Toledo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.09061.14},
  URN =		{urn:nbn:de:0030-drops-21294},
  doi =		{10.4230/DagSemProc.09061.14},
  annote =	{Keywords: Genome sequencing, sequence mapping}
}
Document
Randomized Heuristics for Exploiting Jacobian Scarcity

Authors: Andrew Lyons and Ilya Safro


Abstract
Griewank and Vogel introduced the notion of Jacobian scarcity, which generalizes the properties of sparsity and rank to capture a kind of deficiency in the degrees of freedom of the Jacobian matrix $F'(\mathbf{x})$. We describe new randomized heuristics that exploit scarcity for the optimized evaluation of collections of Jacobian-vector or Jacobian-transpose-vector products.
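
Loosely speaking (paraphrasing the scarcity idea in general terms, not this abstract): the Jacobian is represented by a linearized computational graph, a DAG whose edges carry local partial derivatives and whose accumulation yields $F'(\mathbf{x})$. Scarcity refers to the existence of such a DAG with far fewer edges than $F'(\mathbf{x})$ has independent nonzero entries, so that products $F'(\mathbf{x})\,v$ and $F'(\mathbf{x})^{T}\,w$ can be evaluated by propagating through the small DAG instead of first accumulating the full matrix.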

Cite as

Andrew Lyons and Ilya Safro. Randomized Heuristics for Exploiting Jacobian Scarcity. In Combinatorial Scientific Computing. Dagstuhl Seminar Proceedings, Volume 9061, pp. 1-2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2009)


BibTeX

@InProceedings{lyons_et_al:DagSemProc.09061.15,
  author =	{Lyons, Andrew and Safro, Ilya},
  title =	{{Randomized Heuristics for Exploiting Jacobian Scarcity}},
  booktitle =	{Combinatorial Scientific Computing},
  pages =	{1--2},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2009},
  volume =	{9061},
  editor =	{Uwe Naumann and Olaf Schenk and Horst D. Simon and Sivan Toledo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.09061.15},
  URN =		{urn:nbn:de:0030-drops-20868},
  doi =		{10.4230/DagSemProc.09061.15},
  annote =	{Keywords: Jacobian, scarcity, accumulation, directed acyclic graph}
}
Document
Short Tutorial: Getting Started With Ipopt in 90 Minutes

Authors: Andreas Wächter


Abstract
Ipopt is an open-source software package for large-scale nonlinear optimization. This tutorial gives a short introduction that should allow the reader to install and test the package on a UNIX-like system, and to run simple examples in a short period of time.
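
For context, Ipopt solves nonlinear programs of the standard form
\[
\min_{x \in \mathbb{R}^n} f(x) \quad \text{subject to} \quad g_L \le g(x) \le g_U, \qquad x_L \le x \le x_U,
\]
with $f$ and $g$ assumed sufficiently smooth; the functions and their derivatives can be supplied through Ipopt's programming interfaces or via a modeling environment such as AMPL.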

Cite as

Andreas Wächter. Short Tutorial: Getting Started With Ipopt in 90 Minutes. In Combinatorial Scientific Computing. Dagstuhl Seminar Proceedings, Volume 9061, pp. 1-17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2009)


BibTeX

@InProceedings{wachter:DagSemProc.09061.16,
  author =	{W\"{a}chter, Andreas},
  title =	{{Short Tutorial: Getting Started With Ipopt in 90 Minutes}},
  booktitle =	{Combinatorial Scientific Computing},
  pages =	{1--17},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2009},
  volume =	{9061},
  editor =	{Uwe Naumann and Olaf Schenk and Horst D. Simon and Sivan Toledo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.09061.16},
  URN =		{urn:nbn:de:0030-drops-20890},
  doi =		{10.4230/DagSemProc.09061.16},
  annote =	{Keywords: Nonlinear Optimization, Tutorial, Ipopt, COIN-OR}
}
Document
Stabilising aggregation AMG

Authors: Frank Hülsemann


Abstract
When applied to linear systems arising from scalar elliptic partial differential equations, algebraic multigrid (AMG) schemes based on aggregation exhibit a mesh size dependent convergence behaviour. As the number of iterations increases with the number of unknowns in the linear system, the computational complexity of such a scheme is non-optimal. This contribution presents a stabilisation of the aggregation AMG algorithm which adds a number of subspace projection steps at different stages of the algorithm and allows for variable cycling strategies. Numerical results illustrate the advantage of the stabilised algorithm over its original formulation.
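
For orientation, the plain (unsmoothed) aggregation setup that such AMG schemes build on, stated generically rather than as this paper's algorithm: the unknowns are grouped into disjoint aggregates $\mathcal{A}_1, \ldots, \mathcal{A}_m$, the prolongation $P \in \mathbb{R}^{n \times m}$ is piecewise constant with $P_{ic} = 1$ if $i \in \mathcal{A}_c$ and $0$ otherwise, and the coarse-level operator is the Galerkin product
\[
A_c = P^{T} A P .
\]
The mesh-size-dependent convergence mentioned above is commonly attributed to the limited approximation quality of this piecewise-constant interpolation, which the stabilisation by subspace projection steps aims to compensate.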

Cite as

Frank Hülsemann. Stabilising aggregation AMG. In Combinatorial Scientific Computing. Dagstuhl Seminar Proceedings, Volume 9061, pp. 1-4, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2009)


BibTeX

@InProceedings{hulsemann:DagSemProc.09061.17,
  author =	{H\"{u}lsemann, Frank},
  title =	{{Stabilising aggregation AMG}},
  booktitle =	{Combinatorial Scientific Computing},
  pages =	{1--4},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2009},
  volume =	{9061},
  editor =	{Uwe Naumann and Olaf Schenk and Horst D. Simon and Sivan Toledo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.09061.17},
  URN =		{urn:nbn:de:0030-drops-20946},
  doi =		{10.4230/DagSemProc.09061.17},
  annote =	{Keywords: Algebraic multigrid, aggregation, stabilisation}
}
Document
The CPR Method and Beyond : Prologue

Authors: Shahadat Hossain and Trond Steihaug


Abstract
In 1974, A.R. Curtis, M.J.D. Powell, and J.K. Reid published a seminal paper on the estimation of Jacobian matrices, which was later coined the CPR method. Central to the CPR method is the effective utilization of a priori known sparsity information. It is only recently that the optimal CPR method in its general form has been characterized and the theoretical underpinning for its optimality shown. In this short note we provide an overview of the development of computational techniques and software tools for the estimation of Jacobian matrices.
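
The basic idea behind the CPR method, sketched here in standard notation rather than quoted from this note: the columns of the Jacobian $J = F'(x)$ are grouped so that columns in the same group are structurally orthogonal, i.e. no two of them have a nonzero in the same row. Summing the columns of a group into a 0-1 seed vector $s$, a single extra evaluation of $F$ recovers all of that group's nonzeros at once via the finite-difference approximation
\[
J s \approx \frac{F(x + \varepsilon s) - F(x)}{\varepsilon},
\]
so the full sparse Jacobian is estimated with only as many additional function evaluations as there are groups.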

Cite as

Shahadat Hossain and Trond Steihaug. The CPR Method and Beyond : Prologue. In Combinatorial Scientific Computing. Dagstuhl Seminar Proceedings, Volume 9061, pp. 1-3, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2009)


BibTeX

@InProceedings{hossain_et_al:DagSemProc.09061.18,
  author =	{Hossain, Shahadat and Steihaug, Trond},
  title =	{{The CPR Method and Beyond : Prologue}},
  booktitle =	{Combinatorial Scientific Computing},
  pages =	{1--3},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2009},
  volume =	{9061},
  editor =	{Uwe Naumann and Olaf Schenk and Horst D. Simon and Sivan Toledo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.09061.18},
  URN =		{urn:nbn:de:0030-drops-20797},
  doi =		{10.4230/DagSemProc.09061.18},
  annote =	{Keywords: Structural Orthogonality, Optimal CPR, Sparse Jacobian Estimation Software}
}
Document
The Enabling Power of Graph Coloring Algorithms in Automatic Differentiation and Parallel Processing

Authors: Assefaw H. Gebremedhin


Abstract
Combinatorial scientific computing (CSC) is founded on the recognition of the enabling power of combinatorial algorithms in scientific and engineering computation and in high-performance computing. The domain of CSC extends beyond traditional scientific computing---the three major branches of which are numerical linear algebra, numerical solution of differential equations, and numerical optimization---to include a range of emerging and rapidly evolving computational and information science disciplines. Orthogonally, CSC problems could also emanate from infrastructural technologies for supporting high-performance computing. Despite the apparent disparity in their origins, CSC problems and scenarios are unified by the following common features: (A) The overarching goal is often to make computation efficient---by minimizing overall execution time, memory usage, and/or storage space---or to facilitate knowledge discovery or analysis. (B) Identifying the most accurate combinatorial abstractions that help achieve this goal is usually a part of the challenge. (C) The abstractions are often expressed, with advantage, as graph or hypergraph problems. (D) The identified combinatorial problems are typically NP-hard to solve optimally. Thus, fast, often linear-time, approximation (or heuristic) algorithms are the methods of choice. (E) The combinatorial algorithms themselves often need to be parallelized, to avoid their being bottlenecks within a larger parallel computation. (F) Implementing the algorithms and deploying them via software toolkits is critical. This talk attempts to illustrate the aforementioned features of CSC through an example: we consider the enabling role graph coloring models and their algorithms play in efficient computation of sparse derivative matrices via automatic differentiation (AD). The talk focuses on efforts being made on this topic within the SciDAC Institute for Combinatorial Scientific Computing and Petascale Simulations (CSCAPES). Aiming at providing an overview rather than details, we discuss the various coloring models used in sparse Jacobian and Hessian computation, the serial and parallel algorithms developed in CSCAPES for solving the coloring problems, and a case study that demonstrates the efficacy of the coloring techniques in the context of an optimization problem in a Simulated Moving Bed process. Implementations of our serial algorithms for the coloring and related problems in derivative computation are assembled and made publicly available in a package called ColPack. Implementations of our parallel coloring algorithms are incorporated into and deployed via the load-balancing toolkit Zoltan. ColPack has been interfaced with ADOL-C, an operator overloading-based AD tool that has recently acquired improved capabilities for automatic detection of sparsity patterns of Jacobians and Hessians (sparsity pattern detection is the first step in derivative matrix computation via coloring-based compression). Further information on ColPack and Zoltan is available at their respective websites, which can be accessed via http://www.cscapes.org
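
One of the Jacobian coloring models referred to above, stated as the standard correspondence rather than as a claim specific to this talk: partitioning the columns of a sparse Jacobian $J$ into structurally orthogonal groups is equivalent to a distance-2 coloring of the bipartite graph whose two vertex sets are the rows and columns of $J$ and whose edges are its nonzero positions. The $p$ groups define a seed matrix $S \in \{0,1\}^{n \times p}$, and the compressed matrix $B = J S$ contains every nonzero of $J$ exactly once, so $J$ is recovered from $p$ Jacobian-vector products computed by AD.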

Cite as

Assefaw H. Gebremedhin. The Enabling Power of Graph Coloring Algorithms in Automatic Differentiation and Parallel Processing. In Combinatorial Scientific Computing. Dagstuhl Seminar Proceedings, Volume 9061, pp. 1-3, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2009)


BibTeX

@InProceedings{gebremedhin:DagSemProc.09061.19,
  author =	{Gebremedhin, Assefaw H.},
  title =	{{The Enabling Power of Graph Coloring Algorithms in Automatic Differentiation and Parallel Processing}},
  booktitle =	{Combinatorial Scientific Computing},
  pages =	{1--3},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2009},
  volume =	{9061},
  editor =	{Uwe Naumann and Olaf Schenk and Horst D. Simon and Sivan Toledo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.09061.19},
  URN =		{urn:nbn:de:0030-drops-20934},
  doi =		{10.4230/DagSemProc.09061.19},
  annote =	{Keywords: Graph coloring, sparse derivative computation, automatic differentiation, parallel computing}
}
Document
The Past, Present and Future of High Performance Computing

Authors: Ruud van der Pas


Abstract
In this overview paper we start by looking at the birth of what is called ``High Performance Computing'' today. It all began over 30 years ago when the Cray 1 and CDC Cyber 205 ``supercomputers'' were introduced. This had a huge impact on scientific computing. A very turbulent time at both the hardware and software level was to follow. Eventually the situation stabilized, but not for long. Today, there are two different trends in hardware architectures, and they have created a bifurcation in the market. On the one hand, the GPGPU quickly found a place in the marketplace, but it is still the domain of the expert. In contrast to this, multicore processors make hardware parallelism available to the masses. Each has its own set of issues to deal with. In the last section we make an attempt to look into the future, but this is of course a highly personal opinion.

Cite as

Ruud van der Pas. The Past, Present and Future of High Performance Computing. In Combinatorial Scientific Computing. Dagstuhl Seminar Proceedings, Volume 9061, pp. 1-7, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2009)


BibTeX

@InProceedings{vanderpas:DagSemProc.09061.20,
  author =	{van der Pas, Ruud},
  title =	{{The Past, Present and Future of High Performance Computing}},
  booktitle =	{Combinatorial Scientific Computing},
  pages =	{1--7},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2009},
  volume =	{9061},
  editor =	{Uwe Naumann and Olaf Schenk and Horst D. Simon and Sivan Toledo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.09061.20},
  URN =		{urn:nbn:de:0030-drops-20836},
  doi =		{10.4230/DagSemProc.09061.20},
  annote =	{Keywords: High-Performance Scientific Computing}
}
Document
Weighted aggregation for multi-level graph partitioning

Authors: Cédric Chevalier and Ilya Safro


Abstract
Graph partitioning is a well-known optimization problem of great interest in theoretical and applied studies. Since the 1990s, many multilevel schemes have been introduced as a practical tool to solve this problem. A multilevel algorithm may be viewed as a process of graph topology learning at different scales in order to generate a better approximation for any approximation method incorporated at the uncoarsening stage in the framework. In this work we compare two multilevel frameworks based on the geometric and the algebraic multigrid schemes for the partitioning problem.

Cite as

Cédric Chevalier and Ilya Safro. Weighted aggregation for multi-level graph partitioning. In Combinatorial Scientific Computing. Dagstuhl Seminar Proceedings, Volume 9061, p. 1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2009)


BibTeX

@InProceedings{chevalier_et_al:DagSemProc.09061.21,
  author =	{Chevalier, C\'{e}dric and Safro, Ilya},
  title =	{{Weighted aggregation for multi-level graph partitioning}},
  booktitle =	{Combinatorial Scientific Computing},
  pages =	{1--1},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2009},
  volume =	{9061},
  editor =	{Uwe Naumann and Olaf Schenk and Horst D. Simon and Sivan Toledo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.09061.21},
  URN =		{urn:nbn:de:0030-drops-20820},
  doi =		{10.4230/DagSemProc.09061.21},
  annote =	{Keywords: Graph partitioning, multilevel, coarsening, weighted aggregation}
}
