Dagstuhl Seminar Proceedings, Volume 7391
Dagstuhl Seminar Proceedings
DagSemProc
https://www.dagstuhl.de/dagpub/1862-4405
https://dblp.org/db/series/dagstuhl
1862-4405
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
7391
2007
https://drops.dagstuhl.de/entities/volume/DagSemProc-volume-7391
07391 Abstracts Collection – Probabilistic Methods in the Design and Analysis of Algorithms
From 23.09.2007 to 28.09.2007, the Dagstuhl Seminar 07391 "Probabilistic Methods in the Design and Analysis of Algorithms" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl.
The seminar brought together leading researchers in probabilistic
methods to strengthen and foster collaborations among various areas of
Theoretical Computer Science. The interaction between researchers
using randomization in algorithm design and researchers studying known
algorithms and heuristics in probabilistic models enhanced the
research of both groups in developing new complexity frameworks and in
obtaining new algorithmic results.
During the seminar, several participants presented their current
research, and ongoing work and open problems were discussed. Abstracts of
the presentations given during the seminar, as well as abstracts of
seminar results and ideas, are collected in this paper. The first section
describes the seminar topics and goals in general.
Links to extended abstracts or full papers are provided, if available.
Algorithms
Randomization
Probabilistic analysis
Complexity
1-18
Regular Paper
Martin
Dietzfelbinger
Martin Dietzfelbinger
Shang-Hua
Teng
Shang-Hua Teng
Eli
Upfal
Eli Upfal
Berthold
Vöcking
Berthold Vöcking
10.4230/DagSemProc.07391.1
Creative Commons Attribution 4.0 International license
https://creativecommons.org/licenses/by/4.0/legalcode
Sampling-based Approximation Algorithms for Multi-stage Stochastic Optimization
Stochastic optimization problems provide a means to model uncertainty in the input data where the uncertainty is modeled by a probability distribution over the possible realizations of the data. We consider a broad class of these problems, called {\it multi-stage stochastic programming problems with recourse}, where the uncertainty evolves through a series of stages and one makes decisions in each stage in response to the new information learned. These problems are often computationally quite difficult, with even very specialized (sub)problems being $\#P$-complete.
We obtain the first fully polynomial randomized approximation scheme (FPRAS) for a broad class of multi-stage stochastic linear programming problems with any constant number of stages, without placing any restrictions on the underlying probability distribution or on the cost structure of the input. For any fixed $k$, for a rich class of $k$-stage stochastic linear programs (LPs), we show that, for any probability distribution, for any $\epsilon>0$, one can compute, with high probability, a solution with expected cost at most $(1+\epsilon)$ times the optimal expected cost, in time polynomial in the input size, $\frac{1}{\epsilon}$, and a parameter $\lambda$ that is an upper bound on the cost-inflation over successive stages. Moreover, the algorithm analyzed is a simple and intuitive algorithm that is often used in practice, the {\it sample average approximation} (SAA) method. In this method, one draws certain samples from the underlying distribution, constructs an approximate distribution from these samples, and solves the stochastic problem given by this approximate distribution. This is the first result establishing that the SAA method yields near-optimal solutions for (a class of) multi-stage programs with a polynomial number of samples.
As a corollary of this FPRAS, by adapting a generic rounding technique of Shmoys and Swamy, we also obtain the first approximation algorithms for the analogous class of multi-stage stochastic integer programs, which includes the multi-stage versions of the set cover, vertex cover, multicut on trees, facility location, and multicommodity flow problems.
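The SAA method described in the abstract can be illustrated on a toy two-stage problem. The sketch below is not from the paper: it uses a hypothetical newsvendor-style model (first stage: choose an order quantity at unit cost `c`; second stage: after demand is revealed, pay a shortage penalty `p` per unmet unit) and replaces the true demand distribution by the empirical distribution over drawn samples, minimizing the sample-average cost.

```python
import random

def saa_solve(sample_demand, n_samples=1000, c=1.0, p=3.0, x_grid=range(0, 201)):
    """Sample average approximation for a toy two-stage newsvendor problem.

    Stage 1: pick order quantity x at unit cost c.
    Stage 2: demand d is revealed; pay penalty p per unit of unmet demand.
    SAA: draw n_samples demands and minimize the empirical average cost.
    """
    demands = [sample_demand() for _ in range(n_samples)]

    def empirical_cost(x):
        # First-stage cost plus the sample average of the recourse cost.
        return c * x + sum(p * max(d - x, 0) for d in demands) / len(demands)

    # Toy problem is one-dimensional, so a grid search suffices here;
    # the paper's setting solves a stochastic LP instead.
    return min(x_grid, key=empirical_cost)

if __name__ == "__main__":
    random.seed(0)
    # True demand: uniform on [50, 150]. The optimal quantity satisfies
    # P(d > x) = c/p = 1/3, i.e. x* is roughly 117 for this distribution.
    x_hat = saa_solve(lambda: random.uniform(50, 150))
    print(x_hat)
```

With enough samples the SAA minimizer concentrates near the true optimum, which is the phenomenon the paper establishes rigorously (with polynomial sample bounds) for a rich class of multi-stage stochastic LPs.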
Stochastic optimization
approximation algorithms
randomized algorithms
linear programming
1-24
Regular Paper
Chaitanya
Swamy
Chaitanya Swamy
David
Shmoys
David Shmoys
10.4230/DagSemProc.07391.2
Creative Commons Attribution 4.0 International license
https://creativecommons.org/licenses/by/4.0/legalcode
Smoothed Analysis of Binary Search Trees and Quicksort Under Additive Noise
While the height of binary search trees is linear in the worst case, their
average height is logarithmic. We investigate what happens in between, i.e.,
when the randomness is limited, by analyzing the smoothed height of binary
search trees: Randomly perturb a given (adversarial) sequence and then take
the expected height of the binary search tree generated by the resulting
sequence.
As perturbation models, we consider partial permutations, where some
elements are randomly permuted, and additive noise, where random numbers
are added to the adversarial sequence. We prove tight bounds for the
smoothed height of binary search trees under these models. We also obtain
tight bounds for the smoothed number of left-to-right maxima. Furthermore, we
exploit the results obtained to get bounds for the smoothed number of
comparisons that quicksort needs.
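The partial-permutations model from the abstract is easy to experiment with. The sketch below is illustrative only (an experiment, not the paper's analysis): starting from the sorted sequence, which is adversarial for binary search trees and yields linear height, each position is marked independently with probability `p` and the marked elements are randomly permuted among themselves; the resulting BST is typically far shallower.

```python
import random

def bst_height(seq):
    """Height (number of nodes on the longest root-to-leaf path) of the BST
    obtained by inserting the elements of seq in order."""
    root = None  # node = [key, left, right]
    height = 0
    for x in seq:
        if root is None:
            root = [x, None, None]
            depth = 1
        else:
            node, depth = root, 1
            while True:
                depth += 1
                i = 1 if x < node[0] else 2
                if node[i] is None:
                    node[i] = [x, None, None]
                    break
                node = node[i]
        height = max(height, depth)
    return height

def partial_permutation(seq, p):
    """Mark each position independently with probability p and randomly
    permute the marked elements among their positions."""
    seq = list(seq)
    marked = [i for i in range(len(seq)) if random.random() < p]
    values = [seq[i] for i in marked]
    random.shuffle(values)
    for i, v in zip(marked, values):
        seq[i] = v
    return seq

if __name__ == "__main__":
    random.seed(1)
    n = 1000
    adversarial = list(range(n))          # sorted input: height exactly n
    print(bst_height(adversarial))
    # After perturbation the height drops substantially with high probability.
    print(bst_height(partial_permutation(adversarial, 0.1)))
```

Running this for various `p` shows the interpolation the abstract describes: `p = 0` leaves the adversarial linear height untouched, while larger `p` drives the expected height down toward the logarithmic average case.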
Smoothed Analysis
Binary Search Trees
Quicksort
Left-to-right Maxima
1-19
Regular Paper
Bodo
Manthey
Bodo Manthey
Till
Tantau
Till Tantau
10.4230/DagSemProc.07391.3
Creative Commons Attribution 4.0 International license
https://creativecommons.org/licenses/by/4.0/legalcode