In the theory of algorithmic randomness, several notions of random sequence are defined via a game-theoretic approach, and the notions that have received the most attention are perhaps Martin-L\"of randomness and computable randomness. The latter notion was introduced by Schnorr and is rather natural: an infinite binary sequence is computably random if no total computable strategy succeeds on it by betting on bits in order. However, computably random sequences can have properties that one may consider incompatible with being random; in particular, there are computably random sequences that are highly compressible. The concept of Martin-L\"of randomness is much better behaved in this and other respects; on the other hand, its definition in terms of martingales is considerably less natural. Muchnik, elaborating on ideas of Kolmogorov and Loveland, refined Schnorr's model by also allowing non-monotonic strategies, i.e.\ strategies that need not bet on bits in order. The resulting ``non-monotonic'' notion of randomness, now called Kolmogorov-Loveland randomness, has been shown to be quite close to Martin-L\"of randomness, but whether the two classes coincide remains a fundamental open question. In order to get a better understanding of non-monotonic randomness notions, Miller and Nies introduced some interesting intermediate concepts, where one only allows non-adaptive strategies, i.e.\ strategies that still bet non-monotonically, but for which the sequence of betting positions is known in advance (and computable). Recently, these notions were shown by Kastermans and Lempp to differ from Martin-L\"of randomness. We continue the study of the non-monotonic randomness notions introduced by Miller and Nies and obtain results about the Kolmogorov complexities of initial segments that may and may not occur for such sequences; these results then imply a complete classification of these randomness notions by order of strength.
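To fix notation, the following is a standard sketch of the martingale formalism alluded to above, not part of the abstract itself; the precise definitions are given in the paper. In the monotonic setting, a betting strategy can be identified with a martingale, i.e.\ a function $d \colon \{0,1\}^* \to [0,\infty)$ satisfying the fairness condition below, and it succeeds on a sequence if its capital is unbounded along the initial segments of that sequence:
\[
  d(w0) + d(w1) = 2\,d(w) \quad \text{for all } w \in \{0,1\}^*,
  \qquad
  d \text{ succeeds on } X \in \{0,1\}^\omega \iff \limsup_{n \to \infty} d(X \upharpoonright n) = \infty .
\]
A sequence is then computably random if no total computable martingale succeeds on it. The non-monotonic notions are obtained by letting the strategy bet on the bits of $X$ in a non-monotonic order, either chosen adaptively during the game (Kolmogorov-Loveland randomness) or fixed in advance as a computable sequence of pairwise distinct positions (the non-adaptive notions of Miller and Nies).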