Improved Protocols and Hardness Results for the Two-Player Cryptogenography Problem

The cryptogenography problem, introduced by Brody, Jakobsen, Scheder, and Winkler (ITCS 2014), is to collaboratively leak a piece of information known to only one member of a group (i)~without revealing who was the origin of this information and (ii)~without any private communication, neither during the process nor before. Despite several deep structural results, even the smallest case of leaking one bit of information present at one of two players is not well understood. Brody et al.\ gave a 2-round protocol enabling the two players to succeed with probability $1/3$ and showed the hardness result that no protocol can give a success probability of more than~$3/8$. In this work, we show that neither bound is tight. Our new hardness result, obtained by a different application of the concavity method used also in the previous work, states that a success probability better than $0.3672$ is not possible. Using both theoretical and numerical approaches, we improve the lower bound to $0.3384$, that is, we give a protocol leading to this success probability. To ease the design of new protocols, we prove an equivalent formulation of the cryptogenography problem as a solitaire vector splitting game. Via an automated game tree search, we find good strategies for this game. We then translate the splits that occurred in these strategies into inequalities relating position values and use an LP solver to find an optimal solution for these inequalities. This gives slightly better game values, but more importantly, it gives a more compact representation of the protocol and a way to easily verify the claimed quality of the protocol. These improved bounds, as well as the large sizes and depths of the improved protocols we find, suggest that finding good protocols for the cryptogenography problem, as well as understanding their structure, is harder than the simple problem formulation suggests.

A better understanding of this smallest-possible problem of leaking one bit from two players, ideally by determining an optimal protocol (that is, one matching a hardness result), could greatly improve the situation.
Our Results. We shall be partially successful in achieving these goals. On the positive side, we find protocols with strictly larger success probability than $1/3$ (namely $0.3384$) and we prove a stricter hardness result of $0.3672$. Our new protocols look very different from the 2-round protocol given by Brody et al., in particular, they use infinite protocol trees (but have a finite expected number of communication rounds). These findings motivate and give new starting points for further research on the cryptogenography problem.
On the not so positive side, our work on better protocols indicates that good cryptogenographic protocols can be very complicated. The simplest protocol we found that beats the $1/3$ barrier already has a protocol tree of depth 16, that is, the two players need to communicate for 16 rounds in the worst case. While we still manage to give a human-readable description and performance proof for this protocol, it is not surprising that previous works not incorporating a computer-assisted search did not find such a protocol. Our best protocol, giving a success probability of $0.3384$, already uses 18248 non-equivalent states.
Technical contributions. To find the improved protocols, we use a number of theoretical and experimental tools. We first reformulate the cryptogenography problem as a solitaire vector splitting game over vectors in $\mathbb{R}^{2\times k}_{\ge 0}$. Both for human researchers and for automated protocol searches, this reformulation seems to be easier to work with than the previous reformulation via convex combinations of distributions lying in a common allowed plane [2]. It also proved to be beneficial for improving upon the hardness result.
Restrictions of the vector splitting game to a finite subset of $\mathbb{R}^{2\times k}_{\ge 0}$, e.g., $\{0, \dots, T\}^{2\times k}$, can easily be solved via dynamic programming, giving (due to the restriction possibly sub-optimal) cryptogenographic protocols. Unfortunately, for $k = 2$ even discretizations as fine as $T = 40$ are not sufficient to find protocols beating the $1/3$ barrier, and memory usage quickly becomes a bottleneck. However, exploiting the simple fact that the game values are homogeneous (that is, multiplying a game position by a non-negative scalar changes the game value by this factor), we can (partially) simulate a much finer discretization in a coarse one. This extended dynamic programming approach easily gives cryptogenographic success probabilities larger than $1/3$. Reading off the corresponding protocols needs more care, due to the reuse of the same position in different contexts, but in the end it also gives the improved protocols without greater difficulty.
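The scaling exploit admits a compact illustration. The following Python sketch (our own illustrative code, not the implementation used for the experiments; the table layout and function names are assumptions) propagates lower bounds along rays through the origin of a coarse grid, using the homogeneity $\mathrm{succ}(c \cdot D) = c \cdot \mathrm{succ}(D)$:

```python
from math import gcd
from itertools import product

T = 3  # coarse grid bound (illustrative only)

def primitive(D):
    """Normalize a grid position to its primitive direction vector."""
    g = 0
    for x in D:
        g = gcd(g, x)
    return D if g == 0 else tuple(x // g for x in D)

def propagate_scaling(s):
    """Share lower bounds along rays through the origin, using the
    homogeneity succ(c * D) = c * succ(D): every position inherits the
    best per-unit-mass value found anywhere on its ray."""
    rays = {}
    for D in s:
        rays.setdefault(primitive(D), []).append(D)
    for points in rays.values():
        best = max((s[D] / sum(D) for D in points if sum(D) > 0), default=0.0)
        for D in points:
            s[D] = max(s[D], best * sum(D))
    return s

# demo: a bound of 1.0 at (1, 1, 1, 1) lifts all its grid multiples
s = {D: 0.0 for D in product(range(T + 1), repeat=4)}
s[(1, 1, 1, 1)] = 1.0
propagate_scaling(s)
assert s[(3, 3, 3, 3)] == 3.0
```

Since a strong bound found anywhere on a ray immediately transfers to all grid multiples of the same primitive direction, the coarse grid can partially mimic a much finer one.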
When a cryptogenographic protocol reuses a state a second time (with a non-trivial split in between), then nothing prevents the protocol from re-iterating this part whenever this position occurs. Such a protocol allows infinite paths, while still needing only a finite expected number of rounds. Since the extended dynamic programming approach cannot find such protocols in finite time, we use a linear programming based post-processing stage. We translate each splitting operation used in the extended dynamic programming search into an inequality relating game values. By exporting these into an LP solver, we not only obtain better game values (possibly corresponding to cryptogenographic protocols with infinite paths, for which we would get a compact representation by making the cycles explicit), but also gain a way to easily certify these values using an optimality check for a linear program instead of having to trust the ad-hoc dynamic programming implementation.
Related work. Despite a visible interest of the research community in the cryptogenography problem, the only relevant follow-up work is Jakobsen's paper [3], which analyses the cryptogenography problem for the case that several of the players know the secret. This makes it possible to leak a much larger amount of information, as made precise in [3]. Due to the asymptotic nature of these results, unfortunately, they do not give new insights into the 2-player case. Other work on anonymous broadcasting typically assumes bounded computational power of the adversary (see, e.g., [4]).
In [2], the cryptogenography problem was reformulated to the problem of finding the point-wise minimal function f on the set of secret distributions that is point-wise not smaller than some given function g and that is concave on an infinite set of 2-dimensional planes. Such restricted notions of concavity (or, equivalently, convexity) seem to be less understood. We found work by Matoušek [5] for a similar convexity problem, however, with only a finite number of one-dimensional directions in which convexity is required. We do not see how to extend these results to our needs.

Finding Better Cryptogenography Protocols
This section is devoted to the design of stronger cryptogenographic protocols. In particular, we demonstrate that a success probability of more than $1/3$ can be achieved. We start by making the cryptogenography problem precise (Section 2.1 and Section 2.2) and introduce an equivalent formulation as a solitaire vector splitting game (Section 2.3). We illustrate both formulations using the best known protocol for the 2-player case (Section 2.4). In Section 2.5, we state basic properties that simplify the analysis of protocols and aid our automated search for better protocols, which is detailed in Section 2.7. In Section 2.6, we give a simple, human-readable proof that $1/3$ is not the optimal success probability by analyzing a protocol with success probability $449/1334 \approx 0.3366$. We describe how to post-optimize and certify the results obtained by the automated search using linear programming in Section 2.8 and summarize our findings (in particular, the best lower bound we have found) in Section 2.9.

The Cryptogenography Problem
Let us fix an arbitrary number $k$ of players, called $1, \dots, k$ for simplicity. We write $[k] := \{1, \dots, k\}$ for the set of players. We assume that a random one of them, the "secret owner" $J \in [k]$, initially has a secret, namely a random bit $X \in \{0, 1\}$. The task of the players is, using public communication only, to make this random bit public without revealing the identity of the secret owner. More precisely, we assume that the players, before looking at the secret distribution, (publicly) decide on a communication protocol $\pi$. This communication is again public, that is, all bits sent are broadcast to all players, and they may depend only on previous communication, the private knowledge of the sender (whether he is the secret owner or not, and if so, the secret), and private random numbers of the sender. At the end of the communication phase, the protocol specifies an output bit $Y$ (depending on all communication).
The aspect of not disclosing the identity of the secret owner is modeled by an adversary, who knows the protocol (because it was discussed in public) and who gets to see all communication (and consequently also knows the protocol output $Y$). The adversary, based on all this data, blames one player $K$. The players win this game if the protocol outputs the true secret (that is, $Y = X$) and the adversary does not blame the secret owner (that is, $K \neq J$); otherwise the adversary wins. It is easy to determine the best strategy for the adversary given the protocol and the communication (see Section 2.2), so the interesting part of the cryptogenography problem is finding strategies that maximize the probability that the players win assuming that the adversary plays optimally. We call this the (players') success probability of the protocol.
While the game starts with a uniform secret distribution, it will be useful to regard arbitrary secret distributions. In general, a secret distribution is a distribution $D$ over $\{0,1\} \times [k]$, where $D_{ij}$ is the probability that player $j \in [k]$ is the secret owner and the secret is $i \in \{0,1\}$. Modulo a trivial isomorphism, $D$ is just a vector in $\mathbb{R}^{2\times k}_{\ge 0}$ with $\|D\|_1 = 1$. We denote by $\Delta = \Delta_k$ the set of all these distributions. Brody et al. [2] observe that any cryptogenographic protocol can be viewed as successive rounds of one-bit communication, where in each step some (a priori) secret distribution probabilistically leads to one of two follow-up (a posteriori) distributions (depending on the bit transmitted) such that the a priori distribution is a convex combination of these and a certain proportionality condition is fulfilled (all three distributions lie in the same "allowed plane"). Conversely, whenever the initial distribution can be written as such a convex combination of certain distributions, then there is a round of a cryptogenographic protocol leading to these two distributions (with certain probabilities). Consequently, the problem of finding a good cryptogenographic protocol is equivalent to iteratively rewriting the initial equidistribution as certain convex combinations of other secret distributions in such a way that the success probability, which can be expressed in terms of this rewriting tree, is large. Instead of directly working with this formulation, we propose a slightly different reformulation in Section 2.3. To prepare readers who are unfamiliar with the work of Brody et al. [2], we give a high-level introduction in the following section.

The Convex Combination Formulation
For readers' convenience, we give a high-level description of the convex combination formulation of Brody et al. For proofs and a more formal treatment, we refer the reader to [2].
Optimal strategy of the adversary. Recall that X denotes the secret bit and J the identity of the secret owner. Fix a protocol π of the players, which for every state of the protocol execution, i.e., every possible history of communication, determines (1) which player's turn it is to communicate (or whether communication has ended) and (2) probability distributions over the next message this player sends (two for the case that this player is the secret owner, i.e., one for each value of the secret bit, and one for the other case). Both (1) and (2) may depend on all previous communication and, if the active player is the secret owner, also on the value of the secret bit. Additionally, π fixes a protocol output function Out that given the transcript τ of all communication returns the players' guess Out(τ ) on the secret bit. Without loss of generality, we may assume that the protocol proceeds in rounds, where in each round a message consisting of a single bit is sent.
Let $\mathrm{Com}(\pi)$ denote the transcript of all communication of the protocol $\pi$. Note that this is a random variable, since we assume a random player to be the owner of a random bit as secret. It is not difficult to see what the optimal strategy of the adversary is, given the knowledge of the protocol $\pi$. He may assume that the players' guess is correct, i.e., $\mathrm{Out}(\mathrm{Com}(\pi)) = X$, as otherwise the players have already lost and therefore his guess is irrelevant. After the protocol execution has finished with a transcript $\tau$, the adversary maximizes his winning probability by blaming the player $\operatorname{argmax}_{j \in [k]} \Pr[J = j \mid \mathrm{Com}(\pi) = \tau, X = \mathrm{Out}(\tau)]$ (breaking ties arbitrarily), i.e., the player who is most likely to own the secret given the communication described by $\tau$.
From this reasoning, it becomes clear that the decisive information in the game is, for any partial transcript $\tau$, the distribution $D = ((D_{0,1}, D_{1,1}), \dots, (D_{0,k}, D_{1,k}))$, where $D_{x,j}$ is the probability an outside observer (knowing only public information, i.e., all previous communication) assigns to the event $(J = j, X = x)$. As a simple consequence, assume that some fixed transcript $\tau$ transforms the initial uniform distribution into the distribution $D$ and no further communication is allowed. Then for any fixed choice $\mathrm{Out}(\tau) = x$ of the protocol output, the adversary blames the player $j^* := \operatorname{argmax}_{j \in [k]} D_{x,j}$, and hence the players win in all cases with $X = x$ except when $J = j^*$; the optimal choice for the protocol output is the guess $x$ maximizing this winning probability. We call the strategy applied here the zero-bit strategy (since no further communication is done).
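In code, the zero-bit value of a distribution is a one-liner. The sketch below is our own illustrative encoding (the source defines no such function), representing $D$ as a nested list with $D[x][j] = \Pr[X = x, J = j]$:

```python
def zero_bit_success(D):
    """Value of the zero-bit strategy: for each possible output bit x,
    the adversary blames the most likely owner under x, so the players
    keep the probability mass on x minus its largest entry."""
    return max(sum(row) - max(row) for row in D)

# Uniform distribution for k = 2 players, scaled by 4 (integer entries
# suffice since the quantity is homogeneous): both output bits give
# min(1, 1) = 1 out of a total mass of 4, i.e., a success probability of 1/4.
assert zero_bit_success([[1, 1], [1, 1]]) == 1
```

For $k = 2$ and $D = (a, b, c, d)$ this evaluates to $\max\{\min\{a, c\}, \min\{b, d\}\}$, the expression used in Section 2.5.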
Convex combination formulation. Brody et al. prove that it is not only sufficient, but in fact equivalent to represent the cryptogenography game using only the distributions D described above and how the protocol affects these distributions. More precisely, one can model the game starting from any initial distribution D on {0, 1} × {1, . . . , k}. Then the first bit sent by some player j splits D into the distributions D 0 (for the case that the 0-bit is sent) and D 1 (for the case that the 1-bit is sent), i.e., D i is the distribution an outside observer assigns to (J, X) after bit i has been sent. By this abstraction, one can recursively consider the distributions D 0 and D 1 (i.e., their optimal protocols and success probabilities).
To determine the properties of possible splits in a protocol, let $p$ be the probability that player $j$ transmits a 0-bit. By a simple calculation, we have that $D = pD^0 + (1-p)D^1$ (cf. [2, Lemma 4.1]). Additionally, since a player may only use the information whether or not he has the secret bit (and if so, the value of the secret bit), player $j$ may never leak new information about whether another player $j' \in [k] \setminus \{j\}$ is more likely to have secret 0 or 1 (i.e., the ratio of $D_{0,j'}$ and $D_{1,j'}$ is maintained in the resulting distributions $D^0$ and $D^1$) or whether player $j' \neq j$ is more likely to have the secret than another player $j'' \in [k] \setminus \{j, j'\}$. This transfers to a proportionality condition: $D^0$ and $D^1$ must be proportional to $D$ on $\{0,1\} \times ([k] \setminus \{j\})$. In fact, any split of $D$ into $D^0$ and $D^1$ satisfying these conditions can be realized by a cryptogenographic protocol. Thus, the cryptogenography game is equivalent to, starting from the uniform distribution, recursively applying splits satisfying these conditions (i.e., allowed splits), using the zero-bit strategy at the leaves, in such a way that the resulting success probability is maximized. We argue that this view is equivalent to our vector splitting formulation in Lemma 2.6.
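The two conditions (additivity and proportionality on the non-speaking players' coordinates) are easy to check mechanically. A sketch in Python (the function name and encoding are ours; entries are kept exact, e.g. as integers, to avoid floating-point issues):

```python
def is_allowed_split(D, D0, D1):
    r"""Check whether (D0, D1) is an allowed split of D: D = D0 + D1
    and, for some speaking player j, both D0 and D1 are proportional
    to D on the coordinates {0,1} x ([k] \ {j}) of all other players.
    D, D0, D1 are 2 x k nested lists with exact (e.g., integer) entries."""
    k = len(D[0])
    if any(D[x][j] != D0[x][j] + D1[x][j] for x in range(2) for j in range(k)):
        return False

    def proportional_on(A, coords):
        # A restricted to coords is a multiple of D restricted to coords
        # (cross-multiplication test, so zero entries need no special case)
        vals = [(A[x][j], D[x][j]) for x in range(2) for j in coords]
        return all(a1 * d2 == a2 * d1 for a1, d1 in vals for a2, d2 in vals)

    for j in range(k):  # candidate speaker
        others = [j2 for j2 in range(k) if j2 != j]
        if proportional_on(D0, others) and proportional_on(D1, others):
            return True
    return False

# Player 1 speaks and splits player 2's mass proportionally: allowed.
assert is_allowed_split([[2, 2], [2, 2]], [[2, 1], [0, 1]], [[0, 1], [2, 1]])
# Splitting along the secret bit for both players would leak everything
# and violates proportionality for every speaker: not allowed.
assert not is_allowed_split([[2, 2], [2, 2]], [[2, 2], [0, 0]], [[0, 0], [2, 2]])
```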

The Solitaire Vector Splitting Game
Instead of directly using the "convex combination" formulation of Brody et al., we propose a slightly different reformulation as a solitaire vector splitting game. This formulation seems to ease finding good cryptogenographic protocols (lower bounds for the success probability), both for human researchers and via automated search (Sections 2.6 and 2.7). The main advantage of our formulation is that it takes as positions all $2k$-dimensional vectors with non-negative entries, whereas the cryptogenographic protocols are only defined on distributions over $\{0,1\} \times [k]$. In this way, we avoid arguing about probabilities and convex combinations and instead simply split a vector (resembling a secret distribution) into a sum of two other vectors. Furthermore, a simple monotonicity property (Proposition 2.5) eases the analyses. Still, there is an easy translation between the two formulations, so that we can re-use whatever results were found in [2].
The objective of the vector splitting game is to recursively apply allowed splits to a given vector $D \in \mathbb{R}^{2\times k}_{\ge 0}$ with the target of maximizing the sum of the values $p(D')$ of the resulting vectors $D'$ (note that when $D'$ is a distribution, then $p(D')$ is the zero-bit success probability of $D'$, as argued in Section 2.2). More precisely, an $n$-round play of the vector splitting game is described by a binary tree of height at most $n$, where the nodes are labeled with game positions in $\mathbb{R}^{2\times k}_{\ge 0}$. The root is labeled with the initial position $D$. For each non-leaf $v$, the labels of the two children form an allowed split of the label of $v$. The payoff of such a play is $\sum_{D'} p(D')$, where $D'$ runs over the labels of all leaves of the game tree. The aim is to maximize the payoff. Right from this definition, it is clear that the maximum payoff achievable in an $n$-round game started in position $D$, the value of this game, is $\mathrm{succ}_n(D)$ as defined below.
Definition 2.2. For all $n \in \mathbb{N}$ and all $D \in \mathbb{R}^{2\times k}_{\ge 0}$, we recursively define $\mathrm{succ}_0(D) := p(D)$ and $\mathrm{succ}_{n+1}(D) := \max\{\mathrm{succ}_n(D^0) + \mathrm{succ}_n(D^1)\}$. Here the maximum is taken over all allowed splits $(D^0, D^1)$ of $D$.
For an example of an admissible game, we refer to Figure 1 in Section 2.4. It is easy to see that the game values are non-decreasing in the number of rounds, but bounded. The limiting value is thus well-defined.
Proof. The previous definition and an elementary induction show $\mathrm{succ}_n(D) \le \|D\|_1$. Since $(D, 0)$ is an allowed split of $D$ and $\mathrm{succ}_n(0) = 0$ by the previous observation, we have $\mathrm{succ}_{n+1}(D) \ge \mathrm{succ}_n(D) + \mathrm{succ}_n(0) = \mathrm{succ}_n(D)$.
Proof. The statements follow directly from the definitions of $\mathrm{succ}_n$ and $\mathrm{succ}$ via induction.
Proof. Clearly $\mathrm{succ}_0(E) \ge \mathrm{succ}_0(D)$. Hence assume that for some $n \in \mathbb{N}$, we have $\mathrm{succ}_n(E) \ge \mathrm{succ}_n(D)$ whenever $E \ge D$. Any allowed split $(D^0, D^1)$ of $D$ can be extended to an allowed split $(E^0, E^1)$ of $E$ with $E^0 \ge D^0$ and $E^1 \ge D^1$ (distribute the extra mass $E - D$ proportionally onto the two parts), so $\mathrm{succ}_{n+1}(E) \ge \max\{\mathrm{succ}_n(E^0) + \mathrm{succ}_n(E^1)\} \ge \max\{\mathrm{succ}_n(D^0) + \mathrm{succ}_n(D^1)\} = \mathrm{succ}_{n+1}(D)$, where the maxima range over all allowed splits $(D^0, D^1)$ of $D$.
From the previous definitions and observations, we derive that the game values for games starting with a distribution $D$ (that is, $\|D\|_1 = 1$) and the success probabilities of the optimal cryptogenographic protocols for $D$ are equal. Brody et al. [2] establish that the first round of any cryptogenographic protocol for the distribution $D$ with some probability $\lambda$ leads to a distribution $D^0$ and with probability $\bar\lambda := 1 - \lambda$ leads to a distribution $D^1$ such that $D = \lambda D^0 + \bar\lambda D^1$ and $D^0, D^1$ are proportional to $D$ on $\{0,1\} \times ([k] \setminus \{j\})$ for some $j \in [k]$. Conversely, for any such $\lambda, D^0, D^1$ there is a one-round cryptogenographic protocol leading to the distribution $D^0$ with probability $\lambda$ and to $D^1$ with probability $\bar\lambda$. Hence for any $n \ge 1$, the success probability $s_n(D)$ of the optimal $n$-round protocol for the distribution $D$ is $s_n(D) = \max_{\lambda, D^0, D^1} (\lambda s_{n-1}(D^0) + \bar\lambda s_{n-1}(D^1))$, where $\lambda, D^0, D^1$ run over all values as above. Note that these are exactly those values which make $(\lambda D^0, \bar\lambda D^1)$ an allowed split of $D$. By induction and scalability, we obtain $s_n(D) = \max\{\mathrm{succ}_{n-1}(\tilde D^0) + \mathrm{succ}_{n-1}(\tilde D^1)\} = \mathrm{succ}_n(D)$, where the last maximum is taken over all allowed splits $(\tilde D^0, \tilde D^1)$ of $D$.

Example: The Best-so-far 2-Player Protocol
We now turn to the case of two players. We use this subsection to describe the best known protocol for two players in the different languages. We also use this mini-example to sketch the approaches used in the following subsections to design superior protocols.
The splitting game and the inequality view will in the following be used to design stronger protocols (better lower bounds for the optimal success probability). We shall compute good game trees by computing lower bounds for the game values of a discrete set of positions via repeatedly trying allowed splits. For example, the above game tree for the starting position $(3, 3, 3, 3)$ could easily have been found by recursively computing the game values for all positions in $\{0, 1, 2, 3\}^4$.
It turns out that such an automated search leads to better results when we also allow scaling moves (referring to Proposition 2.4). For example, in the above mini-example of computing optimal game values for all positions in $\{0, 1, 2, 3\}^4$, we could try to exploit the fact that $\mathrm{succ}(1, 1, 1, 1) = \frac{1}{3}\,\mathrm{succ}(3, 3, 3, 3)$. Such scaling moves are a cheap way of partially simulating a much finer discretization. Translating the allowed splits used in the tree into the inequality formulation and then using an LP solver is a further interesting approach (detailed in Section 2.8). It allows us to post-optimize the game trees found, in particular, by resolving cyclic dependencies. This leads to slightly better game values and more compact representations of game trees.

Useful Facts
For some positions of the vector splitting game, the true value is easy to determine. We do this here to later ease the presentation of the protocols.
This statement is a simple corollary of the concavity method detailed in Section 3.1, hence at this point, we only state the proposition and postpone the proof.

Proof. Clearly, $\mathrm{succ}(D) \ge \mathrm{succ}_0(D) = \min\{a, c\}$. By Proposition 2.7, we obtain $\mathrm{succ}(D) \le \min\{a, c\}$, proving the claim.

Small Protocols Beating the 1/3 Barrier
We now present a sequence of protocols showing that there are cryptogenographic protocols having a success probability strictly larger than $1/3$. These protocols are still relatively simple, so we also obtain a human-readable proof of the following result. To be able to give a readable mathematical proof, we argue via inequalities for game values $\mathrm{succ}(\cdot)$. We later discuss what the corresponding protocols (game trees) look like.
We first observe the following inequalities, always stemming from allowed splits (the underlined entries are proportional). Whenever Proposition 2.8 or 2.9 determine a value, we exploit this without further notice.

Automated Search
The vector splitting game formulation enables us to search for good cryptogenographic protocols as follows. We try to determine the game values of all positions from a discrete set $\mathcal{D} := \{0, \dots, T\}^{2\times k}$ by repeatedly applying allowed splits. More precisely, we store a function $s : \mathcal{D} \to \mathbb{R}$ that gives a lower bound on the game value $\mathrm{succ}(D)$ of each position $D \in \mathcal{D}$. We initialize this function with $s \equiv \mathrm{succ}_0$ and then, in order of ascending $\|D\|_1$, try all allowed splits $D = D^0 + D^1$ and update $s(D) \leftarrow s(D^0) + s(D^1)$ whenever we find that $s(D)$ was smaller.
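For concreteness, the following Python sketch implements the basic search loop for $k = 2$ over integer grid positions $(a, b, c, d) = (D_{0,1}, D_{1,1}, D_{0,2}, D_{1,2})$. It is our simplified rendition, restricted to exact grid splits and omitting the scaling and relaxed-split extensions of Algorithm 1 (which is not reproduced here):

```python
from itertools import product

def zero_bit(D):
    # value of the zero-bit strategy at D = (a, b, c, d):
    # output 0 wins min(a, c), output 1 wins min(b, d)
    a, b, c, d = D
    return max(min(a, c), min(b, d))

def proportional(u, v):
    # u is a multiple of v, via a cross-product test (no division)
    return u[0] * v[1] == u[1] * v[0]

def allowed(D, D0, D1):
    # D = D0 + D1 where, for some speaker, the other player's pair of
    # entries is split proportionally
    if any(D[i] != D0[i] + D1[i] for i in range(4)):
        return False
    p1_speaks = proportional(D0[2:], D[2:]) and proportional(D1[2:], D[2:])
    p2_speaks = proportional(D0[:2], D[:2]) and proportional(D1[:2], D[:2])
    return p1_speaks or p2_speaks

def search(T, rounds=10):
    # s(D) is a lower bound on the game value succ(D), initialized with
    # zero-bit values and improved by trying all allowed grid splits
    grid = sorted(product(range(T + 1), repeat=4), key=sum)
    s = {D: zero_bit(D) for D in grid}
    for _ in range(rounds):
        improved = False
        for D in grid:
            for D0 in product(*(range(x + 1) for x in D)):
                D1 = tuple(x - y for x, y in zip(D, D0))
                if allowed(D, D0, D1) and s[D0] + s[D1] > s[D]:
                    s[D] = s[D0] + s[D1]
                    improved = True
        if not improved:
            break
    return s
```

Even at $T = 3$ the loop improves some positions beyond their zero-bit values (e.g., $s(3, 3, 2, 2)$ rises from $2$ to at least $3$ via the allowed split $(2, 2, 2, 1) + (1, 1, 0, 1)$); the scaling and relaxation extensions discussed below are what push such bounds further.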
Recall that for any secret distribution $D$, the game value $\mathrm{succ}(D)$ is the supremum success probability of cryptogenographic protocols for $D$. Hence, e.g., the value $s(T, \dots, T)/(2Tk) \le \mathrm{succ}(1/(2k), \dots, 1/(2k))$ is a lower bound for this success probability. As we will discuss later, by keeping track of the update operations performed, we can not only compute such a lower bound, but also concrete protocols.
Since even for k = 2, the size of the position space D and the number of allowed splits increase quickly with T , only moderate choices of T are computationally feasible, limiting the power of this approach drastically. Surprisingly, introducing a simple scaling step is sufficient to overcome this problem and enables us to find protocols that are better than the previous-best protocol TwoBit. Algorithm 1 outlines our basic search procedure.
Instead of restricting the optimization to allowed splits $(D^0, D^1) \in \mathcal{D}^2$ of $D \in \mathcal{D}$, however, we use monotonicity of the discretization to exploit even more reasonable splits. For ease of presentation, we focus here on the 2-player case (the generalization to larger values of $k$ is straightforward). Note that while the definition relaxes an allowed split only at coordinate $d$, by symmetry of $\mathrm{succ}$, we obtain the same lower bound when relaxing at other coordinates. This allows us to even split distributions $D = (a, b, c, d) \in \mathcal{D}$ where neither of the ratios $a/b$ and $c/d$ occurs "perfectly" in another distribution $D' \in \mathcal{D}$, by dismissing some "vector mass", i.e., rounding down from $d/c \cdot c_i$ to $\lfloor d/c \cdot c_i \rfloor$. Although these splits might appear inherently wasteful (as this loss can never be regained), the best protocols that we find do indeed make use of (a small number of) such relaxed splits. More specifically, for $k = 2$, we can without loss of generality even assume that $a \ge b, c, d$.
In principle, an implementation should take care to avoid the propagation of floating-point rounding errors, since previously computed entries are reused heavily and identical values are regularly recalculated in a number of different ways. Instead of using interval arithmetic, however, we chose to use a simple, fast implementation ignoring potential rounding errors. This is justified by (i) our LP-based post-optimization, which gives a proof of the obtained lower bound (that can be checked using an exact LP solver), hence correctness of the output remains certified, and (ii) the fact that our discretization of the search space introduces an inherent imprecision that very likely dominates the floating-point rounding errors.

Iterations      119   129   141   146
Constraints     535   1756  4217  13958
Game positions  394   1326  2956  9646

Table 1: Lower bounds $s(T, \dots, T)/(4T)$ on $\mathrm{succ}(\frac14, \frac14, \frac14, \frac14)$ stemming from the automated search (line 1). Given are also the number of iterations until the automated search procedure converged, i.e., stopped finding improvements using relaxed splits or scalings, and the number of game positions and constraints that had an influence on the value of $s(T, \dots, T)$.
Probably the most desirable termination criterion is to run the search until no improvements can be found. However, when the running time becomes a bottleneck, we can restrict the search to a small fixed number of iterations. This is especially useful in combination with the post-optimization: the gain per iteration decreases for later iterations of the process, which intuitively give increasingly accurate approximations of infinite "ideal" protocols, and the post-optimization can potentially resolve these cyclic structures earlier in the process.
Results. The success probabilities of the protocols computed following the above approach, using different values for T , are given in the first line of Table 1. Further results exploiting the post-optimization are given in Table 2 in Section 2.9.

Post-Optimization via Linear Programming
When we let Algorithm 1 also keep track of which update operation was performed at which time, this data can be used to extract strategies for the splitting game (and cryptogenographic protocols). Some care has to be taken to only extract those intermediate positions that had an influence on the final game value for the position we are interested in (see below).
While this approach does deliver good cryptogenographic protocols, manually verifying the correctness of the updates or analyzing the structure of the underlying protocol quickly becomes a difficult task, as the size of the protocol grows rapidly.
Fortunately, it is possible to output a compact, machine-verifiable certificate for the lower bound obtained by the automated search that might even prove a better lower bound than the one computed: each update step in the automated search corresponds to a valid inequality of the form $\mathrm{succ}(D) \ge \mathrm{succ}(D^0) + \mathrm{succ}(D^1)$, $\mathrm{succ}(D) \ge \mathrm{succ}_0(D)$, or $\lambda \cdot \mathrm{succ}(D) = \mathrm{succ}(\lambda \cdot D)$. We can extract the (sparse) set $\mathrm{ineq}(T, T, T, T)$ of those inequalities that lead to the computed lower bound on $\mathrm{succ}(T, T, T, T)$.
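The certification step can be sketched as a small linear program: collect the extracted constraints and minimize the value of the target position. Since the true game values satisfy every constraint, the LP minimum is a certified lower bound. Below is a toy instance in Python using scipy (the three positions and their numbers are invented for illustration; the real constraint sets come out of the search of Section 2.7, and this is our sketch of the idea, not necessarily the exact LP used):

```python
from scipy.optimize import linprog

# Variables v = (v_A, v_B, v_C) for three hypothetical game positions
# with extracted constraints
#   v_A >= v_B + v_C          (a split of A into B and C)
#   v_B >= 0.2, v_C >= 0.15   (zero-bit success probabilities)
# The true values succ(.) are feasible, so the minimum of v_A over the
# feasible region is a valid, machine-checkable lower bound on succ(A).
c = [1.0, 0.0, 0.0]            # objective: minimize v_A
A_ub = [[-1.0, 1.0, 1.0]]      # v_B + v_C - v_A <= 0
b_ub = [0.0]
bounds = [(0, None), (0.2, None), (0.15, None)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
lower_bound = res.fun          # here: 0.2 + 0.15 = 0.35
```

Scaling equalities $\lambda \cdot \mathrm{succ}(D) = \mathrm{succ}(\lambda \cdot D)$ would enter analogously as equality rows (`A_eq`, `b_eq`).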
Reconstructing the Strategy. Memorizing the best splits found by the dynamic programming updates, it is straightforward to reconstruct the best strategy found by the automated search. To be precise, we define the $i$-th update step for $i = 2k - 1$ as the $k$-th splitting step (during the execution of Algorithm 1) and for $i = 2k$ as the $k$-th scaling step, i.e., the update steps alternate between splitting and scaling. For every distribution $D$ and update step $i$, we maintain an index $L(D, i)$ defined as the last update step before and including $i$ in which $s(D)$ has been updated to a better value, or 0 if $s(D)$ has never been updated. Moreover, for every $D$ and update step $i$ in which $s(D)$ has been updated, we keep a constraint $I(D, i)$ which represents the inequality or equality used to update $s(D)$ to its best value in update step $i$. It is easy to see that this solution is at least as good as the solution stemming from the automated search alone. It can, however, even be better, in particular when a game strategy yields cyclic visits to certain positions. Table 2 contains, for different values of $T$, the success probabilities found by the automated search (run with a bounded iteration number of 20) and by the above linear programming approach. The table also contains the number of linear inequalities (and game positions) that were extracted from the automated search run. We observe that the LP-based solution is consistently slightly better.

Table 2: Lower bounds on $\mathrm{succ}(\frac14, \frac14, \frac14, \frac14)$ stemming from the automated search only (line 1) and from the LP solution of the linear system extracted from the automated search data, when the number of iterations is restricted to 20.
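A minimal sketch of the reconstruction, as a simplification of the $L(D, i)$/$I(D, i)$ bookkeeping above: we keep only the final best split per position and truncate cycles at a fixed depth (names and encoding are ours):

```python
def reconstruct(best_split, D, max_depth=20):
    """Expand memoized best-split pointers into an explicit game tree.
    best_split maps a position to the pair it was last improved with;
    positions without an entry are leaves, where the zero-bit strategy
    is played. max_depth truncates cyclic reuse of positions (cycles
    are handled exactly by the LP post-optimization instead)."""
    split = best_split.get(D)
    if split is None or max_depth == 0:
        return D
    D0, D1 = split
    return (D, reconstruct(best_split, D0, max_depth - 1),
               reconstruct(best_split, D1, max_depth - 1))
```

Applied to the memo of the automated search, this yields the (possibly truncated) protocol tree for the starting position.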
We also observe that the number of constraints is still moderate, posing no difficulties for ordinary LP solvers (which stands in stark contrast to feeding all relaxed splits and scalings over the complete discretization to the LP solver, which quickly becomes infeasible).
Hence the advantage of our approach of extracting the constraints from the automated search stage is that it generates a much sparser set of constraints that is still sufficiently meaningful. After solving the LP, we can further sparsify this set of inequalities by deleting all inequalities that are not tight in the optimal solution of the LP, since these cannot correspond to the best splits found for the corresponding vector $D$. This yields a smaller set of relevant inequalities, which might help to analyze the structure of strong protocols.

Our Best Protocol
We report the best protocol we have found using the approach outlined in the previous sections.
Theorem 2.13. In the 2-player cryptogenography problem, $\mathrm{succ}(\frac14, \frac14, \frac14, \frac14) \ge 0.3384736$.

Proof. On http://people.mpi-inf.mpg.de/~marvin/verify.html, we provide a linear program based on feasible inequalities on the discretization $\mathcal{D}$ with $T = 50$. To verify the result, one only has to (1) check the validity of each inequality, i.e., check whether each constraint encodes a feasible scaling, relaxed split, or zero-bit success probability, and (2) solve the linear program. Since we represent the distributions $D = (a, b, c, d)$ using a normal form $a \ge b, c, d$ (to break symmetries), checking the validity of each splitting constraint is not completely trivial, but easy. We provide a simple checker program to verify the validity of the constraints. The LP is output in a format compatible with the LP solver lp_solve.

A Stronger Hardness Result
In this section, we prove that any 2-player cryptogenographic protocol has a success probability of at most 0.3672. This improves over the previous 0.375 bound of [2].
We first describe the previously used concavity method in Section 3.1 and apply it to our new upper bound function in Section 3.2.
As a simple application of the concavity method, we can now give the simple proof of Proposition 2.7 stated in Section 2.5.

Proposition 2.7. We have $\mathrm{succ}(a, b, c, d) \le \min\{a, c\} + \min\{b, d\}$.
Proof. We make use of the vector splitting formulation. Define $s_{UB}(a, b, c, d) := \min\{a, c\} + \min\{b, d\}$. We have $\mathrm{succ}_0(a, b, c, d) = \max\{\min\{a, c\}, \min\{b, d\}\} \le \min\{a, c\} + \min\{b, d\} = s_{UB}(a, b, c, d)$, which proves condition (C2) of Lemma 3.3. Note that $f : (x, y) \mapsto \min\{x, y\}$ is superadditive and hence $s_{UB}$, as a sum of superadditive functions, is superadditive as well. This proves $s_{UB}(D) \ge s_{UB}(D^0) + s_{UB}(D^1)$ even for all splits $D = D^0 + D^1$ (not only allowed splits).
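The superadditivity used in the last step is easy to spot-check numerically. The following sketch (our own code) samples random splits $D = D^0 + D^1$ and verifies $s_{UB}(D) \ge s_{UB}(D^0) + s_{UB}(D^1)$:

```python
import random

def s_ub(a, b, c, d):
    # the upper-bound candidate of Proposition 2.7
    return min(a, c) + min(b, d)

# min is superadditive: min(x0 + x1, y0 + y1) >= min(x0, y0) + min(x1, y1),
# hence so is s_ub as a sum of two such terms; spot-check on random splits:
random.seed(0)
for _ in range(10_000):
    D0 = [random.random() for _ in range(4)]
    D1 = [random.random() for _ in range(4)]
    D = [x + y for x, y in zip(D0, D1)]
    assert s_ub(*D) >= s_ub(*D0) + s_ub(*D1) - 1e-12
```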
The following lemma is an extension of the concavity theorem. We relax the condition that $s$ is lower bounded by $\mathrm{succ}_0$ on all distributions to holding only on six particular, very simple distributions.

Proof. To appeal to Lemma 3.2, we need to show that (C1) and (C2') imply (C2), i.e., that for all $D = (a, b, c, d) \in \Delta$, we have $s(D) \ge \mathrm{succ}_0(D) = \max\{\min\{a, c\}, \min\{b, d\}\}$.
It remains to analyze the concavity of $f_q$.

Lemma 3.6. For all $q \ge 0$, $f_q$ is concave on $S$.

Proof of Lemma 3.4. Combining Lemmas 3.5 and 3.6 yields that $s$ is concave on all allowed planes.

Conclusion
Despite the fundamental understanding of the cryptogenography problem obtained by Brody et al. [2], determining the success probability even of the 2-player case remains an intriguing open problem. The previous best protocol with success probability $1/3$, while surprising and unexpected at first, is natural and very symmetric (in particular when viewed in the convex combination or vector splitting game formulation). We disprove the hope that it is an optimal protocol by exhibiting less intuitive and less symmetric protocols having success probabilities up to $0.3384$. Concerning hardness results, our upper bound of $0.3671875$ shows that the previous upper bound of $3/8$ was also not the final answer. These findings add to the impression that the cryptogenography problem has a more complex nature than its simple description might suggest and that understanding the structure of good protocols is highly non-trivial.
We are optimistic that our methods support a further development of improved protocols and bounds. (1) Trivially, investing more computational power or optimizing the automated search might lead to better protocols. (2) Our improved protocols might motivate a (manual) search for infinite protocol families exploiting implicit properties and structure of these protocols. (3) Our reformulations, e.g., as a vector splitting game, might ease further searches for better protocols and for better candidate functions for a hardness proof.