Combinatorial Pen Testing (Or Consumer Surplus of Deferred-Acceptance Auctions)
Abstract
Pen testing is the problem of selecting high-capacity resources when the only way to measure the capacity of a resource expends its capacity. We have a set of pens with unknown amounts of ink and our goal is to select a feasible subset of pens maximizing the total ink in them. We are allowed to learn about the ink levels by writing with them, but this uses up ink that was previously in the pens.
We identify optimal and near optimal pen testing algorithms by drawing analogues to auction theoretic frameworks of deferred-acceptance auctions and virtual values. Our framework allows the conversion of any near optimal deferred-acceptance mechanism into a near optimal pen testing algorithm. Moreover, these algorithms guarantee an additional overhead of at most in the approximation factor to the omniscient algorithm that has access to the ink levels in the pens. We use this framework to give pen testing algorithms for various combinatorial constraints like matroid, knapsack, and general downward-closed constraints, and also for online environments.
Keywords and phrases: Pen testing, consumer surplus, money-burning, deferred-acceptance auctions
Funding: Jason Hartline: This work is supported in part by the NSF award CCF-2229162.
2012 ACM Subject Classification: Theory of computation → Theory and algorithms for application domains; Theory of computation → Approximation algorithms analysis; Theory of computation → Algorithm design techniques
Editors: Raghu Meka
Series and Publisher: Leibniz International Proceedings in Informatics, Schloss Dagstuhl – Leibniz-Zentrum für Informatik
1 Introduction
Pen testing is the problem of selecting high-capacity resources when the only way to measure the capacity of a resource expends its capacity [23]. We show that any ascending-price auction can be converted into an equivalent pen testing algorithm. We apply the auction theoretic framework of virtual values to identify optimal pen testing algorithms. This connection allows many existing results from auction theory to be applied to pen testing and gives optimal and near optimal pen testing algorithms in combinatorial and online environments.
The pen testing problem of Qiao and Valiant [23] is the following. We have a set of pens with varying amounts of remaining ink and we want to choose a pen with the largest amount of ink left. We have access to the distribution of ink levels in these pens, but we are only allowed to learn about the ink levels by writing with the pens to test them. While writing gives information about whether there is still ink left, it uses up ink that was previously in the pen. Pens that are expended due to testing can be discarded without incurring any penalty. The combinatorial pen testing problem generalizes the pen testing problem to selecting a feasible subset of pens, e.g., according to a matroid or knapsack constraint, to maximize the total remaining ink in the chosen subset.
Deferred-acceptance auctions [21] formalize the notion of ascending-price mechanisms for allocating goods to strategic agents. These are mechanisms that greedily reject the least promising agent by increasing the price for getting allocated at each stage. For instance, to auction off one good, the deferred-acceptance mechanism keeps increasing the price, rejecting agents whenever their value for the good falls below the price until exactly one agent remains in the auction. The surplus of the mechanism is the sum of the values of all the allocated agents and the consumer surplus is the surplus minus the payments made by the agents.
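To make the single-good example concrete, the following is a minimal sketch of an ascending-clock deferred-acceptance auction. The uniform price clock, the fixed discrete increment, and all names are our simplifying assumptions for illustration, not part of the formal definition.

```python
def single_item_da(values, increment=1.0):
    """Ascending-price (deferred-acceptance) auction for one good:
    raise a uniform price, dropping bidders whose value falls below it,
    until one active bidder remains. Returns (winner_index, price_paid);
    the winner's consumer surplus is values[winner] - price_paid."""
    active = list(range(len(values)))
    price = 0.0
    while len(active) > 1:
        price += increment
        staying = [i for i in active if values[i] >= price]
        if staying:
            active = staying
        else:
            # with a discrete clock all survivors may drop at once;
            # stop and break the tie by value
            break
    winner = max(active, key=lambda i: values[i])
    return winner, price
```

For instance, with values `[1.0, 3.5, 5.0]` the clock reaches 4.0 before only the highest bidder remains, so the winner's consumer surplus is 5.0 − 4.0 = 1.0.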
We provide a black box reduction to convert any surplus optimal (or near optimal) deferred-acceptance auction into an optimal (or near optimal) pen testing algorithm with equivalent performance guarantees. In this reduction, pens and their ink levels are analogous to agents and their values. Payments correspond to the ink used up via testing, and as a consequence, the consumer surplus of the deferred-acceptance auction corresponds to the residual ink in the pen testing environment. Therefore, optimizing residual ink reduces to designing consumer-surplus-optimal deferred-acceptance mechanisms.
Qiao and Valiant [23] propose comparing the performance of a given pen testing algorithm against the omniscient benchmark: the performance of an omniscient algorithm that has access to the ink levels in the pens. The omniscient benchmark can thereby choose the optimal feasible subset of pens without having to burn any ink (i.e., payments in the auction environment), and is thus analogous to the surplus optimal (sealed-bid) mechanism in the corresponding auction environment. Thus, the omniscient approximation, the ratio of the omniscient benchmark to the residual ink of the pen testing algorithm, is analogous to the ratio between the optimal surplus and the consumer surplus of the corresponding deferred-acceptance auction.
In the literature, near optimal deferred-acceptance mechanisms are evaluated against optimal (sealed-bid) mechanisms, an upper bound on the optimal deferred-acceptance mechanism. Define the standard benchmark as the optimal (sealed-bid) mechanism for consumer surplus and let the standard approximation be the ratio between the standard benchmark and the consumer surplus of a given deferred-acceptance mechanism. Note that the standard benchmark does not have a natural pen testing analog, but is a much tighter upper bound on the optimal pen testing algorithm than the omniscient benchmark. Thus, the reduction to deferred-acceptance mechanism design enables a much tighter approximation bound for the optimal pen testing algorithm.
The reduction framework decomposes into two steps for designing near optimal deferred-acceptance mechanisms. One of the two steps involves proving upper bounds on the standard approximation. In the other step, for any given feasibility constraint, we consider the optimal omniscient approximation, the ratio between the omniscient benchmark (here, the optimal surplus) and the standard benchmark (here, the optimal consumer surplus). As we will argue in Theorem 1, the omniscient approximation is at most the product of the two approximation ratios. At a high level, one factor captures the loss in surplus due to running a deferred-acceptance auction and the other captures the loss in surplus due to burning all payments. Beyond consumer surplus optimization, the reduction framework applies more generally to optimization problems with two natural benchmarks: a weaker standard benchmark and a much more powerful omniscient benchmark.
Theorem 1.
Consider a combinatorial pen testing environment with pens, the analogous auction environment with an optimal omniscient approximation , and a deferred-acceptance mechanism with a standard approximation equal to . Then, there is a pen testing algorithm that is a -approximation to the standard benchmark and -approximation to the omniscient benchmark.
In the first step of our framework, we bound the optimal omniscient approximation under various feasibility constraints. Recall that the optimal omniscient approximation is the ratio between the omniscient benchmark and the standard benchmark; for our specific application, it corresponds to the loss in surplus due to payments being burnt instead of transferred to the auctioneer. We show that in any combinatorial environment. Our result tightens the best known upper bound on for most settings of interest. Fotakis et al. [13] show that the optimal omniscient approximation is at most , where is the number of feasible outcomes. Our bound improves theirs by removing the constant and by replacing by , which is usually smaller. (In some non-downward-closed settings like public projects, . Consider, for example, building a bridge that serves everyone: either the bridge is built or it is not. However, in all downward-closed environments. Indeed, for each bidder, we assume there exists a feasible outcome that includes them; otherwise, the bidder can be dropped from consideration altogether.) Hartline and Roughgarden [18] show that when any out of the agents can be allocated the good simultaneously, i.e., any pens can be chosen in the corresponding pen testing environment. We improve their bound by a multiplicative factor of at least . We also prove that for the -identical goods setting, which we show is tight up to terms when .
Our upper bound for is via a reduction from multi-agent environments to a single-agent environment, similar in flavor to Feng et al. [12]. In a single-agent environment where the agent is allocated with an ex-ante allocation probability , we show that the ratio of the optimal surplus to the optimal consumer surplus is at most . Our reduction allows us to convert the -approximation in the single-agent setting to a -approximation for any multi-agent environment.
We reduce designing multi-agent approximation mechanisms to single-agent approximations with ex-ante constraints bounded away from 0. The approach is inspired by Feng et al. [12], who prove a similar reduction when the single-agent approximation is bounded above by a constant for all . Their approach cannot be directly applied to the optimal omniscient approximation of consumer surplus because the single-agent approximation approaches as . We adapt ideas from Hartline and Taggart [17] to prove a reduction to the single-agent setting with an ex-ante constraint . In this range, the single-agent approximation ratio is at most . The resulting construction establishes an optimal omniscient approximation of for multi-agent environments with arbitrary feasibility constraints.
The second step of the reduction framework from pen testing to deferred-acceptance auctions upper bounds the standard approximation , the ratio between the optimal consumer surplus of a sealed-bid mechanism and the consumer surplus of a given deferred-acceptance auction. Through Myerson’s virtual value framework [22], we argue that the above is equivalent to finding an upper bound for the optimal expected surplus divided by the expected surplus of some deferred-acceptance auction. Surplus optimal deferred-acceptance auctions are known for -identical goods and more general matroid feasibility constraints [21, 4]. We give a -approximate deferred-acceptance mechanism for knapsack constraints. Beyond matroids and knapsacks, Feldman et al. [11] give a logarithmic approximate deferred-acceptance auction for any downward-closed feasibility constraint. We observe an inherent downward-closure in the pen testing problem that allows the auction of Feldman et al. [11] to be extended to all combinatorial constraints. The set of pens chosen by any pen testing algorithm can be padded by adding (possibly expended) pens to always output a maximal feasible set.
Online posted-price mechanisms are a special case of deferred-acceptance auctions. Online environments are ones where an irrevocable decision regarding choosing a pen needs to be taken before testing the next one. We consider the settings where the order of testing pens is adversarially chosen (the oblivious version, cf. prophet inequalities, [6]) and where the algorithm can choose the order (the sequential version, cf. correlation gap, [26]). With our analysis of the optimal omniscient approximation and known bounds on the standard approximation of online pricing mechanisms, the reduction framework yields better performance guarantees than the bounds by Qiao and Valiant [23] by constant factors for these environments.
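As an illustration of the oblivious (prophet-inequality style) online setting, the sketch below uses the classic median-of-the-maximum threshold and tests pens in arrival order, keeping the first pen that survives the test. Treating the threshold as a single fixed test length, and all names here, are our simplifications for illustration.

```python
def prophet_threshold(samples_of_max):
    """Estimate the classic prophet-inequality threshold: a price T with
    Pr[max value >= T] roughly 1/2, from samples of the maximum value."""
    s = sorted(samples_of_max)
    return s[len(s) // 2]  # empirical median of the maximum

def online_oblivious_pen_test(inks, T):
    """Oblivious online rule: test pens in arrival order at threshold T
    and irrevocably keep the first pen that passes; its residual ink is
    ink - T. A pen that fails is fully expended and discarded."""
    for ink in inks:
        if ink >= T:
            return ink - T
    return 0.0  # every pen failed the test
```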
The results detailed above are summarized in Table 1.
| Environment | |||||
|---|---|---|---|---|---|
| Select | * | ** | |||
| Select | * | ** | |||
| Matroids | ** | ||||
| Knapsack | |||||
| General Downward-Closed | |||||
| Select Online Oblivious | |||||
| Select Online Sequential | |||||
| Select Online IID | |||||
| Matroid Online Oblivious | |||||
| Matroid Online Sequential | |||||
Our reduction from pen testing to deferred-acceptance auction design improves bounds from previous literature for all the environments that we discuss; however, it does not always yield a tight bound. For the online IID environment (the online environment where the ink levels of all pens are drawn IID), we identify an algorithm with an omniscient approximation that is a constant factor less than .
Our results and techniques are of interest beyond pen testing. Our upper bound of on the optimal omniscient approximation is the ratio between the optimal surplus and the optimal consumer surplus, and is a mechanism design problem of independent interest [18, 13, 15]. Recall that the surplus created in an auction equals the sum of the consumer surplus and the revenue generated by the auctioneer. However, when payments made by the winners are burnt instead of being transferred to the auctioneer, the total surplus created is just the consumer surplus. The need to burn payments can be diverse. For instance:
1. Making monetary transactions is not desirable or is repugnant, as in setting prices for welfare schemes for vulnerable populations. Ordeal-based mechanisms are employed in such scenarios, where agents make non-transferable, non-monetary payments, like spending time waiting in a line to avail themselves of the provisions of the welfare scheme [20, 3, 5].
2.
3. The auctioneer cannot be trusted to truthfully implement the auction, as in a blockchain, where a miner runs an auction to decide which set of transactions goes into the next block it proposes. To stop the miner from colluding with agents or from benefiting from injecting fake bids, it becomes essential to burn a substantial portion of the transaction fees collected from the agents [24, 7, 14].
The technique to obtain the -approximation in the single-agent environment can have further applications. We express consumer surplus optimization as a linear program and explicitly compute the worst-case distribution. This provides a proof technique which automatically gives tight approximation guarantees. For example, we reuse the technique to compute the worst-case distribution for the omniscient approximation for the online IID environment. We improve the bound obtained from the reduction framework to a multiplicative approximation ratio within an additive 1 of the lower bound of Qiao and Valiant [23] (where is the th harmonic number).
Paper Organization.
We formalize the reduction from the pen testing problem to designing deferred-acceptance auctions in Section 2. We execute the first step of the reduction framework – upper bounding the optimal omniscient approximation – in Section 3. We bound the approximation ratio between the surplus and consumer surplus in a single-agent environment in Section 3.1 and discuss the reduction from multi-agent to single-agent settings in Section 3.2. The second step of the reduction from pen testing algorithms to deferred-acceptance mechanisms is discussed in Section 4, where we review deferred-acceptance mechanisms for various environments and discuss the corresponding bounds we get for pen testing. Finally, we revisit the online IID single-item environment and achieve a tight (up to terms) bound in Section 5.
2 The Pen Testing Problem and Ascending-Price Auctions
In this section, we formally define the combinatorial pen testing problem, generalized from Qiao and Valiant [23], and explore its connections to ascending-price mechanisms.
Definition 2.
A pen testing instance is described by a set of pens with unknown ink levels , each drawn independently from the distribution , and a subset of the power set of the pens denoting feasible subsets of pens. The residual ink level of pen is initialized to . We can test the pens before making a decision. A test is performed by writing with pen for time . We receive a binary signal at the end of the test:
1. If : The test succeeds. The pen now has an (unobservable) ink level .
2. If : The test fails. The pen now has no ink left, i.e., .
We can choose to run multiple tests on the pens, and must ultimately output some feasible subset of pens. The goal is to maximize the total amount of ink remaining in the set of chosen pens.
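A direct transcription of the test semantics into code (a sketch; the class and method names are ours):

```python
class Pen:
    """A pen per Definition 2: only the binary outcome of a test is
    observable; testing for time t burns min(t, current ink)."""

    def __init__(self, ink):
        self._ink = ink  # hidden from the algorithm

    def test(self, t):
        """Write with the pen for time t; return the binary signal."""
        if self._ink >= t:
            self._ink -= t   # success: t units of ink are consumed
            return True
        self._ink = 0.0      # failure: the pen is fully expended
        return False

    def residual(self):
        """Ink remaining in the pen (the objective counts this for
        every chosen pen)."""
        return self._ink
```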
Compare a pen testing instance with pens and a feasibility constraint with an -agent auction under the same feasibility constraint. The original levels of ink correspond to the value each agent gets upon being allocated. The ink spent through testing is analogous to the prices in the auction. Thus, maximizing total residual ink is equivalent to optimizing consumer surplus, i.e., the sum of the values of the winning agents minus the sum of all payments. Note that auctions generally give the auctioneer the added ability to solicit bids from agents. In pen testing algorithms, we only get to learn whether the ink left in a pen exceeds some threshold, and in doing so, we irrevocably expend ink up to the threshold. This is similar to ascending-price auctions, where the auctioneer irreversibly increases the price faced by each agent, and in doing so, only learns whether the agent's value is at least the price.
Milgrom and Segal [21] describe the wide class of ascending-price auctions called deferred-acceptance mechanisms.
Definition 3 (Milgrom and Segal [21]; Deferred-Acceptance Auctions).
A deferred-acceptance auction is held across multiple stages. For each stage , the auction maintains a set of active bidders satisfying , where the initial active set is the set of all bidders. The auction is characterized by a (possibly randomized) pricing rule , mapping the history of the auction at stage to (discriminatory) prices for each agent, satisfying for all agents and for all histories at stage arising out of the stage history , i.e., the prices are monotonically non-decreasing for all agents over time. Agents can opt to drop out once the prices are updated in stage ; is the set of agents in that did not drop out in stage . The mechanism terminates at the stage when becomes feasible. The agents in are charged according to .
The definition from Milgrom and Segal [21] restricts deferred-acceptance mechanisms to be deterministic, which we relax for the purpose of designing pen testing algorithms. From the discussion above, note that any deferred-acceptance mechanism can be converted into a pen testing algorithm, by setting thresholds equal to the prices recommended by the mechanism. Thus, the optimal (near optimal) pen testing algorithm corresponds to the consumer surplus optimal (near optimal) deferred-acceptance auction.
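The conversion can be sketched in code. Here we specialize the feasibility constraint to "select at most k pens" for concreteness; the function name, the price-schedule interface, and this special case are our illustrative assumptions. Because prices are non-decreasing, a pen that has survived all tests so far has burnt exactly the current price.

```python
def da_to_pen_testing(pens, price_schedule, k):
    """Run a deferred-acceptance price schedule as a pen testing algorithm:
    at each stage every active pen is tested up to the new price, and pens
    that fail drop out, until at most k pens (a feasible set) remain.
    `pens` holds the hidden ink levels; `price_schedule` is nondecreasing.
    Returns {pen_index: residual_ink} for the chosen pens."""
    active = {i: 0.0 for i in range(len(pens))}  # pen -> total ink burnt
    for price in price_schedule:
        if len(active) <= k:
            break
        survivors = {}
        for i, burnt in active.items():
            # testing for the increment (price - burnt) succeeds iff the
            # pen's original ink is at least the cumulative price
            if pens[i] >= price:
                survivors[i] = price  # total burn is now `price`
            # else: the pen is expended by the failed test and dropped
        active = survivors
    return {i: pens[i] - burnt for i, burnt in active.items()}
```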
We compare the performance of our deferred-acceptance mechanisms (and the corresponding pen testing algorithms) to the following benchmarks:
-
The omniscient benchmark [23]: The omniscient benchmark corresponds to the omniscient pen testing algorithm that exactly knows the ink levels in the pens, and thus, does not have to waste ink by writing with them. In the reduction to the auction environment, the omniscient benchmark corresponds to a mechanism that neither loses out on the payments nor has to stick to an increasing trajectory of prices. Thus, the omniscient benchmark corresponds to the surplus optimal sealed-bid mechanism. The omniscient approximation corresponds to the ratio between the omniscient benchmark and the performance of a given pen testing algorithm in the pen testing setting, and the omniscient benchmark and the consumer surplus of a given deferred-acceptance auction in the auction environment.
-
The standard benchmark: The standard benchmark corresponds to the consumer-surplus-optimal sealed-bid mechanism. We define the standard approximation to be the ratio between the standard benchmark and the consumer surplus of a candidate deferred-acceptance mechanism. While the standard benchmark does not have a natural pen testing analogy, observe that the standard benchmark is a tighter upper bound to the optimal pen testing algorithm than the omniscient benchmark. Thus, the standard approximation gives a better upper bound on the ratio between the optimal pen testing algorithm and the residual ink gathered by a given pen testing algorithm.
Now that we have established the consumer surplus is a quantity of interest for pen testing, we review consumer surplus optimization through virtual values in the next section.
2.1 Consumer Surplus Optimization and Virtual Valuations
Let us begin in a single-agent environment with one good. Let the agent's value be drawn from the distribution . The quantile of an agent with value is the measure with respect to of stronger values, i.e., . For , let be the inverse demand function of , satisfying . In other words, . Throughout the paper, we assume and . It can be shown that these assumptions are without loss of generality.
Let be the price-posting surplus curve. Notice that is the expected surplus from posting a price to the agent, thereby allocating the good with probability . By our assumption on , . Let be the price-posting consumer surplus curve. Similar to the price-posting surplus curve, is the consumer-surplus from posting a price . Note that , and thus, .
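As a concrete instance, for the exponential distribution with rate 1 the inverse demand is V(q) = ln(1/q), and the two curves can be computed numerically. The choice of Exp(1) and the function names are illustrative; the closed forms checked below (S(q) = q(1 + ln(1/q)) and CS(q) = q) follow from elementary calculus.

```python
import math

def V(q):
    """Inverse demand of the Exp(1) distribution: the quantile of a
    value v is q = Pr[value >= v] = e^{-v}, so V(q) = ln(1/q)."""
    return math.log(1.0 / q)

def S(q, steps=20000):
    """Price-posting surplus curve S(q) = integral_0^q V(x) dx,
    computed by the midpoint rule."""
    h = q / steps
    return sum(V((i + 0.5) * h) for i in range(steps)) * h

def CS(q):
    """Price-posting consumer surplus curve CS(q) = S(q) - q * V(q):
    the surplus minus the payment collected at price V(q)."""
    return S(q) - q * V(q)
```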
Let be the marginal price-posting consumer surplus curve. Note that is monotonically non-increasing, and hence its derivative is non-positive. Thus, and is monotone non-decreasing. A quick note on derivatives: whenever is well defined, is well defined. At other points, can be calculated using any sub-derivative of . At points of discontinuity of , and thus .
Theorem 4 (Myerson [22]).
In a Bayesian incentive-compatible mechanism with allocation rule and payment rule (over quantiles), the expected consumer surplus of an agent satisfies
Thus, the expected consumer surplus equals the expected virtual surplus with marginal consumer surplus as virtual values. Myerson [22] shows that an allocation rule can be implemented as a truthful auction if and only if the allocation rule is monotone non-increasing in quantile space. Thus, the consumer surplus optimal mechanism optimizes for virtual surplus subject to the allocation rule being monotone non-increasing.
When the marginal price-posting consumer surplus curve is non-increasing (and hence is concave), optimizing for virtual surplus automatically ensures monotonicity of the allocation rule. We will call these distributions with non-increasing marginal price-posting consumer surplus curves consumer surplus regular.
However, is not monotone non-increasing for common distributions like the uniform and normal distribution. In fact, is monotone non-decreasing for these distributions. In such a case, the allocation that pointwise optimizes for virtual surplus might not satisfy monotonicity. Myerson [22] prescribes an ironing procedure to optimize for virtual surplus subject to monotonicity.
1. Construct the concave hull of the price-posting consumer surplus curve.
2. Define .
3. Find the virtual surplus optimal mechanism using as virtual surplus.
Since is concave, is non-increasing, and hence optimizing for the ironed virtual surplus is easier. Throughout the paper, we will follow the convention of attaching a bar on top of a curve to describe the ironed curve.
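Step 1, constructing the concave hull, can be carried out with a standard upper-hull pass. Below is a sketch on a curve sampled at evenly spaced quantiles; the sampling grid and the function name are our assumptions.

```python
def iron(ys):
    """Least concave majorant of a curve sampled at positions 0..n-1
    (the ironing step): compute the upper hull of the points by a
    monotone-chain pass, then linearly interpolate across each ironed
    interval."""
    n = len(ys)
    hull = []  # indices of points on the upper concave hull
    for i in range(n):
        while len(hull) >= 2:
            a, b = hull[-2], hull[-1]
            # drop b if it lies on or below the segment from a to i
            if (ys[b] - ys[a]) * (i - a) <= (ys[i] - ys[a]) * (b - a):
                hull.pop()
            else:
                break
        hull.append(i)
    out = list(ys)
    for a, b in zip(hull, hull[1:]):
        for j in range(a + 1, b):  # fill the ironed interval by a chord
            t = (j - a) / (b - a)
            out[j] = (1 - t) * ys[a] + t * ys[b]
    return out
```

On an already concave curve the output equals the input; a dip below the hull is replaced by the chord over its ironed interval.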
In a multi-agent environment, the interim allocation rule for an agent is the single-agent allocation rule that arises in expectation over the quantiles of all other agents. This captures the perspective of the agent after knowing its value, but before learning the values of the other agents.
Theorem 5.
Let be the interim allocation rule for some agent, satisfying for all such that . Then
In other words, if the allocation rule does not differentiate between agents in an ironed region, then the ironed virtual surplus can be used to compute the consumer surplus instead of the actual virtual surplus.
Theorem 6 (Alaei et al. [1, 2]).
Let be some allocation rule with interim allocation rule monotonically non-increasing for each agent . Then, there exists a mechanism with interim allocation rule for agent such that the expected consumer surplus for agent equals
In other words, the expected consumer surplus of is the consumer surplus of the original mechanism if the virtual values were given by for agent instead of
2.2 Near-Optimal Deferred-Acceptance Mechanisms for Consumer Surplus
In this section, we prescribe a recipe similar to Feng et al. [12] to convert any approximately optimal deferred-acceptance mechanism for surplus into an approximately optimal deferred-acceptance mechanism for consumer surplus.
Definition 7 (Virtual-Pricing Transformation of a Deferred-Acceptance Mechanism).
Let be a deferred-acceptance mechanism designed to optimize surplus. Let be the inverse demand function and be the virtual value function (for consumer surplus) for agent . The virtual-pricing transformation of , denoted by , implements in ironed virtual price space, i.e., whenever posts a price to agent , posts a price satisfying
We will make some preliminary observations about the transformation:
1. Consider two runs of the transformed mechanism with some agent having two different values, both corresponding to the same ironed virtual value (which can happen if both these values are within the same ironed interval). From the definition of the transform, the agent faces the smallest price needed to differentiate itself from virtual values smaller than the threshold set by the mechanism. Consequently, the mechanism does not post prices from the middle of an ironed interval. Thus, in both these runs of the mechanism, the agent behaves identically. Since the mechanism does not discriminate between values with the same ironed virtual value, the consumer surplus of the mechanism equals the ironed virtual surplus.
2. If the original mechanism is a -approximation to the optimal surplus, in expectation over all product distributions, then the transformed mechanism is a -approximation to the optimal ironed virtual surplus, and hence, to the optimal consumer surplus.
3. Finally, it is straightforward to see that the transformed mechanism posts non-decreasing prices to each agent. Hence, is also a deferred-acceptance mechanism.
Recall that the standard approximation is the ratio between the consumer surplus of the optimal deferred-acceptance mechanism and our proposed mechanism while the optimal omniscient approximation is the ratio between the optimal surplus and the optimal consumer surplus. The proof of Theorem 1 is now straightforward.
Proof of Theorem 1.
Let denote the expected optimal consumer surplus, denote the expected optimal surplus, and denote the expected consumer surplus of the deferred acceptance mechanism in Definition 7. Then,
where the first inequality follows from the definition of the standard approximation and the observations in the discussion above, while the second follows from the definition of the optimal omniscient approximation . The equivalent pen testing algorithm yields the stated approximation ratios against the two benchmarks.
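In symbols (our notation: β for the standard approximation, μ for the optimal omniscient approximation, CS(M) for the consumer surplus of the transformed deferred-acceptance mechanism, and OPT_CS, OPT_S for the optimal consumer surplus and optimal surplus), the chain of inequalities in the proof reads:

```latex
\mathrm{CS}(M) \;\ge\; \frac{\mathrm{OPT}_{\mathrm{CS}}}{\beta}
             \;\ge\; \frac{\mathrm{OPT}_{\mathrm{S}}}{\beta \cdot \mu},
```

so the mechanism is a β-approximation to the standard benchmark and a (β·μ)-approximation to the omniscient benchmark.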
In the next section, we analyze the optimal omniscient approximation for general combinatorial environments.
3 Consumer Surplus versus Surplus
Our approximation procedure follows two steps. In Section 3.1, we show that the optimal surplus is at most times the consumer surplus in a single-agent environment where the ex-ante probability of allocation is constrained to be at most . In Section 3.2, we move from the single-agent environment to multi-agent environments combining approaches from Feng et al. [12] and Hartline and Taggart [17] to show a approximation between the consumer surplus and optimal surplus in -agent environments with general feasibility constraints.
3.1 The Single-Agent Problem
Recall that , , and are the price-posting surplus curve, the price-posting consumer surplus curve, and the ironed consumer surplus curve, respectively. In this section, we establish a “closeness property” like that in Feng et al. [12].
Theorem 8.
For an ex-ante allocation probability , the ratio of the optimal surplus to the optimal consumer surplus is at most .
We will begin by proving Theorem 8 for consumer surplus regular distributions, and then extend the result to all distributions.
Lemma 9.
For any consumer surplus regular distribution and an ex-ante allocation probability , the ratio of the optimal surplus to the optimal consumer surplus is at most .
Proof.
Conditional on generating a surplus , we want to compute the distribution that minimizes consumer surplus .
(1)
We substitute in the last equation.
Simultaneously, we enforce , and is non-increasing (since the distribution is consumer surplus regular), and .
Finding the minimum consumer surplus distribution reduces to the following program.
subject to
The constraints in the last line can be enforced after solving for .
Consumer surplus regularity dictates for . Hence, minimizing would correspond to setting for . Similarly, for . Minimizing would mean minimizing which is achieved by setting for . Let . For a constant marginal price-posting consumer surplus curve,
Thus, for any consumer surplus regular distribution, . The constant marginal consumer surplus curve that achieves this ratio corresponds to the exponential distribution.
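A quick numeric check of the exponential worst case identified above. From V(q) = ln(1/q) for Exp(1), elementary calculus (our computation, not quoted from the paper) gives S(q) = q(1 + ln(1/q)) and CS(q) = q, so the surplus-to-consumer-surplus ratio at ex-ante probability q is 1 + ln(1/q):

```python
import math

def surplus_ratio_exp(q):
    """For the Exp(1) distribution: S(q) = integral_0^q ln(1/x) dx
    = q * (1 + ln(1/q)) and CS(q) = S(q) - q * ln(1/q) = q, so the
    ratio S(q)/CS(q) equals 1 + ln(1/q)."""
    surplus = q * (1.0 + math.log(1.0 / q))
    consumer_surplus = q
    return surplus / consumer_surplus
```

The ratio is 1 at q = 1 (no constraint) and grows logarithmically as the ex-ante constraint tightens.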
Lemma 10.
Let the consumer surplus curves satisfy and for all . Then, for the corresponding surplus curves, .
Proof.
We rewrite equation (1) as follows.
(2)
The second equality is obtained through integrating by parts. Increasing pointwise clearly results in an increase in .
Proof of Theorem 8.
The optimal consumer surplus for an ex-ante allocation probability equals .
Consider the distribution with price-posting consumer surplus curve . Let be the price-posting surplus curve corresponding to . From Lemma 10 and Lemma 9,
The second inequality holds from Lemma 9, since is the price-posting surplus curve of a consumer surplus regular distribution (we made concave through ironing).
See the full online version for a brief discussion on the surplus generated by the single-agent optimal consumer surplus auction.
3.2 Multi-Agent Environments
In this section, we convert the single-agent bound from Theorem 8 into an upper bound on the ratio between the optimal surplus and optimal consumer surplus (aka the optimal omniscient approximation) for multi-agent environments. We outline our approach before delving into the proof. Consider the interim allocation rule for agent in the surplus optimal mechanism . Without loss of generality, assume . The expected surplus for agent equals
This equality comes from integrating by parts ( vanishes at both and , since and ). Suppose there exists a constant such that for all ; then,
The above inequality connects the expected surplus to the expected ironed consumer surplus of the interim allocation rule . Since need not be equal to whenever , . But, Theorem 6 shows the existence of a mechanism with expected consumer surplus of each agent equal to . Thus, the expected consumer surplus of is an -approximation to the optimal surplus. The consumer surplus optimal mechanism will only do better.
Unfortunately, as , the bound from Theorem 8 of diverges to . Thus, we need to show that the loss from ignoring the small (strong) quantiles is not much. Suppose we find a mechanism with interim allocation for agent , near optimal surplus, and for . Then, in this region. For , we know . Thus,
Hartline and Taggart [17] give the construction of such a mechanism , originally intended to design good mechanisms with only sample access to the distributions of bidders’ bids.
Definition 11 (Hartline and Taggart [17]; -buffering rule).
Given an allocation rule and a quantile , the -buffering rule for simulates with quantiles transformed on each agent as follows:
- Top inflate: for any , return .
- For any , return .
- Bottom deflate: for any , return .
In essence, the top inflate branch makes the -buffering rule treat all agents with a small quantile as if they had quantile , and thus, for (the bottom deflate branch is unnecessary for the analysis of monotone payoff curves like surplus and consumer surplus; see Remark 15).
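Reading off the description above, the top-inflate branch can be sketched as a clamp on small quantiles. The exact branch formulas are not reproduced in this text, so this is our simplified interpretation, not the paper's full transform; the function name is ours.

```python
def top_inflate(q, q_hat):
    """Simplified top-inflate branch of the q_hat-buffering rule as
    described in the text: agents with quantile below q_hat are treated
    as if their quantile were q_hat. (The middle and bottom-deflate
    branches are omitted; this is an illustrative reading, not the
    full definition.)"""
    return max(q, q_hat)
```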
Theorem 12 (Hartline and Taggart [17]).
Let be the -buffering rule of the surplus optimal mechanism . Then, .
Theorem 13.
Consider an -agent environment with an arbitrary feasibility constraint. The optimal omniscient approximation is at most .
Proof.
The proof follows by combining Theorem 12 with the discussion above. Let be the -buffering rule of the surplus optimal mechanism . Then,
Corollary 14.
In -agent environments with an arbitrary feasibility constraint, the optimal omniscient approximation .
Proof.
Setting in Theorem 13, we get
Remark 15.
The -buffering rule of Hartline and Taggart [17] can be implemented without a bottom deflate branch. This would strengthen the approximation guarantee to . However, substituting will not result in a bound tighter than .
Remark 16.
Fotakis et al. [13] show that the optimal omniscient approximation, as a function of the number of outcomes in the feasibility constraint, is at most . Corollary 14 improves their bound whenever the feasibility constraint has cardinality at least . Note that all downward-closed constraints have at least feasible outcomes; for such feasibility constraints, Corollary 14 yields an exponential improvement. As the number of feasible outcomes becomes exponential in , their bound degrades to .
Remark 17.
The upper bound from Corollary 14 matches the lower bound of Qiao and Valiant [23]. In the single-item environment with values of agents drawn IID from the exponential distribution, they show that the optimal surplus is times the optimal consumer surplus, where is the harmonic number.
3.3 The -Identical Goods Environment
Theorem 13 and Corollary 14 give an overarching bound on the optimal omniscient approximation that holds in any environment. However, specific environments admit better approximations. For instance, we give a tighter bound for the -identical goods environment, where any set of at most agents can be allocated simultaneously.
Theorem 18.
In the -agent -identical goods environment, the optimal omniscient approximation satisfies
-
1.
when , and
-
2.
, when .
To prove Theorem 18, we make use of the following lemma, whose proof we defer to the end of the section.
Lemma 19.
Consider the special case with agents and identical goods. The optimal omniscient approximation for any .
Lemma 19 is particularly useful when . For a constant , is while is a constant.
Proof of Theorem 18.
We prove the theorem separately for three cases: when , when but , and when .
-
: The proof is a direct consequence of Corollary 14, where we show . The equality follows since is a constant.
-
, but : By setting in Lemma 19, we get
The first term is since . The second and the third terms are and respectively, since . Summarizing, .
The above two cases prove the first part of the theorem. The final case establishes the second part.
-
: We set as in the previous case. Once again since . Figure 1(a) plots the ratio between our upper bound and the lower bound from Lemma 20 as a function of ; the ratio is at most , exactly as needed. We omit a more detailed numerical analysis substantiating this observation.
We modify the lower bound from Qiao and Valiant [23] sketched in Remark 17 to get an analogous result for the -identical goods environment.
Lemma 20.
Consider the -agent -identical goods environment with values drawn IID from the exponential distribution with mean . Assume, for convenience, that is an integer. The optimal omniscient approximation .
Proof.
The exponential distribution with mean has a constant marginal consumer surplus curve , and thus, the optimal consumer surplus mechanism allocates to some set of agents to get an expected consumer surplus equal to .
Now, consider the following mechanism. Group the agents into batches of agents each, and award a good to the agent with the highest value in each batch. Each batch is now identical to a single-item environment with agents, and thus generates an expected surplus of (see Remark 17 on the example sketched in [23]). The batches together produce an expected surplus equal to . The optimal mechanism creates an even larger surplus, and hence is at least .
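This lower-bound construction is easy to check numerically. The following Monte Carlo sketch (our own illustration; the choices of 64 agents, mean-1 exponential values, and a posted price of 1 are assumptions for concreteness) estimates both quantities for a single good: by memorylessness, the posted-price consumer surplus stays near 1, while the omniscient surplus grows like the harmonic number.

```python
import random

def simulate(n=64, price=1.0, trials=50_000, seed=0):
    """Estimate (omniscient surplus, posted-price consumer surplus) for a
    single good and n agents with IID exponential(1) values."""
    rng = random.Random(seed)
    surplus = cs = 0.0
    for _ in range(trials):
        vals = [rng.expovariate(1.0) for _ in range(n)]
        surplus += max(vals)  # omniscient: always pick the highest value
        # Posted price: the first agent whose value exceeds the price buys.
        winner = next((v for v in vals if v > price), None)
        if winner is not None:
            cs += winner - price  # the buyer keeps value minus payment
    return surplus / trials, cs / trials

h_n = sum(1.0 / k for k in range(1, 65))  # harmonic number H_64
opt, cs = simulate()
# Expect opt to track H_64 while cs stays near 1.
```

By memorylessness of the exponential distribution, a buyer's surplus conditioned on a sale has mean exactly 1, so no posted price earns expected consumer surplus above 1, matching the harmonic-number gap of Remark 17.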
Thus, the upper bound from Theorem 18 is tight up to terms when and up to a multiplicative factor of when . The previously best known bound for the optimal omniscient approximation is due to Hartline and Roughgarden [18], who show . Apart from closing the gap between their upper bound and the lower bound when , our bound is an improvement by a factor of at least when (Figure 1(b)).
We conclude the section with the proof of Lemma 19. We adopt an approach similar to the proof of Theorem 13. Increasing the allocation to agents with a small quantile so that for vastly increases the consumer surplus of without noticeably decreasing the surplus of the mechanism. We consider online posted-price mechanisms with the constraint whenever for all agents . Posted-price mechanisms are known to achieve near optimal surplus, and we bound the loss in surplus due to the additional constraint.
Consistent with prior literature on online sequential price-posting [6, 26], we approximate the performance of the mechanism against the optimal ex-ante relaxed mechanism . In the ex-ante relaxation, the feasibility constraint is relaxed to bind ex-ante rather than ex-post; in other words, the expected number of agents served by the mechanism must be at most . The optimal ex-ante mechanism obtains a larger surplus than , since is also a feasible ex-ante mechanism. Note that if agent is allocated with probability by , it would allocate to the smallest quantiles (the quantiles corresponding to the largest values) without violating the ex-ante constraint, thereby obtaining an expected surplus from agent .
Let be the set of all ex-ante feasible allocations for selling up to items such that and . Let be the ex-ante relaxed mechanism that allocates to the smallest quantiles of agent . Note that is well-defined even if as long as . We will abuse notation to denote the surplus from by . Thus, .
Yan [26] showed that the following sequential posted-price mechanism has a surplus comparable to for all .
-
1.
Order agents in decreasing order of , the expected surplus conditioned on selling to agent .
-
2.
In the order above, offer agent a price until all agents interact with the mechanism or until all goods are sold.
As usual, we will abuse notation to denote the surplus of by .
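As a concrete sketch, the two steps above can be written as follows, assuming each agent comes with a precomputed posted price and conditional expected surplus (the dictionary fields and the function name are our own illustrative choices, not notation from the paper):

```python
def sequential_posted_price(agents, k):
    """Sketch of a sequential posted-price mechanism for k identical goods.

    Each agent is a dict with (hypothetical) fields:
      "cond_surplus" -- expected surplus conditioned on selling to the agent
      "price"        -- the posted price offered to this agent
      "value"        -- the agent's realized value
    """
    # Step 1: order agents by expected surplus conditioned on a sale.
    order = sorted(agents, key=lambda a: a["cond_surplus"], reverse=True)
    winners = []
    # Step 2: offer prices in that order until every agent has interacted
    # with the mechanism or all k goods are sold.
    for a in order:
        if len(winners) == k:
            break
        if a["value"] >= a["price"]:  # the agent accepts its offer
            winners.append(a["name"])
    return winners
```

For instance, with a single good, the agent with the highest conditional surplus is approached first and buys whenever its realized value clears its posted price.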
Theorem 21 (Yan [26]).
For , .
We propose a sequential posted-price mechanism with near optimal surplus and interim allocation rules satisfying the additional properties outlined above. For a vector , let the -inflated sequential posted-price mechanism denote , where . Observe that if has an unsold good while interacting with agent , it always sells the good if the agent has a quantile . Thus, the interim allocation rule is a constant function in and in this range. Next, we argue that approximates the surplus of .
Lemma 22.
For all ,
Proof.
The proof proceeds in three steps. In Step 1, we show that, for with , is not too far from the set of ex-ante feasible allocations . In Steps 2 and 3, we show that the projection of back onto satisfies and .
Step 1.
.
Without loss of generality, we assume . Increasing so that their sum equals would not decrease the value of . The claim reduces to showing the maximum value of is at most .
Note that , with equality exactly when . Thus, whenever , decreasing them to zero increases the value of . However, this would contradict . For , . Increasing to whenever does not change the value of , but allows other values to be set to zero without violating . A simple greedy argument shows that is maximized when for and when , in which case .
Step 2.
Order (and re-index) the agents in decreasing order of . Consider the ex-ante allocation that offers agent a fraction of a good whenever . This has surplus and allocates up to units, since . Let be an alternate ex-ante feasible allocation that allocates for agents for some until all items are allocated. redistributes goods from agents allocated a smaller surplus by to agents with a larger , yielding a surplus
Step 3.
allocates the first agents with probability and allocates the others with probability . allocates the first agents with probability and further allocates any unsold good to the others. Thus, .
Proof of Lemma 19.
Let be such that . From the discussion of sequential price-posting above and Lemma 22,
Further, has a constant interim allocation rule for agent when its quantile is in the range . Arguments identical to those in the proof of Theorem 13 can be used to conclude
4 Pen Testing Corollaries from Deferred-Acceptance Mechanisms
There are many environments in which deferred-acceptance mechanisms are known to perform well. By Theorem 1, these imply good pen testing algorithms, up to an additional factor in the omniscient approximation, i.e., the ratio of the performance of the omniscient algorithm that knows the amount of ink in each pen to that of our proposed pen testing algorithm. In this section, we discuss a few notable examples. A summary of these results was given previously in Table 1.
Deferred-acceptance mechanisms that achieve the ex-post surplus optimal outcome are known for various feasibility constraints. While the simple auction that uniformly increases the price until exactly bidders remain active is surplus optimal in the -identical goods environment, Milgrom and Segal [21] and Bikhchandani et al. [4] generalize the mechanism to matroid feasibility constraints. We obtain the following corollaries from these auctions.
Corollary 23.
For a pen testing environment with a matroid feasibility constraint, there exists a pen testing algorithm with an omniscient approximation ratio .
Corollary 24.
When choosing any of the pens is feasible, there exists a pen testing algorithm with an omniscient approximation ratio when and when .
Dütting et al. [8] initiated the study of prior-free deferred-acceptance approximation mechanisms in environments where deferred-acceptance mechanisms are known to be suboptimal. Their bounds can be improved in settings like ours, where prior distributions of the agents’ values are known. For general downward-closed constraints with a prior distribution on values, Feldman et al. [11] give a approximation to the optimal surplus, where is the number of maximal feasible sets. Note that (every subset of agents might be feasible), and hence is at most . Thus, we obtain a polylogarithmic-approximate pen testing algorithm for any downward-closed feasibility environment.
Note, however, that the pen testing model is inherently downward-closed. Specifically, suppose an algorithm wanted to select a set , but is not feasible while is. Since all pens have non-negative residual ink, even those that failed their tests, there is no loss in selecting instead of . Thus, near optimal pen testing algorithms for downward-closed constraints extend to near optimal algorithms for general combinatorial constraints, with the same performance guarantee as in the downward-closed environment.
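For intuition, the reduction amounts to running a downward-closed algorithm on the downward closure of the original constraint, and then topping the selected set up to a feasible superset at no loss. A brute-force sketch of the closure oracle (our own illustration; exponential time, so only sensible for tiny instances):

```python
from itertools import combinations

def downward_closure(is_feasible, universe):
    """Return a feasibility oracle for the downward closure of a constraint:
    a set S is feasible in the closure iff some superset of S (within the
    universe) is feasible in the original constraint."""
    universe = frozenset(universe)

    def closed(S):
        S = frozenset(S)
        rest = list(universe - S)
        # Brute force over all ways to extend S to a superset.
        return any(is_feasible(S | set(extra))
                   for r in range(len(rest) + 1)
                   for extra in combinations(rest, r))

    return closed

# Example: "exactly two pens" is not downward-closed; its closure is
# "at most two pens" (on a three-pen universe).
exactly_two = lambda S: len(S) == 2
at_most_two = downward_closure(exactly_two, {1, 2, 3})
```

A pen testing algorithm run against the closure selects some set that extends to a feasible set of the original constraint, and the extra pens only add (non-negative) residual ink.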
Corollary 25.
For any combinatorial pen testing environment, there exists a pen testing algorithm with an omniscient approximation ratio .
Remark 26.
Note that this reduction cannot be used to design deferred-acceptance mechanisms for general feasibility constraints. Allocating to agents that have dropped out of the auction can make the mechanism non-truthful (i.e., staying active until the price reaches the agent’s value and then dropping out might not be in the agent’s best interest).
The bound for downward-closed and general combinatorial environments can be improved upon in special cases; for example, in the full online version, we give a natural deferred-acceptance mechanism for knapsack constraints that is a 2-approximation. For this knapsack problem, the agents have sizes along with values and feasible subsets are precisely those with a total size at most the knapsack capacity.
Corollary 27.
For a pen testing environment with a knapsack feasibility constraint, there exists a pen testing algorithm with an omniscient approximation ratio .
Online pricing mechanisms are a special case of deferred-acceptance mechanisms; thus, Theorem 1 converts online pricing mechanisms with good surplus into good online pen testing algorithms. In online pen testing, every pen can be tested exactly once and must either be selected or discarded immediately after testing, before another pen is tested. The algorithm may either determine the order in which pens are tested (the sequential version) or face an adversarially chosen order (the oblivious version).
For the sequential pricing problem, Chawla et al. [6] gave an -approximation mechanism for the single-item environment. Yan [26] connected the sequential pricing problem to the correlation gap and generalized the -approximation to matroid environments. Both of these papers considered optimizing revenue, but the results can be easily adapted to optimize surplus.
Corollary 28.
In the online sequential pen testing problem with a matroid feasibility constraint, there exists a pen testing algorithm that achieves an omniscient approximation ratio .
Hajiaghayi et al. [16] and Chawla et al. [6] adapt the prophet inequality of Samuel-Cahn [25] to give a non-adaptive -approximation for the oblivious price posting problem with a single good. Kleinberg and Weinberg [19] extend this 2-approximation to matroid environments and any arrival order of agents, but with adaptive prices.
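A minimal sketch of a non-adaptive threshold rule in this spirit, checked by Monte Carlo on uniform values (the uniform distribution and the median-of-the-maximum threshold are our illustrative choices here, not the exact constructions of the cited papers):

```python
import random

def threshold_rule(values, t):
    """Non-adaptive rule: accept the first value that clears threshold t."""
    for v in values:
        if v >= t:
            return v
    return 0.0

# Samuel-Cahn-style choice: pick t with P(max of the values >= t) = 1/2.
n, trials, rng = 5, 50_000, random.Random(1)
t = 0.5 ** (1 / n)  # median of the maximum of n IID uniform[0,1] values
reward = best = 0.0
for _ in range(trials):
    vals = [rng.random() for _ in range(n)]
    reward += threshold_rule(vals, t)
    best += max(vals)
ratio = reward / best  # the prophet inequality promises at least 1/2
```

In the pen testing reading, the rule writes with each pen down to the threshold and keeps the first pen that is still writing, regardless of arrival order.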
Corollary 29.
In the online oblivious pen testing problem to select a single pen, there exists a non-adaptive pen testing algorithm that achieves an omniscient approximation ratio .
Corollary 30.
In the online oblivious pen testing problem with a matroid feasibility constraint, there exists a pen testing algorithm that achieves an omniscient approximation ratio .
The bounds of Corollary 28 and Corollary 29 improve on the online pen testing bounds of Qiao and Valiant [23].
5 The Online IID Environment
Consider the special case of the online pen testing problem where the ink levels are drawn independently from the same distribution. We move away from the reduction framework of Theorem 1 to obtain a better bound for this environment.
Theorem 31.
In online single-item environments with IID agents, there exists a price-posting strategy with an omniscient approximation at most . Equivalently, in the online pen testing problem with IID ink levels, there exists an algorithm that achieves .
First, note that the above theorem is tight. In Remark 17, we saw an instance (originally from [23]) with IID agents whose values are drawn from the exponential distribution with consumer surplus only an approximation to the optimal surplus. The optimal online consumer surplus cannot do better. Hence, the approximation ratio in Theorem 31 is tight.
Before proving Theorem 31, we briefly discuss the guarantee obtained through the reduction framework of Theorem 1. For the online environment with IID agents, it is known that [6]. Thus, the framework would give us an bound. The direct analysis below gives a tight bound up to an additive factor .
Proof of Theorem 31.
Similar to the proof of Theorem 8, we formulate the problem as an optimization program and compute its optimal solution.
The surplus optimal auction always allocates the good to the agent with the highest value (smallest quantile). Let be the random variable denoting the value of the winner.
Thus, the expected surplus of the surplus optimal mechanism equals
The first equality follows by integrating from and . The second and fourth equalities follow from integration by parts.
Next, we look at the consumer surplus through price-posting. For a price , we allocate the good if at least one agent has a quantile in . Conditioned on selling the good, we get an expected consumer surplus of . Thus, the expected consumer surplus equals
As before, we first show Theorem 31 for consumer surplus regular distributions. Without loss of generality, assume that the maximum achievable consumer surplus through anonymous price posting is at most . We want to find the consumer surplus regular distribution with the largest surplus.
subject to
First, observe that is convex (the proof of convexity is deferred to the full online version). We want to fit a concave curve under the convex curve . Let be the tangent to at . is a feasible solution to for all . Further, given that touches the curve at , for all (by the concavity of ). Also, the objective clearly increases with a pointwise increase in . Thus, we conclude that the optimal solution to the program is achieved at for some . In particular, the worst-case ratio occurs when is a straight line.
Without loss of generality, we can normalize the slope to . Let . The surplus equals
See the full online version for the proof of the above equality.
By posting a price , we generate a consumer surplus
If , the ratio between the surplus and the consumer surplus is at most by setting . If , the surplus is less than , and by setting , the consumer surplus is at least . Thus, the worst-case ratio between surplus and consumer surplus is at most .
We now show the result for all distributions. We rewrite the optimal surplus as follows.
The equality follows through integration by parts. From the above expression, it can be concluded that the optimal surplus increases with a pointwise increase in .
Let be the price-posting surplus curve of the distribution with price-posting consumer surplus curve . is pointwise larger than , and hence, by Lemma 10, is pointwise larger than . Hence, the optimal surplus is larger for the distribution with price-posting consumer surplus curve . However, by Theorem 5, the consumer surplus from posting prices is identical for both distributions. Thus, the consumer surplus irregular distribution has a better ratio between surplus and consumer surplus than the consumer surplus regular distribution, and the ratio between the optimal surplus and the optimal consumer surplus is at most for all distributions.
6 Conclusion
In this section, we discuss the advantages and limitations of the reduction framework of Theorem 1. Connecting the pen testing problem to auction theory and building an easy-to-use reduction framework enable us to design near optimal pen testing algorithms for general feasibility constraints. We construct algorithms that are only a constant factor away from the lower bound of Qiao and Valiant [23] for various feasibility constraints like matroids () and knapsack (). However, there is scope for improvement on the following two fronts:
-
1.
obtaining tight approximation guarantees, and
-
2.
obtaining approximation guarantees under other models of access to the prior distributions.
Tight Approximation Guarantees.
The approximation ratio of any pen testing algorithm is at least the optimal omniscient approximation . This follows since the optimal consumer surplus in the underlying auction environment is an upper bound on the performance of the optimal pen testing algorithm. The framework matches this bound up to a multiplicative factor of the approximation of deferred-acceptance mechanisms. Reducing this gap for various feasibility environments stands as an open problem.
For instance, consider the online environments discussed in the paper, where . Our framework yielded an omniscient approximation in the oblivious version () and in the sequential version (). We conjecture that the approximation guarantees obtained from the reduction framework are not tight and can be strengthened to in both environments, as in the IID setting (Section 5).
Other Models of Access to Distributions.
Our framework crucially makes use of virtual values, which in turn need complete knowledge of the distribution of ink levels. Thus, tailoring this approach to models where the algorithm gets partial access to the priors is challenging.
Qiao and Valiant [23] study two such models. First, in the online oblivious problem, where the algorithm gets access to just one sample from the distribution of each pen, they give a -approximation to the omniscient benchmark. They also study the online secretary setting, where the pens arrive in a uniformly random order and the quantity of ink in each pen is adversarially determined; the algorithm is told the quantity of ink in the pen with the largest ink level. They give a -approximation in this environment. Further, when the algorithm is given access to the ink levels of all the pens (with random arrival order), they tighten the bound to . Identifying reduction frameworks for these models of access to the distributions remains an open question.
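Qiao and Valiant's single-sample algorithm is more involved than we can reproduce here, but the simplest form of one-sample-per-distribution thresholding can be sketched as follows (our own simplification, which ignores the ink expended by testing; the function name is hypothetical):

```python
def single_sample_rule(samples, values):
    """Use the largest of the given samples (one per distribution) as a
    take-it-or-leave-it threshold: accept the first arriving value that
    beats every sample, or nothing if no value clears the threshold."""
    t = max(samples)
    for v in values:
        if v > t:
            return v
    return 0.0
```

The point of the sketch is only that a single sample per distribution can stand in for full knowledge of the priors when setting a threshold; the guarantees in [23] require a more careful treatment of the testing cost.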
References
- [1] Saeed Alaei, Hu Fu, Nima Haghpanah, Jason D Hartline, and Azarakhsh Malekian. Bayesian optimal auctions via multi-to single-agent reduction. In 13th ACM Conference on Electronic Commerce, EC’12, page 17, 2012. doi:10.1145/2229012.2229017.
- [2] Saeed Alaei, Hu Fu, Nima Haghpanah, Jason D Hartline, and Azarakhsh Malekian. Efficient computation of optimal auctions via reduced forms. Mathematics of Operations Research, 44(3):1058–1086, 2019. doi:10.1287/MOOR.2018.0958.
- [3] Vivi Alatas, Abhijit Banerjee, Rema Hanna, Benjamin A Olken, Ririn Purnamasari, and Matthew Wai-Poi. Ordeal mechanisms in targeting: Theory and evidence from a field experiment in indonesia. Technical report, National Bureau of Economic Research, 2013.
- [4] Sushil Bikhchandani, Sven De Vries, James Schummer, and Rakesh V Vohra. An ascending vickrey auction for selling bases of a matroid. Operations research, 59(2):400–413, 2011. doi:10.1287/OPRE.1100.0888.
- [5] Mark Braverman, Jing Chen, and Sampath Kannan. Optimal provision-after-wait in healthcare. Mathematics of Operations Research, 41(1):352–376, 2016. doi:10.1287/MOOR.2015.0731.
- [6] Shuchi Chawla, Jason D Hartline, David L Malec, and Balasubramanian Sivan. Multi-parameter mechanism design and sequential posted pricing. In Proceedings of the forty-second ACM symposium on Theory of computing, pages 311–320, 2010. doi:10.1145/1806689.1806733.
- [7] Hao Chung and Elaine Shi. Foundations of transaction fee mechanism design. In Proceedings of the 2023 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 3856–3899. SIAM, 2023. doi:10.1137/1.9781611977554.CH150.
- [8] Paul Dütting, Vasilis Gkatzelis, and Tim Roughgarden. The performance of deferred-acceptance auctions. In Proceedings of the fifteenth ACM conference on Economics and computation, pages 187–204, 2014. doi:10.1145/2600057.2602861.
- [9] Cynthia Dwork, Andrew Goldberg, and Moni Naor. On memory-bound functions for fighting spam. In Advances in Cryptology-CRYPTO 2003: 23rd Annual International Cryptology Conference, Santa Barbara, California, USA, August 17-21, 2003. Proceedings 23, pages 426–444. Springer, 2003. doi:10.1007/978-3-540-45146-4_25.
- [10] Cynthia Dwork and Moni Naor. Pricing via processing or combatting junk mail. In Annual international cryptology conference, pages 139–147. Springer, 1992. doi:10.1007/3-540-48071-4_10.
- [11] Michal Feldman, Vasilis Gkatzelis, Nick Gravin, and Daniel Schoepflin. Bayesian and randomized clock auctions. In Proceedings of the 23rd ACM Conference on Economics and Computation, pages 820–845, 2022. doi:10.1145/3490486.3538247.
- [12] Yiding Feng, Jason D Hartline, and Yingkai Li. Simple mechanisms for non-linear agents. In Proceedings of the 2023 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 3802–3816. SIAM, 2023. doi:10.1137/1.9781611977554.CH148.
- [13] Dimitris Fotakis, Dimitris Tsipras, Christos Tzamos, and Emmanouil Zampetakis. Efficient money burning in general domains. Theory of Computing Systems, 59:619–640, 2016. doi:10.1007/S00224-016-9720-2.
- [14] Aadityan Ganesh, Clayton Thomas, and S. Matthew Weinberg. Revisiting the primitives of transaction fee mechanism design. In Proceedings of the 25th ACM Conference on Economics and Computation, EC ’24, New Haven, CT, USA, 2024.
- [15] Kira Goldner and Taylor Lundy. Simple mechanisms for utility maximization: Approximating welfare in the iid unit-demand setting. arXiv preprint arXiv:2402.12340, 2024. doi:10.48550/arXiv.2402.12340.
- [16] Mohammad Taghi Hajiaghayi, Robert Kleinberg, and Tuomas Sandholm. Automated online mechanism design and prophet inequalities. In AAAI, volume 7, pages 58–65, 2007. URL: http://www.aaai.org/Library/AAAI/2007/aaai07-009.php.
- [17] Jason Hartline and Samuel Taggart. Sample complexity for non-truthful mechanisms. In Proceedings of the 2019 ACM Conference on Economics and Computation, pages 399–416, 2019. doi:10.1145/3328526.3329632.
- [18] Jason D Hartline and Tim Roughgarden. Optimal mechanism design and money burning. In Proceedings of the fortieth annual ACM symposium on Theory of computing, pages 75–84, 2008. doi:10.1145/1374376.1374390.
- [19] Robert Kleinberg and Seth Matthew Weinberg. Matroid prophet inequalities. In Proceedings of the forty-fourth annual ACM symposium on Theory of computing, pages 123–136, 2012.
- [20] Henrik Jacobsen Kleven and Wojciech Kopczuk. Transfer program complexity and the take-up of social benefits. American Economic Journal: Economic Policy, 3(1):54–90, 2011.
- [21] Paul Milgrom and Ilya Segal. Deferred-acceptance auctions and radio spectrum reallocation. In Proceedings of the fifteenth ACM conference on Economics and computation, pages 185–186, 2014. doi:10.1145/2600057.2602834.
- [22] Roger B Myerson. Optimal auction design. Mathematics of operations research, 6(1):58–73, 1981. doi:10.1287/MOOR.6.1.58.
- [23] Mingda Qiao and Gregory Valiant. Online pen testing. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023), Cambridge, MA, USA, 2023. doi:10.48550/arXiv.2210.00655.
- [24] Tim Roughgarden. Transaction fee mechanism design. In Proceedings of the 22nd ACM Conference on Economics and Computation, EC ’21, page 792, New York, NY, USA, 2021. Association for Computing Machinery. doi:10.1145/3465456.3467591.
- [25] Ester Samuel-Cahn. Comparison of threshold stop rules and maximum for independent nonnegative random variables. Ann. Probab., 12(4):1213–1216, 1984. URL: http://dml.mathdoc.fr/item/1176993150.
- [26] Qiqi Yan. Mechanism design via correlation gap. In Proceedings of the twenty-second annual ACM-SIAM symposium on Discrete Algorithms, pages 710–719. SIAM, 2011. doi:10.1137/1.9781611973082.56.
