Search Results

Documents authored by Chen, Lin



Chen, Lin

Document
Track A: Algorithms, Complexity and Games
Approximation Algorithms for Interdiction Problem with Packing Constraints

Authors: Lin Chen, Xiaoyu Wu, and Guochuan Zhang

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
We study a bilevel optimization problem that is a zero-sum Stackelberg game. In this problem, there are two players, a leader and a follower, who pick items from a common set. The leader and the follower each have their own (multi-dimensional) budget. Each item is associated with a profit, which is the same for the leader and the follower, and consumes the leader’s (follower’s) budget if it is selected by the leader (follower). The leader and the follower select items sequentially: first, the leader selects items within the leader’s budget; then the follower selects items from the remaining items within the follower’s budget. The goal of the leader is to minimize the maximum profit that the follower can obtain. Let s_A and s_B be the dimensions of the leader’s and the follower’s budgets, respectively. A special case of our problem is the bilevel knapsack problem studied by Caprara et al. [SIAM Journal on Optimization, 2014], where s_A = s_B = 1. We consider the general problem and obtain an (s_B+ε)-approximation algorithm when s_A and s_B are both constant. In particular, if s_B = 1, our algorithm implies a PTAS for the bilevel knapsack problem, which is the first 𝒪(1)-approximation algorithm. We complement our result by showing that there does not exist any (4/3-ε)-approximation algorithm even if s_A = 1 and s_B = 2. We also consider a variant of our problem with resource augmentation when s_A and s_B are both part of the input. We obtain an 𝒪(1)-approximation algorithm with 𝒪(1)-resource augmentation; that is, we give an algorithm that returns a solution which exceeds the given leader’s budget by an 𝒪(1) factor, and whose objective value is within an 𝒪(1) factor of the optimal objective value that respects the leader’s budget.
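In symbols, the problem described above can be written as the following min-max integer program (a sketch based only on the abstract; the matrices A, B and vectors b_A, b_B, p are our notation, not necessarily the paper’s):

\[
\min_{\substack{x \in \{0,1\}^n \\ A x \le b_A}} \;\; \max_{\substack{y \in \{0,1\}^n \\ B y \le b_B \\ y \le \mathbf{1} - x}} \; p^{\top} y ,
\]

where A has s_A rows encoding the leader’s packing constraints, B has s_B rows encoding the follower’s, and the constraint y ≤ 1 - x forbids the follower from taking items already selected (interdicted) by the leader.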

Cite as

Lin Chen, Xiaoyu Wu, and Guochuan Zhang. Approximation Algorithms for Interdiction Problem with Packing Constraints. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 39:1-39:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{chen_et_al:LIPIcs.ICALP.2022.39,
  author =	{Chen, Lin and Wu, Xiaoyu and Zhang, Guochuan},
  title =	{{Approximation Algorithms for Interdiction Problem with Packing Constraints}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{39:1--39:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.39},
  URN =		{urn:nbn:de:0030-drops-163806},
  doi =		{10.4230/LIPIcs.ICALP.2022.39},
  annote =	{Keywords: Bilevel Integer Programming, Interdiction Constraints, Knapsack}
}
Document
Feature Cross Search via Submodular Optimization

Authors: Lin Chen, Hossein Esfandiari, Gang Fu, Vahab S. Mirrokni, and Qian Yu

Published in: LIPIcs, Volume 204, 29th Annual European Symposium on Algorithms (ESA 2021)


Abstract
In this paper, we study feature cross search as a fundamental primitive in feature engineering. The importance of feature cross search, especially for the linear model, has been known for a while, with well-known textbook examples. In this problem, the goal is to select a small subset of features, combine them to form a new feature (called the crossed feature) by considering their Cartesian product, and find feature crosses to learn an accurate model. In particular, we study the problem of maximizing a normalized Area Under the Curve (AUC) of the linear model trained on the crossed feature column. First, we show that it is not possible to provide an n^{1/log log n}-approximation algorithm for this problem unless the Exponential Time Hypothesis fails. This result also rules out the possibility of solving this problem in polynomial time unless 𝖯 = NP. On the positive side, under the naïve Bayes assumption, we show that there exists a simple greedy (1-1/e)-approximation algorithm for this problem. This result is established by relating the AUC to the total variation of the commutator of two probability measures and showing that the total variation of the commutator is monotone and submodular. To show this, we relate the submodularity of this function to the positive semi-definiteness of a corresponding kernel matrix. Then, we use Bochner’s theorem to prove the positive semi-definiteness by showing that its inverse Fourier transform is non-negative everywhere. Our techniques and structural results might be of independent interest.
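The (1-1/e) guarantee mentioned above comes from the classical greedy algorithm for maximizing a monotone submodular set function under a cardinality constraint. Below is a minimal generic sketch; the objective f is a placeholder (not the paper’s AUC-based objective) and the bound k stands for the allowed number of features in the cross.

def greedy_submodular(ground_set, f, k):
    """Greedily pick at most k elements from the set ground_set to maximize a
    monotone submodular set function f, given as a callable f(set) -> float."""
    selected = set()
    for _ in range(k):
        best_gain, best_elem = 0.0, None
        for e in ground_set - selected:
            gain = f(selected | {e}) - f(selected)   # marginal gain of adding e
            if gain > best_gain:
                best_gain, best_elem = gain, e
        if best_elem is None:                        # no element improves the objective
            break
        selected.add(best_elem)
    return selected

For any monotone submodular f, the returned set achieves at least a (1-1/e) fraction of the optimal value over all sets of size at most k.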

Cite as

Lin Chen, Hossein Esfandiari, Gang Fu, Vahab S. Mirrokni, and Qian Yu. Feature Cross Search via Submodular Optimization. In 29th Annual European Symposium on Algorithms (ESA 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 204, pp. 31:1-31:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{chen_et_al:LIPIcs.ESA.2021.31,
  author =	{Chen, Lin and Esfandiari, Hossein and Fu, Gang and Mirrokni, Vahab S. and Yu, Qian},
  title =	{{Feature Cross Search via Submodular Optimization}},
  booktitle =	{29th Annual European Symposium on Algorithms (ESA 2021)},
  pages =	{31:1--31:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-204-4},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{204},
  editor =	{Mutzel, Petra and Pagh, Rasmus and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2021.31},
  URN =		{urn:nbn:de:0030-drops-146124},
  doi =		{10.4230/LIPIcs.ESA.2021.31},
  annote =	{Keywords: Feature engineering, feature cross, submodularity}
}
Document
New Bounds on Augmenting Steps of Block-Structured Integer Programs

Authors: Lin Chen, Martin Koutecký, Lei Xu, and Weidong Shi

Published in: LIPIcs, Volume 173, 28th Annual European Symposium on Algorithms (ESA 2020)


Abstract
Iterative augmentation has recently emerged as an overarching method for solving Integer Programs (IP) in variable dimension, in stark contrast with the volume and flatness techniques of IP in fixed dimension. Here we consider 4-block n-fold integer programs, which are the most general class considered so far. A 4-block n-fold IP has a constraint matrix which consists of n copies of small matrices A, B, and D, and one copy of C, in a specific block structure. Iterative augmentation methods rely on the so-called Graver basis of the constraint matrix, which constitutes a set of fundamental augmenting steps. All existing algorithms rely on bounding the 𝓁₁- or 𝓁_∞-norm of elements of the Graver basis. Hemmecke et al. [Math. Prog. 2014] showed that 4-block n-fold IP has Graver elements of 𝓁_∞-norm at most 𝒪_FPT(n^{2^{s_D}}), leading to an algorithm with a similar runtime; here, s_D is the number of rows of matrix D and 𝒪_FPT hides a multiplicative factor that depends only on the small matrices A, B, C, D. However, it remained open whether their bounds are tight, in particular, whether they could be improved to 𝒪_FPT(1), perhaps at least in some restricted cases. We prove that the 𝓁_∞-norm of the Graver elements of 4-block n-fold IP is upper bounded by 𝒪_FPT(n^{s_D}), improving significantly over the previous bound 𝒪_FPT(n^{2^{s_D}}). We also provide a matching lower bound of Ω(n^{s_D}), which even holds for arbitrary non-zero lattice elements, ruling out augmenting algorithms relying on even more restricted notions of augmentation than the Graver basis. We then consider a special case of 4-block n-fold IP in which C is a zero matrix, called 3-block n-fold IP. We show that while the 𝓁_∞-norm of its Graver elements is Ω(n^{s_D}), there exists a different decomposition into lattice elements whose 𝓁_∞-norm is bounded by 𝒪_FPT(1), which allows us to provide improved upper bounds on the 𝓁_∞-norm of Graver elements for 3-block n-fold IP. The key difference between the respective decompositions is that a Graver basis guarantees a sign-compatible decomposition; this property is critical in applications because it guarantees each step of the decomposition to be feasible. Consequently, our improved upper bounds let us establish faster algorithms for 3-block n-fold IP and 4-block n-fold IP, and our lower bounds strongly hint at parameterized hardness of 4-block and even 3-block n-fold IP. Furthermore, we show that 3-block n-fold IP is without loss of generality in the sense that 4-block n-fold IP can be solved in FPT oracle time using an algorithm for 3-block n-fold IP as an oracle.
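For reference, the specific block structure mentioned above places one copy of C, n copies of D in the first block row, and n copies each of B and A below, with all remaining blocks zero (the standard layout used by Hemmecke et al.):

\[
\begin{pmatrix}
C & D & D & \cdots & D \\
B & A &   &        &   \\
B &   & A &        &   \\
\vdots &   &   & \ddots &   \\
B &   &   &        & A
\end{pmatrix}
\]

Setting C = 0 yields the 3-block n-fold IP discussed in the abstract.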

Cite as

Lin Chen, Martin Koutecký, Lei Xu, and Weidong Shi. New Bounds on Augmenting Steps of Block-Structured Integer Programs. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 33:1-33:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{chen_et_al:LIPIcs.ESA.2020.33,
  author =	{Chen, Lin and Kouteck\'{y}, Martin and Xu, Lei and Shi, Weidong},
  title =	{{New Bounds on Augmenting Steps of Block-Structured Integer Programs}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{33:1--33:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.33},
  URN =		{urn:nbn:de:0030-drops-128994},
  doi =		{10.4230/LIPIcs.ESA.2020.33},
  annote =	{Keywords: Integer Programming, Graver basis, Fixed parameter tractable}
}
Document
Smart Contract Execution - the (+-)-Biased Ballot Problem

Authors: Lin Chen, Lei Xu, Zhimin Gao, Nolan Shah, Yang Lu, and Weidong Shi

Published in: LIPIcs, Volume 92, 28th International Symposium on Algorithms and Computation (ISAAC 2017)


Abstract
Transaction systems built on top of blockchains, especially smart contracts, are becoming an important part of the world economy. However, there is a lack of formal study of the behavior of users in these systems, which leaves the correctness and security of such systems without a solid foundation. Unlike mining, in which the reward for mining a block is fixed, different execution results of a smart contract may lead to significantly different payoffs for users, which gives some users more incentive to follow a branch that contains a wrong result, even if that branch is shorter. It is thus important to understand the exact probability that a branch is selected by the system. We formulate this problem as the (+-)-Biased Ballot Problem as follows: there are n voters who, one by one, vote for one of the two candidates A and B. The probability that a voter votes for A or B depends on whether the difference between the current votes of A and B is positive or negative. Our model takes into account the behavior of three different kinds of users when a branch occurs in the system: users who prefer one branch or the other based on the history of their transactions, and users who are indifferent and simply follow the longest chain. We study two important probabilities that are closely related to a blockchain-based system: the probability that A eventually wins, and the probability that A receives d votes first. We show how to recursively calculate the two probabilities for any fixed n and d, and also discuss their asymptotic values when n and d are sufficiently large.
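As a rough illustration of the recursive calculation mentioned above, the probability that A eventually wins can be computed by a memoized recursion over the current vote difference. The transition probabilities p_pos, p_neg, and p_tie below are placeholders for the model’s parameters (the abstract does not spell out how ties are handled), so this is only a sketch of the flavor of the recursion, not the paper’s exact model.

from functools import lru_cache

def win_probability(n, p_pos, p_neg, p_tie=0.5):
    """Probability that A has strictly more votes than B after n voters,
    when each voter picks A with probability p_pos if A currently leads,
    p_neg if A currently trails, and p_tie when the votes are tied."""

    @lru_cache(maxsize=None)
    def rec(remaining, diff):
        # diff = (current votes for A) - (current votes for B)
        if remaining == 0:
            return 1.0 if diff > 0 else 0.0
        p_a = p_pos if diff > 0 else p_neg if diff < 0 else p_tie
        return p_a * rec(remaining - 1, diff + 1) + (1.0 - p_a) * rec(remaining - 1, diff - 1)

    return rec(n, 0)

# Example: 9 voters with a mild bias toward the current leader
print(win_probability(9, p_pos=0.6, p_neg=0.4))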

Cite as

Lin Chen, Lei Xu, Zhimin Gao, Nolan Shah, Yang Lu, and Weidong Shi. Smart Contract Execution - the (+-)-Biased Ballot Problem. In 28th International Symposium on Algorithms and Computation (ISAAC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 92, pp. 21:1-21:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)



@InProceedings{chen_et_al:LIPIcs.ISAAC.2017.21,
  author =	{Chen, Lin and Xu, Lei and Gao, Zhimin and Shah, Nolan and Lu, Yang and Shi, Weidong},
  title =	{{Smart Contract Execution - the (+-)-Biased Ballot Problem}},
  booktitle =	{28th International Symposium on Algorithms and Computation (ISAAC 2017)},
  pages =	{21:1--21:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-054-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{92},
  editor =	{Okamoto, Yoshio and Tokuyama, Takeshi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2017.21},
  URN =		{urn:nbn:de:0030-drops-82388},
  doi =		{10.4230/LIPIcs.ISAAC.2017.21},
  annote =	{Keywords: Blockchain, Probability, Random Walk, Smart Contract}
}
Document
Parameterized and Approximation Results for Scheduling with a Low Rank Processing Time Matrix

Authors: Lin Chen, Dániel Marx, Deshi Ye, and Guochuan Zhang

Published in: LIPIcs, Volume 66, 34th Symposium on Theoretical Aspects of Computer Science (STACS 2017)


Abstract
We study approximation and parameterized algorithms for R||C_max, focusing on the problem when the rank of the matrix formed by job processing times is small. Bhaskara et al. initiated the study of approximation algorithms with respect to the rank, showing that R||C_max admits a QPTAS (quasi-polynomial time approximation scheme) when the rank is 2, and becomes APX-hard when the rank is 4. We continue this line of research. We prove that R||C_max is APX-hard even if the rank is 3, resolving an open problem. We then show that R||C_max is FPT parameterized by the rank and the largest job processing time p_max. This generalizes the parameterized results on P||C_max and on R||C_max with few different types of machines. We also provide nearly tight lower bounds under the Exponential Time Hypothesis which suggest that the running time of the FPT algorithm is unlikely to be improved significantly.
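Here the rank refers to the matrix (p_{ij}) of processing times, p_{ij} being the time of job j on machine i. A standard way to state the low-rank assumption, in our notation, is that each processing time factors through r-dimensional machine and job vectors:

\[
p_{ij} \;=\; \sum_{k=1}^{r} u_{ik}\, v_{jk} \;=\; \langle u_i, v_j \rangle ,
\]

so a rank-1 instance corresponds to uniformly related machines (each machine is just a speed scaling), while larger r interpolates toward fully unrelated machines.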

Cite as

Lin Chen, Dániel Marx, Deshi Ye, and Guochuan Zhang. Parameterized and Approximation Results for Scheduling with a Low Rank Processing Time Matrix. In 34th Symposium on Theoretical Aspects of Computer Science (STACS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 66, pp. 22:1-22:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)



@InProceedings{chen_et_al:LIPIcs.STACS.2017.22,
  author =	{Chen, Lin and Marx, D\'{a}niel and Ye, Deshi and Zhang, Guochuan},
  title =	{{Parameterized and Approximation Results for Scheduling with a Low Rank Processing Time Matrix}},
  booktitle =	{34th Symposium on Theoretical Aspects of Computer Science (STACS 2017)},
  pages =	{22:1--22:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-028-6},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{66},
  editor =	{Vollmer, Heribert and Vall\'{e}e, Brigitte},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2017.22},
  URN =		{urn:nbn:de:0030-drops-70110},
  doi =		{10.4230/LIPIcs.STACS.2017.22},
  annote =	{Keywords: APX-hardness, Parameterized algorithm, Scheduling, Exponential Time Hypothesis}
}
Document
Approximation Algorithms for Parallel Machine Scheduling with Speed-up Resources

Authors: Lin Chen, Deshi Ye, and Guochuan Zhang

Published in: LIPIcs, Volume 60, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2016)


Abstract
We consider the problem of scheduling with renewable speed-up resources. Given m identical machines, n jobs and c different discrete resources, the task is to schedule each job non-preemptively onto one of the machines so as to minimize the makespan. In our problem, a job has its original processing time, which can be reduced by utilizing one of the resources. As the resources are different, the amount of time saved by a job differs depending on the resource it uses. Once a resource is being used by one job, it cannot be used simultaneously by any other job until this job is finished; hence the scheduler has to take into account the job-to-machine assignment together with the resource-to-job assignment. We observe that the classical unrelated machine scheduling problem is actually a special case of our problem when m = c, i.e., when the number of resources equals the number of machines. Extending the techniques for unrelated machine scheduling, we give a 2-approximation algorithm when both m and c are part of the input. We then consider two special cases of the problem, with m or c being a constant, and derive PTASes (Polynomial Time Approximation Schemes) for them, respectively. We also establish a relationship between the two parameters m and c, through which we are able to transform the PTAS for the case when m is a constant into one for the case when c is a constant. The relationship between the two parameters reveals the structure within the problem, and may be of independent interest.

Cite as

Lin Chen, Deshi Ye, and Guochuan Zhang. Approximation Algorithms for Parallel Machine Scheduling with Speed-up Resources. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 60, pp. 5:1-5:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{chen_et_al:LIPIcs.APPROX-RANDOM.2016.5,
  author =	{Chen, Lin and Ye, Deshi and Zhang, Guochuan},
  title =	{{Approximation Algorithms for Parallel Machine Scheduling with Speed-up Resources}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2016)},
  pages =	{5:1--5:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-018-7},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{60},
  editor =	{Jansen, Klaus and Mathieu, Claire and Rolim, Jos\'{e} D. P. and Umans, Chris},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2016.5},
  URN =		{urn:nbn:de:0030-drops-66283},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2016.5},
  annote =	{Keywords: approximation algorithms, scheduling, linear programming}
}
Document
Packing Groups of Items into Multiple Knapsacks

Authors: Lin Chen and Guochuan Zhang

Published in: LIPIcs, Volume 47, 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)


Abstract
We consider a natural generalization of the classical multiple knapsack problem in which, instead of packing single items, we are packing groups of items. In this problem, we have multiple knapsacks and a set of items which are partitioned into groups. Each item has an individual weight, while the profit is associated with groups rather than items. The profit of a group can be attained if and only if every item of this group is packed. Such a general model finds applications in various practical problems, e.g., delivering bundles of goods. The tractability of this problem relies heavily on how large a group can be. Deciding whether a group of items of total weight 2 can be packed into two knapsacks of unit capacity is already NP-hard, which rules out a constant-factor approximation algorithm for this problem in general. We then focus on the parameterized version where the total weight of items in each group is bounded by a factor delta of the total capacity of all knapsacks. Both approximation and inapproximability results with respect to delta are derived. We also show that, depending on whether the number of knapsacks is a constant or part of the input, the approximation ratio for the problem, as a function of delta, changes substantially, which marks a clear difference from the classical multiple knapsack problem.
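One natural integer-programming view of the problem described above (our notation, for illustration only): with binary variables z_g indicating that group g is fully packed and x_{i,k} indicating that item i is placed in knapsack k of capacity C_k,

\[
\max \sum_{g} p_g z_g
\quad \text{s.t.} \quad
\sum_{k} x_{i,k} = z_{g(i)} \;\; \forall i, \qquad
\sum_{i} w_i\, x_{i,k} \le C_k \;\; \forall k, \qquad
x_{i,k}, z_g \in \{0,1\},
\]

where g(i) is the group containing item i. The first constraint enforces the all-or-nothing requirement: a group’s profit is collected only if every one of its items is packed (and packing items of an unselected group is never beneficial, so forbidding it loses nothing).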

Cite as

Lin Chen and Guochuan Zhang. Packing Groups of Items into Multiple Knapsacks. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 28:1-28:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{chen_et_al:LIPIcs.STACS.2016.28,
  author =	{Chen, Lin and Zhang, Guochuan},
  title =	{{Packing Groups of Items into Multiple Knapsacks}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{28:1--28:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.28},
  URN =		{urn:nbn:de:0030-drops-57299},
  doi =		{10.4230/LIPIcs.STACS.2016.28},
  annote =	{Keywords: approximation algorithms, lower bound, multiple knapsack, bin packing}
}
Document
Stochastic and Robust Scheduling in the Cloud

Authors: Lin Chen, Nicole Megow, Roman Rischke, and Leen Stougie

Published in: LIPIcs, Volume 40, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)


Abstract
Users of cloud computing services are offered rapid access to computing resources via the Internet. Cloud providers use different pricing options such as (i) time slot reservation in advance at a fixed price and (ii) on-demand service on an (hourly) pay-as-used basis. Choosing the best combination of pricing options is a challenging task for users, in particular when the instantiation of computing jobs is subject to uncertainty. We propose a natural model for two-stage scheduling under uncertainty that captures such resource provisioning and scheduling problems in the cloud. Reserving a time unit for processing jobs incurs some cost, which depends on when the reservation is made: a priori decisions, based only on distributional information, are much cheaper than on-demand decisions made when the actual scenario is known. We consider both stochastic and robust versions of scheduling unrelated machines with the objectives of minimizing the sum of weighted completion times and the makespan. Our main contribution is an (8+eps)-approximation algorithm for the min-sum objective in the stochastic polynomial-scenario model. The same technique gives a (7.11+eps)-approximation for minimizing the makespan. The key ingredient is an LP-based separation of jobs and time slots to be considered in either the first or the second stage only, and then approximately solving the separated problems. At the expense of another epsilon, our results hold for any arbitrary scenario distribution given by means of a black box. Our techniques also yield approximation algorithms for robust two-stage scheduling.

Cite as

Lin Chen, Nicole Megow, Roman Rischke, and Leen Stougie. Stochastic and Robust Scheduling in the Cloud. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 40, pp. 175-186, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)



@InProceedings{chen_et_al:LIPIcs.APPROX-RANDOM.2015.175,
  author =	{Chen, Lin and Megow, Nicole and Rischke, Roman and Stougie, Leen},
  title =	{{Stochastic and Robust Scheduling in the Cloud}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)},
  pages =	{175--186},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-89-7},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{40},
  editor =	{Garg, Naveen and Jansen, Klaus and Rao, Anup and Rolim, Jos\'{e} D. P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2015.175},
  URN =		{urn:nbn:de:0030-drops-53028},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2015.175},
  annote =	{Keywords: Approximation Algorithms, Robust Optimization, Stochastic Optimization, Unrelated Machine Scheduling, Cloud Computing}
}

Chen, Ho-Lin

Document
Complete Volume
LIPIcs, Volume 276, DNA 29, Complete Volume

Authors: Ho-Lin Chen and Constantine G. Evans

Published in: LIPIcs, Volume 276, 29th International Conference on DNA Computing and Molecular Programming (DNA 29) (2023)


Abstract
LIPIcs, Volume 276, DNA 29, Complete Volume

Cite as

29th International Conference on DNA Computing and Molecular Programming (DNA 29). Leibniz International Proceedings in Informatics (LIPIcs), Volume 276, pp. 1-230, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@Proceedings{chen_et_al:LIPIcs.DNA.29,
  title =	{{LIPIcs, Volume 276, DNA 29, Complete Volume}},
  booktitle =	{29th International Conference on DNA Computing and Molecular Programming (DNA 29)},
  pages =	{1--230},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-297-6},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{276},
  editor =	{Chen, Ho-Lin and Evans, Constantine G.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.DNA.29},
  URN =		{urn:nbn:de:0030-drops-187827},
  doi =		{10.4230/LIPIcs.DNA.29},
  annote =	{Keywords: LIPIcs, Volume 276, DNA 29, Complete Volume}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Ho-Lin Chen and Constantine G. Evans

Published in: LIPIcs, Volume 276, 29th International Conference on DNA Computing and Molecular Programming (DNA 29) (2023)


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

29th International Conference on DNA Computing and Molecular Programming (DNA 29). Leibniz International Proceedings in Informatics (LIPIcs), Volume 276, pp. 0:i-0:xiv, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{chen_et_al:LIPIcs.DNA.29.0,
  author =	{Chen, Ho-Lin and Evans, Constantine G.},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{29th International Conference on DNA Computing and Molecular Programming (DNA 29)},
  pages =	{0:i--0:xiv},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-297-6},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{276},
  editor =	{Chen, Ho-Lin and Evans, Constantine G.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.DNA.29.0},
  URN =		{urn:nbn:de:0030-drops-187839},
  doi =		{10.4230/LIPIcs.DNA.29.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
Tight Competitive Analyses of Online Car-Sharing Problems

Authors: Ya-Chun Liang, Kuan-Yun Lai, Ho-Lin Chen, and Kazuo Iwama

Published in: LIPIcs, Volume 212, 32nd International Symposium on Algorithms and Computation (ISAAC 2021)


Abstract
The car-sharing problem, proposed by Luo, Erlebach and Xu in 2018, mainly focuses on an online model in which there are two locations, 0 and 1, and k cars in total. Each request, which specifies its pick-up time and pick-up location (one of 0 and 1; the other is the drop-off location), is released in each stage a fixed amount of time before its specified start (i.e., pick-up) time. The time between the booking (i.e., release) time and the start time is enough to move empty cars between 0 and 1 for relocation if they are not used in that stage. The model, called kS2L-F, assumes that requests in each stage arrive sequentially even if they share the same booking time, and the decision (accept or reject) must be made immediately. The goal is to accept as many requests as possible. Despite there being only two locations, the analysis does not seem easy, and the (tight) competitive ratio (CR) was previously known only to be 2.0 for k = 2 and 1.5 for a restricted range of k, namely multiples of three. In this paper, we close all the remaining gaps: we prove that the CR is 2k/(k + ⌊k/3⌋) for all k ≥ 2. Furthermore, if the algorithm can delay its decision until all requests have arrived in each stage, the CR improves to roughly 4/3. We can push this advantage even further: we achieve a CR of (2+R)/3 if the number of requests in each stage is at most Rk, 1 ≤ R ≤ 2, where we do not have to know the value of R in advance. Finally, we demonstrate that randomization also helps to obtain (slightly) better CRs.
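As a sanity check, the general formula recovers the two previously known values:

\[
\frac{2k}{k + \lfloor k/3 \rfloor}\bigg|_{k=2} = \frac{4}{2+0} = 2.0,
\qquad
\frac{2k}{k + \lfloor k/3 \rfloor}\bigg|_{k=3} = \frac{6}{3+1} = 1.5 .
\]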

Cite as

Ya-Chun Liang, Kuan-Yun Lai, Ho-Lin Chen, and Kazuo Iwama. Tight Competitive Analyses of Online Car-Sharing Problems. In 32nd International Symposium on Algorithms and Computation (ISAAC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 212, pp. 50:1-50:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{liang_et_al:LIPIcs.ISAAC.2021.50,
  author =	{Liang, Ya-Chun and Lai, Kuan-Yun and Chen, Ho-Lin and Iwama, Kazuo},
  title =	{{Tight Competitive Analyses of Online Car-Sharing Problems}},
  booktitle =	{32nd International Symposium on Algorithms and Computation (ISAAC 2021)},
  pages =	{50:1--50:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-214-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{212},
  editor =	{Ahn, Hee-Kap and Sadakane, Kunihiko},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2021.50},
  URN =		{urn:nbn:de:0030-drops-154835},
  doi =		{10.4230/LIPIcs.ISAAC.2021.50},
  annote =	{Keywords: Car-sharing, Competitive analysis, On-line scheduling, Randomized algorithm}
}
Document
An Improved Tax Scheme for Selfish Routing

Authors: Te-Li Wang, Chih-Kuan Yeh, and Ho-Lin Chen

Published in: LIPIcs, Volume 64, 27th International Symposium on Algorithms and Computation (ISAAC 2016)


Abstract
We study the problem of routing traffic for independent selfish users in a congested network so as to minimize the total latency. The inefficiency of selfish routing motivates regulating the flow of the system to lower the total latency of the Nash equilibrium by economic incentives or penalties. When applying taxes to the routes, we follow the definition of [Christodoulou et al., Algorithmica, 2014] and define the ePoA as the Nash total cost including tax in the taxed network over the optimal cost in the original network. We propose a simple tax scheme consisting of step functions imposed on the links. The tax scheme can be applied to routing games with parallel links, affine cost functions and single-commodity networks to lower the ePoA to at most 4/3 - epsilon, where epsilon only depends on the discrepancy between the links. We show that in the two-link case there exists a tax scheme with an ePoA upper bound of less than 1.192, which is almost tight. Moreover, we design another tax scheme that lowers the ePoA to 1.281 for routing games with groups of links such that links in the same group are similar to each other and groups are sufficiently different.

Cite as

Te-Li Wang, Chih-Kuan Yeh, and Ho-Lin Chen. An Improved Tax Scheme for Selfish Routing. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 61:1-61:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{wang_et_al:LIPIcs.ISAAC.2016.61,
  author =	{Wang, Te-Li and Yeh, Chih-Kuan and Chen, Ho-Lin},
  title =	{{An Improved Tax Scheme for Selfish Routing}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{61:1--61:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.61},
  URN =		{urn:nbn:de:0030-drops-68308},
  doi =		{10.4230/LIPIcs.ISAAC.2016.61},
  annote =	{Keywords: selfish routing, price of anarchy, tax}
}
Document
Pattern Overlap Implies Runaway Growth in Hierarchical Tile Systems

Authors: Ho-Lin Chen, David Doty, Ján Manuch, Arash Rafiey, and Ladislav Stacho

Published in: LIPIcs, Volume 34, 31st International Symposium on Computational Geometry (SoCG 2015)


Abstract
We show that in the hierarchical tile assembly model, if there is a producible assembly that overlaps a nontrivial translation of itself consistently (i.e., the pattern of tile types in the overlap region is identical in both translations), then arbitrarily large assemblies are producible. The significance of this result is that tile systems intended to controllably produce finite structures must avoid pattern repetition in their producible assemblies that would lead to such overlap. This answers an open question of Chen and Doty (SODA 2012), who showed that so-called "partial-order" systems producing a unique finite assembly and avoiding such overlaps must require time linear in the assembly diameter. An application of our main result is that any system producing a unique finite assembly is automatically guaranteed to avoid such overlaps, simplifying the hypothesis of Chen and Doty's main theorem.

Cite as

Ho-Lin Chen, David Doty, Ján Manuch, Arash Rafiey, and Ladislav Stacho. Pattern Overlap Implies Runaway Growth in Hierarchical Tile Systems. In 31st International Symposium on Computational Geometry (SoCG 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 34, pp. 360-373, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)



@InProceedings{chen_et_al:LIPIcs.SOCG.2015.360,
  author =	{Chen, Ho-Lin and Doty, David and Manuch, J\'{a}n and Rafiey, Arash and Stacho, Ladislav},
  title =	{{Pattern Overlap Implies Runaway Growth in Hierarchical Tile Systems}},
  booktitle =	{31st International Symposium on Computational Geometry (SoCG 2015)},
  pages =	{360--373},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-83-5},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{34},
  editor =	{Arge, Lars and Pach, J\'{a}nos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SOCG.2015.360},
  URN =		{urn:nbn:de:0030-drops-50935},
  doi =		{10.4230/LIPIcs.SOCG.2015.360},
  annote =	{Keywords: self-assembly, hierarchical, pumping}
}

Lin, Chengyu

Document
Sensitivity Conjecture and Log-Rank Conjecture for Functions with Small Alternating Numbers

Authors: Chengyu Lin and Shengyu Zhang

Published in: LIPIcs, Volume 80, 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)


Abstract
The Sensitivity Conjecture and the Log-rank Conjecture are among the most important and challenging problems in concrete complexity. Incidentally, the Sensitivity Conjecture is known to hold for monotone functions, and so is the Log-rank Conjecture for f(x ∧ y) and f(x ⊕ y) with monotone functions f, where ∧ and ⊕ denote bit-wise AND and XOR, respectively. In this paper, we extend these results to functions f which alternate values a relatively small number of times on any monotone path from 0^n to 1^n. These results deepen our understanding of the two conjectures and contribute to the recent line of research on functions with small alternating numbers.
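The alternating number referred to above can be stated as follows (an informal reconstruction in our notation): for a Boolean function f: {0,1}^n → {0,1},

\[
\mathrm{alt}(f) \;=\; \max_{0^n = x^0 \prec x^1 \prec \cdots \prec x^n = 1^n} \bigl|\{\, i : f(x^i) \neq f(x^{i+1}) \,\}\bigr| ,
\]

where the maximum is over monotone paths in which each x^{i+1} is obtained from x^i by flipping a single 0 to 1, i.e., the largest number of times f changes value along any such path.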

Cite as

Chengyu Lin and Shengyu Zhang. Sensitivity Conjecture and Log-Rank Conjecture for Functions with Small Alternating Numbers. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 51:1-51:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)



@InProceedings{lin_et_al:LIPIcs.ICALP.2017.51,
  author =	{Lin, Chengyu and Zhang, Shengyu},
  title =	{{Sensitivity Conjecture and Log-Rank Conjecture for Functions with Small Alternating Numbers}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{51:1--51:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.51},
  URN =		{urn:nbn:de:0030-drops-74045},
  doi =		{10.4230/LIPIcs.ICALP.2017.51},
  annote =	{Keywords: Analysis of Boolean functions, Sensitivity Conjecture, Log-rank Conjecture, Alternating Number}
}

Chen, Yanlin

Document
Track A: Algorithms, Complexity and Games
Quantum Algorithms and Lower Bounds for Linear Regression with Norm Constraints

Authors: Yanlin Chen and Ronald de Wolf

Published in: LIPIcs, Volume 261, 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)


Abstract
Lasso and Ridge are important minimization problems in machine learning and statistics. They are versions of linear regression with squared loss where the vector θ ∈ ℝ^d of coefficients is constrained in either 𝓁₁-norm (for Lasso) or in 𝓁₂-norm (for Ridge). We study the complexity of quantum algorithms for finding ε-minimizers for these minimization problems. We show that for Lasso we can get a quadratic quantum speedup in terms of d by speeding up the cost-per-iteration of the Frank-Wolfe algorithm, while for Ridge the best quantum algorithms are linear in d, as are the best classical algorithms. As a byproduct of our quantum lower bound for Lasso, we also prove the first classical lower bound for Lasso that is tight up to polylog-factors.
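For context, the classical Frank-Wolfe iteration for the ℓ₁-constrained least-squares (Lasso) problem, whose per-iteration cost the quantum algorithm above speeds up, looks roughly as follows. This is a plain classical sketch in NumPy under standard assumptions, not the paper’s quantum algorithm.

import numpy as np

def frank_wolfe_lasso(X, y, radius, iterations=200):
    """Frank-Wolfe for min_{||theta||_1 <= radius} (1/2n) * ||X theta - y||^2.
    Each iteration uses one gradient and moves toward a single vertex
    of the l1 ball, so the iterate stays sparse."""
    n, d = X.shape
    theta = np.zeros(d)
    for t in range(iterations):
        grad = X.T @ (X @ theta - y) / n
        i = int(np.argmax(np.abs(grad)))            # linear minimization oracle
        vertex = np.zeros(d)
        vertex[i] = -radius * np.sign(grad[i])      # best vertex of the l1 ball
        gamma = 2.0 / (t + 2.0)                     # standard step size
        theta = (1.0 - gamma) * theta + gamma * vertex
    return theta

# Tiny synthetic usage example
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = X @ np.array([1.0, -2.0, 0.0, 0.0, 0.5]) + 0.1 * rng.standard_normal(100)
print(frank_wolfe_lasso(X, y, radius=4.0))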

Cite as

Yanlin Chen and Ronald de Wolf. Quantum Algorithms and Lower Bounds for Linear Regression with Norm Constraints. In 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 261, pp. 38:1-38:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{chen_et_al:LIPIcs.ICALP.2023.38,
  author =	{Chen, Yanlin and de Wolf, Ronald},
  title =	{{Quantum Algorithms and Lower Bounds for Linear Regression with Norm Constraints}},
  booktitle =	{50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)},
  pages =	{38:1--38:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-278-5},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{261},
  editor =	{Etessami, Kousha and Feige, Uriel and Puppis, Gabriele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2023.38},
  URN =		{urn:nbn:de:0030-drops-180907},
  doi =		{10.4230/LIPIcs.ICALP.2023.38},
  annote =	{Keywords: Quantum algorithms, Regularized linear regression, Lasso, Ridge, Lower bounds}
}
Document
Improved (Provable) Algorithms for the Shortest Vector Problem via Bounded Distance Decoding

Authors: Divesh Aggarwal, Yanlin Chen, Rajendra Kumar, and Yixin Shen

Published in: LIPIcs, Volume 187, 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)


Abstract
The most important computational problem on lattices is the Shortest Vector Problem (SVP). In this paper, we present new algorithms that improve the state of the art for provable classical and quantum algorithms for SVP. We present the following results. 1) A new algorithm for SVP that provides a smooth tradeoff between time complexity and memory requirement. For any positive integer 4 ≤ q ≤ √n, our algorithm takes q^{13n+o(n)} time and requires poly(n)⋅ q^{16n/q²} memory. This tradeoff, which ranges from enumeration (q = √n) to sieving (q constant), is a consequence of a new time-memory tradeoff for Discrete Gaussian sampling above the smoothing parameter. 2) A quantum algorithm that runs in time 2^{0.9533n+o(n)} and requires 2^{0.5n+o(n)} classical memory and poly(n) qubits. This improves over the previously fastest classical (and also fastest quantum) algorithm due to [Divesh Aggarwal et al., 2015], which has time and space complexity 2^{n+o(n)}. 3) A classical algorithm for SVP that runs in 2^{1.741n+o(n)} time and 2^{0.5n+o(n)} space. This improves over an algorithm of [Yanlin Chen et al., 2018] that has the same space complexity. The time complexities of our classical and quantum algorithms are expressed using a quantity related to the kissing number of a lattice. A known upper bound on this quantity is 2^{0.402n}, but in practice for most lattices it can be much smaller, even 2^{o(n)}. In that case, our classical algorithm runs in time 2^{1.292n} and our quantum algorithm runs in time 2^{0.750n}.
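To see why the two endpoints of the tradeoff in item 1) correspond to enumeration-like and sieving-like behavior, plug them into the stated memory bound:

\[
q = \sqrt{n}: \quad \mathrm{poly}(n)\cdot q^{16n/q^{2}} = \mathrm{poly}(n)\cdot (\sqrt{n})^{16} = \mathrm{poly}(n),
\qquad
q = O(1): \quad \mathrm{poly}(n)\cdot q^{16n/q^{2}} = 2^{\Theta(n)},
\]

i.e., polynomial memory (as in enumeration) at one end and exponential memory (as in sieving) at the other.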

Cite as

Divesh Aggarwal, Yanlin Chen, Rajendra Kumar, and Yixin Shen. Improved (Provable) Algorithms for the Shortest Vector Problem via Bounded Distance Decoding. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 4:1-4:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{aggarwal_et_al:LIPIcs.STACS.2021.4,
  author =	{Aggarwal, Divesh and Chen, Yanlin and Kumar, Rajendra and Shen, Yixin},
  title =	{{Improved (Provable) Algorithms for the Shortest Vector Problem via Bounded Distance Decoding}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{4:1--4:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.4},
  URN =		{urn:nbn:de:0030-drops-136494},
  doi =		{10.4230/LIPIcs.STACS.2021.4},
  annote =	{Keywords: Lattices, Shortest Vector Problem, Discrete Gaussian Sampling, Time-Space Tradeoff, Quantum computation, Bounded distance decoding}
}

Chen, Tonglin

Document
Parallel Pairwise Operations on Data Stored in DNA: Sorting, Shifting, and Searching

Authors: Tonglin Chen, Arnav Solanki, and Marc Riedel

Published in: LIPIcs, Volume 205, 27th International Conference on DNA Computing and Molecular Programming (DNA 27) (2021)


Abstract
Prior research has introduced the Single-Instruction-Multiple-Data paradigm for DNA computing (SIMD DNA). It offers the potential for storing information and performing in-memory computations on DNA, with massive parallelism. This paper introduces three new SIMD DNA operations: sorting, shifting, and searching. Each is a fundamental operation in computer science. Our implementations demonstrate the effectiveness of parallel pairwise operations with this new paradigm.

Cite as

Tonglin Chen, Arnav Solanki, and Marc Riedel. Parallel Pairwise Operations on Data Stored in DNA: Sorting, Shifting, and Searching. In 27th International Conference on DNA Computing and Molecular Programming (DNA 27). Leibniz International Proceedings in Informatics (LIPIcs), Volume 205, pp. 11:1-11:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{chen_et_al:LIPIcs.DNA.27.11,
  author =	{Chen, Tonglin and Solanki, Arnav and Riedel, Marc},
  title =	{{Parallel Pairwise Operations on Data Stored in DNA: Sorting, Shifting, and Searching}},
  booktitle =	{27th International Conference on DNA Computing and Molecular Programming (DNA 27)},
  pages =	{11:1--11:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-205-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{205},
  editor =	{Lakin, Matthew R. and \v{S}ulc, Petr},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.DNA.27.11},
  URN =		{urn:nbn:de:0030-drops-146780},
  doi =		{10.4230/LIPIcs.DNA.27.11},
  annote =	{Keywords: Molecular Computing, DNA Computing, DNA Storage, Parallel Computing, Strand Displacement}
}

Chen, Yiling

Document
Cursed yet Satisfied Agents

Authors: Yiling Chen, Alon Eden, and Juntao Wang

Published in: LIPIcs, Volume 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022)


Abstract
In real-life auctions, a widely observed phenomenon is the winner’s curse: the winner’s high bid implies that the winner has often overestimated the value of the good for sale, resulting in negative utility. The seminal work of Eyster and Rabin [Econometrica'05] introduced a behavioral model aimed at explaining this observed anomaly. We term agents who display this bias "cursed agents." We adopt their model in the interdependent value setting, and aim to devise mechanisms that prevent the agents from obtaining negative utility. We design mechanisms that are cursed ex-post incentive compatible, that is, they incentivize agents to bid their true signal even though they are cursed, while ensuring that the outcome is ex-post individually rational (EPIR): the price the agents pay is no more than the agents' true value. Since the agents might overestimate the value of the allocated good, such mechanisms might require the seller to make positive (monetary) transfers to the agents in order to prevent agents from over-paying for the good. While a seller not required to satisfy EPIR might see higher revenue when agents are cursed, once EPIR is imposed, cursed agents will always pay less than fully rational agents (due to the positive transfers the seller makes). We devise revenue- and welfare-maximizing mechanisms for cursed agents. For revenue maximization, we give the optimal deterministic and anonymous mechanism. For welfare maximization, we require ex-post budget balance (EPBB), as positive transfers might cause the seller to have negative revenue. We propose a masking operation that takes any deterministic mechanism, and masks the allocation whenever the seller would have to make positive transfers. The masking operation ensures that the mechanism is both EPIR and EPBB. We show that in typical settings, EPBB implies that the mechanism cannot make any positive transfers. Thus, applying the masking operation to the fully efficient mechanism results in a socially optimal EPBB mechanism. This further implies that if the valuation function is the maximum of agents' signals, the optimal EPBB mechanism obtains zero welfare. In contrast, we show that for sum-concave valuations, which include weighted-sum valuations and 𝓁_p-norms, the welfare-optimal EPBB mechanism obtains half of the optimal welfare as the number of agents grows large.

Cite as

Yiling Chen, Alon Eden, and Juntao Wang. Cursed yet Satisfied Agents. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 215, p. 44:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)



@InProceedings{chen_et_al:LIPIcs.ITCS.2022.44,
  author =	{Chen, Yiling and Eden, Alon and Wang, Juntao},
  title =	{{Cursed yet Satisfied Agents}},
  booktitle =	{13th Innovations in Theoretical Computer Science Conference (ITCS 2022)},
  pages =	{44:1--44:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-217-4},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{215},
  editor =	{Braverman, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2022.44},
  URN =		{urn:nbn:de:0030-drops-156407},
  doi =		{10.4230/LIPIcs.ITCS.2022.44},
  annote =	{Keywords: Mechanism Design, Interdependent Valuation Auction, Bounded Rationality, Cursed Equilibrium, Winner’s curse}
}