LIPIcs.STACS.2024.37.pdf
In the Online Simple Knapsack problem, an algorithm has to pack a knapsack of unit size as full as possible with items that arrive sequentially. The algorithm has no prior knowledge of the length or nature of the instance. Its performance is measured against the best possible packing of all items of the same instance, over all possible instances. In the classical model of online computation, it is well known that there is no constant bound on the ratio between the size of an optimal packing and the size of an online algorithm's packing. A recent variation of the classical online model is that of predictions. In this model, an algorithm is given knowledge about the instance in advance, which is in reality distorted by some factor δ that is commonly unknown to the algorithm. The algorithm only learns the actual nature of an element of the input once it is revealed, at which point an immediate and irrevocable decision has to be made. In this work, we study a slight variation of this model in which the error term, and thus the range of sizes in which an announced item may actually lie, is given to the algorithm in advance. The algorithm thus knows, for each item, the range of sizes from which its actual size is selected. We find that the analysis of the Online Simple Knapsack problem under this model is surprisingly involved. For values of 0 < δ ≤ 1/7, we prove a tight competitive ratio of 2. From there on, we prove that the competitive ratio is described by at least three alternating functions. We provide partially tight bounds for the whole range 0 < δ < 1, showing in particular that the competitive ratio, as a function of δ, is not continuous.
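As a point of reference, the performance measure described above is the standard competitive ratio for maximization problems; the following minimal formalization (with the notation OPT, ALG, and the instance variable I chosen here for illustration, not taken from the paper) states it for an online packing algorithm ALG:

\[
  c \;=\; \sup_{I} \frac{\mathrm{OPT}(I)}{\mathrm{ALG}(I)} \;\geq\; 1,
\]

where OPT(I) denotes the total size of a best possible packing of instance I and ALG(I) the total size of the packing produced by the online algorithm on I. In this reading, a tight competitive ratio of 2 for 0 < δ ≤ 1/7 means that no online algorithm achieves c < 2 in this range, while some algorithm achieves c = 2.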