Adaptive Regularized Submodular Maximization

Authors: Shaojie Tang, Jing Yuan




File

LIPIcs.ISAAC.2021.69.pdf
  • Filesize: 0.65 MB
  • 13 pages

Author Details

Shaojie Tang
  • Naveen Jindal School of Management, University of Texas at Dallas, TX, USA
Jing Yuan
  • Department of Computer Science, University of Texas at Dallas, TX, USA

Cite As

Shaojie Tang and Jing Yuan. Adaptive Regularized Submodular Maximization. In 32nd International Symposium on Algorithms and Computation (ISAAC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 212, pp. 69:1-69:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)
https://doi.org/10.4230/LIPIcs.ISAAC.2021.69

Abstract

In this paper, we study the problem of maximizing the difference between an adaptive submodular (revenue) function and a non-negative modular (cost) function. The input of our problem is a set of n items, where each item has a particular state drawn from a known prior distribution. The revenue function g is defined over items and states, and the cost function c is defined over items, i.e., each item has a fixed cost. The state of each item is initially unknown, and one must select an item in order to observe its realized state. A policy π specifies which item to pick next based on the observations made so far. Denote by g_{avg}(π) the expected revenue of π and by c_{avg}(π) the expected cost of π. Our objective is to identify the best policy π^o ∈ arg max_π g_{avg}(π) - c_{avg}(π) subject to a k-cardinality constraint. Since our objective function can take on both negative and positive values, existing results on submodular maximization may not be applicable. To overcome this challenge, we develop a series of effective solutions with performance guarantees. For the case when g is adaptive monotone and adaptive submodular, we develop an effective policy π^l such that g_{avg}(π^l) - c_{avg}(π^l) ≥ (1 - 1/e - ε)g_{avg}(π^o) - c_{avg}(π^o), using only O(nε^{-2} log ε^{-1}) value oracle queries. For the case when g is adaptive submodular, we present a randomized policy π^r such that g_{avg}(π^r) - c_{avg}(π^r) ≥ (1/e)g_{avg}(π^o) - c_{avg}(π^o).
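The (1 - 1/e)g(OPT) - c(OPT) style of guarantee in the abstract mirrors the non-adaptive "distorted greedy" technique of Harshaw et al. (reference 4 below). As an illustration only, and not the paper's adaptive policy, here is a minimal non-adaptive sketch: a shrinking distortion factor scales each item's marginal revenue before the modular cost is subtracted, and an item is added only if its distorted gain is strictly positive. The coverage instance at the bottom is a made-up example.

```python
def distorted_greedy(items, g, cost, k):
    """Non-adaptive distorted-greedy sketch (cf. Harshaw et al., 2019) for
    maximizing g(S) - c(S), with g monotone submodular, c modular, |S| <= k.
    Guarantee: g(S) - c(S) >= (1 - 1/e) * g(OPT) - c(OPT)."""
    S = set()
    for i in range(k):
        # Distortion factor (1 - 1/k)^(k - i - 1) grows toward 1, so the
        # raw marginal revenue is trusted more as fewer slots remain.
        phi = (1.0 - 1.0 / k) ** (k - i - 1)
        best, best_gain = None, 0.0
        for e in items - S:
            gain = phi * (g(S | {e}) - g(S)) - cost[e]
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:  # no item has strictly positive distorted gain
            continue
        S.add(best)
    return S

# Toy coverage instance: g(S) = number of ground elements covered by S.
cover = {'a': {1, 2, 3}, 'b': {3, 4}, 'c': {5}}
cost = {'a': 0.5, 'b': 0.5, 'c': 2.0}
g = lambda S: len(set().union(*(cover[e] for e in S))) if S else 0
print(distorted_greedy(set(cover), g, cost, k=2))  # -> {'a', 'b'}
```

The adaptive setting of the paper differs in that selections interleave with state observations, so the policy conditions each choice on the realized states seen so far; the sketch above only conveys the distorted-objective idea behind the (1 - 1/e) factor.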

Subject Classification

ACM Subject Classification
  • Theory of computation → Approximation algorithms analysis
  • Mathematics of computing → Approximation algorithms
  • Mathematics of computing → Submodular optimization and polymatroids
Keywords
  • Adaptive submodularity
  • approximation algorithms
  • active learning


References

  1. Niv Buchbinder, Moran Feldman, Joseph Naor, and Roy Schwartz. Submodular maximization with cardinality constraints. In Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1433-1452. SIAM, 2014.
  2. Moran Feldman. Guess free maximization of submodular and linear sums. Algorithmica, pages 1-26, 2020.
  3. Daniel Golovin and Andreas Krause. Adaptive submodularity: Theory and applications in active learning and stochastic optimization. Journal of Artificial Intelligence Research, 42:427-486, 2011.
  4. Chris Harshaw, Moran Feldman, Justin Ward, and Amin Karbasi. Submodular maximization beyond non-negativity: Guarantees, fast algorithms, and applications. In International Conference on Machine Learning, pages 2634-2643. PMLR, 2019.
  5. Tianyuan Jin, Yu Yang, Renchi Yang, Jieming Shi, Keke Huang, and Xiaokui Xiao. Unconstrained submodular maximization with modular costs: Tight approximation and application to profit maximization. Proceedings of the VLDB Endowment, 14(10), 2021.
  6. Ehsan Kazemi, Shervin Minaee, Moran Feldman, and Amin Karbasi. Regularized submodular maximization at scale. In International Conference on Machine Learning, pages 5356-5366. PMLR, 2021.
  7. Andreas Krause and Carlos Guestrin. Near-optimal observation selection using submodular functions. In AAAI, volume 7, pages 1650-1654, 2007.
  8. Hui Lin and Jeff Bilmes. A class of submodular functions for document summarization. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 510-520, 2011.
  9. Michel Minoux. Accelerated greedy algorithms for maximizing submodular set functions. In Optimization Techniques, pages 234-243. Springer, 1978.
  10. Baharan Mirzasoleiman, Ashwinkumar Badanidiyuru, Amin Karbasi, Jan Vondrák, and Andreas Krause. Lazier than lazy greedy. In Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015.
  11. George L. Nemhauser, Laurence A. Wolsey, and Marshall L. Fisher. An analysis of approximations for maximizing submodular set functions I. Mathematical Programming, 14(1):265-294, 1978.
  12. Sofia Maria Nikolakaki, Alina Ene, and Evimaria Terzi. An efficient framework for balancing submodularity and cost. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pages 1256-1266, 2021.
  13. Maxim Sviridenko, Jan Vondrák, and Justin Ward. Optimal approximation for submodular and supermodular optimization with bounded curvature. Mathematics of Operations Research, 42(4):1197-1218, 2017.
  14. Shaojie Tang. Beyond pointwise submodularity: Non-monotone adaptive submodular maximization in linear time. Theoretical Computer Science, 850:249-261, 2021.
  15. Shaojie Tang. Beyond pointwise submodularity: Non-monotone adaptive submodular maximization subject to knapsack and k-system constraints. In International Conference on Modelling, Computation and Optimization in Information Systems and Management Sciences. Springer, 2021.
  16. Shaojie Tang and Jing Yuan. Influence maximization with partial feedback. Operations Research Letters, 48(1):24-28, 2020.
  17. Jing Yuan and Shaojie Tang. No time to observe: Adaptive influence maximization with partial feedback. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, pages 3908-3914, 2017.