2 Search Results for "Saig, Eden"


Document
The Complexity of User Retention

Authors: Eli Ben-Sasson and Eden Saig

Published in: LIPIcs, Volume 124, 10th Innovations in Theoretical Computer Science Conference (ITCS 2019)


Abstract
This paper studies families of distributions T that are amenable to retentive learning, meaning that an expert can retain users who seek to predict their future, assuming user attributes are sampled from T and exposed gradually over time. Limited attention span is the main problem experts face in our model. We make two contributions. First, we formally define the notions of retentively learnable distributions and properties. Along the way, we define a retention complexity measure of distributions and a natural class of retentive scoring rules that model the way users evaluate experts they interact with. These rules are shown to be tightly connected to truth-eliciting "proper scoring rules" studied in Decision Theory since the 1950s [McCarthy, PNAS 1956]. Second, we take a first step towards relating retention complexity to other established measures in computational complexity. In particular, we show that linear properties (over the binary field) are retentively learnable, whereas random Low Density Parity Check (LDPC) codes have, with high probability, maximal retention complexity. Intriguingly, these results resemble known results from the field of property testing and suggest that deeper connections between retentive distributions and locally testable properties may exist.
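As a rough illustration of the "proper scoring rules" referenced above (the paper's retentive scoring rules are defined in the full text), the short Python sketch below checks numerically that under the logarithmic scoring rule an agent maximizes its expected score by reporting its true belief. All names and constants here are illustrative assumptions, not taken from the paper.

    import numpy as np

    def log_score(report, outcome):
        # Logarithmic scoring rule: reward is the log of the probability
        # that the reported distribution assigned to the realized outcome.
        return np.log(report[outcome])

    def expected_score(report, true_belief):
        # Expected score when outcomes are drawn from true_belief
        # but the agent announced `report`.
        return sum(p * log_score(report, i) for i, p in enumerate(true_belief))

    # Illustrative true belief over a binary outcome (not from the paper).
    true_belief = np.array([0.7, 0.3])

    # Scan candidate reports; the truthful report maximizes expected score,
    # which is what makes the logarithmic rule a *proper* scoring rule.
    candidates = [np.array([q, 1.0 - q]) for q in np.linspace(0.05, 0.95, 19)]
    best = max(candidates, key=lambda r: expected_score(r, true_belief))
    print("best report:", best)   # the candidate closest to [0.7, 0.3]

Running the sketch selects the candidate closest to the true belief [0.7, 0.3], which is exactly the truth-eliciting property that connects retentive scoring rules to proper scoring rules.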

Cite as

Eli Ben-Sasson and Eden Saig. The Complexity of User Retention. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 12:1-12:30, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{bensasson_et_al:LIPIcs.ITCS.2019.12,
  author =	{Ben-Sasson, Eli and Saig, Eden},
  title =	{{The Complexity of User Retention}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{12:1--12:30},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.12},
  URN =		{urn:nbn:de:0030-drops-101053},
  doi =		{10.4230/LIPIcs.ITCS.2019.12},
  annote =	{Keywords: retentive learning, retention complexity, information elicitation, proper scoring rules}
}
Document
Brief Announcement
Brief Announcement: Towards an Abstract Model of User Retention Dynamics

Authors: Eli Ben-Sasson and Eden Saig

Published in: LIPIcs, Volume 107, 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)


Abstract
A theoretical model is suggested for abstracting the interaction between an expert system and its users, with a focus on reputation and incentive compatibility. The model assumes users interact with the system while keeping in mind a single "retention parameter" that measures the strength of their belief in its predictive power, and the system's objective is to reinforce and maximize this parameter through "informative" and "correct" predictions. We define a natural class of retentive scoring rules to model the way users update their retention parameter and thus evaluate the experts they interact with. Assuming agents in the model have an incentive to report their true belief, these rules are shown to be tightly connected to truth-eliciting "proper scoring rules" studied in Decision Theory. The difference between users and experts is modeled by imposing different limits on their predictive abilities, characterized by a parameter called memory span. We prove the monotonicity theorem ("more knowledge is better"), which shows that experts with larger memory span retain users better in expectation. Finally, we focus on the intrinsic properties of phenomena that are amenable to collaborative discovery with an expert system. Assuming user types (or "identities") are sampled from a distribution D, the retention complexity of D is the minimal initial retention value (or "strength of faith") that a user must have before approaching the expert, in order for the expert to retain that user throughout the collaborative discovery, during which the user "discovers" his true "identity". We then take a first step towards relating retention complexity to other established computational complexity measures by studying retention dynamics when D is a uniform distribution over a linear space.
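To give a concrete feel for the retention dynamics described above (the exact model and the definition of retention complexity are in the full paper), the Python sketch below simulates a toy interaction in which a scalar retention parameter grows on correct predictions and shrinks on mistakes, and then searches for the smallest initial value that keeps the user through the whole interaction. Every update rule and constant here is an assumption made purely for illustration, not the paper's model.

    import random

    def run_interaction(initial_retention, num_rounds=20, expert_accuracy=0.8, seed=0):
        # Toy retention dynamic: the retention parameter grows on a correct
        # prediction and shrinks on a mistake; the user leaves as soon as it
        # drops to zero or below.
        rng = random.Random(seed)
        retention = initial_retention
        for _ in range(num_rounds):
            correct = rng.random() < expert_accuracy
            retention += 1.0 if correct else -2.0   # made-up reward/penalty
            if retention <= 0:
                return False   # user abandoned the expert
        return True            # user was retained throughout

    def minimal_initial_retention(step=0.5, cap=20.0):
        # Smallest initial retention (on this grid) that keeps the user through
        # the whole interaction -- a toy analogue of retention complexity.
        r = step
        while r <= cap:
            if run_interaction(r):
                return r
            r += step
        return None

    print("toy minimal initial retention:", minimal_initial_retention())

The grid search over initial values loosely mirrors the "minimal initial retention value" that defines retention complexity in the abstract.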

Cite as

Eli Ben-Sasson and Eden Saig. Brief Announcement: Towards an Abstract Model of User Retention Dynamics. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 164:1-164:4, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{bensasson_et_al:LIPIcs.ICALP.2018.164,
  author =	{Ben-Sasson, Eli and Saig, Eden},
  title =	{{Brief Announcement: Towards an Abstract Model of User Retention Dynamics}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{164:1--164:4},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.164},
  URN =		{urn:nbn:de:0030-drops-91682},
  doi =		{10.4230/LIPIcs.ICALP.2018.164},
  annote =	{Keywords: information elicitation, proper scoring rules, retention complexity}
}
