Search Results

Documents authored by Wu, Yi


Found 3 Possible Name Variants:

Wu, Yi

Document
Computational Complexity of Certifying Restricted Isometry Property

Authors: Abhiram Natarajan and Yi Wu

Published in: LIPIcs, Volume 28, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)


Abstract
Given a matrix A with n columns, a number k < n, and 0 < delta < 1, A is (k,delta)-RIP (Restricted Isometry Property) if, for any vector x in R^n with at most k non-zero coordinates, (1-delta)|x|^2 <= |Ax|^2 <= (1+delta)|x|^2. In many applications, such as compressed sensing and sparse recovery, it is desirable to construct RIP matrices with a large k and a small delta. Given the efficacy of random constructions in generating useful RIP matrices, the problem of certifying the RIP parameters of a matrix has become important. In this paper, we prove that it is hard to approximate the RIP parameters of a matrix assuming the Small-Set-Expansion Hypothesis. Specifically, we prove that for any arbitrarily large constant C > 0 and any arbitrarily small constant 0 < delta < 1, there exists some k such that, given a matrix M, it is SSE-Hard to distinguish the following two cases: i) (Highly RIP) M is (k,delta)-RIP; ii) (Far away from RIP) M is not (k/C,1-delta)-RIP. Most previous results on the hardness of RIP certification hold only when delta = o(1). In practice, it is of interest to understand the complexity of certifying a matrix with delta close to sqrt(2)-1, since matrices with delta = sqrt(2)-1 suffice for many real applications. Our hardness result holds for any constant delta. In particular, even if delta is very small, i.e., the matrix is in fact strongly RIP, certifying that the matrix exhibits even weak RIP is itself SSE-Hard. In order to prove the hardness result, we prove a variant of Cheeger's Inequality for sparse vectors.
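To make the (k,delta)-RIP definition concrete, the following is a minimal brute-force checker, not the paper's construction; the Gaussian test matrix and the parameter choices are assumptions for illustration. Its running time is exponential in k, which is consistent with the difficulty of efficient certification discussed in the abstract.

```python
# Brute-force (k, delta)-RIP check -- illustrative sketch only.
import itertools
import numpy as np


def is_k_delta_rip(A, k, delta):
    """Return True if (1-delta)*||x||^2 <= ||Ax||^2 <= (1+delta)*||x||^2
    holds for every x with at most k non-zero coordinates.

    Checking supports of size exactly k suffices: a sparser vector lives
    inside some size-k support, and the eigenvalues of the corresponding
    principal submatrix of A^T A interlace with those of the larger one.
    """
    n = A.shape[1]  # x ranges over R^n, so n is the number of columns of A
    for support in itertools.combinations(range(n), k):
        gram = A[:, support].T @ A[:, support]
        eigenvalues = np.linalg.eigvalsh(gram)  # ascending order
        if eigenvalues[0] < 1 - delta or eigenvalues[-1] > 1 + delta:
            return False
    return True


# Toy usage with an assumed random Gaussian matrix and delta = sqrt(2) - 1.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 12)) / np.sqrt(60)
print(is_k_delta_rip(A, k=2, delta=np.sqrt(2) - 1))
```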

Cite as

Abhiram Natarajan and Yi Wu. Computational Complexity of Certifying Restricted Isometry Property. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 371-380, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


BibTeX

@InProceedings{natarajan_et_al:LIPIcs.APPROX-RANDOM.2014.371,
  author =	{Natarajan, Abhiram and Wu, Yi},
  title =	{{Computational Complexity of Certifying Restricted Isometry Property}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{371--380},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.371},
  URN =		{urn:nbn:de:0030-drops-47092},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.371},
  annote =	{Keywords: Restricted Isometry Property, RIP, RIP Certification, Sparse Cheeger Inequality, SSE Hard}
}

Wu, Yifan

Document
Smooth Calibration and Decision Making

Authors: Jason Hartline, Yifan Wu, and Yunran Yang

Published in: LIPIcs, Volume 329, 6th Symposium on Foundations of Responsible Computing (FORC 2025)


Abstract
Calibration requires predictor outputs to be consistent with their Bayesian posteriors. For machine learning predictors that do not distinguish between small perturbations, calibration errors are continuous in predictions, e.g., smooth calibration error [Foster and Hart, 2018] and distance to calibration [Błasiok et al., 2023]. In contrast, decision-makers who use predictions make optimal decisions that change discontinuously with the predicted probability, and they experience loss from miscalibration discontinuously. Calibration errors for decision-making are thus discontinuous, e.g., Expected Calibration Error (ECE) [Foster and Vohra, 1997] and Calibration Decision Loss (CDL) [Hu and Wu, 2024]. Predictors with a low calibration error for machine learning may therefore suffer a high calibration error for decision-making, i.e., they may not be trustworthy for decision-makers who optimize under the assumption that the predictions are correct. It is natural to ask whether a predictor with a low calibration error for machine learning can be post-processed, without loss, into one with a low calibration error for decision-making. In our paper, we show that post-processing an online predictor with ε distance to calibration achieves O(√ε) ECE and CDL, which is asymptotically optimal. The post-processing algorithm adds noise to make the predictions differentially private. However, this bound, while optimal for post-processing predictors with low distance to calibration, is weaker than the guarantees of existing online calibration algorithms that directly optimize for ECE and CDL.
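As a concrete anchor for one of the calibration errors named above, here is a common binned estimator of Expected Calibration Error. It is an illustrative sketch, not the paper's post-processing algorithm, and the binning scheme and toy data are assumptions; the paper's exact (unbinned, online) definitions may differ.

```python
# Binned Expected Calibration Error -- illustrative sketch only.
import numpy as np


def expected_calibration_error(predictions, outcomes, num_bins=10):
    """Group predictions into equal-width bins over [0, 1] and sum
    |mean prediction - empirical frequency| weighted by bin mass."""
    predictions = np.asarray(predictions, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    bins = np.minimum((predictions * num_bins).astype(int), num_bins - 1)
    ece = 0.0
    for b in range(num_bins):
        mask = bins == b
        if not mask.any():
            continue
        weight = mask.mean()  # fraction of predictions falling in this bin
        gap = abs(predictions[mask].mean() - outcomes[mask].mean())
        ece += weight * gap
    return ece


# Toy usage: predicted probabilities and binary outcomes.
print(expected_calibration_error([0.2, 0.8, 0.7, 0.3], [0, 1, 1, 1]))
```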

Cite as

Jason Hartline, Yifan Wu, and Yunran Yang. Smooth Calibration and Decision Making. In 6th Symposium on Foundations of Responsible Computing (FORC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 329, pp. 16:1-16:26, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{hartline_et_al:LIPIcs.FORC.2025.16,
  author =	{Hartline, Jason and Wu, Yifan and Yang, Yunran},
  title =	{{Smooth Calibration and Decision Making}},
  booktitle =	{6th Symposium on Foundations of Responsible Computing (FORC 2025)},
  pages =	{16:1--16:26},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-367-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{329},
  editor =	{Bun, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2025.16},
  URN =		{urn:nbn:de:0030-drops-231438},
  doi =		{10.4230/LIPIcs.FORC.2025.16},
  annote =	{Keywords: Calibration, calibration errors, decision making, differential privacy}
}

Document
Is a Dataframe Just a Table?

Authors: Yifan Wu

Published in: OASIcs, Volume 76, 10th Workshop on Evaluation and Usability of Programming Languages and Tools (PLATEAU 2019)


Abstract
Querying data is core to both databases and data science. However, the two communities have seemingly different concepts and use cases. As a result, both designers and users of these query languages disagree on whether the core abstractions, dataframes (data science) and tables (databases), and their operations are the same. To investigate the difference from a PL-HCI perspective, we identify the basic affordances provided by tables and dataframes and examine how programming experiences over tables and dataframes differ. We show that the data structures nudge programmers to query and store their data in different ways. We hope this case study can clarify confusion, dispel misinformation, increase cross-pollination between the two communities, and identify open PL-HCI questions.
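One way to see the affordance gap the abstract describes is the treatment of row order: a dataframe carries an intrinsic positional index, while a relational table is an unordered collection unless an ORDER BY is supplied. The sketch below illustrates that general distinction only; it is not an example taken from the paper.

```python
# Dataframe vs. table affordances -- illustrative sketch only.
import sqlite3

import pandas as pd

df = pd.DataFrame({"name": ["a", "b", "c"], "score": [3, 1, 2]})

# Dataframe affordance: rows have an intrinsic order and positional index,
# so "the second row" is a well-defined question.
second_row = df.iloc[1]

# Column-wise transformation without naming individual rows.
df["score_squared"] = df["score"] ** 2

# Table view: a relation is an unordered multiset; order must be requested.
conn = sqlite3.connect(":memory:")
df.to_sql("scores", conn, index=False)
ordered = conn.execute(
    "SELECT name, score FROM scores ORDER BY score DESC"
).fetchall()

print(second_row["name"], ordered)
```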

Cite as

Yifan Wu. Is a Dataframe Just a Table?. In 10th Workshop on Evaluation and Usability of Programming Languages and Tools (PLATEAU 2019). Open Access Series in Informatics (OASIcs), Volume 76, pp. 6:1-6:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{wu:OASIcs.PLATEAU.2019.6,
  author =	{Wu, Yifan},
  title =	{{Is a Dataframe Just a Table?}},
  booktitle =	{10th Workshop on Evaluation and Usability of Programming Languages and Tools (PLATEAU 2019)},
  pages =	{6:1--6:10},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-135-1},
  ISSN =	{2190-6807},
  year =	{2020},
  volume =	{76},
  editor =	{Chasins, Sarah and Glassman, Elena L. and Sunshine, Joshua},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.PLATEAU.2019.6},
  URN =		{urn:nbn:de:0030-drops-119600},
  doi =		{10.4230/OASIcs.PLATEAU.2019.6},
  annote =	{Keywords: Usability of Programming Languages}
}

Wu, Yi-Chieh

Document
The Most Parsimonious Reconciliation Problem in the Presence of Incomplete Lineage Sorting and Hybridization Is NP-Hard

Authors: Matthew LeMay, Yi-Chieh Wu, and Ran Libeskind-Hadas

Published in: LIPIcs, Volume 201, 21st International Workshop on Algorithms in Bioinformatics (WABI 2021)


Abstract
The maximum parsimony phylogenetic reconciliation problem seeks to explain incongruity between a gene phylogeny and a species phylogeny with respect to a set of evolutionary events. While the reconciliation problem is well-studied for species and gene trees subject to events such as duplication, transfer, loss, and deep coalescence, recent work has examined species phylogenies that incorporate hybridization and are thus represented by networks rather than trees. In this paper, we show that the problem of computing a maximum parsimony reconciliation for a gene tree and species network is NP-hard even when only considering deep coalescence. This result suggests that future work on maximum parsimony reconciliation for species networks should explore approximation algorithms and heuristics.

Cite as

Matthew LeMay, Yi-Chieh Wu, and Ran Libeskind-Hadas. The Most Parsimonious Reconciliation Problem in the Presence of Incomplete Lineage Sorting and Hybridization Is NP-Hard. In 21st International Workshop on Algorithms in Bioinformatics (WABI 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 201, pp. 1:1-1:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{lemay_et_al:LIPIcs.WABI.2021.1,
  author =	{LeMay, Matthew and Wu, Yi-Chieh and Libeskind-Hadas, Ran},
  title =	{{The Most Parsimonious Reconciliation Problem in the Presence of Incomplete Lineage Sorting and Hybridization Is NP-Hard}},
  booktitle =	{21st International Workshop on Algorithms in Bioinformatics (WABI 2021)},
  pages =	{1:1--1:10},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-200-6},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{201},
  editor =	{Carbone, Alessandra and El-Kebir, Mohammed},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.WABI.2021.1},
  URN =		{urn:nbn:de:0030-drops-143546},
  doi =		{10.4230/LIPIcs.WABI.2021.1},
  annote =	{Keywords: phylogenetics, reconciliation, deep coalescence, hybridization, NP-hardness}
}