Search Results

Documents authored by Li, Jian


Found 3 Possible Name Variants:

Li, Jian

Document
Approximation Algorithms for Clustering with Dynamic Points

Authors: Shichuan Deng, Jian Li, and Yuval Rabani

Published in: LIPIcs, Volume 173, 28th Annual European Symposium on Algorithms (ESA 2020)


Abstract
In many classic clustering problems, we seek to sketch a massive data set of n points (a.k.a. clients) in a metric space, by segmenting them into k categories or clusters, each cluster represented concisely by a single point in the metric space (a.k.a. the cluster's center or its facility). The goal is to find such a sketch that minimizes some objective that depends on the distances between the clients and their respective facilities (the objective is a.k.a. the service cost). Two notable examples are the k-center/k-supplier problem, where the objective is to minimize the maximum distance from any client to its facility, and the k-median problem, where the objective is to minimize the sum over all clients of the distance from the client to its facility. In practical applications of clustering, the data set may evolve over time, reflecting an evolution of the underlying clustering model. Thus, in such applications, a good clustering must simultaneously represent the temporal data set well, but also not change too drastically between time steps. In this paper, we initiate the study of a dynamic version of clustering problems that aims to capture these considerations. In this version there are T time steps, and in each time step t ∈ {1,2,…,T}, the set of clients to be clustered may change, and we can move the k facilities between time steps. The general goal is to minimize certain combinations of the service cost and the facility movement cost, or to minimize one subject to some constraints on the other. More specifically, we study two concrete problems in this framework: the Dynamic Ordered k-Median and the Dynamic k-Supplier problem. Our technical contributions are as follows:

- We consider the Dynamic Ordered k-Median problem, where the objective is to minimize the weighted sum of ordered distances over all time steps, plus the total cost of moving the facilities between time steps. We present one constant-factor approximation algorithm for T = 2 and another approximation algorithm for fixed T ≥ 3.

- We consider the Dynamic k-Supplier problem, where the objective is to minimize the maximum distance from any client to its facility, subject to the constraint that between time steps the maximum distance moved by any facility is no more than a given threshold. When the number of time steps T is 2, we present a simple constant-factor approximation algorithm and a bi-criteria constant-factor approximation algorithm for the outlier version, where some of the clients can be discarded. We also show that it is NP-hard to approximate the problem to within any factor for T ≥ 3.
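
The two static objectives named above (k-center/k-supplier and k-median) translate directly into code. The following sketch is only an illustration, not from the paper, written in Python with Euclidean distance standing in for a general metric: it evaluates the two service costs of a candidate facility set, plus the facility movement cost that the dynamic versions add between consecutive time steps, under an assumed fixed matching of facilities across steps.

import math

def dist(p, q):
    # Euclidean distance; any metric works in the general setting.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def k_center_cost(clients, facilities):
    # k-center/k-supplier service cost: maximum client-to-nearest-facility distance.
    return max(min(dist(c, f) for f in facilities) for c in clients)

def k_median_cost(clients, facilities):
    # k-median service cost: sum of client-to-nearest-facility distances.
    return sum(min(dist(c, f) for f in facilities) for c in clients)

def movement_cost(facilities_t, facilities_t1):
    # Movement cost between consecutive time steps, assuming the i-th facility at
    # time t moves to the i-th position at time t+1 (the paper optimizes over which
    # facility moves where; this fixed matching is a simplification).
    return sum(dist(f, g) for f, g in zip(facilities_t, facilities_t1))

# Hypothetical instance: two time steps, k = 2 facilities.
clients_t1 = [(0.0, 0.0), (1.0, 0.0), (9.0, 9.0)]
clients_t2 = [(0.0, 1.0), (8.0, 9.0), (10.0, 9.0)]
F1 = [(0.5, 0.0), (9.0, 9.0)]
F2 = [(0.0, 0.5), (9.0, 9.0)]
total = (k_median_cost(clients_t1, F1) + k_median_cost(clients_t2, F2)
         + movement_cost(F1, F2))
print("dynamic k-median style objective:", round(total, 3))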

Cite as

Shichuan Deng, Jian Li, and Yuval Rabani. Approximation Algorithms for Clustering with Dynamic Points. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 37:1-37:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{deng_et_al:LIPIcs.ESA.2020.37,
  author =	{Deng, Shichuan and Li, Jian and Rabani, Yuval},
  title =	{{Approximation Algorithms for Clustering with Dynamic Points}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{37:1--37:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.37},
  URN =		{urn:nbn:de:0030-drops-129037},
  doi =		{10.4230/LIPIcs.ESA.2020.37},
  annote =	{Keywords: clustering, dynamic points, multi-objective optimization}
}
Document
Algorithms and Adaptivity Gaps for Stochastic k-TSP

Authors: Haotian Jiang, Jian Li, Daogao Liu, and Sahil Singla

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
Given a metric (V,d) and a root ∈ V, the classic k-TSP problem is to find a tour originating at the root of minimum length that visits at least k nodes in V. In this work, motivated by applications where the input to an optimization problem is uncertain, we study two stochastic versions of k-TSP. In Stoch-Reward k-TSP, originally defined by Ene-Nagarajan-Saket [Ene et al., 2018], each vertex v in the given metric (V,d) contains a stochastic reward R_v. The goal is to adaptively find a tour of minimum expected length that collects at least reward k; here "adaptively" means our next decision may depend on previous outcomes. Ene et al. give an O(log k)-approximation adaptive algorithm for this problem, and left open if there is an O(1)-approximation algorithm. We totally resolve their open question, and even give an O(1)-approximation non-adaptive algorithm for Stoch-Reward k-TSP. We also introduce and obtain similar results for the Stoch-Cost k-TSP problem. In this problem each vertex v has a stochastic cost C_v, and the goal is to visit and select at least k vertices to minimize the expected sum of tour length and cost of selected vertices. Besides being a natural stochastic generalization of k-TSP, this problem is also interesting because it generalizes the Price of Information framework [Singla, 2018] from deterministic probing costs to metric probing costs. Our techniques are based on two crucial ideas: "repetitions" and "critical scaling". In general, replacing a random variable with its expectation leads to very poor results. We show that for our problems, if we truncate the random variables at an ideal threshold, then their expected values form a good surrogate. Here, we rely on running several repetitions of our algorithm with the same threshold, and then argue concentration using Freedman’s and Jogdeo-Samuels' inequalities. Unfortunately, this ideal threshold depends on how far we are from achieving our target k, which a non-adaptive algorithm does not know. To overcome this barrier, we truncate the random variables at various different scales and identify a "critical" scale.
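
As a concrete reading of the Stoch-Reward k-TSP objective, the sketch below (illustrative only, not the paper's algorithm; the instance is hypothetical) estimates by simulation the expected distance travelled by a non-adaptive policy that follows a fixed vertex order and stops as soon as the collected reward reaches k.

import random

def expected_tour_length(order, d, rewards, root, k, trials=10000):
    # Monte Carlo estimate of the expected distance travelled when visiting vertices
    # in the fixed order `order`, starting at `root`, stopping as soon as the
    # accumulated stochastic reward reaches k.
    #   d[u][v]     -- metric distances
    #   rewards[v]  -- a function returning one sample of the reward R_v
    total = 0.0
    for _ in range(trials):
        pos, collected, length = root, 0, 0.0
        for v in order:
            length += d[pos][v]
            pos = v
            collected += rewards[v]()
            if collected >= k:
                break
        total += length
    return total / trials

# Hypothetical 4-vertex instance on a line: vertex i sits at coordinate i.
V = [0, 1, 2, 3]
d = {u: {v: abs(u - v) for v in V} for u in V}
rewards = {v: (lambda: 1 if random.random() < 0.5 else 0) for v in V}  # Bernoulli(1/2) rewards
print(expected_tour_length(order=[1, 2, 3], d=d, rewards=rewards, root=0, k=2))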

Cite as

Haotian Jiang, Jian Li, Daogao Liu, and Sahil Singla. Algorithms and Adaptivity Gaps for Stochastic k-TSP. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 45:1-45:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{jiang_et_al:LIPIcs.ITCS.2020.45,
  author =	{Jiang, Haotian and Li, Jian and Liu, Daogao and Singla, Sahil},
  title =	{{Algorithms and Adaptivity Gaps for Stochastic k-TSP}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{45:1--45:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.45},
  URN =		{urn:nbn:de:0030-drops-117308},
  doi =		{10.4230/LIPIcs.ITCS.2020.45},
  annote =	{Keywords: approximation algorithms, stochastic optimization, travelling salesman problem}
}
Document
A PTAS for a Class of Stochastic Dynamic Programs

Authors: Hao Fu, Jian Li, and Pan Xu

Published in: LIPIcs, Volume 107, 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)


Abstract
We develop a framework for obtaining polynomial time approximation schemes (PTAS) for a class of stochastic dynamic programs. Using our framework, we obtain the first PTAS for the following stochastic combinatorial optimization problems:

1) Probemax [Munagala, 2016]: We are given a set of n items; each item i in [n] has a value X_i, which is an independent random variable with a known (discrete) distribution pi_i. We can probe a subset P subseteq [n] of items sequentially. Each time after probing an item i, we observe its value realization, which follows the distribution pi_i. We can adaptively probe at most m items and each item can be probed at most once. The reward is the maximum among the m realized values. Our goal is to design an adaptive probing policy such that the expected value of the reward is maximized. To the best of our knowledge, the best known approximation ratio is 1-1/e, due to Asadpour et al. [Asadpour and Nazerzadeh, 2015]. We also obtain PTASes for some generalizations and variants of the problem.

2) Committed Pandora's Box [Weitzman, 1979; Singla, 2018]: We are given a set of n boxes. For each box i in [n], the cost c_i is deterministic and the value X_i is an independent random variable with a known (discrete) distribution pi_i. Opening a box i incurs a cost of c_i. We can adaptively choose to open the boxes (and observe their values) or stop. We want to maximize the expectation of the realized value of the last opened box minus the total opening cost.

3) Stochastic Target [Ilhan et al., 2011]: Given a predetermined target T and n items, we can adaptively insert the items into a knapsack and insert at most m items. Each item i has a value X_i, which is an independent random variable with a known (discrete) distribution. Our goal is to design an adaptive policy such that the probability that the total value of all inserted items is at least T is maximized. We provide the first bi-criteria PTAS for the problem.

4) Stochastic Blackjack Knapsack [Levin and Vainer, 2014]: We are given a knapsack of capacity C and probability distributions of n independent random variables X_i. Each item i in [n] has a size X_i and a profit p_i. We can adaptively insert the items into the knapsack, as long as the capacity constraint is not violated. We want to maximize the expected total profit of all inserted items. If the capacity constraint is violated, we lose all the profit. We provide the first bi-criteria PTAS for the problem.
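
To make the Probemax objective from item 1) above concrete, here is a small simulation sketch (not the paper's PTAS; the instance and the restriction to non-adaptive policies are illustrative assumptions): it estimates the expected reward of a fixed set of at most m probed items, the reward being the maximum realized value among them, and picks the best such set by brute force.

import random
from itertools import combinations

def sample_value(dist):
    # dist is a discrete distribution given as a list of (value, probability) pairs.
    r, acc = random.random(), 0.0
    for value, p in dist:
        acc += p
        if r <= acc:
            return value
    return dist[-1][0]

def expected_max_reward(probe_set, dists, trials=20000):
    # Monte Carlo estimate of E[max_{i in probe_set} X_i].
    total = 0.0
    for _ in range(trials):
        total += max(sample_value(dists[i]) for i in probe_set)
    return total / trials

# Hypothetical instance: three items, probe at most m = 2 of them.
dists = {
    0: [(0, 0.5), (10, 0.5)],   # X_0 is 0 or 10, each with probability 1/2
    1: [(4, 1.0)],              # X_1 is deterministically 4
    2: [(0, 0.9), (30, 0.1)],   # X_2 is a long shot
}
m = 2
best = max(combinations(dists, m), key=lambda S: expected_max_reward(S, dists))
print("best non-adaptive probe set:", best)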

Cite as

Hao Fu, Jian Li, and Pan Xu. A PTAS for a Class of Stochastic Dynamic Programs. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 56:1-56:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{fu_et_al:LIPIcs.ICALP.2018.56,
  author =	{Fu, Hao and Li, Jian and Xu, Pan},
  title =	{{A PTAS for a Class of Stochastic Dynamic Programs}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{56:1--56:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.56},
  URN =		{urn:nbn:de:0030-drops-90609},
  doi =		{10.4230/LIPIcs.ICALP.2018.56},
  annote =	{Keywords: stochastic optimization, dynamic program, markov decision process, block policy, approximation algorithm}
}
Document
Odd Yao-Yao Graphs are Not Spanners

Authors: Yifei Jin, Jian Li, and Wei Zhan

Published in: LIPIcs, Volume 99, 34th International Symposium on Computational Geometry (SoCG 2018)


Abstract
It is a long-standing open problem whether Yao-Yao graphs YY_{k} are all spanners [Li et al. 2002]. Bauer and Damian [Bauer and Damian, 2012] showed that all YY_{6k} for k >= 6 are spanners. Li and Zhan [Li and Zhan, 2016] generalized their result and proved that all even Yao-Yao graphs YY_{2k} are spanners (for k >= 42). However, their technique cannot be extended to odd Yao-Yao graphs, and whether those are spanners remains elusive. In this paper, we show that, surprisingly, for any integer k >= 1, there exist odd Yao-Yao graph YY_{2k+1} instances which are not spanners.

Cite as

Yifei Jin, Jian Li, and Wei Zhan. Odd Yao-Yao Graphs are Not Spanners. In 34th International Symposium on Computational Geometry (SoCG 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 99, pp. 49:1-49:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{jin_et_al:LIPIcs.SoCG.2018.49,
  author =	{Jin, Yifei and Li, Jian and Zhan, Wei},
  title =	{{Odd Yao-Yao Graphs are Not Spanners}},
  booktitle =	{34th International Symposium on Computational Geometry (SoCG 2018)},
  pages =	{49:1--49:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-066-8},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{99},
  editor =	{Speckmann, Bettina and T\'{o}th, Csaba D.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2018.49},
  URN =		{urn:nbn:de:0030-drops-87621},
  doi =		{10.4230/LIPIcs.SoCG.2018.49},
  annote =	{Keywords: Odd Yao-Yao Graph, Spanner, Counterexample}
}
Document
SVM via Saddle Point Optimization: New Bounds and Distributed Algorithms

Authors: Lingxiao Huang, Yifei Jin, and Jian Li

Published in: LIPIcs, Volume 101, 16th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2018)


Abstract
We study two important SVM variants: hard-margin SVM (for linearly separable cases) and nu-SVM (for linearly non-separable cases). We propose new algorithms from the perspective of saddle point optimization. Our algorithms achieve (1-epsilon)-approximations with running time O~(nd + n sqrt{d / epsilon}) for both variants, where n is the number of points and d is the dimensionality. To the best of our knowledge, the current best algorithm for nu-SVM is based on a quadratic programming approach, which requires Omega(n^2 d) time in the worst case [Joachims, 1998; Platt, 1999]; in this paper, we provide the first nearly linear time algorithm for nu-SVM. The current best algorithm for hard-margin SVM, the Gilbert algorithm [Gärtner and Jaggi, 2009], requires O(nd / epsilon) time; our algorithm improves the running time by a factor of sqrt{d}/sqrt{epsilon}. Moreover, our algorithms can be implemented in distributed settings naturally. We prove that our algorithms require O~(k(d + sqrt{d/epsilon})) communication cost, where k is the number of clients, which almost matches the theoretical lower bound. Numerical experiments support our theory and show that our algorithms converge faster on high dimensional, large and dense data sets, as compared to previous methods.
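
The hard-margin baseline mentioned above can be phrased as a polytope-distance problem, which the Gilbert algorithm attacks with repeated exact line searches toward extreme points. The sketch below is a minimal illustration of that classical baseline, not the authors' saddle-point algorithm, and the instance is hypothetical: it approximates the minimum-norm point of the difference body conv(pos) - conv(neg), whose direction is the max-margin normal when the classes are linearly separable.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def sub(u, v):
    return [a - b for a, b in zip(u, v)]

def gilbert_hard_margin(pos, neg, iters=2000, tol=1e-9):
    # Gilbert-style (Frank-Wolfe with exact line search) iteration for the
    # minimum-norm point of the Minkowski difference conv(pos) - conv(neg).
    w = sub(pos[0], neg[0])                      # start from any difference point
    for _ in range(iters):
        p = min(pos, key=lambda x: dot(w, x))    # support point of conv(pos) in direction -w
        q = max(neg, key=lambda x: dot(w, x))    # support point of conv(neg) in direction +w
        z = sub(p, q)                            # extreme point of the difference body
        if dot(w, w) - dot(w, z) <= tol:         # (approximate) optimality condition
            break
        d = sub(w, z)
        t = min(1.0, max(0.0, dot(w, d) / dot(d, d)))   # exact line search on the segment [w, z]
        w = [wi - t * di for wi, di in zip(w, d)]
    return w

# Hypothetical separable 2-D instance.
pos = [[2.0, 2.0], [3.0, 1.5], [2.5, 3.0]]
neg = [[0.0, 0.0], [0.5, -1.0], [-1.0, 0.5]]
print("max-margin normal direction (approx.):", gilbert_hard_margin(pos, neg))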

Cite as

Lingxiao Huang, Yifei Jin, and Jian Li. SVM via Saddle Point Optimization: New Bounds and Distributed Algorithms. In 16th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 101, pp. 25:1-25:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{huang_et_al:LIPIcs.SWAT.2018.25,
  author =	{Huang, Lingxiao and Jin, Yifei and Li, Jian},
  title =	{{SVM via Saddle Point Optimization: New Bounds and Distributed Algorithms}},
  booktitle =	{16th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2018)},
  pages =	{25:1--25:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-068-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{101},
  editor =	{Eppstein, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SWAT.2018.25},
  URN =		{urn:nbn:de:0030-drops-88515},
  doi =		{10.4230/LIPIcs.SWAT.2018.25},
  annote =	{Keywords: nu-SVM, hard-margin SVM, saddle point optimization, distributed algorithm}
}
Document
k-Regret Minimizing Set: Efficient Algorithms and Hardness

Authors: Wei Cao, Jian Li, Haitao Wang, Kangning Wang, Ruosong Wang, Raymond Chi-Wing Wong, and Wei Zhan

Published in: LIPIcs, Volume 68, 20th International Conference on Database Theory (ICDT 2017)


Abstract
We study the k-regret minimizing query (k-RMS), which is a useful operator for supporting multi-criteria decision-making. Given two integers k and r, a k-RMS returns r tuples from the database which minimize the k-regret ratio, defined as one minus the worst ratio between the k-th maximum utility score among all tuples in the database and the maximum utility score of the r tuples returned. A solution set contains only r tuples, enjoying the benefits of both top-k queries and skyline queries. Proposed in 2012, the query has been studied extensively in recent years. In this paper, we advance the theory and the practice of k-RMS in the following aspects. First, we develop efficient algorithms for k-RMS (and its decision version) when the dimensionality is 2; the running times of our algorithms improve on those of previous ones. Second, we show that k-RMS is NP-hard even when the dimensionality is 3. This provides a complete characterization of the complexity of k-RMS and answers an open question from previous studies. In addition, we present approximation algorithms for the problem when the dimensionality is 3 or larger.
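
As a hedged illustration of the objective (the abstract does not fix the utility class; nonnegative linear utilities are assumed here, as is standard for k-RMS), the maximum k-regret ratio of a candidate set can be estimated by sampling utility vectors:

import random

def k_regret_ratio(database, S, k, samples=5000):
    # Monte Carlo estimate of the maximum k-regret ratio of subset S over random
    # nonnegative linear utilities u: for each u, the ratio is
    # max(0, kth_best_db(u) - best_in_S(u)) / kth_best_db(u).
    dim = len(database[0])
    worst = 0.0
    for _ in range(samples):
        u = [random.random() for _ in range(dim)]
        db_scores = sorted((sum(a * b for a, b in zip(u, t)) for t in database), reverse=True)
        kth_best = db_scores[k - 1]
        best_in_s = max(sum(a * b for a, b in zip(u, t)) for t in S)
        if kth_best > 0:
            worst = max(worst, max(0.0, kth_best - best_in_s) / kth_best)
    return worst

# Hypothetical 2-D database (tuples scored on two criteria) and a candidate set of r = 2 tuples.
db = [(1.0, 0.0), (0.0, 1.0), (0.8, 0.8), (0.5, 0.4)]
S = [(1.0, 0.0), (0.8, 0.8)]
print("estimated 1-regret ratio of S:", round(k_regret_ratio(db, S, k=1), 3))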

Cite as

Wei Cao, Jian Li, Haitao Wang, Kangning Wang, Ruosong Wang, Raymond Chi-Wing Wong, and Wei Zhan. k-Regret Minimizing Set: Efficient Algorithms and Hardness. In 20th International Conference on Database Theory (ICDT 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 68, pp. 11:1-11:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{cao_et_al:LIPIcs.ICDT.2017.11,
  author =	{Cao, Wei and Li, Jian and Wang, Haitao and Wang, Kangning and Wang, Ruosong and Wong, Raymond Chi-Wing and Zhan, Wei},
  title =	{{k-Regret Minimizing Set: Efficient Algorithms and Hardness}},
  booktitle =	{20th International Conference on Database Theory (ICDT 2017)},
  pages =	{11:1--11:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-024-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{68},
  editor =	{Benedikt, Michael and Orsi, Giorgio},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICDT.2017.11},
  URN =		{urn:nbn:de:0030-drops-70569},
  doi =		{10.4230/LIPIcs.ICDT.2017.11},
  annote =	{Keywords: multi-criteria decision-making, regret minimizing set, top-k query}
}
Document
epsilon-Kernel Coresets for Stochastic Points

Authors: Lingxiao Huang, Jian Li, Jeff M. Phillips, and Haitao Wang

Published in: LIPIcs, Volume 57, 24th Annual European Symposium on Algorithms (ESA 2016)


Abstract
With the dramatic growth in the number of application domains that generate probabilistic, noisy and uncertain data, there has been an increasing interest in designing algorithms for geometric or combinatorial optimization problems over such data. In this paper, we initiate the study of constructing epsilon-kernel coresets for uncertain points. We consider uncertainty in the existential model, where each point's location is fixed but the point only occurs with a certain probability, and the locational model, where each point has a probability distribution describing its location. An epsilon-kernel coreset approximates the width of a point set in any direction. We consider approximating the expected width (an epsilon-EXP-KERNEL), as well as the probability distribution on the width (an (epsilon, tau)-QUANT-KERNEL), for any direction. We show that there exists a set of O(epsilon^{-(d-1)/2}) deterministic points which approximate the expected width under the existential and locational models, and we provide efficient algorithms for constructing such coresets. We show, however, that it is not always possible to find a subset of the original uncertain points which provides such an approximation; nevertheless, if the existential probability of each point is lower bounded by a constant, an epsilon-EXP-KERNEL is still possible. We also provide efficient algorithms for constructing an (epsilon, tau)-QUANT-KERNEL coreset in nearly linear time. Our techniques utilize or connect to several important notions in probability and geometry, such as Kolmogorov distances, VC uniform convergence and Tukey depth, and may be useful in other geometric optimization problems in stochastic settings. Finally, combining with known techniques, we show a few applications to approximating the extent of uncertain functions, maintaining extent measures for stochastic moving points, and some shape fitting problems under uncertainty.
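
The width objective that an epsilon-kernel is meant to approximate can be stated directly in code. The sketch below (illustrative only, standard library, hypothetical instance) computes the width of a realized point set in a given direction and a Monte Carlo estimate of the expected width under the existential model, where each point is present independently with its own probability.

import math, random

def width(points, direction):
    # Width in a direction: max projection minus min projection onto the unit vector.
    proj = [sum(a * b for a, b in zip(p, direction)) for p in points]
    return max(proj) - min(proj) if proj else 0.0

def expected_width(points, probs, direction, trials=20000):
    # Existential model: point i appears independently with probability probs[i].
    total = 0.0
    for _ in range(trials):
        realized = [p for p, pr in zip(points, probs) if random.random() < pr]
        total += width(realized, direction)
    return total / trials

# Hypothetical uncertain point set in the plane.
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0), (3.0, 1.0)]
probs = [1.0, 0.5, 0.9, 0.1]
theta = math.pi / 6
u = (math.cos(theta), math.sin(theta))
print("expected width in direction u:", round(expected_width(pts, probs, u), 3))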

Cite as

Lingxiao Huang, Jian Li, Jeff M. Phillips, and Haitao Wang. epsilon-Kernel Coresets for Stochastic Points. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 50:1-50:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{huang_et_al:LIPIcs.ESA.2016.50,
  author =	{Huang, Lingxiao and Li, Jian and Phillips, Jeff M. and Wang, Haitao},
  title =	{{epsilon-Kernel Coresets for Stochastic Points}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{50:1--50:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.50},
  URN =		{urn:nbn:de:0030-drops-63921},
  doi =		{10.4230/LIPIcs.ESA.2016.50},
  annote =	{Keywords: e-kernel, coreset, stochastic point, shape fitting}
}
Document
Almost All Even Yao-Yao Graphs Are Spanners

Authors: Jian Li and Wei Zhan

Published in: LIPIcs, Volume 57, 24th Annual European Symposium on Algorithms (ESA 2016)


Abstract
It is an open problem whether Yao-Yao graphs YY_{k} (also known as sparse-Yao graphs) are all spanners when the integer parameter k is large enough. In this paper we show that, for any integer k >= 42, the Yao-Yao graph YY_{2k} is a t_k-spanner, with stretch factor t_k = 6.03+O(k^{-1}) when k tends to infinity. Our result generalizes the best known result which asserts that all YY_{6k} are spanners for k >= 6 [Bauer and Damian, SODA'13]. Our proof is also somewhat simpler.
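
For readers unfamiliar with the object, here is a sketch assuming the standard construction (the Yao step keeps, for each point and each of the 2k cones around it, the shortest outgoing edge; the reverse Yao step then keeps, per point and cone, only the shortest incoming Yao edge), together with a brute-force stretch-factor check; none of this code is from the paper.

import math
from itertools import combinations

def cone_index(p, q, k):
    # Index of the cone (angle 2*pi/k, apex at p) containing q.
    angle = math.atan2(q[1] - p[1], q[0] - p[0]) % (2 * math.pi)
    return int(angle / (2 * math.pi / k))

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def yao_yao_graph(points, k):
    # Yao step: from each point, keep the shortest outgoing edge in each cone.
    yao = set()
    for p in points:
        best = {}
        for q in points:
            if q == p:
                continue
            c = cone_index(p, q, k)
            if c not in best or dist(p, q) < dist(p, best[c]):
                best[c] = q
        yao |= {(p, q) for q in best.values()}
    # Reverse Yao step: at each point, keep only the shortest incoming Yao edge per cone.
    kept = {}
    for (p, q) in yao:
        key = (q, cone_index(q, p, k))
        if key not in kept or dist(p, q) < dist(kept[key], q):
            kept[key] = p
    return {frozenset((p, q)) for (q, _), p in kept.items()}

def stretch_factor(points, edges):
    # Maximum over point pairs of graph distance divided by Euclidean distance.
    d = {a: {b: (dist(a, b) if frozenset((a, b)) in edges else math.inf) for b in points} for a in points}
    for a in points:
        d[a][a] = 0.0
    for m in points:                      # Floyd-Warshall is fine on a tiny example
        for a in points:
            for b in points:
                d[a][b] = min(d[a][b], d[a][m] + d[m][b])
    return max(d[a][b] / dist(a, b) for a, b in combinations(points, 2))

pts = [(0.0, 0.0), (1.0, 0.2), (2.0, 0.0), (1.0, 1.5), (0.3, 2.0)]
E = yao_yao_graph(pts, k=8)               # YY_8, an even Yao-Yao graph
print("stretch factor of YY_8 on this instance:", round(stretch_factor(pts, E), 3))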

Cite as

Jian Li and Wei Zhan. Almost All Even Yao-Yao Graphs Are Spanners. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 62:1-62:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{li_et_al:LIPIcs.ESA.2016.62,
  author =	{Li, Jian and Zhan, Wei},
  title =	{{Almost All Even Yao-Yao Graphs Are Spanners}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{62:1--62:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.62},
  URN =		{urn:nbn:de:0030-drops-64033},
  doi =		{10.4230/LIPIcs.ESA.2016.62},
  annote =	{Keywords: Yao-Yao graph, geometric spanner, curved trapezoid}
}
Document
Ranking with Diverse Intents and Correlated Contents

Authors: Jian Li and Zeyu Zhang

Published in: LIPIcs, Volume 24, IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2013)


Abstract
We consider the following document ranking problem: We have a collection of documents, each containing some topics (e.g. sports, politics, economics). We also have a set of users with diverse interests. Assume that user u is interested in a subset I_u of topics. Each user u is also associated with a positive integer K_u, which indicates that u can be satisfied by any K_u topics in I_u. Each document s contains information for a subset C_s of topics. The objective is to pick one document at a time such that the average satisfying time is minimized, where a user's satisfying time is the first time at which at least K_u topics in I_u are covered by the documents selected so far. Our main result is an O(rho)-approximation algorithm for the problem, where rho is the algorithmic integrality gap of the linear programming relaxation of the set cover instance defined by the documents and topics. This result generalizes the constant approximations for generalized min-sum set cover and ranking with unrelated intents, as well as the logarithmic approximation for the problem of ranking with submodular valuations (when the submodular function is the coverage function), and can be seen as an interpolation between these results. We further extend our model to the case where each user may be interested in more than one set of topics and to the case where the user's valuation function is XOS, and obtain similar results for these models.
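
The objective of this model can be evaluated directly for any fixed ordering of the documents; the short sketch below does exactly that on a hypothetical tiny instance (the convention for never-satisfied users is an assumption of the sketch, not of the paper).

def average_satisfying_time(order, docs, interests, thresholds):
    # order       -- list of document ids, the ranking to evaluate
    # docs        -- docs[s] is the topic set C_s of document s
    # interests   -- interests[u] is the topic set I_u of user u
    # thresholds  -- thresholds[u] is K_u: u is satisfied once K_u topics of I_u are covered
    # A user who is never satisfied contributes len(order) + 1 (a simple convention).
    total = 0
    for u, I_u in interests.items():
        covered = set()
        time = len(order) + 1
        for t, s in enumerate(order, start=1):
            covered |= (docs[s] & I_u)
            if len(covered) >= thresholds[u]:
                time = t
                break
        total += time
    return total / len(interests)

docs = {"d1": {"sports", "politics"}, "d2": {"economics"}, "d3": {"politics", "economics"}}
interests = {"u1": {"sports"}, "u2": {"politics", "economics"}}
thresholds = {"u1": 1, "u2": 2}
print(average_satisfying_time(["d3", "d1", "d2"], docs, interests, thresholds))  # -> 1.5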

Cite as

Jian Li and Zeyu Zhang. Ranking with Diverse Intents and Correlated Contents. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2013). Leibniz International Proceedings in Informatics (LIPIcs), Volume 24, pp. 351-362, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)


BibTeX

@InProceedings{li_et_al:LIPIcs.FSTTCS.2013.351,
  author =	{Li, Jian and Zhang, Zeyu},
  title =	{{Ranking with Diverse Intents and Correlated Contents}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2013)},
  pages =	{351--362},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-64-4},
  ISSN =	{1868-8969},
  year =	{2013},
  volume =	{24},
  editor =	{Seth, Anil and Vishnoi, Nisheeth K.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2013.351},
  URN =		{urn:nbn:de:0030-drops-43856},
  doi =		{10.4230/LIPIcs.FSTTCS.2013.351},
  annote =	{Keywords: Approximation Algorithm, Diversification, min-sum Set Cover}
}
Document
Energy Efficient Scheduling via Partial Shutdown

Authors: Samir Khuller, Jian Li, and Barna Saha

Published in: Dagstuhl Seminar Proceedings, Volume 10071, Scheduling (2010)


Abstract
We define a collection of new problems referred to as "machine activation" problems. The central framework we introduce considers a collection of M machines (unrelated or related), with machine i having an activation cost of a_i. There is also a collection of N jobs that need to be performed, and p_{ij} is the processing time of job j on machine i. Standard scheduling models assume that the set of machines is fixed and all machines are available. We assume that there is an activation cost budget of A: we would like to select a subset S of the machines to activate with total cost a(S) <= A and find a schedule for the jobs on the machines in S minimizing the makespan. In this work we develop bi-criteria approximation algorithms for this problem based on both LP rounding and a greedy approach.
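
A minimal sketch of the model (not the paper's LP-rounding or greedy algorithms; the activation set and the list-scheduling rule below are assumptions made for illustration): check the activation budget and compute the makespan of a naive greedy assignment on the activated machines.

def makespan_with_activation(active, a, p, budget):
    # active -- indices of machines chosen to activate
    # a      -- a[i] is the activation cost of machine i
    # p      -- p[i][j] is the processing time of job j on machine i
    # budget -- the activation cost budget A
    # Returns None if the budget is exceeded, else the makespan of a greedy
    # list schedule (each job goes where it currently finishes earliest).
    if sum(a[i] for i in active) > budget:
        return None
    n_jobs = len(p[active[0]])
    load = {i: 0.0 for i in active}
    for j in range(n_jobs):
        i = min(active, key=lambda m: load[m] + p[m][j])
        load[i] += p[i][j]
    return max(load.values())

# Hypothetical instance: 3 machines, 4 jobs, budget A = 4.
a = [3, 1, 2]                                    # activation costs
p = [[2, 2, 2, 2], [5, 1, 5, 1], [1, 5, 1, 5]]   # p[i][j]
print(makespan_with_activation([1, 2], a, p, budget=4))  # -> 2.0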

Cite as

Samir Khuller, Jian Li, and Barna Saha. Energy Efficient Scheduling via Partial Shutdown. In Scheduling. Dagstuhl Seminar Proceedings, Volume 10071, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{khuller_et_al:DagSemProc.10071.5,
  author =	{Khuller, Samir and Li, Jian and Saha, Barna},
  title =	{{Energy Efficient Scheduling via Partial Shutdown}},
  booktitle =	{Scheduling},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10071},
  editor =	{Susanne Albers and Sanjoy K. Baruah and Rolf H. M\"{o}hring and Kirk Pruhs},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.10071.5},
  URN =		{urn:nbn:de:0030-drops-25435},
  doi =		{10.4230/DagSemProc.10071.5},
  annote =	{Keywords: Unrelated parallel machine scheduling, approximation algorithms}
}

Li, Jianqi

Document
Termination of Dependently Typed Rewrite Rules

Authors: Jean-Pierre Jouannaud and Jianqi Li

Published in: LIPIcs, Volume 38, 13th International Conference on Typed Lambda Calculi and Applications (TLCA 2015)


Abstract
Our interest is in automated termination proofs of higher-order rewrite rules in the presence of dependent types modulo a theory T on base types. We first describe an original transformation to a type discipline without type dependencies which preserves non-termination. Since the user must reason on expressions of the transformed language, we then introduce an extension of the computability path ordering CPO, named DCPO, for comparing dependently typed expressions. Using the previous result, we show that DCPO is a well-founded order, behaving well in practice.

Cite as

Jean-Pierre Jouannaud and Jianqi Li. Termination of Dependently Typed Rewrite Rules. In 13th International Conference on Typed Lambda Calculi and Applications (TLCA 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 38, pp. 257-272, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{jouannaud_et_al:LIPIcs.TLCA.2015.257,
  author =	{Jouannaud, Jean-Pierre and Li, Jianqi},
  title =	{{Termination of Dependently Typed Rewrite Rules}},
  booktitle =	{13th International Conference on Typed Lambda Calculi and Applications (TLCA 2015)},
  pages =	{257--272},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-87-3},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{38},
  editor =	{Altenkirch, Thorsten},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.TLCA.2015.257},
  URN =		{urn:nbn:de:0030-drops-51684},
  doi =		{10.4230/LIPIcs.TLCA.2015.257},
  annote =	{Keywords: rewriting, dependent types, strong normalization, path orderings}
}
Document
Church-Rosser Properties of Normal Rewriting

Authors: Jean-Pierre Jouannaud and Jianqi Li

Published in: LIPIcs, Volume 16, Computer Science Logic (CSL'12) - 26th International Workshop/21st Annual Conference of the EACSL (2012)


Abstract
We prove a general purpose abstract Church-Rosser result that captures most existing such results that rely on termination of computations. This is achieved by studying abstract normal rewriting in a way that allows positions to be incorporated at the abstract level. New concrete Church-Rosser results are obtained, in particular for higher-order rewriting at higher types.

Cite as

Jean-Pierre Jouannaud and Jianqi Li. Church-Rosser Properties of Normal Rewriting. In Computer Science Logic (CSL'12) - 26th International Workshop/21st Annual Conference of the EACSL. Leibniz International Proceedings in Informatics (LIPIcs), Volume 16, pp. 350-365, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


BibTeX

@InProceedings{jouannaud_et_al:LIPIcs.CSL.2012.350,
  author =	{Jouannaud, Jean-Pierre and Li, Jianqi},
  title =	{{Church-Rosser Properties of Normal Rewriting}},
  booktitle =	{Computer Science Logic (CSL'12) - 26th International Workshop/21st Annual Conference of the EACSL},
  pages =	{350--365},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-42-2},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{16},
  editor =	{C\'{e}gielski, Patrick and Durand, Arnaud},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CSL.2012.350},
  URN =		{urn:nbn:de:0030-drops-36839},
  doi =		{10.4230/LIPIcs.CSL.2012.350},
  annote =	{Keywords: abstract normal rewriting, Church-Rosser property}
}

Li, Zhijiang

Document
Computing Maximum Agreement Forests without Cluster Partitioning is Folly

Authors: Zhijiang Li and Norbert Zeh

Published in: LIPIcs, Volume 87, 25th Annual European Symposium on Algorithms (ESA 2017)


Abstract
Computing a maximum (acyclic) agreement forest (M(A)AF) of a pair of phylogenetic trees is known to be fixed-parameter tractable; the two main techniques are kernelization and depth-bounded search. In theory, kernelization-based algorithms for this problem are not competitive, but they perform remarkably well in practice. We shed light on why this is the case. Our results show that, probably unsurprisingly, the kernel is often much smaller in practice than the theoretical worst case, but not small enough to fully explain the good performance of these algorithms. The key to performance is cluster partitioning, a technique used in almost all fast M(A)AF algorithms. In theory, cluster partitioning does not help: some instances are highly clusterable, others not at all. However, our experiments show that cluster partitioning leads to substantial performance improvements for kernelization-based M(A)AF algorithms. In contrast, kernelizing the individual clusters before solving them using exponential search yields only very modest performance improvements or even hurts performance; for the vast majority of inputs, kernelization leads to no reduction in the maximal cluster size at all. The choice of the algorithm applied to solve individual clusters also significantly impacts performance, even though our limited experiment to evaluate this produced no clear winner; depth-bounded search, exponential search interleaved with kernelization, and an ILP-based algorithm all achieved competitive performance.
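
To illustrate what cluster partitioning exploits, the sketch below lists the common clusters of two rooted trees, i.e. leaf sets that appear as a complete subtree in both; the nested-tuple tree representation and the instance are assumptions made for illustration, not the paper's data structures.

def clades(tree):
    # All clades (leaf sets of subtrees) of a rooted tree, as frozensets.
    # A tree is either a leaf label (str) or a tuple of child subtrees.
    found = set()
    def walk(t):
        if isinstance(t, str):
            leaves = frozenset([t])
        else:
            leaves = frozenset().union(*(walk(c) for c in t))
        found.add(leaves)
        return leaves
    walk(tree)
    return found

def common_clusters(t1, t2):
    # Non-trivial clusters shared by both trees (size >= 2, not the full leaf set);
    # these are the places where cluster partitioning can split an M(A)AF instance.
    shared = clades(t1) & clades(t2)
    full = max(shared, key=len)           # both trees are assumed to have the same leaf set
    return [c for c in shared if 1 < len(c) < len(full)]

T1 = ((("a", "b"), "c"), ("d", "e"))
T2 = (("c", ("a", "b")), ("e", "d"))
print(common_clusters(T1, T2))            # -> clusters {a,b}, {a,b,c}, {d,e} in some order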

Cite as

Zhijiang Li and Norbert Zeh. Computing Maximum Agreement Forests without Cluster Partitioning is Folly. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 56:1-56:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{li_et_al:LIPIcs.ESA.2017.56,
  author =	{Li, Zhijiang and Zeh, Norbert},
  title =	{{Computing Maximum Agreement Forests without Cluster Partitioning is Folly}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{56:1--56:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.56},
  URN =		{urn:nbn:de:0030-drops-78819},
  doi =		{10.4230/LIPIcs.ESA.2017.56},
  annote =	{Keywords: fixed-parameter tractability, agreement forests, hybridization, subtree prune-and-regraft}
}