Abstract
We study the effect that the amount of correlation in a bipartite distribution has on the communication complexity of a problem under that distribution. We introduce a new family of complexity measures that interpolates between the two previously studied extreme cases: the (standard) randomised communication complexity and the case of distributional complexity under product distributions.
 We give a tight characterisation of the randomised complexity of Disjointness under distributions with mutual information k, showing that it is Theta(sqrt(n(k+1))) for all 0 <= k <= n. This smoothly interpolates between the lower bounds of Babai, Frankl and Simon for the product distribution case (k=0), and the bound of Razborov for the randomised case. The upper bounds improve and generalise what was known for product distributions, and imply that any tight bound for Disjointness needs Omega(n) bits of mutual information in the corresponding distribution.
 We study the same question in the distributional quantum setting, and show a lower bound of Omega((n(k+1))^{1/4}), and an upper bound (via constructing communication protocols), matching up to a logarithmic factor.
 We show that there are total Boolean functions f_d that have distributional communication complexity O(log(n)) under all distributions of information up to o(n), while the (interactive) distributional complexity maximised over all distributions is Theta(log(d)) for n <= d <= 2^{n/100}. This shows, in particular, that the correlation needed to show that a problem is hard can be much larger than the communication complexity of the problem.
 We show that in the setting of one-way communication under product distributions, the dependence of the communication cost on the allowed error epsilon is multiplicative in log(1/epsilon); the previous upper bounds had a dependence of more than 1/epsilon. This result, for the first time, explains how one-way communication complexity under product distributions is stronger than PAC-learning: both tasks are characterised by the VC-dimension, but they have very different error dependence (when learning from examples, it costs more to reduce the error).
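The parameter k throughout the abstract is the mutual information I(X:Y) between the two players' inputs under the hard distribution. As a minimal illustration (not part of the paper), the following sketch computes I(X:Y) in bits for a small bipartite distribution given as a joint probability table; note that product distributions have k = 0, the setting of Babai, Frankl and Simon:

```python
import math

def mutual_information(p):
    """Mutual information I(X:Y), in bits, of a joint distribution
    given as a dict {(x, y): probability}."""
    # Marginal distributions of X and Y.
    px, py = {}, {}
    for (x, y), pr in p.items():
        px[x] = px.get(x, 0.0) + pr
        py[y] = py.get(y, 0.0) + pr
    # I(X:Y) = sum_{x,y} p(x,y) * log2( p(x,y) / (p(x) * p(y)) ).
    return sum(pr * math.log2(pr / (px[x] * py[y]))
               for (x, y), pr in p.items() if pr > 0)

# A product distribution carries zero mutual information (k = 0) ...
uniform = {(x, y): 0.25 for x in (0, 1) for y in (0, 1)}
# ... while perfectly correlated single-bit inputs carry one full bit.
correlated = {(0, 0): 0.5, (1, 1): 0.5}

print(mutual_information(uniform))      # 0.0
print(mutual_information(correlated))   # 1.0
```

The extreme cases shown correspond to the two ends of the interpolation in the abstract: k = 0 recovers the product-distribution bounds, while distributions with k = Omega(n) are needed to reach Razborov's linear lower bound for Disjointness.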
BibTeX Entry
@InProceedings{bottesch_et_al:LIPIcs:2015:5323,
author = {Ralph Christian Bottesch and Dmitry Gavinsky and Hartmut Klauck},
title = {{Correlation in Hard Distributions in Communication Complexity}},
booktitle = {Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)},
pages = {544--572},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-939897-89-7},
ISSN = {1868-8969},
year = {2015},
volume = {40},
editor = {Naveen Garg and Klaus Jansen and Anup Rao and Jos{\'e} D. P. Rolim},
publisher = {Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik},
address = {Dagstuhl, Germany},
URL = {http://drops.dagstuhl.de/opus/volltexte/2015/5323},
URN = {urn:nbn:de:0030-drops-53234},
doi = {10.4230/LIPIcs.APPROX-RANDOM.2015.544},
annote = {Keywords: communication complexity; information theory}
}
Date of publication: 28.07.2015