Abstract
Given a nonnegative real matrix A, the matrix scaling problem asks whether the rows and columns of A can be scaled so that each row and each column sums to a specified target value.
The matrix scaling problem arises in many algorithmic applications, perhaps most notably as a preconditioning step in solving linear systems of equations. One of the most natural, and by now classical, approaches to matrix scaling is the Sinkhorn-Knopp algorithm (also known as the RAS method), in which one alternately scales either all rows or all columns to meet the target values. In addition to being extremely simple and natural, another appeal of this procedure is that it easily lends itself to parallelization. A central question is to understand the rate of convergence of the Sinkhorn-Knopp algorithm.
Specifically, given a suitable error metric to measure deviations from the target values, and an error bound epsilon, how quickly does the Sinkhorn-Knopp algorithm converge to an error below epsilon? While several nontrivial convergence results are known for the Sinkhorn-Knopp algorithm, perhaps somewhat surprisingly, even for natural error metrics such as the ell_1-error or the ell_2-error, this is not entirely understood.
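The alternating row/column scaling described above can be sketched in a few lines. This is a minimal illustrative implementation, not the paper's pseudocode; the function name `sinkhorn_knopp`, the fixed iteration count `num_iters`, and the choice of NumPy are assumptions made for the sketch.

```python
import numpy as np

def sinkhorn_knopp(A, r, c, num_iters=500):
    """Alternately rescale the rows and columns of a nonnegative matrix A
    so that the row sums approach the targets r and the column sums approach c.
    Illustrative sketch; a practical version would stop once the error
    (e.g. ell_1 deviation of the row sums from r) drops below epsilon."""
    A = np.array(A, dtype=float)
    for _ in range(num_iters):
        # Scale every row i by r[i] / (current sum of row i).
        A *= (r / A.sum(axis=1))[:, None]
        # Scale every column j by c[j] / (current sum of column j).
        A *= (c / A.sum(axis=0))[None, :]
    return A

# Example: scale a positive 2x2 matrix to be doubly stochastic
# (all row and column targets equal to 1).
B = sinkhorn_knopp([[2.0, 1.0], [1.0, 3.0]], np.ones(2), np.ones(2))
```

After the final column-scaling step the column sums match c exactly, while the row sums are only approximately r; the convergence question studied in the paper is how fast that residual row-sum error shrinks.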
In this paper, we present an elementary convergence analysis for the Sinkhorn-Knopp algorithm that improves upon the previous best bound. In a nutshell, our approach is to (i) show a simple bound on the number of iterations needed so that the KL-divergence between the current row-sums and the target row-sums drops below a specified threshold delta, and (ii) then show that, for a suitable choice of delta, whenever the KL-divergence is below delta, the ell_1-error or the ell_2-error is below epsilon. The well-known Pinsker's inequality immediately allows us to translate a bound on the KL-divergence into a bound on the ell_1-error. To bound the ell_2-error in terms of the KL-divergence, we establish a new inequality, referred to as the (KL vs ell_1/ell_2) inequality in the paper. This new inequality is a strengthening of Pinsker's inequality that we believe is of independent interest. Our analysis of the ell_2-error significantly improves upon the best previous convergence bound for the ell_2-error.
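For concreteness, the standard form of Pinsker's inequality underlying step (ii) states that for probability distributions $p$ and $q$,

\[
\|p - q\|_1 \;\le\; \sqrt{2\, D_{\mathrm{KL}}(p \,\|\, q)},
\]

so driving the KL-divergence below $\delta = \epsilon^2/2$ forces the ell_1-error below $\epsilon$. (This is the textbook statement; the paper's normalization of the row-sums may differ by constant factors.)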
The idea of studying Sinkhorn-Knopp convergence via the KL-divergence is not new and has indeed been previously explored. Our contribution is an elementary, self-contained presentation of this approach and an interesting new inequality that yields a significantly stronger convergence guarantee for the extensively studied ell_2-error.
BibTeX Entry
@InProceedings{chakrabarty_et_al:OASIcs:2018:8304,
author = {Deeparnab Chakrabarty and Sanjeev Khanna},
title = {{Better and Simpler Error Analysis of the Sinkhorn-Knopp Algorithm for Matrix Scaling}},
booktitle = {1st Symposium on Simplicity in Algorithms (SOSA 2018)},
pages = {4:1--4:11},
series = {Open Access Series in Informatics (OASIcs)},
ISBN = {978-3-95977-064-4},
ISSN = {2190-6807},
year = {2018},
volume = {61},
editor = {Raimund Seidel},
publisher = {Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik},
address = {Dagstuhl, Germany},
URL = {http://drops.dagstuhl.de/opus/volltexte/2018/8304},
URN = {urn:nbn:de:0030-drops-83045},
doi = {10.4230/OASIcs.SOSA.2018.4},
annote = {Keywords: Matrix Scaling, Entropy Minimization, KL Divergence Inequalities}
}
Keywords: Matrix Scaling, Entropy Minimization, KL Divergence Inequalities
Seminar: 1st Symposium on Simplicity in Algorithms (SOSA 2018)
Issue Date: 2018
Date of publication: 05.01.2018