Jones, Lee; Rybnikov, Konstantin
Local Minimax Learning of Approximately Polynomial Functions
Abstract
Suppose we have a number of noisy measurements of an unknown real-valued function $f$ near
a point of interest $\mathbf{x}_0 \in \mathbb{R}^d$. Suppose also that nothing can be assumed
about the noise distribution, except for zero mean and bounded covariance matrix. We want
to estimate $f$ at $\mathbf{x} = \mathbf{x}_0$ using a general linear parametric family
$f(\mathbf{x};\mathbf{a}) = a_0 h_0(\mathbf{x}) + \cdots + a_q h_q(\mathbf{x})$, where
$\mathbf{a} \in \mathbb{R}^q$ and the $h_i$'s are bounded functions on a neighborhood $B$ of
$\mathbf{x}_0$ which contains all points of measurement. Typically, $B$ is a Euclidean ball
or cube in $\mathbb{R}^d$ (more generally, a ball in an $l_p$-norm). In the case when the
$h_i$'s are polynomial functions in $x_1,\ldots,x_d$, the model is called
locally-polynomial. In particular, if the $h_i$'s form a basis of the linear space of
polynomials of degree at most two, the model is called locally-quadratic (if the degree is
at most three, the model is locally-cubic, etc.). Often there is additional information,
called context, available about the function $f$ (restricted to $B$), such as that it
takes values in a known interval, or that it satisfies a Lipschitz condition. The theory of
local minimax estimation with context for locally-polynomial models and approximately
locally-polynomial models has recently been initiated by Jones. In the case of local
linearity and a bound on the change of $f$ on $B$, where $B$ is a ball, the solution for
squared-error loss takes the form of ridge regression, with the ridge parameter explicitly
identified; hence, a minimax justification for ridge regression is obtained, together with
explicit best error bounds. The analysis of polynomial models of degree above 1 leads to
interesting and difficult questions in real algebraic geometry and nonlinear optimization.
We show that in the case when $f$ is a probability function, the optimal (in the minimax
sense) estimator is effectively computable (with any given precision), thanks to Tarski's
elimination principle.
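Two points in the abstract bear brief illustration. First, Tarski's elimination principle: quantifier elimination over the reals replaces, for instance, the formula "there exists $x$ with $x^2 + bx + c = 0$" by the equivalent quantifier-free condition $b^2 - 4c \geq 0$; this effectiveness is what makes the minimax estimator computable to any given precision. Second, the ridge-regression form of the locally-linear solution. The Python/NumPy sketch below is only a generic local polynomial ridge fit, not the authors' minimax estimator: the quadratic basis, the unpenalized constant term, and the ridge parameter lam are assumptions made here for illustration, whereas the paper derives the minimax-optimal ridge parameter explicitly.

import numpy as np

def quadratic_basis(x, x0):
    """Basis h_0, ..., h_q for polynomials of degree <= 2, centered at x0."""
    z = x - x0
    d = len(z)
    feats = [1.0]                # h_0 = 1 (constant term)
    feats.extend(z)              # linear terms z_1, ..., z_d
    for i in range(d):           # quadratic terms z_i * z_j, i <= j
        for j in range(i, d):
            feats.append(z[i] * z[j])
    return np.array(feats)

def local_ridge_estimate(X, y, x0, lam):
    """Ridge fit of the local model at x0; lam is a placeholder ridge
    parameter (the paper identifies the minimax-optimal value)."""
    H = np.vstack([quadratic_basis(x, x0) for x in X])  # design matrix
    P = np.eye(H.shape[1])
    P[0, 0] = 0.0   # leave the constant term unpenalized (an assumption)
    a = np.linalg.solve(H.T @ H + lam * P, H.T @ y)     # ridge coefficients
    return a[0]     # basis is centered at x0, so f(x0; a) = a_0

# Toy usage: noisy samples of f(x) = sin(x1) + 0.5*x2 in a cube B around x0.
rng = np.random.default_rng(0)
x0 = np.array([0.3, -0.1])
X = x0 + 0.2 * rng.uniform(-1.0, 1.0, size=(40, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.normal(size=40)
print(local_ridge_estimate(X, y, x0, lam=1e-2))   # close to sin(0.3) - 0.05

Centering the basis at $\mathbf{x}_0$ makes the estimate of $f(\mathbf{x}_0)$ equal to the fitted constant coefficient $a_0$, so no separate evaluation step is needed.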
BibTeX Entry
@InProceedings{jones_et_al:DagSemProc.06201.3,
author = {Jones, Lee and Rybnikov, Konstantin},
title = {{Local Minimax Learning of Approximately Polynomial Functions}},
booktitle = {Combinatorial and Algorithmic Foundations of Pattern and Association Discovery},
pages = {1--12},
series = {Dagstuhl Seminar Proceedings (DagSemProc)},
ISSN = {1862-4405},
year = {2007},
volume = {6201},
editor = {Rudolf Ahlswede and Alberto Apostolico and Vladimir I. Levenshtein},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/opus/volltexte/2007/891},
URN = {urn:nbn:de:0030-drops-8912},
doi = {10.4230/DagSemProc.06201.3},
annote = {Keywords: Local learning, statistical learning, estimator, minimax, convex optimization, quantifier elimination, semialgebraic, ridge regression, polynomial}
}
Keywords: Local learning, statistical learning, estimator, minimax, convex optimization, quantifier elimination, semialgebraic, ridge regression, polynomial
Seminar: 06201 - Combinatorial and Algorithmic Foundations of Pattern and Association Discovery
Issue date: 2007
Date of publication: 13.02.2007