
Convergence rates of kernel conjugate gradient for random design regression

  • We prove statistical rates of convergence for kernel-based least squares regression from i.i.d. data using a conjugate gradient algorithm, where regularization against overfitting is obtained by early stopping. This method is related to Kernel Partial Least Squares, a regression method that combines supervised dimensionality reduction with least squares projection. Following the setting introduced in earlier related literature, we study so-called "fast convergence rates" depending on the regularity of the target regression function (measured by a source condition in terms of the kernel integral operator) and on the effective dimensionality of the data mapped into the kernel space. We obtain upper bounds, essentially matching known minimax lower bounds, for the L^2 (prediction) norm as well as for the stronger Hilbert norm, if the true regression function belongs to the reproducing kernel Hilbert space. If the latter assumption is not fulfilled, we obtain similar convergence rates for appropriate norms, provided additional unlabeled data are available.
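  • The abstract describes kernel least squares regression fitted by conjugate gradient iterations, where the number of iterations acts as the regularization parameter (early stopping). The following is a minimal illustrative sketch of that idea in Python: plain CG applied to the kernel system K alpha = y with a Gaussian kernel. It is not the authors' exact algorithm or normalization, and the function names, kernel choice, and toy data are assumptions made only for illustration.

    import numpy as np

    def gaussian_kernel(X, Z, gamma=1.0):
        # Pairwise Gaussian (RBF) kernel matrix between rows of X and Z.
        d2 = (np.sum(X**2, axis=1)[:, None]
              + np.sum(Z**2, axis=1)[None, :]
              - 2.0 * X @ Z.T)
        return np.exp(-gamma * d2)

    def kernel_cg_fit(K, y, n_iter=10, tol=1e-12):
        # Conjugate gradient iterations on the kernel system K @ alpha = y.
        # Stopping after a small number of iterations (early stopping) is the
        # regularization mechanism: n_iter plays the role usually played by a
        # penalty parameter such as lambda in kernel ridge regression.
        n = len(y)
        alpha = np.zeros(n)
        r = y - K @ alpha            # initial residual (equals y here)
        p = r.copy()                 # initial search direction
        rs_old = r @ r
        for _ in range(n_iter):
            Kp = K @ p
            step = rs_old / (p @ Kp)
            alpha += step * p
            r -= step * Kp
            rs_new = r @ r
            if rs_new < tol:         # data already interpolated; stop
                break
            p = r + (rs_new / rs_old) * p
            rs_old = rs_new
        return alpha

    # Toy usage: fit a noisy sine curve and predict on a grid.
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(100, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(100)
    K = gaussian_kernel(X, X, gamma=0.5)
    alpha = kernel_cg_fit(K, y, n_iter=8)                    # stop early after 8 CG steps
    X_test = np.linspace(-3, 3, 200)[:, None]
    y_pred = gaussian_kernel(X_test, X, gamma=0.5) @ alpha   # f(x) = sum_i alpha_i k(x, x_i)

    In this sketch the stopping index n_iter is the quantity whose data-driven choice the preprint's convergence analysis concerns; larger values fit the training data more closely, smaller values regularize more strongly.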

Download full text files

  • SHA-1:28aa89c59265d79002ca608152b369e6d0c41b4a

Metadata
Author:Gilles Blanchard, Nicole Krämer
URN:urn:nbn:de:kobv:517-opus4-94195
ISSN:2193-6943 (online)
Series (Serial Number):Preprints des Instituts für Mathematik der Universität Potsdam (5 (2016) 8)
Publisher:Universitätsverlag Potsdam
Place of publication:Potsdam
Document Type:Preprint
Language:English
Year of first Publication:2016
Year of Completion:2016
Publishing Institution:Universität Potsdam
Publishing Institution:Universitätsverlag Potsdam
Release Date:2016/08/08
Tag:conjugate gradient; minimax convergence rates; nonparametric regression; partial least squares; reproducing kernel Hilbert space
Volume:5
Issue:8
Pagenumber:31
Organizational units:Mathematisch-Naturwissenschaftliche Fakultät / Institut für Mathematik
Dewey Decimal Classification:5 Natural sciences and mathematics / 51 Mathematics / 510 Mathematics
MSC Classification:62-XX STATISTICS / 62Gxx Nonparametric inference / 62G08 Nonparametric regression
62-XX STATISTICS / 62Gxx Nonparametric inference / 62G20 Asymptotic properties
62-XX STATISTICS / 62Lxx Sequential methods / 62L15 Optimal stopping [See also 60G40, 91A60]
Publication Way:Universitätsverlag Potsdam
Collections:Universität Potsdam / Schriftenreihen / Preprints des Instituts für Mathematik der Universität Potsdam, ISSN 2193-6943 / 2016
Licence:No usage licence granted - German copyright law (Urheberrecht) applies