Parallelizing spectrally regularized kernel algorithms
- We consider a distributed learning approach to supervised learning for a large class of spectral regularization methods in a reproducing kernel Hilbert space (RKHS) framework. The data set of size n is partitioned into m = O(n^alpha), alpha < 1/2, disjoint subsamples. On each subsample, some spectral regularization method (belonging to a large class including, in particular, Kernel Ridge Regression, L2-boosting and spectral cut-off) is applied. The regression function f is then estimated via simple averaging, leading to a substantial reduction in computation time. We show that minimax optimal rates of convergence are preserved if m grows sufficiently slowly (corresponding to an upper bound for alpha) as n -> infinity, depending on the smoothness assumptions on f and the intrinsic dimensionality. In spirit, the analysis relies on a classical bias/stochastic error decomposition.
Author details: | Nicole Mücke, Gilles Blanchard |
ISSN: | 1532-4435 |
Title of the parent work (English): | Journal of machine learning research |
Publisher: | Microtome Publishing |
Place of publication: | Cambridge, Mass. |
Publication type: | Scholarly article |
Language: | English |
Year of first publication: | 2018 |
Year of publication: | 2018 |
Release date: | 28.02.2022 |
Keywords / tags: | Distributed Learning; Minimax Optimality; Spectral Regularization |
Volume: | 19 |
Number of pages: | 29 |
Funding institution: | German Research Foundation (DFG) [1735, SFB 1294] |
Organizational units: | Mathematisch-Naturwissenschaftliche Fakultät / Institut für Mathematik |
DDC classification: | 5 Science and Mathematics / 51 Mathematics / 510 Mathematics |
Peer review: | Refereed |
License: | CC-BY - Attribution 4.0 International |