TY - JOUR
A1 - Mücke, Nicole
A1 - Blanchard, Gilles
T1 - Parallelizing spectrally regularized kernel algorithms
JF - Journal of Machine Learning Research
N2 - We consider a distributed learning approach to supervised learning for a large class of spectral regularization methods in a reproducing kernel Hilbert space (RKHS) framework. The data set of size n is partitioned into m = O(n^alpha), alpha < 1/2, disjoint subsamples. On each subsample, some spectral regularization method (belonging to a large class including, in particular, kernel ridge regression, L2-boosting, and spectral cut-off) is applied. The regression function f is then estimated via simple averaging, leading to a substantial reduction in computation time. We show that minimax optimal rates of convergence are preserved if m grows sufficiently slowly (corresponding to an upper bound on alpha) as n -> infinity, depending on the smoothness assumptions on f and the intrinsic dimensionality. In spirit, the analysis relies on a classical bias/stochastic error decomposition.
KW - Distributed Learning
KW - Spectral Regularization
KW - Minimax Optimality
Y1 - 2018
SN - 1532-4435
VL - 19
PB - Microtome Publishing
CY - Cambridge, Mass.
ER -

TY - JOUR
A1 - Blanchard, Gilles
A1 - Mücke, Nicole
T1 - Optimal rates for regularization of statistical inverse learning problems
JF - Foundations of Computational Mathematics
N2 - We consider a statistical inverse learning (also called inverse regression) problem, where we observe the image of a function f through a linear operator A at i.i.d. random design points X_i, superposed with additive noise. The distribution of the design points is unknown and can be very general. We analyze the direct (estimation of Af) and the inverse (estimation of f) learning problems simultaneously. In this general framework, we obtain strong and weak minimax optimal rates of convergence (as the number of observations n grows large) for a large class of spectral regularization methods over regularity classes defined through appropriate source conditions. This improves on or completes previous results obtained in related settings. The optimality of the obtained rates is shown not only in the exponent in n but also in the explicit dependence of the constant factor on the variance of the noise and the radius of the source condition set.
KW - Reproducing kernel Hilbert space
KW - Spectral regularization
KW - Inverse problem
KW - Statistical learning
KW - Minimax convergence rates
Y1 - 2018
U6 - https://doi.org/10.1007/s10208-017-9359-7
SN - 1615-3375
SN - 1615-3383
VL - 18
IS - 4
SP - 971
EP - 1013
PB - Springer
CY - New York
ER -