TY - INPR
A1 - Blanchard, Gilles
A1 - Krämer, Nicole
T1 - Convergence rates of kernel conjugate gradient for random design regression
N2 - We prove statistical rates of convergence for kernel-based least squares regression from i.i.d. data using a conjugate gradient algorithm, where regularization against overfitting is obtained by early stopping. This method is related to Kernel Partial Least Squares, a regression method that combines supervised dimensionality reduction with least squares projection. Following the setting introduced in earlier related literature, we study so-called "fast convergence rates" depending on the regularity of the target regression function (measured by a source condition in terms of the kernel integral operator) and on the effective dimensionality of the data mapped into the kernel space. We obtain upper bounds, essentially matching known minimax lower bounds, for the L^2 (prediction) norm as well as for the stronger Hilbert norm, if the true regression function belongs to the reproducing kernel Hilbert space. If the latter assumption is not fulfilled, we obtain similar convergence rates for appropriate norms, provided additional unlabeled data are available.
T3 - Preprints des Instituts für Mathematik der Universität Potsdam - 5 (2016) 8
KW - nonparametric regression
KW - reproducing kernel Hilbert space
KW - conjugate gradient
KW - partial least squares
KW - minimax convergence rates
Y1 - 2016
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-94195
SN - 2193-6943
VL - 5
IS - 8
PB - Universitätsverlag Potsdam
CY - Potsdam
ER -
TY - JOUR
A1 - Blanchard, Gilles
A1 - Kawanabe, Motoaki
A1 - Sugiyama, Masashi
A1 - Spokoiny, Vladimir G.
A1 - Müller, Klaus-Robert
T1 - In search of non-Gaussian components of a high-dimensional distribution
N2 - Finding non-Gaussian components of high-dimensional data is an important preprocessing step for efficient information processing. This article proposes a new linear method to identify the "non-Gaussian subspace" within a very general semi-parametric framework. Our proposed method, called NGCA (non-Gaussian component analysis), is based on a linear operator which, to any arbitrary nonlinear (smooth) function, associates a vector belonging to the low-dimensional non-Gaussian target subspace, up to an estimation error. By applying this operator to a family of different nonlinear functions, one obtains a family of different vectors lying in a vicinity of the target space. As a final step, the target space itself is estimated by applying PCA to this family of vectors. We show that this procedure is consistent in the sense that the estimation error tends to zero at a parametric rate, uniformly over the family. Numerical examples demonstrate the usefulness of our method.
Y1 - 2006
UR - http://portal.acm.org/affiliated/jmlr/
SN - 1532-4435
ER -
TY - JOUR
A1 - Kawanabe, Motoaki
A1 - Blanchard, Gilles
A1 - Sugiyama, Masashi
A1 - Spokoiny, Vladimir G.
A1 - Müller, Klaus-Robert
T1 - A novel dimension reduction procedure for searching non-Gaussian subspaces
N2 - In this article, we consider high-dimensional data which contains a low-dimensional non-Gaussian structure contaminated with Gaussian noise and propose a new linear method to identify the non-Gaussian subspace. Our method NGCA (Non-Gaussian Component Analysis) is based on a very general semi-parametric framework and has a theoretical guarantee that the estimation error of finding the non-Gaussian components tends to zero at a parametric rate. NGCA can be used not only as preprocessing for ICA, but also for extracting and visualizing more general structures like clusters. A numerical study demonstrates the usefulness of our method.
Y1 - 2006
UR - http://www.springerlink.com/content/105633/
U6 - https://doi.org/10.1007/11679363_19
SN - 0302-9743
ER -
TY - JOUR
A1 - Blanchard, Gilles
A1 - Mücke, Nicole
T1 - Optimal rates for regularization of statistical inverse learning problems
JF - Foundations of Computational Mathematics
N2 - We consider a statistical inverse learning (also called inverse regression) problem, where we observe the image of a function f through a linear operator A at i.i.d. random design points X_i, superposed with additive noise. The distribution of the design points is unknown and can be very general. We analyze simultaneously the direct (estimation of Af) and the inverse (estimation of f) learning problems. In this general framework, we obtain strong and weak minimax optimal rates of convergence (as the number of observations n grows large) for a large class of spectral regularization methods over regularity classes defined through appropriate source conditions. This improves on or completes previous results obtained in related settings. The optimality of the obtained rates is shown not only in the exponent in n but also in the explicit dependency of the constant factor on the variance of the noise and the radius of the source condition set.
KW - Reproducing kernel Hilbert space
KW - Spectral regularization
KW - Inverse problem
KW - Statistical learning
KW - Minimax convergence rates
Y1 - 2018
U6 - https://doi.org/10.1007/s10208-017-9359-7
SN - 1615-3375
SN - 1615-3383
VL - 18
IS - 4
SP - 971
EP - 1013
PB - Springer
CY - New York
ER -
TY - JOUR
A1 - Blanchard, Gilles
A1 - Zadorozhnyi, Oleksandr
T1 - Concentration of weakly dependent Banach-valued sums and applications to statistical learning methods
JF - Bernoulli : official journal of the Bernoulli Society for Mathematical Statistics and Probability
N2 - We obtain a Bernstein-type inequality for sums of Banach-valued random variables satisfying a weak dependence assumption of general type and under certain smoothness assumptions on the underlying Banach norm. We use this inequality to investigate, in the asymptotic regime, error upper bounds for the broad family of spectral regularization methods for reproducing kernel decision rules, when trained on a sample coming from a tau-mixing process.
KW - Banach-valued process
KW - Bernstein inequality
KW - concentration
KW - spectral regularization
KW - weak dependence
Y1 - 2019
U6 - https://doi.org/10.3150/18-BEJ1095
SN - 1350-7265
SN - 1573-9759
VL - 25
IS - 4B
SP - 3421
EP - 3458
PB - International Statistical Institute
CY - Voorburg
ER -
TY - GEN
A1 - Blanchard, Gilles
A1 - Scott, Clayton
T1 - Corrigendum to: Classification with asymmetric label noise
BT - Consistency and maximal denoising
T2 - Electronic journal of statistics
N2 - We point out a flaw in Lemma 15 of [1]. We also indicate how the main results of that section are still valid using a modified argument.
Y1 - 2018
U6 - https://doi.org/10.1214/18-EJS1422
SN - 1935-7524
VL - 12
IS - 1
SP - 1779
EP - 1781
PB - Institute of Mathematical Statistics
CY - Cleveland
ER -
TY - JOUR
A1 - Mücke, Nicole
A1 - Blanchard, Gilles
T1 - Parallelizing spectrally regularized kernel algorithms
JF - Journal of machine learning research
N2 - We consider a distributed learning approach to supervised learning for a large class of spectral regularization methods in a reproducing kernel Hilbert space (RKHS) framework. The data set of size n is partitioned into m = O(n^alpha), alpha < 1/2, disjoint subsamples. On each subsample, some spectral regularization method (belonging to a large class, including in particular Kernel Ridge Regression, L^2-boosting and spectral cut-off) is applied. The regression function f is then estimated via simple averaging, leading to a substantial reduction in computation time. We show that minimax optimal rates of convergence are preserved if m grows sufficiently slowly (corresponding to an upper bound for alpha) as n -> infinity, depending on the smoothness assumptions on f and the intrinsic dimensionality. In spirit, the analysis relies on a classical bias/stochastic error analysis.
KW - Distributed Learning
KW - Spectral Regularization
KW - Minimax Optimality
Y1 - 2018
SN - 1532-4435
VL - 19
PB - Microtome Publishing
CY - Cambridge, Mass.
ER -
TY - JOUR
A1 - Blanchard, Gilles
A1 - Hoffmann, Marc
A1 - Reiss, Markus
T1 - Optimal adaptation for early stopping in statistical inverse problems
JF - SIAM/ASA Journal on Uncertainty Quantification
N2 - For linear inverse problems Y = A mu + zeta, it is classical to recover the unknown signal mu by iterative regularization methods (mu_hat^(m), m = 0, 1, ...) and halt at a data-dependent iteration tau using some stopping rule, typically based on a discrepancy principle, so that the weak (or prediction) squared error ||A(mu_hat^(tau) - mu)||^2 is controlled. In the context of statistical estimation with stochastic noise zeta, we study oracle adaptation (that is, adaptation compared to the best possible stopping iteration) in the strong squared error E[||mu_hat^(tau) - mu||^2]. For a residual-based stopping rule, oracle adaptation bounds are established for general spectral regularization methods. The proofs use bias and variance transfer techniques from the weak prediction error to the strong L^2-error, as well as convexity arguments and concentration bounds for the stochastic part. Adaptive early stopping for the Landweber method is studied in further detail and illustrated numerically.
KW - linear inverse problems
KW - early stopping
KW - discrepancy principle
KW - adaptive estimation
KW - oracle inequality
KW - Landweber iteration
Y1 - 2018
U6 - https://doi.org/10.1137/17M1154096
SN - 2166-2525
VL - 6
IS - 3
SP - 1043
EP - 1075
PB - Society for Industrial and Applied Mathematics
CY - Philadelphia
ER -
TY - JOUR
A1 - Blanchard, Gilles
A1 - Hoffmann, Marc
A1 - Reiss, Markus
T1 - Early stopping for statistical inverse problems via truncated SVD estimation
JF - Electronic journal of statistics
N2 - We consider truncated SVD (or spectral cut-off, projection) estimators for a prototypical statistical inverse problem in dimension D. Since calculating the singular value decomposition (SVD) only for the largest singular values is much less costly than the full SVD, our aim is to select a data-driven truncation level m_hat in {1, ..., D} based only on the knowledge of the first m_hat singular values and vectors. We analyse in detail whether sequential early stopping rules of this type can preserve statistical optimality. Information-constrained lower bounds and matching upper bounds for a residual-based stopping rule are provided, which give a clear picture of the situations in which optimal sequential adaptation is feasible. Finally, a hybrid two-step approach is proposed which allows for classical oracle inequalities while considerably reducing numerical complexity.
KW - Linear inverse problems
KW - truncated SVD
KW - spectral cut-off
KW - early stopping
KW - discrepancy principle
KW - adaptive estimation
KW - oracle inequalities
Y1 - 2018
U6 - https://doi.org/10.1214/18-EJS1482
SN - 1935-7524
VL - 12
IS - 2
SP - 3204
EP - 3231
PB - Institute of Mathematical Statistics
CY - Cleveland
ER -
TY - JOUR
A1 - Blanchard, Gilles
A1 - Carpentier, Alexandra
A1 - Gutzeit, Maurilio
T1 - Minimax Euclidean separation rates for testing convex hypotheses in R^d
JF - Electronic journal of statistics
N2 - We consider composite-composite testing problems for the expectation in the Gaussian sequence model where the null hypothesis corresponds to a closed convex subset C of R^d. We adopt a minimax point of view and our primary objective is to describe the smallest Euclidean distance between the null and alternative hypotheses such that there is a test with small total error probability. In particular, we focus on the dependence of this distance on the dimension d and the variance 1/n, giving rise to the minimax separation rate. In this paper we discuss lower and upper bounds on this rate for different smooth and non-smooth choices for C.
KW - Minimax hypothesis testing
KW - Gaussian sequence model
KW - nonasymptotic minimax separation rate
Y1 - 2018
U6 - https://doi.org/10.1214/18-EJS1472
SN - 1935-7524
VL - 12
IS - 2
SP - 3713
EP - 3735
PB - Institute of Mathematical Statistics
CY - Cleveland
ER -
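Note: The residual-based sequential early stopping for truncated SVD described in "Early stopping for statistical inverse problems via truncated SVD estimation" above can be illustrated with a short, self-contained Python sketch. This is a hypothetical toy example in a diagonal sequence-space model Y_i = a_i mu_i + delta eps_i, not code from the paper; the operator, the signal, the noise level and the threshold kappa = D delta^2 are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

D = 1000                                       # discretization dimension
delta = 0.01                                   # noise level (assumption)
a = np.arange(1, D + 1, dtype=float) ** (-1.0)    # singular values of a mildly ill-posed operator
mu = np.arange(1, D + 1, dtype=float) ** (-1.5)   # unknown signal coefficients (smooth, illustrative)
Y = a * mu + delta * rng.standard_normal(D)    # observed coefficients Y_i = a_i mu_i + delta eps_i

def truncated_svd_estimate(Y, a, m):
    # spectral cut-off at level m: invert the first m components, set the rest to zero
    mu_hat = np.zeros_like(Y)
    mu_hat[:m] = Y[:m] / a[:m]
    return mu_hat

# Discrepancy-type threshold calibrated to the expected noise level (illustrative choice).
kappa = D * delta ** 2
residual_sq = float(np.sum(Y ** 2))            # squared residual ||Y - A mu_hat^(0)||^2 at m = 0
m_stop = 0
while m_stop < D and residual_sq > kappa:
    # inverting one more component removes its contribution Y_m^2 from the residual
    residual_sq -= Y[m_stop] ** 2
    m_stop += 1

mu_hat = truncated_svd_estimate(Y, a, m_stop)
print(f"stopped at m = {m_stop}, strong error ||mu_hat - mu||^2 = {np.sum((mu_hat - mu) ** 2):.5f}")

Because the residual at level m is simply the tail sum of the squared observations, the rule only uses components that have already been computed, mirroring the information constraint emphasized in the abstract.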