TY - JOUR A1 - Kawanabe, Motoaki A1 - Blanchard, Gilles A1 - Sugiyama, Masashi A1 - Spokoiny, Vladimir G. A1 - Müller, Klaus-Robert T1 - A novel dimension reduction procedure for searching non-Gaussian subspaces N2 - In this article, we consider high-dimensional data that contains a low-dimensional non-Gaussian structure contaminated with Gaussian noise, and propose a new linear method to identify the non-Gaussian subspace. Our method, NGCA (Non-Gaussian Component Analysis), is based on a very general semi-parametric framework and has a theoretical guarantee that the estimation error of finding the non-Gaussian components tends to zero at a parametric rate. NGCA can be used not only as preprocessing for ICA, but also for extracting and visualizing more general structures like clusters. A numerical study demonstrates the usefulness of our method. Y1 - 2006 UR - http://www.springerlink.com/content/105633/ U6 - https://doi.org/10.1007/11679363_19 SN - 0302-9743 ER - TY - JOUR A1 - Blanchard, Gilles A1 - Flaska, Marek A1 - Handy, Gregory A1 - Pozzi, Sara A1 - Scott, Clayton T1 - Classification with asymmetric label noise: Consistency and maximal denoising JF - Electronic journal of statistics N2 - In many real-world classification problems, the labels of training examples are randomly corrupted. Most previous theoretical work on classification with label noise assumes that the two classes are separable, that the label noise is independent of the true class label, or that the noise proportions for each class are known. In this work, we give conditions that are necessary and sufficient for the true class-conditional distributions to be identifiable. These conditions are weaker than those analyzed previously, and allow for the classes to be nonseparable and the noise levels to be asymmetric and unknown.
The conditions essentially state that a majority of the observed labels are correct and that the true class-conditional distributions are "mutually irreducible," a concept we introduce that limits the similarity of the two distributions. For any label noise problem, there is a unique pair of true class-conditional distributions satisfying the proposed conditions, and we argue that this pair corresponds in a certain sense to maximal denoising of the observed distributions. Our results are facilitated by a connection to "mixture proportion estimation," which is the problem of estimating the maximal proportion of one distribution that is present in another. We establish a novel rate of convergence result for mixture proportion estimation, and apply this to obtain consistency of a discrimination rule based on surrogate loss minimization. Experimental results on benchmark data and a nuclear particle classification problem demonstrate the efficacy of our approach. KW - Classification KW - label noise KW - mixture proportion estimation KW - surrogate loss KW - consistency Y1 - 2016 U6 - https://doi.org/10.1214/16-EJS1193 SN - 1935-7524 VL - 10 SP - 2780 EP - 2824 PB - Institute of Mathematical Statistics CY - Cleveland ER - TY - JOUR A1 - Mieth, Bettina A1 - Kloft, Marius A1 - Rodriguez, Juan Antonio A1 - Sonnenburg, Soren A1 - Vobruba, Robin A1 - Morcillo-Suarez, Carlos A1 - Farre, Xavier A1 - Marigorta, Urko M. A1 - Fehr, Ernst A1 - Dickhaus, Thorsten A1 - Blanchard, Gilles A1 - Schunk, Daniel A1 - Navarro, Arcadi A1 - Müller, Klaus-Robert T1 - Combining Multiple Hypothesis Testing with Machine Learning Increases the Statistical Power of Genome-wide Association Studies JF - Scientific reports N2 - The standard approach to the analysis of genome-wide association studies (GWAS) is based on testing each position in the genome individually for statistical significance of its association with the phenotype under investigation. 
To improve the analysis of GWAS, we propose a combination of machine learning and statistical testing that takes into account correlation structures within the set of SNPs under investigation in a mathematically well-controlled manner. The novel two-step algorithm, COMBI, first trains a support vector machine to determine a subset of candidate SNPs and then performs hypothesis tests for these SNPs together with an adequate threshold correction. Applying COMBI to data from a WTCCC study (2007) and measuring performance as replication by independent GWAS published within the 2008-2015 period, we show that our method outperforms ordinary raw p-value thresholding as well as other state-of-the-art methods. COMBI achieves higher power and precision than the examined alternatives while yielding fewer false (i.e. non-replicated) and more true (i.e. replicated) discoveries when its results are validated on later GWAS studies. More than 80% of the discoveries made by COMBI on WTCCC data have been validated by independent studies. Implementations of the COMBI method are available as a part of the GWASpi toolbox 2.0. Y1 - 2016 U6 - https://doi.org/10.1038/srep36671 SN - 2045-2322 VL - 6 PB - Nature Publ. Group CY - London ER - TY - JOUR A1 - Blanchard, Gilles A1 - Zadorozhnyi, Oleksandr T1 - Concentration of weakly dependent Banach-valued sums and applications to statistical learning methods JF - Bernoulli : official journal of the Bernoulli Society for Mathematical Statistics and Probability N2 - We obtain a Bernstein-type inequality for sums of Banach-valued random variables satisfying a weak dependence assumption of general type, under certain smoothness assumptions on the underlying Banach norm. We use this inequality to investigate, in the asymptotic regime, error upper bounds for the broad family of spectral regularization methods for reproducing kernel decision rules, when trained on a sample coming from a tau-mixing process.
KW - Banach-valued process KW - Bernstein inequality KW - concentration KW - spectral regularization KW - weak dependence Y1 - 2019 U6 - https://doi.org/10.3150/18-BEJ1095 SN - 1350-7265 SN - 1573-9759 VL - 25 IS - 4B SP - 3421 EP - 3458 PB - International Statistical Institute CY - Voorburg ER - TY - INPR A1 - Blanchard, Gilles A1 - Krämer, Nicole T1 - Convergence rates of kernel conjugate gradient for random design regression N2 - We prove statistical rates of convergence for kernel-based least squares regression from i.i.d. data using a conjugate gradient algorithm, where regularization against overfitting is obtained by early stopping. This method is related to Kernel Partial Least Squares, a regression method that combines supervised dimensionality reduction with least squares projection. Following the setting introduced in earlier related literature, we study so-called "fast convergence rates" depending on the regularity of the target regression function (measured by a source condition in terms of the kernel integral operator) and on the effective dimensionality of the data mapped into the kernel space. We obtain upper bounds, essentially matching known minimax lower bounds, for the L^2 (prediction) norm as well as for the stronger Hilbert norm, if the true regression function belongs to the reproducing kernel Hilbert space. If the latter assumption is not fulfilled, we obtain similar convergence rates for appropriate norms, provided additional unlabeled data are available. 
T3 - Preprints des Instituts für Mathematik der Universität Potsdam - 5 (2016) 8 KW - nonparametric regression KW - reproducing kernel Hilbert space KW - conjugate gradient KW - partial least squares KW - minimax convergence rates Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-94195 SN - 2193-6943 VL - 5 IS - 8 PB - Universitätsverlag Potsdam CY - Potsdam ER - TY - JOUR A1 - Blanchard, Gilles A1 - Krämer, Nicole T1 - Convergence rates of Kernel Conjugate Gradient for random design regression JF - Analysis and applications N2 - We prove statistical rates of convergence for kernel-based least squares regression from i.i.d. data using a conjugate gradient (CG) algorithm, where regularization against over-fitting is obtained by early stopping. This method is related to Kernel Partial Least Squares, a regression method that combines supervised dimensionality reduction with least squares projection. Following the setting introduced in earlier related literature, we study so-called "fast convergence rates" depending on the regularity of the target regression function (measured by a source condition in terms of the kernel integral operator) and on the effective dimensionality of the data mapped into the kernel space. We obtain upper bounds, essentially matching known minimax lower bounds, for the L^2 (prediction) norm as well as for the stronger Hilbert norm, if the true regression function belongs to the reproducing kernel Hilbert space. If the latter assumption is not fulfilled, we obtain similar convergence rates for appropriate norms, provided additional unlabeled data are available.
KW - Nonparametric regression KW - reproducing kernel Hilbert space KW - conjugate gradient KW - partial least squares KW - minimax convergence rates Y1 - 2016 U6 - https://doi.org/10.1142/S0219530516400017 SN - 0219-5305 SN - 1793-6861 VL - 14 SP - 763 EP - 794 PB - World Scientific CY - Singapore ER - TY - GEN A1 - Blanchard, Gilles A1 - Scott, Clayton T1 - Corrigendum to: Classification with asymmetric label noise BT - Consistency and maximal denoising T2 - Electronic journal of statistics N2 - We point out a flaw in Lemma 15 of [1]. We also indicate how the main results of that section are still valid using a modified argument. Y1 - 2018 U6 - https://doi.org/10.1214/18-EJS1422 SN - 1935-7524 VL - 12 IS - 1 SP - 1779 EP - 1781 PB - Institute of Mathematical Statistics CY - Cleveland ER - TY - JOUR A1 - Katz-Samuels, Julian A1 - Blanchard, Gilles A1 - Scott, Clayton T1 - Decontamination of Mutual Contamination Models JF - Journal of machine learning research N2 - Many machine learning problems can be characterized by mutual contamination models. In these problems, one observes several random samples from different convex combinations of a set of unknown base distributions and the goal is to infer these base distributions. This paper considers the general setting where the base distributions are defined on arbitrary probability spaces. We examine three popular machine learning problems that arise in this general setting: multiclass classification with label noise, demixing of mixed membership models, and classification with partial labels. In each case, we give sufficient conditions for identifiability and present algorithms for the infinite and finite sample settings, with associated performance guarantees. 
KW - multiclass classification with label noise KW - classification with partial labels KW - mixed membership models KW - topic modeling KW - mutual contamination models Y1 - 2019 UR - http://arxiv.org/abs/1710.01167 SN - 1532-4435 VL - 20 PB - Microtome Publishing CY - Cambridge, Mass. ER - TY - JOUR A1 - Blanchard, Gilles A1 - Mathé, Peter T1 - Discrepancy principle for statistical inverse problems with application to conjugate gradient iteration JF - Inverse problems : an international journal of inverse problems, inverse methods and computerised inversion of data N2 - The authors discuss the use of the discrepancy principle for statistical inverse problems, when the underlying operator is of trace class. Under this assumption the discrepancy principle is well defined; however, a plain use of it may occasionally fail and yield sub-optimal rates. Therefore, a modification of the discrepancy is introduced, which corrects both of the above deficiencies. For a variety of linear regularization schemes as well as for conjugate gradient iteration it is shown to yield order optimal a priori error bounds under general smoothness assumptions. A posteriori error control is also possible, though in general at a sub-optimal rate. This study uses and complements previous results for bounded deterministic noise. Y1 - 2012 U6 - https://doi.org/10.1088/0266-5611/28/11/115011 SN - 0266-5611 VL - 28 IS - 11 PB - IOP Publ. Ltd. CY - Bristol ER - TY - INPR A1 - Blanchard, Gilles A1 - Mathé, Peter T1 - Discrepancy principle for statistical inverse problems with application to conjugate gradient iteration N2 - The authors discuss the use of the discrepancy principle for statistical inverse problems, when the underlying operator is of trace class. Under this assumption the discrepancy principle is well defined; however, a plain use of it may occasionally fail and yield sub-optimal rates.
Therefore, a modification of the discrepancy is introduced, which takes into account both of the above deficiencies. For a variety of linear regularization schemes as well as for conjugate gradient iteration this modification is shown to yield order optimal a priori error bounds under general smoothness assumptions. A posteriori error control is also possible, though in general at a sub-optimal rate. This study uses and complements previous results for bounded deterministic noise. T3 - Preprints des Instituts für Mathematik der Universität Potsdam - 1 (2012) 7 Y1 - 2012 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus-57117 ER - TY - JOUR A1 - Blanchard, Gilles A1 - Hoffmann, Marc A1 - Reiss, Markus T1 - Early stopping for statistical inverse problems via truncated SVD estimation JF - Electronic journal of statistics N2 - We consider truncated SVD (or spectral cut-off, projection) estimators for a prototypical statistical inverse problem in dimension D. Since calculating the singular value decomposition (SVD) only for the largest singular values is much less costly than the full SVD, our aim is to select a data-driven truncation level m-hat in {1, . . . , D} based only on the knowledge of the first m-hat singular values and vectors. We analyse in detail whether sequential early stopping rules of this type can preserve statistical optimality. Information-constrained lower bounds and matching upper bounds for a residual-based stopping rule are provided, which give a clear picture of the situations in which optimal sequential adaptation is feasible. Finally, a hybrid two-step approach is proposed which allows for classical oracle inequalities while considerably reducing numerical complexity.
KW - Linear inverse problems KW - truncated SVD KW - spectral cut-off KW - early stopping KW - discrepancy principle KW - adaptive estimation KW - oracle inequalities Y1 - 2018 U6 - https://doi.org/10.1214/18-EJS1482 SN - 1935-7524 VL - 12 IS - 2 SP - 3204 EP - 3231 PB - Institute of Mathematical Statistics CY - Cleveland ER - TY - JOUR A1 - Beinrucker, Andre A1 - Dogan, Urun A1 - Blanchard, Gilles T1 - Extensions of stability selection using subsamples of observations and covariates JF - Statistics and Computing N2 - We introduce extensions of stability selection, a method to stabilise variable selection methods introduced by Meinshausen and Buhlmann (J R Stat Soc 72:417-473, 2010). We propose to apply a base selection method repeatedly to random subsamples of observations and subsets of covariates under scrutiny, and to select covariates based on their selection frequency. We analyse the effects and benefits of these extensions. Our analysis generalizes the theoretical results of Meinshausen and Buhlmann (J R Stat Soc 72:417-473, 2010) from the case of half-samples to subsamples of arbitrary size. We study, in a theoretical manner, the effect of taking random covariate subsets using a simplified score model. Finally we validate these extensions on numerical experiments on both synthetic and real datasets, and compare the obtained results in detail to the original stability selection method. KW - Variable selection KW - Stability selection KW - Subsampling Y1 - 2016 U6 - https://doi.org/10.1007/s11222-015-9589-y SN - 0960-3174 SN - 1573-1375 VL - 26 SP - 1059 EP - 1077 PB - Springer CY - Dordrecht ER - TY - JOUR A1 - Blanchard, Gilles A1 - Kawanabe, Motoaki A1 - Sugiyama, Masashi A1 - Spokoiny, Vladimir G. A1 - Müller, Klaus-Robert T1 - In search of non-Gaussian components of a high-dimensional distribution N2 - Finding non-Gaussian components of high-dimensional data is an important preprocessing step for efficient information processing. 
This article proposes a new linear method to identify the "non-Gaussian subspace" within a very general semi-parametric framework. Our proposed method, called NGCA (non-Gaussian component analysis), is based on a linear operator which, to any arbitrary nonlinear (smooth) function, associates a vector belonging to the low-dimensional non-Gaussian target subspace, up to an estimation error. By applying this operator to a family of different nonlinear functions, one obtains a family of different vectors lying in a vicinity of the target space. As a final step, the target space itself is estimated by applying PCA to this family of vectors. We show that this procedure is consistent in the sense that the estimation error tends to zero at a parametric rate, uniformly over the family. Numerical examples demonstrate the usefulness of our method. Y1 - 2006 UR - http://portal.acm.org/affiliated/jmlr/ SN - 1532-4435 ER - TY - JOUR A1 - Blanchard, Gilles A1 - Mücke, Nicole T1 - Kernel regression, minimax rates and effective dimensionality BT - beyond the regular case JF - Analysis and applications N2 - We investigate whether kernel regularization methods can achieve minimax convergence rates over a source condition regularity assumption for the target function. These questions have been considered in past literature, but only under specific assumptions about the decay, typically polynomial, of the spectrum of the kernel mapping covariance operator. In the perspective of distribution-free results, we investigate this issue under a much weaker assumption on the eigenvalue decay, allowing for more complex behavior that can reflect different structures of the data at different scales.
KW - Kernel regression KW - minimax optimality KW - eigenvalue decay Y1 - 2020 U6 - https://doi.org/10.1142/S0219530519500258 SN - 0219-5305 SN - 1793-6861 VL - 18 IS - 4 SP - 683 EP - 696 PB - World Scientific CY - New Jersey ER - TY - BOOK A1 - Blanchard, Gilles T1 - Komplexitätsanalyse in Statistik und Lerntheorie : Antrittsvorlesung 2011-05-04 N2 - Gilles Blanchard's lecture offers insights into his work on the development and analysis of the statistical properties of learning algorithms. In many modern applications, for instance handwriting recognition or spam filtering, a computer program can automatically learn, on the basis of given examples, to make relevant predictions for further cases. Learning theory, which is closely related to statistics, deals with the mathematical analysis of the properties of such methods. Here the complexity of the learned prediction rule plays an important role. If the rule is too simple, it will ignore important details. If it is too complex, it will learn the given examples "by heart" and have no generalization power. Blanchard explains how mathematical tools help to find the right compromise between these two extremes. Y1 - 2011 UR - http://info.ub.uni-potsdam.de/multimedia/show_multimediafile.php?mediafile_id=551 PB - Univ.-Bibl. CY - Potsdam ER - TY - JOUR A1 - Blanchard, Gilles A1 - Carpentier, Alexandra A1 - Gutzeit, Maurilio T1 - Minimax Euclidean separation rates for testing convex hypotheses in R^d JF - Electronic journal of statistics N2 - We consider composite-composite testing problems for the expectation in the Gaussian sequence model where the null hypothesis corresponds to a closed convex subset C of R^d.
We adopt a minimax point of view and our primary objective is to describe the smallest Euclidean distance between the null and alternative hypotheses such that there is a test with small total error probability. In particular, we focus on the dependence of this distance on the dimension d and the variance 1/n, giving rise to the minimax separation rate. In this paper we discuss lower and upper bounds on this rate for different smooth and non-smooth choices for C. KW - Minimax hypothesis testing KW - Gaussian sequence model KW - nonasymptotic minimax separation rate Y1 - 2018 U6 - https://doi.org/10.1214/18-EJS1472 SN - 1935-7524 VL - 12 IS - 2 SP - 3713 EP - 3735 PB - Institute of Mathematical Statistics CY - Cleveland ER - TY - JOUR A1 - Blanchard, Gilles A1 - Dickhaus, Thorsten A1 - Roquain, Etienne A1 - Villers, Fanny T1 - On least favorable configurations for step-up-down tests JF - Statistica Sinica KW - False discovery rate KW - least favorable configuration KW - multiple testing Y1 - 2014 U6 - https://doi.org/10.5705/ss.2011.205 SN - 1017-0405 SN - 1996-8507 VL - 24 IS - 1 SP - 1 EP - U31 PB - Statistica Sinica, Institute of Statistical Science, Academia Sinica CY - Taipei ER - TY - JOUR A1 - Kloft, Marius A1 - Blanchard, Gilles T1 - On the Convergence Rate of l(p)-Norm Multiple Kernel Learning JF - Journal of machine learning research N2 - We derive an upper bound on the local Rademacher complexity of l(p)-norm multiple kernel learning, which yields a tighter excess risk bound than global approaches. Previous local approaches analyzed the case p = 1 only, while our analysis covers all cases 1 <= p <= infinity, assuming the different feature mappings corresponding to the different kernels to be uncorrelated. We also prove a lower bound showing that the bound is tight, and derive consequences regarding excess loss, namely fast convergence rates of the order O(n^(-alpha/(1+alpha))), where alpha is the minimum eigenvalue decay rate of the individual kernels.
KW - multiple kernel learning KW - learning kernels KW - generalization bounds KW - local Rademacher complexity Y1 - 2012 SN - 1532-4435 VL - 13 SP - 2465 EP - 2502 PB - Microtome Publishing CY - Brookline ER - TY - JOUR A1 - Bachoc, Francois A1 - Blanchard, Gilles A1 - Neuvial, Pierre T1 - On the post selection inference constant under restricted isometry properties JF - Electronic journal of statistics N2 - Uniformly valid confidence intervals post model selection in regression can be constructed based on Post-Selection Inference (PoSI) constants. PoSI constants are minimal for orthogonal design matrices, and can be upper bounded as a function of the sparsity of the set of models under consideration, for generic design matrices. In order to improve on these generic sparse upper bounds, we consider design matrices satisfying a Restricted Isometry Property (RIP) condition. We provide a new upper bound on the PoSI constant in this setting. This upper bound is an explicit function of the RIP constant of the design matrix, thereby giving an interpolation between the orthogonal setting and the generic sparse setting. We show that this upper bound is asymptotically optimal in many settings by constructing a matching lower bound. KW - Inference post model-selection KW - confidence intervals KW - PoSI constants KW - linear regression KW - high-dimensional inference KW - sparsity KW - restricted isometry property Y1 - 2018 U6 - https://doi.org/10.1214/18-EJS1490 SN - 1935-7524 VL - 12 IS - 2 SP - 3736 EP - 3757 PB - Institute of Mathematical Statistics CY - Cleveland ER - TY - JOUR A1 - Blanchard, Gilles A1 - Hoffmann, Marc A1 - Reiss, Markus T1 - Optimal adaptation for early stopping in statistical inverse problems JF - SIAM/ASA Journal on Uncertainty Quantification N2 - For linear inverse problems Y = A mu + zeta, it is classical to recover the unknown signal mu by iterative regularization methods mu-hat^(m), m = 0, 1, . . .
and halt at a data-dependent iteration tau using some stopping rule, typically based on a discrepancy principle, so that the weak (or prediction) squared error ||A(mu-hat^(tau) - mu)||^2 is controlled. In the context of statistical estimation with stochastic noise zeta, we study oracle adaptation (that is, compared to the best possible stopping iteration) in the strong squared error E[||mu-hat^(tau) - mu||^2]. For a residual-based stopping rule, oracle adaptation bounds are established for general spectral regularization methods. The proofs use bias and variance transfer techniques from weak prediction error to strong L^2-error, as well as convexity arguments and concentration bounds for the stochastic part. Adaptive early stopping for the Landweber method is studied in further detail and illustrated numerically. KW - linear inverse problems KW - early stopping KW - discrepancy principle KW - adaptive estimation KW - oracle inequality KW - Landweber iteration Y1 - 2018 U6 - https://doi.org/10.1137/17M1154096 SN - 2166-2525 VL - 6 IS - 3 SP - 1043 EP - 1075 PB - Society for Industrial and Applied Mathematics CY - Philadelphia ER -