TY - JOUR
A1 - Blanchard, Gilles
A1 - Kawanabe, Motoaki
A1 - Sugiyama, Masashi
A1 - Spokoiny, Vladimir G.
A1 - Müller, Klaus-Robert
T1 - In search of non-Gaussian components of a high-dimensional distribution
N2 - Finding non-Gaussian components of high-dimensional data is an important preprocessing step for efficient information processing. This article proposes a new linear method to identify the "non-Gaussian subspace" within a very general semi-parametric framework. Our proposed method, called NGCA (non-Gaussian component analysis), is based on a linear operator which, to any arbitrary nonlinear (smooth) function, associates a vector belonging to the low-dimensional non-Gaussian target subspace, up to an estimation error. By applying this operator to a family of different nonlinear functions, one obtains a family of different vectors lying in a vicinity of the target space. As a final step, the target space itself is estimated by applying PCA to this family of vectors. We show that this procedure is consistent in the sense that the estimation error tends to zero at a parametric rate, uniformly over the family. Numerical examples demonstrate the usefulness of our method.
Y1 - 2006
UR - http://portal.acm.org/affiliated/jmlr/
SN - 1532-4435
ER -
TY - JOUR
A1 - Kawanabe, Motoaki
A1 - Blanchard, Gilles
A1 - Sugiyama, Masashi
A1 - Spokoiny, Vladimir G.
A1 - Müller, Klaus-Robert
T1 - A novel dimension reduction procedure for searching non-Gaussian subspaces
N2 - In this article, we consider high-dimensional data which contain a low-dimensional non-Gaussian structure contaminated with Gaussian noise, and we propose a new linear method to identify the non-Gaussian subspace. Our method, NGCA (Non-Gaussian Component Analysis), is based on a very general semi-parametric framework and comes with a theoretical guarantee that the estimation error of finding the non-Gaussian components tends to zero at a parametric rate. NGCA can be used not only as preprocessing for ICA, but also for extracting and visualizing more general structures like clusters. A numerical study demonstrates the usefulness of our method.
Y1 - 2006
UR - http://www.springerlink.com/content/105633/
U6 - https://doi.org/10.1007/11679363_19
SN - 0302-9743
ER -
TY - BOOK
A1 - Blanchard, Gilles
T1 - Komplexitätsanalyse in Statistik und Lerntheorie : Antrittsvorlesung 2011-05-04
N2 - Gilles Blanchard's lecture offers insights into his work on the development and analysis of statistical properties of learning algorithms. In many modern applications, for instance handwriting recognition or spam filtering, a computer program can automatically learn from given examples to make relevant predictions in new cases. Learning theory, which is closely related to statistics, is concerned with the mathematical analysis of the properties of such methods. The complexity of the learned prediction rule plays an important role here: if the rule is too simple, it will ignore important details; if it is too complex, it will learn the given examples "by heart" and have no power of generalization. Blanchard explains how mathematical tools help to find the right compromise between these two extremes.
Y1 - 2011
UR - http://info.ub.uni-potsdam.de/multimedia/show_multimediafile.php?mediafile_id=551
PB - Univ.-Bibl.
CY - Potsdam
ER -
TY - JOUR
A1 - Blanchard, Gilles
A1 - Mathé, Peter
T1 - Discrepancy principle for statistical inverse problems with application to conjugate gradient iteration
JF - Inverse problems : an international journal of inverse problems, inverse methods and computerised inversion of data
N2 - The authors discuss the use of the discrepancy principle for statistical inverse problems when the underlying operator is of trace class. Under this assumption the discrepancy principle is well defined; however, a plain use of it may occasionally fail and will yield sub-optimal rates. Therefore, a modification of the discrepancy is introduced which corrects both of the above deficiencies. For a variety of linear regularization schemes as well as for conjugate gradient iteration it is shown to yield order-optimal a priori error bounds under general smoothness assumptions. A posteriori error control is also possible, though in general at a sub-optimal rate. This study uses and complements previous results for bounded deterministic noise.
Y1 - 2012
U6 - https://doi.org/10.1088/0266-5611/28/11/115011
SN - 0266-5611
VL - 28
IS - 11
PB - IOP Publ. Ltd.
CY - Bristol
ER -
TY - JOUR
A1 - Kloft, Marius
A1 - Blanchard, Gilles
T1 - On the Convergence Rate of l(p)-Norm Multiple Kernel Learning
JF - Journal of Machine Learning Research
N2 - We derive an upper bound on the local Rademacher complexity of l(p)-norm multiple kernel learning, which yields a tighter excess risk bound than global approaches. Previous local approaches analyzed the case p = 1 only, while our analysis covers all cases 1 <= p <= infinity, assuming the different feature mappings corresponding to the different kernels to be uncorrelated. We also show a lower bound demonstrating that the upper bound is tight, and derive consequences regarding excess loss, namely fast convergence rates of the order O(n^(-alpha/(1+alpha))), where alpha is the minimum eigenvalue decay rate of the individual kernels.
KW - multiple kernel learning
KW - learning kernels
KW - generalization bounds
KW - local Rademacher complexity
Y1 - 2012
SN - 1532-4435
VL - 13
SP - 2465
EP - 2502
PB - Microtome Publishing
CY - Brookline
ER -
TY - INPR
A1 - Blanchard, Gilles
A1 - Mathé, Peter
T1 - Discrepancy principle for statistical inverse problems with application to conjugate gradient iteration
N2 - The authors discuss the use of the discrepancy principle for statistical inverse problems when the underlying operator is of trace class. Under this assumption the discrepancy principle is well defined; however, a plain use of it may occasionally fail and will yield sub-optimal rates. Therefore, a modification of the discrepancy is introduced which takes both of the above deficiencies into account. For a variety of linear regularization schemes as well as for conjugate gradient iteration this modification is shown to yield order-optimal a priori error bounds under general smoothness assumptions. A posteriori error control is also possible, though in general at a sub-optimal rate. This study uses and complements previous results for bounded deterministic noise.
T3 - Preprints des Instituts für Mathematik der Universität Potsdam - 1 (2012) 7
Y1 - 2012
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus-57117
ER -
TY - INPR
A1 - Blanchard, Gilles
A1 - Delattre, Sylvain
A1 - Roquain, Étienne
T1 - Testing over a continuum of null hypotheses
N2 - We introduce a theoretical framework for performing statistical hypothesis testing simultaneously over a fairly general, possibly uncountably infinite, set of null hypotheses. This extends the standard statistical setting for multiple hypothesis testing, which is restricted to a finite set. This work is motivated by numerous modern applications where the observed signal is modeled by a stochastic process over a continuum. As a measure of type I error, we extend the concept of false discovery rate (FDR) to this setting. The FDR is defined as the average ratio of the measure of two random sets, so that its study presents some challenge and is of intrinsic mathematical interest. Our main result shows how to use the p-value process to control the FDR at a nominal level, either under arbitrary dependence of p-values, or under the assumption that the finite-dimensional distributions of the p-value process have positive correlations of a specific type (weak PRDS). Both cases generalize existing results established in the finite setting, the latter leading to a less conservative procedure. The interest of this approach is demonstrated in several non-parametric examples: testing the mean/signal in a Gaussian white noise model, testing the intensity of a Poisson process, and testing the c.d.f. of i.i.d. random variables. Conceptually, an interesting feature of the setting advocated here is that it focuses directly on the intrinsic hypothesis space associated with a testing model on a random process, without referring to an arbitrary discretization.
T3 - Preprints des Instituts für Mathematik der Universität Potsdam - 1 (2012) 1
Y1 - 2012
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus-56877
ER -
TY - JOUR
A1 - Blanchard, Gilles
A1 - Dickhaus, Thorsten
A1 - Roquain, Étienne
A1 - Villers, Fanny
T1 - On least favorable configurations for step-up-down tests
JF - Statistica Sinica
KW - false discovery rate
KW - least favorable configuration
KW - multiple testing
Y1 - 2014
U6 - https://doi.org/10.5705/ss.2011.205
SN - 1017-0405
SN - 1996-8507
VL - 24
IS - 1
SP - 1
EP - U31
PB - Statistica Sinica, Institute of Statistical Science, Academia Sinica
CY - Taipei
ER -
TY - JOUR
A1 - Blanchard, Gilles
A1 - Delattre, Sylvain
A1 - Roquain, Étienne
T1 - Testing over a continuum of null hypotheses with False Discovery Rate control
JF - Bernoulli : official journal of the Bernoulli Society for Mathematical Statistics and Probability
N2 - We consider statistical hypothesis testing simultaneously over a fairly general, possibly uncountably infinite, set of null hypotheses, under the assumption that a suitable single test (and corresponding p-value) is known for each individual hypothesis. We extend to this setting the notion of false discovery rate (FDR) as a measure of type I error. Our main result studies specific procedures based on the observation of the p-value process. Control of the FDR at a nominal level is ensured either under arbitrary dependence of p-values, or under the assumption that the finite-dimensional distributions of the p-value process have positive correlations of a specific type (weak PRDS). Both cases generalize existing results established in the finite setting.
The interest of the approach is demonstrated in several non-parametric examples: testing the mean/signal in a Gaussian white noise model, testing the intensity of a Poisson process, and testing the c.d.f. of i.i.d. random variables.
KW - continuous testing
KW - false discovery rate
KW - multiple testing
KW - positive correlation
KW - step-up
KW - stochastic process
Y1 - 2014
U6 - https://doi.org/10.3150/12-BEJ488
SN - 1350-7265
SN - 1573-9759
VL - 20
IS - 1
SP - 304
EP - 333
PB - International Statistical Institute
CY - Voorburg
ER -
TY - INPR
A1 - Blanchard, Gilles
A1 - Krämer, Nicole
T1 - Convergence rates of kernel conjugate gradient for random design regression
N2 - We prove statistical rates of convergence for kernel-based least squares regression from i.i.d. data using a conjugate gradient algorithm, where regularization against overfitting is obtained by early stopping. This method is related to Kernel Partial Least Squares, a regression method that combines supervised dimensionality reduction with least squares projection. Following the setting introduced in earlier related literature, we study so-called "fast convergence rates" depending on the regularity of the target regression function (measured by a source condition in terms of the kernel integral operator) and on the effective dimensionality of the data mapped into the kernel space. We obtain upper bounds, essentially matching known minimax lower bounds, for the L^2 (prediction) norm as well as for the stronger Hilbert norm, if the true regression function belongs to the reproducing kernel Hilbert space. If the latter assumption is not fulfilled, we obtain similar convergence rates for appropriate norms, provided additional unlabeled data are available.
T3 - Preprints des Instituts für Mathematik der Universität Potsdam - 5 (2016) 8
KW - nonparametric regression
KW - reproducing kernel Hilbert space
KW - conjugate gradient
KW - partial least squares
KW - minimax convergence rates
Y1 - 2016
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-94195
SN - 2193-6943
VL - 5
IS - 8
PB - Universitätsverlag Potsdam
CY - Potsdam
ER -
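
As an illustration of the early-stopping-as-regularization idea described in the last record, the following is a minimal Python sketch: conjugate gradient applied to a kernel least squares system, with the stopping iteration chosen by held-out validation error. The Gaussian kernel, the toy data, and the validation-based stopping rule are illustrative assumptions of this sketch, not details taken from the preprint.

import numpy as np

def gaussian_kernel(X, Y, bandwidth=0.3):
    # Gaussian (RBF) kernel matrix between the rows of X and the rows of Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def cg_iterates(K, y, n_iter):
    # Conjugate gradient on K a = y, returning every iterate a_0, ..., a_{n_iter}.
    # Stopping the iteration early acts as regularization against overfitting.
    a = np.zeros_like(y)
    r = y - K @ a          # residual
    p = r.copy()           # search direction
    out = [a.copy()]
    for _ in range(n_iter):
        Kp = K @ p
        rr = r @ r
        step = rr / (p @ Kp)
        a = a + step * p
        r = r - step * Kp
        p = r + (r @ r / rr) * p
        out.append(a.copy())
    return out

# Toy random-design data: a noisy sine, split into training and validation parts.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(3.0 * X[:, 0]) + 0.3 * rng.standard_normal(200)
Xtr, ytr, Xva, yva = X[:150], y[:150], X[150:], y[150:]

Ktr = gaussian_kernel(Xtr, Xtr)
Kva = gaussian_kernel(Xva, Xtr)
iterates = cg_iterates(Ktr, ytr, n_iter=50)
val_mse = [np.mean((Kva @ a - yva) ** 2) for a in iterates]
t = int(np.argmin(val_mse))
print(f"stop at CG iteration {t}, validation MSE {val_mse[t]:.4f}")

Each CG iterate corresponds to a progressively less regularized fit of the training data, so choosing the iterate with the smallest validation error plays the role that the theoretical stopping rule plays in the preprint's analysis.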