We prove statistical rates of convergence for kernel-based least squares regression from i.i.d. data using a conjugate gradient (CG) algorithm, where regularization against over-fitting is obtained by early stopping. This method is related to Kernel Partial Least Squares, a regression method that combines supervised dimensionality reduction with least squares projection. Following the setting introduced in earlier related literature, we study so-called "fast convergence rates" depending on the regularity of the target regression function (measured by a source condition in terms of the kernel integral operator) and on the effective dimensionality of the data mapped into the kernel space. We obtain upper bounds, essentially matching known minimax lower bounds, for the L-2 (prediction) norm as well as for the stronger Hilbert norm, if the true regression function belongs to the reproducing kernel Hilbert space. If the latter assumption is not fulfilled, we obtain similar convergence rates for appropriate norms, provided additional unlabeled data are available.
In many real-world classification problems, the labels of training examples are randomly corrupted. Most previous theoretical work on classification with label noise assumes that the two classes are separable, that the label noise is independent of the true class label, or that the noise proportions for each class are known. In this work, we give conditions that are necessary and sufficient for the true class-conditional distributions to be identifiable. These conditions are weaker than those analyzed previously, and allow for the classes to be nonseparable and the noise levels to be asymmetric and unknown. The conditions essentially state that a majority of the observed labels are correct and that the true class-conditional distributions are "mutually irreducible," a concept we introduce that limits the similarity of the two distributions. For any label noise problem, there is a unique pair of true class-conditional distributions satisfying the proposed conditions, and we argue that this pair corresponds in a certain sense to maximal denoising of the observed distributions. Our results are facilitated by a connection to "mixture proportion estimation," which is the problem of estimating the maximal proportion of one distribution that is present in another. We establish a novel rate of convergence result for mixture proportion estimation, and apply this to obtain consistency of a discrimination rule based on surrogate loss minimization. Experimental results on benchmark data and a nuclear particle classification problem demonstrate the efficacy of our approach.
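Mixture proportion estimation, the key ingredient above, can be illustrated with a crude histogram estimator of kappa = inf_B F(B) / H(B), the maximal proportion of a distribution H contained in a distribution F. The estimator, bin counts, and thresholds below are our own simplifications, not the paper's construction:

```python
import numpy as np

def mixture_proportion(sample_f, sample_h, bins=20, min_count=50):
    """Histogram estimate of the maximal proportion of H inside F.

    Approximates kappa = inf over bins B of F(B) / H(B); only bins with
    at least `min_count` points from H are used, to keep the empirical
    ratios stable.
    """
    lo = min(sample_f.min(), sample_h.min())
    hi = max(sample_f.max(), sample_h.max())
    edges = np.linspace(lo, hi, bins + 1)
    cf, _ = np.histogram(sample_f, bins=edges)
    ch, _ = np.histogram(sample_h, bins=edges)
    mask = ch >= min_count
    ratios = (cf[mask] / len(sample_f)) / (ch[mask] / len(sample_h))
    return float(ratios.min())

# F = 0.7 * N(0,1) + 0.3 * N(4,1); the N(0,1) proportion in F is 0.7
rng = np.random.default_rng(0)
n = 20000
sample_h = rng.normal(0.0, 1.0, size=n)
is_h = rng.random(n) < 0.7
sample_f = np.where(is_h, rng.normal(0.0, 1.0, size=n),
                    rng.normal(4.0, 1.0, size=n))
kappa = mixture_proportion(sample_f, sample_h)  # near (at most) 0.7
```

The infimum over bins is what makes the problem delicate: tail bins have few samples, which is where the paper's rate-of-convergence analysis does its work.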
We consider a statistical inverse learning problem, where we observe the image of a function f through a linear operator A at i.i.d. random design points X_i, corrupted by additive noise. The distribution of the design points is unknown and can be very general. We analyze simultaneously the direct (estimation of Af) and the inverse (estimation of f) learning problems. In this general framework, we obtain strong and weak minimax optimal rates of convergence (as the number of observations n grows large) for a large class of spectral regularization methods over regularity classes defined through appropriate source conditions. This improves on or completes previous results obtained in related settings. The optimality of the obtained rates is shown not only in the exponent of n but also in the explicit dependence of the constant factor on the variance of the noise and on the radius of the source condition set.
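Spectral cut-off (truncated SVD) is the simplest member of the class of spectral regularization methods covered by such results. A minimal sketch on a toy discretized smoothing operator, with our own threshold and noise level:

```python
import numpy as np

def spectral_cutoff(A, y, threshold):
    """Truncated-SVD (spectral cut-off) regularization for A f ~ y:
    invert only the singular values above `threshold`."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > threshold
    return Vt[keep].T @ ((U[:, keep].T @ y) / s[keep])

# toy ill-posed problem: observe a smoothed version of f under noise
rng = np.random.default_rng(1)
n = 100
x = np.linspace(0.0, 1.0, n)
A = np.exp(-50.0 * (x[:, None] - x[None, :]) ** 2) / n  # smoothing operator
f_true = np.sin(2 * np.pi * x)
y = A @ f_true + 1e-3 * rng.normal(size=n)
f_hat = spectral_cutoff(A, y, threshold=1e-2)
```

The threshold trades bias (discarded components of f) against variance (noise amplified by small singular values); the minimax rates quantify the best achievable trade-off under a source condition.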
The authors discuss the use of the discrepancy principle for statistical inverse problems when the underlying operator is of trace class. Under this assumption the discrepancy principle is well defined; however, a plain use of it may occasionally fail, and it may yield sub-optimal rates. Therefore, a modification of the discrepancy principle is introduced that takes both of these deficiencies into account. For a variety of linear regularization schemes, as well as for conjugate gradient iteration, this modification is shown to yield order-optimal a priori error bounds under general smoothness assumptions. A posteriori error control is also possible, although in general at a sub-optimal rate. This study uses and complements previous results for bounded deterministic noise.
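For orientation, the plain (unmodified) discrepancy principle combined with Landweber iteration can be sketched as follows: iterate until the residual drops to the noise level. The toy operator, constants, and noise model are our own, and this is the deterministic-noise version, not the paper's statistical modification:

```python
import numpy as np

def landweber_discrepancy(A, y, delta, tau=1.5, max_iter=5000):
    """Landweber iteration f <- f + step * A.T @ (y - A @ f), stopped by
    the plain discrepancy principle: stop once ||A f - y|| <= tau * delta,
    where delta bounds the noise level and tau > 1 is a safety factor."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    f = np.zeros(A.shape[1])
    for k in range(max_iter):
        r = y - A @ f
        if np.linalg.norm(r) <= tau * delta:
            break
        f = f + step * (A.T @ r)
    return f, k

# toy inverse problem: recover f from its noisy discretized antiderivative
rng = np.random.default_rng(2)
n = 50
x = np.linspace(0.0, 1.0, n)
A = np.tril(np.ones((n, n))) / n          # discretized integration operator
f_true = np.sin(2 * np.pi * x)
noise = 0.01 * rng.normal(size=n)
delta = np.linalg.norm(noise)             # noise level, assumed known here
y = A @ f_true + noise
f_hat, stopped_at = landweber_discrepancy(A, y, delta)
```

The abstract's point is precisely that for statistical (random) noise this plain rule needs modification: the residual norm behaves differently than under a deterministic bound, so the stopping test must be adjusted to retain order-optimal rates.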
We introduce a theoretical framework for performing statistical hypothesis testing simultaneously over a fairly general, possibly uncountably infinite, set of null hypotheses. This extends the standard statistical setting for multiple hypothesis testing, which is restricted to a finite set. This work is motivated by numerous modern applications where the observed signal is modeled by a stochastic process over a continuum. As a measure of type I error, we extend the concept of false discovery rate (FDR) to this setting. The FDR is defined as the average ratio of the measures of two random sets, so its study presents some challenges and is of intrinsic mathematical interest. Our main result shows how to use the p-value process to control the FDR at a nominal level, either under arbitrary dependence of the p-values, or under the assumption that the finite-dimensional distributions of the p-value process have positive correlations of a specific type (weak PRDS). Both cases generalize existing results established in the finite setting, the latter leading to a less conservative procedure. The interest of this approach is demonstrated in several non-parametric examples: testing the mean/signal in a Gaussian white noise model, testing the intensity of a Poisson process, and testing the c.d.f. of i.i.d. random variables. Conceptually, an interesting feature of the setting advocated here is that it focuses directly on the intrinsic hypothesis space associated with a testing model on a random process, without referring to an arbitrary discretization.
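As a finite-m reference point for the continuum procedures above, the classical step-up (Benjamini-Hochberg) FDR procedure that these results generalize can be sketched as follows (function name and toy p-values are ours):

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.1):
    """Step-up FDR control with m hypotheses: reject the k smallest
    p-values, where k = max { i : p_(i) <= alpha * i / m }."""
    m = len(pvals)
    order = np.argsort(pvals)
    below = pvals[order] <= alpha * np.arange(1, m + 1) / m
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0]) + 1   # step-up: largest index below the line
        rejected[order[:k]] = True
    return rejected

# small worked example: with alpha = 0.2 the thresholds are 0.02, 0.04, ..., 0.20
p = np.array([0.01, 0.03, 0.05, 0.50, 0.60, 0.70, 0.80, 0.85, 0.90, 0.95])
rej = benjamini_hochberg(p, alpha=0.2)  # rejects the three smallest p-values
```

In the continuum setting of the paper, the rejected set becomes a random subset of the hypothesis space and the ratio of counts becomes a ratio of measures, but the step-up mechanism above is the object being generalized.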