TY - JOUR
A1 - Blanchard, Gilles
A1 - Zadorozhnyi, Oleksandr
T1 - Concentration of weakly dependent Banach-valued sums and applications to statistical learning methods
JF - Bernoulli : official journal of the Bernoulli Society for Mathematical Statistics and Probability
N2 - We obtain a Bernstein-type inequality for sums of Banach-valued random variables satisfying a weak dependence assumption of general type, under certain smoothness assumptions on the underlying Banach norm. We use this inequality to investigate, in the asymptotic regime, upper bounds on the error of a broad family of spectral regularization methods for reproducing kernel decision rules trained on a sample drawn from a τ-mixing process.
KW - Banach-valued process
KW - Bernstein inequality
KW - concentration
KW - spectral regularization
KW - weak dependence
Y1 - 2019
U6 - https://doi.org/10.3150/18-BEJ1095
SN - 1350-7265
SN - 1573-9759
VL - 25
IS - 4B
SP - 3421
EP - 3458
PB - International Statistical Institute
CY - Voorburg
ER -

TY - THES
A1 - Zadorozhnyi, Oleksandr
T1 - Contributions to the theoretical analysis of the algorithms with adversarial and dependent data
N2 - In this work I present concentration inequalities of Bernstein type for the norms of Banach-valued random sums under a general functional weak-dependence assumption (the so-called $\mathcal{C}$-mixing). The latter is then used to prove, in the asymptotic framework, excess risk upper bounds for regularised Hilbert-valued statistical learning rules under a τ-mixing assumption on the underlying training sample. These results (in the batch statistical setting) are then supplemented with a regret analysis, over classes of Sobolev balls, of a kernel ridge regression type algorithm in the setting of online nonparametric regression with arbitrary data sequences. Here, in particular, the question of robustness of the kernel-based forecaster is investigated. Afterwards, in the framework of sequential learning, the multi-armed bandit problem under a $\mathcal{C}$-mixing assumption on the arms' outputs is considered, and a complete regret analysis of a version of the Improved UCB algorithm is given. Lastly, the probabilistic inequalities of the first part are extended to deviation inequalities (of both Azuma-Hoeffding and Burkholder type) for partial sums of real-valued weakly dependent random fields (under a projective-type dependence condition).
KW - Machine learning
KW - nonparametric regression
KW - kernel methods
KW - regularisation
KW - concentration inequalities
KW - learning rates
KW - sequential learning
KW - multi-armed bandits
KW - Sobolev spaces
Y1 - 2021
ER -