
Contributions to the theoretical analysis of the algorithms with adversarial and dependent data

In this work I present concentration inequalities of Bernstein type for the norms of Banach-valued random sums under a general functional weak-dependence assumption (so-called $\mathcal{C}$-mixing). These inequalities are then used to prove, in the asymptotic framework, excess-risk upper bounds for regularised Hilbert-space-valued statistical learning rules under a τ-mixing assumption on the underlying training sample. These results from the batch statistical setting are supplemented with a regret analysis, over classes of Sobolev balls, of a kernel ridge regression type algorithm in the setting of online nonparametric regression with arbitrary data sequences; in particular, the robustness of the kernel-based forecaster is investigated. Afterwards, in the framework of sequential learning, the multi-armed bandit problem under a $\mathcal{C}$-mixing assumption on the arms' outputs is considered, and a complete regret analysis of a version of the Improved UCB algorithm is given. Lastly, the probabilistic inequalities of the first part are extended to deviation inequalities (of both Azuma-Hoeffding and Burkholder type) for partial sums of real-valued weakly dependent random fields, under a projective-type dependence condition.
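As a purely illustrative companion to the bandit contribution mentioned above, the following is a minimal Python sketch of the standard Improved UCB arm-elimination scheme for independent rewards. The thesis analyses a modified version of this algorithm under the $\mathcal{C}$-mixing assumption, which is not reproduced here; the function names and the pull(arm) reward callback are assumptions made for this example only.

import math
import random


def improved_ucb(pull, n_arms, horizon):
    """Improved-UCB-style elimination scheme; pull(arm) returns a reward in [0, 1]."""
    active = list(range(n_arms))   # arms not yet eliminated
    sums = [0.0] * n_arms          # cumulative reward per arm
    counts = [0] * n_arms          # number of pulls per arm
    delta = 1.0                    # current guess of the suboptimality gap, halved each round
    t = 0

    while t < horizon:
        # Pull every active arm until it has n_m samples for the current round.
        log_term = math.log(max(horizon * delta * delta, math.e))
        n_m = math.ceil(2.0 * log_term / (delta * delta))
        for arm in active:
            while counts[arm] < n_m and t < horizon:
                sums[arm] += pull(arm)
                counts[arm] += 1
                t += 1
        if t >= horizon:
            break

        # Eliminate arms whose optimistic estimate falls below the best pessimistic one.
        radius = math.sqrt(log_term / (2.0 * n_m))
        means = {arm: sums[arm] / counts[arm] for arm in active}
        best_lower = max(means[arm] - radius for arm in active)
        active = [arm for arm in active if means[arm] + radius >= best_lower]
        delta /= 2.0

    # Report the empirically best arm once the budget is exhausted.
    return max(range(n_arms), key=lambda a: sums[a] / max(counts[a], 1))


# Illustrative run with three Bernoulli arms (hypothetical reward model).
if __name__ == "__main__":
    probs = [0.3, 0.5, 0.7]
    best = improved_ucb(lambda a: float(random.random() < probs[a]), len(probs), 20000)
    print("arm selected as best:", best)

The sketch keeps only the elimination structure (sampling each surviving arm to a round-dependent count, then discarding arms whose upper confidence bound falls below the best lower confidence bound); the dependent-data version studied in the thesis requires different confidence radii.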

Metadata
Author details: Oleksandr Zadorozhnyi
Reviewer(s): Aurélien Garivier, Ingo Steinwart
Supervisor(s): Gilles Blanchard, Tobias Scheffer
Publication type: Doctoral Thesis
Language: English
Year of first publication: 2021
Publication year: 2021
Publishing institution: Universität Potsdam
Granting institution: Universität Potsdam
Date of final exam: 2021/07/08
Release date: 2021/07/13
Tag: Machine learning; Sobolev spaces; concentration inequalities; kernel methods; learning rates; multi-armed bandits; nonparametric regression; regularisation; sequential learning
Number of pages: 144
Organizational units: Mathematisch-Naturwissenschaftliche Fakultät / Institut für Mathematik
DDC classification: 5 Natural sciences and mathematics / 51 Mathematics / 510 Mathematics
License (German): Keine öffentliche Lizenz: Unter Urheberrechtsschutz (no public license: protected by copyright)