65J22 Inverse problems
Keywords
- Hölder-type source condition
- Runge-Kutta methods
- ill-posed problems
- kernel method
- minimax rate
- regularization methods
- statistical inverse problem
- stopping rules
We consider a statistical inverse learning problem in which we observe the image of a function f under a linear operator A at i.i.d. random design points X_i, corrupted by additive noise. The distribution of the design points is unknown and may be quite general. We analyze the direct problem (estimation of Af) and the inverse problem (estimation of f) simultaneously. In this general framework, we obtain strong and weak minimax-optimal rates of convergence (as the number of observations n grows large) for a large class of spectral regularization methods over regularity classes defined through appropriate source conditions. This improves on, or completes, previous results obtained in related settings. The optimality of the obtained rates is established not only in the exponent of n but also in the explicit dependence of the constant factor on the noise variance and on the radius of the source-condition set.
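The setting above can be sketched in finite dimensions. The snippet below is a minimal illustration only, not the preprint's construction: the smoothing operator A, the uniform design, and all parameter values are assumptions, and Tikhonov (ridge) regularization is used as the simplest member of the spectral regularization family. Both the direct error (on Af) and the inverse error (on f) are reported, mirroring the two problems analyzed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative finite-dimensional setup (not from the preprint):
# f is a vector in R^d, A an assumed smoothing (Laplace-kernel) operator.
d, n, sigma, lam = 50, 500, 0.1, 1e-2
A = np.exp(-np.abs(np.subtract.outer(np.arange(d), np.arange(d))) / 5.0)
f_true = np.sin(np.linspace(0, 2 * np.pi, d))

# i.i.d. uniform design: observation i samples coordinate idx[i] of Af,
# superposed with additive Gaussian noise.
idx = rng.integers(0, d, size=n)
S = A[idx]                                  # y ≈ S f_true + noise
y = S @ f_true + sigma * rng.standard_normal(n)

# Tikhonov (ridge), the simplest spectral regularization g_lam(t) = 1/(t + lam):
# f_hat = (S^T S / n + lam I)^{-1} (S^T y / n)
f_hat = np.linalg.solve(S.T @ S / n + lam * np.eye(d), S.T @ y / n)

inverse_err = np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true)
direct_err = np.linalg.norm(A @ (f_hat - f_true)) / np.linalg.norm(A @ f_true)
print(f"inverse relative error: {inverse_err:.3f}")
print(f"direct  relative error: {direct_err:.3f}")
```

Other spectral methods (Landweber iteration, spectral cut-off, iterated Tikhonov) differ only in the filter function applied to the eigenvalues of the empirical covariance S^T S / n.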
This work is devoted to the convergence analysis of a modified Runge-Kutta-type iterative regularization method for solving nonlinear ill-posed problems under a priori and a posteriori stopping rules. Convergence rates for the proposed method are obtained under a Hölder-type source condition, provided the Fréchet derivative is properly scaled and locally Lipschitz continuous. Numerical results are obtained using the Levenberg-Marquardt and Radau methods.
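To fix ideas, here is a minimal sketch of the best-known member of this family: the Levenberg-Marquardt iteration (which can be viewed as a linearly implicit Euler step of Runge-Kutta type) with the discrepancy principle as the a posteriori stopping rule. The forward map, starting point, and all parameters below are hypothetical, chosen small and locally well-posed purely for illustration; the preprint treats genuinely ill-posed nonlinear operator equations.

```python
import numpy as np

# Hypothetical 2-D forward map F and its Fréchet derivative (Jacobian);
# a toy stand-in for the nonlinear operator equation F(x) = y.
def F(x):
    return np.array([x[0] ** 2 + x[1], x[0] + x[1] ** 2])

def F_prime(x):
    return np.array([[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]])

def levenberg_marquardt(y_delta, x0, delta, alpha=1.0, q=0.7, tau=1.1, max_iter=100):
    """Levenberg-Marquardt iteration with the discrepancy principle as the
    a posteriori stopping rule: stop once ||F(x_k) - y_delta|| <= tau * delta."""
    x = x0.copy()
    for _ in range(max_iter):
        r = F(x) - y_delta
        if np.linalg.norm(r) <= tau * delta:
            break
        J = F_prime(x)
        # Regularized Gauss-Newton step: (J^T J + alpha I) h = -J^T r
        h = np.linalg.solve(J.T @ J + alpha * np.eye(len(x)), -J.T @ r)
        x = x + h
        alpha *= q  # geometrically decaying regularization parameter
    return x

x_true = np.array([1.0, 2.0])
delta = 1e-3                                   # assumed noise level
y_delta = F(x_true) + delta * np.array([1.0, -1.0]) / np.sqrt(2)
x_hat = levenberg_marquardt(y_delta, x0=np.array([0.8, 1.5]), delta=delta)
print(x_hat)
```

Higher-order Runge-Kutta-type methods such as Radau replace the single linearized solve per step with several coupled stage solves, but retain the same stopping rules.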