Refine
Year of publication
- 2016 (71)
Document Type
- Article (48)
- Preprint (11)
- Doctoral Thesis (10)
- Monograph/Edited Volume (1)
- Master's Thesis (1)
Language
- English (71)
Is part of the Bibliography
- yes (71)
Keywords
- Cox model (2)
- Scientific discovery learning (2)
- conjugate gradient (2)
- geodesic distance (2)
- index (2)
- local time (2)
- minimax convergence rates (2)
- partial least squares (2)
- reproducing kernel Hilbert space (2)
- stochastic differential equations (2)
- (generalised) weakly differentiable function (1)
- (sub-) tropical Africa (1)
- (sub-) tropisches Afrika (1)
- AFM (1)
- Aerosole (1)
- Asymptotic variance of maximum partial likelihood estimate (1)
- Asymptotics of solutions (1)
- Bivariant K-theory (1)
- Brownian motion with discontinuous drift (1)
- Calculation (1)
- Case-Cohort-Design (1)
- Cauchy problem (1)
- Cheeger inequalities (1)
- Classification (1)
- Clifford algebra (1)
- Computer simulations (1)
- Conceptual change (1)
- Cone and edge pseudo-differential operators (1)
- Cox-Modell (1)
- Critical mathematics education (1)
- Critique (1)
- Data assimilation (1)
- Definitions (1)
- Detektion multipler Übergänge (1)
- Determinante (1)
- Diagrams (1)
- Dichte eines Maßes (1)
- Digital technology (1)
- Dirac operator (1)
- Dirichlet-to-Neumann operator (1)
- DySEM (1)
- Edge degenerate operators (1)
- Ellipticity of edge-degenerate operators (1)
- Ensemble Kalman filter (1)
- Estimability (1)
- FIB patterning (1)
- Fitness (1)
- Foucault (1)
- Fourier transform (1)
- Fredholm operator (1)
- Fredholm property (1)
- Fuzzy logic (1)
- Geomagnetic field (1)
- Geomagnetic models (1)
- Geometry (1)
- Gestures (1)
- Gibbs processes (1)
- Goal specificity (1)
- Graph Laplacians (1)
- HIV (1)
- HIV Erkrankung (1)
- Hodge theory (1)
- Holder-type source condition (1)
- Integrability (1)
- Intrinsic metrics for Dirichlet forms (1)
- Inversion (1)
- Ionospheric current (1)
- Kombinationstherapie (1)
- Laplace expansion (1)
- Lidar (1)
- Linearized equation (1)
- Lyapunov function (1)
- Lévy type processes (1)
- Markov-field property (1)
- Marx (1)
- Mellin and Green operators edge symbols (1)
- Mellin operators (1)
- Mellin oscillatory integrals (1)
- Mellin-Symbole (1)
- Mellin-Symbols (1)
- Meromorphic operator-valued symbols (1)
- Mikrophysik (1)
- Misconceptions (1)
- Moduli space (1)
- Multiple problem spaces (1)
- Navier-Stokes equations (1)
- Neumann problem (1)
- Nonlinear ill-posed problems (1)
- Nonparametric regression (1)
- Operator algebras (1)
- Ordnungs-Filtrierung (1)
- Paleoclimate reconstruction (1)
- Papangelou processes (1)
- Pfadintegrale (1)
- Pharmakokinetik (1)
- Physics concepts (1)
- Plio-Pleistocene (1)
- Plio-Pleistozän (1)
- Poincare inequality (1)
- Positive mass theorem (1)
- Problem solving (1)
- Proving (1)
- Proxy forward modeling (1)
- Pseudo-differential operators (1)
- Quasilinear equations (1)
- Rectifiable varifold (1)
- Regularisierung (1)
- Removable sets (1)
- Retrieval (1)
- Rho invariants (1)
- Ricci solitons (1)
- Riemann-Hilbert problem (1)
- Runge-Kutta methods (1)
- Skew Diffusionen (1)
- Sobolev Poincare inequality (1)
- Spectral theory of graphs (1)
- Stability selection (1)
- Subsampling (1)
- Surgery (1)
- Symplectic manifold (1)
- Technology (1)
- Three-space theory (1)
- Variable selection (1)
- Varifaltigkeit (1)
- Visuospatial reasoning (1)
- Wasserstein distance (1)
- Wiener measure (1)
- Wärmekern (1)
- Wärmeleitungsgleichung (1)
- Yamabe operator (1)
- aerosols (1)
- approximate differentiability (1)
- asymptotic expansion (1)
- asymptotische Entwicklung (1)
- boundary value problems (1)
- calculus of variations (1)
- characterization of point processes (1)
- clone (1)
- coarea formula (1)
- composition of terms (1)
- consistency (1)
- curvature varifold (1)
- decomposition (1)
- density of a measure (1)
- determinant (1)
- direct and indirect climate observations (1)
- direkte und indirekte Klimaobservablen (1)
- discontinuous drift (1)
- discrete spectrum (1)
- diskontinuierliche Drift (1)
- distributional boundary (1)
- division of spaces (1)
- eigenvalue asymptotics (1)
- elliptic complex (1)
- elliptic complexes (1)
- enlargement of filtration (1)
- erste Variation (1)
- essential position in terms (1)
- exact simulation (1)
- exact simulation method (1)
- exakte Simulation (1)
- exit calculus (1)
- first variation (1)
- generating sets (1)
- geodätischer Abstand (1)
- hard core interaction (1)
- heat asymptotics (1)
- heat equation (1)
- heat kernel (1)
- heavy-tailed distributions (1)
- ill-posed (1)
- indecomposable varifold (1)
- independent splittings (1)
- integral representation method (1)
- intrinsic diameter (1)
- intrinsischer Diameter (1)
- inversion (1)
- isoperimetric estimates (1)
- isoperimetric inequality (1)
- isoperimetrische Ungleichung (1)
- kernel method (1)
- kernel-based Bayesian inference (1)
- kernel-basierte Bayes'sche Inferenz (1)
- label noise (1)
- lattice packing and covering (1)
- lidar (1)
- linear hyperidentity (1)
- linear hypersubstitution (1)
- linear identity (1)
- linear term (1)
- logistic regression analysis (1)
- logistische Regression (1)
- manifold with boundary (1)
- mathematical modelling (1)
- mathematische Modellierung (1)
- mean curvature (1)
- microphysics (1)
- minimax rate (1)
- mittlere Krümmung (1)
- mixture proportion estimation (1)
- modal analysis (1)
- model selection (1)
- multi-change point detection (1)
- multilevel Monte Carlo (1)
- multiplicative Lévy noise (1)
- nonparametric regression (1)
- normal reflection (1)
- operator valued symbols (1)
- optimal transport (1)
- order filtration (1)
- p-Laplace equation (1)
- p-Laplace operator (1)
- parameter estimation (1)
- partial clone (1)
- path integral (1)
- periodic Gaussian process (1)
- periodic Ornstein-Uhlenbeck process (1)
- pharmacokinetics (1)
- polyhedra and polytopes (1)
- rectifiable varifold (1)
- regular figures (1)
- regularization (1)
- regularization methods (1)
- rektifizierbare Varifaltigkeit (1)
- relative isoperimetric inequality (1)
- relative ranks (1)
- restricted range (1)
- retrieval (1)
- reversible measure (1)
- schlecht gestellt (1)
- sequential data assimilation (1)
- singular manifolds (1)
- singuläre Mannigfaltigkeiten (1)
- skew Brownian motion (1)
- skew diffusion (1)
- skew diffusions (1)
- stable variety (1)
- star product (1)
- statistical inverse problem (1)
- stochastic completeness (1)
- stopping rules (1)
- structured cantilever (1)
- surrogate loss (1)
- survival analysis (1)
- terrigener Staub (1)
- terrigenous dust (1)
- time series (1)
- trace (1)
- transformation semigroups (1)
- unzerlegbare Varifaltigkeit (1)
- varifold (1)
- viral fitness (1)
- weighted Hölder spaces (1)
- weighted Sobolev spaces (1)
Institute
- Institut für Mathematik (71)
We study the interplay between analysis on manifolds with singularities and complex analysis and develop new structures of operators based on the Mellin transform and tools for iterating the calculus for higher singularities. We refer to the idea of interpreting boundary value problems (BVPs) in terms of pseudo-differential operators with a principal symbolic hierarchy, taking into account that BVPs are a source of cone and edge operator algebras. The respective cone and edge pseudo-differential algebras in turn are the starting point of higher corner theories. In addition there are deep relationships between corner operators and complex analysis. This will be illustrated by the Mellin symbolic calculus.
This thesis is focused on the study and the exact simulation of two classes of real-valued Brownian diffusions: multi-skew Brownian motions with constant drift and Brownian diffusions whose drift admits a finite number of jumps.
The skew Brownian motion was introduced in the sixties by Itô and McKean, who constructed it from the reflected Brownian motion, flipping its excursions from the origin with a given probability. Such a process behaves as the original one except at the point 0, which plays the role of a semipermeable barrier. More generally, a skew diffusion with several semipermeable barriers, called a multi-skew diffusion, is a diffusion everywhere except when it reaches one of the barriers, where it is partially reflected with a probability depending on that particular barrier. Clearly, a multi-skew diffusion can be characterized either as the solution of a stochastic differential equation involving weighted local times (these terms providing the semi-permeability) or by its infinitesimal generator as a Markov process.
In this thesis we first obtain a contour integral representation for the transition semigroup of the multi-skew Brownian motion with constant drift, based on a fine analysis of its complex properties. Thanks to this representation we write explicitly the transition densities of the two-skew Brownian motion with constant drift as an infinite series involving, in particular, Gaussian functions and their tails.
Then we propose a new and useful application of a generalization of the well-known rejection sampling method. Recall that this basic algorithm allows one to sample from a target density as soon as one finds an easy-to-sample instrumental density such that the ratio between the target and the instrumental densities is bounded. The generalized rejection sampling method allows one to sample exactly from densities for which only an approximation is known. The originality of the algorithm lies in the fact that one ultimately samples directly from the law without any approximation, except machine precision.
As an application, we sample from the transition density of the two-skew Brownian motion with or without constant drift. The instrumental density is the transition density of the Brownian motion with constant drift, and we provide a useful uniform bound for the ratio of the densities. We also present numerical simulations to study the efficiency of the algorithm.
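The classical rejection sampling step on which the generalized method builds can be sketched as follows. The target here is a Beta(2, 2) density with a uniform instrumental density and bound M = 1.5; these choices are purely illustrative and are not the densities used in the thesis.

```python
import random

def rejection_sample(target_pdf, instrumental_sample, instrumental_pdf, bound, rng):
    """Classical rejection sampling: draw Y from the instrumental density and
    accept it with probability target_pdf(Y) / (bound * instrumental_pdf(Y))."""
    while True:
        y = instrumental_sample(rng)
        u = rng.random()
        if u * bound * instrumental_pdf(y) <= target_pdf(y):
            return y

# Toy target: Beta(2, 2) density 6 x (1 - x) on [0, 1]; instrumental: Uniform(0, 1).
# The density ratio is bounded by M = 1.5 (its maximum, attained at x = 1/2).
rng = random.Random(0)
samples = [rejection_sample(lambda x: 6 * x * (1 - x),
                            lambda r: r.random(),
                            lambda x: 1.0,
                            1.5, rng)
           for _ in range(20000)]
mean = sum(samples) / len(samples)  # Beta(2, 2) has mean 1/2
```

The generalized method of the thesis replaces the exact density ratio by approximations in a way that still produces exact draws; the loop structure above is only the classical starting point.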
The second aim of this thesis is to develop an exact simulation algorithm for a Brownian diffusion whose drift admits several jumps. In the literature, so far only the case of a continuous drift (resp. of a drift with one finite jump) has been treated. The theoretical method we give allows one to deal with any finite number of discontinuities. We then focus on the case of two jumps, using the transition densities of the two-skew Brownian motion obtained before. Various examples are presented and the efficiency of our approach is discussed.
The human immunodeficiency virus (HIV) has resisted nearly three decades of efforts targeting a cure. Sustained suppression of the virus has remained a challenge, mainly due to the remarkable evolutionary adaptation that the virus exhibits by accumulating drug-resistant mutations in its genome. Current therapeutic strategies aim at achieving and maintaining a low viral burden and typically involve multiple drugs. The choice of optimal combinations of these drugs is crucial, particularly when treatment failure has previously occurred with certain other drugs. An understanding of the dynamics of viral mutant genotypes aids in assessing treatment failure with a given drug combination and in exploring potential salvage treatment regimens.
Mathematical models of viral dynamics have proved invaluable in understanding the viral life cycle and the impact of antiretroviral drugs. However, such models typically use simplified and coarse-grained mutation schemes, which curbs the extent to which they can be applied to drug-specific clinical mutation data for assessing potential next-line therapies. Statistical models of mutation accumulation have served well in dissecting mechanisms of resistance evolution by reconstructing mutation pathways under different drug environments. While these models perform well in predicting treatment outcomes by statistical learning, they do not incorporate drug effects mechanistically. Additionally, due to an inherent lack of temporal features, such models are less informative on aspects such as predicting mutational abundance at treatment failure. This limits their application in analyzing the pharmacology of antiretroviral drugs, in particular time-dependent characteristics of HIV therapy such as pharmacokinetics and pharmacodynamics, and in understanding the impact of drug efficacy on mutation dynamics.
In this thesis, we develop an integrated model of in vivo viral dynamics incorporating drug-specific mutation schemes learned from clinical data. Our combined modelling approach enables us to study the dynamics of different mutant genotypes and to assess mutational abundance at virological failure. As an application of our model, we estimate in vivo fitness characteristics of viral mutants under different drug environments. Our approach also extends naturally to multiple-drug therapies. Further, we demonstrate the versatility of our model by showing how it can be modified to incorporate recently elucidated mechanisms of drug action, including molecules that target host factors.
Additionally, we address another important aspect of the clinical management of HIV disease, namely drug pharmacokinetics. Time-dependent changes in in vivo drug concentration can have an impact on the antiviral effect and also influence decisions on dosing intervals. We present a framework that provides an integrated understanding of key characteristics of multiple-dosing regimens, including drug accumulation ratios and half-lives, and then explore the impact of drug pharmacokinetics on viral suppression.
Finally, parameter identifiability in such nonlinear models of viral dynamics is always a concern, and we investigate techniques that alleviate this issue in our setting.
Change points in time series are perceived as heterogeneities in the statistical or dynamical characteristics of the observations. Unraveling such transitions yields essential information for the understanding of the observed system’s intrinsic evolution and potential external influences. A precise detection of multiple changes is therefore of great importance for various research disciplines, such as environmental sciences, bioinformatics and economics. The primary purpose of the detection approach introduced in this thesis is the investigation of transitions underlying direct or indirect climate observations. In order to develop a diagnostic approach capable of capturing such a variety of natural processes, the generic statistical features in terms of central tendency and dispersion are employed in the light of Bayesian inversion. In contrast to established Bayesian approaches to multiple changes, the generic approach proposed in this thesis is not formulated in the framework of specialized partition models of high dimensionality requiring prior specification, but as a robust kernel-based approach of low dimensionality employing least informative prior distributions.
First of all, a local Bayesian inversion approach is developed to robustly infer the location and the generic patterns of a single transition. The analysis of synthetic time series comprising changes of different observational evidence, data loss and outliers validates the performance, consistency and sensitivity of the inference algorithm. To systematically investigate time series for multiple changes, the Bayesian inversion is extended to a kernel-based inference approach. By introducing basic kernel measures, the weighted kernel inference results are composed into a proxy for the posterior distribution of multiple transitions. The detection approach is applied to environmental time series from the Nile river at Aswan and the weather station Tuscaloosa, Alabama, both comprising documented changes. The method’s performance confirms the approach as a powerful diagnostic tool for deciphering multiple changes underlying direct climate observations.
Finally, the kernel-based Bayesian inference approach is used to investigate a set of complex terrigenous dust records interpreted as climate indicators of the African region in the Plio-Pleistocene period. A detailed inference unravels multiple transitions underlying the indirect climate observations, which are interpreted as conjoint changes. The identified conjoint changes coincide with established global climate events. In particular, the two-step transition associated with the establishment of the modern Walker circulation contributes to the current discussion about the influence of paleoclimate changes on the environmental conditions in tropical and subtropical Africa around two million years ago.
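As a caricature of single change-point inference (far simpler than the kernel-based Bayesian approach of the thesis), the posterior over the location of one mean shift in Gaussian data with known unit variance and flat priors can be computed directly from segment means. The data set, jump size and noise level below are invented for illustration.

```python
import math
import random

def change_point_posterior(xs):
    """Posterior over a single change-point location k (series split after
    index k), assuming Gaussian noise with unit variance and flat priors:
    each split is scored by the profile log-likelihood at the segment means."""
    n = len(xs)
    logliks = []
    for k in range(1, n):
        left, right = xs[:k], xs[k:]
        m1 = sum(left) / len(left)
        m2 = sum(right) / len(right)
        ll = -0.5 * (sum((x - m1) ** 2 for x in left) +
                     sum((x - m2) ** 2 for x in right))
        logliks.append(ll)
    mx = max(logliks)
    weights = [math.exp(ll - mx) for ll in logliks]
    z = sum(weights)
    return [w / z for w in weights]  # entry k-1 is P(change after index k | data)

# Synthetic series: mean 0 for 60 points, then mean 3 (true change at k = 60).
rng = random.Random(1)
data = ([rng.gauss(0.0, 1.0) for _ in range(60)] +
        [rng.gauss(3.0, 1.0) for _ in range(60)])
post = change_point_posterior(data)
k_hat = 1 + max(range(len(post)), key=post.__getitem__)  # MAP location
```

The thesis's approach additionally handles dispersion changes, outliers and multiple transitions via kernel measures; this sketch only illustrates the Bayesian inversion for one mean shift.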
We consider the Navier-Stokes equations in the layer R^n x [0,T] over R^n with finite T > 0. Using the standard fundamental solutions of the Laplace operator and the heat operator, we reduce the Navier-Stokes equations to a nonlinear Fredholm equation of the form (I+K) u = f, where K is a compact continuous operator in anisotropic normed Hölder spaces weighted at the point at infinity with respect to the space variables. Actually, the weight function is included to provide a finite energy estimate for solutions to the Navier-Stokes equations for all t in [0,T]. On using the particular properties of the de Rham complex we conclude that the Fréchet derivative (I+K)' is continuously invertible at each point of the Banach space under consideration and the map I+K is open and injective in the space. In this way the Navier-Stokes equations prove to induce an open one-to-one mapping in the scale of Hölder spaces.
The main results of this thesis are formulated in a class of surfaces (varifolds) that generalizes closed and connected smooth submanifolds of Euclidean space while allowing singularities. The setting is an indecomposable varifold of dimension at least two in some Euclidean space such that the first variation is locally bounded, the total variation is absolutely continuous with respect to the weight measure, the density of the weight measure is at least one outside a set of weight measure zero, and the generalized mean curvature is locally summable to a natural power (the dimension of the varifold minus one) with respect to the weight measure. The thesis presents an improved estimate of the set where the lower density is small in terms of the one-dimensional Hausdorff measure. Moreover, if the support of the weight measure is compact, then the intrinsic diameter with respect to the support of the weight measure is estimated in terms of the generalized mean curvature. This estimate is analogous to the diameter control of Peter Topping for closed connected manifolds smoothly immersed in some Euclidean space. Previously, it was not known whether the hypotheses of this thesis imply that any two points in the support of the weight measure have finite geodesic distance.
Convoluted Brownian motion
(2016)
In this paper we analyse semimartingale properties of a class of Gaussian periodic processes, called convoluted Brownian motions, obtained by convolution between a deterministic function and a Brownian motion. A classical example in this class is the periodic Ornstein-Uhlenbeck process. We compute their characteristics and show that, in general, they are neither Markovian nor satisfy a time-Markov field property. Nevertheless, by enlargement of filtration and/or addition of a one-dimensional component, one can in some cases recover the Markov property. We treat exhaustively the case of the two-dimensional trigonometric convoluted Brownian motion and of the higher-dimensional monomial convoluted Brownian motion.
Lyapunov Exponents
(2016)
Lyapunov exponents lie at the heart of chaos theory, and are widely used in studies of complex dynamics. Utilising a pragmatic, physical approach, this self-contained book provides a comprehensive description of the concept. Beginning with the basic properties and numerical methods, it then guides readers through to the most recent advances in applications to complex systems. Practical algorithms are thoroughly reviewed and their performance is discussed, while a broad set of examples illustrate the wide range of potential applications. The description of various numerical and analytical techniques for the computation of Lyapunov exponents offers an extensive array of tools for the characterization of phenomena such as synchronization, weak and global chaos in low and high-dimensional set-ups, and localization. This text equips readers with all the investigative expertise needed to fully explore the dynamical properties of complex systems, making it ideal for both graduate students and experienced researchers.
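As a minimal illustration of the numerical computation the book surveys, the Lyapunov exponent of a one-dimensional map is the trajectory average of log |f'(x)|; for the fully chaotic logistic map (r = 4) the analytic value is log 2. The initial condition, transient length and iteration count below are illustrative choices.

```python
import math

def lyapunov_logistic(r, x0, n_transient=1000, n_steps=50000):
    """Estimate the Lyapunov exponent of the logistic map x -> r x (1 - x)
    as the trajectory average of log |f'(x)| = log |r (1 - 2 x)|."""
    x = x0
    for _ in range(n_transient):       # discard the transient
        x = r * x * (1 - x)
    acc = 0.0
    for _ in range(n_steps):
        acc += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return acc / n_steps

lam = lyapunov_logistic(4.0, 0.1)      # analytic value for r = 4 is log 2
```

For higher-dimensional systems one instead evolves tangent vectors with periodic reorthonormalization (the standard Benettin scheme); the scalar case above conveys the idea.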
It is "scientific folklore" coming from physical heuristics that solutions to the heat equation on a Riemannian manifold can be represented by a path integral. However, the problem with such path integrals is that they are notoriously ill-defined. One way to make them rigorous (which is often applied in physics) is finite-dimensional approximation, or time-slicing approximation: Given a fine partition of the time interval into small subintervals, one restricts the integration domain to paths that are geodesic on each subinterval of the partition. These finite-dimensional integrals are well-defined, and the (infinite-dimensional) path integral then is defined as the limit of these (suitably normalized) integrals, as the mesh of the partition tends to zero.
In this thesis, we show that indeed, solutions to the heat equation on a general compact Riemannian manifold with boundary are given by such time-slicing path integrals. Here we consider the heat equation for general Laplace type operators, acting on sections of a vector bundle. We also obtain similar results for the heat kernel, although in this case, one has to restrict to metrics satisfying a certain smoothness condition at the boundary. One of the most important manipulations one would like to do with path integrals is taking their asymptotic expansions; in the case of the heat kernel, this is the short time asymptotic expansion. In order to use time-slicing approximation here, one needs the approximation to be uniform in the time parameter. We show that this is possible by giving strong error estimates.
Finally, we apply these results to obtain short time asymptotic expansions of the heat kernel also in degenerate cases (i.e. at the cut locus). Furthermore, our results allow us to relate the asymptotic expansion of the heat kernel to a formal asymptotic expansion of the infinite-dimensional path integral, which yields relations between geometric quantities on the manifold and on the loop space. In particular, we show that the lowest order term in the asymptotic expansion of the heat kernel is essentially given by the Fredholm determinant of the Hessian of the energy functional. We also investigate how this relates to the zeta-regularized determinant of the Jacobi operator along minimizing geodesics.
We prove statistical rates of convergence for kernel-based least squares regression from i.i.d. data using a conjugate gradient algorithm, where regularization against overfitting is obtained by early stopping. This method is related to Kernel Partial Least Squares, a regression method that combines supervised dimensionality reduction with least squares projection. Following the setting introduced in earlier related literature, we study so-called "fast convergence rates" depending on the regularity of the target regression function (measured by a source condition in terms of the kernel integral operator) and on the effective dimensionality of the data mapped into the kernel space. We obtain upper bounds, essentially matching known minimax lower bounds, for the L^2 (prediction) norm as well as for the stronger Hilbert norm, if the true regression function belongs to the reproducing kernel Hilbert space. If the latter assumption is not fulfilled, we obtain similar convergence rates for appropriate norms, provided additional unlabeled data are available.
Using an algorithm based on a retrospective rejection sampling scheme, we propose an exact simulation of a Brownian diffusion whose drift admits several jumps. We treat explicitly and extensively the case of two jumps, providing numerical simulations. Our main contribution is to manage the technical difficulty due to the presence of two jumps thanks to a new explicit expression of the transition density of the skew Brownian motion with two semipermeable barriers and a constant drift.
When trying to extend the Hodge theory for elliptic complexes on compact closed manifolds to the case of compact manifolds with boundary, one is led to a boundary value problem for the Laplacian of the complex which is usually referred to as the Neumann problem. We study the Neumann problem for a larger class of sequences of differential operators on a compact manifold with boundary. These are sequences of small curvature, i.e., bearing the property that the composition of any two neighbouring operators has order less than two.
In many statistical applications, the aim is to model the relationship between covariates and some outcome. The choice of the appropriate model depends on the outcome and the research objectives: linear models for continuous outcomes, logistic models for binary outcomes and the Cox model for time-to-event data. In epidemiological, medical, biological, societal and economic studies, logistic regression is widely used to describe the relationship between a binary response variable and a set of explanatory covariates. However, epidemiologic cohort studies are quite expensive in terms of data management, since following up a large number of individuals takes a long time. Therefore, the case-cohort design is applied to reduce the cost and time of data collection. Case-cohort sampling draws a small random sample from the entire cohort, called the subcohort. The advantage of this design is that covariate and follow-up data are recorded only for the subcohort and for all cases (all members of the cohort who develop the event of interest during follow-up).
In this thesis, we investigate the estimation in the logistic model for case-cohort design. First, a model with a binary response and a binary covariate is considered. The maximum likelihood estimator (MLE) is described and its asymptotic properties are established. An estimator for the asymptotic variance of the estimator based on the maximum likelihood approach is proposed; this estimator differs slightly from the estimator introduced by Prentice (1986). Simulation results for several proportions of the subcohort show that the proposed estimator gives lower empirical bias and empirical variance than Prentice's estimator.
Then the MLE in the logistic regression with a discrete covariate under the case-cohort design is studied, extending the approach of the binary covariate model. By proving asymptotic normality of the estimators, standard errors for the estimators can be derived. The simulation study demonstrates the estimation procedure for the logistic regression model with a one-dimensional discrete covariate. Simulation results for several proportions of the subcohort and different choices of the underlying parameters indicate that the estimator developed here performs reasonably well. Moreover, a comparison between theoretical values and simulation results for the asymptotic variance of the estimator is presented.
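For orientation, plain maximum likelihood for logistic regression on a fully observed cohort (not the case-cohort variant analyzed in the thesis) can be computed by Newton-Raphson; the simulated data and the true parameters (0.3, 0.8) below are invented for illustration.

```python
import math
import random

def logistic_mle(xs, ys, n_iter=25):
    """Newton-Raphson for the MLE of (a, b) in
    P(Y = 1 | x) = 1 / (1 + exp(-(a + b x))), scalar covariate case."""
    a = b = 0.0
    for _ in range(n_iter):
        ga = gb = haa = hab = hbb = 0.0   # gradient and (negated) Hessian
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(a + b * x)))
            ga += y - p
            gb += (y - p) * x
            w = p * (1 - p)
            haa += w
            hab += w * x
            hbb += w * x * x
        det = haa * hbb - hab * hab       # invert the 2x2 information matrix
        a += (hbb * ga - hab * gb) / det
        b += (-hab * ga + haa * gb) / det
    return a, b

rng = random.Random(2)
xs = [rng.gauss(0, 1) for _ in range(2000)]
ys = [1 if rng.random() < 1 / (1 + math.exp(-(0.3 + 0.8 * x))) else 0 for x in xs]
a_hat, b_hat = logistic_mle(xs, ys)
```

Under case-cohort sampling the likelihood is modified to account for the subcohort selection, which is exactly where the asymptotic variance analysis of the thesis departs from this full-cohort sketch.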
Clearly, logistic regression suffices when the binary outcome is available for all subjects over a fixed time interval. In practice, however, the observations in clinical trials are frequently collected over different time periods, and subjects may drop out or relapse from other causes during follow-up. Hence, logistic regression is not appropriate for incomplete follow-up data; for example, when an individual drops out of the study before the end of data collection, or when the event of interest has not occurred for an individual by the end of the study. Such observations are called censored. Survival analysis is needed to handle these problems; moreover, it takes the time to the occurrence of the event of interest into account. The Cox model, which can effectively handle censored data, has been widely used in survival analysis. Cox (1972) proposed the model, which focuses on the hazard function. The Cox model is assumed to be
λ(t|x) = λ0(t) exp(β^T x),
where λ0(t) is an unspecified baseline hazard at time t, x is the vector of covariates and β is a p-dimensional vector of coefficients.
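For a scalar covariate and no tied event times, the log partial likelihood behind the Cox model above can be evaluated directly: each observed event contributes its own linear predictor minus the log of the sum of exponentiated predictors over the risk set. The tiny data set below and the grid search standing in for Newton-Raphson are purely illustrative.

```python
import math

def cox_log_partial_likelihood(beta, times, events, covs):
    """Cox log partial likelihood (scalar covariate, no tied event times):
    sum over events i of beta*x_i - log(sum_{j: t_j >= t_i} exp(beta*x_j))."""
    ll = 0.0
    for i, (ti, di) in enumerate(zip(times, events)):
        if not di:           # censored subjects contribute only to risk sets
            continue
        risk = sum(math.exp(beta * covs[j])
                   for j in range(len(times)) if times[j] >= ti)
        ll += beta * covs[i] - math.log(risk)
    return ll

# Hypothetical data: event indicator 1 = observed event, 0 = censored.
times  = [2.0, 3.0, 5.0, 7.0, 11.0]
events = [1,   1,   0,   1,   1]
covs   = [0.5, -1.0, 0.3, 1.2, -0.4]
# Crude grid maximization in place of Newton-Raphson.
grid = [i / 100 for i in range(-300, 301)]
beta_hat = max(grid, key=lambda b: cox_log_partial_likelihood(b, times, events, covs))
```

Note that at β = 0 each event term reduces to minus the log of the risk-set size, a handy sanity check when implementing the likelihood.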
In this thesis, the Cox model is considered from the viewpoint of experimental design. The estimability of the parameter β0 in the Cox model, where β0 denotes the true value of β, and the choice of optimal covariates are investigated. We give new representations of the observed information matrix In(β) and extend results for the Cox model of Andersen and Gill (1982). In this way, conditions for the estimability of β0 are formulated. Under some regularity conditions, the matrix Σ is the inverse of the asymptotic variance matrix of the maximum partial likelihood estimator (MPLE) of β0 in the Cox model, and some properties of this asymptotic variance matrix are highlighted. Based on the results on asymptotic estimability, the calculation of locally optimal covariates is considered and illustrated by examples. In a sensitivity analysis, the efficiency of given covariates is calculated; for neighborhoods of the exponential models, it appears that for fixed parameters β0 the efficiencies do not change very much for different baseline hazard functions. Some proposals for applicable optimal covariates and a calculation procedure for finding optimal covariates are discussed.
Furthermore, the extension of the Cox model in which time-dependent coefficients are allowed is investigated. In this situation, the maximum local partial likelihood estimator for estimating the coefficient function β(·) is described. Based on this estimator, we formulate a new test procedure for testing whether a one-dimensional coefficient function β(·) has a prespecified parametric form, say β(·; ϑ). The score function derived from the local constant partial likelihood function at d distinct grid points is considered. It is shown that the distribution of the properly standardized quadratic form of this d-dimensional vector tends, under the null hypothesis, to a chi-squared distribution. Moreover, the limit statement remains true when the unknown ϑ0 is replaced by the MPLE in the hypothetical model, and an asymptotic α-test is given by the quantiles or p-values of the limiting chi-squared distribution. Finally, we propose a bootstrap version of this test. The bootstrap test is only defined for the special case of testing whether the coefficient function is constant. A simulation study illustrates the behavior of the bootstrap test under the null hypothesis and a particular alternative, and gives quite good results for the chosen underlying model.
References
P. K. Andersen and R. D. Gill. Cox's regression model for counting processes: a large sample study. Ann. Statist., 10(4):1100–1120, 1982.
D. R. Cox. Regression models and life-tables. J. Roy. Statist. Soc. Ser. B, 34:187–220, 1972.
R. L. Prentice. A case-cohort design for epidemiologic cohort studies and disease prevention trials. Biometrika, 73(1):1–11, 1986.
We consider a statistical inverse learning problem, where we observe the image of a function f through a linear operator A at i.i.d. random design points X_i, superposed with an additional noise. The distribution of the design points is unknown and can be very general. We analyze simultaneously the direct (estimation of Af) and the inverse (estimation of f) learning problems. In this general framework, we obtain strong and weak minimax optimal rates of convergence (as the number of observations n grows large) for a large class of spectral regularization methods over regularity classes defined through appropriate source conditions. This improves on or completes previous results obtained in related settings. The optimality of the obtained rates is shown not only in the exponent in n but also in the explicit dependence of the constant factor on the variance of the noise and the radius of the source condition set.
The aim of this paper is to bring together two areas which are of great importance for the study of overdetermined boundary value problems. The first is homological algebra, the main tool in constructing the formal theory of overdetermined problems; the second is the global calculus of pseudodifferential operators, which allows one to develop explicit analysis.
This article assesses the distance between the laws of stochastic differential equations with multiplicative Lévy noise on path space in terms of their characteristics. The notion of transportation distance on the set of Lévy kernels introduced by Kosenkova and Kulik yields a natural and statistically tractable upper bound on the noise sensitivity. This extends recent results for the additive case in terms of coupling distances to the multiplicative case. The strength of this notion is shown in a statistical implementation for simulations and the example of a benchmark time series in paleoclimate.
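The paper's transportation distance on Lévy kernels is considerably more elaborate, but the one-dimensional empirical Wasserstein-1 distance underlying such comparisons reduces to the mean absolute difference of sorted samples, since the monotone coupling is optimal in one dimension. A minimal sketch:

```python
def wasserstein1_empirical(xs, ys):
    """Empirical 1-D Wasserstein-1 distance between two equal-size samples:
    sort both samples and average the absolute differences of the matched
    order statistics (the monotone coupling is optimal in one dimension)."""
    if len(xs) != len(ys):
        raise ValueError("samples must have equal size")
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)

d = wasserstein1_empirical([0.0, 1.0, 2.0], [1.0, 2.0, 3.0])  # shift by 1
```

For unequal sample sizes or weighted samples one works with the inverse empirical distribution functions instead; the sorted-matching formula above is the equal-size special case.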
We elaborate a boundary Fourier method for studying an analogue of the Hilbert problem for analytic functions within the framework of generalised Cauchy-Riemann equations. The boundary value problem need not satisfy the Shapiro-Lopatinskij condition and so fails to be Fredholm in Sobolev spaces. We show a solvability condition of the Hilbert problem, which looks like those for ill-posed problems, and construct an explicit formula for approximate solutions.