When trying to extend the Hodge theory for elliptic complexes on compact closed manifolds to the case of compact manifolds with boundary, one is led to a boundary value problem for the Laplacian of the complex which is usually referred to as the Neumann problem. We study the Neumann problem for a larger class of sequences of differential operators on a compact manifold with boundary. These are sequences of small curvature, i.e., sequences with the property that the composition of any two neighbouring operators has order less than two.
In this paper, using an algorithm based on the retrospective rejection sampling scheme introduced in [A. Beskos, O. Papaspiliopoulos, and G. O. Roberts, Methodol. Comput. Appl. Probab., 10 (2008), pp. 85-104] and [P. Etore and M. Martinez, ESAIM Probab. Stat., 18 (2014), pp. 686-702], we propose an exact simulation of a Brownian diffusion whose drift admits several jumps. We treat explicitly and extensively the case of two jumps, providing numerical simulations. Our main contribution is to manage the technical difficulty due to the presence of two jumps thanks to a new explicit expression of the transition density of the skew Brownian motion with two semipermeable barriers and a constant drift.
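The exact-simulation algorithm above rests on an accept/reject step. As a minimal, generic illustration of rejection sampling (not the retrospective scheme of the paper; the target density and envelope constant below are illustrative assumptions):

```python
import math
import random

def rejection_sample(n, seed=0):
    """Draw n samples from the unnormalised target f(x) = exp(-x**4 / 4)
    by rejection from a standard normal proposal."""
    rng = random.Random(seed)
    # Envelope constant: sup_x f(x)/g(x) = sqrt(2*pi) * exp(1/4), attained at |x| = 1
    M = math.sqrt(2 * math.pi) * math.exp(0.25)
    out = []
    while len(out) < n:
        x = rng.gauss(0.0, 1.0)                             # proposal draw
        f = math.exp(-x ** 4 / 4)                           # target, unnormalised
        g = math.exp(-x ** 2 / 2) / math.sqrt(2 * math.pi)  # proposal density
        if rng.random() < f / (M * g):                      # accept/reject step
            out.append(x)
    return out

samples = rejection_sample(2000)
```

The accepted draws are exact samples from the normalised target; retrospective schemes refine this idea so that the acceptance decision for a diffusion path requires only finitely many of its points.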
In this study, we investigate the climatology of high-latitude total electron content (TEC) variations as observed by the dual-frequency Global Navigation Satellite Systems (GNSS) receivers onboard the Swarm satellite constellation. The distribution of TEC perturbations as a function of geographic/magnetic coordinates and seasons reasonably agrees with that of the Challenging Minisatellite Payload observations published earlier. Categorizing the high-latitude TEC perturbations according to line-of-sight directions between Swarm and GNSS satellites, we can deduce their morphology with respect to the geomagnetic field lines. In the Northern Hemisphere, the perturbation shapes are mostly aligned with the L shell surface, and this anisotropy is strongest in the nightside auroral (substorm) and subauroral regions and weakest in the central polar cap. The results are consistent with the well-known two-cell plasma convection pattern of the high-latitude ionosphere, which is approximately aligned with L shells at auroral regions and crossing different L shells for a significant part of the polar cap. In the Southern Hemisphere, the perturbation structures exhibit noticeable misalignment to the local L shells. Here the direction toward the Sun has an additional influence on the plasma structure, which we attribute to photoionization effects. The larger offset between geographic and geomagnetic poles in the south than in the north is responsible for the hemispheric difference.
The Cauchy problem for the linearised Einstein equation and the Goursat problem for wave equations
(2017)
In this thesis, we study two initial value problems arising in general relativity. The first is the Cauchy problem for the linearised Einstein equation on general globally hyperbolic spacetimes, with smooth and distributional initial data. We extend well-known results by showing that given a solution to the linearised constraint equations of arbitrary real Sobolev regularity, there is a globally defined solution, which is unique up to addition of gauge solutions. Two solutions are considered equivalent if they differ by a gauge solution. Our main result is that the equivalence class of solutions depends continuously on the corresponding equivalence class of initial data. We also solve the linearised constraint equations in certain cases and show that there exist arbitrarily irregular (non-gauge) solutions to the linearised Einstein equation on Minkowski spacetime and Kasner spacetime.
In the second part, we study the Goursat problem (the characteristic Cauchy problem) for wave equations. We specify initial data on a smooth compact Cauchy horizon, which is a lightlike hypersurface. This problem has not been studied much, since it is an initial value problem on a non-globally hyperbolic spacetime. Our main result is that given a smooth function on a non-empty, smooth, compact, totally geodesic and non-degenerate Cauchy horizon and a so-called admissible linear wave equation, there exists a unique solution that is defined on the globally hyperbolic region and restricts to the given function on the Cauchy horizon. Moreover, the solution depends continuously on the initial data. A linear wave equation is called admissible if the first order part satisfies a certain condition on the Cauchy horizon, for example if it vanishes. Interestingly, both existence and uniqueness of solutions fail for general wave equations, as examples show. If we drop the non-degeneracy assumption, examples show that existence of solutions fails even for the simplest wave equation. The proof requires precise energy estimates for the wave equation close to the Cauchy horizon. In case the Ricci curvature vanishes on the Cauchy horizon, we show that the energy estimates are strong enough to prove local existence and uniqueness for a class of non-linear wave equations. Our results apply in particular to the Taub-NUT spacetime and the Misner spacetime. It has recently been shown that compact Cauchy horizons in spacetimes satisfying the null energy condition are necessarily smooth and totally geodesic. Our results therefore apply if the spacetime satisfies the null energy condition and the Cauchy horizon is compact and non-degenerate.
In this thesis, stochastic dynamics modelling collective motions of populations, one of the most mysterious types of biological phenomena, are considered. For a system of N particle-like individuals, two kinds of asymptotic behaviour are studied: ergodicity and flocking properties in long time, and propagation of chaos when the number N of agents goes to infinity. Cucker and Smale's deterministic mean-field kinetic model for a population without a hierarchical structure is the starting point of our journey: the first two chapters are dedicated to the understanding of various stochastic dynamics it inspires, with random noise added in different ways. The third chapter, an attempt to improve those results, is built upon the cluster expansion method, a technique from statistical mechanics. Exponential ergodicity is obtained for a class of non-Markovian processes with non-regular drift. In the final part, the focus shifts to a stochastic system of interacting particles derived from Keller and Segel's 2-D parabolic-elliptic model for chemotaxis. Existence and weak uniqueness are proven.
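The Cucker-Smale alignment dynamics underlying the first chapters can be sketched with a short Euler-Maruyama simulation; the 1-D setting, communication rate and parameter values below are illustrative choices, not those of the thesis:

```python
import numpy as np

def cucker_smale(n=20, steps=3000, dt=0.01, K=1.0, beta=0.5, sigma=0.0, seed=1):
    """Euler-Maruyama simulation of the (optionally noisy) 1-D Cucker-Smale
    model: each agent relaxes its velocity towards the others at a rate
    psi(r) = K / (1 + r**2)**beta decaying with the distance r."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, n)   # positions
    v = rng.uniform(-1.0, 1.0, n)   # velocities
    for _ in range(steps):
        dx = x[:, None] - x[None, :]
        psi = K / (1.0 + dx ** 2) ** beta                  # communication rates
        dv = (psi * (v[None, :] - v[:, None])).mean(axis=1)
        v = v + dv * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
        x = x + v * dt
    return x, v

x, v = cucker_smale()   # with sigma = 0 the velocities align (flocking)
```

With beta <= 1/2 the deterministic model flocks unconditionally, which the simulated velocity spread reflects; setting sigma > 0 adds the random noise studied in the stochastic variants.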
The classical Navier-Stokes equations of hydrodynamics are usually written in terms of vector analysis. More promising is the formulation of these equations in the language of differential forms of degree one. In this way the study of the Navier-Stokes equations includes the analysis of the de Rham complex. In particular, the Hodge theory for the de Rham complex enables one to eliminate the pressure from the equations. The Navier-Stokes equations constitute a parabolic system with a nonlinear term which makes sense only for one-forms. A simpler model of the dynamics of incompressible viscous fluid is given by Burgers' equation. This work is aimed at the study of the invariant structure of the Navier-Stokes equations, which is closely related to the algebraic structure of the de Rham complex at step 1. To this end we introduce Navier-Stokes equations related to any elliptic quasicomplex of first order differential operators. These equations are quite similar to the classical Navier-Stokes equations, including generalised velocity and pressure vectors. Elimination of the pressure from the generalised Navier-Stokes equations gives a good motivation for the study of the Neumann problem after Spencer for elliptic quasicomplexes. Such a study is also included in the work. We start this work with a discussion of the Lamé equations within the context of elliptic quasicomplexes on compact manifolds with boundary. The non-stationary Lamé equations form a hyperbolic system. However, the study of the first mixed problem for them gives good experience for attacking the linearised Navier-Stokes equations. On this basis we describe a class of non-linear perturbations of the Navier-Stokes equations for which the solvability results still hold.
The interdisciplinary workshop STOCHASTIC PROCESSES WITH APPLICATIONS IN THE NATURAL SCIENCES was held at Universidad de los Andes in Bogotá from December 5 to December 9, 2016. It brought together researchers from Colombia, Germany, France, Italy, and Ukraine, who communicated recent progress in mathematical research related to stochastic processes with applications in biophysics.
The present volume collects three of the four courses held at this meeting by Angelo Valleriani, Sylvie Rœlly and Alexei Kulik.
A particular aim of this collection is to inspire young scientists in setting up research goals within the wide scope of fields represented in this volume.
Angelo Valleriani, PhD in high energy physics, is group leader of the team "Stochastic processes in complex and biological systems" at the Max Planck Institute of Colloids and Interfaces, Potsdam.
Sylvie Rœlly, Docteur en Mathématiques, is the head of the chair of Probability at the University of Potsdam.
Alexei Kulik, Doctor of Sciences, is a leading researcher at the Institute of Mathematics of the Ukrainian National Academy of Sciences.
We analyze an inverse noisy regression model under random design, with the aim of estimating the unknown target function based on a given set of data drawn according to some unknown probability distribution. Our estimators are all constructed by kernel methods, based on a reproducing kernel Hilbert space structure and spectral regularization methods.
A first main result establishes upper and lower bounds for the rate of convergence under a given source condition assumption, which restricts the class of admissible distributions. Since kernel methods scale poorly when massive datasets are involved, we study in more detail one example of saving computation time and memory requirements. We show that parallelizing spectral algorithms also leads to minimax optimal rates of convergence, provided the number of machines is chosen appropriately.
We emphasize that so far all estimators depend on the assumed a priori smoothness of the target function and on the eigenvalue decay of the kernel covariance operator, which are in general unknown. Obtaining good, purely data-driven estimators constitutes the problem of adaptivity, which we handle for the single-machine problem via a version of the Lepskii principle.
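The parallelization idea discussed above, splitting the sample over several machines, fitting a spectrally regularized estimator locally and averaging, can be sketched as follows; the Gaussian kernel, the regularization level and the toy data are illustrative assumptions, not the estimators analysed in the work:

```python
import numpy as np

def krr_fit(X, y, lam=1e-3, gamma=10.0):
    """Kernel ridge regression (Gaussian kernel); returns a predictor."""
    K = np.exp(-gamma * (X[:, None] - X[None, :]) ** 2)
    alpha = np.linalg.solve(K + lam * len(X) * np.eye(len(X)), y)
    return lambda t: np.exp(-gamma * (t[:, None] - X[None, :]) ** 2) @ alpha

def distributed_krr(X, y, m, lam=1e-3):
    """Divide-and-conquer: fit KRR on m subsamples ('machines') and
    average the local predictors."""
    parts = np.array_split(np.arange(len(X)), m)
    preds = [krr_fit(X[p], y[p], lam) for p in parts]
    return lambda t: np.mean([f(t) for f in preds], axis=0)

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, 200)                     # random design
y = np.sin(2 * np.pi * X) + 0.1 * rng.standard_normal(200)
f_hat = distributed_krr(X, y, m=4)
t = np.linspace(0.1, 0.9, 50)
mse = np.mean((f_hat(t) - np.sin(2 * np.pi * t)) ** 2)
```

Each machine only solves a linear system of a quarter of the size, and averaging the local predictors recovers accuracy comparable to a single fit on the full sample.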
In a bounded domain with smooth boundary in R^3 we consider the stationary Maxwell equations for a function u with values in R^3, subject to the nonhomogeneous condition (u, v)_x = u_0 on the boundary, where v is a given vector field and u_0 a function on the boundary. We specify this problem within the framework of Riemann-Hilbert boundary value problems for the Moisil-Teodorescu system. The latter is proved to satisfy the Shapiro-Lopatinskij condition if and only if the vector v is at no point tangent to the boundary. The Riemann-Hilbert problem for the Moisil-Teodorescu system fails to possess an adjoint boundary value problem with respect to the Green formula which satisfies the Shapiro-Lopatinskij condition. We develop the construction of the Green formula to obtain a proper concept of an adjoint boundary value problem.
We introduce an abstract concept of quantum field theory on categories fibered in groupoids over the category of spacetimes. This provides us with a general and flexible framework to study quantum field theories defined on spacetimes with extra geometric structures such as bundles, connections and spin structures. Using right Kan extensions, we can assign to any such theory an ordinary quantum field theory defined on the category of spacetimes and we shall clarify under which conditions it satisfies the axioms of locally covariant quantum field theory. The same constructions can be performed in a homotopy theoretic framework by using homotopy right Kan extensions, which allows us to obtain first toy-models of homotopical quantum field theories resembling some aspects of gauge theories.
Background: Cells are able to communicate and coordinate their function within tissues via secreted factors. Aberrant secretion by cancer cells can modulate this intercellular communication, in particular in highly organised tissues such as the liver. Hepatocytes, the major cell type of the liver, secrete Dickkopf (Dkk), which inhibits Wnt/beta-catenin signalling in an autocrine and paracrine manner. Consequently, Dkk modulates the expression of Wnt/beta-catenin target genes. We present a mathematical model that describes the autocrine and paracrine regulation of hepatic gene expression by Dkk under wild-type conditions as well as in the presence of mutant cells. Results: Our spatial model describes the competition of Dkk and Wnt at receptor level, intracellular Wnt/beta-catenin signalling, and the regulation of target gene expression for 21 individual hepatocytes. Autocrine and paracrine regulation is mediated through a feedback mechanism via Dkk and Dkk diffusion along the porto-central axis. Along this axis an APC concentration gradient is modelled, as experimentally detected in liver. Simulations of mutant cells demonstrate that even a single mutant cell increases the overall Dkk concentration. The influence of the mutant cell on gene expression of surrounding wild-type hepatocytes is limited in magnitude and restricted to hepatocytes in close proximity. To explore the underlying molecular mechanisms, we perform a comprehensive analysis of the model parameters such as diffusion coefficient, mutation strength and feedback strength. Conclusions: Our simulations show that Dkk concentration is elevated in the presence of a mutant cell. However, the impact of these elevated Dkk levels on wild-type hepatocytes is confined in space and magnitude. The combination of inter- and intracellular processes, such as Dkk feedback, diffusion and Wnt/beta-catenin signal transduction, allows wild-type hepatocytes to largely maintain their gene expression.
Assimilation of pseudo-tree-ring-width observations into an atmospheric general circulation model
(2017)
Paleoclimate data assimilation (DA) is a promising technique to systematically combine the information from climate model simulations and proxy records. Here, we investigate the assimilation of tree-ring-width (TRW) chronologies into an atmospheric global climate model using ensemble Kalman filter (EnKF) techniques and a process-based tree-growth forward model as an observation operator. Our results, within a perfect-model experiment setting, indicate that the "online DA" approach did not outperform the "off-line" one, despite its considerable additional implementation complexity. On the other hand, it was observed that the nonlinear response of tree growth to surface temperature and soil moisture does deteriorate the operation of the time-averaged EnKF methodology. Moreover, for the first time we show that this skill loss appears significantly sensitive to the structure of the growth rate function, used to represent the principle of limiting factors (PLF) within the forward model. In general, our experiments showed that the error reduction achieved by assimilating pseudo-TRW chronologies is modulated by the magnitude of the yearly internal variability in the model. This result might help the dendrochronology community to optimize their sampling efforts.
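The core of the EnKF techniques mentioned above is the analysis step; a minimal sketch with perturbed observations follows (the toy two-component state, observation operator and error levels are illustrative, not the TRW forward model of the study):

```python
import numpy as np

def enkf_update(ens, obs, H, r, rng):
    """Stochastic (perturbed-observation) EnKF analysis step.
    ens: (n_ens, n_state) prior ensemble, obs: (n_obs,) observation,
    H: (n_obs, n_state) linear observation operator, r: obs error variance."""
    Hx = ens @ H.T                                  # ensemble in observation space
    A = ens - ens.mean(axis=0)                      # state anomalies
    HA = Hx - Hx.mean(axis=0)                       # observation-space anomalies
    n = ens.shape[0]
    P_xy = A.T @ HA / (n - 1)                       # state-obs cross covariance
    P_yy = HA.T @ HA / (n - 1) + r * np.eye(H.shape[0])
    K = P_xy @ np.linalg.inv(P_yy)                  # Kalman gain
    perturbed = obs + np.sqrt(r) * rng.standard_normal((n, H.shape[0]))
    return ens + (perturbed - Hx) @ K.T             # analysis ensemble

rng = np.random.default_rng(0)
truth = np.array([1.0, -0.5])                       # hidden state
H = np.array([[1.0, 0.0]])                          # only the first component is observed
prior = truth + rng.standard_normal((100, 2))       # prior ensemble around the truth
obs = H @ truth + 0.1 * rng.standard_normal(1)      # noisy observation
post = enkf_update(prior, obs, H, r=0.01, rng=rng)
```

The observed component collapses towards the observation, while the unobserved one is corrected through the sampled cross covariance; a nonlinear observation operator, such as a tree-growth forward model, replaces the product with H by a forward-model evaluation.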
Background: Evolution of metastatic melanoma (MM) under B-RAF inhibitors (BRAFi) is unpredictable, but anticipation is crucial for therapeutic decisions. Kinetic changes in metastatic growth are driven by molecular and immune events, and thus we hypothesized that they convey relevant information for decision making. Patients and methods: We used a retrospective cohort of 37 MM patients treated with BRAFi only, with at least 2 close CT-scans available before BRAFi, as a model to study the kinetics of metastatic growth before, under and after BRAFi. All metastases (mets) were individually measured at each CT-scan. From these measurements, different measures of the growth kinetics of each met and of the total tumor volume were computed at different time points. A historical cohort was used to build a reference model for the expected spontaneous disease kinetics without BRAFi. All variables were included in Cox and multistate regression models for survival, to select the best candidates for predicting overall survival. Results: Before starting BRAFi, fast kinetics and, moreover, a wide range of kinetics (fast and slow growing mets in the same patient) were pejorative markers. At the first assessment after BRAFi introduction, high heterogeneity of kinetics predicted short survival, and added independent information over RECIST progression in multivariate analysis. Metastatic growth rates after BRAFi discontinuation were usually not faster than before BRAFi introduction, but they were often more heterogeneous than before. Conclusions: Monitoring the kinetics of different mets before and under BRAFi by repeated CT-scans provides information for predictive mathematical modelling. Disease kinetics deserves more interest.
Maximal subsemigroups of some semigroups of order-preserving mappings on a countably infinite set
(2017)
In this paper, we study the maximal subsemigroups of several semigroups of order-preserving transformations on the natural numbers and the integers, respectively. We determine all maximal subsemigroups of the monoid of all order-preserving injections on the set of natural numbers as well as on the set of integers. Further, we give all maximal subsemigroups of the monoid of all bijections on the integers. For the monoid of all order-preserving transformations on the natural numbers, we also classify all its maximal subsemigroups containing a particular set of transformations.
This article presents a new and easily implementable method to quantify the so-called coupling distance between the law of a time series and the law of a differential equation driven by Markovian additive jump noise with heavy-tailed jumps, such as α-stable Lévy flights. Coupling distances measure the proximity of the empirical law of the tails of the jump increments and a given power law distribution. In particular, they yield an upper bound for the distance of the respective laws on path space. We prove rates of convergence comparable to the rates of the central limit theorem, which are confirmed by numerical simulations. Our method applied to a paleoclimate time series of glacial climate variability confirms its heavy tail behavior. In addition, this approach gives evidence for heavy tails in datasets of precipitable water vapor of the Western Tropical Pacific.
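Heavy-tail behaviour of increments is often screened with the classical Hill estimator of the tail index; the following generic sketch (not the coupling-distance construction of the paper) illustrates the idea on a synthetic Pareto sample:

```python
import numpy as np

def hill_estimator(x, k):
    """Hill estimator of the tail index alpha, based on the k largest
    order statistics of the sample |x|."""
    x = np.sort(np.abs(x))[::-1]                  # descending order statistics
    return 1.0 / (np.log(x[:k]) - np.log(x[k])).mean()

rng = np.random.default_rng(0)
alpha = 1.5                                        # true tail index (infinite variance)
sample = rng.pareto(alpha, 100_000) + 1.0          # exact Pareto(alpha) on [1, inf)
alpha_hat = hill_estimator(sample, k=2000)
```

For an exact power law the estimate concentrates around the true index at rate alpha / sqrt(k); the choice of k trades bias against variance when the power law only holds in the tail.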
Background: Infliximab (IFX), an anti-TNF monoclonal antibody approved for the treatment of inflammatory bowel disease, is dosed per kg body weight (BW). However, the rationale for body size adjustment has not been unequivocally demonstrated [1], and first attempts to improve IFX therapy have been undertaken [2]. The aim of our study was to assess the impact of different dosing strategies (i.e. body size-adjusted and fixed dosing) on drug exposure and pharmacokinetic (PK) target attainment. For this purpose, a comprehensive simulation study was performed, using patient characteristics (n=116) from an in-house clinical database.
Methods: IFX concentration-time profiles of 1000 virtual, clinically representative patients were generated using a previously published PK model for IFX in patients with Crohn's disease [3]. For each patient, 1000 profiles accounting for PK variability were considered. The IFX exposure during maintenance treatment was compared for the following dosing strategies: i) fixed dose, and dosing per ii) BW, iii) lean BW (LBW), iv) body surface area (BSA), v) height (HT), vi) body mass index (BMI), and vii) fat-free mass (FFM). For each dosing strategy, the variability in maximum concentration Cmax, minimum concentration Cmin (= C8weeks) and area under the concentration-time curve (AUC), as well as the percentage of patients achieving the PK target Cmin = 3 μg/mL [4], were assessed.
Results: For all dosing strategies the variability of Cmin (CV ≈ 110%) was highest, compared to Cmax and AUC, and was of similar extent regardless of dosing strategy. The proportion of patients reaching the PK target (≈ 1/3) was approximately equal for all dosing strategies.
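A comparison of fixed versus body-weight-based dosing of this kind can be sketched with a one-compartment steady-state model; the parameter values, covariate relationships and virtual population below are illustrative assumptions, not the published IFX model:

```python
import numpy as np

def steady_state_pk(dose, cl, v, tau=8 * 7 * 24):
    """Steady-state Cmax, Cmin and AUC per 8-week interval (tau in hours)
    for repeated IV bolus dosing in a one-compartment model."""
    k = cl / v                                     # elimination rate constant, 1/h
    acc = 1.0 / (1.0 - np.exp(-k * tau))           # accumulation factor
    cmax = dose / v * acc                          # mg/L, i.e. ug/mL
    cmin = cmax * np.exp(-k * tau)
    auc = dose / cl
    return cmax, cmin, auc

rng = np.random.default_rng(0)
n = 1000
bw = rng.normal(75.0, 15.0, n).clip(40.0, 140.0)   # body weight, kg
# Allometric clearance with log-normal between-subject variability (illustrative)
cl = 0.012 * (bw / 75.0) ** 0.75 * np.exp(0.3 * rng.standard_normal(n))  # L/h
v = 5.0 * (bw / 75.0)                              # volume of distribution, L

results = {}
for name, dose in [("fixed", np.full(n, 375.0)), ("per-kg", 5.0 * bw)]:
    cmax, cmin, auc = steady_state_pk(dose, cl, v)
    results[name] = (cmin.std() / cmin.mean(),     # CV of Cmin
                     (cmin >= 3.0).mean())         # PK target attainment
```

Comparing the two entries of `results` shows how Cmin variability and target attainment respond to the dosing strategy in this toy population; the trough concentration is by far the most variable quantity, since it depends exponentially on the elimination rate.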
Broad-spectrum antibiotic combination therapy is frequently applied due to increasing resistance development of infective pathogens. The objective of the present study was to evaluate two common empiric broad-spectrum combination therapies consisting of either linezolid (LZD) or vancomycin (VAN) combined with meropenem (MER) against Staphylococcus aureus (S. aureus) as the most frequent causative pathogen of severe infections. A semimechanistic pharmacokinetic-pharmacodynamic (PK-PD) model mimicking a simplified bacterial life-cycle of S. aureus was developed upon time-kill curve data to describe the effects of LZD, VAN, and MER alone and in dual combinations. The PK-PD model was successfully (i) evaluated with external data from two clinical S. aureus isolates and further drug combinations and (ii) challenged to predict common clinical PK-PD indices and breakpoints. Finally, clinical trial simulations were performed that revealed that the combination of VAN-MER might be favorable over LZD-MER due to an unfavorable antagonistic interaction between LZD and MER.
Prospective and retrospective evaluation of five-year earthquake forecast models for California
(2017)
S-test results for the USGS and RELM forecasts. The differences between the simulated log-likelihoods and the observed log-likelihood are labelled on the horizontal axes, with scaling adjustments for the 40year.retro experiment. The horizontal lines represent the confidence intervals, at the 0.05 significance level, for each forecast and experiment. If this range contains a log-likelihood difference of zero, the forecasted log-likelihoods are consistent with the observed ones, and the forecast passes the S-test (denoted by thin lines). If this range does not contain zero, the forecast fails the S-test for that particular experiment (denoted by thick lines). Colours distinguish between experiments (see Table 2 for explanation of experiment durations). Due to anomalously large likelihood differences, S-test results for Wiemer-Schorlemmer.ALM during the 10year.retro and 40year.retro experiments are not displayed. The range of log-likelihoods for the Holliday-et-al.PI forecast is lower than for the other forecasts due to relatively homogeneous forecasted seismicity rates and the use of a small fraction of the RELM testing region.
During the drug discovery & development process, several phases encompassing a number of preclinical and clinical studies have to be successfully passed to demonstrate safety and efficacy of a new drug candidate. As part of these studies, the characterization of the drug's pharmacokinetics (PK) is an important aspect, since the PK is assumed to strongly impact safety and efficacy. To this end, drug concentrations are measured repeatedly over time in a study population. The objectives of such studies are to describe the typical PK time-course and the associated variability between subjects. Furthermore, underlying sources significantly contributing to this variability, e.g. the use of comedication, should be identified. The most commonly used statistical framework to analyse repeated measurement data is the nonlinear mixed effects (NLME) approach. At the same time, ample knowledge about the drug's properties already exists and has been accumulating during the discovery & development process: before any drug is tested in humans, detailed knowledge about the PK in different animal species has to be collected. This drug-specific knowledge and general knowledge about the species' physiology is exploited in mechanistic physiologically based PK (PBPK) modeling approaches; it is, however, ignored in the classical NLME modeling approach.
Mechanistic physiologically based models aim to incorporate relevant and known physiological processes which contribute to the overlying process of interest. In comparison to data-driven models they are usually more complex from a mathematical perspective. For example, in many situations the number of model parameters outnumbers the number of measurements, and thus reliable parameter estimation becomes more complex and partly impossible. As a consequence, the integration of powerful mathematical estimation approaches like the NLME modeling approach (which is widely used in data-driven modeling) with the mechanistic modeling approach is not well established; the observed data are rather used to confirm the model instead of informing and building it.
Another aggravating circumstance for an integrated approach is the inaccessibility of the details of the NLME methodology, which would allow these approaches to be adapted to the specifics and needs of mechanistic modeling. Although the NLME modeling approach has existed for several decades, the details of the mathematical methodology are scattered across a wide range of literature, and a comprehensive, rigorous derivation is lacking. Available literature usually covers only selected parts of the mathematical methodology. Sometimes important steps are not described or are only heuristically motivated, e.g. the iterative algorithm that finally determines the parameter estimates.
Thus, in the present thesis the mathematical methodology of NLME modeling is systematically described and complemented into a comprehensive description, comprising the common theme from ideas and motivation to the final parameter estimation. Therein, new insights into the interpretation of different approximation methods used in the context of the NLME modeling approach are given and illustrated; furthermore, similarities and differences between them are outlined. Based on these findings, an expectation-maximization (EM) algorithm to determine the estimates of an NLME model is described.
Using the EM algorithm and the lumping methodology of Pilari (2010), a new approach for combining PBPK and NLME modeling is presented and exemplified for the antibiotic levofloxacin. Therein, the lumping identifies which processes are informed by the available data, and the respective model reduction improves the robustness of parameter estimation. Furthermore, it is shown how a priori known factors influencing the variability and a priori known unexplained variability are incorporated to further mechanistically drive the model development. In conclusion, correlations between parameters and between covariates are automatically accounted for due to the mechanistic derivation of the lumping and the covariate relationships.
A useful feature of PBPK models compared to classical data-driven PK models is the possibility to predict drug concentrations within all organs and tissues in the body. Thus, the resulting PBPK model for levofloxacin is used to predict drug concentrations and their variability within soft tissues, which are the site of action of levofloxacin. These predictions are compared with data from muscle and adipose tissue obtained by microdialysis, an invasive technique that measures a proportion of drug in the tissue, allowing the concentrations in the interstitial fluid of tissues to be approximated. Because no established framework exists so far for comparing human in vivo tissue PK with PBPK predictions, a new conceptual framework is derived. The comparison of PBPK model predictions and microdialysis measurements shows adequate agreement and reveals further strengths of the presented new approach.
We demonstrated how mechanistic PBPK models, which are usually developed in the early stages of drug development, can serve as a basis for model building in the analysis of later stages, i.e. in clinical studies. As a consequence, the extensively collected and accumulated knowledge about the species and the drug is utilized and updated with specific volunteer or patient data. The NLME approach combined with mechanistic modeling reveals new insights for the mechanistic model, for example the identification and quantification of variability in mechanistic processes. This represents a further contribution to the learn-and-confirm paradigm across different stages of drug development.
Finally, the applicability of mechanism-driven model development is demonstrated on an example from the field of quantitative psycholinguistics, analysing repeated eye-movement data. Our approach gives new insights into the interpretation of these experiments and the processes behind them.
The first main goal of this thesis is to develop a concept of approximate differentiability of higher order for subsets of the Euclidean space that allows one to characterize higher order rectifiable sets, extending, in a sense, well-known facts for functions. We emphasize that for every subset A of the Euclidean space and for every integer k ≥ 2 we introduce the approximate differential of order k of A and prove that it is a Borel map whose domain is a (possibly empty) Borel set. This concept could be helpful for dealing with higher order rectifiable sets in applications.
The other goal is to extend to general closed sets a well-known theorem of Alberti on the second order rectifiability properties of the boundary of convex bodies. The Alberti theorem provides a stratification of second order rectifiable subsets of the boundary of a convex body based on the dimension of the (convex) normal cone. Considering a suitable generalization of this normal cone for general closed subsets of the Euclidean space and employing some results from the first part, we can prove that the same stratification exists for every closed set.
We establish in this paper the existence of weak solutions of infinite-dimensional shift-invariant stochastic differential equations driven by a Brownian term. The drift function is very general, in the sense that it is supposed to be neither bounded nor continuous, nor Markov. On the initial law we only assume that it admits a finite specific entropy and a finite second moment.
The originality of our method lies in the use of the specific entropy as a tightness tool and in the description of such infinite-dimensional stochastic processes as solutions of a variational problem on the path space. Our result clearly improves previous ones obtained for free dynamics with bounded drift.
Since aluminium is a potentially toxic agent for the nervous system and bone, the safety of aluminium exposure from adjuvants in vaccines and subcutaneous immune therapy (SCIT) products has to be continuously re-evaluated, especially regarding concomitant administrations. For this purpose, knowledge on the absorption and disposition of aluminium in plasma and tissues is essential. Pharmacokinetic data after vaccination in humans, however, are not available, and are for methodological and ethical reasons difficult to obtain. To overcome these limitations, we discuss the possibility of an in vitro-in silico approach combining a toxicokinetic model for aluminium disposition with biorelevant kinetic absorption parameters for adjuvants. We critically review available kinetic aluminium-26 data for model building and, on the basis of a reparameterized toxicokinetic model (Nolte et al., 2001), we identify the main modelling gaps. The potential of in vitro dissolution experiments for the prediction of intramuscular absorption kinetics of aluminium after vaccination is explored. It becomes apparent that detailed in vitro dissolution and in vivo absorption data are needed to establish an in vitro-in vivo correlation (IVIVC) for aluminium adjuvants. We conclude that a combination of new experimental data and further refinement of the Nolte model has the potential to fill a gap in aluminium risk assessment.
This longitudinal study examined relationships between student-perceived teaching for meaning, support for autonomy, and competence in mathematics classrooms (Time 1), and students’ achievement goal orientations and engagement in mathematics 6 months later (Time 2). We tested whether student-perceived instructional characteristics at Time 1 indirectly related to student engagement at Time 2, via their achievement goal orientations (Time 2), and whether student gender moderated these relationships. Participants were ninth and tenth graders (55.2% girls) from 46 classrooms in ten secondary schools in Berlin, Germany. Only data from students who participated at both timepoints were included (N = 746 of the 1,118 students at Time 1; dropout 33.27%). Longitudinal structural equation modeling showed that student-perceived teaching for meaning and support for competence indirectly predicted intrinsic motivation and effort, via students’ mastery goal orientation. These paths were equivalent for girls and boys. The findings are significant for mathematics education, in identifying motivational processes that partly explain the relationships between student-perceived teaching for meaning and competence support and intrinsic motivation and effort in mathematics.
The global prevalence of rapid and extensive land use change necessitates hydrologic modelling methodologies capable of handling non-stationarity. This is particularly true in the context of hydrologic forecasting using data assimilation. Data assimilation has been shown to dramatically improve forecast skill in hydrologic and meteorological applications, although such improvements are conditional on using bias-free observations and model simulations. A hydrologic model calibrated to a particular set of land cover conditions has the potential to produce biased simulations when the catchment is disturbed. This paper sheds new light on the impacts of bias or systematic errors in hydrologic data assimilation, in the context of forecasting in catchments with changing land surface conditions and a model calibrated to pre-change conditions. We posit that in such cases, the impact of systematic model errors on assimilation or forecast quality depends on the inherent prediction uncertainty that persists even in pre-change conditions. Through experiments on a range of catchments, we develop a conceptual relationship between total prediction uncertainty and the impacts of land cover changes on the hydrologic regime to demonstrate how forecast quality is affected when using state-estimation data assimilation with no modifications to account for land cover changes. This work shows that systematic model errors resulting from changing or changed catchment conditions do not always necessitate adjustments to the modelling or assimilation methodology, for instance through re-calibration of the hydrologic model, time-varying model parameters, or revised offline/online bias estimation.
Local observations indicate that climate change and shifting disturbance regimes are causing permafrost degradation. However, the occurrence and distribution of permafrost region disturbances (PRDs) remain poorly resolved across the Arctic and Subarctic. Here we quantify the abundance and distribution of three primary PRDs using time-series analysis of 30-m resolution Landsat imagery from 1999 to 2014. Our dataset spans four continental-scale transects in North America and Eurasia, covering approximately 10% of the permafrost region. Lake area loss (-1.45%) dominated the study domain, with enhanced losses occurring at the boundary between discontinuous and continuous permafrost regions. Fires were the most extensive PRD across boreal regions (6.59%), but in tundra regions (0.63%) they were limited to Alaska. Retrogressive thaw slumps were abundant but highly localized (< 10^-5 %). Our analysis demonstrates the global-scale importance of PRDs. The findings highlight the need to include PRDs in next-generation land surface models to project the permafrost carbon feedback.
Given two weighted graphs (X, b_k, m_k), k = 1, 2, with b_1 ~ b_2 and m_1 ~ m_2, we prove a weighted l^1-criterion for the existence and completeness of the wave operators W_±(H_2, H_1, I_{1,2}), where H_k denotes the natural Laplacian in l^2(X, m_k) w.r.t. (X, b_k, m_k) and I_{1,2} the trivial identification of l^2(X, m_1) with l^2(X, m_2). In particular, this entails a general criterion for the absolutely continuous spectra of H_1 and H_2 to be equal.
One of the crucial components in seismic hazard analysis is the estimation of the maximum earthquake magnitude and the associated uncertainty. In the present study, the uncertainty related to the maximum expected magnitude mu is determined in terms of confidence intervals for an imposed level of confidence. Previous work by Salamat et al. (Pure Appl Geophys 174:763-777, 2017) shows the divergence of the confidence interval of the maximum possible magnitude m_max for high levels of confidence in six seismotectonic zones of Iran. In this work, the maximum expected earthquake magnitude mu is calculated in a predefined finite time interval and for an imposed level of confidence. For this, we use a conceptual model based on a doubly truncated Gutenberg-Richter law for magnitudes with constant b-value and calculate the posterior distribution of mu for the future time interval T_f. We assume a stationary Poisson process in time and a Gutenberg-Richter relation for magnitudes. The upper bound of the magnitude confidence interval is calculated for different time intervals of 30, 50, and 100 years and imposed levels of confidence alpha = 0.5, 0.1, 0.05, and 0.01. The posterior distributions of the waiting times T_f to the next earthquake with a given magnitude equal to 6.5, 7.0, and 7.5 are calculated in each zone. In order to find the influence of declustering, we use both the original and the declustered version of the catalog. The earthquake catalog of the territory of Iran and its surroundings is subdivided into six seismotectonic zones: Alborz, Azerbaijan, Central Iran, Zagros, Kopet Dagh, and Makran. We assume the maximum possible magnitude m_max = 8.5 and calculate the upper bound of the confidence interval of mu in each zone. The results indicate that for short time intervals equal to 30 and 50 years and imposed levels of confidence 1 - alpha = 0.95 and 0.90, the probability distribution of mu is around mu = 7.16-8.23 in all seismic zones.
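The ingredients of the model above (a doubly truncated Gutenberg-Richter law for magnitudes and a stationary Poisson process in time) can be illustrated with a minimal Monte Carlo sketch. This is not the authors' Bayesian estimator; it merely simulates the largest magnitude occurring in a future window T_f and reads an upper confidence bound off the empirical quantile. The rate, b-value, and magnitude bounds below are illustrative choices.

```python
import numpy as np

def truncated_gr_cdf(m, b, m_min, m_max):
    """CDF of the doubly truncated Gutenberg-Richter law on [m_min, m_max]."""
    beta = b * np.log(10.0)
    return (1.0 - np.exp(-beta * (m - m_min))) / (1.0 - np.exp(-beta * (m_max - m_min)))

def simulate_max_magnitude(rate, t_f, b, m_min, m_max, rng):
    """Largest magnitude in a future window of length t_f, assuming a
    stationary Poisson process in time and inverse-CDF magnitude sampling.
    Returns m_min as a conservative placeholder if no event occurs."""
    n = rng.poisson(rate * t_f)
    if n == 0:
        return m_min
    beta = b * np.log(10.0)
    u = rng.random(n)
    m = m_min - np.log(1.0 - u * (1.0 - np.exp(-beta * (m_max - m_min)))) / beta
    return float(m.max())

rng = np.random.default_rng(42)
# Monte Carlo upper bound of a 95% interval for the maximum expected magnitude
samples = [simulate_max_magnitude(rate=2.0, t_f=50.0, b=1.0,
                                  m_min=4.0, m_max=8.5, rng=rng)
           for _ in range(5000)]
upper_bound = float(np.quantile(samples, 0.95))
```

Inverting the truncated CDF keeps every simulated magnitude inside [m_min, m_max], so the empirical quantile always respects the assumed hard cap m_max = 8.5.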
Information on structural features of a fracture network at early stages of Enhanced Geothermal System development is mostly restricted to borehole images and, if available, outcrop data. However, using this information to image discontinuities in deep reservoirs is difficult. Wellbore failure data provide only some information on components of the in situ stress state and its heterogeneity. Our working hypothesis is that slip on natural fractures primarily controls these stress heterogeneities. Based on this, we introduce stress-based tomography in a Bayesian framework to characterize the fracture network and its heterogeneity in potential Enhanced Geothermal System reservoirs. In this procedure, a random initial discrete fracture network (DFN) realization is first generated based on prior information about the network. The observations needed to calibrate the DFN are based on local variations of the orientation and magnitude of at least one principal stress component along boreholes. A Markov chain Monte Carlo sequence is employed to update the DFN iteratively by translating fractures within the domain. The Markov sequence compares the simulated stress profile with the observed stress profiles in the borehole, evaluates each iteration with the Metropolis-Hastings acceptance criterion, and stores acceptable DFN realizations in an ensemble. Finally, the obtained ensemble is used to visualize the potential occurrence of fractures in a probability map, indicating possible fracture locations and lengths. We test this methodology by reconstructing simple synthetic and more complex outcrop-based fracture networks, and we successfully image the significant fractures in the domain.
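The iterative update loop described above (propose a fracture translation, compare simulated and observed borehole stress profiles, accept or reject with Metropolis-Hastings) can be sketched in a heavily simplified form. This is not the authors' geomechanical code: the forward model below is a hypothetical stand-in in which each fracture perturbs a 1-D borehole "stress" profile by a Gaussian bump, and fractures are reduced to trace depths along the borehole.

```python
import numpy as np

def forward_stress(centers, z, width=0.5):
    """Stand-in forward model: each fracture perturbs the borehole stress
    profile by a Gaussian bump centred at its trace depth."""
    return sum(np.exp(-0.5 * ((z - c) / width) ** 2) for c in centers)

def mh_dfn(z, observed, n_frac, n_iter, sigma, step, rng):
    """Metropolis-Hastings over fracture positions with Gaussian data errors,
    storing every state of the chain as the DFN ensemble."""
    centers = rng.uniform(z.min(), z.max(), n_frac)  # random initial DFN
    misfit = np.sum((forward_stress(centers, z) - observed) ** 2)
    ensemble = []
    for _ in range(n_iter):
        prop = centers.copy()
        k = rng.integers(n_frac)                     # translate one fracture
        prop[k] = np.clip(prop[k] + rng.normal(0.0, step), z.min(), z.max())
        new_misfit = np.sum((forward_stress(prop, z) - observed) ** 2)
        # accept with probability exp((old - new) / (2 sigma^2))
        if np.log(rng.random()) < (misfit - new_misfit) / (2.0 * sigma ** 2):
            centers, misfit = prop, new_misfit
        ensemble.append(centers.copy())
    return np.array(ensemble)

rng = np.random.default_rng(0)
z = np.linspace(0.0, 10.0, 101)
truth = np.array([3.0, 7.0])                         # "true" fracture depths
obs = forward_stress(truth, z)
ens = mh_dfn(z, obs, n_frac=2, n_iter=2000, sigma=0.2, step=0.5, rng=rng)
```

A histogram of `ens` then plays the role of the probability map in the paper: depths visited often by the chain are likely fracture locations.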
We study the Volterra property of a class of anisotropic pseudo-differential operators on R x B for a manifold B with edge Y and time variable t. This exposition belongs to a program for studying parabolicity in such a situation. In the present consideration we establish non-smoothing elements in a subalgebra with anisotropic operator-valued symbols of Mellin type, holomorphic in the complex Mellin covariable from the cone theory, where the covariable of t extends, with respect to t, to the lower complex half-plane. The resulting space of Volterra operators enlarges an approach of Buchholz (Parabolische Pseudodifferentialoperatoren mit operatorwertigen Symbolen. Ph.D. thesis, Universität Potsdam, 1996) by the elements necessary for a new operator algebra containing Volterra parametrices under an appropriate condition of anisotropic ellipticity. Our approach avoids some difficulty in choosing Volterra quantizations in the edge case by generalizing specific achievements from the isotropic edge calculus, obtained by Seiler (Pseudodifferential calculus on manifolds with non-compact edges. Ph.D. thesis, University of Potsdam, 1997), see also Gil et al. (in: Demuth et al. (eds) Mathematical research, vol 100. Akademie Verlag, Berlin, pp 113-137, 1997; Osaka J Math 37:221-260, 2000).
The variabilities of the semidiurnal solar and lunar tides of the equatorial electrojet (EEJ) are investigated during the 2003, 2006, 2009 and 2013 major sudden stratospheric warming (SSW) events in this study. For this purpose, ground-magnetometer recordings at the equatorial observatories in Huancayo and Fuquene are utilized. Results show a major enhancement in the amplitude of the EEJ semidiurnal lunar tide in each of the four warming events. The EEJ semidiurnal solar tidal amplitude shows an amplification prior to the onset of warmings, a reduction during the deceleration of the zonal mean zonal wind at 60 degrees N and 10 hPa, and a second enhancement a few days after the peak reversal of the zonal mean zonal wind during all four SSWs. Results also reveal that the amplitude of the EEJ semidiurnal lunar tide becomes comparable to or even greater than the amplitude of the EEJ semidiurnal solar tide during all these warming events. The present study also compares the EEJ semidiurnal solar and lunar tidal changes with the variability of the migrating semidiurnal solar (SW2) and lunar (M2) tides in neutral temperature and zonal wind obtained from numerical simulations at E-region heights. A better agreement is found between the enhancements of the EEJ semidiurnal lunar tide and the M2 tide than between the enhancements of the EEJ semidiurnal solar tide and the SW2 tide, in both the neutral temperature and the zonal wind at E-region altitudes.
If (T-t) is a semigroup of Markov operators on an L-1-space that admits a nontrivial lower bound, then a well-known theorem of Lasota and Yorke asserts that the semigroup is strongly convergent as t -> infinity. In this article we generalize and improve this result in several respects. First, we give a new and very simple proof for the fact that the same conclusion also holds if the semigroup is merely assumed to be bounded instead of Markov. As a main result, we then prove a version of this theorem for semigroups which only admit certain individual lower bounds. Moreover, we generalize a theorem of Ding on semigroups of Frobenius-Perron operators. We also demonstrate how our results can be adapted to the setting of general Banach lattices and we give some counterexamples to show optimality of our results. Our methods combine some rather concrete estimates and approximation arguments with abstract functional analytical tools. One of these tools is a theorem which relates the convergence of a time-continuous operator semigroup to the convergence of embedded discrete semigroups.
In Flad and Harutyunyan (Discrete Contin Dyn Syst 420-429, 2011) it is shown that the Hamiltonian of the helium atom in the Born-Oppenheimer approximation, in the case where two particles coincide, is an edge-degenerate operator which is elliptic in the corresponding edge calculus. The aim of this paper is an analogous investigation in the case where all three particles coincide. More precisely, we show that the Hamiltonian in this case is a corner-degenerate operator which is elliptic as an operator in the corner analysis.
We analyze a general class of self-adjoint difference operators H_epsilon = T_epsilon + V_epsilon on l^2((epsilon Z)^d), where V_epsilon is a multi-well potential and epsilon is a small parameter. We give a coherent review of our results on tunneling, up to new sharp results on the level of complete asymptotic expansions (see [30-35]). Our emphasis is on general ideas and strategy, possibly of interest for a broader range of readers, and less on detailed mathematical proofs. The wells are decoupled by introducing certain Dirichlet operators on regions containing only one potential well. Then the eigenvalue problem for the Hamiltonian H_epsilon is treated as a small perturbation of these comparison problems. After constructing a Finslerian distance d induced by H_epsilon, we show that Dirichlet eigenfunctions decay exponentially with a rate controlled by this distance to the well. It follows with microlocal techniques that the first n eigenvalues of H_epsilon converge to the first n eigenvalues of the direct sum of harmonic oscillators on R^d located at the several wells. In a neighborhood of one well, we construct formal asymptotic expansions of WKB-type for eigenfunctions associated with the low-lying eigenvalues of H_epsilon. These are obtained from eigenfunctions or quasimodes for the operator H_epsilon acting on L^2(R^d), via restriction to the lattice (epsilon Z)^d. Tunneling is then described by a certain interaction matrix; similar to the analysis for the Schrodinger operator (see [22]), the remainder is exponentially small and roughly quadratic compared with the interaction matrix. We give weighted l^2-estimates for the difference of eigenfunctions of Dirichlet operators in neighborhoods of the different wells and the associated WKB-expansions at the wells. In the last step, we derive full asymptotic expansions for interactions between two "wells" (minima) of the potential energy, in particular for the discrete tunneling effect.
Here we essentially use analysis on phase space, complexified in the momentum variable. These results are as sharp as the classical results for the Schrodinger operator in [22].
The simultaneous detection of energy, momentum and temporal information in electron spectroscopy is the key to enhancing the detection efficiency in order to broaden the range of scientific applications. Employing a novel 60-degree wide-angle-acceptance lens system, based on an additional accelerating electron-optical element, leads to a significant enhancement in transmission over the previously employed 30-degree electron lenses. Due to this performance gain, optimized capabilities for time-resolved electron spectroscopy and other high-transmission applications with pulsed ionizing radiation have been obtained. The energy resolution and transmission have been determined experimentally using BESSY II as a photon source. Four different and complementary lens modes have been characterized.
We complete the picture how the asymptotic behavior of a dynamical system is reflected by properties of the associated Perron-Frobenius operator. Our main result states that strong convergence of the powers of the Perron-Frobenius operator is equivalent to setwise convergence of the underlying dynamic in the measure algebra. This situation is furthermore characterized by uniform mixing-like properties of the system.
We prove finiteness and diameter bounds for graphs having a positive Ricci-curvature bound in the Bakry–Émery sense. Our first result using only curvature and maximal vertex degree is sharp in the case of hypercubes. The second result depends on an additional dimension bound, but is independent of the vertex degree. In particular, the second result is the first Bonnet–Myers type theorem for unbounded graph Laplacians. Moreover, our results improve diameter bounds from Fathi and Shu (Bernoulli 24(1):672–698, 2018) and Horn et al. (J für die reine und angewandte Mathematik (Crelle’s J), 2017, https://doi.org/10.1515/crelle-2017-0038) and solve a conjecture from Cushing et al. (Bakry–Émery curvature functions of graphs, 2016).
The ensemble Kalman filter has become a popular data assimilation technique in the geosciences. However, little is known theoretically about its long-term stability and accuracy. In this paper, we investigate the behavior of an ensemble Kalman-Bucy filter applied to continuous-time filtering problems. We derive mean field limiting equations as the ensemble size goes to infinity, as well as uniform-in-time accuracy and stability results for finite ensemble sizes. The latter results require that the process is fully observed and that the measurement noise is small. We also demonstrate that our ensemble Kalman-Bucy filter is consistent with the classic Kalman-Bucy filter for linear systems and Gaussian processes. We finally verify our theoretical findings for the Lorenz-63 system.
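The continuous-time filtering setup above can be illustrated with a minimal Euler-discretized ensemble Kalman-Bucy sketch for a scalar linear model. This is an assumption-laden toy (a deterministic innovation term of Bergemann-Reich type, scalar state, hand-picked coefficients), not the filter variant analyzed in the paper; it only shows the structure dx = a x dt + sig dW observed through dy = h x dt + sqrt(r) dV.

```python
import numpy as np

def enkbf(increments, a, h, sig, r, ens0, dt, rng):
    """Euler-discretised ensemble Kalman-Bucy filter for the scalar linear
    model dx = a x dt + sig dW, dy = h x dt + sqrt(r) dV, using the
    deterministic innovation term K (dy - h (X_i + xbar)/2 dt), K = P h / r."""
    X = ens0.copy()
    means = []
    for dy in increments:
        xbar, P = X.mean(), X.var()
        X = (X + a * X * dt
             + sig * np.sqrt(dt) * rng.normal(size=X.size)  # process noise
             + (P * h / r) * (dy - 0.5 * h * (X + xbar) * dt))
        means.append(X.mean())
    return np.array(means)

rng = np.random.default_rng(1)
dt, n_steps = 0.01, 2000
a, h, sig, r = -0.5, 1.0, 0.5, 0.1
x, truth, incs = 2.0, [], []
for _ in range(n_steps):                 # simulate truth and observations
    x += a * x * dt + sig * np.sqrt(dt) * rng.normal()
    truth.append(x)
    incs.append(h * x * dt + np.sqrt(r * dt) * rng.normal())
est = enkbf(np.array(incs), a, h, sig, r, rng.normal(0.0, 1.0, 50), dt, rng)
err = np.mean((est[-500:] - np.array(truth)[-500:]) ** 2)
```

For this linear-Gaussian toy the ensemble mean should track the truth with a steady-state error governed by the Kalman-Bucy Riccati equation, which is the consistency property the paper proves.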
ShapeRotator
(2018)
The quantification of complex morphological patterns typically involves comprehensive shape and size analyses, usually obtained by gathering morphological data from all the structures that capture the phenotypic diversity of an organism or object. Articulated structures are a critical component of overall phenotypic diversity, but data gathered from these structures are difficult to incorporate into modern analyses because of the complexities associated with jointly quantifying 3D shape in multiple structures. While there are existing methods for analyzing shape variation in articulated structures in two-dimensional (2D) space, these methods do not work in 3D, a rapidly growing area of capability and research. Here, we describe a simple geometric rigid rotation approach that removes the effect of random translation and rotation, enabling the morphological analysis of 3D articulated structures. Our method is based on Cartesian coordinates in 3D space, so it can be applied to any morphometric problem that also uses 3D coordinates (e.g., spherical harmonics). We demonstrate the method by applying it to a landmark-based dataset for analyzing shape variation using geometric morphometrics. We have developed an R tool (ShapeRotator) so that the method can be easily implemented in the commonly used R package geomorph and MorphoJ software. This method will be a valuable tool for 3D morphological analyses in articulated structures by allowing an exhaustive examination of shape and size diversity.
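ShapeRotator itself is implemented in R; purely to illustrate the underlying idea of removing random translation and rotation from 3D landmark coordinates, here is a Python sketch using the standard Kabsch (SVD-based) rigid superimposition. This is one common way to realize such an alignment, not the package's actual algorithm or code.

```python
import numpy as np

def rigid_align(moving, reference):
    """Remove translation and rotation (Kabsch/SVD) so that the landmark
    configuration `moving` is optimally superimposed onto `reference`.
    Both inputs are (n_landmarks, 3) arrays of Cartesian coordinates."""
    mc, rc = moving.mean(0), reference.mean(0)
    M, R0 = moving - mc, reference - rc          # centre both configurations
    U, _, Vt = np.linalg.svd(M.T @ R0)           # cross-covariance SVD
    d = np.sign(np.linalg.det(U @ Vt))           # guard against reflections
    rot = U @ np.diag([1.0, 1.0, d]) @ Vt        # optimal proper rotation
    return M @ rot + rc

rng = np.random.default_rng(7)
ref = rng.random((12, 3))                        # 12 landmarks in 3D
# Simulate arbitrary placement: rotate about z and translate
theta = 0.8
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
moved = ref @ Rz + np.array([5.0, -2.0, 1.0])
aligned = rigid_align(moved, ref)                # recovers ref exactly
```

Because only a rigid motion was applied, the alignment recovers the reference configuration to numerical precision; with real articulated structures, the residual after alignment is the shape variation of interest.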
Paleoearthquakes and historic earthquakes are the most important source of information for the estimation of long-term earthquake recurrence intervals in fault zones, because the corresponding sequences cover more than one seismic cycle. However, these events are often rare, dating uncertainties are enormous, and missing or misinterpreted events lead to additional problems. In the present study, I assume that the time to the next major earthquake depends on the rate of small and intermediate events between the large ones in terms of a clock change model. Mathematically, this leads to a Brownian passage time distribution for recurrence intervals. I take advantage of an earlier finding that under certain assumptions the aperiodicity of this distribution can be related to the Gutenberg-Richter b value, which can be estimated easily from instrumental seismicity in the region under consideration. In this way, both parameters of the Brownian passage time distribution can be attributed to accessible seismological quantities. This allows us to reduce the uncertainties in the estimation of the mean recurrence interval, especially for short paleoearthquake sequences and high dating errors. Using a Bayesian framework for parameter estimation results in a statistical model for earthquake recurrence intervals that assimilates paleoearthquake sequences and instrumental data in a simple way. I present illustrative case studies from Southern California and compare the method with the commonly used approach of exponentially distributed recurrence times based on a stationary Poisson process.
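The Brownian passage time distribution mentioned above is the inverse Gaussian law; a minimal numerical sketch of its density, parameterized by the mean recurrence interval mu and the aperiodicity alpha, is given below. The specific values mu = 150 years and alpha = 0.7 are illustrative, not taken from the study, and the Riemann-sum check merely confirms the standard normalization and mean of this parameterization.

```python
import numpy as np

def bpt_pdf(t, mu, alpha):
    """Brownian passage time (inverse Gaussian) density with mean recurrence
    interval mu and aperiodicity alpha (shape parameter lambda = mu/alpha**2)."""
    return (np.sqrt(mu / (2.0 * np.pi * alpha**2 * t**3))
            * np.exp(-(t - mu) ** 2 / (2.0 * mu * alpha**2 * t)))

# Riemann-sum sanity check: the density integrates to ~1 with mean ~mu
mu, alpha = 150.0, 0.7
t = np.linspace(1e-3, 3000.0, 600000)
dt = t[1] - t[0]
p = bpt_pdf(t, mu, alpha)
total = p.sum() * dt        # should be close to 1
mean = (t * p).sum() * dt   # should be close to mu
```

Linking alpha to the Gutenberg-Richter b value, as the abstract describes, would then fix both parameters of this density from instrumental seismicity plus the paleoearthquake record.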
We consider the problem of low rank matrix recovery in a stochastically noisy high-dimensional setting. We propose a new estimator for the low rank matrix, based on the iterative hard thresholding method, that is computationally efficient and simple. We prove that our estimator is optimal in terms of the Frobenius risk and in terms of the entry-wise risk uniformly over any change of orthonormal basis, allowing us to provide the limiting distribution of the estimator. When the design is Gaussian, we prove that the entry-wise bias of the limiting distribution of the estimator is small, which is of interest for constructing tests and confidence sets for low-dimensional subsets of entries of the low rank matrix.
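The iterative hard thresholding idea behind the estimator (a gradient step on the least-squares loss followed by projection onto rank-r matrices via a truncated SVD) can be sketched as follows. This is a generic textbook version under a Gaussian trace-regression design, with an illustrative rank-one target and noiseless measurements; it is not the paper's exact estimator or tuning.

```python
import numpy as np

def iht_low_rank(designs, y, shape, rank, n_iter=60):
    """Iterative hard thresholding: gradient step on the least-squares loss
    (1/2n) sum (<A_i, M> - y_i)^2, then projection onto rank-`rank` matrices."""
    n = len(y)
    M = np.zeros(shape)
    for _ in range(n_iter):
        residuals = designs.reshape(n, -1) @ M.ravel() - y
        grad = (designs * residuals[:, None, None]).sum(0) / n
        U, s, Vt = np.linalg.svd(M - grad, full_matrices=False)
        M = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # hard-threshold the spectrum
    return M

rng = np.random.default_rng(3)
d1 = d2 = 5
truth = np.outer(rng.normal(size=d1), rng.normal(size=d2))  # rank-one target
n = 400
designs = rng.normal(size=(n, d1, d2))        # Gaussian design matrices A_i
y = designs.reshape(n, -1) @ truth.ravel()    # noiseless measurements <A_i, M>
est = iht_low_rank(designs, y, (d1, d2), rank=1)
rel_err = np.linalg.norm(est - truth) / np.linalg.norm(truth)
```

With isotropic Gaussian designs, the expected gradient at M is M minus the truth, so each iteration contracts toward the target and the rank projection keeps the iterate in the model class.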
We consider a statistical inverse learning (also called inverse regression) problem, where we observe the image of a function f through a linear operator A at i.i.d. random design points X_i, superposed with an additive noise. The distribution of the design points is unknown and can be very general. We analyze simultaneously the direct (estimation of Af) and the inverse (estimation of f) learning problems. In this general framework, we obtain strong and weak minimax optimal rates of convergence (as the number of observations n grows large) for a large class of spectral regularization methods over regularity classes defined through appropriate source conditions. This improves on or completes previous results obtained in related settings. The optimality of the obtained rates is shown not only in the exponent in n but also in the explicit dependency of the constant factor on the variance of the noise and the radius of the source condition set.
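The observation model y_i = (Af)(X_i) + noise can be made concrete with a small discretized sketch using Tikhonov regularization, the simplest member of the spectral regularization family the abstract refers to. Everything below is an illustrative assumption: f lives on a grid, A is a hypothetical well-conditioned nearest-neighbour smoothing matrix, and the design points are uniform draws of grid indices.

```python
import numpy as np

def tikhonov(S, y, lam):
    """Tikhonov (spectral) regularisation for y ~ S f + noise:
    f_hat = (S^T S / n + lam I)^{-1} S^T y / n."""
    n, d = S.shape
    return np.linalg.solve(S.T @ S / n + lam * np.eye(d), S.T @ y / n)

rng = np.random.default_rng(5)
d, n = 30, 500
grid = np.linspace(0.0, 1.0, d)
f = np.sin(2.0 * np.pi * grid)                 # unknown function on a grid
# Toy operator A: identity plus circulant nearest-neighbour smoothing
A = np.eye(d) + 0.3 * (np.roll(np.eye(d), 1, axis=1)
                       + np.roll(np.eye(d), -1, axis=1))
idx = rng.integers(0, d, n)                    # i.i.d. random design points X_i
S = A[idx]                                     # row i evaluates (Af)(X_i)
y = S @ f + 0.01 * rng.normal(size=n)          # noisy observations
f_hat = tikhonov(S, y, lam=1e-4)
direct_rmse = np.sqrt(np.mean((S @ (f_hat - f)) ** 2))  # error on Af
```

The check on `direct_rmse` corresponds to the direct problem (estimating Af); the inverse problem (recovering f itself) degrades with the conditioning of A, which is exactly where the source conditions and rate analysis in the paper come in.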
Left-right (L-R) asymmetry in the body plan is determined by nodal flow in vertebrate embryos. Shinohara et al. (Shinohara K et al. 2012 Nat. Commun. 3, 622 (doi:10.1038/ncomms1624)) used Dpcd and Rfx3 mutant mouse embryos and showed that only a few cilia were sufficient to achieve L-R asymmetry. However, the mechanism underlying the breaking of symmetry by such weak ciliary flow is unclear. Flow-mediated signals associated with L-R asymmetric organogenesis have not been clarified, and two different hypotheses, vesicle transport and mechanosensing, are now debated in the research field of developmental biology. In this study, we developed a computational model of the node system reported by Shinohara et al. and examined the feasibility of the two hypotheses with a small number of cilia. With a small number of rotating cilia, flow was induced only locally, and no global strong flow was observed in the node. Particles were then effectively transported only when they were close to the cilia, and particle transport was strongly dependent on the ciliary positions. Although the maximum wall shear rate was also influenced by ciliary position, the mean wall shear rate at the perinodal wall increased monotonically with the number of cilia. We also investigated the membrane tension of immotile cilia, which is relevant to the regulation of mechanotransduction. The results indicated that a tension of about 0.1 μN m^-1 was exerted at the base even when the applied fluid shear rate was about 0.1 s^-1. The area of high tension was also localized at the upstream side, and negative tension appeared at the downstream side. Such localization may be useful for sensing the flow direction at the periphery, as time-averaged anticlockwise circulation was induced in the node by the rotation of a few cilia. Our numerical results support the mechanosensing hypothesis, and we expect that our study will stimulate further experimental investigations of mechanotransduction in the near future.