Let M be a closed connected spin manifold of dimension 2 or 3 with a fixed orientation and a fixed spin structure. We prove that for a generic Riemannian metric on M the non-harmonic eigenspinors of the Dirac operator are nowhere zero. The proof is based on a transversality theorem and the unique continuation property of the Dirac operator.
This chapter gives an overview of the systematic eradication of basic science foci in European universities over the last two decades. This happens under the slogan of optimising university education for the needs and demands of society. It is pointed out that reliance on “market demands” brings with it long-term deficiencies in the maintenance of basic and advanced knowledge construction in societies, which is necessary for long-term future technological advances. University policies that claim to improve higher education towards more immediate efficiency may end up having the opposite effect, degrading its quality and its expected long-term positive impact on society.
We consider a solution of the nonlinear Klein-Gordon equation perturbed by a parametric driver. The frequency of the parametric perturbation varies slowly and passes through a resonant value, which leads to a change in the solution. We obtain a new connection formula relating the asymptotic solution before and after the resonance.
The present paper is intended to provide the basis for the study of weakly differentiable functions on rectifiable varifolds with locally bounded first variation. The concept proposed here is defined by means of integration-by-parts identities for certain compositions with smooth functions. In this class, the idea of zero boundary values is realised using the relative perimeter of superlevel sets. Results include a variety of Sobolev–Poincaré-type embeddings, embeddings into spaces of continuous and sometimes Hölder-continuous functions, and pointwise differentiability results both of approximate and integral type as well as coarea formulae. As a prerequisite for this study, decomposition properties of such varifolds and a relative isoperimetric inequality are established. Both involve a concept of distributional boundary of a set introduced for this purpose. As applications, the finiteness of the geodesic distance associated with varifolds with suitable summability of the mean curvature and a characterisation of curvature varifolds are obtained.
In the 1980s, the analysis of satellite altimetry data led to the major discovery of gravity lineations in the oceans, with wavelengths between 200 and 1400 km. While the existence of the 200 km scale undulations is widely accepted, undulations at scales larger than 400 km are still a matter of debate. In this paper, we revisit the topic of the large-scale geoid undulations over the oceans in the light of the satellite gravity data provided by the GRACE mission, which are considerably more precise than the altimetry data at wavelengths larger than 400 km. First, we develop a dedicated method of directional Poisson wavelet analysis on the sphere with significance testing, in order to detect and characterize directional structures in geophysical data on the sphere at different spatial scales. This method is particularly well suited for potential field analysis. We validate it on a series of synthetic tests, and then apply it to analyze recent gravity models, as well as a bathymetry data set independent of gravity. Our analysis confirms the existence of gravity undulations at large scale in the oceans, with characteristic scales between 600 and 2000 km. Their direction correlates well with present-day plate motion over the Pacific Ocean, where they are particularly clear and associated with a conjugate direction at the 1500 km scale. A major finding is that the 2000 km scale geoid undulations dominate and have never before been so clearly observed; this is due to the great precision of GRACE data at those wavelengths. Given the large scale of these undulations, they are most likely related to mantle processes.
Taking into account observations and models from other geophysical information, such as seismological tomography, convection and geochemical models, and electrical conductivity in the mantle, we infer that all these inputs indicate a directional fabric of mantle flow at depth, reflecting how the history of subduction influences the organization of lower mantle upwellings.
We study the Volterra property of a class of anisotropic pseudo-differential operators on R × B for a manifold B with edge Y and time variable t. This exposition belongs to a program for studying parabolicity in such a situation. In the present consideration we establish non-smoothing elements in a subalgebra with anisotropic operator-valued symbols of Mellin type with holomorphic symbols in the complex Mellin covariable from the cone theory, where the covariable τ of t extends to symbols holomorphic with respect to τ in the lower complex half-plane. The resulting space of Volterra operators enlarges an approach of Buchholz (Parabolische Pseudodifferentialoperatoren mit operatorwertigen Symbolen. Ph.D. thesis, Universität Potsdam, 1996) by necessary elements to a new operator algebra containing Volterra parametrices under an appropriate condition of anisotropic ellipticity. Our approach avoids some difficulty in choosing Volterra quantizations in the edge case by generalizing specific achievements from the isotropic edge calculus, obtained by Seiler (Pseudodifferential calculus on manifolds with non-compact edges, Ph.D. thesis, University of Potsdam, 1997), see also Gil et al. (in: Demuth et al (eds) Mathematical research, vol 100. Akademie Verlag, Berlin, pp 113-137, 1997; Osaka J Math 37:221-260, 2000).
We discuss the solution theory of operators of the form ∇_X + A, acting on smooth sections of a vector bundle with connection ∇ over a manifold M, where X is a vector field having a critical point with positive linearization at some point p ∈ M. As an operator on a suitable space of smooth sections Γ^∞(U, V), it fulfills a Fredholm alternative, and the same is true for the adjoint operator. Furthermore, we show that the solutions depend smoothly on the data ∇, X and A.
The morphological features in the deviations of the total electron content (TEC) of the ionosphere from the background undisturbed state are analyzed as possible precursors of the earthquake of January 12, 2010 (21:53 UT (16:53 LT), 18.46° N, 72.5° W, M 7.0) in Haiti. To identify these features, global and regional differential TEC maps based on the global 2-h TEC maps provided by NASA in the IONEX format were plotted. For the considered earthquake, long-lived disturbances, presumably of seismic origin, were localized in the near-epicenter area and were accompanied by similar effects in the magnetoconjugate region. Both decreases and increases in the local TEC were observed over the period from 22 UT of January 10 to 08 UT of January 12, 2010. The horizontal dimensions of the anomalies were ~40° in longitude and ~20° in latitude, with the magnitude of TEC disturbances reaching ~40% relative to the background near the epicenter and more than 50% in the magnetoconjugate area. No significant geomagnetic disturbances were observed within January 1-12, 2010, i.e., the detected TEC anomalies were manifestations of interplay between processes in the lithosphere-atmosphere-ionosphere system.
Variational Bayesian inference for nonlinear Hawkes process with Gaussian process self-effects
(2022)
Traditionally, Hawkes processes are used to model time-continuous point processes with history dependence. Here, we propose an extended model where the self-effects are of both excitatory and inhibitory types and follow a Gaussian Process. Whereas previous work either relies on a less flexible parameterization of the model, or requires a large amount of data, our formulation allows for both a flexible model and learning when data are scarce. We continue the line of work of Bayesian inference for Hawkes processes, and derive an inference algorithm by performing inference on an aggregated sum of Gaussian Processes. Approximate Bayesian inference is achieved via data augmentation, and we describe a mean-field variational inference approach to learn the model parameters. To demonstrate the flexibility of the model we apply our methodology on data from different domains and compare it to previously reported results.
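The generative side of such a model can be illustrated with a short simulation. The sketch below is not the authors' method: it draws a nonlinear Hawkes process by Ogata's thinning algorithm, with a hand-picked excitatory-inhibitory kernel standing in for a Gaussian-process draw and a sigmoid link guaranteeing a bounded, positive intensity; all parameter values are illustrative.

```python
import numpy as np

def simulate_nonlinear_hawkes(mu, phi, T, lam_max, rng):
    """Draw a nonlinear Hawkes process on [0, T] by Ogata's thinning.

    The intensity lam(t) = lam_max * sigmoid(mu + sum_i phi(t - t_i))
    is bounded by lam_max by construction; phi may take negative
    values, modelling inhibitory self-effects.
    """
    events, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)   # candidate from dominating process
        if t > T:
            break
        drive = mu + sum(phi(t - s) for s in events)
        lam = lam_max / (1.0 + np.exp(-drive))
        if rng.uniform() * lam_max <= lam:    # thinning: accept w.p. lam/lam_max
            events.append(t)
    return events

rng = np.random.default_rng(0)
# toy self-effect: short-range excitation followed by longer-range inhibition
phi = lambda d: 0.8 * np.exp(-3.0 * d) - 0.3 * np.exp(-0.5 * d)
events = simulate_nonlinear_hawkes(mu=0.5, phi=phi, T=50.0, lam_max=5.0, rng=rng)
```

The sigmoid link is one of several possible positivity-preserving nonlinearities; the paper's inference machinery (data augmentation, mean-field updates) sits on top of a likelihood for exactly this kind of intensity.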
Valuations of Terms
(2003)
Let τ be a type of algebras. There are several commonly used measurements of the complexity of terms of type τ, including the depth or height of a term and the number of variable symbols appearing in a term. In this paper we formalize these various measurements by defining a complexity or valuation mapping on terms. A valuation of terms is thus a mapping from the absolutely free term algebra of type τ into another algebra of the same type on which an order relation is defined. We develop the interconnections between such term valuations and the equational theory of Universal Algebra. The collection of all varieties of a given type forms a complete lattice which is very complex and difficult to study; valuations of terms offer a new method for studying complete sublattices of this lattice.
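Two of the standard valuations mentioned above, depth and the number of variable occurrences, are easy to make concrete. In this hypothetical encoding a term is either a variable name or a pair of an operation symbol and a list of subterms; both mappings are defined by the same structural recursion that makes them valuations.

```python
# A term is a variable name (str) or a pair (operation symbol, list of subterms).
def depth(term):
    """Depth valuation: variables have depth 0, and a compound term has
    depth one more than the maximum depth of its subterms."""
    if isinstance(term, str):
        return 0
    _, args = term
    return 1 + max(depth(a) for a in args)

def var_count(term):
    """Valuation counting variable occurrences in a term."""
    if isinstance(term, str):
        return 1
    _, args = term
    return sum(var_count(a) for a in args)

# the term f(g(x, y), z): depth 2, three variable occurrences
t = ("f", [("g", ["x", "y"]), "z"])
```

Any such recursion into an ordered algebra of the same type yields a valuation in the sense of the abstract.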
In this work we extract the microphysical properties of aerosols for a collection of measurement cases with low volume depolarization ratio originating from fire sources captured by the Raman lidar located at the National Institute of Optoelectronics (INOE) in Bucharest. Our algorithm was tested not only for pure smoke but also for mixed smoke and urban aerosols of variable age and growth. Applying a sensitivity analysis to the initial parameter settings of our retrieval code proved vital for producing semi-automatized retrievals with a hybrid regularization method developed at the Institute of Mathematics of Potsdam University. A direct quantitative comparison of the retrieved microphysical properties with measurements from a Compact Time of Flight Aerosol Mass Spectrometer (CToF-AMS) is used to validate our algorithm. Microphysical retrievals performed with sun photometer data are also used to explore our results. Focusing on the fine mode, we observed remarkable similarities between the retrieved size distribution and the one measured by the AMS. More complicated atmospheric structures and the factor of absorption appear to depend more on particle radius being subject to variation. A good correlation was found between the aerosol effective radius and particle age, using the ratio of lidar ratios (LR: aerosol extinction to backscatter ratios) as an indicator for the latter. Finally, the dependence on relative humidity of the aerosol effective radii measured on the ground and within the layers aloft shows similar patterns.
Using Causal Effect Networks to Analyze Different Arctic Drivers of Midlatitude Winter Circulation
(2016)
In recent years, the Northern Hemisphere midlatitudes have suffered from severe winters like the extreme 2012/13 winter in the eastern United States. These cold spells were linked to a meandering upper-tropospheric jet stream pattern and a negative Arctic Oscillation index (AO). However, the nature of the drivers behind these circulation patterns remains controversial. Various studies have proposed different mechanisms related to changes in the Arctic, most of them related to a reduction in sea ice concentrations or increasing Eurasian snow cover. Here, a novel type of time series analysis, called causal effect networks (CEN), based on graphical models is introduced to assess causal relationships and their time delays between different processes. The effect of different Arctic actors on winter circulation on weekly to monthly time scales is studied, and robust network patterns are found. Barents and Kara sea ice concentrations are detected to be important external drivers of the midlatitude circulation, influencing winter AO via tropospheric mechanisms and through processes involving the stratosphere. Eurasian snow cover is also detected to have a causal effect on sea level pressure in Asia, but its exact role in the AO remains unclear. The CEN approach presented in this study overcomes some difficulties in interpreting correlation analyses, complements model experiments for testing hypotheses involving teleconnections, and can be used to assess their validity. The findings confirm that sea ice concentrations in autumn in the Barents and Kara Seas are an important driver of winter circulation in the midlatitudes.
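The core idea of distinguishing a causal lagged driver from a spurious correlation by conditioning on the autocorrelated past can be sketched in a few lines. This is not the CEN implementation used in the study, only a minimal stand-in: partial correlation of a target with a lagged driver, conditioning on the target's own previous value, applied to synthetic data with a known lag-2 influence.

```python
import numpy as np

def partial_corr(a, b, z):
    """Correlation of a and b after regressing out the conditioning set z."""
    z = np.column_stack([z, np.ones(len(a))])
    ra = a - z @ np.linalg.lstsq(z, a, rcond=None)[0]
    rb = b - z @ np.linalg.lstsq(z, b, rcond=None)[0]
    return float(np.corrcoef(ra, rb)[0, 1])

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)                    # driver (e.g. a sea ice index)
y = np.zeros(n)                           # target (e.g. a circulation index)
for t in range(2, n):                     # y is driven by x two steps earlier
    y[t] = 0.5 * y[t - 1] + 0.6 * x[t - 2] + 0.1 * rng.normal()

# link strength of x -> y at each lag, conditioning on y's own past
lags = {tau: partial_corr(y[tau:], x[:-tau], y[tau - 1:-1])
        for tau in (1, 2, 3)}
```

Only the true lag stands out; plain lagged correlation would also flag neighbouring lags through y's autocorrelation, which is exactly the difficulty the CEN approach is designed to overcome.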
Understanding and reducing complex systems pharmacology models based on a novel input-response index
(2018)
A growing understanding of complex processes in biology has led to large-scale mechanistic models of pharmacologically relevant processes. These models are increasingly used to study the response of the system to a given input or stimulus, e.g., after drug administration. Understanding the input–response relationship, however, is often a challenging task due to the complexity of the interactions between its constituents as well as the size of the models. An approach that quantifies the importance of the different constituents for a given input–output relationship and allows one to reduce the dynamics to its essential features is therefore highly desirable. In this article, we present a novel state- and time-dependent quantity called the input–response index that quantifies the importance of state variables for a given input–response relationship at a particular time. It is based on the concept of time-bounded controllability and observability, and defined with respect to a reference dynamics. In application to the brown snake venom–fibrinogen (Fg) network, the input–response indices give insight into the coordinated action of specific coagulation factors and into those factors that contribute only little to the response. We demonstrate how the indices can be used to reduce large-scale models in a two-step procedure: (i) elimination of states whose dynamics have only minor impact on the input–response relationship, and (ii) proper lumping of the remaining (lower order) model. In application to the brown snake venom–fibrinogen network, this resulted in a reduction from 62 to 8 state variables in the first step, and a further reduction to 5 state variables in the second step. We further illustrate that the sequence, in which a recursive algorithm eliminates and/or lumps state variables, has an impact on the final reduced model.
The input–response indices are particularly suited to determine an informed sequence, since they are based on the dynamics of the original system. In summary, the novel measure of importance provides a powerful tool for analysing the complex dynamics of large-scale systems and a means for very efficient model order reduction of nonlinear systems.
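The elimination step admits a crude numerical caricature. The sketch below is not the input–response index of the article (which rests on time-bounded controllability and observability); it merely freezes each state of a toy four-state cascade at its reference value and measures the resulting output discrepancy, mimicking step (i): states whose freezing barely changes the input–output response are candidates for elimination. The model and all rates are invented for illustration.

```python
import numpy as np

def simulate(freeze=None, T=20.0, dt=0.01):
    """Euler-integrate a toy 4-state cascade driven by an input pulse on x0;
    the response is x2. State x3 couples into x2 only weakly. If freeze=i,
    state i is pinned to its reference value 0 throughout."""
    x = np.zeros(4)
    out = []
    for k in range(int(T / dt)):
        u = 1.0 if k * dt < 1.0 else 0.0            # unit input pulse
        dx = np.array([u - x[0],
                       x[0] - x[1],
                       x[1] + 0.01 * x[3] - 0.5 * x[2],
                       x[0] - x[3]])
        x = x + dt * dx
        if freeze is not None:
            x[freeze] = 0.0
        out.append(x[2])
    return np.array(out)

ref = simulate()
# importance of each state for the input -> x2 response (L1 discrepancy)
importance = [np.abs(simulate(freeze=i) - ref).sum() * 0.01 for i in range(4)]
```

In this toy, the weakly coupled state x3 scores orders of magnitude below the cascade states, so it would be eliminated first; the article's index refines this brute-force idea into a state- and time-resolved quantity.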
Ulcerative colitis (UC) is one of the inflammatory bowel diseases, and moderate to severe UC patients can be treated with anti-tumour necrosis factor alpha monoclonal antibodies, including infliximab (IFX). Even though treatment of UC patients with IFX has been in place for over a decade, many gaps in the modelling of IFX PK in this population remain. This is even more true for acute severe UC (ASUC) patients, for whom early prediction of IFX pharmacokinetics (PK) could greatly improve treatment outcome. Thus, this review aims to compile and analyse published population PK models of IFX in UC and ASUC patients, and to assess the current knowledge on the impact of disease activity on IFX PK. For this, a semi-systematic literature search was conducted, from which 26 publications including a population PK model analysis of UC patients receiving IFX therapy were selected. Amongst those, only four developed a model specifically for UC patients, and only three populations included severe UC patients. Investigations of the impact of disease activity on PK were reported in only 4 of the 14 models selected. In addition, the lack of reported model codes and assessments of predictive performance makes the use of published models in a clinical setting challenging. Thus, more comprehensive investigation of PK in UC and ASUC is needed, as well as more adequate reporting of developed models and their evaluation, in order to apply them in a clinical setting.
We construct eta- and rho-invariants for Dirac operators, on the universal covering of a closed manifold, that are invariant under the projective action associated to a 2-cocycle of the fundamental group. We prove an Atiyah-Patodi-Singer index theorem in this setting, as well as its higher generalisation. Applications concern the classification of positive scalar curvature metrics on closed spin manifolds. We also investigate the properties of these twisted invariants for the signature operator and the relation to the higher invariants.
The evolution of the closed Friedmann Universe with a packet of short scalar waves is considered with the help of the Wheeler-DeWitt equation. The packet ensures conservation of homogeneity and isotropy of the metric on average. It is shown that during tunneling the amplitudes of short waves of a scalar field can increase catastrophically fast if their influence on the metric is not taken into account. This effect is similar to the Rubakov effect of catastrophic particle creation, calculated already in 1984. In our approach to the problem it is possible to consider the self-consistent dynamics of the expansion of the Universe and the amplification of short waves. This results in a decrease of the barrier and an interruption of the amplification of waves, and we obtain an exit of the wave function from the quantum region to the classically allowed region.
We analyze a general class of difference operators Hε=Tε+Vε on ℓ2((εZ)d), where Vε is a multi-well potential and ε is a small parameter. We derive full asymptotic expansions of the prefactor of the exponentially small eigenvalue splitting due to interactions between two “wells” (minima) of the potential energy, i.e., for the discrete tunneling effect. We treat both the case where there is a single minimal geodesic (with respect to the natural Finsler metric induced by the leading symbol h0(x,ξ) of Hε) connecting the two minima and the case where the minimal geodesics form an ℓ+1 dimensional manifold, ℓ≥1. These results on the tunneling problem are as sharp as the classical results for the Schrödinger operator in Helffer and Sjöstrand (Commun PDE 9:337–408, 1984). Technically, our approach is pseudo-differential and we adapt techniques from Helffer and Sjöstrand [Analyse semi-classique pour l’équation de Harper (avec application à l’équation de Schrödinger avec champ magnétique), Mémoires de la S.M.F., 2 series, tome 34, pp 1–113, 1988)] and Helffer and Parisse (Ann Inst Henri Poincaré 60(2):147–187, 1994) to our discrete setting.
We analyze a general class of difference operators Hε=Tε+Vε on ℓ2((εZ)d), where Vε is a multi-well potential and ε is a small parameter. We decouple the wells by introducing certain Dirichlet operators on regions containing only one potential well, and we treat the eigenvalue problem for Hε as a small perturbation of these comparison problems. We describe tunneling by a certain interaction matrix, similar to the analysis for the Schrödinger operator [see Helffer and Sjöstrand in Commun Partial Differ Equ 9:337-408, 1984], and estimate the remainder, which is exponentially small and roughly quadratic compared with the interaction matrix.
Trees and Valuation Rings
(2000)
This article assesses the distance between the laws of stochastic differential equations with multiplicative Lévy noise on path space in terms of their characteristics. The notion of transportation distance on the set of Lévy kernels introduced by Kosenkova and Kulik yields a natural and statistically tractable upper bound on the noise sensitivity. This extends recent results for the additive case in terms of coupling distances to the multiplicative case. The strength of this notion is shown in a statistical implementation for simulations and the example of a benchmark time series in paleoclimate.
Broad-spectrum antibiotic combination therapy is frequently applied due to increasing resistance development of infective pathogens. The objective of the present study was to evaluate two common empiric broad-spectrum combination therapies consisting of either linezolid (LZD) or vancomycin (VAN) combined with meropenem (MER) against Staphylococcus aureus (S. aureus) as the most frequent causative pathogen of severe infections. A semimechanistic pharmacokinetic-pharmacodynamic (PK-PD) model mimicking a simplified bacterial life-cycle of S. aureus was developed upon time-kill curve data to describe the effects of LZD, VAN, and MER alone and in dual combinations. The PK-PD model was successfully (i) evaluated with external data from two clinical S. aureus isolates and further drug combinations and (ii) challenged to predict common clinical PK-PD indices and breakpoints. Finally, clinical trial simulations were performed that revealed that the combination of VAN-MER might be favorable over LZD-MER due to an unfavorable antagonistic interaction between LZD and MER.
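A drastically simplified version of such a time-kill model can be written down directly. The sketch below is not the published semimechanistic model: it collapses the bacterial life-cycle to a single logistic compartment and combines the two drugs additively via Emax kill terms (the published model additionally quantified pharmacodynamic interactions such as the LZD-MER antagonism); every parameter value is invented for illustration.

```python
def time_kill(conc_a, conc_b, t_end=24.0, dt=0.01):
    """Toy time-kill simulation: logistic bacterial growth with additive
    Emax kill terms for two drugs. All parameters are illustrative."""
    kg, bmax = 1.0, 1e9          # growth rate (1/h), capacity (CFU/mL)
    emax_a, ec50_a = 2.0, 1.0    # maximal kill rate and potency, drug A
    emax_b, ec50_b = 1.5, 0.5    # maximal kill rate and potency, drug B
    kill = (emax_a * conc_a / (ec50_a + conc_a)
            + emax_b * conc_b / (ec50_b + conc_b))
    b = 1e6                      # inoculum (CFU/mL)
    for _ in range(int(t_end / dt)):
        b += dt * (kg * (1.0 - b / bmax) - kill) * b
        b = max(b, 1.0)          # crude limit of detection
    return b

# bacterial burden after 24 h: monotherapies vs the combination
b_a = time_kill(2.0, 0.0)
b_b = time_kill(0.0, 1.0)
b_ab = time_kill(2.0, 1.0)
```

Already in this toy the combination outperforms either monotherapy; fitting an interaction term on top of the additive null model is how synergy or antagonism, as between LZD and MER, is typically quantified.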
Towards the right Hamiltonians for singular perturbations via regularization and extension theory
(1996)
Towards the assimilation of tree-ring-width records using ensemble Kalman filtering techniques
(2016)
This paper investigates the applicability of the Vaganov–Shashkin–Lite (VSL) forward model for tree-ring-width chronologies as observation operator within a proxy data assimilation (DA) setting. Based on the principle of limiting factors, VSL combines temperature and moisture time series in a nonlinear fashion to obtain simulated TRW chronologies. When used as observation operator, this modelling approach implies three compounding, challenging features: (1) time averaging, (2) “switching recording” of 2 variables and (3) bounded response windows leading to “thresholded response”. We generate pseudo-TRW observations from a chaotic 2-scale dynamical system, used as a cartoon of the atmosphere-land system, and attempt to assimilate them via ensemble Kalman filtering techniques. Results within our simplified setting reveal that VSL’s nonlinearities may lead to considerable loss of assimilation skill, as compared to the utilization of a time-averaged (TA) linear observation operator. In order to understand this undesired effect, we embed VSL’s formulation into the framework of fuzzy logic (FL) theory, which thereby exposes multiple representations of the principle of limiting factors. DA experiments employing three alternative growth rate functions disclose a strong link between the lack of smoothness of the growth rate function and the loss of optimality in the estimate of the TA state. Accordingly, VSL’s performance as observation operator can be enhanced by resorting to smoother FL representations of the principle of limiting factors. This finding fosters new interpretations of tree-ring-growth limitation processes.
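The flavour of these assimilation experiments can be conveyed with a minimal stochastic ensemble Kalman filter analysis step. This is a generic textbook update, not the study's code: the observation operator below is a two-driver minimum, a cartoon of VSL's principle of limiting factors, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

def enkf_update(ens, obs, obs_op, obs_var):
    """Stochastic ensemble Kalman filter analysis step.
    ens: (n_ens, n_state); obs_op maps a state vector to a scalar observation."""
    y = np.array([obs_op(x) for x in ens])            # predicted observations
    x_mean, y_mean = ens.mean(0), y.mean(0)
    xa = ens - x_mean                                 # state anomalies
    ya = y - y_mean                                   # observation anomalies
    cov_xy = xa.T @ ya / (len(ens) - 1)               # state-observation covariance
    var_y = ya @ ya / (len(ens) - 1) + obs_var
    gain = cov_xy / var_y                             # Kalman gain (scalar obs)
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), len(ens))
    return ens + np.outer(perturbed - y, gain)

# VSL-style operator: growth limited by the smaller of two drivers
obs_op = lambda x: min(x[0], x[1])
ens = rng.normal([1.0, 2.0], 0.5, size=(40, 2))       # prior ensemble
post = enkf_update(ens, obs=0.8, obs_op=obs_op, obs_var=0.05)
```

The min makes the operator non-smooth: members limited by different drivers contribute conflicting regression slopes to the gain, a toy version of the skill loss the paper attributes to VSL's thresholded response.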
Tomographic Reservoir Imaging with DNA-Labeled Silica Nanotracers: The First Field Validation
(2018)
This study presents the first field validation of using DNA-labeled silica nanoparticles as tracers to image subsurface reservoirs by travel time based tomography. During a field campaign in Switzerland, we performed short-pulse tracer tests under a forced hydraulic head gradient to conduct a multisource-multireceiver tracer test and tomographic inversion, determining the two-dimensional hydraulic conductivity field between two vertical wells. Together with three traditional solute dye tracers, we injected spherical silica nanotracers, encoded with synthetic DNA molecules, which are protected by a silica layer against damage due to chemicals, microorganisms, and enzymes. Temporal moment analyses of the recorded tracer concentration breakthrough curves (BTCs) indicate higher mass recovery, less mean residence time, and smaller dispersion of the DNA-labeled nanotracers, compared to solute dye tracers. Importantly, travel time based tomography, using nanotracer BTCs, yields a satisfactory hydraulic conductivity tomogram, validated by the dye tracer results and previous field investigations. These advantages of DNA-labeled nanotracers, in comparison to traditional solute dye tracers, make them well-suited for tomographic reservoir characterizations in fields such as hydrogeology, petroleum engineering, and geothermal energy, particularly with respect to resolving preferential flow paths or the heterogeneity of contact surfaces or by enabling source zone characterizations of dense nonaqueous phase liquids.
Toda chains with type A_m Lie algebra for multidimensional m-component perfect fluid cosmology
(1999)
Toda chains with type A_m Lie algebra for multidimensional m-component perfect fluid cosmology
(1998)
Rapid population and economic growth in Southeast Asia has been accompanied by extensive land use change with consequent impacts on catchment hydrology. Modeling methodologies capable of handling changing land use conditions are therefore becoming ever more important and are receiving increasing attention from hydrologists. A recently developed data-assimilation-based framework that allows model parameters to vary through time in response to signals of change in observations is considered for a medium-sized catchment (2880 km²) in northern Vietnam experiencing substantial but gradual land cover change. We investigate the efficacy of the method as well as the importance of the chosen model structure in ensuring the success of a time-varying parameter method. The method was used with two lumped daily conceptual models (HBV and HyMOD) that gave good-quality streamflow predictions during pre-change conditions. Although both time-varying parameter models gave improved streamflow predictions under changed conditions compared to the time-invariant parameter model, persistent biases for low flows were apparent in the HyMOD case. It was found that HyMOD was not suited to representing the modified baseflow conditions, resulting in extreme and unrealistic time-varying parameter estimates. This work shows that the chosen model can be critical for ensuring the time-varying parameter framework successfully models streamflow under changing land cover conditions. It can also be used to determine whether land cover changes (and not just meteorological factors) contribute to the observed hydrologic changes in retrospective studies where the lack of a paired control catchment precludes such an assessment.
In quantum mechanics the temporal decay of certain resonance states is associated with an effective time evolution e^{-it h(κ)}, where h(·) is an analytic family of non-self-adjoint matrices. In general the corresponding resonance states do not decay exponentially in time. Using analytic perturbation theory, we derive asymptotic expansions for e^{-it h(κ)}, simultaneously in the limits κ → 0 and t → ∞, where the corrections with respect to pure exponential decay have uniform bounds in the single complex variable κ²t.
In the Appendix we briefly review analytic perturbation theory, replacing the classical reference to the 1920 book of Knopp [Funktionentheorie II, Anwendungen und Weiterführung der allgemeinen Theorie, Sammlung Göschen, Vereinigung wissenschaftlicher Verleger Walter de Gruyter, 1920] and its terminology by standard modern references. This might be of independent interest.
Tikhonov regularization with oversmoothing penalty for nonlinear statistical inverse problems
(2020)
In this paper, we consider the nonlinear ill-posed inverse problem with noisy data in the statistical learning setting. The Tikhonov regularization scheme in Hilbert scales is considered to reconstruct the estimator from the random noisy data. In this statistical learning setting, we derive the rates of convergence for the regularized solution under certain assumptions on the nonlinear forward operator and the prior assumptions. We discuss estimates of the reconstruction error using the approach of reproducing kernel Hilbert spaces.
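Outside the statistical-learning machinery, the basic Tikhonov scheme for a nonlinear problem is easy to demonstrate numerically. The sketch below is a hypothetical toy, not the paper's setting: a three-dimensional forward map F(x) = A tanh(x), a plain penalty α‖x‖², and gradient descent on the Tikhonov functional; in the paper the penalty acts in a Hilbert scale (oversmoothing) and the analysis runs in reproducing kernel Hilbert spaces.

```python
import numpy as np

rng = np.random.default_rng(3)

# toy nonlinear forward operator: componentwise saturation, then mixing
A = np.array([[1.0, 0.5, 0.0],
              [0.5, 1.0, 0.5],
              [0.0, 0.5, 1.0]])
F = lambda x: A @ np.tanh(x)

x_true = np.array([0.8, -0.4, 0.3])
y = F(x_true) + 0.01 * rng.normal(size=3)     # noisy data

def tikhonov(y, alpha, steps=5000, lr=0.05):
    """Minimise ||F(x) - y||^2 + alpha * ||x||^2 by gradient descent."""
    x = np.zeros(3)
    for _ in range(steps):
        r = F(x) - y
        jac = A * (1.0 - np.tanh(x) ** 2)     # chain rule: J_ij = A_ij * sech^2(x_j)
        x -= lr * (2.0 * jac.T @ r + 2.0 * alpha * x)
    return x

x_hat = tikhonov(y, alpha=1e-3)
```

The choice of α trades data fit against stability, exactly the balance whose convergence rates the paper quantifies under source and operator conditions.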
Thermophysical modelling and parameter estimation of small solar system bodies via data assimilation
(2020)
Deriving thermophysical properties such as thermal inertia from thermal infrared observations provides useful insights into the structure of the surface material on planetary bodies. The estimation of these properties is usually done by fitting temperature variations calculated by thermophysical models to infrared observations. For multiple free model parameters, traditional methods such as least-squares fitting or Markov chain Monte Carlo methods become computationally too expensive. Consequently, the simultaneous estimation of several thermophysical parameters, together with their corresponding uncertainties and correlations, is often not computationally feasible and the analysis is usually reduced to fitting one or two parameters. Data assimilation (DA) methods have been shown to be robust while sufficiently accurate and computationally affordable even for a large number of parameters. This paper will introduce a standard sequential DA method, the ensemble square root filter, for thermophysical modelling of asteroid surfaces. This method is used to re-analyse infrared observations of the MARA instrument, which measured the diurnal temperature variation of a single boulder on the surface of near-Earth asteroid (162173) Ryugu. The thermal inertia is estimated to be 295 ± 18 J m^-2 K^-1 s^-1/2, while all five free parameters of the initial analysis are varied and estimated simultaneously. Based on this thermal inertia estimate, the thermal conductivity of the boulder is estimated to be between 0.07 and 0.12 W m^-1 K^-1 and the porosity to be between 0.30 and 0.52. For the first time in thermophysical parameter derivation, correlations and uncertainties of all free model parameters are incorporated in an estimation procedure that is more than 5000 times more efficient than a comparable parameter sweep.
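The parameter-estimation step can be illustrated with a scalar ensemble square root filter in the Whitaker-Hamill form. This is a generic sketch, not the MARA analysis: the "thermal model" is a made-up linear map from thermal inertia to a peak temperature, and every number is illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def ensrf_update(theta, h, y, r):
    """Scalar ensemble square root filter update (Whitaker-Hamill form).
    theta: parameter ensemble; h: predicted observations for each member;
    y: the observation; r: observation error variance."""
    th_m, h_m = theta.mean(), h.mean()
    th_a, h_a = theta - th_m, h - h_m                # ensemble anomalies
    var_h = h_a @ h_a / (len(h) - 1)
    cov = th_a @ h_a / (len(h) - 1)
    gain = cov / (var_h + r)                         # Kalman gain
    alpha = 1.0 / (1.0 + np.sqrt(r / (var_h + r)))   # square root factor
    # deterministic update: shift the mean, shrink the anomalies
    return th_m + gain * (y - h_m) + th_a - alpha * gain * h_a

# toy "thermal model": peak surface temperature falls with thermal inertia
g = lambda th: 400.0 - 0.3 * th
theta_true = 295.0
y_obs = g(theta_true) + rng.normal(0.0, 2.0)

theta = rng.normal(250.0, 40.0, size=50)             # prior parameter ensemble
theta = ensrf_update(theta, g(theta), y_obs, r=4.0)
```

Unlike the stochastic EnKF, no observation perturbations are drawn: the anomaly scaling reproduces the correct posterior spread deterministically, which is what makes the method attractive when every ensemble member is an expensive thermophysical model run.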
In June 2018, after 4 years of cruise, the Japanese space probe Hayabusa2 [1-Watanabe S. et al.: Hayabusa2 Mission Overview. (2017)] reached the Near-Earth Asteroid (162173) Ryugu. Hayabusa2 carried a small Lander named MASCOT (Mobile Asteroid Surface Scout) [2-Ho T. M. et al.: MASCOT-The Mobile Asteroid Surface Scout onboard the Hayabusa2 mission. (2017)], jointly developed by the German Aerospace Center (DLR) and the French Space Agency (CNES), to investigate Ryugu's surface structure, composition and physical properties including its thermal behaviour and magnetization in-situ. The Microgravity User Support Centre (DLR-MUSC) in Cologne was in charge of providing all thermal conditions and constraints necessary for the selection of the final landing site and for the final operations of the Lander MASCOT on the surface of the asteroid Ryugu. This article provides a comprehensive assessment of these thermal conditions and constraints, based on predictions performed with the Thermal Mathematical Model (TMM) of MASCOT using different asteroid surface thermal models, ephemeris data for approach as well as descent and hopping trajectories, the related operation sequences and scenarios and the possible environmental conditions driven by the Hayabusa2 spacecraft. A comparison with the real telemetry data confirms the analysis and provides further information about the asteroid characteristics.
Purpose This review provides an overview of the current challenges in oral targeted antineoplastic drug (OAD) dosing and outlines the unexploited value of therapeutic drug monitoring (TDM). Factors influencing the pharmacokinetic exposure in OAD therapy are depicted together with an overview of different TDM approaches. Finally, current evidence for TDM for all approved OADs is reviewed. Methods A comprehensive literature search (covering literature published until April 2020), including primary and secondary scientific literature on pharmacokinetics and dose individualisation strategies for OADs, together with US FDA Clinical Pharmacology and Biopharmaceutics Reviews and the Committee for Medicinal Products for Human Use European Public Assessment Reports was conducted. Results OADs are highly potent drugs, which have substantially changed treatment options for cancer patients. Nevertheless, high pharmacokinetic variability and low treatment adherence are risk factors for treatment failure. TDM is a powerful tool to individualise drug dosing, ensure drug concentrations within the therapeutic window and increase treatment success rates. After reviewing the literature for 71 approved OADs, we show that exposure-response and/or exposure-toxicity relationships have been established for the majority. Moreover, TDM has been proven to be feasible for individualised dosing of abiraterone, everolimus, imatinib, pazopanib, sunitinib and tamoxifen in prospective studies. There is a lack of experience in how to best implement TDM as part of clinical routine in OAD cancer therapy. Conclusion Sub-therapeutic concentrations and severe adverse events are current challenges in OAD treatment, which can both be addressed by the application of TDM-guided dosing, ensuring concentrations within the therapeutic window.
We study mixed boundary value problems for an elliptic operator A on a manifold X with boundary Y, i.e., Au = f in int X, T± u = g± on int Y±, where Y is subdivided into subsets Y± with an interface Z and boundary conditions T± on Y± that are Shapiro-Lopatinskij elliptic up to Z from the respective sides. We assume that Z ⊂ Y is a manifold with conical singularity v. As an example we consider the Zaremba problem, where A is the Laplacian and T− is the Dirichlet, T+ the Neumann condition. The problem is treated as a corner boundary value problem near v, which is the new point and the main difficulty in this paper. Outside v the problem belongs to the edge calculus, as is shown in Bull. Sci. Math. (to appear). With a mixed problem we associate Fredholm operators in weighted corner Sobolev spaces with double weights, under suitable edge conditions along Z \ {v} of trace and potential type. We construct parametrices within the calculus and establish the regularity of solutions.
In the semiclassical limit ℏ → 0, we analyze a class of self-adjoint Schrödinger operators H_ℏ = ℏ²L + ℏW + V·id_E acting on sections of a vector bundle E over an oriented Riemannian manifold M, where L is a Laplace type operator, W is an endomorphism field and the potential energy V has non-degenerate minima at a finite number of points m_1, ..., m_r ∈ M, called potential wells. Using quasimodes of WKB-type near m_j for eigenfunctions associated with the low-lying eigenvalues of H_ℏ, we analyze the tunneling effect, i.e. the splitting between low-lying eigenvalues, which e.g. arises in certain symmetric configurations. Technically, we treat the coupling between different potential wells by an interaction matrix, and we consider the case of a single minimal geodesic (with respect to the associated Agmon metric) connecting two potential wells and the case of a submanifold of minimal geodesics of dimension l + 1. This dimension l determines the polynomial prefactor for exponentially small eigenvalue splitting.
We classify the existent Birkhoff-type theorems into four classes: First, in field theory, the theorem states the absence of helicity 0- and spin 0-parts of the gravitational field. Second, in relativistic astrophysics, it is the statement that the gravitational far-field of a spherically symmetric star carries, apart from its mass, no information about the star; therefore, a radially oscillating star has a static gravitational far-field. Third, in mathematical physics, Birkhoff's theorem reads: up to singular exceptions of measure zero, the spherically symmetric solutions of Einstein's vacuum field equation with Lambda = 0 can be expressed by the Schwarzschild metric; for Lambda ≠ 0, it is the Schwarzschild-de Sitter metric instead. Fourth, in differential geometry, any statement of the type: every member of a family of pseudo-Riemannian space-times has more isometries than expected from the original metric ansatz, carries the name Birkhoff-type theorem. Within the fourth of these classes we present some new results with further values of dimension and signature of the related spaces, including some counterexamples: families of space-times where no Birkhoff-type theorem is valid. These counterexamples further confirm the conjecture that the Birkhoff-type theorems have their origin in the property that the two eigenvalues of the Ricci tensor of two-dimensional pseudo-Riemannian spaces always coincide, a property without analogy in higher dimensions. Hence, Birkhoff-type theorems exist only for those physical situations which are reducible to two dimensions.
In this preparatory chapter, the tools of stochastic analysis needed for the investigation of the asymptotic behavior of the stochastic Chafee-Infante equation are provided. In the first place, this encompasses a recollection of basic facts about Lévy processes with values in Hilbert spaces. Symmetric α-stable Lévy processes, which play the role of the additive noise processes perturbing the deterministic Chafee-Infante equation in the systems whose stochastic dynamics are our main interest, are the focus of our investigation (Sect. 3.1).
Low Earth orbiting geomagnetic satellite missions, such as the Swarm satellite mission, are the only means to monitor and investigate ionospheric currents on a global scale and to make in situ measurements of F region currents. High-precision geomagnetic satellite missions are also able to detect ionospheric currents during quiet-time geomagnetic conditions that only have few nanotesla amplitudes in the magnetic field. An efficient method to isolate the ionospheric signals from satellite magnetic field measurements has been the use of residuals between the observations and predictions from empirical geomagnetic models for other geomagnetic sources, such as the core and lithospheric field or signals from the quiet-time magnetospheric currents. This study aims at highlighting the importance of high-resolution magnetic field models that are able to predict the lithospheric field and that consider the quiet-time magnetosphere for reliably isolating signatures from ionospheric currents during geomagnetically quiet times. The effects on the detection of ionospheric currents arising from neglecting the lithospheric and magnetospheric sources are discussed using the example of four Swarm orbits during very quiet times. The respective orbits show a broad range of typical scenarios, such as strong and weak ionospheric signal (during day- and nighttime, respectively) superimposed over strong and weak lithospheric signals. If predictions from the lithosphere or magnetosphere are not properly considered, the amplitude of the ionospheric currents, such as the midlatitude Sq currents or the equatorial electrojet (EEJ), is modulated by 10-15 % in the examples shown. An analysis of several orbits above the African sector, where the lithospheric field is significant, showed that the peak value of the signatures of the EEJ is in error by 5 % on average when lithospheric contributions are not considered, which is in the range of uncertainties of present empirical models of the EEJ.
We study pseudo-differential operators on a cylinder R x B, where B has conical singularities. Configurations of that kind are the local model of corner singularities with cross section B. Operators in our calculus are assumed to have symbols a which are meromorphic in the complex covariable with values in the algebra of all cone operators on B. We show an explicit formula for solutions of the homogeneous equation if a is independent of the axial variable t ∈ R. Each non-bijectivity point of the symbol in the complex plane corresponds to a finite-dimensional space of solutions. Moreover, we give a relative index formula.
A zig-zag (or fence) order is a special partial order on a (finite) set. In this paper, we consider the semigroup TFn of all order-preserving transformations on an n-element zig-zag-ordered set. We determine the rank of TFn and provide a minimal generating set for TFn. Moreover, a formula for the number of idempotents in TFn is given.
In this paper, we give a complete classification of all finite simple groups with maximal subgroups of index n, where n = 2^a · 3^b for a, b ≥ 1. As a consequence, for such n, all primitive permutation groups of degree n are given. The motivation for this work also comes from a study of Cayley graphs of certain valency on a finite simple group.
A term, also called a tree, is said to be linear, if each variable occurs in the term only once. The linear terms and sets of linear terms, the so-called linear tree languages, play some role in automata theory and in the theory of formal languages in connection with recognizability. We define a partial superposition operation on sets of linear trees of a given type and study the properties of some many-sorted partial clones that have sets of linear trees as elements and partial superposition operations as fundamental operations. The endomorphisms of those algebras correspond to nondeterministic linear hypersubstitutions.
Generalizing a linear expression over a vector space, we call a term of an arbitrary type tau linear if each of its variables occurs only once. Instead of the usual superposition of terms and of the total many-sorted clone of all terms, in the case of linear terms we define the partial many-sorted superposition operation and the partial many-sorted clone that satisfies the superassociative law as a weak identity. The extensions of linear hypersubstitutions are weak endomorphisms of this partial clone. For a variety V of one-sorted total algebras of type tau, we define the partial many-sorted linear clone of V as the partial quotient algebra of the partial many-sorted clone of all linear terms by the set of all linear identities of V. We then prove that weak identities of this clone correspond to linear hyperidentities of V.
A term t is linear if no variable occurs more than once in t. An identity s ≈ t is said to be linear if s and t are linear terms. Identities are particular formulas. As for terms, superposition operations can be defined for formulas, too. We define linear formulas of arbitrary type and seek a condition for the set of all linear formulas to be closed under superposition. This is used to define the partial superposition operations on the set of linear formulas and a partial many-sorted algebra Formclonelin(τ, τ′). This algebra has properties similar to those of the partial many-sorted clone of all linear terms. We extend the concept of a hypersubstitution of type τ to linear hypersubstitutions of type (τ, τ′) for algebraic systems. The extensions of linear hypersubstitutions of type (τ, τ′) send linear formulas to linear formulas and thus present weak endomorphisms of Formclonelin(τ, τ′).
This note is a revised and enlarged version of the German article [16] in a slightly different framework. We here correct a serious mistake in the first version and generalize the class of Polya sum processes considered there. (A corrected version of the same results can be found already in the thesis of Mathias Rafler [12].) Moreover, the class of Polya difference processes is constructed here for the first time. In analogy to classical statistical mechanics, we propose a theory of interacting Bosons and Fermions. We consider Papangelou processes. These are point processes specified by some kernel which represents the conditional intensity of the process. The main result is a general construction of a large class of such processes which contains Cox processes and Gibbs processes of classical statistical mechanics, but also interacting Bose and Fermi processes.
Hypersubstitutions were introduced in [3] as a way of making precise the concepts of hyperidentity and M-hyperidentity. The monoid of hypersubstitutions has been widely studied by many authors. Knowledge of the monoid of hypersubstitutions can be applied to the concept of M-hyperidentities. In this paper, we show that the order of a hypersubstitution of type tau = (3) is 1, 2, 3 or infinite.
When trying to extend the Hodge theory for elliptic complexes on compact closed manifolds to the case of compact manifolds with boundary one is led to a boundary value problem for the Laplacian of the complex which is usually referred to as Neumann problem. We study the Neumann problem for a larger class of sequences of differential operators on a compact manifold with boundary. These are sequences of small curvature, i.e., bearing the property that the composition of any two neighbouring operators has order less than two.
We present simulations of binary black-hole mergers in which, after the common outer horizon has formed, the marginally outer trapped surfaces (MOTSs) corresponding to the individual black holes continue to approach and eventually penetrate each other. This has very interesting consequences according to recent results in the theory of MOTSs. Uniqueness and stability theorems imply that two MOTSs which touch with a common outer normal must be identical. This suggests a possible dramatic consequence of the collision between a small and large black hole. If the penetration were to continue to completion, then the two MOTSs would have to coalesce, by some combination of the small one growing and the big one shrinking. Here we explore the relationship between theory and numerical simulations, in which a small black hole has halfway penetrated a large one.
We establish a quantisation of corner-degenerate symbols, here called Mellin-edge quantisation, on a manifold with second order singularities. The typical ingredients come from the "most singular" stratum, which is a second order edge where the infinite transversal cone has a base that is itself a manifold with smooth edge. The resulting operator-valued amplitude functions on the second order edge are formulated purely in terms of Mellin symbols taking values in the edge algebra over that base. In this respect our result is formally analogous to a quantisation rule of (Osaka J. Math. 37:221-260, 2000) for the simpler case of edge-degenerate symbols that corresponds to the singularity order 1. However, from the singularity order 2 on there appear new substantial difficulties for the first time, partly caused by the edge singularities of the cone over the base that tend to infinity.
The Groningen gas field serves as a natural laboratory for production-induced earthquakes, because no earthquakes were observed before the beginning of gas production. Increasing gas production rates resulted in growing earthquake activity and eventually in the occurrence of the 2012 M_w 3.6 Huizinge earthquake. At least since this event, a detailed seismic hazard and risk assessment, including estimation of the maximum earthquake magnitude, is considered to be necessary to decide on the future gas production. In this short note, we first apply state-of-the-art methods of mathematical statistics to derive confidence intervals for the maximum possible earthquake magnitude m_max. Second, we calculate the maximum expected magnitude M_T in the time between 2016 and 2024 for three assumed gas-production scenarios. Using broadly accepted physical assumptions and a 90% confidence level, we suggest a value of m_max = 4.4, whereas M_T varies between 3.9 and 4.3, depending on the production scenario.
One of the crucial components in seismic hazard analysis is the estimation of the maximum earthquake magnitude and its associated uncertainty. In the present study, the uncertainty related to the maximum expected magnitude mu is determined in terms of confidence intervals for an imposed level of confidence. Previous work by Salamat et al. (Pure Appl Geophys 174:763-777, 2017) shows the divergence of the confidence interval of the maximum possible magnitude m_max for high levels of confidence in six seismotectonic zones of Iran. In this work, the maximum expected earthquake magnitude mu is calculated in a predefined finite time interval and imposed level of confidence. For this, we use a conceptual model based on a doubly truncated Gutenberg-Richter law for magnitudes with constant b-value and calculate the posterior distribution of mu for the time interval T_f in the future. We assume a stationary Poisson process in time and a Gutenberg-Richter relation for magnitudes. The upper bound of the magnitude confidence interval is calculated for different time intervals of 30, 50, and 100 years and imposed levels of confidence alpha = 0.5, 0.1, 0.05, and 0.01. The posterior distributions of waiting times T_f to the next earthquake with a given magnitude equal to 6.5, 7.0, and 7.5 are calculated in each zone. In order to find the influence of declustering, we use the original and declustered versions of the catalog. The earthquake catalog of the territory of Iran and surroundings is subdivided into six seismotectonic zones: Alborz, Azerbaijan, Central Iran, Zagros, Kopet Dagh, and Makran. We assume the maximum possible magnitude m_max = 8.5 and calculate the upper bound of the confidence interval of mu in each zone. The results indicate that for short time intervals equal to 30 and 50 years and imposed levels of confidence 1 - alpha = 0.95 and 0.90, the probability distribution of mu is around mu = 7.16-8.23 in all seismic zones.
We show how the maximum magnitude within a predefined future time horizon may be estimated from an earthquake catalog within the context of Gutenberg-Richter statistics. The aim is to carry out a rigorous uncertainty assessment and calculate precise confidence intervals based on an imposed level of confidence alpha. In detail, we present a model for the estimation of the maximum magnitude to occur in a time interval T_f in the future, given a complete earthquake catalog for a time period T in the past and, if available, paleoseismic events. For this goal, we solely assume that earthquakes follow a stationary Poisson process in time with unknown productivity Lambda and obey the Gutenberg-Richter law in the magnitude domain with unknown b-value. The random variables Lambda and b are estimated by means of Bayes' theorem with noninformative prior distributions. Results based on synthetic catalogs and on retrospective calculations for historic catalogs from the highly active area of Japan and the low-seismicity, but high-risk, lower Rhine embayment (LRE) in Germany indicate that the estimated magnitudes are close to the true values. Finally, we discuss whether the techniques can be extended to meet the safety requirements for critical facilities such as nuclear power plants. For this aim, the maximum magnitude for all times has to be considered. In agreement with earlier work, we find that this parameter is not a useful quantity from the viewpoint of statistical inference.
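The Poisson/Gutenberg-Richter machinery underlying the magnitude studies above can be illustrated with a small plug-in sketch. This is not the Bayesian procedure of the paper (which places noninformative priors on the productivity Lambda and the b-value); it merely combines the classical Aki maximum-likelihood b-value with the Poisson bound on the largest magnitude expected in a future interval T_f. Function names and the synthetic catalog are illustrative assumptions.

```python
import numpy as np

LOG10E = np.log10(np.e)

def aki_b_value(mags, m0):
    """Maximum-likelihood b-value (Aki's estimator) for a complete
    catalog of magnitudes >= m0."""
    return LOG10E / (np.mean(mags) - m0)

def max_magnitude_bound(mags, m0, t_past, t_future, conf):
    """Magnitude m such that, under a stationary Poisson process and an
    untruncated Gutenberg-Richter law with plug-in estimates,
    P(largest magnitude within t_future <= m) = conf.

    The event rate above magnitude m is rate * 10**(-b*(m - m0)), so
    P(max < m) = exp(-rate * t_future * 10**(-b*(m - m0))); solving
    for m at the given confidence yields the bound below."""
    b = aki_b_value(mags, m0)
    rate = len(mags) / t_past  # events >= m0 per unit time
    return m0 + np.log10(rate * t_future / (-np.log(conf))) / b
```

With a toy catalog of exponentially distributed magnitudes the estimator recovers the b-value, and the bound grows with the imposed level of confidence and the length of the future interval, mirroring the qualitative behaviour described above.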