Refine
Year of publication
- 2018 (55)
Document Type
- Article (48)
- Other (3)
- Doctoral Thesis (2)
- Monograph/Edited Volume (1)
- Review (1)
Language
- English (55)
Keywords
- Boolean model (2)
- Edge calculus (2)
- adaptive estimation (2)
- data assimilation (2)
- discrepancy principle (2)
- early stopping (2)
- uncertainty quantification (2)
- Aerosol (1)
- Agmon estimates (1)
- alternative identities (Alternatividentitäten) (1)
Institute
- Institut für Mathematik (55)
Given two weighted graphs (X, b^(k), m^(k)), k = 1, 2, with b^(1) ∼ b^(2) and m^(1) ∼ m^(2), we prove a weighted ℓ1-criterion for the existence and completeness of the wave operators W±(H2, H1, I1,2), where Hk denotes the natural Laplacian in ℓ2(X, m^(k)) w.r.t. (X, b^(k), m^(k)) and I1,2 the trivial identification of ℓ2(X, m^(1)) with ℓ2(X, m^(2)). In particular, this entails a general criterion for the absolutely continuous spectra of H1 and H2 to be equal.
One of the crucial components in seismic hazard analysis is the estimation of the maximum earthquake magnitude and its associated uncertainty. In the present study, the uncertainty related to the maximum expected magnitude μ is determined in terms of confidence intervals for an imposed level of confidence. Previous work by Salamat et al. (Pure Appl Geophys 174:763-777, 2017) shows the divergence of the confidence interval of the maximum possible magnitude m_max for high levels of confidence in six seismotectonic zones of Iran. In this work, the maximum expected earthquake magnitude μ is calculated in a predefined finite time interval for an imposed level of confidence. For this, we use a conceptual model based on a doubly truncated Gutenberg-Richter law for magnitudes with constant b-value and calculate the posterior distribution of μ for a future time interval T_f. We assume a stationary Poisson process in time and a Gutenberg-Richter relation for magnitudes. The upper bound of the magnitude confidence interval is calculated for time intervals of 30, 50, and 100 years and significance levels α = 0.5, 0.1, 0.05, and 0.01. The posterior distributions of the waiting times T_f to the next earthquake with a given magnitude equal to 6.5, 7.0, and 7.5 are calculated in each zone. In order to assess the influence of declustering, we use both the original and the declustered version of the catalog. The earthquake catalog of the territory of Iran and its surroundings is subdivided into six seismotectonic zones: Alborz, Azerbaijan, Central Iran, Zagros, Kopet Dagh, and Makran. We assume a maximum possible magnitude m_max = 8.5 and calculate the upper bound of the confidence interval of μ in each zone. The results indicate that for short time intervals of 30 and 50 years and confidence levels 1 − α = 0.95 and 0.90, the probability distribution of μ is concentrated around μ = 7.16-8.23 in all seismic zones.
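The confidence-bound construction described above can be illustrated with a small Monte Carlo sketch. All names and parameter values below are illustrative assumptions, not the paper's actual catalog values: magnitudes follow a doubly truncated Gutenberg-Richter law, event counts a stationary Poisson process, and the upper bound is read off as a quantile of the simulated window maxima.

```python
import math
import random

def sample_truncated_gr(b, m_min, m_max, rng):
    # Inverse-CDF draw from a doubly truncated Gutenberg-Richter law.
    beta = b * math.log(10.0)
    c = 1.0 - math.exp(-beta * (m_max - m_min))
    return m_min - math.log(1.0 - rng.random() * c) / beta

def sample_poisson(lam, rng):
    # Count unit-rate exponential inter-arrival times falling before lam.
    n, t = 0, rng.expovariate(1.0)
    while t <= lam:
        n += 1
        t += rng.expovariate(1.0)
    return n

def max_magnitude_upper_bound(rate, t_years, b, m_min, m_max, level,
                              n_sim=5000, seed=1):
    # Monte Carlo quantile of the largest magnitude occurring in a future
    # window of t_years (Poisson occurrence, truncated GR magnitudes).
    rng = random.Random(seed)
    maxima = sorted(
        max((sample_truncated_gr(b, m_min, m_max, rng)
             for _ in range(sample_poisson(rate * t_years, rng))),
            default=m_min)
        for _ in range(n_sim))
    return maxima[min(n_sim - 1, int(level * n_sim))]
```

For example, `max_magnitude_upper_bound(5.0, 50.0, 1.0, 4.0, 8.5, 0.95)` gives an approximate 95% upper bound on the maximum magnitude in a 50-year window for a hypothetical rate of 5 events per year above m_min = 4.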
Information on structural features of a fracture network at early stages of Enhanced Geothermal System development is mostly restricted to borehole images and, if available, outcrop data. However, using this information to image discontinuities in deep reservoirs is difficult. Wellbore failure data provides only some information on components of the in situ stress state and its heterogeneity. Our working hypothesis is that slip on natural fractures primarily controls these stress heterogeneities. Based on this, we introduce stress-based tomography in a Bayesian framework to characterize the fracture network and its heterogeneity in potential Enhanced Geothermal System reservoirs. In this procedure, first a random initial discrete fracture network (DFN) realization is generated based on prior information about the network. The observations needed to calibrate the DFN are based on local variations of the orientation and magnitude of at least one principal stress component along boreholes. A Markov Chain Monte Carlo sequence is employed to update the DFN iteratively by a fracture translation within the domain. The Markov sequence compares the simulated stress profile with the observed stress profiles in the borehole, evaluates each iteration with Metropolis-Hastings acceptance criteria, and stores acceptable DFN realizations in an ensemble. Finally, this obtained ensemble is used to visualize the potential occurrence of fractures in a probability map, indicating possible fracture locations and lengths. We test this methodology to reconstruct simple synthetic and more complex outcrop-based fracture networks and successfully image the significant fractures in the domain.
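The Metropolis-Hastings update at the core of this procedure can be sketched in one dimension. The forward model `stress_profile` (a Gaussian bump per fracture), the misfit weights, and all parameter values below are hypothetical stand-ins for a mechanical DFN simulation, chosen only to show the propose/evaluate/accept loop.

```python
import math
import random

def stress_profile(fracture_xs, grid, width=1.0):
    # Toy forward model: each fracture perturbs the stress profile with a
    # Gaussian bump (stand-in for a mechanical DFN stress simulation).
    return [sum(math.exp(-((g - x) / width) ** 2) for x in fracture_xs)
            for g in grid]

def misfit(simulated, observed, sigma=0.1):
    # Gaussian log-likelihood misfit between profiles (up to a constant).
    return sum((s - o) ** 2 for s, o in zip(simulated, observed)) / (2 * sigma ** 2)

def mh_update_dfn(fracture_xs, observed, grid, n_iter=2000, step=0.5, seed=0):
    # Metropolis-Hastings: propose a random translation of one fracture and
    # accept with probability min(1, exp(old misfit - new misfit)).
    rng = random.Random(seed)
    current = list(fracture_xs)
    cur_mis = misfit(stress_profile(current, grid), observed)
    ensemble = []
    for _ in range(n_iter):
        prop = list(current)
        prop[rng.randrange(len(prop))] += rng.gauss(0.0, step)
        prop_mis = misfit(stress_profile(prop, grid), observed)
        if rng.random() < math.exp(min(0.0, cur_mis - prop_mis)):
            current, cur_mis = prop, prop_mis
        ensemble.append(list(current))
    return ensemble
```

Running the chain from deliberately wrong fracture positions against a profile generated at the true positions drives the misfit down over the iterations; the stored ensemble plays the role of the probability map of fracture locations.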
The variabilities of the semidiurnal solar and lunar tides of the equatorial electrojet (EEJ) are investigated during the 2003, 2006, 2009 and 2013 major sudden stratospheric warming (SSW) events in this study. For this purpose, ground-magnetometer recordings at the equatorial observatories in Huancayo and Fuquene are utilized. Results show a major enhancement in the amplitude of the EEJ semidiurnal lunar tide in each of the four warming events. The EEJ semidiurnal solar tidal amplitude shows an amplification prior to the onset of the warmings, a reduction during the deceleration of the zonal mean zonal wind at 60 degrees N and 10 hPa, and a second enhancement a few days after the peak reversal of the zonal mean zonal wind during all four SSWs. Results also reveal that the amplitude of the EEJ semidiurnal lunar tide becomes comparable to, or even greater than, the amplitude of the EEJ semidiurnal solar tide during all these warming events. The present study also compares the EEJ semidiurnal solar and lunar tidal changes with the variability of the migrating semidiurnal solar (SW2) and lunar (M2) tides in neutral temperature and zonal wind obtained from numerical simulations at E-region heights. A better agreement is found between the enhancements of the EEJ semidiurnal lunar tide and the M2 tide than between the enhancements of the EEJ semidiurnal solar tide and the SW2 tide, in both the neutral temperature and the zonal wind at E-region altitudes.
We analyze a general class of self-adjoint difference operators Hε = Tε + Vε on ℓ2((εZ)d), where Vε is a multi-well potential and ε is a small parameter. We give a coherent review of our results on tunneling, up to new sharp results on the level of complete asymptotic expansions (see [30-35]). Our emphasis is on general ideas and strategy, possibly of interest for a broader range of readers, and less on detailed mathematical proofs. The wells are decoupled by introducing certain Dirichlet operators on regions containing only one potential well. The eigenvalue problem for the Hamiltonian Hε is then treated as a small perturbation of these comparison problems. After constructing a Finslerian distance d induced by Hε, we show that Dirichlet eigenfunctions decay exponentially with a rate controlled by this distance to the well. It follows by microlocal techniques that the first n eigenvalues of Hε converge to the first n eigenvalues of the direct sum of harmonic oscillators on Rd located at the several wells. In a neighborhood of one well, we construct formal asymptotic expansions of WKB-type for eigenfunctions associated with the low-lying eigenvalues of Hε. These are obtained from eigenfunctions or quasimodes for the operator Hε acting on L2(Rd), via restriction to the lattice (εZ)d. Tunneling is then described by a certain interaction matrix, similar to the analysis for the Schrödinger operator (see [22]); the remainder is exponentially small and roughly quadratic compared with the interaction matrix. We give weighted ℓ2-estimates for the difference of eigenfunctions of Dirichlet operators in neighborhoods of the different wells and the associated WKB-expansions at the wells. In the last step, we derive full asymptotic expansions for interactions between two "wells" (minima) of the potential energy, in particular for the discrete tunneling effect. Here we essentially use analysis on phase space, complexified in the momentum variable. These results are as sharp as the classical results for the Schrödinger operator in [22].
We prove finiteness and diameter bounds for graphs having a positive Ricci-curvature bound in the Bakry–Émery sense. Our first result using only curvature and maximal vertex degree is sharp in the case of hypercubes. The second result depends on an additional dimension bound, but is independent of the vertex degree. In particular, the second result is the first Bonnet–Myers type theorem for unbounded graph Laplacians. Moreover, our results improve diameter bounds from Fathi and Shu (Bernoulli 24(1):672–698, 2018) and Horn et al. (J für die reine und angewandte Mathematik (Crelle’s J), 2017, https://doi.org/10.1515/crelle-2017-0038) and solve a conjecture from Cushing et al. (Bakry–Émery curvature functions of graphs, 2016).
The ensemble Kalman filter has become a popular data assimilation technique in the geosciences. However, little is known theoretically about its long-term stability and accuracy. In this paper, we investigate the behavior of an ensemble Kalman-Bucy filter applied to continuous-time filtering problems. We derive mean field limiting equations as the ensemble size goes to infinity, as well as uniform-in-time accuracy and stability results for finite ensemble sizes. The latter results require that the process is fully observed and that the measurement noise is small. We also demonstrate that our ensemble Kalman-Bucy filter is consistent with the classic Kalman-Bucy filter for linear systems and Gaussian processes. We finally verify our theoretical findings for the Lorenz-63 system.
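For a scalar linear model the ensemble Kalman-Bucy filter can be sketched in a few lines. The perturbed-observation form of the update, the Euler time-stepping, and all parameter values below are illustrative assumptions, not the paper's exact formulation.

```python
import math
import random
import statistics

def enkbf(dy_increments, dt, a, r, n_ens=100, seed=2):
    # Stochastic (perturbed-observation) ensemble Kalman-Bucy filter for the
    # scalar linear model  dX = a X dt,  dY = X dt + sqrt(r) dV.
    rng = random.Random(seed)
    ens = [rng.gauss(0.0, 1.0) for _ in range(n_ens)]
    for dy in dy_increments:
        var = statistics.variance(ens)
        gain = var / r                      # Kalman-Bucy gain K = P H^T R^-1
        ens = [x + a * x * dt
               + gain * (dy - x * dt + math.sqrt(r * dt) * rng.gauss(0.0, 1.0))
               for x in ens]
    return ens
```

Feeding in observation increments generated from a decaying truth trajectory, the ensemble mean tracks the signal while the ensemble spread contracts, consistent with the fully observed, small-noise regime discussed above.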
ShapeRotator
(2018)
The quantification of complex morphological patterns typically involves comprehensive shape and size analyses, usually obtained by gathering morphological data from all the structures that capture the phenotypic diversity of an organism or object. Articulated structures are a critical component of overall phenotypic diversity, but data gathered from these structures are difficult to incorporate into modern analyses because of the complexities associated with jointly quantifying 3D shape in multiple structures. While there are existing methods for analyzing shape variation in articulated structures in two-dimensional (2D) space, these methods do not work in 3D, a rapidly growing area of capability and research. Here, we describe a simple geometric rigid rotation approach that removes the effect of random translation and rotation, enabling the morphological analysis of 3D articulated structures. Our method is based on Cartesian coordinates in 3D space, so it can be applied to any morphometric problem that also uses 3D coordinates (e.g., spherical harmonics). We demonstrate the method by applying it to a landmark-based dataset for analyzing shape variation using geometric morphometrics. We have developed an R tool (ShapeRotator) so that the method can be easily implemented in the commonly used R package geomorph and MorphoJ software. This method will be a valuable tool for 3D morphological analyses in articulated structures by allowing an exhaustive examination of shape and size diversity.
Paleoearthquakes and historic earthquakes are the most important source of information for the estimation of long-term earthquake recurrence intervals in fault zones, because the corresponding sequences cover more than one seismic cycle. However, these events are often rare, dating uncertainties are enormous, and missing or misinterpreted events lead to additional problems. In the present study, I assume that the time to the next major earthquake depends on the rate of small and intermediate events between the large ones in terms of a clock change model. Mathematically, this leads to a Brownian passage time distribution for recurrence intervals. I take advantage of an earlier finding that under certain assumptions the aperiodicity of this distribution can be related to the Gutenberg-Richter b value, which can be estimated easily from instrumental seismicity in the region under consideration. In this way, both parameters of the Brownian passage time distribution can be associated with accessible seismological quantities. This allows us to reduce the uncertainties in the estimation of the mean recurrence interval, especially for short paleoearthquake sequences and high dating errors. Using a Bayesian framework for parameter estimation results in a statistical model for earthquake recurrence intervals that assimilates, in a simple way, paleoearthquake sequences and instrumental data. I present illustrative case studies from Southern California and compare the method with the commonly used approach of exponentially distributed recurrence times based on a stationary Poisson process.
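The Brownian passage time distribution referred to here is the inverse Gaussian law. A small sketch, evaluating its density for a mean recurrence interval mu and aperiodicity alpha and checking its normalization and first moment by trapezoidal integration (the parameter values are illustrative):

```python
import math

def bpt_pdf(t, mu, alpha):
    # Brownian passage time (inverse Gaussian) density with mean mu and
    # aperiodicity (coefficient of variation) alpha.
    if t <= 0.0:
        return 0.0
    return math.sqrt(mu / (2.0 * math.pi * alpha ** 2 * t ** 3)) * \
        math.exp(-((t - mu) ** 2) / (2.0 * mu * alpha ** 2 * t))

def trapezoid(f, lo, hi, n=100000):
    # Simple trapezoidal rule, adequate for this smooth, light-tailed density.
    h = (hi - lo) / n
    total = 0.5 * (f(lo) + f(hi))
    for i in range(1, n):
        total += f(lo + i * h)
    return total * h
```

With mu = 100 years and alpha = 0.5, the density integrates to 1 and has mean 100, as it should; the aperiodicity alpha is the quantity that, per the abstract, can be tied to the Gutenberg-Richter b value.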
We consider the problem of low rank matrix recovery in a stochastically noisy high-dimensional setting. We propose a new estimator for the low rank matrix, based on the iterative hard thresholding method, that is computationally efficient and simple. We prove that our estimator is optimal in terms of the Frobenius risk and in terms of the entry-wise risk uniformly over any change of orthonormal basis, allowing us to provide the limiting distribution of the estimator. When the design is Gaussian, we prove that the entry-wise bias of the limiting distribution of the estimator is small, which is of interest for constructing tests and confidence sets for low-dimensional subsets of entries of the low rank matrix.
We consider a statistical inverse learning (also called inverse regression) problem, where we observe the image of a function f through a linear operator A at i.i.d. random design points Xi, superposed with an additive noise. The distribution of the design points is unknown and can be very general. We analyze simultaneously the direct (estimation of Af) and the inverse (estimation of f) learning problems. In this general framework, we obtain strong and weak minimax optimal rates of convergence (as the number of observations n grows large) for a large class of spectral regularization methods over regularity classes defined through appropriate source conditions. This improves on or completes previous results obtained in related settings. The optimality of the obtained rates is shown not only in the exponent in n but also in the explicit dependency of the constant factor on the variance of the noise and the radius of the source condition set.
Left-right (L-R) asymmetry in the body plan is determined by nodal flow in vertebrate embryos. Shinohara et al. (Shinohara K et al. 2012 Nat. Commun. 3, 622 (doi:10.1038/ncomms1624)) used Dpcd and Rfx3 mutant mouse embryos and showed that only a few cilia were sufficient to achieve L-R asymmetry. However, the mechanism underlying the breaking of symmetry by such weak ciliary flow is unclear. Flow-mediated signals associated with the L-R asymmetric organogenesis have not been clarified, and two different hypotheses, vesicle transport and mechanosensing, are now debated in the research field of developmental biology. In this study, we developed a computational model of the node system reported by Shinohara et al. and examined the feasibility of the two hypotheses with a small number of cilia. With the small number of rotating cilia, flow was induced locally and global strong flow was not observed in the node. Particles were then effectively transported only when they were close to the cilia, and particle transport was strongly dependent on the ciliary positions. Although the maximum wall shear rate was also influenced by ciliary position, the mean wall shear rate at the perinodal wall increased monotonically with the number of cilia. We also investigated the membrane tension of immotile cilia, which is relevant to the regulation of mechanotransduction. The results indicated that a tension of about 0.1 μN m⁻¹ was exerted at the base even when the fluid shear rate was only about 0.1 s⁻¹. The area of high tension was also localized at the upstream side, and negative tension appeared at the downstream side. Such localization may be useful to sense the flow direction at the periphery, as time-averaged anticlockwise circulation was induced in the node by rotation of a few cilia. Our numerical results support the mechanosensing hypothesis, and we expect that our study will stimulate further experimental investigations of mechanotransduction in the near future.
Earthquake rates are driven by tectonic stress buildup, earthquake-induced stress changes, and transient aseismic processes. Although the origin of the first two sources is known, transient aseismic processes are more difficult to detect. However, knowledge of the associated changes of the earthquake activity is of great interest, because it might help identify natural aseismic deformation patterns such as slow-slip events, as well as the occurrence of induced seismicity related to human activities. For this goal, we develop a Bayesian approach to identify change-points in seismicity data automatically. Using the Bayes factor, we select a suitable model and estimate possible change-points, and we additionally use a likelihood ratio test to calculate the significance of the change of the intensity. The approach is extended to spatiotemporal data to detect the area in which the changes occur. The method is first applied to synthetic data, showing its capability to detect real change-points. Finally, we apply this approach to observational data from Oklahoma and observe statistically significant changes of seismicity in space and time.
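The likelihood-ratio part of such a change-point analysis can be sketched for a single sequence of Poisson counts. This is a simplified stand-in for the paper's Bayesian, spatiotemporal procedure: it scans every candidate split and returns the one with the largest likelihood-ratio statistic against a constant-rate model (segment means are assumed positive).

```python
import math

def poisson_loglik(counts, rate):
    # Poisson log-likelihood up to the factorial terms, which cancel in
    # likelihood ratios; assumes rate > 0.
    return sum(c * math.log(rate) - rate for c in counts)

def best_change_point(counts):
    # Maximum-likelihood change-point: split the count sequence where the
    # likelihood ratio against a single constant rate is largest.
    n = len(counts)
    null_ll = poisson_loglik(counts, sum(counts) / n)
    best_k, best_stat = None, 0.0
    for k in range(1, n):
        left, right = counts[:k], counts[k:]
        ll = poisson_loglik(left, sum(left) / k) + \
             poisson_loglik(right, sum(right) / (n - k))
        stat = 2.0 * (ll - null_ll)
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k, best_stat
```

On a synthetic sequence whose rate jumps from about 2 to about 8 after the sixth observation, the scan recovers the true split; in the full method the statistic's significance would then be assessed and the Bayes factor used for model selection.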
In the present paper, we study the problem of existence of honest and adaptive confidence sets for matrix completion. We consider two statistical models: the trace regression model and the Bernoulli model. In the trace regression model, we show that honest confidence sets that adapt to the unknown rank of the matrix exist even when the error variance is unknown. Contrary to this, we prove that in the Bernoulli model, honest and adaptive confidence sets exist only when the error variance is known a priori. In the course of our proofs, we obtain bounds for the minimax rates of certain composite hypothesis testing problems arising in low rank inference.
Tomographic Reservoir Imaging with DNA-Labeled Silica Nanotracers: The First Field Validation
(2018)
This study presents the first field validation of using DNA-labeled silica nanoparticles as tracers to image subsurface reservoirs by travel time based tomography. During a field campaign in Switzerland, we performed short-pulse tracer tests under a forced hydraulic head gradient to conduct a multisource-multireceiver tracer test and tomographic inversion, determining the two-dimensional hydraulic conductivity field between two vertical wells. Together with three traditional solute dye tracers, we injected spherical silica nanotracers, encoded with synthetic DNA molecules, which are protected by a silica layer against damage due to chemicals, microorganisms, and enzymes. Temporal moment analyses of the recorded tracer concentration breakthrough curves (BTCs) indicate higher mass recovery, shorter mean residence times, and smaller dispersion of the DNA-labeled nanotracers, compared to solute dye tracers. Importantly, travel time based tomography, using nanotracer BTCs, yields a satisfactory hydraulic conductivity tomogram, validated by the dye tracer results and previous field investigations. These advantages of DNA-labeled nanotracers, in comparison to traditional solute dye tracers, make them well-suited for tomographic reservoir characterizations in fields such as hydrogeology, petroleum engineering, and geothermal energy, particularly with respect to resolving preferential flow paths or the heterogeneity of contact surfaces, or by enabling source zone characterizations of dense nonaqueous phase liquids.
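The temporal moment analysis of a breakthrough curve mentioned above reduces to a few numerical integrals: the zeroth moment is a mass proxy, the normalized first moment the mean residence time, and the centered second moment measures dispersion. A minimal trapezoidal-rule sketch (the synthetic triangular BTC in the test is purely illustrative):

```python
def temporal_moments(times, conc):
    # Zeroth, first, and second temporal moments of a breakthrough curve
    # sampled at (times, conc), via the trapezoidal rule.  Returns
    # (mass proxy m0, mean residence time, variance of residence time).
    def integral(values):
        return sum(0.5 * (values[i] + values[i + 1]) * (times[i + 1] - times[i])
                   for i in range(len(times) - 1))
    m0 = integral(conc)
    m1 = integral([t * c for t, c in zip(times, conc)])
    m2 = integral([t * t * c for t, c in zip(times, conc)])
    mean = m1 / m0
    var = m2 / m0 - mean ** 2
    return m0, mean, var
```

A narrower, earlier-peaking BTC thus shows up directly as a shorter mean residence time and a smaller variance, which is the comparison drawn between nanotracers and dye tracers above.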
We generalise disagreement percolation to Gibbs point processes of balls with varying radii. This allows us to establish the uniqueness of the Gibbs measure and exponential decay of pair correlations in the low activity regime by comparison with a sub-critical Boolean model. Applications to the Continuum Random Cluster model and the Quermass-interaction model are presented. At the core of our proof lies an explicit dependent thinning from a Poisson point process to a dominated Gibbs point process.
We consider a distributed learning approach in supervised learning for a large class of spectral regularization methods in a reproducing kernel Hilbert space (RKHS) framework. The data set of size n is partitioned into m = O(n^α), α < 1/2, disjoint subsamples. On each subsample, some spectral regularization method (belonging to a large class, including in particular kernel ridge regression, L2-boosting and spectral cut-off) is applied. The regression function f is then estimated via simple averaging, leading to a substantial reduction in computation time. We show that minimax optimal rates of convergence are preserved if m grows sufficiently slowly (corresponding to an upper bound for α) as n → ∞, depending on the smoothness assumptions on f and the intrinsic dimensionality. In spirit, the analysis relies on a classical bias/stochastic error analysis.
For linear inverse problems Y = Aμ + ζ, it is classical to recover the unknown signal μ by iterative regularization methods μ̂^(m), m = 0, 1, ..., halting at a data-dependent iteration τ using some stopping rule, typically based on a discrepancy principle, so that the weak (or prediction) squared error ‖A(μ̂^(τ) − μ)‖² is controlled. In the context of statistical estimation with stochastic noise ζ, we study oracle adaptation (that is, adaptation compared to the best possible stopping iteration) in the strong squared error E[‖μ̂^(τ) − μ‖²]. For a residual-based stopping rule, oracle adaptation bounds are established for general spectral regularization methods. The proofs use bias and variance transfer techniques from the weak prediction error to the strong L2-error, as well as convexity arguments and concentration bounds for the stochastic part. Adaptive early stopping for the Landweber method is studied in further detail and illustrated numerically.
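In the diagonal (sequence-space) model, Landweber iteration with a residual-based discrepancy-principle stopping rule can be sketched directly. The threshold constant kappa, the step size, and the test problem below are illustrative choices, not the paper's calibrated rule.

```python
import math

def landweber_early_stopping(singular_values, y, sigma, omega=1.0, kappa=1.1,
                             max_iter=10000):
    # Landweber iteration  mu <- mu + omega * A^T (y - A mu)  in a diagonal
    # model A = diag(singular_values), halted as soon as the residual
    # satisfies the discrepancy principle ||y - A mu|| <= kappa*sigma*sqrt(D).
    d = len(y)
    mu = [0.0] * d
    threshold = kappa * sigma * math.sqrt(d)
    for m in range(max_iter):
        residual = [yi - a * x for yi, a, x in zip(y, singular_values, mu)]
        if math.sqrt(sum(r * r for r in residual)) <= threshold:
            return mu, m
        mu = [x + omega * a * r
              for x, a, r in zip(mu, singular_values, residual)]
    return mu, max_iter
```

The stopping rule is sequential: it looks only at the current residual, which is exactly the weak (prediction) norm quantity; the point of the analysis above is that this nevertheless adapts in the strong norm.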
We consider truncated SVD (or spectral cut-off, projection) estimators for a prototypical statistical inverse problem in dimension D. Since calculating the singular value decomposition (SVD) only for the largest singular values is much less costly than the full SVD, our aim is to select a data-driven truncation level m̂ ∈ {1, ..., D} based only on the knowledge of the first m̂ singular values and vectors. We analyse in detail whether sequential early stopping rules of this type can preserve statistical optimality. Information-constrained lower bounds and matching upper bounds for a residual-based stopping rule are provided, which give a clear picture of the situations in which optimal sequential adaptation is feasible. Finally, a hybrid two-step approach is proposed which allows for classical oracle inequalities while considerably reducing numerical complexity.
We consider composite-composite testing problems for the expectation in the Gaussian sequence model, where the null hypothesis corresponds to a closed convex subset C of R^d. We adopt a minimax point of view and our primary objective is to describe the smallest Euclidean distance between the null and alternative hypotheses such that there is a test with small total error probability. In particular, we focus on the dependence of this distance on the dimension d and the variance 1/n, giving rise to the minimax separation rate. In this paper we discuss lower and upper bounds on this rate for different smooth and non-smooth choices of C.
We study the Ollivier-Ricci curvature of graphs as a function of the chosen idleness. We show that this idleness function is concave and piecewise linear with at most three linear parts, and at most two linear parts in the case of a regular graph. We then apply our result to show that the idleness function of the Cartesian product of two regular graphs is completely determined by the idleness functions of the factors.
Although the detection of metastases radically changes prognosis of and treatment decisions for a cancer patient, clinically undetectable micrometastases hamper a consistent classification into localized or metastatic disease. This chapter discusses mathematical modeling efforts that could help to estimate the metastatic risk in such a situation. We focus on two approaches: (1) a stochastic framework describing metastatic emission events at random times, formalized via Poisson processes, and (2) a deterministic framework describing the micrometastatic state through a size-structured density function in a partial differential equation model. Three aspects are addressed in this chapter. First, a motivation for the Poisson process framework is presented and modeling hypotheses and mechanisms are introduced. Second, we extend the Poisson model to account for secondary metastatic emission. Third, we highlight an inherent crosslink between the stochastic and deterministic frameworks and discuss its implications. For increased accessibility the chapter is split into an informal presentation of the results using a minimum of mathematical formalism and a rigorous mathematical treatment for more theoretically interested readers.
We analyze a general class of difference operators Hε=Tε+Vε on ℓ2((εZ)d), where Vε is a multi-well potential and ε is a small parameter. We derive full asymptotic expansions of the prefactor of the exponentially small eigenvalue splitting due to interactions between two “wells” (minima) of the potential energy, i.e., for the discrete tunneling effect. We treat both the case where there is a single minimal geodesic (with respect to the natural Finsler metric induced by the leading symbol h0(x,ξ) of Hε) connecting the two minima and the case where the minimal geodesics form an ℓ+1 dimensional manifold, ℓ≥1. These results on the tunneling problem are as sharp as the classical results for the Schrödinger operator in Helffer and Sjöstrand (Commun PDE 9:337–408, 1984). Technically, our approach is pseudo-differential and we adapt techniques from Helffer and Sjöstrand [Analyse semi-classique pour l’équation de Harper (avec application à l’équation de Schrödinger avec champ magnétique), Mémoires de la S.M.F., 2 series, tome 34, pp 1–113, 1988)] and Helffer and Parisse (Ann Inst Henri Poincaré 60(2):147–187, 1994) to our discrete setting.
In this chapter, an overview of the systematic eradication of basic science foci in European universities over the last two decades is given. This happens under the slogan of optimising university education for the needs and demands of society. It is pointed out that reliance on "market demands" brings with it long-term deficiencies in the maintenance of the basic and advanced knowledge construction in societies that is necessary for long-term future technological advances. University policies that claim to improve higher education towards more immediate efficiency may end up with the opposite effect, diminishing its quality and its long-term expected positive impact on society.
Uniformly valid confidence intervals post model selection in regression can be constructed based on Post-Selection Inference (PoSI) constants. PoSI constants are minimal for orthogonal design matrices and, for generic design matrices, can be upper bounded as a function of the sparsity of the set of models under consideration. In order to improve on these generic sparse upper bounds, we consider design matrices satisfying a Restricted Isometry Property (RIP) condition. We provide a new upper bound on the PoSI constant in this setting. This upper bound is an explicit function of the RIP constant of the design matrix, thereby giving an interpolation between the orthogonal setting and the generic sparse setting. We show that this upper bound is asymptotically optimal in many settings by constructing a matching lower bound.
For a given subcritical discrete Schrödinger operator H on a weighted infinite graph X, we construct a Hardy-weight w which is optimal in the following sense. The operator H − λw is subcritical in X for all λ < 1, null-critical in X for λ = 1, and supercritical near any neighborhood of infinity in X for any λ > 1. Our results rely on a criticality theory for Schrödinger operators on general weighted graphs.
Cell-free protein synthesis as a novel tool for directed glycoengineering of active erythropoietin
(2018)
As one of the most complex post-translational modifications, glycosylation is widely involved in cell adhesion, cell proliferation and immune response. Nevertheless, glycoproteins with an identical polypeptide backbone mostly differ in their glycosylation patterns. Due to this heterogeneity, mapping different glycosylation patterns to their associated functions is nearly impossible. In recent years, glycoengineering tools including cell line engineering, chemoenzymatic remodeling and site-specific glycosylation have attracted increasing interest. The therapeutic hormone erythropoietin (EPO) in particular has been investigated by various groups to establish a production process resulting in a defined glycosylation pattern. However, commercially available recombinant human EPO shows batch-to-batch variations in its glycoforms. Therefore, we present an alternative method for the synthesis of active glycosylated EPO with an engineered O-glycosylation site, combining eukaryotic cell-free protein synthesis and site-directed incorporation of non-canonical amino acids with subsequent chemoselective modifications.
Students enter university computer science programs with very different competencies, experience and knowledge. We exploited 145 datasets on freshman computer science students, collected by learning management systems in relation to exam outcomes and learning dispositions data (e.g. student dispositions, previous experiences and attitudes measured through self-reported surveys), to identify indicators that predict academic success and hence to make effective interventions for an extremely heterogeneous group of students.
Transition metals in inorganic systems and metalloproteins can occur in different oxidation states, which makes them ideal redox-active catalysts. To gain a mechanistic understanding of the catalytic reactions, knowledge of the oxidation state of the active metals, ideally in operando, is therefore critical. L-edge X-ray absorption spectroscopy (XAS) is a powerful technique that is frequently used to infer the oxidation state via a distinct blue shift of L-edge absorption energies with increasing oxidation state. A unified description accounting for quantum-chemical notions, whereupon oxidation does not occur locally on the metal but on the whole molecule, and the basic understanding that L-edge XAS probes the electronic structure locally at the metal, has been missing to date. Here we quantify how charge and spin densities change at the metal and throughout the molecule for both redox and core-excitation processes. We explain the origin of the L-edge XAS shift between the high-spin complexes MnII(acac)2 and MnIII(acac)3 as representative model systems and use ab initio theory to uncouple effects of oxidation-state changes from geometric effects. The shift reflects an increased electron affinity of MnIII in the core-excited states compared to the ground state, due to a contraction of the Mn 3d shell upon core-excitation with accompanying changes in the classical Coulomb interactions. This new picture quantifies how the metal-centered core hole probes changes in formal oxidation state, and it encloses and substantiates earlier explanations. The approach is broadly applicable to mechanistic studies of redox-catalytic reactions in molecular systems where charge and spin localization/delocalization determine reaction pathways.
The Widom-Rowlinson model (or area-interaction model) is a Gibbs point process in ℝ^d with formal Hamiltonian defined as the volume of ⋃_{x∈ω} B₁(x), where ω is a locally finite configuration of points and B₁(x) denotes the closed unit ball centred at x. The model is also tuned by two other parameters: the activity z > 0, related to the intensity of the process, and the inverse temperature β ≥ 0, related to the strength of the interaction. In the present paper we investigate the phase transition of the model from the points of view of percolation theory and the liquid-gas transition. First, considering the graph connecting points at distance smaller than 2r > 0, we show that for any β ≥ 0 there exists 0 < z̃_c(β, r) < +∞ such that connectivity at distance n decays exponentially in the subcritical phase (i.e. z < z̃_c(β, r)) and a linear lower bound on the connection to infinity holds in the supercritical case (i.e. z > z̃_c(β, r)). These results are in the spirit of recent works using the theory of randomised tree algorithms (Probab. Theory Related Fields 173 (2019) 479-490; Ann. of Math. 189 (2019) 75-99; Duminil-Copin, Raoufi and Tassion (2018)). Secondly, we study the standard liquid-gas phase transition related to the uniqueness/non-uniqueness of Gibbs states depending on the parameters z, β. Older results (Phys. Rev. Lett. 27 (1971) 1040-1041; J. Chem. Phys. 52 (1970) 1670-1684) claim that a non-uniqueness regime occurs for z = β large enough, and it is conjectured that uniqueness should hold outside such a half line (z = β ≥ β_c > 0). We partially solve this conjecture in any dimension by showing that, for β large enough, non-uniqueness holds if and only if z = β.
We also show that this critical value z = β corresponds to the percolation threshold z̃_c(β, r) = β for β large enough, providing a direct connection between these two notions of phase transition.
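To make the area-interaction density concrete, the following is a minimal one-dimensional toy sketch (unit balls become unit intervals on [0, L], without periodic boundary) of a birth-death Metropolis-Hastings chain targeting the finite-volume density proportional to z^n(ω) · exp(-β · Vol(⋃ B₁(x))). It is our own illustration under simplified assumptions, not code from the paper, and all function names are ours.

```python
import math
import random

def union_length(pts, r=1.0, L=20.0):
    """Length of the union of intervals [x - r, x + r], clipped to [0, L].

    This is the 1D analogue of the volume of the union of unit balls
    appearing in the Widom-Rowlinson Hamiltonian."""
    ivs = sorted((max(0.0, x - r), min(L, x + r)) for x in pts)
    total, cur_lo, cur_hi = 0.0, None, None
    for lo, hi in ivs:
        if cur_hi is None or lo > cur_hi:   # disjoint from current run
            if cur_hi is not None:
                total += cur_hi - cur_lo
            cur_lo, cur_hi = lo, hi
        else:                               # overlapping: extend current run
            cur_hi = max(cur_hi, hi)
    if cur_hi is not None:
        total += cur_hi - cur_lo
    return total

def sample_area_interaction(z, beta, L=20.0, r=1.0, steps=5000, seed=1):
    """Birth-death Metropolis-Hastings chain for the area-interaction density
    proportional to z^n(omega) * exp(-beta * union_length(omega))."""
    rng = random.Random(seed)
    pts = []
    for _ in range(steps):
        if rng.random() < 0.5:                     # propose a birth
            x = rng.uniform(0.0, L)
            new = pts + [x]
            d_h = union_length(new, r, L) - union_length(pts, r, L)
            if rng.random() < min(1.0, z * L / len(new) * math.exp(-beta * d_h)):
                pts = new
        elif pts:                                  # propose a death
            i = rng.randrange(len(pts))
            new = pts[:i] + pts[i + 1:]
            d_h = union_length(new, r, L) - union_length(pts, r, L)
            if rng.random() < min(1.0, len(pts) / (z * L) * math.exp(-beta * d_h)):
                pts = new
    return pts
```

Since β ≥ 0 penalizes covered volume, larger β favours configurations whose intervals overlap, which is the clustering mechanism behind the liquid-gas transition discussed above.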
SmB₆ is predicted to be the first member of the intersection of topological insulators and Kondo insulators, strongly correlated materials in which the Fermi level lies in the gap of a many-body resonance that forms by hybridization between localized and itinerant states. While robust, surface-only conductivity at low temperature and the observation of surface states at the expected high-symmetry points appear to confirm this prediction, we find both surface states at the (100) surface to be topologically trivial. We find the Γ̄ state to appear Rashba split and explain the prominent X̄ state by a surface shift of the many-body resonance. We propose that the latter mechanism, which applies to several crystal terminations, can explain the unusual surface conductivity. While additional, as yet unobserved topological surface states cannot be excluded, our results show that a firm connection between the two material classes is still outstanding.
The increasing availability of earth observations necessitates mathematical methods to optimally combine such data with hydrologic models. Several algorithms exist for such purposes under the umbrella of data assimilation (DA). However, DA methods are often applied in a suboptimal fashion for complex real-world problems, due largely to several practical implementation issues. One such issue is error characterization, which is known to be critical for a successful assimilation. Mischaracterized errors lead to suboptimal forecasts and, in the worst case, to estimates degraded even relative to the no-assimilation case. Model uncertainty characterization has received little attention relative to other aspects of DA science. Traditional methods rely on subjective, ad hoc tuning factors or on parametric distribution assumptions that may not always be applicable. We propose a novel data-driven approach to model uncertainty characterization (named SDMU) for DA studies where (1) the system states are only partially observed and (2) minimal prior knowledge of the model error processes is available, except that the errors display state dependence. It includes an approach for estimating the uncertainty in hidden model states, with the end goal of improving predictions of observed variables. The SDMU is therefore suited to DA studies where the observed variables are of primary interest. Its efficacy is demonstrated through a synthetic case study with low-dimensional chaotic dynamics and a real hydrologic experiment for one-day-ahead streamflow forecasting. In both experiments, the proposed method leads to substantial improvements in the hidden states and observed system outputs over a standard method involving perturbation with Gaussian noise.
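The assimilation step that such methods build on can be sketched with a stochastic ensemble Kalman filter analysis for a single, directly observed state. This is a generic textbook building block with Gaussian observation noise (the standard baseline mentioned above), not the SDMU approach itself; the function name and numbers are ours.

```python
import random
import statistics

def enkf_update(ensemble, y_obs, obs_err_sd, rng):
    """Stochastic EnKF analysis step for a scalar state observed directly (H = 1).

    Each member is nudged toward a perturbed copy of the observation, with the
    Kalman gain built from the ensemble forecast variance."""
    p_f = statistics.variance(ensemble)        # forecast error variance
    r_var = obs_err_sd ** 2                    # observation error variance
    gain = p_f / (p_f + r_var)                 # Kalman gain
    return [x + gain * (y_obs + rng.gauss(0.0, obs_err_sd) - x) for x in ensemble]

rng = random.Random(0)
prior = [rng.gauss(0.0, 2.0) for _ in range(200)]   # forecast ensemble
posterior = enkf_update(prior, 10.0, 1.0, rng)      # assimilate y = 10
```

The analysis mean is pulled toward the observation and the ensemble spread contracts; mischaracterizing either the forecast spread or `obs_err_sd` distorts the gain, which is exactly the error-characterization issue the abstract highlights.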
Rapid population and economic growth in Southeast Asia has been accompanied by extensive land use change with consequent impacts on catchment hydrology. Modeling methodologies capable of handling changing land use conditions are therefore becoming ever more important and are receiving increasing attention from hydrologists. A recently developed data-assimilation-based framework that allows model parameters to vary through time in response to signals of change in observations is considered for a medium-sized catchment (2880 km²) in northern Vietnam experiencing substantial but gradual land cover change. We investigate the efficacy of the method as well as the importance of the chosen model structure in ensuring the success of a time-varying parameter method. The method was used with two lumped daily conceptual models (HBV and HyMOD) that gave good-quality streamflow predictions during pre-change conditions. Although both time-varying parameter models gave improved streamflow predictions under changed conditions compared to the time-invariant parameter model, persistent biases for low flows were apparent in the HyMOD case. It was found that HyMOD was not suited to representing the modified baseflow conditions, resulting in extreme and unrealistic time-varying parameter estimates. This work shows that the chosen model can be critical for ensuring the time-varying parameter framework successfully models streamflow under changing land cover conditions. It can also be used to determine whether land cover changes (and not just meteorological factors) contribute to the observed hydrologic changes in retrospective studies where the lack of a paired control catchment precludes such an assessment.
A doppelalgebra is an algebra defined on a vector space with two binary linear associative operations. Doppelalgebras play a prominent role in algebraic K-theory. We consider doppelsemigroups, that is, sets with two binary associative operations satisfying the axioms of a doppelalgebra. Doppelsemigroups are a generalization of semigroups and they have relationships with such algebraic structures as interassociative semigroups, restrictive bisemigroups, dimonoids, and trioids.
In the lecture notes, numerous examples of doppelsemigroups and of strong doppelsemigroups are given. The independence of the axioms of a strong doppelsemigroup is established. A free product in the variety of doppelsemigroups is presented. We also construct a free (strong) doppelsemigroup, a free commutative (strong) doppelsemigroup, a free n-nilpotent (strong) doppelsemigroup, a free n-dinilpotent (strong) doppelsemigroup, and a free left n-dinilpotent doppelsemigroup. Moreover, the least commutative congruence, the least n-nilpotent congruence, and the least n-dinilpotent congruence on a free (strong) doppelsemigroup, as well as the least left n-dinilpotent congruence on a free doppelsemigroup, are characterized.
The book addresses graduate and postgraduate students, researchers in algebra, and interested readers.
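On finite carriers, doppelsemigroup identities can be verified by brute force. The checker below takes as assumed axioms the associativity of both operations together with the two mixed-associativity identities (our reading of the definition; the lecture notes are the authoritative source), and all names are ours.

```python
from itertools import product

def is_doppelsemigroup(S, l, r):
    """Brute-force check of the (assumed) doppelsemigroup axioms on a finite
    set S: both operations associative, plus two mixed identities."""
    S = list(S)
    for x, y, z in product(S, repeat=3):
        if l(l(x, y), z) != l(x, l(y, z)):
            return False                      # l not associative
        if r(r(x, y), z) != r(x, r(y, z)):
            return False                      # r not associative
        if r(l(x, y), z) != l(x, r(y, z)):
            return False                      # (x l y) r z != x l (y r z)
        if l(r(x, y), z) != r(x, l(y, z)):
            return False                      # (x r y) l z != x r (y l z)
    return True

# Any semigroup with both operations equal is a doppelsemigroup,
# reflecting that doppelsemigroups generalize semigroups:
print(is_doppelsemigroup(range(4), lambda x, y: (x + y) % 4,
                                   lambda x, y: (x + y) % 4))  # → True
```

A pair of associative operations need not form a doppelsemigroup: taking `l` as the left projection and `r` as the right projection fails the mixed identities, so the checker returns False for that pair.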
Background/Aims: Angiogenesis plays a key role during embryonic development. The vascular endothelin (ET) system is involved in the regulation of angiogenesis, and lipopolysaccharides (LPS) can induce angiogenesis. The effects of ET receptor blockers on baseline and LPS-stimulated angiogenesis during embryonic development have remained unknown so far. Methods: The blood vessel density (BVD) of chorioallantoic membranes (CAMs) treated with saline (control), LPS, and/or the ETA receptor blocker BQ123 and the ETB receptor blocker BQ788 was quantified and analyzed using the IPP 6.0 image analysis program. Moreover, the expression of ET-1, ET-2, ET-3, ET receptor A (ETRA), ET receptor B (ETRB), and VEGFR2 mRNA during embryogenesis was analyzed by semi-quantitative RT-PCR. Results: All components of the ET system are detectable during chicken embryogenesis. LPS increased angiogenesis substantially. This process was completely blocked by combined treatment with the ETA receptor blocker BQ123 and the ETB receptor blocker BQ788. This effect was accompanied by a decrease in ETRA, ETRB, and VEGFR2 gene expression. However, baseline angiogenesis was not affected by combined ETA/ETB receptor blockade. Conclusion: During chicken embryogenesis, LPS-stimulated angiogenesis, but not baseline angiogenesis, is sensitive to combined ETA/ETB receptor blockade.
We study the Volterra property of a class of anisotropic pseudo-differential operators on ℝ × B for a manifold B with edge Y and time variable t. This exposition belongs to a program for studying parabolicity in such a situation. In the present consideration we establish non-smoothing elements in a subalgebra with anisotropic operator-valued symbols of Mellin type, holomorphic in the complex Mellin covariable from the cone theory, where the symbols extend holomorphically in the covariable τ of t to the lower complex half-plane. The resulting space of Volterra operators enlarges an approach of Buchholz (Parabolische Pseudodifferentialoperatoren mit operatorwertigen Symbolen, Ph.D. thesis, Universität Potsdam, 1996) by the elements necessary for a new operator algebra containing Volterra parametrices under an appropriate condition of anisotropic ellipticity. Our approach avoids some difficulty in choosing Volterra quantizations in the edge case by generalizing specific achievements from the isotropic edge calculus obtained by Seiler (Pseudodifferential calculus on manifolds with non-compact edges, Ph.D. thesis, University of Potsdam, 1997); see also Gil et al. (in: Demuth et al. (eds) Mathematical research, vol 100, Akademie Verlag, Berlin, pp 113-137, 1997; Osaka J Math 37: 221-260, 2000).
In this thesis, new quantizations are constructed for pseudo-differential boundary value problems (BVPs) on manifolds with edge. The shape of the operators comes from Boutet de Monvel's calculus, which exists on smooth manifolds with boundary. The singular case, here with edge and boundary, is much more complicated. The present approach simplifies the operator-valued symbolic structures by using suitable Mellin quantizations on infinite stretched model cones of wedges with boundary. The Mellin symbols themselves are, modulo smoothing symbols with asymptotics, holomorphic in the complex Mellin covariable. One of the main results is the construction of parametrices of elliptic elements in the corresponding operator algebra, including elliptic edge conditions.
We present a project combining lidar, photometer, and particle counter data with a regularization software tool for a closure study of aerosol microphysical property retrieval. In the first step, only lidar data are used to retrieve the particle size distribution (PSD). Secondly, photometer data are added, which results in good consistency of the retrieved PSDs. Finally, the retrieved PSDs may be compared with the PSD measured by a particle counter. The data shown here were taken in Ny-Ålesund, Svalbard, as an example.
Manifolds with corners in the present investigation are non-smooth configurations - specific stratified spaces - with an incomplete metric such as cones, manifolds with edges, or corners of piecewise smooth domains in Euclidean space. We focus here on operators on such "corner manifolds" of singularity order <= 2, acting in weighted corner Sobolev spaces. The corresponding corner degenerate pseudo-differential operators are formulated via Mellin quantizations, and they also make sense on infinite singular cones.
In the paper by Flad and Harutyunyan (Discrete Contin Dyn Syst, 420-429, 2011) it is shown that the Hamiltonian of the helium atom in the Born-Oppenheimer approximation, in the case where two particles coincide, is an edge-degenerate operator which is elliptic in the corresponding edge calculus. The aim of this paper is an analogous investigation in the case where all three particles coincide. More precisely, we show that the Hamiltonian in this case is a corner-degenerate operator which is elliptic as an operator in the corner analysis.
We establish essential steps of an iterative approach to operator algebras, ellipticity, and the Fredholm property on stratified spaces with singularities of second order. We cover, in particular, corner-degenerate differential operators. Our constructions are focused on the case where no additional conditions of trace and potential type are posed; this case works well and will be considered further in a forthcoming paper as a conclusion of the present calculus.
The simultaneous detection of energy, momentum, and temporal information in electron spectroscopy is key to enhancing the detection efficiency and thereby broadening the range of scientific applications. Employing a novel 60-degree wide-angle-acceptance lens system, based on an additional accelerating electron-optical element, leads to a significant enhancement in transmission over the previously employed 30-degree electron lenses. Owing to this performance gain, optimized capabilities for time-resolved electron spectroscopy and other high-transmission applications with pulsed ionizing radiation have been obtained. The energy resolution and transmission have been determined experimentally using BESSY II as a photon source. Four different and complementary lens modes have been characterized. (C) 2017 The Authors. Published by Elsevier B.V.
Understanding and reducing complex systems pharmacology models based on a novel input-response index (2018)
A growing understanding of complex processes in biology has led to large-scale mechanistic models of pharmacologically relevant processes. These models are increasingly used to study the response of a system to a given input or stimulus, e.g., after drug administration. Understanding the input–response relationship, however, is often a challenging task due to the complexity of the interactions between its constituents as well as the size of the models. An approach that quantifies the importance of the different constituents for a given input–output relationship and allows the dynamics to be reduced to their essential features is therefore highly desirable. In this article, we present a novel state- and time-dependent quantity called the input–response index that quantifies the importance of state variables for a given input–response relationship at a particular time. It is based on the concept of time-bounded controllability and observability and is defined with respect to a reference dynamics. In application to the brown snake venom–fibrinogen (Fg) network, the input–response indices give insight into the coordinated action of specific coagulation factors and identify those factors that contribute only little to the response. We demonstrate how the indices can be used to reduce large-scale models in a two-step procedure: (i) elimination of states whose dynamics have only minor impact on the input–response relationship, and (ii) proper lumping of the remaining (lower-order) model. In application to the brown snake venom–fibrinogen network, this resulted in a reduction from 62 to 8 state variables in the first step, and a further reduction to 5 state variables in the second step. We further illustrate that the sequence in which a recursive algorithm eliminates and/or lumps state variables has an impact on the final reduced model.
The input–response indices are particularly suited to determining an informed sequence, since they are based on the dynamics of the original system. In summary, the novel measure of importance provides a powerful tool for analysing the complex dynamics of large-scale systems and a means for very efficient model order reduction of nonlinear systems.
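The "proper lumping" step (ii) can be illustrated on a small linear toy system. The matrices below are our own hypothetical example, not the snake venom–fibrinogen network: the 3-state matrix is chosen so that two states are dynamically symmetric and summing them is an exact lumping.

```python
import numpy as np

# Toy 3-state linear system dx/dt = A x; states 0 and 1 are dynamically
# symmetric, so adding them together is an exact lumping.
A = np.array([[-2.0, 1.0, 0.0],
              [1.0, -2.0, 0.0],
              [1.0, 1.0, -1.0]])

M = np.array([[1.0, 1.0, 0.0],   # lumped state 0 = x0 + x1
              [0.0, 0.0, 1.0]])  # lumped state 1 = x2

M_bar = np.linalg.pinv(M)        # generalized inverse of the lumping matrix
A_hat = M @ A @ M_bar            # reduced (2-state) system matrix

# Exactness condition for proper lumping: M A = A_hat M, i.e. the lumped
# variables close under the original dynamics.
print(np.allclose(M @ A, A_hat @ M))  # → True for this choice of M
```

For a general lumping matrix the condition fails and the reduced model is only an approximation, which is why an informed choice of which states to eliminate or lump (here guided by the input–response indices) matters.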
Background and objective: Optimisation of hydrocortisone replacement therapy in children is challenging, as there is currently no licensed formulation and dose in Europe for children under 6 years of age. In addition, hydrocortisone has non-linear pharmacokinetics caused by saturable plasma protein binding. A paediatric hydrocortisone formulation, Infacort®, oral hydrocortisone granules with taste masking, has therefore been developed. The objective of this study was to establish a population pharmacokinetic model based on studies in healthy adult volunteers to predict hydrocortisone exposure in paediatric patients with adrenal insufficiency. Methods: Cortisol and binding protein concentrations were evaluated in the absence and presence of dexamethasone in healthy volunteers (n = 30). Dexamethasone was used to suppress endogenous cortisol concentrations prior to and after single doses of 0.5, 2, 5 and 10 mg of Infacort® or 20 mg of Infacort®/hydrocortisone tablet/hydrocortisone intravenously. A plasma protein binding model was established using unbound and total cortisol concentrations and sequentially integrated into the pharmacokinetic model. Results: Both specific (non-linear) and non-specific (linear) protein binding were included in the cortisol binding model. A two-compartment disposition model with saturable absorption and a constant endogenous cortisol baseline (Baseline_cort = 15.5 nmol/L) described the data accurately. The predicted cortisol exposure for a given dose varied considerably within a small body weight range in individuals weighing < 20 kg. Conclusions: Our semi-mechanistic population pharmacokinetic model for hydrocortisone captures the complex pharmacokinetics of hydrocortisone in a simplified but comprehensive framework. The predicted cortisol exposure indicated the importance of defining an accurate hydrocortisone dose to mimic physiological concentrations for neonates and infants weighing < 20 kg.
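The saturable-plus-linear binding structure mentioned in the Results can be sketched as follows. The parameter values (a binding capacity, a dissociation constant, and a non-specific binding constant) are illustrative placeholders of our own, not the estimates from this study.

```python
def total_cortisol(cu, bmax, kd, ns):
    """Total cortisol = unbound + specifically bound (saturable, Michaelis-type)
    + non-specifically bound (linear), all concentrations in nmol/L."""
    return cu + bmax * cu / (kd + cu) + ns * cu

def unbound_cortisol(ct, bmax=700.0, kd=30.0, ns=0.4):
    """Invert the binding equation for the unbound concentration by bisection.

    total_cortisol is strictly increasing in cu and exceeds cu itself,
    so the root lies in [0, ct]. Parameter defaults are hypothetical."""
    lo, hi = 0.0, ct
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if total_cortisol(mid, bmax, kd, ns) < ct:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Because the specific binding saturates, the unbound fraction rises with total concentration, which is the source of the non-linear, dose-dependent pharmacokinetics the abstract describes.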