Institut für Mathematik
Year of publication
- 2024 (1)
- 2023 (14)
- 2022 (38)
- 2021 (48)
- 2020 (90)
- 2019 (60)
- 2018 (67)
- 2017 (63)
- 2016 (71)
- 2015 (65)
- 2014 (54)
- 2013 (63)
- 2012 (60)
- 2011 (38)
- 2010 (44)
- 2009 (43)
- 2008 (25)
- 2007 (24)
- 2006 (78)
- 2005 (102)
- 2004 (86)
- 2003 (72)
- 2002 (79)
- 2001 (103)
- 2000 (94)
- 1999 (122)
- 1998 (129)
- 1997 (122)
- 1996 (73)
- 1995 (102)
- 1994 (77)
- 1993 (16)
- 1992 (19)
- 1991 (5)
- 1990 (1)
- 1980 (1)
Document Type
- Article (1078)
- Monograph/Edited Volume (427)
- Preprint (378)
- Doctoral Thesis (151)
- Other (46)
- Postprint (32)
- Review (16)
- Conference Proceeding (9)
- Master's Thesis (7)
- Part of a Book (3)
Language
- English (1874)
- German (265)
- French (7)
- Italian (3)
- Multiple languages (1)
Keywords
- random point processes (19)
- statistical mechanics (19)
- stochastic analysis (19)
- index (14)
- Fredholm property (12)
- boundary value problems (12)
- cluster expansion (10)
- data assimilation (10)
- regularization (10)
- elliptic operators (9)
Institute
- Institut für Mathematik (2150)
We construct eta- and rho-invariants for Dirac operators on the universal covering of a closed manifold that are invariant under the projective action associated to a 2-cocycle of the fundamental group. We prove an Atiyah-Patodi-Singer index theorem in this setting, as well as its higher generalisation. Applications concern the classification of positive scalar curvature metrics on closed spin manifolds. We also investigate the properties of these twisted invariants for the signature operator and their relation to the higher invariants.
Tasking machine learning to predict segments of a time series requires estimating the parameters of an ML model from input/output pairs drawn from the time series. We borrow two techniques used in statistical data assimilation to accomplish this task: time-delay embedding to prepare the input data, and precision annealing as a training method. The precision annealing approach identifies the global minimum of the action (-log[P]); in this way we are able to identify the number of training pairs required to produce good generalizations (predictions) for the time series. We proceed from a scalar time series s(t_n), t_n = t_0 + n Δt, and, using methods of nonlinear time series analysis, show how to produce a time-delay embedding space of dimension D_E > 1 in which the time series has no false neighbors, unlike the observed scalar series s(t_n). In that D_E-dimensional space, we explore the use of feedforward multilayer perceptrons as network models operating on D_E-dimensional inputs and producing D_E-dimensional outputs.
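The time-delay embedding step described above is mechanical and easy to sketch. Below is a minimal illustration (the dimension, lag, and test signal are invented for the example; the abstract's method additionally chooses D_E and the lag by false-nearest-neighbour analysis, which is not shown here):

```python
import numpy as np

def delay_embed(s, dim, tau):
    """Time-delay embedding of a scalar series: row k of the output is
    (s[k], s[k + tau], ..., s[k + (dim - 1) * tau])."""
    n = len(s) - (dim - 1) * tau
    return np.column_stack([s[j * tau : j * tau + n] for j in range(dim)])

# Embed a scalar sine series in a 3-dimensional delay space with lag 5.
s = np.sin(0.1 * np.arange(200))
X = delay_embed(s, dim=3, tau=5)
```

Each row of `X` is then one D_E-dimensional input to the perceptron, with the next embedded point as its target output.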
We study the spectral properties of curl, a linear differential operator of first order acting on differential forms of appropriate degree on an odd-dimensional closed oriented Riemannian manifold. In three dimensions, its eigenvalues are the electromagnetic oscillation frequencies in vacuum without external sources. In general, the spectrum consists of the eigenvalue 0 with infinite multiplicity and further real discrete eigenvalues of finite multiplicity. We compute the Weyl asymptotics and study the zeta-function. We give a sharp lower eigenvalue bound for positively curved manifolds and analyze the equality case. Finally, we compute the spectrum for flat tori, round spheres, and 3-dimensional spherical space forms. Published under license by AIP Publishing.
By adapting the Cheeger-Simons approach to differential cohomology, we establish a notion of differential cohomology with compact support. We show that it is functorial with respect to open embeddings and that it fits into a natural diagram of exact sequences which compare it to compactly supported singular cohomology and differential forms with compact support, in full analogy to ordinary differential cohomology. We prove an excision theorem for differential cohomology using a suitable relative version. Furthermore, we use our model to give an independent proof of Pontryagin duality for differential cohomology recovering a result of [Harvey, Lawson, Zweck - Amer. J. Math. 125 (2003), 791]: On any oriented manifold, ordinary differential cohomology is isomorphic to the smooth Pontryagin dual of compactly supported differential cohomology. For manifolds of finite-type, a similar result is obtained interchanging ordinary with compactly supported differential cohomology.
Data assimilation
(2019)
Data assimilation addresses the general problem of how to combine model-based predictions with partial and noisy observations of the process in an optimal manner. This survey focuses on sequential data assimilation techniques using probabilistic particle-based algorithms. In addition to surveying recent developments for discrete- and continuous-time data assimilation, both in terms of mathematical foundations and algorithmic implementations, we also provide a unifying framework from the perspective of coupling of measures, and Schrödinger’s boundary value problem for stochastic processes in particular.
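The simplest member of the particle-based family surveyed here is the bootstrap particle filter: predict with the model, reweight by the observation likelihood, resample. A minimal sketch on an invented linear toy model (the survey's coupling-of-measures and Schrödinger-problem schemes go well beyond this):

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_filter(y_obs, n_particles=500, obs_std=0.5):
    """Minimal bootstrap particle filter for the toy model
    x_{k+1} = 0.9 x_k + 0.1 N(0,1), y_k = x_k + N(0, obs_std^2)."""
    x = rng.normal(size=n_particles)
    means = []
    for y in y_obs:
        x = 0.9 * x + 0.1 * rng.normal(size=n_particles)  # predict
        w = np.exp(-0.5 * ((y - x) / obs_std) ** 2)       # likelihood weights
        w /= w.sum()
        x = rng.choice(x, size=n_particles, p=w)          # resample
        means.append(x.mean())
    return np.array(means)

# Filter a decaying synthetic observation sequence.
y_obs = 0.9 ** np.arange(20)
est = bootstrap_filter(y_obs)
```

The resampling step is exactly where the degeneracy issues arise that motivate the transport- and coupling-based alternatives discussed in the survey.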
We continue our study of invariant forms of the classical equations of mathematical physics, such as the Maxwell equations or the Lamé system, on manifolds with boundary. To this end we interpret them in terms of the de Rham complex at a certain step. Using the structure of the complex, we are able to predict a degeneracy deeply encoded in the equations. In the present paper we develop an invariant approach to the classical Navier-Stokes equations.
In this paper we develop a general framework for constructing and analyzing coupled Markov chain Monte Carlo samplers, allowing for both (possibly degenerate) diffusion and piecewise deterministic Markov processes. For many performance criteria of interest, including the asymptotic variance, the task of finding efficient couplings can be phrased in terms of problems related to optimal transport theory. We investigate general structural properties, proving a singularity theorem that has both geometric and probabilistic interpretations. Moreover, we show that those problems can often be solved approximately and support our findings with numerical experiments. For the particular objective of estimating the variance of a Bayesian posterior, our analysis suggests using novel techniques in the spirit of antithetic variates. Addressing the convergence to equilibrium of the coupled processes, we furthermore derive a modified Poincaré inequality.
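The flavour of antithetic coupling can be conveyed with a toy: two random-walk Metropolis chains driven by mirrored proposal noise and a shared acceptance uniform. For a symmetric target the mirroring below is exact, so the averaged estimator of the mean has zero variance; this is the variance-reduction idea in its extreme, not the paper's construction (target, step size, and chain length are invented for the sketch):

```python
import numpy as np

rng = np.random.default_rng(1)

def antithetic_rwm(log_pi, x0, steps=5000, scale=1.0):
    """Two random-walk Metropolis chains targeting pi, driven by
    antithetic (mirrored) proposal noise and a shared uniform.
    Returns the antithetic average of the two chains' running means."""
    x, y = x0, -x0
    sx = sy = 0.0
    for _ in range(steps):
        z = scale * rng.normal()
        u = np.log(rng.uniform())
        cand_x = x + z
        cand_y = y - z                    # mirrored proposal
        if u < log_pi(cand_x) - log_pi(x):
            x = cand_x
        if u < log_pi(cand_y) - log_pi(y):
            y = cand_y
        sx += x
        sy += y
    return 0.5 * (sx + sy) / steps        # antithetic average

# Standard normal target: the two chains stay exact mirror images,
# so the averaged estimate of the mean is exactly zero.
est = antithetic_rwm(lambda t: -0.5 * t * t, x0=2.0)
```

Each chain on its own is a valid Metropolis chain; only their joint law is engineered, which is precisely the design freedom the paper's optimal-transport formulation exploits.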
Many machine learning problems can be characterized by mutual contamination models. In these problems, one observes several random samples from different convex combinations of a set of unknown base distributions and the goal is to infer these base distributions. This paper considers the general setting where the base distributions are defined on arbitrary probability spaces. We examine three popular machine learning problems that arise in this general setting: multiclass classification with label noise, demixing of mixed membership models, and classification with partial labels. In each case, we give sufficient conditions for identifiability and present algorithms for the infinite and finite sample settings, with associated performance guarantees.
We discuss canonical representations of the de Rham cohomology on a compact manifold with boundary. They are obtained by minimising the energy integral in a Hilbert space of differential forms that, together with their exterior derivatives, belong to the domain of the adjoint operator. The corresponding Euler-Lagrange equations reduce to an elliptic boundary value problem on the manifold, which is usually referred to as the Neumann problem after Spencer.
Packings of circular disks
(2019)
The English seafarer Sir Walter Raleigh once wondered how he could stack as many cannonballs as possible in the hold of his ship. In 1611 Johannes Kepler thereupon formulated a conjecture about the optimal arrangement of the balls. This conjecture would prove to be one of the hardest mathematical nuts in history. Even in the plane, densest packings of congruent circles are a challenge. In 1892 and 1910 Axel Thue published (criticized) proofs that the hexagonal circle packing is optimal. Only in 1940 did László Fejes Tóth finally deliver a watertight proof of this fact. A variant of the problem asks for packings of finitely many congruent balls that minimize a certain quadratic energy: this intriguing geometric task was posed by Fejes Tóth in 1967 and is still not completely solved today. In this contribution the authors propose an original probabilistic method for constructing approximations of the solution in the plane.
The success of the ensemble Kalman filter has triggered a strong interest in expanding its scope beyond classical state estimation problems. In this paper, we focus on continuous-time data assimilation where the model and measurement errors are correlated and both states and parameters need to be identified. Such scenarios arise from noisy and partial observations of Lagrangian particles which move under a stochastic velocity field involving unknown parameters. We take an appropriate class of McKean-Vlasov equations as the starting point to derive ensemble Kalman-Bucy filter algorithms for combined state and parameter estimation. We demonstrate their performance through a series of increasingly complex multi-scale model systems.
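In discrete time, the augmented-state idea behind combined state and parameter estimation can be sketched with a basic ensemble Kalman filter: each ensemble member carries its own parameter value, and the cross-covariance between state and parameter lets state observations update the parameter. The model, noise levels, and step sizes below are invented for illustration; the paper itself works with McKean-Vlasov equations and continuous-time Kalman-Bucy filters:

```python
import numpy as np

rng = np.random.default_rng(2)

def enkf_state_param(y_obs, dt=0.1, n_ens=200, obs_std=0.2):
    """Ensemble Kalman filter on the augmented state (x, theta) for the
    scalar model dx = -theta * x dt + small noise, observing x only."""
    ens = np.column_stack([rng.normal(1.0, 0.5, n_ens),    # state x
                           rng.normal(1.0, 0.3, n_ens)])   # parameter theta
    for y in y_obs:
        # forecast: x evolves, theta is constant within each member
        ens[:, 0] += (-ens[:, 1] * ens[:, 0] * dt
                      + 0.05 * np.sqrt(dt) * rng.normal(size=n_ens))
        # analysis: Kalman update built from ensemble covariances
        C = np.cov(ens.T)                         # 2x2 covariance of (x, theta)
        H = np.array([1.0, 0.0])                  # observation operator: x only
        K = C @ H / (H @ C @ H + obs_std ** 2)    # gain for (x, theta)
        innov = y + obs_std * rng.normal(size=n_ens) - ens[:, 0]
        ens += np.outer(innov, K)
    return ens.mean(axis=0)

# Synthetic observations from the true parameter theta = 0.5.
x, ys = 1.0, []
for _ in range(100):
    x += -0.5 * x * 0.1 + 0.05 * np.sqrt(0.1) * rng.normal()
    ys.append(x + 0.2 * rng.normal())
est = enkf_state_param(ys)
```

The perturbed-observation update used here is one of several possible analysis steps; the paper derives its filters instead from a mean-field limit.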
Data assimilation has been an active area of research in recent years, owing to its wide utility. At the core of data assimilation are filtering, prediction, and smoothing procedures. Filtering entails incorporating measurement information into the model to gain more insight into a given state governed by a noisy state space model. Most natural laws are governed by time-continuous nonlinear models. For the most part, the knowledge available about a model is incomplete, and hence uncertainties are approximated by means of probabilities. Time-continuous filtering therefore holds promise for wider usefulness, for it offers a means of combining noisy measurements with an imperfect model to provide more insight into a given state.
The solution to the time-continuous nonlinear Gaussian filtering problem is provided by the Kushner-Stratonovich equation. Unfortunately, the Kushner-Stratonovich equation lacks a closed-form solution, and numerical approximations based on Taylor expansions above third order are fraught with computational complications. For this reason, numerical methods based on Monte Carlo sampling have been resorted to. Chief among these are sequential Monte Carlo methods (particle filters), for they allow online assimilation of data. Particle filters are not without challenges: they suffer from particle degeneracy, sample impoverishment, and the computational cost of resampling.
The goals of this thesis are: (i) to review the derivation of the Kushner-Stratonovich equation from first principles and its extant numerical approximation methods; (ii) to study feedback particle filters as a way of avoiding resampling in particle filters; (iii) to study joint state and parameter estimation in time-continuous settings; and (iv) to apply the notions studied to linear hyperbolic stochastic differential equations.
The interconnection between Itô and Stratonovich integrals and the corresponding stochastic partial differential equations is introduced in anticipation of feedback particle filters. With these ideas, and motivated by the variants of ensemble Kalman-Bucy filters founded on the structure of the innovation process, a feedback particle filter with randomly perturbed innovation is proposed. Moreover, feedback particle filters based on the coupling of prediction and analysis measures are proposed. They register a better performance than the bootstrap particle filter at lower ensemble sizes.
We study joint state and parameter estimation, both by means of extended state spaces and by use of dual filters; feedback particle filters seem to perform well in both cases. Finally, we apply joint state and parameter estimation to the advection and wave equations with spatially varying velocity. Two methods are employed: Metropolis-Hastings with filter likelihood, and a dual filter comprising a Kalman-Bucy filter and an ensemble Kalman-Bucy filter. The former performs better than the latter.
Quantum field theory on curved spacetimes is understood as a semiclassical approximation of some quantum theory of gravitation, which models a quantum field under the influence of a classical gravitational field, that is, a curved spacetime. The most remarkable effect predicted by this approach is the creation of particles by the spacetime itself, represented, for instance, by Hawking's evaporation of black holes or the Unruh effect. On the other hand, these aspects already suggest that certain cornerstones of Minkowski quantum field theory, more precisely a preferred vacuum state and, consequently, the concept of particles, do not have sensible counterparts within a theory on general curved spacetimes. Likewise, the implementation of covariance in the model has to be reconsidered, as curved spacetimes usually lack any non-trivial global symmetry. Whereas this latter issue has been resolved by introducing the paradigm of locally covariant quantum field theory (LCQFT), the absence of a reasonable concept for distinct vacuum and particle states on general curved spacetimes has become manifest even in the form of no-go-theorems.
Within the framework of algebraic quantum field theory, one first introduces observables, while states enter the game only afterwards by assigning expectation values to them. Even though the construction of observables is based on physically motivated concepts, there is still a vast number of possible states, and many of them are not reasonable from a physical point of view. We infer that this notion is still too general, that is, further physical constraints are required. For instance, when dealing with a free quantum field theory driven by a linear field equation, it is natural to focus on so-called quasifree states. Furthermore, a suitable renormalization procedure for products of field operators is vitally important. This particularly concerns the expectation values of the energy momentum tensor, which correspond to distributional bisolutions of the field equation on the curved spacetime. J. Hadamard's theory of hyperbolic equations provides a certain class of bisolutions with fixed singular part, which therefore allow for an appropriate renormalization scheme.
By now, this specification of the singularity structure is known as the Hadamard condition and widely accepted as the natural generalization of the spectral condition of flat quantum field theory. Moreover, due to Radzikowski's celebrated results, it is equivalent to a local condition, namely on the wave front set of the bisolution. This formulation made the powerful tools of microlocal analysis, developed by Duistermaat and Hörmander, available for the verification of the Hadamard property as well as the construction of corresponding Hadamard states, which initiated much progress in this field. However, although indispensable for the investigation of the characteristics of operators and their parametrices, microlocal analysis is not practicable for the study of their non-singular features, and central results are typically stated only up to smooth objects. Consequently, Radzikowski's work almost directly led to existence results and, moreover, a concrete pattern for the construction of Hadamard bidistributions via a Hadamard series. Nevertheless, the remaining properties (bisolution, causality, positivity) are ensured only modulo smooth functions.
It is the subject of this thesis to complete this construction for linear and formally self-adjoint wave operators acting on sections in a vector bundle over a globally hyperbolic Lorentzian manifold. Based on Wightman's solution of d'Alembert's equation on Minkowski space and the construction of the advanced and retarded fundamental solutions, we set up a Hadamard series for local parametrices and derive global bisolutions from them. These are of Hadamard form, and we show the existence of smooth bisections such that the sum also satisfies the remaining properties exactly.
We prove a version of the Hopf-Rinow theorem with respect to path metrics on discrete spaces. The novel aspect is that we do not a priori assume local finiteness but isolate a local finiteness type condition, called essentially locally finite, that is indeed necessary. As a side product we identify the maximal weight, called the geodesic weight, generating the path metric in the situation when the space is complete with respect to any of the equivalent notions of completeness proven in the Hopf-Rinow theorem. As an application we characterize the graphs for which the resistance metric is a path metric induced by the graph structure.
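On a finite weighted graph, the path metric in question is simply the infimum of summed edge weights over connecting paths, computable with Dijkstra's algorithm. A toy sketch for orientation (the theorem's substance of course concerns infinite graphs without the a priori local finiteness assumption, which a finite example cannot capture; the graph below is invented):

```python
import heapq

def path_metric(weights, a, b):
    """Path metric on a finite weighted graph: the minimum of summed
    edge weights over paths from a to b, via Dijkstra's algorithm.
    `weights` maps node -> {neighbour: weight}."""
    dist = {a: 0.0}
    heap = [(0.0, a)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == b:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in weights[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

# Triangle graph: the direct edge a-c has weight 3, but the path via b costs 2,
# so the path metric differs from the edge weight.
g = {"a": {"b": 1, "c": 3}, "b": {"a": 1, "c": 1}, "c": {"a": 3, "b": 1}}
```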
Low thermal conductivity boulder with high porosity identified on C-type asteroid (162173) Ryugu
(2019)
C-type asteroids are among the most pristine objects in the Solar System, but little is known about their interior structure and surface properties. Telescopic thermal infrared observations have so far been interpreted in terms of a regolith-covered surface with low thermal conductivity and particle sizes in the centimetre range. This includes observations of C-type asteroid (162173) Ryugu [1-3]. However, on arrival of the Hayabusa2 spacecraft at Ryugu, a regolith cover of sand- to pebble-sized particles was found to be absent [4,5] (R.J. et al., manuscript in preparation). Rather, the surface is largely covered by cobbles and boulders, seemingly incompatible with the remote-sensing infrared observations. Here we report on in situ thermal infrared observations of a boulder on the C-type asteroid Ryugu. We found that the boulder’s thermal inertia was much lower than anticipated based on laboratory measurements of meteorites, and that a surface covered by such low-conductivity boulders would be consistent with remote-sensing observations. Our results furthermore indicate high boulder porosities as well as a low tensile strength in the few hundred kilopascal range. The predicted low tensile strength confirms the suspected observational bias [6] in our meteorite collections, as such asteroidal material would be too frail to survive atmospheric entry [7].
An efficient immunosurveillance of CD8(+) T cells in the periphery depends on positive/negative selection of thymocytes and thus on the dynamics of antigen degradation and epitope production by thymoproteasome and immunoproteasome in the thymus. Although studies in mouse systems have shown how thymoproteasome activity differs from that of immunoproteasome and strongly impacts the T cell repertoire, the proteolytic dynamics and the regulation of human thymoproteasome are unknown. By combining biochemical and computational modeling approaches, we show here that human 20S thymoproteasome and immunoproteasome differ not only in the proteolytic activity of the catalytic sites but also in peptide transport. These differences impinge upon the quantity of peptide products rather than where the substrates are cleaved. The comparison of the two human 20S proteasome isoforms reveals different processing of antigens associated with tumors and autoimmune diseases.
We develop a technique for the multivariate data analysis of perturbed self-sustained oscillators. The approach is based on the reconstruction of the phase dynamics model from observations and on a subsequent exploration of this model. For the system, driven by several inputs, we suggest a dynamical disentanglement procedure, allowing us to reconstruct the variability of the system's output that is due to a particular observed input, or, alternatively, to reconstruct the variability which is caused by all the inputs except for the observed one. We focus on the application of the method to the vagal component of the heart rate variability caused by a respiratory influence. We develop an algorithm that extracts purely respiratory-related variability, using a respiratory trace and times of R-peaks in the electrocardiogram. The algorithm can be applied to other systems where the observed bivariate data can be represented as a point process and a slow continuous signal, e.g. for the analysis of neuronal spiking. This article is part of the theme issue 'Coupling functions: dynamical interaction mechanisms in the physical, biological and social sciences'.
For a singularly perturbed parabolic-ODE system we construct the asymptotic expansion in the small parameter for the case in which the degenerate equation has a double root. Such systems, which are called partly dissipative reaction-diffusion systems, are used to model various natural processes, including signal transmission along axons, solid combustion and the kinetics of some chemical reactions. It turns out that the algorithm for constructing the boundary-layer functions and the behavior of the solution in the boundary layers differ essentially from those in the case of a simple root. The multizonal behaviour of the initial and boundary layers is described.
Probabilistic integration of a continuous dynamical system is a way of systematically introducing discretisation error, at scales no larger than errors introduced by standard numerical discretisation, in order to enable thorough exploration of possible responses of the system to inputs. It is thus a potentially useful approach in a number of applications such as forward uncertainty quantification, inverse problems, and data assimilation. We extend the convergence analysis of probabilistic integrators for deterministic ordinary differential equations, as proposed by Conrad et al. (Stat Comput 27(4):1065-1082, 2017), to establish mean-square convergence in the uniform norm on discrete- or continuous-time solutions under relaxed regularity assumptions on the driving vector fields and their induced flows. Specifically, we show that randomised high-order integrators for globally Lipschitz flows and randomised Euler integrators for dissipative vector fields with polynomially bounded local Lipschitz constants all have the same mean-square convergence rate as their deterministic counterparts, provided that the variance of the integration noise is not of higher order than the corresponding deterministic integrator. These and similar results are proven for probabilistic integrators where the random perturbations may be state-dependent, non-Gaussian, or non-centred random variables.
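A randomised Euler integrator of the kind analysed here simply adds, after each Euler step, a centred Gaussian perturbation whose standard deviation scales like h^(3/2), so that the injected noise variance matches the order of the deterministic local error. A minimal sketch on an invented dissipative test problem (Gaussian, state-independent noise; the paper's results cover far more general perturbations):

```python
import numpy as np

rng = np.random.default_rng(3)

def randomised_euler(f, x0, t_end, h, noise_scale=1.0):
    """Euler step plus a centred Gaussian perturbation of standard
    deviation noise_scale * h**1.5, so the injected uncertainty is of
    the same order as the discretisation error it represents."""
    x = np.asarray(x0, dtype=float)
    t = 0.0
    while t < t_end - 1e-12:
        x = x + h * f(t, x) + noise_scale * h ** 1.5 * rng.normal(size=x.shape)
        t += h
    return x

# Dissipative test problem dx/dt = -x: repeated randomised runs scatter
# around the deterministic Euler solution, with spread shrinking in h.
sols = [randomised_euler(lambda t, x: -x, [1.0], 1.0, 0.01)[0] for _ in range(50)]
```

Running the integrator many times yields an ensemble whose spread quantifies the discretisation uncertainty, which is the point of the probabilistic-numerics viewpoint.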
In this paper a Lie group method in combination with the Magnus expansion is utilized to develop a universal method for solving a Sturm-Liouville problem (SLP) of any order with arbitrary boundary conditions. It is shown that the method is able to solve direct regular (and some singular) SLPs of even order (tested up to order eight), with a mix of boundary conditions (including non-separable ones and finite singular endpoints), accurately and efficiently. The technique is then successfully applied to overcome the difficulties in finding suitable sets of eigenvalues so that the inverse SLP can be effectively solved. The inverse SLP algorithm proposed by Barcilon (1974) is combined with the Magnus method so that a direct SLP of any (even) order and an inverse SLP of order two can be solved effectively.
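The basic mechanism is easiest to see in the simplest case: write the SLP as a first-order system and propagate its fundamental solution by matrix exponentials, then locate eigenvalues as roots of a shooting function. For the constant-coefficient problem -y'' = λy on [0, π] with Dirichlet conditions, the one-term Magnus step is already the exact exponential; the sketch below uses this toy case (the paper's method handles general coefficients and high-order problems, which this does not):

```python
import numpy as np

def expm2(M, terms=20):
    """Matrix exponential of a small matrix via scaling and squaring."""
    s = max(0, int(np.ceil(np.log2(max(1.0, np.abs(M).max())))) + 1)
    A = M / 2 ** s
    E = np.eye(len(M))
    T = np.eye(len(M))
    for k in range(1, terms):
        T = T @ A / k
        E = E + T
    for _ in range(s):
        E = E @ E
    return E

def shoot(lam, n_steps=200):
    """Propagate (y, y') for -y'' = lam * y on [0, pi], y(0) = 0,
    y'(0) = 1, by one exponential (Magnus) step per subinterval.
    Returns y(pi); eigenvalues are the roots in lam."""
    h = np.pi / n_steps
    A = np.array([[0.0, 1.0], [-lam, 0.0]])
    step = expm2(A * h)   # constant coefficients: one-term Magnus is exact
    v = np.array([0.0, 1.0])
    for _ in range(n_steps):
        v = step @ v
    return v[0]

# Bisection on the shooting function brackets the lowest eigenvalue,
# which is exactly 1 (eigenfunction sin x).
lo, hi = 0.5, 1.5
for _ in range(40):
    mid = 0.5 * (lo + hi)
    if shoot(lo) * shoot(mid) <= 0:
        hi = mid
    else:
        lo = mid
```

For variable coefficients the step matrix changes per subinterval and higher Magnus terms involve commutator integrals; that is where the Lie-group machinery of the paper comes in.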
In this paper, we investigate the continuous version of modified iterative Runge-Kutta-type methods for nonlinear inverse ill-posed problems proposed in a previous work. The convergence analysis is carried out under the tangential cone condition, a modified discrepancy principle (the stopping time T is a solution of ||F(x^δ(T)) − y^δ|| = τδ_+ for some δ_+ > δ), and an appropriate source condition. We obtain the optimal rate of convergence.
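The role of the discrepancy principle as a stopping rule can be illustrated on the simplest regularisation method, linear Landweber iteration: iterate until the residual falls below τδ, so the noise is never fitted. This is only a linear stand-in for the abstract's nonlinear Runge-Kutta-type methods, with an invented diagonal test problem:

```python
import numpy as np

rng = np.random.default_rng(4)

def landweber_discrepancy(F, y_delta, delta, tau=1.5, max_iter=10000):
    """Landweber iteration x <- x + step * F^T (y_delta - F x), stopped
    once the discrepancy principle ||F x - y_delta|| <= tau * delta
    holds, with tau > 1 as in the abstract."""
    x = np.zeros(F.shape[1])
    step = 1.0 / np.linalg.norm(F, 2) ** 2
    for _ in range(max_iter):
        r = y_delta - F @ x
        if np.linalg.norm(r) <= tau * delta:
            break
        x = x + step * (F.T @ r)
    return x

# Mildly ill-conditioned diagonal example: early stopping keeps the
# data noise from being amplified into the reconstruction.
F = np.diag([1.0, 0.5, 0.1])
x_true = np.array([1.0, 1.0, 1.0])
delta = 0.01
y_delta = F @ x_true + delta * rng.normal(size=3) / np.sqrt(3)
x_rec = landweber_discrepancy(F, y_delta, delta)
```

Stopping at residual level τδ rather than iterating to convergence is exactly what makes the scheme a regularisation method; the continuous-time analogue is the stopping time T in the abstract.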
We study elements of the calculus of boundary value problems in a variant of Boutet de Monvel’s algebra (Acta Math 126:11–51, 1971) on a manifold N with edge and boundary. If the boundary is empty then the approach corresponds to Schulze (Symposium on partial differential equations (Holzhau, 1988), BSB Teubner, Leipzig, 1989) and other papers from the subsequent development. For non-trivial boundary we study Mellin-edge quantizations and compositions within the structure in terms of a new Mellin-edge quantization, compared with a more traditional technique. Similar structures in the closed case have been studied in Gil et al.
We show that elliptic complexes of (pseudo) differential operators on smooth compact manifolds with boundary can always be complemented to a Fredholm problem by boundary conditions involving global pseudodifferential projections on the boundary (similarly as the spectral boundary conditions of Atiyah, Patodi, and Singer for a single operator). We prove that boundary conditions without projections can be chosen if, and only if, the topological Atiyah-Bott obstruction vanishes. These results make use of a Fredholm theory for complexes of operators in algebras of generalized pseudodifferential operators of Toeplitz type which we also develop in the present paper.
Permafrost warming has the potential to amplify global climate change, because thawing of frozen sediments unlocks soil organic carbon. Yet to date, no globally consistent assessment of permafrost temperature change has been compiled. Here we use a global data set of permafrost temperature time series from the Global Terrestrial Network for Permafrost to evaluate temperature change across permafrost regions for the period since the International Polar Year (2007-2009). During the reference decade between 2007 and 2016, ground temperature near the depth of zero annual amplitude in the continuous permafrost zone increased by 0.39 +/- 0.15 degrees C. Over the same period, discontinuous permafrost warmed by 0.20 +/- 0.10 degrees C. Permafrost in mountains warmed by 0.19 +/- 0.05 degrees C and in Antarctica by 0.37 +/- 0.10 degrees C. Globally, permafrost temperature increased by 0.29 +/- 0.12 degrees C. The observed trend follows the Arctic amplification of air temperature increase in the Northern Hemisphere. In the discontinuous zone, however, ground warming occurred due to increased snow thickness while air temperature remained statistically unchanged.
Background: Circulating infliximab (IFX) concentrations correlate with clinical outcomes, forming the basis of the IFX concentration monitoring in patients with Crohn's disease. This study aims to investigate and refine the exposure-response relationship by linking the disease activity markers "Crohn's disease activity index" (CDAI) and C-reactive protein (CRP) to IFX exposure. In addition, we aim to explore the correlations between different disease markers and exposure metrics.
Methods: Data from 47 Crohn's disease patients of a randomized controlled trial were analyzed post hoc. All patients had secondary treatment failure at inclusion and had received intensified IFX of 5 mg/kg every 4 weeks for up to 20 weeks. Graphical analyses were performed to explore exposure-response relationships. Metrics of exposure included area under the concentration-time curve (AUC) and trough concentrations (Cmin). Disease activity was measured by CDAI and CRP values, their change from baseline/last visit, and response/remission outcomes at week 12.
Results: Although trends toward lower Cmin and lower AUC in nonresponders were observed, neither CDAI nor CRP showed consistent trends of lower disease activity with higher IFX exposure across the 30 evaluated relationships. As can be expected, Cmin and AUC were strongly correlated with each other. In contrast, the disease activity markers were only weakly correlated with each other.
Conclusions: No significant relationship between disease activity, as evaluated by CDAI or CRP, and IFX exposure was identified. AUC did not add benefit compared with Cmin. These findings support the continued use of Cmin and call for stringent objective disease activity (bio-)markers (eg, endoscopy) to form the basis of personalized IFX therapy for Crohn's disease patients with IFX treatment failure.
Continuous insight into biological processes has led to the development of large-scale, mechanistic systems biology models of pharmacologically relevant networks. While these models are typically designed to study the impact of diverse stimuli or perturbations on multiple system variables, the focus in pharmacological research is often on a specific input, e.g., the dose of a drug, and a specific output related to the drug effect or response in terms of some surrogate marker.
To study a chosen input-output pair, the complexity of the interactions as well as the size of the models hinders easy access and understanding of the details of the input-output relationship.
The objective of this thesis is the development of a mathematical approach, specifically a model reduction technique, that allows (i) quantifying the importance of the different state variables for a given input-output relationship, and (ii) reducing the dynamics to its essential features, allowing for a physiological interpretation of state variables as well as parameter estimation in the statistical analysis of clinical data. We develop a model reduction technique in a control-theoretic setting by first defining a novel type of time-limited controllability and observability gramians for nonlinear systems. We then show the superiority of the time-limited generalised gramians for nonlinear systems in the context of balanced truncation for a benchmark system from control theory.
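For orientation, the classical (linear, time-limited) object underlying this construction is the gramian W(T) = ∫₀ᵀ e^{At} B Bᵀ e^{Aᵀt} dt of a system dx/dt = Ax + Bu. A sketch of its computation by quadrature, on an invented scalar example with a known closed form (the thesis's generalisation to nonlinear systems is not reproduced here):

```python
import numpy as np

def time_limited_gramian(A, B, T, n_steps=2000):
    """Time-limited controllability gramian
        W(T) = int_0^T e^{A t} B B^T e^{A^T t} dt
    of dx/dt = A x + B u, by trapezoidal quadrature, propagating
    e^{A t} with a third-order Taylor step per subinterval."""
    n = A.shape[0]
    h = T / n_steps
    hA = h * A
    step = np.eye(n) + hA + hA @ hA / 2 + hA @ hA @ hA / 6  # ~ e^{A h}
    E = np.eye(n)                     # holds e^{A t}
    W = np.zeros((n, n))
    for k in range(n_steps + 1):
        G = E @ B
        w = h if 0 < k < n_steps else h / 2   # trapezoid weights
        W = W + w * (G @ G.T)
        E = E @ step
    return W

# Scalar check: A = -1, B = 1 gives W(T) = (1 - e^{-2T}) / 2.
W = time_limited_gramian(np.array([[-1.0]]), np.array([[1.0]]), T=1.0)
```

Restricting the integral to [0, T] rather than [0, ∞) is what makes the gramian "time-limited", which is the property the thesis exploits to localise importance in time.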
The concept of time-limited controllability and observability gramians is subsequently used to introduce a state and time-dependent quantity called the input-response (ir) index that quantifies the importance of state variables for a given input-response relationship at a particular time.
We subsequently link our approach to sensitivity analysis, thus enabling for the first time the use of sensitivity coefficients for state-space reduction. The sensitivity-based ir-indices are given as a product of two sensitivity coefficients. This allows not only for a computationally more efficient calculation but also for a clear distinction between the extent to which the input impacts a state variable and the extent to which a state variable impacts the output.
The ir-indices give insight into the coordinated action of specific state variables for a chosen input-response relationship.
Our model reduction technique yields reduced models that still allow for a mechanistic interpretation in terms of the quantities/state variables of the original system, which is a key requirement in the fields of systems pharmacology and systems biology and distinguishes the reduced models from so-called empirical drug effect models. The ir-indices are explicitly defined with respect to a reference trajectory and are thereby dependent on the initial state, which is an important feature of the measure. This is demonstrated for an example from the field of systems pharmacology, showing that the reduced models are very informative in their ability to detect (genetic) deficiencies in certain physiological entities. A comparison of our novel model reduction technique with existing techniques demonstrates its superiority.
The novel input-response index as a measure of the importance of state variables provides a powerful tool for understanding the complex dynamics of large-scale systems in the context of a specific drug-response relationship. Furthermore, the indices provide a means for a very efficient model order reduction and, thus, an important step towards translating insight from biological processes incorporated in detailed systems pharmacology models into the population analysis of clinical data.
This paper presents a scalable E-band radar platform based on single-channel fully integrated transceivers (TRX) manufactured in 130-nm silicon-germanium (SiGe) BiCMOS technology. The TRX is suitable for flexible radar systems exploiting massive multiple-input multiple-output (MIMO) techniques for multidimensional sensing. A fully integrated fractional-N phase-locked loop (PLL) comprising a 39.5-GHz voltage-controlled oscillator is used to generate wideband frequency-modulated continuous-wave (FMCW) chirps for E-band radar front ends. The TRX is equipped with a vector modulator (VM) for high-speed carrier modulation and beam-forming techniques. A single TRX achieves 19.2-dBm maximum output power and 27.5-dB total conversion gain with an input-referred 1-dB compression point of -10 dBm. It consumes 220 mA from a 3.3-V supply and occupies 3.96 mm^2 of silicon area. A two-channel radar platform based on full-custom TRXs and the PLL was fabricated to demonstrate high-precision and high-resolution FMCW sensing. The radar enables up to 10-GHz frequency ramp generation in the 74-84-GHz range, which results in 1.5-cm spatial resolution. Due to the high output power, and thus high signal-to-noise ratio (SNR), a ranging precision of 7.5 µm for a target at 2 m was achieved. The proposed architecture supports scalable multichannel applications for automotive FMCW radar using a single local oscillator (LO).
We present new conditions for semigroups of positive operators to converge strongly as time tends to infinity. Our proofs are based on a novel approach combining the well-known splitting theorem by Jacobs, de Leeuw, and Glicksberg with a purely algebraic result about positive group representations. Thus, we obtain convergence theorems not only for one-parameter semigroups but also for a much larger class of semigroup representations. Our results allow for a unified treatment of various theorems from the literature asserting that, under technical assumptions, a bounded positive C0-semigroup containing or dominating a kernel operator converges strongly as t → ∞. We gain new insights into the structure-theoretical background of those theorems and generalize them in several respects; in particular, we drop any kind of continuity or regularity assumption with respect to the time parameter.
For a finite measure space X, we characterize strongly continuous Markov lattice semigroups on Lp(X) by showing that their generator A acts as a derivation on the dense subspace D(A) ∩ L∞(X). We then use this to characterize Koopman semigroups on Lp(X) if X is a standard probability space. In addition, we show that every measurable and measure-preserving flow on a standard probability space is isomorphic to a continuous flow on a compact Borel probability space.
We provide explicit examples of positive and power-bounded operators on c0 and ℓ∞ which are mean ergodic but not weakly almost periodic. As a consequence, we prove that a countably order complete Banach lattice on which every positive and power-bounded mean ergodic operator is weakly almost periodic is necessarily a KB-space. This answers several open questions from the literature. Finally, we prove that if T is a positive mean ergodic operator with zero fixed space on an arbitrary Banach lattice, then so is every power of T.
Local observations indicate that climate change and shifting disturbance regimes are causing permafrost degradation. However, the occurrence and distribution of permafrost region disturbances (PRDs) remain poorly resolved across the Arctic and Subarctic. Here we quantify the abundance and distribution of three primary PRDs using time-series analysis of 30-m resolution Landsat imagery from 1999 to 2014. Our dataset spans four continental-scale transects in North America and Eurasia, covering approximately 10% of the permafrost region. Lake area loss (-1.45%) dominated the study domain, with enhanced losses occurring at the boundary between discontinuous and continuous permafrost regions. Fires were the most extensive PRD across boreal regions (6.59%) but were limited to Alaska in tundra regions (0.63%). Retrogressive thaw slumps were abundant but highly localized (<10^-5 %). Our analysis underscores the global-scale importance of PRDs. The findings highlight the need to include PRDs in next-generation land surface models to project the permafrost carbon feedback.
Given two weighted graphs (X, b1, m1) and (X, b2, m2) with b1 ~ b2 and m1 ~ m2, we prove a weighted ℓ1-criterion for the existence and completeness of the wave operators W±(H2, H1, I1,2), where Hk denotes the natural Laplacian in ℓ2(X, mk) with respect to (X, bk, mk) and I1,2 the trivial identification of ℓ2(X, m1) with ℓ2(X, m2). In particular, this entails a general criterion for the absolutely continuous spectra of H1 and H2 to be equal.
One of the crucial components in seismic hazard analysis is the estimation of the maximum earthquake magnitude and its associated uncertainty. In the present study, the uncertainty related to the maximum expected magnitude μ is determined in terms of confidence intervals for an imposed level of confidence. Previous work by Salamat et al. (Pure Appl Geophys 174:763-777, 2017) shows the divergence of the confidence interval of the maximum possible magnitude m_max for high levels of confidence in six seismotectonic zones of Iran. In this work, the maximum expected earthquake magnitude μ is calculated for a predefined finite time interval and imposed level of confidence. For this, we use a conceptual model based on a doubly truncated Gutenberg-Richter law for magnitudes with constant b-value and calculate the posterior distribution of μ for a future time interval T_f. We assume a stationary Poisson process in time and a Gutenberg-Richter relation for magnitudes. The upper bound of the magnitude confidence interval is calculated for different time intervals of 30, 50, and 100 years and imposed levels of confidence α = 0.5, 0.1, 0.05, and 0.01. The posterior distributions of waiting times T_f to the next earthquake with a given magnitude equal to 6.5, 7.0, and 7.5 are calculated in each zone. In order to assess the influence of declustering, we use both the original and the declustered version of the catalog. The earthquake catalog of the territory of Iran and its surroundings is subdivided into six seismotectonic zones: Alborz, Azerbaijan, Central Iran, Zagros, Kopet Dagh, and Makran. We assume the maximum possible magnitude m_max = 8.5 and calculate the upper bound of the confidence interval of μ in each zone. The results indicate that for short time intervals of 30 and 50 years and imposed levels of confidence 1 − α = 0.95 and 0.90, the upper bound of the confidence interval of μ lies in the range 7.16-8.23 in all seismic zones.
Information on structural features of a fracture network at early stages of Enhanced Geothermal System development is mostly restricted to borehole images and, if available, outcrop data. However, using this information to image discontinuities in deep reservoirs is difficult. Wellbore failure data provides only some information on components of the in situ stress state and its heterogeneity. Our working hypothesis is that slip on natural fractures primarily controls these stress heterogeneities. Based on this, we introduce stress-based tomography in a Bayesian framework to characterize the fracture network and its heterogeneity in potential Enhanced Geothermal System reservoirs. In this procedure, first a random initial discrete fracture network (DFN) realization is generated based on prior information about the network. The observations needed to calibrate the DFN are based on local variations of the orientation and magnitude of at least one principal stress component along boreholes. A Markov Chain Monte Carlo sequence is employed to update the DFN iteratively by a fracture translation within the domain. The Markov sequence compares the simulated stress profile with the observed stress profiles in the borehole, evaluates each iteration with Metropolis-Hastings acceptance criteria, and stores acceptable DFN realizations in an ensemble. Finally, this obtained ensemble is used to visualize the potential occurrence of fractures in a probability map, indicating possible fracture locations and lengths. We test this methodology to reconstruct simple synthetic and more complex outcrop-based fracture networks and successfully image the significant fractures in the domain.
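The Markov Chain Monte Carlo update described above can be sketched in a toy one-dimensional setting. Everything below is an illustrative assumption, not the authors' implementation: the Gaussian-bump "stress profile" forward model, the parameter values, and the simplification that fracture lengths are known prior information while only positions are updated by translation.

```python
import numpy as np

rng = np.random.default_rng(0)

def stress_profile(fractures, depths):
    """Toy forward model (an assumption, not the paper's solver): each
    fracture perturbs the along-borehole stress profile with a Gaussian
    bump centred at its position, scaled by its length."""
    prof = np.zeros_like(depths)
    for pos, length in fractures:
        prof += length * np.exp(-0.5 * ((depths - pos) / 5.0) ** 2)
    return prof

depths = np.linspace(0.0, 100.0, 50)
true_fractures = [(30.0, 2.0), (70.0, 1.0)]
observed = stress_profile(true_fractures, depths)   # "borehole observations"

def log_likelihood(fractures, sigma=0.1):
    resid = stress_profile(fractures, depths) - observed
    return -0.5 * np.sum((resid / sigma) ** 2)

# random initial DFN realization (lengths assumed known as prior information)
current = [(rng.uniform(0, 100), length) for _, length in true_fractures]
accepted = []
for _ in range(3000):
    proposal = list(current)
    i = rng.integers(len(proposal))                 # translate one fracture
    pos, length = proposal[i]
    proposal[i] = (np.clip(pos + rng.normal(0, 5.0), 0, 100), length)
    log_alpha = log_likelihood(proposal) - log_likelihood(current)
    if np.log(rng.uniform()) < log_alpha:           # Metropolis-Hastings step
        current = proposal
    accepted.append(current)

# ensemble of acceptable realizations -> "probability map" of positions
ensemble = np.array([[pos for pos, _ in state] for state in accepted[1000:]])
print(ensemble.mean(axis=0))
```

The ensemble of stored realizations plays the role of the probability map in the abstract: histogramming the positions in `ensemble` indicates where fractures are likely to occur.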
The variabilities of the semidiurnal solar and lunar tides of the equatorial electrojet (EEJ) are investigated during the 2003, 2006, 2009 and 2013 major sudden stratospheric warming (SSW) events. For this purpose, ground-magnetometer recordings at the equatorial observatories in Huancayo and Fuquene are utilized. Results show a major enhancement in the amplitude of the EEJ semidiurnal lunar tide in each of the four warming events. The EEJ semidiurnal solar tidal amplitude shows an amplification prior to the onset of the warmings, a reduction during the deceleration of the zonal mean zonal wind at 60 degrees N and 10 hPa, and a second enhancement a few days after the peak reversal of the zonal mean zonal wind during all four SSWs. Results also reveal that the amplitude of the EEJ semidiurnal lunar tide becomes comparable to or even greater than the amplitude of the EEJ semidiurnal solar tide during all these warming events. The present study also compares the EEJ semidiurnal solar and lunar tidal changes with the variability of the migrating semidiurnal solar (SW2) and lunar (M2) tides in neutral temperature and zonal wind obtained from numerical simulations at E-region heights. A better agreement is found between the enhancements of the EEJ semidiurnal lunar tide and the M2 tide than between the enhancements of the EEJ semidiurnal solar tide and the SW2 tide, in both the neutral temperature and the zonal wind at E-region altitudes.
We analyze a general class of self-adjoint difference operators Hε = Tε + Vε on ℓ2((εZ)d), where Vε is a multi-well potential and ε is a small parameter. We give a coherent review of our results on tunneling, up to new sharp results on the level of complete asymptotic expansions (see [30-35]). Our emphasis is on general ideas and strategy, possibly of interest for a broader range of readers, and less on detailed mathematical proofs. The wells are decoupled by introducing certain Dirichlet operators on regions containing only one potential well. The eigenvalue problem for the Hamiltonian Hε is then treated as a small perturbation of these comparison problems. After constructing a Finslerian distance d induced by Hε, we show that Dirichlet eigenfunctions decay exponentially with a rate controlled by this distance to the well. It follows with microlocal techniques that the first n eigenvalues of Hε converge to the first n eigenvalues of the direct sum of harmonic oscillators on Rd located at the several wells. In a neighborhood of one well, we construct formal asymptotic expansions of WKB-type for eigenfunctions associated with the low-lying eigenvalues of Hε. These are obtained from eigenfunctions or quasimodes for the operator Hε acting on L2(Rd), via restriction to the lattice (εZ)d. Tunneling is then described by a certain interaction matrix; similar to the analysis for the Schrödinger operator (see [22]), the remainder is exponentially small and roughly quadratic compared with the interaction matrix. We give weighted ℓ2-estimates for the difference of eigenfunctions of Dirichlet operators in neighborhoods of the different wells and the associated WKB-expansions at the wells. In the last step, we derive full asymptotic expansions for interactions between two "wells" (minima) of the potential energy, in particular for the discrete tunneling effect. Here we essentially use analysis on phase space, complexified in the momentum variable. These results are as sharp as the classical results for the Schrödinger operator in [22].
We prove finiteness and diameter bounds for graphs having a positive Ricci-curvature bound in the Bakry–Émery sense. Our first result using only curvature and maximal vertex degree is sharp in the case of hypercubes. The second result depends on an additional dimension bound, but is independent of the vertex degree. In particular, the second result is the first Bonnet–Myers type theorem for unbounded graph Laplacians. Moreover, our results improve diameter bounds from Fathi and Shu (Bernoulli 24(1):672–698, 2018) and Horn et al. (J für die reine und angewandte Mathematik (Crelle’s J), 2017, https://doi.org/10.1515/crelle-2017-0038) and solve a conjecture from Cushing et al. (Bakry–Émery curvature functions of graphs, 2016).
The ensemble Kalman filter has become a popular data assimilation technique in the geosciences. However, little is known theoretically about its long-term stability and accuracy. In this paper, we investigate the behavior of an ensemble Kalman-Bucy filter applied to continuous-time filtering problems. We derive mean-field limiting equations as the ensemble size goes to infinity, as well as uniform-in-time accuracy and stability results for finite ensemble sizes. The latter results require that the process is fully observed and that the measurement noise is small. We also demonstrate that our ensemble Kalman-Bucy filter is consistent with the classic Kalman-Bucy filter for linear systems and Gaussian processes. We finally verify our theoretical findings for the Lorenz-63 system.
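The consistency with the classic Kalman-Bucy filter in the linear-Gaussian case can be checked numerically. The scalar sketch below is an illustrative deterministic ensemble Kalman-Bucy variant with made-up parameters, not the paper's code: for a linear model the ensemble variance should settle at the steady-state solution of the Riccati equation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Linear-Gaussian model: dx = a x dt + sqrt(q) dW,  dy = h x dt + sqrt(r) dB
a, q, h, r = -0.5, 0.04, 1.0, 0.01
dt, n_steps, n_ens = 0.01, 1000, 500

x_true = 1.0
ens = rng.normal(0.0, np.sqrt(0.05), n_ens)      # initial ensemble

for _ in range(n_steps):
    # simulate the truth and the (integrated) observation increment
    x_true += a * x_true * dt + np.sqrt(q * dt) * rng.normal()
    dy = h * x_true * dt + np.sqrt(r * dt) * rng.normal()

    m = ens.mean()
    p = ens.var(ddof=1)                          # empirical ensemble covariance
    gain = p * h / r
    # deterministic ensemble Kalman-Bucy update (no perturbed observations);
    # the q/(2p) term injects the process noise deterministically
    ens = ens + (a * ens + 0.5 * q / p * (ens - m)) * dt \
              + gain * (dy - h * 0.5 * (ens + m) * dt)

# steady state of the Riccati equation 2 a p + q - p^2 h^2 / r = 0
p_inf = (-1.0 + np.sqrt(1.0 + 16.0)) / 200.0
print(ens.var(ddof=1), p_inf)
```

Because the deviations from the ensemble mean evolve deterministically here, the empirical variance follows the Riccati dynamics exactly up to the Euler discretization, which is what makes the comparison with `p_inf` meaningful.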
ShapeRotator
(2018)
The quantification of complex morphological patterns typically involves comprehensive shape and size analyses, usually obtained by gathering morphological data from all the structures that capture the phenotypic diversity of an organism or object. Articulated structures are a critical component of overall phenotypic diversity, but data gathered from these structures are difficult to incorporate into modern analyses because of the complexities associated with jointly quantifying 3D shape in multiple structures. While there are existing methods for analyzing shape variation in articulated structures in two-dimensional (2D) space, these methods do not work in 3D, a rapidly growing area of capability and research. Here, we describe a simple geometric rigid rotation approach that removes the effect of random translation and rotation, enabling the morphological analysis of 3D articulated structures. Our method is based on Cartesian coordinates in 3D space, so it can be applied to any morphometric problem that also uses 3D coordinates (e.g., spherical harmonics). We demonstrate the method by applying it to a landmark-based dataset for analyzing shape variation using geometric morphometrics. We have developed an R tool (ShapeRotator) so that the method can be easily implemented in the commonly used R package geomorph and MorphoJ software. This method will be a valuable tool for 3D morphological analyses in articulated structures by allowing an exhaustive examination of shape and size diversity.
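The core geometric step, removing random translation and rotation from 3D landmark coordinates, can be illustrated with the standard Kabsch least-squares alignment. This is a generic sketch in Python with invented example data; ShapeRotator itself is an R tool and its interface and procedure for articulated sub-structures differ.

```python
import numpy as np

def rigid_align(source, target):
    """Least-squares rigid alignment (Kabsch algorithm) of two 3D landmark
    configurations: centre both, find the optimal rotation via SVD, and
    map the source onto the target's position."""
    src_c = source - source.mean(axis=0)
    tgt_c = target - target.mean(axis=0)
    H = src_c.T @ tgt_c
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # avoid improper reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return src_c @ R.T + target.mean(axis=0)

rng = np.random.default_rng(42)
landmarks = rng.normal(size=(10, 3))             # hypothetical 3D landmarks

# apply a random rotation and translation to a copy
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
moved = landmarks @ Rz.T + np.array([3.0, -1.0, 2.0])

recovered = rigid_align(moved, landmarks)
print(np.abs(recovered - landmarks).max())
```

Since the displaced copy differs from the original only by a rigid motion, the alignment recovers the original coordinates to machine precision, which is the property that makes downstream geometric-morphometric analyses invariant to how the specimen was positioned.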
Paleoearthquakes and historic earthquakes are the most important source of information for the estimation of long-term earthquake recurrence intervals in fault zones, because the corresponding sequences cover more than one seismic cycle. However, these events are often rare, dating uncertainties are enormous, and missing or misinterpreted events lead to additional problems. In the present study, I assume that the time to the next major earthquake depends on the rate of small and intermediate events between the large ones, in terms of a clock change model. Mathematically, this leads to a Brownian passage time distribution for recurrence intervals. I take advantage of an earlier finding that, under certain assumptions, the aperiodicity of this distribution can be related to the Gutenberg-Richter b-value, which can be estimated easily from instrumental seismicity in the region under consideration. In this way, both parameters of the Brownian passage time distribution can be expressed in terms of accessible seismological quantities. This allows the uncertainties in the estimation of the mean recurrence interval to be reduced, especially for short paleoearthquake sequences and high dating errors. Using a Bayesian framework for parameter estimation results in a statistical model for earthquake recurrence intervals that assimilates, in a simple way, paleoearthquake sequences and instrumental data. I present illustrative case studies from Southern California and compare the method with the commonly used approach of exponentially distributed recurrence times based on a stationary Poisson process.
Frühe mathematische Bildung
(2018)
This contribution presents current research trends in early mathematics education in the context of recently formulated target dimensions for early mathematics education (see Benz et al., 2017). It discusses play-based intervention programmes, competencies in the domain of "space and shape", the influence of language-related parameters on the development of mathematical competencies, and the mathematics-related competencies of early childhood educators. In addition, the results of a recent field study on fostering early mathematical competencies (see Dillon, Kannan, Dean, Spelke & Duflo, 2017) are presented. Finally, the development and implementation of coherent, connectable educational concepts is discussed as one of the central challenges for future research and educational efforts.
We consider the problem of low rank matrix recovery in a stochastically noisy high-dimensional setting. We propose a new estimator for the low rank matrix, based on the iterative hard thresholding method, that is computationally efficient and simple. We prove that our estimator is optimal in terms of the Frobenius risk and in terms of the entry-wise risk uniformly over any change of orthonormal basis, allowing us to provide the limiting distribution of the estimator. When the design is Gaussian, we prove that the entry-wise bias of the limiting distribution of the estimator is small, which is of interest for constructing tests and confidence sets for low-dimensional subsets of entries of the low rank matrix.
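A minimal sketch of an iterative-hard-thresholding estimator in the trace-regression setting. The dimensions, step size, iteration count, and the noiseless Gaussian design are illustrative assumptions chosen so that the iteration visibly converges; they are not the tuning analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, rank, n = 8, 1, 500

def hard_threshold(M, r):
    """Project a matrix onto the set of rank-r matrices via truncated SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

# ground-truth rank-1 matrix and Gaussian trace-regression design
M_true = np.outer(rng.normal(size=d), rng.normal(size=d))
A = rng.normal(size=(n, d, d))
y = np.einsum('nij,ij->n', A, M_true)            # noiseless observations

M = np.zeros((d, d))
for _ in range(150):
    resid = y - np.einsum('nij,ij->n', A, M)
    grad_step = np.einsum('n,nij->ij', resid, A) / n
    M = hard_threshold(M + grad_step, rank)      # iterative hard thresholding

rel_err = np.linalg.norm(M - M_true) / np.linalg.norm(M_true)
print(rel_err)
```

Each iteration is a gradient step on the empirical squared loss followed by a rank projection; with enough Gaussian measurements the design acts approximately isometrically on low-rank matrices, which is why a unit step size suffices here.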
We consider a statistical inverse learning (also called inverse regression) problem, where we observe the image of a function f through a linear operator A at i.i.d. random design points X_i, superposed with an additive noise. The distribution of the design points is unknown and can be very general. We analyze simultaneously the direct (estimation of Af) and the inverse (estimation of f) learning problems. In this general framework, we obtain strong and weak minimax optimal rates of convergence (as the number of observations n grows large) for a large class of spectral regularization methods over regularity classes defined through appropriate source conditions. This improves on or completes previous results obtained in related settings. The optimality of the obtained rates is shown not only in the exponent in n but also in the explicit dependence of the constant factor on the variance of the noise and the radius of the source condition set.
Left-right (L-R) asymmetry in the body plan is determined by nodal flow in vertebrate embryos. Shinohara et al. (Shinohara K et al. 2012 Nat. Commun. 3, 622 (doi:10.1038/ncomms1624)) used Dpcd and Rfx3 mutant mouse embryos and showed that only a few cilia were sufficient to achieve L-R asymmetry. However, the mechanism underlying the breaking of symmetry by such weak ciliary flow is unclear. Flow-mediated signals associated with L-R asymmetric organogenesis have not been clarified, and two different hypotheses, vesicle transport and mechanosensing, are currently debated in the field of developmental biology. In this study, we developed a computational model of the node system reported by Shinohara et al. and examined the feasibility of the two hypotheses with a small number of cilia. With the small number of rotating cilia, flow was induced only locally, and no strong global flow was observed in the node. Particles were then effectively transported only when they were close to the cilia, and particle transport was strongly dependent on the ciliary positions. Although the maximum wall shear rate was also influenced by ciliary position, the mean wall shear rate at the perinodal wall increased monotonically with the number of cilia. We also investigated the membrane tension of immotile cilia, which is relevant to the regulation of mechanotransduction. The results indicated that a tension of about 0.1 µN m^-1 was exerted at the base even when the applied fluid shear rate was only about 0.1 s^-1. The area of high tension was also localized at the upstream side, and negative tension appeared at the downstream side. Such localization may be useful for sensing the flow direction at the periphery, as time-averaged anticlockwise circulation was induced in the node by the rotation of a few cilia. Our numerical results support the mechanosensing hypothesis, and we expect that our study will stimulate further experimental investigations of mechanotransduction in the near future.
Earthquake rates are driven by tectonic stress buildup, earthquake-induced stress changes, and transient aseismic processes. Although the origin of the first two sources is known, transient aseismic processes are more difficult to detect. However, the knowledge of the associated changes of the earthquake activity is of great interest, because it might help identify natural aseismic deformation patterns such as slow-slip events, as well as the occurrence of induced seismicity related to human activities. For this goal, we develop a Bayesian approach to identify change-points in seismicity data automatically. Using the Bayes factor, we select a suitable model, estimate possible change-points, and we additionally use a likelihood ratio test to calculate the significance of the change of the intensity. The approach is extended to spatiotemporal data to detect the area in which the changes occur. The method is first applied to synthetic data showing its capability to detect real change-points. Finally, we apply this approach to observational data from Oklahoma and observe statistical significant changes of seismicity in space and time.
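The likelihood-ratio part of the procedure can be sketched for a single change-point in a homogeneous Poisson rate. The synthetic rates, the grid of candidate change-points, and the decision threshold below are illustrative assumptions; the Bayes-factor model selection and the spatiotemporal extension of the paper are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)

# synthetic catalog: rate 0.5 events per unit time on [0, 50),
# rate 2.0 on [50, 100), i.e. a change-point at t = 50
seg1 = np.cumsum(rng.exponential(1 / 0.5, size=200))
seg2 = 50.0 + np.cumsum(rng.exponential(1 / 2.0, size=400))
events = np.concatenate([seg1[seg1 < 50.0], seg2[seg2 < 100.0]])
T = 100.0

def seg_loglik(n, dur):
    """Poisson log-likelihood of n events on an interval of length dur,
    with the rate set to its MLE n/dur (0 events gives log-likelihood 0)."""
    return n * np.log(n / dur) - n if n > 0 else 0.0

taus = np.arange(5.0, 95.5, 0.5)                 # candidate change-points
ll = np.array([seg_loglik(np.sum(events < tau), tau)
               + seg_loglik(np.sum(events >= tau), T - tau) for tau in taus])
ll0 = seg_loglik(len(events), T)                 # constant-rate null model
tau_hat = taus[np.argmax(ll)]                    # estimated change-point
lr = 2 * (ll.max() - ll0)                        # likelihood-ratio statistic
print(tau_hat, lr)
```

A large value of `lr` indicates a statistically significant change of the intensity; in practice its null distribution must account for the maximization over the candidate change-points.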
In the present paper, we study the problem of existence of honest and adaptive confidence sets for matrix completion. We consider two statistical models: the trace regression model and the Bernoulli model. In the trace regression model, we show that honest confidence sets that adapt to the unknown rank of the matrix exist even when the error variance is unknown. Contrary to this, we prove that in the Bernoulli model, honest and adaptive confidence sets exist only when the error variance is known a priori. In the course of our proofs, we obtain bounds for the minimax rates of certain composite hypothesis testing problems arising in low rank inference.
Tomographic Reservoir Imaging with DNA-Labeled Silica Nanotracers: The First Field Validation
(2018)
This study presents the first field validation of using DNA-labeled silica nanoparticles as tracers to image subsurface reservoirs by travel-time-based tomography. During a field campaign in Switzerland, we performed short-pulse tracer tests under a forced hydraulic head gradient to conduct a multisource-multireceiver tracer test and tomographic inversion, determining the two-dimensional hydraulic conductivity field between two vertical wells. Together with three traditional solute dye tracers, we injected spherical silica nanotracers, encoded with synthetic DNA molecules, which are protected by a silica layer against damage due to chemicals, microorganisms, and enzymes. Temporal moment analyses of the recorded tracer concentration breakthrough curves (BTCs) indicate higher mass recovery, less mean residence time, and smaller dispersion of the DNA-labeled nanotracers, compared to solute dye tracers. Importantly, travel-time-based tomography, using nanotracer BTCs, yields a satisfactory hydraulic conductivity tomogram, validated by the dye tracer results and previous field investigations. These advantages of DNA-labeled nanotracers, in comparison to traditional solute dye tracers, make them well-suited for tomographic reservoir characterizations in fields such as hydrogeology, petroleum engineering, and geothermal energy, particularly with respect to resolving preferential flow paths or the heterogeneity of contact surfaces or by enabling source zone characterizations of dense nonaqueous phase liquids.
We generalise disagreement percolation to Gibbs point processes of balls with varying radii. This allows us to establish the uniqueness of the Gibbs measure and exponential decay of pair correlations in the low activity regime by comparison with a sub-critical Boolean model. Applications to the Continuum Random Cluster model and the Quermass-interaction model are presented. At the core of our proof lies an explicit dependent thinning from a Poisson point process to a dominated Gibbs point process.
We consider a distributed learning approach in supervised learning for a large class of spectral regularization methods in a reproducing kernel Hilbert space (RKHS) framework. The data set of size n is partitioned into m = O(n^α), α < 1/2, disjoint subsamples. On each subsample, some spectral regularization method (belonging to a large class including, in particular, kernel ridge regression, L2-boosting and spectral cut-off) is applied. The regression function f is then estimated via simple averaging, leading to a substantial reduction in computation time. We show that minimax optimal rates of convergence are preserved if m grows sufficiently slowly (corresponding to an upper bound for α) as n → ∞, depending on the smoothness assumptions on f and the intrinsic dimensionality. In spirit, the analysis relies on a classical bias/stochastic error analysis.
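The partition-and-average scheme can be sketched with kernel ridge regression, one member of the spectral-regularization class discussed above. The target function, kernel bandwidth, regularization level, and sample sizes below are illustrative choices, not the paper's regimes.

```python
import numpy as np

rng = np.random.default_rng(3)

def krr_fit(x, y, lam, bw):
    """Kernel ridge regression with a Gaussian kernel; returns a predictor."""
    K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * bw ** 2))
    alpha = np.linalg.solve(K + lam * len(x) * np.eye(len(x)), y)
    return lambda t: np.exp(-(t[:, None] - x[None, :]) ** 2 / (2 * bw ** 2)) @ alpha

# n samples, partitioned into m disjoint subsamples
n, m = 400, 4
x = rng.uniform(0, 1, n)
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=n)

parts = np.array_split(rng.permutation(n), m)
fits = [krr_fit(x[idx], y[idx], lam=1e-3, bw=0.15) for idx in parts]

# distributed estimator: simple averaging of the m local predictors
t = np.linspace(0.05, 0.95, 50)
f_bar = np.mean([f(t) for f in fits], axis=0)
rmse = np.sqrt(np.mean((f_bar - np.sin(2 * np.pi * t)) ** 2))
print(rmse)
```

Each subsample requires solving only an (n/m) × (n/m) linear system instead of an n × n one, which is the source of the computational savings; the averaging step then recovers accuracy comparable to the full-sample estimator when m grows slowly enough.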
For linear inverse problems Y = Aμ + ζ, it is classical to recover the unknown signal μ by iterative regularization methods (μ̂(m), m = 0, 1, ...) and halt at a data-dependent iteration τ using some stopping rule, typically based on a discrepancy principle, so that the weak (or prediction) squared error ‖A(μ̂(τ) − μ)‖2 is controlled. In the context of statistical estimation with stochastic noise ζ, we study oracle adaptation (that is, compared to the best possible stopping iteration) in the strong squared error E[‖μ̂(τ) − μ‖2]. For a residual-based stopping rule, oracle adaptation bounds are established for general spectral regularization methods. The proofs use bias and variance transfer techniques from weak prediction error to strong L2-error, as well as convexity arguments and concentration bounds for the stochastic part. Adaptive early stopping for the Landweber method is studied in further detail and illustrated numerically.
We consider truncated SVD (or spectral cut-off, projection) estimators for a prototypical statistical inverse problem in dimension D. Since calculating the singular value decomposition (SVD) only for the largest singular values is much less costly than the full SVD, our aim is to select a data-driven truncation level m̂ ∈ {1, ..., D} based only on the knowledge of the first m̂ singular values and vectors. We analyse in detail whether sequential early stopping rules of this type can preserve statistical optimality. Information-constrained lower bounds and matching upper bounds for a residual-based stopping rule are provided, which give a clear picture of the situations in which optimal sequential adaptation is feasible. Finally, a hybrid two-step approach is proposed which allows for classical oracle inequalities while considerably reducing numerical complexity.
We consider composite-composite testing problems for the expectation in the Gaussian sequence model where the null hypothesis corresponds to a closed convex subset C of R-d. We adopt a minimax point of view and our primary objective is to describe the smallest Euclidean distance between the null and alternative hypotheses such that there is a test with small total error probability. In particular, we focus on the dependence of this distance on the dimension d and variance 1/n giving rise to the minimax separation rate. In this paper we discuss lower and upper bounds on this rate for different smooth and non-smooth choices for C.
We study the Ollivier-Ricci curvature of graphs as a function of the chosen idleness. We show that this idleness function is concave and piecewise linear with at most three linear parts, and at most two linear parts in the case of a regular graph. We then apply our result to show that the idleness function of the Cartesian product of two regular graphs is completely determined by the idleness functions of the factors.
Although the detection of metastases radically changes prognosis of and treatment decisions for a cancer patient, clinically undetectable micrometastases hamper a consistent classification into localized or metastatic disease. This chapter discusses mathematical modeling efforts that could help to estimate the metastatic risk in such a situation. We focus on two approaches: (1) a stochastic framework describing metastatic emission events at random times, formalized via Poisson processes, and (2) a deterministic framework describing the micrometastatic state through a size-structured density function in a partial differential equation model. Three aspects are addressed in this chapter. First, a motivation for the Poisson process framework is presented and modeling hypotheses and mechanisms are introduced. Second, we extend the Poisson model to account for secondary metastatic emission. Third, we highlight an inherent crosslink between the stochastic and deterministic frameworks and discuss its implications. For increased accessibility the chapter is split into an informal presentation of the results using a minimum of mathematical formalism and a rigorous mathematical treatment for more theoretically interested readers.
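The first framework, metastatic emission events at random times, can be sketched by simulating a nonhomogeneous Poisson process via Ogata thinning. The exponential growth law for the primary tumour and all parameter values are illustrative assumptions; secondary emission from metastases is omitted here.

```python
import numpy as np

rng = np.random.default_rng(11)

def emission_times(alpha, growth, T, rng):
    """Ogata thinning for a nonhomogeneous Poisson process whose intensity
    alpha * exp(growth * t) tracks the (exponentially growing) tumour size."""
    lam_max = alpha * np.exp(growth * T)     # intensity is increasing on [0, T]
    t, times = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)  # candidate event at maximal rate
        if t > T:
            return np.array(times)
        if rng.uniform() < alpha * np.exp(growth * t) / lam_max:
            times.append(t)                  # accept with probability lam(t)/lam_max

alpha, growth, T = 0.5, 0.2, 10.0
counts = [len(emission_times(alpha, growth, T, rng)) for _ in range(500)]
expected = alpha / growth * (np.exp(growth * T) - 1.0)  # integral of the intensity
print(np.mean(counts), expected)
```

The empirical mean number of emissions over many simulated patients matches the integrated intensity, the basic consistency check for such a Poisson emission model; secondary emission would attach a further, later-starting process to each accepted time.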
We analyze a general class of difference operators Hε=Tε+Vε on ℓ2((εZ)d), where Vε is a multi-well potential and ε is a small parameter. We derive full asymptotic expansions of the prefactor of the exponentially small eigenvalue splitting due to interactions between two “wells” (minima) of the potential energy, i.e., for the discrete tunneling effect. We treat both the case where there is a single minimal geodesic (with respect to the natural Finsler metric induced by the leading symbol h0(x,ξ) of Hε) connecting the two minima and the case where the minimal geodesics form an ℓ+1 dimensional manifold, ℓ≥1. These results on the tunneling problem are as sharp as the classical results for the Schrödinger operator in Helffer and Sjöstrand (Commun PDE 9:337–408, 1984). Technically, our approach is pseudo-differential and we adapt techniques from Helffer and Sjöstrand [Analyse semi-classique pour l’équation de Harper (avec application à l’équation de Schrödinger avec champ magnétique), Mémoires de la S.M.F., 2 series, tome 34, pp 1–113, 1988)] and Helffer and Parisse (Ann Inst Henri Poincaré 60(2):147–187, 1994) to our discrete setting.
This chapter gives an overview of the systematic eradication of basic-science foci at European universities over the last two decades, carried out under the slogan of optimising university education for the needs and demands of society. We point out that reliance on "market demands" brings with it long-term deficiencies in maintaining the basic and advanced knowledge construction that societies need for future technological advances. University policies that claim to improve higher education towards more immediate efficiency may end up having the opposite effect, degrading its quality and its expected long-term positive impact on society.
Uniformly valid confidence intervals post model selection in regression can be constructed based on Post-Selection Inference (PoSI) constants. PoSI constants are minimal for orthogonal design matrices and, for generic design matrices, can be upper bounded as a function of the sparsity of the set of models under consideration. In order to improve on these generic sparse upper bounds, we consider design matrices satisfying a Restricted Isometry Property (RIP) condition. We provide a new upper bound on the PoSI constant in this setting. This upper bound is an explicit function of the RIP constant of the design matrix, thereby interpolating between the orthogonal setting and the generic sparse setting. We show that this upper bound is asymptotically optimal in many settings by constructing a matching lower bound.
For a given subcritical discrete Schrödinger operator H on a weighted infinite graph X, we construct a Hardy weight w which is optimal in the following sense. The operator H − λw is subcritical in X for all λ < 1, null-critical in X for λ = 1, and supercritical in any neighborhood of infinity in X for any λ > 1. Our results rely on a criticality theory for Schrödinger operators on general weighted graphs.
Cell-free protein synthesis as a novel tool for directed glycoengineering of active erythropoietin
(2018)
As one of the most complex post-translational modifications, glycosylation is widely involved in cell adhesion, cell proliferation and immune response. Glycoproteins with an identical polypeptide backbone nevertheless mostly differ in their glycosylation patterns. Due to this heterogeneity, mapping different glycosylation patterns to their associated functions is nearly impossible. In recent years, glycoengineering tools including cell line engineering, chemoenzymatic remodeling and site-specific glycosylation have attracted increasing interest. The therapeutic hormone erythropoietin (EPO) in particular has been investigated by various groups with the aim of establishing a production process that yields a defined glycosylation pattern. However, commercially available recombinant human EPO shows batch-to-batch variations in its glycoforms. We therefore present an alternative method for the synthesis of active glycosylated EPO with an engineered O-glycosylation site, combining eukaryotic cell-free protein synthesis and site-directed incorporation of non-canonical amino acids with subsequent chemoselective modifications.
For many years, "new media" was the code word for computers, which, if their advocates had their way, were to find their way into classroom teaching. Resistance, especially in primary school, was strong and varied. It is understandable that shortly after the playful introduction to learning in kindergarten, at a time when pupils must also practise social interaction and acquire fine and gross motor skills, sitting alone in front of a screen is not among the top priorities, and in our view should not be. In recent years, however, the notion of new media has changed, and what used to be associated with it has become, with the "digitalisation" not only of school teaching but of life as a whole, a linchpin of education. Bulky computers with monitors, whose very arrangement in computer labs steered collaboration down the wrong path, have given way to mobile devices in the pupils' own hands. Pupils can now work together on a single device, interact directly with the on-screen content, use the cameras, microphones and sensors to capture and process authentic data, work with the devices outside the classroom or school, and now carry almost the entire knowledge of the internet with them at nearly all times. The focus of this volume is therefore the use of tablets, and the "apps" running on them, in mathematics teaching. Five contributions make concrete teaching proposals that can serve as blueprints for app-supported lessons. The volume is rounded off by a general guide for evaluating apps for mathematics teaching, including examples.
In this thesis we discuss the characterization of orthogroups by so-called disjunctions of identities. Orthogroups form a subclass of the class of completely regular semigroups, a generalization of the concept of a group: every element of an orthogroup has some kind of inverse element such that both elements commute. By a fundamental result of A. H. Clifford, every completely regular semigroup is a semilattice of completely simple semigroups, which allows a description of the gross structure of such semigroups. In particular, every orthogroup is a semilattice of rectangular groups, which are isomorphic to direct products of rectangular bands and groups. Semilattices of rectangular groups from various classes are characterized using the concept of an alternative variety, a generalization of Birkhoff's classical notion of a variety.
After starting with some fundamental definitions and results concerning semigroups, we introduce the concept of disjunctions of identities and summarize some necessary properties. In particular, we present a disjunction of identities that is sufficient for a semigroup to be completely regular. From this identity we derive statements concerning Rees matrix semigroups, a possible representation of completely simple semigroups. A main result of this thesis is the general description of disjunctions of identities such that a completely regular semigroup satisfying the described identity is a semilattice of left groups (right groups / groups); in this case the completely regular semigroup is an orthogroup. Furthermore, we define various classes of rectangular groups whose exponents are taken from a set of pairwise coprime positive integers. An important result is the characterization of the class of all semilattices of particular rectangular groups (taken from the classes defined before) by a set-theoretically minimal set of disjunctions of identities. Additionally, we investigate semilattices of groups (so-called Clifford semigroups). For this purpose we consider abelian groups of particular exponents and prove some well-known results from the theory of Clifford semigroups in an alternative way by applying the concept of disjunctions of identities. As a practical application of the results concerning semilattices of left zero and right zero semigroups, we identify a particular transformation semigroup. To obtain more detailed information about the product of two arbitrary elements of a semilattice of semigroups, we introduce the concept of strong semilattices of semigroups. It is well known that a semilattice of groups is a strong semilattice of groups, so we can characterize strong semilattices of groups of particular pairwise coprime exponents by disjunctions of identities.
Additionally, we describe the class of all strong semilattices of left zero and right zero semigroups with the help of such identities, and we relate this statement to the theory of normal bands. A possible extension of the semilattices of rectangular groups described above can be achieved via an auxiliary total order (in terms of chains of semigroups); to this end we present a corresponding characterization by disjunctions of identities, which is evidently minimal. A list of open questions that arose during the research for this thesis but remain unresolved is attached.
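As a small illustration of the building blocks named above (not part of the thesis itself), one can check by brute force that a finite rectangular band is a completely regular semigroup satisfying the identity xyx = x:

```python
from itertools import product

# A rectangular band on I x L with multiplication (a,b)(c,d) = (a,d).
# The sizes 3 and 2 are arbitrary choices for this toy check.
I, L = range(3), range(2)
def mul(x, y):
    return (x[0], y[1])

elements = list(product(I, L))

# Associativity: this really is a semigroup.
assert all(mul(mul(x, y), z) == mul(x, mul(y, z))
           for x, y, z in product(elements, repeat=3))

# Every element is idempotent and the identity xyx = x holds, so each
# element is its own commuting inverse: the band is completely regular.
assert all(mul(x, x) == x for x in elements)
assert all(mul(mul(x, y), x) == x for x, y in product(elements, repeat=2))
print("rectangular band axioms verified")
```

The same brute-force pattern extends to checking a candidate disjunction of identities on any small finite semigroup given by its multiplication table.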
To succeed as a computer scientist when entering the profession, it is often not enough to possess isolated knowledge of technical and theoretical foundations, programming languages, tools, and self- and time management. Rather, graduates should be able to apply this knowledge in practice in an interlinked way. Unfortunately, universities rarely offer students the opportunity to practise these different areas of computer science in an integrated fashion. For this reason, we have been developing and implementing a teaching and learning concept to support practical software development courses for more than two decades. We thereby offer prospective software developers and project managers an environment in which they can acquire new, practically relevant knowledge, test themselves in practice, and apply their knowledge concretely. Our emphasis is on working in teams. The concept presented here can be transferred to similar courses and, thanks to its modularisation, modified and extended.
In addition to theoretical foundations and programming skills, a computer science degree also specifically teaches how modern software is developed in practice. A form of project work is often chosen to give students experiences that are as realistic as possible. The students develop software products for selected problems, individually or in small teams. Besides subject content, group-dynamic processes also bring cross-disciplinary competencies into focus. This contribution presents an interview study with lecturers of software project courses at RWTH Aachen and concentrates on the design of the courses and on the fostering of cross-disciplinary competencies according to a competency profile for software engineers.
Empirical investigations of cloze items for assessing mastery of the syntax of a programming language
(2018)
Cloze items based on program code can be used to test knowledge of a programming language's syntax without posing complex programming tasks whose completion requires further competencies. This contribution documents the use of ten such items in a first-semester university lecture on programming with Java. Both experiences with the construction of the items and empirical data from their use are discussed. The contribution thereby highlights, in particular, the challenges of constructing valid instruments for measuring competence in programming education. The limited and partly preliminary results on the quality of the items nevertheless suggest that creating and using such items is feasible and can contribute to competence measurement.
What is Data Science?
(2018)
In connection with the developments of recent years, particularly in the areas of big data, data management and machine learning, the handling and analysis of data have evolved considerably. Data science is now regarded as a discipline in its own right, one that is increasingly represented by dedicated degree programmes at universities. Despite this growing importance, it often remains unclear which concrete contents are associated with it, since it appears in a wide variety of forms. This contribution therefore identifies the computer science content underlying data science through a qualitative analysis of the module handbooks of established degree programmes in this area, thus contributing to a characterisation of the discipline. Using the development of a data-literacy competency model, sketched as an outlook, the importance of this characterisation for further research is made explicit.
This article presents the results of an exploratory data analysis of student performance on exam and homework problems in an introductory course on theoretical computer science. Since there has been little empirical research into the problems students face in introductory courses, and failure rates in these courses are very high, the analysis is intended to provide an overview. The results show that all students, regardless of their exam grade, perform worst on the exam and homework problems that require formal proofs. This finding supports the conjecture that didactic approaches and interventions should focus in particular on the learning of formal proof methods in order to support computer science students more sustainably in succeeding in theoretical computer science.
Teaching scientific work is a central aspect of research-oriented degree programmes such as computer science. Despite a variety of offerings, deficiencies in the quality of students' work become apparent in the medium and long term. This paper therefore analyses the profile of the students, their application of scientific working methods, and the proseminars "Introduction to Scientific Work" offered at a German university. The results of several surveys reveal various problems among students, including in their understanding of the process, their time management, and their communication.
To be prepared for life in a digital society, everyone today needs broad foundations in computer science for a variety of situations. The importance of computer science is growing not only in ever more areas of our daily lives, but also in ever more fields of study. To prepare young people for their future lives and/or careers, several universities offer computer science modules for students of other disciplines. The materials of those courses form an extensive data pool for identifying, with an empirical approach, the aspects of computer science that matter to students of other subjects. In the following, 70 modules on computing education for students of other disciplines are analysed. The materials, comprising publications, syllabi and timetables, are first examined with a qualitative content analysis following Mayring and then evaluated quantitatively. Based on the analysis, objectives, central topics and types of tools used are identified.
Students enter computer science degree programmes with very different competencies, experience and knowledge. 145 datasets on first-year computer science students, collected by learning management systems and relating exam outcomes to learning-dispositions data (e.g. student dispositions, previous experience and attitudes measured through self-reported surveys), have been exploited to identify indicators that predict academic success and hence to make effective interventions for an extremely heterogeneous group of students.
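A minimal sketch of the kind of predictor such learning-dispositions data admits. Everything below, including the two synthetic "disposition" features, the pass/fail rule and the plain logistic-regression fit, is invented for illustration and is not the study's data or method:

```python
import math
import random

rng = random.Random(0)
# Synthetic cohort: two disposition scores in [0, 1]; "pass" iff 2a + b > 1.4.
X = [(rng.uniform(0, 1), rng.uniform(0, 1)) for _ in range(200)]
y = [1 if 2.0 * a + 1.0 * b > 1.4 else 0 for a, b in X]

w0, w1, bias, lr = 0.0, 0.0, 0.0, 0.5
for _ in range(500):                      # plain gradient descent on log-loss
    for (a, b), t in zip(X, y):
        p = 1.0 / (1.0 + math.exp(-(w0 * a + w1 * b + bias)))
        g = (p - t) / len(X)              # per-sample gradient contribution
        w0, w1, bias = w0 - lr * g * a, w1 - lr * g * b, bias - lr * g

def predict(a, b):
    return 1.0 / (1.0 + math.exp(-(w0 * a + w1 * b + bias))) > 0.5

acc = sum(predict(a, b) == (t == 1) for (a, b), t in zip(X, y)) / len(X)
print(f"training accuracy: {acc:.2f}")
```

On real dispositions data one would of course validate on held-out students rather than report training accuracy; the point here is only the shape of the indicator-to-outcome model.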
Lecture Maintenance
(2018)
Much like ageing processes in software, lectures also degenerate if they are not sufficiently maintained. The reasons for this are examined, as are possible indicators and countermeasures, always from a computer scientist's perspective. Three lectures illustrate how the degeneration of courses can be counteracted. Lacking sufficiently large empirical data, the paper offers no incontrovertible truths. Rather, one aim is to give colleagues who observe similar phenomena a first anchor for their own reflection. A long-term goal is to compile a catalogue of measures for maintaining computer science lectures.
Transition metals in inorganic systems and metalloproteins can occur in different oxidation states, which makes them ideal redox-active catalysts. To gain a mechanistic understanding of the catalytic reactions, knowledge of the oxidation state of the active metals, ideally in operando, is therefore critical. L-edge X-ray absorption spectroscopy (XAS) is a powerful technique that is frequently used to infer the oxidation state via a distinct blue shift of L-edge absorption energies with increasing oxidation state. A unified description has been missing to date that accounts for the quantum-chemical notion whereby oxidation does not occur locally on the metal but on the whole molecule, together with the basic understanding that L-edge XAS probes the electronic structure locally at the metal. Here we quantify how charge and spin densities change at the metal and throughout the molecule for both redox and core-excitation processes. We explain the origin of the L-edge XAS shift between the high-spin complexes Mn(II)(acac)2 and Mn(III)(acac)3 as representative model systems and use ab initio theory to uncouple effects of oxidation-state changes from geometric effects. The shift reflects an increased electron affinity of Mn(III) in the core-excited states compared to the ground state, due to a contraction of the Mn 3d shell upon core excitation with accompanying changes in the classical Coulomb interactions. This new picture quantifies how the metal-centered core hole probes changes in formal oxidation state and encloses and substantiates earlier explanations. The approach is broadly applicable to mechanistic studies of redox-catalytic reactions in molecular systems where charge and spin localization/delocalization determine reaction pathways.
The Widom-Rowlinson model (or area-interaction model) is a Gibbs point process in R^d with formal Hamiltonian defined as the volume of ⋃_{x∈ω} B₁(x), where ω is a locally finite configuration of points and B₁(x) denotes the closed unit ball centred at x. The model is also tuned by two other parameters: the activity z > 0, related to the intensity of the process, and the inverse temperature β ≥ 0, related to the strength of the interaction. In the present paper we investigate the phase transition of the model from the point of view of percolation theory and of the liquid-gas transition. First, considering the graph connecting points at distance smaller than 2r > 0, we show that for any β ≥ 0 there exists 0 < z̃c(β, r) < +∞ such that exponential decay of connectivity at distance n occurs in the subcritical phase (i.e. z < z̃c(β, r)), and a linear lower bound on the connection at infinity holds in the supercritical case (i.e. z > z̃c(β, r)). These results are in the spirit of recent works using the theory of randomised tree algorithms (Probab. Theory Related Fields 173 (2019) 479-490, Ann. of Math. 189 (2019) 75-99, Duminil-Copin, Raoufi and Tassion (2018)). Secondly, we study the standard liquid-gas phase transition related to the uniqueness/non-uniqueness of Gibbs states depending on the parameters z, β. Old results (Phys. Rev. Lett. 27 (1971) 1040-1041, J. Chem. Phys. 52 (1970) 1670-1684) claim that a non-uniqueness regime occurs for z = β large enough, and it is conjectured that uniqueness should hold outside such a half-line (z = β ≥ βc > 0). We partially solve this conjecture in any dimension by showing that, for β large enough, non-uniqueness holds if and only if z = β. We also show that this critical value z = β corresponds to the percolation threshold z̃c(β, r) = β for β large enough, providing a direct connection between these two notions of phase transition.
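The percolation side of the first result can be pictured with a toy continuum-percolation simulation: sample a Poisson point process on a box, connect points at distance smaller than 2r, and compare the largest cluster at low and high activity. The parameters are illustrative and the β-interaction of the Widom-Rowlinson measure is not modelled, so this only sketches the geometry involved:

```python
import math
import random

def poisson(lam, rng):
    # Knuth's method for sampling Poisson(lam); fine for moderate lam.
    k, p, limit = 0, 1.0, math.exp(-lam)
    while p > limit:
        p *= rng.random()
        k += 1
    return k - 1

def largest_cluster(z, side, r, rng):
    """Largest cluster of the graph connecting points of a Poisson point
    process of activity z on [0, side]^2 whenever their distance is < 2r."""
    pts = [(rng.uniform(0, side), rng.uniform(0, side))
           for _ in range(poisson(z * side * side, rng))]
    parent = list(range(len(pts)))          # union-find over the points
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            if math.dist(pts[i], pts[j]) < 2 * r:
                parent[find(i)] = find(j)
    sizes = {}
    for i in range(len(pts)):
        root = find(i)
        sizes[root] = sizes.get(root, 0) + 1
    return max(sizes.values(), default=0)

rng = random.Random(1)
sparse = largest_cluster(z=0.2, side=8.0, r=0.5, rng=rng)  # subcritical-like
dense = largest_cluster(z=3.0, side=8.0, r=0.5, rng=rng)   # supercritical-like
print(sparse, dense)
```

At low activity clusters stay small, while at high activity a single cluster swallows most points, mirroring the subcritical/supercritical dichotomy the paper quantifies.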
SmB6 is predicted to be the first member of the intersection of topological insulators and Kondo insulators, strongly correlated materials in which the Fermi level lies in the gap of a many-body resonance that forms by hybridization between localized and itinerant states. While robust, surface-only conductivity at low temperature and the observation of surface states at the expected high-symmetry points appear to confirm this prediction, we find both surface states at the (100) surface to be topologically trivial. We find the Γ̄ state to appear Rashba split and explain the prominent X̄ state by a surface shift of the many-body resonance. We propose that the latter mechanism, which applies to several crystal terminations, can explain the unusual surface conductivity. While additional, as yet unobserved topological surface states cannot be excluded, our results show that a firm connection between the two material classes is still outstanding.
The increasing availability of earth observations necessitates mathematical methods to optimally combine such data with hydrologic models. Several algorithms exist for such purposes, under the umbrella of data assimilation (DA). However, DA methods are often applied in a suboptimal fashion for complex real-world problems, due largely to several practical implementation issues. One such issue is error characterization, which is known to be critical for a successful assimilation. Mischaracterized errors lead to suboptimal forecasts, and in the worst case, to degraded estimates even compared to the no assimilation case. Model uncertainty characterization has received little attention relative to other aspects of DA science. Traditional methods rely on subjective, ad hoc tuning factors or parametric distribution assumptions that may not always be applicable. We propose a novel data-driven approach (named SDMU) to model uncertainty characterization for DA studies where (1) the system states are partially observed and (2) minimal prior knowledge of the model error processes is available, except that the errors display state dependence. It includes an approach for estimating the uncertainty in hidden model states, with the end goal of improving predictions of observed variables. The SDMU is therefore suited to DA studies where the observed variables are of primary interest. Its efficacy is demonstrated through a synthetic case study with low-dimensional chaotic dynamics and a real hydrologic experiment for one-day-ahead streamflow forecasting. In both experiments, the proposed method leads to substantial improvements in the hidden states and observed system outputs over a standard method involving perturbation with Gaussian noise.
Rapid population and economic growth in Southeast Asia has been accompanied by extensive land use change with consequent impacts on catchment hydrology. Modeling methodologies capable of handling changing land use conditions are therefore becoming ever more important and are receiving increasing attention from hydrologists. A recently developed data-assimilation-based framework that allows model parameters to vary through time in response to signals of change in observations is considered for a medium-sized catchment (2880 km(2)) in northern Vietnam experiencing substantial but gradual land cover change. We investigate the efficacy of the method as well as the importance of the chosen model structure in ensuring the success of a time-varying parameter method. The method was used with two lumped daily conceptual models (HBV and HyMOD) that gave good-quality streamflow predictions during pre-change conditions. Although both time-varying parameter models gave improved streamflow predictions under changed conditions compared to the time-invariant parameter model, persistent biases for low flows were apparent in the HyMOD case. It was found that HyMOD was not suited to representing the modified baseflow conditions, resulting in extreme and unrealistic time-varying parameter estimates. This work shows that the chosen model can be critical for ensuring the time-varying parameter framework successfully models streamflow under changing land cover conditions. It can also be used to determine whether land cover changes (and not just meteorological factors) contribute to the observed hydrologic changes in retrospective studies where the lack of a paired control catchment precludes such an assessment.
In dual IT education, which combines vocational and academic qualification, the tools typical of the profession, such as laptops, are also used in the teaching and learning processes of the academic course units. For examinations, however, classical paper-based exams are often still used. Course units with a high blended-learning share but no e-examination are thereby perceived as "inconsistent". This article presents an empirical study investigating which influences from the personal learning biographies of teachers in dual IT education can lead them to accept or reject e-assessment as a summative module examination. As an example, interviews were conducted with lecturers and analysed with regard to the connection between learning biography, the didactic design of teaching and learning processes, satisfaction, and willingness to change.
A doppelalgebra is an algebra defined on a vector space with two binary linear associative operations. Doppelalgebras play a prominent role in algebraic K-theory. We consider doppelsemigroups, that is, sets with two binary associative operations satisfying the axioms of a doppelalgebra. Doppelsemigroups are a generalization of semigroups and they have relationships with such algebraic structures as interassociative semigroups, restrictive bisemigroups, dimonoids, and trioids.
In the lecture notes numerous examples of doppelsemigroups and of strong doppelsemigroups are given. The independence of axioms of a strong doppelsemigroup is established. A free product in the variety of doppelsemigroups is presented. We also construct a free (strong) doppelsemigroup, a free commutative (strong) doppelsemigroup, a free n-nilpotent (strong) doppelsemigroup, a free n-dinilpotent (strong) doppelsemigroup, and a free left n-dinilpotent doppelsemigroup. Moreover, the least commutative congruence, the least n-nilpotent congruence, the least n-dinilpotent congruence on a free (strong) doppelsemigroup and the least left n-dinilpotent congruence on a free doppelsemigroup are characterized.
The book addresses graduate students, post-graduate students, researchers in algebra and interested readers.
Background/Aims: Angiogenesis plays a key role during embryonic development. The vascular endothelin (ET) system is involved in the regulation of angiogenesis, and lipopolysaccharides (LPS) can induce angiogenesis. The effects of ET blockers on baseline and LPS-stimulated angiogenesis during embryonic development have remained unknown so far. Methods: The blood vessel density (BVD) of chorioallantoic membranes (CAMs) treated with saline (control), LPS, and/or the ETA receptor blocker BQ123 and the ETB receptor blocker BQ788 was quantified and analyzed using the IPP 6.0 image analysis program. Moreover, the expression of ET-1, ET-2, ET-3, ET receptor A (ETRA), ET receptor B (ETRB) and VEGFR2 mRNA during embryogenesis was analyzed by semi-quantitative RT-PCR. Results: All components of the ET system are detectable during chicken embryogenesis. LPS increased angiogenesis substantially. This process was completely blocked by treatment with a combination of the ETA receptor blocker BQ123 and the ETB receptor blocker BQ788, an effect accompanied by a decrease in ETRA, ETRB, and VEGFR2 gene expression. However, baseline angiogenesis was not affected by combined ETA/ETB receptor blockade. Conclusion: During chicken embryogenesis, LPS-stimulated angiogenesis, but not baseline angiogenesis, is sensitive to combined ETA/ETB receptor blockade.
We study the Volterra property of a class of anisotropic pseudo-differential operators on R × B for a manifold B with edge Y and time variable t. This exposition belongs to a program for studying parabolicity in such a situation. In the present consideration we establish non-smoothing elements in a subalgebra with anisotropic operator-valued symbols of Mellin type, holomorphic in the complex Mellin covariable from the cone theory, where the covariable τ of t extends to symbols holomorphic with respect to τ in the lower complex half-plane. The resulting space of Volterra operators enlarges an approach of Buchholz (Parabolische Pseudodifferentialoperatoren mit operatorwertigen Symbolen. Ph.D. thesis, Universität Potsdam, 1996) by the elements necessary for a new operator algebra containing Volterra parametrices under an appropriate condition of anisotropic ellipticity. Our approach avoids some difficulty in choosing Volterra quantizations in the edge case by generalizing specific achievements from the isotropic edge calculus, obtained by Seiler (Pseudodifferential calculus on manifolds with non-compact edges, Ph.D. thesis, University of Potsdam, 1997); see also Gil et al. (in: Demuth et al. (eds) Mathematical research, vol 100. Akademie Verlag, Berlin, pp 113-137, 1997; Osaka J Math 37:221-260, 2000).
In the thesis there are constructed new quantizations for pseudo-differential boundary value problems (BVPs) on manifolds with edge. The shape of operators comes from Boutet de Monvel’s calculus which exists on smooth manifolds with boundary. The singular case, here with edge and boundary, is much more complicated. The present approach simplifies the operator-valued symbolic structures by using suitable Mellin quantizations on infinite stretched model cones of wedges with boundary. The Mellin symbols themselves are, modulo smoothing ones, with asymptotics, holomorphic in the complex Mellin covariable. One of the main results is the construction of parametrices of elliptic elements in the corresponding operator algebra, including elliptic edge conditions.
We present a project combining lidar, photometer and particle counter data with a regularization software tool for a closure study of aerosol microphysical property retrieval. In a first step, only lidar data are used to retrieve the particle size distribution (PSD). Secondly, photometer data are added, which results in good consistency of the retrieved PSDs. Finally, the retrieved PSDs can be compared with the PSD measured by a particle counter. The data presented here were taken in Ny-Ålesund, Svalbard, as an example.
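The inversion behind such a retrieval is ill-posed, which is why a regularization tool is needed. A minimal Tikhonov-regularization sketch, with a made-up 3×3 smoothing kernel standing in for the real lidar forward model (the matrix, true solution and parameters are all illustrative):

```python
# Recover x from b = A x for an ill-conditioned kernel A by minimising
# ||A x - b||^2 + lam * ||x||^2 with plain gradient descent.
A = [[1.0, 0.9, 0.8],
     [0.9, 1.0, 0.9],
     [0.8, 0.9, 1.0]]            # nearly collinear rows: ill-conditioned
x_true = [1.0, 2.0, 0.5]         # invented "true" size distribution
b = [sum(A[i][j] * x_true[j] for j in range(3)) for i in range(3)]

def tikhonov(A, b, lam, steps=20000, lr=0.05):
    n = len(b)
    x = [0.0] * n
    for _ in range(steps):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(n)]
        # gradient of the penalised least-squares objective
        g = [2 * sum(A[i][j] * r[i] for i in range(n)) + 2 * lam * x[j]
             for j in range(n)]
        x = [x[j] - lr * g[j] for j in range(n)]
    return x

x_hat = tikhonov(A, b, lam=1e-3)
print([round(v, 2) for v in x_hat])
```

With exact data the regularized solution stays close to x_true; with noisy b, the penalty λ‖x‖² trades a small bias for stability against the near-singular directions of A, which is the essence of the regularized retrieval step.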
Manifolds with corners in the present investigation are non-smooth configurations - specific stratified spaces - with an incomplete metric such as cones, manifolds with edges, or corners of piecewise smooth domains in Euclidean space. We focus here on operators on such "corner manifolds" of singularity order <= 2, acting in weighted corner Sobolev spaces. The corresponding corner degenerate pseudo-differential operators are formulated via Mellin quantizations, and they also make sense on infinite singular cones.