Refine
Year of publication
Document Type
- Article (1079)
- Monograph/Edited Volume (427)
- Preprint (378)
- Doctoral Thesis (151)
- Other (46)
- Postprint (32)
- Review (16)
- Conference Proceeding (9)
- Master's Thesis (7)
- Part of a Book (3)
Language
- English (1875)
- German (265)
- French (7)
- Italian (3)
- Multiple languages (1)
Keywords
- random point processes (19)
- statistical mechanics (19)
- stochastic analysis (19)
- index (14)
- Fredholm property (12)
- boundary value problems (12)
- cluster expansion (10)
- data assimilation (10)
- regularization (10)
- elliptic operators (9)
- Cauchy problem (8)
- Bayesian inference (7)
- K-theory (7)
- discrepancy principle (7)
- manifolds with singularities (7)
- pseudodifferential operators (7)
- Hodge theory (6)
- Navier-Stokes equations (6)
- Neumann problem (6)
- Pseudo-differential operators (6)
- Toeplitz operators (6)
- reciprocal class (6)
- relative index (6)
- stochastic differential equations (6)
- Atiyah-Patodi-Singer theory (5)
- Boundary value problems (5)
- Dirac operator (5)
- Elliptic complexes (5)
- ellipticity (5)
- ensemble Kalman filter (5)
- index theory (5)
- infinite-dimensional Brownian diffusion (5)
- linear term (5)
- local time (5)
- reversible measure (5)
- surgery (5)
- Bayesian inversion (4)
- DLR equations (4)
- Data assimilation (4)
- Earthquake interaction (4)
- Geomagnetic field (4)
- Lefschetz number (4)
- Markov processes (4)
- Mathematics teaching (4)
- Modelling (4)
- Probabilistic Cellular Automata (4)
- Statistical seismology (4)
- Zaremba problem (4)
- clone (4)
- differential operators (4)
- elliptic complexes (4)
- elliptic operator (4)
- graphs (4)
- inversion (4)
- linear hypersubstitution (4)
- manifold with singularities (4)
- multiplicative noise (4)
- nonlinear operator (4)
- optimal transport (4)
- partial clone (4)
- spectral flow (4)
- star product (4)
- tunneling (4)
- 'eta' invariant (3)
- Atiyah-Bott condition (3)
- Atiyah-Bott obstruction (3)
- Bayesian inverse problems (3)
- Dirichlet form (3)
- Dirichlet to Neumann operator (3)
- Dyson-Schwinger equations (3)
- Eigenvalues (3)
- Fredholm complexes (3)
- Fredholm operators (3)
- Gibbs measure (3)
- Kalman filter (3)
- Lidar (3)
- MCMC (3)
- Malliavin calculus (3)
- Markov chain (3)
- Mellin transform (3)
- Probabilistic forecasting (3)
- Quasilinear equations (3)
- Riemannian manifold (3)
- Stochastic Differential Equation (3)
- Transformation semigroups (3)
- Probability theory (3)
- aerosol size distribution (3)
- asymptotic behavior (3)
- asymptotic expansion (3)
- boundary value problem (3)
- classical solution (3)
- cohomology (3)
- conical singularities (3)
- conormal symbol (3)
- counting process (3)
- de Rham complex (3)
- duality formula (3)
- equi-singular connections (3)
- eta invariant (3)
- evolution equation (3)
- filter (3)
- forecasting (3)
- gradient flow (3)
- hard core potential (3)
- hyperbolic tilings (3)
- ill-posed problem (3)
- index of elliptic operators in subspaces (3)
- isotopic tiling theory (3)
- kernel methods (3)
- localization (3)
- logarithmic source condition (3)
- manifolds with conical singularities (3)
- minimax convergence rates (3)
- non-Markov drift (3)
- parameter estimation (3)
- point process (3)
- pseudo-differential boundary value problems (3)
- reciprocal processes (3)
- relative rank (3)
- reproducing kernel Hilbert space (3)
- skew Brownian motion (3)
- statistical seismology (3)
- stochastic bridge (3)
- the Cauchy problem (3)
- transformations (3)
- transition path theory (3)
- uncertainty quantification (3)
- 26D15 (2)
- 31C20 (2)
- 35B09 (2)
- 35R02 (2)
- 39A12 (primary) (2)
- 58E35 (secondary) (2)
- Aerosols (2)
- Ageing (2)
- Aluminium (2)
- Arctic haze (2)
- Asymptotics of solutions (2)
- Averaging principle (2)
- Beltrami equation (2)
- Boolean model (2)
- Boundary value methods (2)
- Boutet de Monvel's calculus (2)
- Brownian bridge (2)
- Brownian motion with discontinuous drift (2)
- Canonical Gibbs measure (2)
- Carleman matrix (2)
- Cauchy data spaces (2)
- Chemotaxis (2)
- Cluster expansion (2)
- Corona (2)
- Cox model (2)
- DLR equation (2)
- Dirac operators (2)
- Dirac operator (2)
- Dirichlet problem (2)
- Duality formula (2)
- Edge calculus (2)
- Einstein-Hilbert action (2)
- Ellipticity of corner-degenerate operators (2)
- Ensemble Kalman filter (2)
- Euler-Lagrange equations (2)
- Finite difference method (2)
- Finsler distance (2)
- Fokker-Planck equation (2)
- Foucault (2)
- Fourth order Sturm-Liouville problem (2)
- Fredholm operator (2)
- Gamma-convergence (2)
- Gaussian process (2)
- Geomagnetism (2)
- Geomagnetism (2)
- Gibbs field (2)
- Gibbs point process (2)
- Gibbs point processes (2)
- Girsanov formula (2)
- Goursat problem (2)
- Graphs (2)
- Heat equation (2)
- Hughes-free (2)
- Index theory (2)
- Infinite-dimensional SDE (2)
- Integrability (2)
- Inverse Sturm-Liouville problem (2)
- Inversion (2)
- Iran (2)
- Competencies (2)
- Polymer optical fibres (2)
- Lagrangian submanifolds (2)
- Lagrangian system (2)
- Lame system (2)
- Long-term behaviour (2)
- Laplace equation (2)
- Laplace-Beltrami operator (2)
- Laplacian (2)
- Lefschetz fixed point formula (2)
- Left-ordered groups (2)
- Levenberg-Marquardt method (2)
- Levy measure (2)
- Levy process (2)
- Optical fibres (2)
- Machine learning (2)
- Magnus expansion (2)
- Markov chains (2)
- Markov processes (2)
- Didactics of mathematics (2)
- Mathematical physics (2)
- Maximal subsemigroups (2)
- Maximum expected earthquake magnitude (2)
- Mellin symbols with values in the edge calculus (2)
- Meromorphic operator-valued symbols (2)
- Model order reduction (2)
- Morse-Smale property (2)
- Multiple zeta values (2)
- Nonlinear Laplace operator (2)
- Numerov's method (2)
- Onsager-Machlup functional (2)
- Order-preserving transformations (2)
- PBPK (2)
- POF (2)
- Perturbed complexes (2)
- Picard-Fuchs equations (2)
- Poincare inequality (2)
- Boundary value problems (2)
- Removable sets (2)
- Riemann-Hilbert problem (2)
- Royden boundary (2)
- Runge-Kutta methods (2)
- SPDEs (2)
- Schrödinger operators (2)
- Scientific discovery learning (2)
- Seismicity and tectonics (2)
- Specific entropy (2)
- Spin Geometry (2)
- Stochastic differential equations (2)
- Stochastics (2)
- Scattering (2)
- Sturm-Liouville problem (2)
- TIMSS (2)
- Teamarbeit (2)
- Temperature (2)
- Tikhonov regularization (2)
- Time series (2)
- Tunnel effect (2)
- Uncertainty quantification (2)
- Teaching method (2)
- Branching process (2)
- Video study (2)
- Vietnam (2)
- WKB method (2)
- WKB-expansion (2)
- Wasserstein distance (2)
- Wavelet transform (2)
- Wave equation (2)
- Wiener measure (2)
- Yamabe operator (2)
- adaptive estimation (2)
- alpha-stable Levy process (2)
- analytic continuation (2)
- and prediction (2)
- approximate differentiability (2)
- asymptotic method (2)
- birth-death-mutation-competition point process (2)
- boundary layer (2)
- boundary regularity (2)
- branching process (2)
- bridge (2)
- censoring (2)
- coarea formula (2)
- confidence sets (2)
- conjugate gradient (2)
- consistency (2)
- corner Sobolev spaces with double weights (2)
- counterterms (2)
- coupling (2)
- coupling methods (2)
- curvature (2)
- detailed balance equation (2)
- didactics of mathematics (2)
- difference operator (2)
- dimension independent bound (2)
- division of spaces (2)
- early stopping (2)
- edge singularities (2)
- edge-degenerate operators (2)
- elliptic boundary value problems (2)
- elliptic families (2)
- elliptic family (2)
- elliptic system (2)
- estimation (2)
- eta-invariant (2)
- exact simulation (2)
- existence (2)
- first boundary value problem (2)
- formula (2)
- generators (2)
- geodesic distance (2)
- geodesics (2)
- geometry (2)
- ground state (2)
- hard core interaction (2)
- heat equation (2)
- heat kernel (2)
- heterogeneity (2)
- high dimensional (2)
- higher-order Sturm–Liouville problems (2)
- holomorphic solution (2)
- homotopy classification (2)
- ill-posed problems (2)
- index formulas (2)
- infinite divisibility (2)
- infinite-dimensional diffusion (2)
- infinitely divisible point processes (2)
- inflammatory bowel disease (2)
- infliximab (2)
- integral formulas (2)
- integration by parts formula (2)
- interacting particle systems (2)
- interaction matrix (2)
- inverse Sturm–Liouville problems (2)
- inverse ill-posed problem (2)
- inverse problems (2)
- iterative regularization (2)
- jump process (2)
- knots (2)
- lattice packing and covering (2)
- lidar (2)
- linear formula (2)
- linear tree language (2)
- linking coefficients (2)
- localisation (2)
- long-time behaviour (2)
- manifolds with edges (2)
- mapping degree (2)
- maps on surfaces (2)
- marked Gibbs point processes (2)
- mathematics education (2)
- mathematical modelling (2)
- maximal subsemigroups (2)
- maximum a posteriori (2)
- maximum likelihood estimator (2)
- metastability (2)
- methods (2)
- microdialysis (2)
- mild solution (2)
- minimax optimality (2)
- modeling (2)
- modelling, optical fibres, waveguides, POF, scattering, temperature, aging, ageing (2)
- models (2)
- modified Landweber method (2)
- modn-index (2)
- molecular motor (2)
- molecular weaving (2)
- monodromy matrix (2)
- monotone coupling (2)
- networks (2)
- nonlinear (2)
- nonlinear equations (2)
- nonlinear filtering (2)
- nonparametric regression (2)
- normal reflection (2)
- operator-valued symbols (2)
- optimal rate (2)
- optical fibres (2)
- orbifolds (2)
- p-Laplace operator (2)
- parametrices (2)
- partial least squares (2)
- particle filter (2)
- path integral (2)
- pediatrics (2)
- periodic entanglement (2)
- pharmacokinetics (2)
- pharmacometrics (2)
- polyhedra and polytopes (2)
- population pharmacokinetics (2)
- positive solutions (2)
- prediction (2)
- pseudo-differential operators (2)
- pseudodifferential operators (2)
- pseudodifferential operator (2)
- quantization (2)
- random walk on Abelian group (2)
- random walks on graphs (2)
- reciprocal characteristics (2)
- regular figures (2)
- regularisation (2)
- regularization methods (2)
- regularizer (2)
- regularizers (2)
- renormalization Hopf algebra (2)
- restricted range (2)
- reciprocal classes (2)
- root functions (2)
- sampling (2)
- sequential data assimilation (2)
- singular manifolds (2)
- singular partial differential equation (2)
- singular perturbation (2)
- skew diffusions (2)
- small noise asymptotic (2)
- small parameter (2)
- spectral theorem (2)
- stability and accuracy (2)
- stable limit cycle (2)
- star-product (2)
- statistical model selection (2)
- stochastic ordering (2)
- stopping rules (2)
- symmetry conditions (2)
- tangles (2)
- teaching methods (2)
- term (2)
- time duality (2)
- trace (2)
- transfer operator (2)
- ultracontractivity (2)
- uniqueness (2)
- variational stability (2)
- wave equation (2)
- weak boundary values (2)
- weighted edge spaces (2)
- weighted spaces (2)
- (2+1)-dimensional gravity (1)
- (co)boundary operator (1)
- (generalised) weakly differentiable function (1)
- (sub-) tropical Africa (1)
- (sub-) tropical Africa (1)
- 1st Eigenvalue (1)
- 31A25 (1)
- 35J70 (1)
- 35K65 (1)
- 4-manifolds (1)
- 47A52 (1)
- 47G30 (1)
- 58J40 (1)
- 65F18 (1)
- 65R20 (1)
- 65R32 (1)
- 78A46 (1)
- ACAT (1)
- AERONET (1)
- AFM (1)
- ALOS-2 PALSAR-2 (1)
- AMS (1)
- APC concentration gradient (1)
- APS problem (1)
- Absorption kinetics (1)
- Achievement goal orientation (1)
- Activity Theory (1)
- Aerosol (1)
- Aerosols (1)
- Agmon estimates (1)
- Al-26 (1)
- Alfred Schütz (1)
- Algebraic Birkhoff factorisation (1)
- Algebraic quantum field theory (1)
- Algorithmic (1)
- Alternative identities (1)
- Alternative varieties (1)
- Aluminium adjuvants (1)
- Analysis (1)
- Analytic continuation (1)
- Analytic extension (1)
- Other disciplines (1)
- Initial-boundary value problem (1)
- Angiogenesis (1)
- Angle (1)
- Angular derivatives (1)
- Animal movement modeling (1)
- Anisotropic pseudo-differential operators (1)
- Approximate approximations (1)
- Approximate likelihood (1)
- Arctic (1)
- Arnoldi process (1)
- Artof (1)
- Assimilation (1)
- Asymptotic variance of maximum partial likelihood estimate (1)
- Asymptotic expansion (1)
- Atiyah-Singer theorem (1)
- Atmosphere (1)
- Attractive Dynamics (1)
- Audience Response System (1)
- Problem collection (1)
- Extinction probability (1)
- Automotive (1)
- BTZ black hole (1)
- Banach-valued process (1)
- Bayesian (1)
- Bayesian Inference (1)
- Bayesian method (1)
- Bayesian inference (1)
- Bernstein inequality (1)
- Equation of motion (1)
- Proof tasks (1)
- Bienaymé-Galton-Watson process (1)
- Bienaymé-Galton-Watson process (1)
- Big Data (1)
- Biodiversity (1)
- Birkhoff theorem (1)
- Birth-and-death process (1)
- Bisectorial operator (1)
- Bivariant K-theory (1)
- Blended Learning (1)
- Blended learning (1)
- Blood coagulation network (1)
- Borel functions (1)
- Borel functions (1)
- Bose-Einstein condensation (1)
- Boundary Value Problems (1)
- Boundary value method (1)
- Boundary value problem (1)
- Boundary value problems for first order systems (1)
- Boundary-contact problems (1)
- Bounds (1)
- Boutet de Monvel's calculus (1)
- Brownian motion (1)
- Broyden's method (1)
- Fractions (1)
- Bruck-Reilly extension (1)
- C-Test (1)
- C0-semigroup (1)
- CCR-algebra (1)
- CFD (1)
- CHAMP satellite (1)
- COCOMO (1)
- COVID-19 (1)
- CRPS (1)
- Caccioppoli inequality (1)
- Calculation (1)
- Calculus of Variation (1)
- Calculus of conormal symbols (1)
- Calderón projections (1)
- Canonical (Marcus) SDE (1)
- Capture into resonance (1)
- Carleman formulas (1)
- Cartan's development (1)
- Cartesian product of varifolds (1)
- Case-Cohort-Design (1)
- Cusped plates (1)
- Categories of stratified spaces (1)
- Cauchy Riemann operator (1)
- Cauchy horizon (1)
- Cauchy horizon (1)
- Cell-level kinetics (1)
- Censoring (1)
- Central extensions of groups (1)
- Chamseddine-Connes spectral action (1)
- Characteristic polynomial (1)
- Cheeger inequalities (1)
- Cheeger inequality (1)
- Chern character (1)
- Chicken chorioallantoic membrane (CAM) (1)
- Classification (1)
- Classroom Response System (1)
- Clearance induction (1)
- Clicker (1)
- Clifford algebra (1)
- Clifford semigroup (1)
- Clifford semigroups (1)
- Cloud Computing (1)
- Cluster expansion (1)
- Cluster expansion (1)
- Cluster analysis (1)
- Coarea formula (1)
- Code comprehension (1)
- Collapse (1)
- Commutative geometries (1)
- Complete asymptotics (1)
- Composition operators (1)
- Compound Poisson processes (1)
- Computer simulations (1)
- Conceptual change (1)
- Condition number (1)
- Cone (1)
- Cone and edge pseudo-differential operators (1)
- Confidence intervals (1)
- Conical zeta values (1)
- Connectivity (1)
- Connes-Kreimer Hopf algebra (1)
- Conradian left-order (1)
- Conradian ordered groups (1)
- Constant scalar curvature (1)
- Continuum (1)
- Continuum random cluster model (1)
- Control theory (1)
- Convergence rates (1)
- Convex cones (1)
- Core (1)
- Core Field Modeling (1)
- Core dynamics (1)
- Core field (1)
- Corner boundary value problems (1)
- Corner pseudo-differential operators (1)
- Correlation based modelling (1)
- Counting process (1)
- Coupling (1)
- Covid-19 (1)
- Cox model (1)
- Crack theory (1)
- Critical mathematics education (1)
- Critique (1)
- Cross-effects (1)
- Curie-Weiss Potts Model (1)
- Curie-Weiss Potts model (1)
- Curvature varifold (1)
- C-reactive protein remission (1)
- DFN (1)
- DLR equations (1)
- DMSP (1)
- Daily gravity field (1)
- Data Assimilation (1)
- Data Literacy (1)
- Data Science (1)
- Data augmentation (1)
- Data-Driven Methods (1)
- Data-driven modelling (1)
- Data assimilation (1)
- Data assimilation (1)
- Data-driven methods (1)
- De Rham complex (1)
- Decay of eigenfunctions (1)
- Decorated cones (1)
- Definitions (1)
- Degeneration processes (1)
- Delaney-Dress (1)
- Delaney–Dress tiling theory (1)
- Deligne Cohomology (1)
- Deligne cohomology (1)
- Dependent thinning (1)
- Derivation (1)
- Design Research (1)
- Detailed balance (1)
- Detection of multiple transitions (1)
- Determinant (1)
- Determinantal point processes (1)
- Determinant (1)
- Deterministic finite state automata (1)
- Diagrams (1)
- Density of a measure (1)
- Dickkopf diffusion and feedback regulation (1)
- Didactics of mathematics (1)
- Differential geometry (1)
- Differential Geometry (1)
- Differential invariant (1)
- Differential geometry (1)
- Differential operators (1)
- Difference operator (1)
- Diffusion process (1)
- Digital Tools (1)
- Digital technology (1)
- Digital tools (1)
- Diophantine Approximation (1)
- Dirac Operator (1)
- Dirac-harmonic maps (1)
- Dirac-harmonic maps (1)
- Dirac-type operator (1)
- Direct method (1)
- Dirichlet mixture (1)
- Dirichlet-form (1)
- Dirichlet-to-Neumann operator (1)
- Disagreement percolation (1)
- Discontinuous Robin condition (1)
- Discovery learning (1)
- Discrete Dirichlet forms (1)
- Discrete-element method (1)
- Dispositional learning analytics (1)
- Distributed Learning (1)
- Disciplining (1)
- Division trees (1)
- Doblin (1)
- Dobrushin criterion (1)
- Dobrushin criterion (1)
- Dobrushin criterion (1)
- Doeblin (1)
- Duality formulas (1)
- Dubrovin ring (1)
- DySEM (1)
- Dynamical systems (1)
- Dynamo (1)
- Dynamo: theories and simulations (1)
- Döblin (1)
- E-band (1)
- EGFR (1)
- EM (1)
- Ergodicity of Markov chains (1)
- Earth rotation (1)
- Earth's magnetic field (1)
- Earthquake dynamics (1)
- Earthquake modeling (1)
- Ecosystem (1)
- Edge and corner pseudo-differential operators (1)
- Edge degenerate operators (1)
- Edge symbols (1)
- Edge-degenerate operators (1)
- Efficient solutions (1)
- Effort (1)
- Gauge theory (1)
- Eigenvalue problem (1)
- Einstein manifolds (1)
- Einstein space (1)
- Einstein-Hilbert action (1)
- Einstein manifolds (1)
- Elasticity (1)
- Electromagnetic induction (1)
- Electron spectroscopy (1)
- Electrodynamics (1)
- Elliptic boundary (1)
- Elliptic equation with order degeneration (1)
- Elliptic operators (1)
- Elliptic operators in domains with edges (1)
- Ellipticity and parametrices (1)
- Ellipticity of edge-degenerate operators (1)
- Elliptic complexes (1)
- Ellipticity (1)
- Empirical study (1)
- Endothelin (ET) (1)
- Energy resolution (1)
- Ensemble Kalman (1)
- Ensemble Kalman Filter (1)
- Discovery learning (1)
- Entropy method (1)
- Entropy method (1)
- Question of origin (1)
- Environmental DNA (1)
- Epidemiology (1)
- Epidemiology (1)
- Epistemology (1)
- Epistemology (1)
- Equivalence (1)
- Error analysis (1)
- Error control/adaptivity (1)
- Error covariance (1)
- Essential spectrum (1)
- Estimability (1)
- Estimation for branching processes (1)
- Euclidean fields (1)
- Euler equations (1)
- Euler operator (1)
- Euler's theta functions (1)
- Evolution Strategies (1)
- Evolution equation (1)
- Evolution strategies (1)
- Exact solution (1)
- Exploratory data analysis (1)
- Exponential decay of pair correlation (1)
- Extensive transformation (1)
- Extremal problem (1)
- Eyring-Kramers formula (1)
- FIB patterning (1)
- False discovery rate (1)
- Fast win (1)
- Fault slip (1)
- Feedback (1)
- Feller diffusion processes (1)
- Feller diffusion processes (1)
- Fence (1)
- Fermi golden rule (1)
- Skills (1)
- Feynman-Kac formula (1)
- Fibroblasts (1)
- Filtering (1)
- Finite energy sections (1)
- Finite transformation semigroup (1)
- Finsler distance (1)
- Finsler-distance (1)
- First exit location (1)
- First exit time (1)
- First order PDE (1)
- First passage time (1)
- First variation (1)
- Fischer-Riesz equations (1)
- Fitness (1)
- Fixational eye movements (1)
- Flocking (1)
- Fluvial (1)
- Foliated spaces (1)
- Force splitting (1)
- Forecasting and prediction (1)
- Form (1)
- Formal languages and automata (1)
- Formative assessment (1)
- Research (1)
- Fortuin-Kasteleyn representation (1)
- Fourier integral operators (1)
- Fourier and Mellin transform (1)
- Fourier and Mellin transforms (1)
- Fourier transform (1)
- Fourier-Laplace transform (1)
- Fractal (1)
- Fractions with linear poles (1)
- Fredholm complexes (1)
- Fredholm alternative (1)
- Freidlin-Wentzell theory (1)
- Understanding of others (1)
- Early intervention (1)
- Full rank matrix filters (1)
- Functional calculus (1)
- Fundamental ideas (1)
- Functor geometry (1)
- Future time interval (1)
- Fuzzy logic (1)
- G-Wishart distribution (1)
- G-index (1)
- G-trace (1)
- GPS (1)
- GRACE (1)
- Gamification (1)
- Gauge theory (1)
- Gauss-Bonnet-Chern (1)
- Gaussian Loop Processes (1)
- Gaussian graphical models (1)
- Gaussian kernel estimators (1)
- Gaussian mixtures (1)
- Gaussian processes (1)
- Gaussian sequence model (1)
- Gaussian processes (1)
- Gaussian loop process (1)
- Gender (1)
- Generalised mean curvature vector (1)
- Generalized hybrid Monte Carlo (1)
- Generalized translation operator (1)
- Geodetic measurements (1)
- Geodynamo (1)
- Geodesics (1)
- Geomagnetic jerks (1)
- Geomagnetic models (1)
- Geomagnetic secular variation (1)
- Geomagnetic storm (1)
- Geometric Analysis (1)
- Geometry (1)
- Geometry teaching (1)
- Geometric analysis (1)
- Geometric reproduction distribution (1)
- Geometry (1)
- Geopotential theory (1)
- Gerbes (1)
- Gerbes (1)
- Gestagenic drug (1)
- Design principles (1)
- Gestures (1)
- Gevrey classes (1)
- Gibbs measures (1)
- Gibbs perturbation (1)
- Gibbs processes (1)
- Gibbs state (1)
- Gibbs point processes (1)
- Gigli-Mantegazza flow (1)
- Glauber Dynamics (1)
- Glauber dynamics (1)
- Global Analysis (1)
- Global differential geometry (1)
- Global attractor (1)
- Global sensitivity analysis (1)
- Global differential geometry (1)
- Globally hyperbolic Lorentz manifold (1)
- Goal specificity (1)
- Gradient flow (1)
- Gradient flow (1)
- Granular matter (1)
- Graph Laplacians (1)
- Graph theory (1)
- Gravitation (1)
- Gravitational wave (1)
- Gravity anomalies and Earth structure (1)
- Greatest harmonic minorant (1)
- Green and Mellin edge operators (1)
- Green formula (1)
- Green operator (1)
- Green's function (1)
- Green's operator (1)
- Green's relations (1)
- Limit theorem (1)
- Basic ideas (1)
- Grushin operator (1)
- Gutzwiller formula (1)
- H-infinity-functional calculus (1)
- HIV (1)
- HIV disease (1)
- Haar system (1)
- Habitus (1)
- Semigroup theory (1)
- Hamilton-Jacobi theory (1)
- Hamiltonian dynamics (1)
- Hamiltonian group action (1)
- Hamiltonicity (1)
- Hardy's inequality (1)
- Harmonic measure (1)
- Principal fibre bundles (1)
- Hawkes process (1)
- Heat Flow (1)
- Heat kernel coefficients (1)
- Heavy-tailed distributions (1)
- Helmholtz problem (1)
- Hermeneutics (1)
- Heuristics (1)
- High dimensional statistical inference (1)
- Higher-order Sturm-Liouville problem (1)
- Hilbert Scales (1)
- History of branching processes (1)
- Higher education (1)
- University courses (1)
- University teaching (1)
- Holder-type source condition (1)
- Holomorphic map (1)
- Holomorphic mappings (1)
- Holonomy (1)
- Holonomy (1)
- Hopf algebra (1)
- Hopf algebra of Feynman diagrams (1)
- Hughes-free (1)
- Hyperbolic dynamical system (1)
- Hyperbolic-parabolic system (1)
- Hypoelliptic operators (1)
- Hypoellipticity (1)
- Hölder-type source condition (1)
- IGRF (1)
- ITG-Grace2010 (1)
- Idempotents (1)
- Identities (1)
- Ill-conditioning (1)
- Ill-posed problem (1)
- In vitro dissolution (1)
- Indefinite (1)
- Index theory (1)
- Inference post model-selection (1)
- Infinite chain (1)
- Infinite dimensional manifolds (1)
- Infinite divisibility (1)
- Infinite graph (1)
- Infinite-dimensional interacting diffusion (1)
- Infinitely divisible point processes (1)
- Computer science (1)
- Computer science for all (1)
- Information literacy (1)
- Content (1)
- Content analysis (1)
- Innovation (1)
- Inquiry-based learning (1)
- Instability of the process (1)
- Instruction (1)
- Integral varifold (1)
- Integration by parts formula (1)
- Interacting Diffusion Processes (1)
- Interacting Particle Systems (1)
- Interacting particle systems (1)
- Interpolation (1)
- Intrinsic metrics for Dirichlet forms (1)
- Intrinsicmotivation (1)
- Inverse problems (1)
- Inverse Problems (1)
- Inverse ill-posed problem (1)
- Inverse problem (1)
- Inverse problems (1)
- Inverse theory (1)
- Inverse Sturm-Liouville problem (1)
- Ionospheric current (1)
- Random walks on graphs (1)
- Isometry group (1)
- Iterated corner asymptotics of solutions (1)
- Ito SDE (1)
- Ito integral (1)
- Jump processes (1)
- K-means method (1)
- KB-space (1)
- KS model (1)
- Kalman Bucy filter (1)
- Kalman Filter (1)
- Kalman smoother (1)
- Kalman-Bucy Filter (1)
- Edge boundary value problems (1)
- Kato square root problem (1)
- Cone space (1)
- Kernel methods (1)
- Kernel regression (1)
- Core field modelling (1)
- Chain of semigroups (1)
- Kinesin V (1)
- Kirkwood-Salsburg equations (1)
- Kirkwood-Salsburg equations (1)
- Collaboration (1)
- Kolmogorov equation (1)
- Kolmogorov-Smirnov type tests (1)
- Combination therapy (1)
- Competence measurement (1)
- Confidence interval (1)
- Continuum limit (1)
- Control theory (1)
- Convergence rate (1)
- Koopman operator (1)
- Koopman semigroup (1)
- Coupling (1)
- Korn’s weighted inequality (1)
- Criticality theorem (1)
- Cultural activity (1)
- Kähler manifold (1)
- L2-invariants (1)
- L2 metrics (1)
- L2 metric (1)
- LETKF (1)
- Lagrangian distributions (1)
- Lagrangian modeling (1)
- Lagrangian modelling (1)
- Lagrangian-averaged equations (1)
- Lamé system (1)
- Landing site selection (1)
- Landweber iteration (1)
- Langevin Dynamics (1)
- Langevin diffusions (1)
- Langevin dynamics (1)
- Langevin diffusions (1)
- Laplace expansion (1)
- Laplace-type operator (1)
- Lattice cones (1)
- Travel-time tomography (1)
- Learning analytics (1)
- Learning dispositions (1)
- Learning theory (1)
- Teaching (1)
- Teaching evaluation (1)
- Mathematics teacher education (1)
- Teaching potential (1)
- Teaching text (1)
- Achievement tests (1)
- Guiding idea "data and chance" (1)
- Learning (1)
- Learning games (1)
- Learning theory (1)
- Level of confidence (1)
- Levy measure (1)
- Levy diffusion approximation (1)
- Levy diffusions on manifolds (1)
- Levy processes (1)
- Levy type processes (1)
- Lidar remote sensing (1)
- Lie groupoid (1)
- Linear inverse problems (1)
- Linearized equation (1)
- Liouville theorem (1)
- Lipopolysaccharides (LPS) (1)
- Lipschitz domain (1)
- Lipschitz domains (1)
- Lithology (1)
- Local index theory (1)
- Locality (1)
- Logarithmic Sobolev inequality (1)
- Logik (1)
- Logrank test (1)
- Locality principle (1)
- Loop space (1)
- Lorentzian geometry (1)
- Lorentzian Geometry (1)
- Lorenz 96 (1)
- Low rank matrices (1)
- Lower bound (1)
- Lumping (1)
- Lyapunov equation (1)
- Lyapunov function (1)
- Lévy diffusion approximation (1)
- Lévy diffusions on manifolds (1)
- Lévy measure (1)
- Lévy type processes (1)
- Cloze text (1)
- Lφ spectrum (1)
- MASCOT (1)
- MCMC modelling (1)
- MCMC methods (1)
- MOSAiC (1)
- Magnetic field modelling (1)
- Magnetic anomalies: modelling and interpretation (1)
- Magnetic field variations through time (1)
- Magnetosphere (1)
- Manifolds with boundary (1)
- Manifold (1)
- Manifolds with edge (1)
- Manifolds with singularities (1)
- Marcus canonical equation (1)
- Marked Gibbs process (1)
- Marked Gibbs point processes (1)
- Markov Chain (1)
- Markov semigroups (1)
- Markov chains (1)
- Markov-field property (1)
- Markov chains (1)
- Martin-Dynkin boundary (1)
- Marx (1)
- Maslov and Conley–Zehnder index (1)
- Mathematical Physics (1)
- Mathematical model (1)
- Mathematical modeling (1)
- Mathematics Tasks (1)
- Mathematics classrooms (1)
- Mathematics textbooks (1)
- Mathematics (1)
- Mathematics tasks (1)
- Matrix function approximation (1)
- Maximal subsemibands (1)
- McKean-Vlasov (1)
- Measure-preserving semiflow (1)
- Multitype branching processes (1)
- Mellin (1)
- Mellin and Green operators edge symbols (1)
- Mellin operators (1)
- Mellin oscillatory integrals (1)
- Mellin quantization (1)
- Mellin quantizations (1)
- Mellin symbols (1)
- Mellin symbols (1)
- Menger algebra of rank n (1)
- Meromorphic operator functions (1)
- Metastability (1)
- Metastasis (1)
- Method (1)
- Microdialysis (1)
- Microphysical particle properties (1)
- Microphysical properties (1)
- Microphysics (1)
- Microsaccades (1)
- Microsaccade sequences (1)
- Milnor Moore theorem (1)
- Minimax Optimality (1)
- Minimax optimality (1)
- Minimax convergence rates (1)
- Minimax hypothesis testing (1)
- Minimization (1)
- Minimizers (1)
- Mixing times (1)
- Misconceptions (1)
- Mittag-Leffler function (1)
- Mixing Times (1)
- Model comparison (1)
- Model selection (1)
- Model reduction (1)
- Modified Hamiltonians (1)
- Moduli space (1)
- Moduli spaces (1)
- Moduli space (1)
- Molecular dynamics (1)
- Molecular motor (1)
- Mollification (1)
- Monte Carlo (1)
- Monte Carlo method (1)
- Monte Carlo testing (1)
- Montel theorem (1)
- Motivation (1)
- Multi objective function (1)
- Multichannel wavelets (1)
- Multidimensional nonisentropic hydrodynamic model (1)
- Multigrid (1)
- Multiple problem spaces (1)
- Multiple time stepping (1)
- Multiplicative Levy noise (1)
- Multiplicative noise (1)
- Multiscale analysis (1)
- Multitype branching processes (1)
- Multivariate meromorphic functions (1)
- Multiwavelength LIDAR (1)
- Multizeta maps (1)
- NLME (1)
- NWP (1)
- Navier-Stokes equations (1)
- Navier-Stokes equations (1)
- Newton polytopes (1)
- Newton method (1)
- Newton polytopes (1)
- Nodal domain (1)
- Non-Markov drift (1)
- Non-coercive problem (1)
- Non-linear (1)
- Non-linear semigroups (1)
- Non-proportional hazards (1)
- Non-regular drift (1)
- Non-symmetric potential (1)
- Nonlinear (1)
- Nonlinear filters (1)
- Nonlinear ill-posed problems (1)
- Nonlinear systems (1)
- Nonparametric regression (1)
- Normal bundle (1)
- Numerical simulation (1)
- Numerical weather prediction (1)
- Ny-Ålesund (1)
- ODE with random initial conditions (1)
- OSSS inequality (1)
- Objective hermeneutics (1)
- Ollivier-Ricci (1)
- Oncology (1)
- Operation (1)
- Operator algebras (1)
- Operator differential equations (1)
- Operator-valued symbols (1)
- Operator-valued symbols of Mellin type (1)
- Operators on manifolds with conical singularities (1)
- Operators on manifolds with edge (1)
- Operators on manifolds with edge and conical exit to infinity (1)
- Operators on manifolds with second order singularities (1)
- Operators on singular cones (1)
- Operators on singular manifolds (1)
- Operator theory (1)
- Optimal transportation (1)
- Optimality conditions (1)
- Optimization (1)
- Optimization (1)
- Orbifolds (1)
- Order-preserving (1)
- Order-preserving bijections (1)
- Order-preserving maps (1)
- Order-reversing transformations (1)
- Ordered fields (1)
- Ordinary differential equations (1)
- Order filtration (1)
- Orlicz space height-excess (1)
- Ornstein-Uhlenbeck (1)
- Orthogonal connections with torsion (1)
- Orthogroups (1)
- PBTK (1)
- PC-MRI (1)
- PISA (1)
- Pacific Ocean (1)
- Padé approximants (1)
- Paleoclimate reconstruction (1)
- Paleoecology (1)
- Paleogeography (1)
- Papangelou Process (1)
- Papangelou process (1)
- Papangelou processes (1)
- Papangelou process (1)
- Paracrine and autocrine regulation (1)
- Parameter estimation (1)
- Parametrices (1)
- Parametrices of elliptic operators (1)
- Partial Differential Equations (1)
- Partial Integration (1)
- Partial algebra (1)
- Partial differential equations (1)
- Peano phenomena (1)
- Penalized likelihood (1)
- Perfect groups (1)
- Permuted balance (1)
- Perron's method (1)
- Perron-Frobenius operator (1)
- Personal Response System (1)
- Perturbation theory (1)
- Perturbative renormalization (1)
- Path integrals (1)
- Pfaffian (1)
- Pharmacokinetic models (1)
- Pharmacokinetics (1)
- Pharmacometrics (1)
- Phase transition (1)
- Physics concepts (1)
- Physics-based machine learning (1)
- Physics (1)
- Plio-Pleistocene (1)
- Plio-Pleistocene (1)
- PoSI constants (1)
- Poincaré Birkhoff Witt theorem (1)
- Point Processes (1)
- Point process (1)
- Poisson bridge (1)
- Poisson process (1)
- Polya Process (1)
- Polya difference process (1)
- Polya sum (1)
- Polyascher Prozess (1)
- Polymere (1)
- Pontrjagin duality (1)
- Populationen (1)
- Populations Analyse (1)
- Porous medium equation (1)
- Positional games (1)
- Positive mass theorem (1)
- Positive scalar curvature (1)
- Positive semigroups (1)
- Potential theory (1)
- Pregnancy (1)
- Primary 26D15 (1)
- Primary: 47B35 (1)
- Principal Fibre Bundles (1)
- Probabilistic numerical methods (1)
- Problem solving (1)
- Professionswissen (1)
- Programmierausbildung (1)
- Projekte (1)
- Proper orthogonal decomposition (1)
- Proportional hazards (1)
- Protein binding (1)
- Proving (1)
- Proxy forward modeling (1)
- Pseudo-Differentialoperatoren (1)
- Pseudo-differential algebras (1)
- Pseudodifferential operators (1)
- Pseudodifferentialoperatoren auf dem Torus (1)
- Punktprozess (1)
- Punktprozesse (1)
- Quadratic tilt-excess (1)
- Quadrature mirror filters (1)
- Quadrature rule (1)
- Quantenfeldtheorie (1)
- Quantifizierung von Unsicherheit (1)
- Quantizations (1)
- Quantizer (1)
- Quasi Random Walk (1)
- Quasi-shuffles (1)
- Quasiconformal mapping (1)
- Quasilinear hyperbolic system (1)
- Quasimodes (1)
- Quotientenschiefkörper (1)
- RMSE (1)
- Radar backscatter (1)
- Rahmenlehrplan (1)
- Raman lidar (1)
- Ramified Cauchy problem (1)
- Randbedingungen (1)
- Random Field Ising Model (1)
- Random cluster model (1)
- Random feature maps (1)
- Random measures (1)
- Randomisation (1)
- Randomised tree algorithm (1)
- Randomized strategy (1)
- Rank (1)
- Rank of semigroup (1)
- Rarita-Schwinger (1)
- Raumzeiten mit zeitartigem Rand (1)
- Re-Engineering (1)
- Reaction-diffusion system (1)
- Real-variable harmonic analysis (1)
- Realistic Mathematics Education (1)
- Rechnen (1)
- Reciprocal process (1)
- Reciprocal processes (1)
- Reclassification (1)
- Rectifiable varifold (1)
- Recursive transport equations (1)
- Reflektierende Randbedingungen (1)
- Regular semigroups (1)
- Regular variation (1)
- Regularisierung (1)
- Regularity analysis (1)
- Regularization (1)
- Reihendarstellungen (1)
- Reinforcement Learning (1)
- Rektifizierbarkeit höherer Ordnung (1)
- Renormalisation (1)
- Renormalization (1)
- Renormalized integral (1)
- Reproducing kernel Hilbert space (1)
- Reproduktionsrate (1)
- Resampling (1)
- Retrieval (1)
- Reversibility (1)
- Rho invariants (1)
- Ricci flow (1)
- Ricci solitons (1)
- Ricci-Fluss (1)
- Riemann-Roch theorem (1)
- Riemannsche Geometrie (1)
- Riesz continuity (1)
- Riesz topology (1)
- Risikoanalyse (1)
- Risk analysis (1)
- Risk assessment (1)
- Risk model (1)
- Root function (1)
- Rooted trees (1)
- Rota-Baxter (1)
- Rota-Baxter algebra (1)
- Rothe method (1)
- Rough paths (1)
- SAR amplitude (1)
- SHBG (1)
- SPECT (1)
- Sakkadendetektion (1)
- Sampling (1)
- Sand pile (1)
- Satellite geodesy (1)
- Satellite magnetics (1)
- Satellite magnetometer observations (1)
- Saturation model (1)
- Satz von Milnor Moore (1)
- Satz von Poincaré Birkhoff Witt (1)
- Scattering theory (1)
- Schrodinger equation (1)
- Schrödinger Problem (1)
- Schrödinger operator (1)
- Schrödinger operators (1)
- Schrödinger problem (1)
- Schulbuch (1)
- Schwarzes Loch (1)
- Schätzung von Verzweigungsprozessen (1)
- Second fundamental form (1)
- Second order elliptic equations (1)
- Secondary: 47L80 (1)
- Secular variation (1)
- Secular variation rate of change (1)
- Sedimentary ancient DNA (sedaDNA) (1)
- Seiberg-Witten theory (1)
- Seiberg-Witten-Invariante (1)
- Sekundarstufe I (1)
- Selbstassemblierung (1)
- Self-exciting point process (1)
- Self-interacting scalar field (1)
- Semi-classical analysis (1)
- Semi-klassische Abschätzung (1)
- Semiclassical analysis (1)
- Semiclassical difference operator (1)
- Semigroup (1)
- Semiklassik (1)
- Semiklassische Spektralasymptotik (1)
- Sentinel-1 (1)
- Sequential data assimilation (1)
- Sharp threshold (1)
- Shintani zeta values (1)
- Shnol theorem (1)
- Shuffle products (1)
- Shuffles (1)
- Signatures (1)
- Simulation (1)
- Simulation of Gaussian processes (1)
- Simulationsstudien (1)
- Singular analysis (1)
- Singular cones (1)
- Sinkhorn approximation (1)
- Sinkhorn problem (1)
- Skew Diffusionen (1)
- Skorokhod's invariance principle (1)
- Small (1)
- Smooth cones (1)
- Smoothing (1)
- Sobolev Poincare inequality (1)
- Sobolev problem (1)
- Sobolev spaces (1)
- Sobolev spaces with double weights on singular cones (1)
- Sociolinguistics (1)
- Software Engineering (1)
- Softwareentwicklung (1)
- Soziolinguistik (1)
- Space (1)
- Space-Time Cluster Expansions (1)
- Spatio-temporal ETAS model (1)
- Spectral Geometry (1)
- Spectral Regularization (1)
- Spectral analysis (1)
- Spectral exponent (1)
- Spectral flow (1)
- Spectral gap (1)
- Spectral regularization (1)
- Spectral theory of graphs (1)
- Spectral triples (1)
- Spektraltheorie (1)
- Spezifikationstests (1)
- Spiel (1)
- Spin Geometrie (1)
- Spin Hall effekte (1)
- Spin geometry (1)
- Stability selection (1)
- Statistical inverse problem (1)
- Statistical learning (1)
- Statistical methods (1)
- Stichprobenentnahme aus einem statistischen Modell (1)
- Stochastic Burgers equations (1)
- Stochastic Hamiltonian (1)
- Stochastic Ordering (1)
- Stochastic bridges (1)
- Stochastic domination (1)
- Stochastic epidemic model (1)
- Stochastic geometry (1)
- Stochastic systems (1)
- Stochastische Analysis (1)
- Stochastische Zellulare Automaten (1)
- Stormer-Verlet method (1)
- Strain (1)
- Stratified spaces (1)
- Stratonovich SDE (1)
- Stratonovich integral (1)
- Stress (1)
- Stress drop (1)
- Streuamplitude (1)
- Streutheorie (1)
- Strike-slip fault model (1)
- Strings (1)
- Structured population equation (1)
- Strukturbildung (1)
- Strukturverbesserung (1)
- Studiengänge (1)
- Studierendenperformance (1)
- Studium (1)
- Sturm-Liouville problems (1)
- Sturm-Liouville problems of higher order (1)
- Sturm-Liouville-Problem (1)
- Sturm-Liouville-Problem höherer Ordnung (1)
- Subcritical (1)
- Subdivision schemes (1)
- Subdivisions (1)
- Submanifolds (1)
- Subsampling (1)
- Super-quadratic tilt-excess (1)
- Supergeometrie (1)
- Surface potentials with asymptotics (1)
- Surface roughness (1)
- Surgery (1)
- Survival models with covariates (1)
- Svalbard (1)
- Sylvester equations (1)
- Symplectic manifold (1)
- Synchrotron (1)
- System of nonlocal PDE of first order (1)
- Systeme interagierender Partikel (1)
- Systempharmakologie (1)
- TEC (1)
- Taphonomy (1)
- Technology (1)
- TerraSAR-X/TanDEM-X (1)
- Testfähigkeit (1)
- Tests (1)
- Tetration (1)
- Textbook analysis (1)
- Textbook research (1)
- The Yamabe (1)
- Theoretische Informatik (1)
- Therapeutic proteins (1)
- Thermal mathematical model (1)
- Three-space theory (1)
- Tibetan Plateau (1)
- Tides (1)
- Time duality (1)
- Time integration (1)
- Time of flight (1)
- Toeplitz-type pseudodifferential operators (1)
- Topological model (1)
- Toxicokinetic modelling (1)
- Toxicokinetics (1)
- Trace Dirichlet form (1)
- Transformation semigroup (1)
- Transformation semigroups on infinite chains (1)
- Transition probabilities (1)
- Tunneling (1)
- Turbulence (1)
- Twisted product (1)
- Twisted symbolic estimates (1)
- Two-level interacting process (1)
- Two-sample tests (1)
- Tätigkeitstheorie (1)
- Umbilic product (1)
- Uncertainty Quantification (1)
- Unendlichdimensionale Mannigfaltigkeit (1)
- Unique Gibbs state (1)
- Unit disk (1)
- Universal covering group (1)
- Van der Pol oscillator (1)
- Variable selection (1)
- Variational principle (1)
- Variationsrechnung (1)
- Variationsrechnung (1)
- Variationsstabilität (1)
- Varifaltigkeit (1)
- Vector bundle (1)
- Vector subdivision schemes (1)
- Verknüpfung Fachwissenschaft und Fachdidaktik (1)
- Verzweigungsprozesse (1)
- Vincent (1)
- Viscosity solutions (1)
- Visuospatial reasoning (1)
- Vitali theorem (1)
- Volterra operator (1)
- Volterra symbols (1)
- WKB ansatz (1)
- WKB approximation (1)
- WKB expansion (1)
- Wahrscheinlichkeitsverteilung (1)
- Warped product (1)
- Wartung von Lehrveranstaltungen (1)
- Wave equation (1)
- Wave operator (1)
- Waveletanalyse (1)
- Weak Mixing Condition (1)
- Wechselwirkende Teilchensysteme (1)
- Weighted (1)
- Weighted edge spaces (1)
- Well log (1)
- Weyl algebras bundle (1)
- Weyl symbol (1)
- Weyl tensor (1)
- Wide angle (1)
- Willmore functional (1)
- Winkel (1)
- Wissenschaftliches Arbeiten (1)
- Wnt/beta-catenin signalling pathway (1)
- Wolfgang (1)
- Wärmefluss (1)
- Wärmekern (1)
- Wärmeleitungsgleichung (1)
- Yamabe invariant (1)
- Yamabe problem (1)
- Yamabe-Problem (1)
- Zahlbereichserweiterung (1)
- Zahlerwerb (1)
- Zellmotilität (1)
- Zeta-function (1)
- Zig-zag order (1)
- Zufallsvariable (1)
- Zustandsschätzung (1)
- Zählprozesse (1)
- a posteriori stopping rule (1)
- absorbing boundary (1)
- absorbing set (1)
- absorption (1)
- accelerated life time model (1)
- accelerated small (1)
- accuracy (1)
- acute severe (1)
- adaptive (1)
- adaptivity (1)
- adipose tissue (1)
- adrenal insufficiency (1)
- aerosol (1)
- aerosol distribution (1)
- aerosol-boundary layer interactions (1)
- aerosols (1)
- affine (1)
- affine invariance (1)
- algebra (1)
- algebra of rank n (1)
- algebraic systems (1)
- algebras (1)
- alignment (1)
- alternating direction implicit (1)
- alternative variety (1)
- amoeboid motion (1)
- amöboide Bewegung (1)
- analytic functional (1)
- analytic index (1)
- analytic perturbation theory (1)
- angewandte Mathematik (1)
- animal behavior (1)
- anisotropic spaces (1)
- anti-infective (1)
- antibiotics (1)
- antigen processing (1)
- antimicrobial stewardship (1)
- applied mathematics (1)
- approximation (1)
- approximative Differenzierbarkeit (1)
- aptitude tests (1)
- archaeomagnetism (1)
- articulation (1)
- association (1)
- asteroseismology (1)
- asymptotic (1)
- asymptotic approximation (1)
- asymptotic expansions (1)
- asymptotic methods (1)
- asymptotic properties of eigenfunctions (1)
- asymptotic stable (1)
- asymptotical normal distribution (1)
- asymptotics (1)
- asymptotics of solutions (1)
- asymptotische Entwicklung (1)
- asymptotische Normalverteilung (1)
- attenuated Radon transform (1)
- aurora (1)
- autonomic nervous system (1)
- b-value (1)
- backtrajectories (1)
- backward heat problem (1)
- balanced dynamics (1)
- bar with variable cross-section (1)
- basic ideas ('Grundvorstellungen') (1)
- bayessche Inferenz (1)
- bedingter Erwartungswert (1)
- behavior (1)
- bending of an orthotropic cusped plate (1)
- beta-functions (1)
- binding (1)
- bioinformatics (1)
- biological population equations (1)
- birhythmic behavior (1)
- black hole (1)
- black holes (1)
- body mass index procedure (1)
- body surface area (1)
- boundedness (1)
- boundary conditions (1)
- boundary element method (1)
- boundary values problems (1)
- bounds (1)
- branching processes (1)
- bridges of random walks (1)
- bundles (1)
- calculation (1)
- calculus of variations (1)
- canonical Marcus integration (1)
- canonical discretization schemes (1)
- category equivalence of clones (1)
- cell motility (1)
- certolizumab pegol (1)
- chain of semigroups (1)
- characteristic boundary point (1)
- characteristic points (1)
- characterization of point processes (1)
- chemical master equation (1)
- chemistry (1)
- classical and quantum reduction (1)
- classification with partial labels (1)
- clathrin (1)
- clone of operations (1)
- cluster (1)
- cluster analysis (1)
- clustering (1)
- coated and absorbing aerosols (1)
- coercivity (1)
- coherent set (1)
- collegial supervision (1)
- coloration of terms (1)
- colored solid varieties (1)
- compact resolvent (1)
- companies (1)
- comparison principle (1)
- completeness (1)
- completeness levels (1)
- complex systems (1)
- composition of terms (1)
- composition operator (1)
- compound Poisson processes (1)
- compound polyhedra (1)
- compressible Euler equations (1)
- computational biology (1)
- concentration (1)
- concentration inequalities (1)
- condition number (1)
- conditional Bayes factors (1)
- conditional Wiener measure (1)
- conditional expectation value (1)
- conditioned (1)
- conditioned Feller diffusion (1)
- conditions (1)
- conditions of success (1)
- confidence interval (1)
- confidence intervals (1)
- congenital adrenal hyperplasia (1)
- congruence (1)
- connections (1)
- conormal asymptotic expansions (1)
- conormal asymptotics (1)
- conormal symbols (1)
- conservation laws (1)
- conservative discretization (1)
- constitutive relations (1)
- constrained Hamiltonian systems (1)
- contact transformations (1)
- continuity in Sobolev spaces with double weights (1)
- continuous testing (1)
- continuous time Markov Chains (1)
- continuous time Markov chain (1)
- continuous-time data assimilation (1)
- control theory (1)
- convergence assessment (1)
- convergence rate (1)
- corner parametrices (1)
- corona virus (1)
- correlated noise (1)
- cortisol (1)
- coupled solution (1)
- covering (1)
- critical and subcritical Dawson-Watanabe process (1)
- criticality theorem (1)
- critically ill (1)
- crohn's disease (1)
- crossed product (1)
- curvature varifold (1)
- cusp (1)
- cusped bar (1)
- das Cauchyproblem (1)
- das Goursatproblem (1)
- das charakteristische Cauchyproblem (1)
- data-driven (1)
- dbar-Neumann problem (1)
- de Sitter model; Fundamental solutions; Decay estimates (1)
- decay of eigenfunctions (1)
- decomposition (1)
- deformation quantization (1)
- degenerate elliptic equations (1)
- degenerate elliptic systems (1)
- delaney-dress tiling theory (1)
- density estimation (1)
- density of a measure (1)
- design elements (1)
- design research (1)
- determinant (1)
- determinantal point processes (1)
- determinantische Punktprozesse (1)
- dht-symmetric category (1)
- die linearisierte Einsteingleichung (1)
- differential cohomology (1)
- differential geometry (1)
- differential-algebraic equations (1)
- diffusion maps (1)
- diffusion process (1)
- dimension functional (1)
- dimension reduction (1)
- direct and indirect climate observations (1)
- direkte und indirekte Klimaobservablen (1)
- disagreement percolation (1)
- discontinuous Robin condition (1)
- discontinuous drift (1)
- discrete Schrodinger (1)
- discrete Witten complex (1)
- discrete asymptotic types (1)
- discrete spectrum (1)
- disease activity (1)
- disjunction of identities (1)
- diskontinuierliche Drift (1)
- diskreter Witten-Laplace-Operator (1)
- distorted Brownian motion (1)
- distribution (1)
- distribution with asymptotics (1)
- distributional boundary (1)
- distributions with one-sided support (1)
- division algebras (1)
- division ring of fractions (1)
- division rings (1)
- division trees (1)
- divisors (1)
- domains with singularities (1)
- doppelsemigroup (1)
- dried blood spots (1)
- drug monitoring (1)
- duale IT-Ausbildung (1)
- duality formulae (1)
- dynamical models (1)
- dynamical system (1)
- dynamical system representation (1)
- e-Assessment (1)
- e-Learning (1)
- early mathematical education (1)
- earthquake hazards (1)
- earthquake interaction (1)
- earthquake precursor (1)
- edge Sobolev spaces (1)
- edge algebra (1)
- edge boundary value problems (1)
- edge quantizations (1)
- edge spaces (1)
- edge symbol (1)
- edge- and corner-degenerate symbols (1)
- eigenfunction (1)
- eigenvalue asymptotics (1)
- eigenvalue decay (1)
- eigenvalues (1)
- elastic bar (1)
- elasticity (1)
- electrodynamics (1)
- elliptic boundary (1)
- elliptic boundary conditions (1)
- elliptic complex (1)
- elliptic differential operators of first order (1)
- elliptic equation (1)
- elliptic functions (1)
- elliptic morphism (1)
- elliptic operators in subspaces (1)
- elliptic operators on non-compact manifolds (1)
- elliptic problem (1)
- elliptic problems (1)
- elliptic quasicomplexes (1)
- elliptic systems (1)
- ellipticity in the edge calculus (1)
- ellipticity of cone operators (1)
- ellipticity of corners operators (1)
- ellipticity with interface conditions (1)
- ellipticity with parameter (1)
- ellipticity with respect to interior and edge symbols (1)
- elliptische Gleichungen (1)
- elliptische Quasi-Komplexe (1)
- embedded Markov chain (1)
- embeddings (1)
- empirical Wasserstein distance (1)
- endomorphism semigroup (1)
- energetic space (1)
- enlargement of filtration (1)
- ensembles (1)
- entropy (1)
- equation of motion (1)
- equatorial ionization anomaly (1)
- equatorial ionosphere (1)
- equatorial plasma bubbles (1)
- equivalence (1)
- ergodic diffusion processes (1)
- ergodic rates (1)
- error diagram (1)
- erste Variation (1)
- essential position in terms (1)
- estimation of regression (1)
- eta forms (1)
- exact simulation method (1)
- exact simulation methods (1)
- exact solution (1)
- exakte Simulation (1)
- exchange algorithms (1)
- exercise collection (1)
- exit calculus (1)
- expansion (1)
- exponential decay (1)
- exponential function (1)
- exponential stability (1)
- exterior tensor product (1)
- extinction probability (1)
- eye movements (1)
- false discovery rate (1)
- fat-free mass (1)
- feedback (1)
- feedback particle filter (1)
- fence-preserving transformations (1)
- fibration (1)
- fibre coordinates (1)
- fibroblasts (1)
- field-aligned currents (1)
- filtering (1)
- finite transformation semigroup (1)
- finiteness theorem (1)
- finsler distance (1)
- first exit location (1)
- first exit times (1)
- first passage times (1)
- first variation (1)
- fixational eye movements (1)
- fixed point formula (1)
- flocking (1)
- flood loss estimation (1)
- fluid mechanics (1)
- fluid-structure interaction (1)
- foliated diffusion (1)
- force unification (1)
- forcing from below (1)
- forecasting (1)
- forecasting and prediction (1)
- formal (1)
- formal power series (1)
- formulas (1)
- forschungsorientiertes Lernen (1)
- fosfomycin (1)
- fractional calculus (1)
- fractions (1)
- fracture (1)
- fracture network (1)
- frameworks (1)
- free algebra (1)
- frequency-modulated continuous-wave (FMCW) (1)
- fully non-linear degenerate parabolic equations (1)
- functional calculus (1)
- functor geometry (1)
- fundamental ideas (1)
- fundamental solution (1)
- game (1)
- game-based (1)
- gamification (1)
- gauge group (1)
- generalized Abelian gauge theory (1)
- generalized Bruck-Reilly *-extension (1)
- generalized Laplace operator (1)
- generalized eigenfunction (1)
- generalized eigenfunctions (1)
- generating sets (1)
- geodätischer Abstand (1)
- geomagnetic field (1)
- geomagnetism (1)
- geometric optics approximation (1)
- geometric reproduction distribution (1)
- geomorphology (1)
- geophysics (1)
- geopotential theory (1)
- geordnete Gruppen von Conrad-Typ (1)
- global exact boundary controllability (1)
- global solution (1)
- global solutions (1)
- global-hyperbolisch (1)
- globally hyperbolic (1)
- globally hyperbolic spacetime (1)
- good-inner function (1)
- goodness of fit (1)
- goodness-of-fit (1)
- goodness-of-fit testing (1)
- gradient-free (1)
- gradient-free sampling methods (1)
- graph (1)
- graph Laplacian (1)
- graph theory (1)
- gravitation (1)
- gravitational wave (1)
- green function (1)
- group (1)
- group ring (1)
- groups (1)
- guiding idea “Daten und Zufall” (1)
- heat asymptotics (1)
- heavy-tailed distributions (1)
- helicates (1)
- hemodynamics (1)
- hermeneutics (1)
- high-dimensional inference (1)
- higher operations (1)
- higher order rectifiability (1)
- higher singularities (1)
- highly (1)
- history of branching processes (1)
- hitting times (1)
- holomorphic function (1)
- holonomic constraints (1)
- host-parasite stochastic particle system (1)
- hybrid model (1)
- hybrids (1)
- hydraulic tomography (1)
- hydrocortisone (1)
- hydrogeophysics (1)
- hydrostatic atmosphere (1)
- hyperbolic dynamical system (1)
- hyperbolic operators (1)
- hyperequational theory (1)
- hypoelliptic estimate (1)
- höhere Operationen (1)
- höhere Singularitäten (1)
- idealised turbulence (1)
- idleness (1)
- ill-posed (1)
- ill-posed problem (1)
- indecomposable varifold (1)
- independent splittings (1)
- index formula (1)
- index of elliptic operator (1)
- index of stability (1)
- integral formulas (1)
- infinite-dimensional diffusions (1)
- infinitesimal generator (1)
- infliximab dosing (1)
- initial boundary value problem (1)
- instability of the process (1)
- integral Fourier operators (1)
- integral equation (1)
- integral representation method (1)
- integration by parts on path space (1)
- interacting particles (1)
- interassociativity (1)
- interfaces with conical singularities (1)
- interindividual differences (1)
- interstitial space fluid (1)
- intrinsic diameter (1)
- intrinsischer Diameter (1)
- invariance (1)
- invariant (1)
- invariant measure (1)
- inverse Probleme (1)
- inverse Sturm-Liouville problems (1)
- inverse correlation estimation (1)
- inverse potential problems (1)
- inverse problem (1)
- inverse scattering (1)
- inverse semigroup (1)
- inverse theory (1)
- ionospheric convection (1)
- ionospheric precursors of earthquakes (1)
- isoperimetric estimates (1)
- isoperimetric inequality (1)
- isoperimetrische Ungleichung (1)
- iterated asymptotics (1)
- jump processes (1)
- k-means clustering (1)
- kanten- und ecken-entartete Symbole (1)
- kernel estimator of the hazard rate (1)
- kernel method (1)
- kernel operator (1)
- kernel-based Bayesian inference (1)
- kernel-basierte Bayes'sche Inferenz (1)
- kleine Parameter (1)
- kollegiale Supervision (1)
- komplexe Systeme (1)
- komplexe mechanistische Systeme (1)
- konstitutive Gleichungen (1)
- label noise (1)
- large deviations principle (1)
- large-scale mechanistic systems (1)
- laser remote sensing (1)
- lattice point (1)
- learning (1)
- learning rates (1)
- least favorable configuration (1)
- least squares estimator (1)
- left ordered groups (1)
- left-right asymmetry (1)
- lifespan (1)
- likelihood function (1)
- limit theorem (1)
- limit theorem for integrated squared difference (1)
- limiting distribution (1)
- linear fractional case (1)
- linear hyperidentity (1)
- linear identity (1)
- linear inverse problems (1)
- linear programming (1)
- linear regression (1)
- linear response (1)
- linearly implicit time stepping methods (1)
- clinical databases (1)
- linking of subject science and didactic (1)
- linksgeordnete Gruppen (1)
- locality principle (1)
- locally indicable (1)
- locally indicable group (1)
- log-concavity (1)
- logarithmic convergence rate (1)
- logarithmic residue (1)
- logic (1)
- logistic regression analysis (1)
- logistische Regression (1)
- lokal indizierbar (1)
- long-time corrections (1)
- low rank matrix recovery (1)
- low rank recovery (1)
- low-lying eigenvalues (1)
- low-rank approximations (1)
- lumping (1)
- macromolecular decay (1)
- magnetic (1)
- magnetic field modeling (1)
- magnetic field variations through (1)
- magnetisch (1)
- magnitude errors (1)
- makromolekularer Zerfall (1)
- manifold (1)
- manifold with boundary (1)
- manifold with edge (1)
- manifolds with boundary (1)
- manifolds with corners (1)
- manifolds with cusps (1)
- manifolds with edge (1)
- manifolds with edge and boundary (1)
- many-electron systems (1)
- mapping class (1)
- mapping class group (1)
- mapping class groups (1)
- matching of asymptotic expansions (1)
- mathematical modeling (1)
- mathematical modelling (1)
- mathematical physics (1)
- mathematics (1)
- mathematische Physik (1)
- matrices (1)
- matrix completion (1)
- maturation (1)
- maximal regularity (1)
- maximal subsemigroup (1)
- mean curvature (1)
- mean ergodic (1)
- mean-field equations (1)
- mechanistic modeling (1)
- mechanistische Modellierung (1)
- mental arithmetic (1)
- mental number line (1)
- meromorphe Fortsetzung (1)
- meromorphic continuation (1)
- meromorphic family (1)
- meropenem (1)
- mesoscale forecasting (1)
- metal-organic (1)
- metaplectic operators (1)
- method (1)
- methods: data analysis (1)
- microlocal analysis (1)
- microlokale Analysis (1)
- microphysical properties (1)
- microphysics (1)
- microsaccades (1)
- middle school (1)
- middle-out approach (1)
- minimax hypothesis testing (1)
- minimax rate (1)
- minor planets, asteroids: individual: (162173) Ryugu (1)
- mit Anwendungen in der Laufzeittomographie, Seismischer Quellinversion und Magnetfeldmodellierung (1)
- mittlere Krümmung (1)
- mixed elliptic problems (1)
- mixed membership models (1)
- mixed problems (1)
- mixing (1)
- mixing optimization (1)
- mixture of bridges (1)
- mixture proportion estimation (1)
- mod k index (1)
- modal analysis (1)
- model error (1)
- model order reduction (1)
- model selection (1)
- model uncertainty (1)
- model-informed precision dosing (1)
- modellinformierte Präzisionsdosierung (1)
- moduli space of flat connections (1)
- modulo n index (1)
- mollifier method (1)
- moment map (1)
- monoclonal antibodies (1)
- monotone method (1)
- monotone random (1)
- monotonicity (1)
- monotonicity conditions (1)
- morphology (1)
- motion correction (1)
- motivation (1)
- motivic Feynman rules (1)
- multi-armed bandits (1)
- multi-change point detection (1)
- multi-hypersubstitutions (1)
- multi-modular morphology (1)
- multi-scale diffusion processes (1)
- multi-well potential (1)
- multiclass classification with label noise (1)
- multilayered coated and absorbing aerosol (1)
- multilevel Monte Carlo (1)
- multiple characteristics (1)
- multiple testing (1)
- multiple testing (1)
- multiplicative Lévy noise (1)
- multiscale analysis (1)
- multiscale dynamics (1)
- multitype measure-valued branching processes (1)
- multivariable (1)
- multiwavelength Lidar (1)
- multiwavelength lidar (1)
- multizeta functions (1)
- mutual contamination models (1)
- n-ary operation (1)
- n-ary term (1)
- negative Zahlen (1)
- negative curvature (1)
- negative numbers (1)
- new recursive algorithm (1)
- nicht-lineare gemischte Modelle (NLME) (1)
- nichtlineare Modelle (1)
- nichtlineare partielle Differentialgleichung (1)
- nodal flow (1)
- noise Levy diffusions (1)
- non-Gaussian (1)
- non-coercive boundary conditions (1)
- non-dissipative regularisations (1)
- non-linear integro-differential equations (1)
- non-linear mixed effects modelling (NLME) (1)
- non-regular drift (1)
- non-uniqueness (1)
- nonasymptotic minimax separation rate (1)
- nondegenerate condition (1)
- nondeterministic linear hypersubstitution (1)
- nonhomogeneous boundary value problems (1)
- nonlinear PDI (1)
- nonlinear data assimilation (1)
- nonlinear inverse problem (1)
- nonlinear optimization (1)
- nonlinear partial differential equations (1)
- nonlinear semigroup (1)
- nonlocal problem (1)
- nonparametric hypothesis testing (1)
- nonparametric regression estimation (1)
- nonparametric statistics (1)
- nonsmooth curves (1)
- norm estimates with respect to a parameter (1)
- normal bundle (1)
- nuclear norm (1)
- number (1)
- numerical (1)
- numerical analysis/modeling (1)
- numerical approximation (1)
- numerical extension (1)
- numerical methods (1)
- numerical relativity (1)
- numerical weather prediction (1)
- numerical weather prediction/forecasting (1)
- obesity (1)
- offene Wissenschaft (1)
- oncology (1)
- open mapping theorem (1)
- open science (1)
- operational momentum (1)
- operator (1)
- operator algebras on manifolds with singularities (1)
- operator calculus (1)
- operator valued symbols (1)
- operators (1)
- operators on manifolds with conical and edge singularities (1)
- operators on manifolds with edges (1)
- operators on manifolds with singularities (1)
- operators with corner symbols (1)
- optimal order (1)
- oracle inequalities (1)
- oracle inequality (1)
- oral anticancer drugs (1)
- order continuous norm (1)
- order filtration (1)
- order reduction (1)
- order-preserving (1)
- order-preserving mappings (1)
- ordered group (1)
- orientation-preserving (1)
- orientation-preserving and orientation-reversing transformations (1)
- orientation-preserving transformations (1)
- orthogroup (1)
- oscillatory systems (1)
- p-Branen (1)
- p-Laplace Operator (1)
- p-Laplace equation (1)
- p-branes (1)
- palaeomagnetism (1)
- paleoearthquakes (1)
- parabolic Harnack estimate (1)
- parabolic equations (1)
- parallelizable spheres (1)
- parameter-dependent cone operators (1)
- parameter-dependent ellipticity (1)
- parameter-dependent pseudodifferential operators (1)
- parametrices of elliptic operators (1)
- parity condition (1)
- parity conditions (1)
- part-whole concept (1)
- partial (1)
- partial Menger (1)
- partial algebras (1)
- partial averaging (1)
- partial ordering (1)
- particle filters (1)
- particle methods (1)
- particle microphysics (1)
- particle precipitation (1)
- partielle Integration (1)
- partielle Integration auf dem Pfadraum (1)
- patch antenna (1)
- pathwise expectations (1)
- percolation (1)
- periodic Gaussian process (1)
- periodic Ornstein-Uhlenbeck process (1)
- permanental and determinantal point processes (MSC 2010: 35K55) (1)
- permanental- (1)
- personalised medicine (1)
- pharmacodynamics (1)
- pharmacokinetic (1)
- pharmacokinetic/pharmacodynamic (1)
- phase dynamics (1)
- phase-locked loop (PLL) (1)
- photometer (1)
- physical SRB measures (1)
- physics (1)
- physiologically-based pharmacokinetics (PBPK) (1)
- physiologie-basierte Pharmakokinetik (PBPK) (1)
- pi-inverse monoid (1)
- piperacillin/tazobactam (1)
- planar rotors (1)
- polydisc (1)
- polymer (1)
- popPBPK (1)
- popPK (1)
- population analysis (1)
- populations (1)
- porous medium equation (1)
- poset (1)
- positive correlation (1)
- positive mass theorem (1)
- positive operators (1)
- posterior distribution (1)
- power amplifier (PA) (1)
- power series (1)
- presentations (1)
- principal symbolic hierarchies (1)
- principle (1)
- probabilistic modeling (1)
- probability distribution (1)
- probability generating function (1)
- probability of target attainment (1)
- probability theory (1)
- problem (1)
- problem of classification (1)
- problem-solving (1)
- problems (1)
- professional knowledge (1)
- profile likelihood (1)
- proportional hazard model (1)
- proposal densities (1)
- proteasome (1)
- protein degradation (1)
- proteolysis (1)
- pseudo-differential operators (1)
- pseudo-differential equation (1)
- pseudo-differential boundary value problems (1)
- pseudo-differentielle Gleichungen (1)
- pseudodifferential boundary value problems (1)
- pseudodifferential subspace (1)
- pseudodifferential subspaces (1)
- pseudodifferentiale Operatoren (1)
- pseudospectral method (1)
- quantification (1)
- quantizer (1)
- quantum field theory (1)
- quasiconformal mapping (1)
- quasilinear Fredholm operator (1)
- quasilinear Fredholm operators (1)
- quasilinear equation (1)
- quasimodes (1)
- question of origin (1)
- r-hypersubstitution (1)
- r-term (1)
- radar (1)
- radiation mechanisms: thermal (1)
- random KMM-measure (1)
- random matrix theory (1)
- random sum (1)
- random variable (1)
- random walk (1)
- randomly forced Duffing equation (1)
- randomness (1)
- rank (1)
- rapid variations (1)
- rare events (1)
- rational Krylov (1)
- reading (1)
- real-variable harmonic analysis (1)
- rectifiable varifold (1)
- red blood cells (1)
- reflecting boundary (1)
- regimen (1)
- regular and singular inverse Sturm-Liouville problems (1)
- regular monoid (1)
- regular polyhedra (1)
- regularization method (1)
- regularly varying Levy process (1)
- reinforcement learning (1)
- rejection sampling (1)
- rekonstruktive Fallanalyse (1)
- rektifizierbare Varifaltigkeit (1)
- relative cohomology (1)
- relative index formulas (1)
- relative isoperimetric inequality (1)
- relative ranks (1)
- relative η-invariant (1)
- removable set (1)
- removable sets (1)
- representations of groups as automorphism groups of (1)
- reproduction rate (1)
- resampling (1)
- rescaled lattice (1)
- residue (1)
- resolvents (1)
- resonances (1)
- restricted isometry property (1)
- retrieval (1)
- reziproke Invarianten (1)
- right limits (1)
- rock mechanics (1)
- rooted trees (1)
- rough metrics (1)
- saccade detection (1)
- saccades (1)
- scalable (1)
- scaled lattice (1)
- scattering amplitude (1)
- scattering theory (1)
- schlecht gestellt (1)
- secular variation (1)
- seismic hazard (1)
- seismic source inversion (1)
- seismische Quellinversion (1)
- self-assembly (1)
- semi-Lagrangian method (1)
- semi-classical difference operator (1)
- semi-classical limit (1)
- semi-classical spectral estimates (1)
- semiclassical Agmon estimate (1)
- semiclassical spectral asymptotics (1)
- semiclassics (1)
- semiconductors (1)
- semigroup (1)
- semigroup representations (1)
- semigroup theory (1)
- semigroups on infinite chain (1)
- semipermeable barriers (1)
- semiprocess (1)
- sequences of microsaccades (1)
- sequential learning (1)
- series representation (1)
- shallow-water equations (1)
- shock wave (1)
- short-range prediction (1)
- signal detection (1)
- simulation (1)
- simulations (1)
- singular Sturm-Liouville (1)
- singular drifts (1)
- singular foliation (1)
- singular integral equations (1)
- singular point (1)
- singular points (1)
- singuläre Mannigfaltigkeiten (1)
- skew diffusion (1)
- skew field of fraction (1)
- small ball probabilities (1)
- small noise asymptotics (1)
- smooth drift dependence (1)
- smoother (1)
- socialisation (1)
- soft matter (1)
- space-time Gibbs field (1)
- spacecraft operations (1)
- spacetimes with timelike boundary (1)
- sparsity (1)
- specific entropy (1)
- spectral boundary value problems (1)
- spectral cut-off (1)
- spectral independence (1)
- spectral kernel function (1)
- spectral regularization (1)
- spectral resolution (1)
- spectral theory (1)
- spin Hall effect (1)
- spirallike function (1)
- spread correction (1)
- stability (1)
- stable variety (1)
- stark Hughes-frei (1)
- starker Halbverband von Halbgruppen (1)
- stars: early-type (1)
- stars: individual: Vega (1)
- stars: oscillations (1)
- stars: rotation (1)
- starspots (1)
- state estimation (1)
- statistical (1)
- statistical inference (1)
- statistical inverse problem (1)
- statistical machine learning (1)
- statistical methods (1)
- statistics (1)
- statistische Inferenz (1)
- statistisches maschinelles Lernen (1)
- step process (1)
- step-up (1)
- stiff ODE (1)
- stochastic Burgers equations (1)
- stochastic Marcus (canonical) differential equation (1)
- stochastic bridges (1)
- stochastic completeness (1)
- stochastic interacting particles (1)
- stochastic mechanics (1)
- stochastic models (1)
- stochastic process (1)
- stochastic reaction diffusion equation with heavy-tailed Levy noise (1)
- stochastic systems (1)
- stochastics (1)
- stochastische Anordnung (1)
- stochastische Differentialgleichungen (1)
- stochastische Mechanik (1)
- stochastische Zellulare Automaten (1)
- stochastisches interagierendes System (1)
- stress variability (1)
- strong Feller property (1)
- strong semilattice of semigroups (1)
- strongly Hughes-free (1)
- strongly pseudoconvex domains (1)
- strongly tempered stable Levy measure (1)
- structure formation (1)
- structured cantilever (1)
- structured numbers (1)
- strukturierte Zahlen (1)
- sub-grid scale (1)
- subRiemannian geometry (1)
- subspaces (1)
- supergeometry (1)
- superposition (1)
- superposition of n-ary operations and n-ary (1)
- superposition of operations (1)
- surrogate loss (1)
- survival analysis (1)
- symbols (1)
- symmetry group (1)
- symplectic (canonical) transformations (1)
- symplectic manifold (1)
- symplectic methods (1)
- symplectic reduction (1)
- system Lame (1)
- systems of partial differential equations (1)
- systems pharmacology (1)
- target dimensions (1)
- targeted antineoplastic drugs (1)
- teacher training mathematics (1)
- teaching (1)
- temporal discretization (1)
- terms (1)
- terms and (1)
- terrigener Staub (1)
- terrigenous dust (1)
- test ability (1)
- tetration (1)
- the Dirichlet problem (1)
- the Goursat problem (1)
- the characteristic Cauchy problem (1)
- the first boundary value problem (1)
- the linearised Einstein equation (1)
- theorem (1)
- theory (1)
- therapeutic (1)
- thermospheric wind (1)
- thymoproteasome (1)
- thymus (1)
- tiling theory (1)
- time (1)
- time reversal (1)
- time series (1)
- time series with heavy tails (1)
- time symmetry (1)
- time-fractional derivative (1)
- tomography (1)
- topic modeling (1)
- torsion forms (1)
- tracer tomography (1)
- transceiver (TRX) (1)
- transdimensional inversion (1)
- transformation (1)
- transformation semigroups (1)
- transformations on infinite set (1)
- transition paths (1)
- trapped surfaces (1)
- travel time tomography (1)
- triply periodic minimal surface (1)
- truncated SVD (1)
- two-dimensional topology (1)
- two-level interacting processes (1)
- tyrosine kinase inhibitors (1)
- ulcerative colitis (1)
- uncertainty (1)
- unendlich teilbare Punktprozesse (1)
- unendliche Teilbarkeit (1)
- uniform compact attractor (1)
- universal Hopf algebra of renormalization (1)
- unknown variance (1)
- unsteady flow (1)
- unzerlegbare Varifaltigkeit (1)
- upper atmosphere model (1)
- vagal sympathetic activity (1)
- value problems (1)
- variable projection method (1)
- variational calculus (1)
- variational iteration method (1)
- variational principle (1)
- varifold (1)
- verification (1)
- vibration (1)
- video study (1)
- viral fitness (1)
- wahrscheinlichkeitserzeugende Funktion (1)
- wave structure (1)
- wavelet analysis (1)
- weak dependence (1)
- weakly almost periodic (1)
- weiche Materie (1)
- weight-based formulations (1)
- weighted (1)
- weighted Hölder spaces (1)
- weighted Sobolev space (1)
- weighted Sobolev spaces (1)
- weighted Sobolev spaces with discrete asymptotics (1)
- weighted edge and corner spaces (1)
- weighted graphs (1)
- weighted spaces with asymptotics (1)
- well-posedness (1)
- zero-noise limit (1)
- zufällige Summe (1)
- η-invariant (1)
- когомологии (1)
- комплекс де Рама (1)
- проблема Неймана (1)
- теория Ходжа (1)
- ∂-operator (1)
Institute
- Institut für Mathematik (2151)
Das Eigene und das Fremde
(2023)
This thesis investigates mathematics teachers' understanding of others (Fremdverstehen) in the classroom. Following the sociologist Alfred Schütz, 'Fremdverstehen' here denotes the process in which a teacher tries to understand a student's behaviour by tracing it back to an experience that may have underlain it. As an essential feature of this process, Schütz's theory of Fremdverstehen emphasizes that a person's understanding of others is always also based on their own experiences. The thesis therefore proceeds in two methodological steps: first, the mathematics-related experiences of two teachers are traced; then their Fremdverstehen in concrete situations in mathematics lessons is reconstructed. In the first sub-study (the reconstruction of the teachers' own experiences), data are collected by means of biographical-narrative interviews in which the teachers are prompted to tell their mathematics-related life stories. These interviews are analyzed in the sense of reconstructive case analysis. Altogether, the first sub-study yields textual accounts of the reconstructed mathematics-related life stories of the teachers under study. In the second sub-study (the reconstruction of the teachers' Fremdverstehen), narrative interviews are conducted in which the teachers recount their Fremdverstehen in concrete situations in mathematics lessons. These interviews are analyzed with a three-step procedure which the author developed specifically for the purpose of reconstructing Fremdverstehen.
At the end of this second sub-study, both the teachers' reconstructed Fremdverstehen in various classroom situations and the structures that emerge within it are presented. By means of theoretical generalization, and based on the results of the second sub-study, statements about five characteristics of teachers' Fremdverstehen in the mathematics classroom in general are finally obtained. With these statements, the thesis provides a first description of how the phenomenon of teachers' Fremdverstehen in the mathematics classroom can take shape.
Zahlen in den Fingern
(2023)
The debate about the use of digital tools in early mathematics education is highly topical. Learning games are being designed with the aim of building up informal mathematical knowledge and thus enabling a better start at school. Yet a digital, game-based presentation alone does not necessarily lead to learning success. It is therefore all the more important to analyze how the theoretical constructs and the possibilities for interacting with these tools are concretely implemented, and to prepare them appropriately.
In this Master's thesis, a mathematical learning game called 'Fingu' for use in preschool education is examined, both theoretically and empirically, within the framework of Artifact-Centric Activity Theory (ACAT). First, the theoretical background on number sense, the acquisition of the number concept, part-whole understanding, the perception and determination of quantities, quantity comparison, and the representation of quantities with fingers according to embodied cognition, as well as the use of digital tools and multi-touch devices, is described comprehensively. The app Fingu is then explained and analyzed theoretically along the ACAT review guide. Finally, a study conducted by the author with ten preschool children is presented, and on this basis scientifically grounded suggestions for improving and further developing the app are made. In conclusion, Fingu can support many processes such as (quasi-)simultaneous apprehension or counting, while others, such as part-whole understanding, still require adaptations and/or the support of adults.
Non-local boundary conditions for the spin Dirac operator on spacetimes with timelike boundary
(2023)
Non-local boundary conditions – for example the Atiyah–Patodi–Singer (APS) conditions – for Dirac operators on Riemannian manifolds are rather well-understood, while not much is known for such operators on Lorentzian manifolds. Recently, Bär and Strohmaier [15] and Drago, Große, and Murro [27] introduced APS-like conditions for the spin Dirac operator on Lorentzian manifolds with spacelike and timelike boundary, respectively. While Bär and Strohmaier [15] showed the Fredholmness of the Dirac operator with these boundary conditions, Drago, Große, and Murro [27] proved the well-posedness of the corresponding initial boundary value problem under certain geometric assumptions.
In this thesis, we will follow in the footsteps of the latter authors and discuss whether the APS-like conditions for Dirac operators on Lorentzian manifolds with timelike boundary can be replaced by more general conditions such that the associated initial boundary value problems are still well-posed.
We consider boundary conditions that are local in time and non-local in the spatial directions. More precisely, we use the spacetime foliation arising from the Cauchy temporal function and split the Dirac operator along this foliation. This gives rise to a family of elliptic operators each acting on spinors of the spin bundle over the corresponding timeslice. The theory of elliptic operators then ensures that we can find families of non-local boundary conditions with respect to this family of operators. Proceeding, we use such a family of boundary conditions to define a Lorentzian boundary condition on the whole timelike boundary. By analyzing the properties of the Lorentzian boundary conditions, we then find sufficient conditions on the family of non-local boundary conditions that lead to the well-posedness of the corresponding Cauchy problems. The well-posedness itself will then be proven by using classical tools including energy estimates and approximation by solutions of the regularized problems.
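The energy estimates invoked here follow a standard pattern for hyperbolic equations; a schematic sketch (generic, not the specific estimate proved in the thesis): one defines an energy of the solution on the time slices of the foliation, derives a differential inequality by integration by parts (using the boundary conditions to control the boundary terms), and closes the argument with Grönwall's lemma,

```latex
E(t) := \int_{\Sigma_t} |u(t,\cdot)|^2 \,\mathrm{d}\mu_t, \qquad
\frac{\mathrm{d}}{\mathrm{d}t}E(t) \le C\,E(t)
\quad\Longrightarrow\quad
E(t) \le e^{Ct}\,E(0).
```

In particular, vanishing initial data force the solution to vanish (uniqueness), and the same bound yields continuous dependence on the data.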
Moreover, we use this theory to construct explicit boundary conditions for the Lorentzian Dirac operator. More precisely, we will discuss two examples of boundary conditions – the analogue of the Atiyah–Patodi–Singer and the chirality conditions, respectively, in our setting. For doing this, we will have a closer look at the theory of non-local boundary conditions for elliptic operators and analyze the requirements on the family of non-local boundary conditions for these specific examples.
This thesis bridges two areas of mathematics: on the one hand algebra, with the Milnor-Moore theorem (also called the Cartier-Quillen-Milnor-Moore theorem) and the Poincaré-Birkhoff-Witt theorem; on the other hand analysis, with Shintani zeta functions, which generalise multiple zeta functions.
The first part is devoted to an algebraic formulation of the locality principle in physics and to generalisations of classification theorems such as the Milnor-Moore and Poincaré-Birkhoff-Witt theorems to the locality framework. The locality principle roughly says that events that take place far apart in spacetime do not influence each other. The algebraic formulation of this principle discussed here is useful when analysing singularities which arise from events located far apart in space, in order to renormalise them while keeping a memory of the fact that they do not influence each other. We start by endowing a vector space with a symmetric relation, named the locality relation, which keeps track of elements that are "locally independent". A vector space together with such a relation is called a pre-locality vector space. This concept is extended to tensor products allowing only tensors made of locally independent elements. We extend this construction to the locality tensor algebra and the locality symmetric algebra of a pre-locality vector space and prove the universal property of each of these structures. We also introduce pre-locality Lie algebras, together with their associated locality universal enveloping algebras, and prove their universal property. We later upgrade all these structures and results from the pre-locality to the locality context, requiring the locality relation to be compatible with the linear structure of the vector space. This allows us to define locality coalgebras, locality bialgebras, and locality Hopf algebras. Finally, all the previous results are used to prove the locality versions of the Milnor-Moore and Poincaré-Birkhoff-Witt theorems. It is worth noticing that the proofs presented not only generalise the results of the usual (non-locality) setup, but also often use fewer tools than their non-locality counterparts.
The second part is devoted to the study of the polar structure of Shintani zeta functions. Such functions, which generalise the Riemann zeta function, multiple zeta functions, and Mordell-Tornheim zeta functions, among others, are parametrised by matrices with real non-negative entries. It is known that Shintani zeta functions extend to meromorphic functions with poles on affine hyperplanes. We refine this result by showing that the poles lie on hyperplanes parallel to the facets of certain convex polyhedra associated to the defining matrix of the Shintani zeta function. Explicitly, the latter are the Newton polytopes of the polynomials induced by the columns of the underlying matrix. We then prove that the coefficients of the equations which describe the hyperplanes in the canonical basis are either zero or one, similar to the poles arising when renormalising generic Feynman amplitudes. For that purpose, we introduce an algorithm to distribute weight over a graph such that the weight at each vertex satisfies a given lower bound.
Point processes are a common methodology for modelling sets of events. From earthquakes to social media posts, from the arrival times of neuronal spikes to the timing of crimes, from stock prices to disease spreading: these phenomena can be reduced to occurrences of events concentrated in points. Often, these events happen one after the other, defining a time series.
Models of point processes can be used to deepen our understanding of such events and for classification and prediction. Such models include an underlying random process that generates the events. This work uses Bayesian methodology to infer the underlying generative process from observed data. Our contribution is twofold: we develop new models and new inference methods for these processes.
We propose a model that extends the family of point processes where the occurrence of an event depends on the previous events. This family is known as Hawkes processes. Whereas in most existing models of such processes, past events are assumed to have only an excitatory effect on future events, we focus on the newly developed nonlinear Hawkes process, where past events could have excitatory and inhibitory effects. After defining the model, we present its inference method and apply it to data from different fields, among others, to neuronal activity.
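For the linear, purely excitatory special case of the Hawkes family discussed above, event times can be simulated with Ogata's thinning algorithm. A minimal sketch (parameter values are illustrative; the nonlinear model of the thesis would additionally pass the intensity through a nonlinear transfer function, which is not done here):

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, t_max, seed=0):
    """Simulate a linear Hawkes process with conditional intensity
    lambda(t) = mu + sum_i alpha * exp(-beta * (t - t_i))
    via Ogata's thinning algorithm."""
    rng = random.Random(seed)
    events = []
    t = 0.0
    while t < t_max:
        # With an exponential kernel, the intensity only decays between
        # events, so its current value bounds it until the next event.
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        t += rng.expovariate(lam_bar)
        if t >= t_max:
            break
        lam_t = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        if rng.random() <= lam_t / lam_bar:
            events.append(t)  # accepted: this event excites future intensity
    return events

# Branching ratio alpha/beta ~ 0.53 < 1, so the process does not explode.
events = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.5, t_max=200.0)
```

The same conditional intensity, evaluated on observed event times, is what enters the likelihood in a Bayesian inference scheme for the model parameters.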
The second model described in the thesis concerns a specific instance of point processes: the decision process underlying human gaze control. This process results in a series of fixated locations in an image. We developed a new model to describe this process, motivated by the well-known exploration-exploitation dilemma. Alongside the model, we present a Bayesian inference algorithm to infer the model parameters.
Remaining in the realm of human scene viewing, we identify the lack of best practices for Bayesian inference in this field. We survey four popular algorithms and compare their performances for parameter inference in two scan path models.
The novel models and inference algorithms presented in this dissertation enrich the understanding of point process data and allow us to uncover meaningful insights.
Übungsbuch zur Stochastik
(2023)
This book provides exercises on the basic concepts and principles of stochastics, together with their solutions. Just as one practices scales in music, one works through exercises in mathematics. In this spirit, this workbook is primarily intended as a template for independent, self-directed learning and practice.
The beauty and uniqueness of probability theory lies in the fact that it can model a multitude of real-world phenomena. The reader will therefore find exercises with connections to geometry, games of chance, actuarial mathematics, demography, and many other topics.
The mathematics subproject SPIES-M aims at a stronger orientation towards the teaching profession and at linking subject matter and subject-matter didactics in university teacher education. New courses were designed for all major content areas of mathematics and implemented in the study regulations of all mathematics teaching degree programmes at the University of Potsdam. For their design, theory-based design principles were worked out which can be used for the design as well as for the evaluation and further development of the courses following the design research approach. The implementation of the design principles is illustrated using the fundamental idea of proportionality as an example, showing how students can be enabled to generate pedagogical content knowledge from subject-matter content. The development of the students' professional knowledge is examined with a variety of instruments in order to draw conclusions about the effectiveness of the newly designed courses. The mixed-methods investigations draw on observations in courses, specially designed knowledge tests, group interviews, lesson plans from practical phases, and learning diaries. The students' perspective is captured through surveys on the perceived (professional) relevance of the courses. A further essential element of the accompanying research is collegial supervision by so-called 'spies', who observe the courses along defined criteria and afterwards reflect on them together with the lecturers. The results obtained so far are presented here and discussed with regard to their implications. The design principles developed in the project, as a tool for design and evaluation, and the 'spies' concept of collegial supervision are proposed for transfer to the quality development of university courses.
Amoeboid cell motility takes place in a variety of biomedical processes such as cancer metastasis, embryonic morphogenesis, and wound healing. In contrast to other forms of cell motility, it is mainly driven by substantial cell shape changes. Based on the interplay of explorative membrane protrusions at the front and a slower-acting membrane retraction at the rear, the cell moves in a crawling kind of way. Underlying these protrusions and retractions are multiple physiological processes resulting in changes of the cytoskeleton, a meshwork of different multi-functional proteins. The complexity and versatility of amoeboid cell motility raise the need for novel computational models based on a profound theoretical framework to analyze and simulate the dynamics of the cell shape.
The objective of this thesis is the development of (i) a mathematical framework to describe contour dynamics in time and space, (ii) a computational model to infer expansion and retraction characteristics of individual cell tracks and to produce realistic contour dynamics, and (iii) a complementing Open Science approach to make the above methods fully accessible and easy to use.
In this work, we mainly used single-cell recordings of the model organism Dictyostelium discoideum. Based on stacks of segmented microscopy images, we apply a Bayesian approach to obtain smooth representations of the cell membrane, so-called cell contours. We introduce a one-parameter family of regularized contour flows to track reference points on the contour (virtual markers) in time and space. This way, we define a coordinate system to visualize local geometric and dynamic quantities of individual contour dynamics in so-called kymograph plots. In particular, we introduce the local marker dispersion as a measure to identify membrane protrusions and retractions in a fully automated way.
This mathematical framework is the basis of a novel contour dynamics model, which consists of three biophysiologically motivated components: one stochastic term, accounting for membrane protrusions, and two deterministic terms to control the shape and area of the contour, which account for membrane retractions. Our model provides a fully automated approach to infer protrusion and retraction characteristics from experimental cell tracks while being also capable of simulating realistic and qualitatively different contour dynamics. Furthermore, the model is used to classify two different locomotion types: the amoeboid and a so-called fan-shaped type.
With the complementing Open Science approach, we ensure a high standard regarding the usability of our methods and the reproducibility of our research. In this context, we introduce our software publication named AmoePy, an open-source Python package to segment, analyze, and simulate amoeboid cell motility. Furthermore, we describe measures to improve its usability and extensibility, e.g., by detailed run instructions and an automatically generated source code documentation, and to ensure its functionality and stability, e.g., by automatic software tests, data validation, and a hierarchical package structure.
The mathematical approaches of this work provide substantial improvements regarding the modeling and analysis of amoeboid cell motility. We deem the above methods, due to their generalized nature, to be of greater value for other scientific applications, e.g., varying organisms and experimental setups or the transition from unicellular to multicellular movement. Furthermore, we enable other researchers from different fields, i.e., mathematics, biophysics, and medicine, to apply our mathematical methods. By following Open Science standards, this work is of greater value for the cell migration community and a potential role model for other Open Science contributions.
We present a Reduced Order Model (ROM) which exploits recent developments in Physics Informed Neural Networks (PINNs) for solving inverse problems for the Navier-Stokes equations (NSE). The proposed approach assumes the availability of simulated data for the fluid dynamics fields. A POD-Galerkin ROM is then constructed by applying POD to the snapshot matrices of the fluid fields and performing a Galerkin projection of the NSE (or the modified equations in the case of turbulence modeling) onto the POD reduced basis. A POD-Galerkin PINN ROM is then derived by introducing deep neural networks which approximate the reduced outputs, with the inputs being time and/or parameters of the model. The neural networks incorporate the physical equations (the POD-Galerkin reduced equations) into their structure as part of the loss function. Using this approach, the reduced model is able to approximate unknown parameters such as physical constants or boundary conditions. The applicability of the proposed ROM is demonstrated on three cases: the steady flow over a backward-facing step, the flow around a circular cylinder, and the unsteady turbulent flow around a surface-mounted cubic obstacle.
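The POD step of the construction above reduces to a thin SVD of the snapshot matrix; a minimal NumPy sketch on synthetic rank-two "snapshots" (the data and the 99.9% energy threshold are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """POD modes of a snapshot matrix (n_dof x n_snapshots): left singular
    vectors, truncated to the smallest rank r whose squared singular
    values capture the requested energy fraction."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy) + 1)
    return U[:, :r]

# Synthetic snapshots: two separable space-time structures plus tiny noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 100)
t = np.linspace(0.0, 1.0, 30)
snaps = (np.outer(np.sin(np.pi * x), np.cos(2 * np.pi * t))
         + 0.5 * np.outer(np.sin(2 * np.pi * x), np.sin(2 * np.pi * t))
         + 1e-6 * rng.standard_normal((100, 30)))

basis = pod_basis(snaps)          # two modes suffice here
coeffs = basis.T @ snaps          # reduced (Galerkin) coordinates
recon = basis @ coeffs            # lift back to the full space
```

In the ROM itself, the governing equations are projected onto such a basis, and the PINN learns the map from time and/or parameters to the reduced coordinates.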
Introduction:
Hydrocortisone is the standard of care in cortisol replacement therapy for congenital adrenal hyperplasia patients. Challenges in mimicking cortisol circadian rhythm and dosing individualization can be overcome by the support of mathematical modelling. Previously, a non-linear mixed-effects (NLME) model was developed based on clinical hydrocortisone pharmacokinetic (PK) pediatric and adult data. Additionally, a physiologically-based pharmacokinetic (PBPK) model was developed for adults and a pediatric model was obtained using maturation functions for relevant processes. In this work, a middle-out approach was applied. The aim was to investigate whether PBPK-derived maturation functions could provide a better description of hydrocortisone PK inter-individual variability when implemented in the NLME framework, with the goal of providing better individual predictions towards precision dosing at the patient level.
Methods:
Hydrocortisone PK data from 24 adrenal insufficiency pediatric patients and 30 adult healthy volunteers were used for NLME model development, while the PBPK model and maturation functions of clearance and cortisol binding globulin (CBG) were developed based on previous studies published in the literature.
Results:
Clearance (CL) estimates from both approaches were similar for children older than 1 year (CL/F increasing from around 150 L/h to 500 L/h), while CBG concentrations differed across the whole age range (CBG_NLME stable around 0.5 µM vs. a steady increase from 0.35 to 0.8 µM for CBG_PBPK). PBPK-derived maturation functions were subsequently included in the NLME model. After inclusion of the maturation functions, none, some, or all parameters were re-estimated. However, the inclusion of CL and/or CBG maturation functions in the NLME model did not result in improved model performance for the CL maturation function (ΔOFV > -15.36), and the re-estimation of parameters using the CBG maturation function most often led to unstable models or biased individual CL predictions.
Discussion:
Three explanations for the observed discrepancies can be postulated: (i) unaccounted-for maturation of processes such as absorption or the first-pass effect, (ii) the lack of patients between 1 and 12 months of age, and (iii) the lack of correction of the PBPK CL maturation functions, derived from urinary concentration ratio data, for renal function relative to adults. These should be investigated in the future to determine how NLME and PBPK methods can work together towards deriving insights into pediatric hydrocortisone PK.
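Maturation functions of the kind discussed here are commonly sigmoidal Emax (Hill) models of postmenstrual age; a generic sketch (the parameter values below are illustrative placeholders, not the estimates of this study):

```python
def maturation_fraction(pma_weeks, tm50=47.7, hill=3.4):
    """Fraction of the adult process (e.g., clearance) reached at a given
    postmenstrual age in weeks, as a sigmoidal Emax (Hill) model.
    tm50 is the age at half-maturation; values here are illustrative."""
    return pma_weeks**hill / (tm50**hill + pma_weeks**hill)

# Maturation rises from near 0 at birth towards 1 in adulthood.
half = maturation_fraction(47.7)        # exactly 0.5 at tm50
adult = maturation_fraction(20 * 52.0)  # essentially fully matured
```

In an NLME model, such a factor would typically multiply the typical adult clearance alongside allometric weight scaling.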
The Gutenberg-Richter (GR) and the Omori-Utsu (OU) laws describe the energy release and temporal clustering of earthquakes and are thus of great importance for seismic hazard assessment. Motivated by experimental results indicating stress-dependent parameters, we consider a combined global data set of 127 main shock-aftershock sequences and perform a systematic study of the relationship between main shock-induced stress changes and the associated seismicity patterns. For this purpose, we calculate space-dependent Coulomb stress changes (ΔCFS) and alternative receiver-independent stress metrics in the surroundings of the main shocks. Our results indicate a clear positive correlation between the GR b-value and the induced stress, contrasting with expectations from laboratory experiments and suggesting a crucial role of structural heterogeneity and strength variations. Furthermore, we demonstrate that the aftershock productivity increases nonlinearly with stress, while the OU parameters c and p systematically decrease with increasing stress changes. Our partly unexpected findings can have an important impact on future estimations of aftershock hazard.
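The GR b-value analyzed here is typically estimated with Aki's maximum-likelihood formula, with Utsu's half-bin correction when magnitudes are binned; a sketch on a synthetic catalog (simulated data, not the study's; above the completeness magnitude Mc, GR magnitudes are exponentially distributed):

```python
import math
import random

def b_value_aki(mags, m_c, dm=0.0):
    """Maximum-likelihood b-value (Aki 1965) from magnitudes >= m_c,
    with Utsu's correction dm/2 for magnitudes binned to width dm
    (dm=0 for continuous magnitudes)."""
    m = [x for x in mags if x >= m_c]
    mean_m = sum(m) / len(m)
    return math.log10(math.e) / (mean_m - (m_c - dm / 2.0))

# Synthetic GR catalog with b = 1: exceedances above Mc are
# exponential with rate b * ln(10).
rng = random.Random(42)
b_true, m_c = 1.0, 2.0
mags = [m_c + rng.expovariate(b_true * math.log(10.0)) for _ in range(20000)]
b_hat = b_value_aki(mags, m_c)
```

With a catalog of this size the estimator recovers b close to its true value; stress dependence would appear as systematic b-value differences between subcatalogs binned by ΔCFS.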
This paper deals with the long-term behavior of positive operator semigroups on spaces of bounded functions and of signed measures, which have applications to parabolic equations with unbounded coefficients and to stochastic analysis. The main results are a Tauberian-type theorem characterizing the convergence to equilibrium of strong Feller semigroups and a generalization of a classical convergence theorem of Doob. Neither of these results requires any kind of time regularity of the semigroup.
Deriving mechanism-based pharmacodynamic models by reducing quantitative systems pharmacology models
(2023)
Quantitative systems pharmacology (QSP) models integrate comprehensive qualitative and quantitative knowledge about pharmacologically relevant processes. We previously proposed a first approach to leverage the knowledge in QSP models to derive simpler, mechanism-based pharmacodynamic (PD) models. Their complexity, however, is typically still too large to be used in the population analysis of clinical data. Here, we extend the approach beyond state reduction to also include the simplification of reaction rates, elimination of reactions, and analytic solutions. We additionally ensure that the reduced model maintains a prespecified approximation quality not only for a reference individual but also for a diverse virtual population. We illustrate the extended approach for the warfarin effect on blood coagulation. Using the model-reduction approach, we derive a novel small-scale warfarin/international normalized ratio model and demonstrate its suitability for biomarker identification. Due to the systematic nature of the approach in comparison with empirical model building, the proposed model-reduction algorithm provides an improved rationale to build PD models also from QSP models in other applications.
Cell-level systems biology model to study inflammatory bowel diseases and their treatment options
(2023)
To help understand the complex and therapeutically challenging inflammatory bowel diseases (IBDs), we developed a systems biology model of the intestinal immune system that is able to describe the main aspects of IBD and different treatment modalities thereof. The model, including key cell types and processes of the mucosal immune response, compiles a large amount of isolated experimental findings from the literature into a larger context and allows for simulations of different inflammation scenarios based on the underlying data and assumptions. In the context of a large and diverse virtual IBD population, we characterized the patients based on their phenotype (in contrast to healthy individuals, they developed persistent inflammation after a trigger event) rather than on a priori assumptions about parameter differences to a healthy individual. This allowed us to reproduce the enormous diversity of predispositions known to lead to IBD. Analyzing different treatment effects, the model provides insight into the characteristics of individual drug therapies. We illustrate for anti-TNF-alpha therapy how the model can be used (i) to decide on alternative treatments with the best prospects in the case of nonresponse, and (ii) to identify promising combination therapies with other available treatment options.
According to Radzikowski’s celebrated results, bisolutions of a wave operator on a globally hyperbolic spacetime are of the Hadamard form iff they are given by a linear combination of distinguished parametrices, (i/2)(G̃_aF − G̃_F + G̃_A − G̃_R), in the sense of Duistermaat and Hörmander [Acta Math. 128, 183–269 (1972)] and Radzikowski [Commun. Math. Phys. 179, 529 (1996)]. Inspired by the construction of the corresponding advanced and retarded Green operators G_A, G_R by Bär, Ginoux, and Pfäffle [Wave Equations on Lorentzian Manifolds and Quantization (European Mathematical Society (EMS), Zürich, 2007)], we construct the remaining two Green operators G_F, G_aF locally in terms of Hadamard series. Afterward, we provide the global construction of (i/2)(G̃_aF − G̃_F), which relies on new techniques such as a well-posed Cauchy problem for bisolutions and a patching argument using Čech cohomology. This leads to global bisolutions of the Hadamard form, each of which can be chosen to be a Hadamard two-point function, i.e., the smooth part can be adapted such that, additionally, the symmetry and the positivity condition are exactly satisfied.
Extreme value statistics is a popular and frequently used tool to model the occurrence of large earthquakes. The problem of poor statistics arising from rare events is addressed by taking advantage of the validity of general statistical properties in asymptotic regimes. In this note, I argue that using extreme value statistics to model the tail of the frequency-magnitude distribution of earthquakes in practice can produce biased and thus misleading results, because it is unknown to what degree the tail of the true distribution is sampled by the data. Using synthetic data allows us to quantify this bias in detail. The implicit assumption that the true M_max is close to the maximum observed magnitude M_max,observed restricts the class of potential models a priori to those with M_max = M_max,observed + ΔM, with an increment ΔM ≈ 0.5–1.2. This corresponds to the simple heuristic method suggested by Wheeler (2009), labeled "M_max equals M_obs plus an increment." This incomplete consideration of the entire model family for the frequency-magnitude distribution neglects, however, the scenario of a large, so far unobserved earthquake.
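The bias described above can be illustrated with a small synthetic experiment (a sketch under assumed values: a doubly truncated Gutenberg-Richter law with b = 1, m_min = 4.0 and true M_max = 8.5; the function name and catalogue size are ours, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gr_magnitudes(n, m_min=4.0, m_max=8.5, b=1.0):
    """Draw n magnitudes from a doubly truncated Gutenberg-Richter law."""
    beta = b * np.log(10.0)
    u = rng.uniform(size=n)
    # inverse CDF of the truncated exponential on [m_min, m_max]
    c = 1.0 - np.exp(-beta * (m_max - m_min))
    return m_min - np.log(1.0 - u * c) / beta

# With a finite catalogue, the largest observed magnitude typically
# falls well short of the true upper bound m_max = 8.5.
mags = sample_gr_magnitudes(1000)
m_obs = mags.max()
print(f"max observed: {m_obs:.2f}, gap to true M_max: {8.5 - m_obs:.2f}")
```

Rerunning with larger catalogues shows the gap closing only slowly (the expected maximum grows roughly like m_min + log10(n)/b), which is why the maximum observed magnitude is a biased estimator of the true upper bound.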
In this paper, we examine the conditioning of the discretization of the Helmholtz problem. Although the discrete Helmholtz problem has been studied from different perspectives, to the best of our knowledge, there is no conditioning analysis for it. We aim to fill this gap in the literature. We propose a novel method in 1D to observe the near-zero eigenvalues of a symmetric indefinite matrix. The standard classification of ill-conditioning based on the matrix condition number does not hold for the discrete Helmholtz problem. We relate the ill-conditioning of the discretization of the Helmholtz problem to the condition number of the matrix. We carry out an analytical conditioning analysis in 1D and extend our observations to 2D with numerical experiments. We examine several discretizations. We find different regions in which the condition number of the problem shows different characteristics. We also explain the general behavior of the solutions in these regions.
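The near-resonance effect behind the near-zero eigenvalues can be sketched in 1D with a standard second-order finite-difference discretization (illustrative only; the discretizations and parameters analysed in the paper may differ):

```python
import numpy as np

def helmholtz_matrix_1d(n, k):
    """Symmetric FD discretization of -u'' - k^2 u on (0, 1), Dirichlet BCs."""
    h = 1.0 / (n + 1)
    main = np.full(n, 2.0 / h**2 - k**2)
    off = np.full(n - 1, -1.0 / h**2)
    return np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

# The matrix is symmetric indefinite once k^2 exceeds the smallest
# Laplacian eigenvalue; conditioning degrades whenever k^2 comes close
# to an eigenvalue (j*pi)^2 of the continuous operator (k = 3.1 is
# close to pi, hence a near-zero matrix eigenvalue).
for k in (1.0, 3.1, 10.0):
    A = helmholtz_matrix_1d(200, k)
    eigs = np.linalg.eigvalsh(A)
    cond = np.abs(eigs).max() / np.abs(eigs).min()
    print(f"k = {k:5.1f}  cond(A) = {cond:.2e}")
```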
An explicit Dobrushin uniqueness region for Gibbs point processes with repulsive interactions
(2022)
We present a uniqueness result for Gibbs point processes with interactions that come from a non-negative pair potential; in particular, we provide an explicit uniqueness region in terms of activity z and inverse temperature beta. The technique used relies on applying to the continuous setting the classical Dobrushin criterion. We also present a comparison to the two other uniqueness methods of cluster expansion and disagreement percolation, which can also be applied for this type of interaction.
Symmetric, elegantly entangled structures are curious mathematical constructions that have found their way into the heart of the chemistry lab and the toolbox of constructive geometry. Of particular interest are those structures—knots, links and weavings—which are composed locally of simple twisted strands and are globally symmetric. This paper considers the symmetric tangling of multiple 2-periodic honeycomb networks. We do this using a constructive methodology borrowing elements of graph theory, low-dimensional topology and geometry. The result is a wide-ranging enumeration of symmetric tangled honeycomb networks, providing a foundation for their exploration in both the chemistry lab and the geometer's toolbox.
Conventional embeddings of the edge-graphs of Platonic polyhedra, {f, z}, where f and z denote the number of edges in each face and the edge-valence at each vertex, respectively, are untangled, in that they can be placed on a sphere (S^2) such that distinct edges do not intersect, analogous to unknotted loops, which allow crossing-free drawings of S^1 on the sphere. The most symmetric (flag-transitive) realizations of those polyhedral graphs are those of the classical Platonic polyhedra, whose symmetries are *2fz, in Conway's two-dimensional (2D) orbifold notation (equivalent to the Schönflies symbols I_h, O_h, and T_d). Tangled Platonic {f, z} polyhedra, which cannot lie on the sphere without edge-crossings, are constructed as windings of helices with three, five, seven, ... strands on multigenus surfaces formed by tubifying the edges of conventional Platonic polyhedra. They have (chiral) symmetries 2fz (I, O, and T), and their vertices, edges, and faces are symmetrically identical, realized with two flags. The analysis extends to the "theta_z" polyhedra, {2, z}. The vertices of these symmetric tangled polyhedra overlap with those of the Platonic polyhedra; however, their helicity requires curvilinear (or kinked) edges in all but one case. We show that these 2fz polyhedral tangles are maximally symmetric; more symmetric embeddings are necessarily untangled. On one hand, their topologies are very constrained: they are either self-entangled graphs (analogous to knots) or mutually catenated entangled compound polyhedra (analogous to links). On the other hand, an endless variety of entanglements can be realized for each topology. Simpler examples resemble patterns observed in synthetic organometallic materials and clathrin coats in vivo.
Subdividing space through interfaces leads to many space partitions that are relevant to soft matter self-assembly. Prominent examples include cellular media, e.g. soap froths, which are bubbles of air separated by interfaces of soap and water, but also more complex partitions such as bicontinuous minimal surfaces.
Using computer simulations, this thesis analyses soft matter systems in terms of the relationship between the physical forces between the system's constituents and the structure of the resulting interfaces or partitions. The focus is on two systems, copolymeric self-assembly and the so-called Quantizer problem, where the driving force of structure formation, the minimisation of the free-energy, is an interplay of surface area minimisation and stretching contributions, favouring cells of uniform thickness.
In the first part of the thesis we address copolymeric phase formation with sharp interfaces. We analyse a columnar copolymer system "forced" to assemble on a spherical surface, where the perfect solution, the hexagonal tiling, is topologically prohibited. For a system of three-armed copolymers, the resulting structure is described by solutions of the so-called Thomson problem, the search of minimal energy configurations of repelling charges on a sphere. We find three intertwined Thomson problem solutions on a single sphere, occurring at a probability depending on the radius of the substrate.
We then investigate the formation of amorphous and crystalline structures in the Quantizer system, a particulate model with an energy functional without surface tension that favours spherical cells of equal size. We find that quasi-static equilibrium cooling allows the Quantizer system to crystallise into a BCC ground state, whereas quenching and non-equilibrium cooling, i.e. cooling at slower rates than quenching, lead to an approximately hyperuniform, amorphous state. The assumed universality of the latter, i.e. its independence of the energy minimisation method or the initial configuration, is strengthened by our results. We expand the Quantizer system by introducing interface tension, creating a model that we find to mimic polymeric micelle systems: an order-disorder phase transition is observed, with a stable Frank-Kasper phase.
The second part considers bicontinuous partitions of space into two network-like domains, and introduces an open-source tool for the identification of structures in electron microscopy images. We expand a method of matching experimentally accessible projections with computed projections of potential structures, introduced by Deng and Mieczkowski (1998). The computed structures are modelled using nodal representations of constant-mean-curvature surfaces. A case study conducted on etioplast cell membranes in chloroplast precursors establishes the double Diamond surface structure to be dominant in these plant cells. We automate the matching process employing deep-learning methods, which manage to identify structures with excellent accuracy.
The echo chamber model describes the development of groups in heterogeneous social networks. By heterogeneous social network we mean a set of individuals, each of whom represents exactly one opinion. The existing relationships between individuals can then be represented by a graph. The echo chamber model is a time-discrete model which, like a board game, is played in rounds. In each round, an existing relationship is randomly and uniformly selected from the network and the two connected individuals interact. If the opinions of the individuals involved are sufficiently similar, they continue to move closer together in their opinions, whereas in the case of opinions that are too far apart, they break off their relationship and one of the individuals seeks a new relationship. In this paper we examine the building blocks of this model. We start from the observation that changes in the structure of relationships in the network can be described by a system of interacting particles in a more abstract space.
These reflections lead to the definition of a new abstract graph that encompasses all possible relational configurations of the social network. This provides us with the geometric understanding necessary to analyse the dynamic components of the echo chamber model in Part III. As a first step, in Chapter 7, we leave aside the opinions of the individuals and assume that the position of the edges changes with each move as described above, in order to obtain a basic understanding of the underlying dynamics. Using Markov chain theory, we find upper bounds on the speed of convergence of an associated Markov chain to its unique stationary distribution, and we show that there are mutually identifiable networks that are indistinguishable under the dynamics analysed, in the sense that the stationary distribution of the associated Markov chain gives equal weight to these networks.
In the reversible cases, we focus in particular on the explicit form of the stationary distribution as well as on the lower bounds of the Cheeger constant to describe the convergence speed.
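As a toy illustration of the Markov-chain quantities used here (stationary distribution and spectral gap), consider a small reversible chain; the transition matrix below is hypothetical and stands in for the edge dynamics only schematically:

```python
import numpy as np

# Toy reversible chain: a lazy random walk on a three-vertex path graph.
P = np.array([
    [0.5, 0.5, 0.0],
    [0.25, 0.5, 0.25],
    [0.0, 0.5, 0.5],
])

# Stationary distribution: left eigenvector of P for eigenvalue 1,
# normalized to sum to one.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()

# The spectral gap 1 - |lambda_2| bounds the speed of convergence
# to the stationary distribution.
lams = np.sort(np.abs(np.real(w)))[::-1]
gap = 1.0 - lams[1]
print("stationary:", np.round(pi, 3), " spectral gap:", round(gap, 3))
```

For reversible chains, Cheeger-constant bounds as used in the thesis control exactly this gap.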
The final result of Chapter 8, based on absorbing Markov chains, shows that in a reduced version of the echo chamber model, a hierarchical structure of the number of conflicting relations can be identified.
We can use this structure to determine an upper bound on the expected absorption time, using a quasi-stationary distribution. This hierarchical structure also provides a bridge to classical theories of pure death processes. We conclude by showing how future research can exploit this link and by discussing the importance of the results as building blocks for a full theoretical understanding of the echo chamber model. Finally, Part IV presents a published paper on the birth-death process with partial catastrophe. The paper is based on the explicit calculation of the first moment of a catastrophe. The first part rests entirely on an analytical treatment of second-degree recurrences with linear coefficients; we prove that the resulting sequence converges to 0 and establish its speed of convergence. We then determine upper bounds on the expected value of the population size and on its variance, as well as on the difference between the determined upper bound and the actual expected value. For these results we use almost exclusively the theory of ordinary nonlinear differential equations.
The geomagnetic main field is vital for life on Earth, as it shields our habitat against the solar wind and cosmic rays. It is generated by the geodynamo in the Earth’s outer core and has rich dynamics on various timescales. Global models of the field are used to study the interaction of the field and incoming charged particles, but also to infer core dynamics and to feed numerical simulations of the geodynamo. Modern satellite missions, such as the SWARM or the CHAMP mission, support high-resolution reconstructions of the global field. From the 19th century on, a global network of magnetic observatories has been established. It has been growing ever since, and global models can be constructed from the data it provides. Geomagnetic field models that extend further back in time rely on indirect observations of the field, i.e. thermoremanent records such as burnt clay or volcanic rocks, and sediment records from lakes and seas. These indirect records come with (partially very large) uncertainties, introduced by the complex measurement methods and the dating procedure.
Focusing on thermoremanent records only, the aim of this thesis is the development of a new modeling strategy for the global geomagnetic field during the Holocene, which takes the uncertainties into account and produces realistic estimates of the reliability of the model. This aim is approached by first considering snapshot models, in order to address the irregular spatial distribution of the records and the non-linear relation of the indirect observations to the field itself. In a Bayesian setting, a modeling algorithm based on Gaussian process regression is developed and applied to binned data. The modeling algorithm is then extended to the temporal domain and expanded to incorporate dating uncertainties. Finally, the algorithm is sequentialized to deal with numerical challenges arising from the size of the Holocene dataset.
The central result of this thesis, including all of the aspects mentioned, is a new global geomagnetic field model. It covers the whole Holocene, back until 12000 BCE, and we call it ArchKalmag14k. When considering the uncertainties that are produced together with the model, it is evident that before 6000 BCE the thermoremanent database is not sufficient to support global models. For times more recent, ArchKalmag14k can be used to analyze features of the field under consideration of posterior uncertainties. The algorithm for generating ArchKalmag14k can be applied to different datasets and is provided to the community as an open source python package.
The index theorem for elliptic operators on a closed Riemannian manifold by Atiyah and Singer has many applications in analysis, geometry and topology, but it is not suitable for a generalization to a Lorentzian setting.
In the case where a boundary is present, Atiyah, Patodi and Singer provide an index theorem for compact Riemannian manifolds by introducing non-local boundary conditions obtained via the spectral decomposition of an induced boundary operator, so-called APS boundary conditions. Bär and Strohmaier prove a Lorentzian version of this index theorem for the Dirac operator on a manifold with boundary by utilizing results from APS and the characterization of the spectral flow by Phillips. In their case the Lorentzian manifold is assumed to be globally hyperbolic and spatially compact, and the induced boundary operator is given by the Riemannian Dirac operator on a spacelike Cauchy hypersurface. Their results show that imposing APS boundary conditions for this boundary operator yields a Fredholm operator with a smooth kernel, whose index can be calculated by a formula similar to the Riemannian case.
Back in the Riemannian setting, Bär and Ballmann provide an analysis of the most general kind of boundary conditions that can be imposed on a first order elliptic differential operator that will still yield regularity for solutions as well as Fredholm property for the resulting operator. These boundary conditions can be thought of as deformations to the graph of a suitable operator mapping APS boundary conditions to their orthogonal complement.
This thesis aims at applying the boundary conditions found by Bär and Ballmann to a Lorentzian setting in order to understand more general types of boundary conditions for the Dirac operator, preserving the Fredholm property as well as providing regularity results and relative index formulas for the resulting operators. As it turns out, there are some differences in applying these graph-type boundary conditions to the Lorentzian Dirac operator when compared to the Riemannian setting. It will be shown that, in contrast to the Riemannian case, passing from a Fredholm boundary condition to its orthogonal complement works out fine in the Lorentzian setting. On the other hand, in order to deduce the Fredholm property and regularity of solutions for graph-type boundary conditions, additional assumptions on the deformation maps need to be made.
The thesis is organized as follows. In chapter 1 basic facts about Lorentzian and Riemannian spin manifolds, their spinor bundles and the Dirac operator are listed. These will serve as a foundation to define the setting and prove the results of later chapters.
Chapter 2 defines the general notion of boundary conditions for the Dirac operator used in this thesis and introduces the APS boundary conditions as well as their graph-type deformations. The role of the wave evolution operator in finding Fredholm boundary conditions is also analyzed, and these boundary conditions are connected to the notion of Fredholm pairs in a given Hilbert space.
Chapter 3 focuses on the principal symbol calculation of the wave evolution operator, and the results are used to prove the Fredholm property as well as regularity of solutions for suitable graph-type boundary conditions. Sufficient conditions are also derived for (pseudo-)local boundary conditions imposed on the Dirac operator to yield a Fredholm operator with a smooth solution space.
In the final chapter, Chapter 4, a few examples of boundary conditions are calculated by applying the results of the previous chapters. By restricting to special geometries and/or boundary conditions, results can be obtained that are not covered by the more general statements, and it is shown that so-called transmission conditions behave very differently than in the Riemannian setting.
We study boundary value problems for first-order elliptic differential operators on manifolds with compact boundary. The adapted boundary operator need not be selfadjoint and the boundary condition need not be pseudo-local. We show the equivalence of various characterisations of elliptic boundary conditions and demonstrate how the boundary conditions traditionally considered in the literature fit in our framework. The regularity of the solutions up to the boundary is proven. We show that imposing elliptic boundary conditions yields a Fredholm operator if the manifold is compact. We provide examples which are conveniently treated by our methods.
Dynamical models make specific assumptions about the cognitive processes that generate human behavior. In data assimilation, these models are tested against time-ordered data. Recent progress on Bayesian data assimilation demonstrates that this approach combines the strengths of statistical modeling of individual differences with those of dynamical cognitive models.
We discuss Neumann problems for self-adjoint Laplacians on (possibly infinite) graphs. Under the assumption that the heat semigroup is ultracontractive we discuss the unique solvability for non-empty subgraphs with respect to the vertex boundary and provide analytic and probabilistic representations for Neumann solutions. A second result deals with Neumann problems on canonically compactifiable graphs with respect to the Royden boundary and provides conditions for unique solvability and analytic and probabilistic representations.
We show that local deformations, near closed subsets, of solutions to open partial differential relations can be extended to global deformations, provided all but the highest derivatives stay constant along the subset. The applicability of this general result is illustrated by a number of examples, dealing with convex embeddings of hypersurfaces, differential forms, and lapse functions in Lorentzian geometry.
The main application is a general approximation result by sections that have very restrictive local properties on open dense subsets. This shows, for instance, that given any K ∈ ℝ, every manifold of dimension at least 2 carries a complete C^{1,1}-metric which, on a dense open subset, is smooth with constant sectional curvature K. Of course, this is impossible for C^2-metrics in general.
A rigorous construction of the supersymmetric path integral associated to a compact spin manifold
(2022)
We give a rigorous construction of the path integral in N = 1/2 supersymmetry as an integral map for differential forms on the loop space of a compact spin manifold. It is defined on the space of differential forms which can be represented by extended iterated integrals in the sense of Chen and Getzler-Jones-Petrack. Via the iterated integral map, we compare our path integral to the non-commutative loop space Chern character of Güneysu and the second author. Our theory provides a rigorous background to various formal proofs of the Atiyah-Singer index theorem for twisted Dirac operators using supersymmetric path integrals, as investigated by Alvarez-Gaumé, Atiyah, Bismut and Witten.
We propose a global geomagnetic field model for the last 14 thousand years, based on thermoremanent records. We call the model ArchKalmag14k. ArchKalmag14k is constructed by modifying recently proposed algorithms, based on space-time correlations. Due to the amount of data and complexity of the model, the full Bayesian posterior is numerically intractable. To tackle this, we sequentialize the inversion by implementing a Kalman-filter with a fixed time step. Every step consists of a prediction, based on a degree dependent temporal covariance, and a correction via Gaussian process regression. Dating errors are treated via a noisy input formulation. Cross correlations are reintroduced by a smoothing algorithm and model parameters are inferred from the data. Due to the specific statistical nature of the proposed algorithms, the model comes with space and time-dependent uncertainty estimates. The new model ArchKalmag14k shows less variation in the large-scale degrees than comparable models. Local predictions represent the underlying data and agree with comparable models, if the location is sampled well. Uncertainties are bigger for earlier times and in regions of sparse data coverage. We also use ArchKalmag14k to analyze the appearance and evolution of the South Atlantic anomaly together with reverse flux patches at the core-mantle boundary, considering the model uncertainties. While we find good agreement with earlier models for recent times, our model suggests a different evolution of intensity minima prior to 1650 CE. In general, our results suggest that prior to 6000 BCE the data is not sufficient to support global models.
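The predict/correct structure of the sequentialized algorithm can be sketched with a generic linear-Gaussian Kalman step (a minimal sketch; the actual ArchKalmag14k operators, covariances and noisy-input treatment are far richer than the illustrative matrices used here):

```python
import numpy as np

def kalman_step(m, P, y, F, Q, H, R):
    """One predict/correct cycle of a linear-Gaussian Kalman filter."""
    # Prediction: propagate mean and covariance with dynamics F and noise Q.
    m_pred = F @ m
    P_pred = F @ P @ F.T + Q
    # Correction: Gaussian conditioning on the observation y (H maps state
    # to observation space, R is the observation-error covariance).
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    m_new = m_pred + K @ (y - H @ m_pred)
    P_new = (np.eye(len(m)) - K @ H) @ P_pred
    return m_new, P_new

# Illustrative 1D state tracked from noisy observations.
m, P = np.zeros(1), np.eye(1)
F, Q, H, R = np.eye(1) * 0.9, np.eye(1) * 0.1, np.eye(1), np.eye(1) * 0.5
for y in ([0.8], [1.1], [0.9]):
    m, P = kalman_step(m, P, np.array(y), F, Q, H, R)
print("posterior mean:", m, "posterior variance:", P)
```

The posterior covariance P carried along by the filter is what supplies the space- and time-dependent uncertainty estimates mentioned above.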
We introduce the class of "smooth rough paths" and study their main properties. Working in a smooth setting allows us to discard sewing arguments and focus on algebraic and geometric aspects. Specifically, a Maurer-Cartan perspective is the key to a purely algebraic form of Lyons' extension theorem, the renormalization of rough paths following up on [Bruned et al.: A rough path perspective on renormalization, J. Funct. Anal. 277(11), 2019], as well as a related notion of "sum of rough paths". We first develop our ideas in a geometric rough path setting, as this best resonates with recent works on signature varieties, as well as with the renormalization of geometric rough paths. We then explore extensions to the quasi-geometric and the more general Hopf algebraic setting.
Randomised one-step time integration methods for deterministic operator differential equations
(2022)
Uncertainty quantification plays an important role in problems that involve inferring a parameter of an initial value problem from observations of the solution. Conrad et al. (Stat Comput 27(4):1065-1082, 2017) proposed randomisation of deterministic time integration methods as a strategy for quantifying uncertainty due to the unknown time discretisation error. We consider this strategy for systems that are described by deterministic, possibly time-dependent operator differential equations defined on a Banach space or a Gelfand triple. Our main results are strong error bounds on the random trajectories measured in Orlicz norms, proven under a weaker assumption on the local truncation error of the underlying deterministic time integration method. Our analysis establishes the theoretical validity of randomised time integration for differential equations in infinite-dimensional settings.
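The randomisation strategy of Conrad et al. can be sketched, in finite dimensions, as an Euler method with additive Gaussian perturbations; the h^{3/2} noise scaling is one common calibration matching Euler's local truncation error, and the function names are ours:

```python
import numpy as np

def randomised_euler(f, x0, h, n_steps, scale=1.0, rng=None):
    """Euler method with additive Gaussian perturbations per step.

    Each step adds noise with standard deviation scale * h**1.5, a common
    calibration for the local truncation error of Euler's method.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(n_steps):
        x = xs[-1]
        noise = scale * h**1.5 * rng.standard_normal(x.shape)
        xs.append(x + h * f(x) + noise)
    return np.array(xs)

# An ensemble of randomised trajectories for x' = -x quantifies the
# discretisation uncertainty around the deterministic Euler solution.
paths = np.array([
    randomised_euler(lambda x: -x, [1.0], 0.1, 10, rng=np.random.default_rng(s))[-1]
    for s in range(50)
])
print("ensemble mean at t = 1:", paths.mean(), "spread:", paths.std())
```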
Variational Bayesian inference for nonlinear Hawkes process with Gaussian process self-effects
(2022)
Traditionally, Hawkes processes are used to model time-continuous point processes with history dependence. Here, we propose an extended model where the self-effects are of both excitatory and inhibitory types and follow a Gaussian Process. Whereas previous work either relies on a less flexible parameterization of the model, or requires a large amount of data, our formulation allows for both a flexible model and learning when data are scarce. We continue the line of work of Bayesian inference for Hawkes processes, and derive an inference algorithm by performing inference on an aggregated sum of Gaussian Processes. Approximate Bayesian inference is achieved via data augmentation, and we describe a mean-field variational inference approach to learn the model parameters. To demonstrate the flexibility of the model we apply our methodology on data from different domains and compare it to previously reported results.
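A minimal sketch of a nonlinear Hawkes intensity with a sign-varying self-effect (the fixed kernel below merely stands in for a Gaussian-process draw, and all parameter values are illustrative):

```python
import numpy as np

def intensity(t, events, mu=0.5, kernel=None):
    """Nonlinear Hawkes intensity: softplus(mu + sum_i phi(t - t_i)).

    The self-effect phi plays the role of the Gaussian-process draw in
    the paper; here it is a fixed illustrative function, excitatory at
    short lags and inhibitory at longer ones.
    """
    phi = kernel or (lambda s: 1.5 * np.exp(-3 * s) - 0.5 * np.exp(-0.5 * s))
    lags = t - np.asarray(events)
    total = mu + phi(lags[lags > 0]).sum()
    return np.log1p(np.exp(total))  # softplus keeps the rate non-negative

events = [1.0, 1.2, 2.5]
for t in (1.3, 3.0, 6.0):
    print(f"lambda({t}) = {intensity(t, events):.3f}")
```

The softplus link is one standard way to keep the intensity non-negative while allowing inhibitory self-effects; the rate is elevated just after events and suppressed at longer lags.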
Model uncertainty quantification is an essential component of effective data assimilation. Model errors associated with sub-grid scale processes are often represented through stochastic parameterizations of the unresolved process. Many existing stochastic parameterization schemes are only applicable when knowledge of the true sub-grid scale process or full observations of the coarse scale process are available, which is typically not the case in real applications. We present a methodology for estimating the statistics of sub-grid scale processes for the more realistic case that only partial observations of the coarse scale process are available. Model error realizations are estimated over a training period by minimizing their conditional sum of squared deviations given some informative covariates (e.g., state of the system), constrained by available observations and assuming that the observation errors are smaller than the model errors. From these realizations a conditional probability distribution of additive model errors given these covariates is obtained, allowing for complex non-Gaussian error structures. Random draws from this density are then used in actual ensemble data assimilation experiments. We demonstrate the efficacy of the approach through numerical experiments with the multi-scale Lorenz 96 system using both small and large time scale separations between slow (coarse scale) and fast (fine scale) variables. The resulting error estimates and forecasts obtained with this new method are superior to those from two existing methods.
We present a technique for the enumeration of all isotopically distinct ways of tiling a hyperbolic surface of finite genus, possibly nonorientable and with punctures and boundary. This generalizes the enumeration using Delaney-Dress combinatorial tiling theory of combinatorial classes of tilings to isotopy classes of tilings. To accomplish this, we derive an action of the mapping class group of the orbifold associated to the symmetry group of a tiling on the set of tilings. We explicitly give descriptions and presentations of semipure mapping class groups and of tilings as decorations on orbifolds. We apply this enumerative result to generate an array of isotopically distinct tilings of the hyperbolic plane with symmetries generated by rotations that are commensurate with the three-dimensional symmetries of the primitive, diamond, and gyroid triply periodic minimal surfaces, which have relevance to a variety of physical systems.
The motivation for this work was the question of reliability and robustness of seismic tomography. The problem is that many earth models exist which can describe the underlying ground motion records equally well. Most algorithms for reconstructing earth models provide a solution, but rarely quantify their variability. If there is no way to verify the imaged structures, an interpretation is hardly reliable. The initial idea was to explore the space of equivalent earth models using Bayesian inference. However, it quickly became apparent that the rigorous quantification of tomographic uncertainties could not be accomplished within the scope of a dissertation.
In order to maintain the fundamental concept of statistical inference, less complex problems from the geosciences are treated instead. This dissertation aims to anchor Bayesian inference more deeply in the geosciences and to transfer knowledge from applied mathematics. The underlying idea is to use well-known methods and techniques from statistics to quantify the uncertainties of inverse problems in the geosciences. This work is divided into three parts:
Part I introduces the necessary mathematics and should be understood as a kind of toolbox. With a physical application in mind, this section provides a compact summary of all methods and techniques used. It begins with an introduction to Bayesian inference. Then, as a special case, the focus is on regression with Gaussian processes under linear transformations. The derivation of covariance functions and the approximation of non-linearities are discussed in more detail.
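As a pointer to the kind of tool collected in Part I, here is a minimal Gaussian-process regression sketch with a squared-exponential covariance (illustrative only; the thesis treats the more general case of observations under linear transformations, which this sketch omits):

```python
import numpy as np

def rbf(a, b, ell=1.0, var=1.0):
    """Squared-exponential (RBF) covariance function on 1D inputs."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-2):
    """Posterior mean and pointwise variance of GP regression."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    Kss = rbf(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

x = np.array([0.0, 1.0, 2.0])
y = np.sin(x)
m, v = gp_posterior(x, y, np.array([0.5, 3.0]))
# Posterior variance is small near the data (x = 0.5) and grows away
# from it (x = 3.0) -- the mechanism behind the uncertainty maps in
# Parts II and III.
print("means:", np.round(m, 3), "variances:", np.round(v, 3))
```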
Part II presents two proof-of-concept studies in the field of seismology. The aim is to present the conceptual application of the introduced methods and techniques at moderate complexity. The example of traveltime tomography applies the approximation of non-linear relationships. The derivation of a covariance function using the wave equation is shown in the example of a damped vibrating string. With these two synthetic applications, a consistent concept for the quantification of modeling uncertainties has been developed.
Part III presents the reconstruction of the Earth's archeomagnetic field. This application uses the whole toolbox presented in Part I and is correspondingly complex. The modeling of the past 1000 years is based on real data and reliably quantifies the spatial modeling uncertainties. The statistical model presented is widely used and is under active development.
The three applications mentioned are intentionally kept flexible to allow transferability to similar problems. The entire work focuses on the non-uniqueness of inverse problems in the geosciences. It is intended to be of relevance to those interested in the concepts of Bayesian inference.
In this work, we present Raman lidar data (from a Nd:YAG laser operating at 355 nm, 532 nm and 1064 nm) from the international research village Ny-Ålesund for the period of January to April 2020, the Arctic haze season of the MOSAiC winter. We present values of the aerosol backscatter, the lidar ratio and the backscatter Ångström exponent, the latter depending on wavelength. The aerosol polarization was generally below 2%, indicating mostly spherical particles. We observed that events with high backscatter and high lidar ratio did not coincide. In fact, the highest lidar ratios (LR > 75 sr at 532 nm) were already found in January and may have been caused by hygroscopic growth rather than by advection of more continental aerosol. Further, we performed an inversion of the lidar data to retrieve a refractive index and a size distribution of the aerosol. Our results suggest that in the free troposphere (above approximately 2500 m) the aerosol size distribution is quite constant in time, dominated by small particles with a modal radius well below 100 nm. On the contrary, below approximately 2000 m in altitude, we frequently found gradients in aerosol backscatter and even size distribution, sometimes in accordance with gradients of wind speed, humidity or elevated temperature inversions, as if the aerosol were strongly modified by vertical displacement in what we call the "mechanical boundary layer". Finally, we present an indication that additional meteorological soundings during the MOSAiC campaign did not necessarily improve the fidelity of air back-trajectories.
The Levenberg–Marquardt regularization for the backward heat equation with fractional derivative
(2022)
The backward heat problem with a time-fractional derivative in the Caputo sense is studied. The inverse problem is severely ill-posed when the fractional order is close to unity. A Levenberg-Marquardt method with a new a posteriori stopping rule is investigated. We show that the proposed method attains the optimal order under a Hölder-type source condition. Numerical examples in one and two dimensions are provided.
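The combination of a Levenberg-Marquardt iteration with an a posteriori (discrepancy-principle) stopping rule can be illustrated on a toy one-dimensional nonlinear least-squares problem. This is a generic sketch, not the paper's fractional-derivative setting; the function names, damping schedule, and parameter values are illustrative assumptions:

```python
def levenberg_marquardt_1d(f, df, y_delta, delta, x0, alpha=1.0, tau=1.1, max_iter=100):
    """One-dimensional Levenberg-Marquardt iteration for f(x) = y with noisy
    data y_delta (noise level delta), stopped a posteriori by the discrepancy
    principle: iterate only while |y_delta - f(x)| > tau * delta."""
    x = x0
    for _ in range(max_iter):
        r = y_delta - f(x)
        if abs(r) <= tau * delta:
            break                           # discrepancy principle: stop here
        J = df(x)
        x = x + J * r / (J * J + alpha)     # regularized Gauss-Newton step
        alpha *= 0.5                        # relax the damping as we proceed
    return x
```

Stopping when the residual reaches the noise level (rather than iterating to convergence) is what regularizes the ill-posed problem: further iterations would only fit the noise.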
We construct and examine the prototype of a deep learning-based ground-motion model (GMM) that is both fully data driven and nonergodic. We formulate ground-motion modeling as an image processing task, in which a specific type of neural network, the U-Net, relates continuous, horizontal maps of earthquake predictive parameters to sparse observations of a ground-motion intensity measure (IM). The processing of map-shaped data allows the natural incorporation of absolute earthquake source and observation site coordinates, and is, therefore, well suited to include site-, source-, and path-specific amplification effects in a nonergodic GMM. Data-driven interpolation of the IM between observation points is an inherent feature of the U-Net and requires no a priori assumptions. We evaluate our model using both a synthetic dataset and a subset of observations from the KiK-net strong motion network in the Kanto basin in Japan. We find that the U-Net model is capable of learning the magnitude-distance scaling, as well as site-, source-, and path-specific amplification effects from a strong motion dataset. The interpolation scheme is evaluated using a fivefold cross validation and is found to provide on average unbiased predictions. The magnitude-distance scaling as well as the site amplification of response spectral acceleration at a period of 1 s obtained for the Kanto basin are comparable to previous regional studies.
Hidden semi-Markov models generalise hidden Markov models by explicitly modelling the time spent in a given state, the so-called dwell time, using some distribution defined on the natural numbers. While the (shifted) Poisson and negative binomial distribution provide natural choices for such distributions, in practice, parametric distributions can lack the flexibility to adequately model the dwell times. To overcome this problem, a penalised maximum likelihood approach is proposed that allows for a flexible and data-driven estimation of the dwell-time distributions without the need to make any distributional assumption. This approach is suitable for direct modelling purposes or as an exploratory tool to investigate the latent state dynamics. The feasibility and potential of the suggested approach are illustrated in a simulation study and by modelling muskox movements in northeast Greenland using GPS tracking data. The proposed method is implemented in the R-package PHSMM which is available on CRAN.
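The core idea of penalised nonparametric dwell-time estimation, i.e. a likelihood fit term plus a roughness penalty on the unconstrained dwell-time pmf, can be sketched as follows. This is a minimal illustration of the general principle, not the PHSMM implementation; the function name, the second-order difference penalty, and the inputs are illustrative assumptions:

```python
import math

def penalised_neg_loglik(pmf, dwell_counts, lam):
    """Penalised negative log-likelihood for a nonparametric dwell-time pmf:
    the fit term rewards matching the observed dwell-time counts, while a
    second-order difference penalty (weight lam) enforces smoothness."""
    nll = -sum(c * math.log(pmf[k]) for k, c in enumerate(dwell_counts) if c > 0)
    penalty = sum((pmf[k + 2] - 2.0 * pmf[k + 1] + pmf[k]) ** 2
                  for k in range(len(pmf) - 2))
    return nll + lam * penalty
```

Minimizing this objective over all probability vectors (subject to non-negativity and summing to one) yields a smooth, data-driven dwell-time distribution without committing to a parametric family; the weight `lam` trades off fit against smoothness.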
Ground motion with strong-velocity pulses can cause significant damage to buildings and structures at certain periods; hence, knowing the period and velocity amplitude of such pulses is critical for earthquake structural engineering.
However, the physical factors relating the scaling of pulse periods with magnitude are poorly understood.
In this study, we investigate moderate but damaging earthquakes (Mw 6-7) and characterize ground-motion pulses using the method of Shahi and Baker (2014) while considering the potential static-offset effects.
We confirm that the within-event variability of the pulses is large. The identified pulses in this study are mostly from strike-slip-like earthquakes. We further perform simulations using the frequency-wavenumber algorithm to investigate the causes of the variability of the pulse periods within and between events for moderate strike-slip earthquakes.
We test the effect of fault dips, and the impact of the asperity locations and sizes. The simulations reveal that the asperity properties have a high impact on the pulse periods and amplitudes at nearby stations.
Our results emphasize the importance of asperity characteristics, in addition to earthquake magnitudes for the occurrence and properties of pulses produced by the forward directivity effect.
We finally quantify and discuss within- and between-event variabilities of pulse properties at short distances.
Model-informed precision dosing (MIPD) is a quantitative dosing framework that combines prior knowledge on the drug-disease-patient system with patient data from therapeutic drug/biomarker monitoring (TDM) to support individualized dosing in ongoing treatment. Structural models and prior parameter distributions used in MIPD approaches typically build on prior clinical trials that involve only a limited number of patients selected according to some exclusion/inclusion criteria. Compared to the prior clinical trial population, the patient population in clinical practice can be expected to also include altered behavior and/or increased interindividual variability, the extent of which, however, is typically unknown. Here, we address the question of how to adapt and refine models on the level of the model parameters to better reflect this real-world diversity. We propose an approach for continued learning across patients during MIPD using a sequential hierarchical Bayesian framework. The approach builds on two stages to separate the update of the individual patient parameters from updating the population parameters. Consequently, it enables continued learning across hospitals or study centers, because only summary patient data (on the level of model parameters) need to be shared, but no individual TDM data. We illustrate this continued learning approach with neutrophil-guided dosing of paclitaxel. The present study constitutes an important step toward building confidence in MIPD and eventually establishing MIPD increasingly in everyday therapeutic use.
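The key property exploited by sequential Bayesian learning, namely that updating with data batch by batch gives the same posterior as updating with all data at once, can be shown with the simplest conjugate building block. This toy normal-normal update is only an illustration of the principle; the actual MIPD framework uses nonlinear mixed-effects models, and the function name and variances here are illustrative assumptions:

```python
def posterior_normal(prior_mean, prior_var, obs, obs_var):
    """Conjugate update of a normal prior N(prior_mean, prior_var) with
    independent normal observations of known variance obs_var."""
    n = len(obs)
    post_var = 1.0 / (1.0 / prior_var + n / obs_var)
    post_mean = post_var * (prior_mean / prior_var + sum(obs) / obs_var)
    return post_mean, post_var
```

Because yesterday's posterior can serve as today's prior, a center only needs to pass on the summary (mean, variance), not the raw patient data, which mirrors the sharing of parameter-level summaries described above.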
Ulcerative colitis (UC) is one of the inflammatory bowel diseases, and moderate to severe UC patients can be treated with anti-tumour necrosis factor alpha monoclonal antibodies, including infliximab (IFX). Even though treatment of UC patients with IFX has been in place for over a decade, many gaps in the modelling of IFX PK in this population remain. This is even more true for acute severe UC (ASUC) patients, for whom early prediction of IFX pharmacokinetics (PK) could greatly improve treatment outcome. Thus, this review aims to compile and analyse published population PK models of IFX in UC and ASUC patients, and to assess the current knowledge on the impact of disease activity on IFX PK. For this, a semi-systematic literature search was conducted, from which 26 publications including a population PK model analysis of UC patients receiving IFX therapy were selected. Amongst those, only four developed a model specifically for UC patients, and only three populations included severe UC patients. Investigations of the impact of disease activity on PK were reported for only 4 of the 14 models selected. In addition, the lack of reported model code and of assessments of predictive performance makes the use of published models in a clinical setting challenging. Thus, more comprehensive investigation of PK in UC and ASUC is needed, as well as more adequate reporting of developed models and their evaluation, in order to apply them in a clinical setting.
Background
Cytochrome P450 (CYP) 3A contributes to the metabolism of many approved drugs. CYP3A perpetrator drugs can profoundly alter the exposure of CYP3A substrates. However, effects of such drug-drug interactions are usually reported as maximum effects rather than studied as time-dependent processes. Identification of the time course of CYP3A modulation can provide insight into when significant changes to CYP3A activity occurs, help better design drug-drug interaction studies, and manage drug-drug interactions in clinical practice.
Objective
We aimed to quantify the time course and extent of the in vivo modulation of different CYP3A perpetrator drugs on hepatic CYP3A activity and distinguish different modulatory mechanisms by their time of onset, using pharmacologically inactive intravenous microgram doses of the CYP3A-specific substrate midazolam, as a marker of CYP3A activity.
Methods
Twenty-four healthy individuals received an intravenous midazolam bolus followed by a continuous infusion for 10 or 36 h. Individuals were randomized into four arms: within each arm, two individuals served as a placebo control and, 2 h after start of the midazolam infusion, four individuals received the CYP3A perpetrator drug: voriconazole (inhibitor, orally or intravenously), rifampicin (inducer, orally), or efavirenz (activator, orally). After midazolam bolus administration, blood samples were taken every hour (rifampicin arm) or every 15 min (remaining study arms) until the end of midazolam infusion. A total of 1858 concentrations were equally divided between midazolam and its metabolite, 1'-hydroxymidazolam. A nonlinear mixed-effects population pharmacokinetic model of both compounds was developed using NONMEM. CYP3A activity modulation was quantified over time, as the relative change of midazolam clearance encountered by the perpetrator drug, compared to the corresponding clearance value in the placebo arm.
Results
Time course of CYP3A modulation and magnitude of maximum effect were identified for each perpetrator drug. While efavirenz CYP3A activation was relatively fast and short, reaching a maximum after approximately 2-3 h, the induction effect of rifampicin could only be observed after 22 h, with a maximum after approximately 28-30 h, followed by a steep drop to almost baseline within 1-2 h. In contrast, the inhibitory impact of both oral and intravenous voriconazole was prolonged, with a steady inhibition of CYP3A activity followed by a gradual increase in the inhibitory effect until the end of sampling at 8 h. Relative maximum clearance changes were +59.1%, +46.7%, -70.6%, and -61.1% for efavirenz, rifampicin, oral voriconazole, and intravenous voriconazole, respectively.
Conclusions
We could distinguish between different mechanisms of CYP3A modulation by the time of onset. Identification of the time at which clearance significantly changes, per perpetrator drug, can guide the design of an optimal sampling schedule for future drug-drug interaction studies. The impact of a short-term combination of different perpetrator drugs on the paradigm CYP3A substrate midazolam was characterized and can define combination intervals in which no relevant interaction is to be expected.
Alpine ecosystems on the Tibetan Plateau are being threatened by ongoing climate warming and intensified human activities. Ecological time-series obtained from sedimentary ancient DNA (sedaDNA) are essential for understanding past ecosystem and biodiversity dynamics on the Tibetan Plateau and their responses to climate change at a high taxonomic resolution. Hitherto, only a few, albeit promising, studies have been published on this topic. The potential and limitations of using sedaDNA on the Tibetan Plateau are not fully understood. Here, we (i) provide updated knowledge of and a brief introduction to the suitable archives, region-specific taphonomy, state-of-the-art methodologies, and research questions of sedaDNA on the Tibetan Plateau; (ii) review published and ongoing sedaDNA studies from the Tibetan Plateau; and (iii) give some recommendations for future sedaDNA study designs. Based on the current knowledge of taphonomy, we infer that deep glacial lakes with freshwater and high clay sediment input, such as those from the southern and southeastern Tibetan Plateau, may have a high potential for sedaDNA studies. Metabarcoding (for microorganisms and plants), metagenomics (for ecosystems), and hybridization capture (for prehistoric humans) are three primary sedaDNA approaches which have been successfully applied on the Tibetan Plateau, but their power is still limited by several technical issues, such as PCR bias and incompleteness of taxonomic reference databases. Setting up high-quality and open-access regional taxonomic reference databases for the Tibetan Plateau should be given priority in the future. To conclude, the archival, taphonomic, and methodological conditions of the Tibetan Plateau are favorable for performing sedaDNA studies. More research should be encouraged to address questions about long-term ecological dynamics at ecosystem scale and to bring the paleoecology of the Tibetan Plateau into a new era.
The objectives of this study were the identification in (morbidly) obese and nonobese patients of (i) the most appropriate body size descriptor for fosfomycin dose adjustments and (ii) the adequacy of the currently employed dosing regimens. Plasma and target site (interstitial fluid of subcutaneous adipose tissue) concentrations after fosfomycin administration (8 g) to 30 surgery patients (15 obese/15 nonobese) were obtained from a prospective clinical trial. After characterization of plasma and microdialysis-derived target site pharmacokinetics via population analysis, short-term infusions of fosfomycin 3 to 4 times daily were simulated. The adequacy of therapy was assessed by probability of pharmacokinetic/pharmacodynamic target attainment (PTA) analysis based on the unbound drug-related targets of %fT≥MIC (the fraction of time that unbound fosfomycin concentrations exceed the MIC during 24 h) of 70 and fAUC0-24h/MIC (the area under the concentration-time curve from 0 to 24 h for the unbound fraction of fosfomycin relative to the MIC) of 40.8 to 83.3. Lean body weight, fat mass, and creatinine clearance calculated via adjusted body weight (ABW) (CLCRCG_ABW) of all patients (body mass index [BMI] = 20.1 to 52.0 kg/m²) explained a considerable proportion of between-patient pharmacokinetic variability (up to 31.0% relative reduction). The steady-state unbound target site/plasma concentration ratio was 26.3% lower in (morbidly) obese than in nonobese patients. For infections with fosfomycin-susceptible pathogens (MIC ≤ 16 mg/L), intermittent "high-dosage" intravenous (i.v.) fosfomycin (8 g, three times daily) was sufficient to treat patients with a CLCRCG_ABW of <130 mL/min, irrespective of the pharmacokinetic/pharmacodynamic indices considered.
For infections by Pseudomonas aeruginosa with a MIC of 32 mg/L, when the index fAUC0-24h/MIC is applied, fosfomycin might represent a promising treatment option in obese and nonobese patients, especially in combination therapy to complement beta-lactams where carbapenem-resistant P. aeruginosa is a concern. In conclusion, fosfomycin showed excellent target site penetration in obese and nonobese patients. Dosing should be guided by renal function rather than obesity status.
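The %fT≥MIC index used above can be computed numerically for any dosing regimen once a PK model is in hand. The sketch below assumes a simple one-compartment model with repeated IV bolus dosing at steady state; this is a generic illustration with invented parameter values, not the trial's population model or fosfomycin's actual PK parameters:

```python
import math

def ft_above_mic(dose, CL, V, tau, mic, fu=1.0, dt=0.01):
    """Fraction of the dosing interval during which the unbound concentration
    exceeds the MIC (%fT>=MIC), for a one-compartment model with repeated
    IV bolus dosing at steady state. dose in mg, CL in L/h, V in L, tau in h."""
    ke = CL / V                                  # elimination rate constant (1/h)
    accumulation = 1.0 - math.exp(-ke * tau)     # steady-state accumulation factor
    t, above = 0.0, 0.0
    while t < tau:
        c = fu * (dose / V) * math.exp(-ke * t) / accumulation
        if c > mic:
            above += dt
        t += dt
    return above / tau
```

In a PTA analysis, this fraction would be evaluated across many simulated patients (sampling PK parameters from the population distribution) and compared against the target (e.g. 0.70) for each candidate MIC.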
The drug concentrations targeted in meropenem and piperacillin/tazobactam therapy also depend on the susceptibility of the pathogen. Yet, the pathogen is often unknown, and antibiotic therapy is guided by empirical targets. To reliably achieve the targeted concentrations, dosing needs to be adjusted for renal function. We aimed to evaluate a meropenem and piperacillin/tazobactam monitoring program in intensive care unit (ICU) patients by assessing (i) the adequacy of locally selected empirical targets, (ii) if dosing is adequately adjusted for renal function and individual target, and (iii) if dosing is adjusted in target attainment (TA) failure. In a prospective, observational clinical trial of drug concentrations, relevant patient characteristics and microbiological data (pathogen, minimum inhibitory concentration (MIC)) for patients receiving meropenem or piperacillin/tazobactam treatment were collected. If the MIC value was available, a target range of 1-5 x MIC was selected for minimum drug concentrations of both drugs. If the MIC value was not available, 8-40 mg/L and 16-80 mg/L were selected as empirical target ranges for meropenem and piperacillin, respectively. A total of 356 meropenem and 216 piperacillin samples were collected from 108 and 96 ICU patients, respectively. The vast majority of observed MIC values was lower than the empirical target (meropenem: 90.0%, piperacillin: 93.9%), suggesting empirical target value reductions. TA was found to be low (meropenem: 35.7%, piperacillin 50.5%) with the lowest TA for severely impaired renal function (meropenem: 13.9%, piperacillin: 29.2%), and observed drug concentrations did not significantly differ between patients with different targets, indicating dosing was not adequately adjusted for renal function or target. 
Dosing adjustments were rare for both drugs (meropenem: 6.13%, piperacillin: 4.78%) and, for meropenem, occurred irrespective of TA, revealing that concentration monitoring alone was insufficient to guide dosing adjustment. Empirical targets should regularly be assessed and adjusted based on local susceptibility data. To improve TA, scientific knowledge should be translated into easy-to-use dosing strategies guiding antibiotic dosing.
Congenital adrenal hyperplasia (CAH) is the most common form of adrenal insufficiency in childhood; it requires cortisol replacement therapy with hydrocortisone (HC, synthetic cortisol) from birth and therapy monitoring for successful treatment. In children, the less invasive dried blood spot (DBS) sampling with whole blood including red blood cells (RBCs) provides an advantageous alternative to plasma sampling.
Potential differences in binding/association processes between plasma and DBS however need to be considered to correctly interpret DBS measurements for therapy monitoring. While capillary DBS samples would be used in clinical practice, venous cortisol DBS samples from children with adrenal insufficiency were analyzed due to data availability and to directly compare and thus understand potential differences between venous DBS and plasma. A previously published HC plasma pharmacokinetic (PK) model was extended by leveraging these DBS concentrations.
In addition to the previously characterized binding of cortisol to albumin (linear process) and corticosteroid-binding globulin (CBG; saturable process), the DBS data enabled the characterization of a linear cortisol association with RBCs, thereby providing a quantitative link between DBS and plasma cortisol concentrations. The ratio between the observed cortisol plasma and DBS concentrations varies widely, from 2 to 8. Deterministic simulations of the different cortisol binding/association fractions demonstrated that with higher blood cortisol concentrations, cortisol binding to CBG saturates, leading to an increase in all other cortisol binding fractions.
In conclusion, a mathematical PK model was developed which links DBS measurements to plasma exposure and thus allows for quantitative interpretation of measurements of DBS samples.
The spatio-temporal epidemic type aftershock sequence (ETAS) model is widely used to describe the self-exciting nature of earthquake occurrences. While traditional inference methods provide only point estimates of the model parameters, we aim at a fully Bayesian treatment of model inference, which naturally allows incorporating prior knowledge and quantifying the uncertainty of the resulting estimates. To this end, we introduce a highly flexible, non-parametric representation of the spatially varying ETAS background intensity through a Gaussian process (GP) prior. Combined with classical triggering functions, this results in a new model formulation, namely the GP-ETAS model. We enable tractable and efficient Gibbs sampling by deriving an augmented form of the GP-ETAS inference problem. This novel sampling approach allows us to assess the posterior model variables conditioned on observed earthquake catalogues, i.e., the spatial background intensity and the parameters of the triggering function. Empirical results on two synthetic data sets indicate that GP-ETAS outperforms standard models and demonstrate its predictive power for observed earthquake catalogues, including uncertainty quantification of the estimated parameters. Finally, a case study for the L'Aquila region, Italy, with the devastating event of 6 April 2009, is presented.
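The classical ETAS conditional intensity that GP-ETAS builds on can be written down compactly. The sketch below uses a standard parameterization with an Omori-Utsu temporal kernel and a power-law spatial kernel; the function name and the specific kernel forms are common textbook choices, assumed here for illustration rather than taken from the paper:

```python
import math

def etas_intensity(t, x, y, catalog, mu, K, alpha, c, p, d, q, m0):
    """Conditional intensity of a spatio-temporal ETAS model:
    lambda(t,x,y) = mu + sum over past events of
    K*exp(alpha*(m_i - m0)) * (t - t_i + c)^(-p) * (r_i^2 + d)^(-q).
    Here mu is a constant background rate; GP-ETAS replaces it with a
    spatially varying rate mu(x,y) given a Gaussian-process prior."""
    lam = mu
    for ti, xi, yi, mi in catalog:
        if ti >= t:
            continue                           # only past events trigger
        productivity = K * math.exp(alpha * (mi - m0))
        omori = (t - ti + c) ** (-p)           # temporal decay (Omori-Utsu)
        r2 = (x - xi) ** 2 + (y - yi) ** 2
        spatial = (r2 + d) ** (-q)             # power-law spatial kernel
        lam += productivity * omori * spatial
    return lam
```

In the Bayesian treatment, the triggering parameters (K, alpha, c, p, d, q) and the background intensity are sampled jointly from the posterior given an observed catalogue, rather than fixed at point estimates.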
Let X be an infinite linearly ordered set and let Y be a nonempty subset of X. We calculate the relative rank of the semigroup OP(X,Y) of all orientation-preserving transformations on X with restricted range Y modulo the semigroup O(X,Y) of all order-preserving transformations on X with restricted range Y. For Y = X, we characterize the relative generating sets of minimal size.
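For readers less familiar with the terminology, the relative rank used in this abstract is the standard notion from semigroup theory; a sketch of the definition (the notation is assumed, not quoted from the paper):

```latex
% Relative rank of a semigroup S modulo a subset T:
% the least number of extra generators needed, together with T, to generate S.
\operatorname{rank}(S : T) = \min \bigl\{ |A| : A \subseteq S,\ \langle T \cup A \rangle = S \bigr\}
% In the abstract: S = OP(X,Y) and T = O(X,Y).
```

A relative generating set of minimal size is then any such A attaining the minimum.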
We study superharmonic functions for Schrödinger operators on general weighted graphs. Specifically, we prove two decompositions which both go under the name Riesz decomposition in the literature. The first decomposes a superharmonic function into a harmonic part and a potential part. The second decomposes a superharmonic function into a sum of superharmonic functions with certain upper bounds given by prescribed superharmonic functions. As an application, we show a Brelot-type theorem.
We adapt the Faddeev-LeVerrier algorithm for the computation of characteristic polynomials to the computation of the Pfaffian of a skew-symmetric matrix. This yields a very simple, easy to implement and parallelize algorithm of computational cost O(n^(β+1)), where n is the size of the matrix and O(n^β) is the cost of multiplying n × n matrices, β ∈ [2, 2.37286). We compare its performance to that of other algorithms and show how it can be used to compute the Euler form of a Riemannian manifold using computer algebra.
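For reference, the Pfaffian itself is easy to compute for small matrices by the textbook Laplace-type expansion along the first row. This is not the Faddeev-LeVerrier-based algorithm of the abstract (the expansion has factorial cost), but it serves as a correctness check for faster implementations:

```python
def pfaffian(A):
    """Pfaffian of a skew-symmetric matrix (list of lists) via Laplace-type
    expansion along the first row; exponential cost, so only sensible for
    small n, but handy as a reference implementation."""
    n = len(A)
    if n % 2 == 1:
        return 0.0      # odd-dimensional skew-symmetric matrices have pf = 0
    if n == 0:
        return 1.0
    if n == 2:
        return A[0][1]
    result = 0.0
    for j in range(1, n):
        keep = [k for k in range(n) if k not in (0, j)]
        minor = [[A[r][c] for c in keep] for r in keep]
        result += (-1) ** (j - 1) * A[0][j] * pfaffian(minor)
    return result
```

For n = 4 this reproduces the classical formula pf(A) = a12·a34 − a13·a24 + a14·a23, and pf(A)² equals det(A) for any even-dimensional skew-symmetric A.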
In this short survey article, we showcase a number of non-trivial geometric problems that have recently been resolved by marrying methods from functional calculus and real-variable harmonic analysis. We give a brief description of these methods as well as their interplay. This is a succinct survey that hopes to inspire geometers and analysts alike to study these methods so that they can be further developed to be potentially applied to a broader range of questions.
In the semiclassical limit ℏ → 0, we analyze a class of self-adjoint Schrödinger operators H_ℏ = ℏ²L + ℏW + V·id_E acting on sections of a vector bundle E over an oriented Riemannian manifold M, where L is a Laplace type operator, W is an endomorphism field and the potential energy V has non-degenerate minima at a finite number of points m_1, ..., m_r ∈ M, called potential wells. Using quasimodes of WKB-type near m_j for eigenfunctions associated with the low-lying eigenvalues of H_ℏ, we analyze the tunneling effect, i.e. the splitting between low-lying eigenvalues, which e.g. arises in certain symmetric configurations. Technically, we treat the coupling between different potential wells by an interaction matrix, and we consider the case of a single minimal geodesic (with respect to the associated Agmon metric) connecting two potential wells and the case of a submanifold of minimal geodesics of dimension ℓ + 1. This dimension ℓ determines the polynomial prefactor for exponentially small eigenvalue splitting.
The many facets of the angle concept are as fascinating as they are challenging with regard to its introduction in school mathematics. Starting from different conceptions of angles, this thesis develops a course of instruction for teaching the angle concept and ultimately translates it into concrete implementations for classroom use.
The thesis begins with a subject-matter didactical analysis of the angle concept, accompanied by an information-theoretic definition of angles. There, a definition of the angle concept is developed around the question of which information about an angle is needed in order to describe it. In this way, the conceptions of angles found in the mathematics education literature can be re-derived and validated from a mathematical perspective. In parallel, a procedure is described for processing angles computationally, including dynamic aspects, so that consequences of the information-theoretic angle definition become available, for example, in dynamic geometry systems.
With a view to how the angle concept can be abstracted in mathematics teaching, the idea of Grundvorstellungen (basic mental models) and the teaching strategy of ascending from the abstract to the concrete are related to one another. From the combination of the two theories, a general path is derived for building, within this teaching strategy, an initial abstraction of individual angle aspects, which is intended to enable the formation of basic mental models of the components of the respective angle aspect and of operating with these components. To this end, the teaching strategy is adapted, in particular to realize the transition from angle situations to angle contexts. Explicitly for the aspect of the angle field, learning actions and requirements for a learning model that support students in acquiring the concept are described, based on an investigation of the fields of vision of animals.
Activity theory, to which the above teaching strategy belongs, runs like a thread through the remainder of the thesis, as design principles are now generated on a theoretical basis and lead into the development of an interactive learning environment. For this purpose, among other things, the model of Artifact-Centric Activity Theory is used, which describes the web of relations between students, the mathematical object, and an app to be developed as a mediating medium, with the classroom use of the app and its rule-guided development being part of the model. Following the approach of didactical design research, the learning environment is then tested, evaluated, and revised in several cycles. A qualitative setting is applied that draws on semiotic mediation and investigates the extent to which the quality of the learning actions shown by the students can be explained by the design principles and their implementation. The thesis concludes with a final version of the design principles and a resulting learning environment for introducing the concept of the angle field in grade four.
This thesis focuses on the study of marked Gibbs point processes, in particular presenting some results on their existence and uniqueness, with ideas and techniques drawn from different areas of statistical mechanics: the entropy method from large deviations theory, cluster expansion and the Kirkwood--Salsburg equations, the Dobrushin contraction principle and disagreement percolation.
We first present an existence result for infinite-volume marked Gibbs point processes. More precisely, we use the so-called entropy method (and large-deviation tools) to construct marked Gibbs point processes in R^d under quite general assumptions. In particular, the random marks belong to a general normed space S and are not bounded. Moreover, we allow for interaction functionals that may be unbounded and whose range is finite but random. The entropy method relies on showing that a family of finite-volume Gibbs point processes belongs to sequentially compact entropy level sets, and is therefore tight.
We then present infinite-dimensional Langevin diffusions, which we put in interaction via a Gibbsian description. In this setting, we are able to adapt the general result above to show the existence of the associated infinite-volume measure. We also study its correlation functions via cluster expansion techniques and obtain uniqueness of the Gibbs process for all inverse temperatures β and activities z below a certain threshold. This method relies on first showing that the correlation functions of the process satisfy a so-called Ruelle bound, and then using it to solve a fixed point problem in an appropriate Banach space. The uniqueness domain we obtain then consists of the model parameters z and β for which this problem has exactly one solution.
Finally, we explore further the question of uniqueness of infinite-volume Gibbs point processes on R^d, in the unmarked setting. We present, in the context of repulsive interactions with a hard-core component, a novel approach to uniqueness by applying the discrete Dobrushin criterion to the continuum framework. We first fix a discretisation parameter a>0 and then study the behaviour of the uniqueness domain as a goes to 0. With this technique we are able to obtain explicit thresholds for the parameters z and β, which we then compare to existing results coming from the different methods of cluster expansion and disagreement percolation.
Throughout this thesis, we illustrate our theoretical results with various examples both from classical statistical mechanics and stochastic geometry.
The propagation of test fields, such as electromagnetic, Dirac or linearized gravity, on a fixed spacetime manifold is often studied by using the geometrical optics approximation. In the limit of infinitely high frequencies, the geometrical optics approximation provides a conceptual transition between the test field and an effective point-particle description. The corresponding point-particles, or wave rays, coincide with the geodesics of the underlying spacetime. For most astrophysical applications of interest, such as the observation of celestial bodies, gravitational lensing, or the observation of cosmic rays, the geometrical optics approximation and the effective point-particle description represent a satisfactory theoretical model. However, the geometrical optics approximation gradually breaks down as test fields of finite frequency are considered.
In this thesis, we consider the propagation of test fields on spacetime, beyond the leading-order geometrical optics approximation. By performing a covariant Wentzel-Kramers-Brillouin analysis for test fields, we show how higher-order corrections to the geometrical optics approximation can be considered. The higher-order corrections are related to the dynamics of the spin internal degree of freedom of the considered test field. We obtain an effective point-particle description, which contains spin-dependent corrections to the geodesic motion obtained using geometrical optics. This represents a covariant generalization of the well-known spin Hall effect, usually encountered in condensed matter physics and in optics. Our analysis is applied to electromagnetic and massive Dirac test fields, but it can easily be extended to other fields, such as linearized gravity. In the electromagnetic case, we present several examples where the gravitational spin Hall effect of light plays an important role. These include the propagation of polarized light rays on black hole spacetimes and cosmological spacetimes, as well as polarization-dependent effects on the shape of black hole shadows. Furthermore, we show that our effective point-particle equations for polarized light rays reproduce well-known results, such as the spin Hall effect of light in an inhomogeneous medium, and the relativistic Hall effect of polarized electromagnetic wave packets encountered in Minkowski spacetime.
Geomagnetic field modeling using spherical harmonics requires the inversion for hundreds to thousands of parameters. This large-scale problem can always be formulated as an optimization problem, where a global minimum of a certain cost function has to be calculated. A variety of approaches is known for solving this inverse problem, e.g. derivative-based methods or least-squares methods and their variants. Each of these methods has its own advantages and disadvantages, which affect, for example, the applicability to non-differentiable functions or the runtime of the corresponding algorithm.
In this work, we pursue the goal of finding an algorithm that is faster than the established methods and applicable to non-linear problems. Such non-linear problems occur, for example, when estimating Euler angles or when the more robust L_1 norm is applied. We therefore investigate the usability of stochastic optimization methods from the CMA-ES family for modeling the geomagnetic field of Earth's core. On the one hand, the basics of core field modeling and its parameterization are discussed using examples from the literature; on the other hand, the theoretical background of the stochastic methods is provided. A specific CMA-ES algorithm was successfully applied to invert data from the Swarm satellite mission and to derive the core field model EvoMag. EvoMag agrees well with established models and with observatory data from Niemegk. Finally, we present some observed difficulties and discuss the results of our model.
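CMA-ES itself adapts a full covariance matrix and controls the step size via evolution paths; as a hedged illustration of the derivative-free sample-evaluate-adapt loop it builds on, here is a minimal (mu, lambda) evolution strategy applied to a robust, non-differentiable L1 cost of the kind mentioned above (the function names and the toy cost are our own, not part of the EvoMag pipeline):

```python
import random

def evolution_strategy(cost, x0, sigma=1.0, lam=12, mu=3, iters=200, seed=1):
    """Minimal (mu, lambda) evolution strategy: sample lam offspring around
    the current mean, keep the best mu, recombine them into a new mean, and
    shrink the step size. CMA-ES additionally adapts a full covariance
    matrix and steers sigma via evolution paths."""
    rng = random.Random(seed)
    mean = list(x0)
    for _ in range(iters):
        offspring = []
        for _ in range(lam):
            cand = [m + sigma * rng.gauss(0.0, 1.0) for m in mean]
            offspring.append((cost(cand), cand))
        offspring.sort(key=lambda t: t[0])
        elite = [c for _, c in offspring[:mu]]
        mean = [sum(xs) / mu for xs in zip(*elite)]
        sigma *= 0.97  # crude decay instead of CMA's path-based step-size control
    return mean

# Derivative-free search also handles a robust, non-differentiable L1 cost:
l1_cost = lambda x: abs(x[0] - 3.0) + abs(x[1] + 1.0)
x = evolution_strategy(l1_cost, [0.0, 0.0])
```

Because only cost values are compared, the same loop works unchanged for differentiable and non-differentiable objectives, which is the property exploited above.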
The Rarita-Schwinger operator is the twisted Dirac operator restricted to 3/2-spinors. Rarita-Schwinger fields are solutions of this operator which are in addition divergence-free. This is an overdetermined problem, and solutions are rare; it is even more unexpected for there to be large-dimensional spaces of solutions. In this paper we prove the existence of a sequence of compact manifolds in any given dimension greater than or equal to 4 for which the dimension of the space of Rarita-Schwinger fields tends to infinity. These manifolds are either simply connected Kähler-Einstein spin manifolds with negative Einstein constant, or products of such spaces with flat tori. Moreover, we construct Calabi-Yau manifolds of even complex dimension with more linearly independent Rarita-Schwinger fields than flat tori of the same dimension.
Contributions to the theoretical analysis of algorithms with adversarial and dependent data
(2021)
In this work I present concentration inequalities of Bernstein type for the norms of Banach-valued random sums under a general functional weak-dependence assumption (so-called C-mixing). The latter is then used to prove, in the asymptotic framework, excess-risk upper bounds for regularised Hilbert-valued statistical learning rules under a τ-mixing assumption on the underlying training sample. These results for the batch statistical setting are then supplemented with a regret analysis, over classes of Sobolev balls, of a kernel ridge regression-type algorithm in the setting of online nonparametric regression with arbitrary data sequences. Here, in particular, the robustness of the kernel-based forecaster is investigated. Afterwards, in the framework of sequential learning, the multi-armed bandit problem under a C-mixing assumption on the arms' outputs is considered, and a complete regret analysis of a version of the Improved UCB algorithm is given. Lastly, the probabilistic inequalities of the first part are extended to deviations (both of Azuma-Hoeffding and of Burkholder type) of partial sums of real-valued weakly dependent random fields (under a projective-type dependence condition).
We present a supervised learning method to learn the propagator map of a dynamical system from partial and noisy observations. In our computationally cheap and easy-to-implement framework, a neural network consisting of random feature maps is trained sequentially by incoming observations within a data assimilation procedure. By employing Takens's embedding theorem, the network is trained on delay coordinates. We show that the combination of random feature maps and data assimilation, called RAFDA, outperforms standard random feature maps for which the dynamics is learned using batch data.
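The underlying idea of learning a propagator map with random feature maps can be sketched as follows: lift states through a fixed random nonlinear feature map and train only the linear output layer. The sketch below does this in batch by ridge regression on a toy logistic map; RAFDA replaces this batch fit with sequential EnKF-based training on noisy observations. All names and parameter values here are illustrative choices of ours:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dynamical system: the logistic map x_{n+1} = a x_n (1 - x_n).
a = 3.7
def step(x): return a * x * (1 - x)

# Generate a trajectory and lift states through a fixed random feature map
# phi(x) = tanh(w x + b); only the linear output layer W is trained (ridge
# regression here; RAFDA instead estimates W sequentially within an EnKF).
N, D = 500, 200
traj = np.empty(N + 1); traj[0] = 0.2
for n in range(N):
    traj[n + 1] = step(traj[n])

w = rng.uniform(-2, 2, size=D)
b = rng.uniform(-2, 2, size=D)
phi = lambda x: np.tanh(np.outer(np.atleast_1d(x), w) + b)   # (batch, D)

X, Y = phi(traj[:-1]), traj[1:]
beta = 1e-6                                                  # ridge parameter
W = np.linalg.solve(X.T @ X + beta * np.eye(D), X.T @ Y)

pred = phi(np.array([0.4])) @ W   # one-step forecast from the learned surrogate
```

Because the random weights w, b stay fixed, training reduces to a linear least-squares problem, which is what makes the approach computationally cheap.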
We provide an overview of the tools and techniques of resurgence theory used in the Borel-ecalle resummation method, which we then apply to the massless Wess-Zumino model. Starting from already known results on the anomalous dimension of the Wess-Zumino model, we solve its renormalisation group equation for the two-point function in a space of formal series. We show that this solution is 1-Gevrey and that its Borel transform is resurgent. The Schwinger-Dyson equation of the model is then used to prove an asymptotic exponential bound for the Borel transformed two-point function on a star-shaped domain of a suitable ramified complex plane. This proves that the two-point function of the Wess-Zumino model is Borel-ecalle summable.
Games and game-typical elements, such as collecting loyalty points, are an integral part of everyday life. They are also increasingly used in companies and in learning environments. However, the method of gamification has so far hardly been classified for the educational context and has barely been made accessible to teachers.
This bachelor's thesis therefore aims to present a systematic structuring and treatment of gamification as well as innovative approaches for using game-typical elements in teaching, specifically in mathematics teaching. This can provide a basis for other subject areas as well as for other forms of teaching, and thus demonstrate the feasibility of gamification in one's own courses.
The thesis explains why, and by means of which elements, gamification can increase learners' motivation and willingness to perform in the long term, foster social and personal competencies, and encourage learners to become more active. In addition, gamification is explicitly related to fundamental principles of mathematics didactics, highlighting its relevance for mathematics teaching.
Subsequently, the individual elements of gamification, such as points, levels, badges, characters, and a framing narrative, are described schematically along a classification developed specifically for the educational context, "FUN" (Feedback - User-specific elements - Neutral elements); their functions and effects are presented, and possible uses in the classroom are shown. This includes ideas on learning-conducive feedback, options for differentiation, and the design of the lesson frame, which can be implemented in courses of all kinds. The thesis also includes a specific example, a lesson plan for a gamified mathematics lesson together with the accompanying teaching material, which illustrates the use of gamification.
Gamification often offers advantages over traditional teaching but, like any method, must be adapted to the content and the target group. Further research could address concrete motivational structures, person-specific differences, and mathematical content such as problem solving or switching between different representations with regard to gamified forms of teaching.
Bienaymé-Galton-Watson processes can be used to study particular evolving populations. These populations consist of individuals that reproduce identically, randomly, autonomously, and independently of one another, each living for only one generation. The n-th generation arises as a random sum over the individuals of the (n-1)-th generation. The relevance of these processes is grounded in their history and in their significance both within and outside mathematics. The history of the Bienaymé-Galton-Watson processes is presented by tracing the development of the concept up to the present day, naming the scientists from various disciplines who contributed insights to the topic and introduced the concept into their fields; this establishes the extra-mathematical significance. The intra-mathematical significance, in turn, follows from the concept of branching processes, which goes back to the Bienaymé-Galton-Watson processes. Branching processes are among the most expressive models for describing population growth. Moreover, their current importance stems from the applicability of branching processes and Bienaymé-Galton-Watson processes in epidemiology: the Ebola and Corona pandemics are cited as fields of application. The processes serve as decision support for policy makers and allow statements about the effects of measures taken during the pandemics. Alongside the processes, the conditional expectation for discrete random variables, the probability generating function, and the random sum are introduced. These concepts simplify the description of the processes and thus form the basis of the considerations. In addition, the required and more advanced properties of the underlying topics and of the processes are stated and proved.
The chapter culminates in the proof of the criticality theorem, which makes a statement about the extinction of the process in the various cases and thus about the extinction probability. The cases are distinguished by the expected number of offspring of an individual. It turns out that a process with an expected offspring number less than or equal to one dies out with certainty, whereas for an expected number greater than one the population need not die out. Afterwards, individual examples are given, such as the linear fractional case, the population of fibroblasts (connective tissue cells) of mice, and the question from which the processes originated. These are examined using the results obtained, and some selected random dynamics are simulated in the following chapter. The simulations are carried out by a program written in Python and are realized by means of the inversion method. They exemplify the developments in the different criticality cases of the processes. Furthermore, the frequencies of the individual population sizes are presented as histograms. The difference between the individual cases is thereby confirmed, and the applicability of the Bienaymé-Galton-Watson processes to more complex problems becomes apparent. The histograms support the statement that each individual population size occurs only finitely often; this statement was raised by Galton and used in the extinction-explosion dichotomy. The presented findings on the topic and the examination of the concept conclude with a didactic analysis, which takes into account the fundamental ideas, the fundamental ideas of stochastics, and the guiding principle "data and chance".
It emerges that, depending on the chosen perspective, using the Bienaymé-Galton-Watson processes in schools is plausible and can benefit students. For this purpose, the framework curriculum for Berlin and Brandenburg is analyzed as an example and compared with the core curriculum of North Rhine-Westphalia. The design of the Berlin-Brandenburg curriculum does not support the conclusion that the Bienaymé-Galton-Watson processes should be used; the underlying guiding principle turns out not to be fully compatible with some of the fundamental ideas of stochastics. A modification of the curriculum towards a stronger orientation on the fundamental ideas would thus make the use of the processes possible. This assessment is supported by examining a North Rhine-Westphalian lesson plan for stochastic processes and transferring it to the Bienaymé-Galton-Watson processes. In addition, a concept map and a "Vernetzungspentagraph" (networking pentagraph) following von der Bank are designed to highlight this aspect.
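A simulation of the kind described above can be sketched in a few lines of Python, with offspring sampled by the inversion method; the concrete offspring distribution and run counts are our own toy choices, not those of the thesis:

```python
import random

def sample_offspring(pmf, u):
    """Inversion method: return the smallest k with F(k) >= u, where the
    offspring law is given as a list pmf[k] = P(offspring = k)."""
    cdf = 0.0
    for k, p in enumerate(pmf):
        cdf += p
        if u <= cdf:
            return k
    return len(pmf) - 1

def bgw_generations(pmf, n_gen, z0=1, rng=None):
    """Simulate Z_0, ..., Z_{n_gen} of a Bienaymé-Galton-Watson process:
    each individual reproduces independently with offspring law pmf."""
    rng = rng or random.Random(42)
    sizes = [z0]
    for _ in range(n_gen):
        # sum over an empty range is 0, so extinction is automatically absorbing
        sizes.append(sum(sample_offspring(pmf, rng.random())
                         for _ in range(sizes[-1])))
    return sizes

# Subcritical example: mean offspring m = 0*0.5 + 1*0.3 + 2*0.2 = 0.7 < 1,
# so by the criticality theorem the process dies out almost surely.
runs = [bgw_generations([0.5, 0.3, 0.2], 50, rng=random.Random(s))
        for s in range(200)]
extinct = sum(r[-1] == 0 for r in runs)
```

Changing the offspring law so that its mean exceeds one produces the supercritical case, in which a positive fraction of the simulated populations survives.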
For a closed, connected direct product Riemannian manifold (M, g) = (M_1, g_1) x ... x (M_l, g_l), we define its multiconformal class [[g]] as the totality {f_1^2 g_1 + ... + f_l^2 g_l} of all Riemannian metrics obtained from multiplying the metric g_i of each factor M_i by a positive function f_i on the total space M. A multiconformal class [[g]] contains not only all warped-product-type deformations of g but also the whole conformal class of every metric in [[g]]. In this article, we prove that [[g]] contains a metric of positive scalar curvature if and only if the conformal class of some factor (M_i, g_i) does, under the technical assumption dim M_i >= 2. We also show that, even in the case where every factor (M_i, g_i) has positive scalar curvature, [[g]] contains a metric of scalar curvature constantly equal to -1 and with arbitrarily large volume, provided l >= 2 and dim M >= 3.
Data-driven prediction and physics-agnostic machine-learning methods have attracted increased interest in recent years, achieving forecast horizons going well beyond those to be expected for chaotic dynamical systems. In a separate strand of research, data assimilation has been successfully used to optimally combine forecast models and their inherent uncertainty with incoming noisy observations. The key idea in our work here is to achieve increased forecast capabilities by judiciously combining machine-learning algorithms and data assimilation. We combine the physics-agnostic data-driven approach of random feature maps as a forecast model within an ensemble Kalman filter data assimilation procedure. The machine-learning model is learned sequentially by incorporating incoming noisy observations. We show that the obtained forecast model has remarkably good forecast skill while being computationally cheap once trained. Going beyond the task of forecasting, we show that our method can be used to generate reliable ensembles for probabilistic forecasting as well as to learn effective model closure in multi-scale systems.
In this paper, we bring together the worlds of model order reduction for stochastic linear systems and H2-optimal model order reduction for deterministic systems. In particular, we supplement and complete the theory of error bounds for model order reduction of stochastic differential equations. With these error bounds, we establish a link between the output error for stochastic systems (with additive and multiplicative noise) and modified versions of the H2-norm for both linear and bilinear deterministic systems. When deriving the respective optimality conditions for minimizing the error bounds, we see that model order reduction techniques related to the iterative rational Krylov algorithm (IRKA) are very natural and effective methods for reducing the dimension of large-scale stochastic systems with additive and/or multiplicative noise. We apply modified versions of (linear and bilinear) IRKA to stochastic linear systems and show their efficiency in numerical experiments.
Identification of unknown parameters on the basis of partial and noisy data is a challenging task, in particular in high dimensional and non-linear settings. Gaussian approximations to the problem, such as ensemble Kalman inversion, tend to be robust and computationally cheap and often produce astonishingly accurate estimations despite the simplifying underlying assumptions. Yet there is a lot of room for improvement, specifically regarding a correct approximation of a non-Gaussian posterior distribution. The tempered ensemble transform particle filter is an adaptive Sequential Monte Carlo (SMC) method, whereby resampling is based on optimal transport mapping. Unlike ensemble Kalman inversion, it does not require any assumptions regarding the posterior distribution and hence has shown to provide promising results for non-linear non-Gaussian inverse problems. However, the improved accuracy comes with the price of much higher computational complexity, and the method is not as robust as ensemble Kalman inversion in high dimensional problems. In this work, we add an entropy-inspired regularisation factor to the underlying optimal transport problem that allows the high computational cost to be considerably reduced via Sinkhorn iterations. Further, the robustness of the method is increased via an ensemble Kalman inversion proposal step before each update of the samples, which is also referred to as a hybrid approach. The promising performance of the introduced method is numerically verified by testing it on a steady-state single-phase Darcy flow model with two different permeability configurations. The results are compared to the output of ensemble Kalman inversion, and Markov chain Monte Carlo methods results are computed as a benchmark.
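The entropy-regularised optimal transport step can be sketched with plain Sinkhorn iterations; the following is a generic illustration (cost matrix, weights, and parameter values are our own choices, not the Darcy-flow setup of the paper):

```python
import numpy as np

def sinkhorn(C, a, b, eps=0.1, iters=500):
    """Entropy-regularised optimal transport via Sinkhorn iterations: find a
    coupling P with marginals a and b minimising <P, C> plus an entropic
    penalty of strength eps. Each iteration only rescales the rows and
    columns of the Gibbs kernel K = exp(-C/eps)."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

# Toy resampling problem: transport between two small uniform ensembles.
rng = np.random.default_rng(0)
n = 8
x, y = np.sort(rng.random(n)), np.sort(rng.random(n))
C = (x[:, None] - y[None, :]) ** 2          # squared-distance cost
a = b = np.full(n, 1.0 / n)                 # uniform ensemble weights
P = sinkhorn(C, a, b)
```

The entropic term is what turns the linear program of exact optimal transport into these cheap matrix-scaling iterations; smaller eps approaches the unregularised coupling at the price of slower convergence.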
We consider an initial value problem for Navier-Stokes type equations associated with the de Rham complex over R^n x [0, T], n >= 3, with a positive time T. We prove that the problem induces open injective mappings on the scales of specially constructed function spaces of Bochner-Sobolev type. In particular, the corresponding statement on the intersection of these classes gives an open mapping theorem for smooth solutions to the Navier-Stokes equations.
A characterization of the essential spectrum of Schrödinger operators on infinite graphs is derived involving the concept of R-limits. This concept, which was introduced previously for operators on N and Z^d as "right-limits", captures the behaviour of the operator at infinity. For graphs with sub-exponential growth rate, we show that each point in σ_ess(H) corresponds to a bounded generalized eigenfunction of a corresponding R-limit of H. If, additionally, the graph is of uniform sub-exponential growth, the converse inclusion also holds.
In a previous study, a new snapshot modeling concept for the archeomagnetic field was introduced (Mauerberger et al., 2020). By assuming a Gaussian process for the geomagnetic potential, a correlation-based algorithm was presented, which incorporates a closed-form spatial correlation function. This work extends the suggested modeling strategy to the temporal domain. A space-time correlation kernel is constructed from the tensor product of the closed-form spatial correlation kernel with a squared exponential kernel in time. Dating uncertainties are incorporated into the modeling concept using a noisy input Gaussian process. All but one of the modeling hyperparameters are marginalized to reduce their influence on the outcome and to translate their variability into the posterior variance. The resulting distribution incorporates uncertainties related to dating, measurement, and the modeling process. Results from application to archeomagnetic data show less variation in the dipole than comparable models but are in general agreement with previous findings.
Bayesian inference can be embedded into an appropriately defined dynamics in the space of probability measures. In this paper, we take Brownian motion and its associated Fokker-Planck equation as a starting point for such embeddings and explore several interacting particle approximations. More specifically, we consider both deterministic and stochastic interacting particle systems and combine them with the idea of preconditioning by the empirical covariance matrix. In addition to leading to affine invariant formulations which asymptotically speed up convergence, preconditioning allows for gradient-free implementations in the spirit of the ensemble Kalman filter. While such gradient-free implementations have been demonstrated to work well for posterior measures that are nearly Gaussian, we extend their scope of applicability to multimodal measures by introducing localized gradient-free approximations. Numerical results demonstrate the effectiveness of the considered methodologies.
In June 2018, after four years of cruise, the Japanese space probe Hayabusa2 [1: Watanabe S. et al., Hayabusa2 Mission Overview (2017)] reached the near-Earth asteroid (162173) Ryugu. Hayabusa2 carried a small lander named MASCOT (Mobile Asteroid Surface Scout) [2: Ho T. M. et al., MASCOT - The Mobile Asteroid Surface Scout onboard the Hayabusa2 mission (2017)], jointly developed by the German Aerospace Center (DLR) and the French space agency CNES, to investigate Ryugu's surface structure, composition, and physical properties, including its thermal behaviour and magnetization, in situ. The Microgravity User Support Centre (DLR-MUSC) in Cologne was in charge of providing all thermal conditions and constraints necessary for the selection of the final landing site and for the final operations of the lander MASCOT on the surface of the asteroid Ryugu. This article provides a comprehensive assessment of these thermal conditions and constraints, based on predictions performed with the Thermal Mathematical Model (TMM) of MASCOT using different asteroid surface thermal models, ephemeris data for the approach as well as descent and hopping trajectories, the related operation sequences and scenarios, and the possible environmental conditions driven by the Hayabusa2 spacecraft. A comparison with the real telemetry data confirms the analysis and provides further information about the asteroid's characteristics.
Data assimilation algorithms are used to estimate the states of a dynamical system using partial and noisy observations. The ensemble Kalman filter has become a popular data assimilation scheme due to its simplicity and robustness for a wide range of application areas. Nevertheless, this filter also has limitations due to its inherent assumptions of Gaussianity and linearity, which can manifest themselves in the form of dynamically inconsistent state estimates. This issue is investigated here for balanced, slowly evolving solutions to highly oscillatory Hamiltonian systems which are prototypical for applications in numerical weather prediction. It is demonstrated that the standard ensemble Kalman filter can lead to state estimates that do not satisfy the pertinent balance relations and ultimately lead to filter divergence. Two remedies are proposed, one in terms of blended asymptotically consistent time-stepping schemes, and one in terms of minimization-based postprocessing methods. The effects of these modifications to the standard ensemble Kalman filter are discussed and demonstrated numerically for balanced motions of two prototypical Hamiltonian reference systems.
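For reference, the analysis step of the stochastic (perturbed-observations) ensemble Kalman filter discussed above can be sketched as follows; the balance-aware modifications proposed in the article would act on top of such an update. The scalar sanity check is our own toy example:

```python
import numpy as np

def enkf_update(E, y, Hop, R, rng):
    """Stochastic EnKF analysis step. E is the (n, N) state ensemble, y the
    observation, Hop the (d, n) observation matrix, R the observation-noise
    covariance. Uses sample covariances and perturbed observations."""
    n, N = E.shape
    X = E - E.mean(axis=1, keepdims=True)          # state anomalies
    HX = Hop @ E
    HXp = HX - HX.mean(axis=1, keepdims=True)      # observed-space anomalies
    Pyy = HXp @ HXp.T / (N - 1) + R
    Pxy = X @ HXp.T / (N - 1)
    K = Pxy @ np.linalg.inv(Pyy)                   # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, N).T
    return E + K @ (Y - HX)

# Scalar sanity check: Gaussian prior N(0, 1), observation y = 2 with unit
# noise; the exact Kalman posterior is N(1, 1/2).
rng = np.random.default_rng(0)
E = rng.standard_normal((1, 5000))
Ea = enkf_update(E, np.array([2.0]), np.eye(1), np.eye(1), rng)
```

The update is linear in the observations and matches the Bayesian posterior only in the Gaussian case, which is exactly the limitation the balance-related remedies of the article address.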
The superposition operation S^n_A, n >= 1, n in N, assigns to each (n + 1)-tuple of n-ary operations on a set A an n-ary operation on A and satisfies the so-called superassociative law, a generalization of the associative law. The corresponding algebraic structures are Menger algebras of rank n. A partial algebra of type (n + 1) which satisfies the superassociative law as a weak identity is said to be a partial Menger algebra of rank n. As a generalization of linear terms, we define r-terms as terms in which each variable occurs at most r times. It will be proved that n-ary r-terms form partial Menger algebras of rank n. In this paper, some algebraic properties of partial Menger algebras, such as generating systems, homomorphic images and freeness, are investigated. As a generalization of hypersubstitutions and linear hypersubstitutions, we consider r-hypersubstitutions.
The Kramers problem for SDEs driven by small, accelerated Lévy noise with exponentially light jumps
(2021)
We establish Freidlin-Wentzell results for a nonlinear ordinary differential equation starting close to the stable state 0, say, subject to a perturbation by a stochastic integral driven by an ε-small and (1/ε)-accelerated Lévy process with exponentially light jumps. For this purpose, we derive a large deviations principle for the stochastically perturbed system using the weak convergence approach developed by Budhiraja, Dupuis, Maroulas and collaborators in recent years. In the sequel, we solve the associated asymptotic first escape problem from a bounded neighborhood of 0 in the limit ε → 0, which is also known as the Kramers problem in the literature.
Androulidakis and Skandalis (2009) showed that every singular foliation has an associated topological groupoid, called the holonomy groupoid. In this note, we exhibit some functorial properties of this assignment: if a foliated manifold (M, F_M) is the quotient of a foliated manifold (P, F_P) along a surjective submersion with connected fibers, then the same is true for the corresponding holonomy groupoids. For quotients by a Lie group action, an analogous statement holds under suitable assumptions, yielding a Lie 2-group action on the holonomy groupoid.
In this paper we prove a strengthening of a theorem of Chang, Weinberger and Yu on obstructions to the existence of positive scalar curvature metrics on compact manifolds with boundary. They construct a relative index for the Dirac operator, which lives in a relative K-theory group, measuring the difference between the fundamental group of the boundary and of the full manifold.
Whenever the Riemannian metric has product structure and positive scalar curvature near the boundary, one can define an absolute index of the Dirac operator taking values in the K-theory of the C*-algebra of the fundamental group of the full manifold. This index depends on the metric near the boundary. We prove that (a slight variation of) the relative index of Chang, Weinberger and Yu is the image of this absolute index under the canonical map of K-theory groups.
This has the immediate corollary that positive scalar curvature on the whole manifold implies vanishing of the relative index, giving a conceptual and direct proof of the vanishing theorem of Chang, Weinberger and Yu (rather: a slight variation). To take the fundamental groups of the manifold and its boundary into account requires working with maximal C*-completions of the involved *-algebras. A significant part of this paper is devoted to foundational results regarding these completions. On the other hand, we introduce and propose a more conceptual and more geometric completion, which still has all the required functoriality.
The geomagnetic Kp index is one of the most extensively used indices of geomagnetic activity, both for scientific and operational purposes. This article reviews the properties of the Kp index and provides a reference for users of the Kp index and associated data products as derived and distributed by the GFZ German Research Centre for Geosciences. The near real-time production of the nowcast Kp index is of particular interest for space weather services and here we describe and evaluate its current setup.
We prove a homology vanishing theorem for graphs with positive Bakry-Émery curvature, analogous to a classic result of Bochner on manifolds [3]. Specifically, we prove that if a graph has positive curvature at every vertex, then its first homology group is trivial, where the notion of homology that we use for graphs is the path homology developed by Grigor'yan, Lin, Muranov, and Yau [11]. We moreover prove that the fundamental group is finite for graphs with positive Bakry-Émery curvature, analogous to a classic result of Myers on manifolds [22]. The proofs draw on several separate areas of graph theory, including graph coverings, gain graphs, and cycle spaces, in addition to the Bakry-Émery curvature, path homology, and graph homotopy. The main results follow as a consequence of several different relationships developed among these different areas. Specifically, we show that a graph with positive curvature cannot have a non-trivial infinite cover preserving 3-cycles and 4-cycles, and give a combinatorial interpretation of the first path homology in terms of the cycle space of a graph. Furthermore, we relate gain graphs to graph homotopy and the fundamental group developed by Grigor'yan, Lin, Muranov, and Yau [12], and obtain an alternative proof of their result that the abelianization of the fundamental group of a graph is isomorphic to the first path homology over the integers.
Various particle filters have been proposed over the last couple of decades with the common feature that the update step is governed by a type of control law. This feature makes them an attractive alternative to traditional sequential Monte Carlo, which scales poorly with the state dimension due to weight degeneracy. This article proposes a unifying framework that allows us to systematically derive the McKean-Vlasov representations of these filters for the discrete-time and continuous-time observation case, taking inspiration from the smooth approximation of the data considered in [D. Crisan and J. Xiong, Stochastics, 82 (2010), pp. 53-68; J. M. Clark and D. Crisan, Probab. Theory Related Fields, 133 (2005), pp. 43-56]. We consider three filters that have been proposed in the literature and use this framework to derive Itô representations of their limiting forms as the approximation parameter δ → 0. All filters require the solution of a Poisson equation defined on R^d, for which existence and uniqueness of solutions can be a nontrivial issue. We additionally establish conditions on the signal-observation system that ensure well-posedness of the weighted Poisson equation arising in one of the filters.
Forecast verification
(2021)
The philosophy of forecast verification is rather different between deterministic and probabilistic verification metrics: generally speaking, deterministic metrics measure differences, whereas probabilistic metrics assess reliability and sharpness of predictive distributions. This article considers the root-mean-square error (RMSE), which can be seen as a deterministic metric, and the probabilistic metric Continuous Ranked Probability Score (CRPS), and demonstrates that under certain conditions, the CRPS can be mathematically expressed in terms of the RMSE when these metrics are aggregated. One of the required conditions is the normality of distributions. The other condition is that, while the forecast ensemble need not be calibrated, any bias or over/underdispersion cannot depend on the forecast distribution itself. Under these conditions, the CRPS is a fraction of the RMSE, and this fraction depends only on the heteroscedasticity of the ensemble spread and the measures of calibration. The derived CRPS-RMSE relationship for the case of perfect ensemble reliability is tested on simulations of idealised two-dimensional barotropic turbulence. Results suggest that the relationship holds approximately despite the normality condition not being met.
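One instance of such a relationship can be checked directly: for a Gaussian predictive distribution the CRPS has a well-known closed form (Gneiting and Raftery), and for a perfectly reliable, homoscedastic Gaussian ensemble the aggregated CRPS comes out as the fraction 1/sqrt(pi) ≈ 0.56 of the RMSE. The following is a hedged numerical check with a toy setup of our own, not the barotropic-turbulence experiments of the article:

```python
import numpy as np
from math import erf, exp, pi, sqrt

def crps_normal(mu, sigma, y):
    """Closed-form CRPS of a Gaussian forecast N(mu, sigma^2) at observation y:
    CRPS = sigma * (z (2 Phi(z) - 1) + 2 phi(z) - 1/sqrt(pi)), z = (y-mu)/sigma."""
    z = (y - mu) / sigma
    pdf = exp(-z * z / 2) / sqrt(2 * pi)
    cdf = 0.5 * (1 + erf(z / sqrt(2)))
    return sigma * (z * (2 * cdf - 1) + 2 * pdf - 1 / sqrt(pi))

# Perfect reliability: draw the verifying observation from the forecast
# distribution itself; aggregate CRPS and RMSE over many independent cases.
rng = np.random.default_rng(1)
mu, sigma, n = 0.0, 1.3, 200_000
y = rng.normal(mu, sigma, n)
crps = np.mean([crps_normal(mu, sigma, yi) for yi in y])
rmse = np.sqrt(np.mean((y - mu) ** 2))
ratio = crps / rmse   # close to 1/sqrt(pi) for this perfectly reliable setup
```

Introducing bias or over/underdispersion into the verifying sample changes the fraction, which is exactly the dependence on calibration measures described above.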
The Bayesian solution to a statistical inverse problem can be summarised by a mode of the posterior distribution, i.e. a maximum a posteriori (MAP) estimator. The MAP estimator essentially coincides with the (regularised) variational solution to the inverse problem, seen as minimisation of the Onsager-Machlup (OM) functional of the posterior measure. An open problem in the stability analysis of inverse problems is to establish a relationship between the convergence properties of solutions obtained by the variational approach and by the Bayesian approach. To address this problem, we propose a general convergence theory for modes that is based on the Gamma-convergence of OM functionals, and apply this theory to Bayesian inverse problems with Gaussian and edge-preserving Besov priors. Part II of this paper considers more general prior distributions.
We derive Onsager-Machlup functionals for countable product measures on weighted ℓ^p subspaces of the sequence space R^N. Each measure in the product is a shifted and scaled copy of a reference probability measure on R that admits a sufficiently regular Lebesgue density. We study the equicoercivity and Gamma-convergence of sequences of Onsager-Machlup functionals associated to convergent sequences of measures within this class. We use these results to establish analogous results for probability measures on separable Banach or Hilbert spaces, including Gaussian, Cauchy, and Besov measures with summability parameter 1 ≤ p ≤ 2. Together with part I of this paper, this provides a basis for analysis of the convergence of maximum a posteriori estimators in Bayesian inverse problems and most likely paths in transition path theory.
The Arnoldi process can be applied to inexpensively approximate matrix functions of the form f(A)v and matrix functionals of the form v*(f(A))*g(A)v, where A is a large square non-Hermitian matrix, v is a vector, and the superscript * denotes transposition and complex conjugation. Here f and g are analytic functions that are defined in suitable regions of the complex plane. This paper reviews available approximation methods and describes new ones that provide higher accuracy for essentially the same computational effort by exploiting available, but generally unused, moment information. Numerical experiments show that in some cases the proposed modifications of the Arnoldi decomposition can improve the accuracy of v*(f(A))*g(A)v about as much as performing an additional step of the Arnoldi process.
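The basic Arnoldi approximation that the paper builds on can be sketched as follows. This is the standard scheme f(A)v ≈ ||v|| V_m f(H_m) e_1 only, not the moment-exploiting modifications proposed in the paper:

```python
import numpy as np
from scipy.linalg import expm

def arnoldi_fAv(A, v, m, f=expm):
    """Approximate f(A)v from an m-step Arnoldi decomposition A V_m = V_m H_m + r e_m^T
    via f(A)v ~ ||v|| V_m f(H_m) e_1. Plain textbook scheme; illustrative only."""
    n = A.shape[0]
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                  # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                 # lucky breakdown: Krylov space exact
            break
        V[:, j + 1] = w / H[j + 1, j]
    return beta * V[:, :m] @ f(H[:m, :m])[:, 0]

rng = np.random.default_rng(2)
A = rng.normal(size=(50, 50)) / 10              # mildly non-Hermitian test matrix
v = rng.normal(size=50)
approx = arnoldi_fAv(A, v, m=20)                # f = matrix exponential
exact = expm(A) @ v
```

The point is that only the small m x m matrix H_m is passed to f, so the cost is dominated by m matrix-vector products with A.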
In the course of the Covid-19 pandemic, two figures are discussed daily: the most recently reported number of new infections and the so-called reproduction rate. The latter indicates how many other people an individual infected with Corona infects on average. There are many ways to estimate this value; the Robert Koch-Institut, for instance, reports two R values in its daily situation report: a 4-day R value and a less volatile 7-day R value. This thesis presents a further way to model some aspects of the pandemic and to estimate the reproduction rate.
The first half of the thesis introduces the mathematical foundations needed for the modelling, assuming that the reader already has a basic understanding of stochastic processes. The preliminaries section introduces branching processes with several examples and presents the results from this field that are important for this thesis. We first treat simple branching processes and then extend them to branching processes with several types. To simplify notation, we restrict ourselves to two types; the principle, however, extends to an arbitrary number of types.
Above all, the importance of the parameter λ is emphasised. This value can be interpreted as the average number of offspring of an individual and determines the dynamics of the process over a longer time span. In the application to the pandemic, the parameter λ plays the same role as the reproduction rate R.
The second half of this thesis presents an application of the theory of multitype branching processes. Professor Yanev and his collaborators model the spread of the coronavirus via a branching process with two types in their publication "Branching stochastic processes as models of Covid-19 epidemic development". We discuss this model and derive estimators from it, with the aim of determining the reproduction rate. We also analyse ways of estimating the number of unreported cases. Finally, we apply the estimators to the German case numbers and evaluate the results.
Transition path theory (TPT) for diffusion processes is a framework for analyzing the transitions of multiscale ergodic diffusion processes between disjoint metastable subsets of state space. Most methods for applying TPT involve the construction of a Markov state model on a discretization of state space that approximates the underlying diffusion process. However, the assumption of Markovianity is difficult to verify in practice, and there are to date no known error bounds or convergence results for these methods. We propose a Monte Carlo method for approximating the forward committor, probability current, and streamlines from TPT for diffusion processes. Our method uses only sample trajectory data and partitions of state space based on Voronoi tessellations. It does not require the construction of a Markovian approximating process. We rigorously prove error bounds for the approximate TPT objects and use these bounds to show convergence to their exact counterparts in the limit of arbitrarily fine discretization. We illustrate some features of our method by application to a process that solves the Smoluchowski equation on a triple-well potential.
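The trajectory-averaging idea behind the committor estimator can be sketched in one dimension, where the cell partition is a Voronoi tessellation of an interval. The dynamics and all parameters below are illustrative, not the paper's setting:

```python
import numpy as np

def committor_estimate(traj, in_A, in_B, centers):
    """Estimate the forward committor q+ on Voronoi cells from one long
    trajectory: label each sample by whether the trajectory next enters B
    before A (backward sweep), then average labels within each cell."""
    labels = np.full(traj.size, np.nan)
    nxt = np.nan                                 # unknown after the final A/B visit
    for t in range(traj.size - 1, -1, -1):
        if in_B(traj[t]):
            nxt = 1.0
        elif in_A(traj[t]):
            nxt = 0.0
        labels[t] = nxt
    cells = np.argmin(np.abs(traj[:, None] - centers[None, :]), axis=1)
    q = np.full(centers.size, np.nan)
    for k in range(centers.size):
        mask = (cells == k) & ~np.isnan(labels)
        if mask.any():
            q[k] = labels[mask].mean()
    return q

# reflected random walk on [0, 1]; metastable sets A = {x < 0.1}, B = {x > 0.9}
rng = np.random.default_rng(3)
traj = np.empty(200_000)
x = 0.5
for t in range(traj.size):
    x = min(max(x + rng.normal(0.0, 0.05), 0.0), 1.0)
    traj[t] = x
centers = np.linspace(0.05, 0.95, 19)
q = committor_estimate(traj, lambda y: y < 0.1, lambda y: y > 0.9, centers)
# for a driftless diffusion the exact committor is linear: q(x) = (x - 0.1) / 0.8
```

Note that no Markov transition matrix is ever built: the estimate uses only sample trajectory data and the cell partition, mirroring the approach described above.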
We establish a new approach to treating elliptic boundary value problems (BVPs) on manifolds with boundary and regular corners, up to singularity order 2. Ellipticity and parametrices are obtained in terms of symbols taking values in algebras of BVPs on manifolds of correspondingly lower singularity order. Those refer to Boutet de Monvel's calculus of operators with the transmission property, see Boutet de Monvel (Acta Math 126:11-51, 1971) for the case of smooth boundary. On corner configurations the operators act in spaces with multiple weights. We mainly study the case of upper left entries in the respective 2 x 2 operator block-matrices of such a calculus. Green operators in the sense of Boutet de Monvel (Acta Math 126:11-51, 1971) appear analogously in singular cases, and they are complemented by contributions of Mellin type. We formulate a result on ellipticity and the Fredholm property in weighted corner spaces, with parametrices of analogous kind.
Diffusion maps is a manifold learning algorithm widely used for dimensionality reduction. Using a sample from a distribution, it approximates the eigenvalues and eigenfunctions of associated Laplace-Beltrami operators. Theoretical bounds on the approximation error are, however, generally much weaker than the rates that are seen in practice. This paper uses new approaches to improve the error bounds in the model case where the distribution is supported on a hypertorus. For the data sampling (variance) component of the error we make spatially localized compact embedding estimates on certain Hardy spaces; we study the deterministic (bias) component as a perturbation of the Laplace-Beltrami operator's associated PDE and apply relevant spectral stability results. Using these approaches, we match long-standing pointwise error bounds for both the spectral data and the norm convergence of the operator discretization. We also introduce an alternative normalization for diffusion maps based on Sinkhorn weights. This normalization approximates a Langevin diffusion on the sample and yields a symmetric operator approximation. We prove that it has better convergence compared with the standard normalization on flat domains, and we present a highly efficient rigorous algorithm to compute the Sinkhorn weights.
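For reference, the standard (non-Sinkhorn) diffusion maps normalization that the paper compares against can be sketched on a uniform sample of the 1-torus; parameter choices are illustrative:

```python
import numpy as np

def diffusion_maps(X, eps, n_evecs=4):
    """Diffusion maps with the standard alpha = 1 density normalization
    (the baseline normalization; the Sinkhorn variant is not implemented here)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / eps)
    q = K.sum(1)
    K = K / np.outer(q, q)                     # divide out the sampling density
    d = K.sum(1)
    S = K / np.sqrt(np.outer(d, d))            # symmetric conjugate of the Markov matrix
    vals, vecs = np.linalg.eigh(S)
    idx = np.argsort(vals)[::-1][:n_evecs]
    return vals[idx], vecs[:, idx] / np.sqrt(d)[:, None]

theta = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
X = np.column_stack([np.cos(theta), np.sin(theta)])   # uniform sample of the circle
vals, vecs = diffusion_maps(X, eps=0.05)
# vals[0] = 1; vals[1] and vals[2] pair up (cos/sin Laplace-Beltrami eigenfunctions)
```

On this model case the paired eigenvalues reflect the double multiplicity of the circle's Laplace-Beltrami spectrum, the quantity whose approximation error the paper analyzes.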
In this article we prove upper bounds for the Laplace eigenvalues λ_k below the essential spectrum of strictly negatively curved Cartan-Hadamard manifolds. Our bound is given in terms of k^2 and specific geometric data of the manifold. This also applies to the particular case of non-compact manifolds whose sectional curvature tends to −∞, where no essential spectrum is present due to a theorem of Donnelly and Li. The result stands in clear contrast to Laplacians on graphs, where such a bound fails to be true in general.
Satellite-measured tidal magnetic signals are of growing importance. These fields are mainly used to infer Earth's mantle conductivity, but also to derive changes in the oceanic heat content. We present a new Kalman filter-based method to derive tidal magnetic fields from satellite magnetometers: KALMAG. The method's advantage is that it provides a precisely estimated posterior error covariance matrix. We present the results of a simultaneous estimation of the magnetic signals of 8 major tides from 17 years of Swarm and CHAMP data. For the first time, robustly derived posterior error distributions are reported along with the tidal magnetic fields. The results are compared to other estimates that are either based on numerical forward models or on satellite inversions of the same data. For all comparisons, maximal differences and the corresponding globally averaged RMSE are reported. We found that the inter-product differences are comparable with the KALMAG-based errors only in a global mean sense. Here, all approaches give values of the same order, e.g., 0.09 nT-0.14 nT for M2. Locally, the KALMAG posterior errors are up to one order of magnitude smaller than the inter-product differences, e.g., 0.12 nT vs. 0.96 nT for M2.
Both ground- and satellite-based airglow imaging have significantly contributed to understanding the low-latitude ionosphere, especially the morphology and dynamics of the equatorial ionization anomaly (EIA). The NASA Global-scale Observations of the Limb and Disk (GOLD) mission focuses on far-ultraviolet airglow images from a geostationary orbit at 47.5 degrees W. This region is of particular interest at low magnetic latitudes because of the high magnetic declination (i.e., about -20 degrees) and proximity of the South Atlantic magnetic anomaly. In this study, we characterize an exciting feature of the nighttime EIA using GOLD observations from October 5, 2018 to June 30, 2020. It consists of a wavelike structure of a few thousand kilometers seen as poleward and equatorward displacements of the EIA-crests. Initial analyses show that the synoptic-scale structure is symmetric about the dip equator and appears nearly stationary with time over the night. In quasi-dipole coordinates, maximum poleward displacements of the EIA-crests are seen at about +/- 12 degrees latitude and around 20 and 60 degrees longitude (i.e., in geographic longitude at the dip equator, about 53 degrees W and 14 degrees W). The wavelike structure presents typical zonal wavelengths of about 6.7 × 10^3 km and 3.3 × 10^3 km. The structure's occurrence and wavelength are highly variable on a day-to-day basis with no apparent dependence on geomagnetic activity. In addition, a cluster or quasi-periodic wave train of equatorial plasma depletions (EPDs) is often detected within the synoptic-scale structure. We further outline the difference in observing these EPDs from FUV images and in situ measurements during a GOLD and Swarm mission conjunction.
Analysis of protrusion dynamics in amoeboid cell motility by means of regularized contour flows
(2021)
Amoeboid cell motility is essential for a wide range of biological processes including wound healing, embryonic morphogenesis, and cancer metastasis. It relies on complex dynamical patterns of cell shape changes that pose long-standing challenges to mathematical modeling and raise a need for automated and reproducible approaches to extract quantitative morphological features from image sequences. Here, we introduce a theoretical framework and a computational method for obtaining smooth representations of the spatiotemporal contour dynamics from stacks of segmented microscopy images. Based on a Gaussian process regression we propose a one-parameter family of regularized contour flows that allows us to continuously track reference points (virtual markers) between successive cell contours. We use this approach to define a coordinate system on the moving cell boundary and to represent different local geometric quantities in this frame of reference. In particular, we introduce the local marker dispersion as a measure to identify localized membrane expansions and provide a fully automated way to extract the properties of such expansions, including their area and growth time. The methods are available as an open-source software package called AmoePy, a Python-based toolbox for analyzing amoeboid cell motility (based on time-lapse microscopy data), including a graphical user interface and detailed documentation. Due to the mathematical rigor of our framework, we envision it to be of use for the development of novel cell motility models. We mainly use experimental data of the social amoeba Dictyostelium discoideum to illustrate and validate our approach.
Author summary: Amoeboid motion is a crawling-like cell migration that plays an important role in multiple biological processes such as wound healing and cancer metastasis. This type of cell motility results from expanding and simultaneously contracting parts of the cell membrane.
From fluorescence images, we obtain a sequence of points, representing the cell membrane, for each time step. By using regression analysis on these sequences, we derive smooth representations, so-called contours, of the membrane. Since the number of measurements is discrete and often limited, the question is raised of how to link consecutive contours with each other. In this work, we present a novel mathematical framework in which these links are described by regularized flows allowing a certain degree of concentration or stretching of neighboring reference points on the same contour. This stretching rate, the so-called local dispersion, is used to identify expansions and contractions of the cell membrane providing a fully automated way of extracting properties of these cell shape changes. We applied our methods to time-lapse microscopy data of the social amoeba Dictyostelium discoideum.
Nonparametric goodness-of-fit testing for parametric covariate models in pharmacometric analyses
(2021)
The characterization of covariate effects on model parameters is a crucial step during pharmacokinetic/pharmacodynamic analyses. Although covariate selection criteria have been studied extensively, the choice of the functional relationship between covariates and parameters has received much less attention. Often, a simple particular class of covariate-to-parameter relationships (linear, exponential, etc.) is chosen ad hoc or based on domain knowledge, and a statistical evaluation is limited to the comparison of a small number of such classes. Goodness-of-fit testing against a nonparametric alternative provides a more rigorous approach to covariate model evaluation, but no such test has been proposed so far. In this manuscript, we derive and evaluate nonparametric goodness-of-fit tests for parametric covariate models (the null hypothesis) against a kernelized Tikhonov-regularized alternative, transferring concepts from statistical learning to the pharmacological setting. The approach is evaluated in a simulation study on the estimation of the age-dependent maturation effect on the clearance of a monoclonal antibody. Scenarios of varying data sparsity and residual error are considered. The goodness-of-fit test correctly identified misspecified parametric models with high power in relevant scenarios. The case study provides proof of concept of the feasibility of the proposed approach, which is envisioned to be beneficial for applications that lack well-founded covariate models.
A sufficient quantitative understanding of aluminium (Al) toxicokinetics (TK) in man is still lacking, although highly desirable for risk assessment of Al exposure. Baseline exposure and the risk of contamination severely limit the feasibility of TK studies administering the naturally occurring isotope Al-27, both in animals and man. These limitations are absent in studies with Al-26 as a tracer, but tissue data are limited to animal studies. A TK model capable of inter-species translation to make valid predictions of Al levels in humans-especially in toxicological relevant tissues like bone and brain-is urgently needed. Here, we present: (i) a curated dataset which comprises all eligible studies with single doses of Al-26 tracer administered as citrate or chloride salts orally and/or intravenously to rats and humans, including ultra-long-term kinetic profiles for plasma, blood, liver, spleen, muscle, bone, brain, kidney, and urine up to 150 weeks; and (ii) the development of a physiology-based (PB) model for Al TK after intravenous and oral administration of aqueous Al citrate and Al chloride solutions in rats and humans. Based on the comprehensive curated Al-26 dataset, we estimated substance-dependent parameters within a non-linear mixed-effect modelling context. The model fitted the heterogeneous Al-26 data very well and was successfully validated against datasets in rats and humans. The presented PBTK model for Al, based on the most extensive and diverse dataset of Al exposure to date, constitutes a major advancement in the field, thereby paving the way towards a more quantitative risk assessment in humans.
The Lie group method in combination with the Magnus expansion is utilized to develop a universal method applicable to solving a Sturm–Liouville problem (SLP) of any order with arbitrary boundary conditions. It is shown that the method is able to solve direct regular and some singular SLPs of even order (tested up to order eight), with a mix of boundary conditions (including non-separable ones and finite singular endpoints), accurately and efficiently.
The present technique is successfully applied to overcome the difficulties in finding suitable sets of eigenvalues so that the inverse SLP problem can be effectively solved.
Next, a concrete implementation of the inverse Sturm–Liouville algorithm proposed by Barcilon (1974) is provided. Furthermore, the computational feasibility and applicability of this algorithm to solving inverse Sturm–Liouville problems of orders n = 2 and n = 4 are verified successfully. The method is observed to succeed even in the presence of significant noise, provided that the assumptions of the algorithm are satisfied.
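For the direct problem, a simple finite-difference baseline (not the Lie group / Magnus scheme of this work) is a useful cross-check against known exact Dirichlet eigenvalues:

```python
import numpy as np

def slp_eigenvalues(q, a, b, n, k=5):
    """Lowest k Dirichlet eigenvalues of -y'' + q(x) y = lambda y on [a, b],
    via second-order central finite differences on n interior grid points.
    A plain baseline for cross-checking, illustrative only."""
    h = (b - a) / (n + 1)
    x = a + h * np.arange(1, n + 1)
    A = (np.diag(2.0 / h**2 + q(x))
         + np.diag(-np.ones(n - 1) / h**2, 1)
         + np.diag(-np.ones(n - 1) / h**2, -1))
    return np.sort(np.linalg.eigvalsh(A))[:k]

# q = 0 on [0, pi]: the exact Dirichlet eigenvalues are k^2 = 1, 4, 9, 16, 25
lams = slp_eigenvalues(lambda x: np.zeros_like(x), 0.0, np.pi, n=1000)
```

The O(h^2) discretization error grows like k^4, which is exactly the kind of degradation for higher eigenvalues that more structured integrators such as Magnus-type methods are designed to avoid.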
In conclusion, this work provides methods that can be adapted successfully for solving a direct (regular/singular) or inverse SLP of an arbitrary order with arbitrary boundary conditions.
While patients are known to respond differently to drug therapies, current clinical practice often still follows a standardized dosage regimen for all patients. For drugs with a narrow range of both effective and safe concentrations, this approach may lead to a high incidence of adverse events or subtherapeutic dosing in the presence of high patient variability. Model-informed precision dosing (MIPD) is a quantitative approach towards dose individualization based on mathematical modeling of dose-response relationships integrating therapeutic drug/biomarker monitoring (TDM) data. MIPD may considerably improve the efficacy and safety of many drug therapies. Current MIPD approaches, however, rely either on pre-calculated dosing tables or on simple point predictions of the therapy outcome. These
approaches lack a quantification of uncertainties and the ability to account for effects that are delayed. In addition, the underlying models are not improved while applied to patient data. Therefore, current approaches are not well suited for informed clinical decision-making based on a differentiated understanding of the individually predicted therapy outcome.
The objective of this thesis is to develop mathematical approaches for MIPD, which (i) provide efficient fully Bayesian forecasting of the individual therapy outcome including associated uncertainties, (ii) integrate Markov decision processes via reinforcement learning (RL) for a comprehensive decision framework for dose individualization, (iii) allow for continuous learning across patients and hospitals. Cytotoxic anticancer chemotherapy with its major dose-limiting toxicity, neutropenia, serves as a therapeutically relevant application example.
For more comprehensive therapy forecasting, we apply Bayesian data assimilation (DA) approaches, integrating patient-specific TDM data into mathematical models of chemotherapy-induced neutropenia that build on prior population analyses. The value of uncertainty quantification is demonstrated as it allows reliable computation of the patient-specific probabilities of relevant clinical quantities, e.g., the neutropenia grade. In view of novel home monitoring devices that increase the amount of TDM data available, the data processing of
sequential DA methods proves to be more efficient and facilitates handling of the variability between dosing events.
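A single sequential DA step of the kind described above can be sketched as a particle-filter update of an individual parameter from one TDM observation. The model and all names below are illustrative placeholders, not the thesis's neutropenia model:

```python
import numpy as np

def tdm_update(particles, weights, observation, predict, obs_sigma, rng):
    """One sequential Bayesian DA step: reweight parameter particles by the
    Gaussian likelihood of a new TDM observation, then resample.
    All model details are illustrative placeholders."""
    residual = observation - predict(particles)
    loglik = -0.5 * (residual / obs_sigma) ** 2
    w = weights * np.exp(loglik - loglik.max())    # stabilized importance weights
    w /= w.sum()
    idx = rng.choice(particles.size, size=particles.size, p=w)
    return particles[idx], np.full(particles.size, 1.0 / particles.size)

# toy example: infer a clearance-like parameter from one concentration sample
rng = np.random.default_rng(6)
prior = rng.lognormal(0.0, 0.5, size=5000)          # population prior particles
model = lambda cl: 10.0 * np.exp(-2.0 * cl)         # concentration after 2 h (toy)
obs = model(1.2)                                    # datum generated at cl = 1.2
post, w = tdm_update(prior, np.full(5000, 1 / 5000), obs, model,
                     obs_sigma=0.3, rng=rng)
```

The resampled particle cloud directly yields patient-specific probabilities of clinical quantities (e.g., the probability of a concentration band), which is the uncertainty quantification emphasized above.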
By transferring concepts from DA and RL we develop novel approaches for MIPD. While DA-guided dosing integrates individualized uncertainties into dose selection, RL-guided dosing provides a framework to consider delayed effects of dose selections. The combined
DA-RL approach takes into account both aspects simultaneously and thus represents a holistic approach towards MIPD. Additionally, we show that RL can be used to gain insights into important patient characteristics for dose selection. The novel dosing strategies substantially reduce the occurrence of both subtherapeutic and life-threatening neutropenia grades in a simulation study based on a recent clinical study (CEPAC-TDM trial) compared to currently used MIPD approaches.
If MIPD is to be implemented in routine clinical practice, a certain bias of the underlying model is inevitable, as such models are typically based on data from comparably small clinical trials that reflect the diversity of real-world patient populations only to a limited extent. We propose a sequential hierarchical Bayesian inference framework that enables continuous cross-patient learning of the underlying model parameters of the target patient population. Importantly, the approach requires only summary information of the individual patient data to update the model. This separation of individual inference from population inference enables implementation across different centers of care.
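The summary-based population update can be sketched under a normal-normal conjugate simplification, in which only each patient's posterior mean and variance cross the center boundary; this is an illustrative reduction, not the thesis's full hierarchical scheme:

```python
import numpy as np

def update_population(pop_mean, pop_var, patient_mean, patient_var):
    """Sequential update of a population-level parameter from one patient's
    posterior summary (mean, variance) under a normal-normal conjugate
    approximation. Only summaries are exchanged, never raw patient data."""
    prec = 1.0 / pop_var + 1.0 / patient_var
    new_var = 1.0 / prec
    new_mean = new_var * (pop_mean / pop_var + patient_mean / patient_var)
    return new_mean, new_var

mean, var = 0.0, 10.0                        # vague initial population belief
for pm in [1.2, 0.8, 1.1, 0.9, 1.0]:         # per-patient posterior means (toy values)
    mean, var = update_population(mean, var, pm, patient_var=0.5)
```

Because conjugate updates commute, processing patients sequentially gives the same population posterior as a batch analysis, which is what makes the center-by-center implementation feasible.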
The proposed approaches substantially improve current MIPD approaches, taking into account new trends in health care and aspects of practical applicability. They enable progress towards more informed clinical decision-making, ultimately increasing patient benefits beyond the current practice.