Year of publication
- 2018 (67)
Document Type
- Article (58)
- Other (3)
- Monograph/Edited Volume (2)
- Doctoral Thesis (2)
- Conference Proceeding (1)
- Review (1)
Keywords
- Boolean model (2)
- Edge calculus (2)
- Kompetenzen (2)
- Teamarbeit (2)
- adaptive estimation (2)
- data assimilation (2)
- discrepancy principle (2)
- early stopping (2)
- uncertainty quantification (2)
- Aerosol (1)
- Agmon estimates (1)
- Alternatividentitäten (1)
- Alternativvarietäten (1)
- Analyse (1)
- Andere Fachrichtungen (1)
- Angiogenesis (1)
- Anisotropic pseudo-differential operators (1)
- Artof (1)
- Bayesian inversion (1)
- Beweisaufgaben (1)
- Big Data (1)
- Blended Learning (1)
- Blended learning (1)
- Blood coagulation network (1)
- Boutet de Monvel's calculus (1)
- Boutet de Monvels Kalkül (1)
- C-Test (1)
- Chicken chorioallantoic membrane (CAM) (1)
- Clifford semigroup (1)
- Clifford-Halbgruppen (1)
- Codeverständnis (1)
- Continuum (1)
- Continuum random cluster model (1)
- Control theory (1)
- DLR equations (1)
- Data Literacy (1)
- Data Science (1)
- Degenerationsprozesse (1)
- Dependent thinning (1)
- Disagreement percolation (1)
- Dispositional learning analytics (1)
- Distributed Learning (1)
- Electron spectroscopy (1)
- Ellipticity and parametrices (1)
- Empirische Untersuchung (1)
- Endothelin (ET) (1)
- Energy resolution (1)
- Explorative Datenanalyse (1)
- Exponential decay of pair correlation (1)
- Fertigkeiten (1)
- Finsler distance (1)
- Formale Sprachen und Automaten (1)
- Formative assessment (1)
- Forschung (1)
- Fortuin-Kasteleyn representation (1)
- Future time interval (1)
- Gaussian sequence model (1)
- Gibbs point process (1)
- Graphs (1)
- Halbgruppentheorie (1)
- High dimensional statistical inference (1)
- Hochschulkurse (1)
- Hochschullehre (1)
- Inference post model-selection (1)
- Informatik (1)
- Informatik für alle (1)
- Informationskompetenz (1)
- Inhalte (1)
- Inhaltsanalyse (1)
- Inverse problem (1)
- Iran (1)
- Kalman Bucy filter (1)
- Kanten-Randwertprobleme (1)
- Kette von Halbgruppen (1)
- Kollaboration (1)
- Kompetenzmessung (1)
- Landweber iteration (1)
- Laplacian (1)
- Learning analytics (1)
- Learning dispositions (1)
- Lehre (1)
- Lehrevaluation (1)
- Level of confidence (1)
- Linear inverse problems (1)
- Lipopolysaccharides (LPS) (1)
- Lückentext (1)
- Mannigfaltigkeiten mit Kante (1)
- Mannigfaltigkeiten mit Singularitäten (1)
- Mathematical modeling (1)
- Maximum expected earthquake magnitude (1)
- Mellin quantizations (1)
- Mellin transform (1)
- Metastasis (1)
- Minimax Optimality (1)
- Minimax convergence rates (1)
- Minimax hypothesis testing (1)
- Model order reduction (1)
- Navier-Stokes equations (1)
- Nonlinear systems (1)
- OSSS inequality (1)
- Ollivier-Ricci (1)
- Operator-valued symbols of Mellin type (1)
- Operators on singular manifolds (1)
- Orthogruppen (1)
- Phase transition (1)
- PoSI constants (1)
- Poisson process (1)
- Primary 26D15 (1)
- Programmierausbildung (1)
- Projekte (1)
- Raman lidar (1)
- Random cluster model (1)
- Randomised tree algorithm (1)
- Randwertprobleme (1)
- Re-Engineering (1)
- Reproducing kernel Hilbert space (1)
- Scattering theory (1)
- Semiclassical difference operator (1)
- Software Engineering (1)
- Softwareentwicklung (1)
- Spectral Regularization (1)
- Spectral regularization (1)
- Statistical learning (1)
- Stochastic domination (1)
- Stratified spaces (1)
- Structured population equation (1)
- Strukturverbesserung (1)
- Studiengänge (1)
- Studierendenperformance (1)
- Synchrotron (1)
- Theoretische Informatik (1)
- Time of flight (1)
- Unique Gibbs state (1)
- Volterra operator (1)
- Wartung von Lehrveranstaltungen (1)
- Wide angle (1)
- Wissenschaftliches Arbeiten (1)
- accuracy (1)
- adaptivity (1)
- alignment (1)
- alternative variety (1)
- articulation (1)
- asymptotic behavior (1)
- asymptotic expansion (1)
- boundary element method (1)
- boundary value problems (1)
- chain of semigroups (1)
- classical solution (1)
- conditions of success (1)
- confidence intervals (1)
- confidence sets (1)
- congruence (1)
- corner parametrices (1)
- disjunction of identities (1)
- doppelsemigroup (1)
- duale IT-Ausbildung (1)
- e-Assessment (1)
- e-Learning (1)
- early mathematical education (1)
- edge boundary value problems (1)
- ensemble Kalman filter (1)
- fluid-structure interaction (1)
- forschungsorientiertes Lernen (1)
- fracture network (1)
- free algebra (1)
- heterogeneity (1)
- high-dimensional inference (1)
- idleness (1)
- interaction matrix (1)
- interassociativity (1)
- inverse problem (1)
- inversion (1)
- left-right asymmetry (1)
- limiting distribution (1)
- linear inverse problems (1)
- linear regression (1)
- low rank matrix recovery (1)
- low rank recovery (1)
- manifolds with edge (1)
- manifolds with singularities (1)
- matrix completion (1)
- minimax hypothesis testing (1)
- model error (1)
- morphology (1)
- motion correction (1)
- multi-modular morphology (1)
- multi-well potential (1)
- nodal flow (1)
- nonasymptotic minimax separation rate (1)
- nonparametric statistics (1)
- numerical methods (1)
- operator-valued symbols (1)
- optimal transport (1)
- oracle inequalities (1)
- oracle inequality (1)
- orthogroup (1)
- paleoearthquakes (1)
- particle filter (1)
- particle microphysics (1)
- percolation (1)
- photometer (1)
- pseudo-differential equation (1)
- pseudo-differentielle Gleichungen (1)
- regularization (1)
- restricted isometry property (1)
- rock mechanics (1)
- seismic hazard (1)
- semigroup (1)
- semigroup theory (1)
- sparsity (1)
- spectral cut-off (1)
- stability (1)
- starker Halbverband von Halbgruppen (1)
- statistical seismology (1)
- stochastic models (1)
- stress variability (1)
- strong semilattice of semigroups (1)
- target dimensions (1)
- truncated SVD (1)
- tunneling (1)
- unknown variance (1)
- weighted edge and corner spaces (1)
Institute
- Institut für Mathematik (67)
Teaching scientific working practices is a central aspect of research-oriented degree programmes such as computer science. Despite a variety of course offerings, deficiencies in the quality of students' work become visible in the medium and long term. This paper therefore analyses the profile of the students, their application of scientific working methods, and the proseminars on the topic „Einführung in das wissenschaftliche Arbeiten" offered at a German university. The results of several surveys reveal a range of problems among students, including in process comprehension, time management, and communication.
In this chapter, an overview of the systematic eradication of basic science foci in European universities over the last two decades is given. This happens under the slogan of optimising university education for the needs and demands of society. It is pointed out that reliance on "market demands" brings with it long-term deficiencies in the maintenance of the basic and advanced knowledge construction in societies that is necessary for long-term future technological advances. University policies that claim to improve higher education towards more immediate efficiency may end up with the opposite effect, harming its quality and its expected long-term positive impact on society.
To be prepared for life in the digital society, everyone today needs a broad foundation in computer science for a wide range of situations. The importance of computer science is growing not only in ever more areas of our daily lives, but also in ever more fields of study. To prepare young people for their future lives and/or future careers, various universities offer computer science modules for students of other disciplines. The materials of these courses form an extensive data pool for empirically identifying those aspects of computer science that matter for students of other subjects. In the following, 70 modules on computer science education for students of other disciplines are analysed. The materials (publications, syllabi, and timetables) are first examined with a qualitative content analysis following Mayring and then evaluated quantitatively. Based on the analysis, goals, central topics, and types of tools used are identified.
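The quantitative step that follows the qualitative coding can be as simple as a frequency count of coded categories across modules. A minimal sketch in Python, with entirely invented category names and coding results (the actual categories come from the Mayring-style analysis of the 70 modules):

```python
from collections import Counter

# Hypothetical coding result: each module was assigned a set of category
# codes during the qualitative content analysis (names are invented here).
coded_modules = [
    {"programming", "data management"},
    {"programming", "theory"},
    {"data management", "tools"},
    {"programming", "tools"},
]

# Quantitative step: in how many modules does each category occur?
frequencies = Counter(code for module in coded_modules for code in module)

for category, count in frequencies.most_common():
    print(f"{category}: {count}/{len(coded_modules)} modules")
```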
Was ist Data Science?
(2018)
In connection with the developments of recent years, in particular in the areas of big data, data management, and machine learning, the handling and analysis of data have evolved substantially. Data science is now regarded as a discipline in its own right, one that is increasingly represented by dedicated degree programmes at universities. Despite this growing importance, it is often unclear which concrete contents are associated with it, since it appears in the most diverse forms. In this contribution, the computer science content behind data science is therefore determined through a qualitative analysis of the module handbooks of established degree programmes in this field, contributing to a characterisation of the discipline. Using the example of the development of a data literacy competence model, which is sketched as an outlook, the significance of this characterisation for further research is made explicit.
Vorlesungs-Pflege
(2018)
Much like ageing processes in software, lectures also degenerate if they are not adequately maintained. The reasons for this are examined, as are possible indicators and countermeasures, always from a computer scientist's perspective. Three lectures illustrate how the degeneration of courses can be counteracted. In the absence of sufficiently large empirical data, the paper offers no incontrovertible truths. Rather, one goal is to offer colleagues who observe similar phenomena a first anchor for an internal discourse. A long-term goal is to compile a catalogue of measures for maintaining computer science lectures.
We study the Volterra property of a class of anisotropic pseudo-differential operators on R x B for a manifold B with edge Y and time-variable t. This exposition belongs to a program for studying parabolicity in such a situation. In the present consideration we establish non-smoothing elements in a subalgebra with anisotropic operator-valued symbols of Mellin type with holomorphic symbols in the complex Mellin covariable from the cone theory, where the symbols extend holomorphically with respect to the covariable of t to the lower complex half-plane. The resulting space of Volterra operators enlarges an approach of Buchholz (Parabolische Pseudodifferentialoperatoren mit operatorwertigen Symbolen. Ph.D. thesis, Universität Potsdam, 1996) by necessary elements to a new operator algebra containing Volterra parametrices under an appropriate condition of anisotropic ellipticity. Our approach avoids some difficulty in choosing Volterra quantizations in the edge case by generalizing specific achievements from the isotropic edge-calculus, obtained by Seiler (Pseudodifferential calculus on manifolds with non-compact edges. Ph.D. thesis, University of Potsdam, 1997); see also Gil et al. (in: Demuth et al. (eds) Mathematical research, vol 100. Akademie Verlag, Berlin, pp 113-137, 1997; Osaka J Math 37:221-260, 2000).
Understanding and reducing complex systems pharmacology models based on a novel input-response index
(2018)
A growing understanding of complex processes in biology has led to large-scale mechanistic models of pharmacologically relevant processes. These models are increasingly used to study the response of the system to a given input or stimulus, e.g., after drug administration. Understanding the input–response relationship, however, is often a challenging task due to the complexity of the interactions between its constituents as well as the size of the models. An approach that quantifies the importance of the different constituents for a given input–output relationship and allows the dynamics to be reduced to their essential features is therefore highly desirable. In this article, we present a novel state- and time-dependent quantity called the input–response index that quantifies the importance of state variables for a given input–response relationship at a particular time. It is based on the concept of time-bounded controllability and observability, and defined with respect to a reference dynamics. In application to the brown snake venom–fibrinogen (Fg) network, the input–response indices give insight into the coordinated action of specific coagulation factors and into those factors that contribute only little to the response. We demonstrate how the indices can be used to reduce large-scale models in a two-step procedure: (i) elimination of states whose dynamics have only minor impact on the input–response relationship, and (ii) proper lumping of the remaining (lower order) model. In application to the brown snake venom–fibrinogen network, this resulted in a reduction from 62 to 8 state variables in the first step, and a further reduction to 5 state variables in the second step. We further illustrate that the sequence, in which a recursive algorithm eliminates and/or lumps state variables, has an impact on the final reduced model.
The input–response indices are particularly suited to determine an informed sequence, since they are based on the dynamics of the original system. In summary, the novel measure of importance provides a powerful tool for analysing the complex dynamics of large-scale systems and a means for very efficient model order reduction of nonlinear systems.
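The paper's index is built on time-bounded controllability and observability; as a much cruder stand-in, the first reduction step (dropping states with little impact on the input–response relationship) can be illustrated by freezing one state at a time in a toy ODE cascade and measuring how much the observed output changes. The three-state model and all rate constants below are invented for illustration only:

```python
# Toy illustration (not the paper's index): rank states of a small linear
# ODE system by how much freezing each state changes the observed output.

def simulate(frozen=None, steps=2000, dt=0.01):
    # x0 and x1 are driven by the stimulus u; the output x2 depends
    # strongly on x0 and only weakly on x1 (all rates are invented)
    x = [0.0, 0.0, 0.0]
    out = []
    for _ in range(steps):
        u = 1.0  # constant stimulus ("drug input")
        dx = [u - x[0],
              u - x[1],
              1.0 * x[0] + 0.05 * x[1] - 0.2 * x[2]]
        for i in range(3):
            if i != frozen:       # a frozen state keeps its initial value
                x[i] += dt * dx[i]
        out.append(x[2])          # x2 is the observed output
    return out

reference = simulate()
importance = []
for i in range(3):
    perturbed = simulate(frozen=i)
    rms = (sum((a - b) ** 2 for a, b in zip(reference, perturbed))
           / len(reference)) ** 0.5
    importance.append(rms)
# States whose freezing barely changes the output (here x1) are the
# candidates for elimination in step (i).
```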
We analyze a general class of difference operators Hε=Tε+Vε on ℓ2((εZ)d), where Vε is a multi-well potential and ε is a small parameter. We derive full asymptotic expansions of the prefactor of the exponentially small eigenvalue splitting due to interactions between two “wells” (minima) of the potential energy, i.e., for the discrete tunneling effect. We treat both the case where there is a single minimal geodesic (with respect to the natural Finsler metric induced by the leading symbol h0(x,ξ) of Hε) connecting the two minima and the case where the minimal geodesics form an ℓ+1 dimensional manifold, ℓ≥1. These results on the tunneling problem are as sharp as the classical results for the Schrödinger operator in Helffer and Sjöstrand (Commun PDE 9:337–408, 1984). Technically, our approach is pseudo-differential and we adapt techniques from Helffer and Sjöstrand [Analyse semi-classique pour l’équation de Harper (avec application à l’équation de Schrödinger avec champ magnétique), Mémoires de la S.M.F., 2 series, tome 34, pp 1–113, 1988)] and Helffer and Parisse (Ann Inst Henri Poincaré 60(2):147–187, 1994) to our discrete setting.
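The flavour of such results can be sketched schematically. In the single-geodesic case, semiclassical tunneling asymptotics of the Helffer-Sjöstrand type give an exponentially small eigenvalue splitting of the following shape; the power of ε and the coefficients a_j are placeholders here, and the (ℓ+1)-dimensional geodesic-manifold case modifies the prefactor:

```latex
% Schematic form only; precise exponents and coefficients are in the paper.
E_1(\varepsilon) - E_0(\varepsilon)
  \;\sim\; \varepsilon^{1/2}\, e^{-S_0/\varepsilon}
  \bigl( a_0 + a_1 \varepsilon + a_2 \varepsilon^{2} + \cdots \bigr),
\qquad \varepsilon \downarrow 0
```

Here S0 stands for the distance between the two minima in the natural Finsler metric induced by the leading symbol h0(x,ξ).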
Tomographic Reservoir Imaging with DNA-Labeled Silica Nanotracers: The First Field Validation
(2018)
This study presents the first field validation of using DNA-labeled silica nanoparticles as tracers to image subsurface reservoirs by travel time based tomography. During a field campaign in Switzerland, we performed short-pulse tracer tests under a forced hydraulic head gradient to conduct a multisource-multireceiver tracer test and tomographic inversion, determining the two-dimensional hydraulic conductivity field between two vertical wells. Together with three traditional solute dye tracers, we injected spherical silica nanotracers, encoded with synthetic DNA molecules, which are protected by a silica layer against damage due to chemicals, microorganisms, and enzymes. Temporal moment analyses of the recorded tracer concentration breakthrough curves (BTCs) indicate higher mass recovery, shorter mean residence times, and smaller dispersion of the DNA-labeled nanotracers, compared to solute dye tracers. Importantly, travel time based tomography, using nanotracer BTCs, yields a satisfactory hydraulic conductivity tomogram, validated by the dye tracer results and previous field investigations. These advantages of DNA-labeled nanotracers, in comparison to traditional solute dye tracers, make them well-suited for tomographic reservoir characterizations in fields such as hydrogeology, petroleum engineering, and geothermal energy, particularly with respect to resolving preferential flow paths or the heterogeneity of contact surfaces or by enabling source zone characterizations of dense nonaqueous phase liquids.
Rapid population and economic growth in Southeast Asia has been accompanied by extensive land use change with consequent impacts on catchment hydrology. Modeling methodologies capable of handling changing land use conditions are therefore becoming ever more important and are receiving increasing attention from hydrologists. A recently developed data-assimilation-based framework that allows model parameters to vary through time in response to signals of change in observations is considered for a medium-sized catchment (2880 km(2)) in northern Vietnam experiencing substantial but gradual land cover change. We investigate the efficacy of the method as well as the importance of the chosen model structure in ensuring the success of a time-varying parameter method. The method was used with two lumped daily conceptual models (HBV and HyMOD) that gave good-quality streamflow predictions during pre-change conditions. Although both time-varying parameter models gave improved streamflow predictions under changed conditions compared to the time-invariant parameter model, persistent biases for low flows were apparent in the HyMOD case. It was found that HyMOD was not suited to representing the modified baseflow conditions, resulting in extreme and unrealistic time-varying parameter estimates. This work shows that the chosen model can be critical for ensuring the time-varying parameter framework successfully models streamflow under changing land cover conditions. It can also be used to determine whether land cover changes (and not just meteorological factors) contribute to the observed hydrologic changes in retrospective studies where the lack of a paired control catchment precludes such an assessment.
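The time-varying-parameter idea can be illustrated far more simply than with HBV or HyMOD. In the toy sketch below (all numbers invented; this is not the paper's framework), a linear-reservoir recession coefficient shifts abruptly, standing in for land cover change, and a scalar Kalman filter with a random-walk model for the parameter tracks the shift from streamflow observations alone:

```python
import random

random.seed(1)

# Toy streamflow model: y_t = a_t * y_{t-1} + rain + noise, where the
# recession coefficient a_t changes at t = 200 (invented numbers).
T = 400
true_a = [0.95 if t < 200 else 0.80 for t in range(T)]
rain = 0.5
y = [10.0]
for t in range(1, T):
    y.append(true_a[t] * y[t - 1] + rain + random.gauss(0.0, 0.05))

# Scalar Kalman filter: the observation y_t is linear in the unknown
# parameter a, with regressor h = y_{t-1}; a follows a random walk.
a_hat, P = 0.90, 1.0           # parameter estimate and its variance
Q, R = 1e-4, 0.05 ** 2         # random-walk and observation noise variances
estimates = []
for t in range(1, T):
    P += Q                     # prediction: the parameter may have drifted
    h = y[t - 1]
    innovation = y[t] - (a_hat * h + rain)
    S = h * P * h + R
    K = P * h / S
    a_hat += K * innovation
    P *= 1.0 - K * h
    estimates.append(a_hat)
# estimates[t] recovers ~0.95 before the change and ~0.80 after it.
```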
We analyze a general class of self-adjoint difference operators Hε = Tε + Vε on ℓ2((εZ)d), where Vε is a multi-well potential and ε is a small parameter. We give a coherent review of our results on tunneling, up to new sharp results on the level of complete asymptotic expansions (see [30-35]). Our emphasis is on general ideas and strategy, possibly of interest for a broader range of readers, and less on detailed mathematical proofs. The wells are decoupled by introducing certain Dirichlet operators on regions containing only one potential well. The eigenvalue problem for the Hamiltonian Hε is then treated as a small perturbation of these comparison problems. After constructing a Finslerian distance d induced by Hε, we show that Dirichlet eigenfunctions decay exponentially with a rate controlled by this distance to the well. It follows with microlocal techniques that the first n eigenvalues of Hε converge to the first n eigenvalues of the direct sum of harmonic oscillators on Rd located at the several wells. In a neighborhood of one well, we construct formal asymptotic expansions of WKB-type for eigenfunctions associated with the low-lying eigenvalues of Hε. These are obtained from eigenfunctions or quasimodes for the operator Hε acting on L2(Rd), via restriction to the lattice (εZ)d. Tunneling is then described by a certain interaction matrix, similar to the analysis for the Schrödinger operator (see [22]); the remainder is exponentially small and roughly quadratic compared with the interaction matrix. We give weighted ℓ2-estimates for the difference of eigenfunctions of Dirichlet operators in neighborhoods of the different wells and the associated WKB-expansions at the wells. In the last step, we derive full asymptotic expansions for interactions between two "wells" (minima) of the potential energy, in particular for the discrete tunneling effect.
Here we essentially use analysis on phase space, complexified in the momentum variable. These results are as sharp as the classical results for the Schrödinger operator in [22].
One of the crucial components in seismic hazard analysis is the estimation of the maximum earthquake magnitude and its associated uncertainty. In the present study, the uncertainty related to the maximum expected magnitude μ is determined in terms of confidence intervals for an imposed level of confidence. Previous work by Salamat et al. (Pure Appl Geophys 174:763-777, 2017) shows the divergence of the confidence interval of the maximum possible magnitude m_max for high levels of confidence in six seismotectonic zones of Iran. In this work, the maximum expected earthquake magnitude μ is calculated in a predefined finite time interval and imposed level of confidence. For this, we use a conceptual model based on a doubly truncated Gutenberg-Richter law for magnitudes with constant b-value and calculate the posterior distribution of μ for the future time interval T_f. We assume a stationary Poisson process in time and a Gutenberg-Richter relation for magnitudes. The upper bound of the magnitude confidence interval is calculated for time intervals of 30, 50, and 100 years and imposed levels α = 0.5, 0.1, 0.05, and 0.01. The posterior distributions of the waiting times T_f to the next earthquake with a given magnitude equal to 6.5, 7.0, and 7.5 are calculated in each zone. In order to find the influence of declustering, we use both the original and the declustered version of the catalog. The earthquake catalog of the territory of Iran and its surroundings is subdivided into the six seismotectonic zones Alborz, Azerbaijan, Central Iran, Zagros, Kopet Dagh, and Makran. We assume the maximum possible magnitude m_max = 8.5 and calculate the upper bound of the confidence interval of μ in each zone. The results indicate that for short time intervals of 30 and 50 years and imposed confidence levels 1 - α = 0.95 and 0.90, the probability distribution of μ is concentrated around μ = 7.16-8.23 in all seismic zones.
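The ingredients above (doubly truncated Gutenberg-Richter magnitudes, stationary Poisson occurrence, an upper bound for the largest magnitude in a future interval) can be mimicked by Monte Carlo rather than the paper's analytical posterior. In the sketch below the rate, b-value, and magnitude bounds are invented placeholders, not the fitted values for any Iranian zone:

```python
import math
import random

random.seed(0)

b, m_min, m_max = 1.0, 4.5, 8.5   # invented GR parameters and bounds
rate_per_year = 5.0               # events with m >= m_min per year (invented)
T_f = 50.0                        # future time interval in years
norm = 1.0 - 10.0 ** (-b * (m_max - m_min))

def sample_magnitude():
    # inverse-CDF sampling from the doubly truncated Gutenberg-Richter law
    u = random.random()
    return m_min - math.log10(1.0 - u * norm) / b

def sample_poisson(lam):
    # Knuth's method; adequate for moderate lam
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# simulate many future catalogs and collect the largest magnitude of each
maxima = []
for _ in range(2000):
    n = sample_poisson(rate_per_year * T_f)
    maxima.append(max((sample_magnitude() for _ in range(n)), default=m_min))
maxima.sort()
upper_95 = maxima[int(0.95 * len(maxima))]   # empirical 95% upper bound
```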
Although the detection of metastases radically changes prognosis of and treatment decisions for a cancer patient, clinically undetectable micrometastases hamper a consistent classification into localized or metastatic disease. This chapter discusses mathematical modeling efforts that could help to estimate the metastatic risk in such a situation. We focus on two approaches: (1) a stochastic framework describing metastatic emission events at random times, formalized via Poisson processes, and (2) a deterministic framework describing the micrometastatic state through a size-structured density function in a partial differential equation model. Three aspects are addressed in this chapter. First, a motivation for the Poisson process framework is presented and modeling hypotheses and mechanisms are introduced. Second, we extend the Poisson model to account for secondary metastatic emission. Third, we highlight an inherent crosslink between the stochastic and deterministic frameworks and discuss its implications. For increased accessibility the chapter is split into an informal presentation of the results using a minimum of mathematical formalism and a rigorous mathematical treatment for more theoretically interested readers.
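Framework (1) can be sketched directly: emission times of a homogeneous Poisson process, extended by secondary emission from each metastasis after a lag. The rates, lag, and follow-up horizon below are invented placeholders, not the chapter's fitted values:

```python
import random

random.seed(7)

def poisson_times(rate, start, end):
    """Event times of a homogeneous Poisson process on [start, end)."""
    times, t = [], start
    while True:
        t += random.expovariate(rate)
        if t >= end:
            return times
        times.append(t)

horizon = 10.0   # years of follow-up (invented)
# primary tumour emits metastases at a constant rate
primary = poisson_times(0.8, 0.0, horizon)
# secondary emission: each metastasis starts emitting after a 1-year lag
# (rate and lag are assumptions for illustration)
secondary = []
for birth in primary:
    secondary.extend(poisson_times(0.2, birth + 1.0, horizon))
all_emissions = sorted(primary + secondary)
```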
Left-right (L-R) asymmetry in the body plan is determined by nodal flow in vertebrate embryos. Shinohara et al. (Shinohara K et al. 2012 Nat. Commun. 3, 622 (doi:10.1038/ncomms1624)) used Dpcd and Rfx3 mutant mouse embryos and showed that only a few cilia are sufficient to achieve L-R asymmetry. However, the mechanism underlying the breaking of symmetry by such weak ciliary flow is unclear. Flow-mediated signals associated with L-R asymmetric organogenesis have not been clarified, and two different hypotheses, vesicle transport and mechanosensing, are now debated in the research field of developmental biology. In this study, we developed a computational model of the node system reported by Shinohara et al. and examined the feasibility of the two hypotheses with a small number of cilia. With the small number of rotating cilia, flow was induced only locally, and no global strong flow was observed in the node. Particles were then effectively transported only when they were close to the cilia, and particle transport was strongly dependent on the ciliary positions. Although the maximum wall shear rate was also influenced by ciliary position, the mean wall shear rate at the perinodal wall increased monotonically with the number of cilia. We also investigated the membrane tension of immotile cilia, which is relevant to the regulation of mechanotransduction. The results indicated that a tension of about 0.1 μN m⁻¹ was exerted at the base even when the fluid shear rate was only about 0.1 s⁻¹. The area of high tension was also localized at the upstream side, and negative tension appeared at the downstream side. Such localization may be useful for sensing the flow direction at the periphery, as time-averaged anticlockwise circulation was induced in the node by the rotation of a few cilia. Our numerical results support the mechanosensing hypothesis, and we expect that our study will stimulate further experimental investigations of mechanotransduction in the near future.
The Widom-Rowlinson model (or the area-interaction model) is a Gibbs point process in Rd with the formal Hamiltonian defined as the volume of the union of unit balls ∪x∈ω B1(x), where ω is a locally finite configuration of points and B1(x) denotes the unit closed ball centred at x. The model is also tuned by two other parameters: the activity z > 0, related to the intensity of the process, and the inverse temperature β ≥ 0, related to the strength of the interaction. In the present paper we investigate the phase transition of the model from the point of view of percolation theory and of the liquid-gas transition. First, considering the graph connecting points with distance smaller than 2r > 0, we show that for any β ≥ 0 there exists 0 < z̃c(β, r) < +∞ such that an exponential decay of connectivity at distance n occurs in the subcritical phase (i.e. z < z̃c(β, r)) and a linear lower bound of the connection at infinity holds in the supercritical case (i.e. z > z̃c(β, r)). These results are in the spirit of recent works using the theory of randomised tree algorithms (Probab. Theory Related Fields 173 (2019) 479-490, Ann. of Math. 189 (2019) 75-99, Duminil-Copin, Raoufi and Tassion (2018)). Secondly, we study a standard liquid-gas phase transition related to the uniqueness/non-uniqueness of Gibbs states depending on the parameters z, β. Old results (Phys. Rev. Lett. 27 (1971) 1040-1041, J. Chem. Phys. 52 (1970) 1670-1684) claim that a non-uniqueness regime occurs for z = β large enough, and it is conjectured that uniqueness should hold outside such a half line (z = β ≥ βc > 0). We solve this conjecture partially in any dimension by showing that for β large enough non-uniqueness holds if and only if z = β.
We also show that this critical value z = β corresponds to the percolation threshold z̃c(β, r) = β for β large enough, providing a direct connection between these two notions of phase transition.
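The formal Hamiltonian, the volume of the union of unit balls around the configuration, is easy to estimate numerically. A minimal planar Monte Carlo sketch with an invented three-point configuration:

```python
import random

random.seed(3)

# Estimate the area-interaction Hamiltonian of a planar configuration:
# the area of the union of unit disks centred at the points, by Monte
# Carlo sampling over a bounding box (configuration is invented).
points = [(1.0, 1.0), (1.5, 1.2), (4.0, 4.0)]
lo, hi = -1.0, 6.0                  # box containing all unit disks
box_area = (hi - lo) ** 2

hits = 0
n = 200_000
for _ in range(n):
    x, y = random.uniform(lo, hi), random.uniform(lo, hi)
    if any((x - px) ** 2 + (y - py) ** 2 <= 1.0 for px, py in points):
        hits += 1
union_area = box_area * hits / n    # ~ 7.35 for this configuration
```

The first two disks overlap, so the union area is strictly less than three disjoint disks (3π ≈ 9.42); in the Gibbs density, configurations are reweighted by exp of minus β times this quantity.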
ShapeRotator
(2018)
The quantification of complex morphological patterns typically involves comprehensive shape and size analyses, usually obtained by gathering morphological data from all the structures that capture the phenotypic diversity of an organism or object. Articulated structures are a critical component of overall phenotypic diversity, but data gathered from these structures are difficult to incorporate into modern analyses because of the complexities associated with jointly quantifying 3D shape in multiple structures. While there are existing methods for analyzing shape variation in articulated structures in two-dimensional (2D) space, these methods do not work in 3D, a rapidly growing area of capability and research. Here, we describe a simple geometric rigid rotation approach that removes the effect of random translation and rotation, enabling the morphological analysis of 3D articulated structures. Our method is based on Cartesian coordinates in 3D space, so it can be applied to any morphometric problem that also uses 3D coordinates (e.g., spherical harmonics). We demonstrate the method by applying it to a landmark-based dataset for analyzing shape variation using geometric morphometrics. We have developed an R tool (ShapeRotator) so that the method can be easily implemented in the commonly used R package geomorph and MorphoJ software. This method will be a valuable tool for 3D morphological analyses in articulated structures by allowing an exhaustive examination of shape and size diversity.
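The core idea, removing the effects of translation and rotation before shape analysis, has a closed form in 2D that makes a compact illustration (ShapeRotator itself operates on 3D landmarks; the coordinates below are invented):

```python
import math

def centre(pts):
    """Remove translation by subtracting the centroid."""
    cx = sum(p[0] for p in pts) / len(pts)
    cy = sum(p[1] for p in pts) / len(pts)
    return [(x - cx, y - cy) for x, y in pts]

def align(reference, target):
    """Rigidly rotate centred target landmarks onto the reference."""
    a, b = centre(reference), centre(target)
    # closed-form 2D Procrustes angle minimising squared landmark distances
    num = sum(yr * xt - xr * yt for (xr, yr), (xt, yt) in zip(a, b))
    den = sum(xr * xt + yr * yt for (xr, yr), (xt, yt) in zip(a, b))
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y) for x, y in b]

ref = [(0.0, 0.0), (2.0, 0.0), (1.0, 1.0)]
# the same shape, rotated by 90 degrees and translated by (3, -1)
rot = [(3.0 - y, -1.0 + x) for x, y in ref]
aligned = align(ref, rot)   # recovers the centred reference landmarks
```

The 3D case replaces the closed-form angle with an optimal rotation matrix (e.g. via SVD), but the translation-then-rotation structure is the same.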
Given two weighted graphs (X, bk, mk), k = 1, 2, with b1 ∼ b2 and m1 ∼ m2, we prove a weighted L1-criterion for the existence and completeness of the wave operators W±(H2, H1, I1,2), where Hk denotes the natural Laplacian in ℓ2(X, mk) w.r.t. (X, bk, mk) and I1,2 the trivial identification of ℓ2(X, m1) with ℓ2(X, m2). In particular, this entails a general criterion for the absolutely continuous spectra of H1 and H2 to be equal.
SmB6 is predicted to be the first member of the intersection of topological insulators and Kondo insulators, strongly correlated materials in which the Fermi level lies in the gap of a many-body resonance that forms by hybridization between localized and itinerant states. While robust, surface-only conductivity at low temperature and the observation of surface states at the expected high-symmetry points appear to confirm this prediction, we find both surface states at the (100) surface to be topologically trivial. We find the Γ̄ state to appear Rashba split and explain the prominent X̄ state by a surface shift of the many-body resonance. We propose that the latter mechanism, which applies to several crystal terminations, can explain the unusual surface conductivity. While additional, as yet unobserved topological surface states cannot be excluded, our results show that a firm connection between the two material classes is still outstanding.
Local observations indicate that climate change and shifting disturbance regimes are causing permafrost degradation. However, the occurrence and distribution of permafrost region disturbances (PRDs) remain poorly resolved across the Arctic and Subarctic. Here we quantify the abundance and distribution of three primary PRDs using time-series analysis of 30-m resolution Landsat imagery from 1999 to 2014. Our dataset spans four continental-scale transects in North America and Eurasia, covering ~10% of the permafrost region. Lake area loss (-1.45%) dominated the study domain, with enhanced losses occurring at the boundary between the discontinuous and continuous permafrost regions. Fires were the most extensive PRD across boreal regions (6.59%) but in tundra regions (0.63%) were limited to Alaska. Retrogressive thaw slumps were abundant but highly localized (<10⁻⁵ %). Our analysis demonstrates the global-scale importance of PRDs. The findings highlight the need to include PRDs in next-generation land surface models to project the permafrost carbon feedback.
A doppelalgebra is an algebra defined on a vector space with two binary linear associative operations. Doppelalgebras play a prominent role in algebraic K-theory. We consider doppelsemigroups, that is, sets with two binary associative operations satisfying the axioms of a doppelalgebra. Doppelsemigroups are a generalization of semigroups and they have relationships with such algebraic structures as interassociative semigroups, restrictive bisemigroups, dimonoids, and trioids.
In these lecture notes, numerous examples of doppelsemigroups and of strong doppelsemigroups are given. The independence of the axioms of a strong doppelsemigroup is established. A free product in the variety of doppelsemigroups is presented. We also construct a free (strong) doppelsemigroup, a free commutative (strong) doppelsemigroup, a free n-nilpotent (strong) doppelsemigroup, a free n-dinilpotent (strong) doppelsemigroup, and a free left n-dinilpotent doppelsemigroup. Moreover, the least commutative congruence, the least n-nilpotent congruence, and the least n-dinilpotent congruence on a free (strong) doppelsemigroup, as well as the least left n-dinilpotent congruence on a free doppelsemigroup, are characterized.
The book is addressed to graduate students, postgraduate students, researchers in algebra, and interested readers.
Transition metals in inorganic systems and metalloproteins can occur in different oxidation states, which makes them ideal redox-active catalysts. To gain a mechanistic understanding of catalytic reactions, knowledge of the oxidation state of the active metals, ideally in operando, is therefore critical. L-edge X-ray absorption spectroscopy (XAS) is a powerful technique that is frequently used to infer the oxidation state via a distinct blue shift of L-edge absorption energies with increasing oxidation state. A unified description has been missing to date that accounts for the quantum-chemical notion that oxidation does not occur locally on the metal but on the whole molecule, and for the basic understanding that L-edge XAS probes the electronic structure locally at the metal. Here we quantify how charge and spin densities change at the metal and throughout the molecule for both redox and core-excitation processes. We explain the origin of the L-edge XAS shift between the high-spin complexes Mn(II)(acac)2 and Mn(III)(acac)3 as representative model systems and use ab initio theory to uncouple effects of oxidation-state changes from geometric effects. The shift reflects an increased electron affinity of Mn(III) in the core-excited states compared to the ground state, due to a contraction of the Mn 3d shell upon core excitation with accompanying changes in the classical Coulomb interactions. This new picture quantifies how the metal-centered core hole probes changes in formal oxidation state, and it encompasses and substantiates earlier explanations. The approach is broadly applicable to mechanistic studies of redox-catalytic reactions in molecular systems where charge and spin localization/delocalization determine reaction pathways.
Background and objective: Optimisation of hydrocortisone replacement therapy in children is challenging, as there is currently no licensed formulation and dose in Europe for children under 6 years of age. In addition, hydrocortisone has non-linear pharmacokinetics caused by saturable plasma protein binding. A paediatric hydrocortisone formulation, Infacort® oral hydrocortisone granules with taste masking, has therefore been developed. The objective of this study was to establish a population pharmacokinetic model based on studies in healthy adult volunteers to predict hydrocortisone exposure in paediatric patients with adrenal insufficiency. Methods: Cortisol and binding protein concentrations were evaluated in the absence and presence of dexamethasone in healthy volunteers (n = 30). Dexamethasone was used to suppress endogenous cortisol concentrations prior to and after single doses of 0.5, 2, 5 and 10 mg of Infacort® or 20 mg of Infacort®/hydrocortisone tablet/hydrocortisone intravenously. A plasma protein binding model was established using unbound and total cortisol concentrations, and sequentially integrated into the pharmacokinetic model. Results: Both specific (non-linear) and non-specific (linear) protein binding were included in the cortisol binding model. A two-compartment disposition model with saturable absorption and a constant endogenous cortisol baseline (Baseline_cort = 15.5 nmol/L) described the data accurately. The predicted cortisol exposure for a given dose varied considerably within a small body weight range in individuals weighing < 20 kg. Conclusions: Our semi-mechanistic population pharmacokinetic model for hydrocortisone captures the complex pharmacokinetics of hydrocortisone in a simplified but comprehensive framework. The predicted cortisol exposure indicated the importance of defining an accurate hydrocortisone dose to mimic physiological concentrations for neonates and infants weighing < 20 kg.
We consider a distributed learning approach in supervised learning for a large class of spectral regularization methods in a reproducing kernel Hilbert space (RKHS) framework. The data set of size n is partitioned into m = O(n^α), α < 1/2, disjoint subsamples. On each subsample, some spectral regularization method (belonging to a large class including, in particular, kernel ridge regression, L2-boosting and spectral cut-off) is applied. The regression function f is then estimated via simple averaging, leading to a substantial reduction in computation time. We show that minimax optimal rates of convergence are preserved if m grows sufficiently slowly (corresponding to an upper bound for α) as n → ∞, depending on the smoothness assumptions on f and the intrinsic dimensionality. In spirit, the analysis relies on a classical bias/stochastic error analysis.
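The divide-and-average scheme described in the abstract can be sketched in a few lines. The following is a hedged illustration only, not the paper's implementation: the Gaussian kernel, bandwidth, regularization parameter, and test function are arbitrary choices made for the example.

```python
import numpy as np

def krr_fit(X, y, lam, gamma=50.0):
    # Kernel ridge regression on one subsample (Gaussian kernel):
    # solve (K + n*lam*I) alpha = y
    K = np.exp(-gamma * (X[:, None] - X[None, :]) ** 2)
    alpha = np.linalg.solve(K + len(X) * lam * np.eye(len(X)), y)
    return X, alpha, gamma

def krr_predict(model, x):
    Xtr, alpha, gamma = model
    return np.exp(-gamma * (x[:, None] - Xtr[None, :]) ** 2) @ alpha

rng = np.random.default_rng(0)
n, m = 1200, 4                          # m = O(n^alpha) disjoint subsamples
X = rng.uniform(0.0, 1.0, n)
f = lambda t: np.sin(2 * np.pi * t)     # regression function (illustrative)
y = f(X) + 0.3 * rng.standard_normal(n)

# fit one estimator per disjoint subsample, then estimate f by averaging
parts = np.array_split(rng.permutation(n), m)
models = [krr_fit(X[idx], y[idx], lam=1e-3) for idx in parts]
xg = np.linspace(0.0, 1.0, 200)
f_bar = np.mean([krr_predict(mo, xg) for mo in models], axis=0)

mse = float(np.mean((f_bar - f(xg)) ** 2))
```

Each kernel solve now costs O((n/m)³) instead of O(n³), which is the computational gain; the paper's result gives conditions on how fast m may grow with n so that the averaged estimator still attains the minimax rate.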
We consider a statistical inverse learning (also called inverse regression) problem, where we observe the image of a function f through a linear operator A at i.i.d. random design points X_i, superposed with an additive noise. The distribution of the design points is unknown and can be very general. We analyze simultaneously the direct (estimation of Af) and the inverse (estimation of f) learning problems. In this general framework, we obtain strong and weak minimax optimal rates of convergence (as the number of observations n grows large) for a large class of spectral regularization methods over regularity classes defined through appropriate source conditions. This improves on or completes previous results obtained in related settings. The optimality of the obtained rates is shown not only in the exponent in n but also in the explicit dependence of the constant factor on the variance of the noise and the radius of the source condition set.
For a given subcritical discrete Schrödinger operator H on a weighted infinite graph X, we construct a Hardy weight w which is optimal in the following sense: the operator H - λw is subcritical in X for all λ < 1, null-critical in X for λ = 1, and supercritical near any neighborhood of infinity in X for any λ > 1. Our results rely on a criticality theory for Schrödinger operators on general weighted graphs.
For linear inverse problems Y = Aμ + ζ, it is classical to recover the unknown signal μ by iterative regularization methods (μ̂^(m), m = 0, 1, ...) and halt at a data-dependent iteration τ using some stopping rule, typically based on a discrepancy principle, so that the weak (or prediction) squared error ‖A(μ̂^(τ) - μ)‖² is controlled. In the context of statistical estimation with stochastic noise ζ, we study oracle adaptation (that is, compared to the best possible stopping iteration) in the strong squared error E[‖μ̂^(τ) - μ‖²]. For a residual-based stopping rule, oracle adaptation bounds are established for general spectral regularization methods. The proofs use bias and variance transfer techniques from the weak prediction error to the strong L²-error, as well as convexity arguments and concentration bounds for the stochastic part. Adaptive early stopping for the Landweber method is studied in further detail and illustrated numerically.
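For intuition, here is a minimal sketch of residual-based early stopping for Landweber iteration on a toy diagonal operator. The operator, signal, and noise level are invented for illustration, and the critical value κ = √d·δ merely mimics a discrepancy principle; none of this reproduces the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(1)
d, delta = 100, 0.01
s = 1.0 / np.arange(1, d + 1)            # singular values of A (ill-posed)
A = np.diag(s)
mu = 1.0 / np.arange(1, d + 1) ** 1.5    # unknown smooth signal
Y = A @ mu + delta * rng.standard_normal(d)

# Landweber iteration mu_{m+1} = mu_m + A^T (Y - A mu_m), halted at the
# first iteration tau where the residual drops below kappa ~ sqrt(d)*delta
kappa = np.sqrt(d) * delta
mu_hat, tau = np.zeros(d), 0
while np.linalg.norm(Y - A @ mu_hat) > kappa:
    mu_hat = mu_hat + A.T @ (Y - A @ mu_hat)
    tau += 1
```

The stopping rule monitors only the weak (prediction) residual, yet the strong error ‖μ̂^(τ) - μ‖ ends up far below that of the trivial estimator μ̂ = 0; quantifying how close the data-driven τ comes to the oracle iteration is exactly the adaptation question studied in the paper.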
The variability of the semidiurnal solar and lunar tides of the equatorial electrojet (EEJ) is investigated during the 2003, 2006, 2009 and 2013 major sudden stratospheric warming (SSW) events. For this purpose, ground-magnetometer recordings at the equatorial observatories in Huancayo and Fuquene are utilized. Results show a major enhancement in the amplitude of the EEJ semidiurnal lunar tide in each of the four warming events. The EEJ semidiurnal solar tidal amplitude shows an amplification prior to the onset of the warmings, a reduction during the deceleration of the zonal mean zonal wind at 60°N and 10 hPa, and a second enhancement a few days after the peak reversal of the zonal mean zonal wind during all four SSWs. Results also reveal that the amplitude of the EEJ semidiurnal lunar tide becomes comparable to or even greater than the amplitude of the EEJ semidiurnal solar tide during all these warming events. The present study also compares the EEJ semidiurnal solar and lunar tidal changes with the variability of the migrating semidiurnal solar (SW2) and lunar (M2) tides in neutral temperature and zonal wind obtained from numerical simulations at E-region heights. Better agreement is found between the enhancements of the EEJ semidiurnal lunar tide and the M2 tide than between the enhancements of the EEJ semidiurnal solar tide and the SW2 tide, in both neutral temperature and zonal wind at E-region altitudes.
Uniformly valid confidence intervals post model selection in regression can be constructed based on Post-Selection Inference (PoSI) constants. PoSI constants are minimal for orthogonal design matrices and, for generic design matrices, can be upper bounded as a function of the sparsity of the set of models under consideration. In order to improve on these generic sparse upper bounds, we consider design matrices satisfying a Restricted Isometry Property (RIP) condition. We provide a new upper bound on the PoSI constant in this setting. This upper bound is an explicit function of the RIP constant of the design matrix, thereby giving an interpolation between the orthogonal setting and the generic sparse setting. We show that this upper bound is asymptotically optimal in many settings by constructing a matching lower bound.
In this thesis, we discuss the characterization of orthogroups by so-called disjunctions of identities. The orthogroups form a subclass of the class of completely regular semigroups, a generalization of the concept of a group. Thus every element of an orthogroup has some kind of inverse element with which it commutes. By a fundamental result of A.H. Clifford, every completely regular semigroup is a semilattice of completely simple semigroups. This allows a description of the gross structure of such a semigroup. In particular, every orthogroup is a semilattice of rectangular groups, which are isomorphic to direct products of rectangular bands and groups. Semilattices of rectangular groups coming from various classes are characterized using the concept of an alternative variety, a generalization of the classical idea of a variety in the sense of Birkhoff.
After starting with some fundamental definitions and results concerning semigroups, we introduce the concept of disjunctions of identities and summarize some necessary properties. In particular, we present a disjunction of identities which is sufficient for a semigroup to be completely regular. Furthermore, we derive from this identity some statements concerning Rees matrix semigroups, a possible representation of completely simple semigroups. A main result of this thesis is the general description of disjunctions of identities such that a completely regular semigroup satisfying the described identity is a semilattice of left groups (right groups / groups). In this case the completely regular semigroup is an orthogroup. Furthermore, we define various classes of rectangular groups such that there is an exponent taken from a set of pairwise coprime positive integers. An important result is the characterization of the class of all semilattices of particular rectangular groups (taken from the classes defined before) using a set-theoretically minimal set of disjunctions of identities. Additionally, we investigate semilattices of groups (so-called Clifford semigroups). For this purpose we consider abelian groups of particular exponents and prove some well-known results from the theory of Clifford semigroups in an alternative way by applying the concept of disjunctions of identities. As a practical application of the results concerning semilattices of left zero semigroups and right zero semigroups, we identify a particular transformation semigroup. For more detailed information about the product of two arbitrary elements of a semilattice of semigroups, we introduce the concept of strong semilattices of semigroups. It is well known that a semilattice of groups is a strong semilattice of groups. We can therefore characterize a strong semilattice of groups of particular pairwise coprime exponents by disjunctions of identities.
Additionally, we describe the class of all strong semilattices of left zero semigroups and right zero semigroups with the help of such identities, and we relate this statement to the theory of normal bands. A possible extension of the semilattices of rectangular groups described so far can be achieved by an auxiliary total order (in terms of chains of semigroups). To this end we present a corresponding characterization by disjunctions of identities which is evidently minimal. A list of open questions which arose during the research for this thesis but remain unresolved is attached.
We study the Ollivier-Ricci curvature of graphs as a function of the chosen idleness. We show that this idleness function is concave and piecewise linear with at most three linear parts, and at most two linear parts in the case of a regular graph. We then apply our result to show that the idleness function of the Cartesian product of two regular graphs is completely determined by the idleness functions of the factors.
Earthquake rates are driven by tectonic stress buildup, earthquake-induced stress changes, and transient aseismic processes. Although the origin of the first two sources is known, transient aseismic processes are more difficult to detect. Knowledge of the associated changes in earthquake activity is nevertheless of great interest, because it might help identify natural aseismic deformation patterns such as slow-slip events, as well as the occurrence of induced seismicity related to human activities. For this goal, we develop a Bayesian approach to identify change-points in seismicity data automatically. Using the Bayes factor, we select a suitable model and estimate possible change-points, and we additionally use a likelihood ratio test to calculate the significance of the change in intensity. The approach is extended to spatiotemporal data to detect the areas in which the changes occur. The method is first applied to synthetic data, demonstrating its capability to detect real change-points. Finally, we apply the approach to observational data from Oklahoma and observe statistically significant changes of seismicity in space and time.
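The model-selection step can be illustrated on a toy Poisson count series. The rates, priors, and window counts below are invented for the example, and this is a generic conjugate-prior sketch rather than the paper's method: with Gamma priors on the Poisson rate, the marginal likelihoods entering the Bayes factor have a closed form.

```python
import numpy as np
from math import lgamma, log

def log_ml(counts, a=1.0, b=1.0):
    # log marginal likelihood of i.i.d. Poisson counts under a
    # Gamma(a, b) prior on the rate (closed form by conjugacy)
    n, s = len(counts), int(counts.sum())
    return (a * log(b) - lgamma(a) + lgamma(a + s)
            - (a + s) * log(b + n)
            - sum(lgamma(int(c) + 1) for c in counts))

rng = np.random.default_rng(2)
counts = np.concatenate([rng.poisson(2.0, 50), rng.poisson(8.0, 50)])

# Bayes factor: single change-point model (uniform prior on its position)
# versus constant-rate model
lm0 = log_ml(counts)
lms = np.array([log_ml(counts[:k]) + log_ml(counts[k:])
                for k in range(1, len(counts))])
k_hat = 1 + int(np.argmax(lms))                  # MAP change-point
log_bf = np.logaddexp.reduce(lms) - np.log(len(lms)) - lm0

# likelihood ratio statistic for the significance of the intensity change
n1, n2 = k_hat, len(counts) - k_hat
s1, s2 = counts[:k_hat].sum(), counts[k_hat:].sum()
lr = 2 * (s1 * np.log(s1 / n1) + s2 * np.log(s2 / n2)
          - (s1 + s2) * np.log(counts.mean()))
```

A positive log_bf favors the change-point model, and lr is compared against a χ² threshold (e.g. 3.84 at the 5% level for one degree of freedom); the paper's approach additionally extends such a scheme to spatiotemporal seismicity data.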
We consider composite-composite testing problems for the expectation in the Gaussian sequence model, where the null hypothesis corresponds to a closed convex subset C of R^d. We adopt a minimax point of view, and our primary objective is to describe the smallest Euclidean distance between the null and alternative hypotheses such that there is a test with small total error probability. In particular, we focus on the dependence of this distance on the dimension d and the variance 1/n, giving rise to the minimax separation rate. In this paper we discuss lower and upper bounds on this rate for different smooth and non-smooth choices of C.
For many years, "new media" was the code word for computers, which were supposed to make their way into classroom teaching, at least if their advocates had their way. Resistance, especially in primary schools, was strong and varied. It is understandable that shortly after the playful introduction to education in kindergarten, at a time when pupils must also practice social interaction and acquire fine and gross motor skills, sitting alone in front of a screen is not a top priority, and in our opinion it should not be. In recent years, however, the notion of new media has changed, and with the "digitalization" not only of school teaching but of life as a whole, what used to be associated with it has become a linchpin of education. Instead of bulky computers with monitors, whose very arrangement in computer rooms steered collaboration in the wrong direction, mobile devices in the hands of pupils have taken over. Pupils can now work together on one device, interact directly with the on-screen content, use the cameras, microphones, and sensors to capture and process authentic data, work with the devices outside the classroom or the school, and now carry virtually the entire knowledge of the internet with them at almost all times. The focus of this volume is therefore on working with tablets and the "apps" running on them in mathematics teaching. Five contributions make concrete teaching proposals that can serve as blueprints for app-supported lessons. The volume is complemented by a general guide, with examples, for evaluating apps for mathematics teaching.
We present a project combining lidar, photometer and particle counter data with a regularization software tool for a closure study of aerosol microphysical property retrieval. In a first step only lidar data are used to retrieve the particle size distribution (PSD). Secondly, photometer data are added, which results in a good consistency of the retrieved PSDs. Finally, those retrieved PSDs may be compared with the measured PSD from a particle counter. The data here were taken in Ny-Ålesund, Svalbard, as an example.
If (T_t) is a semigroup of Markov operators on an L¹-space that admits a nontrivial lower bound, then a well-known theorem of Lasota and Yorke asserts that the semigroup is strongly convergent as t → ∞. In this article we generalize and improve this result in several respects. First, we give a new and very simple proof of the fact that the same conclusion also holds if the semigroup is merely assumed to be bounded instead of Markov. As a main result, we then prove a version of this theorem for semigroups which only admit certain individual lower bounds. Moreover, we generalize a theorem of Ding on semigroups of Frobenius-Perron operators. We also demonstrate how our results can be adapted to the setting of general Banach lattices, and we give some counterexamples to show the optimality of our results. Our methods combine rather concrete estimates and approximation arguments with abstract functional-analytic tools. One of these tools is a theorem which relates the convergence of a time-continuous operator semigroup to the convergence of embedded discrete semigroups.
The ensemble Kalman filter has become a popular data assimilation technique in the geosciences. However, little is known theoretically about its long-term stability and accuracy. In this paper, we investigate the behavior of an ensemble Kalman-Bucy filter applied to continuous-time filtering problems. We derive mean-field limiting equations as the ensemble size goes to infinity, as well as uniform-in-time accuracy and stability results for finite ensemble sizes. The latter results require that the process is fully observed and that the measurement noise is small. We also demonstrate that our ensemble Kalman-Bucy filter is consistent with the classic Kalman-Bucy filter for linear systems and Gaussian processes. We finally verify our theoretical findings for the Lorenz-63 system.
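A hedged one-dimensional sketch shows the structure of a continuous-time ensemble Kalman-Bucy update. The Ornstein-Uhlenbeck signal, the noise levels, and the deterministic (Bergemann-Reich-type) form of the update are illustrative choices, not the paper's exact setting.

```python
import numpy as np

rng = np.random.default_rng(3)
dt, steps, N = 0.01, 2000, 100      # time step, number of steps, ensemble size
sig, R = 0.5, 0.01                  # model noise intensity, obs noise intensity

x = 1.0                             # true state: dx = -x dt + sig dW
ens = rng.standard_normal(N)        # initial ensemble
sq_err = []
for _ in range(steps):
    x += -x * dt + sig * np.sqrt(dt) * rng.standard_normal()
    dY = x * dt + np.sqrt(R * dt) * rng.standard_normal()   # obs increment
    m, C = ens.mean(), ens.var()
    # deterministic ensemble Kalman-Bucy update: Kalman gain C/R applied to
    # the innovation, with the ensemble mean entering symmetrically
    ens = (ens - ens * dt + sig * np.sqrt(dt) * rng.standard_normal(N)
           + (C / R) * (dY - 0.5 * (ens + m) * dt))
    sq_err.append((ens.mean() - x) ** 2)

rmse = float(np.sqrt(np.mean(sq_err[steps // 2:])))
```

As N → ∞, the empirical mean and covariance obey mean-field (Riccati-type) equations, which is one way to see the consistency with the classical Kalman-Bucy filter in this linear Gaussian case.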
Background/Aims: Angiogenesis plays a key role during embryonic development. The vascular endothelin (ET) system is involved in the regulation of angiogenesis, and lipopolysaccharides (LPS) can induce angiogenesis. The effects of ET blockers on baseline and LPS-stimulated angiogenesis during embryonic development have so far remained unknown. Methods: The blood vessel density (BVD) of chorioallantoic membranes (CAMs) treated with saline (control), LPS, and/or the ETA receptor blocker BQ123 and the ETB receptor blocker BQ788 was quantified and analyzed using the IPP 6.0 image analysis program. Moreover, the expression of ET-1, ET-2, ET-3, ET receptor A (ETRA), ET receptor B (ETRB), and VEGFR2 mRNA during embryogenesis was analyzed by semi-quantitative RT-PCR. Results: All components of the ET system are detectable during chicken embryogenesis. LPS increased angiogenesis substantially. This process was completely blocked by treatment with a combination of the ETA receptor blocker BQ123 and the ETB receptor blocker BQ788. This effect was accompanied by a decrease in ETRA, ETRB, and VEGFR2 gene expression. Baseline angiogenesis, however, was not affected by combined ETA/ETB receptor blockade. Conclusion: During chicken embryogenesis, LPS-stimulated angiogenesis, but not baseline angiogenesis, is sensitive to combined ETA/ETB receptor blockade.
In dual IT education, which combines vocational and academic qualification, the tools typical of the profession, such as laptops, are also used in the teaching and learning processes of the academic course units. For examinations, however, classical paper-based exams are often used. Course units with a high blended-learning share but no e-examination are then perceived as "inconsistent". This article presents an empirical study investigating which influences from instructors' personal learning biographies in dual IT education can lead them to accept or reject e-assessment as a summative module examination. In the study presented, interviews were conducted with lecturers and analyzed with respect to the connection between learning biography, the didactic design of teaching and learning processes, satisfaction, and willingness to change.
In addition to theoretical foundations and programming skills, a computer science degree also deliberately teaches how modern software is developed in practice. A form of project work is often chosen to give students experiences that are as realistic as possible. The students develop software products for selected problems, individually or in small teams. Besides subject content, group-dynamic processes also bring generic competencies into focus. This contribution presents an interview study with instructors of software project labs at RWTH Aachen and concentrates on the design of the courses as well as the promotion of generic competencies according to a competency profile for software engineers.
Early mathematical education
(2018)
This contribution presents current research trends in early mathematical education in the context of recently formulated target dimensions for early mathematical education (see Benz et al., 2017). It addresses play-based intervention measures, competencies in the area of "space and shape", the influence of language parameters on the development of mathematical competencies, and the mathematics-related competencies of early-childhood educators. In addition, the results of a recent field study on fostering early mathematical competencies (see Dillon, Kannan, Dean, Spelke & Duflo, 2017) are presented. Finally, the development and implementation of coherent, connectable educational concepts is discussed as one of the central challenges for future research and education efforts.
Information on structural features of a fracture network at early stages of Enhanced Geothermal System development is mostly restricted to borehole images and, if available, outcrop data. However, using this information to image discontinuities in deep reservoirs is difficult. Wellbore failure data provides only some information on components of the in situ stress state and its heterogeneity. Our working hypothesis is that slip on natural fractures primarily controls these stress heterogeneities. Based on this, we introduce stress-based tomography in a Bayesian framework to characterize the fracture network and its heterogeneity in potential Enhanced Geothermal System reservoirs. In this procedure, first a random initial discrete fracture network (DFN) realization is generated based on prior information about the network. The observations needed to calibrate the DFN are based on local variations of the orientation and magnitude of at least one principal stress component along boreholes. A Markov Chain Monte Carlo sequence is employed to update the DFN iteratively by a fracture translation within the domain. The Markov sequence compares the simulated stress profile with the observed stress profiles in the borehole, evaluates each iteration with Metropolis-Hastings acceptance criteria, and stores acceptable DFN realizations in an ensemble. Finally, this obtained ensemble is used to visualize the potential occurrence of fractures in a probability map, indicating possible fracture locations and lengths. We test this methodology to reconstruct simple synthetic and more complex outcrop-based fracture networks and successfully image the significant fractures in the domain.
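A heavily simplified one-dimensional toy illustrates the Metropolis-Hastings loop over fracture positions. Everything here is invented for illustration: a single fracture instead of a network, a Gaussian-anomaly forward model standing in for the geomechanical stress solver, and an initialization drawn from the observed anomaly as a stand-in for prior information.

```python
import numpy as np

rng = np.random.default_rng(4)
grid = np.linspace(0.0, 10.0, 200)     # coordinate along the borehole
NOISE = 0.1

def stress(frac):
    # toy forward model: a fracture at position `frac` perturbs the
    # background stress with a localized anomaly (stand-in for a solver)
    return 1.0 + 0.5 * np.exp(-((grid - frac) ** 2) / 0.1)

true_frac = 2.5
obs = stress(true_frac) + NOISE * rng.standard_normal(grid.size)

def log_lik(frac):
    r = stress(frac) - obs
    return -0.5 * np.sum(r ** 2) / NOISE ** 2

# initial realization from "prior information": start near the largest
# observed stress anomaly, then update by random fracture translations
frac = float(grid[np.argmax(obs)])
ll = log_lik(frac)
ensemble = []
for it in range(4000):
    prop = float(np.clip(frac + 0.1 * rng.standard_normal(), 0.0, 10.0))
    ll_prop = log_lik(prop)
    if np.log(rng.uniform()) < ll_prop - ll:   # Metropolis-Hastings step
        frac, ll = prop, ll_prop
    if it >= 2000:
        ensemble.append(frac)                  # accepted realizations

post_mean = float(np.mean(ensemble))
```

In the paper's setting, the state is a whole discrete fracture network, the likelihood compares simulated with observed stress profiles along boreholes, and the accepted realizations are summarized as a probability map of fracture locations and lengths.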
This article presents the results of an exploratory data analysis of student performance on exam and homework problems in an introductory course on theoretical computer science. Since there is little empirical research on the problems students face in these introductory courses, while failure rates in them are very high, this analysis is intended to provide an overview. The results show that all students, regardless of their exam grade, perform worst on those exam and homework problems that require formal proofs. This result strengthens the conjecture that didactic approaches and measures should focus in particular on the learning of formal proof methods in order to support computer science students more sustainably in succeeding in theoretical computer science.
Genetic and environmental factors both contribute to cognitive test performance. A substantial increase in average intelligence test results in the second half of the previous century within one generation is unlikely to be explained by genetic changes. One possible explanation for the strong malleability of cognitive performance measures is that environmental factors modify gene expression via epigenetic mechanisms. Epigenetic factors may help to understand the recent observations of an association between dopamine-dependent encoding of reward prediction errors and cognitive capacity, which was modulated by adverse life events. The possible manifestation of malleable biomarkers contributing to variance in cognitive test performance, and thus possibly contributing to the "missing heritability" between estimates from twin studies and variance explained by genetic markers, is still unclear. Here we show in 1475 healthy adolescents from the IMaging and GENetics (IMAGEN) sample that general IQ (gIQ) is associated with (1) polygenic scores for intelligence, (2) epigenetic modification of the DRD2 gene, (3) gray matter density in the striatum, and (4) functional striatal activation elicited by temporarily surprising reward-predicting cues. Comparing their relative importance for the prediction of gIQ in an overlapping subsample, our results demonstrate neurobiological correlates of the malleability of gIQ and point to the equal importance of genetic variance, epigenetic modification of the DRD2 receptor gene, and functional striatal activation, which is known to influence dopamine neurotransmission. Peripheral epigenetic markers need confirmation in the central nervous system and should be tested in longitudinal settings that specifically assess individual and environmental factors modifying epigenetic structure.
Empirical investigations of cloze items on mastery of the syntax of a programming language
(2018)
Cloze items based on program code can be used to test knowledge of a programming language's syntax without posing complex programming tasks whose solution requires further competencies. This contribution documents the use of ten such items in a first-semester university lecture on programming with Java. Both experiences with the construction of the items and empirical data from their use are discussed. The contribution thereby highlights in particular the challenges of constructing valid instruments for competency measurement in programming education. The limited and partly preliminary results on the quality of the generated items nevertheless suggest that creating and using such items is feasible and can contribute to competency measurement.
In the paper by Flad and Harutyunyan (Discrete Contin. Dyn. Syst., 420-429, 2011) it is shown that the Hamiltonian of the helium atom in the Born-Oppenheimer approximation is, in the case where two particles coincide, an edge-degenerate operator which is elliptic in the corresponding edge calculus. The aim of this paper is an analogous investigation in the case where all three particles coincide. More precisely, we show that the Hamiltonian in this case is a corner-degenerate operator which is elliptic as an operator in the corner analysis.
To be successful as a computer scientist when entering the profession, it is often not enough to possess only isolated knowledge of technical and theoretical foundations, programming languages, tools, and self- and time-management. Rather, graduates should be able to apply this knowledge in practice in an interlinked way. At university, students are unfortunately rarely offered the opportunity to practice these different areas of computer science in an integrated fashion. To this end, we have been developing and implementing a teaching and learning concept to support practical software development courses for more than two decades. We thereby offer prospective software developers and project managers an environment in which they can acquire new, practically relevant knowledge, test themselves in practice, and apply their knowledge concretely. We place particular emphasis on working in teams. The concept presented here can be transferred to similar courses and, thanks to its modularization, can be modified and extended.