New porous materials based on covalently connected monomers are presented. The key step of the synthesis is an acetalisation reaction. In previous years we used acetalisation reactions extensively to build up various molecular rods. Building on this approach, we investigated porous polymeric materials. Here we present the results of these studies on the synthesis of 1D polyacetals and porous 3D polyacetals. Scrambling experiments with 1D acetals proved that exchange reactions occur between different building blocks, as evidenced by MALDI-TOF mass spectrometry. Based on these results we synthesized porous 3D polyacetals under the same mild conditions.
Flooding is an imminent natural hazard threatening most river deltas, e.g. the Mekong Delta. Appropriate flood management is thus required for the sustainable development of these often densely populated regions. Recently, the traditional event-based hazard control shifted towards a risk management approach in many regions, driven by intensive research leading to new legal regulations on flood management. However, a large-scale flood risk assessment does not exist for the Mekong Delta. In particular, an assessment of the flood risk to paddy rice cultivation, the most important economic activity in the delta, has not been performed yet. The present study was therefore developed to provide the very first insight into delta-scale flood damages and risks to rice cultivation. The flood hazard was quantified by probabilistic flood hazard maps of the whole delta using bivariate extreme value statistics, synthetic flood hydrographs, and a large-scale hydraulic model. The flood risk to paddy rice was then quantified considering cropping calendars, rice phenology, and harvest times based on a time series of the enhanced vegetation index (EVI) derived from MODIS satellite data, and a published rice flood damage function. The proposed concept provided flood risk maps to paddy rice for the Mekong Delta in terms of expected annual damage. Owing to its generic approach, the presented concept can be used as a blueprint for regions facing similar problems. Furthermore, the changes in flood risk to paddy rice caused by the land-use changes currently under discussion in the Mekong Delta were estimated. Two land-use scenarios, either intensifying or reducing rice cropping, were considered, and the changes in risk were presented in spatially explicit flood risk maps. The basic risk maps could serve as guidance for the authorities to develop spatially explicit flood management and mitigation plans for the delta. The land-use change risk maps could further be used for adaptive risk management plans and as a basis for a cost-benefit analysis of the discussed land-use change scenarios. Additionally, the damage and risk maps may support the recently initiated agricultural insurance programme in Vietnam.
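The risk maps express risk as expected annual damage (EAD), i.e. event damage integrated over annual exceedance probability. A minimal sketch of that integration, with purely hypothetical return periods and damage values (not numbers from the study):

```python
import numpy as np

# Hypothetical return periods and per-event damages; EAD is the integral
# of damage over annual exceedance probability.
return_periods = np.array([2.0, 5.0, 10.0, 25.0, 50.0, 100.0])   # years
damages = np.array([0.5, 2.0, 4.5, 8.0, 11.0, 14.0])             # e.g. million USD

prob = 1.0 / return_periods                  # annual exceedance probabilities
order = np.argsort(prob)                     # integrate over increasing probability
ead = np.trapz(damages[order], prob[order])
print(f"expected annual damage: {ead:.2f} million USD per year")
```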
Modern single-particle tracking techniques produce extensive time series of diffusive motion in a wide variety of systems, from single-molecule motion in living cells to movement ecology. The quest is to decipher the physical mechanisms encoded in the data and thus to better understand the probed systems. We here augment recently proposed machine-learning techniques for decoding anomalous-diffusion data to include an uncertainty estimate in addition to the predicted output. To avoid the black box problem, a Bayesian deep learning technique named Stochastic Weight Averaging-Gaussian (SWAG) is used to train models for both the classification of the diffusion model and the regression of the anomalous diffusion exponent of single-particle trajectories. Evaluating their performance, we find that these models can achieve a well-calibrated error estimate while maintaining high prediction accuracies. In the analysis of the output uncertainty predictions we relate these to properties of the underlying diffusion models, thus providing insights into the learning process of the machine and the relevance of the output.
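For readers unfamiliar with the method, the following is a minimal sketch of the SWAG idea: fit a Gaussian over weight snapshots collected along the tail of SGD training, then average predictions over sampled weight sets. The `predict(w, x)` callable is a placeholder for a trained network, not the models of the study:

```python
import numpy as np

def swag_moments(snapshots):
    """Diagonal Gaussian over flattened weight snapshots from training."""
    w = np.stack(snapshots)                    # (n_snapshots, n_params)
    mean = w.mean(axis=0)
    var = np.maximum((w ** 2).mean(axis=0) - mean ** 2, 1e-12)
    return mean, var

def swag_predict(predict, x, mean, var, n_samples=30, rng=None):
    """Mean prediction and spread over weights sampled from the posterior."""
    if rng is None:
        rng = np.random.default_rng(0)
    preds = np.stack([predict(rng.normal(mean, np.sqrt(var)), x)
                      for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)   # output and uncertainty
```

The standard deviation over sampled predictions is what provides the calibrated uncertainty estimate alongside the point prediction.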
We evaluate the spatial and temporal evolution of Earth's long-wavelength surface dynamic topography since the Jurassic using a series of high-resolution global mantle convection models. These models are Earth-like in terms of convective vigour, thermal structure, surface heat flux and the geographic distribution of heterogeneity. The models generate a degree-2-dominated spectrum of dynamic topography with negative amplitudes above subducted slabs (i.e. circum-Pacific regions and southern Eurasia) and positive amplitudes elsewhere (i.e. Africa, north-western Eurasia and the central Pacific). Model predictions are compared with published observations and subsidence patterns from well data, both globally and for the Australian and southern African regions. We find that our models reproduce the long-wavelength component of these observations, although observed smaller-scale variations are not reproduced. We subsequently define "geodynamic rules" for how different surface tectonic settings are affected by mantle processes: (i) locations in the vicinity of a subduction zone show large negative dynamic topography amplitudes; (ii) regions far away from convergent margins feature long-term positive dynamic topography; and (iii) rapid variations in dynamic support occur along the margins of overriding plates (e.g. the western US) and at points located on a plate that rapidly approaches a subduction zone (e.g. India and the Arabian Peninsula). Our models provide a predictive quantitative framework linking mantle convection with plate tectonics and sedimentary basin evolution, thus improving our understanding of how subduction and mantle convection affect the spatio-temporal evolution of basin architecture.
Observations of rift and rifted margin architecture suggest that significant spatial and temporal structural heterogeneity develops during the multiphase evolution of continental rifting. Inheritance is often invoked to explain this heterogeneity, such as preexisting anisotropies in rock composition, rheology, and deformation. Here, we use high-resolution 3-D thermal-mechanical numerical models of continental extension to demonstrate that rift-parallel heterogeneity may develop solely through fault network evolution during the transition from distributed to localized deformation. In our models, the initial phase of distributed normal faulting is seeded through randomized initial strength perturbations in an otherwise laterally homogeneous lithosphere extending at a constant rate. Continued extension localizes deformation onto lithosphere-scale faults, which are laterally offset by tens of km and discontinuous along-strike. These results demonstrate that rift- and margin-parallel heterogeneity of large-scale fault patterns may in part be a natural byproduct of fault network coalescence.
We investigate the ergodic properties of a random walker performing (anomalous) diffusion on a random fractal geometry. Extensive Monte Carlo simulations of the motion of tracer particles on an ensemble of realisations of percolation clusters are performed for a wide range of percolation densities. Single trajectories of the tracer motion are analysed to quantify the time averaged mean squared displacement (MSD) and to compare this with the ensemble averaged MSD of the particle motion. Other complementary physical observables associated with ergodicity are studied as well. It turns out that the time averaged MSD of individual realisations exhibits non-vanishing fluctuations even in the limit of very long observation times as the percolation density approaches the critical value. This apparent non-ergodic behaviour concurs with the ergodic behaviour on the ensemble averaged level. We demonstrate how the non-vanishing fluctuations in single particle trajectories are analytically expressed in terms of the fractal dimension and the cluster size distribution of the random geometry, thus being of purely geometrical origin. Moreover, we reveal that the convergence scaling law to ergodicity, which is known to be inversely proportional to the observation time T for ergodic diffusion processes, follows a power law ∼T^(−h) with h < 1 due to the fractal structure of the accessible space. These results provide useful measures for differentiating the subdiffusion on random fractals from an otherwise closely related process, namely, fractional Brownian motion. Implications of our results for the analysis of single particle tracking experiments are provided.
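As a pointer for practitioners, here is a minimal sketch of the time averaged MSD, the central single-trajectory observable above; the discretisation is the standard definition, not code from the study:

```python
import numpy as np

def time_averaged_msd(x, max_lag):
    """Time averaged MSD of one trajectory x, shape (N,) or (N, dims).

    Discrete form of delta^2(lag) = <[x(t + lag) - x(t)]^2>_t, averaged
    over the sliding window along the trajectory.
    """
    x = np.asarray(x, dtype=float)
    if x.ndim == 1:
        x = x[:, None]
    lags = np.arange(1, max_lag + 1)
    tamsd = np.empty(len(lags))
    for i, lag in enumerate(lags):
        disp = x[lag:] - x[:-lag]              # displacements at this lag
        tamsd[i] = np.mean(np.sum(disp ** 2, axis=1))
    return lags, tamsd
```

Comparing this quantity across many single trajectories with the ensemble averaged MSD at the same lag is how the fluctuations discussed above are quantified.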
F2C2
(2012)
Background: Flux coupling analysis (FCA) has become a useful tool in the constraint-based analysis of genome-scale metabolic networks. FCA allows the detection of dependencies between reaction fluxes of metabolic networks at steady state. On the one hand, this can help in the curation of reconstructed metabolic networks by verifying whether the coupling between reactions is in agreement with experimental findings. On the other hand, FCA can aid in defining intervention strategies to knock out target reactions.
Results: We present a new method F2C2 for FCA, which is orders of magnitude faster than previous approaches. As a consequence, FCA of genome-scale metabolic networks can now be performed in a routine manner.
Conclusions: We propose F2C2 as a fast tool for the computation of flux coupling in genome-scale metabolic networks. F2C2 is freely available for non-commercial use at https://sourceforge.net/projects/f2c2/files/.
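The following is not the F2C2 algorithm itself, but a minimal sketch of the underlying flux coupling question: over the steady-state flux cone S v = 0, v ≥ 0, is the ratio v_i / v_j fixed? Here we normalise v_j = 1 and bound the attainable range of v_i with two linear programs; the toy network and flux bound are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

def flux_ratio_range(S, i, j, v_max=1000.0):
    """Min and max of v_i over {S v = 0, 0 <= v <= v_max, v_j = 1}."""
    m, n = S.shape
    bounds = [(0.0, v_max)] * n
    bounds[j] = (1.0, 1.0)                     # normalisation v_j = 1
    c = np.zeros(n)
    c[i] = 1.0
    lo = linprog(c, A_eq=S, b_eq=np.zeros(m), bounds=bounds)
    hi = linprog(-c, A_eq=S, b_eq=np.zeros(m), bounds=bounds)
    return lo.fun, -hi.fun

# Toy network: irreversible linear pathway R1: -> A, R2: A -> B, R3: B ->.
S = np.array([[1.0, -1.0, 0.0],    # metabolite A: made by R1, used by R2
              [0.0, 1.0, -1.0]])   # metabolite B: made by R2, used by R3
print(flux_ratio_range(S, 0, 1))   # (1.0, 1.0): R1 and R2 are fully coupled
```

An equal minimum and maximum indicates full coupling; F2C2's contribution is computing such relations for all reaction pairs of genome-scale networks far faster than this pairwise LP probing.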
Proposing relevant perturbations to biological signaling networks is central to many problems in biology and medicine because it allows for enabling or disabling certain biological outcomes. In contrast to quantitative methods that permit fine-grained (kinetic) analysis, qualitative approaches allow for addressing large-scale networks. This is accomplished by more abstract representations such as logical networks. We elaborate upon such a qualitative approach aiming at the computation of minimal interventions in logical signaling networks, relying on Kleene's three-valued logic and fixpoint semantics. We address this problem with answer set programming and show that our approach greatly outperforms previous work based on dedicated algorithms.
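To make the semantics concrete, here is a minimal sketch of Kleene's three-valued connectives and a fixpoint computation on a toy logical network; the network and its update rules are illustrative, not taken from the paper:

```python
# Three truth values: True, False, None (unknown).
def k_not(a):
    return None if a is None else (not a)

def k_and(a, b):
    if a is False or b is False:
        return False
    return True if (a is True and b is True) else None

def k_or(a, b):
    if a is True or b is True:
        return True
    return False if (a is False and b is False) else None

# Toy signaling network: c = a AND (NOT b); d = c OR b.
def update(s):
    return {"a": s["a"], "b": s["b"],
            "c": k_and(s["a"], k_not(s["b"])),
            "d": k_or(s["c"], s["b"])}

state = {"a": True, "b": False, "c": None, "d": None}
while (nxt := update(state)) != state:         # iterate to the fixpoint
    state = nxt
print(state)                                   # truth propagates: c = d = True
```

An intervention in this picture fixes selected inputs (here a and b) so that the fixpoint realizes a desired outcome; the paper computes minimal such sets with answer set programming.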
Background: Inferring regulatory interactions between genes from time-resolved transcriptomics data, yielding reverse-engineered gene regulatory networks, is of paramount importance to systems biology and bioinformatics studies. Accurate methods to address this problem can ultimately provide a deeper insight into the complexity, behavior, and functions of the underlying biological systems. However, the large number of interacting genes coupled with short and often noisy time-resolved read-outs of the system renders the reverse engineering a challenging task. Therefore, the development and assessment of methods which are computationally efficient, robust against noise, applicable to short time series data, and preferably capable of reconstructing the directionality of the regulatory interactions remains a pressing research problem with valuable applications.
Results: Here we perform the largest systematic analysis of a set of similarity measures and scoring schemes within the scope of the relevance network approach which are commonly used for gene regulatory network reconstruction from time series data. In addition, we define and analyze several novel measures and schemes which are particularly suitable for short transcriptomics time series. We compare the 21 considered measures and 6 scoring schemes according to their ability to correctly reconstruct such networks from short time series data by calculating summary statistics based on the corresponding specificity and sensitivity. Our results demonstrate that rank- and symbol-based measures have the highest performance in inferring regulatory interactions. In addition, the proposed asymmetric-weighting scoring scheme proved valuable in reducing the number of false positive interactions. On the other hand, Granger causality as well as information-theoretic measures, frequently used in inference of regulatory networks, show low performance on the short time series analyzed in this study.
Conclusions: Our study is intended to serve as a guide for choosing a particular combination of similarity measures and scoring schemes suitable for reconstruction of gene regulatory networks from short time series data. We show that further improvement of algorithms for reverse engineering can be obtained if one considers measures that are rooted in the study of symbolic dynamics or ranks, in contrast to the application of common similarity measures which do not consider the temporal character of the employed data. Moreover, we establish that the asymmetric-weighting scoring scheme together with symbol-based measures (for low noise levels) and rank-based measures (for high noise levels) is the most suitable choice.
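As an illustration of the relevance network approach with a rank-based measure, the sketch below scores every gene pair by Spearman correlation across a short series and keeps the strongest pairs as candidate interactions; the toy expression data and the cutoff are illustrative only:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
t = np.arange(8.0)                               # a short series: 8 time points
g0 = np.sin(t / 2.0)                             # "gene 0"
g1 = g0 ** 3 + rng.normal(scale=1e-3, size=8)    # monotone transform of gene 0
g2 = rng.normal(size=8)                          # unrelated "gene 2"
expr = np.vstack([g0, g1, g2])

rho, _ = spearmanr(expr, axis=1)                 # pairwise rank correlations
edges = [(i, j, round(rho[i, j], 2))
         for i in range(3) for j in range(i + 1, 3)
         if abs(rho[i, j]) > 0.7]                # illustrative cutoff
print(edges)                                     # the (0, 1) pair scores ~1
```

Because Spearman correlation operates on ranks, it is insensitive to monotone distortions of the read-out, which is one reason rank-based measures fare well on short, noisy series.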
The Rangsdorfer See (A = 2.44 km², z(max) = 6 m, z(mean) = 1.93 m) in the district of Teltow-Fläming is one of many water bodies in Brandenburg that currently fail to achieve the good status required by the EU Water Framework Directive. Phosphorus is well known to be the most important production-limiting nutrient in many water bodies and is thus the most promising control variable for successful lake restoration.
The aim of this work was to assess the water quality of the Rangsdorfer See with respect to trophic state, to identify the phosphorus input pathways that cause the highest loads, and to find restoration measures that enable a long-term improvement of the lake's status. In a scenario analysis, the modified one-box model was applied to estimate the effectiveness of external and internal restoration measures. From this work, the following conclusions can be drawn:
Owing to its morphometry, the Rangsdorfer See is a naturally nutrient-rich water body and was so even before anthropogenic influences acted on it. However, long-term nutrient inputs from various sources (wastewater discharges, intensive fish farming, sewage irrigation fields) led to excessive productivity. Many pollution sources have been eliminated, but a relevant nutrient export from the catchment still takes place. Using phosphorus balance models and lake-type-specific critical phosphorus concentrations, it becomes apparent that the current external phosphorus load exceeds the critical phosphorus input for the presumed achievement of good ecological status. The largest share of the load is transported into the Rangsdorfer See via the main natural tributary. Remediation measures in its catchment are therefore an effective instrument. A technical solution for nutrient reduction in the tributary (an elimination plant) can be used in support, but must then be operated permanently as long as the phosphorus concentration in the tributary remains high. The one-box model proved to be a helpful instrument for the preselection of suitable restoration measures.
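As an illustration of the approach, here is a minimal sketch of a Vollenweider-type one-box phosphorus balance of the kind such scenario analyses rely on; the model form is the textbook one-box balance, and all parameter values are hypothetical, not the calibrated values for the Rangsdorfer See:

```python
def steady_state_tp(load_kg_yr, outflow_m3_yr, volume_m3, sigma_per_yr):
    """Steady-state total phosphorus concentration in g/m^3 (= mg/L).

    One-box balance: V dP/dt = L - Q P - sigma V P
    =>  P* = L / (Q + sigma V),
    with load L, outflow Q, and net sedimentation rate sigma.
    """
    return load_kg_yr * 1000.0 / (outflow_m3_yr + sigma_per_yr * volume_m3)

# Illustrative scenario comparison: halving the external load.
V = 2.44e6 * 1.93                 # volume from area x mean depth (m^3)
for load in (1200.0, 600.0):      # external P load in kg/yr (hypothetical)
    tp = steady_state_tp(load, outflow_m3_yr=5e6, volume_m3=V, sigma_per_yr=1.0)
    print(f"load {load:6.0f} kg/yr -> TP ≈ {tp * 1000:.0f} µg/L")
```

Running candidate measures through such a balance and comparing the resulting steady-state concentration with the lake-type-specific critical value is exactly the preselection step the study describes.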
We study the thermal Markovian diffusion of tracer particles in a 2D medium with spatially varying diffusivity D(r), mimicking recently measured, heterogeneous maps of the apparent diffusion coefficient in biological cells. For this heterogeneous diffusion process (HDP) we analyse the mean squared displacement (MSD) of the tracer particles, the time averaged MSD, the spatial probability density function, and the first passage time dynamics from the cell boundary to the nucleus. Moreover, we examine the non-ergodic properties of this process which are important for the correct physical interpretation of time averages of observables obtained from single particle tracking experiments. From extensive computer simulations of the 2D stochastic Langevin equation we present an in-depth study of this HDP. In particular, we find that the MSDs along the radial and azimuthal directions in a circular domain obey anomalous and Brownian scaling, respectively. We demonstrate that the time averaged MSD stays linear as a function of the lag time and the system thus reveals a weak ergodicity breaking. Our results will enable one to rationalise the diffusive motion of larger tracer particles such as viruses or submicron beads in biological cells.
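A minimal sketch of the kind of simulation described: overdamped 2D Langevin motion with a space-dependent diffusivity, integrated with an Euler-Maruyama scheme. The diffusivity map and parameters are illustrative, not those of the study:

```python
import numpy as np

def D_of_r(pos, D0=1.0, r0=5.0):
    """Illustrative diffusivity map, growing with distance from the origin."""
    return D0 * (1.0 + np.linalg.norm(pos) / r0)

def simulate_hdp(n_steps=10_000, dt=1e-3, seed=0):
    """Euler-Maruyama integration of dx = sqrt(2 D(x)) dW in 2D."""
    rng = np.random.default_rng(seed)
    traj = np.zeros((n_steps + 1, 2))
    for k in range(n_steps):
        D = D_of_r(traj[k])                    # evaluate D at the current position
        traj[k + 1] = traj[k] + np.sqrt(2.0 * D * dt) * rng.normal(size=2)
    return traj

traj = simulate_hdp()                          # feed into (TA-)MSD analyses
```

Evaluating D at the pre-step position corresponds to the Ito convention; other conventions would shift where along the step the diffusivity is sampled.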
Floods continue to be the leading cause of economic damages and fatalities among natural disasters worldwide. As future climate and exposure changes are projected to intensify these damages, the need for more accurate and scalable flood risk models is rising. Over the past decade, macro-scale flood risk models have evolved from initial proofs of concept to indispensable tools for decision-making at the global, national and, increasingly, local level. This progress has been propelled by the advent of high-performance computing and the availability of global, space-based datasets. However, despite such advancements, these models are rarely validated and consistently fall short of the accuracy achieved by high-resolution local models. While capabilities have improved, significant gaps persist in understanding the behaviours of such macro-scale models, particularly their tendency to overestimate risk. This dissertation aims to address such gaps by examining the scale transfers inherent in the construction and application of coarse macro-scale models. To achieve this, four studies are presented that, collectively, address exposure, hazard, and vulnerability components of risk affected by upscaling or downscaling.
The first study focuses on a type of downscaling where coarse flood hazard inundation grids are enhanced to a finer resolution. While such inundation downscaling has been employed in numerous global model chains, ours is the first study to focus specifically on this component, providing an evaluation of the state of the art and a novel algorithm. Findings demonstrate that our novel algorithm is eight times faster than existing methods, offers a slight improvement in accuracy, and generates more physically coherent flood maps in hydraulically challenging regions. When applied to a case study, the algorithm generated a 4 m resolution inundation map from 30 m hydrodynamic model outputs in 33 s, a 60-fold improvement in runtime with a 25% increase in RMSE compared with direct hydrodynamic modelling. All evaluated downscaling algorithms yielded better accuracy than the coarse hydrodynamic model when compared to observations, demonstrating limits of coarse hydrodynamic models similar to those reported by others. The substitution of downscaling into flood risk model chains, in place of high-resolution modelling, can drastically improve the lead time of impact-based forecasts and the efficiency of hazard map production. With downscaling, local regions could obtain high-resolution local inundation maps by post-processing a global model without the need for expensive modelling or expertise.
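The sketch below is not the dissertation's novel algorithm, but a minimal version of the common downscaling idea it builds on: upsample a coarse water surface elevation (WSE) grid to the resolution of a fine DEM and re-derive depths against the fine terrain. The grids and the nearest-neighbour resampling are illustrative:

```python
import numpy as np

def downscale(wse_coarse, dem_fine, factor):
    """Nearest-neighbour upsampling of WSE, then depth = WSE - DEM."""
    wse_fine = np.kron(wse_coarse, np.ones((factor, factor)))
    depth = wse_fine - dem_fine
    return np.where(depth > 0.0, depth, 0.0)    # cells above water stay dry

wse_coarse = np.array([[10.0, 10.2],
                       [10.1, 10.3]])           # 2 x 2 coarse WSE grid (m)
dem_fine = 9.5 + np.random.default_rng(0).random((4, 4))   # 4 x 4 DEM (m)
print(downscale(wse_coarse, dem_fine, factor=2))
```

Because the fine DEM reintroduces sub-grid terrain detail, even this simple scheme recovers inundation patterns that the coarse grid cannot represent; the dissertation's contribution lies in doing this faster and more coherently in hydraulically difficult areas.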
The second study focuses on hazard aggregation and its implications for exposure, investigating implicit aggregations commonly used to intersect hazard grids with coarse exposure models. This research introduces a novel spatial classification framework to understand the effects of rescaling flood hazard grids to a coarser resolution. The study derives closed-form analytical solutions for the location and direction of bias from flood grid aggregation, showing that bias will always be present in regions near the edge of inundation. For example, inundation area will be positively biased when water depth grids are aggregated, while volume will be negatively biased when water elevation grids are aggregated. Extending the analysis to effects of hazard aggregation on building exposure, this study shows that exposure in regions at the edge of inundation is an order of magnitude more sensitive to aggregation errors than hazard alone. Among the two aggregation routines considered, averaging water surface elevation grids better preserved flood depths at buildings than averaging water depth grids. The study provides the first mathematical proof and generalizable treatment of flood hazard grid aggregation, demonstrating important mechanisms to help flood risk modellers understand and control model behaviour.
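A toy numerical example of the edge-of-inundation mechanism, consistent with the directions of bias stated above (illustrative numbers only, on a single 2 × 2 block with one dry cell):

```python
import numpy as np

dem = np.array([[1.0, 1.0],
                [1.0, 3.0]])        # terrain elevation; the 3.0 m cell is dry
wse = np.array([[2.0, 2.0],
                [2.0, np.nan]])     # water surface elevation where wet

depth = np.where(np.isnan(wse), 0.0, wse - dem)   # true depths: 1, 1, 1, 0

# Aggregating *depths* preserves volume (0.75 m x 4 cells = true 3 m^3 per
# unit cell area) but marks the whole coarse cell wet: area is overestimated.
agg_depth = depth.mean()                          # 0.75 m

# Aggregating *WSE* over wet cells, then re-deriving depth against the mean
# terrain, underestimates the volume here (0.5 m x 4 cells < 3 m^3).
agg_wse = np.nanmean(wse)                         # 2.0 m
depth_from_wse = max(agg_wse - dem.mean(), 0.0)   # 0.5 m
print(agg_depth, depth_from_wse)
```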
The final two studies focus on the aggregation of vulnerability models or flood damage functions, investigating the practice of applying per-asset functions to aggregate exposure models. Both studies apply Jensen's inequality, a classical mathematical result from 1906, to demonstrate how the aggregation of flood damage functions leads to bias. Applying Jensen's result in this new context, results show that typically concave flood damage functions will introduce a positive bias (overestimation) when aggregated. This behaviour was further investigated with a simulation experiment including 2 million buildings in Germany, four global flood hazard simulations and three aggregation scenarios. The results show that positive aggregation bias is not distributed evenly in space, meaning some regions identified as "hot spots of risk" in assessments may in fact just be hot spots of aggregation bias. This study provides the first application of Jensen's inequality to explain the overestimates reported elsewhere and offers advice for modellers on minimizing such artifacts.
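The Jensen's-inequality argument in miniature: for a concave damage function f, f(mean depth) ≥ mean of f(depth), so applying an asset-level function to averaged exposure overestimates damage. The depths and the square-root function are illustrative:

```python
import numpy as np

f = np.sqrt                                  # concave depth-damage function
depths = np.array([0.0, 0.25, 1.0, 2.25])    # depths at four buildings (m)

per_building = f(depths).mean()              # disaggregated damage: 0.75
aggregated = f(depths.mean())                # damage at the mean depth: ~0.94
print(per_building, aggregated)              # aggregation bias is positive
```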
Overall, this dissertation investigates the complex ways aggregation and disaggregation influence the behaviour of risk models, focusing on the scale transfers underpinning macro-scale flood risk assessments. Extending a key finding of the flood hazard literature to the broader context of flood risk, this dissertation concludes that, all else equal, coarse models overestimate risk. This dissertation goes beyond previous studies by providing mathematical proofs for how and where such bias emerges in aggregation routines, offering a mechanistic explanation for coarse model overestimates. It shows that this bias is spatially heterogeneous, necessitating a deep understanding of how rescaling may bias models in order to effectively reduce or communicate uncertainties. Further, the dissertation offers specific recommendations to help modellers minimize scale transfers in problematic regions. In conclusion, I argue that such aggregation errors are epistemic, stemming from choices in model structure, and therefore hold greater potential and impetus for study and mitigation. This deeper understanding of uncertainties is essential for improving macro-scale flood risk models and their effectiveness in equitable, holistic, and sustainable flood management.
The present study examines the effect of language experience on vocal emotion perception in a second language. Native speakers of French with varying levels of self-reported English ability were asked to identify emotions from vocal expressions produced by American actors in a forced-choice task, and to rate their pleasantness, power, alertness and intensity on continuous scales. Stimuli included emotionally expressive English speech (emotional prosody) and non-linguistic vocalizations (affect bursts), and a baseline condition with Swiss-French pseudo-speech. Results revealed effects of English ability on the recognition of emotions in English speech but not in non-linguistic vocalizations. Specifically, higher English ability was associated with less accurate identification of positive emotions, but not with the interpretation of negative emotions. Moreover, higher English ability was associated with lower ratings of pleasantness and power, again only for emotional prosody. This suggests that second language skills may sometimes interfere with emotion recognition from speech prosody, particularly for positive emotions.
The time-dependent approach to electronic spectroscopy, as popularized by Heller and coworkers in the 1980s, is applied here in conjunction with linear-response, time-dependent density functional theory to study vibronic absorption, emission and resonance Raman spectra of several diamondoids. Two-state models, the harmonic and the Condon approximations, are used for the calculations, making them easily applicable to larger molecules. The method is applied to nine pristine lower and higher diamondoids: adamantane, diamantane, triamantane, and three isomers each of tetramantane and pentamantane. We also consider a hybrid species "Dia = Dia" – a shorthand notation for a recently synthesized molecule comprising two diamantane units connected by a C=C double bond. We resolve and interpret trends in optical and vibrational properties of these molecules as a function of their size, shape, and symmetry, as well as effects of "blending" with sp2-hybridized C-atoms. Time-dependent correlation functions facilitate the computations and shed light on the vibrational dynamics following electronic transitions.
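A minimal sketch of the time-dependent picture for the simplest case: one displaced harmonic mode at T = 0 in the Condon approximation, where the wavepacket autocorrelation function is C(t) = exp(−S(1 − e^(−i w0 t))) and the absorption lineshape is its half-Fourier transform. The Huang-Rhys factor S, mode frequency w0, and damping gamma are illustrative numbers, not values for any diamondoid:

```python
import numpy as np

S, w0, gamma = 1.5, 1.0, 0.02                  # dimensionless units
t = np.linspace(0.0, 400.0, 4096)
C = np.exp(-S * (1.0 - np.exp(-1j * w0 * t))) * np.exp(-gamma * t)

w = np.linspace(-1.0, 8.0, 600)                # frequency grid (units of w0)
sigma = np.array([np.trapz(np.exp(1j * wi * t) * C, t).real for wi in w])
# sigma peaks at w = n * w0 with Poisson weights S^n e^{-S} / n!,
# i.e. the familiar Franck-Condon progression.
```

The same transform-of-a-correlation-function structure carries over to the multimode molecular calculations, which is what makes the approach scale to larger systems.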
RainNet v1.0
(2020)
In this study, we present RainNet, a deep convolutional neural network for radar-based precipitation nowcasting. Its design was inspired by the U-Net and SegNet families of deep learning models, which were originally designed for binary segmentation tasks. RainNet was trained to predict continuous precipitation intensities at a lead time of 5 min, using several years of quality-controlled weather radar composites provided by the German Weather Service (DWD). That data set covers Germany with a spatial domain of 900 km × 900 km and has a resolution of 1 km in space and 5 min in time. Independent verification experiments were carried out on 11 summer precipitation events from 2016 to 2017. In order to achieve a lead time of 1 h, a recursive approach was implemented by using RainNet predictions at 5 min lead times as model inputs for longer lead times. In the verification experiments, trivial Eulerian persistence and a conventional model based on optical flow served as benchmarks. The latter is available in the rainymotion library and had previously been shown to outperform DWD's operational nowcasting model for the same set of verification events.
RainNet significantly outperforms the benchmark models at all lead times up to 60 min for the routine verification metrics mean absolute error (MAE) and the critical success index (CSI) at intensity thresholds of 0.125, 1, and 5 mm h⁻¹. However, rainymotion turned out to be superior in predicting the exceedance of higher intensity thresholds (here 10 and 15 mm h⁻¹). The limited ability of RainNet to predict heavy rainfall intensities is an undesirable property which we attribute to a high level of spatial smoothing introduced by the model. At a lead time of 5 min, an analysis of power spectral density confirmed a significant loss of spectral power at length scales of 16 km and below. Obviously, RainNet had learned an optimal level of smoothing to produce a nowcast at 5 min lead time. In that sense, the loss of spectral power at small scales is informative, too, as it reflects the limits of predictability as a function of spatial scale. Beyond the lead time of 5 min, however, the increasing level of smoothing is a mere artifact – an analogue to numerical diffusion – that is not a property of RainNet itself but of its recursive application. In the context of early warning, the smoothing is particularly unfavorable since pronounced features of intense precipitation tend to get lost over longer lead times. Hence, we propose several options to address this issue in prospective research, including an adjustment of the loss function for model training, model training for longer lead times, and the prediction of threshold exceedance in terms of a binary segmentation task. Furthermore, we suggest additional input data that could help to better identify situations with imminent precipitation dynamics. The model code, pretrained weights, and training data are provided in open repositories as an input for such future studies.
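A minimal sketch of the recursive nowcasting scheme: a model trained for a single 5 min step is applied to its own output to reach longer lead times. Here `model.predict` stands in for the trained network, and the four-frame input window is an assumed setup, not a guaranteed detail of the released code:

```python
import numpy as np

def recursive_nowcast(model, frames, n_steps=12):
    """frames: array (4, H, W) of the latest 5 min radar composites."""
    history = list(frames)
    nowcast = []
    for _ in range(n_steps):                    # 12 x 5 min = 60 min
        x = np.stack(history[-4:])[np.newaxis]  # batch of one sample
        y = model.predict(x)[0]                 # predicted next 5 min field
        nowcast.append(y)
        history.append(y)                       # feed the prediction back in
    return np.stack(nowcast)
```

Each recursion step re-applies the model's learned smoothing to an already smoothed field, which is the "numerical diffusion" analogue described above.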
The temporal dynamics of climate processes are spread across different timescales and, as such, the study of these processes at only one selected timescale might not reveal the complete mechanisms and interactions within and between the (sub-) processes. To capture the non-linear interactions between climatic events, the method of event synchronization has recently attracted increasing attention. The main drawback of the present estimation of event synchronization is its restriction to analysing time series at a single reference timescale. The study of event synchronization at multiple scales would be of great interest for comprehending the dynamics of the investigated climate processes. In this paper, the wavelet-based multi-scale event synchronization (MSES) method is proposed by combining the wavelet transform and event synchronization. Wavelets are used extensively to comprehend multi-scale processes and the dynamics of processes across various timescales. The proposed method allows the study of spatio-temporal patterns across different timescales. The method is tested on synthetic and real-world time series in order to check its replicability and applicability. The results indicate that MSES is able to capture relationships that exist between processes at different timescales.
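A simplified sketch of event synchronization with a fixed coincidence window tau; the original measure of Quian Quiroga et al. uses a dynamic window, and the paper's MSES applies the measure to wavelet-decomposed series (e.g. coefficients from pywt.wavedec), which is omitted here. Event times are illustrative:

```python
import numpy as np

def event_sync(t1, t2, tau):
    """Normalized count of event pairs coinciding within tau."""
    c = sum(1 for a in t1 for b in t2 if abs(a - b) <= tau)
    return c / np.sqrt(len(t1) * len(t2))

t1 = [3, 10, 25, 40, 58]          # event times in series 1
t2 = [4, 11, 27, 41, 80]          # series 2 roughly tracks series 1
print(event_sync(t1, t2, tau=2))  # 0.8: strongly synchronized
```

In the multi-scale variant, this score is computed per wavelet scale, so that synchronization present only at, say, seasonal scales is not washed out by mismatches at shorter ones.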