Recovering genomics clusters of secondary metabolites from lakes using genome-resolved metagenomics
(2018)
Metagenomic approaches have become increasingly popular in recent decades owing to decreasing DNA sequencing costs and advances in bioinformatics. So far, however, the recovery of long genes coding for secondary metabolites still represents a major challenge. Often, the quality of metagenome assemblies is poor, especially in environments with high microbial diversity, where sequence coverage is low and the complexity of natural communities is high. Recently, new and improved algorithms for binning environmental reads and contigs have been developed to overcome such limitations. Some of these algorithms use a similarity detection approach to classify the obtained reads into taxonomic units and to assemble draft genomes. This approach, however, is quite limited, since it can classify only sequences similar to those already available (and well classified) in the databases. In this work, we used draft genomes from Lake Stechlin, north-eastern Germany, recovered with MetaBAT, an efficient binning tool that integrates empirical probabilistic distances of genome abundance and tetranucleotide frequency for accurate metagenome binning. These genomes were screened for secondary metabolism genes, such as polyketide synthases (PKS) and non-ribosomal peptide synthetases (NRPS), using the antiSMASH and NaPDoS workflows. With this approach we were able to identify 243 secondary metabolite clusters from 121 genomes recovered from our lake samples. A total of 18 NRPS, 19 PKS, and 3 hybrid PKS/NRPS clusters were found. In addition, it was possible to predict the partial structure of several secondary metabolite clusters, allowing for taxonomic classification and phylogenetic inference. Our approach shows a high potential to recover and study secondary metabolite genes from any aquatic ecosystem.
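As an aside, the tetranucleotide-frequency signal that binning tools such as MetaBAT combine with abundance information is easy to sketch. The following is an illustrative reimplementation, not MetaBAT's actual code; function and variable names are ours:

```python
from collections import Counter
from itertools import product

def tetranucleotide_freq(seq):
    """Normalized frequency of every 4-mer in a DNA sequence.

    Contigs from the same genome tend to have similar 4-mer profiles,
    which is one of the signals used to bin contigs into draft genomes.
    """
    seq = seq.upper()
    kmers = [seq[i:i + 4] for i in range(len(seq) - 3)]
    kmers = [k for k in kmers if set(k) <= set("ACGT")]  # skip ambiguous bases
    counts = Counter(kmers)
    total = sum(counts.values()) or 1
    # fixed ordering over all 256 possible tetranucleotides
    return {"".join(p): counts["".join(p)] / total
            for p in product("ACGT", repeat=4)}

profile = tetranucleotide_freq("ACGTACGTACGT")
```

Two contigs can then be compared by, for example, the Euclidean distance between their 256-dimensional profiles.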
Global climate change is one of the greatest challenges of the 21st century, influencing the environment, societies, politics and economies. The (semi-)arid areas of southern Africa already suffer from water scarcity. There is a great variety of ongoing research on global climate history, but important questions about regional differences remain.
In southern African regions, terrestrial climate archives are rare, which makes paleoclimate studies challenging. Based on the assumption that continental pans (sabkhas) represent a suitable geo-archive of climate history, two different pans were studied in the southern and western Kalahari Desert. A combined approach of molecular biological and biogeochemical analyses is used to investigate the diversity and abundance of microorganisms and to trace temporal and spatial changes in paleoprecipitation in arid environments. The present PhD thesis demonstrates the applicability of pan sediments as a late Quaternary geo-archive based on microbial signature lipid biomarkers such as archaeol, branched and isoprenoid glycerol dialkyl glycerol tetraethers (GDGTs), and phospholipid fatty acids (PLFA). The microbial signatures contained in the sediment provide information on the current or past microbial community from the Last Glacial Maximum to the recent epoch, the Holocene. The results are discussed in the context of regional climate evolution in southwestern Africa. The seasonal shift of the Intertropical Convergence Zone (ITCZ) along the equator influences the distribution of precipitation and climate zones. The differing extents of the winter and summer rainfall zones in southern Africa were confirmed by the frequency of certain microbial biomarkers. A period of increased precipitation in the south-western Kalahari could be attributed to an extension of the winter rainfall zone during the Last Glacial Maximum (21 ± 2 ka). In contrast, a period of increased paleoprecipitation in the western Kalahari was indicated during the Late Glacial to Holocene transition. This was possibly caused by a southwestern shift in the position of the summer rainfall zone associated with the southward movement of the ITCZ.
Furthermore, for the first time this study characterizes bacterial and archaeal life in continental pan sediments based on 16S rRNA gene high-throughput sequencing and provides an insight into the recent microbial community structure. Near-surface processes play an important role in the modern microbial ecosystem of the pans. Water availability as well as salinity might determine the abundance and composition of the microbial communities. The microbial community of pan sediments is dominated by halophilic and dry-adapted archaea and bacteria. Frequently occurring microorganisms such as Halobacteriaceae, Bacillus, and Gemmatimonadetes are described in more detail in this study.
Today, more than half of the world’s population lives in urban areas. With a high density of population and assets, urban areas are not only the economic, cultural and social hubs of every society but also highly susceptible to natural disasters. As a consequence of rising sea levels and an expected increase in extreme weather events caused by a changing climate, in combination with growing cities, flooding is an increasing threat to many urban agglomerations around the globe.
To mitigate the destructive consequences of flooding, appropriate risk management and adaptation strategies are required. So far, flood risk management in urban areas has focused almost exclusively on managing river and coastal flooding. Often overlooked is the risk from small-scale rainfall-triggered flooding, in which the rainfall intensity of rainstorms exceeds the capacity of urban drainage systems, leading to immediate flooding. Referred to as pluvial flooding, this flood type, exclusive to urban areas, has caused severe losses in cities around the world. Without further intervention, losses from pluvial flooding are expected to increase in many urban areas due to an increase in impervious surfaces, compounded by aging drainage infrastructure and a projected increase in heavy precipitation events. While this requires the integration of pluvial flood risk into risk management plans, so far little is known about the adverse consequences of pluvial flooding due to a lack of both detailed data sets and studies on pluvial flood impacts. As a consequence, methods for reliably estimating pluvial flood losses, needed for pluvial flood risk assessment, are still missing.
Therefore, this thesis investigates how pluvial flood losses to private households can be reliably estimated, based on an improved understanding of the drivers of pluvial flood loss. For this purpose, detailed data from pluvial-flood-affected households were collected through structured telephone and web surveys following pluvial flood events in Germany and the Netherlands.
Pluvial flood losses to households are the result of complex interactions between impact characteristics such as the water depth and a household’s resistance as determined by its risk awareness, preparedness, emergency response, building properties and other influencing factors. Both exploratory analysis and machine-learning approaches were used to analyze differences in resistance and impacts between households and their effects on the resulting losses. The comparison of case studies showed that awareness of pluvial flooding among private households is quite low. Low awareness not only challenges the effective dissemination of early warnings, but was also found to influence the implementation of private precautionary measures. The latter were predominantly implemented by households with previous experience of pluvial flooding. Even cases where previous flood events affected a different part of the same city did not lead to an increase in preparedness of the surveyed households, highlighting the need to account for small-scale variability in both impact and resistance parameters when assessing pluvial flood risk.
While it was concluded that the combination of low awareness, ineffective early warning and the fact that only a minority of buildings were adapted to pluvial flooding impaired the coping capacities of private households, the often low water levels still enabled households to mitigate or even prevent losses through a timely and effective emergency response.
These findings were confirmed by the detection of loss-influencing variables, showing that cases in which households were able to prevent any loss to the building structure are predominantly explained by resistance variables such as the household’s risk awareness, while the degree of loss is mainly explained by impact variables.
Based on the important loss-influencing variables detected, different flood loss models were developed. Similar to flood loss models for river floods, the empirical data from the preceding data collection was used to train flood loss models describing the relationship between impact and resistance parameters and the resulting loss to building structures. Different approaches were adapted from river flood loss models, using both models with the water depth as the only predictor for building structure loss and models incorporating additional variables from the preceding variable detection routine.
The high predictive errors of all compared models showed that point predictions are not suitable for estimating losses on the building level, as they severely impair the reliability of the estimates. For that reason, a new probabilistic framework based on Bayesian inference was introduced that is able to provide predictive distributions instead of single loss estimates. These distributions not only give a range of probable losses, they also provide information on how likely a specific loss value is, representing the uncertainty in the loss estimate.
Using probabilistic loss models, it was found that the certainty and reliability of a loss estimate on the building level is not only determined by the use of additional predictors as shown in previous studies, but also by the choice of response distribution defining the shape of the predictive distribution. Here, a mix between a beta and a Bernoulli distribution to account for households that are able to prevent losses to their building’s structure was found to provide significantly more certain and reliable estimates than previous approaches using Gaussian or non-parametric response distributions.
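The response distribution described above, a mix of a Bernoulli point mass at zero loss and a beta distribution for positive relative losses, can be sketched as a zero-inflated beta log-density. Parameter names here are illustrative, and the thesis models were fitted with full Bayesian inference rather than evaluated pointwise like this:

```python
import math

def zi_beta_logpdf(y, p_zero, a, b):
    """Log-density of a zero-inflated beta distribution for relative loss y in [0, 1).

    With probability p_zero the household prevents any structural loss
    (y = 0, the Bernoulli part); otherwise the positive relative loss
    follows a Beta(a, b) distribution.
    """
    if y == 0.0:
        return math.log(p_zero)
    # log of the beta function B(a, b) = Gamma(a) * Gamma(b) / Gamma(a + b)
    log_beta = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return (math.log(1.0 - p_zero)
            + (a - 1.0) * math.log(y)
            + (b - 1.0) * math.log(1.0 - y)
            - log_beta)
```

The point mass at zero is what lets the model separate "no structural loss at all" from "a small loss", which a Gaussian or plain beta response cannot express.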
The successful model transfer and post-event application to estimate building structure loss in Houston, TX, caused by pluvial flooding during Hurricane Harvey confirmed previous findings, and demonstrated the potential of the newly developed multi-variable beta model for future risk assessments. The highly detailed input data set constructed from openly available data sources containing over 304,000 affected buildings in Harris County further showed the potential of data-driven, building-level loss models for pluvial flood risk assessment.
In conclusion, pluvial flood losses to private households are the result of complex interactions between impact and resistance variables, which should be represented in loss models. The local occurrence of pluvial floods requires loss estimates at high spatial resolutions, i.e. at the building level, where losses are variable and uncertainties are high.
Therefore, probabilistic loss estimates describing the uncertainty of the estimate should be used instead of point predictions. While the performance of probabilistic models at the building level is mainly driven by the choice of response distribution, multi-variable models are recommended for two reasons:
First, additional resistance variables improve the detection of cases in which households were able to prevent structural losses.
Second, the added variability of additional predictors provides a better representation of the uncertainties when loss estimates from multiple buildings are aggregated.
This leads to the conclusion that data-driven probabilistic loss models on the building level allow for a reliable loss estimation at an unprecedented level of detail, with a consistent quantification of uncertainties on all aggregation levels. This makes the presented approach suitable for a wide range of applications, from decision support in spatial planning to impact-based early warning systems.
This study investigates the reform of the public budgeting and accounting system (Doppik) in Brandenburg. On the one hand, this thesis aims to identify the key variables shaping employees’ commitment to change and, on the other hand, to examine the extent to which employees’ commitment to change influences the implementation process of the reform. The results of this study show that the commitment of civil servants towards the Doppik is primarily determined by the content, but also by the context, of the reform. Moreover, for the case of Brandenburg it is revealed that civil servants’ affective commitment to change has a significant positive influence on the perceived success of the reform implementation. The results of the study are not only of high scientific importance but also of practical relevance. The recommendations developed in this study offer grounded guidelines on how to successfully implement the Doppik at the local level in Brandenburg.
A comprehensive hydro-sedimentological dataset for the Isábena catchment, northeastern (NE) Spain, for the period 2010–2018 is presented to analyse water and sediment fluxes in a Mediterranean mesoscale catchment. The dataset includes rainfall data from 12 rain gauges distributed within the study area, complemented by meteorological data from 12 official meteorological stations. It comprises discharge data derived from water stage measurements as well as suspended sediment concentrations (SSCs) at six gauging stations of the River Isábena and its sub-catchments. Soil spectroscopic data from 351 suspended sediment samples and 152 soil samples were collected to characterize sediment source regions and sediment properties via fingerprinting analyses. The Isábena catchment (445 km²) is located in the southern central Pyrenees, ranging from 450 m to 2720 m a.s.l.; together with a pronounced topography, this leads to distinct temperature and precipitation gradients. The River Isábena shows marked discharge variations and high sediment yields causing severe siltation problems in the downstream Barasona Reservoir. The main sediment source is badland areas located on Eocene marls that are well connected to the river network. The dataset features a comprehensive set of variables in a high spatial and temporal resolution suitable for the advanced process understanding of water and sediment fluxes, their origin and connectivity and sediment budgeting and for the evaluation and further development of hydro-sedimentological models in Mediterranean mesoscale mountainous catchments.
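As a rough illustration of how such discharge and SSC records combine into sediment fluxes (the unit conversion is standard; function names and example numbers are ours, not from the dataset):

```python
def sediment_flux(discharge_m3_s, ssc_mg_l):
    """Instantaneous suspended sediment flux in kg/s.

    1 m^3 of water is 1000 l and 1 mg is 1e-6 kg, so
    flux = Q * SSC * 1000 * 1e-6 = Q * SSC / 1000 (kg/s).
    """
    return discharge_m3_s * ssc_mg_l / 1000.0

def daily_load_tonnes(discharge_m3_s, ssc_mg_l):
    """Daily suspended sediment load in tonnes (86400 s/day, 1 t = 1000 kg)."""
    return sediment_flux(discharge_m3_s, ssc_mg_l) * 86400 / 1000.0
```

For example, a discharge of 5 m³/s at 200 mg/l corresponds to 1 kg/s, or 86.4 t/day, which is the kind of quantity summed over events to estimate reservoir siltation.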
Power training programs have proved effective in improving components of physical fitness such as speed. According to the concept of training specificity, it has been postulated that exercises must closely mimic the demands of the respective activity. When transferring this idea to speed development, the purpose of the present study was to examine the effects of resisted sprint training (RST) vs. traditional power training (TPT) on physical fitness in healthy young adults. Thirty-five healthy, physically active adults were randomly assigned to an RST (n = 10, 23 ± 3 years), a TPT (n = 9, 23 ± 3 years), or a passive control group (n = 16, 23 ± 2 years). RST and TPT exercised for 6 weeks with three training sessions/week, each lasting 45–60 min. RST comprised frontal and lateral sprint exercises using an expander system with increasing levels of resistance attached to a treadmill (h/p/cosmos). TPT included ballistic strength training at 40% of the one-repetition maximum for the lower limbs (e.g., leg press, knee extensions). Before and after training, sprint (20-m sprint), change-of-direction speed (T-agility test), jump (drop jump, countermovement jump), and balance performance (Y balance test) were assessed. ANCOVA statistics revealed large main effects of group for 20-m sprint velocity and ground contact time (0.81 ≤ d ≤ 1.00). Post-hoc tests showed higher sprint velocity following RST and TPT (0.69 ≤ d ≤ 0.82) when compared to the control group, but no difference between RST and TPT. Pre-to-post changes amounted to 4.5% for RST [90%CI: (−1.1%; 10.1%), d = 1.23] and 2.6% for TPT [90%CI: (0.4%; 4.8%), d = 1.59]. Additionally, ground contact times during sprinting were shorter following RST and TPT (0.68 ≤ d ≤ 1.09) compared to the control group, with no difference between RST and TPT. Pre-to-post changes amounted to −6.3% for RST [90%CI: (−11.4%; −1.1%), d = 1.45] and −2.7% for TPT [90%CI: (−4.2%; −1.2%), d = 2.36].
Finally, effects for change-of-direction speed, jump, and balance performance varied from small-to-large. The present findings indicate that 6 weeks of RST and TPT produced similar effects on 20-m sprint performance compared with a passive control in healthy and physically active, young adults. However, no training-related effects were found for change-of-direction speed, jump and balance performance. We conclude that both training regimes can be applied for speed development.
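The between-group effect sizes reported above (Cohen's d) are conventionally computed with a pooled standard deviation. A minimal sketch, assuming independent groups and sample variances:

```python
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d between two independent groups using the pooled SD."""
    na, nb = len(group_a), len(group_b)
    va, vb = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd
```

Values of roughly 0.2, 0.5, and 0.8 are commonly read as small, medium, and large effects, which is the convention behind "large main effects" in the abstract.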
This paper investigates the transferability of calibrated HBV model parameters under stable and contrasting conditions in terms of flood seasonality and flood generating processes (FGP) in five Norwegian catchments with mixed snowmelt/rainfall regimes. We apply a series of generalized (differential) split-sample tests using a 6-year moving window over (i) the entire runoff observation periods, and (ii) two subsets of runoff observations distinguished by the seasonal occurrence of annual maximum floods during either spring or autumn. The results indicate a general model performance loss due to the transfer of calibrated parameters to independent validation periods of −5 to −17%, on average. However, there is no indication that contrasting flood seasonality exacerbates performance losses, which contradicts the assumption that optimized parameter sets for snowmelt-dominated floods (during spring) perform particularly poorly on validation periods with rainfall-dominated floods (during autumn) and vice versa.
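The moving-window (differential) split-sample design used above can be sketched as follows; the exact pairing rules used by the authors may differ, so treat this as an illustration of the general scheme:

```python
def split_sample_windows(years, window=6):
    """Enumerate (calibration, validation) pairs for a split-sample test.

    Each moving window of `window` years serves as a calibration period,
    and every non-overlapping window serves as an independent validation
    period for the parameters calibrated on it.
    """
    windows = [tuple(years[i:i + window])
               for i in range(len(years) - window + 1)]
    pairs = []
    for cal in windows:
        for val in windows:
            if set(cal).isdisjoint(val):  # validation must not overlap calibration
                pairs.append((cal, val))
    return pairs

pairs = split_sample_windows(list(range(1990, 2010)))
```

In the differential variant, the pairs are additionally stratified by a contrast such as flood seasonality (spring- vs. autumn-dominated periods).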
Natural hazards such as floods, earthquakes, landslides, and multi-hazard events heavily affect human societies and call for better management strategies. Due to the severity of such events, it is of utmost importance to understand whether and how they change in response to evolving hydro-climatological, geo-physical and socio-economic conditions. These conditions jointly determine the magnitude, frequency, and impact of disasters, and are changing in response to climate change and human behavior. Therefore, methods are needed for hazard and risk quantification that account for the transient nature of hazards and risks in response to changing natural and anthropogenically altered systems. The purpose of this conference is to bring together researchers from the natural sciences (e.g. hydrology, meteorology, geomorphology, hydraulic engineering, environmental science, seismology, geography), risk research, nonlinear systems dynamics, and applied mathematics to discuss new insights and developments in data science, changing systems, multi-hazard events and the linkage between hazards and vulnerabilities under unstable environmental conditions. Knowledge transfer, communication and networking will be key elements of the conference. The conference is organized by means of invited talks given by outstanding experts, oral presentations, poster sessions and discussions.
Plyometric jump training (PJT) is a frequently used and effective means to improve amateur and elite soccer players' physical fitness. However, it is unresolved how different PJT frequencies per week with equal overall training volume may affect training-induced adaptations. Therefore, the aim of this study was to compare the effects of an in-season 8-week PJT with one session vs. two sessions per week and equal training volume on components of physical fitness in amateur female soccer players. A single-blind randomized controlled trial was conducted. Participants (N = 23; age, 21.4 ± 3.2 years) were randomly assigned to a one-session-per-week PJT group (PJT-1, n = 8), a two-sessions-per-week PJT group (PJT-2, n = 8), or an active control group (CON, n = 7). Before and after training, participants performed countermovement jumps (CMJ), drop jumps from a 20-cm drop height (DJ20), a maximal kicking velocity test (MKV), the 15-m linear sprint time test, the Meylan test for the assessment of change-of-direction ability (CoDA), and the Yo-Yo intermittent recovery endurance test (Yo-YoIR1). Results revealed significant main effects of time for the CMJ, DJ20, MKV, 15-m sprint, CoDA, and the Yo-YoIR1 (all p < 0.001; d = 0.57–0.83). Significant group × time interactions were observed for the CMJ, DJ20, MKV, 15-m sprint, CoDA, and the Yo-YoIR1 (all p < 0.05; d = 0.36–0.51). Post-hoc analyses showed similar improvements for the PJT-1 and PJT-2 groups in CMJ (Δ10.6%, d = 0.37; and Δ10.1%, d = 0.51, respectively), DJ20 (Δ12.9%, d = 0.47; and Δ13.1%, d = 0.54, respectively), MKV (Δ8.6%, d = 0.52; and Δ9.1%, d = 0.47, respectively), 15-m sprint (Δ8.3%, d = 2.25; and Δ9.5%, d = 2.67, respectively), CoDA (Δ7.5%, d = 1.68; and Δ7.4%, d = 1.16, respectively), and Yo-YoIR1 (Δ10.3%, d = 0.22; and Δ9.9%, d = 0.26, respectively). No significant pre-post changes were found for CON (all p > 0.05; Δ0.5–4.2%, d = 0.03–0.2).
In conclusion, higher PJT exposure in terms of session frequency has no extra effects on female soccer players' physical fitness development when jump volume is equated during a short-term (i.e., 8 weeks) training program. From this, it follows that one PJT session per week combined with regular soccer-specific training appears to be sufficient to induce physical fitness improvements in amateur female soccer players.
The regular monitoring of physical fitness and sport-specific performance is important in elite sports to increase the likelihood of success in competition. This study aimed to systematically review and critically appraise the methodological quality, validation data, and feasibility of sport-specific performance assessments in Olympic combat sports such as amateur boxing, fencing, judo, karate, taekwondo, and wrestling. A systematic search was conducted in the electronic databases PubMed, Google Scholar, and ScienceDirect up to October 2017. Studies in combat sports were included that reported validation data (e.g., reliability, validity, sensitivity) of sport-specific tests. Overall, 39 studies were eligible for inclusion in this review. The majority of studies (74%) had sample sizes <30 subjects. Nearly one-third of the reviewed studies lacked a sufficient description (e.g., anthropometrics, age, expertise level) of the included participants. Seventy-two percent of studies did not sufficiently report inclusion/exclusion criteria for their participants. In 62% of the included studies, the description and/or inclusion of familiarization session(s) was either incomplete or nonexistent. Sixty percent of studies did not report any details about the stability of testing conditions. Approximately half of the studies examined reliability measures of the included sport-specific tests (intraclass correlation coefficient [ICC] = 0.43–1.00). Content validity was addressed in all included studies; criterion validity (only its concurrent aspect) was addressed in approximately half of the studies, with correlation coefficients ranging from r = −0.41 to 0.90. Construct validity was reported in 31% of the included studies and predictive validity in only one. Test sensitivity was addressed in 13% of the included studies. The majority of studies (64%) ignored and/or provided incomplete information on test feasibility and methodological limitations of the sport-specific test.
In 28% of the included studies, insufficient information or a complete lack of information was provided in the respective field of the test application. Several methodological gaps exist in studies that used sport-specific performance tests in Olympic combat sports. Additional research should adopt more rigorous validation procedures in the application and description of sport-specific performance tests in Olympic combat sports.
It is well-documented that strength training (ST) improves measures of muscle strength in young athletes. Less is known about the transfer effects of ST on proxies of muscle power and the underlying dose-response relationships. The objectives of this meta-analysis were to quantify the effects of ST on lower-limb muscle power in young athletes and to provide dose-response relationships for ST modalities such as frequency, intensity, and volume. A systematic literature search of electronic databases identified 895 records. Studies were eligible for inclusion if (i) healthy trained children (girls aged 6–11 y, boys aged 6–13 y) or adolescents (girls aged 12–18 y, boys aged 14–18 y) were examined, (ii) ST was compared with an active control, and (iii) at least one proxy of muscle power [squat jump (SJ) and countermovement jump (CMJ) height] was reported. Weighted mean standardized mean differences (SMDwm) between subjects were calculated. Based on the findings from 15 statistically aggregated studies, ST produced significant but small effects on CMJ height (SMDwm = 0.65; 95% CI 0.34–0.96) and moderate effects on SJ height (SMDwm = 0.80; 95% CI 0.23–1.37). The sub-analyses revealed that the moderating variable expertise level (CMJ height: p = 0.06; SJ height: N/A) did not significantly influence ST-related effects on proxies of muscle power. “Age” and “sex” moderated ST effects on SJ (p = 0.005) and CMJ height (p = 0.03), respectively. With regard to the dose-response relationships, findings from the meta-regression showed that none of the included training modalities predicted ST effects on CMJ height. For SJ height, the meta-regression indicated that the training modality “training duration” significantly predicted the observed gains (p = 0.02), with longer training durations (>8 weeks) showing larger improvements. This meta-analysis demonstrated the general effectiveness of ST for lower-limb muscle power in young athletes, irrespective of the moderating variables.
Dose-response analyses revealed that longer training durations (>8 weeks) are more effective in improving SJ height. No such training modality was found for CMJ height. Thus, there appear to be other training modalities besides the ones included in our analyses that may have an effect on SJ and particularly CMJ height. ST monitoring through ratings of perceived exertion, movement velocity, or force-velocity profiles could be a promising approach for monitoring lower-limb muscle power development in young athletes.
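The weighted mean SMD underlying such meta-analyses is typically an inverse-variance pooled estimate. The sketch below shows the fixed-effect form for illustration; the reviewed analysis may have used a random-effects model, which additionally inflates the variances by a between-study component:

```python
def pooled_smd(smds, variances):
    """Fixed-effect inverse-variance pooling of standardized mean differences.

    Each study's SMD is weighted by the inverse of its variance, so more
    precise studies contribute more to the pooled estimate.
    Returns the pooled SMD and its 95% confidence interval.
    """
    weights = [1.0 / v for v in variances]
    est = sum(w * d for w, d in zip(weights, smds)) / sum(weights)
    se = (1.0 / sum(weights)) ** 0.5
    return est, (est - 1.96 * se, est + 1.96 * se)
```

For example, two studies with SMD 0.5 and equal variance pool to exactly 0.5, with a narrower interval than either study alone.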
The Cheb Basin (CZ) is a shallow Neogene intracontinental basin filled with fluvial and lacustrine sediments that is located in the western part of the Eger Rift. The basin is situated in a seismically active area and is characterized by diffuse degassing of mantle-derived CO2 in mofette fields. The Hartousov mofette field shows a daily flux of 23–97 tons of CO2 released over an area of 0.35 km² and soil gas concentrations of up to 100% CO2. The present study aims to explore the geo-bio interactions provoked by the influence of elevated CO2 concentrations on the geochemistry and microbial community of soils and sediments. To sample the strata, two 3-m cores were recovered. One core stems from the center of the degassing structure, whereas the other was taken 8 m to the ENE and served as an undisturbed reference site. The sites were compared regarding their geochemical features, microbial abundances, and microbial community structures. The mofette site is characterized by a low pH and high TOC and sulfate contents. Striking differences in the microbial community highlight the substantial impact of elevated CO2 concentrations and their associated side effects on microbial processes. The abundance of microbes did not show a typical decrease with depth, indicating that the uprising CO2-rich fluid provides sufficient substrate for chemolithoautotrophic anaerobic microorganisms. Illumina MiSeq sequencing of the 16S rRNA genes and multivariate statistics reveal that pH strongly influences microbial composition, explaining around 38.7% of the variance at the mofette site and 22.4% of the variance between the mofette site and the undisturbed reference site. Accordingly, acidophilic microorganisms (e.g., OTUs assigned to Acidobacteriaceae and Acidithiobacillus) displayed a much higher relative abundance at the mofette site than at the reference site.
The microbial community at the mofette site is characterized by a high relative abundance of methanogens and taxa involved in sulfur cycling. The present study provides intriguing insights into microbial life and geo-bio interactions in an active seismic region dominated by emanating mantle-derived CO2-rich fluids, and thereby builds the basis for further studies, e.g., focusing on the functional repertoire of the communities. However, it remains open if the observed patterns can be generalized for different time-points or sites.
More than 41% of the Earth’s land area is covered by permanent or seasonally arid dryland ecosystems. Global development and human activity have led to an increase in aridity, resulting in ecosystem degradation and desertification around the world. The objective of the present work was to investigate and compare the microbial community structure and geochemical characteristics of two geographically distinct saline pan sediments in the Kalahari Desert of southern Africa. Our data suggest that these microbial communities have been shaped by geochemical drivers, including water content, salinity, and the supply of organic matter. Using Illumina 16S rRNA gene sequencing, this study provides new insights into the diversity of bacteria and archaea in semi-arid, saline, and low-carbon environments. Many of the observed taxa are halophilic and adapted to water-limiting conditions. The analysis reveals a high relative abundance of halophilic archaea (primarily Halobacteria), and the bacterial diversity is marked by an abundance of Gemmatimonadetes and spore-forming Firmicutes. In the deeper, anoxic layers, candidate division MSBL1 and acetogenic bacteria (Acetothermia) are abundant. Together, the taxonomic information and geochemical data suggest that acetogenesis could be a prevalent form of metabolism in the deep layers of a saline pan.
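The relative abundances reported in 16S rRNA surveys like the ones above are simply read counts normalized per sample; taxon names and counts in this sketch are invented for illustration:

```python
def relative_abundance(otu_counts):
    """Convert raw 16S rRNA read counts per taxon into relative abundances.

    otu_counts maps taxon name -> number of reads assigned to it;
    the result sums to 1 for the sample.
    """
    total = sum(otu_counts.values())
    return {taxon: n / total for taxon, n in otu_counts.items()}

# illustrative sample, not data from the study
abund = relative_abundance(
    {"Halobacteria": 600, "Gemmatimonadetes": 250, "Firmicutes": 150})
```

Comparisons across samples with different sequencing depths rely on exactly this normalization (or on rarefying to equal depth) before community structure is contrasted.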
Organic matter characteristics in yedoma and thermokarst deposits on Baldwin Peninsula, west Alaska
(2018)
As Arctic warming continues and permafrost thaws, more soil and sedimentary organic matter (OM) will be decomposed in northern high latitudes. Still, uncertainties remain in the quality of the OM and the size of the organic carbon (OC) pools stored in different deposit types of permafrost landscapes. This study presents OM data from deep permafrost and lake deposits on the Baldwin Peninsula, which is located in the southern portion of the continuous permafrost zone in west Alaska. Sediment samples from yedoma and drained thermokarst lake basin (DTLB) deposits as well as thermokarst lake sediments were analyzed for cryostratigraphical and biogeochemical parameters and their lipid biomarker composition to identify the below-ground OC pool size and OM quality of ice-rich permafrost on the Baldwin Peninsula. We provide the first detailed characterization of yedoma deposits on Baldwin Peninsula. We show that three-quarters of the soil OC in the frozen deposits of the study region (68 Mt in total) is stored in DTLB deposits (52 Mt) and one-quarter in the frozen yedoma deposits (16 Mt). The lake sediments contain a relatively small OC pool (4 Mt), but have the highest volumetric OC content (93 kg m⁻³) compared to the DTLB (35 kg m⁻³) and yedoma deposits (8 kg m⁻³), largely due to differences in the ground ice content. The biomarker analysis indicates that the OM in both yedoma and DTLB deposits is mainly of terrestrial origin. Nevertheless, the relatively high carbon preference index of plant leaf waxes in combination with a lack of a degradation trend with depth in the yedoma deposits indicates that OM stored in yedoma is less degraded than that stored in DTLB deposits. This suggests that OM in yedoma has a higher potential for decomposition upon thaw, despite the relatively small size of this pool.
These findings show that the use of lipid biomarker analysis is valuable in the assessment of the potential future greenhouse gas emissions from thawing permafrost, especially because this area, close to the discontinuous permafrost boundary, is projected to thaw substantially within the 21st century.
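The partitioning of the frozen OC pool quoted above is a simple bookkeeping exercise; the "three-quarters"/"one-quarter" split follows directly from the pool sizes given in the text:

```python
# Frozen organic carbon (OC) pools on Baldwin Peninsula, values from the text (Mt)
oc_dtlb = 52      # drained thermokarst lake basin (DTLB) deposits
oc_yedoma = 16    # yedoma deposits
oc_frozen = oc_dtlb + oc_yedoma   # 68 Mt of frozen soil OC in total

frac_dtlb = oc_dtlb / oc_frozen       # ~0.76, i.e. roughly three-quarters
frac_yedoma = oc_yedoma / oc_frozen   # ~0.24, i.e. roughly one-quarter
print(f"DTLB: {frac_dtlb:.0%}, yedoma: {frac_yedoma:.0%}")
```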
Systems biology aims at investigating biological systems in their entirety by gathering and analyzing large-scale data sets about the underlying components. Computational systems biology approaches use these large-scale data sets to create models at different scales and cellular levels. In addition, the field is concerned with generating and testing hypotheses about biological processes. However, such approaches inevitably lead to computational challenges due to the high dimensionality of the data and the differences in dimensionality between data from different cellular layers.
This thesis focuses on the investigation and development of computational approaches to analyze metabolite profiles in the context of cellular networks, in order to determine which aspects of network functionality are reflected in metabolite levels. With these methods at hand, this thesis aims to answer three questions: (1) how the observability of biological systems is manifested in metabolite profiles, and whether it can be used for phenotypical comparisons; (2) how couplings of reaction rates can be identified from metabolic profiles alone; and (3) which regulatory mechanisms that affect metabolite levels can be distinguished by integrating transcriptomic and metabolomic read-outs.
I showed that sensor metabolites, identified by an approach from observability theory, are more strongly correlated with each other than non-sensors. The greater correlations between sensor metabolites were detected both with publicly available metabolite profiles and with synthetic data simulated from a medium-scale kinetic model. I demonstrated through robustness analysis that the correlation was due to the position of the sensor metabolites in the network and persisted irrespective of the experimental conditions. Sensor metabolites are therefore potential candidates for phenotypical comparisons between conditions through targeted metabolic analysis.
Furthermore, I demonstrated that the coupling of metabolic reaction rates can be investigated from a purely data-driven perspective, assuming that metabolic reactions can be described by mass action kinetics. Employing metabolite profiles from domesticated and wild wheat and tomato species, I showed that the process of domestication is associated with a loss of regulatory control on the level of reaction rate coupling. I also found that the same metabolic pathways in Arabidopsis thaliana and Escherichia coli exhibit differences in the number of reaction rate couplings.
I designed a novel method for the identification and categorization of transcriptional effects on metabolism by combining data on gene expression and metabolite levels. The approach determines the partial correlation of metabolites while controlling for the principal components of the transcript levels. The principal components contain the majority of the transcriptomic information, making it possible to partial out the effect of the transcriptional layer from the metabolite profiles. Depending on whether the correlation between two metabolites persists upon controlling for the effect of the transcriptional layer, the approach classifies the pair as associated due to post-transcriptional or transcriptional regulation, respectively. I showed that the resulting classification of metabolite pairs is in agreement with existing literature and with findings from a Bayesian inference approach.
The approaches developed, implemented, and investigated in this thesis open novel ways to jointly study metabolomics and transcriptomics data as well as to place metabolic profiles in the network context. The results from these approaches have the potential to provide further insights into the regulatory machinery in a biological system.
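A minimal numerical sketch of the idea behind the last method — partialling transcript principal components out of a metabolite correlation — could look as follows. All data, dimensions, and coefficients here are synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a shared transcriptional program drives both the transcript matrix
# and a pair of metabolites (all values illustrative).
n, n_genes = 50, 200
latent = rng.normal(size=n)                      # shared transcriptional signal
loadings = rng.normal(size=n_genes)
transcripts = np.outer(latent, loadings) + 0.5 * rng.normal(size=(n, n_genes))
met_a = latent + 0.3 * rng.normal(size=n)
met_b = latent + 0.3 * rng.normal(size=n)

# Principal components of the transcript levels via SVD
X = transcripts - transcripts.mean(axis=0)
U, s, _ = np.linalg.svd(X, full_matrices=False)
pcs = U[:, :10] * s[:10]          # top PCs carry most transcriptomic variance

def partial_corr(a, b, Z):
    """Correlation of a and b after regressing out the columns of Z."""
    Z1 = np.column_stack([np.ones(len(a)), Z])
    ra = a - Z1 @ np.linalg.lstsq(Z1, a, rcond=None)[0]
    rb = b - Z1 @ np.linalg.lstsq(Z1, b, rcond=None)[0]
    return np.corrcoef(ra, rb)[0, 1]

raw = np.corrcoef(met_a, met_b)[0, 1]
partial = partial_corr(met_a, met_b, pcs)
# A correlation that vanishes once the transcript PCs are controlled for points
# to transcriptional regulation; a persisting correlation to post-transcriptional.
print(f"raw r = {raw:.2f}, partial r = {partial:.2f}")
```

In this synthetic setup the raw correlation is high but the partial correlation collapses, i.e. the pair would be classified as transcriptionally regulated.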
Cells and organelles are not homogeneous but include microcompartments that alter the spatiotemporal characteristics of cellular processes. The effects of microcompartmentation on metabolic pathways are however difficult to study experimentally. The pyrenoid is a microcompartment that is essential for a carbon concentrating mechanism (CCM) that improves the photosynthetic performance of eukaryotic algae. Using Chlamydomonas reinhardtii, we obtained experimental data on photosynthesis, metabolites, and proteins in CCM-induced and CCM-suppressed cells. We then employed a computational strategy to estimate how fluxes through the Calvin-Benson cycle are compartmented between the pyrenoid and the stroma. Our model predicts that ribulose-1,5-bisphosphate (RuBP), the substrate of Rubisco, and 3-phosphoglycerate (3PGA), its product, diffuse in and out of the pyrenoid, respectively, with higher fluxes in CCM-induced cells. It also indicates that there is no major diffusional barrier to metabolic flux between the pyrenoid and stroma. Our computational approach represents a stepping stone to understanding microcompartmentalized CCM in other organisms.
Most large-scale hydrologic models fall short in reproducing groundwater head dynamics and simulating transport processes due to their oversimplified representation of groundwater flow. In this study, we aim to extend the applicability of the mesoscale Hydrologic Model (mHM v5.7) to subsurface hydrology by coupling it with the porous media simulator OpenGeoSys (OGS). The two models are one-way coupled through the model interfaces GIS2FEM and RIV2FEM, by which the grid-based fluxes of groundwater recharge and river-groundwater exchange generated by mHM are converted to fixed-flux boundary conditions of the groundwater model OGS. Specifically, the grid-based vertical reservoirs in mHM are completely preserved for the estimation of land-surface fluxes, while OGS acts as a plug-in to the original mHM modeling framework for groundwater flow and transport modeling. The applicability of the coupled model (mHM-OGS v1.0) is evaluated in a case study in the central European mesoscale river basin Nagelstedt. Different time steps, i.e., daily in mHM and monthly in OGS, are used to account for fast surface flow and slow groundwater flow. Model calibration is conducted following a two-step procedure, using discharge for mHM and the long-term mean of groundwater head measurements for OGS. Based on the model summary statistics, namely the Nash-Sutcliffe model efficiency (NSE), the mean absolute error (MAE), and the interquartile range error (QRE), the coupled model is able to satisfactorily represent the dynamics of discharge and groundwater heads at several locations across the study basin. Our exemplary calculations show that the one-way coupled model can take advantage of the spatially explicit modeling capabilities of surface and groundwater hydrologic models and provide an adequate representation of the spatiotemporal behavior of groundwater storage and heads, thus making it a valuable tool for addressing water resources and management problems.
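Two of the summary statistics named above have standard definitions and can be sketched directly (the discharge values below are hypothetical; the QRE is omitted here, as its exact definition is model-specific):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, <0 is worse than the obs mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def mae(obs, sim):
    """Mean absolute error, in the units of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.mean(np.abs(obs - sim))

# Hypothetical daily discharge series (m^3/s): observed vs. simulated
observed  = [3.1, 4.0, 9.5, 7.2, 5.0, 4.1, 3.8]
simulated = [3.0, 4.4, 8.9, 7.8, 5.2, 4.0, 3.5]
print(f"NSE = {nse(observed, simulated):.2f}, MAE = {mae(observed, simulated):.2f}")
```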
Magnetotellurics (MT) is a geophysical method that is able to image the electrical conductivity structure of the subsurface by recording time series of natural electromagnetic (EM) field variations. During data processing these time series are divided into small segments, and for each segment spectral values are computed, which are typically averaged in a statistical manner to obtain MT transfer functions. Unfortunately, the presence of man-made EM noise sources often corrupts a significant portion of the recorded time series, resulting in distorted transfer functions. Many advanced processing techniques, e.g. robust statistics, pre-stack data selection or remote reference, have been developed to tackle this problem. The first two techniques reduce the amount of outliers and noise in the data, whereas the latter approach removes noise by using data from another MT station. However, especially in populated regions the data processing remains quite challenging even with these approaches. In this thesis, I present two novel pre-stack data confinement and selection criteria for the detection of outliers and noise-affected data, based on (i) a distance measure of each data segment with regard to the entire sample distribution and (ii) the evaluation of the magnetic polarisation direction of all segments. The first criterion is able to remove data points that scatter around the desired MT distribution and, under some circumstances, can even reject complete data clusters originating from noise sources. The second criterion eliminates data points caused by a strongly polarised magnetic signal. Both criteria have been successfully applied to many stations with different types of noise contamination, showing that they can significantly improve the transfer function estimation. The novel criteria were used to evaluate an MT data set from the Eastern Karoo Basin in South Africa.
The corresponding field experiment is part of an extensive research programme to collect information on the current state of this region, e.g. its geological setting, prior to a potential shale gas exploitation. The aim was to investigate whether a three-dimensional (3D) inversion of the newly measured data fosters a more realistic mapping of physical properties of the target horizon. For this purpose, a comprehensive 3D model was derived using all available data. In a second step, I analysed parameters of the target horizon, e.g. its conductivity, that are proxies for physical properties such as thermal maturity and porosity.
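The first selection criterion — rejecting segments that lie far from the bulk of the sample distribution — can be illustrated with a Mahalanobis-distance sketch. The actual distance measure and threshold used in the thesis may differ; all data here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-segment spectral values: a correlated "MT" cluster plus a
# few segments contaminated by a man-made EM noise source.
clean = rng.multivariate_normal([1.0, -0.5], [[0.04, 0.02], [0.02, 0.05]], size=200)
noisy = rng.multivariate_normal([3.0, 2.0], [[0.5, 0.0], [0.0, 0.5]], size=10)
segments = np.vstack([clean, noisy])

# Distance of each segment to the sample distribution (Mahalanobis distance)
mean = segments.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(segments, rowvar=False))
diff = segments - mean
d = np.sqrt(np.einsum('ij,jk,ik->i', diff, cov_inv, diff))

# Pre-stack selection: keep only segments close to the bulk of the distribution
threshold = np.percentile(d, 95)
kept = segments[d <= threshold]
print(f"kept {len(kept)} of {len(segments)} segments")
```

In practice the retained segments would then be passed on to the (robust) transfer-function estimation.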
The purpose of Probabilistic Seismic Hazard Assessment (PSHA) at a construction site is to provide the engineers with a probabilistic estimate of the ground-motion level that could be equaled or exceeded at least once in the structure’s design lifetime. Certainty about the predicted ground motion allows the engineers to confidently optimize structural design and mitigate the risk of extensive damage or, in the worst case, a collapse. It is therefore in the interest of engineering, insurance, disaster mitigation, and the security of society at large to reduce uncertainties in the prediction of design ground-motion levels.
In this study, I am concerned with quantifying and reducing the prediction uncertainty of regression-based Ground-Motion Prediction Equations (GMPEs). Essentially, GMPEs are regressed best-fit formulae relating event, path, and site parameters (predictor variables) to observed ground-motion values at the site (prediction variable). GMPEs are characterized by a parametric median (μ) and a non-parametric variance (σ) of prediction. μ captures the known ground-motion physics i.e., scaling with earthquake rupture properties (event), attenuation with distance from source (region/path), and amplification due to local soil conditions (site); while σ quantifies the natural variability of data that eludes μ. In a broad sense, the GMPE prediction uncertainty is cumulative of 1) uncertainty on estimated regression coefficients (uncertainty on μ,σ_μ), and 2) the inherent natural randomness of data (σ). The extent of μ parametrization, the quantity, and quality of ground-motion data used in a regression, govern the size of its prediction uncertainty: σ_μ and σ.
In the first step, I present the impact of μ parametrization on the size of σ_μ and σ. Over-parametrization appears to increase σ_μ, because a large number of regression coefficients (in μ) must be estimated with insufficient data. Under-parametrization mitigates σ_μ, but the reduced explanatory strength of μ is reflected in an inflated σ. For an optimally parametrized GMPE, a ~10% reduction in σ is attained by discarding low-quality data from pan-European events with incorrect parametric values (of predictor variables).
In the case of regions with scarce ground-motion recordings, without under-parametrization, the only way to mitigate σ_μ is to substitute long-term earthquake data at a location with short-term samples of data across several locations – the ergodic assumption. However, the price of the ergodic assumption is an increased σ, due to region-to-region and site-to-site differences in ground-motion physics. The σ of an ergodic GMPE developed from a generic ergodic dataset is much larger than that of non-ergodic GMPEs developed from region- and site-specific non-ergodic subsets – which were previously too sparse to produce their own specific GMPEs. Fortunately, with the dramatic increase in recorded ground-motion data at several sites across Europe and the Middle East, I could quantify the region- and site-specific differences in ground-motion scaling and upgrade the GMPEs with 1) substantially more accurate region- and site-specific μ for sites in Italy and Turkey, and 2) significantly smaller prediction variance σ. The benefit of such enhancements to GMPEs is evident in my comparison of PSHA estimates from ergodic versus region- and site-specific GMPEs, where the differences in predicted design ground-motion levels at several sites in Europe and the Middle East are as large as ~50%.
Resolving the ergodic assumption with mixed-effects regressions is feasible when the quantified region- and site-specific effects are physically meaningful, and the non-ergodic subsets (regions and sites) are defined a priori through expert knowledge. In absence of expert definitions, I demonstrate the potential of machine learning techniques in identifying efficient clusters of site-specific non-ergodic subsets, based on latent similarities in their ground-motion data. Clustered site-specific GMPEs bridge the gap between site-specific and fully ergodic GMPEs, with their partially non-ergodic μ and, σ ~15% smaller than the ergodic variance.
The methodological refinements to GMPE development produced in this study are applicable to new ground-motion datasets, to further enhance certainty of ground-motion prediction and thereby, seismic hazard assessment. Advanced statistical tools show great potential in improving the predictive capabilities of GMPEs, but the fundamental requirement remains: large quantity of high-quality ground-motion data from several sites for an extended time-period.
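The core benefit of relaxing the ergodic assumption can be sketched numerically: removing each site's own systematic offset from a pooled set of residuals shrinks the remaining standard deviation. All numbers below are synthetic, and the real decomposition uses mixed-effects regression rather than simple per-site demeaning:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical ground-motion residuals (in log units) for 8 sites: each site
# has its own systematic offset (site term) plus record-to-record noise.
n_sites, n_records = 8, 40
site_terms = rng.normal(0.0, 0.3, size=n_sites)          # site-to-site variability
residuals = site_terms[:, None] + rng.normal(0.0, 0.5, size=(n_sites, n_records))

# Ergodic sigma: one pooled standard deviation over all sites and records
sigma_ergodic = residuals.std()

# Site-specific (non-ergodic) sigma: remove each site's own mean offset first
sigma_site = (residuals - residuals.mean(axis=1, keepdims=True)).std()

print(f"ergodic sigma = {sigma_ergodic:.2f}, site-specific sigma = {sigma_site:.2f}")
```

Because per-site demeaning can only reduce the total sum of squares, the site-specific sigma is always at most the ergodic sigma; the gap grows with the site-to-site variability.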
Modern gamma-ray telescopes provide the main stream of data for astrophysicists in quest of detecting the sources of gamma rays such as active galactic nuclei (AGN). Many blazars have been detected with gamma-ray telescopes such as HESS, VERITAS, MAGIC and the Fermi satellite as sources of gamma rays with energies E ≥ 100 GeV. These very-high-energy photons interact with the extragalactic background light (EBL), producing ultra-relativistic electron-positron pairs. Observations with Fermi-LAT indicate that the GeV gamma-ray flux from some blazars is lower than that predicted from the full electromagnetic cascade. The pairs can induce electrostatic and electromagnetic instabilities, in which case wave-particle interactions can reduce the energy of the pairs. Therefore, collective plasma effects can also substantially suppress the GeV-band gamma-ray emission, affecting the IGMF constraints as well. Using particle-in-cell (PIC) simulations, we have revisited the issue of plasma instabilities induced by electron-positron beams in the fully ionized intergalactic medium. This problem is related to pair beams produced by the TeV radiation of blazars. The main objective of our study is to clarify the feedback of the beam-driven instabilities on the pairs. The present dissertation provides new results regarding the plasma instabilities from blazar-induced pair beams interacting with the intergalactic medium. This clarifies the relevance of plasma instabilities and improves our understanding of blazars.
At Saturn, electrons are trapped in the planet's magnetic field and accelerated to relativistic energies to form the radiation belts, but how this dramatic increase in electron energy occurs is still unknown. Until now the mechanism of radial diffusion has been assumed, but we show here that in-situ acceleration through wave-particle interactions, which initial studies dismissed as ineffectual at Saturn, is in fact a vital part of the energetic particle dynamics there. We present evidence from numerical simulations based on Cassini spacecraft data that a particular plasma wave, known as the Z-mode, accelerates electrons to MeV energies inside 4 R_S (1 R_S = 60,330 km) through a Doppler-shifted cyclotron resonant interaction. Our results show that the Z-mode waves observed are not oblique as previously assumed and are much better accelerators than O-mode waves, resulting in an electron energy spectrum that closely approaches observed values without any transport effects included.
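For reference, the Doppler-shifted cyclotron resonance underlying this interaction has the standard textbook form (not specific to the simulations reported here):

```latex
% Resonance between a wave (frequency \omega, parallel wavenumber k_\parallel)
% and an electron with parallel velocity v_\parallel and Lorentz factor \gamma:
\omega - k_\parallel v_\parallel = \frac{n\,\Omega_{ce}}{\gamma},
\qquad \Omega_{ce} = \frac{eB}{m_e}, \quad n \in \mathbb{Z},
```

so for MeV electrons the relativistic factor γ shifts the resonant frequency well below the cold electron gyrofrequency.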
Gamma-ray astronomy has proven to provide unique insights into cosmic-ray accelerators in the past few decades. By combining information at the highest photon energies with the entire electromagnetic spectrum in multi-wavelength studies, detailed knowledge of non-thermal particle populations in astronomical objects and systems has been gained: Many individual classes of gamma-ray sources could be identified inside our galaxy and outside of it. Different sources were found to exhibit a wide range of temporal evolution, ranging from seconds to stable behaviours over many years of observations. With the dawn of both neutrino and gravitational-wave astronomy, additional messengers have come into play over the last years. This development marks the advent of multi-messenger astronomy: a novel approach not only to search for sources of cosmic rays, but for astronomy in general.
In this thesis, both traditional multi-wavelength studies and multi-messenger studies will be presented. They were carried out with the H.E.S.S. experiment, an imaging air Cherenkov telescope array located in the Khomas Highland of Namibia. H.E.S.S. entered its second phase in 2012 with the addition of a large, fifth telescope. While the initial array was limited to the study of gamma rays with energies above 100 GeV, the new instrument gives access to gamma rays with energies down to a few tens of GeV. Strengths of the multi-wavelength approach will be demonstrated using the example of the galaxy NGC253, which is undergoing an episode of enhanced star formation. The gamma-ray emission will be discussed in light of all the information on this system available from radio, infrared and X-ray observations. These wavelengths reveal detailed information on the population of supernova remnants, which are suspected cosmic-ray accelerators. A broad-band gamma-ray spectrum is derived from H.E.S.S. and Fermi-LAT data. The improved analysis of H.E.S.S. data provides a measurement which is no longer dominated by systematic uncertainties. The long-term behaviour of cosmic rays in the starburst galaxy NGC253 is finally characterised.
In contrast to the long time-scale evolution of a starburst galaxy, multi-messenger studies are especially intriguing when shorter time-scales are being probed. A prime example of a short time-scale transient is the Gamma-Ray Burst; the efforts to understand this phenomenon effectively founded the branch of gamma-ray astronomy. The multi-messenger approach allows for the study of elusive phenomena such as Gamma-Ray Bursts and other transients using electromagnetic radiation, neutrinos, cosmic rays and gravitational waves contemporaneously. Although contemporaneous observations have gained importance only recently, the execution of such observation campaigns still presents a big challenge due to the different limitations and strengths of the participating infrastructures.
An alert system for transient phenomena has been developed over the course of this thesis for H.E.S.S. It aims to address many follow-up challenges in order to maximise the science return of the new large telescope, which is able to repoint much faster than the initial four telescopes. The system allows for fully automated observations based on scientific alerts from any wavelength or messenger and allows H.E.S.S. to participate in multi-messenger campaigns. Utilising this new system, many interesting multi-messenger observation campaigns have been performed. Several highlight observations with H.E.S.S. are analysed, presented and discussed in this work. Among them are observations of Gamma Ray Bursts with low latency and low energy threshold, the follow-up of a neutrino candidate in spatial coincidence with a flaring active galactic nucleus and of the merger of two neutron stars, which was revealed by the coincidence of gravitational waves and a Gamma-Ray Burst.
Great megathrust earthquakes arise from the sudden release of energy accumulated during centuries of interseismic plate convergence. The moment deficit (energy available for future earthquakes) is commonly inferred by integrating the rate of interseismic plate locking over the time since the previous great earthquake. But accurate integration requires knowledge of how interseismic plate locking changes decades after earthquakes, measurements not available for most great earthquakes. Here we reconstruct the post-earthquake history of plate locking at Guafo Island, above the seismogenic zone of the giant 1960 (M_w = 9.5) Chile earthquake, through forward modeling of land-level changes inferred from aerial imagery (since 1974) and measured by GPS (since 1994). We find that interseismic locking increased to ~70% in the decade following the 1960 earthquake and then gradually to 100% by 2005. Our findings illustrate the transient evolution of plate locking in Chile, and suggest a similarly complex evolution elsewhere, with implications for the time- and magnitude-dependent probability of future events.
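The slip-deficit integration described above can be sketched in a few lines. Only the 70%-to-100% locking history follows the text; the convergence rate, the zero starting value just after the earthquake, and the end year are assumed, illustrative numbers:

```python
import numpy as np

# Locking history loosely following the text: rises to ~70% in the first
# decade after 1960, then gradually to 100% by 2005 (assumed piecewise linear).
years = np.arange(1960, 2021)
locking = np.interp(years, [1960, 1970, 2005, 2020], [0.0, 0.7, 1.0, 1.0])

convergence_mm_yr = 66.0   # assumed Nazca-South America convergence rate (mm/yr)

# Integrate locking over time (trapezoidal rule, 1-year steps)
locking_years = np.sum((locking[1:] + locking[:-1]) / 2.0)
slip_deficit_m = locking_years * convergence_mm_yr / 1000.0
print(f"accumulated slip deficit by 2020: {slip_deficit_m:.1f} m")
```

Multiplying this slip deficit by the locked area and rigidity would give the accumulated moment deficit.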
The simultaneous detection of energy, momentum and temporal information in electron spectroscopy is key to enhancing the detection efficiency and thereby broadening the range of scientific applications. Employing a novel 60° wide-angle acceptance lens system, based on an additional accelerating electron-optical element, leads to a significant enhancement in transmission over the previously employed 30° electron lenses. Due to this performance gain, optimized capabilities for time-resolved electron spectroscopy and other high-transmission applications with pulsed ionizing radiation have been obtained. The energy resolution and transmission have been determined experimentally utilizing BESSY II as a photon source. Four different and complementary lens modes have been characterized. (C) 2017 The Authors. Published by Elsevier B.V.
The concept of hydrologic connectivity summarizes all flow processes that link separate regions of a landscape. As such, it is a central theme in the field of catchment hydrology, with influence on neighboring disciplines such as ecology and geomorphology. It is widely acknowledged to be an important key in understanding the response behavior of a catchment and has at the same time inspired research on internal processes over a broad range of scales. From this process-hydrological point of view, hydrological connectivity is the conceptual framework to link local observations across space and scales.
This is the context in which the four studies that make up this thesis were conducted. The focus was on structures and their spatial organization as an important control on preferential subsurface flow. Each experiment covered a part of the conceptualized flow path from hillslopes to the stream: soil profile, hillslope, riparian zone, and stream.
For each study site, the most characteristic structures of the investigated domain and scale, such as slope deposits and peat layers, were identified based on preliminary or previous investigations or literature reviews. Additionally, further structural data was collected and topographical analyses were carried out. Flow processes were observed either based on response observations (soil moisture changes or discharge patterns) or direct measurement (advective heat transport). Based on these data, the flow-relevance of the characteristic structures was evaluated, especially with regard to hillslope to stream connectivity.
Results of the four studies revealed a clear relationship between characteristic spatial structures and the hydrological behavior of the catchment. Especially the spatial distribution of structures throughout the study domain and their interconnectedness were crucial for the establishment of preferential flow paths and their relevance for large-scale processes. Plot and hillslope-scale irrigation experiments showed that the macropores of a heterogeneous, skeletal soil enabled preferential flow paths at the scale of centimeters through the otherwise unsaturated soil. These flow paths connected throughout the soil column and across the hillslope and facilitated substantial amounts of vertical and lateral flow through periglacial slope deposits.
In the riparian zone of the same headwater catchment, the connectivity between hillslopes and stream was controlled by topography and the dualism between characteristic subsurface structures and the geomorphological heterogeneity of the stream channel. At the small scale (1 m to 10 m) highest gains always occurred at steps along the longitudinal streambed profile, which also controlled discharge patterns at the large scale (100 m) during base flow conditions (number of steps per section). During medium and high flow conditions, however, the impact of topography and parafluvial flow through riparian zone structures prevailed and dominated the large-scale response patterns.
In the streambed of a lowland river, low permeability peat layers affected the connectivity between surface water and groundwater, but also between surface water and the hyporheic zone. The crucial factor was not the permeability of the streambed itself, but rather the spatial arrangement of flow-impeding peat layers, causing increased vertical flow through narrow “windows” in contrast to predominantly lateral flow in extended areas of high hydraulic conductivity sediments.
These results show that the spatial organization of structures was an important control for hydrological processes at all scales and study areas. In a final step, the observations from different scales and catchment elements were put in relation and compared. The main focus was on the theoretical analysis of the scale hierarchies of structures and processes and the direction of causal dependencies in this context. Based on the resulting hierarchical structure, a conceptual framework was developed which is capable of representing the system’s complexity while allowing for adequate simplifications.
The resulting concept of the parabolic scale series is based on the insight that flow processes in the terrestrial part of the catchment (soil and hillslopes) converge. This means that small-scale processes assemble and form large-scale processes and responses. Processes in the riparian zone and the streambed, however, are not well represented by the idea of convergence. Here, the large-scale catchment signal arrives and is modified by structures in the riparian zone, stream morphology, and the small-scale interactions between surface water and groundwater. Flow paths diverge and processes can better be represented by proceeding from large scales to smaller ones. The catchment-scale representation of processes and structures is thus the conceptual link between terrestrial hillslope processes and processes in the riparian corridor.
This study analyzes the influence of local and regional climatic factors on the stable isotopic composition of rainfall in the Vietnamese Mekong Delta (VMD) as part of the Asian monsoon region. It is based on 1.5 years of weekly rainfall samples. In the first step, the isotopic composition of the samples is analyzed by local meteoric water lines (LMWLs) and single-factor linear correlations. Additionally, the contribution of several regional and local factors is quantified by multiple linear regression (MLR) of all possible factor combinations and by relative importance analysis. This approach is novel for the interpretation of isotopic records and enables an objective quantification of the explained variance in isotopic records for individual factors. In this study, the local factors are extracted from local climate records, while the regional factors are derived from atmospheric backward trajectories of water particles. The regional factors, i.e., precipitation, temperature, relative humidity and the length of backward trajectories, are combined with equivalent local climatic parameters to explain the response variables δ¹⁸O, δ²H, and d-excess of precipitation at the station of measurement.
The results indicate that (i) MLR can better explain the isotopic variation in precipitation (R² = 0.8) compared to single-factor linear regression (R² = 0.3); (ii) the isotopic variation in precipitation is controlled dominantly by regional moisture regimes (~70 %) compared to local climatic conditions (~30 %); (iii) the most important climatic parameter during the rainy season is the precipitation amount along the trajectories of air mass movement; (iv) the influence of local precipitation amount and temperature is not significant during the rainy season, unlike the regional precipitation amount effect; (v) secondary fractionation processes (e.g., sub-cloud evaporation) can be identified through the d-excess and take place mainly in the dry season, either locally for δ¹⁸O and δ²H, or along the air mass trajectories for d-excess. The analysis shows that regional and local factors vary in importance over the seasons and that the source regions and transport pathways, and particularly the climatic conditions along the pathways, have a large influence on the isotopic composition of rainfall. Although the general results have been reported qualitatively in previous studies (proving the validity of the approach), the proposed method provides quantitative estimates of the controlling factors, both for the whole data set and for distinct seasons. Therefore, it is argued that the approach constitutes an advancement in the statistical analysis of isotopic records in rainfall that can supplement or precede more complex studies utilizing atmospheric models. Due to its relative simplicity, the method can be easily transferred to other regions, or extended with other factors.
The results illustrate that the interpretation of the isotopic composition of precipitation as a recorder of local climatic conditions, as for example performed for paleorecords of water isotopes, may not be adequate in the southern part of the Indochinese Peninsula, and likely not in other regions affected by monsoon processes either. However, the presented approach could open a pathway towards better and seasonally differentiated reconstruction of paleoclimates based on isotopic records.
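The gain of MLR over single-factor regression reported above can be reproduced in spirit with a toy calculation. All predictors and coefficients are synthetic; only the qualitative contrast (regional factors dominating over a single local one) mirrors the study:

```python
import numpy as np

rng = np.random.default_rng(3)

def r_squared(y, X):
    """Coefficient of determination for a least-squares fit of y on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

# Hypothetical weekly predictors: one local factor (rain amount) plus three
# "regional" factors derived along backward trajectories (rain, temperature, humidity).
n = 80
factors = rng.normal(size=(n, 4))
# delta-18O response driven mostly by the regional factors, as in the study
d18O = (0.5 * factors[:, 0] - 2.0 * factors[:, 1]
        - 1.0 * factors[:, 2] + 0.5 * rng.normal(size=n))

r2_local_only = r_squared(d18O, factors[:, :1])   # single local factor
r2_all = r_squared(d18O, factors)                 # full MLR with regional factors
print(f"single-factor R2 = {r2_local_only:.2f}, MLR R2 = {r2_all:.2f}")
```

Since the full predictor set contains the single factor as a subset, the MLR R² can never be lower; here it is far higher because most of the variance sits in the regional terms.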
Flooding is an imminent natural hazard threatening most river deltas, e.g. the Mekong Delta. Appropriate flood management is thus required for a sustainable development of these often densely populated regions. Recently, the traditional event-based hazard control has shifted towards a risk management approach in many regions, driven by intensive research leading to new legal regulations on flood management. However, a large-scale flood risk assessment does not exist for the Mekong Delta. In particular, the flood risk to paddy rice cultivation, the most important economic activity in the delta, has not been assessed yet. The present study was therefore developed to provide the very first insight into delta-scale flood damages and risks to rice cultivation. The flood hazard was quantified by probabilistic flood hazard maps of the whole delta, derived using bivariate extreme value statistics, synthetic flood hydrographs, and a large-scale hydraulic model. The flood risk to paddy rice was then quantified considering cropping calendars, rice phenology, and harvest times, based on a time series of the enhanced vegetation index (EVI) derived from MODIS satellite data and a published rice flood damage function. The proposed concept provided flood risk maps to paddy rice for the Mekong Delta in terms of expected annual damage. Due to its generic approach, the presented concept can be used as a blueprint for regions facing similar problems. Furthermore, the changes in flood risk to paddy rice caused by the land-use changes currently under discussion in the Mekong Delta were estimated. Two land-use scenarios, either intensifying or reducing rice cropping, were considered, and the changes in risk were presented in spatially explicit flood risk maps. The basic risk maps could serve as guidance for the authorities to develop spatially explicit flood management and mitigation plans for the delta.
The land-use change risk maps could further be used for adaptive risk management plans and as a basis for a cost-benefit analysis of the discussed land-use change scenarios. Additionally, the damage and risk maps may support the recently initiated agricultural insurance programme in Vietnam.
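Expressing risk as expected annual damage, as this abstract describes, amounts to integrating damage over exceedance probability. The following is a minimal illustrative sketch, not the study's actual model: the return periods and damage figures are invented for demonstration.

```python
# Hedged sketch: expected annual damage (EAD) as the integral of damage
# over exceedance probability, approximated with the trapezoidal rule.
# The return periods and damage figures below are illustrative, not from the study.

def expected_annual_damage(return_periods, damages):
    """Approximate EAD from (return period, damage) pairs.

    Exceedance probability p = 1 / T; EAD = integral of damage dp,
    evaluated with the trapezoidal rule over sorted probabilities.
    """
    pairs = sorted((1.0 / t, d) for t, d in zip(return_periods, damages))
    ead = 0.0
    for (p0, d0), (p1, d1) in zip(pairs, pairs[1:]):
        ead += 0.5 * (d0 + d1) * (p1 - p0)
    return ead

# Illustrative damage estimates (arbitrary monetary units) for synthetic scenarios
ead = expected_annual_damage([2, 5, 10, 50, 100], [0.0, 10.0, 40.0, 120.0, 180.0])
print(round(ead, 2))
```

Per-pixel EAD values of this kind, mapped over the delta, would yield the spatially explicit risk maps the study describes.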
While the role and consequences of being a bystander to face-to-face bullying have received some attention in the literature, to date, little is known about the effects of being a bystander to cyberbullying. It is also unknown how empathy might impact the negative consequences associated with being a bystander of cyberbullying. The present study examined the longitudinal association between being a bystander of cyberbullying and depression and anxiety, and the moderating role of empathy in the relationship between being a bystander of cyberbullying and subsequent depression and anxiety. At Time 1, 1,090 adolescents (M-age = 12.19; 50% female) from the United States were included, and they completed questionnaires on empathy, cyberbullying roles (bystander, perpetrator, victim), depression, and anxiety. One year later, at Time 2, 1,067 adolescents (M-age = 13.76; 51% female) completed questionnaires on depression and anxiety. Results revealed a positive association between being a bystander of cyberbullying and both depression and anxiety. Further, empathy moderated the positive relationship between being a bystander of cyberbullying and depression, but not anxiety. Implications for intervention and prevention programs are discussed.
Manganese (Mn) is an essential nutrient for intracellular activities; it functions as a cofactor for a variety of enzymes, including arginase, glutamine synthetase (GS), pyruvate carboxylase and Mn superoxide dismutase (Mn-SOD). Through these metalloproteins, Mn plays critically important roles in development, digestion, reproduction, antioxidant defense, energy production, immune response and regulation of neuronal activities. Mn deficiency is rare. In contrast, Mn poisoning may be encountered upon overexposure to this metal. Excessive Mn tends to accumulate in the liver, pancreas, bone, kidney and brain, with the latter being the major target of Mn intoxication. Hepatic cirrhosis, polycythemia, hypermanganesemia, dystonia and Parkinsonism-like symptoms have been reported in patients with Mn poisoning. In recent years, Mn has come to the forefront of environmental concerns due to its neurotoxicity. Molecular mechanisms of Mn toxicity include oxidative stress, mitochondrial dysfunction, protein misfolding, endoplasmic reticulum (ER) stress, autophagy dysregulation, apoptosis, and disruption of the homeostasis of other metals. The mechanisms of Mn homeostasis are not fully understood. Here, we will address recent progress in Mn absorption, distribution and elimination across different tissues, as well as the intracellular regulation of Mn homeostasis in cells. We will conclude with recommendations for future research areas on Mn metabolism.
The purpose of the present study was to examine the moderation of parental mediation in the longitudinal association between being a bystander of cyberbullying and cyberbullying perpetration and cyberbullying victimization. Participants were 1067 7th and 8th graders between 12 and 15 years old (51% female) from six middle schools in predominantly middle-class neighborhoods in the Midwestern United States. Increases in being bystanders of cyberbullying were related positively to restrictive and instructive parental mediation. Restrictive parental mediation was related positively to Time 2 (T2) cyberbullying victimization, while instructive parental mediation was negatively related to T2 cyberbullying perpetration and victimization. Restrictive parental mediation was a moderator in the association between being bystanders of cyberbullying and T2 cyberbullying victimization: increases in restrictive parental mediation strengthened the positive relationship between these variables. In addition, instructive mediation moderated the association between being bystanders of cyberbullying and T2 cyberbullying victimization such that increases in this form of parental mediation weakened the association. The current findings indicate a need for parents to be aware of how they can impact adolescents’ involvement in cyberbullying as bullies and victims. In addition, greater attention should be given to developing parental intervention programs that focus on the role of parents in helping to mitigate adolescents’ likelihood of cyberbullying involvement.
Flood risk is impacted by a range of physical and socio-economic processes. Hence, the quantification of flood risk ideally considers the complete flood risk chain, from atmospheric processes through catchment and river system processes to damage mechanisms in the affected areas. Although it is generally accepted that a multitude of changes along the risk chain can occur and impact flood risk, there is a lack of knowledge of how and to what extent changes in influencing factors propagate through the chain and finally affect flood risk. To fill this gap, we present a comprehensive sensitivity analysis which considers changes in all risk components, i.e. changes in climate, catchment, river system, land use, assets, and vulnerability. The application of this framework to the mesoscale Mulde catchment in Germany shows that flood risk can vary dramatically as a consequence of plausible change scenarios. It further reveals that components that have not received much attention, such as changes in dike systems or in vulnerability, may outweigh changes in often investigated components, such as climate. Although the specific results are conditional on the case study area and the selected assumptions, they emphasize the need for a broader consideration of potential drivers of change in a comprehensive way. Hence, our approach contributes to a better understanding of how the different risk components influence the overall flood risk.
Natural extreme events are an integral part of nature on planet Earth. Usually, these events are considered hazardous to humans only when humans are exposed to them; in that case, however, natural hazards can have devastating impacts on human societies. Hydro-meteorological hazards in particular have a high damage potential in the form of, e.g., riverine and pluvial floods, winter storms, hurricanes and tornadoes, which can occur all over the globe. Along with an increasingly warm climate, an increase in the extreme weather that can potentially trigger natural hazards is to be expected. Yet not only changing natural systems, but also changing societal systems contribute to the increasing risk associated with these hazards, for example through increasing exposure and possibly also increasing vulnerability to the impacts of natural events. Thus, appropriate risk management is required to adapt all parts of society to existing and upcoming risks at various spatial scales. One essential part of risk management is risk assessment, including the estimation of economic impacts. However, reliable methods for estimating the economic impacts of hydro-meteorological hazards are still missing. This thesis therefore deals with the question of how the reliability of hazard damage estimates can be improved, represented and propagated across all spatial scales. This question is investigated using the specific example of economic impacts on companies as a result of riverine floods in Germany.
Flood damage models aim to describe the damage processes during a given flood event; in other words, they describe the vulnerability of a specific object to a flood. Such models can be based on empirical data sets collected after flood events. In this thesis, tree-based models trained with survey data are used to estimate direct economic flood impacts at the object level. It is found that these machine learning models, in conjunction with increasing sizes of the data sets used to derive them, outperform state-of-the-art damage models. However, despite the performance improvements gained by using multiple variables and more data points, large prediction errors remain at the object level. The occurrence of these high errors was explained by a further investigation using distributions derived from the tree-based models, which showed that direct economic impacts on individual objects cannot be modeled by a normal distribution. Yet most state-of-the-art approaches assume a normal distribution and take mean values as point estimators. The predictions are consequently unlikely values within the distributions, resulting in high errors. At larger spatial scales, more objects are considered in the damage estimation, which leads to a better fit of the damage estimates to a normal distribution. Consequently, the performance of the point estimators also improves, although large errors can still occur due to the variance of the normal distribution. It is recommended to use distributions instead of point estimates in order to represent the reliability of damage estimates.
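The effect described here, that mean point estimates perform poorly for skewed per-object damages but improve when damages are aggregated, can be illustrated with a toy simulation. This is a hedged sketch, not the thesis's model: the lognormal damage distribution and all parameters are illustrative assumptions.

```python
# Hedged sketch: simulate right-skewed per-object flood damages (lognormal
# here, purely illustrative) and compare the relative error of a mean point
# estimate at the object level vs. an aggregated (e.g. municipal) level.
import random
import statistics

random.seed(42)

# Skewed "true" damages for 10,000 individual objects
damages = [random.lognormvariate(8.0, 1.5) for _ in range(10_000)]
mean_estimate = statistics.fmean(damages)  # point estimator applied per object

# Object level: relative error of predicting the mean for each single object
object_errors = [abs(d - mean_estimate) / mean_estimate for d in damages]

# Aggregated level: sum 100 objects at a time, compare to 100 * mean
chunks = [sum(damages[i:i + 100]) for i in range(0, len(damages), 100)]
aggregate_errors = [abs(c - 100 * mean_estimate) / (100 * mean_estimate)
                    for c in chunks]

print(f"median relative error, object level:    {statistics.median(object_errors):.2f}")
print(f"median relative error, aggregate level: {statistics.median(aggregate_errors):.2f}")
```

The aggregate-level error is markedly smaller, consistent with the abstract's observation that sums over many objects approach a normal distribution while individual objects do not.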
In addition, current approaches mostly ignore the uncertainty associated with the characteristics of the hazard and the exposed objects. For a given flood event, for example, the estimation of the water level at a certain building is prone to uncertainties. Current approaches mostly define exposed objects using land-use data sets, which often show inconsistencies that introduce additional uncertainties. Furthermore, state-of-the-art approaches suffer from a lack of consistency when predicting damage at different spatial scales, since different types of exposure data sets are used for model derivation and application. To address these issues, a novel object-based method was developed in this thesis. The method enables a seamless estimation of hydro-meteorological hazard damage across spatial scales, including uncertainty quantification. The application and validation of the method resulted in plausible estimates at all spatial scales without overestimating the uncertainty.
The application of the method is made possible mainly by newly available data sets containing individual buildings, which allow flood-affected objects to be identified by overlaying the data sets with water masks. However, identifying affected objects with two different water masks revealed huge differences in the number of identified objects. More effort is thus needed for their identification, since the number of affected objects largely determines the order of magnitude of the economic flood impacts.
In general, the method represents the uncertainties associated with the three components of risk, namely hazard, exposure and vulnerability, in the form of probability distributions. The object-based approach enables a consistent propagation of these uncertainties in space. Aside from the propagation of damage estimates and their uncertainties across spatial scales, a propagation between models estimating direct and indirect economic impacts was demonstrated. This enables the uncertainties associated with the direct economic impacts to be included in the estimation of the indirect economic impacts. Consequently, the modeling procedure facilitates the representation of the reliability of estimated total economic impacts. Representing the estimates' reliability prevents reasoning based on the false certainty that might be attributed to point estimates. The developed approach therefore facilitates meaningful flood risk management and adaptation planning.
The successful post-event application and the representation of the uncertainties also qualify the method for use in future risk assessments. The developed method thus makes it possible to represent the assumptions made in future risk assessments, which is crucial information for future risk management. This is an important step forward, since the representation of the reliability associated with all components of risk is currently lacking in all state-of-the-art methods for assessing future risk.
In conclusion, the use of object-based methods that give results in the form of distributions instead of point estimates is recommended. Improving model performance by means of multi-variable models and additional data points is possible, but the gains are small. Uncertainties associated with all components of damage estimation should be included and represented in the results. Furthermore, the findings of the thesis suggest that, at larger scales, the influence of the uncertainty associated with the vulnerability is smaller than that associated with the hazard and exposure. This leads to the conclusion that, for more reliable flood damage estimations and risk assessments, the improvement and active inclusion of hazard and exposure, including their uncertainties, is needed in addition to improvements of the models describing the vulnerability of the objects.
Although school climate and self-efficacy have received some attention in the literature as correlates of students’ willingness to intervene in bullying, to date, very little is known about the potential mediating role of self-efficacy in the relationship between classroom climate and students’ willingness to intervene in bullying. To this end, the present study analyzes whether the relationship between classroom cohesion (as one facet of classroom climate) and students’ willingness to intervene in bullying situations is mediated by self-efficacy in social conflicts. This study is based on a representative stratified random sample of two thousand and seventy-one students (51.3% male), between the ages of twelve and seventeen, from twenty-four schools in Germany. Results showed that between 43% and 48% of students reported that they would not intervene in bullying. A mediation test using the structural equation modeling framework revealed that classroom cohesion and self-efficacy in social conflicts were directly associated with students’ willingness to intervene in bullying situations. Furthermore, classroom cohesion was indirectly associated with higher levels of students’ willingness to intervene in bullying situations via self-efficacy in social conflicts. We thus conclude that: (1) It is crucial to increase students’ willingness to intervene in bullying; (2) efforts to increase students’ willingness to intervene in bullying should promote students’ confidence in dealing with social conflicts and interpersonal relationships; and (3) self-efficacy plays an important role in understanding the relationship between classroom cohesion and students’ willingness to intervene in bullying. Recommendations are provided to help increase adolescents’ willingness to intervene in bullying and for future research.
Hatred directed at members of groups due to their origin, race, gender, religion, or sexual orientation is not new, but it has taken on a new dimension in the online world. To date, very little is known about online hate among adolescents. It is also unknown how online disinhibition might influence the association between being bystanders and being perpetrators of online hate. Thus, the present study focused on examining the associations among being bystanders of online hate, being perpetrators of online hate, and the moderating role of toxic online disinhibition in the relationship between being bystanders and perpetrators of online hate. In total, 1480 students aged between 12 and 17 years old were included in this study. Results revealed positive associations between being online hate bystanders and perpetrators, regardless of whether adolescents had or had not been victims of online hate themselves. The results also showed an association between toxic online disinhibition and online hate perpetration. Further, toxic online disinhibition moderated the relationship between being bystanders of online hate and being perpetrators of online hate. Implications for prevention programs and future research are discussed.
Insight into how environmental change determines the production and distribution of cyanobacterial toxins is necessary for risk assessment. Management guidelines currently focus on hepatotoxins (microcystins). Increasing attention is given to other classes, such as neurotoxins (e.g., anatoxin-a) and cytotoxins (e.g., cylindrospermopsin), due to their potency. Most studies examine the relationship between individual toxin variants and environmental factors, such as nutrients, temperature and light. In summer 2015, we collected samples across Europe to investigate the effect of nutrient and temperature gradients on the variability of toxin production at a continental scale. Direct and indirect effects of temperature were the main drivers of the spatial distribution of the toxins produced by the cyanobacterial community, the toxin concentrations and the toxin quota. Generalized linear models showed that a Toxin Diversity Index (TDI) increased with latitude, while it decreased with water stability. Increases in TDI were explained through a significant increase in toxin variants such as MC-YR, anatoxin and cylindrospermopsin, accompanied by a decreasing presence of MC-LR. As global warming continues, the direct and indirect effects of increased lake temperatures will drive changes in the distribution of cyanobacterial toxins in Europe, potentially promoting the selection of a few highly toxic species or strains.
The continuously increasing pollution of aquatic environments with microplastics (plastic particles < 5 mm) is a global problem with potential implications for organisms at all trophic levels. For microorganisms, trillions of these floating microplastic particles represent a huge surface area for colonization. Due to their very low biodegradability, microplastics persist in the environment for years to centuries and can be transported over thousands of kilometers together with the attached organisms. Since pathogenic, invasive, or otherwise harmful species could also be spread this way, it is essential to study microplastics-associated communities.
For this doctoral thesis, eukaryotic communities were analyzed for the first time on microplastics in brackish environments and compared to communities in the surrounding water and on the natural substrate wood. With Illumina MiSeq high-throughput sequencing, more than 500 different eukaryotic taxa were detected on the microplastics samples. Among them were various green algae, dinoflagellates, ciliates, fungi, fungal-like protists and small metazoans such as nematodes and rotifers. The most abundant organism was a dinoflagellate of the genus Pfiesteria, which may include fish-pathogenic and bloom-forming toxigenic species. Network analyses revealed numerous interaction possibilities among prokaryotes and eukaryotes in microplastics biofilms. Eukaryotic community compositions on microplastics differed significantly from those on wood and in water, and compositions were additionally distinct among the sampling locations. Furthermore, the biodiversity was clearly lower on microplastics in comparison to the diversity on wood or in the surrounding water.
In another experiment, a situation was simulated in which treated wastewater containing microplastics was introduced into a freshwater lake. With increasing microplastics concentrations, the resulting bacterial communities became more similar to those from the treated wastewater. Moreover, the abundance of integrase I increased together with rising concentrations of microplastics. Integrase I is often used as a marker for anthropogenic environmental pollution and is further linked to genes conferring, e.g., antibiotic resistance.
This dissertation gives detailed insights into the complexity of prokaryotic and eukaryotic communities on microplastics in brackish and freshwater systems. Even though microplastics provide novel microhabitats for various microbes, they might also transport toxigenic, pathogenic, antibiotic-resistant or parasitic organisms, meaning that their colonization can pose potential threats to humans and the environment. Finally, this thesis highlights the urgent need for more research as well as for strategies to minimize global microplastic pollution.
Plastic pollution is ubiquitous on the planet, since several million tons of plastic waste enter aquatic ecosystems each year. Furthermore, the amount of plastic produced is expected to increase exponentially in the near future. The heterogeneity of materials, additives and physical characteristics of plastics is typical of these emerging contaminants and affects their environmental fate in marine and freshwaters. Consequently, plastics can be found in the water column, sediments or littoral habitats of all aquatic ecosystems. Most of this plastic debris will fragment under physical, chemical and biological forces, producing particles of small size. These particles (< 5 mm) are known as “microplastics” (MP). Given their high surface-to-volume ratio, MP stimulate biofouling and the formation of biofilms in aquatic systems.
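The importance of the surface-to-volume ratio mentioned above can be made concrete with a back-of-the-envelope calculation: for a fixed plastic mass split into spheres, total surface area scales inversely with particle radius. The density and particle sizes below are illustrative assumptions (roughly polyethylene-like), not values from the thesis.

```python
# Illustration of why fragmentation matters for colonization: for spherical
# particles of a given total plastic mass, surface area scales as 1/radius.
# Density and radii are illustrative, not measured values.
import math

def total_surface_area(total_mass_g, radius_cm, density_g_cm3=0.95):
    """Total surface area (cm^2) of `total_mass_g` of plastic split into
    identical spheres of radius `radius_cm`."""
    particle_volume = (4.0 / 3.0) * math.pi * radius_cm ** 3
    n_particles = total_mass_g / (density_g_cm3 * particle_volume)
    return n_particles * 4.0 * math.pi * radius_cm ** 2

# One gram of plastic: a single ~1 cm pellet vs. 100-micrometre microplastics
pellet_area = total_surface_area(1.0, radius_cm=0.5)
micro_area = total_surface_area(1.0, radius_cm=0.005)
print(round(pellet_area, 1), round(micro_area, 1))
```

Shrinking the radius by a factor of 100 multiplies the colonizable surface by the same factor, which is why fragmented debris offers such a large habitat for biofilms.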
As a result of their unique structure and composition, the microbial communities in MP biofilms are referred to as the “Plastisphere.” While there are increasing data regarding the distinctive composition and structure of the microbial communities that form part of the plastisphere, scarce information exists regarding the activity of microorganisms in MP biofilms. This surface-attached lifestyle is often associated with an increase in horizontal gene transfer (HGT) among bacteria; this type of microbial activity therefore represents a relevant function worth analyzing in MP biofilms. The horizontal exchange of mobile genetic elements (MGEs) is an essential feature of bacteria, accounting for the rapid evolution of these prokaryotes and their adaptation to a wide variety of environments. The process of HGT is also crucial for the spread of antibiotic resistance and for the evolution of pathogens, as many MGEs are known to contain antibiotic resistance genes (ARGs) and genetic determinants of pathogenicity.
In general, the research presented in this Ph.D. thesis focuses on the analysis of HGT and heterotrophic activity in MP biofilms in aquatic ecosystems. The primary objective was to analyze the potential for gene exchange in MP bacterial communities vs. that of the surrounding water, including bacteria from natural aggregates. Moreover, the thesis addressed the potential of MP biofilms for the proliferation of biohazardous bacteria and MGEs originating from wastewater treatment plants (WWTPs) and associated with antibiotic resistance. Finally, it sought to test whether the physiological profile of MP biofilms under different limnological conditions diverges from that of the water communities. Accordingly, the thesis is composed of three independent studies published in peer-reviewed journals. The two laboratory studies were performed using both model and environmental microbial communities; in the field experiment, natural communities from freshwater ecosystems were examined.
In Chapter I, the inflow of treated wastewater into a temperate lake was simulated with a concentration gradient of MP particles. The effects of MP on the microbial community structure and the occurrence of the integrase 1 gene (int1) were followed. int1 is a marker associated with mobile genetic elements and a known proxy for anthropogenic effects on the spread of antimicrobial resistance genes. During the experiment, the abundance of int1 increased in the plastisphere with increasing MP particle concentration, but not in the surrounding water. In addition, the microbial community on MP became more similar to the original wastewater community with increasing microplastic concentrations. Our results show that microplastic particles indeed promote the persistence of standard indicators of microbial anthropogenic pollution in natural waters.
In Chapter II, the experiments aimed to compare the permissiveness of aquatic bacteria towards the model antibiotic resistance plasmid pKJK5 between communities that form biofilms on MP and those that are free-living. The frequency of plasmid transfer was higher in bacteria associated with MP than in bacteria that were free-living or in natural aggregates. Moreover, this increased gene exchange occurred in a broad range of phylogenetically diverse bacteria. The results indicate a distinct activity of HGT in MP biofilms, which could affect the ecology of aquatic microbial communities on a global scale and the spread of antibiotic resistance.
Finally, in Chapter III, physiological measurements were performed to assess whether microorganisms on MP had a different functional diversity from those in water. General heterotrophic activity such as oxygen consumption was compared in microcosm assays with and without MP, while the diversity and richness of heterotrophic activities were calculated using Biolog® EcoPlates. Three lakes with different nutrient statuses presented differences in MP-associated biomass build-up. Functional diversity profiles of MP biofilms in all lakes differed from those of the communities in the surrounding water, but only in the oligo-mesotrophic lake did MP biofilms have a higher functional richness compared to the ambient water. The results support the view that MP surfaces act as new niches for aquatic microorganisms and can affect the global carbon dynamics of pelagic environments.
Overall, the experimental work presented in Chapters I and II supports a scenario in which MP pollution affects HGT dynamics among aquatic bacteria. Among the consequences of this alteration is an increase in the mobilization and transfer efficiency of ARGs. Moreover, such changes in HGT can affect the evolution of bacteria and the processing of organic matter, leading to different catabolic profiles, as demonstrated in Chapter III. The results are discussed in the context of the fate and magnitude of plastic pollution and the importance of HGT for bacterial evolution and the microbial loop, i.e., the base of aquatic food webs. The thesis supports a relevant role of MP biofilm communities in the changes observed in the aquatic microbiome as a product of intense human intervention.
Microbial processing of organic matter (OM) in the freshwater biosphere is a key component of global biogeochemical cycles. Freshwaters receive and process substantial amounts of leaf OM from their terrestrial landscape. These terrestrial subsidies provide an essential source of energy and nutrients to the aquatic environment as a function of heterotrophic processing by fungi and bacteria. Particularly in freshwaters with low in-situ primary production by algae (microalgae, cyanobacteria), microbial turnover of leaf OM significantly contributes to the productivity and functioning of freshwater ecosystems, and not least to their contribution to global carbon cycling.
Based on differences in their chemical composition, it is believed that leaf OM is less bioavailable to microbial heterotrophs than OM photosynthetically produced by algae. Especially particulate leaf OM, consisting predominantly of structurally complex and aromatic polymers, is assumed to be highly resistant to enzymatic breakdown by microbial heterotrophs. However, recent research has demonstrated that OM produced by algae promotes the heterotrophic breakdown of leaf OM in aquatic ecosystems, with profound consequences for the metabolism of leaf carbon (C) within microbial food webs. In my thesis, I aimed to investigate the underlying mechanisms of this so-called priming effect of algal OM on the use of leaf C in natural microbial communities, focusing on fungi and bacteria.
The work in my thesis underlines that algal OM provides highly bioavailable compounds to the microbial community that are quickly assimilated by bacteria (Paper II). The substrate composition of OM pools determines the proportions of fungi and bacteria within the microbial community (Paper I). The fraction of algal OM in the aquatic OM pool thereby stimulates the activity, and hence the contribution, of bacterial communities to leaf C turnover by providing an essential energy and nutrient source for the assimilation of the structurally complex leaf OM substrate. In contrast, the assimilation of algal OM remains limited for fungal communities as a function of nutrient competition between fungi and bacteria (Paper I, II). In addition, the results provide evidence that environmental conditions determine the strength of interactions between microalgae and heterotrophic bacteria during leaf OM decomposition (Paper I, III). However, the stimulatory effect of algal photoautotrophic activity on leaf C turnover remained significant even under highly dynamic environmental conditions, highlighting its functional role for ecosystem processes (Paper III).
The results of my thesis provide insights into the mechanisms by which algae affect the microbial turnover of leaf C in freshwaters. This in turn contributes to a better understanding of the function of algae in freshwater biogeochemical cycles, especially with regard to their interaction with the heterotrophic community.
Properly designed (randomized and/or balanced) experiments are standard in ecological research. Molecular methods are increasingly used in ecology, but studies generally do not report the detailed design of sample processing in the laboratory. This may strongly influence the interpretability of results if the laboratory procedures do not account for the confounding effects of unexpected laboratory events. We demonstrate this with a simple experiment in which unexpected differences in the laboratory processing of samples would have biased the results had randomization in the DNA extraction and PCR steps not provided safeguards. We emphasize the need for proper experimental design and reporting of the laboratory phase of molecular ecology research to ensure the reliability and interpretability of results.
It is of major interest to estimate the feedback of arctic ecosystems to the global warming expected in upcoming decades. The speed of this response is driven by the potential of species to migrate, tracking their climate optimum. For this, sessile plants have to produce and disperse seeds to newly available habitats, and pollination of ovules is needed for the seeds to be viable. These two processes are also the vectors that pass genetic information through a population; a restricted exchange among subpopulations might lead to a maladapted population due to diversity losses. Hence, a realistic implementation of these dispersal processes into a simulation model would allow an assessment of the importance of diversity for the migration of plant species in various environments worldwide. To date, dynamic global vegetation models have been optimized for global application and overestimate the speed of biome shifts under currently warming temperatures. We hypothesize that this is caused by neglecting important fine-scale processes, which are necessary to estimate realistic vegetation trajectories. Recently, we built and parameterized the simulation model LAVESI for the larches that dominate the latitudinal treelines in the northernmost areas of Siberia. In this study, we updated the vegetation model by including seed and pollen dispersal driven by wind speed and direction. Seed dispersal is modelled as a ballistic flight, and for the pollination of the ovules of seeds produced, we implemented a wind-determined and distance-dependent probability distribution function using a von Mises distribution to select the pollen donor. A local sensitivity analysis of both processes supported the robustness of the model's results to the parameterization, although it highlighted the importance of recruitment and seed dispersal traits for migration rates.
This individual-based and spatially explicit implementation of both dispersal processes makes it straightforward to pass on plant traits and genetic information, allowing an assessment of the impact of migration processes on population genetics. Finally, we suggest how the model can be applied to help unveil the important drivers of migration dynamics and, in doing so, to guide the improvement of current global vegetation models.
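The wind-determined, distance-dependent donor selection described above can be sketched as combining a von Mises angular weight (centred on the wind direction) with a distance-decay kernel into a per-candidate sampling probability. The parameter values (`kappa`, `mean_dist`) and the exponential distance kernel are illustrative assumptions, not LAVESI's actual parameterization:

```python
import numpy as np

def donor_probabilities(mother_xy, donors_xy, wind_dir_rad,
                        kappa=2.0, mean_dist=30.0):
    """Probability of each candidate tree fathering a seed: an
    (unnormalised) von Mises density over the mother->donor bearing,
    centred on the wind direction, times an exponential distance decay.
    Illustrative sketch only."""
    vec = donors_xy - mother_xy
    dist = np.hypot(vec[:, 0], vec[:, 1])
    bearing = np.arctan2(vec[:, 1], vec[:, 0])
    ang_w = np.exp(kappa * np.cos(bearing - wind_dir_rad))  # von Mises kernel
    w = ang_w * np.exp(-dist / mean_dist)                   # distance decay
    return w / w.sum()

def choose_pollen_donor(mother_xy, donors_xy, wind_dir_rad, rng):
    """Draw one donor index according to the combined probabilities."""
    p = donor_probabilities(mother_xy, donors_xy, wind_dir_rad)
    return rng.choice(len(donors_xy), p=p)
```

With an easterly wind direction (0 rad), a donor lying due east of the mother receives the largest weight, while an upwind donor at the same distance receives the smallest.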
Understanding wave environments is critical to understanding how particles are accelerated and lost in space. This study shows that in the vicinity of Europa and Ganymede, which have induced and internal magnetic fields, respectively, chorus wave power is significantly increased. The observed enhancements are persistent and exceed median values of wave activity by up to six orders of magnitude for Ganymede. The produced waves may have a pronounced effect on the acceleration and loss of particles in the Jovian magnetosphere and at other astrophysical objects. The generated waves are capable of significantly modifying the energetic particle environment, accelerating particles to very high energies, or producing depletions in phase space density. Observations of Jupiter's magnetosphere provide a unique opportunity to observe how objects with an internal magnetic field interact with particles trapped in the magnetic fields of larger-scale objects.
In this study, we analyze interactions in lake and lake catchment systems of a continuous permafrost area. We assessed colored dissolved organic matter (CDOM) absorption at 440 nm (a_CDOM(440)) and absorption slope (S_300-500) in lakes using field sampling and optical remote sensing data for an area of 350 km² in Central Yamal, Siberia. Applying a CDOM algorithm (ratio of green and red band reflectance) to two high spatial resolution multispectral GeoEye-1 and WorldView-2 satellite images, we were able to extrapolate the a_CDOM(440) data from 18 lakes sampled in the field to 356 lakes in the study area (model R² = 0.79). Values of a_CDOM(440) in the 356 lakes varied from 0.48 to 8.35 m⁻¹ with a median of 1.43 m⁻¹. This a_CDOM(440) dataset was used to relate lake CDOM to 17 lake and lake catchment parameters derived from optical and radar remote sensing data and from digital elevation model analysis, in order to establish the parameters controlling CDOM in lakes on the Yamal Peninsula. Regression tree and boosted regression tree analyses showed that the activity of cryogenic processes (thermocirques) on the lake shores and lake water level were the two most important controls, explaining 48.4% and 28.4% of lake CDOM, respectively (R² = 0.61). Activation of thermocirques led to a large input of terrestrial organic matter and sediments from catchments and thawed permafrost into lakes (n = 15, mean a_CDOM(440) = 5.3 m⁻¹). Large lakes on the floodplain with a connection to the Mordy-Yakha River received more CDOM (n = 7, mean a_CDOM(440) = 3.8 m⁻¹) compared to lakes located on higher terraces.
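The core of such a band-ratio CDOM retrieval can be sketched as a regression of field-measured a_CDOM(440) on the green/red reflectance ratio, applied to every lake in the satellite scene. The log-linear functional form and all coefficients here are illustrative assumptions, not the study's fitted model:

```python
import numpy as np

def fit_cdom_model(green, red, a_cdom_440):
    """Fit ln(a_CDOM(440)) = b1 * (green/red) + b0 to field samples.
    Illustrative log-linear form; the published model may differ."""
    b1, b0 = np.polyfit(green / red, np.log(a_cdom_440), 1)
    return b0, b1

def predict_cdom(green, red, b0, b1):
    """Apply the fitted model to lake-mean reflectances from the scene."""
    return np.exp(b0 + b1 * (green / red))
```

Fitting on the ~18 field-sampled lakes and then predicting for all lakes mirrors the extrapolation step described in the abstract.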
TerraSAR-X time series fill a gap in spaceborne snowmelt monitoring of small Arctic catchments
(2018)
The timing of snowmelt is an important turning point in the seasonal cycle of small Arctic catchments. The TerraSAR-X (TSX) satellite mission is a synthetic aperture radar (SAR) system with high potential to measure the high spatiotemporal variability of snow cover extent (SCE) and fractional snow cover (FSC) at the small catchment scale. We investigate the performance of multi-polarized and multi-pass TSX X-band SAR data in monitoring SCE and FSC in small Arctic tundra catchments of Qikiqtaruk (Herschel Island) off the Yukon Coast in the western Canadian Arctic. We applied threshold-based segmentation to ratio images between TSX images with wet snow and a dry-snow reference, and tested the performance of two different thresholds. We quantitatively compared TSX- and Landsat 8-derived SCE maps using confusion matrices and analyzed the spatiotemporal dynamics of snowmelt from 2015 to 2017 using TSX, Landsat 8 and in situ time-lapse data. Our data showed that the quality of SCE maps from TSX X-band data is strongly influenced by polarization and, to a lesser degree, by incidence angle. VH-polarized TSX data performed best in deriving SCE when compared to Landsat 8. TSX-derived SCE maps from VH polarization detected late-lying snow patches that were not detected by Landsat 8. A local assessment of TSX FSC against the in situ data showed that TSX FSC accurately captured the temporal dynamics of different snowmelt regimes related to the topographic characteristics of the studied catchments. Both in situ and TSX FSC showed a longer snowmelt period in a catchment with higher contributions of steep valleys and a shorter snowmelt period in a catchment with higher contributions of upland terrain. Landsat 8 had fundamental data gaps during the snowmelt period in all three years due to cloud cover.
The results also revealed that choosing a positive threshold of 1 dB, which captures ice layers formed by diurnal temperature variations, resulted in a more accurate estimation of snow cover than a negative threshold that detects wet snow alone. We find that TSX X-band data in VH polarization performs at a quality comparable to Landsat 8 in deriving SCE maps when the positive threshold is used. We conclude that VH-polarized TSX data can be used to accurately monitor snowmelt events at high temporal and spatial resolution, overcoming the limitations of Landsat 8, which, due to cloud-related data gaps, generally indicated only the onset and end of snowmelt.
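A minimal version of the ratio-image thresholding described above can be sketched as follows. In dB, the ratio of a melt-season scene to the dry-snow reference becomes a difference image, which is then compared against the threshold. The decision rule (change staying below the +1 dB threshold counted as snow-covered) is an illustrative assumption, not necessarily the study's exact classifier:

```python
import numpy as np

def snow_mask(scene_db, dry_ref_db, threshold_db=1.0):
    """Threshold-based segmentation of a dB ratio (difference) image.
    Pixels whose backscatter change relative to the dry-snow reference
    stays below the threshold are flagged as snow-covered; this rule
    is an illustrative assumption."""
    ratio_db = scene_db - dry_ref_db
    return ratio_db < threshold_db

def fractional_snow_cover(mask):
    """Fraction of catchment pixels classified as snow-covered (FSC)."""
    return float(mask.mean())
```

Running `snow_mask` on each acquisition of a TSX time series and aggregating with `fractional_snow_cover` yields an FSC time series per catchment, analogous to the comparison against the in situ time-lapse data.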
Changes in species' distributions are classically projected based on their climate envelopes. For Siberian forests, which have a tremendous significance for vegetation-climate feedbacks, this implies future shifts of each of the forest-forming larch (Larix) species to the north-east. However, in addition to abiotic factors, reliable projections must assess the role of historical biogeography and biotic interactions. Here, we use sedimentary ancient DNA and individual-based modelling to investigate the distribution of larch species and mitochondrial haplotypes through space and time across the treeline ecotone on the southern Taymyr Peninsula, which also represents a boundary area between two larch species. We find spatial and temporal patterns which suggest that forest density is the most influential driver determining the precise distribution of species and mitochondrial haplotypes. This suggests a strong influence of competition on the species' range shifts. These findings imply possible climate change outcomes that are directly opposed to projections based purely on climate envelopes. Investigations of such fine-scale processes of biodiversity change through time are possible using palaeoenvironmental DNA, which is available much more readily than visible fossils and can provide information at a level of resolution not reached in classical palaeoecology.
The nucleus of Hen 2-428 is a short orbital period (4.2 h) spectroscopic binary, whose status as a potential type Ia supernova (SN Ia) progenitor has raised some controversy in the literature. We present preliminary results of a thorough analysis of this interesting system, which combines quantitative non-local thermodynamic equilibrium (non-LTE) spectral modelling, radial-velocity analysis, multi-band light-curve fitting, and state-of-the-art stellar evolutionary calculations. Importantly, we find that the dynamical system mass derived using all available He II lines does not exceed the Chandrasekhar mass limit. Furthermore, the individual masses of the two central stars are too small to lead to an SN Ia in the case of a dynamical explosion during the merger process.
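For context, the dynamical system mass of a double-lined spectroscopic binary follows from the standard relation below (textbook physics, not a result specific to the paper), with the radial-velocity semi-amplitudes K_1 and K_2 measured from the He II lines, the orbital period P, and the inclination i constrained by the light-curve fit:

```latex
M_1 + M_2 = \frac{P \, (K_1 + K_2)^3}{2 \pi G \sin^3 i}
```

This is why a revised measurement of the He II line velocities can directly change the comparison of the total mass against the Chandrasekhar limit.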
Ice-wedge polygons are common features of northeastern Siberian lowland periglacial tundra landscapes. To deduce the formation and alteration of ice-wedge polygons in the Kolyma Delta and in the Indigirka Lowland, we studied shallow cores, up to 1.3 m deep, from polygon center and rim locations. The formation of well-developed low-center polygons with elevated rims and wet centers is shown by the beginning of peat accumulation, increased organic matter contents, and changes in vegetation cover from Poaceae-, Alnus-, and Betula-dominated pollen spectra to dominant Cyperaceae and Botryococcus presence and Carex and Drepanocladus revolvens macrofossils. Thecamoebae data support such a change from wetland to open-water conditions in polygon centers through changes from dominantly eurybiontic and sphagnobiontic to hydrobiontic species assemblages. The peat accumulation indicates low-center polygon formation and started between 2380 ± 30 and 1676 ± 32 years before present (BP) in the Kolyma Delta. We recorded an opposite change from open-water to wetland conditions, because of rim degradation and consecutive high-center polygon formation, in the Indigirka Lowland between 2144 ± 33 and 1632 ± 32 years BP. The late Holocene records of polygon landscape development reveal changes in local hydrology and soil moisture.
High-latitude treeless ecosystems represent spatially highly heterogeneous landscapes with small net carbon fluxes and a short growing season. Reliable observations and process understanding are critical for projections of the carbon balance of the climate-sensitive tundra. Space-borne remote sensing is the only tool to obtain spatially continuous and temporally resolved information on vegetation greenness and activity in remote circumpolar areas. However, confounding effects from persistent clouds, low sun elevation angles, numerous lakes, widespread surface inundation, and the sparseness of the vegetation render it highly challenging. Here, we conduct an extensive analysis of the timing of peak vegetation productivity as shown by satellite observations of complementary indicators of plant greenness and photosynthesis. We focus on productivity during the peak of the growing season, as it strongly affects the total annual carbon uptake. The suite of indicators is as follows: (1) MODIS-based vegetation indices (VIs) as proxies for the fraction of incident photosynthetically active radiation (PAR) that is absorbed (fPAR); (2) VIs combined with estimates of PAR as a proxy of the total absorbed radiation (APAR); (3) sun-induced chlorophyll fluorescence (SIF), serving as a proxy for photosynthesis; (4) vegetation optical depth (VOD), indicative of total water content; and (5) empirically upscaled modelled gross primary productivity (GPP). Averaged over the pan-Arctic, we find a clear order of the annual peaks: APAR ≤ GPP < SIF < VIs/VOD. SIF as an indicator of photosynthesis is maximised around the time of highest annual temperatures. The modelled GPP peaks at a similar time to APAR. The time lag of the annual peak between APAR and instantaneous SIF fluxes indicates that the SIF data do contain information on the light-use efficiency of tundra vegetation, but further detailed studies are necessary to verify this.
Delayed peak greenness compared to peak photosynthesis is consistently found across years and land-cover classes. A particularly late peak of the normalised difference vegetation index (NDVI) in regions with very small seasonality in greenness and a high amount of lakes probably originates from artefacts. Given the very short growing season in circumpolar areas, the average time difference in maximum annual photosynthetic activity and greenness or growth of 3 to 25 days (depending on the data sets chosen) is important and needs to be considered when using satellite observations as drivers in vegetation models.
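The reported lag between peak photosynthesis and peak greenness amounts to locating the day of year (DOY) at which each indicator's seasonal curve is maximised and differencing the two. A minimal sketch on synthetic smooth seasonal curves (the data below are hypothetical, not the study's):

```python
import numpy as np

def peak_doy(doy, series):
    """Day of year at which a seasonal indicator reaches its maximum."""
    return int(doy[np.argmax(series)])

def peak_lag_days(doy, photosynthesis, greenness):
    """Positive when greenness (e.g. NDVI) peaks after photosynthesis
    (e.g. SIF), matching the delayed-greenness pattern in the text."""
    return peak_doy(doy, greenness) - peak_doy(doy, photosynthesis)
```

For real satellite time series the curves would first be gap-filled and smoothed before taking the argmax, since cloud-related gaps can shift the apparent peak.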