The subject of this work is the investigation of universal scaling laws which are observed in coupled chaotic systems. Progress is made by replacing the chaotic fluctuations in the perturbation dynamics by stochastic processes. First, a continuous-time stochastic model for weakly coupled chaotic systems is introduced to study the scaling of the Lyapunov exponents with the coupling strength (coupling sensitivity of chaos). By means of the Fokker-Planck equation, scaling relations are derived, which are confirmed by results of numerical simulations. Next, the new effect of avoided crossing of Lyapunov exponents of weakly coupled disordered chaotic systems is described, which is qualitatively similar to the energy level repulsion in quantum systems. Using the scaling relations obtained for the coupling sensitivity of chaos, an asymptotic expression for the distribution function of small spacings between Lyapunov exponents is derived and compared with results of numerical simulations. Finally, the synchronization transition in strongly coupled spatially extended chaotic systems is shown to resemble a continuous phase transition, with the coupling strength and the synchronization error as control and order parameter, respectively. Using results of numerical simulations and theoretical considerations in terms of a multiplicative noise partial differential equation, the universality classes of the two observed types of transition are determined (Kardar-Parisi-Zhang equation with saturating term, directed percolation).
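The quantities discussed in the abstract can be reproduced numerically. The sketch below (not from the thesis; the choice of map, coupling form, and parameters is purely illustrative) estimates both Lyapunov exponents of two symmetrically coupled logistic maps by evolving two tangent vectors with Gram-Schmidt reorthogonalisation:

```python
import math
import random

def f(x):
    """Fully chaotic logistic map."""
    return 4.0 * x * (1.0 - x)

def df(x):
    """Derivative of the logistic map."""
    return 4.0 - 8.0 * x

def lyapunov_pair(eps, n=20000, transient=1000, seed=1):
    """Estimate both Lyapunov exponents of two symmetrically coupled
    logistic maps, x' = (1-eps) f(x) + eps f(y) and y' = (1-eps) f(y)
    + eps f(x), via Gram-Schmidt reorthogonalisation of tangent vectors."""
    rng = random.Random(seed)
    x, y = rng.random(), rng.random()
    u, v = (1.0, 0.0), (0.0, 1.0)   # tangent-space basis
    s1 = s2 = 0.0
    for i in range(n + transient):
        a, b = df(x), df(y)
        # Jacobian of the coupled map at (x, y)
        j11, j12, j21, j22 = (1 - eps) * a, eps * b, eps * a, (1 - eps) * b
        u = (j11 * u[0] + j12 * u[1], j21 * u[0] + j22 * u[1])
        v = (j11 * v[0] + j12 * v[1], j21 * v[0] + j22 * v[1])
        # Gram-Schmidt: normalise u, then orthonormalise v against it
        nu = math.hypot(u[0], u[1])
        u = (u[0] / nu, u[1] / nu)
        proj = v[0] * u[0] + v[1] * u[1]
        v = (v[0] - proj * u[0], v[1] - proj * u[1])
        nv = math.hypot(v[0], v[1])
        v = (v[0] / nv, v[1] / nv)
        x, y = (1 - eps) * f(x) + eps * f(y), (1 - eps) * f(y) + eps * f(x)
        if i >= transient:
            s1 += math.log(nu)
            s2 += math.log(nv)
    return s1 / n, s2 / n
```

Sweeping `eps` toward zero and inspecting the splitting of the two exponents would exhibit the singular dependence on the coupling strength (coupling sensitivity of chaos) that the thesis studies analytically.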
The levels of environmental light experienced by organisms during their behavioral activity phase deeply influence the performance of important ecological tasks. As a result, their shape and coloring may experience a light-driven selection process via day-night rhythmic behavior. In this study, we tested the phenotypic and genetic variability of the western Mediterranean squat lobster (Munida tenuimana), sampled at depths with different photic conditions and, potentially, different burrow emergence rhythms. We performed day-night hauling at different depths, above and below the end of the twilight zone (i.e., 700 m, 1200 m, 1350 m, and 1500 m), to portray the occurrence of any burrow emergence rhythmicity. Collected animals were screened for shape and size (by geometric morphometry), spectrum and color variation (by photometric analysis), as well as for sequence variation at the mitochondrial DNA gene encoding the NADH dehydrogenase subunit I. We found that weak genetic structuring and shape homogeneity occurred together with significant variations in size, with the smaller individuals living at the lower limit of the twilight zone and the larger individuals above and below it. The infra-red wavelengths of spectral reflectance varied significantly with depth, while the blue-green ones were size-dependent and expressed in smaller animals, which have a very low spectral reflectance. The effects of solar and bioluminescent lighting are discussed as depth-dependent evolutionary forces likely influencing the behavioral rhythms and coloring of M. tenuimana.
Massive stars that become stripped of their hydrogen envelope through binary interaction or winds can be observed either as Wolf-Rayet stars, if they have optically thick winds, or as transparent-wind stripped-envelope stars. We approximate their evolution through evolutionary models of single helium stars, and compute detailed model grids in the initial mass range 1.5-70 M☉ for metallicities between 0.01 and 0.04, from core helium ignition until core collapse. Throughout their lifetimes, some stellar models expose the ashes of helium burning. We propose that models that have nitrogen-rich envelopes are candidate WN stars, while models with a carbon-rich surface are candidate WC stars during core helium burning, and WO stars afterwards. We measure the metallicity dependence of the total lifetimes of our models and the duration of their evolutionary phases. We propose an analytic estimate of the wind's optical depth to distinguish models of Wolf-Rayet stars from transparent-wind stripped-envelope stars, and find that the luminosity ranges at which WN-, WC-, and WO-type stars can exist are a strong function of metallicity. We find that all carbon-rich models produced in our grids have optically thick winds and match the luminosity distribution of observed populations. We construct population models and predict the numbers of transparent-wind stripped-envelope stars and Wolf-Rayet stars, and derive their number ratios at different metallicities. We find that as metallicity increases, the number of transparent-wind stripped-envelope stars decreases and the number of Wolf-Rayet stars increases. At high metallicities, WC- and WO-type stars become more common. We apply our population models to nearby galaxies, and find that populations are more sensitive to the transition luminosity between Wolf-Rayet stars and transparent-wind helium stars than to the metallicity-dependent mass loss rates.
A common feature in Answer Set Programming is the use of a second negation, stronger than default negation and sometimes called explicit, strong or classical negation. This explicit negation is normally used in front of atoms only, rather than being allowed as a regular operator. In this paper we consider the arbitrary combination of explicit negation with nested expressions, such as those defined by Lifschitz, Tang and Turner. We extend the concept of reduct for this new syntax and then prove that it can be captured by an extension of Equilibrium Logic with this second negation. We study some properties of this variant and compare it to the already known combination of Equilibrium Logic with Nelson's strong negation.
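For readers unfamiliar with the reduct being generalised: for normal programs without nested expressions it is the classical Gelfond-Lifschitz construction. A minimal sketch (the program encoding as tuples is ours, not from the paper):

```python
def reduct(program, interp):
    """Gelfond-Lifschitz reduct of a normal logic program with respect to
    a set of atoms.  Each rule is (head, positive_body, negated_body)."""
    out = []
    for head, pos, neg in program:
        if any(a in interp for a in neg):
            continue                      # rule deleted by the reduct
        out.append((head, pos))           # 'not' literals removed
    return out

def minimal_model(definite_rules):
    """Least model of a negation-free program, by fixpoint iteration."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in definite_rules:
            if head not in model and all(a in model for a in pos):
                model.add(head)
                changed = True
    return model

def is_stable(program, interp):
    """interp is a stable model iff it is the least model of its own reduct."""
    return minimal_model(reduct(program, interp)) == set(interp)
```

For the program {p :- not q.  q :- not p.}, encoded as `[("p", [], ["q"]), ("q", [], ["p"])]`, the stable models are {p} and {q}; the paper's contribution is extending this construction to formulas where explicit negation appears inside arbitrary nested expressions.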
In this work we tackle the problem of checking strong equivalence of logic programs that may contain local auxiliary atoms, which are to be removed from their stable models and forbidden in any external context. We call this property projective strong equivalence (PSE). It has recently been proved that not every logic program containing auxiliary atoms can be reformulated, under PSE, as another logic program or formula without them – this is known as strongly persistent forgetting. In this paper, we introduce a conservative extension of Equilibrium Logic and its monotonic basis, the logic of Here-and-There, in which we deal with a new connective ‘|’ we call fork. We provide a semantic characterisation of PSE for forks and use it to show that, in this extension, it is always possible to forget auxiliary atoms under strong persistence. We further define when the obtained fork is representable as a regular formula.
The color red has been implicated in a variety of social processes, including those involving mating. While previous research suggests that women sometimes wear red strategically to increase their attractiveness, the replicability of this literature has been questioned. The current research is a reasonably powered conceptual replication designed to strengthen this literature by testing whether women are more inclined to display the color red (1) during fertile (as compared with less fertile) days of the menstrual cycle, and (2) when expecting to interact with an attractive man (as compared with a less attractive man and with a control condition). Analyses controlled for a number of theoretically relevant covariates (relationship status, age, current weather). Only the latter hypothesis received mixed support, mainly among women on hormonal birth control: women (N = 281) displayed more red when expecting to interact with an attractive man, whereas findings did not support the prediction that women would increase their display of red on fertile days of the cycle. Findings thus suggested only mixed replicability for the link between the color red and psychological processes involving romantic attraction. They also illustrate the importance of further investigating the boundary conditions of color effects on everyday social processes.
Simultaneous Barcode Sequencing of Diverse Museum Collection Specimens Using a Mixed RNA Bait Set
(2022)
A growing number of publications presenting results from sequencing natural history collection specimens reflect the importance of DNA sequence information from such samples. Ancient DNA extraction and library preparation methods in combination with target gene capture are a way of unlocking archival DNA, including from formalin-fixed wet-collection material. Here we report on an experiment in which we used an RNA bait set containing baits from a wide taxonomic range of species for DNA hybridisation capture of nuclear and mitochondrial targets for analysing natural history collection specimens. The bait set used consists of 2,492 mitochondrial and 530 nuclear RNA baits and comprises specific barcode loci of diverse animal groups including both invertebrates and vertebrates. The baits allowed us to capture DNA sequence information of target barcode loci from 84% of the 37 samples tested, with nuclear markers being captured more frequently and consensus sequences of these being more complete compared to mitochondrial markers. Samples from dry material had a higher rate of success than wet-collection specimens, although target sequence information could be captured from 50% of formalin-fixed samples. Our study illustrates how efforts to obtain barcode sequence information from natural history collection specimens may be combined and are a way of implementing barcoding inventories of scientific collection material.
Etmopteridae (lantern sharks) is the most species-rich family of sharks, comprising more than 50 species. Many species are described from few individuals, and re-collection of specimens is often hindered by the remoteness of their sampling sites. For taxonomic studies, comparative morphological analysis of type specimens housed in natural history collections has been the main source of evidence, whereas DNA sequence information has rarely been used. Most lantern shark collection specimens, including the types, were formalin-fixed before long-term storage in ethanol solutions. The DNA damage caused by both fixation and preservation has so far excluded these specimens from DNA sequence-based phylogenetic analyses. However, recent advances in the field of ancient DNA have allowed recovery of DNA sequence data from wet-collection specimens. Here we analyse archival mitochondrial DNA sequences, obtained using ancient DNA approaches, of two wet-collection lantern shark paratype specimens, namely Etmopterus litvinovi and E. pycnolepis, for which the type series represent the only known individuals. Target capture of mitochondrial markers from single-stranded DNA libraries allows for phylogenetic placement of both species. Our results suggest synonymy of E. benchleyi with E. litvinovi but support the species status of E. pycnolepis. This revised taxonomy is helpful for future conservation and management efforts, as our results indicate a larger distribution range of E. litvinovi. This study further demonstrates the importance of wet-collection type specimens as a genetic resource for taxonomic research.
The degree of detrimental effects inflicted on mankind by the COVID-19 pandemic increased the need to develop ASSURED (Affordable, Sensitive, Specific, User-friendly, Rapid and Robust, Equipment-free, and Deliverable) point-of-care testing (POCT) to overcome the current and any future pandemics. Much effort in research and development is currently advancing the progress to overcome the diagnostic pressure built up by emerging new pathogens. LAMP (loop-mediated isothermal amplification) is a well-researched isothermal technique for specific nucleic acid amplification which can be combined with a highly sensitive immunochromatographic readout via lateral flow assays (LFA). Here we discuss LAMP-LFA robustness, sensitivity, and specificity for SARS-CoV-2 N-gene detection in cDNA and clinical swab-extracted RNA samples. The LFA readout is designed to produce highly specific results by incorporation of biotin and FITC labels into 11-dUTP and the LF (loop forming forward) primer, respectively. The LAMP-LFA assay was established using cDNA for the N-gene with an accuracy of 95.65%. To validate the study, 82 SARS-CoV-2-positive RNA samples were tested. Reverse transcriptase (RT)-LAMP-LFA was positive for the RNA samples with an accuracy of 81.66%; SARS-CoV-2 viral RNA was detected by RT-LAMP-LFA down to a Ct value of 33. Our method reduced the detection time to 15 min and therefore indicates that RT-LAMP in combination with LFA represents a promising nucleic acid biosensing POCT platform that can be combined with smartphone-based semi-quantitative data analysis.
This study focuses on three key aspects: (a) crude throat swab samples in a viral transport medium (VTM) as templates for RT-LAMP reactions; (b) a biotinylated DNA probe with enhanced specificity for LFA readouts; and (c) a digital semi-quantification of LFA readouts. Throat swab samples from SARS-CoV-2 positive and negative patients were used in their crude (no cleaning or pre-treatment) forms for the RT-LAMP reaction. The samples were heat-inactivated but not treated for any kind of nucleic acid extraction or purification. The RT-LAMP (20 min processing time) product was read out by an LFA approach using two labels: FITC and biotin. FITC was enzymatically incorporated into the RT-LAMP amplicon with the LF-LAMP primer, and biotin was introduced using biotinylated DNA probes, specifically for the amplicon region after RT-LAMP amplification. This assay setup with biotinylated DNA probe-based LFA readouts of the RT-LAMP amplicon was 98.11% sensitive and 96.15% specific. The LFA result was further analysed by a smartphone-based IVD device, wherein the T-line intensity was recorded. The LFA T-line intensity was then correlated with the qRT-PCR Ct value of the positive swab samples. A digital semi-quantification of RT-LAMP-LFA was reported with a correlation coefficient of R2 = 0.702. The overall RT-LAMP-LFA assay time was recorded to be 35 min with a LoD of three RNA copies/µL (Ct-33). With these three advancements, the nucleic acid testing-point of care technique (NAT-POCT) is exemplified as a versatile biosensor platform with great potential and applicability for the detection of pathogens without the need for sample storage, transportation, or pre-processing.
Functional characterization of ROS-responsive genes, ANAC085 and ATR7, in Arabidopsis thaliana
(2023)
Hydrometric networks play a vital role in providing information for decision-making in water resource management. They should be set up optimally to provide as much information as possible, as accurately as possible, and, at the same time, be cost-effective. Although the design of hydrometric networks is a well-identified problem in hydrometeorology and has received considerable attention, there is still scope for further advancement. In this study, we use complex network analysis, in which a network is defined as a collection of nodes interconnected by links, to propose a new measure that identifies critical nodes of station networks. The approach can support the design and redesign of hydrometric station networks. The science of complex networks is a relatively young field and has gained significant momentum over the last few years in areas such as brain networks, social networks, technological networks, and climate networks. The identification of influential nodes in complex networks is an important field of research. We propose a new node-ranking measure – the weighted degree–betweenness (WDB) measure – to evaluate the importance of nodes in a network. It is compared to previously proposed measures on synthetic sample networks and then applied to a real-world rain gauge network comprising 1229 stations across Germany to demonstrate its applicability. The proposed measure is evaluated using the decline rate of the network efficiency and the kriging error. The results suggest that WDB effectively quantifies the importance of rain gauges, although the benefits of the method need to be investigated in more detail.
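The two centralities that WDB combines can be computed on a toy graph as follows. The exact WDB weighting is defined in the paper; the score used below, degree times (1 + betweenness), is a hypothetical stand-in chosen only to illustrate combining the two measures. Betweenness is computed with Brandes' algorithm:

```python
from collections import deque

def betweenness(adj):
    """Brandes' algorithm for node betweenness on an undirected,
    unweighted graph given as {node: set(neighbours)}."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack, pred = [], {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        q = deque([s])
        while q:                          # BFS from source s
            v = q.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        delta = {v: 0.0 for v in adj}
        while stack:                      # back-propagate dependencies
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return {v: b / 2.0 for v, b in bc.items()}   # undirected: halve

def wdb_rank(adj):
    """Rank nodes by a stand-in combined score: degree * (1 + betweenness)."""
    bc = betweenness(adj)
    score = {v: len(adj[v]) * (1.0 + bc[v]) for v in adj}
    return sorted(score, key=score.get, reverse=True)
```

On two triangles joined through a bridge node, the ranking places the bridge and its gateway nodes at the top, which is the behaviour one wants from a criticality measure for station networks.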
The temporal dynamics of climate processes are spread across different timescales and, as such, the study of these processes at only one selected timescale might not reveal the complete mechanisms and interactions within and between the (sub-) processes. To capture the non-linear interactions between climatic events, the method of event synchronization has received increasing attention recently. The main drawback of the present estimation of event synchronization is its restriction to analysing the time series at one reference timescale only. The study of event synchronization at multiple scales would be of great interest to comprehend the dynamics of the investigated climate processes. In this paper, the wavelet-based multi-scale event synchronization (MSES) method is proposed by combining the wavelet transform and event synchronization. Wavelets are used extensively to comprehend multi-scale processes and the dynamics of processes across various timescales. The proposed method allows the study of spatio-temporal patterns across different timescales. The method is tested on synthetic and real-world time series in order to check its replicability and applicability. The results indicate that MSES is able to capture relationships that exist between processes at different timescales.
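The single-scale building block being generalised here is event synchronization in the sense of Quiroga and co-workers; MSES applies such a measure to wavelet-filtered series at each scale. A minimal sketch with a fixed coincidence window `tau` (the original measure uses a locally adaptive window, so event sequences here should be sparse relative to `tau`):

```python
import math

def event_sync(tx, ty, tau):
    """Quiroga-style event synchronization between two lists of event
    times.  Returns Q in [0, 1]; Q = 1 means every event in one series
    has a counterpart in the other within the window tau."""
    def c(a, b):
        # count events in a that follow an event in b within tau
        total = 0.0
        for ti in a:
            for tj in b:
                d = ti - tj
                if 0 < d <= tau:
                    total += 1.0
                elif d == 0:
                    total += 0.5   # simultaneous events count half each way
        return total
    denom = math.sqrt(len(tx) * len(ty))
    return (c(tx, ty) + c(ty, tx)) / denom if denom else 0.0
```

In the multi-scale setting, one would first band-pass the two series with a wavelet transform and then apply such a synchronization measure to the event sequences extracted at each scale separately.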
Quantifying the roles of single stations within homogeneous regions using complex network analysis
(2018)
Regionalization and pooling stations to form homogeneous regions or communities are essential for reliable parameter transfer, prediction in ungauged basins, and estimation of missing information. Over the years, several clustering methods have been proposed for regional analysis. Most of these methods are able to quantify the study region in terms of homogeneity but fail to provide microscopic information about the interaction between communities, as well as about each station within the communities. We propose a complex network-based approach to extract this valuable information and demonstrate the potential of our approach using a rainfall network constructed from the Indian gridded daily precipitation data. The communities were identified using the network-theoretical community detection algorithm for maximizing the modularity. Further, the grid points (nodes) were classified into universal roles according to their pattern of within- and between-community connections. The method thus yields zoomed-in details of individual rainfall grids within each community.
In recent years, complex network analysis facilitated the identification of universal and unexpected patterns in complex climate systems. However, the analysis and representation of the multiscale relationships that exist in the global climate system are limited. A logical first step in addressing this issue is to construct multiple networks over different timescales. Therefore, we propose to apply the wavelet multiscale correlation (WMC) similarity measure, which is a combination of two state-of-the-art methods, viz. wavelets and Pearson’s correlation, for investigating multiscale processes through complex networks. First, we decompose the data over different timescales using the wavelet approach and subsequently construct a corresponding network using Pearson’s correlation. The proposed approach is illustrated and tested on two synthetic examples and one real-world example. The first synthetic case study shows the efficacy of the proposed approach in unravelling scale-specific connections, which are often undiscovered at a single scale. The second synthetic case study illustrates that by dividing and constructing a separate network for each time window we can detect significant changes in the signal structure. The real-world example investigates the behavior of the global sea surface temperature (SST) network at different timescales. Intriguingly, we notice that the spatial dependence structure in SST evolves temporally. Overall, the proposed measure has an immense potential to provide essential insights for understanding and extending complex multivariate process studies at multiple scales.
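The two-step construction, wavelet decomposition followed by scale-wise Pearson correlation, can be sketched as follows. A Haar wavelet stands in for whichever wavelet the paper uses, and the function names are ours:

```python
import math

def haar_details(x, levels):
    """Multilevel Haar wavelet decomposition: returns the detail
    coefficients at each level.  len(x) must be divisible by 2**levels."""
    out, approx = [], list(x)
    for _ in range(levels):
        n = len(approx) // 2
        detail = [(approx[2*i] - approx[2*i+1]) / math.sqrt(2) for i in range(n)]
        approx = [(approx[2*i] + approx[2*i+1]) / math.sqrt(2) for i in range(n)]
        out.append(detail)
    return out

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    sa = math.sqrt(sum((u - ma) ** 2 for u in a))
    sb = math.sqrt(sum((v - mb) ** 2 for v in b))
    return cov / (sa * sb) if sa and sb else 0.0

def wmc(x, y, levels=3):
    """Scale-wise correlation: Pearson correlation between the Haar
    detail coefficients of x and y at each decomposition level."""
    return [pearson(dx, dy)
            for dx, dy in zip(haar_details(x, levels), haar_details(y, levels))]
```

A network is then built per level by thresholding the pairwise scale-wise correlations between all grid-point time series, giving one climate network per timescale.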
Hydrologic regionalization deals with the investigation of homogeneity in watersheds and provides a classification of watersheds for regional analysis. The classification thus obtained can be used as a basis for mapping data from gauged to ungauged sites and can improve extreme event prediction. This paper proposes a wavelet power spectrum (WPS) approach coupled with the self-organizing map method for clustering hydrologic catchments, implemented here for gauged catchments. As a test case study, monthly streamflow records observed at 117 selected catchments throughout the western United States from 1951 through 2002 are used. Based on the WPS of each station, catchments are classified into homogeneous clusters, which provides a representative WPS pattern for the streamflow stations in each cluster.
The quantification of spatial propagation of extreme precipitation events is vital in water resources planning and disaster mitigation. However, quantifying these extreme events has always been challenging as many traditional methods are insufficient to capture the nonlinear interrelationships between extreme event time series. Therefore, it is crucial to develop suitable methods for analyzing the dynamics of extreme events over a river basin with a diverse climate and complicated topography. Over the last decade, complex network analysis emerged as a powerful tool to study the intricate spatiotemporal relationship between many variables in a compact way. In this study, we employ two nonlinear concepts of event synchronization and edit distance to investigate the extreme precipitation pattern in the Ganga river basin. We use the network degree to understand the spatial synchronization pattern of extreme rainfall and identify essential sites in the river basin with respect to potential prediction skills. The study also attempts to quantify the influence of precipitation seasonality and topography on extreme events. The findings of the study reveal that (1) the network degree is decreased in the southwest to northwest direction, (2) the timing of 50th percentile precipitation within a year influences the spatial distribution of degree, (3) the timing is inversely related to elevation, and (4) the lower elevation greatly influences connectivity of the sites. The study highlights that edit distance could be a promising alternative to analyze event-like data by incorporating event time and amplitude and constructing complex networks of climate extremes.
Sea surface temperature (SST) patterns can – as surface climate forcing – affect weather and climate at large distances. One example is the El Niño-Southern Oscillation (ENSO), which causes climate anomalies around the globe via teleconnections. Although several studies have identified and characterized these teleconnections, our understanding of climate processes remains incomplete, since interactions and feedbacks are typically exhibited at unique or multiple temporal and spatial scales. This study characterizes the interactions between the cells of a global SST data set at different temporal and spatial scales using climate networks. These networks are constructed using wavelet multi-scale correlation, which investigates the correlation between the SST time series at a range of scales, allowing deeper insights into the correlation patterns than traditional methods like empirical orthogonal functions or classical correlation analysis. This allows us to identify and visualise regions of – at a certain timescale – similarly evolving SSTs and distinguish them from those with long-range teleconnections to other ocean regions. Our findings re-confirm accepted knowledge about known highly linked SST patterns like ENSO and the Pacific Decadal Oscillation, but also suggest new insights into the characteristics and origins of long-range teleconnections, such as the connection between ENSO and the Indian Ocean Dipole.
The climate is a complex dynamical system involving interactions and feedbacks among different processes at multiple temporal and spatial scales. Although numerous studies have attempted to understand the climate system, studies investigating its multiscale characteristics are scarce. Further, the present set of techniques is limited in its ability to unravel the multi-scale variability of the climate system. It is entirely plausible that extreme events and abrupt transitions, which are of great interest to the climate community, result from interactions among processes operating at multiple scales. For instance, storms, weather patterns, seasonal irregularities such as El Niño, floods and droughts, and decades-long climate variations can be better understood and even predicted by quantifying their multi-scale dynamics. This makes a strong argument for unravelling the interactions and patterns of climatic processes at different scales. With this background, the thesis aims at developing measures to understand and quantify multi-scale interactions within the climate system.
In the first part of the thesis, I proposed two new methods, viz. multi-scale event synchronization (MSES) and wavelet multi-scale correlation (WMC), to capture the scale-specific features present in climatic processes. The proposed methods were tested on various synthetic and real-world time series in order to check their applicability and replicability. The results indicate that both methods (WMC and MSES) capture the scale-specific associations between processes at different time scales in more detail than their traditional single-scale counterparts.
In the second part of the thesis, the proposed multi-scale similarity measures were used to construct climate networks and to investigate the evolution of spatial connections within climatic processes at multiple timescales. The proposed methods WMC and MSES, together with complex networks, were applied to two different datasets.
In the first application, climate networks based on WMC were constructed for univariate global sea surface temperature (SST) data to identify and visualize the SST patterns that develop very similarly over time and to distinguish them from those that have long-range teleconnections to other ocean regions. Further investigation of the climate networks on different timescales revealed (i) various high-variability and co-variability regions, and (ii) short- and long-range teleconnection regions with varying spatial distance. The outcomes of the study not only re-confirmed existing knowledge on the link between SST patterns such as the El Niño Southern Oscillation and the Pacific Decadal Oscillation, but also suggested new insights into the characteristics and origins of long-range teleconnections.
In the second application, I used the developed non-linear MSES similarity measure to quantify the multivariate teleconnections between extreme Indian precipitation and the climatic patterns most relevant for the Indian subcontinent. The results confirmed significant non-linear influences that were not well captured by traditional methods. Further, there was substantial variation in the strength and nature of the teleconnections across India and across time scales.
Overall, the results of the investigations conducted in this thesis strongly highlight the need to consider the multi-scale aspects of climatic processes, and the proposed methods provide a robust framework for quantifying these multi-scale characteristics.
Education in the knowledge society faces many problems; in particular, the interaction between teacher and learner in social networking software is a key factor affecting learning and learner satisfaction (Prammanee, 2005), where "to teach is to communicate, to communicate is to interact, to interact is to learn" (Hefzallah, 2004, p. 48). Analyzing the relation between teacher-learner interaction on the one hand and learning outcome and learner satisfaction on the other, some basic problems regarding a new learning culture using social networking software are discussed. Most educational institutions pay close attention to equipment and emerging Information and Communication Technologies (ICTs) in learning situations. They try to incorporate ICT into their institutions as teaching and learning environments because they expect that doing so will improve the outcome of the learning process. Despite this, the learning outcome reported in most studies is very limited, because the expectations of self-directed learning are much higher than the reality. Findings from an empirical study, investigating the role of teacher-learner interaction through the new digital medium wiki in higher education and its effect on learning outcome and learner satisfaction, are presented together with recommendations on the necessity of pedagogical interactions in support of teaching and learning activities in wiki courses in order to improve the learning outcome. The conclusions show the necessity of significant changes in vocational teacher training programs for online teachers in order to meet the requirements of new digital media in coherence with a new learning culture. These changes have to address collaborative instead of individual learning, and the wiki as a tool for knowledge construction instead of a tool for gathering information.
Numerical simulation of fluid-flow processes in a 3D high-resolution carbonate reservoir analogue
(2014)
A high-resolution three-dimensional (3D) outcrop model of a Jurassic carbonate ramp was used in order to perform a series of detailed and systematic flow simulations. The aim of this study was to test the impact of small- and large-scale geological features on reservoir performance and oil recovery. The digital outcrop model contains a wide range of sedimentological, diagenetic and structural features, including discontinuity surfaces, shoal bodies, mud mounds, oyster bioherms and fractures. Flow simulations are performed for numerical well testing and secondary oil recovery. Numerical well testing enables synthetic but systematic pressure responses to be generated for different geological features observed in the outcrops. This allows us to assess and rank the relative impact of specific geological features on reservoir performance. The outcome documents that, owing to the realistic representation of matrix heterogeneity, most diagenetic and structural features cannot be linked to a unique pressure signature. Instead, reservoir performance is controlled by subseismic faults and oyster bioherms acting as thief zones. Numerical simulations of secondary recovery processes reveal strong channelling of fluid flow into high-permeability layers as the primary control for oil recovery. However, appropriate reservoir-engineering solutions, such as optimizing well placement and injection fluid, can reduce channelling and increase oil recovery.
Pancreatic steatosis associates with beta-cell failure and may participate in the development of type-2-diabetes. Our previous studies have shown that diabetes-susceptible mice accumulate more adipocytes in the pancreas than diabetes-resistant mice. In addition, we have demonstrated that the co-culture of pancreatic islets and adipocytes affects insulin secretion. The aim of this current study was to elucidate whether and to what extent pancreas-resident mesenchymal stromal cells (MSCs) with adipogenic progenitor potential differ from the corresponding stromal-type cells of the inguinal white adipose tissue (iWAT). miRNA (miRNome) and mRNA expression (transcriptome) analyses of MSCs isolated by flow cytometry from both tissues revealed 121 differentially expressed miRNAs and 1227 differentially expressed genes (DEGs). Target prediction analysis estimated 510 DEGs to be regulated by 58 differentially expressed miRNAs. Pathway analyses of DEGs and miRNA target genes showed unique transcriptional and miRNA signatures in pancreas MSCs (pMSCs) and iWAT MSCs (iwatMSCs), for instance fibrogenic and adipogenic differentiation, respectively. Accordingly, iwatMSCs revealed a higher adipogenic lineage commitment, whereas pMSCs showed elevated fibrogenesis. As a low degree of adipogenesis was also observed in pMSCs of diabetes-susceptible mice, we conclude that the development of pancreatic steatosis must be induced by other factors not related to cell-autonomous transcriptomic changes and miRNA-based signals.
Type 2 diabetes (T2D) is a complex metabolic disease regulated by an interaction of genetic predisposition and environmental factors. To understand the genetic contribution to the development of diabetes, mice varying in their disease susceptibility were crossed with the obese and diabetes-prone New Zealand obese (NZO) mouse. Subsequent whole-genome sequence scans revealed one major quantitative trait locus (QTL), Nidd/DBA, on chromosome 4, linked to elevated blood glucose, reduced plasma insulin, and low levels of pancreatic insulin. Phenotypic characterization of congenic mice carrying 13.6 Mbp of the critical DBA fragment revealed severe hyperglycemia and impaired glucose clearance at week 10, a decreased glucose response in week 13, and loss of beta-cells and pancreatic insulin in week 16. To identify the responsible gene variant(s), further congenic mice were generated and phenotyped, which resulted in a fragment of 3.3 Mbp that was sufficient to induce hyperglycemia. By combining transcriptome analysis and haplotype mapping, the number of putative responsible variants was narrowed from initially 284 to 18 genes, including gene models and non-coding RNAs. Consideration of haplotype blocks reduced the number of candidate genes to four (Kti12, Osbpl9, Ttc39a, and Calr4) as potential T2D candidates, as they display differential expression in pancreatic islets and/or sequence variation. In conclusion, the integration of comparative analyses of multiple inbred populations, such as haplotype mapping, transcriptomics, and sequence data, substantially improved the mapping resolution of the diabetes QTL Nidd/DBA. Future studies are necessary to understand the exact role of the different candidates in beta-cell function and their contribution to maintaining glycemic control.
Diabetes is a major public health problem with increasing global prevalence. Type 2 diabetes (T2D), which accounts for 90% of all diagnosed cases, is a complex polygenic disease also modulated by epigenetics and lifestyle factors. For the identification of T2D-associated genes, linkage analyses combined with mouse breeding strategies and bioinformatic tools were useful in the past. In a previous study in which a backcross population of the lean and diabetes-prone dilute brown non-agouti (DBA) mouse and the obese and diabetes-susceptible New Zealand obese (NZO) mouse was characterized, a major diabetes quantitative trait locus (QTL) was identified on chromosome 4. The locus was designated non-insulin dependent diabetes from DBA (Nidd/DBA). The aim of this thesis was (i) to perform a detailed phenotypic characterization of the Nidd/DBA mice, (ii) to further narrow the critical region and (iii) to identify the responsible genetic variant(s) of the Nidd/DBA locus. The phenotypic characterization of recombinant congenic mice carrying a 13.6 Mbp Nidd/DBA fragment with 284 genes presented a gradually worsening metabolic phenotype. Nidd/DBA allele carriers exhibited severe hyperglycemia (~19.9 mM) and impaired glucose clearance at 12 weeks of age. Ex vivo perifusion experiments with islets of 13-week-old congenic mice revealed a tendency towards reduced insulin secretion in homozygous DBA mice. In addition, 16-week-old mice showed a severe loss of β-cells and reduced pancreatic insulin content. Pathway analysis of transcriptome data from islets of congenic mice pointed towards a downregulation of cell survival genes. Morphological analysis of pancreatic sections displayed a reduced number of bi-hormonal cells co-expressing glucagon and insulin in homozygous DBA mice, which could indicate a reduced plasticity of endocrine cells in response to hyperglycemic stress. 
Further generation and phenotyping of recombinant congenic mice enabled the isolation of a 3.3 Mbp fragment that was still able to induce hyperglycemia and contained 61 genes. Bioinformatic analyses including haplotype mapping, sequence and transcriptome analysis were integrated in order to further reduce the number of candidate genes and to identify the presumable causative gene variant. Four putative candidate genes (Ttc39a, Kti12, Osbpl9, Calr4) were defined, which were either differentially expressed or carried a sequence variant. In addition, in silico ChIP-Seq analyses of the 3.3 Mbp region indicated a high number of SNPs located in active regions of binding sites of β-cell transcription factors. This points towards potentially altered cis-regulatory elements that could be responsible for the phenotype conferred by the Nidd/DBA locus. In summary, the Nidd/DBA locus mediates impaired glucose homeostasis and reduced insulin secretion capacity which finally leads to β-cell death. The downregulation of cell survival genes and reduced plasticity of endocrine cells could further contribute to the β-cell loss. The critical region was narrowed down to a 3.3 Mbp fragment containing 61 genes, of which four might be involved in the development of the diabetogenic Nidd/DBA phenotype.
Information on structural features of a fracture network at early stages of Enhanced Geothermal System development is mostly restricted to borehole images and, if available, outcrop data. However, using this information to image discontinuities in deep reservoirs is difficult. Wellbore failure data provides only some information on components of the in situ stress state and its heterogeneity. Our working hypothesis is that slip on natural fractures primarily controls these stress heterogeneities. Based on this, we introduce stress-based tomography in a Bayesian framework to characterize the fracture network and its heterogeneity in potential Enhanced Geothermal System reservoirs. In this procedure, first a random initial discrete fracture network (DFN) realization is generated based on prior information about the network. The observations needed to calibrate the DFN are based on local variations of the orientation and magnitude of at least one principal stress component along boreholes. A Markov Chain Monte Carlo sequence is employed to update the DFN iteratively by a fracture translation within the domain. The Markov sequence compares the simulated stress profile with the observed stress profiles in the borehole, evaluates each iteration with Metropolis-Hastings acceptance criteria, and stores acceptable DFN realizations in an ensemble. Finally, this obtained ensemble is used to visualize the potential occurrence of fractures in a probability map, indicating possible fracture locations and lengths. We test this methodology to reconstruct simple synthetic and more complex outcrop-based fracture networks and successfully image the significant fractures in the domain.
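The Markov Chain Monte Carlo update described above can be sketched on a toy problem. The following is a hypothetical, heavily simplified stand-in for the paper's stress-based tomography: the "forward model", the Gaussian stress perturbation, the noise levels, and the two-fracture setup are all assumptions for illustration, not the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
z = np.linspace(0.0, 1.0, 50)   # normalized depth along a synthetic borehole

def stress_profile(fracs, width=0.05):
    """Toy forward model: each fracture adds a local Gaussian stress perturbation."""
    s = np.zeros_like(z)
    for p in fracs:
        s += np.exp(-0.5 * ((z - p) / width) ** 2)
    return s

true_fracs = [0.3, 0.7]
observed = stress_profile(true_fracs) + 0.01 * rng.standard_normal(z.size)

def log_likelihood(fracs, sigma=0.1):
    """Gaussian misfit between simulated and observed stress profiles."""
    return -0.5 * np.sum(((observed - stress_profile(fracs)) / sigma) ** 2)

# Markov chain: translate one randomly chosen fracture per iteration and
# accept or reject the move with the Metropolis-Hastings criterion.
fracs = rng.uniform(0.0, 1.0, size=2)   # random initial DFN realization
ll0 = ll = log_likelihood(fracs)
best_ll, ensemble = ll, []
for _ in range(5000):
    proposal = fracs.copy()
    i = rng.integers(proposal.size)
    proposal[i] = np.clip(proposal[i] + 0.05 * rng.standard_normal(), 0.0, 1.0)
    ll_new = log_likelihood(proposal)
    if np.log(rng.uniform()) < ll_new - ll:   # Metropolis-Hastings acceptance
        fracs, ll = proposal, ll_new
        best_ll = max(best_ll, ll)
    ensemble.append(fracs.copy())
# In the actual workflow, the accepted realizations in `ensemble` are
# aggregated into a probability map of fracture locations and lengths.
```

The chain drifts from the random initial realization toward fracture positions whose simulated stress profile matches the observed one, and the stored ensemble approximates the posterior over fracture configurations.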
With the recent growth of sensors, cloud computing handles the data processing of many applications. Processing some of this data on the cloud raises, however, many concerns regarding, e.g., privacy, latency, or single points of failure. Alternatively, thanks to the development of embedded systems, smart wireless devices can share their computation capacity, creating a local wireless cloud for in-network processing. In this context, the processing of an application is divided into smaller jobs so that a device can run one or more jobs.
The contribution of this thesis to this scenario is divided into three parts. In part one, I focus on wireless aspects, such as power control and interference management, for deciding which jobs to run on which node and how to route data between nodes. Hence, I formulate optimization problems and develop heuristic and meta-heuristic algorithms to allocate wireless and computation resources. Additionally, to deal with multiple applications competing for these resources, I develop a reinforcement learning (RL) admission controller to decide which application should be admitted. Next, I look into acoustic applications to improve wireless throughput by using microphone clock synchronization to synchronize wireless transmissions.
In the second part, I work jointly with colleagues from the acoustic processing field to optimize both network and application (i.e., acoustic) qualities. My contribution focuses on the network part, where I study the relation between acoustic and network qualities when selecting a subset of microphones for collecting audio data or a subset of optional jobs for processing these data; too many microphones or too many jobs can lessen quality through unnecessary delays. Hence, I develop RL solutions to select the subset of microphones under network constraints while the speaker is moving, while still providing good acoustic quality. Furthermore, I show that autonomous vehicles carrying microphones improve the acoustic quality of different applications. Accordingly, I develop RL solutions (single- and multi-agent ones) for controlling these vehicles.
In the third part, I close the gap between theory and practice. I describe the features of my open-source framework used as a proof of concept for wireless in-network processing. Next, I demonstrate how to run some algorithms developed by colleagues from acoustic processing using my framework. I also use the framework for studying in-network delays (wireless and processing) using different distributions of jobs and network topologies.
Parsing of argumentative structures has become a very active line of research in recent years. Like discourse parsing or any other natural language task that requires prediction of linguistic structures, most approaches choose to learn a local model and then perform global decoding over the local probability distributions, often imposing constraints that are specific to the task at hand. Specifically for argumentation parsing, two decoding approaches have been recently proposed: Minimum Spanning Trees (MST) and Integer Linear Programming (ILP), following similar trends in discourse parsing. In contrast to discourse parsing though, where trees are not always used as underlying annotation schemes, argumentation structures so far have always been represented with trees. Using the 'argumentative microtext corpus' [in: Argumentation and Reasoned Action: Proceedings of the 1st European Conference on Argumentation, Lisbon 2015 / Vol. 2, College Publications, London, 2016, pp. 801-815] as underlying data and replicating three different decoding mechanisms, in this paper we propose a novel ILP decoder and an extension to our earlier MST work, and then thoroughly compare the approaches. The result is that our new decoder outperforms related work in important respects, and that in general, ILP and MST yield very similar performance.
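To make the global decoding step concrete, here is a small hypothetical sketch, not the paper's actual MST or ILP decoder: it performs exact arborescence decoding by brute-force enumeration of head assignments over the local model's edge scores, which is feasible only because argumentative microtexts contain few units. MST and ILP decoders solve the same maximization efficiently.

```python
from itertools import product

def decode_tree(scores):
    """Exact tree decoding by enumeration: choose a head for every non-root
    node (node 0 is an artificial root) so that the result is a valid
    arborescence with maximal total score. scores[h][d] is the local model's
    score for an edge from head h to dependent d."""
    n = len(scores)
    best, best_heads = float("-inf"), None
    for heads in product(range(n), repeat=n - 1):   # heads[d-1] = head of node d
        # Reject assignments containing cycles: every node must reach the root.
        ok = True
        for d in range(1, n):
            seen, cur = set(), d
            while cur != 0:
                if cur in seen or heads[cur - 1] == cur:
                    ok = False
                    break
                seen.add(cur)
                cur = heads[cur - 1]
            if not ok:
                break
        if not ok:
            continue
        total = sum(scores[heads[d - 1]][d] for d in range(1, n))
        if total > best:
            best, best_heads = total, heads
    return best, best_heads

# Toy local scores for three argumentative units plus the artificial root:
scores = [
    [0, 5, 1, 1],
    [0, 0, 4, 2],
    [0, 3, 0, 6],
    [0, 1, 2, 0],
]
best, heads = decode_tree(scores)  # best tree: 0 -> 1 -> 2 -> 3, score 15
```

The decoders compared in the paper replace this exponential enumeration with a maximum-spanning-arborescence algorithm (MST) or a constrained optimization over edge variables (ILP), but the objective being maximized is the same.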
The behaviour of individuals, businesses, and government entities before, during, and immediately after a disaster can dramatically affect the impact and recovery time. However, existing risk-assessment methods rarely include this critical factor. In this Perspective, we show why this is a concern, and demonstrate that although initial efforts have inevitably represented human behaviour in limited terms, innovations in flood-risk assessment that integrate societal behaviour and behavioural adaptation dynamics into such quantifications may lead to more accurate characterization of risks and improved assessment of the effectiveness of risk-management strategies and investments. Such multidisciplinary approaches can inform flood-risk management policy development.
Technical report
(2019)
Design and implementation of service-oriented architectures pose a huge number of research questions from the fields of software engineering, system analysis and modeling, adaptability, and application integration. Component orientation and web services are two approaches for the design and realization of complex web-based systems. Both approaches allow for dynamic application adaptation as well as integration of enterprise applications.
Commonly used technologies, such as J2EE and .NET, form de facto standards for the realization of complex distributed systems. The evolution of component systems has led to web services and service-based architectures. This is manifested in a multitude of industry standards and initiatives such as XML, WSDL, UDDI, and SOAP. All these achievements lead to a new and promising paradigm in IT systems engineering which proposes to design complex software solutions as the collaboration of contractually defined software services.
Service-Oriented Systems Engineering represents a symbiosis of best practices in object-orientation, component-based development, distributed computing, and business process management. It provides integration of business and IT concerns.
The annual Ph.D. Retreat of the Research School provides each member the opportunity to present the current state of his or her research and to give an outline of a prospective Ph.D. thesis. Due to the interdisciplinary structure of the research school, this technical report covers a wide range of topics. These include but are not limited to: Human Computer Interaction and Computer Vision as Service; Service-oriented Geovisualization Systems; Algorithm Engineering for Service-oriented Systems; Modeling and Verification of Self-adaptive Service-oriented Systems; Tools and Methods for Software Engineering in Service-oriented Systems; Security Engineering of Service-based IT Systems; Service-oriented Information Systems; Evolutionary Transition of Enterprise Applications to Service Orientation; Operating System Abstractions for Service-oriented Computing; and Services Specification, Composition, and Enactment.
Accurately predicting total electron content (TEC) during geomagnetic storms is still a challenging task for ionospheric models. In this work, a neural-network (NN)-based model is proposed which predicts relative TEC with respect to the preceding 27-day median TEC during storm time for the European region (longitudes 30 degrees W to 50 degrees E, latitudes 32.5 degrees N to 70 degrees N). The 27-day median TEC (referred to as median TEC), latitude, longitude, universal time, storm time, solar radio flux index F10.7, global storm index SYM-H, and geomagnetic activity index Hp30 are used as inputs, and the output of the network is the relative TEC. The relative TEC can be converted to the actual TEC knowing the median TEC. The median TEC is calculated at each grid point over the European region from the 27 days preceding the storm using global ionosphere maps (GIMs) from International GNSS Service (IGS) sources. A storm event is defined as occurring when the storm-time disturbance index Dst drops below -50 nanotesla. The model was trained with storm-time relative TEC data from 1998 until 2019 (excluding 2015), comprising 365 storms. Unseen data from 33 storm events during 2015 and 2020 were used to test the model. The UQRG GIMs were used because of their high temporal resolution (15 min) compared to products from other analysis centers. The NN-based model predictions show the seasonal behavior of the storms, including positive and negative storm phases during winter and summer, respectively, and a mixture of both phases during the equinoxes. The model's performance was also compared with the Neustrelitz TEC model (NTCM) and the NN-based quiet-time TEC model, both developed at the German Aerospace Center (DLR). The storm model has a root mean squared error (RMSE) of 3.38 TEC units (TECU), an improvement of 1.87 TECU over the NTCM, for which an RMSE of 5.25 TECU was found. This improvement corresponds to a performance increase of 35.6%. The storm-time model outperforms the quiet-time model by 1.34 TECU, which corresponds to a performance increase of 28.4%, from 4.72 to 3.38 TECU. The quiet-time model was trained with Carrington-averaged TEC and is therefore well suited to be used as an input instead of the GIM-derived 27-day median. We found an improvement of 0.8 TECU, corresponding to a performance increase of 17%, from 4.72 to 3.92 TECU, for the storm-time model using the quiet-time-model predicted TEC as input compared to solely using the quiet-time model.
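The conversion between relative and actual TEC and the quoted improvement percentages can be reproduced with simple arithmetic. The abstract does not state the exact definition of "relative TEC", so the difference-based definition below is an assumption for illustration; the RMSE values are those reported in the text.

```python
# Assumed definition: relative TEC is the difference to the 27-day median,
# so converting back to actual TEC is a simple addition.
def to_relative(tec, median_tec):
    return tec - median_tec

def to_absolute(rel_tec, median_tec):
    return rel_tec + median_tec

def improvement(rmse_ref, rmse_model):
    """Relative RMSE reduction in percent."""
    return 100.0 * (rmse_ref - rmse_model) / rmse_ref

# Reproduce the performance increases quoted in the text:
vs_ntcm = improvement(5.25, 3.38)   # NTCM -> storm model, approx. 35.6 %
vs_quiet = improvement(4.72, 3.38)  # quiet-time -> storm model, approx. 28.4 %
vs_input = improvement(4.72, 3.92)  # quiet-time output as model input, approx. 17 %
```

Checking the arithmetic this way confirms that the three percentages in the abstract are consistent with the reported RMSE values.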
Background:
Childhood and adolescence are critical life stages for mental health and well-being, and schools are a key setting for mental health promotion and illness prevention. One in five children and adolescents have a mental disorder, and about half of all mental disorders begin before the age of 14. Beneficial and explainable artificial intelligence can replace current paper-based and online approaches to school mental health surveys, enhancing data acquisition, interoperability, data-driven analysis, trust, and compliance. This paper presents a model that uses chatbots for non-obtrusive data collection and supervised machine learning models for data analysis, and discusses ethical considerations pertaining to the use of these models.
Methods:
For data acquisition, the proposed model uses chatbots which interact with students. The conversation log acts as the source of raw data for the machine learning. Pre-processing of the data is automated by filtering for keywords and phrases.
Existing survey results, obtained through current paper-based data collection methods, are evaluated by domain experts (health professionals). These can be used to create a test dataset to validate the machine learning models. Supervised learning can then be deployed to classify specific behaviour and mental health patterns.
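The keyword-based pre-processing step can be sketched as follows. This is a minimal illustration under stated assumptions: the keyword lexicon, the log format (speaker/text pairs), and the feature layout are all hypothetical, not the paper's actual pipeline.

```python
import re

# Hypothetical keyword lexicon, for illustration only.
KEYWORDS = {"sleep", "stress", "lonely", "worried", "happy"}

def preprocess_log(conversation_log):
    """Filter a chatbot conversation log down to the student turns that
    contain lexicon keywords; the filtered turns become the raw features
    for the supervised classifier."""
    filtered = []
    for speaker, text in conversation_log:
        if speaker != "student":
            continue  # only student turns carry survey-relevant content
        tokens = set(re.findall(r"[a-z']+", text.lower()))
        hits = tokens & KEYWORDS
        if hits:
            filtered.append({"text": text, "keywords": sorted(hits)})
    return filtered

# Assumed log format: (speaker, utterance) pairs.
log = [
    ("bot", "How have you been feeling this week?"),
    ("student", "I feel a lot of stress and I can't sleep well."),
    ("student", "My classes are fine."),
]
features = preprocess_log(log)
```

Turns without lexicon hits are dropped, so only keyword-bearing student utterances reach the downstream supervised models.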
Results:
We present a model that can be used to improve upon current paper-based data collection and manual data analysis methods. An open-source GitHub repository contains the necessary tools and components of this model. Privacy is respected through rigorous observance of confidentiality and data protection requirements. Critical reflection on these ethical and legal aspects is included in the project.
Conclusions:
This model strengthens mental health surveillance in schools. The same tools and components could be applied to other public health data. Future extensions of this model could also incorporate unsupervised learning to find clusters and patterns of unknown effects.
In clinical settings, significant resources are spent on data collection and monitoring patients' health parameters to improve decision-making and provide better care. With increased digitization, the healthcare sector is shifting towards implementing digital technologies for data management and administration. New technologies offer better treatment opportunities and streamline clinical workflows, but their complexity can cause ineffectiveness, frustration, and errors. To address this, we believe digital solutions alone are not sufficient. Therefore, we take a human-centred design approach to AI development and apply systems engineering methods to identify system leverage points. We demonstrate how automation enables the monitoring of clinical parameters using existing non-intrusive sensor technology, freeing more resources for patient care. Furthermore, we provide a framework for the digitization of clinical data for integration with data management.
The subsurface harbors a large fraction of Earth's living biomass, forming complex microbial ecosystems. Without a profound knowledge of the ongoing biologically mediated processes and their reaction to anthropogenic changes it is difficult to assess the long-term stability and feasibility of any type of geotechnical utilization, as these influence subsurface ecosystems. Despite recent advances in many areas of subsurface microbiology, the direct quantification of turnover processes is still in its infancy, mainly due to the extremely low cell abundances. We provide an overview of the currently available techniques for the quantification of microbial turnover processes and discuss their specific strengths and limitations. Most techniques employed so far have focused on specific processes, e.g. sulfate reduction or methanogenesis. Recent studies show that processes that were previously thought to exclude each other can occur simultaneously, albeit at very low rates. Without the identification of the respective processes it is impossible to quantify total microbial activity. Even in cases where all simultaneously occurring processes can be identified, the typically very low rates prevent quantification. In many cases a simple measure of total microbial activity would be a better and more robust measure than assays for several specific processes. Enzyme or molecular assays provide a more general approach as they target key metabolic compounds. Depending on the compound targeted a broader spectrum of microbial processes can be quantified. The two most promising compounds are ATP and hydrogenase, as both are ubiquitous in microbes. Technical constraints limit the applicability of currently available ATP-assays for subsurface samples. A recently developed hydrogenase radiotracer assay has the potential to become a key tool for the quantification of subsurface microbial activity.
Subsurface microbial communities undertake many terminal electron-accepting processes, often simultaneously. Using a tritium-based assay, we measured the potential hydrogen oxidation catalyzed by hydrogenase enzymes in several subsurface sedimentary environments (Lake Van, Barents Sea, Equatorial Pacific, and Gulf of Mexico) with different predominant electron acceptors. Hydrogenases constitute a diverse family of enzymes expressed by microorganisms that utilize molecular hydrogen as a metabolic substrate, product, or intermediate. The assay reveals the potential for utilizing molecular hydrogen and allows qualitative detection of microbial activity irrespective of the predominant electron-accepting process. Because the method only requires samples frozen immediately after recovery, the assay can be used for identifying microbial activity in subsurface ecosystems without the need to preserve live material. We measured potential hydrogen oxidation rates in all samples from multiple depths at several sites that collectively span a wide range of environmental conditions and biogeochemical zones. Potential activity normalized to total cell abundance ranges over five orders of magnitude and varies dependent upon the predominant terminal electron acceptor. The lowest per-cell potential rates characterize the zone of nitrate reduction, and the highest per-cell potential rates occur in the methanogenic zone. Possible reasons for this relationship to the predominant electron acceptor include (i) the increasing importance of fermentation in successively deeper biogeochemical zones and (ii) the adaptation of hydrogenases to successively higher concentrations of H2 in successively deeper zones.
Metabolically active microbial communities are present in a wide range of subsurface environments. Techniques like enumeration of microbial cells, activity measurements with radiotracer assays and the analysis of porewater constituents are currently being used to explore the subsurface biosphere, alongside molecular biological analyses. However, many of these techniques reach their detection limits due to low microbial activity and abundance. Direct measurements of microbial turnover not only face issues of insufficient sensitivity; they also provide information about just a single specific process, whereas in sediments many different processes can occur simultaneously. Therefore, the development of a new technique to measure total microbial activity would be a major improvement. A new tritium-based hydrogenase-enzyme assay appeared to be a promising tool to quantify total living biomass, even in low-activity subsurface environments. In this PhD project, total microbial biomass and microbial activity were quantified in different subsurface sediments using established techniques (cell enumeration and pore water geochemistry) as well as a new tritium-based hydrogenase enzyme assay. Using a large database of our own cell enumeration data from equatorial Pacific and north Pacific sediments together with published data, it was shown that the global geographic distribution of subseafloor sedimentary microbes varies between sites by 5 to 6 orders of magnitude and correlates with the sedimentation rate and distance from land. Based on these correlations, global subseafloor biomass was estimated to be 4.1 petagrams of carbon, or ~0.6% of Earth's total living biomass, which is significantly lower than previous estimates. Despite the massive reduction in biomass, the subseafloor biosphere is still an important player in global biogeochemical cycles.
To understand the relationship between microbial activity, abundance and organic matter flux into the sediment, an expedition to the equatorial Pacific upwelling area and the north Pacific Gyre was carried out. Oxygen respiration rates in subseafloor sediments from the north Pacific Gyre, which are deposited at sedimentation rates of 1 mm per 1000 years, showed that microbial communities can survive for millions of years without fresh supply of organic carbon. In contrast to the north Pacific Gyre, oxygen was completely depleted within the upper few millimeters to centimeters of sediments in the equatorial upwelling region, due to a higher supply of organic matter and higher metabolic activity. The occurrence and variability of electron acceptors with depth and between sites thus make the subsurface a complex environment for the quantification of total microbial activity. Recent studies showed that electron-acceptor processes previously thought to exclude each other thermodynamically can occur simultaneously. In many cases, therefore, a simple measure of total microbial activity would be a better and more robust solution than assays for several specific processes, for example sulfate reduction rates or methanogenesis. Enzyme or molecular assays provide a more general approach as they target key metabolic compounds. Since hydrogenase enzymes are ubiquitous in microbes, the recently developed tritium-based hydrogenase radiotracer assay is applied to quantify hydrogenase enzyme activity as a parameter of total living cell activity. Hydrogenase enzyme activity was measured in sediments from different locations (Lake Van, Barents Sea, Equatorial Pacific and Gulf of Mexico). In sediment samples that contained nitrate, we found the lowest cell-specific enzyme activity, around 10^(-5) nmol H2 cell^(-1) d^(-1).
With decreasing energy yield of the electron acceptor used, cell-specific hydrogenase activity increased, and maximum values of up to 1 nmol H2 cell^(-1) d^(-1) were found in samples with methane concentrations of >10 ppm. Although hydrogenase activity cannot be converted directly into a turnover rate of a specific process, cell-specific activity factors can be used to identify specific metabolisms and to quantify the metabolically active microbial population. In another study, on sediments from the Nankai Trough, microbial abundance and hydrogenase activity data show that both the habitat and the activity of subseafloor sedimentary microbial communities have been impacted by seismic activity. An increase in hydrogenase activity near the fault zone revealed that the microbial community was supplied with hydrogen as an energy source and that the microbes were specialized in hydrogen metabolism.
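The cell-specific rates quoted in the abstracts above come from normalizing a bulk potential rate to cell abundance. A minimal Python sketch of that normalization, using invented numbers (all values below are hypothetical, not measurements from these studies):

```python
# The per-cell normalization is a simple division of a bulk potential
# rate by cell abundance. All numbers here are invented for illustration.

def per_cell_rate(bulk_rate_nmol_cm3_d, cells_per_cm3):
    """Convert a bulk H2 oxidation rate (nmol cm^-3 d^-1) into a
    cell-specific rate (nmol H2 cell^-1 d^-1)."""
    return bulk_rate_nmol_cm3_d / cells_per_cm3

# Hypothetical nitrate-zone vs. methanogenic-zone samples spanning the
# five-orders-of-magnitude range described in the abstract:
nitrate_zone = per_cell_rate(1e2, 1e7)       # 1e-05 nmol H2 cell^-1 d^-1
methanogenic_zone = per_cell_rate(1e6, 1e6)  # 1e+00 nmol H2 cell^-1 d^-1

print(f"{nitrate_zone:.0e}, {methanogenic_zone:.0e}")
```

The sketch only illustrates why per-cell rates can span many orders of magnitude even when bulk rates and cell counts each vary less.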
Efficient Removal of Tetracycline and Bisphenol A from Water with a New Hybrid Clay/TiO2 Composite
(2023)
New TiO2 hybrid composites were prepared from kaolin clay, predried and carbonized biomass, and titanium tetraisopropoxide and explored for tetracycline (TET) and bisphenol A (BPA) removal from water. Overall, the removal rate is 84% for TET and 51% for BPA. The maximum adsorption capacities (q_m) are 30 and 23 mg/g for TET and BPA, respectively. These capacities are far greater than those obtained for unmodified TiO2. Increasing the ionic strength of the solution does not change the adsorption capacity of the adsorbent. pH changes only slightly change BPA adsorption, while a pH > 7 significantly reduces the adsorption of TET on the material. The Brouers-Sotolongo fractal model best describes the kinetic data for both TET and BPA adsorption, predicting that the adsorption process occurs via a complex mechanism involving various forces of attraction. Temkin and Freundlich isotherms, which best fit the equilibrium adsorption data for TET and BPA, respectively, suggest that the adsorption sites are heterogeneous in nature. Overall, the composite materials are much more effective for TET removal from aqueous solution than for BPA. This phenomenon is assigned to a difference in the TET/adsorbent interactions vs the BPA/adsorbent interactions: the decisive factor appears to be favorable electrostatic interactions for TET, yielding a more effective TET removal.
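The Freundlich analysis mentioned above is typically done by fitting q = K·c^(1/n) to equilibrium data. A hedged Python sketch of that fit via the log-linearized form, on invented data points (not values from the study):

```python
# Sketch of a Freundlich isotherm fit q = K * c**(1/n) via linear
# regression on the log-linearized form log q = log K + (1/n) * log c.
# The data points are invented for illustration, not taken from the study.
import math

c = [1.0, 2.0, 5.0, 10.0, 20.0]  # equilibrium concentration (mg/L), hypothetical
q = [3.0, 4.1, 6.2, 8.5, 11.6]   # adsorbed amount (mg/g), hypothetical

x = [math.log(ci) for ci in c]
y = [math.log(qi) for qi in q]
m = len(x)
xbar = sum(x) / m
ybar = sum(y) / m
slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
        sum((xi - xbar) ** 2 for xi in x)
intercept = ybar - slope * xbar

K = math.exp(intercept)  # Freundlich capacity constant
n = 1.0 / slope          # heterogeneity exponent; 1/n < 1 indicates favorable adsorption

print(f"K = {K:.2f} mg/g, 1/n = {slope:.2f}")
```

A 1/n value between 0 and 1, as obtained here, is conventionally read as adsorption on energetically heterogeneous sites, which is the interpretation the abstract draws from the Freundlich fit.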
Background: While incidences of cancer are continuously increasing, drug resistance of malignant cells is observed towards almost all pharmaceuticals. Several isoflavonoids and flavonoids are known for their cytotoxicity towards various cancer cells. Methods: The cytotoxicity of the compounds was determined based on the resazurin reduction assay. Caspase activation was evaluated using the caspase-Glo assay. Flow cytometry was used to analyze the cell cycle (propidium iodide (PI) staining), apoptosis (annexin V/PI staining), mitochondrial membrane potential (MMP) (JC-1) and reactive oxygen species (ROS) (H2DCFH-DA). CCRF-CEM leukemia cells were used as model cells for mechanistic studies. Results: Compounds 1, 2 and 4 displayed IC50 values below 20 μM towards CCRF-CEM and CEM/ADR5000 leukemia cells, and were further tested towards a panel of 7 carcinoma cell lines. The IC50 values of the compounds against carcinoma cells varied from 16.90 μM (in resistant U87MG.ΔEGFR glioblastoma cells) to 48.67 μM (against HepG2 hepatocarcinoma cells) for 1, from 7.85 μM (in U87MG.ΔEGFR cells) to 14.44 μM (in resistant MDA-MB231/BCRP breast adenocarcinoma cells) for 2, from 4.96 μM (towards U87MG.ΔEGFR cells) to 7.76 μM (against MDA-MB231/BCRP cells) for 4, and from 0.07 μM (against MDA-MB231 cells) to 2.15 μM (against HepG2 cells) for doxorubicin. Compounds 2 and 4 induced apoptosis in CCRF-CEM cells mediated by MMP alteration and increased ROS production. Conclusion: The present report indicates that isoflavones and biflavonoids from Ormocarpum kirkii are cytotoxic compounds with the potential of being exploited in cancer chemotherapy. Compounds 2 and 4 deserve further studies to develop new anticancer drugs to fight sensitive and resistant cancer cell lines.
Chromatographic separation of the extract of the roots of Dorstenia kameruniana (family Moraceae) led to the isolation of three new benzylbenzofuran derivatives, 2-(p-hydroxybenzyl)benzofuran-6-ol (1), 2-(p-hydroxybenzyl)-7-methoxybenzofuran-6-ol (2) and 2-(p-hydroxy)-3-(3-methylbut-2-en-1-yl)benzyl)benzofuran-6-ol (3) (named dorsmerunin A, B and C, respectively), along with the known furanocoumarin, bergapten (4). The twigs of Dorstenia kameruniana also produced compounds 1-4 as well as the known chalcone licoagrochalcone A (5). The structures were elucidated by NMR spectroscopy and mass spectrometry. The isolated compounds displayed cytotoxicity against the sensitive CCRF-CEM and multidrug-resistant CEM/ADR5000 leukemia cells, where compounds 4 and 5 had the highest activities (IC50 values of 7.17 μM and 5.16 μM, respectively) against CCRF-CEM leukemia cells. Compound 5 also showed cytotoxicity against 7 sensitive or drug-resistant solid tumor cell lines (breast carcinoma, colon carcinoma, glioblastoma), with IC50 below 50 μM, whilst 4 showed selective activity.
A new isoflavone, 4′-prenyloxyvigvexin A (1) and a new pterocarpan, (6aR,11aR)-3,8-dimethoxybitucarpin B (2) were isolated from the leaves of Lonchocarpus bussei and the stem bark of Lonchocarpus eriocalyx, respectively. The extract of L. bussei also gave four known isoflavones, maximaisoflavone H, 7,2′-dimethoxy-3′,4′-methylenedioxyisoflavone, 6,7,3′-trimethoxy-4′,5′-methylenedioxyisoflavone, durmillone; a chalcone, 4-hydroxylonchocarpin; a geranylated phenylpropanol, colenemol; and two known pterocarpans, (6aR,11aR)-maackiain and (6aR,11aR)-edunol. (6aR,11aR)-Edunol was also isolated from the stem bark of L. eriocalyx. The structures of the isolated compounds were elucidated by spectroscopy. The cytotoxicity of the compounds was tested by resazurin assay using drug-sensitive and multidrug-resistant cancer cell lines. Significant antiproliferative effects with IC50 values below 10 μM were observed for the isoflavones 6,7,3′-trimethoxy-4′,5′-methylenedioxyisoflavone and durmillone against leukemia CCRF-CEM cells; for the chalcone, 4-hydroxylonchocarpin and durmillone against its resistant counterpart CEM/ADR5000 cells; as well as for durmillone against the resistant breast adenocarcinoma MDA-MB231/BCRP cells and resistant glioblastoma U87MG.ΔEGFR cells.
The cross-linguistic finding of greater demands in processing object relatives as compared to subject relatives in individuals with aphasia and non-brain-damaged speakers has been explained within the Relativized Minimality approach. Based on this account, the asymmetry is attributed to an element intervening between the moved element and its extraction site in object relatives, but not in subject relatives. Moreover, it has been proposed that processing of object relatives is facilitated if the intervening and the moved elements differ in their internal feature structure. The present study investigates these predictions in German-speaking individuals with aphasia and a group of control participants by combining the visual world eye-tracking methodology with an auditory referent identification task. Our results provide support for the Relativized Minimality approach. Particularly, the degree of featural distinctness was shown to modulate the occurrence of the effects in aphasia. We claim that, due to reduced processing capacities, individuals with aphasia need a higher degree of featural dissimilarity to distinguish the moved from the intervening element in object relatives to overcome their syntactic deficit.
Exploring generalisation following treatment of language deficits in aphasia can provide insights into the functional relation of the cognitive processing systems involved. In the present study, we first review treatment outcomes of interventions targeting sentence processing deficits and, second, report a treatment study examining the occurrence of practice effects and generalisation in sentence comprehension and production. In order to explore the potential linkage between the processing systems involved in comprehending and producing sentences, we investigated whether improvements generalise within (i.e., uni-modal generalisation in comprehension or in production) and/or across modalities (i.e., cross-modal generalisation from comprehension to production or vice versa). Two individuals with aphasia displaying co-occurring deficits in sentence comprehension and production were trained on complex, non-canonical sentences in both modalities. Two evidence-based treatment protocols were applied in a crossover intervention study, with the sequence of treatment phases randomly allocated. Both participants benefited significantly from treatment, leading to uni-modal generalisation in both comprehension and production. However, cross-modal generalisation did not occur. The magnitude of uni-modal generalisation in sentence production was related to participants’ sentence comprehension performance prior to treatment. These findings support the assumption of modality-specific sub-systems for sentence comprehension and production, linked uni-directionally from comprehension to production.
It is a well-attested finding in head-initial languages that individuals with aphasia (IWA) have greater difficulties in comprehending object-extracted relative clauses (ORCs) as compared to subject-extracted relative clauses (SRCs). Adopting the linguistically based approach of Relativized Minimality (RM; Rizzi, 1990, 2004), the subject-object asymmetry is attributed to the occurrence of a Minimality effect in ORCs due to reduced processing capacities in IWA (Garraffa & Grillo, 2008; Grillo, 2008, 2009). For ORCs, it is claimed that the embedded subject intervenes in the syntactic dependency between the moved object and its trace, resulting in greater processing demands. In contrast, no such intervener is present in SRCs. Based on the theoretical framework of RM and findings from language acquisition (Belletti et al., 2012; Friedmann et al., 2009), it is assumed that Minimality effects are alleviated when the moved object and the intervening subject differ in terms of relevant syntactic features. For German, the language under investigation, the RM approach predicts that number (i.e., singular vs. plural) and the lexical restriction [+NP] feature (i.e., lexically restricted determiner phrases vs. lexically unrestricted pronouns) are considered relevant in the computation of Minimality. Greater degrees of featural distinctiveness are predicted to result in more facilitated processing of ORCs, because IWA can more easily distinguish between the moved object and the intervener.
This cumulative dissertation aims to provide empirical evidence on the validity of the RM approach in accounting for comprehension patterns during relative clause (RC) processing in German-speaking IWA. For that purpose, I conducted two studies including visual-world eye-tracking experiments embedded within an auditory referent-identification task to study the offline and online processing of German RCs. More specifically, target sentences were created to evaluate (a) whether IWA demonstrate a subject-object asymmetry, (b) whether dissimilarity in the number and/or the [+NP] features facilitates ORC processing, and (c) whether sentence processing in IWA benefits from greater degrees of featural distinctiveness. Furthermore, by comparing RCs disambiguated through case marking (at the relative pronoun or the following noun phrase) and number marking (inflection of the sentence-final verb), it was possible to consider the role of the relative position of the disambiguation point. The RM approach predicts that dissimilarity in case should not affect the occurrence of Minimality effects. However, the case cue to sentence interpretation appears earlier within RCs than the number cue, which may result in lower processing costs in case-disambiguated RCs compared to number-disambiguated RCs.
In study I, target sentences varied with respect to word order (SRC vs. ORC) and dissimilarity in the [+NP] feature (lexically restricted determiner phrase vs. pronouns as embedded element). Moreover, by comparing the impact of these manipulations in case- and number-disambiguated RCs, the effect of dissimilarity in the number feature was explored. IWA demonstrated a subject-object asymmetry, indicating the occurrence of a Minimality effect in ORCs. However, dissimilarity neither in the number feature nor in the [+NP] feature alone facilitated ORC processing. Instead, only ORCs involving distinct specifications of both the number and the [+NP] features were well comprehended by IWA. In study II, only temporarily ambiguous ORCs disambiguated through case or number marking were investigated, while controlling for varying points of disambiguation. There was a slight processing advantage of case marking as cue to sentence interpretation as compared to number marking.
Taken together, these findings suggest that the RM approach can only partially capture empirical data from German IWA. In processing complex syntactic structures, IWA are susceptible to the occurrence of the intervening subject in ORCs. The new findings reported in the thesis show that structural dissimilarity can modulate sentence comprehension in aphasia. Interestingly, IWA can override Minimality effects in ORCs and derive correct sentence meaning if the featural specifications of the constituents are maximally different, because they can more easily distinguish the moved object and the intervening subject given their reduced processing capacities. This dissertation presents new scientific knowledge that highlights how the syntactic theory of RM helps to uncover selective effects of morpho-syntactic features on sentence comprehension in aphasia, emphasizing the close link between assumptions from theoretical syntax and empirical research.
Structural changes at the intra- as well as intermicellar level were induced by the LCST-type collapse transition of poly(N-isopropyl acrylamide) in ABA triblock copolymer micelles in water. The distinct process kinetics was followed in situ and in real-time using time-resolved small-angle neutron scattering (SANS), while a micellar solution of a triblock copolymer, consisting of two short deuterated polystyrene endblocks and a long thermoresponsive poly(N-isopropyl acrylamide) middle block, was heated rapidly above its cloud point. A very fast collapse together with a multistep aggregation behavior is observed. The findings of the transition occurring at several size and time levels may have implications for the design and application of such thermoresponsive self-assembled systems.
We investigate concentrated solutions of poly(styrene-b-N-isopropyl acrylamide) (P(S-b-NIPAM)) diblock copolymers in deuterated water (D2O). Both structural changes and the changes of the segmental dynamics occurring upon heating through the lower critical solution temperature (LCST) of PNIPAM are studied using small-angle neutron scattering and neutron spin-echo spectroscopy. The collapse of the micellar shell and the cluster formation of collapsed micelles at the LCST, as well as an increase of the segmental diffusion coefficient after crossing the LCST, are detected. Compared with our recent results on a triblock copolymer P(S-b-NIPAM-b-S) [25], we observe that the collapse transition of P(S-b-NIPAM) is more complex and that the PNIPAM segmental dynamics are faster than in P(S-b-NIPAM-b-S).
We have studied the thermal behavior of amphiphilic, symmetric triblock copolymers having short, deuterated polystyrene (PS) end blocks and a large poly(N-isopropylacrylamide) (PNIPAM) middle block exhibiting a lower critical solution temperature (LCST) in aqueous solution. A wide range of concentrations (0.1-300 mg/mL) is investigated using a number of analytical methods such as fluorescence correlation spectroscopy (FCS), turbidimetry, dynamic light scattering (DLS), small-angle neutron scattering (SANS), and neutron spin-echo spectroscopy (NSE). The critical micelle concentration is determined using FCS to be 1 μM or less. The collapse of the micelles at the LCST is investigated using turbidimetry and DLS and shows a weak dependence on the degree of polymerization of the PNIPAM block. SANS with contrast matching allows us to reveal the core-shell structure of the micelles as well as their correlation as a function of temperature. The segmental dynamics of the PNIPAM shell are studied as a function of temperature. The mode detected has a linear dispersion in q^2 and is found to be faster in the collapsed state as compared to the swollen state. We attribute this result to the averaging over mobile and immobilized segments.
In aqueous solution, symmetric triblock copolymers with a thermoresponsive middle block and hydrophobic end blocks form flower-like core-shell micelles which collapse and aggregate upon heating through the cloud point (CP). The collapse of the micellar shell and the intermicellar aggregation are followed in situ and in real-time using time-resolved small-angle neutron scattering (SANS), while heating micellar solutions of a poly((styrene-d(8))-b-(N-isopropyl acrylamide)-b-(styrene-d(8))) triblock copolymer in D2O rapidly through their CP. The influence of polymer concentration as well as of the start and target temperatures is addressed. In all cases, the micellar collapse is very fast. The collapsed micelles immediately form small clusters which contain voids. They densify, which slows down or even stops their growth. For low concentrations and target temperatures just above the CP, i.e. shallow temperature jumps, the subsequent growth of the clusters is described by diffusion-limited aggregation. In contrast, for higher concentrations and/or higher target temperatures, i.e. deep temperature jumps, intermicellar bridges dominate the growth. Eventually, in all cases, the clusters coagulate, which results in macroscopic phase separation. For shallow temperature jumps, the cluster surfaces stay rough, whereas for deep temperature jumps, a concentration gradient develops at late stages. These results are important for the development of conditions for thermal switching in applications, e.g. for the use of thermoresponsive micellar systems for transport and delivery purposes.
A concentrated solution of a symmetric triblock copolymer with a thermoresponsive poly(methoxy diethylene glycol acrylate) (PMDEGA) middle block and short hydrophobic, fully deuterated polystyrene end blocks is investigated in D2O, where it undergoes a lower critical solution temperature-type phase transition at ca. 36 °C. Small-angle neutron scattering (SANS) in a wide temperature range (15-50 °C) is used to characterize the size and inner structure of the micelles as well as the correlation between the micelles and the formation of aggregates by the micelles above the cloud point (CP). A model featuring spherical core-shell micelles, which are correlated by a hard-sphere potential or a sticky hard-sphere potential, together with a Guinier form factor describing aggregates formed by the micelles above the CP, fits the SANS curves well in the entire temperature range. The thickness of the thermoresponsive micellar PMDEGA shell as well as the hard-sphere radius increase slightly already below the cloud point. Whereas the thickness of the thermoresponsive micellar shell hardly shrinks when heating through the CP and up to 50 °C, the hard-sphere radius decreases within 3.5 K at the CP. The volume fraction decreases significantly already below the CP, which may be at the origin of the previously observed gel-sol transition far below the CP (Miasnikova et al., Langmuir 28: 4479-4490, 2012). Above the CP, small and, at higher temperatures, large aggregates are formed by the micelles.
The aim of this work was the generation of carbon materials with high surface area exhibiting a hierarchical pore system in the macro- and mesorange. Such a pore system facilitates transport through the material and enhances the interaction with the carbon matrix (macropores are pores with diameters > 50 nm, mesopores between 2 and 50 nm). To this end, new strategies for the synthesis of novel carbon materials with designed porosity were developed that are particularly useful for the storage of energy. Besides the porosity, it is the graphene structure itself that determines the properties of a carbon material. Non-graphitic carbon materials usually exhibit a quite large degree of disorder with many defects in the graphene structure, and thus exhibit inherent microporosity (d < 2 nm). These pores act as traps and hinder reversible interaction with the carbon matrix. Furthermore, they reduce the stability and conductivity of the carbon material, which was undesired for the proposed applications. As one part of this work, the graphene structures of different non-graphitic carbon materials were studied in detail using a novel wide-angle X-ray scattering model that provided precise information about the nature of the carbon building units (graphene stacks). Different carbon precursors were evaluated regarding their potential use for the synthesis shown in this work, whereby mesophase pitch proved advantageous when a less disordered carbon microstructure is desired. Using mesophase pitch as carbon precursor, two templating strategies were developed based on the nanocasting approach. The synthesized (monolithic) materials combined for the first time the advantages of a hierarchical interconnected pore system in the macro- and mesorange with the advantages of mesophase pitch as carbon precursor. In the first case, hierarchical macro-/mesoporous carbon monoliths were synthesized by replication of hard (silica) templates.
Thus, a suitable synthesis procedure was developed that allowed the infiltration of the template with the hardly soluble carbon precursor. In the second case, hierarchical macro- / mesoporous carbon materials were synthesized by a novel soft-templating technique, taking advantage of the phase separation (spinodal decomposition) between mesophase pitch and polystyrene. The synthesis also allowed the generation of monolithic samples and incorporation of functional nanoparticles into the material. The synthesized materials showed excellent properties as an anode material in lithium batteries and support material for supercapacitors.
The central rift of the Red Sea has 25 brine pools with different physical and geochemical characteristics. Atlantis II Deep (ATIID), Discovery Deep (DD) and Chain Deep (CD) are characterized by high salinity, temperature and metal content. Several studies reported microbial communities in these brine pools, but few studies addressed the brine pool sediments. Therefore, sediment cores were collected from the ATIID, DD and CD brine pools and an adjacent brine-influenced site. Sixteen different lithologic sediment sections were subjected to shotgun DNA pyrosequencing to generate 1.47 billion base pairs (1.47 × 10^9 bp). We generated sediment-specific reads and attempted to annotate all reads. We report the phylogenetic and biochemical uniqueness of the deepest ATIID sulfur-rich brine pool sediments. In contrast to all other sediment sections, bacteria dominate the deepest ATIID sulfur-rich brine pool sediments. This decrease in virus-to-bacteria ratio in selected sections and depths coincided with an overrepresentation of mobile genetic elements. Skewing in the composition of viruses to mobile genetic elements may uniquely contribute to the distinct microbial consortium in sediments in proximity to hydrothermally active vents of the Red Sea, and possibly in their surroundings, through differential horizontal gene transfer.
Quality attributes of fruit determine its acceptability by the retailer and consumer. The objective of this work was to investigate the potential of absorption (μa) and reduced scattering (μs’) coefficients of European pear to analyze fruit flesh firmness and soluble solids content (SSC). The absolute reference values, μa* (cm−1) and μs’* (cm−1), of pear were measured invasively, employing multi-spectral photon density wave (PDW) spectroscopy at preselected wavelengths of 515, 690, and 940 nm, considering two batches of unripe and overripe fruit. On eight measuring dates during fruit development, μa and μs’ were analyzed non-destructively by means of laser light backscattering imaging (LLBI) at similar wavelengths of 532, 660, and 830 nm, fitting according to Farrell’s diffusion theory and using fixed reference values of either μa* or μs’*. Both μa* and μa, as well as μs’* and μs’, showed similar trends. Considering the non-destructively measured data during fruit development, μa at 660 nm decreased from 1.49 cm−1 to 0.74 cm−1 between 91 and 141 days after full bloom (dafb) due to chlorophyll degradation. At 830 nm, μa only slightly decreased, from 0.41 cm−1 to 0.35 cm−1. The μs’ at all wavelengths revealed a decreasing trend as the fruit developed. The difference measured at 532 nm was most pronounced, decreasing from 24 cm−1 to 10 cm−1, while at 660 nm and 830 nm values decreased from 15 cm−1 to 13 cm−1 and from 10 cm−1 to 8 cm−1, respectively. When building calibration models with partial least-squares regression analysis on the optical properties for non-destructive analysis of the fruit SSC, μa at 532 nm and 830 nm resulted in a correlation coefficient of R = 0.66, albeit with high measurement uncertainty. The combination of all three wavelengths gave an enhanced, encouraging R = 0.89 for firmness analysis using μs’ in the freshly picked fruit.
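The calibration step described above regresses a fruit quality attribute on optical coefficients. The study used partial least-squares regression across several wavelengths; as a simplified stand-in, the following Python sketch fits a one-variable least-squares line of SSC against μa at a single wavelength. All data points are hypothetical, invented purely to illustrate the workflow:

```python
# Simplified stand-in for the calibration: one-variable least-squares fit
# of SSC on mu_a (the study used partial least-squares on several
# wavelengths). All numbers are hypothetical, not measurements from the study.
import math

mu_a = [1.49, 1.30, 1.10, 0.95, 0.80, 0.74]  # mu_a at 660 nm (cm^-1), hypothetical ripening series
ssc = [9.0, 9.8, 10.9, 11.7, 12.6, 13.1]     # soluble solids content (°Brix), hypothetical

m = len(mu_a)
mx = sum(mu_a) / m
my = sum(ssc) / m
sxy = sum((x - mx) * (y - my) for x, y in zip(mu_a, ssc))
sxx = sum((x - mx) ** 2 for x in mu_a)
syy = sum((y - my) ** 2 for y in ssc)

slope = sxy / sxx               # °Brix change per unit mu_a (negative: chlorophyll loss with ripening)
intercept = my - slope * mx
r = sxy / math.sqrt(sxx * syy)  # Pearson correlation coefficient

print(f"slope = {slope:.2f}, intercept = {intercept:.2f}, R = {r:.2f}")
```

In a real calibration, partial least-squares would be preferred over this sketch because the coefficients at neighboring wavelengths are strongly collinear, which ordinary least squares handles poorly.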
The effect of worker representation on employment behaviour in Germany: another case of -2.5%
(2004)
Models of union behavior
(1995)
The Relativized Minimality approach to A'-dependencies (Friedmann et al., 2009) predicts that headed object relative clauses (RCs) and which questions are the most difficult, due to the presence of a lexical restriction on both the subject and the object DP which creates intervention. We investigated comprehension of center-embedded headed object RCs with Italian children, where Number and Gender feature values on subject and object DPs are manipulated. We found that Number conditions are always more accurate than Gender ones, showing that intervention is sensitive to DP-internal structure. We propose a finer definition of the lexical restriction where external and syntactically active features (such as Number) reduce intervention whereas internal and (possibly) lexicalized features (such as Gender) do so to a lesser extent. Our results are also compatible with a memory interference approach in which the human parser is sensitive to highly specific properties of the linguistic input, such as the cue-based model (Van Dyke, 2007).
The predictions of two contrasting approaches to the acquisition of transitive relative clauses were tested within the same groups of German-speaking participants aged from 3 to 5 years old. The input frequency approach predicts that object relative clauses with inanimate heads (e.g., the pullover that the man is scratching) are comprehended earlier and more accurately than those with an animate head (e.g., the man that the boy is scratching). In contrast, the structural intervention approach predicts that object relative clauses with two full NP arguments mismatching in number (e.g., the man that the boys are scratching) are comprehended earlier and more accurately than those with number-matching NPs (e.g., the man that the boy is scratching). These approaches were tested in two steps. First, we ran a corpus analysis to ensure that object relative clauses with number-mismatching NPs are not more frequent than object relative clauses with number-matching NPs in child directed speech. Next, the comprehension of these structures was tested experimentally in 3-, 4-, and 5-year-olds respectively by means of a color naming task. By comparing the predictions of the two approaches within the same participant groups, we were able to uncover that the effects predicted by the input frequency and by the structural intervention approaches co-exist and that they both influence the performance of children on transitive relative clauses, but in a manner that is modulated by age. These results reveal a sensitivity to animacy mismatch already being demonstrated by 3-year-olds and show that animacy is initially deployed more reliably than number to interpret relative clauses correctly. In all age groups, the animacy mismatch appears to explain the performance of children, thus, showing that the comprehension of frequent object relative clauses is enhanced compared to the other conditions. 
Starting with 4-year-olds but especially in 5-year-olds, the number mismatch supported comprehension—a facilitation that is unlikely to be driven by input frequency. Once children fine-tune their sensitivity to verb agreement information around the age of four, they are also able to deploy number marking to overcome the intervention effects. This study highlights the importance of testing experimentally contrasting theoretical approaches in order to characterize the multifaceted, developmental nature of language acquisition.
We elicited the production of various types of relative clauses in a group of German-speaking children with specific language impairment (SLI) and typically developing controls in order to test the movement optionality account of grammatical difficulty in SLI. The results show that German-speaking children with SLI are impaired in relative clause production compared to typically developing children. The alternative structures that they produce consist of simple main clauses, as well as nominal and prepositional phrases produced in isolation, sometimes contextually appropriate, and sometimes not. Crucially for evaluating the movement optionality account, children with SLI produce very few instances of embedded clauses where the relative clause head noun is pronounced in situ; in fact, such responses are more common among the typically developing child controls. These results underscore the difficulty German-speaking children with SLI have with structures involving movement, but provide no specific support for the movement optionality account.
This study investigates whether number dissimilarities on subject and object DPs facilitate the comprehension of subject- and object-extracted centre-embedded relative clauses in children with Grammatical Specific Language Impairment (G-SLI). We compared the performance of a group of English-speaking children with G-SLI (mean age: 12;11) with that of two groups of younger typically developing (TD) children, matched on grammar and receptive vocabulary, respectively. All groups were more accurate on subject-extracted relative clauses than object-extracted ones and, crucially, they all showed greater accuracy for sentences with dissimilar number features (i.e., one singular, one plural) on the head noun and the embedded DP. These findings are interpreted in the light of current psycholinguistic models of sentence comprehension in TD children and provide further insight into the linguistic nature of G-SLI.
The title compound was prepared by the reaction of 1,4,10,13-tetraoxa-7,16-diazacyclooctadecane with 4-chloro-2-methylphenoxyacetic acid in a ratio of 1:2. The structure was confirmed by elemental analysis, IR spectroscopy, NMR (¹H, ¹³C) spectroscopy, and X-ray diffraction analysis. Intermolecular hydrogen bonds between the azonium protons and oxygen atoms of the carboxylate groups were found. The immunoactive properties of the title compound were screened. The compound suppresses spontaneous and Con A-stimulated cell proliferation in vitro and can therefore be considered an immunodepressant.
We assessed intra-individual variability of response times (RT) and of single-trial P3 amplitudes following targets in healthy adults during a Flanker/NO-GO task. RT variability and variability of the neural responses were coupled at the faster frequencies examined (0.07-0.17 Hz) at Pz, the target-P3 maximum, despite non-significant associations for overall variability (standard deviation, SD). Frequency-specific patterns of variability in the single-trial P3 may help to understand the neurophysiology of RT variability and to refine explanatory models of attention-allocation deficits beyond intra-individual variability summary indices such as the SD.
Aging is a highly controlled biological process characterized by a progressive deterioration of various cellular activities. One of several hallmarks of aging describes a link to transcriptional alteration, suggesting that it may impact steady-state mRNA levels. We analyzed the steady-state mRNA levels of polyCAG-encoding transgenes and endogenous genes under the control of well-characterized promoters for intestinal (vha-6), muscular (unc-54, unc-15) and pan-neuronal (rgef-1, unc-119) expression in the nematode Caenorhabditis elegans. We find that there is not a uniform change in the transcriptional profile in aging, but rather a tissue-specific difference in the mRNA levels of these genes. While mRNA levels in intestinal (vha-6) and muscular (unc-54, unc-15) cells decline with age, pan-neuronal tissue shows more stable mRNA expression (rgef-1, unc-119), which even slightly increases with the age of the animals. Our data on the variations in mRNA abundance from exemplary cases of endogenous and transgenic gene expression contribute to the emerging evidence for tissue-specific variations in the aging process.
This article discusses how Alex Garland’s The Beach (1996) engages with conceptions of utopian islands, nation, and colonialism in modernity and how it, from this basis, develops a different spatiality that reflects on a more deterritorialized form of imperial domination within late twentieth-century globalization, as exercised by the United States. The novel is shown to subvert, but not to abolish, two spatial formations that originated in early modernity: nation and utopia. Building on Jean Baudrillard’s elaborations regarding simulation and simulacra, the article argues that The Beach creates a hyperreal narrative that does away with the idea of isolated, bounded spaces and that in form and content corresponds with the worldwide dominance of the United States at the end of the twentieth century.
While W.E.B. Du Bois’s first novel, The Quest of the Silver Fleece (1911), is set squarely in the USA, his second work of fiction, Dark Princess: A Romance (1928), abandons this national framework, depicting the treatment of African Americans in the USA as embedded into an international system of economic exploitation based on racial categories. Ultimately, the political visions offered in the novels differ starkly, but both employ a Western literary canon – so-called ‘classics’ from Greek, German, English, French, and US American literature. With this, Du Bois attempts to create a new space for African Americans in the world (literature) of the 20th century. Weary of the traditions of this ‘world literature’, the novels complicate and begin to decenter the canon that they draw on. This reading traces what I interpret as subtle signs of frustration over the limits set by the literature that underlies Dark Princess, while its predecessor had been more optimistic in its appropriation of Eurocentric fiction for its propagandist aims.
Alien Horrors
(2022)
H. P. Lovecraft’s oeuvre abounds with stereotypes of the racialized poor. As scholars have noted, Lovecraft’s work turns those he viewed as ‘Others’ into ‘aliens.’ Poor people of color (as opposed to the orderly White rural population and White working class) in Lovecraft’s stories are foreign, diseased, and criminal, and they threaten social and cosmic orders as they are in league with a nebulous entity that waits to wreak indescribable havoc. This chapter analyzes three ‘Lovecraftian’ novels published in 2016: Cassandra Khaw’s Hammers on Bone, Victor LaValle’s The Ballad of Black Tom, and Matt Ruff’s Lovecraft Country. These works elucidate the connection between Trump’s 2016 rhetoric in campaign and presidential speeches and the White supremacist imagery used by Lovecraft. In these novels, the racialized poor have a special connection to an astronomical, evil entity à la Lovecraft. As carriers of numinous genes or parasitic entities (literally having ‘an alien within’) they become empowered. They thus occupy a pivotal position in forestalling or bringing about the destruction of societal order; that is, of White supremacy. Exploring the alleged risk posed by this ‘underclass,’ these works seem to foretell current representations of protesters as ‘riotous mobs’ that threaten the body politic Trump sought to make great (and White) again.
Boyle, T.C.
(2022)
T.C. Boyle, or Thomas Coraghessan Boyle, is probably best known for his 1995 novel Tortilla Curtain, which quickly became a staple of high school and college syllabi. Tortilla Curtain deftly illustrates what Boyle does best: acerbically tracing the irrationality that governs human thought and the resulting contradictory and often unethical behavior (mostly in relation to xenophobia, environmentalism, and the intersections of gender). Despite often casting a critical eye over US American society, Boyle's works are accessible reads with fast-paced and eventful plots. This combination has produced a number of international bestsellers. In fact, Boyle is so popular in Germany that translations of his works have been published before the original versions came out in English. However, his talent for depicting the impotence of reason in the face of base desires, selfishness, group dynamics, and indoctrinated ideologies is also a weakness: at times, Boyle's satire reproduces what it is meant to criticize, coming close to naturalizing the hedonistic, prejudiced, and emotionally charged behavior of his characters.
This book endeavours to understand the seemingly direct link between utopianism and the USA, discussing novels that have never been brought together in this combination before, even though they all revolve around intentional communities: Imlay’s The Emigrants (1793), Hawthorne’s The Blithedale Romance (1852), Howland’s Papa's Own Girl (1874), Griggs’s Imperium in Imperio (1899), and Du Bois’s The Quest of the Silver Fleece (1911). They relate nation and utopia not by describing perfect societies, but by writing about attempts to immediately live radically different lives. Signposting the respective communal history, the readings provide a literary perspective to communal studies, and add a deeply necessary historicization for strictly literary approaches to US utopianism and for studies that focus on Pilgrims/Puritans/Founding Fathers as utopian practitioners. This book therefore highlights how the authors evaluated the USA’s utopian potential and traces the nineteenth-century development of the utopian imagination from various perspectives.
In both eukaryotic and prokaryotic DNA, sequences of 30-100 base pairs rich in AT base pairs have been identified at which the double helix preferentially unwinds. Such DNA unwinding elements are commonly associated with origins for DNA replication and transcription, and with chromosomal matrix attachment regions. Here we present a quantitative study of local DNA unwinding based on extensive single DNA plasmid imaging. We demonstrate that long-lived single-stranded denaturation bubbles exist in negatively supercoiled DNA, at the expense of partial twist release. Remarkably, we observe a linear relation between the degree of supercoiling and the bubble size, in excellent agreement with statistical modelling. Furthermore, we obtain the full distribution of bubble sizes and the opening probabilities at varying salt and temperature conditions. The results presented herein underline the important role of denaturation bubbles in negatively supercoiled DNA for biological processes such as transcription and replication initiation in vivo.
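The linear supercoiling-bubble-size relation reported above can be illustrated with a simple least-squares line fit. The data below are simulated stand-ins (the slope, density range, and noise level are assumptions for the sketch), not the imaging measurements from the study.

```python
# Illustrative least-squares fit of a linear bubble-size vs.
# superhelical-density relation; all values are simulated assumptions.
import numpy as np

rng = np.random.default_rng(1)
sigma = np.linspace(-0.10, -0.02, 25)      # superhelical density (negative)
# Assumed linear relation: more negative sigma -> larger bubble (in bp)
bubble_bp = -600.0 * sigma + rng.normal(0.0, 1.0, sigma.size)

slope, intercept = np.polyfit(sigma, bubble_bp, 1)   # fitted line
```

A negative fitted slope here simply encodes that bubble size grows as the (negative) superhelical density becomes more negative, i.e., with increasing negative supercoiling.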
Previous research indicates that infants’ prediction of the goals of observed actions is influenced by own experience with the type of agent performing the action (i.e., human hand vs. non-human agent) as well as by action-relevant features of goal objects (e.g., object size). The present study investigated the combined effects of these factors on 12-month-olds’ action prediction. Infants’ (N = 49) goal-directed gaze shifts were recorded as they observed 14 trials in which either a human hand or a mechanical claw reached for a small goal area (low-saliency goal) or a large goal area (high-saliency goal). Only infants who had observed the human hand reaching for a high-saliency goal fixated the goal object ahead of time, and they rapidly learned to predict the action goal across trials. By contrast, infants in all other conditions did not track the observed action in a predictive manner, and their gaze shifts to the action goal did not change systematically across trials. Thus, high-saliency goals seem to boost infants’ predictive gaze shifts during the observation of human manual actions, but not of actions performed by a mechanical device. This supports the assumption that infants’ action predictions are based on interactive effects of action-relevant object features (e.g., size) and own action experience.
For the processing of goal-directed actions, some accounts emphasize the importance of experience with the action or the agent. Other accounts stress the importance of agency cues. We investigated the impact of agency cues on 11-month-olds’ and adults’ goal anticipation for a grasping-action performed by a mechanical claw. With an eyetracker, we measured anticipations in two conditions, where the claw was displayed either with or without agency cues. In two experiments, 11-month-olds were predictive when agency cues were present, but reactive when no agency cues were presented. Adults were predictive in both conditions. Furthermore, 11-month-olds rapidly learned to predict the goal in the agency condition, but not in the mechanical condition. Adults’ predictions did not change across trials in the agency condition, but decelerated in the mechanical condition. Thus, agency cues and own action experience are important for infants’ and adults’ online processing of goal-directed actions by non-human agents.
During the observation of goal-directed actions, infants usually predict the goal at an earlier age when the agent is familiar (e.g., human hand) compared to unfamiliar (e.g., mechanical claw). These findings implicate a crucial role of the developing agentive self for infants' processing of others' action goals. Recent theoretical accounts suggest that predictive gaze behavior relies on an interplay between infants' agentive experience (top-down processes) and perceptual information about the agent and the action-event (bottom-up information; e.g., agency cues). The present study examined 7-, 11-, and 18-month-old infants' predictive gaze behavior for a grasping action performed by an unfamiliar tool, depending on infants' age-related action knowledge about tool-use and the display of the agency cue of producing a salient action effect. The results are in line with the notion of a systematic interplay between experience-based top-down processes and cue-based bottom-up information: Regardless of the salient action effect, predictive gaze shifts did not occur in the 7-month-olds (least experienced age group), but did occur in the 18-month-olds (most experienced age group). In the 11-month-olds, however, predictive gaze shifts occurred only when a salient action effect was presented. This sheds new light on how the developing agentive self, in interplay with available agency cues, supports infants' action-goal prediction also for observed tool-use actions.
When infants observe a human grasping action, experience-based accounts predict that all infants familiar with grasping actions should be able to predict the goal regardless of additional agency cues such as an action effect. Cue-based accounts, however, suggest that infants use agency cues to identify and predict action goals when the action or the agent is not familiar. From these accounts, we hypothesized that younger infants would need additional agency cues such as a salient action effect to predict the goal of a human grasping action, whereas older infants should be able to predict the goal regardless of agency cues. In three experiments, we presented 6-, 7-, and 11-month-olds with videos of a manual grasping action presented either with or without an additional salient action effect (Exp. 1 and 2), or we presented 7-month-olds with videos of a mechanical claw performing a grasping action presented with a salient action effect (Exp. 3). The 6-month-olds showed tracking gaze behavior, and the 11-month-olds showed predictive gaze behavior, regardless of the action effect. However, the 7-month-olds showed predictive gaze behavior in the action-effect condition, but tracking gaze behavior in the no-action-effect condition and in the action-effect condition with a mechanical claw. The results therefore support the idea that salient action effects are especially important for infants' goal predictions from 7 months on, and that this facilitating influence of action effects is selective for the observation of human hands.
Action effects have been stated to be important for infants’ processing of goal-directed actions. In this study, 11-month-olds showed equally fast predictive gaze shifts to a claw’s action goal when the grasping action was presented either with three agency cues (self-propelled movement, equifinality of goal achievement and a salient action effect) or with only a salient action effect, but infants showed tracking gaze when the claw showed only self-propelled movement and equifinality of goal achievement. The results suggest that action effects, compared to purely kinematic cues, seem to be especially important for infants' online processing of goal-directed actions.