I discuss observational evidence – independent of the direct spectral diagnostics of stellar winds themselves – suggesting that mass-loss rates for O stars need to be revised downward by roughly a factor of three or more, in line with recent observed mass-loss rates for clumped winds. These independent constraints include the large observed mass-loss rates in LBV eruptions, the large masses of evolved massive stars like LBVs and WNH stars, WR stars in lower metallicity environments, observed rotation rates of massive stars at different metallicity, supernovae that seem to defy expectations of high mass-loss rates in stellar evolution, and other clues. I pay particular attention to the role of feedback that would result from higher mass-loss rates, driving the star to the Eddington limit too soon, and therefore making higher rates appear highly implausible. Some of these arguments by themselves may have more than one interpretation, but together they paint a consistent picture that steady line-driven winds of O-type stars have lower mass-loss rates and are significantly clumped.
Business process management is experiencing a large uptake in industry, and process models play an important role in the analysis and improvement of processes. As an increasing number of staff become involved in actual modeling practice, it is crucial to assure model quality and homogeneity along with providing suitable aids for creating models. In this paper we consider the problem of offering recommendations to the user during the act of modeling. Our key contribution is a concept for defining and identifying so-called action patterns: chunks of actions that often appear together in business processes. In particular, we specify action patterns and demonstrate how they can be identified in existing process model repositories using association rule mining techniques. Action patterns can then be used to suggest additional actions for a process model. We evaluate our approach by applying it to the collection of process models from the SAP Reference Model.
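As an illustrative sketch only (not the authors' implementation), mining action patterns can be reduced to association rule mining over the sets of actions contained in each process model of a repository. The repository contents and thresholds below are invented toy data:

```python
from itertools import combinations

def mine_action_patterns(processes, min_support=0.5, min_confidence=0.8):
    """Mine pairs of actions that frequently co-occur in process models.

    processes: list of sets of action labels (one set per process model).
    Returns rules as tuples (antecedent, consequent, support, confidence).
    """
    n = len(processes)
    single = {}   # support counts of single actions
    pair = {}     # support counts of action pairs
    for actions in processes:
        for a in actions:
            single[a] = single.get(a, 0) + 1
        for a, b in combinations(sorted(actions), 2):
            pair[(a, b)] = pair.get((a, b), 0) + 1
    rules = []
    for (a, b), cnt in pair.items():
        support = cnt / n
        if support < min_support:
            continue
        # A frequent pair yields up to two rules: a -> b and b -> a.
        for ante, cons in ((a, b), (b, a)):
            confidence = cnt / single[ante]
            if confidence >= min_confidence:
                rules.append((ante, cons, support, confidence))
    return rules

# Toy repository: each process model reduced to its set of actions.
repo = [
    {"check invoice", "approve invoice", "pay invoice"},
    {"check invoice", "approve invoice", "archive"},
    {"check invoice", "approve invoice", "pay invoice"},
    {"receive goods", "check invoice"},
]
rules = mine_action_patterns(repo, min_support=0.5, min_confidence=0.9)
```

A rule such as `pay invoice -> check invoice` can then drive a recommendation: whenever a model under construction contains the antecedent, the consequent is suggested as a likely missing action.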
Business process management aims at capturing, understanding, and improving work in organizations. The central artifacts are process models, which serve different purposes: detailed process models are used to analyze concrete working procedures, while high-level models show, for instance, handovers between departments. To provide different views on process models, business process model abstraction has emerged. While several approaches have been proposed, a number of abstraction use cases that are both relevant for industry and scientifically challenging are yet to be addressed. In this paper we systematically develop, classify, and consolidate different use cases for business process model abstraction. The reported work is based on a study with BPM users in the health insurance sector and validated with a BPM consultancy company and a large BPM vendor. The fifteen identified abstraction use cases reflect industry demand. The related work on business process model abstraction is evaluated against the use cases, which leads to a research agenda.
We consider a one-dimensional oscillatory medium with coupling through a diffusive linear field. In the limit of fast diffusion this setup reduces to the classical Kuramoto–Battogtokh model. We demonstrate that for finite diffusion, stable chimera solitons, namely localized synchronous domains in an infinite asynchronous environment, are possible. The solitons are also stable for a finite density of oscillators, but in this case they sway with a nearly constant speed. This finite-density-induced motility disappears in the continuum limit, as the velocity of the solitons is inversely proportional to the density. A long-wave instability of the homogeneous asynchronous state causes soliton turbulence, which appears as a sequence of soliton mergings and creations. As the instability of the asynchronous state becomes stronger, this turbulence develops into spatio-temporal intermittency.
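For readers unfamiliar with this model class, a minimal sketch of the classical globally coupled Kuramoto model (a simpler relative of the nonlocally coupled Kuramoto–Battogtokh setup, not the finite-diffusion medium studied here) shows how an order parameter distinguishes asynchronous from synchronous states; all parameter values are illustrative:

```python
import cmath
import math
import random

def simulate_kuramoto(n=200, coupling=2.0, steps=2000, dt=0.05, seed=1):
    """Euler integration of n globally coupled Kuramoto phase oscillators,
        dtheta_i/dt = omega_i + (K/n) * sum_j sin(theta_j - theta_i),
    written in mean-field form via r*exp(i*psi) = (1/n) sum_j exp(i*theta_j).
    Returns the final order parameter r in [0, 1]
    (r near 0: asynchrony, r near 1: full synchrony)."""
    rng = random.Random(seed)
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    omega = [rng.gauss(0.0, 0.5) for _ in range(n)]    # natural frequencies
    for _ in range(steps):
        z = sum(cmath.exp(1j * t) for t in theta) / n  # complex mean field
        r, psi = abs(z), cmath.phase(z)
        theta = [t + dt * (w + coupling * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    return abs(sum(cmath.exp(1j * t) for t in theta) / n)

# Below the critical coupling the population stays asynchronous (small r);
# well above it, the oscillators synchronize (r approaches 1).
r_weak = simulate_kuramoto(coupling=0.2)
r_strong = simulate_kuramoto(coupling=3.0)
```

A chimera regime, by contrast, is exactly the coexistence of both behaviors in one medium: a spatially localized region with large local order parameter embedded in an environment where it stays small.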
Contents:
Chapter 1. Introduction
  1 Information Structure
  2 Grammatical Correlates of Information Structure
  3 Structure of the Questionnaire
  4 Experimental Tasks
  5 Technicalities
  6 Archiving
  7 Acknowledgments
Chapter 2. General Questions
  1 General Information
  2 Phonology
  3 Morphology and Syntax
Chapter 3. Experimental tasks
  1 Changes (Given/New in Intransitives and Transitives)
  2 Giving (Given/New in Ditransitives)
  3 Visibility (Given/New, Animacy and Type/Token Reference)
  4 Locations (Given/New in Locative Expressions)
  5 Sequences (Given/New/Contrast in Transitives)
  6 Dynamic Localization (Given/New in Dynamic Loc. Descriptions)
  7 Birthday Party (Weight and Discourse Status)
  8 Static Localization (Macro-Planning and Given/New in Locatives)
  9 Guiding (Presentational Utterances)
  10 Event Cards (All New)
  11 Anima (Focus types and Animacy)
  12 Contrast (Contrast in pairing events)
  13 Animal Game (Broad/Narrow Focus in NP)
  14 Properties (Focus on Property and Possessor)
  15 Eventives (Thetic and Categorical Utterances)
  16 Tell a Story (Contrast in Text)
  17 Focus Cards (Selective, Restrictive, Additive, Rejective Focus)
  18 Who does What (Answers to Multiple Constituent Questions)
  19 Fairy Tale (Topic and Focus in Coherent Discourse)
  20 Map Task (Contrastive and Selective Focus in Spontaneous Dialogue)
  21 Drama (Contrastive Focus in Argumentation)
  22 Events in Places (Spatial, Temporal and Complex Topics)
  23 Path Descriptions (Topic Change in Narrative)
  24 Groups (Partial Topic)
  25 Connections (Bridging Topic)
  26 Indirect (Implicational Topic)
  27 Surprises (Subject-Topic Interrelation)
  28 Doing (Action Given, Action Topic)
  29 Influences (Question Priming)
Chapter 4. Translation tasks
  1 Basic Intonational Properties
  2 Focus Translation
  3 Topic Translation
  4 Quantifiers
Chapter 5. Information structure summary survey
  1 Preliminaries
  2 Syntax
  3 Morphology
  4 Prosody
  5 Summary: Information structure
Chapter 6. Performance of Experimental Tasks in the Field
  1 Field sessions
  2 Field Session Metadata
  3 Informants’ Agreement
The increasing demand for energy in the current technological era and recent political decisions to give up nuclear energy have turned humanity's focus to alternative, environmentally friendly energy sources such as solar energy. Although silicon solar cells are the product of a mature technology, the search for highly efficient and easily applicable materials is still ongoing. Halide perovskites offer such properties, which made their efficiency comparable with that of silicon solar cells for single junctions within a decade of research. However, the downsides of halide perovskites are poor stability and, for the most stable compositions, lead toxicity.
Chalcogenide perovskites, on the other hand, are among the most promising absorber materials for the photovoltaic market, due to their elemental abundance and chemical stability against moisture and oxygen. In the search for the ultimate solar absorber material, combining the good optoelectronic properties of halide perovskites with the stability of chalcogenides could yield a promising candidate.
Thus, this work investigates new techniques for the synthesis and design of these novel chalcogenide perovskites, which contain transition metals as cations, e.g., BaZrS3, BaHfS3, EuZrS3, EuHfS3 and SrHfS3. The deposition proceeds in two stages: in the first stage, the binary compounds are deposited via a solution-processing method; in the second stage, the deposited materials are annealed in a chalcogenide atmosphere to form the perovskite structure via solid-state reactions.
The research also focuses on optimizing a generalized molecular-ink recipe for depositing precursors of chalcogenide perovskites from different binaries. Sulfurization of the precursors resulted either in binaries without perovskite formation or in distorted perovskite structures, consistent with literature reports that some of these materials are more stable in the needle-like non-perovskite configuration.
Lastly, the produced materials are evaluated in two categories: the first concerns the physical properties of the deposited layer, e.g., crystal structure, secondary-phase formation and impurities. In the second, optoelectronic properties such as band gap, conductivity and surface photovoltage are measured and compared to those of an ideal absorber layer.
Dryland vulnerability: typical patterns and dynamics in support of vulnerability reduction efforts
(2011)
The pronounced constraints on ecosystem functioning and human livelihoods in drylands are frequently exacerbated by natural and socio-economic stresses, including weather extremes and inequitable trade conditions. Therefore, a better understanding of the relation between these stresses and the socio-ecological systems is important for advancing dryland development. The concept of vulnerability as applied in this dissertation describes this relation as encompassing the exposure to climate, market and other stresses as well as the sensitivity of the systems to these stresses and their capacity to adapt. With regard to the interest in improving environmental and living conditions in drylands, this dissertation aims at a meaningful generalisation of heterogeneous vulnerability situations. A pattern recognition approach based on clustering revealed typical vulnerability-creating mechanisms at global and local scales. One study presents the first analysis of dryland vulnerability with global coverage at a sub-national resolution. The cluster analysis resulted in seven typical patterns of vulnerability according to quantitative indication of poverty, water stress, soil degradation, natural agro-constraints and isolation. Independent case studies served to validate the identified patterns and to prove the transferability of vulnerability-reducing approaches. Due to their worldwide coverage, the global results allow the evaluation of a specific system’s vulnerability in its wider context, even in poorly-documented areas. Moreover, climate vulnerability of smallholders was investigated with regard to their food security in the Peruvian Altiplano. Four typical groups of households were identified in this local dryland context using indicators for harvest failure risk, agricultural resources, education and non-agricultural income. 
An elaborate validation relying on independently acquired information demonstrated the clear correlation between weather-related damages and the identified clusters. It also showed that household-specific causes of vulnerability were consistent with the mechanisms implied by the corresponding patterns. The synthesis of the local study provides valuable insights into the tailoring of interventions that reflect the heterogeneity within the social group of smallholders. The conditions necessary to identify typical vulnerability patterns were summarised in five methodological steps. They aim to motivate and to facilitate the application of the selected pattern recognition approach in future vulnerability analyses. The five steps outline the elicitation of relevant cause-effect hypotheses and the quantitative indication of mechanisms as well as an evaluation of robustness, a validation and a ranking of the identified patterns. The precise definition of the hypotheses is essential to appropriately quantify the basic processes as well as to consistently interpret, validate and rank the clusters. In particular, the five steps reflect scale-dependent opportunities, such as the outcome-oriented aspect of validation in the local study. Furthermore, the clusters identified in Northeast Brazil were assessed in the light of important endogenous processes in the smallholder systems which dominate this region. In order to capture these processes, a qualitative dynamic model was developed using generalised rules of labour allocation, yield extraction, budget constitution and the dynamics of natural and technological resources. The model resulted in a cyclic trajectory encompassing four states with differing degree of criticality. The joint assessment revealed aggravating conditions in major parts of the study region due to the overuse of natural resources and the potential for impoverishment. 
The changes in vulnerability-creating mechanisms identified in Northeast Brazil are well-suited to informing local adjustments to large-scale intervention programmes, such as “Avança Brasil”. Overall, the categorisation of a limited number of typical patterns and dynamics presents an efficient approach to improving our understanding of dryland vulnerability. Appropriate decision-making for sustainable dryland development through vulnerability reduction can be significantly enhanced by pattern-specific entry points combined with insights into changing hotspots of vulnerability and the transferability of successful adaptation strategies.
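The pattern recognition described above rests on cluster analysis of quantitative indicators. As a hedged illustration of the general technique only (not the dissertation's actual method, indicators, or data), a plain k-means clustering of toy vulnerability indicators might look like this:

```python
import math
import random

def kmeans(points, k, iters=100, seed=0):
    """Plain k-means: assign each point to its nearest centroid, then
    move each centroid to the mean of its assigned points; repeat until
    the centroids stop changing."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        new_centroids = [
            tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
        if new_centroids == centroids:
            break
        centroids = new_centroids
    return centroids, clusters

# Invented "vulnerability indicators" (poverty, water stress), scaled to [0, 1]:
points = [(0.10, 0.20), (0.15, 0.25), (0.12, 0.18),   # low-vulnerability group
          (0.80, 0.90), (0.85, 0.80), (0.90, 0.95)]   # high-vulnerability group
centroids, clusters = kmeans(points, k=2)
```

Each resulting cluster corresponds to one "typical pattern"; its centroid is the pattern's characteristic indicator profile, which can then be interpreted, validated against case studies, and ranked.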
Among the multitude of geomorphological processes, aeolian shaping processes are of a special character: even though their immediate impact can be considered low (exceptions exist), their constant and large-scale force makes them a powerful player in the earth system. Pedogenic dust is one of the most important sources of atmospheric aerosols and is therefore regarded as a key player in atmospheric processes. Soil dust emissions, being complex in composition and properties, influence atmospheric processes and air quality and have impacts on other ecosystems. In this dissertation, we develop a novel scientific understanding of this complex system based on a holistic dataset acquired during a series of field experiments on arable land in La Pampa, Argentina. The field experiments and the generated data provide information about topography, various soil parameters and the atmospheric dynamics in the very lower atmosphere (up to 4 m height), as well as measurements of aeolian particle movement across a wide range of particle size classes, from 0.2 μm up to coarse sand.
The investigations focus on three topics: (a) the effects of low-scale landscape structures on aeolian transport processes of the coarse particle fraction, (b) the horizontal and vertical fluxes of the very fine particles and (c) the impact of wind gusts on particle emissions.
Among other findings presented in this thesis, it was shown in particular that, even though the small-scale topography has a clear impact on erosion and deposition patterns, physical soil parameters also need to be taken into account for a robust statistical model of these patterns. Furthermore, the vertical fluxes of particulate matter show different characteristics for the different particle size classes. Finally, a novel statistical measure was introduced to quantify the impact of wind gusts on particle uptake and was applied to the provided data set. This measure shows significantly increased particle concentrations at points in time defined as gust events.
With its holistic approach, this thesis further contributes to the fundamental understanding of how atmosphere and pedosphere are intertwined and affect each other.
A degree course in IT and business administration solely for women (FIW) has been offered since 2009 at the HTW Berlin – University of Applied Sciences. This contribution discusses students' motivations for enrolling in such a women-only degree course and reports our experience over recent years. In particular, we describe the approach to attracting new female students and discuss the composition of the intake. It is shown that the women-only setting, together with other factors, can attract a new clientele for computer science.
“Creating a Maritime Future”
(2023)
This article explores the importance of the port city of Hamburg in the evolving discourses on the creation of a maritime future, a vision which became influential in the 1930s, 1940s and 1950s. While some Jewish representatives in the city aimed at preserving and intertwining Hanseatic and Jewish traditions in order to secure a Jewish presence in the port city under the pressure of the Nazi regime and thereafter, others wanted to create new emigration opportunities, especially to Mandatory Palestine, and create a Jewish maritime future in Eretz Israel. Different Zionist organizations supported the newly evolving maritime ideas, such as the “conquest of the sea”, and promoted the image of a Jewish seafaring nation. Despite the difficulties in the 1940s, these concepts gained influence post-1945 and led to the foundation of the fishery kibbutz “Zerubavel” in Blankenese/Hamburg. However, the idea of a Hanseatic Jewish future also remained influential and illustrates how differently a “Jewish maritime future” was imagined and used to link past, present and future.
Natural extreme events are an integral part of nature on planet Earth. Usually these events are considered hazardous only where humans are exposed to them; in that case, however, natural hazards can have devastating impacts on human societies. Hydro-meteorological hazards in particular have a high damage potential in the form of, e.g., riverine and pluvial floods, winter storms, hurricanes and tornadoes, which can occur all over the globe. With an increasingly warm climate, an increase in the extreme weather that potentially triggers natural hazards can also be expected. Yet not only changing natural systems but also changing societal systems contribute to an increasing risk associated with these hazards, comprising increasing exposure and possibly also increasing vulnerability to the impacts of natural events. Thus, appropriate risk management is required to adapt all parts of society to existing and upcoming risks at various spatial scales. One essential part of risk management is risk assessment, including the estimation of economic impacts. However, reliable methods for estimating the economic impacts of hydro-meteorological hazards are still missing. This thesis therefore deals with the question of how the reliability of hazard damage estimates can be improved, represented and propagated across all spatial scales. The question is investigated using the specific example of the economic impacts of riverine floods on companies in Germany.
Flood damage models aim to describe the damage processes during a given flood event; in other words, they describe the vulnerability of a specific object to a flood. Such models can be based on empirical data sets collected after flood events. In this thesis, tree-based models trained with survey data are used to estimate direct economic flood impacts at the object level. It is found that these machine learning models, in conjunction with increasing sizes of the data sets used to derive them, outperform state-of-the-art damage models. However, despite the performance improvements gained by using multiple variables and more data points, large prediction errors remain at the object level. The occurrence of these high errors was explained by a further investigation using distributions derived from the tree-based models: direct economic impacts on individual objects cannot be modeled by a normal distribution. Yet most state-of-the-art approaches assume a normal distribution and take mean values as point estimators. Consequently, the predictions are unlikely values within the distributions, resulting in high errors. At larger spatial scales, more objects are considered in the damage estimation, which leads to a better fit of the damage estimates to a normal distribution. Consequently, the performance of the point estimators also improves, although large errors can still occur due to the variance of the normal distribution. It is therefore recommended to use distributions instead of point estimates in order to represent the reliability of damage estimates.
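The recommendation to report distributions rather than point estimates can be illustrated with a small synthetic sketch (all numbers invented; a right-skewed lognormal stands in for the per-tree predictions of an ensemble). It shows why the mean is a poor object-level estimator while aggregation over many objects narrows the relative spread:

```python
import random
import statistics

def damage_samples(rng, n=500):
    """Hypothetical samples of one object's predictive damage distribution
    (think: one prediction per tree of an ensemble). A lognormal stands in
    for the heavily right-skewed distributions observed at the object level."""
    return [rng.lognormvariate(10.0, 1.0) for _ in range(n)]

def quantile(sorted_xs, q):
    """Crude empirical quantile of an already sorted sample."""
    return sorted_xs[min(int(q * len(sorted_xs)), len(sorted_xs) - 1)]

def rel_width(sorted_xs):
    """Width of the central 90 % interval relative to the median."""
    return (quantile(sorted_xs, 0.95) - quantile(sorted_xs, 0.05)) / quantile(sorted_xs, 0.5)

rng = random.Random(42)

# Object level: the mean lies far above the median of the skewed
# distribution, i.e. the usual point estimate is an unlikely value.
obj = sorted(damage_samples(rng))
assert statistics.fmean(obj) > quantile(obj, 0.5)

# Regional level: summing the damage of many objects concentrates the
# distribution, so the relative width of the 90 % interval shrinks and a
# point estimate becomes more defensible at larger spatial scales.
regional = sorted(
    sum(rng.lognormvariate(10.0, 1.0) for _ in range(200))  # 200 objects
    for _ in range(500)                                     # 500 realizations
)
assert rel_width(regional) < rel_width(obj)
```

Reporting the `(q05, median, q95)` triple instead of the mean is one simple way to carry the reliability of an estimate through to the decision maker.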
In addition, current approaches mostly ignore the uncertainty associated with the characteristics of the hazard and of the exposed objects. For a given flood event, for example, the estimation of the water level at a certain building is prone to uncertainties. Current approaches mostly define exposed objects using land use data sets, which often show inconsistencies and thereby introduce additional uncertainties. Furthermore, state-of-the-art approaches suffer from a lack of consistency when predicting damage at different spatial scales, owing to the use of different types of exposure data sets for model derivation and application. To address these issues, a novel object-based method was developed in this thesis. The method enables a seamless estimation of hydro-meteorological hazard damage across spatial scales, including uncertainty quantification. The application and validation of the method resulted in plausible estimates at all spatial scales without overestimating the uncertainty.
The method is made practicable mainly by newly available data sets of individual buildings, which allow flood-affected objects to be identified by overlaying these data sets with water masks. However, identifying affected objects with two different water masks revealed huge differences in the number of objects identified. More effort is therefore needed for this identification, since the number of affected objects largely determines the order of magnitude of the economic flood impacts.
In general the method represents the uncertainties associated with the three components of risk namely hazard, exposure and vulnerability, in form of probability distributions. The object-based approach enables a consistent propagation of these uncertainties in space. Aside from the propagation of damage estimates and their uncertainties across spatial scales, a propagation between models estimating direct and indirect economic impacts was demonstrated. This enables the inclusion of uncertainties associated with the direct economic impacts within the estimation of the indirect economic impacts. Consequently, the modeling procedure facilitates the representation of the reliability of estimated total economic impacts. The representation of the estimates' reliability prevents reasoning based on a false certainty, which might be attributed to point estimates. Therefore, the developed approach facilitates a meaningful flood risk management and adaptation planning.
The successful post-event application and the representation of the uncertainties qualifies the method also for the use for future risk assessments. Thus, the developed method enables the representation of the assumptions made for the future risk assessments, which is crucial information for future risk management. This is an important step forward, since the representation of reliability associated with all components of risk is currently lacking in all state-of-the-art methods assessing future risk.
In conclusion, the use of object-based methods that give results in the form of distributions instead of point estimates is recommended. Improving model performance by means of multi-variable models and additional data points is possible, but the gains are small. Uncertainties associated with all components of damage estimation should be included and represented in the results. Furthermore, the findings of the thesis suggest that, at larger scales, the influence of the uncertainty associated with the vulnerability is smaller than that associated with the hazard and the exposure. This leads to the conclusion that, to increase the reliability of flood damage estimations and risk assessments, the hazard and exposure components, including their uncertainties, must be improved and actively included, in addition to improving the models describing the vulnerability of the objects.
The origin and structure of magnetic fields in the Galaxy are largely unknown. What is known is that they are essential for several astrophysical processes, in particular the propagation of cosmic rays. Our ability to describe the propagation of cosmic rays through the Galaxy is severely limited by the lack of observational data needed to probe the structure of the Galactic magnetic field on many different length scales. This is particularly true for modelling the propagation of cosmic rays into the Galactic halo, where our knowledge of the magnetic field is particularly poor.
In the last decade, observations of the Galactic halo in different frequency regimes have revealed the existence of out-of-plane bubble emission in the Galactic halo. In gamma rays these bubbles have been termed the Fermi bubbles, with a radial extent of ≈ 3 kpc and an azimuthal height of ≈ 6 kpc. The radio counterparts of the Fermi bubbles were seen by both the S-PASS telescope and the Planck satellite and show a clear spatial overlap. The X-ray counterparts of the Fermi bubbles were named the eROSITA bubbles after the eROSITA satellite, with a radial width of ≈ 7 kpc and an azimuthal height of ≈ 14 kpc. Taken together, these observations suggest the presence of large, extended Galactic Halo Bubbles (GHB) and have stimulated interest in the hitherto little-explored Galactic halo.
In this thesis, a new toy model (GHB model) for the magnetic field and non-thermal electron distribution in the Galactic halo has been proposed. The new toy model has been used to produce polarised synchrotron emission sky maps. Chi-square analysis was used to compare the synthetic skymaps with the Planck 30 GHz polarised skymaps. The obtained constraints on the strength and azimuthal height were found to be in agreement with the S-PASS radio observations.
The upper, lower and best-fit values obtained from the above chi-squared analysis were used to generate three separate toy models. These three models were used to propagate ultra-high energy cosmic rays. This study was carried out for two potential sources, Centaurus A and NGC 253, to produce magnification maps and arrival direction skymaps. The simulated arrival direction skymaps were found to be consistent with the hotspots of Centaurus A and NGC 253 as seen in the observed arrival direction skymaps provided by the Pierre Auger Observatory (PAO).
The turbulent magnetic field component of the GHB model was also used to investigate the extragalactic dipole suppression seen by PAO. UHECRs with an extragalactic dipole were forward-tracked through the turbulent GHB model at different field strengths. The suppression in the dipole due to the varying diffusion coefficient from the simulations was noted. The results could also be compared with an analytical analogy of electrostatics. The simulations of the extragalactic dipole suppression were in agreement with similar studies carried out for galactic cosmic rays.
.NET Gadgeteer Workshop
(2013)
A wide range of additional forward-chaining applications could be realized with deductive databases if their rule formalism, their immediate consequence operator, and their fixpoint iteration process were more flexible. Deductive databases normally represent knowledge using stratified Datalog programs with default negation. But many practical applications of forward chaining require an extensible set of user-defined built-in predicates. Moreover, they often need function symbols for building complex data structures, and the stratified fixpoint iteration has to be extended by aggregation operations. We present a new language, Datalog*, which extends Datalog by stratified meta-predicates (including default negation), function symbols, and user-defined built-in predicates, which are implemented and evaluated top-down in Prolog. All predicates are subject to the same backtracking mechanism. The bottom-up fixpoint iteration can aggregate the derived facts after each iteration based on user-defined Prolog predicates.
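The bottom-up fixpoint iteration at the heart of such systems can be sketched in a few lines. The following minimal naive evaluation (in plain Python rather than Prolog, and without Datalog*'s meta-predicates, function symbols, or aggregation) computes the transitive closure of an edge relation by repeatedly applying the immediate consequence operator:

```python
def fixpoint(facts, step):
    """Naive bottom-up evaluation: apply the immediate consequence
    operator until no new facts can be derived (the least fixpoint)."""
    facts = set(facts)
    while True:
        new = step(facts)
        if new <= facts:        # nothing new derived: fixpoint reached
            return facts
        facts |= new

# The Datalog program
#     path(X, Y) :- edge(X, Y).
#     path(X, Z) :- path(X, Y), edge(Y, Z).
# expressed as one application of the immediate consequence operator:
def step(facts):
    edges = {(a, b) for (p, a, b) in facts if p == "edge"}
    paths = {(a, b) for (p, a, b) in facts if p == "path"}
    derived = {("path", a, b) for (a, b) in edges}
    derived |= {("path", a, c)
                for (a, b) in paths for (b2, c) in edges if b == b2}
    return derived

edb = {("edge", 1, 2), ("edge", 2, 3), ("edge", 3, 4)}   # extensional facts
model = fixpoint(edb, step)                              # least model
```

Datalog* generalizes exactly this loop: the `step` function becomes a top-down Prolog evaluation of the rule bodies, and an aggregation hook can condense the derived facts between iterations.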
Mathematical modeling of biological phenomena has attracted increasing interest since new high-throughput technologies give access to growing amounts of molecular data. Such modeling approaches are especially suited to testing hypotheses that are not yet experimentally accessible or to guiding an experimental setup. One particular line of research investigates the evolutionary dynamics responsible for today's composition of organisms. Computer simulations either propose an evolutionary mechanism and thus reproduce a recent finding, or rebuild an evolutionary process in order to learn about its mechanism. The quest for evolutionary fingerprints in metabolic and gene-coexpression networks is the central topic of this cumulative thesis, which is based on four published articles. An understanding of the actual origin of life will probably remain an insoluble problem. However, one can argue that after a first simple metabolism had evolved, the further evolution of metabolism occurred in parallel with the evolution of the sequences of the catalyzing enzymes. Indications of such a coevolution can be found by correlating the sequence change between two enzymes with their distance on the metabolic network, which is obtained from the KEGG database. We observe a small but significant correlation, primarily between nearest neighbors. This indicates that enzymes catalyzing subsequent reactions tend to be descended from the same precursor. Since this correlation is relatively small, one can at least assume that, even if new enzymes are not "genetic children" of the previous enzymes, they are descended from some of the already existing ones. Following this hypothesis, we introduce a model of enzyme-pathway coevolution. By iteratively adding enzymes, this model explores the metabolic network in a manner similar to diffusion.
By implementing a Gillespie-like algorithm, we are able to introduce a tunable parameter that controls the weight of sequence similarity when choosing a new enzyme. Furthermore, this method defines a time difference between successive evolutionary innovations, each in the form of a new enzyme. Overall, these simulations generate putative time courses of the evolutionary walk on the metabolic network. A time-series analysis shows that the acquisition of new enzymes occurs in bursts, which are more pronounced when the influence of the sequence similarity is higher. This behavior strongly resembles punctuated equilibrium, which denotes the observation that new species also tend to appear in bursts rather than in a gradual manner. Thus, our model helps to establish a better understanding of punctuated equilibrium by giving a potential description at the molecular level. From the time courses we also extract a tentative order of new enzymes, metabolites, and even organisms. The consistency of this order with previous findings provides evidence for the validity of our approach. While the sequence of a gene is directly subject to mutations, its expression profile might also change indirectly through evolutionary events in the cellular interplay. Gene coexpression data are readily accessible through microarray experiments and are commonly illustrated using coexpression networks, in which genes are nodes and are linked once they show a significant coexpression. Since the large number of genes makes an illustration of the entire coexpression network difficult, clustering helps to show the network on a meta-level. Various clustering techniques already exist; however, we introduce a novel one that maintains control over the cluster sizes and thus assures proper visual inspection. An application of the method to Arabidopsis thaliana reveals that genes causing a severe phenotype often show a functional uniqueness in their network vicinity.
The clustering analysis highlights 20 genes of so far unknown phenotype that are nevertheless predicted to be essential for plant growth. Of these, six indeed provoke such a severe phenotype, as shown by mutant analysis. By inspecting the degree distribution of the A. thaliana coexpression network, we identified two characteristics: the distribution deviates from the frequently observed power law by a sharp truncation, which follows an over-representation of highly connected nodes. For a better understanding, we developed an evolutionary model that mimics the growth of a coexpression network by gene duplication, subject to a strong selection criterion, together with slight mutational changes in the expression profile. Despite the simplicity of our assumptions, we can reproduce the observed properties in A. thaliana as well as in E. coli and S. cerevisiae. The over-representation of high-degree nodes could be identified with mutually well-connected genes of similar functional families: zinc fingers (PF00096), flagella, and ribosomes, respectively. In conclusion, these four manuscripts demonstrate the usefulness of mathematical models and statistical tools as a source of new biological insight. While the clustering approach applied to gene coexpression data leads to the phenotypic characterization of so far unknown genes and thus supports genome annotation, our model approaches offer explanations for observed properties of the coexpression network and furthermore substantiate punctuated equilibrium as an evolutionary process through a deeper understanding of an underlying molecular mechanism.
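The Gillespie-like acquisition step at the heart of the enzyme-pathway coevolution model can be sketched as follows. This is a minimal illustration only: the rate form exp(beta · similarity) and all function names are assumptions made for the sketch, not the thesis's actual implementation.

```python
import math
import random

def gillespie_step(frontier, similarity, beta, rng):
    """One Gillespie-like acquisition step: each candidate enzyme on the
    frontier of the explored metabolic network gets a rate
    exp(beta * similarity); the next enzyme is chosen proportionally to
    its rate, and the waiting time to this innovation is drawn from an
    exponential distribution with the total rate.

    frontier   -- candidate enzymes adjacent to the explored network
    similarity -- dict: enzyme -> sequence similarity to acquired enzymes
    beta       -- tunable weight of sequence similarity
    rng        -- random.Random instance for reproducibility
    """
    rates = {e: math.exp(beta * similarity[e]) for e in frontier}
    total = sum(rates.values())
    dt = rng.expovariate(total)      # time until the next innovation
    r = rng.uniform(0.0, total)      # roulette-wheel selection
    acc = 0.0
    for enzyme, rate in rates.items():
        acc += rate
        if r <= acc:
            return enzyme, dt
    return enzyme, dt                # guard against rounding at the edge
```

With beta = 0 every frontier enzyme is equally likely and the walk reduces to plain diffusion on the network; larger beta biases acquisitions towards sequence-similar enzymes, which is the regime in which burst-like time courses emerge.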
Mammalian arachidonic acid lipoxygenases (ALOXs) have been implicated in cell differentiation and in the pathogenesis of inflammation. The mouse genome contains seven functional Alox genes, and the encoded enzymes share a high degree of amino acid conservation with their human orthologs. There are, however, functional differences between mouse and human ALOX orthologs. Human ALOX15B oxygenates arachidonic acid exclusively to its 15-hydroperoxy derivative (15S-HpETE), whereas 8S-HpETE is predominantly formed by mouse Alox15b. The structural basis for this functional difference has been explored, and in vitro mutagenesis humanized the reaction specificity of the mouse enzyme. To explore whether this mutagenesis strategy may also humanize the reaction specificity of mouse Alox15b in vivo, we created Alox15b knock-in mice expressing the arachidonic acid 15-lipoxygenating Tyr603Asp+His604Val double mutant instead of the 8-lipoxygenating wildtype enzyme. These mice are fertile, display slightly modified plasma oxylipidomes, and develop normally up to an age of 24 weeks. At later developmental stages, male Alox15b-KI mice gain significantly less body weight than outbred wildtype controls, but this effect was not observed for female individuals. To explore the possible reasons for the observed sex-specific growth arrest, we determined the basic hematological parameters and found that aged male Alox15b-KI mice exhibited significantly attenuated red blood cell parameters (erythrocyte counts, hematocrit, hemoglobin). Here again, these differences were not observed in female individuals. These data suggest that humanization of the reaction specificity of mouse Alox15b impairs the functionality of the hematopoietic system in males, which is paralleled by premature growth arrest.
As mid-19th-century American Jews introduced radical changes to their religious observance and began to define Judaism in new ways, to what extent did they engage with European Jewish ideas? Historians often approach religious change among Jews from German lands during this period as if Jewish immigrants had come to America with one set of ideas that then evolved solely in conversation with their American contexts. Historians have similarly cast the kinds of Judaism Americans created as both unique to America and uniquely American. These characterizations are accurate to an extent. But to what extent did Jewish innovations in the United States take place in conversation with European Jewish developments? Looking to the 19th-century American Jewish press, this paper seeks to understand how American Jews engaged European Judaism in formulating their own ideas, understanding themselves, and understanding their place in world Judaism.
To achieve a sustainable energy economy, it is necessary to turn away from the combustion of fossil fuels as a means of energy production and switch to renewable sources. However, their temporal availability does not match societal consumption needs, meaning that renewably generated energy must be stored at times of peak generation and released during periods of peak consumption. Electrochemical energy storage (EES) in general is well suited for this task due to its infrastructural independence and scalability. The lithium ion battery (LIB) takes a special place among EES systems due to its energy density and efficiency, but the scarcity and uneven geological distribution of minerals and ores vital for many cell components, and hence their high and fluctuating costs, will decelerate its further spread.
The sodium ion battery (SIB) is a promising successor to LIB technology, as the fundamental setup and cell chemistry are similar in the two systems. Yet, the most widespread negative electrode material in LIBs, graphite, cannot be used in SIBs, as it cannot store sufficient amounts of sodium at reasonable potentials. Hence, another carbon allotrope, non-graphitizing or hard carbon (HC), is used in SIBs. This material consists of turbostratically disordered, curved graphene layers, forming regions of graphitic stacking and zones of deviating layers, so-called internal or closed pores.
The structural features of HC have a substantial impact on the charge-potential curve exhibited by the carbon when it is used as the negative electrode in an SIB. At defects and edges, an adsorption-like mechanism of sodium storage is prevalent, causing a sloping voltage curve ill-suited for practical application in SIBs, whereas a constant-voltage plateau of relatively high capacity is found immediately after the sloping region, which recent research has attributed to the deposition of quasimetallic sodium into the closed pores of HC.
Literature on the general mechanism of sodium storage in HCs, and especially on the role of the closed pore, is abundant, but research on the influence of the pore geometry and chemical nature of the HC on the low-potential sodium deposition is still at an early stage. Therefore, the scope of this thesis is to investigate these relationships using suitable synthetic and characterization methods. Materials of precisely known morphology, porosity, and chemical structure are prepared, in clear distinction to commonly obtained ones, and their impact on the sodium storage characteristics is observed. Electrochemical impedance spectroscopy in combination with distribution of relaxation times analysis is further established as a technique to study the sodium storage process, in addition to classical direct current techniques, and an equivalent circuit model is proposed to qualitatively describe the HC sodiation mechanism based on the recorded data. The obtained knowledge is used to develop a method for the preparation of closed-porous and non-porous materials from open-porous ones, proving not only the necessity of closed pores for efficient sodium storage, but also providing a method for effective pore closure and hence an increase of the sodium storage capacity and efficiency of carbon materials.
The insights obtained and methods developed within this work hence not only contribute to the better understanding of the sodium storage mechanism in carbon materials of SIBs, but can also serve as guidance for the design of efficient electrode materials.
A numerical bifurcation analysis of the electrically driven plane sheet pinch is presented. The electrical conductivity varies across the sheet such as to allow instability of the quiescent basic state at some critical Hartmann number. The most unstable perturbation is the two-dimensional tearing mode. Restricting the whole problem to two spatial dimensions, this mode is followed up to a time-asymptotic steady state, which proves to be sensitive to three-dimensional perturbations even close to the point where the primary instability sets in. A comprehensive three-dimensional stability analysis of the two-dimensional steady tearing-mode state is performed by varying parameters of the sheet pinch. The instability with respect to three-dimensional perturbations is suppressed by a sufficiently strong magnetic field in the invariant direction of the equilibrium. For a special choice of the system parameters, the unstably perturbed state is followed up in its nonlinear evolution and is found to approach a three-dimensional steady state.
This PhD thesis presents the spatio-temporal distribution of terrestrial carbon fluxes for the period 1982 to 2002, simulated by a combination of the process-based dynamic global vegetation model LPJ and a 21-year time series of global AVHRR-fPAR data (fPAR – fraction of photosynthetically active radiation). Assimilation of the satellite data into the model allows improved simulations of carbon fluxes on global as well as regional scales. As it is based on observed data and includes agricultural regions, the model combined with satellite data produces more realistic carbon fluxes of net primary production (NPP), soil respiration, carbon released by fire, and the net land-atmosphere flux than the potential vegetation model. It also produces a good fit to the interannual variability of the CO2 growth rate. Compared to the original model, the satellite-constrained model produces generally smaller carbon fluxes than the purely climate-based stand-alone simulation of potential natural vegetation, now comparing better to literature estimates. The lower net fluxes result from a combination of several effects: reduction in vegetation cover, consideration of human influence and agricultural areas, an improved seasonality, and changes in vegetation distribution and species composition. This study presents a way to assess terrestrial carbon fluxes and elucidates the processes contributing to interannual variability of the terrestrial carbon exchange. Process-based terrestrial modelling and satellite-observed vegetation data are successfully combined to improve estimates of vegetation carbon fluxes and stocks. As net ecosystem exchange is the most interesting and most sensitive factor in carbon cycle modelling, and is highly uncertain, the presented results contribute complementary insights to current knowledge, supporting the understanding of the terrestrial carbon budget.
A constraint programming system combines two essential components: a constraint solver and a search engine. The constraint solver reasons about satisfiability of conjunctions of constraints, and the search engine controls the search for solutions by iteratively exploring a disjunctive search tree defined by the constraint program. The Monadic Constraint Programming framework gives a monadic definition of constraint programming where the solver is defined as a monad threaded through the monadic search tree. Search and search strategies can then be defined as first-class objects that can themselves be built or extended by composable search transformers. Search transformers give a powerful and unifying approach to viewing search in constraint programming, and the resulting constraint programming system is first-class and extremely flexible.
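The idea of search trees as data and strategies as composable transformers can be sketched outside the framework's native Haskell as well. The following Python transliteration is a minimal illustration under that analogy, not the MCP framework's actual API: a search tree is an explicit value, a search is a function over it, and a depth-bound transformer rewrites the tree before handing it to any underlying search.

```python
from dataclasses import dataclass

# A search tree is either a failure, a solution, or a binary choice point.
@dataclass
class Fail:
    pass

@dataclass
class Solution:
    value: object

@dataclass
class Choice:
    left: object
    right: object

def dfs(tree):
    """Plain depth-first enumeration of all solutions in the tree."""
    if isinstance(tree, Solution):
        yield tree.value
    elif isinstance(tree, Choice):
        yield from dfs(tree.left)
        yield from dfs(tree.right)

def depth_bounded(bound):
    """A composable search transformer: prune choice points below a
    depth bound, then pass the transformed tree to any search engine."""
    def transform(tree, depth=0):
        if isinstance(tree, Choice):
            if depth >= bound:
                return Fail()
            return Choice(transform(tree.left, depth + 1),
                          transform(tree.right, depth + 1))
        return tree
    return transform

# Compose: depth-bounded DFS over a small binary search tree.
tree = Choice(Choice(Solution(1), Fail()), Solution(2))
all_solutions = list(dfs(tree))
bounded = list(dfs(depth_bounded(1)(tree)))
```

Because the transformer returns an ordinary tree, transformers compose by function composition and any search engine (DFS here) can consume the result unchanged, which is the flavor of extensibility the abstract describes.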
Background
The metabolic syndrome (MetS) is a risk cluster for a number of secondary diseases. The implementation of prevention programs requires early detection of individuals at risk. However, access to health care providers is limited in structurally weak regions. Brandenburg, a rural federal state in Germany, has an especially high MetS prevalence and disease burden. This study aims to validate and test the feasibility of a setup for mobile diagnostics of MetS and its secondary diseases, to evaluate the MetS prevalence and its association with moderating factors in Brandenburg and to identify new ways of early prevention, while establishing a “Mobile Brandenburg Cohort” to reveal new causes and risk factors for MetS.
Methods
In a pilot study, setups for mobile diagnostics of MetS and secondary diseases will be developed and validated. A van will be equipped as an examination room using point-of-care blood analyzers and by mobilizing standard methods. In study part A, these mobile diagnostic units will be placed at different locations in Brandenburg to locally recruit 5000 participants aged 40-70 years. They will be examined for MetS and advice on nutrition and physical activity will be provided. Questionnaires will be used to evaluate sociodemographics, stress perception, and physical activity. In study part B, participants with MetS, but without known secondary diseases, will receive a detailed mobile medical examination, including MetS diagnostics, medical history, clinical examinations, and instrumental diagnostics for internal, cardiovascular, musculoskeletal, and cognitive disorders. Participants will receive advice on nutrition and an exercise program will be demonstrated on site. People unable to participate in these mobile examinations will be interviewed by telephone. If necessary, participants will be referred to general practitioners for further diagnosis.
Discussion
The mobile diagnostics approach enables early detection of individuals at risk, and their targeted referral to local health care providers. Evaluation of the MetS prevalence, its relation to risk-increasing factors, and the “Mobile Brandenburg Cohort” create a unique database for further longitudinal studies on the implementation of home-based prevention programs to reduce mortality, especially in rural regions.
Trial registration
German Clinical Trials Register, DRKS00022764; registered 07 October 2020—retrospectively registered.
This thesis focuses on the electronic properties of the new material class named topological insulators. Spin- and angle-resolved photoelectron spectroscopy have been applied to reveal several unique properties of the surface state of these materials. The first part of this thesis introduces the methodical background of these well-established experimental techniques.
In the following chapter, the theoretical concept of topological insulators is introduced. Starting from the prominent example of the quantum Hall effect, the application of topological invariants to classify material systems is illuminated. It is explained how, in the presence of time reversal symmetry, which is broken in the quantum Hall phase, strong spin-orbit coupling can drive a system into a topologically non-trivial phase. The prediction of the quantum spin Hall effect in two-dimensional insulators and its generalization to the three-dimensional case of topological insulators are reviewed, together with the first experimental realization of a three-dimensional topological insulator in the Bi1-xSbx alloys given in the literature.
The experimental part starts with the introduction of the Bi2X3 (X=Se, Te) family of materials. Recent theoretical predictions and experimental findings on the bulk and surface electronic structure of these materials are introduced in close comparison with our own experimental results. Furthermore, it is revealed that the topological surface state of Bi2Te3 shares its orbital symmetry with the bulk valence band, and the observed temperature-induced shift of the chemical potential is, with high probability, unmasked as a doping effect due to residual gas adsorption.
The surface state of Bi2Te3 is found to be highly spin polarized, with a polarization value of about 70% in a macroscopic area, while in Bi2Se3 the polarization appears reduced, not exceeding 50%. We argue, however, that the polarization is most likely only extrinsically limited by the finite angular resolution and the lack of detectability of the out-of-plane component of the electron spin. A further argument is based on the reduced surface quality of the single crystals after cleavage and, for Bi2Se3, a sensitivity of the electronic structure to photon exposure.
We probe the robustness of the topological surface state in Bi2X3 against surface impurities in Chapter 5. This robustness stems from the protection provided by time reversal symmetry. Silver deposited on the (111) surface of Bi2Se3 leads to a strong electron doping, but the surface state is observed up to a deposited Ag mass equivalent to one atomic monolayer. The opposite sign of doping, i.e., hole-like, is observed by exposing Bi2Te3 to oxygen. But while the n-type shift of Ag on Bi2Se3 appears to be more or less rigid, O2 lifts the Dirac point of the topological surface state in Bi2Te3 out of the valence band minimum at $\Gamma$. By increasing the oxygen dose further, it is possible to shift the Dirac point to the Fermi level, while the valence band stays well below. The effect is found to be reversible upon warming the samples, which is interpreted in terms of physisorption of O2.
For magnetic impurities, i.e., Fe, we find a similar behavior as for Ag in both Bi2Se3 and Bi2Te3. In this case, however, the robustness is unexpected, since magnetic impurities are capable of breaking time reversal symmetry, which should open a gap in the surface state at the Dirac point and in turn remove the protection. We argue that the fact that the surface state shows no gap must be attributed to a missing magnetization of the Fe overlayer. In Bi2Te3 we are able to observe the surface state for deposited iron mass equivalents in the monolayer regime. Furthermore, we gain control over the sign of doping through the sample temperature during deposition.
Chapter 6 is devoted to the lifetime broadening of the photoemission signal from the topological surface states of Bi2Se3 and Bi2Te3. It is revealed that the hexagonal warping of the surface state in Bi2Te3 introduces an anisotropy for electrons traveling along the two distinct high-symmetry directions of the surface Brillouin zone, i.e., $\Gamma$K and $\Gamma$M. We show that the phonon coupling strength to the surface electrons in Bi2Te3 is in good agreement with the theoretical prediction but, nevertheless, higher than one may expect. We argue that electron-phonon coupling is one of the main contributions to the decay of photoholes, but the relatively small size of the Fermi surface limits the number of phonon modes that can scatter electrons. This effect is manifested in the energy dependence of the imaginary part of the electron self-energy of the surface state, which decays towards higher binding energies, in contrast to the monotonic increase proportional to E$^2$ expected in Fermi liquid theory due to electron-electron interaction.
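For reference, the Fermi-liquid behavior alluded to here is commonly written in the standard textbook form for the electron-electron contribution to the imaginary part of the self-energy (a generic expression, not a result of this thesis):

```latex
\operatorname{Im}\Sigma_{e\text{-}e}(E) \;=\; \beta\left[E^{2} + \left(\pi k_{\mathrm{B}} T\right)^{2}\right]
```

which grows monotonically and quadratically with binding energy $E$, in contrast to the decay towards higher binding energies observed for the surface state.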
Furthermore, the effect of the surface impurities of Chapter 5 on the quasiparticle lifetimes is investigated. We find that Fe impurities have a much stronger influence on the lifetimes than Ag, regardless of the sign of the doping. We argue that this observation suggests only a minor contribution of the warping to increased scattering rates, in contrast to current belief. This is additionally confirmed by the observation that the scattering rates increase further with increasing silver amount while the doping stays constant, and by the fact that clean Bi2Se3 and Bi2Te3 show very similar scattering rates despite the much stronger warping in Bi2Te3.
In the last chapter we report on a strong circular dichroism in the angle distribution of the photoemission signal of the surface state of Bi2Te3. We show that the color pattern obtained by calculating the difference between photoemission intensities measured with opposite photon helicity reflects the pattern expected for the spin polarization. However, we find a strong influence on strength and even sign of the effect when varying the photon energy. The sign change is qualitatively confirmed by means of one-step photoemission calculations conducted by our collaborators from the LMU München, while the calculated spin polarization is found to be independent of the excitation energy. Experiment and theory together unambiguously uncover the dichroism in these systems as a final state effect and the question in the title of the chapter has to be negated: Circular dichroism in the angle distribution is not a new spin sensitive technique.
We present preliminary results of a tailored atmosphere analysis of six Galactic WC stars using UV, optical, and mid-infrared Spitzer IRS data. With these data, we are able to sample regions from 10 to 10³ stellar radii, and thus to determine wind clumping in different parts of the wind. Ultimately, the derived wind parameters will be used to accurately measure neon abundances, and so to test predicted nuclear-reaction rates.
Reactive eutectic media based on ammonium formate for the valorization of bio-sourced materials
(2023)
In the last several decades, eutectic mixtures of different compositions have been successfully used as solvents for a vast number of chemical processes, and only relatively recently were they discovered to be widespread in nature. As such, they are discussed as a third liquid medium of the living cell, composed of common cell metabolites. Such media may also incorporate water as a eutectic component in order to regulate properties such as enzyme activity or viscosity. Taking inspiration from such sophisticated use of eutectic mixtures, this thesis explores the use of reactive eutectic media (REM) for organic synthesis. Such unconventional media are characterized by the reactivity of their components, which means that the mixture may assume the role of the solvent as well as that of the reactant itself.
The thesis focuses on novel REM based on ammonium formate and investigates their potential for the valorization of bio-sourced materials. The use of REM enables a number of solvent-free reactions, which brings the benefits of superior atom and energy economy, higher yields, and faster rates compared to reactions in solution. This is evident for the Maillard reaction between ammonium formate and various monosaccharides for the synthesis of substituted pyrazines, as well as for a Leuckart-type reaction between ammonium formate and levulinic acid for the synthesis of 5-methyl-2-pyrrolidone. Furthermore, the reaction of ammonium formate with citric acid for the synthesis of previously undiscovered fluorophores shows that synthesis in REM can open up unexpected reaction pathways.
Another focus of the thesis is the study of water as a third component in the REM. Here, the concept of two different dilution regimes (tertiary REM and REM in solvent) proves useful for understanding the influence of water. It is shown that small amounts of water can greatly benefit the reaction by reducing viscosity while at the same time increasing reaction yields.
REM based on ammonium formate and organic acids are employed for lignocellulosic biomass treatment. The thesis thereby introduces an alternative approach towards lignocellulosic biomass fractionation that promises a considerable process intensification by the simultaneous generation of cellulose and lignin as well as the production of value-added chemicals from REM components. The thesis investigates the generated cellulose and the pathway to nanocellulose generation and also includes the structural analysis of extracted lignin.
Finally, the thesis investigates the potential of microwave heating to run chemical reactions in REM and describes the synergy between these two approaches. Microwave heating for chemical reactions and the use of eutectic mixtures as alternative reaction media are two research fields that are often described in the scope of green chemistry. The thesis will therefore also contain a closer inspection of this terminology and its greater goal of sustainability.
Air pollution has been a persistent global problem in the past several hundred years. While some industrialized nations have shown improvements in their air quality through stricter regulation, others have experienced declines as they rapidly industrialize. The WHO’s 2021 update of their recommended air pollution limit values reflects the substantial impacts on human health of pollutants such as NO2 and O3, as recent epidemiological evidence suggests substantial long-term health impacts of air pollution even at low concentrations. Alongside developments in our understanding of air pollution's health impacts, the new technology of low-cost sensors (LCS) has been taken up by both academia and industry as a new method for measuring air pollution. Due primarily to their lower cost and smaller size, they can be used in a variety of different applications, including in the development of higher resolution measurement networks, in source identification, and in measurements of air pollution exposure. While significant efforts have been made to accurately calibrate LCS with reference instrumentation and various statistical models, accuracy and precision remain limited by variable sensor sensitivity. Furthermore, standard procedures for calibration still do not exist and most proprietary calibration algorithms are black-box, inaccessible to the public. This work seeks to expand the knowledge base on LCS in several different ways: 1) by developing an open-source calibration methodology; 2) by deploying LCS at high spatial resolution in urban environments to test their capability in measuring microscale changes in urban air pollution; 3) by connecting LCS deployments with the implementation of local mobility policies to provide policy advice on resultant changes in air quality.
In a first step, it was found that LCS can be consistently calibrated with good performance against reference instrumentation using seven general steps: 1) assessing raw data distribution, 2) cleaning data, 3) flagging data, 4) model selection and tuning, 5) model validation, 6) exporting final predictions, and 7) calculating associated uncertainty. By emphasizing the need for consistent reporting of details at each step, most crucially on model selection, validation, and performance, this work pushed forward with the effort towards standardization of calibration methodologies. In addition, with the open-source publication of code and data for the seven-step methodology, advances were made towards reforming the largely black-box nature of LCS calibrations.
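The core of steps 4 through 7 can be illustrated with a deliberately simple sketch. Everything below is an assumption made for illustration: the linear model stands in for whatever model type the work actually selected and tuned, and the synthetic gain/offset/noise sensor replaces real LCS raw data (which additionally drifts with temperature and humidity).

```python
import math
import random

def fit_linear(x, y):
    """Ordinary least squares for y = a*x + b (stand-in for step 4:
    model selection and tuning)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

def calibrate(raw, reference, split=0.7):
    """Fit on a training split, predict on held-out data (steps 5-6),
    and report the hold-out RMSE as a simple uncertainty estimate
    (step 7)."""
    n_train = int(len(raw) * split)
    a, b = fit_linear(raw[:n_train], reference[:n_train])
    preds = [a * x + b for x in raw[n_train:]]
    rmse = math.sqrt(sum((p - t) ** 2 for p, t in
                         zip(preds, reference[n_train:])) / len(preds))
    return (a, b), preds, rmse

# synthetic sensor: the reference signal distorted by gain, offset, noise
rng = random.Random(42)
reference = [20 + 10 * math.sin(i / 10) + rng.gauss(0, 1) for i in range(200)]
raw = [0.8 * r + 5 + rng.gauss(0, 0.5) for r in reference]
(a, b), preds, rmse = calibrate(raw, reference)
```

The hold-out split and the explicit RMSE report mirror the paper's emphasis on consistent validation and uncertainty reporting; in practice steps 1-3 (distribution checks, cleaning, flagging) precede the fit.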
With a transparent and reliable calibration methodology established, LCS were then deployed in various street canyons between 2017 and 2020. Using two types of LCS, metal oxide (MOS) and electrochemical (EC), their performance in capturing expected patterns of urban NO2 and O3 pollution was evaluated. Results showed that calibrated concentrations from MOS and EC sensors matched general diurnal patterns in NO2 and O3 pollution measured using reference instruments. While MOS proved to be unreliable for discerning differences among measured locations within the urban environment, the concentrations measured with calibrated EC sensors matched expectations from modelling studies on NO2 and O3 pollution distribution in street canyons. As such, it was concluded that LCS are appropriate for measuring urban air quality, including for assisting urban-scale air pollution model development, and can reveal new insights into air pollution in urban environments.
To achieve the last goal of this work, two measurement campaigns were conducted in connection with the implementation of three mobility policies in Berlin. The first involved the construction of a pop-up bike lane on Kottbusser Damm in response to the COVID-19 pandemic, the second surrounded the temporary implementation of a community space on Böckhstrasse, and the last was focused on the closure of a portion of Friedrichstrasse to all motorized traffic. In all cases, measurements of NO2 were collected before and after the measure was implemented to assess changes in air quality resultant from these policies. Results from the Kottbusser Damm experiment showed that the bike-lane reduced NO2 concentrations that cyclists were exposed to by 22 ± 19%. On Friedrichstrasse, the street closure reduced NO2 concentrations to the level of the urban background without worsening the air quality on side streets. These valuable results were communicated swiftly to partners in the city administration responsible for evaluating the policies’ success and future, highlighting the ability of LCS to provide policy-relevant results.
As a new technology, much is still to be learned about LCS and their value to academic research in the atmospheric sciences. Nevertheless, this work has advanced the state of the art in several ways. First, it contributed a novel open-source calibration methodology that can be used by LCS end-users for various air pollutants. Second, it strengthened the evidence base on the reliability of LCS for measuring urban air quality, finding through novel deployments in street canyons that LCS can be used at high spatial resolution to understand microscale air pollution dynamics. Last, it is the first of its kind to connect LCS measurements directly with mobility policies to understand their influences on local air quality, resulting in policy-relevant findings valuable for decision-makers. It serves as an example of the potential for LCS to expand our understanding of air pollution at various scales, as well as their ability to serve as valuable tools in transdisciplinary research.
Glaciated high-alpine areas are fundamentally altered by climate change, with well-known implications for hydrology, e.g., due to glacier retreat, longer snow-free periods, and more frequent and intense summer rainstorms. While knowledge on how these hydrological changes will propagate to suspended sediment dynamics is still scarce, it is needed to inform mitigation and adaptation strategies. To understand the processes and source areas most relevant to sediment dynamics, we analyzed discharge and sediment dynamics in high temporal resolution as well as their patterns on several spatial scales, which to date few studies have done.
We used a nested catchment setup in the Upper Ötztal in Tyrol, Austria, where high-resolution (15 min) time series of discharge and suspended sediment concentrations are available for up to 15 years (2006–2020). The catchments of the gauges in Vent, Sölden and Tumpen range from 100 to almost 800 km2 with 10 % to 30 % glacier cover and span an elevation range of 930 to 3772 m a.s.l. We analyzed discharge and suspended sediment yields (SSY), their distribution in space, their seasonality and spatial differences therein, and the relative importance of short-term events. We complemented our analysis by linking the observations to satellite-based snow cover maps, glacier inventories, mass balances and precipitation data.
Our results indicate that the areas above 2500 m a.s.l., characterized by glacier tongues and the most recently deglaciated areas, are crucial for sediment generation in all sub-catchments. This notion is supported by the synchronous spring onset of sediment export at the three gauges, which coincides with snowmelt above 2500 m but lags behind the spring discharge onset. This points to a limitation of suspended sediment supply as long as the areas above 2500 m are snow-covered. The positive correlation of annual SSY with glacier cover (among catchments) and glacier mass balances (within a catchment) further supports the importance of the glacier-dominated areas. The analysis of short-term events showed that summer precipitation events were associated with peak sediment concentrations and yields but on average accounted for only 21 % of the annual SSY in the headwaters. These results indicate that under current conditions, thermally induced sediment export (through snow and glacier melt) is dominant in the study area.
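The relative contribution of short-term events to annual SSY, such as the 21 % quoted above, can in principle be computed from 15 min discharge and concentration series as follows. The function, units, and toy data are illustrative assumptions, not the study's actual processing chain or event-delineation criteria.

```python
def event_contribution(discharge, ssc, event_mask, dt_seconds=900):
    """Fraction of the total suspended sediment yield exported during
    flagged event time steps. Per-step load = Q [m3/s] * SSC [kg/m3]
    * dt [s]; dt_seconds=900 corresponds to 15 min resolution.

    discharge  -- list of discharge values (m3/s)
    ssc        -- list of suspended sediment concentrations (kg/m3)
    event_mask -- list of booleans flagging event time steps
    """
    loads = [q * c * dt_seconds for q, c in zip(discharge, ssc)]
    total = sum(loads)
    event = sum(l for l, flagged in zip(loads, event_mask) if flagged)
    return event / total if total > 0 else 0.0

# toy day: a constant melt baseline plus one 1-hour precipitation event
q = [5.0] * 96
c = [0.1] * 96
for i in range(40, 44):          # four 15-min steps = 1 hour
    q[i], c[i] = 15.0, 1.0
mask = [40 <= i < 44 for i in range(96)]
share = event_contribution(q, c, mask)
```

In a real analysis the event mask would come from precipitation records or hydrograph separation, and the sum would run over a full year rather than a single day.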
Our results extend the scientific knowledge on current hydro-sedimentological conditions in glaciated high-alpine areas and provide a baseline for studies on projected future changes in hydro-sedimentological system dynamics.
A Secular Tradition
(2021)
This article focuses on the social philosopher Horace Kallen and the revisions he made to the concept of cultural pluralism that he first developed in the early 20th century, applying it to postwar America and the young State of Israel. It shows how he opposed the assumption that the United States’ social order was based on a “Judeo-Christian tradition.” By constructing pluralism as a civil religion and carving out space for secular self-understandings in midcentury America, Kallen attempted to preserve the integrity of his earlier political visions, developed during World War I, of pluralist societies in the United States and Palestine within an internationalist global order. While his perspective on the State of Israel was largely shaped by his American experiences, he revised his approach to politically functionalizing religious traditions as he tested his American understanding of a secular, pluralist society against the political theology effective in the State of Israel. The trajectory of Kallen’s thought points to fundamental questions about the compatibility of American and Israeli understandings of religion’s function in society and its relation to political belonging, especially in light of their transnational connection through American Jewish support for the recently established state.
“Israel am Meere”
(2023)
For Jews in Germany, the period following the Nazis’ rise to power in January 1933 was a period of decision-making on many levels: How should they respond to the persecution? If they decided to emigrate, many more decisions had to be made: How does one leave a country, and where should one go? A key moment in the process and in the cultural practice of emigration is the beginning of the sea voyage – when the need for departure and the hope for a new arrival jointly create a period of liminality. Looking at reports from sea voyages of exploration and emigration from the 1930s, this contribution discusses whether, and in what ways, such reflections can be read in the context of religious experiences and in the search for Jewish identities in times of turmoil.
Introduction: The ongoing COVID-19 pandemic caused by SARS-CoV-2 and variants of concern such as B.1.617.2 (Delta) and, recently, B.1.1.529 (Omicron) poses multiple challenges to humanity. The rapid evolution of the virus requires adaptation of diagnostic and therapeutic applications.
Objectives: In this study, we describe camelid heavy-chain-only antibodies (hcAb) as useful tools for novel in vitro diagnostic assays and for therapeutic applications due to their neutralizing capacity.
Methods: Five antibody candidates were selected from a naïve camelid library by phage display and expressed as full-length IgG2 antibodies. The antibodies were characterized by Western blot, enzyme-linked immunosorbent assays, and surface plasmon resonance with regard to their specificity for the recombinant SARS-CoV-2 Spike protein and for SARS-CoV-2 virus-like particles. Neutralization assays were performed with authentic SARS-CoV-2 and pseudotyped viruses (wildtype and Omicron).
Results: All antibodies efficiently detect recombinant SARS-CoV-2 Spike protein and SARS-CoV-2 virus-like particles in different ELISA setups. The best combination was hcAb B10 as capture antibody and HRP-conjugated hcAb A7.2 as the detection antibody. Further, four out of five antibodies potently neutralized authentic wildtype SARS-CoV-2 and particles pseudotyped with the SARS-CoV-2 Spike proteins of the wildtype and Omicron variant, sublineage BA.1, at concentrations between 0.1 and 0.35 ng/mL (ND50).
Conclusion: Collectively, we report novel camelid hcAbs suitable for diagnostics and potential therapy.
In order to improve a recently established cell-based assay for assessing the potency of botulinum neurotoxin, neuroblastoma-derived SiMa cells and induced pluripotent stem cells (iPSCs) were modified to incorporate the coding sequence of a reporter luciferase into a genetic safe harbor using CRISPR/Cas9. A novel method, the double-control quantitative copy number PCR (dc-qcnPCR), was developed to detect off-target integrations of donor DNA. The donor DNA insertion success rate and the targeted insertion success rate were analyzed in clones of each cell type. The dc-qcnPCR reliably quantified the copy number in both cell lines. The probability of incorrect donor DNA integration was significantly higher in SiMa cells than in the iPSCs. This can possibly be explained by the lower bundled relative gene expression of a number of double-strand break repair genes (BRCA1, DNA2, EXO1, MCPH1, MRE11, and RAD51) in SiMa clones than in iPSC clones. The dc-qcnPCR offers an efficient and cost-effective method to detect off-target CRISPR/Cas9-induced donor DNA integrations.
Large parts of the Earth’s interior are inaccessible to direct observation, yet global geodynamic processes are governed by the physical material properties under extreme pressure and temperature conditions. It is therefore essential to investigate the deep Earth’s physical properties through in-situ laboratory experiments. With this goal in mind, the optical properties of mantle minerals at high pressure offer a unique way to determine a variety of physical properties in a straightforward, reproducible, and time-effective manner, thus providing valuable insights into the physical processes of the deep Earth. This thesis focuses on the system Mg-Fe-O, specifically on the optical properties of periclase (MgO) and its iron-bearing variant ferropericlase ((Mg,Fe)O), a major planetary building block. The primary objective is to establish links between physical material properties and optical properties. In particular, the spin transition in ferropericlase, the second-most abundant phase of the lower mantle, is known to change the physical material properties. Although the spin transition region likely extends down to the core-mantle boundary, the effects of the mixed-spin state, where both high- and low-spin states are present, remain poorly constrained.
In the studies presented herein, we show how optical properties are linked to physical properties such as electrical conductivity, radiative thermal conductivity and viscosity. We also show how the optical properties reveal changes in the chemical bonding. Furthermore, we unveil how the chemical bonding, the optical and other physical properties are affected by the iron spin transition. We find opposing trends in the pressure dependence of the refractive index of MgO and (Mg,Fe)O. From 1 atm to ~140 GPa, the refractive index of MgO decreases by ~2.4% from 1.737 to 1.696 (±0.017). In contrast, the refractive index of (Mg0.87Fe0.13)O (Fp13) and (Mg0.76Fe0.24)O (Fp24) ferropericlase increases with pressure, likely because Fe–Fe interactions between adjacent iron sites hinder a strong decrease of polarizability, as is observed with increasing density in the case of pure MgO. An analysis of the index dispersion in MgO (decreasing by ~23% from 1 atm to ~103 GPa) reflects a widening of the band gap from ~7.4 eV at 1 atm to ~8.5 (±0.6) eV at ~103 GPa. The index dispersion (between 550 and 870 nm) of Fp13 reveals a decrease by a factor of ~3 over the spin transition range (~44–100 GPa). We show that the electrical band gap of ferropericlase significantly widens up to ~4.7 eV in the mixed-spin region, equivalent to an increase by a factor of ~1.7. We propose that this is due to a lower electron mobility between adjacent Fe2+ sites of opposite spin, explaining the previously observed low electrical conductivity in the mixed-spin region. From the study of absorbance spectra in Fp13, we show an increasing covalency of the Fe-O bond with pressure for high-spin ferropericlase, whereas in the low-spin state a trend towards a more ionic nature of the Fe-O bond is observed, indicating a bond-weakening effect of the spin transition.
We found that the spin transition is ultimately caused by both an increasing ligand-field splitting energy and a decreasing spin-pairing energy of high-spin Fe2+.
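The competition described above can be summarized in the standard crystal-field picture (a textbook formulation, not quoted from the thesis): the high-spin to low-spin crossover occurs at the pressure where the growing ligand-field splitting Δ overtakes the shrinking spin-pairing energy Π.

```latex
% Delta(P): ligand-field splitting, increasing with compression
% Pi(P):    spin-pairing energy, decreasing with compression
% The transition pressure P_tr is set by the balance of the two:
\Delta(P) < \Pi(P) \;\Rightarrow\; \text{high-spin stable}, \qquad
\Delta(P) > \Pi(P) \;\Rightarrow\; \text{low-spin stable}, \qquad
\Delta(P_{\mathrm{tr}}) = \Pi(P_{\mathrm{tr}})
```

In a mixed-spin region around P_tr, both conditions are nearly degenerate and the two states coexist, consistent with the broad transition range reported above.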
The complex hierarchical structure of bone undergoes a lifelong remodeling process through which it adapts to mechanical needs. In this process, bone resorption by osteoclasts and bone formation by osteoblasts must be balanced to sustain a healthy and stable organ. Osteocytes orchestrate this interplay by sensing mechanical strains and translating them into biochemical signals. The osteocytes are located in lacunae and are connected to one another and to other bone cells via cell processes running through small channels, the canaliculi. Lacunae and canaliculi form a network (LCN) of extracellular spaces that is able to transport ions and enables cell-to-cell communication. Osteocytes might also contribute to mineral homeostasis through direct interactions with the surrounding matrix. If the LCN acts as a transport system, this should be reflected in the mineralization pattern. The central hypothesis of this thesis is that osteocytes actively change their material environment. Characterization methods from materials science are used to detect traces of this interaction between osteocytes and the extracellular matrix. First, healthy murine bones were characterized. The properties analyzed were then compared with three murine model systems: 1) a loading model, where a bone of the mouse was loaded during its lifetime; 2) a healing model, where a bone of the mouse was cut to induce a healing response; and 3) a disease model, where the Fbn1 gene is dysfunctional, causing defects in the formation of the extracellular tissue.
The measurement strategy included routines that make it possible to analyze the organization of the LCN and the material components (i.e., the organic collagen matrix and the mineral particles) in the same bone volumes and compare the spatial distribution of different data sets. The three-dimensional network architecture of the LCN is visualized by confocal laser scanning microscopy (CLSM) after rhodamine staining and is then subsequently quantified. The calcium content is determined via quantitative backscattered electron imaging (qBEI), while small- and wide-angle X-ray scattering (SAXS and WAXS) are employed to determine the thickness and length of local mineral particles.
First, tibiae cortices of healthy mice were characterized to investigate how changes in LCN architecture can be attributed to interactions of osteocytes with the surrounding bone matrix. The tibial mid-shaft cross-sections showed two main regions, consisting of a band with unordered LCN surrounded by a region with ordered LCN. The unordered region is a remnant of early bone formation and exhibited short and thin mineral particles. The surrounding, more aligned bone showed ordered and dense LCN as well as thicker and longer mineral particles. The calcium content was unchanged between the two regions.
In the mouse loading model, the left tibia underwent two weeks of mechanical stimulation, which results in increased bone formation and decreased resorption in skeletally mature mice. The specific research question addressed here was how bone material characteristics change at (re)modeling sites. The new bone formed in response to mechanical stimulation showed mineral particle properties similar to those of the ordered region, but a lower calcium content compared to the right, non-loaded control bone of the same mice. There was a clear, recognizable border between mature and newly formed bone. Nevertheless, some canaliculi crossed this border, connecting the LCN of mature and newly formed bone.
A further question was whether the LCN topology and the bone matrix material properties adapt to loading. Although mechanically stimulated bones did not show differences in calcium content compared to controls, different correlations were found between local LCN density and local Ca content depending on whether the bone was loaded or not. These results suggest that the LCN may serve as a mineral reservoir.
For the healing model, the femurs of mice underwent an osteotomy, were stabilized with an external fixator, and were allowed to heal for 21 days. Thus, the spatial variations in LCN topology and mineral properties within different tissue types and at their interfaces, namely calcified cartilage, bony callus and cortex, could be simultaneously visualized and compared in this model. All tissue types showed structural differences across multiple length scales. Calcium content increased and became more homogeneous from calcified cartilage to bony callus to lamellar cortical bone. The degree of LCN organization increased as well, while the lacunae became smaller, as did the lacunar density, between these different tissue types that make up the callus. In the calcified cartilage, the mineral particles were short and thin. The newly formed callus exhibited thicker mineral particles, which still had a low degree of orientation. While most of the callus had a woven-like structure, it also served as a scaffold for more lamellar tissue at the edges. The lamellar callus bone showed thinner mineral particles, but a higher degree of alignment of both the mineral particles and the LCN. The cortex showed the highest values for mineral particle length, thickness and degree of orientation. At the same time, the lacuna number density was 34% lower and the lacunar volume 40% smaller compared to the bony callus. The transition zone between the cortical and callus regions showed a continuous convergence of bone mineral properties and lacuna shape. Although only a few canaliculi connected the callus and the cortical region, this indicates that communication between osteocytes of both tissues should be possible. The presented correlations between LCN architecture and mineral properties across tissue types suggest that osteocytes may have an active role in the mineralization processes of healing.
A mouse model of the disease Marfan syndrome, which involves a genetic defect in the fibrillin-1 gene, was investigated. In humans, Marfan syndrome is characterized by a range of clinical symptoms such as long bone overgrowth, loose joints, reduced bone mineral density, compromised bone microarchitecture, and increased fracture rates. Fibrillin-1 thus seems to play a role in skeletal homeostasis. The present work therefore studied how Marfan syndrome alters the LCN architecture and the surrounding bone matrix. The mice with Marfan syndrome had longer tibiae than their healthy littermates from an age of seven weeks onwards. In contrast, their cortical development appeared retarded, which was observed across all measured characteristics, i.e., lower endocortical bone formation, a looser and less organized lacuno-canalicular network, less collagen orientation, and thinner and shorter mineral particles.
In each of the three model systems, this study found that changes in the LCN architecture spatially correlated with bone matrix material parameters. While not knowing the exact mechanism, these results provide indications that osteocytes can actively manipulate a mineral reservoir located around the canaliculi to make a quickly accessible contribution to mineral homeostasis. However, this interaction is most likely not one-sided, but could be understood as an interplay between osteocytes and extra-cellular matrix, since the bone matrix contains biochemical signaling molecules (e.g. non-collagenous proteins) that can change osteocyte behavior. Bone (re)modeling can therefore not only be understood as a method for removing defects or adapting to external mechanical stimuli, but also for increasing the efficiency of possible osteocyte-mineral interactions during bone homeostasis. With these findings, it seems reasonable to consider osteocytes as a target for drug development related to bone diseases that cause changes in bone composition and mechanical properties. It will most likely require the combined effort of materials scientists, cell biologists, and molecular biologists to gain a deeper understanding of how bone cells respond to their material environment.
Twenty-one scientists met for this year’s virtual conference on Auxology, held at the University of Potsdam, Germany, to discuss child and adolescent growth during times of fear and emotional stress. Growth within the broad range of normal for age and sex is considered a sign of good general health, whereas fear and emotional stress can lead to growth faltering. Stunting is a sign of social disadvantage and poor parental education. Adverse childhood experiences affect child development, particularly in families with low parental education and low socioeconomic status. Negative effects were also shown in Indian children exposed prenatally and in early postnatal life to the cyclone Aila in 2009. Distrust, fears and fake news regarding the current Corona pandemic received particular attention, though the effects generally appeared weak. Mean birth weight was higher; rates of low, very low and extremely low birth weight were lower. Other topics discussed by the participants were the influences of economic crises on birth weight, the measurement of self-confidence and its impact on growth, the associations between obesity, peer relationships, and behavior among Turkish adolescents, height trends in Indonesia, physiological neonatal weight loss, methods for assessing biological maturation in sportsmen, and a new method for skeletal age determination. The participants also discussed the association between acute myocardial infarction and somatotype in Estonia, rural-urban growth differences in Mongolian children, socio-environmental conditions and sexual dimorphism, biological mortality bias, and new statistical techniques for describing inhomogeneity in the association of bivariate variables and for detecting and visualizing extensive interactions among variables.
Background: Members of the same social group tend to have the same body height. Migrants tend to adjust in height to their host communities.
Objectives: Social-Economic-Political-Emotional (SEPE) factors influence growth. We hypothesized that Vietnamese young adult migrants in Germany (1) are taller than their parents, (2) are as tall as their German peers, and (3) are as tall as predicted by height expectation at age 13 years.
Sample: The study was conducted in 30 male and 54 female Vietnamese migrants (mean age 26.23 years, SD = 4.96) in Germany in 2020.
Methods: Information on age, sex, body height, school and education, job, height and ethnicity of best friend, migration history and cultural identification, parental height and education, and recalled information on their personal height expectations at age 13 years were obtained by questionnaire. The data were analyzed by St. Nicolas House Analysis (SNHA) and multiple regression.
Results: Vietnamese young adults are taller than their parents (females 3.85 cm, males 7.44 cm), but do not fully attain the height of their German peers. Body height is positively associated with the height of the best friend (p < 0.001), the height expectation at age 13 years (p < 0.001), and father’s height (p = 0.001).
Conclusion: Body height of Vietnamese migrants in Germany reflects competitive growth and strategic growth adjustments. The magnitude of this intergenerational trend supports the concept that human growth depends on SEPE factors.
What does stunting tell us?
(2023)
Stunting is commonly linked with undernutrition. Yet already after World War I, German pediatricians questioned this link and stated that no association exists between nutrition and height. Recent analyses within different populations of low- and middle-income countries with high rates of stunted children failed to support the assumption that stunted children have a low BMI and low skinfold thickness as signs of severe caloric deficiency. Stunting is thus not a synonym for malnutrition. Parental education level has a positive influence on body height in stunted populations, e.g., in India and in Indonesia. Socially disadvantaged children tend to be shorter and lighter than children from affluent families.
Humans are social mammals; they regulate growth similar to other social mammals. Also in humans, body height is strongly associated with the position within the social hierarchy, reflecting the personal and group-specific social, economic, political, and emotional environment. These non-nutritional impact factors on growth are summarized by the concept of SEPE (Social-Economic-Political-Emotional) factors. SEPE reflects on prestige, dominance-subordination, social identity, and ego motivation of individuals and social groups.
Background: Physical fitness is decreased in malnourished children and adults. Poor appearance and muscular flaccidity are among the first signs of malnutrition. Malnutrition is often associated with stunting.
Objectives: We test the hypothesis that stunted children of low social strata are physically less fit than children of high social strata.
Sample: We investigated 354 school girls and 369 school boys aged 5.83 to 13.83 (mean 9.54) years from three different social strata in Kupang (West-Timor, Indonesia) in 2020.
Methods: We measured height, weight, and elbow breadth, calculated standard deviation scores (SDS) of height and weight according to CDC references and the Frame index as an indicator of long-term physical fitness, and tested physical fitness by standing long jump and hand grip strength.
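For illustration, the Frame index mentioned above is commonly computed as elbow breadth relative to body height (the Frisancho-style definition; whether the study used exactly this formula and which reference cut-offs were applied is an assumption here):

```python
# Hedged sketch: Frame index as elbow breadth (mm) relative to body height (cm),
# scaled by 100 (a common definition; the study's exact variant is assumed).

def frame_index(elbow_breadth_mm: float, height_cm: float) -> float:
    """Return the Frame index: elbow breadth (mm) / body height (cm) * 100."""
    return elbow_breadth_mm / height_cm * 100.0

# example: a child with 52 mm elbow breadth and 140 cm height
print(round(frame_index(52.0, 140.0), 2))  # 37.14
```

Higher values indicate a sturdier skeletal frame, which is why the index is used here as a proxy for long-term physical robustness rather than momentary performance.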
Results: Children of low social strata are the physically fittest. They jump longer distances, and they have higher values of the Frame index. No association exists between height SDS and physical fitness, neither for the standing long jump nor for hand grip strength.
Conclusion: Stunting does not impair physical fitness in Indonesian school children. Our results support the concept that SEPE (social-economic-political-emotional) factors are involved in the regulation of human growth.
Understanding heat transport in sedimentary basins requires an assessment of the regional 3D heat distribution and of the main physical mechanisms responsible for the transport of heat. We review results from different 3D numerical simulations of heat transport based on 3D basin models of the Central European Basin System (CEBS). To this end, we compare differently detailed 3D structural models of the area, each previously published individually, to assess the influence on the resulting thermal field and groundwater flow of (1) different configurations of the deeper lithosphere, (2) the mechanism of heat transport considered and (3) large faults dissecting the sedimentary succession. Based on this comparison we propose a modelling strategy linking the regional and lithosphere scale to the sub-basin and basin-fill scale while appropriately considering the effective heat transport processes. We find that conduction, as the dominant mechanism of heat transport in sedimentary basins, is controlled by the distribution of thermal conductivities, by compositional and thickness variations of both the conductive and radiogenic crystalline crust and the insulating sediments, and by variations in the depth to the thermal lithosphere-asthenosphere boundary. Variations of these factors cause thermal anomalies of specific wavelength and must be accounted for in regional thermal studies. In addition, advective heat transport also exerts control on the thermal field at the regional scale. In contrast, convective heat transport and heat transport along faults are only locally important and need to be considered for exploration at the reservoir scale. The general applicability of the proposed workflow makes it of interest for a broad range of applications in the geosciences, including oil and gas exploration, geothermal utilization and carbon capture and sequestration.
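The conductive and advective regimes contrasted above are commonly expressed, in basin-scale simulations, through a steady-state heat balance of the following textbook form (a generic formulation, not an equation quoted from the paper):

```latex
% lambda: bulk thermal conductivity      T: temperature
% A:      radiogenic heat production     \vec{v}: Darcy (fluid) velocity
% rho_f c_f: volumetric heat capacity of the pore fluid
% Conduction plus radiogenic production balances advection by groundwater flow:
\nabla \cdot \left( \lambda \,\nabla T \right) + A
  \;=\; \rho_f c_f \,\vec{v} \cdot \nabla T
```

With v = 0 this reduces to the purely conductive case controlled by the conductivity and heat-production structure; a non-zero Darcy velocity introduces the regional advective perturbations discussed in the abstract.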
The advent of large-scale and high-throughput technologies has recently caused a shift in focus in contemporary biology from decades of reductionism towards a more systemic view. Alongside the availability of genome sequences, the exploration of organisms using such approaches should give rise to a more comprehensive understanding of complex systems. Domestication and intensive breeding of crop plants has led to a parallel narrowing of their genetic basis. The potential to improve crops by conventional breeding using elite cultivars is therefore rather limited, and molecular technologies such as marker-assisted selection (MAS) are currently being exploited to re-introduce allelic variance from wild species. Molecular breeding strategies have to date mostly focused on the introduction of yield- or resistance-related traits. However, given that medical research has highlighted the importance of crop compositional quality in the human diet, this research field is rapidly becoming more important. The chemical composition of biological tissues can be efficiently assessed by metabolite profiling techniques, which allow the multivariate detection of metabolites in a given biological sample. Here, a GC/MS metabolite profiling approach has been applied to investigate natural variation in tomatoes with respect to the chemical composition of their fruits. The establishment of a mass spectral and retention index (MSRI) library was a prerequisite for this work in order to establish a framework for the identification of metabolites from a complex mixture. As mass spectral and retention index information is highly important for the metabolomics community, this library was made publicly available. Metabolite profiling of tomato wild species revealed large differences in chemical composition, especially of amino and organic acids, as well as in sugar composition and secondary metabolites. Intriguingly, the analysis of a set of S. pennellii introgression lines (ILs) identified 889 quantitative trait loci of compositional quality and 326 yield-associated traits. These traits are characterized by increases/decreases not only of single metabolites but also of entire metabolic pathways, thus highlighting the potential of this approach in uncovering novel aspects of metabolic regulation. Finally, the biosynthetic pathway of the phenylalanine-derived fruit volatiles phenylethanol and phenylacetaldehyde was elucidated via a combination of metabolic profiling of natural variation, stable isotope tracer experiments and reverse genetic experimentation.
The olfactomotor system is investigated mainly by examining sniffing in reaction to olfactory stimuli. The motor output of respiratory-independent muscles has seldom been considered with regard to possible influences of smells. The Adaptive Force (AF) characterizes the capability of the neuromuscular system to adapt to external forces in a holding manner and has been suggested to be more vulnerable to possible interfering stimuli due to the underlying complex control processes. The aim of this pilot study was to measure the effects of olfactory inputs on the AF of the hip and elbow flexors, respectively. The AF of 10 subjects was examined manually by experienced testers while the subjects smelled sniffing sticks with neutral, pleasant or disgusting odours. The reaction force and the limb position were recorded by a handheld device. The results show, inter alia, a significantly lower maximal isometric AF and a significantly higher AF at the onset of oscillations when perceiving disgusting odours compared to pleasant or neutral odours (p < 0.001). The adaptive holding capacity seems to reflect the functionality of neuromuscular control, which can be impaired by disgusting olfactory inputs. An undisturbed, properly functioning neuromuscular system appears to be characterized by proper length-tension control and by an earlier onset of mutual oscillations during an external force increase. This highlights the strong connection between olfaction and motor control, also with regard to respiratory-independent muscles.
Previous studies have not considered the potential influence of maturity status on the relationship between mental imagery and change of direction (CoD) speed in youth soccer. Accordingly, this cross-sectional study examined the association between mental imagery and CoD performance in young elite soccer players of different maturity status. Forty young male soccer players, aged 10–17 years, were assigned to two groups according to their predicted age at peak height velocity (PHV) (Pre-PHV; n = 20 and Post-PHV; n = 20). Participants were evaluated on soccer-specific tests of CoD with (CoDBall-15m) and without (CoD-15m) the ball. Participants completed the movement imagery questionnaire (MIQ) with its three-dimensional structure: internal visual imagery (IVI), external visual imagery (EVI), and kinesthetic imagery (KI). The Post-PHV players achieved significantly better results than Pre-PHV players in EVI (ES = 1.58, large; p < 0.001), CoD-15m (ES = 2.09, very large; p < 0.001) and CoDBall-15m (ES = 1.60, large; p < 0.001). Correlations differed significantly between maturity groups: for the pre-PHV group, a very large negative correlation was observed between CoDBall-15m and KI (r = –0.73, p = 0.001). For the post-PHV group, large negative correlations were observed between CoD-15m and IVI (r = –0.55, p = 0.011), EVI (r = –0.62, p = 0.003), and KI (r = –0.52, p = 0.020). A large negative correlation of CoDBall-15m with EVI (r = –0.55, p = 0.012) and a very large correlation with KI (r = –0.79, p = 0.001) were also observed. This study provides evidence for the theoretical and practical utility of combining CoD tasks with imagery. We recommend that sport psychology specialists, coaches, and athletes integrate imagery for CoD tasks in pre-pubertal soccer players to further improve CoD-related performance.
Background: Change-of-direction (CoD) ability is a necessary physical quality in field sports and may vary in youth players according to their maturation status.
Objectives: The aim of this study was to compare the effectiveness of a 6-week CoD training intervention on dynamic balance (CS-YBT), horizontal jump (5JT), speed (10 and 30-m linear sprint times), and CoD with (15 m-CoD + B) and without (15 m-CoD) the ball in youth male soccer players at different levels of maturity [pre- and post-peak height velocity (PHV)].
Materials and Methods: Thirty elite male youth soccer players aged 10–17 years from the Tunisian first division participated in this study. The players were divided into pre- (G1, n = 15) and post-PHV (G2, n = 15) groups. Both groups completed a similar 6-week training program with two sessions per week of four CoD exercises. All players completed the following tests before and after the intervention: CS-YBT; 5JT; 10, 30, and 15 m-CoD; and 15 m-CoD + B, and data were analyzed using ANCOVA.
Results: All 30 players completed the study according to the study design and methodology. The adherence rate was 100% across all groups, and no training- or test-related injuries were reported. Pre-PHV and post-PHV groups showed significant improvements post-intervention for all dependent variables (post-test > pre-test; p < 0.01, d = 0.09–1.51). ANOVA revealed a significant group × time interaction only for CS-YBT (F = 4.45; p < 0.04; η² = 0.14), 5JT (F = 6.39; p < 0.02; η² = 0.18), and 15 m-CoD (F = 7.88; p < 0.01; η² = 0.22). CS-YBT, 5JT, and 15 m-CoD improved significantly more in the post-PHV group (+4.56%, effect size = 1.51; +4.51%, effect size = 1.05; and −3.08%, effect size = 0.51, respectively) than in the pre-PHV group (+2.77%, effect size = 0.85; +2.91%, effect size = 0.54; and −1.56%, effect size = 0.20, respectively).
Conclusion: The CoD training program improved balance, horizontal jump, and CoD without the ball in male preadolescent and adolescent soccer players, and this improvement was greater in the post-PHV players. The maturity status of the athletes should be considered when programming CoD training for soccer players.
Isoflux tension propagation (IFTP) theory and Langevin dynamics (LD) simulations are employed to study the dynamics of channel-driven polymer translocation, in which a polymer translocates into a narrow channel and the monomers in the channel experience a driving force f_c. In the high driving force limit, regardless of the channel width, IFTP theory predicts τ ∝ f_c^β for the translocation time, where β = −1 is the force scaling exponent. Moreover, LD data show that for a very narrow channel fitting only a single file of monomers, the entropic force due to the subchain inside the channel does not play a significant role in the translocation dynamics, and the force exponent β = −1 regardless of the force magnitude. As the channel width increases, the number of possible spatial configurations of the subchain inside the channel becomes significant, and the resulting entropic force causes the force exponent to drop below unity.
Morphological analyses based on word-syntax approaches can encounter difficulties with long-distance dependencies. The reason is that in some cases an affix has to have access to the inner structure of the form with which it combines. One solution is the percolation of features from the inner morphemes to the outer morphemes with some process of feature unification. However, the obstacle of percolation constraints or stipulated features has led some linguists to argue in favour of other frameworks such as, e.g., realizational morphology or parallel approaches like optimality theory. This paper proposes a linguistic analysis of two long-distance dependencies in the morphology of Russian verbs, namely secondary imperfectivization and deverbal nominalization. We show how these processes can be reanalysed as local dependencies. Although finite-state frameworks are not bound by such linguistically motivated considerations, we present an implementation of our analysis as proposed in [1] that does not complicate the grammar or enlarge the network disproportionately.
Wheat alpha-amylase/trypsin inhibitors (ATIs) remain a subject of interest considering the latest findings showing their implication in non-celiac wheat sensitivity (NCWS). Their role in this disorder is still not fully understood, and the lack of pure ATI molecules is one of the limiting problems for further study. In this work, a simplified approach based on the successive fractionation of ATI extracts by reverse phase and ion exchange chromatography was developed. ATIs were first extracted from wheat flour using a combination of Tris buffer and chloroform/methanol methods. The separation of the extracts on a C18 column generated two main fractions of interest, F1 and F2. The response surface methodology with the Doehlert design was used to optimize the operating parameters of the strong anion exchange chromatography. Finally, the seven major wheat ATIs, namely P01083, P17314, P16850, P01085, P16851, P16159, and P83207, were recovered with purity levels (according to the targeted LC-MS/MS analysis) of 98.2 ± 0.7, 98.1 ± 0.8, 97.9 ± 0.5, 95.1 ± 0.8, 98.3 ± 0.4, 96.9 ± 0.5, and 96.2 ± 0.4%, respectively. MALDI-TOF-MS analysis revealed single peaks in each of the pure fractions, and the mass analysis yielded deviations of 0.4, 1.9, 0.1, 0.2, 0.2, 0.9, and 0.1% between the theoretical and the determined masses of P01083, P17314, P16850, P01085, P16851, P16159, and P83207, respectively. Overall, the study established an efficient purification process for the most important wheat ATIs. This paves the way for further in-depth investigation of the ATIs, to gain more knowledge of their involvement in NCWS disease and to enable their absolute quantification in wheat samples.
Goal-oriented dialog as a collaborative subordinated activity involving collective acceptance
(2006)
Modeling dialog as a collaborative activity consists notably in specifying the content of the Conversational Common Ground and the kind of social mental state involved. In previous work (Saget, 2006), we claimed that Collective Acceptance is the proper social attitude for modeling the Conversational Common Ground in the particular case of goal-oriented dialog. Here we provide a formalization of Collective Acceptance, together with the elements needed to integrate this attitude into a rational model of dialog; finally, we present a model of referential acts as part of a collaborative activity. The particular case of reference was chosen to exemplify our claims.
Today, more than half of the world’s population lives in urban areas. With a high density of population and assets, urban areas are not only the economic, cultural, and social hubs of every society but also highly susceptible to natural disasters. As a consequence of rising sea levels and an expected increase in extreme weather events caused by a changing climate, in combination with growing cities, flooding is an increasing threat to many urban agglomerations around the globe.
To mitigate the destructive consequences of flooding, appropriate risk management and adaptation strategies are required. So far, flood risk management in urban areas has focused almost exclusively on managing river and coastal flooding. Often overlooked is the risk from small-scale rainfall-triggered flooding, in which the rainfall intensity of rainstorms exceeds the capacity of urban drainage systems, leading to immediate flooding. Referred to as pluvial flooding, this flood type exclusive to urban areas has caused severe losses in cities around the world. Without further intervention, losses from pluvial flooding are expected to increase in many urban areas due to an increase of impervious surfaces, compounded by an aging drainage infrastructure and a projected increase in heavy precipitation events. While this requires the integration of pluvial flood risk into risk management plans, so far little is known about the adverse consequences of pluvial flooding due to a lack of both detailed data sets and studies on pluvial flood impacts. As a consequence, methods for reliably estimating pluvial flood losses, needed for pluvial flood risk assessment, are still missing.
Therefore, this thesis investigates how pluvial flood losses to private households can be reliably estimated, based on an improved understanding of the drivers of pluvial flood loss. For this purpose, detailed data from pluvial flood-affected households was collected through structured telephone and web surveys following pluvial flood events in Germany and the Netherlands.
Pluvial flood losses to households are the result of complex interactions between impact characteristics, such as the water depth, and a household’s resistance, as determined by its risk awareness, preparedness, emergency response, building properties, and other influencing factors. Both exploratory analysis and machine-learning approaches were used to analyze differences in resistance and impacts between households and their effects on the resulting losses. The comparison of case studies showed that awareness of pluvial flooding among private households is quite low. Low awareness not only challenges the effective dissemination of early warnings, but was also found to influence the implementation of private precautionary measures. The latter were predominantly implemented by households with previous experience of pluvial flooding. Even cases where previous flood events affected a different part of the same city did not lead to an increase in preparedness of the surveyed households, highlighting the need to account for small-scale variability in both impact and resistance parameters when assessing pluvial flood risk.
While it was concluded that the combination of low awareness, ineffective early warning and the fact that only a minority of buildings were adapted to pluvial flooding impaired the coping capacities of private households, the often low water levels still enabled households to mitigate or even prevent losses through a timely and effective emergency response.
These findings were confirmed by the detection of loss-influencing variables, showing that cases in which households were able to prevent any loss to the building structure are predominantly explained by resistance variables such as the household’s risk awareness, while the degree of loss is mainly explained by impact variables.
Based on the important loss-influencing variables detected, different flood loss models were developed. Similar to flood loss models for river floods, the empirical data from the preceding data collection was used to train flood loss models describing the relationship between impact and resistance parameters and the resulting loss to building structures. Different approaches were adapted from river flood loss models, using both models with the water depth as the only predictor of building structure loss and models incorporating additional variables from the preceding variable detection routine.
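Depth-only loss models of the kind mentioned above are commonly implemented as stage-damage curves relating water depth to a relative loss. A minimal sketch, assuming an illustrative square-root functional form and hypothetical survey records (not the thesis's fitted model or data):

```python
import numpy as np

def fit_stage_damage(depths_cm, loss_ratios):
    """Fit loss = a * sqrt(depth) by least squares through the origin.
    The square-root form is illustrative only; other curves are common."""
    x = np.sqrt(np.asarray(depths_cm, dtype=float))
    y = np.asarray(loss_ratios, dtype=float)
    a = float(x @ y / (x @ x))  # closed-form least-squares slope
    # clip predictions to the valid loss-ratio range [0, 1]
    return lambda d: np.clip(a * np.sqrt(d), 0.0, 1.0)

# hypothetical records: water depth (cm) and relative building structure loss
depths = [5, 10, 20, 40, 80]
losses = [0.01, 0.02, 0.04, 0.06, 0.10]
model = fit_stage_damage(depths, losses)
print(float(model(50)))  # predicted loss ratio at 50 cm water depth
```

A point prediction like this returns a single loss ratio per depth, which is exactly the limitation the probabilistic framework described below addresses.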
The high predictive errors of all compared models showed that point predictions are not suitable for estimating losses on the building level, as they severely impair the reliability of the estimates. For that reason, a new probabilistic framework based on Bayesian inference was introduced that is able to provide predictive distributions instead of single loss estimates. These distributions not only give a range of probable losses, they also provide information on how likely a specific loss value is, representing the uncertainty in the loss estimate.
Using probabilistic loss models, it was found that the certainty and reliability of a loss estimate on the building level is not only determined by the use of additional predictors, as shown in previous studies, but also by the choice of response distribution defining the shape of the predictive distribution. Here, a mixture of a beta and a Bernoulli distribution, to account for households that are able to prevent losses to their building’s structure, was found to provide significantly more certain and reliable estimates than previous approaches using Gaussian or non-parametric response distributions.
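The Bernoulli-beta mixture response described above can be sketched by sampling from its predictive distribution. The parameter values below are hypothetical placeholders; in the thesis they are inferred via Bayesian methods from impact and resistance variables:

```python
import random

def sample_predictive_loss(p_zero, alpha, beta_, n=10000, seed=1):
    """Draw from a Bernoulli-beta mixture: with probability p_zero the
    household prevents any structural loss (loss = 0); otherwise the
    loss ratio follows a Beta(alpha, beta_) distribution on (0, 1)."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        if rng.random() < p_zero:
            samples.append(0.0)  # structural loss prevented
        else:
            samples.append(rng.betavariate(alpha, beta_))
    return samples

# hypothetical parameters, e.g. driven by risk awareness and water depth
draws = sample_predictive_loss(p_zero=0.4, alpha=2.0, beta_=8.0)
zero_share = sum(d == 0.0 for d in draws) / len(draws)  # ~0.4
mean_loss = sum(draws) / len(draws)                     # ~0.12
```

The resulting set of draws is a full predictive distribution: it yields a range of probable losses and the probability of each, rather than a single point estimate.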
The successful model transfer and post-event application to estimate building structure loss in Houston, TX, caused by pluvial flooding during Hurricane Harvey confirmed previous findings, and demonstrated the potential of the newly developed multi-variable beta model for future risk assessments. The highly detailed input data set constructed from openly available data sources containing over 304,000 affected buildings in Harris County further showed the potential of data-driven, building-level loss models for pluvial flood risk assessment.
In conclusion, pluvial flood losses to private households are the result of complex interactions between impact and resistance variables, which should be represented in loss models. The local occurrence of pluvial floods requires loss estimates on high spatial resolutions, i.e. on the building level, where losses are variable and uncertainties are high.
Therefore, probabilistic loss estimates describing the uncertainty of the estimate should be used instead of point predictions. While the performance of probabilistic models on the building level is mainly driven by the choice of response distribution, multi-variable models are recommended for two reasons:
First, additional resistance variables improve the detection of cases in which households were able to prevent structural losses.
Second, the added variability of additional predictors provides a better representation of the uncertainties when loss estimates from multiple buildings are aggregated.
This leads to the conclusion that data-driven probabilistic loss models on the building level allow for a reliable loss estimation at an unprecedented level of detail, with a consistent quantification of uncertainties on all aggregation levels. This makes the presented approach suitable for a wide range of applications, from decision support in spatial planning to impact-based early warning systems.