Technological progress allows for producing ever more complex predictive models on the basis of increasingly large datasets. For the risk management of natural hazards, a multitude of models is needed as a basis for decision-making, e.g. in the evaluation of observational data, for the prediction of hazard scenarios, or for statistical estimates of expected damage. The question arises how modern modelling approaches such as machine learning or data mining can be meaningfully deployed in this thematic field. In addition, with respect to data availability and accessibility, the trend is towards open data. The topic of this thesis is therefore to investigate the possibilities and limitations of machine learning and open geospatial data in the field of flood risk modelling in the broad sense. As this overarching topic is broad in scope, individual relevant aspects are identified and inspected in detail.
A prominent data source in the flood context is satellite-based mapping of inundated areas, made openly available, for example, by the Copernicus service of the European Union. Great expectations are directed towards these products in the scientific literature, both for the immediate support of relief forces during emergency response and for modelling via hydrodynamic models or for damage estimation. A focus of this work was therefore set on evaluating these flood masks. Starting from the observation that the quality of these products is insufficient in forested and built-up areas, a procedure for their subsequent improvement via machine learning was developed. This procedure is based on a classification algorithm that requires training data only from the particular class to be predicted, in this case data from flooded areas, but not from the negative class (dry areas). The application to Hurricane Harvey in Houston shows the high potential of this method, which depends on the quality of the initial flood mask.
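The one-class setup described above, i.e. training on data from the flooded class only, can be illustrated with a minimal toy classifier: fit a per-feature mean and spread on positive samples, then accept a new sample if it lies close to that distribution. The features, numbers and threshold below are invented for illustration and do not reproduce the algorithm used in the thesis.

```python
# Minimal one-class classifier: fit on positive ("flooded") samples only,
# flag a new sample as positive if it lies close to the training
# distribution. Feature values and the threshold k are illustrative.

def fit_one_class(samples):
    """Return per-feature mean and standard deviation of the positive class."""
    n = len(samples)
    dims = len(samples[0])
    means = [sum(s[d] for s in samples) / n for d in range(dims)]
    stds = [
        (sum((s[d] - means[d]) ** 2 for s in samples) / n) ** 0.5 or 1.0
        for d in range(dims)
    ]
    return means, stds

def predict_one_class(model, x, k=3.0):
    """Positive iff every feature is within k standard deviations of the mean."""
    means, stds = model
    return all(abs(x[d] - means[d]) <= k * stds[d] for d in range(len(x)))

# Toy backscatter-like features for "flooded" training pixels only:
flooded = [(0.10, 0.9), (0.12, 1.0), (0.09, 1.1), (0.11, 0.95)]
model = fit_one_class(flooded)

print(predict_one_class(model, (0.10, 1.0)))   # resembles the training data
print(predict_one_class(model, (0.80, 0.1)))   # far from the training data
```

The point of the one-class formulation is that no labelled "dry" pixels are ever needed; only the positive class defines the decision region.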
Next, it is investigated how strongly the statistical risk predicted by a process-based model chain depends on the physical process details implemented. This demonstrates what a risk study based on established models can deliver. Even for fluvial flooding, however, such model chains are already quite complex, and they are hardly available for compound or cascading events comprising torrential rainfall, flash floods, and other processes. In the fourth chapter of this thesis it is therefore tested whether machine learning based on comprehensive damage data can offer a more direct path towards damage modelling, one that avoids the explicit construction of such a model chain. For this purpose, a state-collected dataset of buildings damaged during the severe 2017 El Niño event in Peru is used. In this context, the possibilities of data mining for extracting process knowledge are explored as well. It can be shown that various openly available geodata sources contain useful information for flood hazard and damage modelling of complex events, e.g. satellite-based rainfall measurements, topographic and hydrographic information, mapped settlement areas, as well as indicators derived from spectral data. Furthermore, insights into damaging processes are discovered, which are mainly in line with prior expectations. The maximum rainfall intensity, for example, has a stronger effect in cities and steep canyons, while the rainfall sum was found to be more informative in low-lying river catchments and forested areas. Rural areas of Peru exhibited higher vulnerability in the presented study compared to urban areas. However, the general limitations of the methods and their dependence on specific datasets and algorithms also become obvious.
In the overarching discussion, the different methods – process-based modelling, predictive machine learning, and data-mining – are evaluated with respect to the overall research questions. In the case of hazard observation it seems that a focus on novel algorithms makes sense for future research. In the subtopic of hazard modelling, especially for river floods, the improvement of physical models and the integration of process-based and statistical procedures is suggested. For damage modelling the large and representative datasets necessary for the broad application of machine learning are still lacking. Therefore, the improvement of the data basis in the field of damage is currently regarded as more important than the selection of algorithms.
Recent years have witnessed a rapid rise of stalagmites as palaeoclimate archives. The multitude of geochemical and physical proxies and the promise of a precise and accurate age model greatly appeal to palaeoclimatologists. Although substantial progress has been made in speleothem-based palaeoclimate research, and despite high-resolution records from low-latitude regions proving that palaeo-environmental changes can be archived on sub-annual to millennial time scales, our comprehension of climate dynamics is still fragmentary. This is particularly true for the summer monsoon system on the Indian subcontinent. The Indian summer monsoon (ISM) is an integral part of the intertropical convergence zone (ITCZ). As this rainfall belt migrates northward during boreal summer, it brings monsoonal rainfall. ISM strength depends, however, on a variety of factors, including snow cover in Central Asia and oceanic conditions in the Indian and Pacific Oceans. Presently, many of the factors influencing the ISM are known, though their exact forcing mechanisms and mutual relations remain ambiguous. Attempts to make accurate predictions of rainfall intensity and frequency and of drought recurrence, which are extremely important for South Asian countries, resemble a jigsaw puzzle: all interactions need to fall into the right place to obtain a complete picture. My thesis aims to create a faithful picture of climate change in India covering the last 11,000 years. NE India represents a key region for the Bay of Bengal (BoB) branch of the ISM, as it is here that the monsoon splits into a northwestward- and a northeastward-directed arm. The Meghalaya Plateau is the first barrier for northward-moving air masses and receives excessive summer rainfall, while the winter season is very dry. The proximity of Meghalaya to the Tibetan Plateau on the one hand and the BoB on the other makes the study area a key location for investigating the interaction between the different forcings that govern the ISM.
A basis for the interpretation of palaeoclimate records, and a first important outcome of my thesis, is a conceptual model that explains the observed pattern of seasonal changes in stable isotopes (δ18O and δ2H) in rainfall. I show that although in tropical and subtropical regions the amount effect is commonly invoked to explain strongly depleted isotope values during enhanced rainfall, it alone cannot account for the observed rainwater isotope variability in Meghalaya. Monitoring of rainwater isotopes shows no expected negative correlation between precipitation amount and the δ18O of rainfall. Instead, I find evidence that runoff from high elevations carries an inherited isotopic signature into the BoB, where during the ISM season the freshwater builds a strongly depleted plume on top of the marine water. The vapor originating from this plume is likely to 'memorize' and transmit further these very negative δ18O values. The lack of data does not allow for a quantification of this 'plume effect' on isotopes in rainfall over Meghalaya, but I suggest that it varies on seasonal to millennial timescales, depending on the runoff amount and source characteristics. The focal point of my thesis is the extraction of climatic signals archived in stalagmites from NE India. High uranium concentrations in the stalagmites ensured the excellent age control required for successful high-resolution climate reconstructions. Stable isotope (δ18O and δ13C) and grey-scale data allow unprecedented insights into the millennial to seasonal dynamics of the summer and winter monsoon in NE India. ISM strength (i.e. rainfall amount) is recorded in changes in the δ18O of the stalagmites. The δ13C signal, reflecting drip rate changes, provides a powerful proxy for dry season conditions and shows similarities to temperature-related changes on the Tibetan Plateau. A sub-annual grey-scale profile supports a concept of lower drip rates and slower stalagmite growth during dry conditions.
During the Holocene, the ISM followed a millennial-scale decrease in insolation, with decadal to centennial failures resulting from atmospheric changes. The period of maximum rainfall and enhanced seasonality corresponds to the Holocene Thermal Optimum observed in Europe. After a phase of rather stable conditions, 4.5 kyr ago the strengthening ENSO system came to dominate the ISM. Strong El Niño events weakened the ISM, especially when acting in concert with positive Indian Ocean Dipole events. The strongest droughts of the last 11 kyr are recorded during the past 2 kyr. Taking advantage of the well-dated stalagmite record at hand, I tested the application of laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) to detect sub-annual to sub-decadal changes in element concentrations in stalagmites. The development of a large ablation cell allows for ablating sample slabs of up to 22 cm total length. Each analyzed element is a potential proxy for different climatic parameters. Combining my previous results with the LA-ICP-MS-generated data shows that element concentrations depend not only on rainfall amount and the associated leaching from the soil. Additional factors, like biological activity and hydrogeochemical conditions in the soil and vadose zone, can also affect the element content in drip water and in stalagmites. I present a theoretical conceptual model for my study site to explain how climatic signals can be transmitted and archived in stalagmite carbonate. Further, I establish a first 1500-year-long element record, reconstructing rainfall variability. Additionally, I hypothesize that volcanic eruptions, which produce large amounts of sulfuric acid, can influence soil acidity and hence element mobilization.
The Milky Way is only one out of billions of galaxies in the universe. However, it is a special galaxy because it allows us to explore the main mechanisms involved in its evolution and formation history by unpicking the system star by star. In particular, the chemical fingerprints of its stars provide clues and evidence of past events in the Galaxy’s lifetime. This information helps not only to decipher the current structure and building blocks of the Milky Way, but also to learn more about the general formation process of galaxies.
In the past decade, a multitude of stellar spectroscopic Galactic surveys have scanned millions of stars far beyond the rim of the solar neighbourhood. The obtained spectroscopic information provides unprecedented insights into the chemo-dynamics of the Milky Way. In addition, analytic models and numerical simulations of the Milky Way provide the descriptions and predictions needed for comparison with observations in order to decode the physical properties that underlie the complex system of the Galaxy.
In this thesis, various approaches are taken to connect modern theoretical modelling of galaxy formation and evolution with observations from Galactic stellar surveys. With its focus on the chemo-kinematics of the Galactic disk, this work aims to determine new observational constraints on the formation of the Milky Way, also providing proper comparisons with two different models. These are the population synthesis model TRILEGAL, based on analytical distribution functions, which aims to simulate the number and distribution of stars in the Milky Way and its different components, and a hybrid model (MCM) that combines an N-body simulation of a Milky Way-like galaxy in the cosmological framework with a semi-analytic chemical evolution model for the Milky Way. The major observational datasets in use come from two surveys, namely the “Radial Velocity Experiment” (RAVE) and the “Sloan Extension for Galactic Understanding and Exploration” (SEGUE).
In the first approach, the chemo-kinematic properties of the thin and thick disk of the Galaxy, as traced by a selection of about 20,000 SEGUE G-dwarf stars, are directly compared to the predictions of the MCM model. As a necessary condition for this, SEGUE's selection function and its survey volume are evaluated in detail to correct the spectroscopic observations for their survey-specific selection biases. Also, based on a Bayesian method, spectro-photometric distances with uncertainties below 15% are computed for the selection of SEGUE G-dwarfs, which are studied out to a distance of 3 kpc from the Sun.
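Spectro-photometric distances of the kind mentioned above ultimately rest on the distance modulus relation m − M = 5 log10(d / 10 pc), with the absolute magnitude M inferred from the spectroscopy and photometry. A minimal sketch, with magnitudes that are purely illustrative rather than SEGUE values:

```python
# Distance from apparent magnitude m and absolute magnitude M via the
# distance modulus m - M = 5*log10(d / 10 pc). Magnitudes are illustrative.

def distance_pc(m_apparent, m_absolute):
    """Distance in parsec implied by the distance modulus."""
    return 10 ** ((m_apparent - m_absolute + 5) / 5)

# A Sun-like G dwarf (M_V roughly 5.0, an assumed value) observed at m_V = 17.4:
print(round(distance_pc(17.4, 5.0)))  # distance in parsec, about 3 kpc
```

This also shows why the magnitude (and hence spectroscopic) uncertainties translate into the quoted percentage distance errors: the relation is exponential in m − M.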
For the second approach, two synthetic versions of the SEGUE survey are generated based on the above models. The obtained synthetic stellar catalogues are then used to create mock samples that best resemble the compiled sample of observed SEGUE G-dwarfs. Mock samples are not only ideal for comparing predictions from various models; they also allow the models' quality to be validated and improved, as was achieved in this work especially for TRILEGAL. While TRILEGAL reproduces the statistical properties of the thin and thick disk as seen in the observations, the MCM model proved more suitable for reproducing many of the chemo-kinematic correlations revealed by the SEGUE stars. However, evidence has been found that the MCM model may be missing a stellar component with the properties of the thick disk that the observations clearly show. While the SEGUE stars do indicate a thin-thick dichotomy of the stellar Galactic disk, in agreement with other spectroscopic stellar studies, no sign of a distinct metal-poor disk is seen in the MCM model.
Stellar spectroscopic surveys are usually limited to a certain volume around the Sun, covering different regions of the Galaxy’s disk. This often prevents a global view of the chemo-dynamics of the Galactic disk. Hence, a suitable combination of stellar samples from independent surveys is not only useful for the verification of results but also helps to complete the picture of the Milky Way. Therefore, the thesis closes with a comparison of the SEGUE G-dwarfs and a sample of RAVE giants. The comparison reveals that the chemo-kinematic relations agree in disk regions where the samples of both surveys contain a similar number of stars. In those parts of the survey volumes where one of the surveys lacks statistics, they beautifully complement each other. This demonstrates that theoretical models on the one side, and the combined observational data gathered by multiple surveys on the other, are key ingredients for understanding and disentangling the structure and formation history of the Milky Way.
Deductive databases need general formulas in rule bodies, not only conjunctions of literals. This has been well known since the work of Lloyd and Topor on extended logic programming. Of course, formulas must be restricted in such a way that they can be effectively evaluated in finite time and produce only a finite number of new tuples (in each iteration of the TP operator; the fixpoint can still be infinite). It is also necessary to respect the binding restrictions of built-in predicates: many of these predicates can be executed only when certain arguments are ground. Whereas for standard logic programming rules the questions of safety, allowedness, and range-restriction are relatively easy and well understood, the situation for general formulas is a bit more complicated. We give a syntactic analysis of formulas that guarantees the necessary properties.
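For the standard-rule case, the classical range-restriction condition is that every variable in the rule head must also occur in a positive, non-built-in body literal. This can be sketched as a small syntactic check; the rule representation below is an illustrative assumption, not the paper's formalism, and handles only conjunctions of (possibly negated) literals, which is exactly the easy case the abstract contrasts with general formulas:

```python
# Range-restriction check for simple rules: every head variable must be
# bound by a positive body literal. Literals are (name, args) tuples and,
# Prolog-style, upper-case-initial arguments are variables. This
# representation is an illustrative assumption.

def variables(literal):
    """Set of variables occurring in a literal."""
    _, args = literal
    return {a for a in args if a[0].isupper()}

def is_range_restricted(head, body):
    """Body is a list of (is_positive, literal) pairs."""
    bound = set()
    for positive, literal in body:
        if positive:                      # only positive literals bind variables
            bound |= variables(literal)
    return variables(head) <= bound

# p(X, Y) :- q(X), not r(Y).   -- unsafe: Y occurs only under negation
head = ("p", ("X", "Y"))
body = [(True, ("q", ("X",))), (False, ("r", ("Y",)))]
print(is_range_restricted(head, body))   # False

# p(X, Y) :- q(X), s(Y), not r(Y).   -- safe
body2 = [(True, ("q", ("X",))), (True, ("s", ("Y",))), (False, ("r", ("Y",)))]
print(is_range_restricted(head, body2))  # True
```

For general formulas with nested quantifiers, disjunction, and built-ins with binding patterns, such a check must propagate bound-variable sets through the formula structure, which is the complication the paper addresses.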
Background: The COVID-19 pandemic has highlighted the importance of scientific endeavors. The goal of this systematic review is to evaluate the quality of the research on physical activity (PA) behavior change and its potential to contribute to policy-making processes in the early days of COVID-19 related restrictions.
Methods: We conducted a systematic review, according to PRISMA guidelines and using PubMed and Web of Science, of the methodological quality of articles on PA behavior change published within 365 days after COVID-19 was declared a pandemic by the World Health Organization (WHO). Items from the JBI checklist and the AXIS tool were used for additional risk-of-bias assessment. Evidence mapping is used for better visualization of the main results. Conclusions about the significance of the published articles are based on hypotheses about PA behavior change in the light of the COVID-19 pandemic.
Results: Among the 1,903 identified articles, 36% were opinion pieces, 53% empirical studies, and 9% reviews. Of the 332 studies included in the systematic review, 213 used self-report measures to recollect prepandemic behavior in often small convenience samples. Most focused on changes in PA volume, whereas changes in PA types were rarely measured. The majority had methodological reporting flaws. Few had very large samples with objective measures using a repeated-measures design (before and during the pandemic). In addition to the expected decline in PA duration, these studies show that many of those who were active prepandemic continued to be active during the pandemic.
Conclusions: Research responded quickly at the onset of the pandemic. However, most of the studies lacked robust methodology, and PA behavior change data lacked the accuracy needed to guide policy makers. To improve the field, we propose the implementation of longitudinal cohort studies by larger organizations such as WHO to ease access to data on PA behavior, and suggest those institutions set clear standards for this research. Researchers need to ensure a better fit between the measurement method and the construct being measured, and use both objective and subjective measures where appropriate to complement each other and provide a comprehensive picture of PA behavior.
Cognitive resources contribute to balance control. There is evidence that mental fatigue reduces cognitive resources and impairs balance performance, particularly in older adults and when balance tasks are complex, for example when trying to walk or stand while concurrently performing a secondary cognitive task.
We conducted a systematic literature search in PubMed (MEDLINE), Web of Science and Google Scholar to identify eligible studies and performed a random effects meta-analysis to quantify the effects of experimentally induced mental fatigue on balance performance in healthy adults. Subgroup analyses were computed for age (healthy young vs. healthy older adults) and balance task complexity (balance tasks with high complexity vs. balance tasks with low complexity) to examine the moderating effects of these factors on fatigue-mediated balance performance.
We identified 7 eligible studies with 9 study groups and 206 participants. Analysis revealed that performing a prolonged cognitive task had a small but significant effect (SMDwm = −0.38) on subsequent balance performance in healthy young and older adults. However, age- and task-related differences in balance responses to fatigue could not be confirmed statistically.
Overall, aggregation of the available literature indicates that mental fatigue generally reduces balance in healthy adults. However, interactions between cognitive resource reduction, aging and balance task complexity remain elusive.
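The pooled effect size reported above (SMDwm = −0.38) is a weighted standardized mean difference across studies. Its basic ingredient, a standardized mean difference such as Cohen's d with a pooled standard deviation, can be computed as follows; the numbers are illustrative and unrelated to the reviewed studies:

```python
# Cohen's d with pooled standard deviation, the per-study effect size that
# a random-effects meta-analysis then weights and pools. Values are illustrative.
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference between group 1 and group 2."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Balance score (higher = better) after a fatiguing cognitive task vs. control:
d = cohens_d(46.0, 10.0, 20, 50.0, 10.0, 20)
print(round(d, 2))  # negative d: worse balance after fatigue
```

A random-effects model additionally weights each study's d by the inverse of its (within- plus between-study) variance before averaging, which is what produces the weighted SMD reported in the abstract.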
We have analyzed the spectra of seven Galactic O4 supergiants with the NLTE wind code CMFGEN. For all stars, we have found that clumped wind models match well the lines from different species spanning a wavelength range from the FUV to the optical, and remain consistent with Hα data. We have achieved an excellent match of the P V λλ1118, 1128 resonance doublet and N IV λ1718, as well as He II λ4686, suggesting that our physical description of clumping is adequate. We find very small volume filling factors and that clumping starts deep in the wind, near the sonic point. The most crucial consequence of our analysis is that the mass-loss rates of O stars need to be revised downward significantly, by a factor of 3 or more compared to those obtained from smooth-wind models.
Eye fixation durations during normal reading correlate with processing difficulty, but the specific cognitive mechanisms reflected in these measures are not well understood. This study finds support in German readers’ eye fixations for two distinct difficulty metrics: surprisal, which reflects the change in probabilities across syntactic analyses as new words are integrated, and retrieval, which quantifies comprehension difficulty in terms of working memory constraints. We examine the predictions of both metrics using a family of dependency parsers indexed by an upper limit on the number of candidate syntactic analyses they retain at successive words. Surprisal models all fixation measures and regression probability. By contrast, retrieval does not model any measure in serial processing. As more candidate analyses are considered in parallel at each word, retrieval can account for the same measures as surprisal. This pattern suggests an important role for ranked parallelism in theories of sentence comprehension.
Parsing costs as predictors of reading difficulty: An evaluation using the Potsdam Sentence Corpus
(2008)
The surprisal of a word on a probabilistic grammar constitutes a promising complexity metric for human sentence comprehension difficulty. Using two different grammar types, surprisal is shown to have an effect on fixation durations and regression probabilities in a sample of German readers’ eye movements, the Potsdam Sentence Corpus. A linear mixed-effects model was used to quantify the effect of surprisal while taking into account unigram and bigram frequency, word length, and empirically derived word predictability; both the so-called “early” and “late” measures of processing difficulty showed an effect of surprisal. Surprisal is also shown to have a small but statistically non-significant effect on empirically derived predictability itself. This work thus demonstrates the importance of including parsing costs as a predictor of comprehension difficulty in models of reading, and suggests that a simple identification of early measures with syntactic parsing costs and of late measures with the durations of post-syntactic events may be difficult to uphold.
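Surprisal itself is simply the negative log probability of a word given its preceding context, however that probability is assigned (here by a grammar; elsewhere by n-gram or neural models). A minimal sketch, with illustrative probabilities:

```python
# Surprisal of a word: s(w) = -log2 P(w | context), measured in bits.
# The probabilities below are illustrative, not corpus estimates.
import math

def surprisal_bits(probability):
    """Bits of surprisal for a word with the given conditional probability."""
    return -math.log2(probability)

print(surprisal_bits(0.5))    # a highly predictable word: 1 bit
print(surprisal_bits(0.001))  # an unexpected word: about 10 bits
```

The linking hypothesis tested in the abstract is that reading time grows with this quantity: low-probability continuations carry more bits and should attract longer fixations.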
Strong as a hippo’s heart
(2021)
The heart is composed of multiple tissues that contribute to its physiological functions. During development, the growth of myocardium and endocardium is coupled, and morphogenetic processes within these separate tissue layers are integrated. Here, we discuss the roles of mechanosensitive Hippo signaling in the growth and morphogenesis of the zebrafish heart. Hippo signaling is involved in defining the number of cardiac progenitor cells derived from the secondary heart field, in restricting the growth of the epicardium, and in guiding trabeculation and outflow tract formation. Recent work also shows that myocardial chamber dimensions serve as a blueprint for Hippo signaling-dependent growth of the endocardium. Evidently, Hippo pathway components act at the crossroads of various signaling pathways involved in embryonic zebrafish heart development. Elucidating how biomechanical Hippo signaling guides heart morphogenesis has direct implications for our understanding of cardiac physiology and pathophysiology.
Metals are often used in environments that are conducive to corrosion, which leads to a reduction in their mechanical properties and durability. Coatings are applied to corrosion-prone metals such as aluminum alloys to inhibit the destructive surface process of corrosion in a passive or active way. Standard anticorrosive coatings function as a physical barrier between the material and the corrosive environment and provide passive protection only while intact. In contrast, active protection prevents or slows down corrosion even when the main barrier is damaged. The most effective industrially used active corrosion inhibition for aluminum alloys is provided by chromate conversion coatings. However, their toxicity and worldwide restriction create an urgent need for environmentally friendly corrosion-preventing systems. A promising approach to replacing the toxic chromate coatings is to embed particles containing a nontoxic inhibitor in a passive coating matrix. This work presents the development and optimization of effective anticorrosive coatings for the industrially important aluminum alloy AA2024-T3 using this approach. The protective coatings were prepared by dispersing mesoporous silica containers, loaded with the nontoxic corrosion inhibitor 2-mercaptobenzothiazole, in a passive sol-gel (SiOx/ZrOx) or organic water-based layer. Two types of porous silica containers of different sizes (d ≈ 80 and 700 nm, respectively) were investigated. The studied robust containers exhibit a high surface area (≈ 1000 m² g-1), a narrow pore size distribution (dpore ≈ 3 nm) and a large pore volume (≈ 1 mL g-1), as determined by N2 sorption measurements. These properties favored the subsequent adsorption and storage of a relatively large amount of inhibitor as well as its release in response to pH changes induced by the corrosion process.
The concentration, position and size of the embedded containers were varied to ascertain the optimum conditions for overall anticorrosion performance. Attaining high anticorrosion efficiency was found to require a compromise between delivering an optimal amount of corrosion inhibitor and preserving the coating barrier properties. This study broadens the knowledge about the main factors influencing the coating anticorrosion efficiency and assists the development of optimum active anticorrosive coatings doped with inhibitor loaded containers.
Urban pollution
(2022)
We use worldwide satellite data to analyse how population size and density affect urban pollution. We find that density significantly increases pollution exposure. Looking only at urban areas, we find that population size affects exposure more than density does. Moreover, the effect is driven mostly by the population commuting to core cities rather than by the core city population itself. We analyse heterogeneity by geography and income level. By and large, the influence of population on pollution is greatest in Asia and in middle-income countries. A counterfactual simulation shows that PM2.5 exposure would fall by up to 36% and NO2 exposure by up to 53% if, within countries, population were equalized across all cities.
Property tax competition
(2022)
We develop a model of property taxation and characterize equilibria under three alternative taxation regimes often used in the public finance literature: decentralized taxation, centralized taxation, and “rent seeking” regimes. We show that decentralized taxation results in inefficiently high tax rates, whereas centralized taxation yields a common optimal tax rate, and tax rates in the rent-seeking regime can be either inefficiently high or low. We quantify the effects of switching from the observed tax system to the three regimes for Japan and Germany. The decentralized or rent-seeking regime best describes the Japanese tax system, whereas the centralized regime does so for Germany. We also quantify the welfare effects of regime changes.
We study the effect of energy and transport policies on pollution in two developing-country cities. We use a quantitative equilibrium model with choice of housing, energy use, residential location, transport mode, and energy technology. Pollution comes from commuting and residential energy use. The model parameters are calibrated to replicate key variables for two developing-country cities, Maputo, Mozambique, and Yogyakarta, Indonesia. In the counterfactual simulations, we study how various transport and energy policies affect equilibrium pollution. Policies may induce rebound effects from increased residential energy use or from switching to high-emission modes or locations. In general, these rebound effects tend to be largest for subsidies to public transport or to modern residential energy technology.
Finite-state methods for natural language processing often require the construction and intersection of several automata. In this paper, we investigate the question of determining the best order in which these intersections should be performed. We take as an example lexical disambiguation in polarity grammars. We show that there is no efficient way to minimize the state complexity of these intersections.
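The intersection of two automata is built with the standard product construction, whose state count is what makes the ordering of intersections matter. A minimal sketch for deterministic automata follows; the example automata and their encoding are illustrative, not taken from the paper:

```python
# Product construction for the intersection of two DFAs over a shared
# alphabet. Each DFA is (start, accepting_set, transitions), where
# transitions maps (state, symbol) -> state. The automata are illustrative.

def intersect(dfa1, dfa2):
    """Return the product DFA accepting L(dfa1) ∩ L(dfa2), restricted to reachable states."""
    s1, acc1, t1 = dfa1
    s2, acc2, t2 = dfa2
    start = (s1, s2)
    trans, accepting = {}, set()
    frontier, seen = [start], {start}
    while frontier:
        a, b = frontier.pop()
        if a in acc1 and b in acc2:
            accepting.add((a, b))
        for (state, sym), nxt in t1.items():
            if state == a and (b, sym) in t2:
                target = (nxt, t2[(b, sym)])
                trans[((a, b), sym)] = target
                if target not in seen:
                    seen.add(target)
                    frontier.append(target)
    return start, accepting, trans

def accepts(start, acc, trans, word):
    state = start
    for sym in word:
        state = trans[(state, sym)]
    return state in acc

# DFA 1 accepts strings with an even number of 'a's; DFA 2 accepts
# strings ending in 'a'. Their intersection requires both properties.
d1 = (0, {0}, {(0, "a"): 1, (1, "a"): 0, (0, "b"): 0, (1, "b"): 1})
d2 = (0, {1}, {(0, "a"): 1, (1, "a"): 1, (0, "b"): 0, (1, "b"): 0})
start, acc, trans = intersect(d1, d2)

print(accepts(start, acc, trans, "aa"))  # two 'a's, ends in 'a'
print(accepts(start, acc, trans, "a"))   # odd number of 'a's
```

Since the product has up to |Q1| × |Q2| states, intersecting n automata in different orders can produce very different intermediate sizes, which is the optimization problem the paper shows cannot be solved efficiently.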
In the first section of the thesis, graphitic carbon nitride was synthesised for the first time using the high-temperature condensation of dicyandiamide (DCDA) – a simple molecular precursor – in a eutectic salt melt of lithium chloride and potassium chloride. The extent of condensation, namely the near-complete conversion of all reactive end groups, was verified by elemental microanalysis and vibrational spectroscopy. TEM and SEM measurements gave detailed insight into the well-defined morphology of these organic crystals, which are based not on 0D or 1D constituents, like known molecular or short-chain polymeric crystals, but on the packing motif of extended 2D frameworks. The proposed crystal structure of this g-C3N4 species was derived in analogy to graphite by means of extensive powder XRD studies, indexing and refinement. It is based on sheets of hexagonally arranged s-heptazine (C6N7) units that are held together by covalent bonds between C and N atoms. These sheets stack in a graphitic, staggered fashion adopting an AB motif, as corroborated by powder X-ray diffractometry and high-resolution transmission electron microscopy. This study was contrasted with one of the many popular – yet unsuccessful – approaches of the last 30 years of scientific literature, which attempted the condensation of an extended carbon nitride species through synthesis in the bulk. The second section expands the repertoire of available salt melts by introducing the lithium bromide and potassium bromide eutectic as an excellent medium for obtaining a new phase of graphitic carbon nitride. The combination of SEM, TEM, PXRD and electron diffraction reveals that the new graphitic carbon nitride phase stacks in an ABA’ motif, forming unprecedentedly large crystals. This section takes up the notion of the preceding chapter that condensation in a eutectic salt melt is the key to obtaining a high degree of conversion, mainly through a solvatory effect.
At the close of this chapter, ionothermal synthesis is established as a powerful tool to overcome the inherent kinetic problems of solid state reactions, such as incomplete polymerisation and condensation in the bulk, especially when the temperature requirement of the reaction in question falls into the proverbial “no man’s land” of classical solvents, i.e. above 250 to 300 °C. The following section puts to the test the claim that the crystalline carbon nitrides obtained from a salt melt are indeed graphitic. A typical property of graphite – namely the accessibility of its interplanar space for guest molecules – is transferred to the graphitic carbon nitride system. Metallic potassium and graphitic carbon nitride are converted to give the potassium intercalation compound K(C6N8)3, designated according to its stoichiometry and proposed crystal structure. Reaction of the intercalate with aqueous solvents triggers the exfoliation of the graphitic carbon nitride material and – for the first time – gives access to single (or multi-layer) carbon nitride sheets analogous to graphene, as seen in the formation of sheets, bundles and scrolls of carbon nitride in TEM imaging. The thus exfoliated sheets form a stable, strongly fluorescent solution in aqueous media, which shows no sign in UV/Vis spectroscopy that the aromaticity of individual sheets was degraded. The final section expands on the mechanism underlying the formation of graphitic carbon nitride by literally expanding the distance between the covalently linked heptazine units which constitute these materials. A close examination of all reaction mechanisms proposed to date, in the light of exhaustive DSC/MS experiments, highlights the possibility that the heptazine unit can be formed from smaller molecules, even if some of the designated leaving groups (such as ammonia) are substituted by an element, R, which later remains linked to the nascent heptazine.
Furthermore, it is suggested that the key functional groups in the process are the triazine (Tz) and carbonitrile (CN) groups. On the basis of these assumptions, molecular precursors are tailored which encompass all functional groups necessary to form a central heptazine unit of threefold, planar symmetry and still retain outward functionalities for self-propagated condensation in all three directions. Two model systems based on para-aryl (ArCNTz) and para-biphenyl (BiPhCNTz) precursors are devised via a facile synthetic procedure and then condensed in an ionothermal process to yield the heptazine-based frameworks HBF-1 and HBF-2. Due to the structural motifs of their molecular precursors, individual sheets of HBF-1 and HBF-2 span cavities of 14.2 Å and 23.0 Å, respectively, which makes both materials attractive as potential organic zeolites. Crystallographic analysis confirms the formation of ABA’-layered, graphitic systems, and the extent of condensation is confirmed as next-to-perfect by elemental analysis and vibrational spectroscopy.
Background: Biological age markers are a crucial indicator of whether children are decelerated in growth tempo. Skeletal maturation is the standard measure. Yet, it relies on exposing children to X-radiation. Dental eruption is a potential, but highly debated, radiation-free alternative.
Objectives: We assess the interrelationship between dental eruption and other maturational markers. We hypothesize that dental age correlates with body height and skeletal age. We further evaluate how the three different variables behave in cohorts from differing social backgrounds.
Sample and Method: Dental, skeletal and height data from the 1970s to 1990s from Guatemalan boys were converted into standard deviation scores, using external references for each measurement. The boys, aged between 7 and 12, came from different social backgrounds: middle SES (N = 6529), low-middle SES (N = 736), low SES Ladino (N = 3653) and low SES Maya (N = 4587).
Results: Dental age shows only a weak correlation with skeletal age (0.18) and height (0.20). The cohorts are distinguished differently by each of the three measurements. All cohorts differ significantly in height. In skeletal maturation, the middle SES cohort is significantly advanced compared to all other cohorts. The periodically malnourished cohorts of low SES Mayas and Ladinos are significantly delayed in dental maturation compared to the well-nourished low-middle and middle class Ladino children.
Conclusion: Dental development is an independent system that is regulated by different mechanisms than skeletal development and growth. Tooth eruption is sensitive to nutritional status, whereas skeletal age is more sensitive to socioeconomic background.
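The standard-deviation-score conversion mentioned in the methods can be sketched as follows; the reference mean and SD below are made-up numbers, not the study's external references.

```python
# Converting a raw measurement (height, dental age, skeletal age) into a
# standard deviation score (SDS) against an age/sex-specific external
# reference, so that cohorts measured on different scales become comparable.

def sds(value, ref_mean, ref_sd):
    """Standard deviation score relative to an external reference."""
    return (value - ref_mean) / ref_sd

# A hypothetical 8-year-old boy, 122 cm tall, scored against an invented
# reference of mean 128 cm and SD 5.5 cm for his age group.
score = sds(122.0, 128.0, 5.5)
print(round(score, 2))  # → -1.09
```

A negative SDS marks the boy as below the reference mean; averaging SDS values per cohort is what allows the height, skeletal, and dental comparisons reported above.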
The self-employed faced strong income losses during the Covid-19 pandemic. Many governments, including Germany's, introduced programs to financially support the self-employed during the pandemic. The German Ministry for Economic Affairs announced a €50bn emergency-aid program in March 2020, offering one-off lump-sum payments of up to €15,000 to those facing substantial revenue declines. By reassuring the self-employed that the government ‘would not let them down’ during the crisis, the program also had the important aim of motivating the self-employed to persevere through the crisis. We investigate whether the program affected the confidence of the self-employed to survive the crisis using real-time online-survey data comprising more than 20,000 observations. We employ propensity score matching, making use of a rich set of variables that influence the subjective survival probability, our main outcome measure. We observe that the program had significant effects, moderately increasing the subjective survival probability of the self-employed. We reveal important effect heterogeneities with respect to education, industries, and speed of payment. Notably, positive effects only occur among those self-employed whose application was processed quickly. This suggests stress-induced waiting costs due to the uncertainty associated with the administrative processing and the overall pandemic situation. Our findings have policy implications for the design of support programs, while also contributing to the literature on the instruments and effects of entrepreneurship policy interventions in crisis situations.
Background: As the number of cardiac diseases has continuously increased in modern society in recent years, so has cardiac treatment, especially cardiac catheterization. The procedure of a cardiac catheterization is challenging for both patients and practitioners. Several potential stressors of a psychological or physical nature can occur during the procedure. The objective of the study is to develop and implement a stress management intervention for both practitioners and patients that aims to reduce the psychological and physical strain of a cardiac catheterization.
Methods: The clinical study (DRKS00026624) includes two randomized controlled intervention trials with parallel groups, for patients with elective cardiac catheterization and practitioners at the catheterization lab, in two clinic sites of the Ernst-von-Bergmann clinic network in Brandenburg, Germany. Both groups received different interventions for stress management. The intervention for patients comprises a psychoeducational video presenting different stress management techniques, together with standardized medical information about the cardiac catheterization examination. The control condition is the medical patient education practiced in hospitals before the examination (usual care). Primary and secondary outcomes are measured by physiological parameters and validated questionnaires the day before (M1) and after (M2) the cardiac catheterization and at a postal follow-up 6 months later (M3). It is expected that patients receiving standardized information and psychoeducation show fewer complications during cardiac catheterization procedures, better pre- and post-operative wellbeing, regeneration and mood, and lower stress levels over time. The intervention for practitioners includes a mindfulness-based stress reduction (MBSR) program over 8 weeks, supervised by an experienced MBSR practitioner directly at the clinic site, and an operative guideline. It is expected that practitioners receiving the intervention show reduced perceived and chronic stress, improved occupational health, better physical and mental function, a higher effort-reward balance, and improved regeneration and quality of life. Primary and secondary outcomes are measured by physiological parameters (heart rate variability, saliva cortisol) and validated questionnaires and will be assessed before (M1) and after (M2) the MBSR intervention and at a postal follow-up 6 months later (M3). Physiological biomarkers in practitioners will be assessed before (M1) and after the intervention (M2) on two work days and two days off.
Intervention effects in both groups (practitioners and patients) will be evaluated separately using multivariate analysis of variance.
Discussion: This study evaluates the effectiveness of two stress management intervention programs for patients and practitioners within the cardiac catheterization laboratory. The study will disclose strains during a cardiac catheterization affecting both patients and practitioners. For practitioners it may contribute to improved working conditions and occupational safety, preservation of earning capacity, and avoidance of participation restrictions and loss of performance. In both groups, less anxiety and stress and fewer complications before and during the procedures can be expected. The study may add knowledge on how to eliminate stressful exposures and contribute to more (psychological) security, fewer output losses and less exhaustion during work. The evolved stress management guidelines, training manuals and the standardized patient education should be transferred into clinical routines.
Since Harris’ parser in the late 50s, multiword units have been progressively integrated into parsers. Nevertheless, for the most part, they are still restricted to compound words, which are more stable and less numerous. Actually, language is full of semi-fixed expressions that also form basic semantic units: semi-fixed adverbial expressions (e.g. of time) and collocations. Like compounds, the identification of these structures limits the combinatorial complexity induced by lexical ambiguity. In this paper, we detail an experiment that largely integrates these notions in a finite-state procedure of segmentation into super-chunks, preliminary to a parser. We show that the chunker, developed for French, reaches 92.9% precision and 98.7% recall. Moreover, multiword units realize 36.6% of the attachments within nominal and prepositional phrases.
In the living cell, the organization of the complex internal structure relies to a large extent on molecular motors. Molecular motors are proteins that are able to convert chemical energy from the hydrolysis of adenosine triphosphate (ATP) into mechanical work. Being about 10 to 100 nanometers in size, the molecules act on a length scale for which thermal collisions have a considerable impact on their motion. In this way, they constitute paradigmatic examples of thermodynamic machines out of equilibrium. This study develops a theoretical description for the energy conversion by the molecular motor myosin V, using many different aspects of theoretical physics. Myosin V has been studied extensively in both bulk and single molecule experiments. Its stepping velocity has been characterized as a function of external control parameters such as nucleotide concentration and applied forces. In addition, numerous kinetic rates involved in the enzymatic reaction of the molecule have been determined. For forces that exceed the stall force of the motor, myosin V exhibits a 'ratcheting' behaviour: for loads in the direction of forward stepping, the velocity depends on the concentration of ATP, while for backward loads there is no such influence. Based on the chemical states of the motor, we construct a general network theory that incorporates experimental observations about the stepping behaviour of myosin V. The motor's motion is captured through the network description supplemented by a Markov process to describe the motor dynamics. This approach has the advantage of directly addressing the chemical kinetics of the molecule, and of treating the mechanical and chemical processes on equal grounds. We utilize constraints arising from nonequilibrium thermodynamics to determine motor parameters and demonstrate that the motor behaviour is governed by several chemomechanical motor cycles.
In addition, we investigate the functional dependence of stepping rates on force by deducing the motor's response to external loads via an appropriate Fokker-Planck equation. For substall forces, the dominant pathway of the motor network is profoundly different from the one for superstall forces, which leads to a stepping behaviour that is in agreement with the experimental observations. The extension of our analysis to Markov processes with absorbing boundaries allows for the calculation of the motor's dwell time distributions. These reveal aspects of the coordination of the motor's heads and contain direct information about the backsteps of the motor. Our theory provides a unified description for the myosin V motor as studied in single motor experiments.
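The thesis's full chemomechanical network is not reproduced here, but the flavour of deriving a stepping velocity from chemical kinetics can be sketched with a minimal two-step cycle and a Bell-type load dependence; all rate constants below are illustrative assumptions, not fitted values from the work.

```python
# Minimal sketch: a motor cycle whose forward rate is the series
# combination of ATP binding (k_on * [ATP]) and a catalytic step (k_cat),
# with a backward rate that grows exponentially under load (Bell form).

import math

def stepping_velocity(atp, force, step=36e-9,
                      k_on=2.0e6, k_cat=15.0, k_back0=0.01,
                      delta=2.0e-9, kBT=4.1e-21):
    """Mean velocity (m/s); atp in M, force in N. Illustrative constants."""
    k_fwd = 1.0 / (1.0 / (k_on * atp) + 1.0 / k_cat)  # two steps in series
    k_bwd = k_back0 * math.exp(force * delta / kBT)   # load-activated
    return step * (k_fwd - k_bwd)

v_sat = stepping_velocity(atp=1e-3, force=0.0)  # saturating ATP, no load
v_low = stepping_velocity(atp=1e-6, force=0.0)  # ATP-limited regime
print(v_sat > v_low)  # → True
```

Even this two-rate caricature reproduces the qualitative ATP dependence of the forward velocity; the thesis's network approach generalizes this to multiple competing chemomechanical cycles with thermodynamically constrained rates.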
We present a new analysis of illocutionary forces in dialogue. We analyze them as complex conversational moves involving two dimensions: what Speaker commits herself to and what she calls on Addressee to perform. We start from the analysis of speech acts such as confirmation requests or whimperatives, and extend the analysis to seemingly simple speech acts, such as statements and queries. Then, we show how to integrate our proposal in the framework of the Grammar for Conversation (Ginzburg, to app.), which is adequate for modelling agents' information states and how they get updated.
In the most abstract definition of its operational semantics, the declarative and concurrent programming language CHR is trivially non-terminating for a significant class of programs. Common refinements of this definition, in closing the gap to real-world implementations, compromise on declarativity and/or concurrency. Building on recent work and the notion of persistent constraints, we introduce an operational semantics avoiding trivial non-termination without compromising on its essential features.
The challenge is providing teachers with the resources they need to strengthen their instruction and better prepare students for the jobs of the 21st century. Technology can help meet this challenge. Teachers’ Tryscience is a noncommercial offering, developed by the New York Hall of Science, TeachEngineering, the National Board for Professional Teaching Standards and IBM Citizenship, to provide teachers with such resources. The workshop provides deeper insight into this tool and a discussion of how to support the teaching of informatics in schools.
This article aims to demonstrate the exceptional potential of Habsburg military records for the study of Jewish history during Europe’s Age of Revolution. We begin with the random discovery of six Jewish veterans of Freikorps Grün Loudon – a unit of mercenary freebooters – which fought for the Habsburgs during the first war against the French Republic (1792 – 97). A careful re-reading of the available archival evidence reveals that these men were the survivors of a much larger group numbering at least two dozen Jewish soldiers. While Jewish conscripts had been drafted into the Habsburg army since 1788, the fact that Jews could also serve – even volunteer – as professional soldiers in that period is completely new to us. When considered together, the personal circumstances and service experiences of the Jewish soldiers of Freikorps Grün Loudon enable us to make several observations about their motivation as well as their position vis-à-vis their non-Jewish comrades.
Verbal or visual? : How information is distributed across speech and gesture in spatial dialog
(2006)
In spatial dialog, as in direction giving, humans make frequent use of speech-accompanying gestures. Some gestures convey largely the same information as speech while others complement speech. This paper reports a study on how speakers distribute meaning across speech and gesture, and on which factors this depends. Utterance meaning and the wider dialog context were tested by statistically analyzing a corpus of direction-giving dialogs. Problems of speech production (as indicated by discourse markers and disfluencies), the communicative goals, and the information status were found to be influential, while feedback signals by the addressee did not have any influence.
Cargo transport by molecular motors is ubiquitous in all eukaryotic cells and is typically driven cooperatively by several molecular motors, which may belong to one or several motor species like kinesin, dynein or myosin. These motor proteins transport cargos such as RNAs, protein complexes or organelles along filaments, from which they unbind after a finite run length. Understanding how these motors interact and how their movements are coordinated and regulated is a central and challenging problem in studies of intracellular transport. In this thesis, we describe a general theoretical framework for the analysis of such transport processes, which enables us to explain the behavior of intracellular cargos based on the transport properties of individual motors and their interactions. Motivated by recent in vitro experiments, we address two different modes of transport: unidirectional transport by two identical motors and cooperative transport by actively walking and passively diffusing motors. The case of cargo transport by two identical motors involves an elastic coupling between the motors that can reduce the motors’ velocity and/or the binding time to the filament. We show that this elastic coupling leads, in general, to four distinct transport regimes. In addition to a weak coupling regime, kinesin and dynein motors are found to exhibit a strong coupling and an enhanced unbinding regime, whereas myosin motors are predicted to attain a reduced velocity regime. All of these regimes, which we derive both by analytical calculations and by general time scale arguments, can be explored experimentally by varying the elastic coupling strength. In addition, using the time scale arguments, we explain why previous studies came to different conclusions about the effect and relevance of motor-motor interference. In this way, our theory provides a general and unifying framework for understanding the dynamical behavior of two elastically coupled molecular motors. 
The second mode of transport studied in this thesis is cargo transport by actively pulling and passively diffusing motors. Although these passive motors do not participate in active transport, they strongly enhance the overall cargo run length. When an active motor unbinds, the cargo is still tethered to the filament by the passive motors, giving the unbound motor the chance to rebind and continue its active walk. We develop a stochastic description for such cooperative behavior and explicitly derive the enhanced run length for a cargo transported by one actively pulling and one passively diffusing motor. We generalize our description to the case of several pulling and diffusing motors and find an exponential increase of the run length with the number of involved motors.
A method is presented of acquiring the principles of three sorting algorithms through developing interactive applications in Excel.
In this paper we report on our experiments in teaching computer science concepts with a mix of tangible and abstract object manipulations. The goal we set ourselves was to let pupils discover the challenges one has to meet to automatically manipulate formatted text. We worked with a group of 25 secondary school pupils (9-10th grade), and they were actually able to “invent” the concept of mark-up language. From this experiment we distilled a set of activities which will be replicated in other classes (6th grade) under the guidance of maths teachers.
Causes for slow weathering and erosion in the steep, warm, monsoon-subjected Highlands of Sri Lanka
(2018)
In the Highlands of Sri Lanka, erosion and chemical weathering rates are among the lowest for global mountain denudation. In this tropical humid setting, highly weathered deep saprolite profiles have developed from high-grade metamorphic charnockite during spheroidal weathering of the bedrock. The spheroidal weathering produces rounded corestones and spalled rindlets at the rock-saprolite interface. I used detailed textural, mineralogical, chemical, and electron-microscopic (SEM, FIB, TEM) analyses to identify the factors limiting the rate of weathering front advance in the profile, the sequence of weathering reactions, and the underlying mechanisms. The first mineral attacked by weathering was found to be pyroxene initiated by in situ Fe oxidation, followed by in situ biotite oxidation. Bulk dissolution of the primary minerals is best described with a dissolution – re-precipitation process, as no chemical gradients towards the mineral surface and sharp structural boundaries are observed at the nm scale. Only the local oxidation in pyroxene and biotite is better described with an ion by ion process. The first secondary phases are oxides and amorphous precipitates from which secondary minerals (mainly smectite and kaolinite) form. Only for biotite direct solid state transformation to kaolinite is likely. The initial oxidation of pyroxene and biotite takes place in locally restricted areas and is relatively fast: log J = -11 mol_min/(m² s). However, calculated corestone-scale mineral oxidation rates are comparable to corestone-scale mineral dissolution rates: log R = -13 mol_px/(m² s) and log R = -15 mol_bt/(m² s). The oxidation reaction results in a volume increase. Volumetric calculations suggest that this observed oxidation leads to the generation of porosity due to the formation of micro-fractures in the minerals and the bedrock allowing for fluid transport and subsequent dissolution of plagioclase.
At the scale of the corestone, this fracture reaction is responsible for the larger fractures that lead to spheroidal weathering and to the formation of rindlets. Since these fractures originate from the initial oxidation-induced volume increase, oxidation is the rate-limiting parameter for weathering to take place. The ensuing plagioclase weathering leads to the formation of high secondary porosity in the corestone over a distance of only a few cm and eventually to the final disaggregation of bedrock to saprolite. As oxidation is the first weathering reaction, the supply of O2 is a rate-limiting factor for chemical weathering. Hence, the supply of O2 and its consumption at depth connect processes at the weathering front with erosion at the surface in a feedback mechanism. The strength of the feedback depends on the relative weight of advective versus diffusive transport of O2 through the weathering profile; the feedback is stronger when diffusive transport dominates. The low weathering rate ultimately depends on the transport of O2 through the whole regolith, and on lithological factors such as low bedrock porosity and the amount of Fe-bearing primary minerals. In this regard the low-porosity charnockite with its low content of Fe(II)-bearing minerals impedes fast weathering reactions. Fresh weatherable surfaces are a prerequisite for chemical weathering. However, in the case of the charnockite found in the Sri Lankan Highlands, the only process that generates these surfaces is the fracturing induced by oxidation. Tectonic quiescence in this region and a low pre-anthropogenic erosion rate (attributed to a dense vegetation cover) minimize the rejuvenation of the thick and cohesive regolith column and lower weathering through the feedback with erosion.
Revisiting public investment
(2004)
The consumption equivalence method is the theoretical basis of public cost-benefit analysis. Consumption-equivalence public capital prices are explicitly introduced in order to sufficiently account for the opportunity cost of public expenditure. This can resolve the dispute about the social rate of discount within public cost-benefit analysis, which arose around a criterion resembling the capital value formula, known as Lind’s approach. The social rate of discount is thereby liberated from opportunity cost considerations, and the discounting away of effects on future welfare vanishes. The related question of whether one should accept a positive value of the pure rate of social time preference is an old issue. Its current state, between the prescriptive and the descriptive view, can also be interpreted as a consequence of the oversimplification of standard cost-benefit analysis. Apart from being an economic process in its own right, the pure rate of social time preference is also defined as a business-as-usual value of social distance discounting. Hence, a political choice has to be made about this rate, which is free in principle.
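The "discounting away" of effects on future welfare that the abstract refers to can be made concrete with a standard present-value computation (our own illustration, not the paper's model): under a positive discount rate, even large far-future benefits shrink to near nothing.

```python
# Standard present-value formula: PV = B / (1 + r)^t. The choice of the
# social rate of discount r is exactly the political/normative question
# the abstract discusses; the numbers below are invented.

def present_value(benefit, rate, years):
    """Present value of a benefit accruing `years` from now."""
    return benefit / (1.0 + rate) ** years

# A benefit worth 1,000,000 consumption units in 100 years,
# discounted at 5% vs. 1% per year.
pv_high = present_value(1_000_000, 0.05, 100)
pv_low = present_value(1_000_000, 0.01, 100)
print(pv_high < pv_low)  # → True
```

At 5% the million shrinks to under 10,000 present units, while at 1% it retains well over a third of its value, which is why the level of the social rate of discount dominates the evaluation of long-horizon public projects.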
An exhaustive and disjoint decomposition of social choice situations is derived in a general set-theoretical framework using the new tool of the Lifted Pareto relation on the power set of social states, representing a pre-choice comparison of choice option sets. The main result is a classification of social choice situations which includes three types of social choice problems. First, we usually observe the common incompleteness of the Pareto relation. Second, a kind of non-compactness problem of a choice set of social states can be generated. Finally, both can be combined. The first problem root can be regarded as a natural everyday dilemma of social choice theory, whereas the second is probably much more an implication of modeling technique. The distinction is enabled at a very general set-theoretical level. Hence, the derived classification of social choice situations is applicable to almost every relevant economic model.
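A minimal illustration of the first problem type, the incompleteness of the (unlifted) Pareto relation, using our own toy encoding of social states as utility vectors rather than the paper's general set-theoretical framework:

```python
# Pareto dominance on utility vectors: x dominates y if no agent is worse
# off and at least one agent is strictly better off. Two states can easily
# be incomparable, which is the everyday incompleteness named above.

def pareto_dominates(x, y):
    """True iff x Pareto-dominates y."""
    return (all(a >= b for a, b in zip(x, y))
            and any(a > b for a, b in zip(x, y)))

# Two social states with utilities for two agents: each agent prefers a
# different state, so neither dominates the other.
s1, s2 = (3, 1), (1, 3)
incomparable = not pareto_dominates(s1, s2) and not pareto_dominates(s2, s1)
print(incomparable)  # → True
```

The Lifted Pareto relation of the paper extends such comparisons from individual states to sets of states, which is where the second, non-compactness-type problem can arise.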
Hyperspectral remote sensing of the spatial and temporal heterogeneity of low Arctic vegetation
(2019)
Arctic tundra ecosystems are experiencing warming twice the global average and Arctic vegetation is responding in complex and heterogeneous ways. Shifting productivity, growth, species composition, and phenology at local and regional scales have implications for ecosystem functioning as well as the global carbon and energy balance. Optical remote sensing is an effective tool for monitoring ecosystem functioning in this remote biome. However, limited field-based spectral characterization of the spatial and temporal heterogeneity limits the accuracy of quantitative optical remote sensing at landscape scales. To address this research gap and support current and future satellite missions, three central research questions were posed:
• Does canopy-level spectral variability differ between dominant low Arctic vegetation communities and does this variability change between major phenological phases?
• How does canopy-level vegetation colour images recorded with high and low spectral resolution devices relate to phenological changes in leaf-level photosynthetic pigment concentrations?
• How does spatial aggregation of high spectral resolution data from the ground to satellite scale influence low Arctic tundra vegetation signatures and thereby what is the potential of upcoming hyperspectral spaceborne systems for low Arctic vegetation characterization?
To answer these questions a unique and detailed database was assembled. Field-based canopy-level spectral reflectance measurements, nadir digital photographs, and photosynthetic pigment concentrations of dominant low Arctic vegetation communities were acquired at three major phenological phases representing early, peak and late season. Data were collected in 2015 and 2016 in the Toolik Lake Research Natural Area located in north central Alaska on the North Slope of the Brooks Range. In addition to field data an aerial AISA hyperspectral image was acquired in the late season of 2016. Simulations of broadband Sentinel-2 and hyperspectral Environmental and Mapping Analysis Program (EnMAP) satellite reflectance spectra from ground-based reflectance spectra as well as simulations of EnMAP imagery from aerial hyperspectral imagery were also obtained.
Results showed that canopy-level spectral variability within and between vegetation communities differed by phenological phase. The late season was identified as the most discriminative for identifying many dominant vegetation communities using both ground-based and simulated hyperspectral reflectance spectra. This was due to an overall reduction in spectral variability and comparable or greater differences in spectral reflectance between vegetation communities in the visible near infrared spectrum.
Red, green, and blue (RGB) indices extracted from nadir digital photographs and pigment-driven vegetation indices extracted from ground-based spectral measurements showed strong significant relationships. RGB indices also showed moderate relationships with chlorophyll and carotenoid pigment concentrations. The observed relationships with the broadband RGB channels of the digital camera indicate that vegetation colour strongly influences the response of pigment-driven spectral indices and digital cameras can track the seasonal development and degradation of photosynthetic pigments.
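One standard broadband greenness measure, the green chromatic coordinate, illustrates how RGB indices of the kind described above are computed from digital photographs; the thesis may use different index definitions, and the pixel values below are invented.

```python
# Green chromatic coordinate (GCC = G / (R + G + B)): a widely used RGB
# index that tracks canopy greenness, and hence photosynthetic pigment
# development and degradation, through the season.

def green_chromatic_coordinate(r, g, b):
    """Fraction of total brightness contributed by the green channel."""
    return g / (r + g + b)

# Invented mean RGB values of a canopy plot at three phenological phases.
early, peak, late = (120, 130, 90), (90, 150, 70), (140, 120, 95)
gcc = [round(green_chromatic_coordinate(*px), 3) for px in (early, peak, late)]
print(gcc)  # → [0.382, 0.484, 0.338]
```

The expected seasonal pattern, a mid-season peak in greenness followed by decline, is what allows such simple camera-derived indices to stand in for pigment-driven spectral indices.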
Spatial aggregation of hyperspectral data from the ground to airborne, to simulated satellite scale was influenced by non-photosynthetic components, as demonstrated by the distinct shift of the red edge to shorter wavelengths. Correspondence between spectral reflectance at the three scales was highest in the red spectrum and lowest in the near infrared. By artificially mixing litter spectra at different proportions into ground-based spectra, correspondence with aerial and satellite spectra increased. Greater proportions of litter were required to achieve correspondence at the satellite scale.
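The artificial litter mixing described above amounts to a linear spectral mixture; a minimal sketch with invented five-band reflectance values follows.

```python
# Linear spectral mixing: the mixed reflectance in each band is the
# area-weighted average of the vegetation and litter (non-photosynthetic)
# endmember spectra. Band values here are illustrative, not field data.

def mix_spectra(veg, litter, litter_fraction):
    """Linear mixture of vegetation and litter reflectance per band."""
    assert 0.0 <= litter_fraction <= 1.0
    return [(1 - litter_fraction) * v + litter_fraction * l
            for v, l in zip(veg, litter)]

# Illustrative reflectance in blue, green, red, red edge, NIR bands.
vegetation = [0.03, 0.08, 0.04, 0.30, 0.50]   # strong red edge and NIR
litter     = [0.10, 0.15, 0.20, 0.25, 0.30]   # flat, bright in the red

mixed = mix_spectra(vegetation, litter, 0.4)
print([round(r, 3) for r in mixed])
```

Increasing the litter fraction raises red reflectance and flattens the red edge, reproducing the shift observed when ground spectra are compared with coarser airborne and satellite pixels.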
Overall, this thesis found that integrating multiple temporal, spectral, and spatial data is necessary to monitor the complexity and heterogeneity of Arctic tundra ecosystems. The identification of spectrally similar vegetation communities can be optimized using non-peak season hyperspectral data, leading to more detailed mapping of vegetation communities. The results also highlight the power of vegetation colour to link ground-based and satellite data. Finally, a detailed characterization of non-photosynthetic ecosystem components is crucial for accurate interpretation of vegetation signals at landscape scales.
Local Orders, Global Chaos
(1999)
Data obtained from foreign data sources often come with only superficial structural information, such as relation names and attribute names. Other types of metadata that are important for effective integration and meaningful querying of such data sets are missing. In particular, relationships among attributes, such as foreign keys, are crucial metadata for understanding the structure of an unknown database. The discovery of such relationships is difficult, because in principle for each pair of attributes in the database each pair of data values must be compared. A precondition for a foreign key is an inclusion dependency (IND) between the key and the foreign key attributes. We present Spider, an algorithm that efficiently finds all INDs in a given relational database. It leverages the sorting facilities of the DBMS but performs the actual comparisons outside of the database to save computation. Spider analyzes very large databases up to an order of magnitude faster than previous approaches. We also evaluate in detail the effectiveness of several heuristics to reduce the number of necessary comparisons. Furthermore, we generalize Spider to find composite INDs covering multiple attributes, and partial INDs, which are true INDs for all but a certain number of values. This last type is particularly relevant when integrating dirty data, as is often the case in the life sciences domain – our driving motivation.
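Spider's sort-merge strategy is not reproduced here, but the IND definition it checks can be sketched naively with set containment; this brute-force version is quadratic in the number of attribute pairs, which is exactly the cost that Spider's sorting and pruning heuristics mitigate.

```python
# Naive unary IND discovery: attribute A is included in attribute B iff
# every distinct value of A also occurs in B. Such an IND is a necessary
# precondition for A being a foreign key referencing B.

def find_unary_inds(tables):
    """tables: {table: {column: [values]}} -> list of (dep, ref) INDs."""
    value_sets = {(t, c): set(vals)
                  for t, cols in tables.items()
                  for c, vals in cols.items()}
    inds = []
    for dep, dep_vals in value_sets.items():
        for ref, ref_vals in value_sets.items():
            if dep != ref and dep_vals <= ref_vals:  # subset test
                inds.append((dep, ref))
    return inds

# Toy database: orders.customer_id looks like a foreign key to customers.id.
db = {
    "orders":    {"customer_id": [1, 2, 2, 3]},
    "customers": {"id": [1, 2, 3, 4]},
}
inds = find_unary_inds(db)
print(inds)  # → [(('orders', 'customer_id'), ('customers', 'id'))]
```

Spider obtains the same result while streaming sorted distinct values per attribute and dropping an attribute pair as soon as one unmatched value is seen, rather than materializing full value sets.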
Data dependencies, or integrity constraints, are used to improve the quality of a database schema, to optimize queries, and to ensure consistency in a database. In recent years, conditional dependencies have been introduced to analyze and improve data quality. In short, a conditional dependency is a dependency with a limited scope defined by conditions over one or more attributes; only the matching part of the instance must adhere to the dependency. In this paper we focus on conditional inclusion dependencies (CINDs). We generalize the definition of CINDs, distinguishing covering and completeness conditions. We present a new use case for such CINDs, showing their value for solving complex data quality tasks. Further, we define quality measures for conditions, inspired by precision and recall. We propose efficient algorithms that identify covering and completeness conditions conforming to given quality thresholds. Our algorithms choose not only the condition values but also the condition attributes automatically. Finally, we show that our approach efficiently provides meaningful and helpful results for our use case.
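As a rough illustration of such precision- and recall-inspired quality measures (the paper's exact definitions may differ; the rows, column names and condition below are invented for the example), a candidate condition can be scored by how many of the tuples it selects actually satisfy the inclusion, and by how many inclusion-satisfying tuples it captures:

```python
def condition_quality(rows, condition, satisfies_ind):
    """Precision/recall-style scores for a CIND condition:
    precision - of the rows the condition selects, how many satisfy the IND;
    recall    - of the rows satisfying the IND, how many the condition selects."""
    selected = [r for r in rows if condition(r)]
    satisfying = [r for r in rows if satisfies_ind(r)]
    hits = [r for r in selected if satisfies_ind(r)]
    precision = len(hits) / len(selected) if selected else 0.0
    recall = len(hits) / len(satisfying) if satisfying else 0.0
    return precision, recall

customers = {1, 2, 3}                       # referenced key values
orders = [                                  # dependent relation
    {"customer_id": 1, "status": "shipped"},
    {"customer_id": 2, "status": "shipped"},
    {"customer_id": 3, "status": "open"},
    {"customer_id": 9, "status": "open"},   # dangling reference
]
p, r = condition_quality(
    orders,
    condition=lambda row: row["status"] == "shipped",
    satisfies_ind=lambda row: row["customer_id"] in customers,
)
# p == 1.0 (a covering condition), r == 2/3 (incomplete)
```

An algorithm as described in the abstract would search over candidate condition attributes ("status") and values ("shipped") to find conditions meeting given thresholds on both scores.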
This paper describes a two-level formalism where feature structures are used in contextual rules. Whereas usual two-level grammars describe rational sets over symbol pairs, this new formalism uses tree-structured regular expressions, which allow an explicit and precise definition of the scope of feature structures. A given surface form may be described using several feature structures. Feature unification is expressed in contextual rules using variables, as in a unification grammar. Grammars are compiled into finite-state multi-tape transducers.
“Chunking” spoken language
(2021)
In this introductory paper to the special issue on “Weak cesuras in talk-in-interaction”, we aim to guide the reader into current work on the “chunking” of naturally occurring talk. It is conducted in the methodological frameworks of Conversation Analysis and Interactional Linguistics – two approaches that consider the interactional aspect of humans talking with each other to be a crucial starting point for its analysis. In doing so, we will (1) lay out the background of this special issue (what is problematic about “chunking” talk-in-interaction, the characteristics of the methodological approach chosen by the contributors, the cesura model), (2) highlight what can be gained from such a revised understanding of “chunking” in talk-in-interaction by referring to previous work with this model as well as the findings of the contributions to this special issue, and (3) indicate further directions such work could take starting from papers in this special issue. We hope to induce a fruitful exchange on the phenomena discussed, across methodological divides.
Frailty assessment is recommended before elective transcatheter aortic valve implantation (TAVI) to determine post-interventional prognosis. Several studies have investigated frailty in TAVI-patients using numerous assessments; however, it remains unclear which is the most appropriate tool for clinical practice. Therefore, we evaluate which frailty assessment is mainly used and meaningful for ≤30-day and ≥1-year prognosis in TAVI patients. Randomized controlled or observational studies (prospective/retrospective) investigating all-cause mortality in older (≥70 years) TAVI patients were identified (PubMed; May 2020). In total, 79 studies investigating frailty with 49 different assessments were included. As single markers of frailty, mostly gait speed (23 studies) and serum albumin (16 studies) were used. Higher risk of 1-year mortality was predicted by slower gait speed (highest Hazard Ratios (HR): 14.71; 95% confidence interval (CI) 6.50–33.30) and lower serum albumin level (highest HR: 3.12; 95% CI 1.80–5.42). Composite indices (five items; seven studies) were associated with 30-day (highest Odds Ratio (OR): 15.30; 95% CI 2.71–86.10) and 1-year mortality (highest OR: 2.75; 95% CI 1.55–4.87). In conclusion, single markers of frailty, in particular gait speed, were widely used to predict 1-year mortality. Composite indices were appropriate, as well as a comprehensive assessment of frailty.
This article describes an HMM-based word-alignment method that can selectively enforce a contiguity constraint. This method has a direct application in the extraction of a bilingual terminological lexicon from a parallel corpus, but can also be used as a preliminary step for the extraction of phrase pairs in a Phrase-Based Statistical Machine Translation system. Contiguous source words composing terms are aligned to contiguous target-language words. The HMM is transformed into a Weighted Finite State Transducer (WFST) and contiguity constraints are enforced by specific multi-tape WFSTs. The proposed method is especially suited to cases where basic linguistic resources (morphological analyzers, part-of-speech taggers and term extractors) are available for the source language only.
Abstract interpretation-based model checking provides an approach to verifying properties of infinite-state systems. In practice, most previous work on abstract model checking is either restricted to verifying universal properties, or develops special techniques for temporal logics such as modal transition systems or other dual transition systems. By contrast we apply completely standard techniques for constructing abstract interpretations to the abstraction of a CTL semantic function, without restricting the kind of properties that can be verified. Furthermore we show that this leads directly to implementation of abstract model checking algorithms for abstract domains based on constraints, making use of an SMT solver.
Antarctic glacier forefields are extreme environments and pioneer sites for ecological succession. The Antarctic continent serves as a natural laboratory for studying microbial community development because of its special environment, geographic isolation and low anthropogenic influence. Increasing temperatures due to global warming lead to enhanced deglaciation processes in cold-affected habitats, and new terrain is becoming exposed to soil formation and accessible for microbial colonisation. This study aims to understand the structure and development of glacier forefield bacterial communities, especially how soil parameters impact the microorganisms and how those are adapted to the extreme conditions of the habitat. To this end, a combination of cultivation experiments and molecular, geophysical and geochemical analyses was applied to examine two glacier forefields of the Larsemann Hills, East Antarctica. Culture-independent molecular tools such as terminal restriction fragment length polymorphism (T-RFLP), clone libraries and quantitative real-time PCR (qPCR) were used to determine bacterial diversity and distribution. Cultivation of as yet unknown species was carried out to gain insights into the physiology and adaptation of the microorganisms. Adaptation strategies of the microorganisms were studied by determining changes of the cell-membrane phospholipid fatty acid (PLFA) inventory of an isolated bacterium in response to temperature and pH fluctuations, and by measuring enzyme activity at low temperature in environmental soil samples. The two studied glacier forefields are extreme habitats characterised by low temperatures, low water availability and small oligotrophic nutrient pools, and represent sites of different bacterial succession in relation to soil parameters. The investigated sites showed microbial succession at an early step of soil formation near the ice tongue in comparison to closely located but older and more developed soil from the forefield.
At the early step, succession is influenced by a deglaciation-dependent areal shift of soil parameters, followed by a variable and prevalently depth-related distribution of the soil parameters that is driven by the extreme Antarctic conditions. The dominant taxa in the glacier forefields are Actinobacteria, Acidobacteria, Proteobacteria, Bacteroidetes, Cyanobacteria and Chloroflexi. Relating soil characteristics to bacterial community structure showed that soil parameters and soil formation along the glacier forefield influence the distribution of certain phyla. In the early step of succession, the relatively undifferentiated bacterial diversity reflects the undifferentiated soil development and has a high potential to shift according to past and present environmental conditions. With progressing development, environmental constraints such as water or carbon limitation have a greater influence. By adapting the culturing conditions to the cold and oligotrophic environment, the number of culturable heterotrophic bacteria reached up to 10⁸ colony-forming units per gram of soil, and 148 isolates were obtained. Two new psychrotolerant bacteria, Herbaspirillum psychrotolerans PB1T and Chryseobacterium frigidisoli PB4T, were characterised in detail and described as novel species in the families Oxalobacteraceae and Flavobacteriaceae, respectively. The isolates are able to grow at low temperatures, tolerate temperature fluctuations and are not specialised to a certain substrate; they are therefore well adapted to the cold and oligotrophic environment. The adaptation strategies of the microorganisms were analysed in environmental samples and cultures, focussing on extracellular enzyme activity at low temperature and PLFA analyses.
Extracellular phosphatase (pH 11 and pH 6.5), β-glucosidase, invertase and urease activity was detected in the glacier forefield soils at low temperature (14°C), catalysing the conversion of various compounds into necessary substrates; these enzymes may further play a role in the soil formation and total carbon turnover of the habitat. The PLFA analysis of the newly isolated species C. frigidisoli showed that the cold-adapted strain develops different strategies to maintain cell-membrane function under changing environmental conditions by altering the PLFA inventory at different temperatures and pH values. A newly discovered fatty acid, which has not been found in any other microorganism so far, significantly increased at decreasing temperature and low pH and thus plays an important role in the adaptation of C. frigidisoli. This work gives insights into the diversity, distribution and adaptation mechanisms of microbial communities in oligotrophic cold-affected soils and shows that Antarctic glacier forefields are suitable model systems for studying bacterial colonisation in connection with soil formation.
National Action Plans (NAPs) have been increasingly adopted world-wide after the 1993 Vienna Declaration, which urged states to consider the improvement and promotion of human rights. In this paper, we discuss their usefulness and success by analysing the challenges presented during NAP processes as well as the benefits this set of actions entails: the challenges for their implementation outweigh the actual benefits. Nevertheless, NAPs have great potential. Based on new research, we elaborate a set of recommendations for improving the design and implementation of national action planning. In order to effectively bring NAPs into practice, we consider it crucial to plan for and analyse every state's local circumstances in detail. The latter is important, since the implementation of a concrete set of actions is intended to directly transform and improve the local living conditions of the people. In a long-term perspective, we defend the benefit of NAP implementation for complying with obligations set out by human rights treaties.
The COVID-19 pandemic created the largest experiment in working from home. We study how persistent telework may change energy and transport consumption and costs in Germany, to assess the distributional and environmental implications if working from home sticks. Based on data from the German Microcensus and available classifications of working-from-home feasibility for different occupations, we calculate the change in energy consumption and travel to work when 15% of employees work full time from home. Our findings suggest that telework translates into an annual increase in heating energy expenditure of 110 euros per worker and a decrease in transport expenditure of 840 euros per worker. All income groups would gain from telework, but high-income workers gain twice as much as low-income workers. The value of time saving is between 1.3 and 6 times greater than the savings from reduced travel costs, and almost 9 times higher for high-income workers than for low-income workers. The direct effects on CO₂ emissions due to reduced car commuting amount to 4.5 million tons of CO₂, representing around 3 percent of carbon emissions in the transport sector.
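A back-of-envelope check of the figures reported above (all numbers are taken from the abstract; the transport-sector total is only implied by the stated 3% share):

```python
heating_increase = 110        # € per teleworking worker per year
transport_saving = 840        # € per teleworking worker per year
net_saving = transport_saving - heating_increase
# net_saving == 730 € per worker per year

co2_reduction_mt = 4.5        # million tons CO₂ avoided by less commuting
share_of_transport = 0.03     # ≈ 3% of transport-sector emissions
implied_transport_total_mt = co2_reduction_mt / share_of_transport
# implied transport-sector total ≈ 150 million tons CO₂ per year
```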
The Arctic is the hot spot of ongoing global climate change. Over the last decades, near-surface temperatures in the Arctic have been rising almost four times faster than the global average. This amplified warming of the Arctic and the associated rapid changes of its environment are largely influenced by interactions between individual components of the Arctic climate system. On daily to weekly time scales, storms can have major impacts on the Arctic sea-ice cover and are thus an important part of these interactions within the Arctic climate. The sea-ice impacts of storms are related to high wind speeds, which enhance the drift and deformation of sea ice, as well as to changes in the surface energy budget associated with air-mass advection, which impact seasonal sea-ice growth and melt.
The occurrence of storms in the Arctic is typically associated with the passage of transient cyclones. Even though the above-described mechanisms by which storms/cyclones impact Arctic sea ice are known in principle, there is a lack of statistical quantification of these effects. Accordingly, the overarching objective of this thesis is to statistically quantify cyclone impacts on sea-ice concentration (SIC) in the Atlantic Arctic Ocean over the last four decades. In order to further advance the understanding of the related mechanisms, an additional objective is to separate dynamic and thermodynamic cyclone impacts on sea ice and assess their relative importance. Finally, this thesis aims to quantify recent changes in cyclone impacts on SIC. These research objectives are tackled utilizing various data sets, including atmospheric and oceanic reanalysis data as well as a coupled model simulation and a cyclone tracking algorithm.
Results from this thesis demonstrate that cyclones significantly impact SIC in the Atlantic Arctic Ocean from autumn to spring, while there are mostly no significant impacts in summer. The strength and the sign (SIC-decreasing or SIC-increasing) of the cyclone impacts strongly depend on the considered daily time scale and the region of the Atlantic Arctic Ocean. Specifically, an initial decrease in SIC (day -3 to day 0 relative to the cyclone) is found in the Greenland, Barents and Kara Seas, while SIC increases following cyclones (day 0 to day 5 relative to the cyclone) are mostly limited to the Barents and Kara Seas.
For the cold season, this results in a pronounced regional difference between overall (day -3 to day 5 relative to the cyclone) SIC-decreasing cyclone impacts in the Greenland Sea and overall SIC-increasing cyclone impacts in the Barents and Kara Seas. A cyclone case study based on a coupled model simulation indicates that both dynamic and thermodynamic mechanisms contribute to cyclone impacts on sea ice in winter. A typical pattern consisting of an initial dominance of dynamic sea-ice changes followed by enhanced thermodynamic ice growth after the cyclone passage was found. This enhanced ice growth after the cyclone passage most likely also explains the (statistical) overall SIC-increasing effects of cyclones in the Barents and Kara Seas in the cold season.
Significant changes in cyclone impacts on SIC over the last four decades have emerged throughout the year. These recent changes are strongly varying from region to region and month to month. The strongest trends in cyclone impacts on SIC are found in autumn in the Barents and Kara Seas. Here, the magnitude of destructive cyclone impacts on SIC has approximately doubled over the last four decades. The SIC-increasing effects following the cyclone passage have particularly weakened in the Barents Sea in autumn. As a consequence, previously existing overall SIC-increasing cyclone impacts in this region in autumn have recently disappeared. Generally, results from this thesis show that changes in the state of the sea-ice cover (decrease in mean sea-ice concentration and thickness) and near-surface air temperature are most important for changed cyclone impacts on SIC, while changes in cyclone properties (i.e. intensity) do not play a significant role.
Many methods have been proposed for the stabilization of higher index differential-algebraic equations (DAEs). Such methods often involve constraint differentiation and problem stabilization, thus obtaining a stabilized index reduction. A popular method is Baumgarte stabilization, but the choice of parameters to make it robust is unclear in practice. Here we explain why the Baumgarte method may run into trouble. We then show how to improve it. We further develop a unifying theory for stabilization methods which includes many of the various techniques proposed in the literature. Our approach is to (i) consider stabilization of ODEs with invariants, (ii) discretize the stabilizing term in a simple way, generally different from the ODE discretization, and (iii) use orthogonal projections whenever possible. The best methods thus obtained are related to methods of coordinate projection. We discuss them and make concrete algorithmic suggestions.
Many methods have been proposed for the simulation of constrained mechanical systems. The most obvious of these have mild instabilities and drift problems. Consequently, stabilization techniques have been proposed. A popular stabilization method is Baumgarte's technique, but the choice of parameters to make it robust has been unclear in practice. Some of the simulation methods that have been proposed and used in computations are reviewed here from a stability point of view. This involves concepts of differential-algebraic equation (DAE) and ordinary differential equation (ODE) invariants. An explanation of the difficulties that may be encountered using Baumgarte's method is given, and a discussion of why a further quest for better parameter values for this method will always remain frustrating is presented. It is then shown how Baumgarte's method can be improved. An efficient stabilization technique is proposed, which may employ explicit ODE solvers in the case of nonstiff or highly oscillatory problems and which relates to coordinate projection methods. Examples of a two-link planar robotic arm and a squeezing mechanism illustrate the effectiveness of this new stabilization method.
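Both abstracts relate the improved stabilization to coordinate-projection methods: after each explicit ODE step, the numerical solution is orthogonally projected back onto the manifold defined by the invariant. A minimal self-contained sketch (a harmonic oscillator with the quadratic invariant x² + v², not one of the papers' mechanical examples) shows how the projection removes the drift that a plain explicit Euler step accumulates:

```python
import math

def euler_step(x, v, h):
    """One explicit Euler step for x'' = -x (harmonic oscillator)."""
    return x + h * v, v - h * x

def project(x, v, c0):
    """Orthogonal projection back onto the invariant manifold
    c(x, v) = x^2 + v^2 = c0 (a conserved quadratic quantity)."""
    r = math.sqrt((x * x + v * v) / c0)
    return x / r, v / r

h, steps = 0.01, 5000
x, v = 1.0, 0.0                  # plain Euler trajectory
xp, vp = 1.0, 0.0                # Euler + projection trajectory
c0 = x * x + v * v

for _ in range(steps):
    x, v = euler_step(x, v, h)           # drifts off the manifold
    xp, vp = euler_step(xp, vp, h)
    xp, vp = project(xp, vp, c0)         # pulled back each step

drift_plain = abs(x * x + v * v - c0)          # grows like (1 + h^2)^n - 1
drift_projected = abs(xp * xp + vp * vp - c0)  # stays near machine precision
```

For a constrained mechanical system the same pattern applies with the position and velocity constraints in place of c(x, v); the projection then becomes a small least-squares solve rather than a scalar rescaling.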
In two experiments, many annotators marked antecedents for discourse deixis as unconstrained regions of text. The experiments show that annotators do converge on the identity of these text regions, though much of what they do can be captured by a simple model. Demonstrative pronouns are more likely than definite descriptions to be marked with discourse antecedents. We suggest that our methodology is suitable for the systematic study of discourse deixis.
The end of the cold war division of the Baltic Sea in 1989, and the three Baltic states’ return to independence in 1991 created new opportunities for the decision-makers of the area, as well as new possibilities for fashioning security in the region. This article will examine the security debate affecting the Baltic Sea region in the post-cold war period, and in particular, the relevance of the European Union to that debate. The following section will examine various concepts of security relevant to the Baltic region; the third section looks at the EU and the Baltic area; and the last part deals with the implications that EU membership by the Baltic Sea states may have for the security of the Baltic Sea zone.