We define weak boundary values of solutions to those nonlinear differential equations which arise as Euler-Lagrange equations of variational problems. As a result, we initiate the theory of Lagrangian boundary value problems in spaces of appropriate smoothness. We also analyse whether the concept of mapping degree, currently of considerable importance, applies to the study of Lagrangian problems.
Process models specify behavioral execution constraints between activities as well as between activities and data objects. A data object is characterized by its states and state transitions, represented as an object life cycle. For process execution, all behavioral execution constraints must be correct. Correctness can be verified via soundness checking, which currently considers only control-flow information. For data correctness, conformance between a process model and its object life cycles is checked. Current approaches abstract from dependencies between multiple data objects and require fully specified process models, although underspecified models are often found in real-world process repositories. To cope with these issues, we introduce the concept of synchronized object life cycles and define a mapping of data constraints of a process model to Petri nets, extending an existing mapping. Further, we apply the notion of weak conformance to process models to tell whether, each time an activity needs to access a data object in a particular state, it is guaranteed that the data object is in, or can reach, the expected state. Finally, we introduce an algorithm for an integrated verification of control-flow correctness and weak data conformance using soundness checking.
To understand the evolution and morphology of planetary nebulae, a detailed knowledge of their central stars is required. Central stars that exhibit emission lines in their spectra, indicating stellar mass loss, allow us to study the evolution of planetary nebulae in action. Emission-line central stars constitute about 10 % of all central stars. Half of them are practically hydrogen-free Wolf-Rayet type central stars of the carbon sequence, [WC], that show strong emission lines of carbon and oxygen in their spectra. In this contribution we address the weak emission-line central stars (wels). These stars are poorly studied and their hydrogen content is mostly unknown. We obtained optical spectra that include the important Balmer lines of hydrogen for four weak emission-line central stars. We present the results of our analysis, provide a spectral classification, and discuss possible explanations for their formation and evolution.
The World Wide Web is becoming increasingly important as an application platform. However, the development of Web applications is often more complex than development for the desktop. Web-based development environments like Lively Webwerkstatt can mitigate this problem by making the development process more interactive and direct. By moving the development environment into the Web, applications can be developed collaboratively in a wiki-like manner. This report documents the results of the project seminar on Web-based Development Environments 2010. In this seminar, participants extended the Web-based development environment Lively Webwerkstatt. They worked in small teams on current research topics from the fields of Web development and tool support for programmers, and implemented their results in the Webwerkstatt environment.
Virtual 3D city models represent and integrate a variety of spatial data and georeferenced data related to urban areas. With the help of improved remote-sensing technology, official 3D cadastral data, open data or geodata crowdsourcing, the quantity and availability of such data are constantly expanding and its quality is ever improving for many major cities and metropolitan regions. There are numerous fields of applications for such data, including city planning and development, environmental analysis and simulation, disaster and risk management, navigation systems, and interactive city maps.
The dissemination and the interactive use of virtual 3D city models represent key technical functionality required by nearly all corresponding systems, services, and applications. The size and complexity of virtual 3D city models, their management, their handling, and especially their visualization represent challenging tasks. For example, mobile applications can hardly handle these models due to their massive data volume and data heterogeneity. Therefore, the efficient usage of all computational resources (e.g., storage, processing power, main memory, and graphics hardware) is a key requirement for software engineering in this field. Common approaches are based on complex clients that require the 3D model data (e.g., 3D meshes and 2D textures) to be transferred to them and that then render those received 3D models. However, these applications have to implement most stages of the visualization pipeline on the client side. Thus, as high-quality 3D rendering processes strongly depend on locally available computer graphics resources, software engineering faces the challenge of building robust cross-platform client implementations.
Web-based provisioning aims at providing a service-oriented software architecture that consists of tailored functional components for building web-based and mobile applications that manage and visualize virtual 3D city models. This thesis presents corresponding concepts and techniques for web-based provisioning of virtual 3D city models. In particular, it introduces services that allow us to efficiently build applications for virtual 3D city models based on a fine-grained service concept. The thesis covers five main areas:
1. A Service-Based Concept for Image-Based Provisioning of Virtual 3D City Models: It creates a frame for a broad range of services related to the rendering and image-based dissemination of virtual 3D city models.
2. 3D Rendering Service for Virtual 3D City Models: This service provides efficient, high-quality 3D rendering functionality for virtual 3D city models. In particular, it copes with requirements such as standardized data formats, massive model texturing, detailed 3D geometry, access to associated feature data, and non-assumed frame-to-frame coherence for parallel service requests. In addition, it supports thematic and artistic styling based on an expandable graphics effects library.
3. Layered Map Service for Virtual 3D City Models: It generates a map-like representation of virtual 3D city models using an oblique view. It provides high visual quality, fast initial loading times, simple map-based interaction and feature data access. Based on a configurable client framework, mobile and web-based applications for virtual 3D city models can be created easily.
4. Video Service for Virtual 3D City Models: It creates and synthesizes videos from virtual 3D city models. Without requiring client-side 3D rendering capabilities, users can create camera paths by a map-based user interface, configure scene contents, styling, image overlays, text overlays, and their transitions. The service significantly reduces the manual effort typically required to produce such videos. The videos can automatically be updated when the underlying data changes.
5. Service-Based Camera Interaction: It supports task-based 3D camera interactions, which can be integrated seamlessly into service-based visualization applications. It is demonstrated how to build such web-based interactive applications for virtual 3D city models using this camera service.
These contributions provide a framework for design, implementation, and deployment of future web-based applications, systems, and services for virtual 3D city models. The approach shows how to decompose the complex, monolithic functionality of current 3D geovisualization systems into independently designed, implemented, and operated service-oriented units. In that sense, this thesis also contributes to microservice architectures for 3D geovisualization systems—a key challenge of today’s IT systems engineering to build scalable IT solutions.
Obesity is a risk factor for several major cancers. Associations of weight change in middle adulthood with cancer risk, however, are less clear. We examined the association of change in weight and body mass index (BMI) category during middle adulthood with 42 cancers, using multivariable Cox proportional hazards models in the European Prospective Investigation into Cancer and Nutrition cohort. Of 241 323 participants (31% men), 20% lost and 32% gained weight (>0.4 to 5.0 kg/year) during 6.9 years (average). During 8.0 years of follow-up after the second weight assessment, 20 960 incident cancers were ascertained. Independent of baseline BMI, weight gain (per one kg/year increment) was positively associated with cancer of the corpus uteri (hazard ratio [HR] = 1.14; 95% confidence interval: 1.05-1.23). Compared to stable weight (+/- 0.4 kg/year), weight gain (>0.4 to 5.0 kg/year) was positively associated with cancers of the gallbladder and bile ducts (HR = 1.41; 1.01-1.96), postmenopausal breast (HR = 1.08; 1.00-1.16) and thyroid (HR = 1.40; 1.04-1.90). Compared to maintaining normal weight, maintaining overweight or obese BMI (World Health Organisation categories) was positively associated with most obesity-related cancers. Compared to maintaining the baseline BMI category, weight gain to a higher BMI category was positively associated with cancers of the postmenopausal breast (HR = 1.19; 1.06-1.33), ovary (HR = 1.40; 1.04-1.91), corpus uteri (HR = 1.42; 1.06-1.91), kidney (HR = 1.80; 1.20-2.68) and pancreas in men (HR = 1.81; 1.11-2.95). Losing weight to a lower BMI category, however, was inversely associated with cancers of the corpus uteri (HR = 0.40; 0.23-0.69) and colon (HR = 0.69; 0.52-0.92). Our findings support avoiding weight gain and encouraging weight loss in middle adulthood.
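The hazard ratios above are reported per 1 kg/year increment, so under the log-linearity assumption of the Cox proportional hazards model they extrapolate multiplicatively to other rates of weight change. A minimal sketch (the function name is ours, and the illustration uses the corpus uteri estimate from the abstract):

```python
def hr_for_weight_change(hr_per_kg_year: float, kg_per_year: float) -> float:
    """Scale a Cox-model hazard ratio reported per 1 kg/year weight-change
    increment to another annual rate, using the log-linearity assumption
    HR(k) = HR(1) ** k of the proportional hazards model."""
    return hr_per_kg_year ** kg_per_year

# Corpus uteri: HR = 1.14 per 1 kg/year gained; a hypothetical gain of
# 2 kg/year would then correspond to 1.14 ** 2, roughly a 30% hazard increase.
hr_2kg = hr_for_weight_change(1.14, 2.0)
```

This is only the point-estimate arithmetic; confidence intervals would have to be scaled on the log scale as well.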
Welcome to the Dark Side
(2022)
Differences in natural light conditions caused by changes in moonlight are known to affect perceived predation risk in many nocturnal prey species. As artificial light at night (ALAN) is steadily increasing in extent and intensity, it has the potential to change the movement and foraging behavior of many species, as it might increase perceived predation risk and mask natural light cycles. We investigated whether partial nighttime illumination leads to changes in foraging behavior during the night and the subsequent day in a small mammal, and whether these changes are related to animal personalities. We subjected bank voles to partial nighttime illumination in a foraging landscape under laboratory conditions and in large grassland enclosures under near-natural conditions. We measured the giving-up density of food in illuminated and dark artificial seed patches and video-recorded the movement of animals. While animals reduced the number of visits to illuminated seed patches at night, they increased visits to these patches on the following day compared to dark seed patches. Overall, bold individuals had lower giving-up densities than shy individuals, but this difference increased during the day in formerly illuminated seed patches. Small mammals thus showed carry-over effects of ALAN on daytime foraging behavior, i.e., nocturnal illumination has the potential to affect intra- and interspecific interactions during both night and day, with possible changes in personality structure within populations and altered predator-prey dynamics.
In sedimentary basins, rock thermal conductivity can vary both laterally and vertically, thus altering the basin’s thermal structure locally and regionally. Knowledge of the thermal conductivity of geological formations and its spatial variations is essential, not only for quantifying basin evolution and hydrocarbon maturation processes, but also for understanding geothermal conditions in a geological setting. In conjunction with the temperature gradient, thermal conductivity represents the basic input parameter for the determination of the heat-flow density, which, in turn, is applied as a major input parameter in thermal modeling at different scales. Drill-core samples, which are necessary to determine thermal properties by laboratory measurements, are rarely available and often limited to previously explored reservoir formations. Thus, thermal conductivities of Mesozoic rocks in the North German Basin (NGB) are largely unknown. In contrast, geophysical borehole measurements are often available for the entire drilled sequence. Therefore, prediction equations to determine thermal conductivity based on well-log data are desirable. In this study, rock thermal conductivity was investigated on different scales by (1) providing thermal-conductivity measurements on Mesozoic rocks, (2) evaluating and improving commonly applied mixing models used to estimate matrix and pore-filled rock thermal conductivities, and (3) developing new well-log based equations to predict thermal conductivity in boreholes without core control. Laboratory measurements are performed on sedimentary rocks of major geothermal reservoirs in the Northeast German Basin (NEGB) (Aalenian, Rhaetian-Liassic, Stuttgart Fm., and Middle Buntsandstein). Samples are obtained from eight deep geothermal wells reaching depths of up to 2,500 m. Bulk thermal conductivities of Mesozoic sandstones range between 2.1 and 3.9 W/(m∙K), while matrix thermal conductivity ranges between 3.4 and 7.4 W/(m∙K).
Local heat flow for the Stralsund location averages 76 mW/m², which is in good agreement with values reported previously for the NEGB. For the first time, in-situ bulk thermal conductivity is indirectly calculated for entire borehole profiles in the NEGB using the determined surface heat flow and measured temperature data. Average bulk thermal conductivity, derived for geological formations within the Mesozoic section, ranges between 1.5 and 3.1 W/(m∙K). Measuring both dry and water-saturated thermal conductivities allows further evaluation of different two-component mixing models which are often applied in geothermal calculations (e.g., arithmetic mean, geometric mean, harmonic mean, Hashin-Shtrikman mean, and effective-medium theory mean). It is found that the geometric-mean model shows the best correlation between calculated and measured bulk thermal conductivity. However, by applying new model-dependent correction equations, the quality of fit could be significantly improved and the error spread of each model reduced. The ‘corrected’ geometric mean provides the most satisfying results and constitutes a universally applicable model for sedimentary rocks. Furthermore, lithotype-specific and model-independent conversion equations are developed, permitting the calculation of water-saturated thermal conductivity from dry-measured thermal conductivity and porosity within an error range of 5 to 10%. The limited availability of core samples and the expensive core-based laboratory measurements make it worthwhile to use petrophysical well logs to determine thermal conductivity for sedimentary rocks. The approach followed in this study is based on detailed analyses of the relationships between the thermal conductivity of the rock-forming minerals most abundant in sedimentary rocks and the properties measured by standard logging tools.
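The classical two-component mixing models compared here combine a matrix conductivity and a pore-fluid conductivity weighted by porosity. A minimal sketch of the three simplest models (the example values for sandstone matrix, water, and porosity are illustrative, not measurements from this study):

```python
def mixing_models(lam_matrix: float, lam_fluid: float, phi: float) -> dict:
    """Bulk thermal conductivity in W/(m*K) from matrix and pore-fluid
    conductivities and fractional porosity phi, for three classical
    two-component mixing models."""
    return {
        # volume-weighted average (upper bound, layers parallel to heat flow)
        "arithmetic": (1 - phi) * lam_matrix + phi * lam_fluid,
        # geometric mean, the model found to correlate best in this study
        "geometric": lam_matrix ** (1 - phi) * lam_fluid ** phi,
        # series model (lower bound, layers perpendicular to heat flow)
        "harmonic": 1.0 / ((1 - phi) / lam_matrix + phi / lam_fluid),
    }

# Illustrative sandstone: matrix 5.0 W/(m*K), water 0.6 W/(m*K), porosity 0.2.
models = mixing_models(5.0, 0.6, 0.2)
```

The harmonic and arithmetic means bound the estimate from below and above, with the geometric mean in between.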
By using multivariate statistics separately for clastic, carbonate, and evaporite rocks, the findings from these analyses allow the development of prediction equations from large artificial data sets that predict matrix thermal conductivity within an error of 4 to 11%. These equations are validated successfully on a comprehensive subsurface data set from the NGB. In comparison to earlier published approaches, which were developed formation-dependently for certain areas, the newly developed equations show a significant error reduction of up to 50%. These results are used to infer rock thermal conductivity for entire borehole profiles. By inversion of corrected in-situ thermal-conductivity profiles, temperature profiles are calculated and compared to measured high-precision temperature logs. The resulting uncertainty in temperature prediction averages < 5%, which demonstrates the excellent temperature prediction capabilities of the presented approach. In conclusion, data and methods are provided to achieve a much more detailed parameterization of thermal models.
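The link between heat flow, conductivity, and temperature used throughout the thesis is Fourier's law: for steady-state conduction without internal heat production, the temperature increase across a layer is q·Δz/λ. A minimal sketch (the surface temperature and the two-layer column are hypothetical; only the 76 mW/m² heat-flow value comes from the text):

```python
def temperature_profile(t_surface_c: float, q_w_m2: float, layers) -> list:
    """Steady-state conductive temperatures (deg C) at the base of each
    layer, from Fourier's law dT = q * dz / lambda. Assumes no radiogenic
    heat production. layers: iterable of (thickness_m, conductivity_W_mK)."""
    temps = [t_surface_c]
    for dz, lam in layers:
        temps.append(temps[-1] + q_w_m2 * dz / lam)
    return temps

# 76 mW/m^2 (the Stralsund value) through a hypothetical column:
# 500 m at 2.0 W/(m*K), then 1000 m at 3.0 W/(m*K).
profile = temperature_profile(10.0, 0.076, [(500, 2.0), (1000, 3.0)])
```

Note that the lower the conductivity, the steeper the gradient, which is why lateral conductivity variations alter the basin's thermal structure at constant heat flow.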
We prove a local-in-time existence and uniqueness theorem for classical solutions of the coupled Einstein-Euler system, and therefore establish the well-posedness of this system. We use the condition that the energy density might vanish or tend to zero at infinity, and that the pressure is a certain function of the energy density, conditions which are used to describe simplified stellar models. In order to achieve our goals, the complexity of the problem forces us to deal with these equations in a new type of weighted Sobolev spaces of fractional order. Besides their construction, we develop tools for PDEs and techniques for hyperbolic and elliptic equations in these spaces. The well-posedness is obtained in these spaces.
Countries processing raw coffee beans earn low incomes with which to fight the serious environmental problems caused by the by-products and wastewater generated during wet-coffee processing. The aim of this work was to develop alternative methods of improving the quality of the waste by-products and thus making the process economically more attractive, with valorization options that can be brought to the coffee producers.
The type of processing influences not only the constitution of green coffee but also that of the by-products and wastewater. Therefore, coffee bean samples as well as by-products and wastewater collected at different production steps were analyzed. The results show that the composition of the wastewater depends on how much and how often the wastewater is recycled during processing. Considering the coffee beans, the results indicate that the proteins might be affected during processing, and a positive effect of the fermentation on the solubility and accessibility of proteins seems probable. The steps of coffee processing influence the different constituents of green coffee beans which, during roasting, give rise to aroma compounds and express the characteristics of roasted coffee beans. Since this group of compounds is involved in the Maillard reaction during roasting, coffee producers could exploit this to improve the quality of green coffee beans and, finally, the quality of the coffee cup.
The valorization of coffee wastes through modification to activated carbon has been considered as a low-cost option for creating an adsorbent with the prospect of competing with commercial carbons. An activation protocol for spent coffee grounds and parchment was developed to assess their adsorption capacity for organic compounds. Spent coffee grounds and parchment proved to have an adsorption efficiency similar to that of commercial activated carbon.
The results of this study document significant information originating from the processing of the de-pulped to green coffee beans. Furthermore, they show that coffee parchment and spent coffee grounds can be valorized as a low-cost option to produce activated carbons. Further work needs to be directed at optimizing the activation methods to improve the quality of the materials produced, and at the viability of applying such methods in situ to bring coffee producers further valorization opportunities with environmental benefits.
Coffee producers would profit from establishing appropriate, simple technologies to improve green coffee quality, re-use coffee by-products, and valorize wastewater.
A lot has been published about the competencies needed by students in the 21st century (Ravenscroft et al., 2012). However, equally important are the competencies needed by educators in the new era of digital education. We review the key competencies for educators in light of the new methods of teaching and learning proposed by Massive Open Online Courses (MOOCs) and their on-campus counterparts, Small Private Online Courses (SPOCs).
What can we learn from climate data? : Methods for fluctuation, time/scale and phase analysis
(2006)
Since Galileo Galilei invented the first thermometer, researchers have tried to understand the complex dynamics of ocean and atmosphere by means of scientific methods. They observe nature and formulate theories about the climate system. For some decades, powerful computers have been capable of simulating the past and future evolution of the climate. Time series analysis tries to link the observed data to the computer models: using statistical methods, one estimates characteristic properties of the underlying climatological processes that in turn can enter the models. The quality of an estimation is evaluated by means of error bars and significance testing. On the one hand, such a test should be capable of detecting interesting features, i.e. be sensitive. On the other hand, it should be robust and sort out false positive results, i.e. be specific. This thesis mainly aims to contribute to methodological questions of time series analysis, with a focus on sensitivity and specificity, and to apply the investigated methods to recent climatological problems. First, the inference of long-range correlations by means of Detrended Fluctuation Analysis (DFA) is studied. It is argued that power-law scaling of the fluctuation function, and thus long memory, may not be assumed a priori but has to be established. This requires investigating the local slopes of the fluctuation function. The variability characteristic of stochastic processes is accounted for by calculating empirical confidence regions. The comparison of a long-memory with a short-memory model shows that the inference of long-range correlations from a finite amount of data by means of DFA is not specific. When aiming to infer short memory by means of DFA, a local slope larger than $\alpha=0.5$ at large scales does not necessarily imply long memory. Also, a finite scaling of the autocorrelation function is shifted to larger scales in the fluctuation function.
It turns out that long-range correlations cannot be concluded unambiguously from the DFA results for the Prague temperature data set. In the second part of the thesis, an equivalence class of nonstationary Gaussian stochastic processes is defined in the wavelet domain. These processes are characterized by means of wavelet multipliers and exhibit well-defined time-dependent spectral properties; they allow one to generate realizations of any nonstationary Gaussian process. The dependency of the realizations on the wavelets used for the generation is studied, and bias and variance of the wavelet sample spectrum are calculated. To overcome the difficulties of multiple testing, an areawise significance test is developed and compared to the conventional pointwise test in terms of sensitivity and specificity. Applications to climatological and hydrological questions are presented. In the last part, the coupling between El Nino/Southern Oscillation (ENSO) and the Indian Monsoon on inter-annual time scales is studied by means of Hilbert transformation and a curvature-defined phase. This method allows one to investigate the relation of two oscillating systems with respect to their phases, independently of their amplitudes. The performance of the technique is evaluated using a toy model. From the data, distinct epochs are identified, especially two intervals of phase coherence, 1886-1908 and 1964-1980, confirming earlier findings from a new point of view. A significance test of high specificity corroborates these results. Also, so far unknown periods of coupling invisible to linear methods are detected. These findings suggest that the decreasing correlation during the last decades might be partly inherent to the ENSO/Monsoon system.
Finally, a possible interpretation of how volcanic radiative forcing could cause the coupling is outlined.
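The DFA procedure discussed above (cumulative profile, windowed linear detrending, log-log slope of the fluctuation function) can be sketched in a few lines of NumPy. This is a minimal DFA1 implementation, not the thesis's own code; the white-noise check illustrates the expected exponent of 0.5 for an uncorrelated process:

```python
import numpy as np

def dfa(x, scales):
    """Detrended Fluctuation Analysis (DFA1): returns the fluctuation
    function F(s) per window size s and the overall log-log slope, i.e.
    the scaling exponent alpha (0.5 for uncorrelated noise)."""
    y = np.cumsum(np.asarray(x, float) - np.mean(x))  # the profile
    fluct = []
    for s in scales:
        n = len(y) // s
        windows = y[: n * s].reshape(n, s)
        t = np.arange(s)
        # subtract a linear least-squares trend from every window
        resid_sq = [
            np.mean((w - np.polyval(np.polyfit(t, w, 1), t)) ** 2)
            for w in windows
        ]
        fluct.append(np.sqrt(np.mean(resid_sq)))
    alpha = np.polyfit(np.log(scales), np.log(fluct), 1)[0]
    return np.array(fluct), alpha

# White noise has no memory, so the fitted slope should hover around 0.5.
rng = np.random.default_rng(42)
_, alpha = dfa(rng.standard_normal(4096), [16, 32, 64, 128, 256])
```

The thesis's point is precisely that a single fitted slope is not enough: one has to inspect the local slopes across scales before claiming power-law scaling.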
What Colin Reynolds could tell us about nutrient limitation, N:P ratios and eutrophication control
(2020)
Colin Reynolds exquisitely consolidated our understanding of the driving forces shaping phytoplankton communities and of those setting the upper limit to biomass yield, with limitation typically shifting from light in winter to phosphorus in spring. Nonetheless, co-limitation is frequently postulated from enhanced growth responses to enrichments with both N and P, or from N:P ranging around the Redfield ratio, leading to the conclusion that both N and P need to be reduced in order to mitigate eutrophication. Here, we review the current understanding of limitation through N and P and of co-limitation. We conclude that Reynolds is still correct: (i) Liebig's law of the minimum holds, and reducing P is sufficient, provided the concentrations achieved are low enough; (ii) analyses of nutrient limitation need to exclude evidently non-limiting situations, i.e. where soluble P exceeds 3-10 µg/l, dissolved N exceeds 100-130 µg/l, and total P and N support high biomass levels with self-shading causing light limitation; (iii) additionally decreasing N to limiting concentrations may be useful in specific situations (e.g. shallow waterbodies with high internal P and pronounced denitrification); (iv) management decisions require local, situation-specific assessments. The value of research on stoichiometry and co-limitation lies in promoting our understanding of phytoplankton ecophysiology and community ecology.
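Point (ii) amounts to a screening rule: before inferring limitation, exclude situations where a nutrient is evidently in excess. A minimal sketch of such a screen, using the upper ends of the ranges stated in the review as illustrative cut-offs (the function and its return strings are ours, not from the paper):

```python
def screen_limitation(soluble_p_ug_l: float, dissolved_n_ug_l: float,
                      p_max: float = 10.0, n_max: float = 130.0) -> str:
    """Screen out evidently non-limiting situations: soluble P above
    ~3-10 ug/l or dissolved N above ~100-130 ug/l marks that nutrient as
    not limiting. The exact cut-offs (10 and 130 here) are illustrative
    choices within the stated ranges."""
    p_possible = soluble_p_ug_l <= p_max
    n_possible = dissolved_n_ug_l <= n_max
    if p_possible and n_possible:
        return "both P and N potentially limiting"
    if p_possible:
        return "P limitation plausible"
    if n_possible:
        return "N limitation plausible"
    return "neither limiting; check light/self-shading"
```

Such a pre-filter is what separates genuine co-limitation evidence from enrichment responses in waters where one nutrient was never scarce.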
This study analyzes the influence of local and regional climatic factors on the stable isotopic composition of rainfall in the Vietnamese Mekong Delta (VMD) as part of the Asian monsoon region. It is based on 1.5 years of weekly rainfall samples. In the first step, the isotopic composition of the samples is analyzed by local meteoric water lines (LMWLs) and single-factor linear correlations. Additionally, the contribution of several regional and local factors is quantified by multiple linear regression (MLR) of all possible factor combinations and by relative importance analysis. This approach is novel for the interpretation of isotopic records and enables an objective quantification of the explained variance in isotopic records for individual factors. In this study, the local factors are extracted from local climate records, while the regional factors are derived from atmospheric backward trajectories of water particles. The regional factors, i.e., precipitation, temperature, relative humidity and the length of backward trajectories, are combined with equivalent local climatic parameters to explain the response variables δ18O, δ2H, and d-excess of precipitation at the station of measurement.
The results indicate that (i) MLR can better explain the isotopic variation in precipitation (R² = 0.8) compared to single-factor linear regression (R² = 0.3); (ii) the isotopic variation in precipitation is controlled dominantly by regional moisture regimes (~70 %) compared to local climatic conditions (~30 %); (iii) the most important climatic parameter during the rainy season is the precipitation amount along the trajectories of air mass movement; (iv) the influence of local precipitation amount and temperature is not significant during the rainy season, unlike the regional precipitation amount effect; (v) secondary fractionation processes (e.g., sub-cloud evaporation) can be identified through the d-excess and take place mainly in the dry season, either locally for δ18O and δ2H, or along the air mass trajectories for d-excess. The analysis shows that regional and local factors vary in importance over the seasons and that the source regions and transport pathways, and particularly the climatic conditions along the pathways, have a large influence on the isotopic composition of rainfall. Although the general results have been reported qualitatively in previous studies (proving the validity of the approach), the proposed method provides quantitative estimates of the controlling factors, both for the whole data set and for distinct seasons. Therefore, it is argued that the approach constitutes an advancement in the statistical analysis of isotopic records in rainfall that can supplement or precede more complex studies utilizing atmospheric models. Due to its relative simplicity, the method can be easily transferred to other regions, or extended with other factors.
The results illustrate that interpreting the isotopic composition of precipitation as a recorder of local climatic conditions, as is done, for example, for paleorecords of water isotopes, may not be adequate in the southern part of the Indochinese Peninsula, and likely not in other regions affected by monsoon processes either. However, the presented approach could open a pathway towards better and seasonally differentiated reconstruction of paleoclimates based on isotopic records.
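The core of the comparison in result (i) is the gain in explained variance when moving from a single predictor to an MLR over several factors. A minimal sketch on synthetic data (the predictor names, coefficients, and noise level are invented for illustration; the study's actual factors come from climate records and backward trajectories):

```python
import numpy as np

def r_squared(predictors, response):
    """Coefficient of determination of an OLS fit with intercept.
    predictors: (n, k) array; response: (n,) array."""
    n = len(response)
    design = np.column_stack([np.ones(n), predictors])
    beta, *_ = np.linalg.lstsq(design, response, rcond=None)
    resid = response - design @ beta
    ss_tot = np.sum((response - response.mean()) ** 2)
    return 1.0 - np.sum(resid ** 2) / ss_tot

# Hypothetical weekly record: d18O driven by a regional factor (rainfall
# along the trajectory) and a local factor (temperature), plus noise.
rng = np.random.default_rng(0)
regional = rng.normal(size=80)
local = rng.normal(size=80)
d18o = -3.0 * regional - 1.0 * local + rng.normal(scale=0.5, size=80)

r2_single = r_squared(regional.reshape(-1, 1), d18o)
r2_multi = r_squared(np.column_stack([regional, local]), d18o)
```

Because the models are nested, the in-sample R² can only increase with the added factor; relative importance analysis then apportions that explained variance among the factors.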
What did Cain say to Abel?
(2009)
Aim
Although research and clinical definitions of psychotherapeutic competence have been proposed, less is known about the layperson perspective. The aim was to explore the views of individuals with different levels of psychotherapy experience regarding what, in their view, constitutes a competent therapist.
Method
In an online survey, 375 persons (64% female, mean age 33.24 years) with no experience, professional experience, or personal experience of psychotherapy participated. To provide low-threshold questions, we first presented two qualitative items (i.e. "In your opinion, what makes a good/competent psychotherapist?"; "How do you recognize that a psychotherapist is not competent?") and analysed them using inductive content analysis techniques (Mayring, 2014). Then, we gave participants a 16-item questionnaire including items from previous surveys and from the literature and analysed them descriptively.
Results
Work-related principles, professionalism, personality characteristics, caring communication, and empathy and understanding were important categories of competence. Concerning the quantitative questions, most participants agreed with items indicating that a therapist should be open, listen well, show empathy and behave responsibly.
Conclusion
Investigating layperson perspectives suggested that effective and professional interpersonal behaviour of therapists plays a central role in the public's perception of psychotherapy.
The goal of this paper is to study the demand factors driving enrollment in massive open online courses. Using course-level data from a French MOOC platform, we study the course-, teacher- and institution-related characteristics that influence the enrollment decision of students, in a setting where enrollment is open to all students without administrative barriers. Social and traditional media coverage around the course is a key driver. In addition, the language of instruction and the (estimated) amount of work needed to complete the course also have a significant impact. The data also suggest that the presence of same-side externalities is limited. Finally, the preferences of national and international students tend to differ on several dimensions.
This study examines the course and driving forces of recent vegetation change in the Mongolian steppe. A sediment core covering the last 55 years from a small closed-basin lake in central Mongolia was analyzed for its multi-proxy record at annual resolution. Pollen analysis shows that the highest abundances of planted Poaceae and the highest vegetation diversity occurred during 1977–1992, reflecting agricultural development in the lake area. A decrease in diversity and an increase in Artemisia abundance after 1992 indicate enhanced vegetation degradation in recent times, most probably because of overgrazing and farmland abandonment. Human impact is the main factor in the vegetation degradation within the past decades, as revealed by a series of redundancy analyses, while climate change and soil erosion play subordinate roles. High Pediastrum (a green alga) influx, high atomic total organic carbon/total nitrogen (TOC/TN) ratios, abundant coarse detrital grains, and the decrease of δ13Corg and δ15N since about 1977, but particularly after 1992, indicate that abundant terrestrial organic matter and nutrients were transported into the lake and caused lake eutrophication, presumably because of intensified land use. Thus, we infer that the transition to a market economy in Mongolia since the early 1990s not only caused dramatic vegetation degradation but also affected the lake ecosystem through anthropogenic changes in the catchment area.
In this talk, I would like to share my experiences gained from participating in four CSP solver competitions and the second ASP solver competition. In particular, I’ll talk about how various programming techniques can make huge differences in solving some of the benchmark problems used in the competitions. These techniques include global constraints, table constraints, and problem-specific propagators and labeling strategies for selecting variables and values. I’ll present these techniques with experimental results from B-Prolog and other CLP(FD) systems.
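The table constraints mentioned above list allowed tuples explicitly. As a minimal illustration (a Python sketch of my own, not B-Prolog/CLP(FD) code; the toy variables and tuples are invented), the domain pruning a table (extensional) constraint performs can be written as:

```python
def prune_table(domains, table):
    """Generalized arc consistency for a table (extensional) constraint:
    keep only those values that appear in at least one allowed tuple whose
    other components are still in their domains."""
    changed = True
    while changed:
        changed = False
        # A tuple supports values only if every component is still in its domain.
        supported = [t for t in table
                     if all(v in domains[i] for i, v in enumerate(t))]
        for i in range(len(domains)):
            new_dom = {t[i] for t in supported}
            if new_dom != domains[i]:
                domains[i] = new_dom
                changed = True
    return domains

# Toy example: X, Y each in {1, 2, 3}; allowed pairs (1, 2) and (2, 3).
doms = prune_table([{1, 2, 3}, {1, 2, 3}], [(1, 2), (2, 3)])
```

Real solvers implement this propagation far more efficiently (e.g. with supports and residues), but the pruning effect is the same: X is reduced to {1, 2} and Y to {2, 3}.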
The COVID-19 pandemic created the largest experiment in working from home. We study how persistent telework may change energy and transport consumption and costs in Germany, to assess the distributional and environmental implications if working from home sticks. Based on data from the German Microcensus and available classifications of working-from-home feasibility for different occupations, we calculate the change in energy consumption and travel to work when 15% of employees work full time from home. Our findings suggest that telework translates into an annual increase in heating energy expenditure of 110 euros per worker and a decrease in transport expenditure of 840 euros per worker. All income groups would gain from telework, but high-income workers gain twice as much as low-income workers. The value of time saved is between 1.3 and 6 times greater than the savings from reduced travel costs, and almost 9 times higher for high-income workers than for low-income workers. The direct effect on CO₂ emissions due to reduced car commuting amounts to 4.5 million tons of CO₂, representing around 3 percent of carbon emissions in the transport sector.
What is it good for?
(2023)
Military conflicts and wars affect a country’s development in various dimensions. Rising inflation rates are a potentially important economic effect associated with conflict. High inflation can undermine investment, weigh on private consumption, and threaten macroeconomic stability. Furthermore, these effects are not necessarily restricted to the locality of the conflict but can also spill over to other countries. Therefore, to understand how conflict affects the economy and to make a more comprehensive assessment of the costs of armed conflict, it is important to take inflationary effects into account. To disentangle the conflict–inflation nexus and to quantify this relationship, we conduct a panel analysis for 175 countries over the period 1950–2019. To capture indirect inflationary effects, we construct a distance-based spillover index. In general, the results of our analysis confirm a statistically significant positive direct association between conflicts and inflation rates. This finding is robust across various model specifications. Moreover, our results indicate that conflict-induced inflation is not solely driven by increasing money supply. Furthermore, we document a statistically significant positive indirect association between conflicts and inflation rates in uninvolved countries.
What is visualization?
(2011)
Over the last 20 years, information visualization has become a common tool in science and a growing presence in the arts and culture at large. However, the use of visualization in cultural research is still in its infancy. Based on work in the analysis of video games, cinema, TV, animation, Manga and other media carried out at the Software Studies Initiative at the University of California, San Diego over the last two years, a number of visualization techniques and methods particularly useful for cultural and media research are presented.
Extract: That the Celtic languages were of the Indo-European family was first recognised by Rasmus Christian Rask (*1787), a young Danish linguist, in 1818. However, the fact that he wrote in Danish meant that his discovery was not noted by the linguistic establishment until long after his untimely death in 1832. The same conclusion was arrived at independently of Rask and, apparently, of each other, by Adolphe Pictet (1836) and Franz Bopp (1837). This agreement between the foremost scholars made possible the completion of the picture of the spread of the Indo-European languages in the extreme west of the European continent. However, in the Middle Ages the speakers of Irish had no awareness of any special relationship between Irish and the other Celtic languages, and a scholar as linguistically competent as Cormac mac Cuillennáin (†908), or whoever compiled Sanas Chormaic, treated Welsh on the same basis as Greek, Latin, and the lingua northmannorum in the elucidation of the meaning and history of Irish words. [...]
In an effort to describe and produce different formats for video instruction, the research community in technology-enhanced learning, and MOOC scholars in particular, have focused on the general style of video production: whether it is a digitally scripted “talk-and-chalk” or a “talking head” version of a learning unit. Since these production styles comprise various sub-elements, this paper deconstructs the inherited elements of video production in the context of educational live-streams. Drawing on over 700 videos from both synchronous and asynchronous modalities of large video-based platforms (YouTube and Twitch), we identified 92 features in eight categories of video production. These include commonly analyzed features such as the use of a green screen and a visible instructor, but also less studied features such as social media connections and changing camera perspective depending on the topic being covered. Overall, the results enable an analysis of common video production styles and provide a toolbox for categorizing new formats, independent of their final (a)synchronous use in MOOCs. Keywords: video production, MOOC video styles, live-streaming.
What Makes an Employer?
(2019)
As the policy debate on entrepreneurship increasingly centers on firm growth in terms of job creation, it is important to better understand which variables influence the first hiring decision and which ones influence the subsequent survival as an employer. Using the German Socio-economic Panel (SOEP), we analyze what role individual characteristics of entrepreneurs play in sustainable job creation. While human and social capital variables positively influence the hiring decision and the survival as an employer in the same direction, we show that none of the personality traits affect the two outcomes in the same way. Some traits are only relevant for survival as an employer but do not influence the hiring decision, other traits even unfold a revolving door effect, in the sense that employers tend to fail due to the same characteristics that positively influenced their hiring decision.
Regulatory focus is a motivational construct that describes humans’ motivational orientation during goal pursuit. It is conceptualized as a chronic, trait-like, as well as a momentary, state-like orientation. Whereas there is a large number of measures to capture chronic regulatory focus, measures for its momentary assessment are only just emerging. This paper presents the development and validation of a measure of Momentary–Chronic Regulatory Focus. Our development incorporates the distinction between self-guide and reference-point definitions of regulatory focus. Ideals and ought striving are the promotion and prevention dimensions in the self-guide system; gain and non-loss regulatory focus are the respective dimensions within the reference-point system. Three survey-based studies test the structure, psychometric properties, and validity of the measure in its version to assess chronic regulatory focus (two samples of working participants, N = 389, N = 672; one student sample [time 1, N = 105; time 2, n = 91]). In two further studies, an experience sampling study with students (N = 84, k = 1649) and a daily-diary study with working individuals (N = 129, k = 1766), the measure was applied to assess momentary regulatory focus. Multilevel analyses test the momentary measure’s factorial structure, provide support for its sensitivity to capture within-person fluctuations, and provide evidence for concurrent construct validity.
In this paper we report on our experiments in teaching computer science concepts with a mix of tangible and abstract object manipulations. The goal we set ourselves was to let pupils discover the challenges one has to meet to automatically manipulate formatted text. We worked with a group of 25 secondary school pupils (9-10th grade), and they were actually able to “invent” the concept of mark-up language. From this experiment we distilled a set of activities which will be replicated in other classes (6th grade) under the guidance of maths teachers.
The term Linked Data refers to connected information sources comprising structured data about a wide range of topics and for a multitude of applications. In recent years, the conceptional and technical foundations of Linked Data have been formalized and refined. To this end, well-known technologies have been established, such as the Resource Description Framework (RDF) as a Linked Data model or the SPARQL Protocol and RDF Query Language (SPARQL) for retrieving this information. Whereas most research has been conducted in the area of generating and publishing Linked Data, this thesis presents novel approaches for improved management. In particular, we illustrate new methods for analyzing and processing SPARQL queries. Here, we present two algorithms suitable for identifying structural relationships between these queries. Both algorithms are applied to a large number of real-world requests to evaluate the performance of the approaches and the quality of their results. Based on this, we introduce different strategies enabling optimized access of Linked Data sources. We demonstrate how the presented approach facilitates effective utilization of SPARQL endpoints by prefetching results relevant for multiple subsequent requests. Furthermore, we contribute a set of metrics for determining technical characteristics of such knowledge bases. To this end, we devise practical heuristics and validate them through thorough analysis of real-world data sources. We discuss the findings and evaluate their impact on utilizing the endpoints. Moreover, we detail the adoption of a scalable infrastructure for improving Linked Data discovery and consumption. As we outline in an exemplary use case, this platform is eligible both for processing and provisioning the corresponding information.
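The thesis's actual algorithms for identifying structural relationships between SPARQL queries are not reproduced in the abstract. As a simple, hypothetical illustration of the underlying idea (names and the naive string-based extraction are my own; a real implementation would use a full SPARQL parser), two queries can be compared by the overlap of their triple patterns:

```python
def triple_patterns(query):
    """Naively extract triple patterns from a SPARQL WHERE clause.
    Illustrative only: splits the group graph pattern on ' . ' and
    ignores the full SPARQL grammar (FILTER, OPTIONAL, prefixes, ...)."""
    body = query.split("{", 1)[1].rsplit("}", 1)[0]
    return {p.strip() for p in body.split(" . ") if p.strip()}

def structural_similarity(q1, q2):
    """Jaccard overlap of the two queries' triple-pattern sets."""
    a, b = triple_patterns(q1), triple_patterns(q2)
    return len(a & b) / len(a | b) if a | b else 1.0

q1 = "SELECT ?s WHERE { ?s a ?type . ?s ?p ?o }"
q2 = "SELECT ?s WHERE { ?s a ?type }"
similarity = structural_similarity(q1, q2)  # one shared pattern of two distinct
```

A measure of this kind is one plausible building block for the prefetching strategy described above: queries that share most of their patterns are likely to benefit from each other's cached results.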
What’s in an Irish name?
(2006)
Content: 1. Introduction: The Irish Patronymic System Prior to 1600 2. Anglicisation Pressure 3. Anglicisation: 1600-1900 3.1. Phonetic Approximation 3.2. Simplification 3.3. Translation 3.4. Mistranslation 3.5. Equivalence with Existing English Surname 3.6. Multiplicity of Anglicised Forms 3.7. Anglicisation of Prefixes 4. The Call to De-Anglicise 5. Current Personal Naming Patterns in Ireland 5.1. Current Modern Irish 6. Traditional Naming: “X (Son/Daughter) of Y (Son/Daughter) of Z” 7. Nicknames 8. Conclusion
We examined the effects of argument-head distance in SVO and SOV languages (Spanish and German), while taking into account readers' working memory capacity and controlling for expectation (Levy, 2008) and other factors. We predicted only locality effects, that is, a slowdown produced by increased dependency distance (Gibson, 2000; Lewis and Vasishth, 2005). Furthermore, we expected stronger locality effects for readers with low working memory capacity. Contrary to our predictions, low-capacity readers showed faster reading with increased distance, while high-capacity readers showed locality effects. We suggest that while the locality effects are compatible with memory-based explanations, the speedup of low-capacity readers can be explained by an increased probability of retrieval failure. We present a computational model based on ACT-R built under the previous assumptions, which is able to give a qualitative account for the present data and can be tested in future research. Our results suggest that in some cases, interpreting longer RTs as indexing increased processing difficulty and shorter RTs as facilitation may be too simplistic: The same increase in processing difficulty may lead to slowdowns in high-capacity readers and speedups in low-capacity ones. Ignoring individual level capacity differences when investigating locality effects may lead to misleading conclusions.
When Jesus Spoke Yiddish
(2015)
In this paper, I wish to present some evidence from a Yiddish manuscript of the “Toledot Yeshu” which has not yet been the object of research: MS Günzburg 1730, kept in the Russian State Library in Moscow and dated to the 17th century. The manuscript is part of the so-called ‘Herode-tradition’ of the “Toledot Yeshu”. This means that the Yiddish manuscript is connected to the version printed in Hebrew, accompanied by a Latin translation, by the Swiss pastor and theologian Johann Jacob Uldrich (Huldricus, 1683–1731) in Leiden in 1705, bearing the title “Historia Jeschuae Nazareni”. Given the uncertainty about the exact dating of the Yiddish manuscript, a comparison between the Hebrew and the Yiddish versions still allows some remarks on the characteristics of the Yiddish version and raises some questions about the transmission and reception of this challenging and intriguing text.
One of the informal properties often used to describe a new virtual world is its degree of openness. Yet what is an “open” virtual world? Does the phrase mean generally the same thing to different people? What distinguishes an open world from a less open world? Why does openness matter anyway? The answers to these questions cast light on an important, but shadowy, and uneasy, topic for virtual worlds: the relationship between those who construct the virtual, and those who use these constructions.
Previous research informs us about facilitators of employees’ promotive voice. Yet little is known about what determines whether a specific idea for constructive change brought up by an employee will be approved or rejected by a supervisor. Drawing on interactionist theories of motivation and personality, we propose that a supervisor will be least likely to support an idea when it threatens the supervisor’s power motive, and when it is perceived to serve the employee’s own striving for power. The prosocial versus egoistic intentions attributed to the idea presenter are proposed to mediate the latter effect. We conducted three scenario-based studies in which supervisors evaluated fictitious ideas voiced by employees that – if implemented – would have power-related consequences for them as a supervisor. Results show that the higher a supervisors’ explicit power motive was, the less likely they were to support a power-threatening idea (Study 1, N = 60). Moreover, idea support was less likely when this idea was proposed by an employee that was described as high (rather than low) on power motivation (Study 2, N = 79); attributed prosocial intentions mediated this effect. Study 3 (N = 260) replicates these results.
When we pay close attention to the prosody of Wh-questions in Japanese, we discover many novel and interesting empirical puzzles that would require us to devise a much finer syntactic component of grammar. This paper addresses the issues that pose some problems to such an elaborated grammar, and offers solutions, making an appeal to the information structure and sentence processing involved in the interpretation of interrogative and focus constructions.
A survey has been carried out in the Computer Science (CS) department at the University of Baghdad to investigate the attitudes of CS students in a female dominant environment, showing the differences between male and female students in different academic years. We also compare the attitudes of the freshman students of two different cultures (University of Baghdad, Iraq, and the University of Potsdam).
First come, first served: Critical choices between alternative actions are often made based on events external to an organization, and reacting promptly to their occurrence can be a major advantage over the competition. In Business Process Management (BPM), such deferred choices can be expressed in process models, and they are an important aspect of process engines. Blockchain-based process execution approaches are no exception to this, but are severely limited by the inherent properties of the platform: The isolated environment prevents direct access to external entities and data, and the non-continual runtime based entirely on atomic transactions impedes the monitoring and detection of events. In this paper we provide an in-depth examination of the semantics of deferred choice, and transfer them to environments such as the blockchain. We introduce and compare several oracle architectures able to satisfy certain requirements, and show that they can be implemented using state-of-the-art blockchain technology.
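As a language-neutral sketch of one of the oracle architectures discussed (Python rather than smart-contract code; the class names, branch labels, and threshold are invented for this illustration), a pull-style storage oracle and a deferred choice resolved against it might look like:

```python
class StorageOracle:
    """Pull-style oracle sketch: an off-chain provider writes the latest
    value; the process reads it synchronously inside a transaction."""
    def __init__(self):
        self._value = None

    def update(self, value):
        # Called by the off-chain oracle provider.
        self._value = value

    def read(self):
        # Called by the process during transaction execution.
        return self._value


class DeferredChoice:
    """Resolve to whichever branch's condition first holds when checked;
    once taken, the choice is final (the other branches are withdrawn)."""
    def __init__(self, oracle, branches):
        self.oracle = oracle
        self.branches = branches  # list of (name, condition) pairs
        self.taken = None

    def try_resolve(self):
        if self.taken is None:
            value = self.oracle.read()
            if value is not None:
                for name, condition in self.branches:
                    if condition(value):
                        self.taken = name
                        break
        return self.taken


oracle = StorageOracle()
choice = DeferredChoice(oracle, [("high", lambda v: v >= 40),
                                 ("low", lambda v: v < 40)])
```

The sketch also shows the limitation the paper addresses: because the blockchain runtime is not continual, `try_resolve` only fires inside a transaction, so some party must trigger it after the oracle value changes.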
In this study, a new reliable, economic, and environmentally friendly one-step synthesis is established to obtain carbon nanodots (CNDs) with well-defined and reproducible photoluminescence (PL) properties via the microwave-assisted hydrothermal treatment of starch and Tris-acetate-EDTA (TAE) buffer as carbon sources. Three kinds of CNDs are prepared using different sets of the above-mentioned starting materials. The as-synthesized CNDs, C-CND (starch only), N-CND 1 (starch in TAE) and N-CND 2 (TAE only), exhibit highly homogeneous PL and are ready to use without the need for further purification. The CNDs are stable over a long period of time (>1 year) either in solution or as freeze-dried powder. Depending on the starting material, CNDs with PL quantum yields (PLQY) ranging from less than 1% up to 28% are obtained. The influence of the precursor concentration, reaction time and type of additives on the optical properties (UV-Vis absorption, PL emission spectrum and PLQY) is carefully investigated, providing insight into the chemical processes that occur during CND formation. Remarkably, upon freeze-drying, the initially brown CND solution turns into a non-fluorescent white/slightly brown powder which recovers its PL in aqueous solution and can potentially be applied as a fluorescent marker in bio-imaging, as a reduction agent or as a photocatalyst.
Co-doping of the MOF 3∞[Zn(2-methylimidazolate-4-amide-5-imidate)] (IFP-1 = Imidazolate Framework Potsdam-1) with luminescent Eu3+ and Tb3+ ions presents an approach to utilize the porosity of the MOF for the intercalation of luminescence centers and for tuning of the chromaticity to the emission of white light of the quality of a three color emitter. Organic based fluorescence processes of the MOF backbone as well as metal based luminescence of the dopants are combined to one homogenous single source emitter while retaining the MOF's porosity. The lanthanide ions Eu3+ and Tb3+ were doped in situ into IFP-1 upon formation of the MOF by intercalation into the micropores of the growing framework without a structure directing effect. Furthermore, the color point is temperature sensitive, so that a cold white light with a higher blue content is observed at 77 K and a warmer white light at room temperature (RT) due to the reduction of the organic emission at higher temperatures. The study further illustrates the dependence of the amount of luminescent ions on porosity and sorption properties of the MOF and proves the intercalation of luminescence centers into the pore system by low-temperature site selective photoluminescence spectroscopy, SEM and EDX. It also covers an investigation of the border of homogenous uptake within the MOF pores and the formation of secondary phases of lanthanide formates on the surface of the MOF. Crossing the border from a homogenous co-doping to a two-phase composite system can be beneficially used to adjust the character and warmth of the white light. This study also describes two-color emitters of the formula Ln@IFP-1a–d (Ln: Eu, Tb) by doping with just one lanthanide Eu3+ or Tb3+.
Dispersal behavior plays an important role in the geographical distribution and population structure of any given species. An individual’s fitness, reproductive and competitive ability, and dispersal behavior can depend on its age. Age-dependent as well as density-dependent dispersal patterns are common in many bird species. In this thesis, I first present age-dependent breeding ability and natal site fidelity in white storks (Ciconia ciconia), migratory birds breeding in large parts of Europe. I predicted that both the proportion of breeding birds and natal site fidelity increase with age. From the 1970s onward, following a steep population decline, a recovery of the white stork population has been observed in many regions of Europe. The increasing density of the white stork population in Eastern Germany, especially after 1983, allowed examining density-dependent as well as age-dependent breeding dispersal patterns. Second, I therefore examine whether young birds show breeding dispersal more often and over longer distances than old birds, and whether the frequency of dispersal events increases with population density, especially among young storks. Third, I present age- and density-dependent preferences in dispersal direction in the given population. I ask whether and how the major spring migration direction interacts with the dispersal directions of white storks at different ages and under different population densities. The proportion of breeding individuals increased over the first 22 years of life and then decreased, suggesting senescent decay in aging storks. Young storks were more faithful to their natal sites than old storks, probably owing to their innate migratory direction and distance. Young storks dispersed more frequently than old storks in general, but not over longer distances.
The proportion of dispersing individuals increased significantly with increasing population density, indicating density-dependent dispersal behavior in white storks. Moreover, a significant interaction effect between the age of dispersing birds and year (1980–2006) suggests that older birds dispersed more from their previous nest sites over time, owing to increased competition. Both young and old storks dispersed along their spring migration direction; however, their directional preferences differed. Young storks tended to settle down before reaching their previous nest sites (leading to south-eastward dispersal), while old birds tended to keep migrating along the migration direction after reaching their previous nest sites (leading to north-westward dispersal). The cues triggering dispersal events may be age-dependent. Changes in the dispersal direction over time were observed: the dispersal direction became obscured during the second half of the observation period (1993–2006). An increase in competition may affect dispersal behavior in storks. I discuss the potential role of age in the observed age-dependent dispersal behavior, and of competition in the density-dependent dispersal behavior. This Ph.D. thesis contributes to the understanding of the population structure and geographical distribution of white storks. Moreover, the age- and density (competition)-dependent dispersal behavior presented here helps to clarify the mechanisms underpinning dispersal behavior in bird species.
This study explores the identity of the Bene Israel caste from India and its assimilation into Israeli society. The large immigration from India to Israel started in the early 1950s and continued until the early 1970s. Initially, these immigrants struggled hard as they faced many problems such as the language barrier, cultural differences, a new climate, geographical isolation, and racial discrimination. This analysis focuses on the three major aspects of the integration process involving the Bene Israel: economic, socio-cultural and political. The study covers the period from the early fifties to the present.
I will focus on the identity of the Bene Israel, which has evolved since their immigration to Israel: from a Hindu–Muslim lifestyle and customs, they integrated into the Jewish life of Israel. Despite its ethnographic nature, this study has theological implications, as it describes an encounter between Jewish monotheism and Indian polytheism.
All the Western scholars who researched the Bene Israel community felt impelled to rely on information received from community members themselves. No written historical evidence recorded Bene Israel culture and origin. Only from the nineteenth century onwards, after the intrusion of Western Jewish missionaries, were Jewish books translated into Marathi. Missionary activities among the Bene Israel served as a catalyst for the Bene Israel themselves to investigate their historical past. Haeem Samuel Kehimkar (1830–1908), a Bene Israel teacher, wrote notes on the history of the Bene Israel in India in Marathi in 1897. Brenda Ness wrote in her dissertation:
The results [of the missionary activities] are several works about the community in English and Marathi by Bene-Israel authors which have appeared during the last century. These are, for the most part, not documented; they consist of much theorizing on accepted tradition and tend to be apologetic in nature.
There can be no philosophical explanation or rational justification for an entire community to leave their motherland India, and enter into a process of annihilation of its own free will. I see this as a social and cultural suicide. In craving for a better future in Israel, the Indian Bene Israel community pays an enormously heavy price as a people that are today discarded by the East and disowned by the West: because they chose to become something that they never were and never could be. As it is written, “know where you came from, and where you are going.” A community with an ancient history from a spiritual culture has completely lost its identity and self-esteem.
In concluding this dissertation, I realize the dilemma with which I have confronted the members of the Bene Israel community, which I have reviewed after strenuous and constant self-examination. I chose to trace the younger generations’ urges towards acceptance, and wish to clarify my intricate analysis of this controversial community. The complexity of living in a Jewish state in which citizens cannot fulfill basic desires, such as matrimony, forced an entire community to conceal their true identity and perjure themselves to blend in, for the sake of national integration. Although scholars accepted their new claims, the skepticism of the rabbinate authorities prevails, and they refuse to marry them to this day, suspecting they are an Indian caste.
Clustering in education is important for identifying groups of objects in order to find linked patterns of correlations in educational datasets. MOOCs provide a rich source of such datasets, enabling a wide selection of clustering options and an opportunity for cohort analyses. In this experience paper, five research studies on clustering in MOOCs are reviewed, drawing out the rationales, methods, and student clusters that reflect certain kinds of learning behaviour. The collection of the varied clusters shows that each study identifies and defines clusters according to distinctive engagement patterns. Implications and a summary are provided at the end of the paper.
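The reviewed studies use a variety of clustering methods; as a generic sketch only (the engagement features and data points below are invented, and production work would use a library such as scikit-learn rather than this hand-rolled version), a plain k-means over per-learner feature vectors looks like:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means for small feature vectors, e.g. per-learner engagement
    features such as (videos watched, quizzes attempted)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # initialize from random data points
    for _ in range(iters):
        # Assignment step: each point joins its nearest center's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda j: sum((a - b) ** 2
                                            for a, b in zip(p, centers[j])))
            clusters[nearest].append(p)
        # Update step: recompute each center as its cluster's mean.
        centers = [tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl
                   else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

# Hypothetical learners: two low-engagement and two high-engagement profiles.
points = [(0, 0), (0, 1), (10, 10), (10, 11)]
centers, clusters = kmeans(points, k=2)
```

On this toy data the two engagement profiles separate cleanly into one cluster each, which is the kind of cohort structure the reviewed studies interpret as distinct learning behaviours.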
Semi-natural habitats (SNHs) are becoming increasingly scarce in modern agricultural landscapes. This may reduce natural ecosystem services such as pest control with its putatively positive effect on crop production. In agreement with other studies, we recently reported wheat yield reductions at field borders which were linked to the type of SNH and the distance to the border. In this experimental landscape-wide study, we asked whether these yield losses have a biotic origin while analyzing fungal seed and fungal leaf pathogens, herbivory of cereal leaf beetles, and weed cover as hypothesized mediators between SNHs and yield. We established experimental winter wheat plots of a single variety within conventionally managed wheat fields at fixed distances either to a hedgerow or to an in-field kettle hole. For each plot, we recorded the fungal infection rate on seeds, fungal infection and herbivory rates on leaves, and weed cover. Using several generalized linear mixed-effects models as well as a structural equation model, we tested the effects of SNHs at a field scale (SNH type and distance to SNH) and at a landscape scale (percentage and diversity of SNHs within a 1000-m radius). In the dry year of 2016, we detected one putative biotic culprit: Weed cover was negatively associated with yield values at a 1-m and 5-m distance from the field border with a SNH. None of the fungal and insect pests, however, significantly affected yield, neither solely nor depending on type of or distance to a SNH. However, the pest groups themselves responded differently to SNH at the field scale and at the landscape scale. Our findings highlight that crop losses at field borders may be caused by biotic culprits; however, their negative impact seems weak and is putatively reduced by conventional farming practices.
This study investigates the relationship between teacher quality and teachers’ engagement in professional development (PD) activities using data on 229 German secondary school mathematics teachers. We assessed different aspects of teacher quality (e.g. professional knowledge, instructional quality) using a variety of measures, including standardised tests of teachers’ content knowledge, to determine what characteristics are associated with high participation in PD. The results show that teachers with higher scores for teacher quality variables take part in more content-focused PD than teachers with lower scores for these variables. This suggests that teacher learning may be subject to a Matthew effect, whereby more proficient teachers benefit more from PD than less proficient teachers.
Electrical muscle stimulation (EMS) is an increasingly popular training method and has become a focus of research in recent years. New EMS devices offer a wide range of mobile applications for whole-body EMS (WB-EMS) training, e.g., the intensification of dynamic low-intensity endurance exercises through WB-EMS. The present study aimed to determine the differences in exercise intensity between conventional and WB-EMS-superimposed walking, and between conventional and WB-EMS-superimposed Nordic walking (WB-EMS-NW), during a treadmill test. Eleven participants (52.0 ± years; 85.9 ± 7.4 kg, 182 ± 6 cm, BMI 25.9 ± 2.2 kg/m2) performed a 10 min treadmill test at a given velocity (6.5 km/h) in four test situations: walking (W) and Nordic walking (NW), each both conventional and WB-EMS-superimposed. Oxygen uptake in absolute terms (VO2) and relative to body weight (rel. VO2), lactate, and the rate of perceived exertion (RPE) were measured before and after the test. WB-EMS intensity was adjusted individually according to the feedback of each participant. Descriptive statistics are given as mean ± SD. For the statistical analyses, one-factorial ANOVA for repeated measures and two-factorial ANOVA [factors: EMS, W/NW, and their combination (EMS*W/NW)] were performed (α = 0.05). Significant effects were found for the EMS and W/NW factors on the outcome variables VO2 (EMS: p = 0.006, r = 0.736; W/NW: p < 0.001, r = 0.870), relative VO2 (EMS: p < 0.001, r = 0.850; W/NW: p < 0.001, r = 0.937), and lactate (EMS: p = 0.003, r = 0.771; W/NW: p = 0.003, r = 0.764), with both factors producing higher values. However, the difference in VO2 and relative VO2 is within the range of biological variability of ± 12%. The factor combination EMS*W/NW was statistically non-significant for all three variables. WB-EMS resulted in higher RPE values (p = 0.035, r = 0.613); RPE differences for W/NW and EMS*W/NW were not significant.
The current study results indicate that WB-EMS influences the parameters of exercise intensity. The impact on exercise intensity and the clinical relevance of WB-EMS-superimposed walking (WB-EMS-W) exercise is questionable because of the marginal differences in the outcome variables.
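The repeated-measures ANOVA used in this study can be illustrated with a minimal pure-Python computation of the one-factorial case. The helper name `rm_anova` and all data values below are made up for illustration and are not taken from the study:

```python
# One-way repeated-measures ANOVA F statistic, pure-Python sketch of the
# kind of analysis the study reports (illustrative only).

def rm_anova(data):
    """data[subject][condition] -> F statistic for the condition effect."""
    n = len(data)               # subjects
    k = len(data[0])            # conditions
    grand = sum(sum(row) for row in data) / (n * k)
    # Sum of squares for conditions (treatment effect)
    cond_means = [sum(row[j] for row in data) / n for j in range(k)]
    ss_cond = n * sum((m - grand) ** 2 for m in cond_means)
    # Sum of squares for subjects (removed from the error term)
    subj_means = [sum(row) / k for row in data]
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    # Residual sum of squares
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_err = ss_total - ss_cond - ss_subj
    df_cond, df_err = k - 1, (k - 1) * (n - 1)
    return (ss_cond / df_cond) / (ss_err / df_err)

# Hypothetical rel. VO2 values for 4 subjects x 4 conditions
# (W, WB-EMS-W, NW, WB-EMS-NW) -- NOT the study's data:
data = [[28.1, 29.5, 31.0, 32.4],
        [26.8, 28.2, 29.9, 31.1],
        [30.2, 31.1, 33.5, 34.8],
        [27.5, 28.9, 30.6, 32.0]]
F = rm_anova(data)
```

The two-factorial analysis in the study additionally partitions the condition effect into EMS, W/NW, and their interaction; the sketch above shows only the basic within-subject decomposition.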
During the last decade, intracellular actin waves have attracted much attention due to their essential role in various cellular functions, ranging from motility to cytokinesis. Experimental methods have advanced significantly and can capture the dynamics of actin waves over a large range of spatio-temporal scales. However, the corresponding coarse-grained theory mostly avoids the full complexity of this multi-scale phenomenon. In this perspective, we focus on a minimal continuum model of activator–inhibitor type and highlight the qualitative role of mass conservation, which is typically overlooked. Specifically, our interest is to connect between the mathematical mechanisms of pattern formation in the presence of a large-scale mode, due to mass conservation, and distinct behaviors of actin waves.
Why choice matters
(2018)
Measures of democracy are in high demand. Scientific and public audiences use them to describe political realities and to substantiate causal claims about those realities. This introduction to the thematic issue reviews the history of democracy measurement since the 1950s. It identifies four development phases of the field, which are characterized by three recurrent topics of debate: (1) what is democracy, (2) what is a good measure of democracy, and (3) do our measurements of democracy register real-world developments? As the answers to those questions have been changing over time, the field of democracy measurement has adapted and reached higher levels of theoretical and methodological sophistication. In effect, the challenges facing contemporary social scientists are not limited to constructing a sound index of democracy. Today, they also need a profound understanding of the differences between various measures of democracy and their implications for empirical applications. The introduction outlines how the contributions to this thematic issue help scholars cope with the recurrent issues of conceptualization, measurement, and application, and concludes by identifying avenues for future research.
The dissertation examines the use of performance information by public managers. “Use” is conceptualized as purposeful utilization in order to steer, learn, and improve public services. The main research question is: Why do public managers use performance information? To answer this question, I systematically review the existing literature, identify research gaps and introduce the approach of my dissertation. The first part deals with manager-related variables that might affect performance information use but which have thus far been disregarded. The second part models performance data use by applying a theory from social psychology which is based on the assumption that this management behavior is conscious and reasoned. The third part examines the extent to which explanations of performance information use vary if we include other sources of “unsystematic” feedback in our analysis. The empirical results are based on survey data from 2011. I surveyed middle managers from eight selected divisions of all German cities with county status (n=954). To analyze the data, I used factor analysis, multiple regression analysis, and structural equation modeling. My research resulted in four major findings: 1) The use of performance information can be modeled as a reasoned behavior which is determined by the attitude of the managers and of their immediate peers. 2) Regular users of performance data surprisingly are not generally inclined to analyze abstract data but rather prefer gathering information through personal interaction. 3) Managers who take on ownership of performance information at an early stage in the measurement process are also more likely to use this data when it is reported to them. 4) Performance reports are only one source of information among many. Public managers prefer verbal feedback from insiders and feedback from external stakeholders over systematic performance reports.
The dissertation explains these findings using a deductive approach and discusses their implications for theory and practice.
Knowledge-intensive business processes are flexible and data-driven. Therefore, traditional process modeling languages do not meet their requirements: These languages focus on highly structured processes in which data plays a minor role. As a result, process-oriented information systems fail to assist knowledge workers on executing their processes. We propose a novel case management approach that combines flexible activity-centric processes with data models, and we provide a joint semantics using colored Petri nets. The approach is suited to model, verify, and enact knowledge-intensive processes and can aid the development of information systems that support knowledge work.
Knowledge-intensive processes are human-centered, multi-variant, and data-driven. Typical domains include healthcare, insurance, and law. The processes cannot be fully modeled, since the underlying knowledge is too vast and changes too quickly. Thus, models for knowledge-intensive processes are necessarily underspecified. In fact, a case emerges gradually as knowledge workers make informed decisions. Knowledge work imposes special requirements on modeling and managing the respective processes. They include flexibility during design and execution, ad-hoc adaptation to unforeseen situations, and the integration of behavior and data. However, the predominantly used process modeling languages (e.g., BPMN) are unsuited for this task.
Therefore, novel modeling languages have been proposed. Many of them focus on activities' data requirements and declarative constraints rather than imperative control flow. Fragment-Based Case Management, for example, combines activity-centric imperative process fragments with declarative data requirements. At runtime, fragments can be combined dynamically, and new ones can be added. Yet, no integrated semantics for flexible activity-centric process models and data models exists.
In this thesis, Wickr, a novel case modeling approach extending fragment-based Case Management, is presented. It supports batch processing of data, sharing data among cases, and a full-fledged data model with associations and multiplicity constraints. We develop a translational semantics for Wickr targeting (colored) Petri nets. The semantics assert that a case adheres to the constraints in both the process fragments and the data models. Among other things, multiplicity constraints must not be violated. Furthermore, the semantics are extended to multiple cases that operate on shared data. Wickr shows that the data structure may reflect process behavior and vice versa. Based on its semantics, prototypes for executing and verifying case models showcase the feasibility of Wickr. Its applicability to knowledge-intensive and to data-centric processes is evaluated using well-known requirements from related work.
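For intuition, the token-game semantics that such a Petri-net translation targets can be sketched minimally. The sketch below is uncolored and the activity and data-state names are illustrative; the actual translation uses colored nets and handles data models, associations, and multiplicities, which are omitted here:

```python
# Minimal (uncolored) Petri-net token game. A data-state place acts as an
# additional precondition of an activity transition, mimicking the idea
# that an activity may only run if its data object is in the right state.

class PetriNet:
    def __init__(self, transitions, marking):
        # transitions: name -> (list of consumed places, list of produced places)
        self.transitions = transitions
        self.marking = dict(marking)  # place -> token count

    def enabled(self, t):
        pre, _ = self.transitions[t]
        return all(self.marking.get(p, 0) >= 1 for p in pre)

    def fire(self, t):
        if not self.enabled(t):
            raise ValueError(f"transition {t} not enabled")
        pre, post = self.transitions[t]
        for p in pre:
            self.marking[p] -= 1
        for p in post:
            self.marking[p] = self.marking.get(p, 0) + 1

# "approve" may only fire once the data object has reached state "checked".
net = PetriNet(
    {"check":   (["start", "obj:new"], ["ready", "obj:checked"]),
     "approve": (["ready", "obj:checked"], ["done", "obj:approved"])},
    {"start": 1, "obj:new": 1})
net.fire("check")
net.fire("approve")
```

Firing `approve` before `check` would raise, because the `obj:checked` place carries no token yet; this is the mechanism by which such a semantics asserts data-state conformance.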
This is an introduction to Wiener measure and the Feynman-Kac formula on general Riemannian manifolds for Riemannian geometers with little or no background in stochastics. We explain the construction of Wiener measure based on the heat kernel in full detail and we prove the Feynman-Kac formula for Schrödinger operators with bounded potentials. We also consider normal Riemannian coverings and show that projecting and lifting of paths are inverse operations which respect the Wiener measure.
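The formula in question can be stated compactly. The following is one common convention for bounded potentials; sign and scaling conventions vary between authors, so this should be read as a representative statement rather than necessarily the exact normalization used in the text:

```latex
% Feynman-Kac formula for a Schrödinger operator H = \tfrac{1}{2}\Delta + V
% on a Riemannian manifold M, with \Delta = -\mathrm{div}\,\mathrm{grad}
% and V bounded (one common convention; normalizations differ by author):
\[
  \bigl(e^{-tH} f\bigr)(x)
    \;=\; \int_{C_x(M)}
      \exp\!\Bigl(-\int_0^t V\bigl(\gamma(s)\bigr)\,ds\Bigr)\,
      f\bigl(\gamma(t)\bigr)\; d\mathbb{W}_x(\gamma),
\]
% where C_x(M) is the space of continuous paths starting at x and
% \mathbb{W}_x is the Wiener measure constructed from the heat kernel.
```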
Among the multitude of geomorphological processes, aeolian shaping processes are of special character, because even though their immediate impact can be considered low (exceptions exist), their constant and large-scale force makes them a powerful player in the earth system. Pedogenic dust is one of the most important sources of atmospheric aerosols and is therefore regarded as a key player for atmospheric processes. Soil dust emissions, being complex in composition and properties, influence atmospheric processes and air quality and have impacts on other ecosystems. In this dissertation, we unravel a novel scientific understanding of this complex system based on a holistic dataset acquired during a series of field experiments on arable land in La Pampa, Argentina. The field experiments as well as the generated data provide information about topography, various soil parameters, the atmospheric dynamics in the very lower atmosphere (4 m height) as well as measurements regarding aeolian particle movement across a wide range of particle size classes, from 0.2 μm up to coarse sand.
The investigations focus on three topics: (a) the effects of low-scale landscape structures on aeolian transport processes of the coarse particle fraction, (b) the horizontal and vertical fluxes of the very fine particles and (c) the impact of wind gusts on particle emissions.
Among other results presented in this thesis, it could be shown that even though the small-scale topology does have a clear impact on erosion and deposition patterns, physical soil parameters also need to be taken into account for a robust statistical modelling of the latter. Furthermore, the vertical fluxes of particulate matter show different characteristics for the different particle size classes. Finally, a novel statistical measure was introduced to quantify the impact of wind gusts on particle uptake and applied to the provided data set; it shows significantly increased particle concentrations during points in time defined as gust events.
With its holistic approach, this thesis further contributes to the fundamental understanding of how atmosphere and pedosphere are intertwined and affect each other.
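The abstract does not specify the gust-impact measure itself, so the following is only a generic illustration of the idea: flag gust events by a wind-speed threshold and compare mean particle concentrations inside versus outside those events. The function name, the threshold rule, and all numbers are assumptions for the sketch, not the thesis' actual measure:

```python
# Illustrative gust-impact statistic (NOT the thesis' measure): flag time
# steps where wind speed exceeds the series mean by k standard deviations,
# then take the ratio of mean particle concentration during gust events
# to the mean concentration outside them.

def gust_concentration_ratio(wind, conc, k=2.0):
    n = len(wind)
    mean = sum(wind) / n
    var = sum((w - mean) ** 2 for w in wind) / n
    thresh = mean + k * var ** 0.5
    gust = [c for w, c in zip(wind, conc) if w > thresh]
    calm = [c for w, c in zip(wind, conc) if w <= thresh]
    if not gust or not calm:
        return None  # no events detectable at this threshold
    return (sum(gust) / len(gust)) / (sum(calm) / len(calm))

# Toy time series; with only two gusty samples a lower threshold (k=1)
# is needed for the toy detector to trigger:
wind = [3.1, 3.4, 3.0, 9.8, 3.2, 10.1, 3.3, 3.1]
conc = [12., 13., 12., 40., 13., 45., 12., 13.]
ratio = gust_concentration_ratio(wind, conc, k=1.0)
```

A ratio well above 1 corresponds to the thesis' qualitative finding of significantly increased concentrations during gust events.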
The influence of the wind on the total continuum of OB supergiants is discussed. For wind velocity distributions with β > 1.0, the wind can have a strong influence on the total continuum emission, even at optical wavelengths. Comparing the continuum emission of clumped and unclumped winds, especially for stars with high β values, yields flux differences of up to 30%, with the maximum in the near-IR. Continuum observations at these wavelengths are therefore an ideal tool to discriminate between clumped and unclumped winds of OB supergiants.
We discuss the results of time-resolved spectroscopy of three presumably single Population I Wolf-Rayet stars in the Small Magellanic Cloud, where the ambient metallicity is $\sim 1/5 Z_\odot$. We were able to detect and follow numerous small-scale wind-embedded inhomogeneities in all observed stars. The general properties of the moving features, such as their velocity dispersions, emissivities and average accelerations, closely match the corresponding characteristics of small-scale inhomogeneities in the winds of Galactic Wolf-Rayet stars.
Wind influences the development, architecture and morphology of plant roots and may modify subsequent interactions between plants and soil (plant–soil feedbacks—PSFs). However, information on wind effects on fine root morphology is scarce and the extent to which wind changes plant–soil interactions remains unclear. Therefore, we investigated the effects of two wind intensity levels by manipulating surrounding vegetation height in a grassland PSF field experiment. We grew four common plant species (two grasses and two non-leguminous forbs) with soil biota either previously conditioned by these or other species and tested the effect of wind on root:shoot ratio, fine root morphological traits as well as the outcome for PSFs. Wind intensity did not affect biomass allocation (i.e. root:shoot ratio) in any species. However, fine-root morphology of all species changed under high wind intensity. High wind intensity increased specific root length and surface area and decreased root tissue density, especially in the two grasses. Similarly, the direction of PSFs changed under high wind intensity in all four species, but differences in biomass production on the different soils between high and low wind intensity were marginal and most pronounced when comparing grasses with forbs. Because soils did not differ in plant-available nor total nutrient content, the results suggest that wind-induced changes in root morphology have the potential to influence plant–soil interactions. Linking wind-induced changes in fine-root morphology to effects on PSF improves our understanding of plant–soil interactions under changing environmental conditions.
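A commonly used plant–soil feedback index, which may differ from the exact metric of this study, is the log-ratio of biomass on conspecific-conditioned ("home") soil versus soil conditioned by other species ("away"). The sketch below uses hypothetical biomass values:

```python
# Log-ratio plant-soil feedback index (a standard metric in the PSF
# literature; not necessarily the exact one used in this study).
# Negative values indicate negative feedback, positive values positive
# feedback; a wind treatment can flip the sign, as reported above.

import math

def psf(biomass_home, biomass_away):
    return math.log(biomass_home / biomass_away)

# Hypothetical mean biomass (g) for one grass species:
low_wind  = psf(biomass_home=1.8, biomass_away=2.4)   # negative PSF
high_wind = psf(biomass_home=2.6, biomass_away=2.2)   # positive PSF
```

The symmetric log-ratio is preferred over a simple difference because it weighs proportional gains and losses equally around zero.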
The emission-line dominated spectra of Wolf-Rayet stars are formed in expanding layers of their atmosphere, i.e. in their strong stellar wind. Adequate modeling of such spectra has to face a couple of difficulties. Because of the supersonic motion, the radiative transfer is preferably formulated in the co-moving frame. The strong deviations from local thermodynamical equilibrium (LTE) require solving the equations of statistical equilibrium for the population numbers, accounting for many hundred atomic energy levels and thousands of line transitions. Moreover, millions of lines from iron-group elements must be taken into account for their blanketing effect. Model atmospheres of the described kind can reproduce the observed WR spectra satisfactorily, and have been widely applied for corresponding spectral analyses.
The most massive stars are those with the shortest but most active life. One group of massive stars, the Luminous Blue Variables (LBVs), of which only a few objects are known, are of particular interest concerning the stability of stars. They have a high mass-loss rate and are close to being unstable. This is even more likely as rotation becomes an important factor in the stellar evolution of these stars. Through massive stellar winds and sometimes giant eruptions, LBV nebulae are formed. Various aspects of the evolution in the LBV phase lead, besides the large-scale morphological and kinematical differences, to a diversity of small structures like clumps, rims, and outflows in these nebulae.
Luminous Blue Variables show strong changes in their stellar wind on time scales of typically years to decades when they expand and contract radially at approximately constant luminosity. Micro-variability on shorter time scales and smaller amplitudes can be observed superimposed on the larger scale radial changes. I will show long-term time series of high resolution spectra which we have collected in the past 20 years for many of the well known LBVs, together with a few time series of weekly sampling (HR Car, R40, R71, R110, R127, S Dor) covering time windows of up to a few months. Wind variability is seen on short and intermediate time scales, with the line profiles changing from P Cygni to inverse P Cygni and double-peaked profiles, sometimes for the same star and spectral line. On longer time scales, the ionisation levels for all chemical elements change drastically due to the strong change of the temperature at the stellar surface. While in the long term the characteristic radial changes may have an impact on the overall mass-loss rates, the variabilities and asymmetries on short and intermediate time scales may cause false estimates of the mass-loss rates when confronting models with the observed line profiles.
With the recent growth of sensors, cloud computing handles the data processing of many applications. Processing some of this data on the cloud raises, however, many concerns regarding, e.g., privacy, latency, or single points of failure. Alternatively, thanks to the development of embedded systems, smart wireless devices can share their computation capacity, creating a local wireless cloud for in-network processing. In this context, the processing of an application is divided into smaller jobs so that a device can run one or more jobs.
The contribution of this thesis to this scenario is divided into three parts. In part one, I focus on wireless aspects, such as power control and interference management, for deciding which jobs to run on which node and how to route data between nodes. Hence, I formulate optimization problems and develop heuristic and meta-heuristic algorithms to allocate wireless and computation resources. Additionally, to deal with multiple applications competing for these resources, I develop a reinforcement learning (RL) admission controller to decide which application should be admitted. Next, I look into acoustic applications to improve wireless throughput by using microphone clock synchronization to synchronize wireless transmissions.
In the second part, I jointly work with colleagues from the acoustic processing field to optimize both network and application (i.e., acoustic) qualities. My contribution focuses on the network part, where I study the relation between acoustic and network qualities when selecting a subset of microphones for collecting audio data or selecting a subset of optional jobs for processing these data; too many microphones or too many jobs can lessen quality by unnecessary delays. Hence, I develop RL solutions to select the subset of microphones under network constraints when the speaker is moving while still providing good acoustic quality. Furthermore, I show that autonomous vehicles carrying microphones improve the acoustic qualities of different applications. Accordingly, I develop RL solutions (single and multi-agent ones) for controlling these vehicles.
In the third part, I close the gap between theory and practice. I describe the features of my open-source framework used as a proof of concept for wireless in-network processing. Next, I demonstrate how to run some algorithms developed by colleagues from acoustic processing using my framework. I also use the framework for studying in-network delays (wireless and processing) using different distributions of jobs and network topologies.
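The reinforcement-learning admission controller from part one can be sketched as tabular Q-learning over load states with admit/reject actions. The state space, reward design, and toy dynamics below are assumptions for illustration; the thesis' actual formulation is not given in the abstract:

```python
# Minimal tabular Q-learning admission controller (illustrative only).
# State: current load level 0..3; actions: reject (0) / admit (1).
# Admitting at the highest load is penalized, mimicking overload.

import random

random.seed(0)
LEVELS, ACTIONS = 4, (0, 1)
Q = {(s, a): 0.0 for s in range(LEVELS) for a in ACTIONS}
alpha, gamma, eps = 0.2, 0.9, 0.1

def reward(state, action):
    if action == 0:
        return 0.0                                   # rejected application
    return 1.0 if state < LEVELS - 1 else -2.0       # overload penalty

def step(state, action):
    # Toy dynamics: load grows when admitting, decays otherwise.
    return min(state + 1, LEVELS - 1) if action else max(state - 1, 0)

state = 0
for _ in range(5000):
    a = random.choice(ACTIONS) if random.random() < eps else \
        max(ACTIONS, key=lambda x: Q[(state, x)])
    r, nxt = reward(state, a), step(state, a)
    Q[(state, a)] += alpha * (r + gamma * max(Q[(nxt, b)] for b in ACTIONS)
                              - Q[(state, a)])
    state = nxt

# Learned policy: admit at low load, reject at the highest load.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(LEVELS)}
```

Even this toy controller learns the qualitative behavior an admission controller needs: accepting applications while capacity remains and rejecting them once the wireless cloud is saturated.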
Ismar Elbogen (1874–1943) and Franz Rosenzweig (1886–1929) were both pioneers in Jewish thought and culture. Elbogen authored the most comprehensive study on Jewish liturgy, while Rosenzweig’s magnum opus The Star of Redemption has emerged as one of the twentieth century’s most innovative and elusive works of Jewish thought. Even though Rosenzweig is not known for his work on or appreciation for the Wissenschaft des Judentums, this article will explore this overlooked aspect of his thought by examining the influence of Ismar Elbogen. Commentaries on Rosenzweig’s views on prayer are numerous, yet none mention the work of Elbogen. This is a problem. By comparing Elbogen’s work on Jewish liturgy with Rosenzweig’s writings on prayer in the Star, we are able to demonstrate how methods seminal to the Wissenschaft des Judentums helped articulate several of Rosenzweig’s most innovative contributions to Jewish thought.
This dissertation uses a common grammatical phenomenon, light verb constructions (LVCs) in English and German, to investigate how syntax-semantics mapping defaults influence the relationships between language processing, representation and conceptualization. LVCs are analyzed as a phenomenon of mismatch in the argument structure. The processing implications of this mismatch are experimentally investigated using ERPs and a dual task. Data from these experiments point to an increase in working memory load. Representational questions are investigated using structural priming. Data from this study suggest that while the syntax of LVCs is not different from that of other structures, the semantics and mapping are represented differently. This hypothesis is tested with a new categorization paradigm, which reveals that the conceptual structures that LVCs evoke differ in interesting, and predictable, ways from those of non-mismatching structures.
Genetic divergence is impacted by many factors, including phylogenetic history, gene flow, genetic drift, and divergent selection. Rotifers are an important component of aquatic ecosystems, and genetic variation is essential to their ongoing adaptive diversification and local adaptation. In addition to coding sequence divergence, variation in gene expression may relate to variable heat tolerance, and can impose ecological barriers within species. Temperature plays a significant role in aquatic ecosystems by affecting species abundance, spatio-temporal distribution, and habitat colonization. Recently described (formerly cryptic) species of the Brachionus calyciflorus complex exhibit different temperature tolerance both in natural and in laboratory studies, and show that B. calyciflorus sensu stricto (s.s.) is a thermotolerant species. Even within B. calyciflorus s.s., there is a tendency for further temperature specializations. Comparison of expressed genes allows us to assess the impact of stressors on both expression and sequence divergence among disparate populations within a single species. Here, we have used RNA-seq to explore expressed genetic diversity in B. calyciflorus s.s. in two mitochondrial DNA lineages with different phylogenetic histories and differences in thermotolerance. We identify a suite of candidate genes that may underlie local adaptation, with a particular focus on the response to sustained high or low temperatures. We do not find adaptive divergence in established candidate genes for thermal adaptation. Rather, we detect divergent selection among our two lineages in genes related to metabolism (lipid metabolism, metabolism of xenobiotics).
A significant number of the central stars of planetary nebulae (CSPNe) are hydrogen-deficient, showing a chemical composition of helium, carbon, and oxygen. Most of them exhibit Wolf-Rayet-like emission line spectra, similar to those of the massive WC Pop I stars, and are therefore classified as of spectral type [WC]. In the last years, CSPNe of other Wolf-Rayet spectral subtypes have been identified, namely PB 8, which is of spectral type [WN/C], and IC 4663 and Abell 48, which are of spectral type [WN]. We review spectral analyses of Wolf-Rayet type central stars of different evolutionary stages and discuss the results in the context of stellar evolution. Especially we consider the question of a common evolutionary channel for [WC] stars. The constraints on the formation of [WN] or [WC/N] subtype stars will also be addressed.
An overview of the known Wolf-Rayet (WR) population of the Milky Way is presented, including a brief overview of historical catalogues and recent advances based on infrared photometric and spectroscopic observations resulting in the current census of 642 (v1.13 online catalogue). The observed distribution of WR stars is considered with respect to known star clusters, given that ≤20% of WR stars in the disk are located in clusters. WN stars outnumber WC stars at all galactocentric radii, while early-type WC stars are strongly biased against the inner Milky Way. Finally, recent estimates of the global WR population in the Milky Way are reassessed, with 1,200±100 estimated, such that the current census may be 50% complete. A characteristic WR lifetime of 0.25 Myr is inferred for an initial mass threshold of 25 M⊙.
I review our current understanding of the interaction between a Wolf-Rayet star's fast wind and the surrounding medium, and discuss to what extent the predictions of numerical simulations coincide with multiwavelength observations of Wolf-Rayet nebulae. Through a series of examples, I illustrate how changing the input physics affects the results of the numerical simulations. Finally, I discuss how numerical simulations together with multiwavelength observations of these objects allow us to unpick the previous mass-loss history of massive stars.
Wolf-Rayet Stars
(2015)
Nearly 150 years ago, the French astronomers Charles Wolf and Georges Rayet described stars with very conspicuous spectra that are dominated by bright and broad emission lines. Meanwhile termed Wolf-Rayet Stars after their discoverers, those objects turned out to represent important stages in the life of massive stars.
As the first conference in a long time that was specifically dedicated to Wolf-Rayet stars, an international workshop was held in Potsdam, Germany, from 1–5 June 2015. About 100 participants, comprising most of the leading experts in the field as well as many young scientists, gathered for one week of extensive scientific exchange and discussions. Considerable progress has been reported throughout, e.g. on finding such stars, modeling and analyzing their spectra, understanding their evolutionary context, and studying their circumstellar nebulae. While some major questions regarding Wolf-Rayet stars still remain open 150 years after their discovery, it is clear today that these objects are not just interesting stars as such, but also keystones in the evolution of galaxies.
These proceedings summarize the talks and posters presented at the Potsdam Wolf-Rayet workshop. Moreover, they also include the questions, comments, and discussions emerging after each talk, thereby giving a rare overview not only about the research, but also about the current debates and unknowns in the field. The Scientific Organizing Committee (SOC) included Alceste Bonanos (Athens), Paul Crowther (Sheffield), John Eldridge (Auckland), Wolf-Rainer Hamann (Potsdam, Chair), John Hillier (Pittsburgh), Claus Leitherer (Baltimore), Philip Massey (Flagstaff), George Meynet (Geneva), Tony Moffat (Montreal), Nicole St-Louis (Montreal), and Dany Vanbeveren (Brussels).
Wolf-Rayet (WR) stars, as they are advanced stages of the life of massive stars, provide a good test for various physical processes involved in the modelling of massive stars, such as rotation and mass loss. In this paper, we show the outputs of the latest grids of single massive stars computed with the Geneva stellar evolution code, and compare them with some observations. We present a short discussion on the shortcomings of single stars models and we also briefly discuss the impact of binarity on the WR populations.
In this review, I discuss the suitability of massive star progenitors, evolved in isolation or in interacting binaries, for the production of observed supernovae (SNe) IIb, Ib, and Ic. These SN types can be explained through variations in composition. The critical need for non-thermal effects to produce He I lines favours low-mass He-rich ejecta (in which 56Ni can be more easily mixed with He) for the production of SNe IIb/Ib, which thus may arise preferentially from moderate-mass donors in interacting binaries. SNe Ic may instead arise from higher mass progenitors, He-poor or not, because their larger CO cores prevent efficient non-thermal excitation of He I lines. However, current single star evolution models tend to produce Wolf-Rayet (WR) stars at death that have a final mass of > 10 M⊙. Single WR star explosion models produce ejecta that are too massive to match the observed light curve widths and rise times of SNe IIb/Ib/Ic, unless their kinetic energy is systematically far greater than the canonical value of 10^51 erg. Future work is needed to evaluate the energy/mass degeneracy in light curve properties. Alternatively, a greater mass loss during the WR phase, perhaps in the form of eruptions, as evidenced in SNe Ibn, may reduce the final WR mass. If viable, such explosions would nonetheless favour a SN Ic, not a Ib.
We analyze the impact of women’s managerial representation on the gender pay gap among employees on the establishment level using German Linked-Employer-Employee-Data from the years 2004 to 2018. For identification of a causal effect we employ a panel model with establishment fixed effects and industry-specific time dummies. Our results show that a higher share of women in management significantly reduces the gender pay gap within the firm. An increase in the share of women in first-level management e.g. from zero to above 33 percent decreases the adjusted gender pay gap from a baseline of 15 percent by 1.2 percentage points, i.e. to roughly 14 percent. The effect is stronger for women in second-level than first-level management, indicating that women managers with closer interactions with their subordinates have a higher impact on the gender pay gap than women on higher management levels. The results are similar for East and West Germany, despite the lower gender pay gap and more gender egalitarian social norms in East Germany. From a policy perspective, we conclude that increasing the number of women in management positions has the potential to reduce the gender pay gap to a limited extent. However, further policy measures will be needed in order to fully close the gender gap in pay.
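The fixed-effects identification strategy described above can be sketched as a within estimator: demean the outcome and the regressor by establishment, then run OLS on the demeaned data. The sketch below uses a single regressor and made-up numbers, purely for illustration of the mechanics, not the study's data or specification (which also includes industry-specific time dummies):

```python
# Within (fixed-effects) estimator: demeaning by group removes any
# time-invariant establishment effect before the OLS slope is computed.

def within_estimator(groups):
    """groups: list of per-firm observation lists of (x, y) tuples."""
    xt, yt = [], []
    for obs in groups:
        mx = sum(x for x, _ in obs) / len(obs)
        my = sum(y for _, y in obs) / len(obs)
        for x, y in obs:
            xt.append(x - mx)
            yt.append(y - my)
    # OLS slope on the demeaned data (no intercept needed after demeaning)
    return sum(a * b for a, b in zip(xt, yt)) / sum(a * a for a in xt)

# x: share of women in management, y: gender pay gap (pp); two firms
# observed over three years (hypothetical values):
firms = [[(0.0, 16.0), (0.2, 15.2), (0.4, 14.4)],
         [(0.1, 12.0), (0.3, 11.2), (0.5, 10.4)]]
beta = within_estimator(firms)   # slope of the pay gap in women's share
```

Here the two firms have different baseline pay gaps, but the within-firm slope is identified from changes over time, which is the point of the fixed-effects design.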
Since 1980, Iraq has passed through various wars and conflicts, including the Iraq–Iran war, Saddam Hussein's Anfal and Halabja campaigns against the Kurds and the killing campaigns against the Shiites in 1986, Saddam Hussein's invasion of Kuwait in August 1990, the Gulf war in 1990, the Iraq war in 2003 and the fall of Saddam, the conflicts and chaos in the transmission of power after the death of Saddam, and the war against ISIS. All these wars left severe impacts on most households in Iraq, on women and children in particular.
The consequences of such long wars can be observed in all sectors, including the economic, social, cultural and religious sectors. The social structure, norms and attitudes are intensely affected. Many women, specifically divorced women, found themselves facing difficult social as well as economic situations. Thus, divorced women in Iraqi Kurdistan are the focus of this research.
Considering the fact that there is very little empirical research on this topic, a constructivist grounded theory (CGT) methodology was chosen in order to develop a comprehensive picture of the everyday life of divorced women in Iraqi Kurdistan. Data were collected in Sulaimani city in Iraqi Kurdistan. The work of Kathy Charmaz was chosen as the main methodological context of the research, and the main data collection method was individual intensive narrative interviews with divorced women.
Women generally, and divorced women specifically, in Iraqi Kurdistan are living in a patriarchal society that is passing through many changes due to the above-mentioned wars, among many other factors. This research studies the everyday life of divorced women in such situations and the forms of social insecurity they are experiencing. The social institutions, starting from the family as a very significant institution for women to the governmental and non-governmental institutions that are working to support women, as well as coping strategies, are in focus in this research. The main research argument is that the family plays an ambivalent role in divorced women's lives. For instance, on the one side families are revealed to be an essential source of security for most respondents; on the other side, families also posed many threats and restrictions on those women. This argument is supported by what Suad Joseph called "the paradox of support and suppression". Another important finding is that the state institutions (laws, constitutions, Offices of Combating Violence Against Women and Family) support women to some extent and offer them protection from insecurities, but it is clear that the existence of the laws does not stop violence against women in Iraqi Kurdistan. As explained by Pateman, this is because the law/the contract is a sexual-social contract that upholds the sex rights of males and grants them more privileges than females. Political instability and tribal social norms also play a major role in influencing the rule of law.
It is noteworthy that analyzing the interviews in this research showed that, despite living in insecurity and facing difficulties, most of the respondents try to find coping strategies to tackle difficult situations and to deal with the violence they face; these strategies include bargaining, and sometimes compromising or resisting, etc. Different theories are used to explain these coping strategies, such as bargaining with patriarchy: Kandiyoti stated that women living under certain restraints struggle to find ways and strategies to enhance their situations. The research findings also revealed that the western liberal feminist view of agency is limited; this agrees with Saba Mahmood and her account of Muslim women's agency. For my respondents, who are divorced women, agency reveals itself in different ways: in resisting or compromising with, or even obeying, the power of male relatives and the normative system in the society. Agency also explains the behavior of women who contact formal state institutions, such as the police or the Offices of Combating Violence Against Women and Family, in cases of violence.
Wood is used for many applications because of its excellent mechanical properties, relative abundance, and renewability. However, its wider utilization as an engineering material is limited because it swells and shrinks upon moisture changes and is susceptible to degradation by microorganisms and/or insects. Chemical modifications of wood have been shown to improve dimensional stability, water repellence, and/or durability, thus increasing the potential service life of wood materials. Current treatments are limited, however, because it is difficult to introduce and fix such modifications deep inside the tissue and the cell wall. Within the scope of this thesis, novel chemical modification methods for wood cell walls were developed to improve both the dimensional stability and the water repellence of wood. These methods were partly inspired by heartwood formation in living trees, a process that in some species inserts hydrophobic chemical substances into the cell walls of already dead wood cells. In the first part of this thesis, a cell wall modification chemistry inspired by this natural process of heartwood formation was used. Commercially available hydrophobic flavonoid molecules were effectively inserted into the cell walls of spruce, a softwood species with low natural durability, after a tosylation treatment, to obtain "artificial heartwood". Flavonoid-inserted cell walls showed reduced moisture absorption, resulting in better dimensional stability, water repellency, and increased hardness. This approach differs markedly from established modifications, which mainly address the hydroxyl groups of cell wall polymers with hydrophilic substances. In the second part of the work, in-situ styrene polymerization inside the tosylated cell walls was studied, since adhesion between hydrophobic polymers and hydrophilic cell wall components is known to be weak.
The hydrophobic styrene monomers were inserted into the tosylated wood cell walls and polymerized in place to form polystyrene, which increased the dimensional stability of the bulk wood material and considerably reduced the water uptake of the cell walls compared to controls. In the third part of the work, the grafting of another hydrophobic and also biodegradable polymer, poly(ɛ-caprolactone), into the wood cell walls by ring-opening polymerization of ɛ-caprolactone was studied at mild temperatures. The results indicated that polycaprolactone attached inside the cell walls and caused permanent swelling of the cell walls of up to 5%. The dimensional stability of the bulk wood material increased by 40% and water absorption was reduced by more than 35%. This method yielded a fully biodegradable, hydrophobized wood material, which eases the disposal of the modified wood and has improved properties that extend the material's service life. Starting from a bio-inspired approach that showed great promise as an alternative to standard cell wall modifications, we demonstrated the possibility of inserting hydrophobic molecules into the cell walls and corroborated this with the in-situ polymerization of styrene and ɛ-caprolactone in the cell walls. This thesis shows that, despite the extensive knowledge and long history of using wood as a material, there is still room for novel chemical modifications that could have a high impact on improving wood properties.
We report two corpus analyses to examine the impact of animacy, definiteness, givenness and type of referring expression on the ordering of double objects in the spontaneous speech of German-speaking two- to four-year-old children and the child-directed speech of their mothers. The first corpus analysis revealed that definiteness, givenness and type of referring expression influenced word order variation in child language and child-directed speech when the type of referring expression distinguished between pronouns and lexical noun phrases. These results correspond to previous child language studies in English (e.g., de Marneffe et al. 2012). Extending the scope of previous studies, our second corpus analysis examined the role of different pronoun types on word order. It revealed that word order in child language and child-directed speech was predictable from the types of pronouns used. Different types of pronouns were associated with different sentence positions but also showed a strong correlation to givenness and definiteness. Yet, the distinction between pronoun types diminished the effects of givenness so that givenness had an independent impact on word order only in child-directed speech but not in child language. Our results support a multi-factorial approach to word order in German. Moreover, they underline the strong impact of the type of referring expression on word order and suggest that it plays a crucial role in the acquisition of the factors influencing word order variation.
The present work deals with variation in the linearisation of German infinitival complements from a diachronic perspective. Based on the observation that in present-day German the position of infinitival complements is restricted by properties of the matrix verb (Haider, 2010; Wurmbrand, 2001), whereas word order appears much more liberal in older stages of German (Demske, 2008; Maché and Abraham, 2011; Demske, 2015), this dissertation investigates the emergence of those restrictions and the factors that have led to a reduced, yet still existing, variability. The study contrasts infinitival complements of two types of matrix verbs, namely raising and control verbs. In present-day German, these show different syntactic behaviour and opposite preferences as far as the position of the infinitive is concerned: while infinitival complements of raising verbs build a single clausal domain with the matrix verb and occur obligatorily intraposed, infinitival complements of control verbs can form clausal constituents and occur predominantly extraposed. This correlation is not attested in older stages of German, at least not until Early New High German.
Drawing on diachronic corpus data, the present work provides a description of the changes in the linearisation of infinitival complements from Early New High German to present-day German which aims at finding out when the correlation between infinitive type and word order emerged and further examines their possible causes. The study shows that word order change in German infinitival complements is not a case of syntactic change in the narrow sense, but that the diachronic variation results from the interaction of different language-internal and language-external factors and that it reflects, on the one hand, the influence of language modality on the emerging standard language and, on the other hand, a process of specialisation.
Recent research has shown that the early lexical representations children establish in their second year of life already seem to be phonologically detailed enough to allow differentiation from very similar forms. In contrast to these findings, children with specific language impairment show problems in discriminating phonologically similar word forms up to school age. In our study we investigated whether there are differences in the processing of phonological details between normally developing children and children with low language performance in the second year of life. This was done in a retrospective study in which the processing of phonological details was tested in a preferential looking experiment when the children were 19 months old. At the age of 30 months the children were tested with a standardized German test of language comprehension and production (SETK-2). The preferential looking data at 19 months revealed opposite reaction patterns for the two groups: while the children scoring normally in the SETK-2 increased their fixations of a pictured object only when it was named with the correct word, children with later low language performance did so only when presented with a phonologically slightly deviant mispronunciation. We suggest that this pattern does not point to a specific deficit in processing phonological information in these children, but might be related to an instability of early phonological representations and/or a generalized problem of information processing as compared to typically developing children.
There is evidence that infants start extracting words from fluent speech around 7.5 months of age (e.g., Jusczyk & Aslin, 1995) and that they use at least two mechanisms to segment word forms from fluent speech: prosodic information (e.g., Jusczyk, Cutler & Redanz, 1993) and statistical information (e.g., Saffran, Aslin & Newport, 1996). However, how these two mechanisms interact and whether they change during development is still not fully understood.
The main aim of the present work is to understand in what way different cues to word segmentation are exploited by infants when learning the language in their environment, and to explore whether this ability is related to later language skills. In Chapter 3 we sought to determine the reliability of the method used in most of the experiments in this thesis (the Headturn Preference Procedure) and to examine correlations and individual differences between infants' performance and later language outcomes. In Chapter 4 we investigated how German-speaking adults weigh statistical and prosodic information for word segmentation. We familiarized adults with an auditory string in which statistical and prosodic information indicated different word boundaries and obtained both behavioral and pupillometry responses. We then conducted further experiments to understand in what way different cues to word segmentation are exploited by 9-month-old German-learning infants (Chapter 5) and by 6-month-old German-learning infants (Chapter 6). In addition, we conducted follow-up questionnaires with the infants and obtained language outcomes at later stages of development.
Our findings revealed that (1) German-speaking adults place a strong weight on prosodic cues, at least for the materials used in this study, and that (2) German-learning infants weigh these two kinds of cues differently depending on age and/or language experience. We observed that, unlike English-learning infants, 6-month-olds relied more strongly on prosodic cues, whereas 9-month-olds showed no preference for either cue in the word segmentation task. From the present results it remains unclear whether the ability to use prosodic cues for word segmentation relates to later vocabulary. We speculate that prosody provides infants with their first window into the specific acoustic regularities of the signal, which enables them to master the specific stress pattern of German rapidly. Our findings are a step forward in understanding the early impact of native prosody, compared to statistical learning, on early word segmentation.
Leveraging two cohort-specific pension reforms, this paper estimates the forward-looking effects of an exogenous increase in the working horizon on (un)employment behaviour for individuals with a long remaining statutory working life. Using difference-in-differences and regression discontinuity approaches based on administrative and survey data, I show that a longer legal working horizon increases individuals’ subjective expectations about the length of their work life, raises the probability of employment, decreases the probability of unemployment, and increases the intensity of job search among the unemployed. Heterogeneity analyses show that the demonstrated employment effects are strongest for women and in occupations with comparatively low physical intensity, i.e., occupations that can be performed at older ages.
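The difference-in-differences logic behind estimates like these can be sketched in a few lines. The function below is generic; the employment rates are hypothetical and not taken from the paper.

```python
# Illustrative sketch of the difference-in-differences (DiD) estimator the
# abstract refers to. All numbers below are hypothetical.

def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """DiD effect: the change in the treated cohort minus the change in
    the control cohort, netting out the common time trend."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Hypothetical employment rates before/after a cohort-specific reform
effect = did_estimate(treat_pre=0.60, treat_post=0.68,
                      ctrl_pre=0.61, ctrl_post=0.64)
print(round(effect, 2))  # 0.05: employment change attributable to the reform
```

The same subtraction is what the regression-based DiD specification estimates when the data satisfy the parallel-trends assumption.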
This dissertation investigates the working memory mechanism subserving human sentence processing and its relative contribution to processing difficulty as compared to syntactic prediction. Within the last decades, evidence for a content-addressable memory system underlying human cognition in general has accumulated (e.g., Anderson et al., 2004). In sentence processing research, it has been proposed that this general content-addressable architecture is also used for language processing (e.g., McElree, 2000).
Although there is a growing body of evidence from various kinds of linguistic dependencies that is consistent with a general content-addressable memory subserving sentence processing (e.g., McElree et al., 2003; Van Dyke, 2006), the case of reflexive-antecedent dependencies has challenged this view. It has been proposed that the processing of reflexive-antecedent dependencies uses syntactic-structure-based memory access rather than cue-based retrieval within a content-addressable framework (e.g., Sturt, 2003).
Two eye-tracking experiments on Chinese reflexives were designed to tease apart accounts assuming a syntactic-structure-based memory access mechanism from cue-based retrieval as implemented in ACT-R by Lewis and Vasishth (2005).
In both experiments, interference effects were observed from noun phrases which syntactically do not qualify as the reflexive's antecedent but match the animacy requirement the reflexive imposes on its antecedent. These results are interpreted as evidence against a purely syntactic-structure based memory access. However, the exact pattern of effects observed in the data is only partially compatible with the Lewis and Vasishth cue-based parsing model.
Therefore, an extension of the Lewis and Vasishth model is proposed. Two principles are added to the original model, namely 'cue confusion' and 'distractor prominence'.
Although interference effects are generally interpreted in favor of a content-addressable memory architecture, an alternative explanation for interference effects in reflexive processing has been proposed which, crucially, might reconcile interference effects with a structure-based account.
It has been argued that interference effects do not necessarily reflect cue-based retrieval interference in a content-addressable memory but might equally well be accounted for by interference effects which have already occurred at the moment of encoding the antecedent in memory (Dillon, 2011).
Three experiments (eye-tracking and self-paced reading) on German reflexives and Swedish possessives were designed to tease apart cue-based retrieval interference from encoding interference. The results of all three experiments suggest that there is no evidence that encoding interference affects the retrieval of a reflexive's antecedent.
Taken together, these findings suggest that the processing of reflexives can be explained with the same cue-based retrieval mechanism that has been invoked to explain syntactic dependency resolution in a range of other structures. This supports the view that the language processing system is located within a general cognitive architecture, with a general-purpose content-addressable working memory system operating on linguistic expressions.
Finally, two experiments (self-paced reading and eye-tracking) using Chinese relative clauses were conducted to determine the relative contribution to sentence processing difficulty of working-memory processes as compared to syntactic prediction during incremental parsing.
Chinese has the cross-linguistically rare property of being a language with subject-verb-object word order and pre-nominal relative clauses. This property leads to opposing predictions of expectation-based accounts and memory-based accounts with respect to the relative processing difficulty of subject vs. object relatives.
Previous studies showed contradictory results, which has been attributed to different kinds of local ambiguities confounding the materials (Lin and Bever, 2011). The two experiments presented here are the first to compare Chinese relative clauses in syntactically unambiguous contexts.
The results of both experiments were consistent with the predictions of the expectation-based account of sentence processing but not with the memory-based account. From these findings, I conclude that any theory of human sentence processing needs to take into account the power of predictive processes unfolding in the human mind.
There is a wealth of evidence showing that increasing the distance between an argument and its head leads to more processing effort, namely, locality effects; these are usually associated with constraints in working memory (DLT: Gibson, 2000; activation-based model: Lewis and Vasishth, 2005). In SOV languages, however, the opposite effect has been found: antilocality (see discussion in Levy et al., 2013). Antilocality effects can be explained by the expectation-based approach as proposed by Levy (2008) or by the activation-based model of sentence processing as proposed by Lewis and Vasishth (2005). We report an eye-tracking and a self-paced reading study with sentences in Spanish together with measures of individual differences to examine the distinction between expectation- and memory-based accounts, and within memory-based accounts the further distinction between DLT and the activation-based model. The experiments show that (i) antilocality effects as predicted by the expectation account appear only for high-capacity readers; (ii) increasing dependency length by interposing material that modifies the head of the dependency (the verb) produces stronger facilitation than increasing dependency length with material that does not modify the head; this is in agreement with the activation-based model but not with the expectation account; and (iii) a possible outcome of memory load on low-capacity readers is the increase in regressive saccades (locality effects as predicted by memory-based accounts) or, surprisingly, a speedup in the self-paced reading task; the latter consistent with good-enough parsing (Ferreira et al., 2002). In sum, the study suggests that individual differences in working memory capacity play a role in dependency resolution, and that some of the aspects of dependency resolution can be best explained with the activation-based model together with a prediction component.
Diversity is a term that is broadly used and challenging for informatics research, development and education. Diversity concerns may relate to unequal participation, knowledge and methodology, curricula, institutional planning, etc. For many of these areas, measures, guidelines and best practices on diversity awareness exist, yet a systemic, sustainable impact of diversity measures on informatics is still largely missing. In this paper I explore what working with diversity and gender concepts in informatics entails and what the main challenges are, and I provide thoughts for improvement. The paper includes definitions of diversity and intersectionality, reflections on the disciplinary basis of informatics, and practical implications of integrating diversity in informatics research and development. In the final part, two concepts from the social sciences and the humanities, the notion of "third space"/hybridity and the notion of a "feminist ethics of care", serve as a lens to foster more sustainable ways of working with diversity in informatics.
Injuries in professional soccer are a significant concern for teams, and high training load is among their causes. This cohort study describes the relationship between workload parameters and the occurrence of non-contact injuries during weeks of high and low workload in professional soccer players throughout a season. Twenty-one professional soccer players aged 28.3 ± 3.9 years who competed in the Iranian Persian Gulf Pro League participated in this 48-week study. External load was monitored using a global positioning system (GPS, GPSPORTS Systems Pty Ltd), and the type of injury was documented daily by the team's medical staff. Odds ratios (OR) and relative risks (RR) for non-contact injuries were calculated for high- and low-load weeks according to acute workload (AW), chronic workload (CW), acute-to-chronic workload ratio (ACWR), and AW variation (Δ-AW) values. Using a Poisson distribution, the intervals between previous and new injuries were estimated. Overall, 12 non-contact injuries occurred during high-load weeks and 9 during low-load weeks. Based on the variables ACWR and Δ-AW, there was a significantly increased risk of sustaining non-contact injuries (p < 0.05) during high-load weeks for ACWR (OR: 4.67) and Δ-AW (OR: 4.07). Finally, the expected time between injuries was significantly shorter in high-load weeks for ACWR [1.25 vs. 3.33, rate ratio time (RRT)] and Δ-AW (1.33 vs. 3.45, RRT), respectively, compared to low-load weeks. The risk of sustaining injuries was significantly larger during high-workload weeks for ACWR and Δ-AW than during low-workload weeks. The high ORs observed in high-load weeks indicate a significant relationship between workload and the occurrence of non-contact injuries. The predicted time to a new injury is shorter in high-load weeks than in low-load weeks; the frequency of injuries is therefore higher during high-load weeks for ACWR and Δ-AW.
ACWR and Δ-AW appear to be good indicators for estimating injury risk and the time interval between injuries.
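As a rough illustration of the workload metrics this abstract relies on, the sketch below computes a rolling-average ACWR, a 2×2-table odds ratio, and the Poisson-implied expected interval between injuries. These are the standard textbook definitions; the study's own computation may differ, and all numbers here are hypothetical, not taken from its data.

```python
# Standard definitions of the workload metrics named in the abstract.
# All numeric inputs below are hypothetical.

def acwr(weekly_loads):
    """Acute:chronic workload ratio - the most recent week's load divided
    by the rolling mean of the last four weeks (including the acute week)."""
    last4 = weekly_loads[-4:]
    return weekly_loads[-1] / (sum(last4) / len(last4))

def odds_ratio(inj_high, no_inj_high, inj_low, no_inj_low):
    """Odds ratio for injury in high- vs. low-load weeks from a 2x2 table."""
    return (inj_high / no_inj_high) / (inj_low / no_inj_low)

loads = [300, 320, 310, 450]   # hypothetical weekly loads (arbitrary units)
print(round(acwr(loads), 2))   # 1.3: acute load elevated relative to chronic

# Under a Poisson model, the expected interval between injuries is the
# reciprocal of the weekly injury rate.
injuries, weeks = 12, 16       # hypothetical counts for high-load weeks
print(round(weeks / injuries, 2))  # 1.33 weeks expected between injuries
```

An ACWR well above 1 flags an acute spike over the chronic baseline, which is the condition the abstract associates with elevated injury odds.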
Workplace-related anxieties and workplace phobia : a concept of domain-specific mental disorders
(2008)
Background: Anxiety in the workplace is a special problem, as workplaces are especially prone to provoke anxiety: there are social hierarchies, rivalries between colleagues, sanctioning by superiors, danger of accidents, failure, and worries about job security. Workplace phobia is a phobic anxiety reaction, with symptoms of panic occurring when thinking of or approaching the workplace and with a clear tendency toward avoidance. Objectives: What characterizes workplace-related anxieties and workplace phobia as domain-specific mental disorders in contrast to conventional anxiety disorders? Method: 230 patients from an inpatient psychosomatic rehabilitation center were interviewed with the (semi-)structured Mini-Work-Anxiety-Interview and the Mini International Neuropsychiatric Interview concerning workplace-related anxieties and conventional mental disorders. Additionally, the patients filled in the self-rating questionnaires Job-Anxiety-Scale (JAS) and Symptom Checklist (SCL-90-R), measuring job-related and general psychosomatic symptom load. Results: Workplace-related anxieties occurred together with conventional anxiety disorders in 35% of the patients, but also alone in others (23%). Workplace phobia was found in 17% of those interviewed; some diagnosis of workplace-related anxiety was stated in 58%. Workplace-phobic patients had significantly higher job-anxiety scores than patients without workplace phobia. Patients with workplace phobia had been on sick leave significantly longer in the past 12 months (23.5 weeks) than patients without workplace phobia (13.4 weeks). Different qualities of workplace-related anxieties lead, with different frequencies, to work participation disorders. Conclusion: Workplace phobia cannot be described by assessing only the general level of psychosomatic symptom load and conventional mental disorders.
Workplace-related anxieties and workplace phobia have a clinical value of their own, defined mainly by specific workplace-related symptom load and work-participation disorders. They require special therapeutic attention and treatment instead of a "sick leave" certification by the general health physician. Workplace phobia should be named with a proper diagnosis according to ICD-10 chapter V, F40.8: "workplace phobia".
World market governance
(2014)
Democratic capitalism, or liberal democracy, as the successful marriage of convenience between market liberalism and democracy is sometimes called, is in trouble. The market economy has become global, and there is a growing mismatch with the territoriality of nation-states. The functional global networks and the inter-governmental order can no longer keep pace with the rapid development of the global market economy, and regulatory capture is all too common. Concepts like de-globalization, self-regulation, and global government are floated in the debate. These alternatives are analysed and found to be improper, inadequate, or plainly impossible. The proposed route is instead to accept that the global market economy has developed into an independent fundamental societal system that needs its own governance. The suggestion is World Market Governance based on the Rule of Law, in order to shape the fitness environment for the global market economy and to strengthen the nation-states so that they can regain the sovereignty to decide upon the social and cultural conditions in each country. Elements of the proposed Rule of Law are international legislation decided by an Assembly supported by a Council, and an independent Judiciary. Existing international organisations would function as executors. The need for broad, sustained demand for regulations in the common interest is identified.
For some years now, spectroscopic measurements of massive stars in the amateur domain have been fulfilling professional requirements. Various groups in the northern and southern hemispheres have been established, running successful professional-amateur (ProAm) collaborative campaigns, e.g., on WR, O and B type stars. Today high quality data (echelle and long-slit) are regularly delivered and corresponding results published. Night-to-night long-term observations over months to years open a new opportunity for massive-star research. We introduce recent and ongoing sample campaigns (e.g. ∊ Aur, WR 134, ζ Pup), show respective results and highlight the vast amount of data collected in various data bases. Ultimately it is in the time-dependent domain where amateurs can shine most.
WR Time Series Photometry
(2015)
We take a comprehensive look at Wolf-Rayet photometric variability using the MOST satellite. This sample, consisting of 6 WR stars and 6 WC stars, defies all typical photometric analysis. We do, however, confirm the presence of unusual periodic signals resembling sawtooth waves, which are present in 11 of the 12 stars in this sample.
Writing an alternative Australia : women and national discourse in nineteenth-century literature
(2007)
In this thesis, I outline the emergence of Australian national identity in colonial Australia. National identity is not a politically determined construct but is culturally produced through discourse on literary works by female and male writers. The dominant bushman myth that emerged exhibited enormous strength and influence on subsequent generations and infused the notion of "Australianness" with exclusively male characteristics. It provided a unique geographical space, the bush, on and against which the colonial subject could model his identity. Its dominance rendered non-male and non-bush experiences of Australia "un-Australian." I present a variety of contemporary voices – postcolonial, Aboriginal, feminist, cultural critics – which see Australian identity as a prominent topic, not only in academia but also in everyday culture and politics. Although positioned in different disciplines and influenced by varying histories, these voices share a similar view of Australian society: Australia is a plural society, home to millions of different people – women, men, and children, Aboriginal Australians and immigrants, the newly arrived and descendants of the first settlers – with millions of different identities which make up one nation. One version of national identity does not account for this multitude of experiences; one version, if applied strictly, renders some voices unheard and oppressed. After exemplifying how the literature of the 1890s and its subsequent criticism constructed the itinerant worker as "the" Australian, literary productions by women are singled out to counteract the dominant version by presenting different opinions on the state of colonial Australia. The writers Louisa Lawson, Barbara Baynton, and Tasma are discussed with regard to their assessment of their mother country.
These women did not only present a different picture; they were also gifted writers who lived the ideal of the "New Woman": they obtained divorces, remarried, were politically active, worked for their living, and led independent lives. They paved the way for many Australian women to come. In their literary works they allowed for a dual approach to the bush and the Australian nation. Louisa Lawson credited the bushwoman with heroic traits and described the bush as both cruel and full of opportunities not known to women in England. She understood women's position in Australian society as oppressed and tried to change politics and culture through the writings in her feminist magazine the Dawn and her courageous campaign for women's suffrage. Barbara Baynton painted a gloomy picture of the Australian bush and its inhabitants and offered one of the fiercest critiques of bush society. Although her woman is presented as the able and resourceful bushperson, she does not manage to survive in an environment which functions on male rules and values only the economic potential of the individual. Finally, Tasma does not present as outright a critique as Barbara Baynton; however, she too ascribes to the colonies a fascination with wealth, which she renders questionable. She offers an informed judgement on colonial developments in the urban surrounds of the city of Melbourne through a comparison of colonial society with the mother country England and demonstrates how uncertainties and irritations emerged in the course of Australia's nation formation. These three women, as writers, commentators, and political activists, faced exclusion from the dominant literary discourses.
Their assessment of colonial society remained unheard for a long time. Now, after much academic excavation, these voices speak to us from the past and remind us that people are diverse, thus nation is diverse. Dominant power structures, the institutions and individuals who decide who can contribute to the discourse on nation, have to be questioned and reassessed, for they mute voices which contribute to a wider, to the “full”, and maybe “real” picture of society.
Writing travel, writing life
(2022)
The book compares the texts of three Swiss authors: Ella Maillart, Annemarie Schwarzenbach and Nicolas Bouvier. The focus is on the trip from Genève to Kabul that Ella Maillart and Annemarie Schwarzenbach made together in 1939/1940 and that Nicolas Bouvier made in 1953/1954 with the artist Thierry Vernet. The comparison shows the strong connection between journey and life, and between ars vivendi and travel literature.
This book also gives an overview of and organises the numerous terms, genres, and categories that already exist to describe various travel texts and proposes the new term travelling narration. The travelling narration looks at the text from a narratological perspective that distinguishes the author, narrator, and protagonist within the narration.
In the examination, ten motifs could be found to characterise the travelling narration: Culture, Crossing Borders, Freedom, Time and Space, the Aesthetics of Landscapes, Writing and Reading, the Self and/as the Other, Home, Religion and Spirituality as well as the Journey. The importance of each individual motif does not only apply in the 1930s or 1950s but also transmits important findings for living together today and in the future.
WRKY23 is a component of the transcriptional network mediating auxin feedback on PIN polarity
(2018)
Auxin is unique among plant hormones due to its directional transport, which is mediated by the polarly distributed PIN auxin transporters at the plasma membrane. The canalization hypothesis proposes that the auxin feedback on its polar flow is a crucial, plant-specific mechanism mediating multiple self-organizing developmental processes. Here, we used the auxin effect on the PIN polar localization in Arabidopsis thaliana roots as a proxy for the auxin feedback on PIN polarity during canalization. We performed microarray experiments to find regulators of this process that act downstream of auxin. We identified genes that were transcriptionally regulated by auxin in an AXR3/IAA17- and ARF7/ARF19-dependent manner. Besides known components of the PIN polarity machinery, such as the PID and PIP5K kinases, a number of potential new regulators were detected, among them the WRKY23 transcription factor, which we characterized in more detail. Gain- and loss-of-function mutants confirmed a role for WRKY23 in mediating the auxin effect on PIN polarity. Accordingly, processes requiring auxin-mediated PIN polarity rearrangements, such as vascular tissue development during leaf venation, showed higher WRKY23 expression and required WRKY23 activity. Our results provide initial insights into the auxin transcriptional network acting upstream of PIN polarization and, potentially, canalization-mediated plant development.
Giacconi et al. (1962) discovered a diffuse cosmic X-ray background with rocket experiments while searching for lunar X-ray emission. Later satellite missions found a spectral peak in the cosmic X-ray background at ~30 keV. Imaging X-ray satellites such as ROSAT (1990-1999) were able to resolve up to 80% of the background below 2 keV into single point sources, mainly active galaxies. The cosmic X-ray background is the integrated emission of all accreting supermassive (several million solar masses) black holes in the centres of active galaxies over cosmic time. Synthesis models need further populations of X-ray absorbed active galactic nuclei (AGN) in order to explain the cosmic X-ray background peak at ~30 keV. Current X-ray missions such as XMM-Newton and Chandra offer the possibility of studying these additional populations. This Ph.D. thesis studies the populations that dominate the X-ray sky. For this purpose the 120 ks XMM-Newton Marano field survey, named for an earlier optical quasar survey in the southern hemisphere, is analysed. Based on the optical follow-up observations, the X-ray sources are spectroscopically classified. Optical and X-ray properties of the different X-ray source populations are studied and their differences are derived. The amount of absorption in the X-ray spectra of type II AGN, which are considered a main contributor to the X-ray background at ~30 keV, is determined. In order to extend the sample size of the rare type II AGN, this study also includes objects from another survey, the XMM-Newton Serendipitous Medium Sample. In addition, the dependence of the absorption in type II AGN on redshift and X-ray luminosity is analysed. We detected 328 X-ray sources in the Marano field. 140 sources were spectroscopically classified. We found 89 type I AGN, 36 type II AGN, 6 galaxies, and 9 stars. AGN, galaxies, and stars are clearly distinguishable by their optical and X-ray properties. Type I and II AGN do not separate clearly.
They have a significant overlap in all studied properties. In a few cases the X-ray properties contradict the observed optical properties for type I and type II AGN. For example, we find type II AGN that show evidence for optical absorption but are not absorbed in X-rays. Based on the additional use of near-infrared imaging (K-band), we were able to identify several of the rare type II AGN. The X-ray spectra of type II AGN from the XMM-Newton Marano field survey and the XMM-Newton Serendipitous Medium Sample were analysed. Since most of the sources have only ~40 X-ray counts in the XMM-Newton PN-detector, I carefully studied the fit results of simulated X-ray spectra as a function of fit statistic and binning method. The objects revealed only moderate absorption. In particular, I do not find any Compton-thick sources (absorbed by column densities of N_H > 1.5 × 10^24 cm^-2). This gives evidence that type II AGN are not the main contributor to the X-ray background around 30 keV. Although bias effects may occur, type II AGN show no noticeable trend of the amount of absorption with redshift or X-ray luminosity.
Using a code that employs a self-consistent method for computing the effects of photoionization on circumstellar gas dynamics, we model the formation of wind-driven nebulae around massive Wolf-Rayet (W-R) stars. Our algorithm incorporates a simplified model of the photoionization source, computes the fractional ionization of hydrogen due to the photoionizing flux and recombination, and determines self-consistently the energy balance due to ionization, photo-heating and radiative cooling. We take into account changes in stellar properties and mass loss over the star's evolution. Our multi-dimensional simulations clearly reveal the presence of strong ionization front instabilities. Using various X-ray emission models, and abundances consistent with those derived for W-R nebulae, we compute the X-ray flux and spectra from our wind bubble models. We show the evolution of the X-ray spectral features over the evolution of the star, taking the absorption of the X-rays by the ionized bubble into account. Our simulated X-ray spectra compare reasonably well with observed spectra of Wolf-Rayet bubbles. They suggest that X-ray nebulae around massive stars may not be easily detectable, consistent with observations.
In this review I briefly summarize our knowledge of the X-ray emission from single WN, WC, and WO stars. These stars have relatively modest X-ray luminosities, typically not exceeding 1 L⊙. The analysis of X-ray spectra usually reveals thermal plasma with temperatures reaching a few tens of MK. X-ray variability is detected in some WN stars. At present we do not fully understand how X-ray radiation is produced in WR stars, although there are some promising research avenues, such as the presence of co-rotating interaction regions (CIRs) in the winds of some stars. To fully understand WR stars we need to unravel the mechanisms of X-ray production in their winds.
We summarize Chandra observations of the emission line profiles from 17 OB stars. The lines tend to be broad and unshifted. The forbidden-to-intercombination line ratios arising from helium-like ions provide radial distance information for the X-ray emission sources, while the H-like to He-like line ratios provide X-ray temperatures, and thus also source temperature versus radius distributions. OB stars usually show power-law differential emission measure distributions versus temperature. In models of bow shocks, we find a power-law differential emission measure and a wide range of ion stages, and the bow-shock flow around the clumps provides transverse velocities comparable to the HWHM values. We find that the bow-shock results for the line profile properties are consistent with the observations of X-ray line emission for a broad range of OB star properties.
X-rays are integral to furthering our knowledge of exoplanetary systems. In this work we discuss the use of X-ray observations to understand star-planet interactions, mass-loss rates of an exoplanet’s atmosphere, and the study of an exoplanet’s atmospheric components using future X-ray spectroscopy.
The low-mass star GJ 1151 was reported to display variable low-frequency radio emission, which is an indication of coronal star-planet interactions with an unseen exoplanet. In chapter 5 we report the first X-ray detection of GJ 1151’s corona based on XMM-Newton data. Averaged over the observation, we detect the star with a low coronal temperature of 1.6 MK and an X-ray luminosity of L_X = 5.5 × 10^26 erg/s. This is compatible with the coronal assumptions for a sub-Alfvénic star-planet interaction origin of the observed radio signals from this star.
In chapter 6, we aim to characterise the high-energy environment of known exoplanets and estimate their mass-loss rates. This work is based on the soft X-ray instrument on board the Spectrum Roentgen Gamma (SRG) mission, eROSITA, along with archival data from ROSAT, XMM-Newton, and Chandra. We use these four X-ray source catalogues to derive X-ray luminosities of exoplanet host stars in the 0.2-2 keV energy band. A catalogue of the mass-loss rates of 287 exoplanets is presented, with 96 of these planets characterised for the first time using new eROSITA detections. Of these first-time detections, 14 are of transiting exoplanets that undergo irradiation from their host stars at a level known to cause observable evaporation signals in other systems, making them suitable for follow-up observations.
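A common way to turn a host star's high-energy luminosity into a planetary mass-loss estimate, and one standard in the exoplanet evaporation literature, is the energy-limited approximation. The sketch below illustrates that approximation only; the abstract does not specify the exact prescription used, and the function name, the heating efficiency of 0.15, and the example planet parameters are all assumptions for illustration:

```python
import math

G = 6.674e-8  # gravitational constant in CGS units [cm^3 g^-1 s^-2]

def energy_limited_mdot(l_x, a, r_p, m_p, eps=0.15):
    """Illustrative energy-limited mass-loss rate [g/s].

    l_x : stellar high-energy luminosity [erg/s]
    a   : orbital separation [cm]
    r_p : planetary radius [cm]
    m_p : planetary mass [g]
    eps : assumed heating efficiency (a typical literature value)
    """
    f_xuv = l_x / (4.0 * math.pi * a**2)          # flux received at the planet
    return eps * math.pi * f_xuv * r_p**3 / (G * m_p)

# Hypothetical hot Jupiter: L_x = 1e28 erg/s, a = 0.05 au, Jupiter radius/mass
mdot = energy_limited_mdot(1e28, 0.05 * 1.496e13, 7.15e9, 1.90e30)
print(f"{mdot:.1e} g/s")  # of order 1e9 g/s for these inputs
```

Rates of this magnitude (10^9 to 10^11 g/s) are typical of the evaporation signals referenced in the abstract, though the thesis's own catalogue values will depend on its adopted efficiency and stellar flux scaling.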
In the next generation of space observatories, X-ray transmission spectroscopy of an exoplanet’s atmosphere will be possible, allowing for a detailed look into the atmospheric composition of these planets. In chapter 7, we model sample spectra using a toy model of an exoplanetary atmosphere to predict what exoplanet transit observations with future X-ray missions such as Athena will look like. We then estimate the observable X-ray transmission spectrum for a typical Hot Jupiter-type exoplanet, giving us insights into the advances in X-ray observations of exoplanets in the decades to come.