A comprehensive study of seismic hazard and earthquake triggering is crucial for effective mitigation of earthquake risks. The destructive nature of earthquakes motivates researchers to work on forecasting despite the apparent randomness of earthquake occurrences. Understanding their underlying mechanisms and patterns is vital, given their potential for widespread devastation and loss of life. This thesis combines methodologies, including Coulomb stress calculations and aftershock analysis, to shed light on earthquake complexities, ultimately enhancing seismic hazard assessment.
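As background for the stress analyses that follow, the Coulomb failure stress change on a receiver fault is conventionally written (in the standard formulation, not specific to this thesis) as:

```latex
\Delta \mathrm{CFS} = \Delta\tau + \mu' \, \Delta\sigma_n
```

where \(\Delta\tau\) is the change in shear stress resolved in the slip direction of the receiver fault, \(\Delta\sigma_n\) is the change in normal stress (positive for unclamping), and \(\mu'\) is the effective coefficient of friction. A positive \(\Delta \mathrm{CFS}\) brings the receiver fault closer to failure, which is why aftershocks are expected to concentrate in regions of positive stress change.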
The Coulomb failure stress (CFS) criterion is widely used to predict the spatial distribution of aftershocks following large earthquakes. However, uncertainties associated with CFS calculations arise from non-unique slip inversions and unknown fault networks, particularly due to the choice of the assumed aftershock (receiver) mechanisms. Recent studies have proposed alternative stress quantities and deep-neural-network approaches as superior to CFS with predefined receiver mechanisms. To challenge these propositions, I utilized 289 slip inversions from the SRCMOD database to calculate more realistic CFS values for a layered half-space and variable receiver mechanisms. The analysis also investigates the impact of magnitude cutoff, grid-size variation, and aftershock duration on the ranking of stress metrics using receiver operating characteristic (ROC) analysis. The results reveal that the performance of the stress metrics improves significantly after accounting for receiver variability, for larger aftershocks, and for shorter time periods, without altering the relative ranking of the different stress metrics.
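The ROC comparison of stress metrics can be illustrated with a minimal sketch: grid cells are scored by a stress metric and labeled by whether they contain an aftershock, and the area under the ROC curve (AUC) measures how well the scores rank aftershock cells above quiet ones. The stress values and labels below are hypothetical, not data from this thesis.

```python
def roc_auc(scores, labels):
    """Rank-based AUC: the probability that a randomly chosen positive cell
    (one containing an aftershock) outscores a randomly chosen negative cell.
    Ties count as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical Delta-CFS values (bar) per grid cell and aftershock labels:
stress = [2.1, 0.8, -1.2, -0.5, 1.5, 0.1]
hit    = [1,   1,   1,    0,    0,   0]   # 1 = cell contains an aftershock
auc = roc_auc(stress, hit)   # 1.0 would mean a perfect ranking, 0.5 chance level
```

An AUC computed this way for each candidate metric (CFS with fixed vs. variable receivers, alternative stress scalars) gives exactly the kind of ranking the abstract describes.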
To corroborate the Coulomb stress calculations with findings from earthquake source studies in more detail, I studied the source properties of the 2005 Kashmir earthquake and its aftershocks, aiming to unravel the seismotectonics of the NW Himalayan syntaxis. I simultaneously relocated the mainshock and its largest aftershocks using phase data and then computed the Coulomb failure stress changes on the aftershock planes. All large aftershocks lie in regions of positive stress change, indicating triggering by either co-seismic or post-seismic slip on the mainshock fault.
Finally, I investigated the relationship between mainshock-induced stress changes and the parameters of the associated seismicity, in particular those of the frequency-magnitude (Gutenberg-Richter) distribution and the temporal aftershock decay (Omori-Utsu law). For that purpose, I used my global data set of 127 mainshock-aftershock sequences with the calculated Coulomb stress changes (ΔCFS) and the alternative receiver-independent stress metrics in the vicinity of the mainshocks, and analyzed how the aftershock properties depend on the stress values. Surprisingly, the results show a clear positive correlation between the Gutenberg-Richter b-value and the induced stress, contrary to expectations from laboratory experiments. This observation highlights the significance of structural heterogeneity and strength variations for seismicity patterns. Furthermore, the study demonstrates that aftershock productivity increases nonlinearly with stress, while the Omori-Utsu parameters c and p systematically decrease with increasing stress changes. These partly unexpected findings have significant implications for future estimations of aftershock hazard.
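The two seismicity laws named above have compact standard forms that can be sketched directly; this is the textbook formulation (Aki's maximum-likelihood b-value estimator and the Omori-Utsu rate), not the thesis's exact fitting procedure, and the magnitudes below are invented:

```python
import math

def b_value(mags, mc):
    """Aki (1965) maximum-likelihood estimate of the Gutenberg-Richter
    b-value from magnitudes at or above the completeness magnitude mc:
    b = log10(e) / (mean(M) - mc)."""
    m = [x for x in mags if x >= mc]
    return math.log10(math.e) / (sum(m) / len(m) - mc)

def omori_rate(t, K, c, p):
    """Omori-Utsu aftershock rate n(t) = K / (t + c)**p,
    with t the time since the mainshock."""
    return K / (t + c) ** p

# Hypothetical aftershock magnitudes, complete above mc = 2.0:
b = b_value([2.0, 2.2, 2.4, 2.6, 2.8, 3.0, 4.0], mc=2.0)
```

In this framing, the thesis's finding is that b (from `b_value`) correlates positively with ΔCFS, while the fitted c and p of `omori_rate` decrease with increasing stress change.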
The findings in this thesis provide valuable insights into earthquake triggering mechanisms by examining the relationship between stress changes and aftershock occurrence. The results contribute to an improved understanding of earthquake behavior and can aid in the development of more accurate probabilistic seismic hazard forecasts and risk reduction strategies.
In this visualization, the authors show changes in family patterns across race groups and two cohorts. Using data from the National Longitudinal Survey of Youth 1979 (born from 1957 to 1965) and 1997 (born from 1980 to 1984), the authors visualize the relationship-parenthood state distributions at each age between 15 and 35 years by race and cohort. The results suggest a rise of cohabiting mothers and a decline of married and divorced mothers among women born from 1980 to 1984. Black women born from 1980 to 1984 were more likely to experience single/childless and single/parent status compared with Black women born from 1957 to 1965. Despite some visible postponement in the recent cohort, white women in both cohorts were more likely to experience married/parent status than other race groups. The decline in married/parent status across the two generations was sharpest among Hispanic women. These descriptive findings highlight the importance of identifying race when discussing changes in family formation and dissolution trends across generations.
Invisible iterations: how formal and informal organization shape knowledge networks for coordination
(2024)
This study takes a network approach to investigate coordination among knowledge workers as grounded in both formal and informal organization. We first derive hypotheses regarding patterns of knowledge-sharing relationships by which workers pass on and exchange tacit and codified knowledge within and across organizational hierarchies to address the challenges that underpin contemporary knowledge work. We use survey data and apply exponential random graph models to test our hypotheses. We then extend the quantitative network analysis with insights from qualitative interviews and demonstrate that the identified knowledge-sharing patterns are the micro-foundational traces of collective coordination resulting from two underlying coordination mechanisms which we label ‘invisible iterations’ and ‘bringing in the big guns’. These mechanisms and, by extension, the associated knowledge-sharing patterns enable knowledge workers to perform in a setting that is characterized by complexity, uncertainty and ambiguity. Our research contributes to theory on the interplay between formal and informal organization for coordination by showing how self-directed, informal action is supported by the formal organizational hierarchy. In doing so, it also extends understanding of the role that hierarchy plays for knowledge-intensive work. Finally, it establishes the collective need to coordinate work as a previously overlooked driver of knowledge network relationships and network patterns. © 2024 The Authors. Journal of Management Studies published by Society for the Advancement of Management Studies and John Wiley & Sons Ltd.
Column-oriented database systems can efficiently process transactional and analytical queries on a single node. However, increasing or peak analytical loads can quickly saturate single-node database systems. Then, a common scale-out option is using a database cluster with a single primary node for transaction processing and read-only replicas. Using (the naive) full replication, queries are distributed among nodes independently of the accessed data. This approach is relatively expensive because all nodes must store all data and apply all data modifications caused by inserts, deletes, or updates.
In contrast to full replication, partial replication is a more cost-efficient implementation: Instead of duplicating all data to all replica nodes, partial replicas store only a subset of the data while being able to process a large workload share. Besides lower storage costs, partial replicas enable (i) better scaling because replicas must potentially synchronize only subsets of the data modifications and thus have more capacity for read-only queries and (ii) better elasticity because replicas have to load less data and can be set up faster. However, splitting the overall workload evenly among the replica nodes while optimizing the data allocation is a challenging assignment problem.
The calculation of optimized data allocations in a partially replicated database cluster can be modeled using integer linear programming (ILP). ILP is a common approach for solving assignment problems, also in the context of database systems. Because ILP does not scale to large instances, existing approaches (including those for calculating partial allocations) often fall back on simple (e.g., greedy) heuristics for larger problem instances. Simple heuristics may work well but can lose optimization potential.
In this thesis, we present optimal and ILP-based heuristic programming models for calculating data fragment allocations for partially replicated database clusters. Using ILP, we are flexible to extend our models to (i) consider data modifications and reallocations and (ii) increase the robustness of allocations to compensate for node failures and workload uncertainty. We evaluate our approaches for TPC-H, TPC-DS, and a real-world accounting workload and compare the results to state-of-the-art allocation approaches. Our evaluations show significant improvements for various allocation properties: Compared to existing approaches, we can, for example, (i) almost halve the amount of allocated data, (ii) improve the throughput in case of node failures and workload uncertainty while using even less memory, (iii) halve the costs of data modifications, and (iv) reallocate less than 90% of data when adding a node to the cluster. Importantly, we can calculate the corresponding ILP-based heuristic solutions within a few seconds. Finally, we demonstrate that the ideas of our ILP-based heuristics are also applicable to the index selection problem.
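The underlying assignment problem can be sketched on a toy instance: queries touch fragments, every node must host all fragments of the queries routed to it, per-node load is capped, and the objective is to minimize the total allocated data. The brute-force search below stands in for an ILP solver (which the thesis actually uses); fragment sizes, query costs, and the load budget are all invented for illustration.

```python
from itertools import product

# Hypothetical instance: fragment sizes, and queries as (fragments, cost).
frag_size = {"A": 4, "B": 3, "C": 2, "D": 1}
queries = [({"A", "B"}, 10), ({"B", "C"}, 10), ({"C", "D"}, 10), ({"A", "D"}, 10)]
nodes, max_load = 2, 20   # two replicas, per-node query-cost budget

def allocated_data(assignment):
    """A node must store the union of the fragments of its queries;
    fragments replicated on both nodes are counted twice."""
    hosted = [set() for _ in range(nodes)]
    for (frags, _), n in zip(queries, assignment):
        hosted[n] |= frags
    return sum(frag_size[f] for s in hosted for f in s)

# Exhaustive search over all query-to-node assignments (ILP stand-in):
best = None
for assign in product(range(nodes), repeat=len(queries)):
    load = [0] * nodes
    for (_, cost), n in zip(queries, assign):
        load[n] += cost
    if max(load) <= max_load:   # balanced-load constraint
        data = allocated_data(assign)
        if best is None or data < best[0]:
            best = (data, assign)
```

On this instance, co-locating queries that share fragments ({A,B} with {A,D}, and {B,C} with {C,D}) yields a total of 14 allocated units, while a naive pairing needs 16 and full replication would need 20, which is precisely the optimization potential a greedy heuristic can miss.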
With the surging reliance on videoconferencing tools, users may find themselves staring at their own reflections for hours a day. We refer to this phenomenon as self-referential information (SRI) consumption and examine its consequences and the mechanism behind them. Building on self-awareness research and the strength model of self-control, we argue that SRI consumption heightens the state of self-awareness and thereby depletes participants’ mental resources, eventually undermining virtual meeting (VM) outcomes. Our findings from a European employee sample revealed contrasting effects of SRI consumption across speaker and listener roles. Engagement with self-view is positively associated with self-awareness, which, in turn, is negatively related to satisfaction with the VM process, perceived productivity, and enjoyment. Looking at the self while listening to others exhibits adverse direct and indirect (via self-awareness) effects on VM outcomes. However, looking at the self when speaking exhibits positive direct effects on satisfaction with the VM process and enjoyment.
Enhancing higher entrepreneurship education: insights from practitioners for curriculum improvement
(2024)
Curricula for higher entrepreneurship education should meet the requirements of both a solid theoretical foundation and a practical orientation. When these curricula are designed by education specialists, entrepreneurs are usually not consulted. To explore practitioners’ curricular recommendations, we conducted 73 semi-structured interviews with entrepreneurs with at least five years of professional experience. We collected 49 items for teaching and learning objectives, 37 for contents, 28 for teaching methods, and 17 for assessment methods. The respondents are convinced that students should acquire solid knowledge in business and management, legal issues, and entrepreneurship. For the latter, only some core aspects are provided. The entrepreneurs put greater emphasis on entrepreneurial skills and attitudes and consider experiential learning designs as most suitable, both in the secure setting of the classroom and in real life. The findings can help reflect on current entrepreneurship curriculum designs.
Volatile supply and sales markets, coupled with increasing product individualization and complex production processes, present significant challenges for manufacturing companies. These must navigate and adapt to ever-shifting external and internal factors while ensuring robustness against process variabilities and unforeseen events. This has a pronounced impact on production control, which serves as the operational intersection between production planning and the shop-floor resources, and necessitates the capability to manage intricate process interdependencies effectively. Considering the increasing dynamics and product diversification, alongside the need to maintain consistent production performance, the implementation of innovative control strategies becomes crucial.
In recent years, the integration of Industry 4.0 technologies and machine learning methods has gained prominence in addressing emerging challenges in production applications. Within this context, this cumulative thesis analyzes deep-learning-based production systems across five publications. Particular attention is paid to applications of deep reinforcement learning, aiming to explore its potential in dynamic control contexts. The analysis reveals that deep reinforcement learning excels in various applications, especially in dynamic production control tasks. Its efficacy can be attributed to its interactive learning and real-time operational model. However, despite its evident utility, there are notable structural, organizational, and algorithmic gaps in the prevailing research. A predominant portion of deep-reinforcement-learning-based approaches is limited to specific job shop scenarios and often overlooks the potential synergies of combined resources. Furthermore, the analysis highlights that multi-agent systems and semi-heterarchical systems are rarely implemented in practical settings. A notable gap remains in the integration of deep reinforcement learning into a hyper-heuristic.
To bridge these research gaps, this thesis introduces a deep-reinforcement-learning-based hyper-heuristic for the control of modular production systems, developed in accordance with the design science research methodology. Implemented within a semi-heterarchical multi-agent framework, this approach achieves a threefold reduction in control and optimisation complexity while ensuring high scalability, adaptability, and robustness of the system. In comparative benchmarks, this control methodology outperforms rule-based heuristics, reducing throughput times and tardiness, and effectively incorporates customer- and order-centric metrics. The control artifact facilitates rapid scenario generation, motivating further research and bridging the gap to real-world applications. The overarching goal is to foster a synergy between theoretical insights and practical solutions, thereby enriching scientific discourse and addressing current industrial challenges.
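The hyper-heuristic idea, an agent that learns which low-level dispatching rule to apply rather than which job to dispatch, can be sketched with tabular Q-learning over rules. Everything here is a hypothetical stand-in for the thesis's deep-RL artifact: the rule names, the toy shop-floor episode, and the reward (negative mean flow time) are all invented for illustration.

```python
import random

random.seed(0)

RULES = ["FIFO", "SPT", "EDD"]   # hypothetical low-level dispatching rules

def simulate(rule):
    """Stand-in for one shop-floor episode: 20 jobs with random processing
    times are sequenced by the chosen rule; reward = negative mean flow time.
    SPT sorts by processing time; EDD behaves like FIFO here because the
    toy due dates mirror arrival order."""
    jobs = [random.randint(1, 9) for _ in range(20)]
    order = sorted(jobs) if rule == "SPT" else jobs
    t, flow = 0, 0
    for p in order:
        t += p
        flow += t
    return -flow / len(order)

# Tabular Q-learning over rules (the "hyper" level of the hyper-heuristic):
Q = {r: 0.0 for r in RULES}
alpha, eps = 0.1, 0.2
for _ in range(500):
    rule = random.choice(RULES) if random.random() < eps else max(Q, key=Q.get)
    Q[rule] += alpha * (simulate(rule) - Q[rule])

best_rule = max(Q, key=Q.get)   # the agent should discover SPT
```

Since shortest-processing-time sequencing provably minimizes mean flow time, the agent converges on SPT; a deep-RL hyper-heuristic generalizes this by conditioning the rule choice on the observed system state.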
In the debate on epistemic injustice, it is generally assumed that testimonial injustice as one form of epistemic injustice cannot be committed (fully) deliberately or intentionally because it involves unconscious identity prejudices. Drawing on the case of sexual violence against refugees in European refugee camps, this paper argues that there is a form of testimonial injustice—willful testimonial injustice—that is deliberate. To do so, the paper argues (a) that the hearer intentionally utilizes negative identity prejudices for a particular purpose and (b) that the hearer is aware of the fact that the intentionally used prejudices are in fact prejudices. Furthermore, the paper shows how testimonial injustice relates to recognition failures both in terms of a causal as well as a constitutive claim. In fact, introducing willful testimonial injustice can support the constitutive claim of such a relation that has so far received little attention. Besides arguing for a novel form of testimonial injustice and contributing to the recent debate on the relation between epistemic injustice and recognition failures, this paper is also motivated by the attempt to draw attention to the inhumane conditions for refugees at the border of Europe as well as elsewhere.
We examine how the gender of business owners is related to the wages paid to female relative to male employees working in their firms. Using Finnish register data and employing firm fixed effects, we find that the gender pay gap in hourly wages, 11 to 12 percent overall, is two to three percentage points lower in female-owned firms than in male-owned firms. Results are robust to how the wage is measured, as well as to various further robustness checks. More importantly, we find substantial differences between industries. While in the manufacturing sector, for instance, the gender of the owner plays no role for the gender pay gap, in several service-sector industries, like ICT or business services, no or only a negligible gender pay gap can be found, but only when firms are led by female business owners. Businesses in male ownership maintain a gender pay gap of around 10 percent in the latter industries as well. With increasing firm size, however, the influence of the gender of the owner fades. In large firms, it seems that others, namely firm managers, determine wages, and no differences in the pay gap are observed between male- and female-owned firms.
The growing use of digital tools in policy implementation has altered the work of street-level bureaucrats who are granted substantial discretionary power in decision-making. Digital tools can constrain discretionary power, as the curtailment thesis proposes, or serve as action resources, as the enablement thesis suggests. This article assesses empirical evidence of the impact of digital tools on street-level work and decision-making in service-oriented and regulation-oriented organisations based on a systematic literature review and thematic qualitative content analysis of 36 empirical studies published until 2021. The findings demonstrate different effects with regard to the role of digital tools and the core tasks of the public administration, depending on political and managerial goals and the consequent system design. Leading or decisive digital tools mostly curtail discretion, especially in service-oriented organisations. In contrast, an enhanced information base or recommendations for action enable decision-making, in particular in regulation-oriented organisations. By showing how street-level bureaucrats actively try to resist the curtailing effects caused by rigid design to address individual circumstances, for instance by establishing ways of coping like rule bending or rule breaking, using personal resources or prioritising among clients, this study demonstrates the importance of the continuation thesis and the persistently crucial role of human judgement in policy implementation.
Moss-microbe associations are often characterised by syntrophic interactions between the microorganisms and their hosts, but the structure of the microbial consortia and their role in peatland development remain unknown.
In order to study microbial communities of dominant peatland mosses, Sphagnum and brown mosses, and the respective environmental drivers, four study sites representing different successional stages of natural northern peatlands were chosen on a large geographical scale: two brown moss-dominated, circumneutral peatlands from the Arctic and two Sphagnum-dominated, acidic peat bogs from subarctic and temperate zones.
The family Acetobacteraceae represented the dominant bacterial taxon of Sphagnum mosses from various geographical origins and formed an integral part of the moss core community. This core community was shared among all investigated bryophytes and consisted of few but highly abundant prokaryotes, many of which appear as endophytes of Sphagnum mosses. Moreover, brown mosses and Sphagnum mosses represent habitats for archaea, which had not been studied in association with peatland mosses so far. Euryarchaeota capable of methane production (methanogens) made up the majority of the moss-associated archaeal communities. Moss-associated methanogenesis was detected for the first time, but it was mostly negligible under laboratory conditions. In contrast, substantial moss-associated methane oxidation was measured on both brown mosses and Sphagnum mosses, supporting the idea that methanotrophic bacteria, as part of the moss microbiome, may contribute to the reduction of methane emissions from pristine and rewetted peatlands of the northern hemisphere.
Among the investigated abiotic and biotic environmental parameters, the peatland type and the host moss taxon were identified as having a major impact on the structure of moss-associated bacterial communities, in contrast to the archaeal communities, whose structures were similar among the investigated bryophytes. For the first time it was shown that different bog development stages harbour distinct bacterial communities, while at the same time a small core community is shared among all investigated bryophytes independent of geography and peatland type.
The present thesis provides the first large-scale, systematic assessment of bacterial and archaeal communities associated with both brown mosses and Sphagnum mosses. It suggests that some host-specific microbial taxa have the potential to play a key role in host moss establishment and peatland development.
This dissertation examines the integration of incongruent visual-scene and morphological-case information (“cues”) in building thematic-role representations of spoken relative clauses in German.
Addressing the mutual influence of visual and linguistic processing, the Coordinated Interplay Account (CIA) describes a two-step mechanism supporting visuo-linguistic integration (Knoeferle & Crocker, 2006, Cog Sci). However, the outcomes and dynamics of integrating incongruent thematic-role representations from distinct sources have scarcely been investigated. Further, there is evidence that both second-language (L2) and older speakers may rely on non-syntactic cues relatively more than first-language (L1)/young speakers. Yet, the role of visual information for thematic-role comprehension has not been measured in L2 speakers, and only to a limited extent across the adult lifespan.
Thematically unambiguous canonically ordered (subject-extracted) and noncanonically ordered (object-extracted) spoken relative clauses in German (see 1a-b) were presented in isolation and alongside visual scenes conveying either the same (congruent) or the opposite (incongruent) thematic relations as the sentence did.
(1a) Das ist der Koch, der die Braut verfolgt.
     this is the.NOM cook who.NOM the.ACC bride follows
     'This is the cook who is following the bride.'

(1b) Das ist der Koch, den die Braut verfolgt.
     this is the.NOM cook whom.ACC the.NOM bride follows
     'This is the cook whom the bride is following.'
The relative contribution of each cue to thematic-role representations was assessed with agent identification. Accuracy and latency data were collected post-sentence from a sample of L1 and L2 speakers (Zona & Felser, 2023), and from a sample of L1 speakers from across the adult lifespan (Zona & Reifegerste, under review). In addition, the moment-by-moment dynamics of thematic-role assignment were investigated with mouse tracking in a young L1 sample (Zona, under review).
The following questions were addressed: (1) How do visual scenes influence thematic-role representations of canonical and noncanonical sentences? (2) How does reliance on visual-scene, case, and word-order cues vary in L1 and L2 speakers? (3) How does reliance on visual-scene, case, and word-order cues change across the lifespan?
The results showed reliable effects of incongruence between visually and linguistically conveyed thematic relations on thematic-role representations. Incongruent (vs. congruent) scenes yielded slower and less accurate responses to agent-identification probes presented post-sentence. The most recently inspected agent was considered the most likely agent from ~300 ms after trial onset, and the convergence of visual scenes and word order enabled comprehenders to assign thematic roles predictively.
L2 (vs. L1) participants relied more on word order overall. In response to noncanonical clauses presented with incongruent visual scenes, sensitivity to case predicted the size of incongruence effects better than L1-L2 grouping. These results suggest that the individual’s ability to exploit specific cues might predict their weighting.
Sensitivity to case was stable throughout the lifespan, while visual effects increased with increasing age and were modulated by individual interference-inhibition levels. Thus, age-related changes in comprehension may stem from stronger reliance on visually (vs. linguistically) conveyed meaning.
These patterns represent evidence for a recent-role preference – i.e., a tendency to re-assign visually conveyed thematic roles to the same referents in temporally coordinated utterances. The findings (i) extend the generalizability of CIA predictions across stimuli, tasks, populations, and measures of interest, (ii) contribute to specifying the outcomes and mechanisms of detecting and indexing incongruent representations within the CIA, and (iii) speak to current efforts to understand the sources of variability in sentence comprehension.
Diglossic translanguaging
(2024)
This book examines how German-speaking Jews living in Berlin make sense and make use of their multilingual repertoire. With a focus on lexical variation, the book demonstrates how speakers integrate Yiddish and Hebrew elements into German to index belonging and to position themselves within the Jewish community. Linguistic choices are shaped by language ideologies (e.g., authenticity, prescriptivism, nostalgia). Speakers translanguage when using their multilingual repertoire, but do so in a diglossic way, using elements from different languages for specific domains.
Climate change fundamentally transforms glaciated high-alpine regions, with well-known cryospheric and hydrological implications, such as accelerating glacier retreat, transiently increased runoff, longer snow-free periods and more frequent and intense summer rainstorms. These changes affect the availability and transport of sediments in high alpine areas by altering the interaction and intensity of different erosion processes and catchment properties.
Gaining insight into future alterations in suspended sediment transport by high alpine streams is crucial, given its wide-ranging implications, e.g. for flood damage potential, flood hazard in downstream river reaches, hydropower production, riverine ecology and water quality. However, the current understanding of how climate change will impact suspended sediment dynamics in these high alpine regions is limited. For one, this is due to the scarcity of measurement time series long enough to, for example, infer trends. For another, it is difficult, if not impossible, to develop process-based models, due to the complexity and multitude of processes involved in high alpine sediment dynamics. Therefore, knowledge has so far been confined to conceptual models (which do not facilitate deriving concrete timings or magnitudes for individual catchments) or qualitative estimates (‘higher export in warmer years’) that may not be able to capture decreases in sediment export. Recently, machine-learning approaches have gained in popularity for modeling sediment dynamics, since their black-box nature tailors them to the problem at hand, i.e. relatively well-understood input and output data linked by very complex processes.
Therefore, the overarching aim of this thesis is to estimate sediment export from the high alpine Ötztal valley in Tyrol, Austria, over decadal timescales in the past and future – i.e. timescales relevant to anthropogenic climate change. This is achieved by informing, extending, evaluating and applying a quantile regression forest (QRF) approach, i.e. a nonparametric, multivariate machine-learning technique based on random forest.
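The defining step of QRF (Meinshausen, 2006) can be sketched without any ML library: where a standard random forest averages the mean response of the leaf each tree routes a query point to, QRF retains the training responses in those leaves and predicts quantiles of their pooled distribution. The leaf contents below are hypothetical sediment concentrations, not data from this thesis, and a real QRF would of course grow the randomized trees first.

```python
import statistics

# Hypothetical training responses (mg/l) stored in the leaf that each of
# three trees routes a new query point to:
leaf_samples = [
    [12.0, 15.0, 14.0],   # leaf reached in tree 1
    [11.0, 30.0],         # tree 2 (contains one event-driven outlier)
    [13.0, 16.0, 14.5],   # tree 3
]
pooled = sorted(x for leaf in leaf_samples for x in leaf)

def predict_quantile(q):
    """Empirical q-quantile of the pooled leaf responses (0 < q < 1)."""
    return statistics.quantiles(pooled, n=100, method="inclusive")[int(q * 100) - 1]

median = predict_quantile(0.50)   # central estimate of sediment concentration
p90 = predict_quantile(0.90)      # upper bound sensitive to event-driven extremes
```

Because the upper quantiles retain the influence of rare high-concentration samples that a mean would smooth away, this construction is well suited to the threshold-dominated, event-driven sediment export the thesis models.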
The first study included in this thesis aimed to understand present sediment dynamics, i.e. in the period with available measurements (up to 15 years). To inform the modeling setup for the two subsequent studies, this study identified the most important predictors, areas within the catchments and time periods. To that end, water and sediment yields from three nested gauges in the upper Ötztal, Vent, Sölden and Tumpen (98 to almost 800 km² catchment area, 930 to 3772 m a.s.l.) were analyzed for their distribution in space, their seasonality and spatial differences therein, and the relative importance of short-term events. The findings suggest that the areas situated above 2500 m a.s.l., containing glacier tongues and recently deglaciated areas, play a pivotal role in sediment generation across all sub-catchments. In contrast, precipitation events were relatively unimportant (on average, 21 % of annual sediment yield was associated with precipitation events). Thus, the second and third studies focused on the Vent catchment and its sub-catchment above gauge Vernagt (11.4 and 98 km², 1891 to 3772 m a.s.l.), due to their higher share of areas above 2500 m. Additionally, they included discharge, precipitation and air temperature (as well as their antecedent conditions) as predictors.
The second study aimed to estimate sediment export since the 1960s/70s at gauges Vent and Vernagt. This was facilitated by the availability of long records of the predictors, discharge, precipitation and air temperature, and shorter records (four and 15 years) of turbidity-derived sediment concentrations at the two gauges. The third study aimed to estimate future sediment export until 2100, by applying the QRF models developed in the second study to pre-existing precipitation and temperature projections (EURO-CORDEX) and discharge projections (physically-based hydroclimatological and snow model AMUNDSEN) for the three representative concentration pathways RCP2.6, RCP4.5 and RCP8.5.
The combined results of the second and third studies show overall increasing sediment export in the past and decreasing export in the future. This suggests that peak sediment is underway or has already passed, unless precipitation changes unfold differently than represented in the projections or changes in the catchment erodibility prevail and override these trends. Despite the overall future decrease, very high sediment export is possible in response to precipitation events. This two-fold development has important implications for managing sediment, flood hazard and riverine ecology.
This thesis shows that QRF can be a very useful tool to model sediment export in high-alpine areas. Several validations in the second study showed good performance of QRF and its superiority to traditional sediment rating curves, especially in periods that contained high sediment export events, which points to its ability to deal with threshold effects. A technical limitation of QRF is its inability to extrapolate beyond the range of values represented in the training data. We assessed the number and severity of such out-of-observation-range (OOOR) days in both studies, which showed that there were few OOOR days in the second study and that uncertainties associated with OOOR days were small before 2070 in the third study. As the pre-processed data and model code have been made publicly available, future studies can easily test further approaches or apply QRF to further catchments.
Global warming, driven primarily by the excessive emission of greenhouse gases such as carbon dioxide into the atmosphere, has led to severe and detrimental environmental impacts. Rising global temperatures have triggered a cascade of adverse effects, including melting glaciers and polar ice caps, more frequent and intense heat waves, disrupted weather patterns, and the acidification of oceans. These changes adversely affect ecosystems, biodiversity, and human societies, threatening food security, water availability, and livelihoods. One promising solution to mitigate the harmful effects of global warming is the widespread adoption of solar cells, also known as photovoltaic cells. Solar cells harness sunlight to generate electricity without emitting greenhouse gases or other pollutants. By replacing fossil fuel-based energy sources, solar cells can significantly reduce CO2 emissions, a major contributor to global warming. This transition to clean, renewable energy can help curb the increasing concentration of greenhouse gases in the atmosphere, thereby slowing down the rate of global temperature rise.
Solar energy’s positive impact extends beyond emission reduction. As solar panels become more efficient and affordable, they empower individuals, communities, and even entire nations to generate electricity and become less dependent on fossil fuels. This decentralized energy generation can enhance resilience in the face of climate-related challenges. Moreover, implementing solar cells creates green jobs and stimulates technological innovation, further promoting sustainable economic growth. As solar technology advances, its integration with energy storage systems and smart grids can ensure a stable and reliable energy supply, reducing the need for backup fossil fuel power plants that exacerbate environmental degradation.
The market-dominant solar cell technology is silicon-based: a highly mature technology with a highly systematic production procedure. However, it suffers from several drawbacks: 1) Cost: still relatively high, owing to the energy-intensive melting and purification of silicon and the use of silver as an electrode, which hinders widespread availability, especially in low-income countries. 2) Efficiency: the theoretical maximum is around 29%, yet most commercially available silicon-based solar cells reach only 18-22%. 3) Temperature sensitivity: efficiency decreases as temperature rises, reducing output. 4) Resource constraints: silicon as a raw material is not available in all countries, creating supply chain challenges.
Perovskite solar cells emerged in 2011 and matured very rapidly over the last decade into a highly efficient and versatile solar cell technology. With an efficiency of 26%, high absorption coefficients, solution processability, and a tunable band gap, the technology attracted the attention of the solar cell community and represented a hope for cheap, efficient, and easily processable next-generation solar cells. However, lead toxicity might be the stumbling block hindering perovskite solar cells' market reach: lead is a heavy, bioavailable element that makes perovskite solar cells an environmentally unfriendly technology. As a result, scientists are trying to replace lead with a more environmentally friendly element. Among several possible alternatives, tin is the most suitable due to the similarity of its electronic and atomic structure to that of lead.
Tin perovskites were developed to alleviate the challenge of lead toxicity. Theoretically, they show very high absorption coefficients, an optimum band gap of 1.35 eV for FASnI3, and a very high short-circuit current, making them candidates for the highest possible single-junction solar cell efficiency, around 30.1% according to the Shockley-Queisser limit. However, the efficiency of tin perovskites still lags below 15% and is poorly reproducible, especially from lab to lab. This modest performance can be attributed to three causes: 1) Oxidation of tin(II) to tin(IV), which occurs through exposure to oxygen or water, or even, as was discovered recently, through the effect of the solvent. 2) Fast crystallization dynamics, caused by the lateral exposure of the p-orbitals of the tin atom, which enhances its reactivity and increases the crystallization pace. 3) Energy band misalignment: the energy bands at the interfaces between the perovskite absorber and the charge-selective layers are not aligned, leading to high interfacial charge recombination, which degrades the photovoltaic performance. To address these issues, we implemented several techniques and approaches that enhanced the efficiency of tin halide perovskites, providing new chemically safe solvents and antisolvents. In addition, we studied the energy band alignment between the charge transport layers and the tin perovskite absorber.
Recent research has shown that the principal source of tin oxidation is the solvent dimethyl sulfoxide (DMSO), which also happens to be one of the most effective solvents for processing perovskite. Finding a stable solvent might prove decisive for the stability of tin-based perovskites. We started with a database of over 2,000 solvents and narrowed it down to a series of 12 new solvents suitable for processing FASnI3 experimentally. This was accomplished by examining 1) the solubility of the precursor chemicals FAI and SnI2, 2) the thermal stability of the precursor solution, and 3) the potential to form perovskite. Finally, we show that solar cells can be manufactured using a novel solvent system that outperforms cells produced using DMSO. Our results offer guidance for the search for novel solvents, or mixtures of solvents, for manufacturing stable tin-based perovskites.
Because tin crystallizes quickly, depositing tin-based perovskite films from solution is more difficult than manufacturing the more commonly used lead-based perovskite films. The most successful route to high efficiencies is to deposit the perovskite from dimethyl sulfoxide (DMSO), which slows down the rapid construction of the tin-iodine network responsible for perovskite formation. The disadvantage of this method is that the DMSO used in processing is responsible for the oxidation of tin. This research presents a promising alternative in which 4-(tert-butyl)pyridine substitutes for DMSO in regulating crystallization without causing tin oxidation. Perovskite films deposited from pyridine show a markedly reduced defect density, resulting in increased charge mobility and better photovoltaic performance and making pyridine a desirable alternative for the deposition of tin perovskite films.
The precise control of perovskite precursor crystallization inside a thin film is of utmost importance for optimizing the efficiency and manufacturing of solar cells. Depositing tin-based perovskite films from solution is difficult because tin crystallizes more quickly than the more commonly employed lead perovskite. The established approach for attaining high efficiencies uses dimethyl sulfoxide (DMSO) as the deposition medium: this solvent impedes the fast aggregation of the tin-iodine network that is crucial to perovskite formation. Nevertheless, the methodology is limited, since DMSO oxidizes tin during processing. In this thesis, we present a potentially advantageous alternative in which 4-(tert-butyl)pyridine is proposed as a substitute for DMSO to regulate crystallization while avoiding tin oxidation. Perovskite films formed using pyridine as a solvent have a notably reduced defect density, resulting in higher charge mobility and improved photovoltaic performance. Consequently, the use of pyridine for the deposition of tin perovskite films is considered advantageous.
Tin perovskites suffer from an apparent energy band misalignment. However, the band diagrams published in the current body of research are contradictory, resulting in a lack of consensus, and comprehensive information about the dynamics of charge extraction is lacking. This thesis aims to ascertain the energy band positions of tin perovskites by employing Kelvin probe (KP) and photoelectron yield spectroscopy methods, and to construct a precise band diagram for the often-utilized device stack. Moreover, a comprehensive analysis is performed to assess the energy deficits inherent in the current energetic structure of tin halide perovskites. In addition, we investigate the influence of BCP on the improvement of electron extraction in C60/BCP systems, with a specific emphasis on the energetics involved. Furthermore, transient surface photovoltage was utilized to investigate the charge extraction kinetics of frequently studied charge transport layers, such as NiOx and PEDOT as hole transport layers and C60, ICBA, and PCBM as electron transport layers. The Hall effect, KP, and TRPL approaches were used to accurately determine the p-doping concentration in FASnI3; the results consistently yielded a value of 1.5 × 10^17 cm^-3. Our findings highlight the need to design the charge extraction layers for tin halide perovskites independently of those used for lead perovskites.
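The Hall-effect part of the doping determination reduces to a single relation, p = I·B / (q·V_H·t). The sketch below illustrates it with hypothetical measurement values chosen only so that the result lands near the reported 1.5 × 10^17 cm^-3; none of the numbers are from the thesis.

```python
# Minimal sketch: extracting a hole concentration from a Hall measurement,
# p = I*B / (q * V_H * t). All measurement values below are hypothetical.
Q = 1.602176634e-19  # elementary charge, C

def hall_carrier_density(current_A, b_field_T, hall_voltage_V, thickness_m):
    """Carrier density in cm^-3 for a single-carrier Hall bar."""
    p_per_m3 = current_A * b_field_T / (Q * hall_voltage_V * thickness_m)
    return p_per_m3 * 1e-6  # convert m^-3 -> cm^-3

p = hall_carrier_density(current_A=1e-6,         # 1 uA drive current
                         b_field_T=0.5,          # 0.5 T magnetic field
                         hall_voltage_V=69.4e-6, # ~69 uV Hall voltage
                         thickness_m=300e-9)     # 300 nm film
print(f"p = {p:.2e} cm^-3")  # ~1.5e17 cm^-3
```

In practice the reported value rests on the agreement of three independent techniques (Hall effect, KP, TRPL); this sketch covers only the Hall-effect arithmetic.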
The crystallization of perovskite precursors relies mainly on two solvents. The first, usually called the solvent, dissolves the perovskite powder to form the precursor solution. The second, the antisolvent, precipitates the perovskite precursor to form the wet film: a supersaturated solution of perovskite precursor in the remains of the solvent and the antisolvent. Upon annealing, this wet film crystallizes into a fully crystallized perovskite film. In our research, we proposed new solvents to dissolve FASnI3, but when we tried to form a film, most of them did not crystallize. This is attributed to the high coordination strength between the metal halide and the solvent molecules, which cannot be broken by traditionally used antisolvents such as toluene and chlorobenzene. To solve this issue, we introduce a high-throughput antisolvent screening in which we tested around 73 selected antisolvents against 15 solvents that can form a 1 M FASnI3 solution. For the first time in tin perovskite research, we used a machine learning algorithm to understand and predict the effect of an antisolvent on the crystallization of a precursor solution in a particular solvent, relying on film darkness as the primary criterion for judging the efficacy of a solvent-antisolvent pair. We found that the relative polarity between solvent and antisolvent is the primary factor governing the solvent-antisolvent interaction. Based on these findings, we prepared several high-quality, DMSO-free tin perovskite films and achieved an efficiency of 9%, the highest for a DMSO-free tin perovskite device so far.
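The screening logic — learning whether a solvent-antisolvent pair yields a dark (crystallized) film from the polarities of the two liquids — can be sketched with a toy classifier. The dataset and the labelling rule below are entirely synthetic stand-ins; only the idea that the polarity difference dominates follows the text, and the thesis's actual algorithm and descriptors may differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Hypothetical screening table: each row is one solvent-antisolvent pair,
# described by relative polarities (0..1 scale) and their absolute difference.
n = 300
solvent_pol = rng.uniform(0.2, 0.9, n)
antisolvent_pol = rng.uniform(0.0, 0.6, n)
X = np.column_stack([solvent_pol, antisolvent_pol,
                     np.abs(solvent_pol - antisolvent_pol)])

# Toy labelling rule standing in for the measured film darkness:
# pairs with a large polarity gap are assumed to crystallize (dark film).
dark_film = (X[:, 2] > 0.35).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, dark_film)

# Predict two unseen pairs: a large polarity gap and a small one.
new_pairs = np.array([[0.85, 0.10, 0.75],
                      [0.50, 0.45, 0.05]])
print(clf.predict(new_pairs))
print(clf.feature_importances_)  # weight on the polarity-difference column
```

A model like this makes the screening outcome queryable for untested pairs, which is the practical value of the machine-learning step described above.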
Organic solar cells (OSCs) represent a new generation of solar cells with a range of captivating attributes, including low cost, light weight, aesthetically pleasing appearance, and flexibility. Unlike traditional silicon solar cells, the photon-to-electron conversion in OSCs is usually accomplished in an active layer formed by blending two kinds of organic molecules (donor and acceptor) with different energy levels.
The first part of this thesis focuses on a better understanding of the role of the energetic offset between charge-transfer (CT) states and excitons, and of each recombination channel, in the performance of these low-offset OSCs. By combining advanced experimental techniques with optical and electrical simulation, several important insights were achieved: 1. The short-circuit current density and fill factor of low-offset systems are largely determined by field-dependent charge generation; there is strong evidence that this field dependence originates from a field-dependent exciton dissociation yield. 2. The reduced energetic offset was found to be accompanied by a strongly enhanced bimolecular recombination coefficient, which cannot be explained solely by exciton repopulation from CT states. This implies the existence of another dark decay channel apart from the CT states.
The second focus of the thesis was technical: the influence of optical artifacts in differential absorption spectroscopy upon changes in sample configuration and active layer thickness. Through optical simulations and experiments, it is exemplified and discussed thoroughly and systematically how optical artifacts originating from non-uniform carrier profiles and interference can distort not only the measured spectra but also the decay dynamics under various measurement conditions. Finally, a generalized methodology based on an inverse optical transfer matrix formalism is provided to correct spectra and decay dynamics distorted by optical artifacts.
Overall, this thesis paves the way for a deeper understanding of the keys toward higher PCEs in low-offset OSC devices, from the perspectives of both device physics and characterization techniques.
Long-term bacteria-fungi-plant associations in permafrost soils inferred from palaeometagenomics
(2024)
The Arctic is warming 2-4 times faster than the global average, resulting in strong feedbacks on northern ecosystems such as boreal forests, which cover a vast area of the high northern latitudes. With ongoing global warming, the treeline is migrating northwards into tundra areas. The consequences of these ecosystem shifts are complex: on the one hand, boreal forests store large amounts of global terrestrial carbon and act as a carbon sink, drawing carbon dioxide out of the global carbon cycle, suggesting enhanced carbon uptake with increased tree cover. On the other hand, with the establishment of trees, the albedo effect of tundra decreases, leading to enhanced soil warming. Meanwhile, permafrost thaws, releasing large amounts of previously stored carbon into the atmosphere. So far, mainly vegetation dynamics have been assessed when studying the impact of warming on ecosystems. Most land plants live in close symbiosis with bacterial and fungal communities, which sustain their growth in nutrient-poor habitats. However, the impact of climate change on these subsoil communities alongside changing vegetation cover remains poorly understood. A better understanding of soil community dynamics on multi-millennial timescales is therefore indispensable when addressing the development of entire ecosystems. Unravelling long-term cross-kingdom dependencies between plants, fungi, and bacteria is not only a milestone for assessing the impact of warming on boreal ecosystems; it is also the basis for agricultural strategies to provide society with sufficient food in a future warming world.
The first objective of this thesis was to assess ancient DNA as a proxy for reconstructing the soil microbiome (Manuscripts I, II, III, IV). Research findings across these projects enable comprehensive new insights into the relationships of soil microorganisms to the surrounding vegetation. First, this was achieved by establishing (Manuscript I) and applying (Manuscript II) a primer pair for the selective amplification of ancient fungal DNA from lake sediment samples with the metabarcoding approach. To assess fungal and plant co-variation, the selected primer combination (ITS67, 5.8S), amplifying the ITS1 region, was applied to samples from five boreal and arctic lakes. The obtained data showed that the establishment of fungal communities is impacted by warming, as the functional ecological groups shift: the dominance of yeasts and saprotrophs during the Late Glacial declined with warming, while the abundance of mycorrhizae and parasites increased. Overall species richness also fluctuated. The results were compared to shotgun sequencing data reconstructing fungi and bacteria (Manuscripts III, IV), yielding results overall comparable to the metabarcoding approach. Nonetheless, the comparison also pointed to a bias in the metabarcoding, potentially due to varying ITS lengths or copy numbers per genome.
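The metabarcoding step — a primer pair selecting a target region such as ITS1 for amplification — can be illustrated with a toy in-silico PCR. The primer and template sequences below are invented for illustration only; they are not the real ITS67/5.8S primers or any real fungal sequence.

```python
# Toy in-silico PCR: locate a forward primer and the reverse complement of a
# reverse primer in a template and return the enclosed amplicon (standing in
# for the ITS1 region between the primer binding sites).
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def revcomp(seq: str) -> str:
    """Reverse complement of a DNA sequence."""
    return seq.translate(COMPLEMENT)[::-1]

def amplify(template: str, fwd: str, rev: str):
    """Return the amplicon (primers included), or None if a site is missing."""
    start = template.find(fwd)
    if start == -1:
        return None
    end = template.find(revcomp(rev), start + len(fwd))
    if end == -1:
        return None
    return template[start:end + len(rev)]

# Made-up primers flanking a made-up 12-bp target region.
fwd, rev = "GGAAGTAA", "GCTGCGTT"
template = "TTTT" + fwd + "ACGTACGTACGT" + revcomp(rev) + "AAAA"
print(amplify(template, fwd, rev))  # primers + enclosed target region
```

Real metabarcoding pipelines additionally allow primer mismatches and work on millions of degraded reads, but the selection principle is the same: only sequences carrying both binding sites are amplified and sequenced.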
The second objective was to trace fungus-plant interaction changes over time (Manuscripts II, III). To address this, metabarcoding targeting the ITS1 region for fungi and the chloroplast P6 loop for plants was applied for selective DNA amplification (Manuscript II), and shotgun sequencing data was compared to the metabarcoding results (Manuscript III). Overall, the results of the metabarcoding and shotgun approaches were comparable, though a bias in the metabarcoding was assumed. We demonstrated that fungal shifts coincided with changes in the vegetation. Yeasts and lichens were dominant during the Late Glacial with tundra vegetation, while warming in the Holocene led to the expansion of boreal forests with increasing mycorrhizae and parasite abundance. In addition, we highlighted that Pinaceae establishment depends on mycorrhizal fungi such as Suillineae, Inocybaceae, or Hyaloscypha species also on long-term scales.
The third objective of the thesis was to assess soil community development on a temporal gradient (Manuscripts III, IV). Shotgun sequencing was applied to sediment samples from the northern Siberian lake Lama, and the soil microbial community dynamics were compared to ecosystem turnover. In parallel, podzolization processes from basaltic bedrock were recovered (Manuscript III). Additionally, the recovered soil microbiome was compared to shotgun data from granite and sandstone catchments (Manuscript IV, Appendix). We assessed whether the establishment of the soil microbiome depends on the plant taxon, and is as such comparable between multiple geographic locations, or whether community establishment is driven by abiotic soil properties and thus the bedrock area. We showed that the development of soil communities is to a great extent driven by vegetation changes and temperature variation, while time plays only a minor role. The analyses showed general ecological similarities, especially between the granite and basalt locations, while the microbiome at species level was rather site-specific. A greater number of correlated soil taxa was detected for deep-rooting boreal taxa than for grasses with shallower roots. Additionally, differences between herbaceous taxa of the Late Glacial and taxa of the Holocene were revealed.
With this thesis, I demonstrate the necessity of investigating subsoil community dynamics on millennial timescales, as it furthers the understanding of long-term ecosystem and soil development processes and thus plant establishment. Further, I trace long-term processes leading to podzolization, which supports the development of applied carbon capture strategies under future global warming.
The paper argues that economists’ position-taking in discourses of crises should be understood in the light of economists’ positions in the academic field of economics. This hypothesis is investigated by performing a multiple correspondence analysis (MCA) on a prosopographical data set of 144 French economists who positioned themselves between 2008 and 2021 in controversies over the euro crisis, the French political economic model, and French economics. In these disciplinary controversies, different forms of (post-)national academic capital are used by economists to either initiate change or defend the status quo. These strategies are then interpreted as part of more general power struggles over the basic national or post-national constitution and legitimate governance of economy and society.
The biosecurity individual
(2024)
Discoveries in biomedicine and biotechnology, especially in diagnostics, have made prevention and (self)surveillance increasingly important in the context of health practices. Frederike Offizier offers a cultural critique of the intersection between health, security and identity, and explores how the focus on risk and security changes our understanding of health and transforms our relationship to our bodies. Analyzing a wide variety of texts, from life writing to fiction, she offers a critical intervention on how this shift in the medical gaze produces new paradigms of difference and new biomedically facilitated identities: biosecurity individuals.
Economic crises as critical junctures for policy and structural changes towards decarbonization
(2024)
Crises may act as tipping points for decarbonization pathways by triggering structural economic change or offering windows of opportunity for policy change. We investigate both types of effects of the global financial and COVID-19 crises on decarbonization in Spain and Germany through a quantitative Kaya-decomposition analysis of CO2 emissions and through a qualitative review of climate and energy policy changes. We show that the global financial crisis resulted in a critical juncture for Spanish CO2 emissions due to the combined effects of the deep economic recession and crisis-induced structural change, resulting in reductions in carbon and energy intensities and shifts in the economic structure. However, the crisis also resulted in a rollback of renewable energy policy, halting progress in the transition to green electricity. The impacts were less pronounced in Germany, where pre-existing decarbonization and policy trends continued after the crisis. Recovery packages had modest effects, primarily due to their temporary nature and the limited share of climate-related spending. The direct short-term impacts of the COVID-19 crisis on CO2 emissions were more substantial in Spain than in Germany. The policy responses in both countries sought to align short-term economic recovery with the long-term climate change goals of decarbonization, but it is too soon to observe their lasting effects. Our findings show that crises can affect structural change and support decarbonization but suggest that such effects depend on pre-existing trends, the severity of the crisis and political manoeuvring during the crisis.
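A Kaya-identity decomposition of the kind used in the quantitative analysis splits a change in CO2 emissions into contributions from population, affluence (GDP per capita), energy intensity (energy per GDP), and carbon intensity (CO2 per energy). Below is a minimal additive (LMDI-I) sketch with hypothetical pre-/post-crisis numbers; the paper's actual data and decomposition variant may differ.

```python
import math

def lmdi_kaya(y0, y1):
    """Additive LMDI-I decomposition of a CO2 change into Kaya factors.

    y = (population, GDP, primary energy, CO2); units are arbitrary but
    must be consistent between y0 and y1.
    """
    def factors(p, g, e, c):
        # population, affluence, energy intensity, carbon intensity
        return [p, g / p, e / g, c / e]

    c0, c1 = y0[3], y1[3]
    # logarithmic mean weight; degenerates to c0 when emissions are unchanged
    L = (c1 - c0) / (math.log(c1) - math.log(c0)) if c1 != c0 else c0
    f0, f1 = factors(*y0), factors(*y1)
    return [L * math.log(a1 / a0) for a0, a1 in zip(f0, f1)]

# Hypothetical pre-/post-crisis national accounts (not Spain's or Germany's data).
before = (46.0, 1100.0, 140.0, 330.0)
after = (47.0, 1050.0, 125.0, 270.0)
contrib = lmdi_kaya(before, after)
print([round(x, 1) for x in contrib])
print(round(sum(contrib), 6), after[3] - before[3])  # contributions sum to the CO2 change
```

The attraction of the LMDI form is exactness: the four factor contributions sum to the observed emission change without residual, so a crisis-induced drop can be attributed cleanly to recession (affluence), structural change (energy intensity), or decarbonization of supply (carbon intensity).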
How do the rights of same-sex couples have to be ensured by states, and which kind of environmental obligations are induced by the right to life and to personal integrity? Questions as diverse and far-reaching as these are regularly dealt with by the Inter-American Court of Human Rights in its advisory function. This book is the first comprehensive, non-Spanish-written treatise on the advisory function of this Court. It analyzes the scope of the Court's advisory jurisdiction and its procedural practice in comparison with that of other international courts. Moreover, the legal effects of the Court’s advisory opinions and the question when the Court should better reject a request for an advisory opinion are examined.
Captive Red Army soldiers made up the majority of victims of Nazi Germany’s starvation policy against Soviet civilians and other non-combatants and thus constituted the largest single victim group of the German war of annihilation against the Soviet Union. Indeed, Soviet prisoners of war were the largest victim group of all National Socialist annihilation policies after the European Jews. Before the launch of Operation Barbarossa, it was clear to the Wehrmacht planning departments on exactly what scale they could expect to capture Soviet troops. Yet, they neglected to make the necessary preparations for feeding and sheltering the captured soldiers, who were viewed by the economic staffs and the military leadership alike as direct competitors of German troops and the German home front for precious food supplies. The number of extra mouths to feed was incompatible with German war aims. The obvious limitations on their freedom of movement and the relative ease with which large numbers could be segregated and their rations controlled were crucial factors in the death of over 3 million Soviet POWs, the vast majority directly or indirectly as a result of deliberate policies of neglect, undernourishment, and starvation while in the ‘care’ of the Wehrmacht. The most reliable figures for the mortality of Soviet POWs in German captivity reveal that up to 3.3 million died from a total of just over 5.7 million captured between June 1941 and February 1945 — a proportion of almost 58 percent. Of these, 2 million were already dead by the beginning of February 1942. In English, there is still neither a single monograph nor a single edited volume dedicated to the subject. This article now provides the first detailed stand-alone synthesis in that language addressing the whole period from 1941 to 1945.
Enterprise Resource Planning (ERP) system customization is often necessary because companies have unique processes that provide their competitive advantage. Despite new technological advances such as cloud computing and model-driven development, technical ERP customization options are either outdated or ambiguously formulated in the scientific literature. Using a systematic literature review (SLR) analyzing 137 definitions from 26 papers, this study analyzes and aggregates technical customization types, providing clarity and aligning them with future organizational needs. The results show a shift from ERP code modification in on-premises systems to interface and integration customization in cloud ERP systems, as well as emerging technological opportunities for customers and key users to perform system customization themselves. The study contributes by providing a clear understanding of the given customization types and by assisting ERP users and vendors in making customization decisions.
The African weakly electric fishes (Mormyridae) exhibit a remarkable adaptive radiation, possibly due to their species-specific electric organ discharges (EODs). The EOD is produced by a muscle-derived electric organ located in the caudal peduncle. Divergence in EODs acts as a pre-zygotic isolation mechanism driving species radiations. However, the mechanisms behind EOD diversification are only partially understood.
The aim of this study is to explore the genetic basis of EOD diversification at the gene expression level across Campylomormyrus species/hybrids and ontogeny. First, I produced a high-quality genome of the species C. compressirostris as a valuable resource for understanding electric fish evolution.
The next study compared the gene expression patterns of electric organs and skeletal muscle in Campylomormyrus species/hybrids with different EOD durations. I identified several candidate genes with electric organ-specific expression, e.g. KCNA7a, KLF5, KCNJ2, SCN4aa, NDRG3, and MEF2. The overall gene expression pattern exhibited a significant association with EOD duration in all analyzed species/hybrids. The expression of several candidate genes, e.g. KCNJ2, KLF5, KCNK6 and KCNQ5, possibly contributes to the regulation of EOD duration in Campylomormyrus through their increasing or decreasing expression. Several potassium channel genes, e.g. KCNJ2, showed differential expression during ontogeny in species and hybrids with EOD alteration.
I next explored allele-specific expression in intragenus hybrids obtained by crossing the short-duration EOD species C. compressirostris with the medium-duration EOD species C. tshokwe and the elongated-duration EOD species C. rhynchophorus. The hybrids exhibited global expression dominance of the C. compressirostris allele in the adult skeletal muscle and electric organ, as well as in the juvenile electric organ. Only the gene KCNJ2 showed dominant expression of the allele from C. rhynchophorus, increasingly so during ontogeny. This supports our hypothesis that KCNJ2 is a key gene in regulating EOD duration. Our results help us to understand, from a genetic perspective, how gene expression affects EOD diversification in the African weakly electric fish.
This thesis focuses on the molecular evolution of Macroscelidea, commonly referred to as sengis. Sengis are a mammalian order belonging to the Afrotherians, one of the four major clades of placental mammals. Sengis currently comprise twenty extant species, all endemic to the African continent. They can be separated into two families, the soft-furred sengis (Macroscelididae) and the giant sengis (Rhynchocyonidae). While giant sengis are found exclusively in forest habitats, the different soft-furred sengi species dwell in a broad range of habitats, from tropical rainforests to rocky deserts.
Our knowledge of the evolutionary history of sengis is largely incomplete. The high level of superficial morphological resemblance among different sengi species (especially the soft-furred sengis) has, for example, led to misinterpretations of phylogenetic relationships based on morphological characters. With the rise of DNA-based taxonomic inference, multiple new genera were defined and new species described. Yet no full-taxon molecular phylogeny exists, hampering the answering of basic taxonomic questions. This lack of knowledge can to some extent be attributed to the limited availability of fresh-tissue samples for DNA extraction: the broad African distribution, partly in politically unstable regions, and low population densities complicate contemporary sampling approaches. Furthermore, the available DNA information usually covers only short stretches of the mitochondrial genome, and thus a single genetic locus with limited informational content.
Developments in DNA extraction and library protocols nowadays offer the opportunity to access DNA from museum specimens collected over the past centuries and stored in natural history museums throughout the world. The difficulties in fresh-sample acquisition for molecular biological studies can thus be overcome by applying museomics, the research field that emerged from these laboratory developments.
This thesis uses fresh-tissue samples as well as a vast collection of museum specimens to investigate multiple aspects of macroscelidean evolutionary history. Chapter 4 focuses on the phylogenetic relationships of all currently known sengi species. By accessing DNA information from museum specimens in combination with fresh-tissue samples and publicly available genetic resources, it produces the first full-taxon molecular phylogeny of sengis. It confirms the monophyly of the genus Elephantulus and discovers multiple deeply divergent lineages within different species, highlighting the need for species-specific approaches. The study furthermore focuses on the evolutionary time frame of sengis by evaluating the impact of commonly varied parameters on tree dating. The results show that the mitochondrial information used in previous studies to temporally calibrate the macroscelidean phylogeny led to an overestimation of node ages within sengis. Soft-furred sengis in particular are thus much younger than previously assumed. The refined knowledge of node ages within sengis offers the opportunity to link, for example, speciation events to environmental changes.
Chapter 5 focuses on the genus Petrodromus with its single representative, Petrodromus tetradactylus. It again exploits the opportunities of museomics and gathers a comprehensive, multi-locus genetic dataset of P. tetradactylus individuals distributed across most of the known range of this species. It reveals multiple deeply divergent lineages within Petrodromus, some of which could possibly be associated with previously described subspecies, while at least one was formerly unknown. It underscores the necessity of revising the genus Petrodromus through the integration of both molecular and morphological evidence. The study furthermore identifies changing forest distributions through climatic oscillations as the main factor shaping the genetic structure of Petrodromus.
Chapter 6 uses fresh tissue samples to extend the genomic resources of sengis by thirteen new nuclear genomes, two of which were assembled de novo. An extensive dataset of more than 8000 protein-coding one-to-one orthologs allows us to further refine and confirm the time frame of sengi evolution found in Chapter 4. This study moreover investigates the role of gene flow and incomplete lineage sorting (ILS) in sengi evolution. In addition, it identifies clade-specific genes of possibly outstanding evolutionary importance and links them to the phenotypic traits they potentially affect. A closer investigation of olfactory receptor proteins reveals clade-specific differences. A comparison of the demographic past of sengis with that of other small African mammals does not reveal a sengi-specific pattern.
Openness indicators for the evaluation of digital platforms between the launch and maturity phase
(2024)
In recent years, the evaluation of digital platforms has become an important focus in the field of information systems science. The identification of influential indicators that drive changes in digital platforms, specifically those related to openness, is still an unresolved issue. This paper addresses the challenge of identifying measurable indicators and characterizing the transition from launch to maturity in digital platforms. It proposes a systematic analytical approach to identify relevant openness indicators for evaluation purposes. The main contributions of this study are the following: (1) the development of a comprehensive procedure for analyzing indicators, (2) the categorization of indicators as evaluation metrics within a multidimensional grid-box model, (3) the selection and evaluation of relevant indicators, (4) the identification and assessment of digital platform architectures during the launch-to-maturity transition, and (5) the evaluation of the applicability of the conceptualization and design process for digital platform evaluation.
Nils-Hendrik Grohmann examines the ongoing strengthening process of the UN human rights treaty bodies. He analyses which legal powers the committees have, whether they can put forward proposals on their own initiative, and to what extent they have so far aligned their working procedures with one another. A further focus lies on the cooperation between the various committees and on the question of what role the meeting of chairpersons can play in the strengthening process.
Background
Many high-income countries are grappling with severe labour shortages in the healthcare sector. Refugees and recent migrants present a potential pool for staff recruitment due to their higher unemployment rates, younger age, and lower average educational attainment compared to the host society's labour force. Despite this, refugees and recent migrants, often possessing limited language skills in the destination country, are frequently excluded from traditional recruitment campaigns conducted solely in the host country’s language. Even those with intermediate language skills may feel excluded, as destination-country language advertisements are perceived as targeting only native speakers. This study experimentally assesses the effectiveness of a recruitment campaign for nursing positions in a German care facility, specifically targeting Arabic and Ukrainian speakers through Facebook advertisements.
Methods
We employ an experimental design (AB test) approximating a randomized controlled trial, utilizing Facebook as the delivery platform. We compare job advertisements for nursing positions in the native languages of Arabic and Ukrainian speakers (treatment) with the same advertisements displayed in German (control) for the same target group in the context of a real recruitment campaign for nursing jobs in Berlin, Germany. Our evaluation includes comparing link click rates, visits to the recruitment website, initiated applications, and completed applications, along with the unit cost of these indicators. We assess statistical significance in group differences using the Chi-squared test.
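The group comparison described above reduces to a chi-squared test of independence on a 2x2 contingency table (clicks vs. non-clicks, treatment vs. control). A minimal sketch, using entirely hypothetical click counts rather than the study's actual figures:

```python
# Chi-squared test of independence for a 2x2 contingency table, as might
# be used to compare ad click rates between treatment and control.
# All counts below are hypothetical illustration data.

def chi2_2x2(a, b, c, d):
    """Return the chi-squared statistic for the table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    stat = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        expected = row * col / n  # expected count under independence
        stat += (obs - expected) ** 2 / expected
    return stat

# Hypothetical counts: clicks / non-clicks for treatment vs. control ads.
stat = chi2_2x2(560, 9440, 190, 9810)
significant = stat > 3.84  # 5% critical value for 1 degree of freedom
```

With these made-up counts the statistic far exceeds the 5% critical value, i.e. the click rates would differ significantly between the two ad versions.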
Results
We find that recruitment efforts in the origin language were 5.6 times (Arabic speakers) and 1.9 times (Ukrainian speakers) more effective in initiating nursing job applications than the standard model of German-only advertisements among recent migrants and refugees. Overall, targeting refugees and recent migrants was 2.4 times (Ukrainian speakers) and 10.8 times (Arabic speakers) cheaper than targeting the reference group of German speakers, indicating higher interest among these groups.
Conclusions
The results underscore the substantial benefits for employers in utilizing targeted recruitment via social media aimed at foreign-language communities within the country. This strategy, which is low-cost and low-effort compared to recruiting abroad or investing in digitalization, has the potential for broad applicability in numerous high-income countries with sizable migrant communities. Increased employment rates among underemployed refugee and migrant communities, in turn, contribute to reducing poverty, social exclusion, and public expenditure, and foster greater acceptance of newcomers within the receiving society.
This thesis presents an attempt to use source code synthesised from Coq formalisations of device drivers for existing (micro)kernel operating systems, with a particular focus on the Linux Kernel.
In the first part, the technical background and related work are described. The focus here is on the possible approaches to synthesising certified software with Coq, namely the extraction to functional languages using the Coq extraction plugin and the extraction to Clight code using the CertiCoq plugin. It is noted that the implementation of CertiCoq is verified, whereas this is not the case for the Coq extraction plugin. Consequently, there is a correctness guarantee for the generated Clight code which does not hold for the code generated by the Coq extraction plugin. Furthermore, the differences between user space and kernel space software are discussed in relation to Linux device drivers. It is elaborated that it is not possible to generate working Linux kernel module components using the Coq extraction plugin without significant modifications. In contrast, it is possible to produce working user space drivers both with the Coq extraction plugin and with CertiCoq. The subsequent parts describe the main contributions of the thesis.
In the second part, it is demonstrated how to extend the Coq extraction plugin to synthesise foreign function calls between the functional language OCaml and the imperative language C. This approach has the potential to improve the type-safety of user space drivers. Furthermore, it is shown that the code being synthesised by CertiCoq cannot be used in kernel space without modifications to the necessary runtime. Consequently, the necessary modifications to the runtimes of CertiCoq and VeriFFI are introduced, resulting in the runtimes becoming compatible components of a Linux kernel module. Furthermore, justifications for the transformations are provided and possible further extensions to both plugins and solutions to failing garbage collection calls in kernel space are discussed.
The third part presents a proof-of-concept device driver for the Linux Kernel. To achieve this, the event handler of the original PC Speaker driver is partially formalised in Coq. Furthermore, some relevant formal properties of the formalised functionality are discussed. Subsequently, a kernel module is defined, utilising the modified variants of CertiCoq and VeriFFI to compile a working device driver. It is furthermore shown that it is possible to compile the synthesised code with CompCert, thereby extending the guarantee of correctness to the assembly layer. This is followed by a performance evaluation that compares a naive formalisation of the PC Speaker functionality with the original PC Speaker driver, pointing out the weaknesses in the formalisation and possible improvements. The part closes with a summary of the results, their implications, and the open questions they raise.
The last part lists all sources used, separated into scientific literature, documentation and reference manuals, and artifacts, i.e. source code.
Deep learning has seen widespread application in many domains, mainly for its ability to learn data representations from raw input data. Nevertheless, its success has so far been coupled with the availability of large annotated (labelled) datasets. This is a requirement that is difficult to fulfil in several domains, such as in medical imaging. Annotation costs form a barrier in extending deep learning to clinically-relevant use cases. The labels associated with medical images are scarce, since the generation of expert annotations of multimodal patient data at scale is non-trivial, expensive, and time-consuming. This substantiates the need for algorithms that learn from the increasing amounts of unlabeled data. Self-supervised representation learning algorithms offer a pertinent solution, as they allow solving real-world (downstream) deep learning tasks with fewer annotations. Self-supervised approaches leverage unlabeled samples to acquire generic features about different concepts, enabling annotation-efficient downstream task solving subsequently.
Nevertheless, medical images present multiple unique and inherent challenges for existing self-supervised learning approaches, which we seek to address in this thesis: (i) medical images are multimodal, and their multiple modalities are heterogeneous in nature and imbalanced in quantities, e.g. MRI and CT; (ii) medical scans are multi-dimensional, often in 3D instead of 2D; (iii) disease patterns in medical scans are numerous and their incidence exhibits a long-tail distribution, so it is oftentimes essential to fuse knowledge from different data modalities, e.g. genomics or clinical data, to capture disease traits more comprehensively; (iv) medical scans usually exhibit more uniform color density distributions, e.g. in dental X-Rays, than natural images. Our proposed self-supervised methods meet these challenges, besides significantly reducing the amounts of required annotations.
We evaluate our self-supervised methods on a wide array of medical imaging applications and tasks. Our experimental results demonstrate the obtained gains in both annotation-efficiency and performance; our proposed methods outperform many approaches from the related literature. Additionally, in the case of fusion with genetic modalities, our methods also allow for cross-modal interpretability. In this thesis, we not only show that self-supervised learning is capable of mitigating manual annotation costs, but our proposed solutions also demonstrate how to better utilize it in the medical imaging domain. Progress in self-supervised learning has the potential to extend the application of deep learning algorithms to clinical scenarios.
What does the future hold for corporate communications? The Communications Trend Radar is an applied research project. On an annual basis, it identifies relevant trends for corporate communications from the fields of society, management, and technology. The research team at the University of Potsdam (Professor Stefan Stieglitz, Sünje Clausen, M.Sc.) and Leipzig University (Professor Ansgar Zerfass, Dr Michelle Wloka) identified the following trends for 2024: Information Inflation, AI Literacy, Workforce Shift, Content Integrity, and Decoding Humans. More information on the trends can be found in the Communications Trend Radar Report 2024.
Virtual reality promises high potential as an immersive, hands-on learning tool for training 21st-century skills. However, previous research revealed that the mere use of digital tools in higher education does not automatically translate into learning outcomes. Instead, information systems studies emphasized the importance of effective use behavior to achieve technology usage goals. Applying the affordance network approach, we investigated what constitutes effective usage behavior regarding a virtual reality collaboration system in digital education. Therefore, we conducted 18 interviews with students and observations of six course sessions. The results uncover how affordance actualization contributed to the achievement of learning goals. A comparison with findings of previous studies on other information systems (i.e., electronic medical record systems, big data analytics, fitness wearables) allowed us to highlight system-specific differences in effective use behavior. We also demonstrated a clear distinction between concepts surrounding effective use theory facilitating the application of the affordance network approach in information systems research.
Social media constitute an important arena for public debates and steady interchange of issues relevant to society. To boost their reputation, commercial organizations also engage in political, social, or environmental debates on social media. To engage in this type of digital activism, organizations increasingly utilize the social media profiles of executive employees and other brand ambassadors. However, the relationship between brand ambassadors’ digital activism and corporate reputation is only vaguely understood. The results of a qualitative inquiry suggest that digital activism via brand ambassadors can be risky (e.g., creating additional surface for firestorms, financial loss) and rewarding (e.g., emitting authenticity, employing ‘megaphones’ for industry change) at the same time. The paper informs both scholarship and practitioners about strategic trade-offs that need to be considered when employing brand ambassadors for digital activism.
Breaking down barriers
(2024)
Many researchers hesitate to provide full access to their datasets due to a lack of knowledge about research data management (RDM) tools and perceived fears, such as losing the value of one's own data. Existing tools and approaches often do not take into account these fears and missing knowledge. In this study, we examined how conversational agents (CAs) can provide a natural way of guidance through RDM processes and nudge researchers towards more data sharing. This work offers an online experiment in which researchers interacted with a CA on a self-developed RDM platform and a survey on participants’ data sharing behavior. Our findings indicate that the presence of a guiding and enlightening CA on an RDM platform has a constructive influence on both the intention to share data and the actual behavior of data sharing. Notably, individual factors do not appear to impede or hinder this effect.
Our dignity in your hands
(2024)
The Jewish population of early modern Italy was characterised by its inner diversity, which found its expression in the coexistence of various linguistic, cultural and liturgical traditions, as well as social and economic patterns. The contributions in this volume aim to explore crucial questions concerning the self-perception and identity of early modern Italian Jews from new perspectives and angles.
Purpose
The purpose of this study was to investigate work-related adaptive performance from a longitudinal process perspective. This paper clustered specific behavioral patterns following the introduction of a change and related them to retentivity as an individual cognitive ability. In addition, this paper investigated whether the occurrence of adaptation errors varied depending on the type of change content.
Design/methodology/approach
Data from 35 participants collected in the simulated manufacturing environment of a Research and Application Center Industry 4.0 (RACI) were analyzed. The participants were required to learn and train a manufacturing process in the RACI and through an online training program. At a second measurement point in the RACI, specific manufacturing steps were subject to change and participants had to adapt their task execution. Adaptive performance was evaluated by counting the adaptation errors.
Findings
The participants showed one of the following behavioral patterns: (1) no adaptation errors, (2) few adaptation errors, (3) repeated adaptation errors regarding the same actions, or (4) many adaptation errors distributed over many different actions. The latter ones had a very low retentivity compared to the other groups. Most of the adaptation errors were made when new actions were added to the manufacturing process.
Originality/value
Our study adds empirical research on adaptive performance and its underlying processes. It contributes to a detailed understanding of different behaviors in change situations and derives implications for organizational change management.
To achieve the Paris climate target, deep emissions reductions have to be complemented with carbon dioxide removal (CDR). However, a portfolio of CDR options is necessary to reduce risks and potential negative side effects. Despite a large theoretical potential, ocean-based CDR such as ocean alkalinity enhancement (OAE) has been omitted in climate change mitigation scenarios so far. In this study, we provide a techno-economic assessment of large-scale OAE using hydrated lime ('ocean liming'). We address key uncertainties that determine the overall cost of ocean liming (OL) such as the CO2 uptake efficiency per unit of material, distribution strategies avoiding carbonate precipitation which would compromise efficiency, and technology availability (e.g., solar calciners). We find that at economic costs of 130–295 $/tCO2 net-removed, ocean liming could be a competitive CDR option which could make a significant contribution towards the Paris climate target. As the techno-economic assessment identified no showstoppers, we argue for more research on ecosystem impacts, governance, monitoring, reporting, and verification, and technology development and assessment to determine whether ocean liming and other OAE should be considered as part of a broader CDR portfolio.
Today, near-surface investigations are frequently conducted using non-destructive or minimally invasive methods of applied geophysics, particularly in the fields of civil engineering, archaeology, geology, and hydrology. One field that plays an increasingly central role in research and engineering is the examination of sedimentary environments, for example, for characterizing near-surface groundwater systems. A commonly employed method in this context is ground-penetrating radar (GPR). In this technique, short electromagnetic pulses are emitted into the subsurface by an antenna, which are then reflected, refracted, or scattered at contrasts in electromagnetic properties (such as the water table). A receiving antenna records these signals in terms of their amplitudes and travel times. Analysis of the recorded signals allows for inferences about the subsurface, such as the depth of the groundwater table or the composition and characteristics of near-surface sediment layers. Due to the high resolution of the GPR method and continuous technological advancements, GPR data acquisition is increasingly performed in three-dimensional (3D) fashion today.
Despite the considerable temporal and technical efforts involved in data acquisition and processing, the resulting 3D data sets (providing high-resolution images of the subsurface) are typically interpreted manually. This is generally an extremely time-consuming analysis step. Therefore, representative 2D sections highlighting distinctive reflection structures are often selected from the 3D data set. Regions showing similar structures are then grouped into so-called radar facies. The results obtained from 2D sections are considered representative of the entire investigated area. Interpretations conducted in this manner are often incomplete and highly dependent on the expertise of the interpreters, making them generally non-reproducible.
A promising alternative or complement to manual interpretation is the use of GPR attributes. Instead of using the recorded data directly, derived quantities characterizing distinctive reflection structures in 3D are applied for interpretation. Using various field and synthetic data sets, this thesis investigates which attributes are particularly suitable for this purpose. Additionally, the study demonstrates how selected attributes can be utilized through specific processing and classification methods to create 3D facies models. The ability to generate attribute-based 3D GPR facies models allows for partially automated and more efficient interpretations in the future. Furthermore, the results obtained in this manner describe the subsurface in a reproducible and more comprehensive manner than what has typically been achievable through manual interpretation methods.
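To make the notion of a GPR attribute concrete, the sketch below computes one common single-trace attribute, the RMS amplitude ("energy") in a sliding window. The synthetic trace and window length are arbitrary choices for illustration; real attribute workflows operate on full 3D data volumes:

```python
# Sliding-window RMS amplitude attribute for a single GPR trace.
# Illustrative sketch only; trace and window length are made up.
import math

def rms_attribute(trace, window):
    """RMS amplitude of `trace` in a centred sliding window."""
    half = window // 2
    out = []
    for i in range(len(trace)):
        lo, hi = max(0, i - half), min(len(trace), i + half + 1)
        seg = trace[lo:hi]  # window is truncated at the trace ends
        out.append(math.sqrt(sum(s * s for s in seg) / len(seg)))
    return out

# Synthetic trace: quiet background with one strong reflection at sample 10.
trace = [0.0] * 20
trace[10] = 1.0
attr = rms_attribute(trace, window=5)
```

High attribute values flag samples near strong reflections; grouping grid cells by such attribute values is the basis of the facies-classification step described above.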
Purpose
With shorter product cycles and a growing number of knowledge-intensive business processes, time consumption is a highly relevant target factor in measuring the performance of contemporary business processes. This research aims to extend prior research on the effects of knowledge transfer velocity at the individual level by considering the effect of complexity, stickiness, competencies, and further demographic factors on knowledge-intensive business processes at the conversion-specific level.
Design/methodology/approach
We empirically assess the impact of situation-dependent knowledge transfer velocities on time consumption in teams and individuals. Further, we examine the demographic effect on this relationship. We study a sample of 178 experiments with project teams and individuals, applying ordinary least squares (OLS) regression for modeling.
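As a minimal illustration of the OLS modeling step, the following sketch fits a one-predictor regression using the closed-form normal equations; the variables and data are hypothetical and not taken from the study:

```python
# One-predictor ordinary least squares fit via the closed-form solution.
# Data are hypothetical: a task-complexity score vs. minutes consumed.

def ols_fit(x, y):
    """Return (intercept, slope) minimising the sum of squared residuals."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx          # Cov(x, y) / Var(x)
    return my - slope * mx, slope

complexity = [1, 2, 3, 4, 5]       # hypothetical complexity scores
minutes    = [12, 15, 21, 24, 28]  # hypothetical time consumption
intercept, slope = ols_fit(complexity, minutes)
```

A positive slope here would correspond to higher complexity scores being associated with more time consumed; the study's actual multi-factor model would add further predictors in the same fashion.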
Findings
The authors find that time consumed at knowledge transfers is negatively associated with the complexity of tasks. Moreover, competence among team members has a complementary effect on this relationship and stickiness retards knowledge transfers. Thus, while demographic factors urgently need to be considered for effective and speedy knowledge transfers, these influencing factors should be addressed on a conversion-specific basis so that some tasks are realized in teams best while others are not. Guidelines and interventions are derived to identify best task realization variants, so that process performance is improved by a new kind of process improvement method.
Research limitations/implications
This study establishes empirically the importance of conversion-specific influence factors and demographic factors as drivers of high knowledge transfer velocities in teams and among individuals. The contribution connects the field of knowledge management to important streams in the wider business literature: process improvement, management of knowledge resources, design of information systems, etc. Although the model is closely bound to the experimental tasks, it has high explanatory power and high generalizability to other contexts.
Practical implications
Team managers should take care to allow the optimal knowledge transfer situation within the team. This is particularly important when knowledge sharing is central, e.g. in product development and consulting processes. If this is not possible, interventions should be applied to the individual knowledge transfer situation to improve knowledge transfers among team members.
Social implications
Faster and more effective knowledge transfers improve the performance of both commercial and non-commercial organizations. As nowadays, the individual is faced with time pressure to finalize tasks, the deliberated increase of knowledge transfer velocity is a core capability to realize this goal. Quantitative knowledge transfer models result in more reliable predictions about the duration of knowledge transfers. These allow the target-oriented modification of knowledge transfer situations so that processes speed up, private firms are more competitive and public services are faster to citizens.
Originality/value
Time consumption is an increasingly relevant factor in contemporary business but has so far not been explored in experiments at all. This study extends current knowledge by considering quantitative effects on knowledge transfer velocity and improved knowledge transfers.
Improving permafrost dynamics in land surface models: insights from dual sensitivity experiments
(2024)
The thawing of permafrost and the subsequent release of greenhouse gases constitute one of the most significant and uncertain positive feedback loops in the context of climate change, making predictions regarding changes in permafrost coverage of paramount importance. To address these critical questions, climate scientists have developed Land Surface Models (LSMs) that encompass a multitude of physical soil processes. This thesis is committed to advancing our understanding and refining precise representations of permafrost dynamics within LSMs, with a specific focus on the accurate modeling of heat fluxes, an essential component for simulating permafrost physics.
The first research question surveys fundamental model prerequisites for the representation of permafrost soils within land surface modeling. It includes a first-of-its-kind comparison between LSMs in CMIP6 to reveal their differences and shortcomings in key permafrost physics parameters. Overall, each of these LSMs represents a unique approach to simulating soil processes and their interactions with the climate system. Choosing the most appropriate model for a particular application depends on factors such as the spatial and temporal scale of the simulation, the specific research question, and the available computational resources.
The second research question evaluates the performance of the state-of-the-art Community Land Model (CLM5) in simulating Arctic permafrost regions. Our approach overcomes traditional evaluation limitations by individually addressing depth, seasonality, and regional variations, providing a comprehensive assessment of permafrost and soil temperature dynamics. I compare CLM5's results with three extensive datasets: (1) soil temperatures from 295 borehole stations, (2) active layer thickness (ALT) data from the Circumpolar Active Layer Monitoring Network (CALM), and (3) soil temperatures, ALT, and permafrost extent from the ESA Climate Change Initiative (ESA-CCI). The results show that CLM5 aligns well with ESA-CCI and CALM for permafrost extent and ALT but reveals a significant global cold temperature bias, notably over Siberia. These results echo a persistent challenge identified in numerous studies: the existence of a systematic 'cold bias' in soil temperature over permafrost regions. To address this challenge, the following research questions propose dual sensitivity experiments.
The third research question represents the first study to apply a Plant Functional Type (PFT)-based approach to derive soil texture and soil organic matter (SOM), departing from the conventional use of coarse-resolution global data in LSMs. This novel method results in a more uniform distribution of soil organic matter density (OMD) across the domain, characterized by reduced OMD values in most regions. However, changes in soil texture exhibit a more intricate spatial pattern. Comparing the results to observations reveals a significant reduction in the cold bias observed in the control run. This method shows noticeable improvements in permafrost extent, but at the cost of an overestimation in ALT. These findings emphasize the model's high sensitivity to variations in soil texture and SOM content, highlighting the crucial role of soil composition in governing heat transfer processes and shaping the seasonal variation of soil temperatures in permafrost regions.
Expanding upon a site experiment conducted in Trail Valley Creek by Dutch et al. (2022), the fourth research question extends the application of the snow scheme proposed by Sturm et al. (1997) to cover the entire Arctic domain. By employing a snow scheme better suited to the snow density profile observed over permafrost regions, this thesis seeks to assess its influence on simulated soil temperatures. Comparing this method to observational datasets reveals a significant reduction in the cold bias that was present in the control run. In most regions, the Sturm run exhibits a substantial decrease in the cold bias. However, there is a distinctive overshoot with a warm bias observed in mountainous areas. The Sturm experiment effectively addressed the overestimation of permafrost extent in the control run, albeit resulting in a substantial reduction in permafrost extent over mountainous areas. ALT results remain relatively consistent compared to the control run. These outcomes align with our initial hypothesis, which anticipated that the reduced snow insulation in the Sturm run would lead to higher winter soil temperatures and a more accurate representation of permafrost physics.
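The core of the Sturm et al. (1997) scheme is a density-dependent effective thermal conductivity for snow. The sketch below uses the regression coefficients as they are commonly quoted from that paper; it is an illustration of the relationship, not a reproduction of the CLM5 implementation:

```python
# Effective snow thermal conductivity as a function of density, following
# the regression commonly quoted from Sturm et al. (1997). Coefficients
# as usually reported (density rho in g/cm^3, conductivity in W m^-1 K^-1);
# treat this as an illustrative sketch, not the exact CLM5 scheme.

def snow_conductivity(rho):
    """Effective snow thermal conductivity for density rho in g/cm^3."""
    if rho < 0.156:
        return 0.023 + 0.234 * rho            # low-density branch
    return 0.138 - 1.01 * rho + 3.233 * rho ** 2  # quadratic branch

k_fresh = snow_conductivity(0.10)  # light, fresh snow
k_dense = snow_conductivity(0.40)  # wind-packed snow
```

Denser snow conducts heat better and therefore insulates the soil less, which is why a density profile biased toward denser snow over permafrost regions raises simulated winter soil temperatures, as described above.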
In summary, this thesis demonstrates significant advancements in understanding permafrost dynamics and its integration into LSMs. It has meticulously unraveled the intricacies involved in the interplay between heat transfer, soil properties, and snow dynamics in permafrost regions. These insights offer novel perspectives on model representation and performance.
Homomorphisms are a fundamental concept in mathematics expressing the similarity of structures. They provide a framework that captures many of the central problems of computer science with close ties to various other fields of science. Thus, many studies over the last four decades have been devoted to the algorithmic complexity of homomorphism problems. Despite their generality, it has been found that non-uniform homomorphism problems, where the target structure is fixed, frequently feature complexity dichotomies. Exploring the limits of these dichotomies represents the common goal of this line of research.
We investigate the problem of counting homomorphisms to a fixed structure over a finite field of prime order and its algorithmic complexity. Our emphasis is on graph homomorphisms and the resulting problem #_{p}Hom[H] for a graph H and a prime p. The main research question is how counting over a finite field of prime order affects the complexity.
In the first part of this thesis, we tackle the research question in its generality and develop a framework for studying the complexity of counting problems based on category theory. In the absence of problem-specific details, results in the language of category theory provide a clear picture of the properties needed and highlight common ground between different branches of science. The proposed problem #Mor^{C}[B] of counting the number of morphisms to a fixed object B of C is abstract in nature and encompasses important problems like constraint satisfaction problems, which serve as a leading example for all our results. We find explanations and generalizations for a plethora of results in counting complexity. Our main technical result is that specific matrices of morphism counts are non-singular. The strength of this result lies in its algebraic nature. First, our proofs rely on carefully constructed systems of linear equations, which we know to be uniquely solvable. Second, by exchanging the field over which the matrix is defined for a finite field of order p, we obtain analogous results for modular counting. For the latter, cancellations are implied by automorphisms of order p, but intriguingly we find that these present the only obstacle to translating our results from exact counting to modular counting. If we restrict our attention to reduced objects without automorphisms of order p, we obtain results analogous to those for exact counting. This is underscored by a confluent reduction that allows this restriction by constructing a reduced object for any given object. We emphasize the strength of the categorical perspective by applying the duality principle, which yields immediate consequences for the dual problem of counting the number of morphisms from a fixed object.
In the second part of this thesis, we focus on graphs and the problem #_{p}Hom[H]. We conjecture that automorphisms of order p capture all possible cancellations and that, for a reduced graph H, the problem #_{p}Hom[H] features the complexity dichotomy analogous to the one given for exact counting by Dyer and Greenhill. This serves as a generalization of the conjecture by Faben and Jerrum for the modulus 2. The criterion for tractability is that H is a collection of complete bipartite and reflexive complete graphs. From the findings of part one, we show that the conjectured dichotomy implies dichotomies for all quantum homomorphism problems, in particular counting vertex surjective homomorphisms and compactions modulo p. Since the tractable cases in the dichotomy are solved by trivial computations, the study of the intractable cases remains. As an initial problem in a series of reductions capable of implying hardness, we employ the problem of counting weighted independent sets in a bipartite graph modulo a prime p. A dichotomy for this problem is shown, stating that the trivial cases occurring when a weight is congruent modulo p to 0 are the only tractable cases. We reduce the possible structure of H to the bipartite case by a reduction to the restricted homomorphism problem #_{p}Hom^{bip}[H] of counting modulo p the number of homomorphisms between bipartite graphs that maintain a given order of bipartition. This reduction does not have an impact on the accessibility of the technical results, thanks to the generality of the findings of part one. In order to prove the conjecture, it suffices to show that for a connected bipartite graph that is not complete, #_{p}Hom^{bip}[H] is #_{p}P-hard. Through a rigorous structural study of bipartite graphs, we establish this result for the rich class of bipartite graphs that are (K_{3,3}\{e}, domino)-free.
This overcomes in particular the substantial hurdle imposed by squares, which leads us to explore the global structure of H and prove the existence of explicit structures that imply hardness.
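To make the counting problem concrete, here is a minimal brute-force sketch of #_{p}Hom[H]: it enumerates every vertex map from an input graph G into the fixed graph H, keeps those that preserve edges, and reduces the count modulo p. This is purely illustrative (exponential in |V(G)|, so nothing like the polynomial-time cases of the dichotomy); the graph encoding and function name are our own, not the thesis's.

```python
from itertools import product

def count_homs_mod_p(G, H, p):
    """Count graph homomorphisms from G to H modulo a prime p (brute force).

    Graphs are dicts mapping each vertex to the set of its neighbours.
    A map f is a homomorphism if f(u) and f(v) are adjacent in H
    whenever u and v are adjacent in G.
    """
    vg, vh = list(G), list(H)
    count = 0
    for image in product(vh, repeat=len(vg)):
        f = dict(zip(vg, image))
        # check that every (ordered) edge of G is mapped to an edge of H
        if all(f[v] in H[f[u]] for u in G for v in G[u]):
            count += 1
    return count % p
```

For instance, the triangle K_3 has exactly 6 homomorphisms to itself (its proper 3-colourings), so the count modulo 2 vanishes; K_3 has automorphisms of order 2, matching the cancellation phenomenon described above.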
The experience of premenstrual syndrome (PMS) affects up to 90% of individuals with an active menstrual cycle and involves a spectrum of aversive physiological and psychological symptoms in the days leading up to menstruation (Tschudin et al., 2010). Despite its high prevalence, the precise origins of PMS remain elusive, with influences ranging from hormonal fluctuations to cognitive, social, and cultural factors (Hunter, 2007; Matsumoto et al., 2013).
Biologically, hormonal fluctuations, particularly in gonadal steroids, are commonly believed to be implicated in PMS, with the central factor being varying susceptibilities to the fluctuations between individuals and cycles (Rapkin & Akopians, 2012). Allopregnanolone (ALLO), a neuroactive steroid and progesterone metabolite, has emerged as a potential link to PMS symptoms (Hantsoo & Epperson, 2020). ALLO is a positive allosteric modulator of the GABAA receptor, influencing inhibitory communication (Rupprecht, 2003; Andréen et al., 2006). Different susceptibility to ALLO fluctuations throughout the cycle may lead to reduced GABAergic signal transmission during the luteal phase of the menstrual cycle.
The GABAergic system's broad influence leads to a number of affected physiological systems, including a consistent reduction in vagally mediated heart rate variability (vmHRV) during the luteal phase (Schmalenberger et al., 2019). This reduction in vmHRV is more pronounced in individuals with high PMS symptoms (Baker et al., 2008; Matsumoto et al., 2007). Fear conditioning studies have shown inconsistent associations with cycle phases, suggesting a complex interplay between physiological parameters and PMS-related symptoms (Carpenter et al., 2022; Epperson et al., 2007; Milad et al., 2006).
The neurovisceral integration model posits that vmHRV reflects the capacity of the central autonomic network (CAN), which is responsible for regulatory processes on behavioral, cognitive, and autonomic levels (Thayer & Lane, 2000, 2009). Fear learning, mediated within the CAN, is suggested to be indicative of vmHRV's capacity for successful regulation (Battaglia & Thayer, 2022). Given the GABAergic mediation of central inhibitory functional connectivity in the CAN, which may be affected by ALLO fluctuations, this thesis proposes that fluctuating CAN activity in the luteal phase contributes to diverse aversive symptoms in PMS.
A research program was designed to empirically test these propositions. Study 1 investigated fear discrimination during different menstrual cycle phases and its interaction with vmHRV, revealing nuanced effects on acoustic startle response and skin conductance response. While there was heightened fear discrimination in acoustic startle responses in participants in the luteal phase, there was an interaction between menstrual cycle phase and vmHRV in skin conductance responses. In this measure, heightened fear discrimination during the luteal phase was only visible in individuals with high resting vmHRV; those with low vmHRV showed reduced fear discrimination and higher overall responses.
Although PMS affects the vast majority of menstruating people, very few tools are available to reliably assess its symptoms in the German-speaking area. Study 2 aimed to close this gap by translating and validating a German version of the short form of the Premenstrual Assessment Form (Allen et al., 1991), thereby providing a reliable instrument for future PMS research in German-speaking countries.
Study 3 employed a diary study paradigm to explore daily associations between vmHRV and PMS symptoms. The results showed clear simultaneous fluctuations between the two constructs with a peak in PMS and a low point in vmHRV a few days before menstruation onset. The association between vmHRV and PMS was driven by psychological PMS symptoms.
Based on the theoretical considerations regarding the neurovisceral perspective on PMS, another interesting construct to consider is attentional control, as it is closely related to functions of the CAN. Study 4 delved into attentional control and vmHRV differences between menstrual cycle phases, demonstrating an interaction between cycle phase and PMS symptoms. In a pilot, we found reduced vmHRV and attentional control during the luteal phase only in participants who reported strong PMS.
While Studies 1-4 provided evidence for the mechanisms underlying PMS, Studies 5 and 6 investigated short- and long-term intervention protocols to ameliorate PMS symptomatology. Study 5 explored the potential of heart rate variability biofeedback (HRVB) in alleviating PMS symptoms and a number of other outcome measures. In a waitlist-control design, participants underwent a 4-week smartphone-based HRVB intervention. The results revealed positive effects on PMS, with larger effect sizes on psychological symptoms, as well as on depressive symptoms, anxiety/stress and attentional control.
Finally, Study 6 examined the acute effects of HRVB on attentional control. The study found positive impact but only in highly stressed individuals.
The thesis, based on this comprehensive research program, expands our understanding of PMS as an outcome of CAN fluctuations mediated by GABAA receptor reactivity. The results largely support the model. These findings not only deepen our understanding of PMS but also offer potential avenues for therapeutic interventions. The promising results of smartphone-based HRVB training suggest a non-pharmacological approach to managing PMS symptoms, although further research is needed to confirm its efficacy.
In conclusion, this thesis illuminates the complex web of factors contributing to PMS, providing valuable insights into its etiological underpinnings and potential interventions. By elucidating the relationships between hormonal fluctuations, CAN activity, and psychological responses, this research contributes to more effective treatments for individuals grappling with the challenges of PMS. The findings hold promise for improving the quality of life for those affected by this prevalent and often debilitating condition.
The automotive industry is a prime example of digital technologies reshaping mobility. Connected, autonomous, shared, and electric (CASE) trends lead to new emerging players that threaten existing industrial-aged companies. To respond, incumbents need to bridge the gap between contrasting product architecture and organizational principles in the physical and digital realms. Over-the-air (OTA) technology, which enables seamless software updates and on-demand feature additions for customers, is an example of CASE-driven digital product innovation. Through an extensive longitudinal case study of an OTA initiative by an industrial-aged automaker, this dissertation explores how incumbents accomplish digital product innovation. Building on modularity, liminality, and the mirroring hypothesis, it presents a process model that explains the triggers, mechanisms, and outcomes of this process. In contrast to the literature, the findings emphasize the primacy of addressing product architecture challenges over organizational ones and highlight the managerial implications for success.
Human activities modify nature worldwide via changes in the environment, biodiversity and the functioning of ecosystems, which in turn disrupt ecosystem services and feed back negatively on humans. A pressing challenge is thus to limit our impact on nature, and this requires detailed understanding of the interconnections between the environment, biodiversity and ecosystem functioning. These three components of ecosystems each include multiple dimensions, which interact with each other in different ways, but we lack a comprehensive picture of their interconnections and underlying mechanisms. Notably, diversity is often viewed as a single facet, namely species diversity, while many more facets exist at different levels of biological organisation (e.g. genetic, phenotypic, functional, multitrophic diversity), and multiple diversity facets together constitute the raw material for adaptation to environmental changes and shape ecosystem functioning. Consequently, investigating the multidimensionality of ecosystems, and in particular the links between multifaceted diversity, environmental changes and ecosystem functions, is crucial for ecological research, management and conservation. This thesis aims to explore several aspects of this question theoretically.
I investigate three broad topics in this thesis. First, I focus on how food webs with varying levels of functional diversity across three trophic levels buffer environmental changes, such as a sudden addition of nutrients or long-term changes (e.g. warming or eutrophication). I observed that functional diversity generally enhanced ecological stability (i.e. the buffering capacity of the food web) by increasing trophic coupling. More precisely, two aspects of ecological stability (resistance and resilience) increased even though a third aspect (the inverse of the time required for the system to reach its post-perturbation state) decreased with increasing functional diversity. Second, I explore how several diversity facets served as a raw material for different sources of adaptation and how these sources affected multiple ecosystem functions across two trophic levels. Considering several sources of adaptation enabled the interplay between ecological and evolutionary processes, which affected trophic coupling and thereby ecosystem functioning. Third, I reflect further on the multifaceted nature of diversity by developing an index K able to quantify the facet of functional diversity, which is itself multifaceted. K can provide a comprehensive picture of functional diversity and is a rather good predictor of ecosystem functioning. Finally I synthesise the interdependent mechanisms (complementarity and selection effects, trophic coupling and adaptation) underlying the relationships between multifaceted diversity, ecosystem functioning and the environment, and discuss the generalisation of my findings across ecosystems and further perspectives towards elaborating an operational biodiversity-ecosystem functioning framework for research and conservation.
Germany’s relatively stable party system faces a new left-authoritarian challenger: Sahra Wagenknecht’s Bündnis Sahra Wagenknecht (BSW) party. First polls indicate that for the BSW, election results above 10% are within reach. While Wagenknecht’s positions in economic and cultural terms have already been discussed, this article elaborates on another highly relevant feature of Wagenknecht, namely her populist communication. Exploring Wagenknecht’s and BSW’s populist appeal helps us to understand why the party is said to also have potential among seemingly different voter groups coming from the far right Alternative for Germany (AfD) and far left Die Linke, which share high levels of populist attitudes. To analyse the role that populist communication plays for Wagenknecht and the BSW, this article combines quantitative and qualitative methods. The quantitative analysis covers all speeches (10,000) and press releases (19,000) published by Die Linke members of Parliament (MPs; 2005–2023). The results show that Wagenknecht is the (former) Die Linke MP with the highest share of populist communication. Furthermore, she was also able to convince a group of populist MPs to join the BSW. The article closes with a qualitative analysis of BSW’s manifesto that reveals how populist framing plays a major role in this document, in which the political and economic elites are accused of working against the interest of “the majority”. Based on this analysis, the classification of the BSW as a populist party seems to be appropriate.
“Ick bin een Berlina”
(2024)
Background: Robots are increasingly used as interaction partners with humans. Social robots are designed to follow expected behavioral norms when engaging with humans and are available with different voices and even accents. Some studies suggest that people prefer robots to speak in the user’s dialect, while others indicate a preference for different dialects.
Methods: Our study examined the impact of the Berlin dialect on perceived trustworthiness and competence of a robot. One hundred and twenty German native speakers (Mage = 32 years, SD = 12 years) watched an online video featuring a NAO robot speaking either in the Berlin dialect or standard German and assessed its trustworthiness and competence.
Results: We found a positive relationship between participants’ self-reported Berlin dialect proficiency and the perceived trustworthiness of the dialect-speaking robot. Only when demographic factors were controlled for did participants’ dialect proficiency and dialect performance positively predict their assessment of the standard German-speaking robot’s competence. Participants’ age, gender, length of residency in Berlin, and device used to respond also influenced assessments. Finally, the robot’s competence positively predicted its trustworthiness.
Discussion: Our results inform the design of social robots and emphasize the importance of device control in online experiments.
Systematic review and meta-analysis of ex-post evaluations on the effectiveness of carbon pricing
(2024)
Today, more than 70 carbon pricing schemes have been implemented around the globe, but their contributions to emissions reductions remain a subject of heated debate in science and policy. Here we assess the effectiveness of carbon pricing in reducing emissions using a rigorous, machine-learning assisted systematic review and meta-analysis. Based on 483 effect sizes extracted from 80 causal ex-post evaluations across 21 carbon pricing schemes, we find that introducing a carbon price has yielded immediate and substantial emission reductions for at least 17 of these policies, despite the low level of prices in most instances. Statistically significant emissions reductions range from –5% to –21% across the schemes (–4% to –15% after correcting for publication bias). Our study highlights critical evidence gaps with regard to dozens of unevaluated carbon pricing schemes and the price elasticity of emissions reductions. More rigorous synthesis of carbon pricing and other climate policies is required across a range of outcomes to advance our understanding of “what works” and accelerate learning on climate solutions in science and policy.
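To illustrate the pooling step of such a meta-analysis, the sketch below implements the classical DerSimonian-Laird random-effects estimator, a standard textbook method for combining study-level effect sizes. It is not the paper's actual estimation pipeline (which also corrects for publication bias), and any example numbers are hypothetical.

```python
def dersimonian_laird(effects, variances):
    """Pool study-level effect sizes with a DerSimonian-Laird
    random-effects model.

    effects   : per-study effect estimates (e.g. % emission change)
    variances : their sampling variances
    Returns (pooled effect, tau^2 between-study variance).
    """
    k = len(effects)
    w = [1.0 / v for v in variances]                 # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q heterogeneity statistic
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)               # method-of-moments tau^2
    w_re = [1.0 / (v + tau2) for v in variances]     # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    return pooled, tau2
```

When the studies disagree more than sampling error alone predicts, tau^2 grows and the random-effects weights flatten toward equal weighting, which is why heterogeneous policy evaluations are typically pooled this way rather than with a fixed-effect model.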
Knowledge about causal structures is crucial for decision support in various domains. For example, in discrete manufacturing, identifying the root causes of failures and quality deviations that interrupt the highly automated production process requires causal structural knowledge. However, in practice, root cause analysis is usually built upon individual expert knowledge about associative relationships. But, "correlation does not imply causation", and misinterpreting associations often leads to incorrect conclusions. Recent developments in methods for causal discovery from observational data have opened the opportunity for a data-driven examination. Despite its potential for data-driven decision support, omnipresent challenges impede causal discovery in real-world scenarios. In this thesis, we make a threefold contribution to improving causal discovery in practice.
(1) The growing interest in causal discovery has led to a broad spectrum of methods with specific assumptions on the data and various implementations. Hence, application in practice requires careful consideration of existing methods, which becomes laborious when dealing with various parameters, assumptions, and implementations in different programming languages. Additionally, evaluation is challenging due to the lack of ground truth in practice and limited benchmark data that reflect real-world data characteristics.
To address these issues, we present a platform-independent modular pipeline for causal discovery and a ground truth framework for synthetic data generation that provides comprehensive evaluation opportunities, e.g., to examine the accuracy of causal discovery methods in case of inappropriate assumptions.
(2) Applying constraint-based methods for causal discovery requires selecting a conditional independence (CI) test, which is particularly challenging in the mixed discrete-continuous data omnipresent in many real-world scenarios. In this context, inappropriate assumptions about the data or the commonly applied discretization of continuous variables reduce the accuracy of CI decisions, leading to incorrect causal structures.
Therefore, we contribute a non-parametric CI test leveraging k-nearest neighbors methods and prove its statistical validity and power in mixed discrete-continuous data, as well as the asymptotic consistency when used in constraint-based causal discovery. An extensive evaluation of synthetic and real-world data shows that the proposed CI test outperforms state-of-the-art approaches in the accuracy of CI testing and causal discovery, particularly in settings with low sample sizes.
(3) To show the applicability and opportunities of causal discovery in practice, we examine our contributions in real-world discrete manufacturing use cases. For example, we showcase how causal structural knowledge helps to understand unforeseen production downtimes or adds decision support in case of failures and quality deviations in automotive body shop assembly lines.
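For intuition on the role of the CI test inside constraint-based discovery, the sketch below implements a standard partial-correlation (Fisher z) test for linear-Gaussian data. It is a textbook stand-in for, not an implementation of, the thesis's kNN-based mixed-data test; the column-index interface and the alpha default are our own illustrative choices.

```python
from math import erf, log, sqrt

import numpy as np

def fisher_z_ci_test(x, y, z_cols, data, alpha=0.05):
    """Test X independent of Y given Z via partial correlation.

    data is an (n, d) sample array; x, y are column indices and z_cols
    the conditioning columns.  Returns True if independence is NOT
    rejected at level alpha (assumes linear-Gaussian relationships).
    """
    cols = [x, y] + list(z_cols)
    corr = np.corrcoef(data[:, cols], rowvar=False)
    prec = np.linalg.inv(corr)                    # precision matrix
    # partial correlation of X and Y given the conditioning columns
    r = -prec[0, 1] / sqrt(prec[0, 0] * prec[1, 1])
    n = data.shape[0]
    z = 0.5 * log((1 + r) / (1 - r)) * sqrt(n - len(z_cols) - 3)
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal
    return p_value > alpha
```

On data generated from a chain X -> Z -> Y, such a test should retain X independent of Y given Z while rejecting their marginal independence; a constraint-based algorithm uses exactly these decisions to delete and orient edges.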
In Time and the Other Johannes Fabian analysed how modern conceptions of time were “not only secularized and naturalized but also thoroughly spatialized.” According to Fabian, this was particularly visible in modern anthropology which “promoted a scheme in terms of which not only past cultures but all living societies were irrevocably placed on a temporal slope, a stream of Time – some upstream, others downstream.” Anthropologists attributed otherness to a distant past which was traditionally associated with cultural retardation, i.e. a lower degree of development, progress, and civilization. Cultural difference was expressed in terms of temporal distance while temporal distance was attributed to spatial remoteness. The result was a phenomenon that Fabian coined “the denial of coevalness” which pointed towards “a persistent and systematic tendency to place the referent(s) of anthropology in a Time other than the present of the producer of anthropological discourse.”
Does working in a gender-atypical occupation reduce individuals’ likelihood of finding a different-sex romantic partner, and do such occupational partnership penalties contribute to occupational gender segregation? To answer this question, we theorized partnership penalties for working in gender-atypical occupations by drawing on insights from evolutionary psychology, social constructivism, and rational choice theory and exploited the stability of occupational pathways in Germany. In Study 1, we analyzed observational data from a national probability sample (N = 1,634,944) to assess whether individuals in gender-atypical occupations were less likely to be partnered than individuals who worked in gender-typical occupations. To assess whether the observed partnership gaps found in Study 1 were causally related to the gender typicality of men’s and women’s occupations, we conducted a field experiment on a dating app (N = 6,778). Because the findings from Study 2 suggested that young women and men indeed experienced penalties for working in a gender-atypical occupation (at least when they were not highly attractive), we employed a choice-experimental design in Study 3 (N = 1,250) to assess whether women and men were aware of occupational partnership penalties and showed that anticipating occupational partnership penalties may keep young and highly educated women from working in gender-atypical occupations. Our main conclusion therefore is that observed penalties and their anticipation seem to be driven by unconscious rather than conscious processes.
We analyze how conventional emissions trading schemes (ETS) can be modified by introducing “clean-up certificates” to allow for a phase of net-negative emissions. Clean-up certificates bundle the permission to emit CO2 with the obligation for its removal. We show that demand for such certificates is determined by cost-saving technological progress, the discount rate and the length of the compliance period. Introducing extra clean-up certificates into an existing ETS reduces near-term carbon prices and mitigation efforts. In contrast, substituting ETS allowances with clean-up certificates reduces cumulative emissions without depressing carbon prices or mitigation in the near term. We calibrate our model to the EU ETS and identify reforms where simultaneously (i) ambition levels rise, (ii) climate damages fall, (iii) revenues from carbon prices rise and (iv) carbon prices and aggregate mitigation cost fall. For reducing climate damages, roughly half of the issued clean-up certificates should replace conventional ETS allowances. In the context of the EU ETS, a European Carbon Central Bank could manage the implementation of clean-up certificates and could serve as an enforcement mechanism.
This study on the Messianic Jewish movement and its relationship to the Torah explores the various aspects of the relationship to the Torah on the basis of 10 interviews with selected Yeshua-believing Jews in leadership positions. The selection of interviewees results in a range of different positions typical of the movement as a whole, which overlap in many respects but are often fundamentally different and sometimes contradictory. Particular attention is paid to the theologically based, divergent and contradictory positions in an attempt to make these understandable.
After a brief introduction to the Messianic Jewish movement, aspects of the Messianic Jewish dual identity are examined and their relevance for the relationship to the Torah is demonstrated. This is followed by an overview of the forums in which Yeshua-believing Jews discuss their relationship to the Torah. The extensive bibliography at the end of the work provides an insight into a lively discussion process within the movement that is still far from complete. A briefly annotated differentiation of terms serves as an overview of the most important meanings of Torah used in the Messianic Jewish movement. Following this preliminary work, the field study is presented. A description of the research field and methodological reflections precede the interviews. In the interviews, the associations with the term Torah are first recorded and the conceptual meaning and use clarified. This already reveals some serious differences. The theological positions and understandings of Torah are presented with the biographical context and main field of influence, and the most important formative influences are named. The points on which they all agree are noted first, as they serve as a common basis. All study the written Torah and consider it, as well as the rest of the Tanakh and the writings of the New Testament in their present form, to be divinely inspired and authoritative. All have found a positive approach to the Torah according to their own definition of the term. For all of them, the written Torah and the Tanakh point to Yeshua. All agree that Yeshua did not abrogate the Torah, but fulfilled it. And all feel a responsibility as a Jew to the Torah in some way. With regard to keeping commandments, all say that no one can earn their way to heaven by doing so. G-d's faithfulness to His promises to Israel is affirmed by all, but whether the new covenant in Yeshua superseded the old covenant of Mt. Sinai, or whether it is simply added to the already existing covenant of Sinai, whether ritual commandments are to continue to be kept after Yeshua's death and resurrection and the destruction of the Temple, whether the commandments aiming at separation from the nations should continue to be kept, whether and under what conditions rabbinic halacha should be followed and what individuals do and teach in their families and communities - all this is discussed interview by interview. It becomes clear how different ways of reading and weighting key scriptures produce different positions. Just as the diversity of positions in relation to the Torah already suggests, the interview partners are divided on the question of a Messianic Jewish Halacha. But here too, the term halacha is interpreted differently by the representatives. At the end of the field study, the attempts to produce Messianic Jewish Halacha and the problems and points of criticism expressed by other interviewees are explained. The work concludes with a theological framework able to contain all the different positions and relationships to the Torah and some starting points for a possible Messianic Jewish hermeneutic theology of the Torah.
The dynamic landscape of digital transformation entails an impact on industrial-age manufacturing companies that goes beyond product offerings, changing operational paradigms, and requiring an organization-wide metamorphosis. An initiative to address the given challenges is the creation of Digital Innovation Units (DIUs) – departments or distinct legal entities that use new structures and practices to develop digital products, services, and business models and support or drive incumbents’ digital transformation. With more than 300 units in German-speaking countries alone and an increasing number of scientific publications, DIUs have become a widespread phenomenon in both research and practice.
This dissertation examines the evolution process of DIUs in the manufacturing industry during their first three years of operation, through an extensive longitudinal single-case study and several cross-case syntheses of seven DIUs. Building on the lenses of organizational change and development, time, and socio-technical systems, this research provides insights into the fundamentals, temporal dynamics, socio-technical interactions, and relational dynamics of a DIU’s evolution process. Thus, the dissertation promotes a dynamic understanding of DIUs and adds a two-dimensional perspective to the often one-dimensional view of these units and their interactions with the main organization throughout the startup and growth phases of a DIU.
Furthermore, the dissertation constructs a phase model that depicts the early stages of DIU evolution based on these findings and by incorporating literature from information systems research. As a result, it illustrates the progressive intensification of collaboration between the DIU and the main organization. After being implemented, the DIU sparks initial collaboration and instigates change within (parts of) the main organization. Over time, it adapts to the corporate environment to some extent, responding to changing circumstances in order to contribute to long-term transformation. Temporally, the DIU drives the early phases of cooperation and adaptation in particular, while the main organization triggers the first major evolutionary step and realignment of the DIU.
Overall, the thesis identifies DIUs as malleable organizational structures that are crucial for digital transformation. Moreover, it provides guidance for practitioners on the process of building a new DIU from scratch or optimizing an existing one.
Governments engage in corporatization by creating corporate entities or reorganizing existing ones. These corporatization activities reflect an interplay between political agency and environmental pressures, including (changing) notions of state-market relations. This paper discusses two ideal-typed organizational models of corporatization: the state as a marketizer and the marketization of the state. Whereas the first emphasizes the role of political design and agency in corporatization, the second emphasizes the role of (actors in) the environment for corporatization. Both models are assessed across five corporatization episodes in Norway and Sweden, where we also demonstrate the interplay between political agency and environmental pressure.
We examine how the gender of business owners is related to the wages paid to female relative to male employees working in their firms. Using Finnish register data and employing firm fixed effects, we find that, starting from an overall gender pay gap of 11 to 12%, the gap in hourly wages is two to three percentage points lower in female-owned firms than in male-owned firms. Results are robust to how the wage is measured, as well as to various further robustness checks. More importantly, we find substantial differences between industries. While in the manufacturing sector, for instance, the gender of the owner plays no role in the gender pay gap, in several service sector industries, like ICT or business services, there is no, or only a negligible, gender pay gap, but only when firms are led by female business owners. Businesses with male ownership maintain a gender pay gap of around 10% in these industries as well. With increasing firm size, however, the influence of the owner’s gender fades. In large firms, it seems that others (firm managers) determine wages, and no differences in the pay gap are observed between male- and female-owned firms.
Organizational commitments to equality change how people view women’s and men’s professional success
(2024)
To address women’s underrepresentation in high-status positions, many organizations have committed to gender equality. But is women’s professional success viewed less positively when organizations commit to women’s advancement? Do equality commitments have positive effects on evaluations of successful men? We fielded a survey experiment with a national probability sample in Germany (N = 3229) that varied employees’ gender and their organization’s commitment to equality. Respondents read about a recently promoted employee and rated how decisive a role they thought intelligence and effort played in getting the employee promoted from 1 “Not at all decisive” to 7 “Very decisive” and the fairness of the promotion from 1 “Very unfair” to 7 “Very fair.” When organizations committed to women’s advancement rather than uniform performance standards, people believed intelligence and effort were less decisive in women’s promotions, but that intelligence was more decisive in men’s promotions. People viewed women’s promotions as least fair and men’s as most fair in organizations committed to women’s advancement. However, women’s promotions were still viewed more positively than men’s in all conditions and on all outcomes, suggesting people believed that organizations had double standards for success that required women to be smarter and work harder to be promoted, especially in organizations that did not make equality commitments.
Money matters!
(2024)
This paper examines the context dependency of attitudes toward maternal employment. We test three sets of factors that may affect these attitudes—economic benefits, normative obligations, and child-related consequences—by analyzing data from a unique survey experimental design implemented in a large-scale household panel survey in Germany (17,388 observations from 3,494 respondents). Our results show that the economic benefits associated with maternal employment are the most important predictor of attitudes supporting maternal employment. Moreover, we find that attitudes toward maternal employment vary by individual, household, and contextual characteristics (in particular, childcare quality). We interpret this variation as an indication that negative attitudes toward maternal employment do not necessarily reflect gender essentialism; rather, gender role attitudes are contingent upon the frames individuals have in mind.
Social institutions
(2024)
Social institutions are a system of behavioral and relationship patterns that are densely interwoven and enduring and function across an entire society. They order and structure the behavior of individuals in core areas of society and thus have a strong impact on individuals' quality of life. Institutions perform the following functions: (a) family and relationship networks carry out social reproduction and socialization; (b) institutions in the realm of education and training ensure the transmission and cultivation of knowledge, abilities, and specialized skills; (c) institutions in the labor market and economy provide for the production and distribution of goods and services; (d) institutions in the realm of law, governance, and politics provide for the maintenance of the social order; and (e) cultural, media, and religious institutions further the development of contexts of meaning, value orientations, and symbolic codes.
Examining the dissemination of evidence on social media, we analyzed the discourse around eight visible scientists in the context of COVID-19. Using manual (N = 1,406) and automated coding (N = 42,640) on an account-based tracked Twitter/X dataset capturing scientists’ activities and the reactions they elicited over six 2-week periods, we found that visible scientists’ tweets included more scientific evidence. However, public reactions contained more anecdotal evidence. Findings indicate that evidence can be a message characteristic leading to greater tweet dissemination. Implications for scientists, including explicitly incorporating scientific evidence in their communication and examining evidence in science communication research, are discussed.
It has been highlighted many times how difficult it is to draw a boundary between gift and bribe, and how the same transfer can be interpreted in different ways according to the position of the observer and the narrative frame into which it is inserted. This also applied, of course, to ancient Rome: in both the Republic and the Principate, lawgivers tried to define the limits of acceptable transfers and thus also to identify what we might call ‘corruption’. Yet such definitions remained to a large extent blurred, and what was constructed was mostly a ‘code of conduct’, allowing Roman politicians to perform their own ‘honesty’ in public duty – while being aware at all times that their involvement in different kinds of transfer might be used by their opponents against them and presented as a case of ‘corrupt’ behaviour.
Have you already swiped or liked this morning? Have you taken part in a video conference at work, used or programmed a database? Have you paid with your smartphone on the way home, listened to a podcast, or extended the lending of books you borrowed from the library? And in the evening, have you filled out your tax return application on ELSTER.de on your tablet, shopped online, or paid invoices before you were tempted to watch a series on a streaming platform?
Our lives are entirely digitalized.
These changes make many things faster, easier, and more efficient. But keeping pace with these changes demands a lot from us, and not everyone succeeds. There are people who prefer to go to the bank to make a transfer, leave the programming to the experts, send their tax return by mail, and only use their smartphone to make phone calls. They don’t want to keep pace, or maybe they can’t. They haven’t learned these things. Others, younger people, grow up as “digital natives” surrounded by digital devices, tools, and processes. But does that mean they really know how to use them? Or do they also need digital education?
But what does successful digital education actually look like? Does it teach us how to use a tablet, how to google properly, and how to write Excel spreadsheets? Perhaps it’s about more than that. It’s about understanding the comprehensive change that has been taking hold of our world since it was broken down into digital ones and zeros and rebuilt virtually. But how do we learn to live in a world of digitality – with all that it entails, and to our benefit?
For the new issue of “Portal Wissen”, we looked around at the university and interviewed researchers about the role that the connection between digitalization and learning plays in the research of various disciplines. We spoke to Katharina Scheiter, Professor of Digital Education, about the future of German schools and had several experts show us examples of how digital tools can improve learning in schools. We also talked to computer science and agricultural researchers about how even experienced farmers can still learn a lot about their land and their work thanks to digital tools. We spoke to educational researchers who are using big data to analyze how boys and girls learn and what the possible causes for differences are. Education and political scientist Nina Kolleck, on the other hand, looks at education against the backdrop of globalization and relies on the analysis of large amounts of social media data.
Of course, we don’t lose sight of the diversity of research at the University of Potsdam. We learn, for example, what alternatives to antibiotics could soon be available. This magazine also looks at stress and how it makes us ill as well as the research into sustainable ore extraction.
A new feature of our magazine is a whole series of shorter articles that invite you to browse and read: from research news and photographic insights into laboratories to simple explanations of complex phenomena and outlooks into the wider world of research to a small scientific utopia and a personal thanks to research. All this in the name of education, of course. Enjoy your read!
Enhancing economic efficiency in modular production systems through deep reinforcement learning
(2024)
In times of increasingly complex production processes and volatile customer demands, production adaptability is crucial for a company's profitability and competitiveness. The ability to cope with rapidly changing customer requirements and unexpected internal and external events guarantees robust and efficient production processes, requiring a dedicated control concept at the shop floor level. Yet in today's practice, conventional control approaches remain in use; owing to their scenario-specific and rigid properties, they may not keep up with this dynamic behaviour. To address this challenge, deep learning methods have increasingly been deployed for their optimization and scalability properties. However, these approaches have often been tested only in specific operational applications and have focused on technical performance indicators such as order tardiness or total throughput. In this paper, we propose a deep reinforcement learning based production control to optimize combined techno-financial performance measures. Based on pre-defined manufacturing modules that are supplied and operated by multiple agents, positive effects were observed in terms of increased revenue and reduced penalties due to lower throughput times and fewer delayed products. The combined modular and multi-staged approach as well as the distributed decision-making further leverage scalability and transferability to other scenarios.
The increasing number of known exoplanets raises questions about their demographics and the mechanisms that shape planets into how we observe them today. Young planets in close-in orbits are exposed to harsh environments due to the host star being magnetically highly active, which results in high X-ray and extreme UV fluxes impinging on the planet. Prolonged exposure to this intense photoionizing radiation can cause planetary atmospheres to heat up, expand and escape into space via a hydrodynamic escape process known as photoevaporation. For super-Earth and sub-Neptune-type planets, this can even lead to the complete erosion of their primordial gaseous atmospheres. A factor of interest for this particular mass-loss process is the activity evolution of the host star. Stellar rotation, which drives the dynamo and with it the magnetic activity of a star, changes significantly over the stellar lifetime. This strongly affects the amount of high-energy radiation received by a planet as stars age. At a young age, planets still host warm and extended envelopes, making them particularly susceptible to atmospheric evaporation. Especially in the first gigayear, when X-ray and UV levels can be 100 to 10,000 times higher than for the present-day Sun, the characteristics of the host star and the detailed evolution of its high-energy emission are of importance.
In this thesis, I study the impact of stellar activity evolution on the high-energy-induced atmospheric mass loss of young exoplanets. The PLATYPOS code was developed as part of this thesis to calculate photoevaporative mass-loss rates over time. The code, which couples parameterized planetary mass-radius relations with an analytical hydrodynamic escape model, was used, together with Chandra and eROSITA X-ray observations, to investigate the future mass loss of the two young multiplanet systems V1298 Tau and K2-198. Further, in a numerical ensemble study, the effect of a realistic spread of activity tracks on the small-planet radius gap was investigated for the first time. The works in this thesis show that for individual systems, in particular if planetary masses are unconstrained, the difference between a young host star following a low-activity track vs. a high-activity one can have major implications: the exact shape of the activity evolution can determine whether a planet can hold on to some of its atmosphere, or completely loses its envelope, leaving only the bare rocky core behind. For an ensemble of simulated planets, an observationally-motivated distribution of activity tracks does not substantially change the final radius distribution at ages of several gigayears. My simulations indicate that the overall shape and slope of the resulting small-planet radius gap is not significantly affected by the spread in stellar activity tracks. However, it can account for a certain scattering or fuzziness observed in and around the radius gap of the observed exoplanet population.
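Analytical hydrodynamic escape models of the kind coupled into PLATYPOS are often built on the energy-limited approximation, in which the received X-ray and EUV flux sets the scale of the mass-loss rate. The following minimal Python sketch illustrates that standard approximation only; it is not the PLATYPOS implementation, and the heating efficiency `eps` and Roche-lobe factor `K` are assumed illustrative values:

```python
import math

G = 6.674e-8  # gravitational constant in cgs units [cm^3 g^-1 s^-2]

def energy_limited_mdot(F_xuv, R_p, M_p, eps=0.1, K=1.0):
    """Energy-limited photoevaporative mass-loss rate in g/s.

    F_xuv : X-ray + EUV flux received by the planet [erg cm^-2 s^-1]
    R_p   : planetary radius [cm]
    M_p   : planetary mass [g]
    eps   : heating efficiency (assumed; ~0.1 is a common choice)
    K     : Roche-lobe correction factor (1.0 = no correction)
    """
    return eps * math.pi * F_xuv * R_p**3 / (G * M_p * K)
```

Because the rate scales linearly with F_xuv, a host star on a high-activity track, with high-energy emission orders of magnitude above present-day solar levels, drives correspondingly stronger escape during the first gigayear.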
The planetary commons
(2024)
The Anthropocene signifies the start of a no-analogue trajectory of the Earth system that is fundamentally different from the Holocene. This new trajectory is characterized by rising risks of triggering irreversible and unmanageable shifts in Earth system functioning. We urgently need a new global approach to safeguard critical Earth system regulating functions more effectively and comprehensively. The global commons framework is the closest example of an existing approach with the aim of governing biophysical systems on Earth upon which the world collectively depends. Derived during stable Holocene conditions, the global commons framework must now evolve in the light of new Anthropocene dynamics. This requires a fundamental shift from a focus only on governing shared resources beyond national jurisdiction, to one that secures critical functions of the Earth system irrespective of national boundaries. We propose a new framework—the planetary commons—which differs from the global commons framework by including not only globally shared geographic regions but also critical biophysical systems that regulate the resilience and state, and therefore livability, on Earth. The new planetary commons should articulate and create comprehensive stewardship obligations through Earth system governance aimed at restoring and strengthening planetary resilience and justice.
The dark side of metaverse: a multi-perspective of deviant behaviors from PLS-SEM and fsQCA findings
(2024)
The metaverse has created a huge buzz of interest as an emerging phenomenon. The behavioral aspect of the metaverse includes user engagement and deviant behaviors in the metaverse. Such technology has brought various dangers to individuals and society. There are growing reported cases of sexual abuse, racism, harassment, hate speech, and bullying, because online disinhibition makes users feel less restrained. This study responded to the literature call by investigating the effect of technical and social features, through the mediating roles of security and privacy, on deviant behaviors in the metaverse. Data were collected from 1121 virtual network users. Partial Least Squares based structural equation modeling (PLS-SEM) and fuzzy set Qualitative Comparative Analysis (fsQCA) were used. PLS-SEM results revealed that social features such as user-to-user interaction, homophily, social ties, and social identity, as well as technical design features such as immersive experience and invisibility, significantly affect users’ deviant behavior in the metaverse. The fsQCA results provided insights into multiple causal solutions and configurations. This study is distinctive in that it provides robust results by examining users’ deviant behavior in virtual networks through both symmetrical and asymmetrical analytical approaches.
This article analyses incremental institutional change and subsequent organizational and performance outcomes of the digital transformation from a comparative perspective. Through 31 expert interviews, the authors compare two digitalized public services in Germany. Two digitalization approaches are identified. The voluntary, decentralized bottom-up approach involves layering of new rules, limited organizational restructuring, and performance deficits. Conversely, the compulsory, top-down approach with centralized control facilitates displacement of existing rules and far-reaching organizational change; in this study, it is also associated with improved performance.
We would like to inform the readers and editors of the journal that we have discovered some errors in the references of our paper. These errors were brought to our attention by a reader who noticed inconsistencies between the citations in the text and the bibliography. Upon further investigation, we realized that our literature management software had mistakenly linked some of the references to wrong or non-existent sources. We apologize for this oversight and assure you that it did not affect the validity or quality of our arguments and results, which were based on the correct sources. Below you will find a list of the incorrect references along with their corresponding correct ones. We hope that this correction statement will clarify any confusion or misunderstanding that may have arisen from this mistake. The authors apologize for any inconvenience caused.
In this paper, we study how the European Financial Reporting Advisory Group (EFRAG) used different legitimacy strategies between 2004 and 2021 to secure its organisational survival. Although EFRAG is now an established player within the regulatory space of corporate reporting, the organisation’s path towards this position was not straightforward. Based on 20 interviews with current and former members of EFRAG and archival documents, we investigate how EFRAG initially gained and maintained its legitimacy and how it responded to a legitimacy crisis arising in the aftermath of the 2008–2009 financial crisis. Based on prior research on organisational strategies for legitimising actions, we derive a framework for our analysis and show how EFRAG has adapted various legitimacy strategies over time. We further find that the use of legitimacy strategies is constrained by various systemic factors and show how EFRAG’s adaptations to its legitimacy strategies led to new tensions. Our findings contribute to the literature on private regulatory organisations’ legitimacy and the political economy of standard setting.
We use worldwide gridded satellite data to analyse how population size and density affect urban PM2.5 pollution. We find that more populated and denser grid cells are more exposed to pollution. However, across urban areas, exposure increases with cities’ population size but decreases with density. Moreover, the population effect is driven mostly by population commuting to core cities rather than the core city population itself. We analyse heterogeneity by geography and income levels. A counterfactual simulation shows that exposure could fall by up to 40% if population size were equalized across all cities within countries, but the relocation of population from large to small cities that maximizes welfare would be small.
Existing curricula for entrepreneurship education do not necessarily represent the best way of teaching. How could entrepreneurship curricula be improved? To answer this question, we aim to identify and rank desirable teaching objectives, teaching contents, teaching methods, and assessment methods for higher entrepreneurship education. To this end, we employ an international real-time Delphi study with an expert panel consisting of entrepreneurship education instructors and researchers. The study reveals 17 favorable objectives, 17 items of content, 25 teaching methods, and 15 assessment methods, which are ranked according to their desirability and the group consensus. We contribute to entrepreneurship curriculum research by adding a normative perspective.
In this paper, we present data from an elicitation study and a corpus study that support the observation that the Yucatec Maya progressive aspect auxiliary táan is replaced by the habitual auxiliary k in sentences with contrastively focused fronted objects. Focus has been extensively studied in Yucatec, yet the incompatibility of object fronting and progressive aspect in Yucatec Maya remains understudied. Both our experimental results and our corpus study suggest that this incompatibility may well be categorical. Theoretically, we take a progressive reading to be derived from an imperfectivity operator in combination with a singular operator, and we propose that this singular operator implicates the negation of event plurality, leading to an exhaustive interpretation which ranks below corrective focus on a contrastive focus scale. This means that, in a sentence with object focus fronting, the use of the marked auxiliary táan (as opposed to the more general k) would trigger two contrastive foci, which would be an unlikely and probably dispreferred speech act.
High-growth firms (HGFs) are important for job creation and productivity growth. We investigate the relationship between product and labour market regulations, as well as the quality of regional governments that implement these regulations, and the development of HGFs across European regions. Using data from Eurostat, the Organisation for Economic Co-operation and Development (OECD), World Economic Forum (WEF), and Gothenburg University, we show that both regulatory stringency and the quality of the regional government relate to the regional shares of HGFs. In particular, we find that the effect of labour and product market regulations is moderated by the quality of regional government. Depending on the quality of regional governments, regulations may have a ‘good, bad or ugly’ influence on the development of HGFs. Our findings contribute to the debate on the effects of regulations and offer important building blocks to develop tailored policy measures that may influence the development of HGFs in a region.
This article analyses the institutional design variants of local crisis governance responses to the COVID-19 pandemic and their entanglement with other locally impactful crises from a cross-country comparative perspective (France, Germany, Poland, Sweden, and the UK/England). The pandemic offers an excellent empirical lens for scrutinizing the phenomenon of polycrisis governance because it occurred while European countries were struggling with the impacts of several prior, ongoing, or newly arrived crises. Our major focus is on institutional design variants of crisis governance (dependent variable) and the influence of different administrative cultures on it (independent variable). Furthermore, we analyze the entanglement and interaction of institutional responses to other (previous or parallel) crises (polycrisis dynamics). Our findings reveal a huge variance of institutional designs, largely evoked by country-specific administrative cultures and profiles. The degree of de-/centralization and the intensity of coordination or decoupling across levels of government differ significantly by country. Simultaneously, all countries were affected by interrelated and entangled crises, resulting in various patterns of polycrisis dynamics. While policy failures and “fatal remedies” from previous crises have partially impaired the resilience and crisis preparedness of local governments, we have also found some learning effects from previous crises.
Background: Societies worldwide have become more diverse yet continue to be inequitable. Understanding how youth growing up in these societies are socialized and consequently develop racial knowledge has important implications not only for their well-being but also for building more just societies. Importantly, there is a lack of research on these topics in Germany and Europe in general.
Aim and Method: The overarching aim of the dissertation is to investigate 1) where and how ethnic-racial socialization (ERS) happens in inequitable societies and 2) how it relates to youth’s development of racial knowledge, which comprises racial beliefs (e.g., prejudice, attitudes), behaviors (e.g., actions preserving or disrupting inequities), and identities (e.g., inclusive, cultural). Guided by developmental, cultural, and ecological theories of socialization and development, I first explored how family, as a crucial socialization context, contributes to the preservation or disruption of racism and xenophobia in inequitable societies through its influence on children’s racial beliefs and behaviors. I conducted a literature review and developed a conceptual model bridging research on ethnic-racial socialization and intergroup relations (Study 1). After documenting the lack of research on socialization and development of racial knowledge within and beyond family contexts outside of the U.S., I conducted a qualitative study to explore ERS in Germany through the lens of racially marginalized youth (Study 2). Then, I conducted two quantitative studies to explore the separate and interacting relations of multiple (i.e., family, school) socialization contexts for the development of racial beliefs and behaviors (Study 3), and identities (Studies 3, 4) in Germany. Participants of Study 2 were 26 young adults (aged between 19 and 32) of Turkish, Kurdish, East, and Southeast Asian heritage living across different cities in Germany. Study 3 was conducted with 503 eighth graders of immigrant and non-immigrant descent (Mage = 13.67) in Berlin; Study 4 included 311 early to mid-adolescents of immigrant descent (Mage = 13.85) in North Rhine-Westphalia with diverse cultural backgrounds.
Results and Conclusion: The findings revealed that privileged or marginalized positions of families in relation to their ethnic-racial and religious background in society entail differential experiences and thus are an important determining factor for the content and process of socialization and the development of youth’s racial knowledge. Until recently, ERS research mostly focused on investigating how racially marginalized families have been sources of support for their children in resisting racism and how racially privileged families contribute to the transmission of information upholding racism (Study 1). ERS for racially marginalized youth in Germany centered on heritage culture, discrimination, and resistance strategies to racism, yet the resistance strategies transmitted to youth mostly help them survive racism (e.g., by working hard), thereby upholding it, rather than liberating them from racism by disrupting it (e.g., through self-advocacy; Study 2). Furthermore, when families and schools foster heritage and intercultural learning, both contexts may separately promote stronger identification with heritage culture and German identities, and more prosocial intentions towards disadvantaged groups (i.e., refugees) among youth (Studies 3, 4). However, equal treatment in the school context led to mixed results: equal treatment was either unrelated to inclusive identity, or positively related to German and negatively related to heritage culture identities (Studies 3, 4). Additionally, youth receiving messages highlighting strained and preferential intergroup relations at home while attending schools promoting assimilation may develop a stronger heritage culture identity (Study 4). In conclusion, ERS happened across various social contexts (i.e., family, community centers, school, neighborhood, peers).
ERS promoting heritage and intercultural learning, at least in one social context (family or school), might foster youth’s racial knowledge manifesting in stronger belonging to multiple cultures and in prosocial intentions toward disadvantaged groups. However, there is a need for ERS targeting increasing awareness of discrimination across social contexts of youth and teaching youth resistance strategies for liberation from racism.
Water stored in the unsaturated soil as soil moisture is a key component of the hydrological cycle influencing numerous hydrological processes, including hydrometeorological extremes. Soil moisture influences flood generation processes, and during droughts, when precipitation is absent, it provides plants with transpirable water, thereby sustaining plant growth and survival in agricultural and natural ecosystems.
Soil moisture stored in deeper soil layers (e.g., below 100 cm) is of particular importance for providing plant-transpirable water during dry periods. Because it is not directly connected to the atmosphere and lies outside the soil layers with the highest root densities, water in these layers is less susceptible to rapid evaporation and transpiration. Instead, it provides longer-term soil water storage, increasing the drought tolerance of plants and ecosystems.
Given the importance of soil moisture in the context of hydro-meteorological extremes in a warming climate, its monitoring is part of official national adaptation strategies to a changing climate. Yet soil moisture is highly variable in time and space, which challenges its monitoring at the spatio-temporal scales relevant for flood and drought risk modelling and forecasting.
Introduced over a decade ago, Cosmic-Ray Neutron Sensing (CRNS) is a noninvasive geophysical method that allows for the estimation of soil moisture at relevant spatio-temporal scales of several hectares at a high, subdaily temporal resolution. CRNS relies on the detection of secondary neutrons above the soil surface, which are produced from high-energy cosmic-ray particles in the atmosphere and the ground. Neutrons in a specific epithermal energy range are sensitive to the amount of hydrogen present in the surroundings of the CRNS neutron detector. Because neutrons have nearly the same mass as the hydrogen nucleus, they lose kinetic energy upon collision with hydrogen and are subsequently absorbed once they reach low, thermal energies. A higher amount of hydrogen therefore leads to fewer neutrons being detected per unit time. Since in most terrestrial ecosystems the largest amount of hydrogen is stored as soil moisture, changes in soil moisture can be estimated through an inverse relationship with observed neutron intensities.
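In practice, this inverse relationship between neutron intensity and soil moisture is often expressed with a Desilets-type transfer function. The minimal Python sketch below uses the commonly cited default shape parameters for illustration; the calibrated values and the advanced transfer functions discussed in this thesis may differ:

```python
def gravimetric_soil_moisture(N, N0, a0=0.0808, a1=0.372, a2=0.115):
    """Convert a corrected epithermal neutron count rate to soil moisture.

    N  : observed neutron count rate (corrected for air pressure, air
         humidity, and incoming primary cosmic-ray intensity)
    N0 : calibration constant, the count rate over dry soil
    a0, a1, a2 : shape parameters (commonly cited defaults, assumed here)
    Returns the gravimetric water content [g water / g dry soil].
    """
    return a0 / (N / N0 - a1) - a2
```

The function is monotonically decreasing in N: fewer detected neutrons imply more hydrogen in the detector footprint, and hence wetter soil.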
Although important scientific advancements have been made to improve the methodological framework of CRNS, several open challenges remain, some of which are addressed in the scope of this thesis. These include the influence of atmospheric variables such as air pressure and absolute air humidity, as well as the impact of variations in incoming primary cosmic-ray intensity, on observed epithermal and thermal neutron signals and their correction. Recently introduced advanced neutron-to-soil moisture transfer functions are expected to improve CRNS-derived soil moisture estimates, but the potential improvements need to be investigated at study sites with differing environmental conditions. Sites with strongly heterogeneous, patchy soil moisture distributions challenge existing transfer functions, and further research is required to assess the impact of such heterogeneous site conditions on derived soil moisture estimates and their correction. Despite its capability of measuring representative averages of soil moisture at the field scale, CRNS lacks integration depth below the first few decimetres of the soil. Given the importance of soil moisture in deeper soil layers as well, extending the observational window of CRNS through modelling approaches or in situ measurements is of high importance for hydrological monitoring applications.
By addressing these challenges, this thesis helps to close knowledge gaps and to answer some of the open questions in CRNS research. Influences of different environmental variables are quantified, and correction approaches are tested and developed. Neutron-to-soil moisture transfer functions are evaluated, and approaches to reduce the effects of heterogeneous soil moisture distributions are presented. Lastly, soil moisture estimates from larger soil depths are derived from CRNS through modified, simple modelling approaches and through in situ estimates using CRNS as a downhole technique. Thereby, this thesis not only illustrates the potential of new, as yet unexplored applications of CRNS but also opens a new field of CRNS research. Consequently, it advances the methodological framework of CRNS for above-ground and downhole applications. Although the necessity of further research to fully exploit the potential of CRNS must be emphasised, this thesis contributes to current hydrological research and, not least, to advancing hydrological monitoring approaches, which are of utmost importance in the context of intensifying hydro-meteorological extremes in a changing climate.
‘Modern talking’
(2024)
Despite growing interest, we lack a clear understanding of how the arguably ambiguous phenomenon of agile is perceived in government practice. This study aims to alleviate this puzzle by investigating how managers and employees in German public sector organisations make sense of agile as a spreading management fashion in the form of narratives. This is important because narratives function as innovation carriers that ultimately influence the manifestations of the concept in organisations. Based on a multi-case study of 31 interviews and 24 responses to a qualitative online survey conducted in 2021 and 2022, we provide insights into what public sector managers, employees and consultants understand (and, more importantly, do not understand) as agile and how they weave it into their existing reality of bureaucratic organisations. We uncover three meta-narratives of agile government, which we label ‘renew’, ‘complement’ and ‘integrate’. In particular, the meta-narratives differ in their positioning of how agile interacts with the characteristics of bureaucratic organisations. Importantly, we also show that agile as a management fad serves as a projection surface for what actors want from a modern and digital organisation. Thus, the vocabulary of agile government within the narratives is inherently linked to other diffusing phenomena such as new work or digitalisation.
The Convention Relating to the Status of Refugees adopted on 28 July 1951 in Geneva provides the most comprehensive codification of the rights of refugees yet attempted. Consolidating previous international instruments relating to refugees, the 1951 Convention with its 1967 Protocol marks a cornerstone in the development of international refugee law. At present, there are 144 States Parties to one or both of these instruments, expressing a worldwide consensus on the definition of the term refugee and the fundamental rights to be granted to refugees. These facts demonstrate and underline the extraordinary significance of these instruments as the indispensable legal basis of international refugee law. This Commentary provides for a systematic and comprehensive analysis of the 1951 Convention and the 1967 Protocol on an article-by-article basis, exposing the interrelationship between the different articles and discussing the latest developments in international refugee law. In addition, several thematic contributions analyse questions of international refugee law which are of general significance, such as regional developments and the relationship between refugee law and the law of the sea.
HPI Future SOC Lab
(2024)
The “HPI Future SOC Lab” is a cooperation of the Hasso Plattner Institute (HPI) and industry partners. Its mission is to enable and promote exchange and interaction between the research community and the industry partners.
The HPI Future SOC Lab provides researchers with free-of-charge access to a complete infrastructure of state-of-the-art hardware and software. This infrastructure includes components that might be too expensive for an ordinary research environment, such as servers with up to 64 cores and 2 TB of main memory. The offerings address researchers particularly from, but not limited to, the areas of computer science and business information systems. Main areas of research include cloud computing, parallelization, and in-memory technologies.
This technical report presents the results of research projects executed in 2020. Selected projects presented their results on April 21 and November 10, 2020, at the Future SOC Lab Day events.
Back to bureaucracy?
(2024)
In this contribution, the emergence of the neo-Weberian state (NWS) is analyzed with regard to German public administration. Drawing on the concept of a governance space, which consists of hierarchy, markets, and networks, we distinguish between four empirical manifestations of the NWS, namely, the NWS as (1) comeback of the public/re-municipalization; (2) re-hierarchization; (3) de-agencification; (4) de-escalation in performance management. These movements can, on the one hand, be interpreted as a (partial) reversal of New Public Management (NPM) approaches and a “swinging back of the pendulum” (see Kuhlmann & Wollmann, 2019) toward public and classical Weberian principles (e.g., hierarchy, regulation, institutional re-aggregation). This reversal re-strengthened the hierarchy within the overall governance space to the detriment of, but without completely replacing, market mechanisms and networks. NPM’s failure to deliver what it promised and its inappropriateness as a response to more recent challenges connected to crises and wicked problems have engendered a partial return of the public and a move away from the economization logic of NPM. On the other hand, post-NPM reversals and managerial de-escalation gave rise to hybrid models that merge NPM and classic Weberian administration. While some well-functioning combinations of NPM and Weberianism exist, the hybridization of “old” and “neo” elements has also provoked ambivalent and negative assessments regarding the actual functioning of the NWS in Germany. Our analysis suggests that the NWS is only partially suitable as a model for reform and future administrative modernization, largely depending on the context surrounding reform and implementation practices.
Ecosystems play a pivotal role in addressing climate change but are also highly susceptible to drastic environmental changes. Investigating their historical dynamics can enhance our understanding of how they might respond to unprecedented future environmental shifts. With Arctic lakes currently under substantial pressure from climate change, lessons from the past can guide our understanding of potential disruptions to these lakes. However, individual lake systems are multifaceted and complex. Traditional isolated lake studies often fail to provide a global perspective because localized nuances—like individual lake parameters, catchment areas, and lake histories—can overshadow broader conclusions. In light of these complexities, a more nuanced approach is essential to analyze lake systems in a global context.
A key to addressing this challenge lies in the data-driven analysis of sedimentological records from various northern lake systems. This dissertation emphasizes lake systems in the northern Eurasian region, particularly in Russia (n=59). For this doctoral thesis, we collected sedimentological data from various sources, which required a standardized framework for further analysis. Therefore, we designed a conceptual model for integrating and standardizing heterogeneous multi-proxy data into a relational database management system (PostgreSQL). Creating a database from the collected data enabled comparative numerical analyses between spatially separated lakes as well as between different proxies.
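As a rough illustration of such a standardized multi-proxy schema, the sketch below uses Python's built-in sqlite3 in place of PostgreSQL; all table and column names are hypothetical, not the database's actual design:

```python
import sqlite3

# Illustrative schema: lakes, sediment cores, and proxy measurements,
# normalized so that heterogeneous proxies share one measurement table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE lake (
    lake_id   INTEGER PRIMARY KEY,
    name      TEXT NOT NULL,
    latitude  REAL,
    longitude REAL
);
CREATE TABLE core (
    core_id INTEGER PRIMARY KEY,
    lake_id INTEGER NOT NULL REFERENCES lake(lake_id)
);
CREATE TABLE measurement (
    core_id  INTEGER NOT NULL REFERENCES core(core_id),
    depth_cm REAL NOT NULL,      -- composite depth in the core
    proxy    TEXT NOT NULL,      -- e.g. 'TOC', 'pollen', 'grain_size'
    value    REAL NOT NULL,
    unit     TEXT
);
""")
conn.execute("INSERT INTO lake VALUES (1, 'Lake Example', 68.5, 102.3)")
conn.execute("INSERT INTO core VALUES (1, 1)")
conn.execute("INSERT INTO measurement VALUES (1, 10.0, 'TOC', 1.8, '%')")

# Cross-lake comparison then reduces to a plain join over the shared schema.
rows = conn.execute(
    "SELECT l.name, m.proxy, m.value FROM measurement m "
    "JOIN core c ON m.core_id = c.core_id "
    "JOIN lake l ON c.lake_id = l.lake_id"
).fetchall()
print(rows)  # [('Lake Example', 'TOC', 1.8)]
```

Keeping all proxies in one long-format table is what makes comparative queries across spatially separated lakes a matter of filtering and joining rather than reshaping per-lake files.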
When analyzing numerous lakes, establishing a common frame of reference was crucial. We achieved this by converting proxy values from depth dependency to age dependency. This required consistent age calculations across all lakes and proxies using a single age-depth modeling tool. Recognizing the broader implications and potential pitfalls of this step, we developed the LANDO approach (“Linked Age and Depth Modelling”). LANDO is an innovative integration of multiple age-depth modeling tools into a single, cohesive platform (Jupyter Notebook). Beyond its ability to aggregate data from five established age-depth modeling tools, LANDO uniquely empowers users to filter out implausible model outcomes using robust geoscientific data. Our method is not only novel but also significantly enhances the accuracy and reliability of lake analyses.
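The depth-to-age conversion at the heart of this step can be sketched as linear interpolation against an age-depth model; the function and model values below are illustrative, not LANDO's actual interface:

```python
from bisect import bisect_left

def depth_to_age(depth, model_depths, model_ages):
    """Linearly interpolate an age for a sample depth (cm), given a
    monotonic age-depth model as two parallel sorted lists."""
    if not model_depths[0] <= depth <= model_depths[-1]:
        raise ValueError("depth outside the modelled range")
    i = bisect_left(model_depths, depth)
    if model_depths[i] == depth:
        return model_ages[i]
    d0, d1 = model_depths[i - 1], model_depths[i]
    a0, a1 = model_ages[i - 1], model_ages[i]
    # Linear interpolation between the two bracketing model points.
    return a0 + (a1 - a0) * (depth - d0) / (d1 - d0)

# Hypothetical age-depth model: ages in calibrated years BP.
depths = [0.0, 50.0, 100.0]
ages = [0.0, 4000.0, 12000.0]
print(depth_to_age(25.0, depths, ages))  # 2000.0
```

Running every proxy series of every lake through one shared routine like this is what makes values from different cores directly comparable on a common age axis.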
Considering the preceding steps, this doctoral thesis further examines the relationship between carbon in sediments and temperature over the last 21,000 years. Initially, we hypothesized a positive correlation between carbon accumulation in lakes and modelled paleotemperature. Our homogenized dataset from heterogeneous lakes confirmed this association, even though the highest temperatures throughout our observation period do not coincide with the highest carbon values. We assume that rapid warming events contribute more to high accumulation, while sustained warming leads to carbon outgassing. Considering the current high concentration of carbon in the atmosphere and rising temperatures, ongoing climate change could cause northern lake systems to contribute to a further increase in atmospheric carbon (positive feedback loop). While our findings underscore the reliability of both our standardized data and the LANDO method, expanding our dataset might offer even greater assurance in our conclusions.
The wide distribution of location-acquisition technologies means that large volumes of spatio-temporal data are continuously being accumulated. Positioning systems such as GPS enable the tracking of various moving objects' trajectories, which are usually represented by a chronologically ordered sequence of observed locations. The analysis of movement patterns based on detailed positional information creates opportunities for applications that can improve business decisions and processes in a broad spectrum of industries (e.g., transportation, traffic control, or medicine). Due to the large data volumes generated in these applications, the cost-efficient storage of spatio-temporal data is desirable, especially when in-memory database systems are used to achieve interactive performance requirements.
To efficiently utilize the available DRAM capacities, modern database systems support various tuning possibilities to reduce the memory footprint (e.g., data compression) or increase performance (e.g., additional index structures). By considering horizontal data partitioning, we can independently apply different tuning options on a fine-grained level. However, the selection of cost- and performance-balancing configurations is challenging due to the vast number of possible setups consisting of mutually dependent individual decisions.
In this thesis, we introduce multiple approaches to improve spatio-temporal data management by automatically optimizing diverse tuning options for the application-specific access patterns and data characteristics. Our contributions are as follows:
(1) We introduce a novel approach to determine fine-grained table configurations for spatio-temporal workloads. Our linear programming (LP) approach jointly optimizes the (i) data compression, (ii) ordering, (iii) indexing, and (iv) tiering. We propose different models which address cost dependencies at different levels of accuracy to compute optimized tuning configurations for a given workload, memory budgets, and data characteristics. To yield maintainable and robust configurations, we further extend our LP-based approach to incorporate reconfiguration costs as well as optimizations for multiple potential workload scenarios.
(2) To optimize the storage layout of timestamps in columnar databases, we present a heuristic approach for the workload-driven combined selection of a data layout and compression scheme. By considering attribute decomposition strategies, we are able to apply application-specific optimizations that reduce the memory footprint and improve performance.
(3) We introduce an approach that leverages past trajectory data to improve the dispatch processes of transportation network companies. Based on location probabilities, we developed risk-averse dispatch strategies that reduce critical delays.
(4) Finally, we used the use case of a transportation network company to evaluate our database optimizations on a real-world dataset. We demonstrate that workload-driven fine-grained optimizations allow us to reduce the memory footprint (up to 71% at equal performance) or increase the performance (up to 90% at equal memory size) compared to established rule-based heuristics.
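The configuration-selection problem behind contribution (1) can be illustrated in miniature: pick one encoding per data chunk so that total scan cost is minimal under a memory budget. The thesis solves this as a linear program; the following is only a brute-force toy with made-up option numbers, not the actual model:

```python
from itertools import product

# Hypothetical per-chunk encoding options: (name, memory in MB, scan cost).
options = [
    [("uncompressed", 100, 1.0), ("dictionary", 40, 1.4), ("run-length", 25, 2.0)],
    [("uncompressed", 80, 1.0), ("dictionary", 30, 1.2)],
]
budget_mb = 90

def best_configuration(options, budget):
    """Exhaustively pick one option per chunk: minimal total scan cost
    among all combinations that fit the memory budget."""
    best = None
    for combo in product(*options):
        memory = sum(o[1] for o in combo)
        cost = sum(o[2] for o in combo)
        if memory <= budget and (best is None or cost < best[0]):
            best = (cost, [o[0] for o in combo])
    return best

print(best_configuration(options, budget_mb))
```

Enumeration like this explodes combinatorially with the number of chunks and options, which is precisely why the thesis formulates the joint choice of compression, ordering, indexing, and tiering as a linear program instead.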
Individually, our contributions provide novel approaches to the current challenges in spatio-temporal data mining and database research. Combining them allows in-memory databases to store and process spatio-temporal data more cost-efficiently.
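The attribute decomposition idea from contribution (2) above can be sketched for timestamps: split each value into a coarse and a fine component so that each resulting column compresses better on its own. The scheme and numbers below are illustrative, not the thesis' actual layout:

```python
SECONDS_PER_DAY = 86_400

def decompose(epoch_seconds):
    """Split Unix timestamps into a day column and a second-of-day column.
    The day column is highly repetitive (run-length- or dictionary-friendly);
    the second-of-day column has a small, bounded value range."""
    days = [t // SECONDS_PER_DAY for t in epoch_seconds]
    secs = [t % SECONDS_PER_DAY for t in epoch_seconds]
    return days, secs

def recompose(days, secs):
    return [d * SECONDS_PER_DAY + s for d, s in zip(days, secs)]

ts = [1_600_000_000, 1_600_000_060, 1_600_003_600]
days, secs = decompose(ts)
assert recompose(days, secs) == ts  # decomposition is lossless
print(days, secs)
```

Which decomposition pays off depends on the workload: queries filtering by day touch only the compact day column, while the combined selection of layout and compression scheme remains workload-driven, as the contribution describes.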
The present thesis looks at cultural conceptualisations in relation to DEATH in Irish English from a Cultural Linguistic perspective and puts a special focus on the diachronic development of these conceptualisations. For the study, a corpus consisting of 1,400 death notices from the Dublin-based national newspaper The Irish Times from 14 historical periods between 1859 and 2023 was compiled, resulting in a highly specialised 70,000-word corpus. First, the manual qualitative analysis of the death notices produced evidence for eight superordinate cultural conceptualisations surrounding DEATH, namely, in the order of their frequency THE DEAD ARE TO BE REMEMBERED OR REGRETTED, DEATH IS SOMETHING POSITIVE, DEATH IS REST, DEATH IS A JOURNEY, DYING IS THE BEGINNING OF ANOTHER LIFE, DEATH IS (NOT) A TABOO, DEATH IS GOD’S WILL, and DEATH IS THE END. These conceptualisations were derived from linguistic expressions in the death notices that have these conceptualisations as a cognitive basis. Second, the quantitative comparison of the individual conceptualisations detected diachronic variation, which is interconnected with historical and social developments in Ireland. The thesis, therefore, illustrates the applicability of Cultural Linguistics as an adequate method for diachronic studies interested in culturally determined developments of conceptualisations.
A new challenger seeks to enter the German party system: Bündnis Sahra Wagenknecht (BSW). With her new party, former Die Linke politician Sahra Wagenknecht combines a left-authoritarian profile (economically left-leaning, but culturally conservative) with anti-US, pro-Russia and anti-elitist stances. This article provides the first large-n academic study of the voter potential of this new party by using a quasi-representative sample (n = 6,000) drawn from a Voting Advice Application-like dataset that comes from a website designed to explore the Bündnis Sahra Wagenknecht’s positions. The results show that congruence with foreign policy positions and anti-elitism are strong predictors of the propensity to vote for the Bündnis Sahra Wagenknecht. In contrast, social/welfare and immigration policies are less predictive for assessing the party’s potential. Among the different socio-demographic groups, the Bündnis Sahra Wagenknecht has a strong potential among baby boomers, the less educated and East Germans. Regarding party voters, the Bündnis Sahra Wagenknecht is favoured by supporters of some minor parties like dieBasis, Freie Wähler and Die PARTEI, but also non-voters. Among the established parties, the party’s potential is high among Die Linke voters and, to a lesser extent, voters of the Social democrats (SPD) and Alternative for Germany (AfD). A potential below the average is reported for the supporters of the Liberals (FDP) and Christian Democrats (CDU/CSU) and most clearly for Green and Volt voters.
Spring Issue
(2024)
This chapter focuses on the features of paragraph 1 of Article 1 of the 1951 Convention. The article primarily determines the Convention's scope of application ratione personae while outlining the basis of the protection of refugees. Additionally, Article 1 addresses the concerns surrounding the inclusion, cessation, and exclusion of refugees. The chapter then tackles the historical development of the article by considering the instruments used prior to the 1951 Convention. It also notes that the Constitution of the International Refugee Organization appears to contain an ambiguity as to how the refugee notion was perceived, so refugees only became the IRO Constitution's concern when they had valid objections to returning to their home country.
Article 22 1951 Convention
(2024)
This chapter covers the 1951 Convention's Article 22. It explains the provision's aim to grant refugees access to the contracting States' national educational systems. Moreover, Article 22 encompasses learning at all different levels of education in schools, universities, and other educational institutions. However, the provision does not address any issues related to the upbringing of children by their parents. The chapter mentions the relevancy of Article 22 when it comes to durable solutions for refugees in an effort to enable them to integrate into the host country's society. It also discusses the drafting history, declarations, and reservations of Article 22 and the instruments used prior to the 1951 Convention.
This chapter examines the extent of the 1951 Convention's Article 44 and the 1967 Protocol's Article IX. It starts with identifying the standard denunciation clause in Article 44 and Article IX. Multilateral treaties of unlimited duration allow States parties an unconditional right to withdraw. A denunciation releases the denouncing party from any further obligation to perform the treaty in relation to the other parties of the 1967 Protocol. The chapter clarifies that denunciation and withdrawal express the same legal concept, since each is a procedure initiated unilaterally by a State that wants to terminate its legal engagements under a treaty.
This chapter tackles the analysis and function of Article 43 of the 1951 Convention and Article VIII of the 1967 Protocol. It explains that a multilateral treaty enters into force once the necessary conditions are met, as set out in Article 24 of the Vienna Convention on the Law of Treaties (VCLT). The provision also regulates the 1951 Convention's entry into force upon States' ratification or accession. The chapter notes that the 1967 Protocol entered into force after Sweden deposited its instrument of accession. It elaborates on the specific details needed for the ratification or accession prior to the entry into force.
This study pushes our understanding of research reliability by reproducing and replicating claims from 110 papers in leading economic and political science journals. The analysis involves computational reproducibility checks and robustness assessments. It reveals several patterns. First, we uncover a high rate of fully computationally reproducible results (over 85%). Second, excluding minor issues like missing packages or broken pathways, we uncover coding errors for about 25% of studies, with some studies containing multiple errors. Third, we test the robustness of the results to 5,511 re-analyses. We find a robustness reproducibility of about 70%. Robustness reproducibility rates are relatively higher for re-analyses that introduce new data and lower for re-analyses that change the sample or the definition of the dependent variable. Fourth, 52% of re-analysis effect size estimates are smaller than the original published estimates and the average statistical significance of a re-analysis is 77% of the original. Lastly, we rely on six teams of researchers working independently to answer eight additional research questions on the determinants of robustness reproducibility. Most teams find a negative relationship between replicators' experience and reproducibility, while finding no relationship between reproducibility and the provision of intermediate or even raw data combined with the necessary cleaning codes.
This chapter focuses on Article 46 of the 1951 Convention and Article X of the 1967 Protocol. It explains that the depositary of a treaty plays an essential procedural role in ensuring the smooth operation of a multilateral treaty. Article 46 enumerates the Secretary-General's functions as depositary, performed by the Treaty Section of the Office of Legal Affairs in the United Nations Secretariat. Similarly, Article X confirms and details the Secretary-General's designation and role as depositary of the 1967 Protocol. The chapter mentions that the enumeration of Article X's depositary notifications is exemplary rather than conclusive. It examines the depositary notifications of declarations, signatures, and reservations under Article 46 and Article X.
This chapter covers the function of the Testimonium to the 1951 Convention and Article XI of the 1967 Protocol. It looks into the relevance of the 1951 Convention's testimonium. The testimonium primarily addresses the Convention's authentic languages, the regulation of deposition, and the delivery of certified true copies to all members of the UN and non-member States. On the other hand, Article XI contains the standard procedures for regulating the deposition of a copy of the 1967 Protocol in the Secretariat of the United Nations and foresees the transmission of certified copies thereof by the Secretary-General. The chapter mentions that neither element is commonly indicated explicitly in modern treaties.
This chapter looks into the 1951 Convention's Article 39 and the 1967 Protocol's Article V. In 2000, the Secretary-General identified the 1951 Convention as belonging to a core group of 25 multilateral treaties representative of the key objectives of the UN and the spirit of its Charter. Additionally, the rules found in the Vienna Convention on the Law of Treaties (VCLT) apply to the 1951 Convention as a matter of customary international law. On the other hand, the 1967 Protocol does not amend the 1951 Convention but binds its parties to observe the substantive provisions. The chapter cites that the 1967 Protocol constitutes an independent and complete international instrument that is open not only to the States parties to the 1951 Convention.