“They Took to the Sea”
(2023)
The sea and maritime spaces have long been neglected in the field of Jewish studies despite their relevance in the context of Jewish religious texts and historical narratives. The images of Noah’s ark, King Solomon’s maritime activities, or the miracle of the parting of the Red Sea immediately come to mind, yet they illustrate only a few aspects of Jewish maritime activity. Consequently, the relationship between Jews and the sea has to be seen in a much broader spatial and temporal framework in order to understand the overall importance of maritime spaces in Jewish history and culture.
Almost sixty years after Samuel Tolkowsky’s pivotal study on maritime Jewish history and culture and the publication of his book “They Took to the Sea” in 1964, this volume of PaRDeS seeks to follow these ideas, revisit Jewish history and culture from different maritime perspectives and shed new light on current research in the field, which brings together Jewish and maritime studies.
The articles in this volume therefore reflect a wide range of topics and illustrate how maritime perspectives can enrich our understanding of Jewish history and culture and its entanglement with the sea – especially in modern times. They study different spaces and examine their embedded narratives and functions. In one way or another, they follow the discussions of recent decades that focused on the importance of spatial dimensions and opened up possibilities for studying the production and construction of spaces, their influence on cultural practices and ideas, and the structures and changes of social processes. By taking these debates into account, the articles take us out to “sea” and offer new insights into Jewish history and culture from different maritime perspectives.
“One video fit for all”
(2023)
Online learning in mathematics has always been challenging, especially for mathematics in STEM education. This paper presents how to make “one fit for all” lecture videos for mathematics in STEM education – while acknowledging that, in general, there is no such thing as a “one fit for all” video. The curriculum requires a high level of prior knowledge in high-school mathematics, and the variation in prior knowledge among STEM students is often large. This creates challenges for both online and on-campus teaching. This article presents experiments and research on a video format that gives students a real-time feeling and fits their needs with respect to their existing prior knowledge. Students can ask questions and receive answers during the video without having to jump between different sources, which helps to reduce unnecessary distractions. The fundamental format presented here is the dynamic branching video, which has so far received little attention in educational research. One reason may be that the field is quite new to higher education and that the platforms available so far place relatively high demands on teachers’ video-editing skills. The videos were implemented for engineering students taking the Linear Algebra course at the Norwegian University of Science and Technology in spring 2023. Feedback gathered from students via anonymous surveys so far (N = 21) is very positive. Given its high suitability for online teaching, this video format might lead the trend of online learning in the future. The design and implementation of dynamic videos for mathematics in higher education was presented for the first time at the EMOOCs conference 2023.
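To make the branching idea concrete, the sketch below models a lesson as a small graph of clips with viewer-driven transitions. It is a minimal illustration only – the clip names, questions, and traversal logic are hypothetical and not taken from the paper.

```python
# Minimal sketch of a dynamic branching video as a clip graph.
# All clip names, questions, and the traversal logic are hypothetical
# illustrations, not the implementation used in the paper.

branches = {
    "intro": {"clip": "intro.mp4",
              "question": "Need a recap of high-school vectors?",
              "yes": "recap", "no": "main"},
    "recap": {"clip": "recap_vectors.mp4", "next": "main"},
    "main":  {"clip": "gaussian_elimination.mp4", "next": None},
}

def play(node):
    """Walk the branch graph, letting the viewer choose a path."""
    while node is not None:
        step = branches[node]
        print(f"Playing {step['clip']}")
        if "question" in step:
            answer = input(step["question"] + " [yes/no] ").strip().lower()
            node = step["yes"] if answer == "yes" else step["no"]
        else:
            node = step.get("next")

play("intro")
```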
“Israel am Meere”
(2023)
For Jews in Germany, the period following the Nazis’ rise to power in January 1933 was a period of decision-making on many levels: How should they respond to the persecution? If they decided to emigrate, many more decisions had to be made: How does one leave a country, and where should one go? A key moment in the process and in the cultural practice of emigration is the beginning of the sea voyage – when the need for departure and the hope for a new arrival jointly create a period of liminality. Looking at reports from sea voyages of exploration and emigration from the 1930s, this contribution discusses the question whether, and in what ways, such reflections can be read in the context of religious experiences and in the search for Jewish identities in times of turmoil.
“Creating a Maritime Future”
(2023)
This article explores the importance of the port city of Hamburg in the evolving discourses on the creation of a maritime future, a vision which became influential in the 1930s, 1940s and 1950s. While some Jewish representatives in the city aimed at preserving and intertwining Hanseatic and Jewish traditions in order to secure a Jewish presence in the port city under the pressure of the Nazi regime and thereafter, others wanted to create new emigration opportunities, especially to Mandatory Palestine, and create a Jewish maritime future in Eretz Israel. Different Zionist organizations supported the newly evolving maritime ideas, such as the “conquest of the sea”, and promoted the image of a Jewish seafaring nation. Despite the difficulties in the 1940s, these concepts gained influence post-1945 and led to the foundation of the fishery kibbutz “Zerubavel” in Blankenese/Hamburg. However, the idea of a Hanseatic Jewish future also remained influential and illustrates how differently a “Jewish maritime future” was imagined and used to link past, present and future.
‘Crazy Man-Killing Monsters’
(2023)
The Amazons have a long legacy in literature and the visual arts, extending from antiquity to the present day. Prior scholarship tends to treat the Amazons as hostile ‘Other’ figures, embodying the antithesis of Greco-Roman cultural norms. Recently, scholars have begun to examine positive portrayals of Amazons in contemporary media, as role models and heroic figures. However, there is a dearth of scholarship examining the Amazons’ inherently multifaceted nature, and their subsequent polarised reception in popular media.
This article builds upon the large body of scholarship on contemporary Amazon narratives, in which the figures of Wonder Woman and Xena, Warrior Princess dominate scholarly discourse. These ‘modern Amazon’ figures epitomise the dominant contemporary trend of portraying Amazons as strong female role models and feminist icons. To highlight the complexity of the Amazon image in contemporary media, this article examines the representation of the Amazons in the Supernatural episode ‘Slice Girls’ (S7 E13, 2012), where their portrayal as hostile, monstrous figures diverges greatly from the positive characterisation of Wonder Woman and Xena. I also consider the show’s engagement with ancient written sources, to examine how the writers draw upon the motifs of ancient Amazon narratives when crafting their unique Amazon characters. By contrasting the Amazons of ‘Slice Girls’ to contemporary figures and ancient narratives, this article examines how factors such as feminist ideology, narrative story arcs, characters’/audience’s perspectives and male bias shape the representation of Amazons post-antiquity.
In times of ongoing biodiversity loss, understanding how communities are structured and what mechanisms and local adaptations underlie the patterns we observe in nature is crucial for predicting how future ecological and anthropogenic changes might affect local and regional biodiversity. Aquatic zooplankton are a group of primary consumers that represent a critical link in the food chain, providing nutrients for the entire food web. Thus, understanding the adaptability and structure of zooplankton communities is essential. In this work, the genetic basis for the different temperature adaptations of two seasonally shifted (i.e., temperature-dependent) freshwater rotifers of a formerly cryptic species complex (Brachionus calyciflorus) was investigated to understand the overall genetic diversity and the evolutionary scenario for putative adaptations to different temperature regimes. Furthermore, this work aimed to clarify to what extent the different temperature adaptations may represent a niche partitioning process, thus enabling co-existence. The findings were then embedded in a metacommunity context to understand how zooplankton communities assemble in a kettle hole metacommunity located in the northeastern German "Uckermark" and which underlying processes contribute to the biodiversity patterns we observe. Using a combined approach of newly generated mitochondrial resources (genomes/cds) and the analysis of a candidate gene (Heat Shock Protein 40kDa) for temperature adaptation, I showed that the global representatives of B. calyciflorus s.s. are genetically more similar to each other than to B. fernandoi (average pairwise nucleotide diversity: 0.079 intraspecific vs. 0.257 interspecific), indicating that both species carry different standing genetic variation. In addition to being differentially expressed in the thermotolerant B. calyciflorus s.s. and the thermosensitive B. fernandoi, the HSP 40kDa also showed structural variation, with eleven fixed and six positively selected sites, some of which are located in functional areas of the protein. The estimated divergence time of ~25–29 Myr, combined with the fixed sites and a prevalence of ancestral amino acids in B. calyciflorus s.s., indicates that B. calyciflorus s.s. remained in the ancestral niche, while B. fernandoi partitioned into a new niche. The comparison of mitochondrial and nuclear markers (HSP 40kDa, ITS1, COI) revealed a hybridisation event between the two species. However, as hybridisation between the two species is rare, it can be concluded that the temporally isolated niches (i.e., seasonally shifted occurrence) they inhabit based on their different temperature preferences most likely represent a pre-zygotic isolation mechanism that allows sympatric occurrence while maintaining species boundaries. To determine the processes underlying zooplankton community assembly, a zooplankton metacommunity comprising 24 kettle holes was sampled over a two-year period. Active (i.e., water samples) and dormant communities (i.e., dormant eggs hatched from sediment) were identified using a two-fragment DNA metabarcoding approach (COI and 18S). Species richness and diversity as well as community composition were analysed considering spatial, temporal and environmental parameters.
The analysis revealed that environmental filtering based on parameters such as pH, the size and location of the habitat patch (i.e., kettle hole) and the surrounding field crops largely determined zooplankton community composition (explained variance: Bray-Curtis dissimilarities: 10.5%; Jaccard dissimilarities: 12.9%), indicating that adaptation to a particular habitat is a key feature of zooplankton species in this system. While the spatial configuration of the kettle holes played a minor role (explained variance: Bray-Curtis dissimilarities: 2.8%; Jaccard dissimilarities: 5.5%), the individual kettle hole sites had a significant influence on community composition. This suggests monopolisation/priority effects (i.e., via dormant communities) of certain species in individual kettle holes. As environmental filtering is the dominant process structuring zooplankton communities, this system could be significantly affected by future land-use change, pollution and climate change.
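As a side note on the diversity figures quoted above, average pairwise nucleotide diversity can be computed along the following lines; this is a minimal sketch with toy sequences, not data or code from the thesis.

```python
from itertools import combinations

def pairwise_diversity(seqs):
    """Average pairwise nucleotide diversity (pi) for aligned sequences
    of equal length, ignoring alignment gaps."""
    diffs = []
    for a, b in combinations(seqs, 2):
        sites = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
        diffs.append(sum(x != y for x, y in sites) / len(sites))
    return sum(diffs) / len(diffs)

# Toy alignment (hypothetical sequences, not data from the thesis):
intraspecific = ["ACGTACGT", "ACGTACGA", "ACGTACGT"]
print(round(pairwise_diversity(intraspecific), 3))
```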
Zimzum
(2023)
The Hebrew word zimzum originally means “contraction,” “withdrawal,” “retreat,” “limitation,” and “concentration.” In Kabbalah, zimzum is a term for God’s self-limitation, which precedes creation and makes it possible. Jewish mystic Isaac Luria coined this term in Galilee in the sixteenth century, positing that the God who was “Ein-Sof,” unlimited and omnipresent before creation, must concentrate himself in the zimzum and withdraw in order to make room for the creation of the world in God’s own center. At the same time, God also limits his infinite omnipotence to allow the finite world to arise. Without the zimzum there is no creation, making zimzum one of the basic concepts of Judaism.
The Lurianic doctrine of the zimzum has been considered an intellectual showpiece of the Kabbalah and of Jewish philosophy. The teaching of the zimzum has appeared in the Kabbalistic literature across Central and Eastern Europe, perhaps most famously in Hasidic literature up to the present day and in philosopher and historian Gershom Scholem’s epoch-making research on Jewish mysticism. The zimzum has fascinated Jewish and Christian theologians, philosophers, and writers like no other Kabbalistic teaching. This can be seen across the philosophy and cultural history of the twentieth century as it gained prominence among such diverse authors and artists as Franz Rosenzweig, Hans Jonas, Isaac Bashevis Singer, Harold Bloom, Barnett Newman, and Anselm Kiefer.
This book follows the traces of the zimzum across the Jewish and Christian intellectual history of Europe and North America over more than four centuries, where Judaism and Christianity, theosophy and philosophy, divine and human, mysticism and literature, Kabbalah and the arts encounter, mix, and cross-fertilize the interpretations and appropriations of this doctrine of God’s self-entanglement and limitation.
xMOOCs
(2023)
The World Health Organization designed OpenWHO.org to provide an inclusive and accessible online environment that equips learners across the globe with critical, up-to-date information so that they can effectively protect themselves in health emergencies. The platform thus focuses on the eXtended Massive Open Online Course (xMOOC) modality – content-focused and expert-driven, one-to-many modelled, and self-paced for scalable learning. In this paper, we describe how OpenWHO utilized xMOOCs to reach mass audiences during the COVID-19 pandemic; the paper specifically examines the accessibility, language inclusivity and adaptability of hosted xMOOCs. As of February 2023, OpenWHO had 7.5 million enrolments across 200 xMOOCs on health emergency, epidemic, pandemic and other public health topics available across 65 languages, including 46 courses targeted at the COVID-19 pandemic. Our results suggest that the xMOOC modality allowed OpenWHO to expand learning during the pandemic to previously underrepresented groups, including women, participants aged 70 and older, and learners younger than 20. The OpenWHO use case shows that xMOOCs should be considered when there is a need for massive knowledge transfer in health emergency situations, yet the approach should be context-specific according to the type of health emergency, targeted population and region. Our evidence also supports previous calls to put intervention elements that help remove barriers to access at the core of learning and health information dissemination. Equity must be the fundamental principle and organizing criterion for public health work.
Diversity is a term that is broadly used and challenging for informatics research, development and education. Diversity concerns may relate to unequal participation, knowledge and methodology, curricula, institutional planning, etc. For many of these areas, measures, guidelines and best practices on diversity awareness exist. A systemic, sustainable impact of diversity measures on informatics, however, is still largely missing. In this paper I explore what working with diversity and gender concepts in informatics entails, identify the main challenges and offer thoughts for improvement. The paper includes definitions of diversity and intersectionality, reflections on the disciplinary basis of informatics and practical implications of integrating diversity into informatics research and development. In the final part, two concepts from the social sciences and the humanities, the notion of “third space”/hybridity and the notion of a “feminist ethics of care”, serve as a lens to foster more sustainable ways of working with diversity in informatics.
Leveraging two cohort-specific pension reforms, this paper estimates the forward-looking effects of an exogenous increase in the working horizon on (un)employment behaviour for individuals with a long remaining statutory working life. Using difference-in-differences and regression discontinuity approaches based on administrative and survey data, I show that a longer legal working horizon increases individuals’ subjective expectations about the length of their work life, raises the probability of employment, decreases the probability of unemployment, and increases the intensity of job search among the unemployed. Heterogeneity analyses show that the demonstrated employment effects are strongest for women and in occupations with comparatively low physical intensity, i.e., occupations that can be performed at older ages.
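The difference-in-differences logic described above can be sketched in a few lines; the data frame, variable names, and clustering choice below are hypothetical illustrations, not the paper's actual specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical worker-level panel: 'treated' marks cohorts whose statutory
# retirement age was raised, 'post' marks observations after the reform.
df = pd.DataFrame({
    "employed": [1, 1, 0, 1, 0, 1, 1, 1],
    "treated":  [0, 0, 0, 0, 1, 1, 1, 1],
    "post":     [0, 1, 0, 1, 0, 1, 0, 1],
    "cohort":   [1, 1, 2, 2, 3, 3, 4, 4],
})

# The coefficient on treated:post is the difference-in-differences estimate.
model = smf.ols("employed ~ treated * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["cohort"]})
print(model.params["treated:post"])
```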
We analyze the impact of women’s managerial representation on the gender pay gap among employees at the establishment level, using German Linked-Employer-Employee Data from the years 2004 to 2018. To identify a causal effect, we employ a panel model with establishment fixed effects and industry-specific time dummies. Our results show that a higher share of women in management significantly reduces the gender pay gap within the firm. An increase in the share of women in first-level management, e.g., from zero to above 33 percent, decreases the adjusted gender pay gap from a baseline of 15 percent by 1.2 percentage points, i.e., to roughly 14 percent. The effect is stronger for women in second-level than in first-level management, indicating that women managers who interact more closely with their subordinates have a greater impact on the gender pay gap than women at higher management levels. The results are similar for East and West Germany, despite the lower gender pay gap and more gender-egalitarian social norms in East Germany. From a policy perspective, we conclude that increasing the number of women in management positions has the potential to reduce the gender pay gap to a limited extent. However, further policy measures will be needed in order to fully close the gender gap in pay.
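A minimal sketch of the described panel specification – establishment fixed effects plus industry-specific time dummies – might look as follows, assuming a long-format establishment-year data set; the file and all column names are illustrative, not the study's actual data.

```python
import pandas as pd
from linearmodels.panel import PanelOLS

# Hypothetical establishment-year panel (names are illustrative only):
# 'gap' is the adjusted within-establishment gender pay gap,
# 'fem_mgmt' the share of women in first-level management.
df = pd.read_csv("establishments.csv")
df = df.set_index(["establishment_id", "year"])

# Establishment fixed effects plus industry-specific time dummies,
# mirroring the identification strategy sketched in the abstract.
mod = PanelOLS.from_formula(
    "gap ~ fem_mgmt + C(industry):C(year) + EntityEffects",
    data=df, drop_absorbed=True)
print(mod.fit(cov_type="clustered", cluster_entity=True))
```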
With the recent growth of sensor deployments, cloud computing handles the data processing of many applications. Processing some of this data in the cloud, however, raises many concerns regarding, e.g., privacy, latency, or single points of failure. Alternatively, thanks to the development of embedded systems, smart wireless devices can share their computation capacity, creating a local wireless cloud for in-network processing. In this context, the processing of an application is divided into smaller jobs so that a device can run one or more jobs.
The contribution of this thesis to this scenario is divided into three parts. In part one, I focus on wireless aspects, such as power control and interference management, for deciding which jobs to run on which node and how to route data between nodes. Hence, I formulate optimization problems and develop heuristic and meta-heuristic algorithms to allocate wireless and computation resources. Additionally, to deal with multiple applications competing for these resources, I develop a reinforcement learning (RL) admission controller to decide which application should be admitted. Next, I look into acoustic applications to improve wireless throughput by using microphone clock synchronization to synchronize wireless transmissions.
In the second part, I work jointly with colleagues from the acoustic processing field to optimize both network and application (i.e., acoustic) qualities. My contribution focuses on the network part, where I study the relation between acoustic and network qualities when selecting a subset of microphones for collecting audio data or a subset of optional jobs for processing these data; too many microphones or too many jobs can lessen quality through unnecessary delays. Hence, I develop RL solutions to select the subset of microphones under network constraints while the speaker is moving, while still providing good acoustic quality. Furthermore, I show that autonomous vehicles carrying microphones improve the acoustic quality of different applications. Accordingly, I develop RL solutions (single- and multi-agent ones) for controlling these vehicles.
In the third part, I close the gap between theory and practice. I describe the features of my open-source framework used as a proof of concept for wireless in-network processing. Next, I demonstrate how to run some algorithms developed by colleagues from acoustic processing using my framework. I also use the framework for studying in-network delays (wireless and processing) using different distributions of jobs and network topologies.
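As a rough illustration of what an RL admission controller can look like, the toy ε-greedy Q-learning sketch below admits or rejects arriving applications based on current load. The state, reward, and environment are invented for illustration and are not the thesis's formulation.

```python
import random
from collections import defaultdict

# Toy epsilon-greedy Q-learning admission controller (illustration only).
# State: number of busy devices; actions: 0 = reject, 1 = admit.
Q = defaultdict(lambda: [0.0, 0.0])
alpha, gamma, eps, capacity = 0.1, 0.9, 0.1, 5

def step(busy, action):
    """Hypothetical environment: admitting earns reward but risks overload."""
    if action == 1:
        reward = 1.0 if busy < capacity else -5.0  # overload penalty
        busy = min(busy + 1, capacity)
    else:
        reward = 0.0
    busy = max(busy - random.randint(0, 1), 0)     # jobs finishing
    return busy, reward

busy = 0
for _ in range(10_000):
    a = random.randrange(2) if random.random() < eps \
        else max((0, 1), key=lambda x: Q[busy][x])
    nxt, r = step(busy, a)
    Q[busy][a] += alpha * (r + gamma * max(Q[nxt]) - Q[busy][a])
    busy = nxt

# Learned policy: admit/reject decision per load level.
print({s: max((0, 1), key=lambda x: Q[s][x]) for s in sorted(Q)})
```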
Sanctions are critical to the Security Council's efforts to fight terrorism. What is striking is that the Council's sanctions regimes are subject to detailed sets of rules and decision criteria. The scholarship on human rights in counterterrorism assumes that rights advocacy and court litigation have prompted this development. The article complements this literature by highlighting an unexplored internal driver of legal-regulatory decision-making and explores how mixed-motive interest constellations among Security Council members have affected the extent of committee regulations and the content of decisions taken by sanctions committees. Based on internal documents and diplomatic cables, a comparative analysis of the Iraq sanctions regime and the counterterrorism sanctions regime demonstrates that mixed-motive interest constellations among Security Council members provide incentives to elaborate rules to guide decision-making resulting in legal-regulatory sanctions governance, even if the human rights of targeted individuals are not at stake. For comparative leverage and to assess the limits of the proposed mechanism, the analysis is briefly extended to other sanctions regimes targeting individuals (Democratic Republic of the Congo and Sudan). The findings have implications for this essential tool of the Security Council to react to threats to peace as diverse as counterterrorism, nonproliferation, and internal armed conflict.
Background: The worldwide prevalence of diabetes has been increasing in recent years, with a projected 700 million patients by 2045, placing an economic burden on societies. Type 2 diabetes mellitus (T2DM), representing more than 95% of all diabetes cases, is a multifactorial metabolic disorder characterized by insulin resistance leading to an imbalance between insulin requirements and supply. Overweight and obesity are the main risk factors for developing type 2 diabetes mellitus. Lifestyle modification – following a healthy diet and engaging in physical activity – is the primary successful treatment and prevention method for type 2 diabetes mellitus. However, patients often do not achieve the recommended levels of physical activity. Electrical muscle stimulation (EMS) is an increasingly popular training method and has become a focus of research in recent years. It involves the external application of an electric field to muscles, which can lead to muscle contraction. Positive effects of EMS training have been found in healthy individuals as well as in various patient groups. New EMS devices offer a wide range of mobile applications for whole-body electrical muscle stimulation (WB-EMS) training, e.g., the intensification of dynamic low-intensity endurance exercises through WB-EMS. This dissertation project aims to investigate whether WB-EMS is suitable for intensifying low-intensity dynamic exercises such as walking and Nordic walking.
Methods: Two independent studies were conducted. The first study investigated the reliability of exercise parameters during the 10-meter Incremental Shuttle Walk Test (10MISWT) using superimposed WB-EMS (research question 1, sub-question a) and the difference in exercise intensity compared to conventional walking (CON-W; research question 1, sub-question b). The second study compared differences in exercise parameters between superimposed WB-EMS (WB-EMS-W) and conventional walking (CON-W), as well as between superimposed WB-EMS (WB-EMS-NW) and conventional Nordic walking (CON-NW), on a treadmill (research question 2). Both studies took place in participant groups of healthy, moderately active men aged 35-70 years. During all measurements, the Easy Motion Skin® WB-EMS low-frequency stimulation device with adjustable intensities for eight muscle groups was used. The current intensity was individually adjusted for each participant at each trial to ensure safety, avoiding pain and muscle cramps. In study 1, thirteen individuals were included for each sub-question. A randomized cross-over design with three measurement appointments was used to avoid confounding factors such as delayed-onset muscle soreness. The 10MISWT was performed until the participants no longer met the criteria of the test, while five outcome measures were recorded: peak oxygen uptake (VO2peak), relative VO2peak (rel.VO2peak), maximum walk distance (MWD), blood lactate concentration, and the rate of perceived exertion (RPE).
Eleven participants were included in study 2. A randomized cross-over design with four measurement appointments was used to avoid confounding factors. A treadmill test protocol at constant velocity (6.5 km/h) was developed to compare exercise intensities. Oxygen uptake (VO2), relative VO2 (rel.VO2), blood lactate, and the RPE were used as outcome variables. Test-retest reliability between measurements was determined using a compilation of absolute and relative measures of reliability. Outcome measures in study 2 were analysed using multifactorial analyses of variance.
Results: Reliability analysis showed good reliability for VO2peak, rel.VO2peak, MWD and RPE, with no statistically significant differences for WB-EMS-W during the 10MISWT. However, no differences in the outcome variables compared to conventional walking were found. The analysis of the treadmill tests showed significant effects of the factors CON/WB-EMS and W/NW on the outcome variables VO2, rel.VO2 and lactate, with both factors leading to higher results. However, the difference in VO2 and relative VO2 lies within the range of biological variability of ± 12%. The factor combination EMS∗W/NW is statistically non-significant for all three variables. WB-EMS resulted in higher RPE values; RPE differences for W/NW and EMS∗W/NW were not significant.
Discussion: The present project found good reliability for measuring VO2peak, rel.VO2peak, MWD and RPE during the 10MISWT with WB-EMS-W, confirming prior research on the test. In healthy, moderately active men, the test appears to be limited technically rather than physiologically. However, it is unsuitable for investigating differences in exercise intensity between WB-EMS-W and CON-W due to different perceptions of current intensity between exercise and rest. A treadmill test with constant walking speed was therefore conducted, with individually adjusted maximum tolerable current intensity, for the second part of the project. The treadmill test showed a significant increase in metabolic demands during WB-EMS-W and WB-EMS-NW, reflected in increased VO2 and blood lactate concentrations. However, the clinical relevance of these findings remains debatable. The study also found that WB-EMS-superimposed exercises are perceived as more strenuous than conventional exercise. While some comparable studies report higher VO2 values, our results are in line with those of other studies using the same frequency. Due to the minor clinical relevance, the use of WB-EMS as an exercise intensification tool during walking and Nordic walking is limited; the high device cost should also be considered. Habituation to WB-EMS could increase tolerance of the current intensity, raise VO2, and make WB-EMS a meaningful method in the treatment of T2DM. Recent figures show that WB-EMS is used by obese people to achieve health and weight goals. The supposed benefit should be investigated further scientifically.
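Test-retest reliability of the kind reported above is commonly summarized with intraclass correlation coefficients; a minimal sketch using the pingouin package (with invented values, not the study's data) might look as follows.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format test-retest data (column names illustrative):
# one VO2peak value per participant and session.
df = pd.DataFrame({
    "subject": [1, 1, 2, 2, 3, 3, 4, 4],
    "session": ["t1", "t2"] * 4,
    "vo2peak": [41.2, 40.8, 36.5, 37.1, 44.0, 43.2, 39.3, 39.9],
})

# Intraclass correlation as a relative measure of test-retest reliability.
icc = pg.intraclass_corr(data=df, targets="subject",
                         raters="session", ratings="vo2peak")
print(icc[["Type", "ICC", "CI95%"]])
```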
When moral authority speaks
(2023)
When are international organizations (IOs) responsive to the policy problems that motivated their establishment? While it is a conventional assumption that IOs exist to address transnational challenges, the question of whether and when IO policy-making is responsive to shifts in underlying problems has not been systematically explored. This study investigates the responsiveness of IOs from a large-n, comparative approach. Theoretically, we develop three alternative models of IO responsiveness, emphasizing severeness, dependence, and power differentials. Empirically, we focus on the domain of security, examining the responsiveness of eight multi-issue IOs to armed conflict between 1980 and 2015, using a novel and expansive dataset on IO policy decisions. Our findings suggest, first, that IOs are responsive to security problems and, second, that responsiveness is not primarily driven by dependence or power differentials but by problem severity. An in-depth study of the responsiveness of the UN Security Council using more granular data confirms these findings. As the first comparative study of whether and when IO policy adapts to problem severity, the article has implications for debates about IO responsiveness, performance, and legitimacy.
In an effort to describe and produce different formats of video instruction, the research community in technology-enhanced learning, and MOOC scholars in particular, have focused on the general style of video production: whether it is a digitally scripted “talk-and-chalk” or a “talking head” version of a learning unit. Since these production styles comprise various sub-elements, this paper deconstructs the inherited elements of video production in the context of educational live-streams. Analyzing over 700 videos – from both synchronous and asynchronous modalities of large video-based platforms (YouTube and Twitch) – we identified 92 features in eight categories of video production. These include commonly analyzed features such as the use of a green screen and a visible instructor, but also less-studied features such as social media connections and changing the camera perspective depending on the topic being covered. Overall, the results enable an analysis of common video production styles and provide a toolbox for categorizing new formats – independent of their final (a)synchronous use in MOOCs. Keywords: video production, MOOC video styles, live-streaming.
What is it good for?
(2023)
Military conflicts and wars affect a country’s development in various dimensions. Rising inflation is a potentially important economic effect associated with conflict. High inflation can undermine investment, weigh on private consumption, and threaten macroeconomic stability. Furthermore, these effects are not necessarily restricted to the locality of the conflict but can also spill over to other countries. Therefore, to understand how conflict affects the economy and to make a more comprehensive assessment of the costs of armed conflict, it is important to take inflationary effects into account. To disentangle the conflict-inflation nexus and quantify this relationship, we conduct a panel analysis for 175 countries over the period 1950–2019. To capture indirect inflationary effects, we construct a distance-based spillover index. In general, the results of our analysis confirm a statistically significant positive direct association between conflicts and inflation rates. This finding is robust across various model specifications. Moreover, our results indicate that conflict-induced inflation is not solely driven by increasing money supply. Furthermore, we document a statistically significant positive indirect association between conflicts and inflation rates in uninvolved countries.
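A distance-based spillover index of the general kind described can be sketched as an inverse-distance-weighted sum of conflict intensity in other countries; the weighting below is one plausible construction for illustration, not necessarily the paper's exact definition.

```python
import numpy as np

# Sketch of a distance-based conflict spillover index: each country's
# exposure is the inverse-distance-weighted sum of conflict elsewhere.
def spillover_index(conflict, dist_km):
    """conflict: (n,) conflict intensities; dist_km: (n, n) distances."""
    with np.errstate(divide="ignore"):
        w = 1.0 / dist_km
    np.fill_diagonal(w, 0.0)              # exclude the country itself
    w /= w.sum(axis=1, keepdims=True)     # row-normalise the weights
    return w @ conflict

# Toy example: country 1 is in conflict; countries 0 and 2 are exposed.
conflict = np.array([0.0, 1.0, 0.0])
dist_km = np.array([[0.0, 500.0, 2000.0],
                    [500.0, 0.0, 1500.0],
                    [2000.0, 1500.0, 0.0]])
print(spillover_index(conflict, dist_km))
```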
What does stunting tell us?
(2023)
Stunting is commonly linked with undernutrition. Yet, already after World War I, German pediatricians questioned this link and stated that no association exists between nutrition and height. Recent analyses within different populations of low- and middle-income countries with high rates of stunted children failed to support the assumption that stunted children have a low BMI and reduced skinfold thickness as signs of severe caloric deficiency. Stunting is thus not a synonym for malnutrition. Parental education level has a positive influence on body height in stunted populations, e.g., in India and Indonesia. Socially disadvantaged children tend to be shorter and lighter than children from affluent families.
Humans are social mammals; they regulate growth similarly to other social mammals. In humans, too, body height is strongly associated with position within the social hierarchy, reflecting the personal and group-specific social, economic, political, and emotional environment. These non-nutritional factors influencing growth are summarized by the concept of SEPE (Social-Economic-Political-Emotional) factors. SEPE reflects prestige, dominance-subordination, social identity, and the ego motivation of individuals and social groups.
Web scraping, a technique for extracting data from web pages, has been in use for decades, yet its utilization in the field of migration, mobility, and migrant integration studies has been limited. The field faces notorious limitations regarding data access and availability, particularly in low-income settings. Web scraping has the potential to provide new datasets for further qualitative and quantitative analysis. Web scraping requires no financial resources, is agnostic to epistemic divides in the field, reduces researcher bias, and increases transparency and replicability of data collection. As large providers of digital data such as Facebook or Twitter increasingly restrict access to their data for researchers, web scraping will become more important in the future and deserves its place in the toolbox of migration and mobility scholars. This short and nontechnical methods note introduces the fundamental concepts of web scraping, provides guidance on how to learn the technique, showcases practical applications of web scraping in the study of migrant populations, and discusses potential future use cases.
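A minimal scraping sketch in the spirit of the note follows; the URL and CSS selectors are placeholders for whatever page structure a researcher targets, and a site's terms of service and robots.txt should always be checked first.

```python
import requests
from bs4 import BeautifulSoup

# Placeholder URL and selectors; adapt to the actual page structure.
url = "https://example.org/listings"
html = requests.get(url, headers={"User-Agent": "research-bot"},
                    timeout=30).text

soup = BeautifulSoup(html, "html.parser")
records = []
for row in soup.select("div.listing"):          # hypothetical page structure
    records.append({
        "title": row.select_one("h2").get_text(strip=True),
        "location": row.select_one(".location").get_text(strip=True),
    })
print(len(records), "records scraped")
```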
Weathering the storm?
(2023)
Democratization scholars are currently debating whether we are indeed witnessing a third wave of autocratization. While this has led to an extensive debate about the future of the liberal international order, we still know relatively little about the consequences of autocratization for international organizations (IOs). In this article, we explore to what extent autocratization has led to changes in the composition of IO membership. We propose three different ways of conceptualizing the autocratization of IO membership. We argue that we should move away from a dichotomous understanding of regime type and regime change and focus instead on the composition of subregime types to understand current developments. We build on updated membership data for 73 IOs through 2020 to map membership configurations based on the V-Dem Electoral Democracy Index. Contrary to current debates on the crisis of the liberal order, we find that many IOs are not (yet) affected by a broad autocratization of their membership that would endanger democratic majorities or overall democratic densities. However, we also observe the disappearance of formerly homogeneous democratic clubs due to democratic backsliding in a number of European and Latin American IO member states, as well as a return of autocratic clubs in Southeast Asia and Southern Africa. These findings have important implications for the broader research agenda on international democracy promotion and human rights protection as well as for the study of the legitimacy and effectiveness of international organizations.
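One simple way to operationalize membership composition, sketched below with hypothetical file and column names, is the share of member states above a chosen V-Dem electoral-democracy cutoff per IO-year; the article's actual coding may differ.

```python
import pandas as pd

# Hypothetical membership table with one row per IO, year, and member state;
# 'vdem_polyarchy' is the member's V-Dem Electoral Democracy Index score.
members = pd.read_csv("io_membership.csv")

# Democratic density: share of members above an illustrative 0.5 cutoff.
density = (members.assign(democratic=members["vdem_polyarchy"] >= 0.5)
                  .groupby(["io", "year"])["democratic"]
                  .mean()
                  .rename("democratic_density"))
print(density.head())
```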
Hybrid nanomaterials offer the combination of individual properties of different types of nanoparticles. Some strategies for the development of new nanostructures in larger scale rely on the self-assembly of nanoparticles as a bottom-up approach. The use of templates provides ordered assemblies in defined patterns. In a typical soft-template, nanoparticles and other surface-active agents are incorporated into non-miscible liquids. The resulting self-organized dispersions will mediate nanoparticle interactions to control the subsequent self-assembly. Especially interactions between nanoparticles of very different dispersibility and functionality can be directed at a liquid-liquid interface.
In this project, water-in-oil microemulsions were formulated from quasi-ternary mixtures with Aerosol-OT as surfactant. Oleyl-capped superparamagnetic iron oxide and/or silver nanoparticles were incorporated in the continuous organic phase, while polyethyleneimine-stabilized gold nanoparticles were confined in the dispersed water droplets. Each type of nanoparticle can modulate the surfactant film and the inter-droplet interactions in diverse ways, and their combination causes synergistic effects. Interfacial assemblies of nanoparticles resulted after phase separation. On the one hand, from a biphasic Winsor type II system at low surfactant concentration, drop-casting of the upper phase afforded thin films of ordered nanoparticles in filament-like networks. Detailed characterization proved that this templated assembly over a surface is based on the controlled clustering of nanoparticles and the elongation of the microemulsion droplets. This process is versatile enough to accommodate different nanoparticle compositions with the same surface functionalization, in different solvents and over different surfaces. On the other hand, a magnetic heterocoagulate was formed at higher surfactant concentration, whose phase transfer from oleic acid to water was possible with another auxiliary surfactant in an ethanol-water mixture. When the original components were initially mixed under heating, defined oil-in-water, magnetic-responsive nanostructures were obtained, consisting of water-dispersible nanoparticle domains embedded in a matrix shell of oil-dispersible nanoparticles.
Herein, two different approaches were demonstrated to form diverse hybrid nanostructures from reverse microemulsions as self-organized dispersions of the same components. This shows that microemulsions are versatile soft templates not only for the synthesis of nanoparticles but also for their self-assembly, which suggests new approaches towards the production of sophisticated nanomaterials on a larger scale.
Volcanoes are one of the Earth’s most dynamic zones and responsible for many changes in our planet. Volcano seismology aims to provide an understanding of the physical processes in volcanic systems and anticipate the style and timing of eruptions by analyzing the seismic records. Volcanic tremor signals are usually observed in the seismic records before or during volcanic eruptions. Their analysis contributes to evaluate the evolving volcanic activity and potentially predict eruptions. Years of continuous seismic monitoring now provide useful information for operational eruption forecasting. The continuously growing amount of seismic recordings, however, poses a challenge for analysis, information extraction, and interpretation, to support timely decision making during volcanic crises. Furthermore, the complexity of eruption processes and precursory activities makes the analysis challenging.
A challenge in studying seismic signals of volcanic origin is the coexistence of transient signal swarms and long-lasting volcanic tremor signals. Separating transient events from volcanic tremors can, therefore, contribute to improving our understanding of the underlying physical processes. Similar issues (data reduction, source separation, extraction, and classification) are addressed in the context of music information retrieval (MIR), and the signal characteristics of acoustic and seismic recordings share a number of similarities. This thesis goes beyond the classical signal analysis techniques usually employed in seismology by exploiting these similarities and building the information retrieval strategy on the expertise developed in the field of MIR.
First, inspired by the idea of harmonic–percussive separation (HPS) in musical signal processing, I have developed a method to extract harmonic volcanic tremor signals and to detect transient events in seismic recordings. This provides a clean tremor signal suitable for tremor investigation, along with a characteristic function suitable for earthquake detection. Second, using HPS algorithms, I have developed a noise reduction technique for seismic signals. This method is especially useful for denoising ocean-bottom seismometers, which are highly contaminated by noise. The advantage of this method over other denoising techniques is that it does not introduce distortion to broadband earthquake waveforms, which makes it reliable for different applications in passive seismological analysis. Third, to address the challenge of extracting information from high-dimensional data and investigating complex eruptive phases, I have developed an advanced machine learning model that results in a comprehensive signal processing scheme for volcanic tremors. Using this method, seismic signatures of major eruptive phases can be detected automatically. This helps to provide a chronology of the volcanic system. The model is also capable of detecting weak precursory volcanic tremors prior to an eruption, which could be used as an indicator of imminent eruptive activity. The extracted patterns of seismicity and their temporal variations finally provide an explanation for the transition mechanism between eruptive phases.
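The HPS idea transfers directly from audio to seismic spectrograms. The sketch below uses librosa's generic harmonic-percussive decomposition as a stand-in for the seismology-specific method developed in the thesis; the synthetic trace and all parameters are illustrative.

```python
import numpy as np
import librosa

# Stand-in for one hour of a seismic trace sampled at 200 Hz.
trace = np.random.randn(200 * 3600).astype(np.float32)

# Median-filter-based HPS on the magnitude spectrogram: the harmonic part
# captures sustained tremor bands, the percussive part transient events.
S = librosa.stft(trace, n_fft=1024, hop_length=256)
H, P = librosa.decompose.hpss(np.abs(S), kernel_size=31, margin=2.0)

# Reconstruct both components with the original phase.
tremor = librosa.istft(H * np.exp(1j * np.angle(S)), hop_length=256)
transients = librosa.istft(P * np.exp(1j * np.angle(S)), hop_length=256)
print(tremor.shape, transients.shape)
```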
Founded in 2013, OpenClassrooms is a French online learning company that offers both paid courses and free MOOCs on a wide range of topics, including computer science and education. In 2021, in partnership with the EDA research unit, OpenClassrooms shared a database to address the problem of how to increase persistence in its paid courses, which consist of a series of MOOCs and human mentoring. Our statistical analysis aims to identify reasons for dropout that are due to the course design rather than to demographic predictors or external factors. We aim to identify at-risk students, i.e. those who are on the verge of dropping out at a specific moment. To achieve this, we use learning analytics to characterize student behavior. We conducted data analysis on a sample of data related to the “Web Designers” and “Instructional Design” courses. By visualizing the student flow and constructing speed and acceleration predictors, we can identify which parts of the course need to be calibrated and when particular attention should be paid to these at-risk students.
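The speed and acceleration predictors can be illustrated on hypothetical activity logs: speed is progress per unit time and acceleration its change. The column names below are invented, as the OpenClassrooms schema is not described here.

```python
import pandas as pd

# Hypothetical activity logs: chapters completed per student per week.
logs = pd.DataFrame({
    "student": ["a"] * 4 + ["b"] * 4,
    "week": [1, 2, 3, 4] * 2,
    "chapters_done": [2, 5, 9, 14, 3, 4, 4, 4],
})

logs = logs.sort_values(["student", "week"])
logs["speed"] = logs.groupby("student")["chapters_done"].diff()   # chapters/week
logs["acceleration"] = logs.groupby("student")["speed"].diff()    # change in speed
# Stalling students (speed near zero, non-positive acceleration) are
# natural candidates for at-risk flags.
print(logs)
```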
The management of knowledge in organizations considers both established long-term processes and cooperation in agile project teams. Since knowledge can be both tacit and explicit, its transfer from the individual to the organizational knowledge base poses a challenge in organizations. This challenge increases when the fluctuation of knowledge carriers is exceptionally high. Especially in large projects in which external consultants are involved, there is a risk that critical, company-relevant knowledge generated in the project will leave the company with the external knowledge carrier and thus be lost. In this paper, we show the advantages of an early warning system for knowledge management to avoid this loss. In particular, the potential of visual analytics in the context of knowledge management systems is presented and discussed. We present a project for the development of a business-critical software system and discuss the first implementations and results.
In the last century, several astronomical measurements have supported that a significant percentage (about 22%) of the total mass of the Universe, on galactic and extragalactic scales, is composed of a mysterious ”dark” matter (DM). DM does not interact with the electromagnetic force; in other words, it does not reflect, absorb or emit light. It is possible that DM particles are weakly interacting massive particles (WIMPs) that can annihilate (or decay) into Standard Model (SM) particles, and modern very-high-energy (VHE; > 100 GeV) instruments such as imaging atmospheric Cherenkov telescopes (IACTs) can play an important role in constraining the main properties of such DM particles by detecting these products. Among the most promising targets in which to look for a DM signal are dwarf spheroidal galaxies (dSphs), as they are expected to be highly DM-dominated objects with a clean, gas-free environment. Given the angular resolution of IACTs, some dSphs should be treated as extended sources, and this resolution is adequate to detect extended emission from them. For this reason, we performed an extended-source analysis, taking into account in the unbinned maximum likelihood estimation both the energy and the angular extension of the observed events. The goal was to set more constraining upper limits on the velocity-averaged annihilation cross-section of WIMPs with VERITAS data. VERITAS is an array of four IACTs, able to detect γ-ray photons ranging between 100 GeV and 30 TeV. The results of this extended analysis were compared against the traditional spectral analysis. We found that a 2D analysis may lead to more constraining results, depending on the DM mass, channel, and source. Moreover, this thesis also presents the results of a multi-instrument project, whose goal was to combine already published data on 20 dSphs from five different experiments – Fermi-LAT, MAGIC, H.E.S.S., VERITAS and HAWC – in order to set upper limits on the WIMP annihilation cross-section in the widest mass range ever reported.
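The unbinned likelihood approach can be illustrated with a toy two-component fit in energy and angular offset; a real IACT analysis additionally folds in the instrument response, backgrounds, and the DM spatial profile, so the sketch below is schematic only.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy event sample: energies E (TeV) and angular offsets theta (deg),
# all values hypothetical.
rng = np.random.default_rng(0)
E = rng.exponential(1.0, 500)
theta = rng.rayleigh(0.1, 500)

def pdf_sig(E, th):   # 2D signal model: soft spectrum x extended profile
    return np.exp(-E) * (th / 0.15**2) * np.exp(-th**2 / (2 * 0.15**2))

def pdf_bkg(E, th):   # flatter background spectrum x broader profile
    return 0.5 * np.exp(-0.5 * E) * (th / 0.3**2) * np.exp(-th**2 / (2 * 0.3**2))

def nll(f_sig):
    """Unbinned negative log-likelihood in the signal fraction."""
    return -np.sum(np.log(f_sig * pdf_sig(E, theta)
                          + (1 - f_sig) * pdf_bkg(E, theta)))

fit = minimize_scalar(nll, bounds=(0.0, 1.0), method="bounded")
print("fitted signal fraction:", round(fit.x, 3))
```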
In the last two centuries BC, with the Republic limping towards its end, the cultivated ruling elite began to lose its moral and political authority. Its members not only held themselves responsible for the so-called crisis of tradition, but at the same time also conveyed the impression of a loss of memory, as if all Romans were suffering from some kind of amnesia or identity crisis. In particular, institutional figures such as pontiffs and augurs, who had preserved Rome’s memory throughout its history, were accused of neglecting their duties and, by extension, of allowing ancient practices and values to slowly disappear. Accordingly, Cicero and Varro, both perfect representatives of this elite, employed recurrent terms such as neglect (neglegentia/neglegere), involuntary abandonment (amittere), oblivion (oblivio), the vanishing of institutions (evanescere), and ignorance (ignoratio/ignorare) to describe this critical loss of information; they depicted the citizenry of Rome (civitas) as disoriented and estranged, incapable of sharing any common knowledge or values.
This article proposes several conceptual frameworks for examining the widespread use of classical intertexts depicting the supernatural in popular media. Whether the supernatural is viewed as reality or simply a trope, it represents the human capacity and desire to explore worlds and meanings beyond the obvious and mundane. Representations of classical gods, heroes, and monsters evoke the power of mythic stories to probe and explain human psychology, social concerns, philosophical questions, and religious beliefs, including belief about the paranormal and supernatural. The entertainment value of popular media allows creators and audiences to engage with larger issues in non-dogmatic and playful ways that help them negotiate tensions among various beliefs and identities. This paper also gives an overview of the other articles in this journal issue, showing overlapping themes and patterns that connect with these tensions. By combining knowledge of classical myths in their original contexts with knowledge about contemporary culture, classical scholars contribute unique perspectives about why classical intertexts dominate in popular media today.
Most machine learning methods provide only point estimates when queried to predict on new data. This is problematic when the data is corrupted by noise, e.g. from imperfect measurements, or when the queried data point is very different from the data that the machine learning model has been trained on. Probabilistic modelling in machine learning naturally equips predictions with corresponding uncertainty estimates, which allows a practitioner to incorporate information about measurement noise into the modelling process and to know when not to trust the predictions. A well-understood, flexible probabilistic framework is provided by Gaussian processes, which are ideal building blocks of probabilistic models. They lend themselves naturally to the problem of regression, i.e., being given a set of inputs and corresponding observations and then predicting likely observations for new unseen inputs, and can also be adapted to many more machine learning tasks. However, exactly inferring the optimal parameters of such a Gaussian process model (in a computationally tractable manner) is only possible for regression tasks in small data regimes. Otherwise, approximate inference methods are needed, the most prominent of which is variational inference.
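For concreteness, exact GP regression in the small-data regime mentioned here reduces to a few linear-algebra steps (following the standard Cholesky-based formulation); the kernel, data, and hyperparameters below are toy choices.

```python
import numpy as np

# Squared-exponential kernel with fixed hyperparameters (toy choices).
def rbf(a, b, lengthscale=1.0, variance=1.0):
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

X = np.array([-2.0, -1.0, 0.0, 1.5])        # training inputs
y = np.sin(X)                                # toy observations
Xs = np.linspace(-3, 3, 7)                   # test inputs
noise = 1e-2                                 # observation noise variance

K = rbf(X, X) + noise * np.eye(len(X))
Ks = rbf(X, Xs)
Kss = rbf(Xs, Xs)

# Cholesky-based exact posterior (mean and covariance at the test inputs).
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
mean = Ks.T @ alpha                          # posterior mean
v = np.linalg.solve(L, Ks)
cov = Kss - v.T @ v                          # posterior covariance
print(mean.round(3), np.sqrt(np.diag(cov)).round(3))
```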
In this dissertation we study models that are composed of Gaussian processes embedded in other models in order to make those more flexible and/or probabilistic. The first example are deep Gaussian processes which can be thought of as a small network of Gaussian processes and which can be employed for flexible regression. The second model class that we study are Gaussian process state-space models. These can be used for time-series modelling, i.e., the task of being given a stream of data ordered by time and then predicting future observations. For both model classes the state-of-the-art approaches offer a trade-off between expressive models and computational properties (e.g. speed or convergence properties) and mostly employ variational inference. Our goal is to improve inference in both models by first getting a deep understanding of the existing methods and then, based on this, to design better inference methods. We achieve this by either exploring the existing trade-offs or by providing general improvements applicable to multiple methods.
We first provide an extensive background, introducing Gaussian processes and their sparse (approximate and efficient) variants. We continue with a description of the models under consideration in this thesis, deep Gaussian processes and Gaussian process state-space models, including detailed derivations and a theoretical comparison of existing methods.
Then we start analysing deep Gaussian processes more closely: Trading off the properties (good optimisation versus expressivity) of state-of-the-art methods in this field, we propose a new variational inference based approach. We then demonstrate experimentally that our new algorithm leads to better calibrated uncertainty estimates than existing methods.
Next, we turn our attention to Gaussian process state-space models, where we closely analyse the theoretical properties of existing methods. The understanding gained in this process leads us to propose a new inference scheme for general Gaussian process state-space models that incorporates effects on multiple time scales. This method is more efficient than previous approaches for long time series and outperforms its comparison partners on data sets in which effects on multiple time scales (fast and slowly varying dynamics) are present.
Finally, we propose a new inference approach for Gaussian process state-space models that trades off the properties of state-of-the-art methods in this field. By combining variational inference with another approximate inference method, the Laplace approximation, we design an efficient algorithm that outperforms its comparison partners since it achieves better calibrated uncertainties.
This thesis explores the variation in coreference patterns across language modes (i.e., spoken and written) and text genres. The significance of research on variation in language use has been emphasized in a number of linguistic studies. For instance, Biber and Conrad [2009] state that “register/genre variation is a fundamental aspect of human language” and “Given the ubiquity of register/genre variation, an understanding of how linguistic features are used in patterned ways across text varieties is of central importance for both the description of particular languages and the development of cross-linguistic theories of language use.”[p.23]
We examine the variation across genres with the primary goal of contributing to the body of knowledge on the description of language use in English. On the computational side, we believe that incorporating linguistic knowledge into learning-based systems can boost the performance of automatic natural language processing systems, particularly for non-standard texts. Therefore, in addition to their descriptive value, the linguistic findings we provide in this study may prove to be helpful for improving the performance of automatic coreference resolution, which is essential for a good text understanding and beneficial for several downstream NLP applications, including machine translation and text summarization.
In particular, we study a genre of texts that is formed of conversational interactions on the well-known social media platform Twitter. Two factors motivate us: First, Twitter conversations are realized in written form but resemble spoken communication [Scheffler, 2017], and therefore they form an atypical genre for the written mode. Second, while Twitter texts are a complicated genre for automatic coreference resolution, due to their widespread use in the digital sphere, at the same time they are highly relevant for applications that seek to extract information or sentiments from users’ messages. Thus, we are interested in discovering more about the linguistic and computational aspects of coreference in Twitter conversations. We first created a corpus of such conversations for this purpose and annotated it for coreference. We are interested in not only the coreference patterns but the overall discourse behavior of Twitter conversations. To address this, in addition to the coreference relations, we also annotated the coherence relations on the corpus we compiled. The corpus is available online in a newly developed form that allows for separating the tweets from their annotations.
This study consists of three empirical analyses in which we independently apply corpus-based, psycholinguistic and computational approaches to the investigation of variation in coreference patterns in a complementary manner. (1) We first make a descriptive analysis of variation across genres through a corpus-based study. We investigate the linguistic aspects of nominal coreference in Twitter conversations and determine how this genre relates to other text genres in the spoken and written modes. In addition to variation across genres, differences between the spoken and written modes have been a focus of linguistic research since Woolbert [1922]. (2) In order to investigate whether the language mode alone has any effect on coreference patterns, we carry out a crowdsourced experiment and analyze the patterns in the same genre for both spoken and written modes. (3) Finally, we explore the potential of domain adaptation of automatic coreference resolution (ACR) for conversational Twitter data. In order to answer the question of how the genre of Twitter conversations relates to other genres in the spoken and written modes with respect to coreference patterns, we employ a state-of-the-art neural ACR model [Lee et al., 2018] to examine whether ACR on Twitter conversations benefits from mode-based separation in out-of-domain training data.
Individuals with aphasia vary in the speed and accuracy with which they perform sentence comprehension tasks. Previous results indicate that the performance patterns of individuals with aphasia vary between tasks (e.g., Caplan, DeDe, & Michaud, 2006; Caplan, Michaud, & Hufford, 2013a). Similarly, it has been found that the comprehension performance of individuals with aphasia varies between homogeneous test sentences within and between sessions (e.g., McNeil, Hageman, & Matthews, 2005). These studies ascribed the variability in the performance of individuals with aphasia to random noise. This conclusion would be in line with an influential theory on sentence comprehension in aphasia, the resource reduction hypothesis (Caplan, 2012). However, previous studies did not directly compare variability in language-impaired and language-unimpaired adults. Thus, it is still unclear how variability in sentence comprehension differs between individuals with and without aphasia. Furthermore, the previous studies were exclusively carried out in English. Therefore, the findings on variability in sentence processing in English still need to be replicated in a different language.
This dissertation aims to give a systematic overview of the patterns of variability in sentence comprehension performance in aphasia in German and, based on this overview, to put the resource reduction hypothesis to the test. In order to reach the first aim, variability was considered on three different dimensions (persons, measures, and occasions), following the classification by Hultsch, Strauss, Hunter, and MacDonald (2011). At the dimension of persons, the thesis compared the performance of individuals with aphasia and language-unimpaired adults. At the dimension of measures, this work explored performance across different sentence comprehension tasks (object manipulation, sentence-picture matching). Finally, at the dimension of occasions, this work compared the performance in each task between two test sessions. Several methods were combined in the study of variability in order to obtain a large and diverse database. In addition to the offline comprehension tasks, the self-paced listening paradigm and the visual world eye-tracking paradigm were used in this work.
The findings are in line with the previous results. As in the previous studies, variability in sentence comprehension in individuals with aphasia emerged between test sessions and between tasks. Additionally, it was possible to characterize the variability further using hierarchical Bayesian models. For individuals with aphasia, it was shown that both between-task and between-session variability are unsystematic. In contrast to that, language-unimpaired individuals exhibited systematic differences between measures and between sessions. However, these systematic differences occurred only in the offline tasks. Hence, variability in sentence comprehension differed between language-impaired and language-unimpaired adults, and this difference could be narrowed down to the offline measures.
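To make the statistical modelling concrete, the following is a minimal sketch, assuming a trial-level logistic model with person, task, and session effects; the simulated data, variable names, and priors are hypothetical and not taken from the dissertation:

```python
# Hedged sketch of a hierarchical Bayesian model of comprehension accuracy
# with a person-level random intercept plus task and session effects.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n_persons, n_trials = 20, 40
person = np.repeat(np.arange(n_persons), n_trials)
task = rng.integers(0, 2, size=person.size)      # 0 = object manipulation, 1 = sentence-picture matching
session = rng.integers(0, 2, size=person.size)   # first vs. second test session
accuracy = rng.integers(0, 2, size=person.size)  # simulated binary responses

with pm.Model():
    intercept = pm.Normal("intercept", 0.0, 1.5)
    beta_task = pm.Normal("beta_task", 0.0, 1.0)
    beta_session = pm.Normal("beta_session", 0.0, 1.0)
    sigma_person = pm.HalfNormal("sigma_person", 1.0)
    z_person = pm.Normal("z_person", 0.0, 1.0, shape=n_persons)
    logit_p = (intercept + sigma_person * z_person[person]
               + beta_task * task + beta_session * session)
    pm.Bernoulli("acc", logit_p=logit_p, observed=accuracy)
    idata = pm.sample(1000, tune=1000, target_accept=0.9)
```

Comparing the posterior spread of the task and session effects between groups is one way to ask whether between-task and between-session differences are systematic or unsystematic.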
Based on this overview of the patterns of variability, the resource reduction hypothesis was evaluated. According to the hypothesis, the variability in the performance of individuals with aphasia can be ascribed to random fluctuations in the resources available for sentence processing. Given that the performance of the individuals with aphasia varied unsystematically, the results support the resource reduction hypothesis. Furthermore, the thesis proposes that the differences in variability between language-impaired and language-unimpaired adults can also be explained by the resource reduction hypothesis. More specifically, it is suggested that the systematic changes in the performance of language-unimpaired adults are due to decreasing fluctuations in available processing resources. In parallel, the unsystematic variability in the performance of individuals with aphasia could be due to constant fluctuations in available processing resources. In conclusion, the systematic investigation of variability contributes to a better understanding of language processing in aphasia and thus enriches aphasia research.
Satisfaction and frustration of the needs for autonomy, competence, and relatedness, as assessed with the 24-item Basic Psychological Need Satisfaction and Frustration Scale (BPNSFS), have been found to be crucial indicators of individuals’ psychological health. To increase the usability of this scale within a clinical and health services research context, we aimed to validate a German short version (12 items) of this scale in individuals with depression including the examination of the relations from need frustration and need satisfaction to ill-being and quality of life (QOL). This cross-sectional study involved 344 adults diagnosed with depression (mean age = 47.5 years, SD = 11.1; 71.8% female). Confirmatory factor analyses indicated that the short version of the BPNSFS was not only reliable, but also fitted a six-factor structure (i.e., satisfaction/frustration × type of need). Subsequent structural equation modeling showed that need frustration related positively to indicators of ill-being and negatively to QOL. Surprisingly, need satisfaction did not predict differences in ill-being or QOL. The short form of the BPNSFS represents a practical instrument to measure need satisfaction and frustration in people with depression. Further, the results support recent evidence on the importance of especially need frustration in the prediction of psychopathology.
Air pollution has been a persistent global problem for the past several hundred years. While some industrialized nations have shown improvements in their air quality through stricter regulation, others have experienced declines as they rapidly industrialize. The WHO’s 2021 update of its recommended air pollution limit values reflects the substantial impacts of pollutants such as NO2 and O3 on human health, as recent epidemiological evidence suggests considerable long-term health effects of air pollution even at low concentrations. Alongside developments in our understanding of air pollution's health impacts, low-cost sensors (LCS) have been taken up by both academia and industry as a new method for measuring air pollution. Due primarily to their lower cost and smaller size, they can be used in a variety of different applications, including the development of higher-resolution measurement networks, source identification, and measurements of air pollution exposure. While significant efforts have been made to accurately calibrate LCS against reference instrumentation with various statistical models, accuracy and precision remain limited by variable sensor sensitivity. Furthermore, standard procedures for calibration still do not exist, and most proprietary calibration algorithms are black-box and inaccessible to the public. This work seeks to expand the knowledge base on LCS in several ways: 1) by developing an open-source calibration methodology; 2) by deploying LCS at high spatial resolution in urban environments to test their capability to measure microscale changes in urban air pollution; 3) by connecting LCS deployments with the implementation of local mobility policies to provide policy advice on the resulting changes in air quality.
In a first step, it was found that LCS can be consistently calibrated with good performance against reference instrumentation using seven general steps: 1) assessing raw data distribution, 2) cleaning data, 3) flagging data, 4) model selection and tuning, 5) model validation, 6) exporting final predictions, and 7) calculating associated uncertainty. By emphasizing the need for consistent reporting of details at each step, most crucially on model selection, validation, and performance, this work pushed forward with the effort towards standardization of calibration methodologies. In addition, with the open-source publication of code and data for the seven-step methodology, advances were made towards reforming the largely black-box nature of LCS calibrations.
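As an illustration of how the seven steps could map onto code, here is a hedged sketch using a random forest as one candidate regression model; the synthetic data, feature names, and thresholds are assumptions, not the thesis's actual pipeline:

```python
# Illustrative seven-step LCS calibration workflow (simplified).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Steps 1-2: assess raw data distribution and clean (synthetic stand-in for
# co-location data of an LCS and a reference monitor).
rng = np.random.default_rng(42)
n = 2000
temp = rng.uniform(0, 30, n)
rh = rng.uniform(20, 90, n)
true_no2 = rng.gamma(4.0, 8.0, n)                       # "reference" NO2 in ug/m3
raw_signal = 0.8 * true_no2 + 0.3 * temp - 0.1 * rh + rng.normal(0, 3, n)
df = pd.DataFrame({"raw": raw_signal, "temp": temp, "rh": rh, "ref": true_no2}).dropna()

# Step 3: flag implausible readings before modelling.
df = df[df["ref"] >= 0]

# Steps 4-5: model selection/tuning and validation on held-out data.
X, y = df[["raw", "temp", "rh"]], df["ref"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, shuffle=False)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)

# Steps 6-7: export final predictions and quantify the associated uncertainty.
pred = model.predict(X_te)
rmse = float(np.sqrt(np.mean((y_te - pred) ** 2)))
print(f"R2 = {r2_score(y_te, pred):.2f}, RMSE = {rmse:.2f} ug/m3")
```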
With a transparent and reliable calibration methodology established, LCS were then deployed in various street canyons between 2017 and 2020. The performance of two types of LCS, metal oxide (MOS) and electrochemical (EC), in capturing expected patterns of urban NO2 and O3 pollution was evaluated. Results showed that calibrated concentrations from MOS and EC sensors matched the general diurnal patterns of NO2 and O3 pollution measured with reference instruments. While MOS sensors proved unreliable for discerning differences among measured locations within the urban environment, the concentrations measured with calibrated EC sensors matched expectations from modelling studies of NO2 and O3 distribution in street canyons. It was therefore concluded that LCS are appropriate for measuring urban air quality, including for assisting urban-scale air pollution model development, and can reveal new insights into air pollution in urban environments.
To achieve the last goal of this work, two measurement campaigns were conducted in connection with the implementation of three mobility policies in Berlin. The first involved the construction of a pop-up bike lane on Kottbusser Damm in response to the COVID-19 pandemic, the second accompanied the temporary implementation of a community space on Böckhstrasse, and the last focused on the closure of a portion of Friedrichstrasse to all motorized traffic. In all cases, measurements of NO2 were collected before and after the measure was implemented to assess the changes in air quality resulting from these policies. Results from the Kottbusser Damm experiment showed that the bike lane reduced the NO2 concentrations that cyclists were exposed to by 22 ± 19%. On Friedrichstrasse, the street closure reduced NO2 concentrations to the level of the urban background without worsening air quality on side streets. These results were communicated swiftly to partners in the city administration responsible for evaluating the policies’ success and future, highlighting the ability of LCS to provide policy-relevant results.
As LCS are a new technology, much is still to be learned about them and their value to academic research in the atmospheric sciences. Nevertheless, this work has advanced the state of the art in several ways. First, it contributed a novel open-source calibration methodology that can be used by LCS end-users for various air pollutants. Second, it strengthened the evidence base on the reliability of LCS for measuring urban air quality, finding through novel deployments in street canyons that LCS can be used at high spatial resolution to understand microscale air pollution dynamics. Last, it is the first work of its kind to connect LCS measurements directly with mobility policies to understand their influence on local air quality, yielding policy-relevant findings valuable for decision-makers. It serves as an example of the potential of LCS to expand our understanding of air pollution at various scales, as well as of their ability to serve as valuable tools in transdisciplinary research.
The main aim of this article is to explore how learning analytics and synchronous collaboration could improve course completion and learner outcomes in MOOCs, which traditionally have been delivered asynchronously. Based on our experience with developing BigBlueButton, a virtual classroom platform that provides educators with live analytics, this paper explores three scenarios with business-focused MOOCs to improve outcomes and strengthen learned skills.
To ensure high-quality, evidence-based research in the field of exercise sciences, it is often necessary for various institutions to collaborate over long distances and internationally. Here, and not only with regard to the recent COVID-19 pandemic, digital means provide new options for remote scientific exchange. This thesis analyses and tests digital opportunities to support the dissemination of knowledge and the instruction of investigators in defined examination protocols in an international multi-center context.
The project consisted of three studies. The first study, a questionnaire-based survey, aimed at learning about the opinions on and preferences for digital learning and social media among students of sport science faculties at two universities each in Germany, the UK and Italy. Based on these findings, in a second study, an examination video of an ultrasound determination of the intima-media thickness and diameter of an artery was distributed via a messenger app to doctors and nursing personnel acting as simulated investigators, and the efficacy of the test setting was analysed. Finally, a third study integrated the use of an augmented reality device for direct remote supervision of the same ultrasound examinations in a long-distance international setting, first with international experts from the fields of engineering and sports science and later with remote supervision of augmented-reality-equipped physicians performing a given task.
The first study, with 229 participating students, revealed a high preference for YouTube for receiving video-based knowledge, as well as a preference for using WhatsApp and Facebook for peer-to-peer contacts for learning purposes and for exchanging and discussing knowledge. In the second study, video-based instructions sent via WhatsApp messenger met with high approval of the setup in both study groups, one with doctors familiar with the use of ultrasound technology and one with nursing staff who were not familiar with the device, with similar results in the overall time of performance and in the measurements of the femoral arteries. In the third and final study, experts from different continents were connected remotely to the examination site via an augmented reality device with good transmission quality. The remote supervision of the doctors' examinations produced a good interrater correlation. Experiences with the augmented-reality-based setting were rated as highly positive by the participants. Potential benefits of this technique were seen in the fields of education, movement analysis, and supervision.
In conclusion, the findings of this thesis suggest modern, addressee-centred digital solutions to enhance potential investigators' understanding of given examination techniques in exercise science research projects. Head-mounted augmented reality devices are of special value and may be recommended for collaborative research projects with physical-examination-based research questions. While the established setting should be further investigated in prospective clinical studies, the digital competencies of future researchers should already be strengthened during the early stages of their education.
Unveiling the Local Universe
(2023)
The amount of data stored in databases and the complexity of database workloads are ever-increasing. Database management systems (DBMSs) offer many configuration options, such as index creation or unique constraints, which must be adapted to the specific instance to efficiently process large volumes of data. Currently, such database optimization is complicated, manual work performed by highly skilled database administrators (DBAs). In cloud scenarios, manual database optimization even becomes infeasible: it exceeds the abilities of the best DBAs due to the enormous number of deployed DBMS instances (some providers maintain millions of instances), missing domain knowledge resulting from data privacy requirements, and the complexity of the configuration tasks.
Therefore, we investigate how to automate the configuration of DBMSs efficiently with the help of unsupervised database optimization. While there are numerous configuration options, in this thesis, we focus on automatic index selection and the use of data dependencies, such as functional dependencies, for query optimization. Both aspects have an extensive performance impact and complement each other by approaching unsupervised database optimization from different perspectives.
Our contributions are as follows: (1) we survey automated state-of-the-art index selection algorithms regarding various criteria, e.g., their support for index interaction. We contribute an extensible platform for evaluating the performance of such algorithms with industry-standard datasets and workloads. The platform is well-received by the community and has led to follow-up research. With our platform, we derive the strengths and weaknesses of the investigated algorithms. We conclude that existing solutions often have scalability issues and cannot quickly determine (near-)optimal solutions for large problem instances. (2) To overcome these limitations, we present two new algorithms. Extend determines (near-)optimal solutions with an iterative heuristic (a minimal sketch of the underlying greedy idea is given after this abstract). It identifies the best index configurations for the evaluated benchmarks. Its selection runtimes are up to 10 times lower compared with other near-optimal approaches. SWIRL is based on reinforcement learning and delivers solutions instantly. These solutions perform within 3 % of the optimal ones. Extend and SWIRL are available as open-source implementations.
(3) Our index selection efforts are complemented by a mechanism that analyzes workloads to determine data dependencies for query optimization in an unsupervised fashion. We describe and classify 58 query optimization techniques based on functional, order, and inclusion dependencies as well as on unique column combinations. The unsupervised mechanism and three optimization techniques are implemented in our open-source research DBMS Hyrise. Our approach reduces the Join Order Benchmark’s runtime by 26 % and accelerates some TPC-DS queries by up to 58 times.
Additionally, we have developed a cockpit for unsupervised database optimization that allows interactive experiments to build confidence in such automated techniques. In summary, our contributions improve the performance of DBMSs, support DBAs in their work, and enable them to contribute their time to other, less arduous tasks.
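To illustrate the kind of iterative selection heuristic referenced in contribution (2), here is a minimal greedy sketch; it is not the published Extend algorithm, and the workload cost and index size functions are hypothetical stand-ins:

```python
# Hedged sketch of a greedy, budget-constrained index selection loop:
# repeatedly pick the candidate with the best cost reduction per unit of
# storage until no affordable candidate improves the workload cost.
from typing import Callable, FrozenSet, Set

def select_indexes(candidates: Set[FrozenSet[str]],
                   workload_cost: Callable[[Set[FrozenSet[str]]], float],
                   index_size: Callable[[FrozenSet[str]], float],
                   budget: float) -> Set[FrozenSet[str]]:
    chosen: Set[FrozenSet[str]] = set()
    used = 0.0
    current_cost = workload_cost(chosen)
    while True:
        best, best_ratio = None, 0.0
        for cand in candidates - chosen:
            size = index_size(cand)
            if used + size > budget or size <= 0:
                continue
            benefit = current_cost - workload_cost(chosen | {cand})
            if benefit / size > best_ratio:
                best, best_ratio = cand, benefit / size
        if best is None:
            return chosen
        chosen.add(best)
        used += index_size(best)
        current_cost = workload_cost(chosen)

# Toy demo with made-up, additive benefits and sizes (illustrative only).
benefit = {frozenset({"a"}): 40.0, frozenset({"b"}): 20.0, frozenset({"c"}): 25.0}
size = {frozenset({"a"}): 30.0, frozenset({"b"}): 10.0, frozenset({"c"}): 20.0}
chosen = select_indexes(set(benefit),
                        lambda conf: 100.0 - sum(benefit[i] for i in conf),
                        lambda i: size[i],
                        budget=40.0)
print(sorted("".join(sorted(i)) for i in chosen))  # ['a', 'b']
```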
Within the context of United Nations (UN) environmental institutions, it has become apparent that intergovernmental responses alone have been insufficient for dealing with pressing transboundary environmental problems. Diverging economic and political interests, as well as broader changes in power dynamics and norms within global (environmental) governance, have resulted in negotiation and implementation efforts by UN member states becoming stuck in institutional gridlock and inertia. These developments have sparked a renewed debate among scholars and practitioners about an imminent crisis of multilateralism, accompanied by calls for reforming UN environmental institutions. However, with the rise of transnational actors and institutions, states are not the only relevant actors in global environmental governance. In fact, the fragmented architectures of different policy domains are populated by a hybrid mix of state and non-state actors, as well as intergovernmental and transnational institutions. Therefore, coping with the complex challenges posed by severe and ecologically interdependent transboundary environmental problems requires global cooperation and careful management from actors beyond national governments.
This thesis investigates the interactions of three intergovernmental UN treaty secretariats in global environmental governance. These are the secretariats of the United Nations Framework Convention on Climate Change, the Convention on Biological Diversity, and the United Nations Convention to Combat Desertification. While previous research has acknowledged the increasing autonomy and influence of treaty secretariats in global policy-making, little attention has been paid to their strategic interactions with non-state actors, such as non-governmental organizations, civil society actors, businesses, and transnational institutions and networks, or their coordination with other UN agencies. Through qualitative case-study research, this thesis explores the means and mechanisms of these interactions and investigates their consequences for enhancing the effectiveness and coherence of institutional responses to underlying and interdependent environmental issues.
Following a new institutionalist ontology, the conceptual and theoretical framework of this study draws on global governance research, regime theory, and scholarship on international bureaucracies. From an actor-centered perspective on institutional interplay, the thesis employs concepts such as orchestration and interplay management to assess the interactions of and among treaty secretariats. The research methodology involves structured, focused comparison, and process-tracing techniques to analyze empirical data from diverse sources, including official documents, various secondary materials, semi-structured interviews with secretariat staff and policymakers, and observations at intergovernmental conferences.
The main findings of this research demonstrate that secretariats employ tailored orchestration styles to manage or bypass national governments, thereby raising global ambition levels for addressing transboundary environmental problems. Additionally, they engage in joint interplay management to facilitate information sharing, strategize activities, and mobilize relevant actors, thereby improving coherence across UN environmental institutions. Treaty secretariats play a substantial role in influencing discourses and knowledge exchange with a wide range of actors. However, they face barriers, such as limited resources, mandates, varying leadership priorities, and degrees of politicization within institutional processes, which may hinder their impact. Nevertheless, the secretariats, together with non-state actors, have made progress in advancing norm-building processes, integrated policy-making, capacity building, and implementation efforts within and across framework conventions. Moreover, they utilize innovative means of coordination with actors beyond national governments, such as data-driven governance, to provide policy-relevant information for achieving overarching governance targets.
Importantly, this research highlights the growing interactions between treaty secretariats and non-state actors, which not only shape policy outcomes but also have broader implications for the polity and politics of international institutions. The findings offer opportunities for rethinking collective agency and actor dynamics within UN entities, addressing gaps in institutionalist theory concerning the interaction of actors in inter-institutional spaces. Furthermore, the study addresses emerging challenges and trends in global environmental governance that are pertinent to future policy-making. These include reflections for the debate on reforming international institutions, the role of emerging powers in a changing international world order, and the convergence of public and private authority through new alliance-building and a division of labor between international bureaucracies and non-state actors in global environmental governance.
United in Diversity
(2023)
What are the future perspectives for Jews and Jewish networks in contemporary Europe? Is there a new quality of relations between Jews and non-Jews, despite or precisely because of the Holocaust trauma? How is the memory of the extermination of 6 million European Jews reflected in memorial events and in literature, film, drama, and the visual arts? To what degree do European Jews feel integrated, European per se, and safe as citizens? An interdisciplinary team of historians, cultural anthropologists, sociologists, and literary theorists answers these questions for Poland, Hungary, the Czech Republic, Slovakia, and Germany. They show that the Holocaust has become an enduring public topic among Jews and non-Jews. However, Jews in Europe work self-confidently on their future on the "old continent," on new alliances, and on cooperation with a broad network of civil forces. Non-Jewish interest in Jewish history and the present has increased significantly over the decades, and networks combatting anti-Semitism have strengthened.
Intuitively, strongly constraining contexts should lead to stronger probabilistic representations of sentences in memory. Encountering unexpected words could therefore be expected to trigger costlier shifts in these representations than expected words. However, psycholinguistic measures commonly used to study probabilistic processing, such as the N400 event-related potential (ERP) component, are sensitive to word predictability but not to contextual constraint. Some research suggests that constraint-related processing cost may be measurable via an ERP positivity following the N400, known as the anterior post-N400 positivity (PNP). The PNP is argued to reflect update of a sentence representation and to be distinct from the posterior P600, which reflects conflict detection and reanalysis. However, constraint-related PNP findings are inconsistent. We sought to conceptually replicate Federmeier et al. (2007) and Kuperberg et al. (2020), who observed that the PNP, but not the N400 or the P600, was affected by constraint at unexpected but plausible words. Using a pre-registered design and statistical approach maximising power, we demonstrated a dissociated effect of predictability and constraint: strong evidence for predictability but not constraint in the N400 window, and strong evidence for constraint but not predictability in the later window. However, the constraint effect was consistent with a P600 and not a PNP, suggesting increased conflict between a strong representation and unexpected input rather than greater update of the representation. We conclude that either a simple strong/weak constraint design is not always sufficient to elicit the PNP, or that previous PNP constraint findings could be an artifact of smaller sample size.
The near-Earth space environment is a highly complex system comprised of several regions and particle populations hazardous to satellite operations. The trapped particles in the radiation belts and ring current can cause significant damage to satellites during space weather events, due to deep dielectric and surface charging. Closer to Earth is another important region, the ionosphere, which delays the propagation of radio signals and can adversely affect navigation and positioning. In response to fluctuations in solar and geomagnetic activity, both the inner-magnetospheric and ionospheric populations can undergo drastic and sudden changes within minutes to hours, which creates a challenge for predicting their behavior. Given the increasing reliance of our society on satellite technology, improving our understanding and modeling of these populations is a matter of paramount importance.
In recent years, numerous spacecraft have been launched to study the dynamics of particle populations in the near-Earth space, transforming it into a data-rich environment. To extract valuable insights from the abundance of available observations, it is crucial to employ advanced modeling techniques, and machine learning methods are among the most powerful approaches available. This dissertation employs long-term satellite observations to analyze the processes that drive particle dynamics, and builds interdisciplinary links between space physics and machine learning by developing new state-of-the-art models of the inner-magnetospheric and ionospheric particle dynamics.
The first aim of this thesis is to investigate the behavior of electrons in Earth's radiation belts and ring current. Using ~18 years of electron flux observations from the Global Positioning System (GPS), we developed the first machine learning model of hundreds-of-keV electron flux at Medium Earth Orbit (MEO) that is driven solely by solar wind and geomagnetic indices and does not require auxiliary flux measurements as inputs. We then proceeded to analyze the directional distributions of electrons, and for the first time, used Fourier sine series to fit electron pitch angle distributions (PADs) in Earth's inner magnetosphere. We performed a superposed epoch analysis of 129 geomagnetic storms during the Van Allen Probes era and demonstrated that electron PADs have a strong energy-dependent response to geomagnetic activity. Additionally, we showed that the solar wind dynamic pressure could be used as a good predictor of the PAD dynamics. Using the observed dependencies, we created the first PAD model with a continuous dependence on L, magnetic local time (MLT) and activity, and developed two techniques to reconstruct near-equatorial electron flux observations from low-PA data using this model.
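As a hedged illustration of the fit named above, a Fourier sine series expansion of the directional electron flux J at pitch angle α takes the general form below; the truncation order N and the coefficient notation are assumptions, not the thesis's exact parametrisation:

```latex
J(\alpha) \;\approx\; \sum_{n=1}^{N} A_n \sin(n\alpha), \qquad \alpha \in [0^{\circ}, 180^{\circ}]
```

The basis functions vanish at α = 0° and 180° by construction, and each coefficient A_n can then be modelled as a continuous function of L, MLT and activity, matching the PAD model described above.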
The second objective of this thesis is to develop a novel model of the topside ionosphere. To achieve this goal, we collected observations from five of the most widely used ionospheric missions and intercalibrated these data sets. This allowed us to use these data jointly for model development, validation, and comparison with other existing empirical models. We demonstrated, for the first time, that ion density observations by Swarm Langmuir Probes exhibit overestimation (up to ~40-50%) at low and mid-latitudes on the night side, and suggested that the influence of light ions could be a potential cause of this overestimation. To develop the topside model, we used 19 years of radio occultation (RO) electron density profiles, which were fitted with a Chapman function with a linear dependence of scale height on altitude. This approximation yields 4 parameters, namely the peak density and height of the F2-layer and the slope and intercept of the linear scale height trend, which were modeled using feedforward neural networks (NNs). The model was extensively validated against both RO and in-situ observations and was found to outperform the International Reference Ionosphere (IRI) model by up to an order of magnitude. Our analysis showed that the most substantial deviations of the IRI model from the data occur at altitudes of 100-200 km above the F2-layer peak. The developed NN-based ionospheric model reproduces the effects of various physical mechanisms observed in the topside ionosphere and provides highly accurate electron density predictions.
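For concreteness, a Chapman-type profile with a linearly varying scale height, consistent with the four-parameter description above, can be written as follows; the exact prefactor (here the alpha-Chapman value of 1/2) is an assumption:

```latex
N_e(h) = N_m\mathrm{F2}\,\exp\!\left(\tfrac{1}{2}\left(1 - z - e^{-z}\right)\right),
\qquad z = \frac{h - h_m\mathrm{F2}}{H(h)},
\qquad H(h) = H_0 + k\,\left(h - h_m\mathrm{F2}\right)
```

Here NmF2 and hmF2 are the F2-layer peak density and height, and H0 and k are the intercept and slope of the linear scale-height trend; these are the four parameters modeled by the feedforward NNs.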
This dissertation provides an extensive study of geospace dynamics, and the main results of this work contribute to the improvement of models of plasma populations in the near-Earth space environment.
Recent debates in international relations increasingly focus on the bureaucratic apparatuses of international organizations and highlight their role, influence, and autonomy in global public policy. In this contribution we follow the recent call made by Moloney and Rosenbloom in this journal to make use of "public administrative theory and empirically based knowledge in analyzing the behavior of international and regional organizations" and offer a systematic analysis of the inner structures of these administrative bodies. Changes in these structures can reflect both the (re-)assignment of responsibilities, competencies, and expertise, and the (re)allocation of resources and staff, with a corresponding signalling of priorities. Based on organizational charts, we study structural changes within 46 international bureaucracies in the UN system. Tracing formal changes to all internal units over two decades, this contribution provides the first longitudinal assessment of structural change at the international level. We demonstrate that the inner structures of international bureaucracies in the UN system became more fragmented over time but also experienced considerable volatility, with periods of structural growth and retrenchment. The analysis also suggests that IOs' political features yield stronger explanatory power for explaining these structural changes than bureaucratic determinants. We conclude that the politics of structural change in international bureaucracies is a missing piece in the current debate on international public administrations, one that complements existing research perspectives by reiterating the importance of the political context of international bureaucracies as actors in global governance.
Unavailable
(2023)
Biofilms are heterogeneous structures made of microorganisms embedded in a self-secreted extracellular matrix. Recently, biofilms have been studied as sustainable living materials with a focus on the tuning of their mechanical properties. One way of doing so is to use metal ions. In particular, biofilms have been shown to stiffen in the presence of some metal cations and to soften in the presence of others. However, the specificity and the determinants of those interactions vary between species. While Escherichia coli is a widely studied model organism, little is known about the response of its biofilms to metal ions. In this work, we aimed at tuning the mechanics of E. coli biofilms by acting on the interplay between matrix composition and metal cations. To do so, we worked with E. coli strains producing a matrix composed of curli amyloid fibres, phosphoethanolamine-cellulose (pEtN-cellulose) fibres, or both. The viscoelastic behaviour of the resulting biofilms was investigated with rheology after incubation with one of the following metal ion solutions: FeCl3, AlCl3, ZnCl2 and CaCl2, or ultrapure water. We observed that the strain producing both fibres stiffens by a factor of two when exposed to the trivalent metal cations Al(III) and Fe(III), while no such response is observed for the bivalent cations Zn(II) and Ca(II). Strains producing only one matrix component did not show any stiffening in response to either cation, but rather a slight softening. In order to further investigate the contribution of each matrix component to the mechanical properties, we introduced additional bacterial strains producing curli fibres in combination with non-modified cellulose, non-modified cellulose only, or neither component. We measured biofilms produced by those different strains with rheology, without any added solution. Since rheology does not preserve the architecture of the matrix, we compared those results to the mechanical properties of biofilms probed with non-destructive microindentation. The microindentation results showed that biofilm stiffness is mainly determined by the presence of curli amyloid fibres in the matrix. However, this clear distinction between biofilm matrices containing or not containing curli is absent from the rheology results, i.e. following partial destruction of the matrix architecture. In addition, rheology also indicated a negative impact of curli on biofilm yield stress and flow stress. This suggests that curli fibres are more brittle and therefore more affected by the mechanical treatments. Finally, to examine the molecular interactions between the biofilms and the metal cations, we used attenuated total reflectance Fourier-transform infrared spectroscopy (ATR-FTIR) to study the three E. coli strains producing a matrix composed of curli amyloid fibres, pEtN-cellulose fibres, or both. We measured biofilms produced by those strains in the presence of each of the aforementioned metal cation solutions or ultrapure water. We showed that the three strains cannot be distinguished based on their FTIR spectra and that metal cations seem to have a non-specific effect on bacterial membranes in the absence of pEtN-cellulose. We subsequently conducted similar experiments on purified curli or pEtN-cellulose fibres. The spectra of the pEtN-cellulose fibres revealed a non-valence-specific interaction between metal cations and the phosphate of the pEtN modification. Altogether, these results demonstrate that the mechanical properties of E. coli biofilms can be tuned via incubation with metal ions.
While the mechanism involving curli fibres remains to be determined, metal cations seem to adsorb onto pEtN-cellulose in a manner that is not valence-specific. This work also underlines the importance of matrix architecture to biofilm mechanics and emphasises the specificity of each matrix composition.
The conception of property at the basis of Hegel’s conception of abstract right seems committed to a problematic form of “possessive individualism.” It seems to conceive of right as the expression of human mastery over nature and as based upon an irreducible opposition of person and nature, rightful will, and rightless thing. However, this chapter argues that Hegel starts with a form of possessive individualism only to show that it undermines itself. This is evident in the way Hegel unfolds the nature of property as it applies to external things as well as in the way he explains our self-ownership of our own bodies and lives. Hegel develops the idea of property to a point where it reaches a critical limit and encounters the “true right” that life possesses against the “formal” and “abstract right” of property. Ultimately, Hegel’s account suggests that nature should precisely not be treated as a rightless object at our arbitrary disposal but acknowledged as the inorganic body of right.
Like conventional software projects, projects in model-driven software engineering require adequate management of multiple versions of development artifacts, which importantly includes allowing developers to live with temporary inconsistencies. In the case of model-driven software engineering, the employed versioning approaches also have to handle situations where different artifacts, that is, different models, are linked via automatic model transformations.
In this report, we propose a technique for jointly handling the transformation of multiple versions of a source model into corresponding versions of a target model, which enables the use of a more compact representation that may afford improved execution time of both the transformation and further analysis operations. Our approach is based on the well-known formalism of triple graph grammars and a previously introduced encoding of model version histories called multi-version models. In addition to showing the correctness of our approach with respect to the standard semantics of triple graph grammars, we conduct an empirical evaluation that demonstrates the potential benefit regarding execution time performance.
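To make the encoding idea more tangible, here is a minimal sketch of a multi-version model, assuming the simple reading that each model element is annotated with the set of versions in which it exists; the class and method names are illustrative, not the report's formalisation:

```python
# Hedged sketch: one structure encodes a whole version history by mapping
# each element to the versions that contain it; projecting on a version id
# recovers an ordinary single-version model.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Element:
    label: str

@dataclass
class MultiVersionModel:
    presence: dict = field(default_factory=dict)  # Element -> set of version ids

    def add(self, elem: Element, versions: set) -> None:
        self.presence.setdefault(elem, set()).update(versions)

    def project(self, version: int) -> set:
        """Recover the ordinary model for a single version."""
        return {e for e, vs in self.presence.items() if version in vs}

mvm = MultiVersionModel()
mvm.add(Element("ClassA"), {1, 2, 3})
mvm.add(Element("ClassB"), {2, 3})
print(mvm.project(1))  # only ClassA exists in version 1
```

A transformation operating on this compact representation can, in principle, process all versions jointly instead of once per version, which is the performance benefit the evaluation above targets.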
Thus far, research into reservations to treaties has often overlooked the reservations formulated to both European Social Charters (and their Protocols) and the relevant practice of the European Committee of Social Rights. There are several pressing reasons to explore this gap in the existing literature. First, an analysis of practices within the European Social Charters (and Protocols) will provide a fuller picture of reservations and the responses of treaty bodies. Second, in the context of previous landmark events, it is worth noting the practices of another human rights treaty monitoring body that is often omitted from analyses. Third, the very fact that the formulation of reservations to treaties gives parties such far-reaching flexibility to shape their contractual obligations (à la carte) is surprising. An important outcome of the research is the finding that, despite the far-reaching flexibility present in the treaties analysed, both the States Parties and the European Committee of Social Rights generally treat them as conventional treaties to which the general rules on reservations apply. Consequently, there is no basis for assuming that the mere fact of adopting the à la carte system in a treaty with no reservation clause implies a formal prohibition of reservations or otherwise discourages their formulation.
Transferability of data-driven models to predict urban pluvial flood water depth in Berlin, Germany
(2023)
Data-driven models have recently been suggested to surrogate computationally expensive hydrodynamic models for mapping flood hazards. However, most studies have focused on developing models for the same area or the same precipitation event, so it is not obvious how transferable the models are in space. This study evaluates the performance of a convolutional neural network (CNN) based on the U-Net architecture and the random forest (RF) algorithm in predicting flood water depth, the models' transferability in space, and performance improvement using transfer learning techniques. We used three study areas in Berlin to train, validate and test the models. The results showed that (1) the RF models outperformed the CNN models for predictions within the training domain, presumably at the cost of overfitting; (2) the CNN models had significantly higher potential than the RF models to generalize beyond the training domain; and (3) the CNN models could better benefit from transfer learning techniques to boost their performance outside training domains than the RF models.
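As an illustration of the transfer learning idea evaluated here, the following hedged sketch freezes the early layers of a small CNN trained on one area and fine-tunes the rest on a few samples from a new area; the toy architecture, the 12 input channels, and the training details are assumptions, not the study's actual U-Net setup:

```python
# Hedged sketch: freeze a pretrained encoder and fine-tune the prediction
# head on limited data from a new study area.
import torch
import torch.nn as nn

class TinyDepthCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(12, 16, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(32, 1, 1)           # per-pixel water depth

    def forward(self, x):
        return self.head(self.encoder(x))

model = TinyDepthCNN()                             # pretend: pretrained on area A
for p in model.encoder.parameters():               # freeze generic feature extractor
    p.requires_grad = False

opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=1e-3)
loss_fn = nn.MSELoss()
x_new = torch.randn(4, 12, 64, 64)                 # a few samples from area B
y_new = torch.rand(4, 1, 64, 64)
for _ in range(10):                                # brief fine-tuning loop
    opt.zero_grad()
    loss = loss_fn(model(x_new), y_new)
    loss.backward()
    opt.step()
```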
Traditionally, mental disorders have been identified based on specific symptoms and standardized diagnostic systems such as the DSM-5 and ICD-10. However, these symptom-based definitions may only partially represent neurobiological and behavioral research findings, which could impede the development of targeted treatments. A transdiagnostic approach to mental health research, such as the Research Domain Criteria (RDoC) approach, maps resilience and broader aspects of mental health to associated components. By investigating mental disorders in a transnosological way, we can better understand disease patterns and their distinguishing and common factors, leading to more precise prevention and treatment options.
Therefore, this dissertation focuses on (1) the latent domain structure of the RDoC approach in a transnosological sample including healthy controls, (2) its domain associations to disease severity in patients with anxiety and depressive disorders, and (3) an overview of the scientific results found regarding Positive (PVS) and Negative Valence Systems (NVS) associated with mood and anxiety disorders.
The following main results were found: First, the latent RDoC domain structure for PVS and NVS, Cognitive Systems (CS), and Social Processes (SP) could be validated using self-report and behavioral measures in a transnosological sample. Second, we found transdiagnostic and disease-specific associations between those four domains and disease severity in patients with depressive and anxiety disorders. Third, the scoping review showed a sizable amount of RDoC research conducted on PVS and NVS in mood and anxiety disorders, with research gaps for both domains and specific conditions.
In conclusion, the research presented in this dissertation highlights the potential of the transnosological RDoC framework approach in improving our understanding of mental disorders. By exploring the latent RDoC structure and associations with disease severity and disease-specific and transnosological associations for anxiety and depressive disorders, this research provides valuable insights into the full spectrum of psychological functioning. Additionally, this dissertation highlights the need for further research in this area, identifying both RDoC indicators and research gaps. Overall, this dissertation represents an important contribution to the ongoing efforts to improve our understanding and the treatment of mental disorders, particularly within the commonly comorbid disease spectrum of mood and anxiety disorders.
International law is constantly navigating the tension between preserving the status quo and adapting to new exigencies. But when and how do such adaptation processes give way to a more profound transformation, if not a crisis of international law? To address the question of how attacks on the international legal order are changing the value orientation of international law, this book brings together scholars of international law and international relations. By combining theoretical and methodological analyses with individual case studies, this book offers readers conceptualizations and tools to systematically examine value change and explore the drivers and mechanisms of these processes. These case studies scrutinize value change in the foundational norms of the post-1945 order and in norms representing the rise of the international legal order post-1990. They cover diverse issues: the prohibition of torture, the protection of women’s rights, the prohibition of the use of force, the non-proliferation of nuclear weapons, sustainability norms, and accountability for core international crimes. The challenges to each norm, the reactions by norm defenders, and the fate of each norm are also studied. Combined, the analyses show that while a few norms have remained surprisingly robust, several are changing, either in substance or in legal or social validity. The book concludes by integrating the conceptual and empirical insights from this interdisciplinary exchange to assess and explain the ambiguous nature of value change in international law beyond the extremes of mere progress or decline.
Selenium (Se) is an essential trace element that is ubiquitously present in the environment in small concentrations. The essential functions of Se in the human body are manifested through a wide range of proteins containing selenocysteine as their active center. Such proteins, called selenoproteins, are involved in multiple physiological processes such as antioxidative defense and the regulation of thyroid hormone functions. Therefore, Se deficiency is known to cause a broad spectrum of physiological impairments, especially in endemic regions with low Se content. Nevertheless, despite being an essential trace element, Se can exhibit toxic effects if its intake exceeds tolerable levels. Accordingly, the range between deficiency and overexposure represents the optimal Se supply; however, this range is narrower than for any other essential trace element. Together with significantly varying Se concentrations in soil and the presence of specific bioaccumulation factors, this makes the assessment of the Se epidemiological status noticeably difficult. While Se acts in the body through multiple selenoproteins, its intake occurs mainly in the form of small organic or inorganic molecular species. Thus, Se exposure depends not only on daily intake but also on the respective chemical form in which it is present.
The essential functions of selenium have been known for a long time, and its primary forms in different food sources have been described. Nevertheless, the analytical capabilities for a comprehensive investigation of Se species and their derivatives have been introduced only in recent decades. A new Se compound was identified in 2010 in the blood and tissues of bluefin tuna. It was called selenoneine (SeN), since it is an isologue of the naturally occurring antioxidant ergothioneine (ET) in which Se replaces sulfur. In the following years, SeN was identified in a number of edible fish species and attracted attention as a new dietary Se source and potentially strong antioxidant. Studies in populations whose diet largely relies on fish revealed that SeN represents the main non-protein-bound Se pool in their blood. First studies, conducted with enriched fish extracts, already demonstrated the high antioxidative potential of SeN and its possible function in the detoxification of methylmercury in fish. Cell culture studies demonstrated that SeN can utilize the same transporter as ergothioneine, and a SeN metabolite was found in human urine.
Until recently, studies on the properties of SeN were severely limited due to the lack of ways to obtain the pure compound. A precondition for this work was, firstly, a successful approach to SeN synthesis at the University of Graz utilizing genetically modified yeasts. In the current study, using HepG2 liver carcinoma cells, it was demonstrated that SeN does not cause toxic effects in hepatocytes up to a concentration of 100 μM. Uptake experiments showed that SeN is not bioavailable to the liver cells used.
In the next part, a blood-brain barrier (BBB) model based on capillary endothelial cells from the porcine brain was used to describe the possible transfer of SeN into the central nervous system (CNS). The assessment of toxicity markers in these endothelial cells and the monitoring of barrier conditions during transfer experiments demonstrated the absence of toxic effects of SeN on the BBB endothelium up to a concentration of 100 μM. Transfer data for SeN showed slow but substantial transfer. A statistically significant increase was observed 48 hours after SeN incubation on the blood-facing side of the barrier. However, an increase in Se content was clearly visible already after 6 hours of incubation with 1 μM of SeN. While the transfer rate of SeN after application of a 0.1 μM dose was very close to that for 1 μM, incubation with 10 μM of SeN resulted in a significantly decreased transfer rate. Double-sided application of SeN caused no side-specific transfer, thus suggesting a passive diffusion mechanism for SeN across the BBB. These data are in accordance with animal studies, where ET accumulation was observed in the rat brain even though the rat BBB does not have the primary ET transporter, OCTN1. Investigation of capillary endothelial cell monolayers after incubation with SeN and reference selenium compounds showed no significant increase in intracellular selenium concentration. Species-specific Se measurements in medium samples from the apical and basolateral compartments, as well as in cell lysates, showed no SeN metabolization. Therefore, it can be concluded that SeN may reach the brain without significant transformation.
In the third part of this work, the assessment of the antioxidant properties of SeN was performed in Caco-2 human colorectal adenocarcinoma cells. Previous studies demonstrated that the intestinal epithelium is able to actively transport SeN from the intestinal lumen to the blood side and to accumulate SeN. Further investigation within the current work showed a much higher antioxidant potential of SeN compared to ET. The radical scavenging activity after incubation with SeN was close to that observed for selenite and selenomethionine. However, the effect of SeN on the viability of intestinal cells under oxidative conditions was close to that caused by ET. To answer the question of whether SeN can be used as a dietary Se source and induce the activity of selenoproteins, the activity of glutathione peroxidase (GPx) and the secretion of selenoprotein P (SelenoP) were additionally measured in Caco-2 cells. As expected, the reference selenium compounds selenite and selenomethionine caused an efficient induction of GPx activity. In contrast to those, SeN had no effect on GPx activity. To examine the possibility of SeN being embedded into the selenoproteome, SelenoP was measured in the culture medium. Even though Caco-2 cells effectively take up SeN in quantities much higher than selenite or selenomethionine, no secretion of SelenoP was observed after SeN incubation.
In summary, we can conclude that SeN can hardly serve as a Se source for selenoprotein synthesis. However, SeN exhibits strong antioxidative properties, which emerge when the sulfur in ET is exchanged for Se. Therefore, SeN is of particular interest for research not as part of Se metabolism, but as an important endemic dietary antioxidant.
Casualties and damages from urban pluvial flooding are increasing. Triggered by short, localized, and intensive rainfall events, urban pluvial floods can occur anywhere, even in areas without a history of flooding. Urban pluvial floods have relatively small temporal and spatial scales. Although the cumulative losses from urban pluvial floods are comparable to those from other types of flooding, most flood risk management and mitigation strategies focus on fluvial and coastal flooding. Numerical physical hydrodynamic models are considered the best tool to represent the complex nature of urban pluvial floods; however, they are computationally expensive and time-consuming. These sophisticated models make large-scale analysis and operational forecasting prohibitive. Therefore, it is crucial to evaluate and benchmark the performance of alternative methods.
The findings of this cumulative thesis are presented in three research articles. The first study evaluates two topographic-based methods to map urban pluvial flooding, fill–spill–merge (FSM) and the topographic wetness index (TWI), by comparing them against a sophisticated hydrodynamic model. The FSM method identifies flood-prone areas within topographic depressions, while the TWI method employs maximum likelihood estimation to calibrate a TWI threshold (τ) based on inundation maps from the 2D hydrodynamic model. The results indicate that the FSM method outperforms the TWI method. The study then highlights the advantages and limitations of both methods.
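For reference, the TWI and a standard maximum likelihood thresholding formulation are sketched below; the exact likelihood used in the study is not restated here, so this binary log-likelihood form is an assumption:

```latex
\mathrm{TWI} = \ln\!\left(\frac{a}{\tan\beta}\right), \qquad
\hat{\tau} = \arg\max_{\tau}\; \sum_{i} \left[\, y_i \ln p_i(\tau) + (1 - y_i)\ln\!\left(1 - p_i(\tau)\right) \right]
```

Here a is the upslope contributing area per unit contour length, β the local slope, y_i the inundation state of cell i from the hydrodynamic model, and p_i(τ) the flood probability implied by classifying cells with TWI ≥ τ as flood-prone.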
Data-driven models provide a promising alternative to computationally expensive hydrodynamic models. However, the literature lacks benchmarking studies that evaluate the different models' performance, advantages and limitations, and model transferability in space remains a crucial problem. Most studies focus on river flooding, likely due to the relative availability of flow and rain gauge records for training and validation; furthermore, they treat these models as black boxes. The second study uses a flood inventory for the city of Berlin and 11 predictive features which potentially indicate an increased pluvial flooding hazard to map urban pluvial flood susceptibility using a convolutional neural network (CNN), an artificial neural network (ANN) and the benchmark machine learning models random forest (RF) and support vector machine (SVM). I investigate the influence of spatial resolution on the implemented models, the models' transferability in space and the importance of the predictive features. The results show that all models perform well and that the RF models are superior to the other models both within and outside the training domain. The models developed at fine spatial resolution (2 and 5 m) could better identify flood-prone areas. Finally, the results indicate that aspect is the most important predictive feature for the CNN models, while altitude is the most important for the other models.
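To illustrate the tabular benchmarking setup, here is a hedged sketch of a random forest susceptibility classifier with feature importances; the synthetic data and the five feature names are placeholders for the study's 11 predictive features:

```python
# Hedged sketch: fit a susceptibility classifier on tabular predictive
# features and rank the features by importance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
features = ["altitude", "slope", "aspect", "curvature", "dist_to_drainage"]
X = rng.normal(size=(5000, len(features)))        # stand-in for real raster data
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=5000) < 0).astype(int)  # flooded / not

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
rf = RandomForestClassifier(n_estimators=500, random_state=1).fit(X_tr, y_tr)

print("AUC:", roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))
for name, imp in sorted(zip(features, rf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:18s} {imp:.3f}")
```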
While flood susceptibility maps identify flood-prone areas, they do not represent flood variables such as velocity and depth which are necessary for effective flood risk management. To address this, the third study investigates data-driven models' transferability to predict urban pluvial floodwater depth and the models' ability to enhance their predictions using transfer learning techniques. It compares the performance of RF (the best-performing model in the previous study) and CNN models using 12 predictive features and output from a hydrodynamic model. The findings in the third study suggest that while CNN models tend to generalise and smooth the target function on the training dataset, RF models suffer from overfitting. Hence, RF models are superior for predictions inside the training domains but fail outside them while CNN models could control the relative loss in performance outside the training domains. Finally, the CNN models benefit more from transfer learning techniques than RF models, boosting their performance outside training domains.
In conclusion, this thesis has evaluated both topographic-based methods and data-driven models to map urban pluvial flooding. However, further studies are needed to develop methods that completely overcome the limitations of 2D hydrodynamic models.
Towards unifying approaches in exposure modelling for scenario-based multi-hazard risk assessments
(2023)
This cumulative thesis presents a stepwise investigation of the exposure modelling process for risk assessment due to natural hazards, highlighting its importance, little discussed to date, and its associated uncertainties. Although “exposure” refers to a very broad concept covering everything (and everyone) that is susceptible to damage, in this thesis it is narrowed down to the modelling of large-area residential building stocks. Classical building exposure models for risk applications have been constructed by relying entirely on unverified expert elicitation over data sources (e.g., outdated census datasets), and hence have been implicitly assumed to be static in time and space. Moreover, their spatial representation has typically been simplified by geographically aggregating the inferred composition onto coarse administrative units whose boundaries do not always capture the spatial variability of the hazard intensities required for accurate risk assessments. These two shortcomings and the related epistemic uncertainties embedded within exposure models are tackled in the first three chapters of the thesis. The exposure composition of large-area residential building stocks is studied within the scope of scenario-based earthquake loss models. Then, a proposal for optimal spatial aggregation areas of exposure models for various hazard-related vulnerabilities is presented, focusing on ground-shaking and tsunami risks. Subsequently, with the experience gained in studying the composition and spatial aggregation of exposure for various hazards, this thesis moves towards a multi-hazard context, addressing cumulative damage and losses due to consecutive hazard scenarios. This is achieved by proposing a novel method that accounts for pre-existing damage descriptions of building portfolios as a key input to scenario-based multi-risk assessment. Finally, this thesis shows how the integration of the aforementioned elements can be used in risk communication practices. This is done through a modular architecture based on the exploration of quantitative risk scenarios that are contrasted with the social risk perceptions of the communities directly exposed to natural hazards.
In Chapter 1, a Bayesian approach is proposed to update the prior assumptions on such composition (i.e., proportions per building typology). This is achieved by integrating high-quality real observations and thereby capturing the intrinsic probabilistic nature of the exposure model. Such observations are incorporated as real evidence from both field inspections (Chapter 2) and freely available data sources used to update existing (but outdated) exposure models (Chapter 3). In these two chapters, earthquake scenarios with parametrised ground motion fields were transversally used to investigate the role of such epistemic uncertainties related to the exposure composition through sensitivity analyses. Parametrised scenarios of seismic ground shaking were the hazard input utilised to study the physical vulnerability of building portfolios. The second issue, the spatial aggregation of building exposure models, was investigated within two decoupled vulnerability contexts: seismic ground shaking, through the integration of remote sensing techniques (Chapter 3); and a multi-hazard context, by integrating the occurrence of associated tsunamis (Chapter 4). Therein, a careful selection of the spatial aggregation entities, pursuing both computational efficiency and accuracy in the risk estimates due to such independent hazard scenarios (i.e., earthquake and tsunami), is discussed. Therefore, in this thesis, the physical vulnerability of large-area building portfolios due to tsunamis is considered through two main frames: considering and disregarding the interaction at the vulnerability level, through consecutive and decoupled hazard scenarios respectively, which were then contrasted.
In contrast to Chapter 4, where no cumulative damages are addressed, Chapter 5 integrates data and approaches generated in earlier chapters with a novel modular method to study the likely interactions at the vulnerability level on building portfolios. This is tested by evaluating cumulative damages and losses after earthquakes of increasing magnitude followed by their respective tsunamis. The novel method is grounded in the possibility of re-using existing fragility models within a probabilistic framework. The same approach is followed in Chapter 6 to forecast the cumulative damages likely to be experienced by a building stock located in a volcanic multi-hazard setting (ash-fall and lahars). In that chapter, particular attention is paid to the manner in which the forecast loss metrics are communicated to locally exposed communities. Co-existing quantitative scientific approaches (i.e., comprehensive exposure models; explorative risk scenarios involving single and multiple hazards) and semi-qualitative social risk perception (i.e., the level of understanding that the exposed communities have about their own risk) were jointly considered. Such an integration ultimately allowed this thesis to contribute to enhancing preparedness, science communication at the local level, and technology transfer initiatives.
Finally, a synthesis of this thesis along with some perspectives for improvement and future work are presented.
Using time-resolved x-ray diffraction, we demonstrate the manipulation of the picosecond strain response of a metallic heterostructure consisting of a dysprosium (Dy) transducer and a niobium (Nb) detection layer by an external magnetic field. We utilize the first-order ferromagnetic–antiferromagnetic phase transition of the Dy layer, which provides an additional large contractive stress upon laser excitation compared to its zero-field response. This enhances the laser-induced contraction of the transducer and changes the shape of the picosecond strain pulses driven in Dy and detected within the buried Nb layer. Based on our experiment with rare-earth metals, we discuss required properties for functional transducers, which may allow for novel field-control of the emitted picosecond strain pulses.
Artificial intelligence (AI)-based technologies can increasingly perform knowledge work tasks, such as medical diagnosis. It is expected that humans will not be replaced by AI but will work closely with AI-based technology (“augmentation”). Augmentation has ethical implications for humans (e.g., impact on autonomy, opportunities to flourish through work); thus, developers and managers of AI-based technology have a responsibility to anticipate and mitigate risks to human workers. However, doing so can be difficult, as AI encompasses a wide range of technologies, some of which enable fundamentally new forms of interaction. In this research-in-progress paper, we propose the development of a taxonomy to categorize unique characteristics of AI-based technology that influence the interaction and have ethical implications for human workers. The completed taxonomy will support researchers in forming cumulative knowledge on the ethical implications of augmentation and assist practitioners in the ethical design and management of AI-based technology in knowledge work.
The rise of open source models for software and hardware development has catalyzed the debate regarding sustainable business models. Open Source Software has already become a dominant force in the software industry, whereas Open Source Hardware is still a little-researched phenomenon, yet it has the potential to do the same to manufacturing across a wide range of products. This article addresses this potential by introducing a research design to analyze the prototyping phase of six different Open Source Hardware projects tackling ecological, social, and economic challenges. Using a design science research methodology, a process model is developed to concretise the prototype development steps. The prototyping phase is important because it is where fundamental decisions are made that affect the openness of the final product. This paper aims to advance the discourse on open production as a concept that enables companies to apply the aspect of openness towards collaboration-oriented and sustainable business models.
Terminology is a critical instrument for every researcher. Different terminologies for the same research object may arise in different research communities, and through this inconsistency many synergistic effects are lost. Theories and models become more understandable and reusable when a common terminology is applied. This paper examines the terminological (in)consistency of the research field of job-shop scheduling through a literature review. There is an enormous variety in the choice of terms and mathematical notation for the same concept. The comparability, reusability and combinability of scheduling methods are unnecessarily hampered by the arbitrary use of homonyms and synonyms. The acceptance in the community of the variables and notation forms used is measured by means of a compliance quotient, demonstrated through an evaluation of 240 scientific publications on planning methods.
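One plausible reading of such a compliance quotient is, for each concept, the share of surveyed publications that use the field's most common symbol. The sketch below computes that quantity on made-up data; both the definition and the tiny corpus are illustrative assumptions, since the paper's exact formula is not reproduced here.

```python
from collections import Counter

# symbol used for "processing time" in each surveyed publication
# (hypothetical stand-in for the paper's corpus of 240 publications)
observed = ["p_ij", "p_ij", "t_ij", "p_ij", "d_ij", "p_ij", "t_ij"]

counts = Counter(observed)
symbol, freq = counts.most_common(1)[0]
compliance = freq / len(observed)
print(f"majority symbol: {symbol}, compliance quotient = {compliance:.2f}")
```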
Touching at a Distance
(2023)
- Studies the capacity of Shakespeare’s plays to touch and think about touch
- Based on plays from all major genres: Hamlet, The Tempest, Richard III, Much Ado About Nothing and Troilus and Cressida
- Centres on creative, close readings of Shakespeare’s plays, which aim to generate critical impulses for the 21st century reader
- Brings Shakespeare Studies into touch with philosophers and theoreticians from a range of disciplinary areas – continental philosophy, literary criticism, psychoanalysis, sociology, phenomenology, law, linguistics: Friedrich Nietzsche, Maurice Blanchot, Jacques Lacan, Luce Irigaray, Jacques Derrida, Roland Barthes, Niklas Luhmann, Hans Blumenberg, Carl Schmitt, J. L. Austin
Theatre has a remarkable capacity: it touches from a distance. The audience is affected, despite their physical separation from the stage. The spectators are moved, even though the fictional world presented to them will never come into direct touch with their real lives. Shakespeare is clearly one of the master practitioners of theatrical touch. As the study shows, his exceptional dramaturgic talent is intrinsically connected with being one of the great thinkers of touch. His plays fathom the complexity and power of a fascinating notion – touch as a productive proximity that is characterised by unbridgeable distance – which philosophers like Friedrich Nietzsche, Maurice Blanchot, Jacques Derrida, Luce Irigaray and Jean-Luc Nancy have written about, centuries later. By playing with touch and its metatheatrical implications, Shakespeare raises questions that make his theatrical art point towards modernity: how are communities to form when traditional institutions begin to crumble? What happens to selfhood when time speeds up, when oneness and timeless truth can no longer serve as reliable foundations? What is the role and the capacity of language in a world that has lost its seemingly unshakeable belief and trust in meaning? How are we to conceive of the unthinkable extremes of human existence – birth and death – when the religious orthodoxy slowly ceases to give satisfactory explanations? Shakespeare’s theatre not only prompts these questions, but provides us with answers. They are all related to touch, and they are all theatrical at their core: they are argued and performed by the striking experience of theatre’s capacities to touch – at a distance.
With the latest technological developments and associated new possibilities in teaching, the personalisation of learning is gaining more and more importance. It assumes that individual learning experiences and results could generally be improved when personal learning preferences are considered. To do justice to the complexity of the personalisation possibilities of teaching and learning processes, we illustrate the components of learning and teaching in the digital environment and their interdependencies in an initial model. Furthermore, in a pre-study, we investigate the relationships between the learner's ability to (digitally) self-organise, the learner’s prior knowledge, learning in different delivery modes, and learning outcomes as one part of this model. With this pre-study, we take the first step towards a holistic model of teaching and learning in digital environments.
The shallow Earth’s layers are at the interplay of many physical processes: some are driven by atmospheric forcing (precipitation, temperature, etc.), whereas others take their origin at depth, for instance ground shaking due to seismic activity. These forcings cause the subsurface to continuously change its mechanical properties, thereby modulating the strength of the surface geomaterials and the hydrological fluxes. Because our societies settle on and rely on the layers hosting these time-dependent properties, constraining the hydro-mechanical dynamics of the shallow subsurface is crucial for our future geographical development. One way to investigate the physical changes continuously occurring under our feet is through the inference of seismic velocity changes from ambient noise, a technique called seismic interferometry. In this dissertation, I use this method to monitor the evolution of groundwater storage and of damage induced by earthquakes. Two research lines are investigated: the key controls of groundwater recharge in steep landscapes, and the predictability and duration of the transient physical properties due to earthquake ground shaking. These two types of dynamics modulate each other and influence the velocity changes in ways that are challenging to disentangle; a part of my doctoral research also addresses this interaction. Seismic data from a range of field settings spanning several climatic conditions (wet to arid climate) in various seismic-prone areas are considered. I constrain the obtained seismic velocity time series using simple physical models, independent datasets, geophysical tools and nonlinear analysis. Additionally, a methodological development is proposed to improve the time resolution of passive seismic monitoring.
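For readers unfamiliar with how velocity changes are inferred from noise correlations, the sketch below illustrates the widely used coda-wave stretching technique: a homogeneous relative velocity change dv/v shifts lapse times by dt/t = -dv/v, so the current correlation function is a stretched copy of the reference, and a grid search over the stretch factor recovers dv/v. The synthetic waveform stands in for real noise correlation functions; the dissertation itself may rely on this or on related measurement methods.

```python
import numpy as np

def stretching_dvv(reference, current, t, epsilons):
    """Grid-search the relative time shift dt/t; return dv/v = -dt/t."""
    best_eps, best_cc = 0.0, -1.0
    for eps in epsilons:
        # undo a trial stretch: sample the current trace at t * (1 + eps)
        stretched = np.interp(t * (1.0 + eps), t, current)
        cc = np.corrcoef(reference, stretched)[0, 1]
        if cc > best_cc:
            best_eps, best_cc = eps, cc
    return -best_eps, best_cc

t = np.linspace(0.0, 10.0, 2001)
reference = np.sin(2 * np.pi * 2.0 * t) * np.exp(-0.3 * t)
true_dvv = 0.01                          # 1 % velocity increase
dtt = -true_dvv                          # implied lapse-time shift dt/t
current = np.interp(t / (1.0 + dtt), t, reference)
dvv, cc = stretching_dvv(reference, current, t,
                         np.linspace(-0.05, 0.05, 1001))
print(f"recovered dv/v = {dvv:.4f} (cc = {cc:.3f})")
```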
The impact of individual differences in cognitive skills and socioeconomic background on key educational, occupational, and health outcomes, as well as the mechanisms underlying inequalities in these outcomes across the lifespan, are two central questions in lifespan psychology. The contextual embeddedness of such questions in ontogenetic (i.e., individual, age-related) and historical time is a key element of lifespan psychological theoretical frameworks such as the HIstorical changes in DEvelopmental COntexts (HIDECO) framework (Drewelies et al., 2019). Because the dimension of time is also a crucial part of empirical research designs examining developmental change, a third central question in research on lifespan development is how the timing and spacing of observations in longitudinal studies might affect parameter estimates of substantive phenomena. To address these questions in the present doctoral thesis, I applied innovative state-of-the-art methodology including static and dynamic longitudinal modeling approaches, used data from multiple international panel studies, and systematically simulated data based on empirical panel characteristics, in three empirical studies.
The first study of this dissertation, Study I, examined the importance of adolescent intelligence (IQ), grade point average (GPA), and parental socioeconomic status (pSES) for adult educational, occupational, and health outcomes over ontogenetic and historical time. To examine the possible impact of historical changes in the 20th century on the relationships between adolescent characteristics and key adult life outcomes, the study capitalized on data from two representative US cohort studies, the National Longitudinal Surveys of Youth 1979 and 1997, whose participants were born in the late 1960s and 1980s, respectively. Adolescent IQ, GPA, and pSES were positively associated with adult educational attainment, wage levels, and mental and physical health. Across historical time, the influence of IQ and pSES on educational, occupational, and health outcomes remained approximately the same, whereas GPA gained in importance for individuals born in the 1980s.
The second study of this dissertation, Study II, aimed to examine strict cumulative advantage (CA) processes as possible mechanisms underlying individual differences and inequality in wage development across the lifespan. It proposed dynamic structural equation models (DSEM) as a versatile statistical framework for operationalizing and empirically testing strict CA processes in research on wages and wage dynamics (i.e., wage levels and growth rates). Drawing on longitudinal representative data from the US National Longitudinal Survey of Youth 1979, the study modeled wage levels and growth rates across 38 years. Only 0.5 % of the sample revealed strict CA processes and explosive wage growth (autoregressive coefficients AR > 1), with the majority of individuals following logarithmic wage trajectories across the lifespan. Adolescent intelligence (IQ) and adult highest educational level explained substantial heterogeneity in initial wage levels and long-term wage growth rates over time.
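To make the role of the autoregressive coefficient tangible: in this kind of dynamic framing, a within-person coefficient above 1 lets past wage advantages compound (strict cumulative advantage and explosive growth), while a coefficient below 1 yields trajectories that converge to a stable level. The following is a minimal simulation under illustrative parameter values, not the study's fitted model.

```python
import numpy as np

def simulate_wages(ar, intercept=1.0, w0=10.0, years=38, sigma=0.3, seed=0):
    """Simulate an AR(1) wage trajectory: w_t = c + ar * w_{t-1} + noise."""
    rng = np.random.default_rng(seed)
    w = np.empty(years)
    w[0] = w0
    for year in range(1, years):
        w[year] = intercept + ar * w[year - 1] + rng.normal(0.0, sigma)
    return w

convergent = simulate_wages(ar=0.9)    # settles near intercept/(1-ar) = 10
explosive = simulate_wages(ar=1.05)    # strict cumulative advantage
print("year 37 wage, AR=0.90:", round(convergent[-1], 1))
print("year 37 wage, AR=1.05:", round(explosive[-1], 1))
```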
The third study of this dissertation, Study III, investigated the role of observation timing variability in the estimation of non-experimental intervention effects in panel data. Although longitudinal studies often aim at equally spaced intervals between their measurement occasions, this goal is hardly ever met. Drawing on continuous time dynamic structural equation models, the study examines the – seemingly counterintuitive – potential benefits of measurement intervals that vary both within and between participants (often called individually varying time intervals, IVTs) in a panel study. It illustrates the method by modeling the effect of the transition from primary to secondary school on students’ academic motivation using empirical data from the German National Educational Panel Study (NEPS). Results of a simulation study based on this real-life example reveal that individual variation in time intervals can indeed benefit the estimation precision and recovery of the true intervention effect parameters.
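The key idea that makes IVTs usable is that a continuous time model has a single drift parameter from which the discrete-time autoregressive effect for any interval follows as exp(A·Δt); irregular spacing therefore informs, rather than hinders, estimation. A minimal univariate illustration, with an assumed drift value:

```python
import numpy as np

A = -0.4                                 # continuous-time drift (assumed)
for dt in [0.5, 1.0, 1.7, 2.3]:          # unequal intervals between waves
    phi = np.exp(A * dt)                 # implied discrete-time AR effect
    print(f"dt = {dt:3.1f} -> autoregression = {phi:.3f}")
```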
This dissertation focuses on the handling of time in dialogue. Specifically, it investigates how humans bridge time, or “buy time”, when they are expected to convey information that is not yet available to them (e.g. a travel agent searching for a flight in a long list while the customer is on the line, waiting). It also explores the feasibility of modeling such time-bridging behavior in spoken dialogue systems, and it examines how endowing such systems with more human-like time-bridging capabilities may affect humans’ perception of them.
The relevance of time-bridging in human-human dialogue seems to stem largely from a need to avoid lengthy pauses, as these may cause both confusion and discomfort among the participants of a conversation (Levinson, 1983; Lundholm Fors, 2015). However, this avoidance of prolonged silence is at odds with the incremental nature of speech production in dialogue (Schlangen and Skantze, 2011): Speakers often start to verbalize their contribution before it is fully formulated, and sometimes even before they possess the information they need to provide, which may result in them running out of content mid-turn.
In this work, we elicit conversational data from humans, to learn how they avoid being silent while they search for information to convey to their interlocutor. We identify commonalities in the types of resources employed by different speakers, and we propose a classification scheme. We explore ways of modeling human time-buying behavior computationally, and we evaluate the effect on human listeners of embedding this behavior in a spoken dialogue system.
Our results suggest that a system using conversational speech to bridge time while searching for information to convey (as humans do) can provide a better experience in several respects than one which remains silent for a long period of time. However, not all speech serves this purpose equally: Our experiments also show that a system whose time-buying behavior is more varied (i.e. which exploits several categories from the classification scheme we developed and samples them based on information from human data) can prevent overestimation of waiting time when compared, for example, with a system that repeatedly asks the interlocutor to wait (even if these requests for waiting are phrased differently each time). Finally, this research shows that it is possible to model human time-buying behavior on a relatively small corpus, and that a system using such a model can be preferred by participants over one employing a simpler strategy, such as randomly choosing utterances to produce during the wait – even when the utterances used by both strategies are the same.
Despite the popularity of thermoresponsive polymers, much is still unknown about their behavior, how it is triggered, and what factors influence it, hindering the full exploitation of their potential. One particularly puzzling phenomenon is called co-nonsolvency, in which a polymer is soluble in two individual solvents but counter-intuitively becomes insoluble in mixtures of both. Despite the innumerable potential applications of such systems, including actuators, viscosity regulators and carrier structures, this field has not yet been extensively studied apart from the classical example of poly(N-isopropylacrylamide) (PNIPAM) in mixtures of water and methanol. Therefore, this thesis focuses on evaluating how changes in the chemical structure of the polymers impact the thermoresponsive, aggregation and co-nonsolvency behaviors of both homopolymers and amphiphilic block copolymers. Within this scope, both the synthesis of the polymers and their characterization in solution are investigated. Homopolymers were synthesized by conventional free radical polymerization, whereas block copolymers were synthesized by consecutive reversible addition-fragmentation chain transfer (RAFT) polymerizations. The synthesis of the monomers N-isopropylmethacrylamide (NIPMAM) and N-vinylisobutyramide (NVIBAM), as well as of a few chain transfer agents, is also covered. Through turbidimetry measurements, the thermoresponsive and co-nonsolvency behavior of PNIPMAM and PNVIBAM homopolymers is then compared to that of the well-known PNIPAM, in aqueous solutions with nine different organic co-solvents. Additionally, the effects of end-groups, molar mass, and concentration are investigated. Despite the similarity of their chemical structures, the three homopolymers show significant differences in transition temperatures and some divergences in their co-nonsolvency behavior. More complex systems are also evaluated, namely amphiphilic di- and triblock copolymers of PNIPAM and PNIPMAM with polystyrene and poly(methyl methacrylate) hydrophobic blocks. Dynamic light scattering is used to evaluate their aggregation behavior in aqueous and mixed aqueous solutions, and how it is affected by the chemical structure of the blocks, the chain architecture, the presence of co-solvents and the polymer concentration. The results obtained shed light on the thermoresponsive, co-nonsolvency and aggregation behavior of these polymers in solution, providing valuable information for the design of systems with a desired aggregation behavior that generate targeted responses to changes in temperature and solvent mixture.
Multiplexity, the coexistence of more than one type of relationship between two actors, is a prevalent phenomenon with clear relevance for a wide range of management settings and phenomena. While there is a substantial body of work on multiplexity, the absence of a shared terminology and of a typology for the mechanisms and arguments used in theorizing about its implications nevertheless hampers its appeal to organizational network scholars and slows its progress. Based on content analysis of 103 studies, we propose “relational harmony,” “task complementarity,” and “relational scope” as three categories to integrate the mechanisms and arguments used in the literature to theorize about the implications of multiplexity. We then survey the literature in light of this typology to show how it is also useful in revealing patterns of theorizing; for example, with respect to the types of relationships that are studied in relation to multiplexity. We conclude with suggestions for future research directions, focusing on how these can be pursued based on our integrative typology. We hope that the common ground we provide for theorizing about the implications of multiplexity will make it an even more engaging topic for organizational network and management scholars, and place it in the company of more prominently used relational constructs in management research, as aligned with its prevalence and relevance.
This paper discusses Franz Rosenzweig’s use of the term “the unconscious” (das Unbewußte) and possible influences on his understanding of it. I claim that for Rosenzweig, it is through the unconscious that the individual becomes aware of himself and becomes capable of fulfilling his longing to achieve self-fulfillment and eventually to take part in a collective redemption. The unconscious is often perceived as the mental sphere related to trauma and repression in which defense mechanisms and fantasies are evolved. Fantasies are psychological tools that allow the individual to cope with trauma, but they are also “layers of enclosedness,” illusions that should be dissolved. Hence, in the unconscious, we find a possibility of liberation.
The Tetrarchy as Ideology
(2023)
The 'Tetrarchy', the modern name assigned to the period of Roman history that started with the emperor Diocletian and ended with Constantine I, has been a much-studied and much-debated field of Roman imperial history. Debate, however, has focused primarily on whether it was a true 'system' of government or rather a collection of ad-hoc measures undertaken to stabilise the empire after the troubled period of the 3rd century CE. The papers collected here aim to go beyond this question and to present an innovative approach to a fascinating period of Roman history by understanding the Tetrarchy not as a system of government, but primarily as a political language. Their focus thus lies on the language and ideology of the imperial college and court, on the performance of power in imperial ceremonies, on the representation of the emperors and their enemies in the provinces of the Roman world, as well as on the afterlife of Tetrarchic power in the Constantinian period.
The limitations and possibilities of the state in solving societal problems are perennial issues in the political and policy sciences, and increasingly so in studies of environmental politics. With the aim of better understanding the role of the state in addressing environmental degradation through policy making, this article investigates the nexus between environmental policy outputs and environmental performance. Drawing on three theoretical perspectives on the state and market nexus in the environmental dilemma, we identify five distinct pathways. We then examine the extent to which these pathways are manifested in the real world. Our empirical investigation covers up to 37 countries for the period 1970–2010. While we see no global pattern of linkages between policy outputs and performance, our exploratory analysis finds evidence of policy effects, which suggests that the state can, under certain circumstances, improve the environment through policy making.
Background: The concept of self-compassion (SC), a special way of being compassionate with oneself while dealing with stressful life circumstances, has attracted increasing attention in research over the past two decades. Research has already shown that SC has beneficial effects on affective well-being and other mental health outcomes. However, little is known about the ways in which SC might facilitate our affective well-being in stressful situations. Hence, a central concern of this dissertation was the question of which underlying processes might influence the link between SC and affective well-being. Two established components of stress processing that might play an important role in this context are the amount of experienced stress and the way of coping with a stressor. Thus, using a multi-method approach, this dissertation aimed to determine to what extent SC might help to alleviate experienced stress and promote the use of more salutary coping while dealing with stressful circumstances; these processes might ultimately help improve one’s affective well-being. Derived from this, it was hypothesized that more SC is linked to less perceived stress and an intensified use of salutary coping responses. Additionally, it was suggested that perceived stress and coping mediate the relation between SC and affective well-being.
Method: The research questions were targeted in three single studies and one meta-study. To test my assumptions about the relations of SC and coping in particular, a systematic literature search was conducted, resulting in k = 136 samples with an overall sample size of N = 38,913. To integrate the z-transformed Pearson correlation coefficients, random-effects models were calculated. All hypotheses were tested with a three-wave cross-lagged design in two short-term longitudinal online studies assessing SC, perceived stress and coping responses in all waves. The first study explored the assumptions in a student sample (N = 684) with a mean age of 27.91 years over a six-week period, whereas the second implemented the measurements in the GESIS Panel (N = 2,934; mean age of 52.76 years), analyzing the hypotheses in a population-based sample across eight weeks. Finally, an ambulatory assessment study was designed to expand the findings of the longitudinal studies to the intraindividual level. Thus, a sample of 213 participants completed questionnaires on momentary SC, perceived stress, engagement and disengagement coping, and affective well-being on their smartphones three times per day over seven consecutive days. The data were processed using 1-1-1 multilevel mediation analyses.
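The meta-analytic step described above can be sketched in a few lines: Fisher z-transform the per-sample correlations, pool them under a DerSimonian-Laird random-effects model, and back-transform the pooled estimate. The three (r, n) pairs below are made-up stand-ins for the k = 136 samples.

```python
import numpy as np

r = np.array([0.35, 0.48, 0.22])          # per-sample correlations (made up)
n = np.array([120, 300, 85])              # sample sizes (made up)

z = np.arctanh(r)                         # Fisher z-transform
v = 1.0 / (n - 3)                         # within-sample variances
w = 1.0 / v                               # fixed-effect weights

# DerSimonian-Laird estimate of between-sample heterogeneity tau^2
z_fixed = np.sum(w * z) / np.sum(w)
q = np.sum(w * (z - z_fixed) ** 2)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - (len(r) - 1)) / c)

w_star = 1.0 / (v + tau2)                 # random-effects weights
z_pooled = np.sum(w_star * z) / np.sum(w_star)
print(f"pooled r = {np.tanh(z_pooled):.3f}, tau^2 = {tau2:.4f}")
```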
Results: Results of the meta-analysis indicated that higher SC is significantly associated with more use of engagement coping and less use of disengagement coping. Regarding the relations between SC and stress-processing variables, in all three single studies the cross-lagged paths from the longitudinal data, as well as the multilevel modeling paths from the ambulatory assessment data, indicated notable relations between all relevant stress variables. As expected, results showed a significant negative relation of SC with perceived stress and disengagement coping, as well as a positive connection with engagement coping responses, at both the dispositional and the intraindividual level. However, regarding the mediation hypothesis, the most promising pathway in the link between SC and affective well-being turned out to be perceived stress in all three studies, while the effects of the mediational pathways through coping responses were less robust.
Conclusion: Thus, a more self-compassionate attitude, and higher momentary SC when needed in specific situations, can help individuals engage in effective stress processing. Considering the underlying mechanisms in the link between SC and affective well-being, stress perception in particular seemed to be the most promising candidate for enhancing affective well-being at both the dispositional and the intraindividual level. Future research should explore the pathways between SC and affective well-being in specific contexts and samples, and also take into account additional influential factors.
High growth firms (HGFs) are important for job creation and considered to be precursors of economic growth. We investigate how formal institutions, like product- and labor-market regulations, as well as the quality of regional governments that implement these regulations, affect HGF development across European regions. Using data from Eurostat, OECD, WEF, and Gothenburg University, we show that both regulatory stringency and the quality of the regional government influence the regional shares of HGFs. More importantly, we find that the effect of labor- and product-market regulations ultimately depends on the quality of regional governments: in regions with high quality of government, the share of HGFs is neither affected by the level of product market regulation, nor by more or less flexibility in hiring and firing practices. Our findings contribute to the debate on the effects of regulations by showing that regulations are not, per se, “good, bad, and ugly”, rather their impact depends on the efficiency of regional governments. Our paper offers important building blocks to develop tailored policy measures that may influence the development of HGFs in a region.
The G protein-coupled estrogen receptor (GPER1) is acknowledged as an important mediator of estrogen signaling. Given the ubiquitous expression of GPER1, it is likely that the receptor plays a role in a variety of malignancies, not only in the classic hormonally regulated tissues (e.g., breast, ovary, and prostate) but also in the colon. As colorectal cancer (CRC) is the third most common cancer in both men and women worldwide, and environmental factors and dietary habits are important risk factors, it is increasingly recognized that natural and synthetic hormones and their associated receptors might play a role in CRC. Through oral consumption, environmental contaminants with endocrine activity come into contact with the gastrointestinal mucosa, where they might exert their toxic effects. Although GPER1 has been shown to be engaged in physiological and pathophysiological processes, its role in CRC remains poorly understood; both pro- and anti-tumorigenic effects are described in the literature. This thesis has uncovered novel roles of GPER1 in mediating major CRC-associated phenotypes in transformed and non-transformed colon cell lines. Exposure to the estrogens 17β-estradiol (E2), bisphenol-A (BPA) and diethylstilbestrol (DES), but also to the androgen dihydrotestosterone (DHT), resulted in GPER1-dependent induction of supernumerary centrosomes, whole chromosomal instability (w-CIN) and aneuploidy. Indeed, both knockdown and inhibition of GPER1 attenuated the generation of (xeno)hormone-driven supernumerary centrosomes and karyotype instability. Mechanistically, (xeno)hormone-induced centrosome amplification was associated with transient multipolar mitosis and the generation of so-called anaphase “lagging” chromosomes. The results of this thesis propose a GPER1/PKA/AKAP9 pathway regulating centrosome numbers in colorectal cancer cells and the involvement of the centriolar protein centrin. Remarkably, exposure to (xeno)hormones resulted in atypical enlargement and unexpected phosphorylation of the centriole marker centrin in interphase. These findings provide a novel role for GPER1 in key CRC-prone lesions and shed light on underlying mechanisms that involve GPER1 function in the colon. Elucidating to what extent centrosomal proteins are involved in the GPER1-mediated aneugenic effect will be an important task for future studies. The present study was intended to lay a first foundation for understanding the molecular basis and potential risk factors of CRC, which might help to reduce the use of laboratory animals. Since numerous animal experiments are conducted in biomedical research, the development of alternative methods is indispensable. The Federal Institute for Risk Assessment (BfR), as the German Center for the Protection of Laboratory Animals (Bf3R), addresses this issue by uncovering underlying mechanisms leading to colorectal cancer as a necessary prerequisite for developing alternative methods.
Widespread on social networking sites (SNSs), envy has been linked to an array of detrimental outcomes for users’ well-being. While envy has been considered a status-related emotion and is likely to be experienced in response to perceiving another’s higher status, there is a lack of research exploring how status perceptions influence the emergence of envy on SNSs. This is important because SNSs typically quantify social interactions and reach with metrics that indicate users’ relative rank and status in the network. To understand how status perceptions impact SNS users, we introduce a new form of metric-based digital status rooted in SNS metrics that are available and visible on a platform. Drawing on social comparison theory and status literature, we conducted an online experiment to investigate how different forms of status contribute to the proliferation of envy on SNSs. Our findings shed light on how metric-based digital status influences feelings of envy on SNSs. Specifically, we could show that metric-based digital status impacts envy through increasing perceptions of others’ socioeconomic and sociometric statuses. Our study contributes to the growing discourse on the negative outcomes associated with SNS use and its consequences for users and society.
The Right to Research
(2023)
Refugees and displaced people rarely figure as historical actors, and almost never as historical narrators. We often assume a person residing in a refugee camp, lacking funding, training, social networks, and other material resources that enable the research and writing of academic history, cannot be a historian because a historian cannot be a person residing in a refugee camp.
The Right to Research disrupts this tautology by featuring nine works by refugee and host-community researchers from across Africa, Europe, and the Middle East. Identifying the intrinsic challenges of making space for diverse voices within a research framework and infrastructure that is inherently unequal, this edited volume offers a critical reflection on what history means, who narrates it, and what happens when those long excluded from authorship bring their knowledge and perspectives to bear. Chapters address topics such as education in Kakuma Refugee Camp, the political power of hip-hop in Rwanda, women migrants to Yemen, and the development of photojournalism in Kurdistan.
Exploring what it means to become a researcher, The Right to Research understands historical scholarship as an ongoing conversation - one in which we all have a right to participate.
Private international law (PIL) might seem disconnected from peacebuilding and peacekeeping efforts. However, this perception falls short. PIL, in contrast to public international law’s direct peacekeeping potential, indirectly contributes to peace by fostering mutual respect between states. The relationship between PIL and peace stems from the recognition and respect states show for each other’s legal systems. PIL operates on the principle of comity, where states acknowledge the applicability of foreign laws to resolve cases. In essence, while PIL’s impact on peace is indirect and modest, its emphasis on mutual respect and fair treatment contributes to peaceful relations between states, making it an important element in the broader context of peacebuilding and peacekeeping efforts. PIL does not determine substantive fairness for parties but focuses on localizing cases at a meta-level of conflict-of-laws. This localization is guided by party, trade, and regulatory interests, and is rooted in neutrality and respect for other legal systems. While the principle of equivalence and neutrality remains foundational in PIL, exceptions and limitations have been established over time to address specific scenarios, ensuring a balanced approach that respects both foreign legal systems and fundamental legal principles.
Despite energy efficiency measures, global energy demand has gradually increased due to global economic growth and changes in consumer behavior. Even if people are aware of the problem and want to change their energy consumption, they have difficulty acting on their attitudes. This is called the attitude-behavior gap. To narrow this gap and reduce energy consumption and CO2 emissions, behavioral interventions beyond technological advances must be considered. A promising intervention is nudging, which uses insights from behavioral economics to gently nudge individuals toward more sustainable choices. In this study, we investigate how modifying digital choice architectures with nudges can influence consumer energy conservation behavior in smart home applications (SHAs). We conducted an online experiment with 391 participants to test the effectiveness of the following three digital nudges in an SHA: a self-commitment, a reminder, and a social norm nudge. While the results of a structural equation model indicated no effect on bridging the gap between attitude and behavior, we found the potential to promote energy conservation with two nudge types. Thus, this paper makes a substantial contribution to persuasive and information systems-enabled sustainability for a better world in the form of digital nudges for emerging technologies.
Jacob Brandon Maduro’s Memoirs and Related Observations (Havana, 1953) speak to the lasting yet malleable legacy of Jewish Caribbean/Atlantic mercantile communities that defined early modern settlement in the Americas. A close reading of the Memoirs, alongside relevant archival records and community narratives, lends new perspectives to scholarship on Port Jewries and the Atlantic Diaspora. Specifically concerned with Jacob’s adoption of such leading intellectual and political tropes as the Monroe doctrine, José Martí’s Nuestra America, and a Zionism that evolved from an ideology to a reality, the Memoirs reveal a narrative at once defined by the tremendous upheavals of the first half of the 20th century, and an enduring sense of Jewish diasporic peoplehood defined through a Port Jew paradigm whereby the preservation of Jewish ethnicity is understood as synonymous with the championing of modernity.
Ethical issues surrounding modern computing technologies play an increasingly important role in the public debate. Yet ethics still appears either not at all or only to a very small extent in computer science degree programs. This paper provides an argument for the value of ethics beyond a pure responsibility perspective and describes the positive value of ethical debate for future computer scientists. It also provides a systematic analysis of the module handbooks of 67 German universities and shows that there is indeed a lack of ethics in computer science education. Finally, we present a principled design of a compulsory course for undergraduate students.
The Persistence of Memory
(2023)
The 2017 Pixar film Coco and the 2021 Disney film Encanto form a small part of an increasing modern wave of media focused on parent-child conflicts caused by intergenerational trauma and rejection. Other recent works in this genre include the video game Hades, the films Turning Red and Everything Everywhere All At Once, and the television series Ms. Marvel. The traumas in all these films, some directed explicitly at a younger audience and some pitched more broadly, serve as a distinct set of meditations on the immigrant experience, even while not necessarily focusing on literal immigration. They also all invoke imagery of ghosts and death, both echoing specific classical Mediterranean motifs and tropes and incorporating a wide variety of other cultures’ supernatural traditions. These works’ concern with familial traumas of separation, culture shock, and loss of ancestral memories and connections contrasts sharply with the individual-focused myth of the American Dream common to earlier generations of American media, in which a lone individual typically emigrates, assimilates, and succeeds in a new culture, forming a new family and set of myths. However, themes of assimilation and questions of cultural imperialism also form a bridge between ancient Roman and modern North American anxieties and traditions.
The Andean Cordillera is a mountain range located at the western South American margin and is part of the Eastern Circum-Pacific orogenic belt. The ~7000 km long mountain range is one of the longest on Earth and hosts the second largest orogenic plateau in the world, the Altiplano-Puna plateau. The Andes are known as a non-collisional subduction-type orogen which developed as a result of the interaction between the subducted oceanic Nazca plate and the South American continental plate. The different Andean segments exhibit along-strike variations of morphotectonic provinces characterized by different elevations, volcanic activity, deformation styles, crustal thickness, shortening magnitude and oceanic plate geometry. Most of the present-day elevation can be explained by crustal shortening in the last ~50 Ma, with the shortening magnitude decreasing from ~300 km in the central (15°S-30°S) segment to less than half that in the southern part (30°S-40°S). Several factors have been proposed that might control the magnitude and acceleration of shortening of the Central Andes in the last 15 Ma. One important factor is likely the slab geometry. At 27-33°S, the slab dips horizontally at ~100 km depth due to the subduction of the buoyant Juan Fernandez Ridge, forming the Pampean flat-slab. This horizontal subduction is thought to influence the thermo-mechanical state of the Sierras Pampeanas foreland, for instance, by strengthening the lithosphere and promoting the thick-skinned propagation of deformation to the east, resulting in the uplift of the Sierras Pampeanas basement blocks. The flat-slab has migrated southwards from the Altiplano latitude at ~30 Ma to its present-day position, and the processes and consequences associated with its passage on the contemporaneous acceleration of the shortening rate in the Central Andes remain unclear. Although the passage of the flat-slab could offer an explanation for the acceleration of the shortening, its timing does not explain the two pulses of shortening at about 15 Ma and 4 Ma that are suggested by geological observations. I hypothesize that deformation in the Central Andes is controlled by a complex interaction between the subduction dynamics of the Nazca plate and the dynamic strengthening and weakening of the South American plate due to several upper plate processes. To test this hypothesis, a detailed investigation into the role of the flat-slab, the structural inheritance of the continental plate, and the subduction dynamics in the Andes is needed. Therefore, I have built two classes of numerical thermo-mechanical models: (i) The first class is a series of generic E-W-oriented high-resolution 2D subduction models that include flat subduction, in order to investigate the role of the subduction dynamics in the temporal variability of the shortening rate in the Central Andes at Altiplano latitudes (~21°S). The shortening rate from the models was then validated against the observed tectonic shortening rate in the Central Andes. (ii) The second class is a series of 3D data-driven models of the present-day Pampean flat-slab configuration and the Sierras Pampeanas (26-42°S). These models aim to investigate the relative contributions of the present-day flat subduction and of inherited structures in the continental lithosphere to strain localization. Both model classes were built using the advanced finite element geodynamic code ASPECT.
The first main finding of this work is to suggest that the temporal variability of shortening in the Central Andes is primarily controlled by the subduction dynamics of the Nazca plate while it penetrates into the mantle transition zone. These dynamics depend on the westward velocity of the South American plate, which provides the main crustal shortening force to the Andes and forces the trench to retreat. When the subducting plate reaches the lower mantle, it buckles on itself until the forced trench retreat causes the slab to steepen in the upper mantle, in contrast with the classical slab-anchoring model. The steepening of the slab hinders the trench, causing it to resist the advancing South American plate and resulting in pulsatile shortening. This buckling-and-steepening subduction regime could have been initiated by the overall decrease in the westward velocity of the South American plate. In addition, the passage of the flat-slab is required to promote the shortening of the continental plate, because flat subduction scrapes the mantle lithosphere and thus weakens the continental plate. This process contributes to efficient shortening when the trench is hindered, followed by mantle lithosphere delamination at ~20 Ma. Finally, the underthrusting of the Brazilian cratonic shield beneath the orogen occurs at ~11 Ma due to the mechanical weakening of the thick sediments covering the shield margin and the decreasing resistance of the weakened lithosphere of the orogen.
The second main finding of this work is to suggest that the cold flat-slab strengthens the overriding continental lithosphere and prevents strain localization. Therefore, the deformation is transmitted to the eastern front of the flat-slab segment by the shear stress operating at the subduction interface; thus the flat-slab acts like an indenter that “bulldozes” the mantle keel of the continental lithosphere. The offset in the eastward propagation of deformation between the flat-slab segment and the steeper slab segment to the south causes the formation of a transpressive dextral shear zone. Here, inherited faults from past tectonic events are reactivated and further localize the deformation in an en-echelon strike-slip shear zone, through a mechanism that I refer to as the “flat-slab conveyor”. Specifically, the shallowing of the flat-slab causes the lateral deformation, which explains the timing of multiple geological events preceding the arrival of the flat-slab at 33°S. These include the onset of compression and the transition from thin- to thick-skinned deformation styles resulting from contraction of the crust in the Sierras Pampeanas, some 10 and 6 Myr before the Juan Fernandez Ridge collision at that latitude, respectively.
The persistence of food preferences, which are crucial for diet-related decisions, is a significant obstacle to changing unhealthy eating behavior. To overcome this obstacle, the current study investigates whether posthypnotic suggestions (PHSs) can enhance food-related decisions, measured via food choices and subjective ratings. After assessing hypnotic susceptibility in Session 1, at the beginning of Session 2 a PHS was delivered, aiming to increase the desirability of healthy food items (e.g., vegetables and fruit). After the termination of hypnosis, a set of two tasks was administered twice, once with the PHS activated and once deactivated, in counterbalanced order. The task set consisted of rating 170 pictures of food items, followed by an online supermarket where participants were instructed to select enough food from the same item pool for a fictitious week of quarantine. After 1 week, Session 3 mimicked Session 2 without renewed hypnosis induction, to assess the persistence of the PHS effects. The Bayesian hierarchical modeling results indicate that the PHS increased preferences for and choices of healthy food items without altering the influence of preferences on choices. In contrast, for unhealthy food items, not only were both preferences and choices decreased due to the PHS, but their relationship was also modified. That is, although choices became negatively biased against unhealthy items, preferences played a more dominant role in unhealthy choices when the PHS was activated. Importantly, all effects persisted over 1 week, qualitatively and quantitatively. Our results indicate that although the PHS affected healthy choices through resolve (i.e., items were preferred more and chosen more), unhealthy items were probably chosen less impulsively, through effortful suppression. Together, besides the translational importance of the current results for addressing the obesity epidemic in modern societies, our results contribute theoretically to the understanding of hypnosis and food choices.
About 15 years ago, the first Massive Open Online Courses (MOOCs) appeared and revolutionized online education with more interactive and engaging course designs. Yet keeping learners motivated and ensuring high satisfaction is one of the challenges today's course designers face. Therefore, many MOOC providers have employed gamification elements that only boost extrinsic motivation briefly and are limited by platform support. In this article, we introduce and evaluate a gameful learning design we used in several iterations of computer science education courses. For each of the courses on the fundamentals of the Java programming language, we developed a self-contained, continuous story that accompanies learners through their learning journey and helps visualize key concepts. Furthermore, we share our approach to creating the surrounding story in our MOOCs and provide a guideline for educators to develop their own stories. Our data and the long-term evaluation spanning four Java courses between 2017 and 2021 indicate the openness of learners toward storified programming courses in general and highlight those elements that had the highest impact. While only a few learners did not like the story at all, most learners consumed the additional story elements we provided. However, learners' interest in influencing the story through majority voting was negligible and did not show a considerable positive impact, so we continued with a fixed story instead. We did not find evidence that learners just participated in the narrative because they worked on all materials. Instead, for 10-16% of learners, the story was their main course motivation. We also investigated differences in the presentation format and concluded that several longer audio-book style videos were most preferred by learners in comparison to animated videos or different textual formats. Surprisingly, the availability of a coherent story embedding examples and providing a context for the practical programming exercises also led to a slightly higher ranking in the perceived quality of the learning material (by 4%). With our research in the context of storified MOOCs, we advance gameful learning designs, foster learner engagement and satisfaction in online courses, and help educators ease knowledge transfer for their learners.
This thesis bridges two areas of mathematics, algebra on the one hand with the Milnor-Moore theorem (also called Cartier-Quillen-Milnor-Moore theorem) as well as the Poincaré-Birkhoff-Witt theorem, and analysis on the other hand with Shintani zeta functions which generalise multiple zeta functions.
The first part is devoted to an algebraic formulation of the locality principle in physics and to generalisations of classification theorems such as the Milnor-Moore and Poincaré-Birkhoff-Witt theorems to the locality framework. The locality principle roughly says that events that take place far apart in spacetime do not influence each other. The algebraic formulation of this principle discussed here is useful when analysing singularities which arise from events located far apart in space, in order to renormalise them while keeping a memory of the fact that they do not influence each other. We start by endowing a vector space with a symmetric relation, named the locality relation, which keeps track of elements that are "locally independent". The pair of a vector space together with such a relation is called a pre-locality vector space. This concept is extended to tensor products, allowing only tensors made of locally independent elements. We extend this concept to the locality tensor algebra and the locality symmetric algebra of a pre-locality vector space and prove the universal properties of each of these structures. We also introduce pre-locality Lie algebras, together with their associated locality universal enveloping algebras, and prove their universal property. We later upgrade all such structures and results from the pre-locality to the locality context, requiring the locality relation to be compatible with the linear structure of the vector space. This allows us to define locality coalgebras, locality bialgebras, and locality Hopf algebras. Finally, all the previous results are used to prove the locality version of the Milnor-Moore and the Poincaré-Birkhoff-Witt theorems. It is worth noticing that the proofs presented not only generalise the results in the usual (non-locality) setup, but also often use fewer tools than their non-locality counterparts.
The second part is devoted to the study of the polar structure of Shintani zeta functions. Such functions, which generalise the Riemann zeta function, multiple zeta functions, and Mordell-Tornheim zeta functions, among others, are parametrised by matrices with non-negative real entries. It is known that Shintani zeta functions extend to meromorphic functions with poles on affine hyperplanes. We refine this result by showing that the poles lie on hyperplanes parallel to the facets of certain convex polyhedra associated to the defining matrix of the Shintani zeta function. Explicitly, the latter are the Newton polytopes of the polynomials induced by the columns of the underlying matrix. We then prove that the coefficients of the equations which describe the hyperplanes in the canonical basis are either zero or one, similar to the poles arising when renormalising generic Feynman amplitudes. For that purpose, we introduce an algorithm to distribute weight over a graph such that the weight at each vertex satisfies a given lower bound.
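For orientation, one common convention for the Shintani zeta function attached to a k x m matrix A = (a_{ij}) with non-negative real entries is sketched below; shift parameters are omitted, and the exact convention used in the thesis may differ:

\[
  \zeta(A;\, s_1, \dots, s_m) \;=\; \sum_{n_1, \dots, n_k \,\geq\, 1}\;\; \prod_{j=1}^{m} \Bigl( \sum_{i=1}^{k} a_{ij}\, n_i \Bigr)^{-s_j}.
\]

For k = m = 1 and a_{11} = 1 this reduces to the Riemann zeta function, while suitable choices of A recover multiple zeta functions and Mordell-Tornheim zeta functions.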
In recent decades, astronomy has seen a boom in large-scale stellar surveys of the Galaxy. The detailed information obtained about millions of individual stars in the Milky Way is bringing us a step closer to answering one of the most outstanding questions in astrophysics: how do galaxies form and evolve? The Milky Way is the only galaxy in which we can dissect many stars into their high-dimensional chemical composition and complete phase space, which, much like fossil records, can unveil the history of the Galaxy's genesis. The processes that lead to the formation of large structures such as the Milky Way are critical for constraining cosmological models; we call this line of study Galactic archaeology or near-field cosmology.
At the core of this work, we present a collection of efforts to chemically and dynamically characterise the disks and bulge of our Galaxy. The results presented in this thesis have only been possible thanks to the advent of the Gaia astrometric satellite, which has revolutionised the field of Galactic archaeology by precisely measuring the positions, parallax distances and motions of more than a billion stars. Another, no less important, breakthrough is the APOGEE survey, which has observed spectra in the near-infrared, peering into the dusty regions of the Galaxy and allowing us to determine detailed chemical abundance patterns in hundreds of thousands of stars. To accurately depict the Milky Way's structure, we use and develop the Bayesian isochrone-fitting code StarHorse, which predicts stellar distances, extinctions and ages by combining astrometry, photometry and spectroscopy with stellar evolutionary models. The StarHorse code is pivotal for calculating distances where Gaia parallaxes alone do not allow accurate estimates.
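Schematically, Bayesian isochrone fitting of this kind rests on Bayes' theorem; the notation below is illustrative and not taken from the StarHorse papers:

\[
  p(\theta \mid x) \;\propto\; p(x \mid \theta)\, p(\theta), \qquad \theta = \bigl(d,\, A_V,\, \tau,\, m_\ast,\, [\mathrm{Fe}/\mathrm{H}]\bigr),
\]

where x collects the observed astrometry, photometry and spectroscopy, the likelihood p(x | \theta) compares these measurements with the observables predicted by stellar evolutionary models, and the prior p(\theta) can encode, for example, a Galactic stellar density model and an initial mass function. Distances, extinctions and ages are then reported as summary statistics of the marginal posteriors.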
We show that by combining Gaia, APOGEE and photometric surveys with StarHorse, we can produce a chemical cartography of the Milky Way disks from their outermost to their innermost parts. Such a map is unprecedented for the inner Galaxy. It reveals a continuity of the bimodal chemical pattern previously detected in the solar neighbourhood, indicating two populations with distinct formation histories. Furthermore, the data reveal a chemical gradient within the thin disk, where the content of 𝛼-process elements and metals is higher towards the centre. Focusing on a sample in the inner Milky Way, we confirm that the chemical duality extends to the innermost regions of the Galaxy. We find stars on bar-shaped orbits showing both high- and low-𝛼 abundances, suggesting that the bar formed through secular evolution, trapping stars that already existed. By analysing the chemical-orbital space of the inner Galactic regions, we disentangle the multiple populations that inhabit this complex region, revealing the presence of the thin disk, thick disk, bar, and a counter-rotating population that resembles the outcome of a perturbed proto-Galactic disk. Our study also finds that the inner Galaxy hosts a large number of super-metal-rich stars, with metallicities up to three times solar, suggesting it is a possible repository of the old super-metal-rich stars found in the solar neighbourhood.
We also take on the complicated task of deriving individual stellar ages. With StarHorse, we calculate the ages of main-sequence turn-off and subgiant stars for several public spectroscopic surveys. We validate our results by investigating linear relations between chemical abundances and time, since the 𝛼 and neutron-capture elements are sensitive to age, reflecting the different enrichment timescales of these elements. To study the disks in the solar neighbourhood further, we use an unsupervised machine learning algorithm to delineate a multidimensional separation of chrono-chemical stellar groups, revealing the chemical thick disk, the thin disk, and young 𝛼-rich stars. The thick disk is shown to have a small age dispersion, indicating its fast formation, in contrast to the thin disk, which spans a wide range of ages.
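The abstract does not name the clustering algorithm used; purely as an illustration of the kind of unsupervised chrono-chemical separation described, the Python sketch below fits a Gaussian mixture to a hypothetical stellar catalogue (the file name, column names and number of components are all assumptions):

# Illustrative only: unsupervised separation of stellar groups in a
# chrono-chemical space (age plus abundance ratios).
import pandas as pd
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

stars = pd.read_csv("catalogue.csv")                        # assumed file
features = stars[["age_gyr", "alpha_fe", "fe_h"]].dropna()  # assumed columns

# Standardise so that age and abundances contribute on comparable scales.
X = StandardScaler().fit_transform(features)

# Three components mirror the thick disk / thin disk / young alpha-rich
# split described above; the number of components is a modelling choice.
gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
features = features.assign(group=gmm.predict(X))

# Mean age and abundances per group help label the recovered populations.
print(features.groupby("group").mean())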
Built on groundbreaking data, this thesis provides a detailed chemo-dynamical view of the disk and bulge of our Galaxy. Our findings on the Milky Way can be linked to the evolution of high-redshift disk galaxies, helping to solve the conundrum of galaxy formation.
Research within the framework of Basic Psychological Need Theory (BPNT) finds strong associations between basic need frustration and depressive symptoms. This study examined the role of rumination as an underlying mechanism in the association between basic psychological need frustration and depressive symptoms. A cross-sectional sample of N = 221 adults (55.2% female, mean age = 27.95, range = 18–62, SD = 10.51) completed measures assessing their level of basic psychological need frustration, rumination, and depressive symptoms. Correlational analyses and multiple mediation models were conducted. Brooding partially mediated the relation between need frustration and depressive symptoms. BPNT and Response Styles Theory are compatible and can further advance knowledge about depression vulnerabilities.
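As a rough illustration of the kind of mediation model reported here, the sketch below estimates a single-mediator indirect effect with a percentile bootstrap; all variable names are hypothetical, and the study's actual models may include multiple mediators and covariates:

# Minimal sketch of a simple mediation analysis (X -> M -> Y) with a
# bootstrapped indirect effect a*b. Variable names are invented.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("survey.csv")  # assumed: one row per participant

def indirect_effect(d):
    # Path a: need frustration predicting brooding.
    a = sm.OLS(d["brooding"],
               sm.add_constant(d["need_frustration"])).fit().params["need_frustration"]
    # Path b: brooding predicting depressive symptoms, controlling for X.
    b = sm.OLS(d["depressive_symptoms"],
               sm.add_constant(d[["need_frustration", "brooding"]])).fit().params["brooding"]
    return a * b

# Percentile bootstrap (5000 resamples) for a 95% confidence interval.
boot = [indirect_effect(df.sample(n=len(df), replace=True, random_state=i))
        for i in range(5000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")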
Atwood analyzes the effects of the 1963 U.S. measles vaccination on long-run labor market outcomes, using a generalized difference-in-differences approach. We reproduce the results of this paper and perform a battery of robustness checks. Overall, we confirm that the measles vaccination had positive labor market effects. While the negative effect on the likelihood of living in poverty and the positive effect on the probability of being employed are very robust across the different specifications, the headline estimate—the effect on earnings—is more sensitive to the exclusion of certain regions and survey years.
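Schematically, a generalized difference-in-differences design of this kind interacts pre-treatment intensity with post-treatment exposure; the specification below is an illustration, not the paper's exact equation:

\[
  y_{isc} \;=\; \beta \,\bigl(\mathrm{Intensity}_s \times \mathrm{Exposure}_c\bigr) \;+\; \gamma_s \;+\; \delta_c \;+\; X_{isc}'\lambda \;+\; \varepsilon_{isc},
\]

where i indexes individuals, s regions and c birth cohorts, Intensity_s is the pre-vaccine measles incidence in region s, Exposure_c captures how much of cohort c's childhood fell after the 1963 vaccine introduction, and \gamma_s and \delta_c are region and cohort fixed effects. The coefficient \beta then measures the labor market effect of greater childhood exposure to the vaccine.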
The long-term relationship between Medicaid expansion and adult life-threatening chronic conditions
(2023)
We test whether the expansions of children's Medicaid eligibility in the 1980s–1990s resulted in long-term health benefits in terms of severe chronic conditions. We use prospective individual-level panel data from the Panel Study of Income Dynamics (PSID), still relatively rare in this field, along with the higher-quality income measures of the Cross-National Equivalent File (adjusted for taxes, transfers and household size). We observe severe chronic conditions (high blood pressure/heart disease, cancer, diabetes, or lung disease) at ages 30–56 (average age 43.1) for 4670 respondents who were also prospectively observed during childhood (i.e., at ages 0–17). Our analysis exploits within-region temporal variation in childhood Medicaid eligibility and adjusts for state- and individual-level controls; we pay particular attention to adjusting for childhood income. A one-standard-deviation increase in childhood Medicaid eligibility significantly reduces the probability of severe chronic conditions in adulthood by 0.05 to 0.12 (a 16%–37.5% reduction from the mean of 0.32). Across the range of observed childhood Medicaid eligibility, the probability is approximately cut in half. Greater childhood Medicaid eligibility also substantially reduces childhood income disparities in severe chronic conditions: at higher levels of childhood Medicaid eligibility, we find no significant childhood income disparities in adult severe chronic conditions.
The EU and its member countries have been laggards in using forest carbon to reduce EU emissions. The European Green Deal aims to change this. As part of its long-term emissions reductions, the EU aims to offset remaining emissions by creating land-based carbon sinks, especially forest carbon sinks, as well as through carbon capture and storage. This chapter focuses on the role of forest carbon in the EU's climate policies towards achieving net-zero greenhouse gas emissions by 2050. It furthermore examines the European Commission's proposed forest strategy and its proposal for a revised LULUCF Regulation. The chapter shows that the logic of appropriateness dominates the European Commission's forest policies. Finally, the chapter makes policy recommendations on how the EU could credibly use long-term carbon sinks to achieve climate neutrality.
Predicting entrepreneurial development on the basis of individual and business-related characteristics is a key objective of entrepreneurship research. In this context, we investigate whether the motives for becoming an entrepreneur influence subsequent entrepreneurial development. In our analysis, we examine a broad range of business outcomes, including survival and income, as well as job creation, expansion and innovation activities, for up to 40 months after business formation. Using self-determination theory as the conceptual background, we aggregate the start-up motives into a continuous motivational index. Based on a unique dataset of German start-ups founded out of unemployment and non-unemployment, we show that later business performance is better the higher founders score on this index. Effects are particularly strong for growth-oriented outcomes such as innovation and expansion activities. In a next step, we examine three underlying motivational categories that we term opportunity, career ambition, and necessity. We show that individuals driven by opportunity motives perform better in terms of innovation and business expansion activities, while career ambition is positively associated with survival, income, and the probability of hiring employees. All effects are robust to the inclusion of a large battery of covariates that are known to be important determinants of entrepreneurial performance.