Successful sentence comprehension requires the comprehender to correctly figure out who did what to whom. For example, in the sentence John kicked the ball, the comprehender has to figure out who did the kicking and what was kicked. This process of identifying and connecting the syntactically related words in a sentence is called dependency completion. What are the cognitive constraints that determine dependency completion? A widely accepted theory is cue-based retrieval. The theory maintains that dependency completion is driven by a content-addressable search for co-dependents in memory. Cue-based retrieval explains a wide range of empirical data from several constructions, including subject-verb agreement, subject-verb non-agreement, plausibility mismatch configurations, and negative polarity items.
However, there are two major empirical challenges to the theory: (i) data from grammatical sentences with subject-verb number agreement dependencies, where the theory predicts a slowdown at the verb in sentences like the key to the cabinet was rusty compared to the key to the cabinets was rusty, but the data are inconsistent with this prediction; and (ii) data from antecedent-reflexive dependencies, where a facilitation in reading times is predicted at the reflexive in the bodybuilder who worked with the trainers injured themselves vs. the bodybuilder who worked with the trainer injured themselves, but the data do not show a facilitatory effect.
The work presented in this dissertation is dedicated to building a more general theory of dependency completion that can account for the above two datasets without losing the original empirical coverage of the cue-based retrieval assumption. In two journal articles, I present computational modeling work that addresses the above two empirical challenges.
To explain the grammatical-sentence data from subject-verb number agreement dependencies, I propose a new model that assumes that cue-based retrieval operates on a probabilistically distorted representation of nouns in memory (Article I). This hybrid distortion-plus-retrieval model was compared against the existing candidate models using data from 17 studies on subject-verb number agreement in 4 languages. I find that the hybrid model outperforms the existing models of number agreement processing, suggesting that the cue-based retrieval theory must incorporate a feature distortion assumption.
To account for the absence of a facilitatory effect in antecedent-reflexive dependencies, I propose an individual-differences model built within the cue-based retrieval framework (Article II). The model assumes that individuals may differ in how strongly they weigh a syntactic cue relative to a number cue. The model was fitted to data from two studies on antecedent-reflexive dependencies, and the participant-level cue weighting was estimated. We find that, in both studies, one-fourth of the participants weigh the syntactic cue higher than the number cue in processing reflexive dependencies, while the remaining participants weigh the two cues equally. The result indicates that the absence of the predicted facilitatory effect at the level of grouped data is driven by some, not all, participants, namely those who weigh the syntactic cue higher than the number cue. More generally, the result demonstrates that the assumption of differential cue weighting is important for a theory of dependency completion processes. This differential cue weighting idea was independently supported by a modeling study on subject-verb non-agreement dependencies (Article III).
Overall, the cue-based retrieval, which is a general theory of dependency completion, needs to incorporate two new assumptions: (i) the nouns stored in memory can undergo probabilistic feature distortion, and (ii) the linguistic cues used for retrieval can be weighted differentially. This is the cumulative result of the modeling work presented in this dissertation.
The dissertation makes an important theoretical contribution: sentence comprehension in humans is driven by a mechanism that combines cue-based retrieval, probabilistic feature distortion, and differential cue weighting. This insight is theoretically important because there is independent support for these three assumptions in the sentence processing and broader memory literatures. The modeling work presented here is also methodologically important because, for the first time, it demonstrates (i) how complex models of sentence processing can be evaluated using data from multiple studies simultaneously, without oversimplifying the models, and (ii) how inferences drawn from individual-level behavior can be used in theory development.
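The central mechanism in this abstract, cue-based retrieval with (possibly differential) cue weighting, can be illustrated with a toy sketch. This is not the dissertation's actual model; the linear match score, the feature names, and all weight values below are illustrative assumptions only.

```python
# Toy sketch of cue-based retrieval with weighted cues (illustrative only).

def retrieval_activation(candidate, cues, weights, mismatch_penalty=1.0):
    """Score a memory candidate: +w for each matching cue, -penalty*w otherwise."""
    return sum(w if candidate.get(feature) == value else -mismatch_penalty * w
               for (feature, value), w in zip(cues.items(), weights))

# Retrieval cues at the reflexive "themselves": a subject bearing plural number.
cues = {"subject": True, "plural": True}

bodybuilder = {"subject": True, "plural": False}  # grammatical antecedent
trainers = {"subject": False, "plural": True}     # number-matching distractor

# Equal cue weights: the distractor ties with the true antecedent (interference).
print(retrieval_activation(bodybuilder, cues, [1.0, 1.0]))  # 0.0
print(retrieval_activation(trainers, cues, [1.0, 1.0]))     # 0.0

# Syntactic cue weighted higher: the subject wins despite the number mismatch.
print(retrieval_activation(bodybuilder, cues, [3.0, 1.0]))  # 2.0
print(retrieval_activation(trainers, cues, [3.0, 1.0]))     # -2.0
```

Under equal weights the number-matching distractor competes with the subject, which is the configuration that predicts facilitation; weighting the syntactic cue higher removes that competition, matching the individual-differences account sketched in the abstract.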
National Action Plans (NAPs) have been increasingly adopted worldwide since the Vienna Declaration in 1993, which urged states to consider the improvement and promotion of human rights. In this paper, we discuss their usefulness and success by analysing the challenges presented during NAP processes as well as the benefits this set of actions entails: the challenges to their implementation outweigh their actual benefits. Nevertheless, NAPs have great potential. Based on new research, we elaborate a set of recommendations for improving the design and implementation of national action planning. To bring NAPs into practice effectively, we consider it crucial to plan for and analyse every state's local circumstances in detail. The latter is important, since the implementation of a concrete set of actions is intended to directly transform and improve the local living conditions of the people. In a long-term perspective, we defend the benefit of NAP implementation for complying with obligations set out by human rights treaties.
The last years have been affected by Covid-19 and the international emergency mechanism to deal with health-related threats. The effects of this period manifested differently worldwide, depending on matters such as international relations, national policies, and power dynamics. Additionally, the impact of this time will likely have long-term effects which are yet to be known. This paper gives a critical overview of the Public Health Emergency of International Concern (PHEIC) mechanism in the context of Covid-19. It does so by explaining the legal framework for states of emergency, specifically in the context of a PHEIC, while considering its restrictions and limitations on human rights. It further outlines issues in the manifestation of global protections and limitations on human rights during Covid-19. Lastly, considering the likelihood of future PHEICs and the known systemic obstructions, this paper offers ways to improve this mechanism from a holistic, non-zero-sum perspective.
The MOOChub is a joint web-based catalog of all relevant German and Austrian MOOC platforms that lists well over 750 Massive Open Online Courses (MOOCs). Automatically building such a catalog requires that all partners describe and publicly offer the metadata of their courses in the same way. The paper at hand presents the genesis of the idea to establish a common metadata standard and the story of its subsequent development. The result of this effort is, first, an openly licensed de facto standard, which is based on existing, commonly used standards and, second, a first prototypical platform that uses this standard: the MOOChub, which lists all courses of the involved partners. This catalog is searchable and provides a comprehensive overview of essentially all MOOCs offered by German and Austrian MOOC platforms. Finally, upcoming developments to further optimize the catalog and the metadata standard are reported.
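The premise of such a standard, namely that a joint catalog becomes possible once every partner exports course metadata in one agreed shape, can be sketched as follows. The field names and the validation rule are hypothetical stand-ins, not the actual MOOChub specification.

```python
# Hypothetical course record; the real MOOChub field names may differ.
course = {
    "id": "https://example-platform.org/courses/intro-ai",  # made-up URL
    "name": "Introduction to AI",
    "inLanguage": "de",
    "startDate": "2023-10-01",
}

# Fields every partner would have to supply (assumed minimal set).
REQUIRED = {"id", "name", "inLanguage"}

def validate(record):
    """A joint catalog only works if every partner ships the agreed fields."""
    missing = REQUIRED - record.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return record

def search(catalog, term):
    """Naive full-text search over course names, as the catalog might offer."""
    return [c for c in catalog if term.lower() in c["name"].lower()]

catalog = [validate(course)]
print(search(catalog, "ai")[0]["name"])  # Introduction to AI
```

A real standard additionally has to fix value formats (dates, language codes) and a versioning policy, which is exactly the kind of agreement the paper describes negotiating among the partners.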
The integration of MOOCs into Moroccan Higher Education (MHE) began in 2013 through different partnerships and projects at national and international levels. As elsewhere, the Covid-19 crisis has played an important role in accelerating distance education in MHE. However, based on our experience as both university professors and specialists in educational engineering, the digital transition has not yet been effectively executed. Thus, in this article, we present retrospective feedback on MOOCs in Morocco, focusing on the policies adopted by the government to better support the digital transition in general and MOOCs in particular. We therefore seek to establish an optimal scenario for the promotion of MOOCs, one that emphasizes the policies to be considered and recalls the importance of a careful articulation across four levels, namely the environmental, institutional, organizational, and individual. We conclude with recommendations inspired by the Moroccan academic context that focus on the major role MOOCs play for university students and on maintaining lifelong learning.
As a result of CMOS scaling, radiation-induced Single-Event Effects (SEEs) in electronic circuits have become a critical reliability issue for modern Integrated Circuits (ICs) operating under harsh radiation conditions. SEEs can be triggered in combinational or sequential logic by the impact of high-energy particles, leading to destructive or non-destructive faults that result in data corruption or even system failure. Typically, SEE mitigation methods are deployed statically in processing architectures based on the worst-case radiation conditions, which is unnecessary most of the time and results in a resource overhead. Moreover, space radiation conditions change dynamically, especially during Solar Particle Events (SPEs). The intensity of space radiation can vary over five orders of magnitude within a few hours or days, resulting in fault-probability variations of several orders of magnitude in ICs during SPEs. This thesis introduces a comprehensive approach for designing a self-adaptive fault-resilient multiprocessing system to overcome the static mitigation overhead issue. This work mainly addresses the following topics: (1) design of an on-chip radiation particle monitor for real-time radiation environment detection, (2) investigation of a space environment predictor to support solar particle event forecasts, and (3) dynamic mode configuration in the resilient multiprocessing system. According to the detected and predicted in-flight space radiation conditions, the target system can thus be configured to use no mitigation or low-overhead mitigation during non-critical periods, and the redundant resources can be used to improve system performance or save power. During periods of increased radiation activity, such as SPEs, the mitigation methods can be dynamically configured to match the real-time space radiation environment, resulting in higher system reliability.
Thus, a dynamic trade-off between reliability, performance, and power consumption can be achieved in the target system in real time. All results of this work are evaluated in a highly reliable quad-core multiprocessing system that allows the self-adaptive setting of optimal radiation mitigation mechanisms during run-time. The proposed methods can serve as a basis for establishing a comprehensive self-adaptive resilient system design process, and their successful implementation in the quad-core multiprocessor suggests their applicability to other designs.
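The dynamic mode configuration described above can be sketched as a simple decision rule over the measured particle flux. The mitigation modes and flux thresholds below are invented placeholders; the thesis's actual system derives its configuration from the on-chip particle monitor and the space environment predictor.

```python
from enum import Enum

class Mode(Enum):
    PERFORMANCE = "no mitigation: all cores run independent tasks"
    DMR = "dual modular redundancy: cores paired for error detection"
    TMR = "triple modular redundancy: voting plus a spare core"

def select_mode(flux, low=1e2, high=1e4):
    """Pick a mitigation mode for a quad-core system from the particle flux.
    Threshold values are placeholders; real ones come from monitor calibration."""
    if flux < low:
        return Mode.PERFORMANCE  # quiet environment: spend cores on throughput
    if flux < high:
        return Mode.DMR          # elevated flux: detect faults cheaply
    return Mode.TMR              # SPE conditions: maximize reliability

print(select_mode(10).name)   # PERFORMANCE
print(select_mode(5e3).name)  # DMR
print(select_mode(1e6).name)  # TMR
```

The point of the sketch is the trade-off itself: as flux rises, cores are progressively re-purposed from performance to redundancy, and back again when the event subsides.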
This research paper introduces a novel practitioner-oriented and research-based taxonomy of video genres. The taxonomy can serve as a scaffolding strategy to support educators throughout the entire educational system in creating videos for pedagogical purposes. Such a taxonomy is essential, as videos are highly valued resources among learners. Although the use of videos in education has been extensively researched and well documented in systematic research reviews, gaps remain in the literature. Predominantly, researchers employ sophisticated quantitative methods and similar approaches to measure the performance of videos. This trend has led to the emergence of a strong learning analytics research tradition with its own embedded literature, including analyses of the performance of videos in online courses such as Massive Open Online Courses (MOOCs). Surprisingly, this same literature says little about approaches to designing and creating educational videos, for both video-based learning and online courses. This knowledge gap highlights the need for pedagogical tools and strategies for video making, such as frameworks, guidelines, and taxonomies that can serve as scaffolding strategies. Apart from a few well-known frameworks, however, very few are available for designing and creating videos for pedagogical purposes. In this regard, this research paper proposes a novel taxonomy of video genres that educators can utilize when creating videos intended for use in either video-based learning environments or online courses. To create this taxonomy, a large number of videos from online courses were collected and analyzed using a mixed-methods research design.
Many complex systems that we encounter in the world can be formalized as networks. Consequently, they have been a focus of computer science for decades, where algorithms are developed to understand and utilize these systems.
Surprisingly, our theoretical understanding of these algorithms and their behavior in practice often diverge significantly. In fact, they tend to perform much better on real-world networks than one would expect when considering the theoretical worst-case bounds. One way of capturing this discrepancy is the average-case analysis, where the idea is to acknowledge the differences between practical and worst-case instances by focusing on networks whose properties match those of real graphs. Recent observations indicate that good representations of real-world networks are obtained by assuming that a network has an underlying hyperbolic geometry.
In this thesis, we demonstrate that the connection between networks and hyperbolic space can be utilized as a powerful tool for average-case analysis. To this end, we first introduce strongly hyperbolic unit disk graphs and identify the famous hyperbolic random graph model as a special case of them. We then consider four problems where recent empirical results highlight a gap between theory and practice and use hyperbolic graph models to explain these phenomena theoretically. First, we develop a routing scheme, used to forward information in a network, and analyze its efficiency on strongly hyperbolic unit disk graphs. For the special case of hyperbolic random graphs, our algorithm beats existing performance lower bounds. Afterwards, we use the hyperbolic random graph model to theoretically explain empirical observations about the performance of the bidirectional breadth-first search. Finally, we develop algorithms for computing optimal and nearly optimal vertex covers (problems known to be NP-hard) and show that, on hyperbolic random graphs, they run in polynomial and quasi-linear time, respectively.
Our theoretical analyses reveal interesting properties of hyperbolic random graphs, and our empirical studies present evidence that these properties, as well as our algorithmic improvements, translate back into practice.
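The greedy routing idea underlying the routing scheme can be sketched on hyperbolic polar coordinates: each node forwards a packet to the neighbor whose coordinates lie closest to the destination. The three-node graph and its coordinates below are invented for illustration; the actual scheme in the thesis is more elaborate and guarantees delivery on strongly hyperbolic unit disk graphs.

```python
import math

def hyp_dist(p, q):
    """Hyperbolic distance between two points given in polar coordinates (r, theta)."""
    (r1, t1), (r2, t2) = p, q
    dtheta = math.pi - abs(math.pi - abs(t1 - t2) % (2 * math.pi))
    x = (math.cosh(r1) * math.cosh(r2)
         - math.sinh(r1) * math.sinh(r2) * math.cos(dtheta))
    return math.acosh(max(x, 1.0))  # clamp guards against rounding below 1

def greedy_route(graph, coords, src, dst):
    """Forward to the neighbor closest (hyperbolically) to the destination.
    Returns the path, or None when greedy forwarding gets stuck."""
    path = [src]
    while path[-1] != dst:
        here = path[-1]
        nxt = min(graph[here], key=lambda v: hyp_dist(coords[v], coords[dst]))
        if hyp_dist(coords[nxt], coords[dst]) >= hyp_dist(coords[here], coords[dst]):
            return None  # local minimum: no neighbor makes progress
        path.append(nxt)
    return path

# Invented toy instance: a hub "h" near the origin connecting "s" and "t".
coords = {"s": (2.0, 0.0), "h": (0.5, 0.5), "t": (2.0, 1.0)}
graph = {"s": ["h"], "h": ["s", "t"], "t": ["h"]}
print(greedy_route(graph, coords, "s", "t"))  # ['s', 'h', 't']
```

The toy also shows why hyperbolic geometry helps: low-radius nodes act as hubs that are hyperbolically close to everything, so greedy steps through them make progress.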
In the present thesis, AC electrokinetic forces, like dielectrophoresis and AC electroosmosis, were demonstrated as a simple and fast method to functionalize the surface of nanoelectrodes with submicrometer-sized biological objects. These nanoelectrodes have a cylindrical shape with a diameter of 500 nm and are arranged in an array of 6256 electrodes. Due to their medical relevance, influenza viruses as well as anti-influenza antibodies were chosen as a model system. Common methods to bring antibodies or proteins to biosensor surfaces are complex and time-consuming. In the present work, it was demonstrated that by applying AC electric fields, influenza viruses and antibodies can be immobilized onto the nanoelectrodes within seconds, without any prior chemical modification of either the surface or the immobilized biological object. The distribution of these immobilized objects is not uniform over the entire array; it exhibits a decreasing gradient from the outer row to the inner ones. Different causes for this gradient are discussed, such as the vortex-shaped fluid motion above the nanoelectrodes generated by, among others, electrothermal fluid flow. It was demonstrated that parts of the accumulated material are permanently immobilized on the electrodes. This is a unique characteristic of the presented system, since in the literature AC electrokinetic immobilization is almost entirely presented as a method for temporary immobilization only. The spatial distribution of the immobilized viral material or anti-influenza antibodies at the electrodes was observed either by the combination of fluorescence microscopy and deconvolution or by super-resolution microscopy (STED). On-chip immunoassays were performed to examine the suitability of the functionalized electrodes as a potential affinity-based biosensor. Two approaches were pursued: (A) the influenza virus as the bio-receptor or (B) the influenza virus as the analyte.
Different sources of error were eliminated by ELISA and passivation experiments. The activity of the immobilized object was then inspected by incubation with the analyte. This resulted in the successful detection of anti-influenza antibodies by the immobilized viral material. A detection of influenza virus particles by the immobilized anti-influenza antibodies, on the other hand, was not possible, which might be due to lost activity or wrong orientation of the antibodies. Thus, further examination of the activity of antibodies immobilized by AC electric fields should follow. When combined with microfluidics and an electrical read-out system, the functionalized chips have the potential to serve as a rapid, portable, and cost-effective point-of-care (POC) device, which could form the basis for diverse applications in diagnosing and treating influenza as well as various other pathogens.
Background and aims:
To succeed in competition, elite team and individual athletes often seek to develop both high levels of muscle strength and power and high cardiorespiratory endurance. In this context, concurrent training (CT) is a commonly applied and effective training approach. Despite being exposed to high training loads, youth athletes (≤ 18 years) are as yet underrepresented in the scientific literature, and immunological responses to CT have received little attention. Therefore, the aims of this work were to examine the acute (< 15 min) and delayed (≥ 6 hours) effects of different exercise orders in CT on immunological stress responses, muscular fitness, metabolic responses, and ratings of perceived exertion (RPE) in highly trained youth male and female judo athletes.
Methods:
A total of twenty male and thirteen female participants, with average ages of 16 ± 1.8 years and 14.4 ± 2.1 years, respectively, were included in the study. They were randomly assigned to two CT sessions: power-endurance versus endurance-power (study 1), or strength-endurance versus endurance-strength (study 2). Markers of immune response (i.e., white blood cells, granulocytes, lymphocytes, monocytes, granulocyte-lymphocyte ratio, and systemic inflammation index), muscular fitness (i.e., counter-movement jump [CMJ]), metabolic responses (i.e., blood lactate, glucose), and RPE were collected at different time points (i.e., PRE12H, PRE, MID, POST, POST6H, POST22H).
Results (study 1):
There were significant time*order interactions for white blood cells, lymphocytes, granulocytes, monocytes, granulocyte-lymphocyte ratio, and systemic inflammation index. The power-endurance order resulted in significantly larger PRE-to-POST increases in white blood cells, monocytes, and lymphocytes, while the endurance-power order resulted in significantly larger PRE-to-POST increases in the granulocyte-lymphocyte ratio and systemic inflammation index. Likewise, significantly larger PRE-to-POST6H increases in white blood cells and granulocytes were observed following the power-endurance order compared to endurance-power. All markers of immune response returned toward baseline values at POST22H. Moreover, there was a significant time*order interaction for blood glucose and lactate. Following the endurance-power order, blood lactate and glucose increased from PRE-to-MID but not from PRE-to-POST; in the power-endurance order, blood lactate and glucose increased from PRE-to-POST but not from PRE-to-MID. A significant time*order interaction was observed for CMJ force, with larger PRE-to-POST decreases in the endurance-power order compared to the power-endurance order. Further, CMJ power showed larger PRE-to-MID performance decreases following the power-endurance order compared to the endurance-power order. Regarding RPE, significant time*order interactions were noted, with larger PRE-to-MID values following the endurance-power order and larger PRE-to-POST values following the power-endurance order.
Results (study 2):
There were significant time*order interactions for lymphocytes, monocytes, granulocyte-lymphocyte ratio, and systemic inflammation index. The strength-endurance order resulted in significantly larger PRE-to-POST increases in lymphocytes, while the endurance-strength order resulted in significantly larger PRE-to-POST increases in the granulocyte-lymphocyte ratio and systemic inflammation index. All markers of the immune system returned toward baseline values at POST22H. Moreover, there was a significant time*order interaction for blood glucose and lactate. From PRE-to-MID, there was a significantly greater increase in blood lactate and glucose following the endurance-strength order compared to the strength-endurance order, whereas from PRE-to-POST there was a significantly higher increase in blood glucose following the strength-endurance order compared to the endurance-strength order. Regarding physical fitness, a significant time*order interaction was observed for CMJ force and CMJ power, with larger PRE-to-MID increases following the endurance-strength order compared to the strength-endurance order. For RPE, significant time*order interactions were noted, with larger PRE-to-MID values following the endurance-strength order and larger PRE-to-POST values following the strength-endurance order.
Conclusions:
The primary findings from both studies revealed order-dependent effects on immune responses. In male youth judo athletes, the results demonstrated greater immunological stress responses, both acutely (≤ 15 min) and delayed (≥ 6 hours), following the power-endurance order compared to the endurance-power order. For female youth judo athletes, the results indicated higher acute, but not delayed, order-dependent changes in immune responses following the strength-endurance order compared to the endurance-strength order. It is worth noting that in both studies, all markers of immune system response returned to baseline levels within 22 hours, suggesting successful recovery from the exercise-induced immune stress response within this period. Regarding metabolic responses, physical fitness, and perceived exertion, the findings from both studies indicated acute (≤ 15 min) alterations that were dependent on the exercise order and primarily influenced by the endurance exercise component. Moreover, study 1 provided substantial evidence that internal load measures, such as immune markers, may differ from external load measures, indicating a disparity between immunological, perceived, and physical responses following both concurrent training orders. Therefore, it is crucial for practitioners to acknowledge these differences and take them into consideration when designing training programs.
The electrical resistivity tomography (ERT) method is widely used to investigate geological, geotechnical, and hydrogeological problems in inland and aquatic environments (i.e., lakes, rivers, and seas). The objective of the ERT method is to obtain reliable resistivity models of the subsurface that can be interpreted in terms of the subsurface structure and petrophysical properties. The reliability of the resulting resistivity models depends not only on the quality of the acquired data, but also on the employed inversion strategy. Inversion of ERT data results in multiple solutions that explain the measured data equally well. Typical inversion approaches rely on different deterministic (local) strategies that consider different smoothing and damping strategies to stabilize the inversion. However, such strategies suffer from the trade-off of smearing possible sharp subsurface interfaces separating layers with resistivity contrasts of up to several orders of magnitude. When prior information (e.g., from outcrops, boreholes, or other geophysical surveys) suggests sharp resistivity variations, it might be advantageous to adapt the parameterization and inversion strategies to obtain more stable and geologically reliable model solutions. Adaptations to traditional local inversions, for example, by using different structural and/or geostatistical constraints, may help to retrieve sharper model solutions. In addition, layer-based model parameterization in combination with local or global inversion approaches can be used to obtain models with sharp boundaries.
In this thesis, I study three typical layered near-surface environments in which prior information is used to adapt 2D inversion strategies to favor layered model solutions. In cooperation with the coauthors of Chapters 2-4, I consider two general strategies. Our first approach uses a layer-based model parameterization and a well-established global inversion strategy to generate ensembles of model solutions and assess uncertainties related to the non-uniqueness of the inverse problem. We apply this method to invert ERT data sets collected in an inland coastal area of northern France (Chapter 2) and offshore of two Arctic regions (Chapter 3). Our second approach consists of using geostatistical regularizations with different correlation lengths. We apply this strategy to a more complex subsurface scenario on a local intermountain alluvial fan in southwestern Germany (Chapter 4). Overall, our inversion approaches allow us to obtain resistivity models that agree with the general geological understanding of the studied field sites. These strategies are rather general and can be applied to various geological environments where a layered subsurface structure is expected. The flexibility of our strategies allows adaptations to invert other kinds of geophysical data sets, such as from seismic refraction or electromagnetic induction methods, and could be considered for joint inversion approaches.
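The trade-off between smoothing regularization and sharp interfaces can be illustrated with a minimal linear toy inversion. The real ERT problem is nonlinear and the operator below is a random stand-in, but the effect of the damping weight on a two-layer resistivity contrast is the same in spirit: strong smoothing stabilizes the inversion at the cost of smearing the interface.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear forward operator standing in for the (nonlinear) ERT problem.
n = 30
G = rng.normal(size=(40, n))
m_true = np.where(np.arange(n) < 15, 1.0, 3.0)  # sharp two-layer model
d = G @ m_true + rng.normal(scale=0.05, size=40)

# First-difference roughness operator: penalizes jumps between neighbors.
L = np.diff(np.eye(n), axis=0)

def invert(lam):
    """Smoothness-constrained least squares: min ||Gm - d||^2 + lam ||Lm||^2."""
    return np.linalg.solve(G.T @ G + lam * L.T @ L, G.T @ d)

# Weak smoothing preserves the sharp interface; strong smoothing smears it.
for lam in (0.01, 100.0):
    m = invert(lam)
    print(f"lambda={lam}: jump at interface = {abs(m[14] - m[15]):.2f}")
```

Layer-based parameterizations and geostatistical constraints, as used in the thesis, are two ways to avoid paying this smearing penalty when prior information supports a sharp boundary.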
Advances in hydrogravimetry
(2023)
The interest of the hydrological community in the gravimetric method has steadily increased over the last decade. This is reflected by numerous studies from many different groups with a broad range of approaches and foci. Many of these are traditionally rather hydrology-oriented groups who recognized gravimetry as a potential added value for their hydrological investigations. While this resulted in a variety of interesting and useful findings, extending the respective knowledge and confirming the methodological potential, it also raised many interesting and unresolved questions.
This thesis presents efforts, analyses, and solutions carried out in this regard. Addressing and evaluating many of those unresolved questions, the research advances hydrogravimetry, the combination of gravimetric and hydrological methods, and shows that gravimeters are a highly useful tool for applied hydrological field research.
In the first part of the thesis, traditional setups of stationary terrestrial superconducting gravimeters are addressed. They are commonly installed within a dedicated building, whose impermeable structure shields the underlying soil from natural exchange of water masses (infiltration, evapotranspiration, groundwater recharge). As gravimeters are most sensitive to mass changes directly beneath the meter, this could impede their suitability for local hydrological process investigations, especially for near-surface water storage changes (WSC). By studying temporal local hydrological dynamics at a dedicated site equipped with traditional hydrological measurement devices, both below and next to the building, the impact of these absent natural dynamics on the gravity observations was quantified. A comprehensive analysis with both data-based and model-based approaches led to the development of an alternative method for dealing with this limitation. Based on determinable parameters, this approach can be transferred to a broad range of measurement sites where gravimeters are deployed in similar structures. Furthermore, the extensive considerations on this topic enabled a more profound understanding of this so-called umbrella effect.
The second part of the thesis is a pilot study on the field deployment of a superconducting gravimeter. A newly developed field enclosure for this gravimeter was tested in an outdoor installation adjacent to the building used to investigate the umbrella effect. Analyzing and comparing the gravity observations from the indoor and outdoor gravimeters showed that performance with respect to noise and stable environmental conditions was equivalent, while the sensitivity to near-surface WSC was much higher for the field-deployed instrument. Furthermore, it was demonstrated that the outdoor setup registered gravity changes independent of the depth at which mass changes occurred, given a sufficiently wide horizontal extent. As a consequence, the field setup is much better suited to monitoring WSC over both short and long time periods. Based on a coupled data-modeling approach, its gravity time series was successfully used to infer and quantify local water budget components (evapotranspiration, lateral subsurface discharge) on daily to annual time scales.
The third part of the thesis applies data from a gravimeter field deployment to applied hydrological process investigations. To this end, again at the same site, a sprinkling experiment was conducted in a 15 x 15 m area around the gravimeter. A simple hydro-gravimetric model was developed for calculating the gravity response resulting from water redistribution in the subsurface. It was found that, from a theoretical point of view, different subsurface water distribution processes (macropore flow, preferential flow, wetting front advancement, bypass flow, and perched water table rise) each lead to a characteristic shape of the resulting gravity response curve. Although this approach made it possible to identify a dominant subsurface water distribution process for this site, some clear limitations stood out. Despite the advantage for field installations that gravimetry is a non-invasive and integral method, the problem of non-uniqueness could only be overcome by additional measurements (soil moisture, electrical resistivity tomography) within a joint evaluation. Furthermore, the simple hydrological model was efficient for theoretical considerations but lacked the capability to resolve some heterogeneous spatial structures of water distribution at the required scale. Nevertheless, this unique setup for plot- to small-scale hydrological process research underlines the high potential of gravimetry and the benefit of a field deployment.
The fourth and last part is dedicated to the evaluation of potential uncertainties arising from the processing of gravity observations. The gravimeter senses all mass variations in an integral way, with the gravitational attraction being directly proportional to the magnitude of the change and inversely proportional to the square of its distance from the sensor. Consequently, all gravity effects (for example, tides, atmosphere, non-tidal ocean loading, polar motion, global hydrology and local hydrology) are included in an aggregated manner. To isolate the signal components of interest for a particular investigation, all undesired effects have to be removed from the observations; this process is called reduction. The large-scale effects (tides, atmosphere, non-tidal ocean loading and global hydrology) cannot be measured directly, so global model data are used to describe and quantify each effect. Within the reduction process, model errors and uncertainties propagate into the residual, the result of the reduction. The focus of this part of the thesis is quantifying the resulting, propagated uncertainty of each individual correction. Different superconducting gravimeter installations were evaluated with respect to their topography, distance to the ocean and climate regime. Furthermore, different aggregation periods of the gravity observation data were assessed, ranging from 1 hour up to 12 months. It was found that uncertainties were highest for a period of 6 months and smallest for hourly periods. Distance to the ocean influences the uncertainty of the non-tidal ocean loading component, while geographical latitude affects uncertainties of the global hydrological component. It is important to highlight that the resulting correction-induced uncertainties in the residual have the potential to mask the signal of interest, depending on the signal's magnitude and frequency.
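The inverse-square scaling described above can be made concrete with a point-mass approximation (a simplified illustration, not the reduction procedure used in the thesis): halving the distance to a local mass change quadruples its gravity effect, which is why local hydrology dominates the residual once large-scale effects are removed.

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def point_mass_ugal(delta_mass_kg, distance_m):
    """Gravity effect (microGal) of a point mass change at a given distance.

    delta_g = G * delta_m / r^2; the 1/r^2 term makes nearby (local
    hydrological) mass changes far more visible than distant ones.
    """
    return G * delta_mass_kg / distance_m ** 2 * 1e8

# the same 1000 kg change seen from 10 m vs 20 m: a factor-of-4 difference
near = point_mass_ugal(1000, 10)
far = point_mass_ugal(1000, 20)
```

The same reasoning explains why model errors in the distant, large-scale corrections still matter: their individual contributions are small, but they are summed over the whole Earth.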
These findings can be used to assess the value of gravity data across a range of applications and geographic settings.
In an overarching synthesis, all results and findings are discussed with a general focus on their added value for taking hydrogravimetric field research to a new level. The conceptual and applied methodological benefits for hydrological studies are highlighted. An outlook on future setups and study designs once more demonstrates the enormous potential of gravimeters as hydrological field tools.
The field of exercise psychology has established robust evidence on the health benefits of physical activity. However, interventions to promote sustained exercise behavior have often proven ineffective. This dissertation addresses challenges in the field, particularly the neglect of situated and affective processes in understanding and changing exercise behavior. Dual process models, considering both rational and affective processes, have gained recognition. The Affective Reflective Theory of Physical Inactivity and Exercise (ART) is a notable model in this context, positing that situated processes in-the-moment of choice influence exercise decisions and subsequent exercise behavior.
The dissertation identifies current challenges within exercise psychology and proposes methodological and theoretical advancements. It emphasizes the importance of momentary affective states and situated processes, offering alternatives to self-reported measures and advocating for a more comprehensive modeling of individual variability. The focus is on the affective processes during exercise, theorized to reappear in momentary decision-making, shaping overall exercise behavior.
The first publication introduces a new method, automated facial action analysis, to measure variable affective responses during exercise. It explores how these behavioral indicators covary with self-reported measures of affective valence and perceived exertion. The second publication delves into situated processes at the moment of choice between exercise and non-exercise options, revealing that intraindividual factors play a crucial role in explaining exercise-related choices. The third publication presents an open-source research tool, the Decisional Preferences in Exercising Test (DPEX), designed to capture repeated situated decisions and to predict exercise behavior based on past experiences.
The findings challenge previous assumptions and provide insights into the complex interplay of affective responses, situated processes, and exercise choices. The dissertation underscores the need for individualized interventions that manipulate affective responses during exercise and calls for systematic testing to establish causal links to automatic affective processes and subsequent exercise behavior. This dissertation highlights the necessity for methodological and conceptual refinements in understanding and promoting exercise behavior, ultimately contributing to the broader goal of combating increasing inactivity trends.
This short paper sets out to propose a novel and interesting learning design that facilitates cooperative learning in which students do not conduct traditional group work in an asynchronous online education setting. This learning design will be explored in a Small Private Online Course (SPOC) among teachers and school managers at a teacher education institution. Such an approach can be made possible by applying specific criteria commonly used to define collaborative learning. Collaboration can be defined, among other things, as a structured way of working among students that includes elements of co-laboring. The cooperative learning design involves adapting various traditional collaborative learning approaches for use in an online learning environment. A critical component of this learning design is that students work on a self-defined case project related to their professional practices. Through an iterative process, students will receive ongoing feedback and formative assessments from instructors and fellow students at specific points, meaning that the co-construction of knowledge and learning takes place as the SPOC progresses. This learning design can contribute to better learning experiences and outcomes for students, and be a valuable contribution to current research discussions on learning design in Massive Open Online Courses (MOOCs).
Loss of expertise in the fields of Nuclear and Radiochemistry (NRC) is problematic at a scientific and social level. This has been addressed by developing a MOOC to let science students discover all the benefits of NRC to society and to improve their awareness of this discipline. The MOOC "Essential Radiochemistry for Society" covers current societal challenges related to health, clean and sustainable energy, and the safety and quality of food and agriculture.
NRC teachers belonging to the CINCH network were invited to use the MOOC in their teaching according to various usage models. On the basis of these different experiences, several usage patterns were designed, describing context characteristics (number and age of students, course), the scheduling and organization of activities, results and students' feedback, with the aim of encouraging the use of MOOCs in university teaching as an opportunity for both lecturers and students. These models formed the basis of a "toolkit for teachers". By experiencing digital teaching resources created by different lecturers, CINCH teachers took a first meaningful step towards understanding the worth of Open Educational Resources (OER) and the importance of their creation, adoption and sharing for the progress of knowledge. In this paper, the entire path from the MOOC concept through its different usage models to awareness-raising regarding OER is traced in conceptual stages.
An exploration of activity and therapist preferences and their predictors in German-speaking samples
(2023)
According to current definitions of evidence-based practice, patients' preferences play an important role in the psychotherapeutic process and its outcomes. However, whereas a significant body of research has investigated preferences regarding specific treatments, research on preferred activities or therapist characteristics is rare, has investigated heterogeneous aspects with inconclusive results, has lacked validated assessment tools, and has neglected relevant preferences, their predictors, and the perspective of mental health professionals. Therefore, the three studies of this dissertation aimed to address the most fundamental drawbacks of current preference research by providing a validated questionnaire, focusing on activity and therapist preferences, and adding the preferences of psychotherapy trainees. To this end, Paper I reports the translation and validation of the 18-item Cooper-Norcross Inventory of Preferences (C-NIP) in a broad, heterogeneous sample of N = 969 laypeople, resulting in good to acceptable reliabilities and first evidence of validity; however, the original factor structure was not replicated. Paper II assesses the activity preferences of psychotherapists in training using the C-NIP and compares them with the initial laypeople sample. There were significant differences between the samples, with trainees preferring a more patient-directed, emotionally intense and confrontational approach than laypeople. CBT trainees preferred a more therapist-directed, present-focused, challenging and less emotionally intense approach than psychodynamic or psychoanalytic trainees. Paper III explores therapist preferences and tests predictors of specific preference choices. For most characteristics, more than half of the participants did not have specific preferences. Results pointed towards congruency effects (i.e., a preference for similar characteristics), especially for members of marginalized groups.
The dissertation provides both researchers and practitioners with a validated questionnaire, shows potentially obstructive differences between patients and therapists and underlines the importance of therapist characteristics for marginalized groups, thereby laying the foundation for future applications and implementations in research and practice.
Anchored in ink
(2023)
This book serves as a gateway to the Elementa grammaticae Huronicae, an eighteenth-century grammar of the Wendat (‘Huron’) language by Jesuit Pierre-Philippe Potier (1708–1781). The volume falls into three main parts. The first part introduces the grammar and some of its contexts, offering information about the Huron-Wendat and Wyandot, the early modern Jesuit mission in New France and the Jesuits’ linguistic output. The heart of the volume is made up by its second part, a text edition of the Elementa. The third part presents some avenues of research by way of specific case studies.
Aquatic ecosystems are frequently overlooked as fungal habitats, although there is increasing evidence that their diversity and ecological importance are greater than previously considered. Aquatic fungi are critical and abundant components of nutrient cycling and food web dynamics, e.g., exerting top-down control on phytoplankton communities and forming symbioses with many marine microorganisms. However, their relevance for microphytobenthic communities is almost unexplored. In the light of global warming, polar regions face extreme changes in abiotic factors, with a severe impact on biodiversity and ecosystem functioning. Therefore, this study aimed to describe, for the first time, fungal diversity in Antarctic benthic habitats along the salinity gradient and to determine the co-occurrence of fungal parasites with their algal hosts, which were dominated by benthic diatoms. Our results reveal that Ascomycota and Chytridiomycota are the most abundant fungal taxa in these habitats. We show that, also in Antarctic waters, salinity has a major impact on shaping not just the fungal but the whole eukaryotic community composition, with the diversity of aquatic fungi increasing as salinity decreases. Moreover, we determined correlations between putative fungal parasites and potential benthic diatom hosts, highlighting the need for further systematic analysis of fungal diversity, along with studies on the taxonomy and ecological roles of Chytridiomycota.
A degree course in IT and business administration solely for women (FIW) has been offered since 2009 at the HTW Berlin – University of Applied Sciences. This contribution discusses student motivations for enrolling in such a women-only degree course and gives details of our experience over recent years. In particular, the approach to attracting new female students is described and the composition of the intake is discussed. It is shown that the women-only setting, together with other factors, can attract a new clientele for computer science.
This research investigated the relationship between frequent engagement in industrial action (also known as ‘employee strikes’) and the internal attractiveness of government employment. It focused on a special group of public employees: public university lecturers and public-school teachers in Uganda who frequently engaged in industrial action. At the very basic level, the research explored whether public employees frequently engaged in industrial action because they considered public service employment to be unattractive or whether frequent engagement in industrial action was in fact part of the attractiveness of government employment. Beyond exploring these relationships, it also explained why (or why not) such relationships existed.
Methodologically, the research was conducted using an exploratory sequential design – a mixed-methods study design that starts with a qualitative phase followed by a quantitative one. The results of the initial qualitative phase determined the direction of the subsequent quantitative phase. The qualitative phase started with an exploration of the relationship between industrial action and internal public service attractiveness, resulting in two specific research questions:
1) Why do public employees engage in industrial action and what role does frequent engagement in industrial action play in their perception of public service attractiveness?
2) Why and how is organizational justice related to public employees’ perception of public service attractiveness?
The above questions were answered both qualitatively and quantitatively. The theoretical postulations of the Social Movements Theories, Social Exchange Theory, and the Signaling Theory were used to structure the research assumptions and hypotheses.
The results showed that public employees engaged in industrial action mostly because of relative, rather than absolute, deprivation. An established culture of workplace militancy was also found to be key in actualizing industrial action, as was the (perceived) absence of alternatives for achieving workplace justice. Importantly, there was a clear dichotomy between absolute working conditions and frequent engagement in industrial action. Frequent engagement in industrial action was itself found to have both positive and negative effects on internal public service attractiveness. It was also found that public service attractiveness from the perspective of current public employees may differ from its attractiveness to prospective employees: current public employees do not have to imagine what it feels like to work for government, but mostly use their day-to-day lived experiences to judge the attractiveness of their employer. The existing literature is particularly deficient in analyzing public service attractiveness from an internal perspective, which is surprising given the public sector's high reliance on internal recruitment.
The research results underlined key implications for theory, practice, and research. At theory level, the results suggested that public employee ratings of internal public service attractiveness were heavily affected by halo effects and should therefore not be taken at face value. The complex workplace social exchanges which are deeply rooted in organizational justice and the ‘personification metaphor’ were also emphasized. From an empirical perspective, the results underlined the need to prioritize internal public service attractiveness as recent research has confirmed the value of family socialization and internal recommendations in making public sector employment attractive, even to external applicants. This research argues that the centrality of organizational justice in public sector employee relations requires public sector organizations to be intentional in their bid to create fair, just, and attractive workplaces. Beyond assessing the fairness of personnel policies, procedures, and interactional relationships, it is also important to prepare and equip public managers with the right skills to promote and practice justice in their day-to-day interactions with public employees, and to encourage, improve, and facilitate alternative public employee feedback mechanisms.
Point processes are a common methodology to model sets of events. From earthquakes to social media posts, from the arrival times of neuronal spikes to the timing of crimes, from stock prices to disease spreading: these phenomena can be reduced to occurrences of events concentrated in points. Often, these events happen one after the other, defining a time series.
Models of point processes can be used to deepen our understanding of such events and for classification and prediction. Such models include an underlying random process that generates the events. This work uses Bayesian methodology to infer the underlying generative process from observed data. Our contribution is twofold: we develop new models and new inference methods for these processes.
We propose a model that extends the family of point processes in which the occurrence of an event depends on the previous events. This family is known as Hawkes processes. Whereas most existing models of such processes assume that past events have only an excitatory effect on future events, we focus on the newly developed nonlinear Hawkes process, in which past events can have both excitatory and inhibitory effects. After defining the model, we present its inference method and apply it to data from different fields, among others to neuronal activity.
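To make the excitatory/inhibitory distinction concrete, here is a minimal discrete-time simulation of a nonlinear Hawkes process with an exponential kernel (an illustrative sketch only; the thesis's actual model and inference method may differ). A negative kernel weight `alpha` makes past events suppress future ones, and the rectifying link keeps the intensity non-negative:

```python
import math
import random

def simulate_nonlinear_hawkes(mu, alpha, beta, T, dt=1e-3, seed=0):
    """Discrete-time approximation of a nonlinear Hawkes process.

    Conditional intensity: lambda(t) = max(0, mu + sum_i alpha*exp(-beta*(t - t_i))).
    alpha > 0: past events excite future ones (classical linear Hawkes);
    alpha < 0: past events inhibit future ones (nonlinear Hawkes), with
    the max(0, .) link function keeping the intensity non-negative.
    """
    rng = random.Random(seed)
    events, excitation, t = [], 0.0, 0.0
    while t < T:
        lam = max(0.0, mu + excitation)
        if rng.random() < lam * dt:  # P(event in [t, t+dt)) ~ lambda(t)*dt
            events.append(t)
            excitation += alpha
        excitation *= math.exp(-beta * dt)  # influence of past events decays
        t += dt
    return events

excited = simulate_nonlinear_hawkes(mu=1.0, alpha=0.8, beta=1.0, T=200.0)
inhibited = simulate_nonlinear_hawkes(mu=1.0, alpha=-0.8, beta=1.0, T=200.0)
```

With the same baseline rate, the inhibitory variant typically produces far fewer events than the excitatory one, which is exactly the qualitative behavior the nonlinear model adds over the classical Hawkes process.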
The second model described in the thesis concerns a specific instance of point processes: the decision process underlying human gaze control. This process results in a series of fixated locations in an image. We developed a new model to describe this process, motivated by the well-known exploration-exploitation dilemma. Alongside the model, we present a Bayesian inference algorithm to infer the model parameters.
Remaining in the realm of human scene viewing, we identify the lack of best practices for Bayesian inference in this field. We survey four popular algorithms and compare their performances for parameter inference in two scan path models.
The novel models and inference algorithms presented in this dissertation enrich the understanding of point process data and allow us to uncover meaningful insights.
Climate change of anthropogenic origin is affecting Earth's biodiversity and therefore ecosystems and their services. High-latitude ecosystems are even more strongly impacted than the rest of the Northern Hemisphere because of amplified polar warming. Still, it is challenging to predict the dynamics of high-latitude ecosystems because of the complex interactions between abiotic and biotic components. As the past is a key to the future, past ecological changes can be interpreted to better understand ongoing processes. Within the Quaternary, the Pleistocene experienced several glacial and interglacial stages that affected past ecosystems. During the last glacial, the Pleistocene steppe-tundra covered most of the unglaciated Northern Hemisphere and disappeared in parallel with the megafauna's extinction at the transition to the Holocene (~11,700 years ago). The origin of the steppe-tundra's decline is not well understood, and knowledge of the mechanisms that caused shifts in past communities and ecosystems is of high priority, as they are likely comparable to those affecting modern ecosystems. Lake or permafrost sediment cores can be retrieved to investigate past biodiversity at transitions between glacial and interglacial stages. Siberia and Beringia were the origin of the steppe-tundra's dispersal, which makes investigating this area a high priority. Until recently, macrofossils and pollen were the most common approaches; they are designed to reconstruct past compositional changes but have limits and biases. Since the end of the 20th century, sedimentary ancient DNA (sedaDNA) can also be investigated. My main objective was to use sedaDNA approaches to provide scientific evidence of compositional and diversity changes in Northern Hemisphere ecosystems at the transitions between Quaternary glacial and interglacial stages.
In this thesis, I provide snapshots of entire ancient ecosystems, describe compositional changes between Quaternary glacial and interglacial stages, and confirm the vegetation composition and the spatial and temporal boundaries of the Pleistocene steppe-tundra. I identify a general loss of plant diversity, with extinction events happening in parallel with the megafauna's extinction. I demonstrate how loss of biotic resilience led to the collapse of a previously well-established system and discuss my results with regard to the ongoing climate change. With further work to constrain its biases and limits, sedaDNA can be used in parallel with, or even replace, the more established macrofossil and pollen approaches, as my results support the robustness and potential of sedaDNA to answer new palaeoecological questions, such as changes in and loss of plant diversity, and to provide snapshots of entire ancient biota.
Biogeochemical analyses of lacustrine environments are well-established methods that allow exploring and understanding the complex workings of lake ecosystems. However, most such studies were conducted in temperate lakes, which are controlled by entirely different physical conditions than lakes in tropical climates. The most important difference is the lack of seasonal temperature fluctuations in tropical lakes, which leads to a stable temperature gradient in the water column. Thus, the water column in tropical latitudes is generally void of the perturbations seen in temperate counterparts. Permanent stratification of the water column provides optimal conditions for intact sedimentation. The geochemical processes in the water column and the weathering of the distinct lithologies in the catchment lead to different biogeochemical characteristics in the sediment. Conducting a biogeochemical study of such lake sediments, especially of the sediment-water interface (SWI), helps reveal records of sedimentation and diagenetic processes influenced by internal or external loading. Lake Sentani, the study area, is one of the thousands of lakes in Indonesia and is located in the Papua province. This tropical lake has a unique feature, as it consists of four interconnected sub-basins with different water depths. More importantly, its catchment comprises various lithologies; its lithological characteristics are highly diverse, ranging from mafic and ultramafic rocks to clastic sediment and carbonates, and each sub-basin receives a distinct sediment input. Equally important, besides natural loading, Lake Sentani is also influenced by anthropogenic input: previous studies have shown an increase in population growth around the lake, with direct consequences for eutrophication.
Considering these factors, the government of the Republic of Indonesia put Lake Sentani on the list of national priority lakes for restoration. This thesis aims to develop a fundamental understanding of Lake Sentani's sedimentary geochemistry and geomicrobiology, with a special focus on the effects of the different lithologies and anthropogenic pressures in the catchment area. We conducted geochemical and geomicrobiological research on Lake Sentani to meet this objective, investigating the geochemical characteristics of the water column, porewater, and sediment cores of the four sub-basins. In addition to direct investigations of the lake itself, we also studied the sediments of the tributary rivers, some of which are ephemeral, as well as the river mouths, as connections between the riverine and lacustrine habitats. The thesis is composed of three main publications about Lake Sentani, supported by several publications that focus on other tropical lakes in Indonesia. The first main publication investigates the geochemical characterization of the water column, porewater, and surface sediment (upper 40-50 cm) from the center of each of the four sub-basins. It reveals that, besides catchment lithology, the water column heavily influences the geochemical characteristics of the lake sediments and their porewater. The findings indicate that water column stratification has a strong influence on the overall chemistry. The four sub-basins are very different with regard to their water column chemistry. Based on the physicochemical profiles, especially dissolved oxygen, one sub-basin is oxygenated, one is intermediate (i.e., it just reaches oxygen depletion at the sediment-water interface), and two sub-basins are fully meromictic. However, all four sub-basins share the same surface water chemistry. The structure of the water column creates differences in the patterns of anions and cations in the porewater.
Likewise, the distinct differences in geochemical composition between the sub-basins show that the lithology of the catchment affects the geochemical characteristics of the sediment. Overall, water column stratification, and particularly bottom water oxygenation, strongly influences the overall elemental composition of the sediment and the porewater composition. The second publication reveals differences in surface sediment composition between habitats, influenced by lithological variations in the catchment area. The macro-element distribution shows that the geochemical characteristics differ between habitats. Furthermore, the geochemical composition also indicates a distinct distribution between the sub-basins. The geochemical composition of the eastern sub-basin suggests that lithogenic elements are more dominant than authigenic elements, which is also supported by sulfide speciation, particle distribution, and smear slide data. The third publication is a geomicrobiological study of the surface sediment, comparing its geochemical composition with its microbiological composition. Next Generation Sequencing (NGS) of the 16S rRNA gene was applied to determine the microbial community composition of the surface sediment from a great number of locations. We used a large number of sampling sites in all four sub-basins as well as in the rivers and river mouths to illustrate the links between river, river mouth, and lake. Rigorous assessment of microbial communities across the diverse Lake Sentani habitats allowed us to study some of these links and report novel findings on microbial patterns in such ecosystems. The main result of the Principal Coordinates Analysis (PCoA) based on microbial community composition highlighted some commonalities but also differences between the microbial community analysis and the geochemical data.
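Classical PCoA of the kind mentioned above works by double-centering a matrix of pairwise community dissimilarities and taking its leading eigenvectors. The following minimal NumPy sketch shows the core computation (illustrative only; the publication's actual workflow and distance metric are not specified here):

```python
import numpy as np

def pcoa(dist, n_axes=2):
    """Classical Principal Coordinates Analysis (metric MDS).

    dist: (n, n) symmetric matrix of pairwise dissimilarities
    (e.g. Bray-Curtis between microbial community profiles).
    Returns sample coordinates on the first n_axes principal axes.
    """
    n = dist.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (dist ** 2) @ J             # double-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)
    order = np.argsort(eigvals)[::-1]          # largest eigenvalues first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    return eigvecs[:, :n_axes] * np.sqrt(np.maximum(eigvals[:n_axes], 0.0))
```

For Euclidean input distances, the recovered coordinates reproduce the original pairwise distances exactly; for ecological dissimilarities such as Bray-Curtis, negative eigenvalues can appear and are clipped here.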
The microbial community in the rivers, river mouths and sub-basins is strongly influenced by anthropogenic input from the catchment area. Generally, Bacteroidetes and Firmicutes could serve as indicators for river sediments. The microbial community in the rivers is directly influenced by anthropogenic pressure and is markedly different from that of the lake sediment. Meanwhile, the microbial community in the lake sediment reflects the anoxic environment that prevails across the lake in all sediments below a few mm burial depth. The lake sediments harbour abundant sulfate reducers and methanogens. The microbial communities in sediments from river mouths are influenced by both the river and the lake ecosystems. This study provides valuable information for understanding the basic processes that control biogeochemical cycling in Lake Sentani. Our findings are critical for lake managers to accurately assess the uncertainties of the changing environmental conditions related to the anthropogenic pressure in the catchment area. Lake Sentani is a unique study site, directly influenced by the different geology across the watershed and the morphometry of the four studied basins. As a result of these factors, there are distinct geochemical differences between the habitats (river, river mouth, lake) and the four sub-basins. In addition to geochemistry, microbial community composition also shows differences between habitats, although there are no obvious differences between the four sub-basins. However, unlike sediment geochemistry, microbial community composition is impacted by human activities. Therefore, this thesis provides crucial baseline data for future lake management.
Achilles tendinopathy (AT) is a debilitating injury in athletes, especially those engaged in repetitive stretch-shortening cycle activities. Clinical risk factors are numerous, but it has been suggested that altered biomechanics might be associated with AT. No systematic review had been conducted investigating these biomechanical alterations in specifically athletic populations. Therefore, the aim of this systematic review was to compare the lower-limb biomechanics of athletes with AT to athletically matched asymptomatic controls. Databases were searched for relevant studies investigating biomechanics during gait activities and other motor tasks such as hopping, isolated strength tasks, and reflex responses. Inclusion criteria were an AT diagnosis in at least one group, cross-sectional or prospective data, at least one outcome comparing biomechanical data between an AT and a healthy group, and athletic populations. Studies were excluded if patients had Achilles tendon rupture or surgery, if participants reported injuries other than AT, or if only within-subject data were available. Effect sizes (Cohen's d) with 95% confidence intervals were calculated for relevant outcomes. The initial search yielded 4,442 studies. After screening, twenty studies (775 total participants) were synthesised, reporting on a wide range of biomechanical outcomes. Females were under-represented, and patients in the AT group were three years older on average. Biomechanical alterations were identified in some studies during running, hopping, jumping, strength tasks and reflex activity. Equally, several biomechanical variables studied were not associated with AT in the included studies, indicating a conflicting picture. Kinematics in AT patients appeared to be altered in the lower limb, potentially indicating a pattern of "medial collapse".
Muscular activity of the calf and hips differed between groups, with AT patients exhibiting greater calf electromyographic amplitudes despite lower plantar flexor strength. Overall, dynamic maximal strength of the plantar flexors and isometric strength of the hips might be reduced in the AT group. This systematic review reports on several biomechanical alterations in athletes with AT. With further research, these factors could potentially form treatment targets for clinicians, although clinical approaches should take other contributing health factors into account. The included studies were of low quality, and currently no solid conclusions can be drawn.
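The between-group effect sizes mentioned above follow the standard two-sample formula. A brief sketch (the review's exact CI method is not stated, so the large-sample normal-approximation standard error used here is an assumption):

```python
import math

def cohens_d_ci(mean1, sd1, n1, mean2, sd2, n2, z=1.96):
    """Cohen's d for two independent groups with an approximate 95% CI.

    The pooled SD weights each group's variance by its degrees of freedom;
    the CI uses the common large-sample standard error approximation.
    """
    sp = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / sp
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d, (d - z * se, d + z * se)

# hypothetical AT group vs controls on some strength outcome (made-up numbers)
d, (lo, hi) = cohens_d_ci(10.0, 2.0, 20, 8.0, 2.0, 20)
```

A confidence interval that excludes zero is what distinguishes the "alterations identified" from the conflicting, non-significant findings the review also reports.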
Establishment of final leaf size in plants is a complex mechanism that relies on the precise regulation of two interconnected cellular processes, cell division and cell expansion. In previous work, the barley protein BROAD LEAF1 (BLF1) was identified as a novel negative regulator of cell proliferation that mainly limits leaf growth in the width direction. Here, I identified a novel RING/U-box protein that interacts with BLF1 through a yeast two-hybrid screen. Using BiFC, Co-IP and FRET, I confirmed the interaction of the two proteins in planta. Enrichment of the BLF1-mEGFP fusion protein and the increase of the FRET signal upon MG132 treatment of tobacco plants, together with an in vivo ubiquitylation assay in bacteria, confirmed that the RING/U-box E3 interacts with BLF1 to mediate its ubiquitylation and degradation by the 26S proteasome system. Consistent with regulation of endogenous BLF1 in barley by proteasomal degradation, inhibition of the proteasome by bortezomib treatment of BLF1-vYFP transgenic barley plants also resulted in an enrichment of the BLF1 protein. I thus demonstrated that the RING/U-box E3 colocalizes with BLF1 in nuclei and negatively regulates BLF1 protein levels. Analysis of ring-e3_1 knock-out mutants suggested the involvement of the RING/U-box E3 gene in leaf growth control, although the effect was mainly on leaf length. Together, my results suggest that proteasomal degradation, possibly mediated by the RING/U-box E3, contributes to fine-tuning BLF1 protein levels in barley.
Harnessing light in the fabrication of materials is as encouraging as it is challenging. Steadily increasing energy consumption, driven by rapid population growth, demands solutions that keep pace with this growth. Therefore, creating, designing and manufacturing materials that can interact with light and subsequently be applied, and disposed of, in photo-based applications are very much the focus of researchers. In the era of sustainability and renewable energy systems, semiconductor-based photoactive materials have received great attention, not only for the generation of solar and/or hydrocarbon fuels from solar energy, but also for successfully driving photocatalytic reactions such as water splitting, pollutant degradation and organic molecule synthesis. A turning point was reached in 1972, when Fujishima and Honda achieved water photolysis with an electrochemical cell consisting of a TiO2-Pt electrode pair illuminated by UV light as the energy source, rather than driven by an external voltage. Ever since, there has been a great deal of interest in research on semiconductors (e.g. metal oxide, metal-free organic, noble-metal complex) exhibiting band gaps effective for photochemical reactions. With regard to environmental friendliness, the toxicity of metal-based semiconductors restricts their possible applications. In this respect, a very robust and ‘earth-abundant’ organic semiconductor, graphitic carbon nitride, has been synthesized and successfully applied as a novel photocatalyst in photoinduced applications. Properties such as a suitable band gap, low charge-carrier recombination and feasibility of scaling up pave the way for advanced combinations with other catalysts to achieve higher photoactivity based on compatible heterojunctions.
This dissertation aims to demonstrate a series of combinations between the organic semiconductor g-CN and polymer materials that are forged through photochemistry, either in synthesis or in application. Fabrication and design processes, as well as applications performed within the scope of the thesis, are elucidated in detail. In addition to UV light, particular attention is placed on visible light as an energy source, with a vision of more sustainability and better scalability in the creation of novel materials and solar-energy-based applications.
Solar photocatalysis is one of the leading research concepts in the current paradigm of a sustainable chemical industry. For the practical implementation of sunlight-driven catalytic processes in organic synthesis, a cheap, efficient, versatile and robust heterogeneous catalyst is necessary. Carbon nitrides are a class of organic semiconductors known to fulfill these requirements.
First, the current state of solar photocatalysis in economy, industry and laboratory research is reviewed, outlining EU project funding, prospective synthetic and reforming bulk processes, small-scale solar organic chemistry, and existing reactor designs and prototypes, and concluding that the approach is feasible.
Then, the photocatalytic aerobic cleavage of oximes to corresponding aldehydes and ketones by anionic poly(heptazine imide) carbon nitride is discussed. The reaction provides a feasible method of deprotection and formation of carbonyl compounds from nitrosation products and serves as a convenient model to study chromoselectivity and photophysics of energy transfer in heterogeneous photocatalysis.
Afterwards, the ability of mesoporous graphitic carbon nitride to conduct proton-coupled electron transfer was utilized for the direct oxygenation of 1,3-oxazolidin-2-ones to the corresponding 1,3-oxazolidine-2,4-diones. This reaction provides easier access to a key scaffold of diverse types of drugs and agrochemicals.
Finally, a series of novel carbon nitrides based on poly(triazine imide) and poly(heptazine imide) structure was synthesized from cyanamide and potassium rhodizonate. These catalysts demonstrated a good performance in a set of photocatalytic benchmark reactions, including aerobic oxidation, dual nickel photoredox catalysis, hydrogen peroxide evolution and chromoselective transformation of organosulfur precursors.
In conclusion, the scope of carbon nitride utilization for net-oxidative and net-neutral photocatalytic processes was expanded, and a new tunable platform for catalyst synthesis was discovered.
Continental rifts are key geodynamic regions where the complex interplay of magmatism and faulting activity can be studied to understand the driving forces of extension and the formation of new divergent plate boundaries. Well-preserved rift morphology can provide a wealth of information on the growth, interaction, and linkage of normal-fault systems through time. If rift basins are preserved over longer geologic time periods, sedimentary archives generated during extensional processes may mirror tectonic and climatic influences on erosional and sedimentary processes that have varied over time. Rift basins are furthermore strategic areas for hydrocarbon and geothermal energy exploration, and they play a central role in species dispersal and evolution, as well as in providing or inhibiting hydrologic connectivity along basins at emerging plate boundaries.
The Cenozoic East African rift system (EARS) is one of the most important continental extension zones, reflecting a range of evolutionary stages from an early rift stage with isolated basins in Malawi to an advanced stage of continental extension in southern Afar. Consequently, the EARS is an ideal natural laboratory that lends itself to the study of different stages in the breakup of a continent. The volcanically and seismically active eastern branch of the EARS is characterized by multiple, laterally offset tectonic and magmatic segments where adjacent extensional basins facilitate crustal extension either across a broad deformation zone or via major transfer faulting. The Broadly Rifted Zone (BRZ) in southern Ethiopia is an integral part of the eastern branch of the EARS; in this region, rift segments of the southern Main Ethiopian Rift (sMER) and northern Kenyan Rift (nKR) propagate in opposite directions in a region with one of the earliest manifestations of volcanism and extensional tectonism in East Africa. The basin margins of the Chew Bahir Basin and the Gofa Province, characterized by a semi-arid climate and largely uniform lithology, provide ideal conditions for studying the tectonic and geomorphologic features of this complex kinematic transfer zone, but more importantly, this area is suitable for characterizing and quantifying the overlap between the propagating structures of the sMER and nKR and the resulting deformation patterns of the BRZ transfer zones.
In this study, I have combined data from thermochronology, thermal modeling, morphometry, paleomagnetic analysis, geochronology, and geomorphological field observations with information from published studies to reconstruct the spatiotemporal relationship between volcanism and fault activity in the BRZ and quantify the deformation patterns of the overlapping rift segments. I present the following results: (1) new thermochronological data from the en-échelon basin margins and footwall blocks of the rift flanks and morphometric results verified in the field to link different phases of magmatism and faulting during extension and infer geomorphological landscape features related to the current tectonic interaction between the nKR and the sMER; (2) temporally constrained paleomagnetic data from the BRZ overlap zone between the Ethiopian and Kenyan rifts to quantitatively determine block rotation between the two segments. Combining the collected data, time-temperature histories of thermal modeling results from representative samples show well-defined deformation phases between 25–20 Ma, 15–9 Ma, and ~5 Ma to the present. Each deformation phase is characterized by the onset of rapid cooling (>2°C/Ma) of the crust associated with uplift or exhumation of the rift shoulder. After an initial, spatially very diffuse phase of extension, the rift has gradually evolved into a system of connected structures formed in an increasingly focused rift zone during the last 5 Ma. Regarding the morphometric analysis of the rift structures, it can be shown that normalized slope indices of the river courses, the spatial arrangement of knickpoints in the river longitudinal profiles of the footwall blocks, local relief values, and the average maximum slopes of the river profiles indicate a gradual increase in the extension rate from north (Sawula basin: mature) to south (Chew Bahir: young).
The complexity of the structural evolution of the BRZ overlap zone between the nKR and sMER is further emphasized by the documentation of rotations of crustal blocks around a vertical axis. A comparison of the mean directions obtained for the Eo-Oligocene (Ds=352.6°, Is=-17.0°, N=18, α95=5.5°) and Miocene (Ds=2.9°, Is=0.9°, N=9, α95=12.4°) volcanics relative to the pole for stable South Africa, and with respect to the corresponding ages of the analyzed units, records a significant counterclockwise (CCW) rotation of ~11.1° ± 6.4° and an insignificant CCW rotation of ~3.2° ± 11.5°, respectively.
Sustainably implementing OERs at higher education institutions (HEIs) requires not just technical infrastructure, but also well-trained staff. The University of Graz is in charge of an OER training program for university staff as part of the collaborative project Open Education Austria Advanced (OEAA), with the aim of ensuring long-term competence growth in the use and creation of OERs. The program consists of a MOOC and a guided blended-learning format that was evaluated to find out which accompanying teaching and learning concepts can best facilitate targeted competence development. The evaluation of the program shows that learning videos, self-study assignments and synchronous sessions are most useful for the learning process. The results indicate that the creation of OERs is a complex process that can be navigated more effectively in the guided program.
Challenges and proposals for introducing digital certificates in higher education infrastructures
(2023)
Questions about the recognition of MOOCs within and outside higher education were already being raised in the early 2010s. Today, recognition decisions are still made more or less on a case-by-case basis. However, digital certification approaches are now emerging that could automate recognition processes. The technical development of the required machine-readable documents and infrastructures is already well advanced in some cases. The DigiCerts consortium has developed a solution based on a collective blockchain. There are ongoing and open discussions regarding the particular technology, but the institutional implementation of digital certificates raises further questions. A number of workshops held at the Institute for Interactive Systems at Technische Hochschule Lübeck have identified the need for new responsibilities for issuing certificates. It has also become clear that all members of higher education institutions need to develop skills in the use of digital certificates.
In this work, binding interactions between biomolecules were analyzed by a technique based on electrically controllable DNA nanolevers. The technique was applied to virus-receptor interactions for the first time. As receptors, primarily peptides on DNA nanostructures and antibodies were utilized. The DNA nanostructures were integrated into the measurement technique and enabled the presentation of the peptides in a controllable geometrical order. The number of peptides could be varied to be compatible with the binding sites of the viral surface proteins.
Influenza A virus served as a model system, on which the general measurability was demonstrated. Variations of the receptor peptide, the surface ligand density, the measurement temperature and the virus subtypes showed the sensitivity and applicability of the technology. Additionally, the immobilization of virus particles enabled the measurement of differences in oligovalent binding of DNA-peptide nanostructures to the viral proteins in their native environment.
When the coronavirus pandemic broke out in 2020, work on the binding interactions between a peptide from the hACE2 receptor and the spike protein of the SARS-CoV-2 virus revealed that oligovalent binding can be quantified with the switchSENSE technology. It could also be shown that small changes in the amino acid sequence of the spike protein resulted in complete loss of binding. Interactions of the peptide with inactivated virus material as well as pseudovirus particles could be measured. Additionally, the switchSENSE technology was utilized to rank six antibodies by their binding affinity towards the nucleocapsid protein of SARS-CoV-2 for the development of a rapid antigen test device.
The technique was furthermore employed to show binding of a non-enveloped virus (adenovirus) and a virus-like particle (norovirus-like particle) to antibodies. Apart from binding interactions, the use of DNA origami levers with a length of around 50 nm enabled the switching of virus material. This proved that the technology is also able to size objects with a hydrodynamic diameter larger than 14 nm.
A theoretical work on diffusion and reaction-limited binding interactions revealed that the technique and the chosen parameters enable the determination of binding rate constants in the reaction-limited regime.
Overall, the applicability of the switchSENSE technique to virus-receptor binding interactions could be demonstrated on multiple examples. While challenges remain, the setup enables the determination of affinities between viruses and receptors in their native environment. In particular, the possibilities for quantifying oligo- and multivalent binding interactions could be demonstrated.
The trace elements selenium (Se) and copper (Cu) play an important role in maintaining normal brain function. Since they have essential functions as cofactors of enzymes or structural components of proteins, an optimal supply as well as a well-defined homeostatic regulation are crucial. Disturbances in trace element homeostasis affect health status and contribute to the incidence and severity of various diseases. The brain in particular is vulnerable to oxidative stress due to its extensive oxygen consumption and high energy turnover, among other factors. As components of a number of antioxidant enzymes, both elements are involved in redox homeostasis. However, high concentrations are also associated with the occurrence of oxidative stress, which can induce cellular damage. High Cu concentrations in some brain areas in particular are associated with the development and progression of neurodegenerative diseases such as Alzheimer's disease (AD). In contrast, reduced Se levels have been measured in the brains of AD patients. This opposing behavior of Cu and Se renders the study of the two trace elements, and of the interactions between them, particularly relevant; both are addressed in this work.
This thesis discusses heat and charge transport phenomena in single-crystalline silicon penetrated by nanometer-sized pores, known as mesoporous silicon (pSi). Despite the extensive attention it has received as a thermoelectric material of interest, studies on microscopic thermal and electronic transport beyond its macroscopic characterization are rarely reported. In contrast, this work reports on the interplay of both.
PSi samples synthesized by electrochemical anodization display a temperature dependence of the specific heat C_p that deviates from the characteristic T^3 behaviour (at T < 50 K). A thorough analysis reveals that both 3D and 2D Einstein and Debye modes contribute to this specific heat. Additional 2D Einstein modes (~3 meV) agree reasonably well with the boson peak of SiO2 in the pSi pore walls. 2D Debye modes are proposed to account for surface acoustic modes causing a significant deviation from the well-known T^3 dependence of C_p at T < 50 K.
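The decomposition described above can be written schematically with the standard Debye and Einstein expressions; the coefficients β3, β2, weights n_i and Einstein temperatures θ_{E,i} below are placeholders, not the fitted values of the thesis:

```latex
C_p(T) \;\approx\;
\underbrace{\beta_3\, T^{3}}_{\text{3D Debye (low-}T\text{ limit)}}
\;+\;
\underbrace{\beta_2\, T^{2}}_{\text{2D Debye, surface acoustic modes}}
\;+\;
\sum_i 3 n_i R \left(\frac{\theta_{E,i}}{T}\right)^{2}
\frac{e^{\theta_{E,i}/T}}{\left(e^{\theta_{E,i}/T}-1\right)^{2}}
```

The Einstein sum accounts for nearly dispersionless modes, such as the ~3 meV boson-peak modes of SiO2 mentioned above; the T^2 term is the low-temperature limit of a two-dimensional Debye density of states.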
A novel theoretical model gives insights into the thermal conductivity of pSi in terms of porosity and phonon scattering on the nanoscale. The thermal conductivity analysis utilizes the peculiarities of the pSi phonon dispersion probed by inelastic neutron scattering experiments. A phonon mean free path of around 10 nm extracted from the presented model is proposed to cause the thermal conductivity of pSi to be reduced by two orders of magnitude compared to p-doped bulk silicon. Detailed analysis indicates that compound averaging may cause a further 10-50% reduction. The percolation threshold of 65% for the thermal conductivity of the pSi samples is subsequently determined by employing theoretical effective-medium models.
Temperature-dependent electrical conductivity measurements reveal a thermally activated transport process. A detailed analysis of the activation energy E_A,σ of the thermally activated transport exhibits a Meyer–Neldel compensation rule between different samples that originates in multi-phonon absorption upon carrier transport. Activation energies E_A,S obtained from temperature-dependent thermopower measurements provide further evidence for multi-phonon-assisted hopping between localized states as the dominant charge transport mechanism in pSi, as they systematically differ from the determined E_A,σ values.
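The thermally activated conductivity and the Meyer–Neldel compensation rule invoked above are conventionally written as follows; σ_00 and the Meyer–Neldel energy E_MN are the generic parameters of the rule, not values reported in the thesis:

```latex
\sigma(T) = \sigma_0 \exp\!\left(-\frac{E_{A,\sigma}}{k_B T}\right),
\qquad
\ln \sigma_0 = \ln \sigma_{00} + \frac{E_{A,\sigma}}{E_{MN}}
```

The second relation states that the prefactor grows exponentially with the activation energy across a sample series, which is the compensation behaviour observed here between different pSi samples.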
Planets outside our solar system, so-called "exoplanets", can be detected with different methods, and currently more than 5000 exoplanets have been confirmed, according to the NASA Exoplanet Archive. One major highlight of the studies on exoplanets in the past twenty years is the characterization of their atmospheres using transmission spectroscopy as the exoplanet transits. However, this characterization is a challenging process, and sometimes there are reported discrepancies in the literature regarding the atmosphere of the same exoplanet. One potential reason for the observed atmospheric inconsistencies is the so-called impact parameter degeneracy, which is largely driven by the limb darkening effect of the host star. A brief introduction to these topics is presented in chapter 1, while the motivation and objectives of this work are described in chapter 2. The first goal is to clarify the origin of the transmission spectrum, which is an indicator of an exoplanet’s atmosphere: whether it is real or influenced by the impact parameter degeneracy. A second goal is to determine whether photometry from space, using the Transiting Exoplanet Survey Satellite (TESS), could improve on the major parameters of known exoplanetary systems that are responsible for the aforementioned degeneracy. Three individual projects were conducted in order to address these goals. The three manuscripts are presented, in short, in the manuscript overview in chapter 3. More specifically, in chapter 4, the first manuscript is presented, which is an extended investigation of the impact parameter degeneracy and its application to synthetic transmission spectra. Evidently, the limb darkening of the host star is an important driver of this effect. It keeps the degeneracy persisting across different groups of exoplanets, grouped by the uncertainty of their impact parameter and by the type of their host star. The second goal was addressed in the second and third manuscripts (chapters 5 and 6, respectively).
Using observations from the TESS mission, two samples of exoplanets were studied: 10 transiting inflated hot Jupiters and 43 transiting grazing systems. Potentially, the refinement or confirmation of their major system parameters’ measurements can assist in solving current or future discrepancies regarding their atmospheric characterization. In chapter 7, the conclusions of this work are discussed, while in chapter 8 it is proposed how TESS’s measurements can discern between erroneous interpretations of transmission spectra, especially for systems where the impact parameter degeneracy is likely not applicable.
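Limb darkening, the driver of the impact parameter degeneracy discussed above, is commonly parametrized in transit light-curve fitting by the quadratic law; this is a standard parametrization offered as illustration, not necessarily the one adopted in the manuscripts:

```latex
\frac{I(\mu)}{I(1)} = 1 - u_1\,(1-\mu) - u_2\,(1-\mu)^2,
\qquad \mu = \cos\theta
```

Here I(μ) is the stellar surface intensity at angle θ from the disk center and u_1, u_2 are the limb-darkening coefficients; uncertainty in these coefficients trades off against the impact parameter in transit fits, producing the degeneracy.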
This work investigates the impacts and sensitivity of climate change on water resources, droughts and hydropower production in Malawi, a South-Eastern African country that is highly vulnerable to climate change. It is observed that rainfall is decreasing and temperature is increasing, which calls for an understanding of how these changes may impact water resources, drought occurrence and hydropower generation in the region. The study is conducted in the Greater Lake Malawi Basin (Lake Malawi and Shire River Basins) and is divided into three projects. The first study assesses the variability and trends of both meteorological and hydrological droughts from 1970-2013 in the Lake Malawi and Shire River basins, using the Standardized Precipitation Index (SPI) and the Standardized Precipitation Evapotranspiration Index (SPEI) for meteorological droughts and the lake level change index (LLCI) for hydrological droughts. The relationship between the meteorological and hydrological droughts is then established.
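The SPI/SPEI indices mentioned above standardize a precipitation or climatic water-balance series; operationally they first fit a probability distribution (gamma for SPI, log-logistic for SPEI) before transforming to standard-normal values. The sketch below is a deliberately simplified stand-in using a plain z-score, an assumption for illustration rather than the study's calculation:

```python
from statistics import mean, stdev

def standardized_index(series):
    """Simplified standardized drought index: z-score of a monthly
    precipitation or water-balance (P - PET) series. The operational
    SPI/SPEI fit a probability distribution first; a plain z-score is
    used here only to illustrate the standardization step."""
    m, s = mean(series), stdev(series)
    return [(x - m) / s for x in series]

def drought_months(index, threshold=-1.0):
    """Count months classified as drought (index below the threshold,
    e.g. SPEI < -1 for moderate drought)."""
    return sum(1 for z in index if z < threshold)
```

With such an index, drought months (DM) and drought events can be counted per period, which is the kind of characteristic (DI, DM, DE) the second study tracks under climate scenarios.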
The second study extends the drought analysis into the future by examining the potential future meteorological water balance and associated drought characteristics, such as the drought intensity (DI), drought months (DM), and drought events (DE), in the Greater Lake Malawi Basin. The sensitivity of drought to changes in rainfall and temperature is also assessed using the scenario-neutral approach. Climate change projections from 20 Coordinated Regional Climate Downscaling Experiment (CORDEX) models for Africa, based on two scenarios (RCP4.5 and RCP8.5) for the periods 2021–2050 and 2071–2100, are used. The study also investigates the effect of bias correction (i.e., empirical quantile mapping) on the ability of the climate model ensemble to reproduce observed drought characteristics, as compared to raw climate projections.
The sensitivity of key hydrologic variables and hydropower generation to climate change in the Lake Malawi and Shire River basins is assessed in the third study. The study adapts the mesoscale Hydrological Model (mHM), which is applied separately in the Upper Lake Malawi and Shire River basins. A dedicated Lake Malawi model, which focuses on reservoir routing and lake water balance, has been developed and links the two basins. As in the second study, the scenario-neutral approach is applied to determine the sensitivity of water resources to climate change, in particular the Lake Malawi level and the Shire River flow, which in turn helps to estimate the susceptibility of hydropower production.
Results suggest that meteorological droughts are increasing due to a decrease in precipitation, which is exacerbated by an increase in temperature (potential evapotranspiration). The hydrological system of Lake Malawi seems to have a >24-month memory of meteorological conditions, since the 36-month SPEI can predict hydrological droughts ten months in advance. The study found the critical lake level that would trigger hydrological drought to be 474.1 m.a.s.l.
Despite the differences in internal structure and the uncertainties that exist among the climate models, they all agree on an increase of meteorological droughts in the future, in terms of higher DI and longer-lasting events (higher DM). DI is projected to increase by between +25% and +50% during 2021-2050 and between +131% and +388% during 2071-2100. This translates into +3 to +5 and +7 to +8 more drought months per year during the two periods, respectively. With longer-lasting drought events, DE decreases. Projected droughts based on RCP8.5 are 1.7 times more severe than droughts based on RCP4.5.
It is also found that an annual temperature increase of 1°C decreases the mean lake level and outflow by 0.3 m and 17%, respectively, signifying the importance of intensified evaporation for Lake Malawi’s water budget. Meanwhile, a +5% (-5%) deviation in annual rainfall changes the mean lake level by +0.7 m (-0.6 m). The combined effects of temperature increase and rainfall decrease result in significantly lower flows in the Shire River. The hydrological river regime may change from perennial to seasonal for combinations of annual temperature increase and precipitation decrease beyond 1.5°C (3.5°C) and -20% (-15%). The study further projects a reduction in annual hydropower production of between 1% (RCP8.5) and 2.5% (RCP4.5) during 2021–2050 and between 5% (RCP4.5) and 24% (RCP8.5) during 2071–2100.
The findings are then linked to global policies, in particular the United Nations Framework Convention on Climate Change (UNFCCC)’s Paris Agreement and the United Nations (UN)’s Sustainable Development Goals (SDGs), showing how failure to keep the temperature increase below the global limit of 1.5°C will affect droughts and water resources in Malawi and consequently impact hydropower production. As a result, the achievement of most of the SDGs will be compromised.
The results show that further development of hydropower on the Shire River must take the effects of climate change into account. The information generated is important for decision-making, especially in supporting the climate action required to fight climate change. The frequency of extreme climate events due to climate change has reached the level of a climate emergency, and saving lives and livelihoods requires urgent action.
Soil is today considered a non-renewable resource on a societal time scale, as the rate of soil loss is higher than that of soil formation.
Soil formation is complex, can take several thousands of years and is influenced by a variety of factors, one of them being time. Oftentimes, constant and progressive conditions for soil and/or profile development (i.e., steady state) are assumed. In reality, for most soils, (co-)evolution leads to a complex and irregular soil development in time and space, characterised by “progressive” and “regressive” phases.
Lateral transport of soil material (i.e., soil erosion) is one of the principal processes shaping the land surface and soil profile during “regressive” phases and one of the major environmental problems the world faces.
Anthropogenic activities like agriculture can exacerbate soil erosion. Thus, it is of vital importance to distinguish short-term soil redistribution rates (i.e., within decades) influenced by human activities from long-term natural rates. To do so, soil erosion (and denudation) rates can be determined using a set of isotope methods that cover different time scales at the landscape level.
With the aim of unravelling the co-evolution of weathering, soil profile development and lateral redistribution at the landscape level, we used Plutonium-239+240 (239+240Pu), Beryllium-10 (10Be, in situ and meteoric) and Radiocarbon (14C) to calculate short- and long-term erosion rates in two settings, i.e., a natural and an anthropogenic environment in the hummocky ground moraine landscape of the Uckermark, North-eastern Germany. The main research questions were:
1. How do long-term and short-term rates of soil redistributing processes differ?
2. Are rates calculated from in situ 10Be comparable to those derived from meteoric 10Be?
3. How do soil redistribution rates (short- and long-term) in an agricultural and in a natural landscape compare to each other?
4. Are the soil patterns observed in northern Germany purely a result of past events (natural and/or anthropogenic) or are they embedded in ongoing processes?
Erosion and deposition are reflected in a catena of soil profiles, with no or almost no erosion at flat positions (hilltop), strong erosion on the mid-slope and accumulation of soil material at the toeslope position. These three characteristic process domains were chosen within the CarboZALF-D experimental site, which is characterised by intense anthropogenic activities. Likewise, a hydrosequence in an ancient forest was chosen for this study, regarded as a catena strongly influenced by natural soil transport.
The following main results were obtained using the above-mentioned range of isotope methods available to measure soil redistribution rates depending on the time scale needed (e.g., 239+240Pu, 10Be, 14C):
1. Short-term erosion rates are one order of magnitude higher than long-term rates in agricultural settings.
2. Both meteoric and in situ 10Be are suitable soil tracers to measure long-term soil redistribution rates, giving similar results in an anthropogenic environment for different landscape positions (e.g., hilltop, mid-slope, toeslope).
3. Short-term rates were extremely low/negligible in the natural landscape and very high in the agricultural landscape, at -0.01 t ha-1 yr-1 (average value) and -25 t ha-1 yr-1, respectively. In contrast, long-term rates in the forested landscape are comparable to those calculated in the agricultural area investigated, with average values of -1.00 t ha-1 yr-1 and -0.79 t ha-1 yr-1.
4. Soil patterns observed in the forest might be due to human impact and activities that started after the first settlements in the region, between 4.5 and 6.8 kyr BP (earlier than previously postulated), and not a result of recent soil erosion.
5. Furthermore, long-term soil redistribution rates are similar regardless of the setting, meaning past natural soil mass redistribution processes still overshadow the present anthropogenic erosion processes.
Overall, this study makes important contributions to deciphering the co-evolution of weathering, soil profile development and lateral redistribution in North-eastern Germany. The multi-methodological approach used should be tested further by application in a wider range of landscapes and geographic regions.
Modern datasets often exhibit diverse, feature-rich, unstructured data, and they are massive in size. This is the case for social networks, the human genome, and e-commerce databases. As Artificial Intelligence (AI) systems are increasingly used to detect patterns in data and predict future outcomes, there are growing concerns about their ability to process large amounts of data. Motivated by these concerns, we study the problem of designing AI systems that scale to very large and heterogeneous datasets.
Many AI systems need to solve combinatorial optimization problems in their course of action. These optimization problems are typically NP-hard, and they may exhibit additional side constraints. However, the underlying objective functions often exhibit additional properties that can be exploited to design suitable optimization algorithms. One of these properties is the well-studied notion of submodularity, which captures diminishing returns. Submodularity is often found in real-world applications, and many relevant applications exhibit generalizations of this property.
In this thesis, we propose new scalable optimization algorithms for combinatorial problems with diminishing returns. Specifically, we focus on three problems: the Maximum Entropy Sampling problem, Video Summarization, and Feature Selection. For each problem, we propose new algorithms that work at scale. These algorithms are based on a variety of techniques, such as forward step-wise selection and adaptive sampling. Our proposed algorithms yield strong approximation guarantees, and they perform well experimentally.
We first study the Maximum Entropy Sampling problem, which consists of selecting, from a larger set, a subset of random variables that maximizes entropy. Using diminishing-returns properties, we develop a simple forward step-wise selection algorithm for this problem. Then, we study the problem of selecting a subset of frames that represents a given video. Again, this problem corresponds to a submodular maximization problem. We provide a new adaptive sampling algorithm for it, suitable for handling the complex side constraints imposed by the application. We conclude by studying Feature Selection. In this case, the underlying objective functions generalize the notion of submodularity. We provide a new adaptive sequencing algorithm for this problem, based on the Orthogonal Matching Pursuit paradigm.
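For monotone submodular objectives under a cardinality constraint, forward step-wise selection is the classic greedy algorithm, which achieves a (1 - 1/e) approximation by the Nemhauser-Wolsey-Fisher result. A minimal sketch (the function name `greedy_max` and the set-based interface are illustrative choices, not the thesis's implementation):

```python
def greedy_max(ground_set, f, k):
    """Greedy (forward step-wise) maximization of a set function f
    under the cardinality constraint |S| <= k. For monotone submodular
    f this achieves a (1 - 1/e) approximation."""
    S = frozenset()
    for _ in range(k):
        # pick the element with the largest marginal gain f(S + e) - f(S)
        best = max((e for e in ground_set if e not in S),
                   key=lambda e: f(S | {e}) - f(S),
                   default=None)
        if best is None:
            break
        S |= {best}
    return S
```

For instance, with a coverage objective f(S) = |union of the sets chosen by S| (coverage functions are submodular), the greedy first picks the largest set, then whichever set adds the most new elements.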
Overall, we study practically relevant combinatorial problems and propose new algorithms to solve them. We demonstrate that these algorithms are suitable for handling massive datasets. Moreover, our analysis is not problem-specific, and our results can be applied to other domains as long as diminishing-returns properties hold. We hope that the flexibility of our framework inspires further research into scalability in AI.
The global climate crisis is significantly contributing to changing ecosystems, loss of biodiversity and is putting numerous species on the verge of extinction. In principle, many species are able to adapt to changing conditions or shift their habitats to more suitable regions. However, change is progressing faster than some species can adjust, or potential adaptation is blocked and disrupted by direct and indirect human action. Unsustainable anthropogenic land use in particular is one of the driving factors, besides global heating, for these ecologically critical developments. Precisely because land use is anthropogenic, it is also a factor that could be quickly and immediately corrected by human action.
In this thesis, I therefore assess the impact of three climate change scenarios of increasing intensity in combination with differently scheduled mowing regimes on the long-term development and dispersal success of insects in Northwest German grasslands. The large marsh grasshopper (LMG, Stethophyma grossum, Linné 1758) is used as a species of reference for the analyses. It inhabits wet meadows and marshes and has a limited, yet fairly good ability to disperse. Mowing and climate conditions affect the development and mortality of the LMG differently depending on its life stage.
The specifically developed simulation model HiLEG (High-resolution Large Environmental Gradient) serves as a tool for investigating and projecting viability and dispersal success under different climate conditions and land use scenarios. It is a spatially explicit, stage- and cohort-based model that can be individually configured to represent the life cycle and characteristics of terrestrial insect species, as well as high-resolution environmental data and the occurrence of external disturbances. HiLEG is a freely available and adjustable software that can be used to support conservation planning in cultivated grasslands.
In the three case studies of this thesis, I explore various aspects related to the structure of simulation models per se, their importance in conservation planning in general, and insights regarding the LMG in particular. It became apparent that the detailed resolution of model processes and components is crucial to project the long-term effect of spatially and temporally confined events. Taking into account conservation measures at the regional level has further proven relevant, especially in light of the climate crisis. I found that the LMG is benefiting from global warming in principle, but continues to be constrained by harmful mowing regimes. Land use measures could, however, be adapted in such a way that they allow the expansion and establishment of the LMG without overly affecting agricultural yields.
Overall, simulation models like HiLEG can make an important contribution and add value to conservation planning and policy-making. Properly used, simulation results shed light on aspects that might be overlooked by subjective judgment and the experience of individual stakeholders. Even though it is in the nature of models that they are subject to limitations and only represent fragments of reality, this should not keep stakeholders from using them, as long as these limitations are clearly communicated. Similar to HiLEG, models could further be designed in such a way that not only the parameterization can be adjusted as required, but also the implementation itself can be improved and changed as desired. This openness and flexibility should become more widespread in the development of simulation models.
Its properties make copper one of the world’s most important functional metals. Numerous megatrends are increasing the demand for copper. This requires the prospection and exploration of new deposits, as well as the monitoring of copper quality in the various production steps. A promising technique for performing these tasks is Laser Induced Breakdown Spectroscopy (LIBS). Its unique feature, among others, is the ability to measure on site without sample collection and preparation. In this work, copper-bearing minerals from two different deposits are studied. The first set of field samples comes from a volcanogenic massive sulfide (VMS) deposit, the second from a stratiform sedimentary copper (SSC) deposit. Different approaches are used to analyze the data. First, univariate regression (UVR) is used; however, due to the strong influence of matrix effects, it is not suitable for the quantitative analysis of copper grades. Second, the multivariate method of partial least squares regression (PLSR) is used, which is more suitable for quantification. In addition, the effects of the surrounding matrices on the LIBS data are characterized by principal component analysis (PCA), alternative regression methods to PLSR are tested, and the PLSR calibration is validated using field samples.
Housing in metabolic cages can induce a pronounced stress response. Metabolic cage systems imply housing mice on metal wire mesh for the collection of urine and feces in addition to monitoring food and water intake. Moreover, mice are single-housed, and no nesting, bedding, or enrichment material is provided, which is often argued to have a non-negligible impact on animal welfare due to cold stress. We therefore attempted to reduce stress during metabolic cage housing for mice by comparing an innovative metabolic cage (IMC) with a commercially available metabolic cage from Tecniplast GmbH (TMC) and a control cage. Substantial refinement measures were incorporated into the IMC cage design. In the frame of a multifactorial approach for severity assessment, parameters such as body weight, body composition, food intake, cage and body surface temperature (thermal imaging), mRNA expression of uncoupling protein 1 (Ucp1) in brown adipose tissue (BAT), fur score, and fecal corticosterone metabolites (CMs) were included. Female and male C57BL/6J mice were single-housed for 24 h in either conventional Macrolon cages (control), the IMC, or the TMC for two sessions. Body weight decreased less in the IMC (females—1st restraint: 6.94%; 2nd restraint: 6.89%; males—1st restraint: 8.08%; 2nd restraint: 5.82%) compared to the TMC (females—1st restraint: 13.2%; 2nd restraint: 15.0%; males—1st restraint: 13.1%; 2nd restraint: 14.9%), and the IMC had a higher cage temperature (females—1st restraint: 23.7 °C; 2nd restraint: 23.5 °C; males—1st restraint: 23.3 °C; 2nd restraint: 23.5 °C) compared with the TMC (females—1st restraint: 22.4 °C; 2nd restraint: 22.5 °C; males—1st restraint: 22.6 °C; 2nd restraint: 22.4 °C). The concentration of fecal corticosterone metabolites in the TMC (females—1st restraint: 1376 ng/g dry weight (DW); 2nd restraint: 2098 ng/g DW; males—1st restraint: 1030 ng/g DW; 2nd restraint: 1163 ng/g DW) was higher compared to control cage housing (females—1st restraint: 640 ng/g DW; 2nd restraint: 941 ng/g DW; males—1st restraint: 504 ng/g DW; 2nd restraint: 537 ng/g DW). Our results show the stress potential induced by metabolic cage restraint, which is markedly influenced by the lower housing temperature. The IMC represents a first attempt to reduce cold stress during metabolic cage application, thereby producing more animal-welfare-friendly data.
Following the extinction of dinosaurs, the great adaptive radiation of mammals occurred, giving rise to an astonishing ecological and phenotypic diversity of mammalian species. Even closely related species often inhabit vastly different habitats, where they encounter diverse environmental challenges and are exposed to different evolutionary pressures. As a response, mammals evolved various adaptive phenotypes over time, such as morphological, physiological and behavioural ones. Mammalian genomes vary in their content and structure and this variation represents the molecular mechanism for the long-term evolution of phenotypic variation. However, understanding this molecular basis of adaptive phenotypic variation is usually not straightforward.
The recent development of sequencing technologies and bioinformatics tools has enabled better insight into mammalian genomes. Through these advances, it was recognized that mammalian genomes differ more, both within and between species, as a consequence of structural variation than of single-nucleotide differences. The structural variant types investigated in this thesis (deletion, duplication, inversion and insertion) represent a change in the structure of the genome, impacting the size, copy number, orientation and content of DNA sequences. Unlike short variants, structural variants can span multiple genes. They can alter gene dosage and cause notable gene expression differences and, subsequently, phenotypic differences. Thus, they can lead to a more dramatic effect on the fitness (reproductive success) of individuals, the local adaptation of populations, and speciation.
In this thesis, I investigated and evaluated the potential functional effect of structural variations on the genomes of mustelid species. To detect the genomic regions associated with phenotypic variation I assembled the first reference genome of the tayra (Eira barbara) relying on linked-read sequencing technology to achieve a high level of genome completeness important for reliable structural variant discovery. I then set up a bioinformatics pipeline to conduct a comparative genomic analysis and explore variation between mustelid species living in different environments. I found numerous genes associated with species-specific phenotypes related to diet, body condition and reproduction among others, to be impacted by structural variants.
Furthermore, I investigated the effects of artificial selection on structural variants in mice selected for high fertility, increased body mass and high endurance. Through selective breeding of each mouse line, the desired phenotypes have spread within these populations, while maintaining structural variants specific to each line. In comparison to the control line, the litter size has doubled in the fertility lines, individuals in the high body mass lines have become considerably larger, and mice selected for treadmill performance covered substantially more distance. Structural variants were found in higher numbers in these trait-selected lines than in the control line when compared to the mouse reference genome. Moreover, we have found twice as many structural variants spanning protein-coding genes (specific to each line) in trait-selected lines. Several of these variants affect genes associated with selected phenotypic traits. These results imply that structural variation does indeed contribute to the evolution of the selected phenotypes and is heritable.
Finally, I suggest a set of critical metrics of genomic data that should be considered for a stringent structural variation analysis as comparative genomic studies strongly rely on the contiguity and completeness of genome assemblies. Because most of the available data used to represent reference genomes of mammalian species is generated using short-read sequencing technologies, we may have incomplete knowledge of genomic features. Therefore, a cautious structural variation analysis is required to minimize the effect of technical constraints.
The impact of structural variants on the adaptive evolution of mammalian genomes is slowly gaining more focus but it is still incorporated in only a small number of population studies. In my thesis, I advocate the inclusion of structural variants in studies of genomic diversity for a more comprehensive insight into genomic variation within and between species, and its effect on adaptive evolution.
Control over spin and electronic structure of MoS₂ monolayer via interactions with substrates
(2023)
The molybdenum disulfide (MoS2) monolayer is a semiconductor with a direct bandgap, while also being a robust and affordable material.
It is a candidate for applications in optoelectronics and field-effect transistors.
MoS2 features strong spin-orbit coupling, which makes its spin structure promising for realizing the Kane-Mele topological concept, with corresponding applications in spintronics and valleytronics.
From the optical point of view, the MoS2 monolayer features two valleys in the regions of the K and K' points. These valleys are distinguished by opposite spins and a related valley-selective circular dichroism.
In this study, we aim to manipulate the MoS2 monolayer spin structure in the vicinity of the K and K' points to explore the possibility of gaining control over the optical and electronic properties.
We focus on two different substrates to demonstrate two distinct routes: a gold substrate to introduce a Rashba effect and a graphene/cobalt substrate to introduce a magnetic proximity effect in MoS2.
The Rashba effect is proportional to the out-of-plane projection of the electric field gradient. Such a strong change of the electric field occurs at the surfaces of high-atomic-number materials and effectively influences conduction electrons like an in-plane magnetic field. Molybdenum and sulfur are relatively light atoms; thus, similar to many other 2D materials, the intrinsic Rashba effect in the MoS2 monolayer is vanishingly small. However, the proximity of a high-atomic-number substrate may enhance the Rashba effect in a 2D material, as was demonstrated previously for graphene.
Another way to modify the spin structure is to apply an external magnetic field of high magnitude (several tesla), causing a Zeeman splitting of the conduction electrons.
However, a similar effect can be achieved via magnetic proximity, which allows the external magnetic field to be reduced significantly or even to zero. The graphene-on-cobalt interface is ferromagnetic and stable during MoS2 monolayer synthesis. Cobalt is not the strongest magnet; therefore, stronger magnets may lead to more significant results.
Nowadays most experimental studies on the dichalcogenides (MoS2 included) are performed on encapsulated heterostructures that are produced by mechanical exfoliation.
While mechanical exfoliation (or the scotch-tape method) allows a huge variety of structures to be produced, the shape and size of the samples, as well as the distance between layers in heterostructures, are impossible to control reproducibly.
In our study we used molecular beam epitaxy (MBE) methods to synthesise both MoS2/Au(111) and MoS2/graphene/Co systems.
We chose MBE because it is a scalable and reproducible approach that industry may later adopt.
We used graphene/cobalt instead of a bare cobalt substrate because direct contact between a MoS2 monolayer and a metallic substrate may lead to photoluminescence (PL) quenching. Graphene and the hexagonal boron nitride monolayer are considered building blocks of a new generation of electronics and are also commonly used as encapsulating materials for PL studies. Moreover, graphene has proved to be a suitable substrate for the MBE growth of transition metal dichalcogenides (TMDCs).
In chapter 1, we start with an introduction to TMDCs. Then we focus on the state of the art of MoS2 monolayer research in the fields of application scenarios; synthesis approaches; electronic, spin, and optical properties; and interactions with magnetic fields and magnetic materials.
We briefly touch on the basics of magnetism in solids and move on to discuss various magnetic exchange interactions and the magnetic proximity effect.
Then we describe MoS2 optical properties in more detail. We start from basic exciton physics and its manifestation in the MoS2 monolayer. We consider optical selection rules in the MoS2 monolayer and such properties as chirality, spin-valley locking, and the coexistence of bright and dark excitons.
Chapter 2 contains an overview of the employed surface science methods: angle-integrated, angle-resolved, and spin-resolved photoemission; low energy electron diffraction and scanning tunneling microscopy.
In chapter 3, we describe MoS2 monolayer synthesis details for two substrates: gold monocrystal with (111) surface and graphene on cobalt thin film with Co(111) surface orientation.
The synthesis descriptions are followed by a detailed characterisation of the obtained structures: fingerprints of MoS2 monolayer formation; MoS2 monolayer symmetry and its relation to the substrate below; characterisation of MoS2 monolayer coverage, domain distribution, sizes and shapes, and moire structures.
In chapter 4, we start our discussion with the MoS2/Au(111) electronic and spin structure. Combining density functional theory (DFT) computations and spin-resolved photoemission studies, we demonstrate that the MoS2 monolayer band structure features an in-plane Rashba spin splitting. This confirms the possibility of manipulating the MoS2 monolayer spin structure via a substrate.
Then we investigate the influence of a magnetic proximity in the MoS2/graphene/Co system on the MoS2 monolayer spin structure.
We focus our investigation on the MoS2 high-symmetry points Γ and K.
First, using spin-resolved measurements, we confirm that electronic states at the Γ point are spin-split via the magnetic proximity effect. Second, combining spin-resolved measurements and DFT computations for the MoS2 monolayer in the K point region, we demonstrate the appearance of a small in-plane spin polarisation at the valence band top and predict a full in-plane spin polarisation for the conduction band bottom.
We move forward discussing how these findings are related to the MoS2 monolayer optical properties, in particular the possibility of dark exciton observation. Additionally, we speculate on the control of the MoS2 valley energy via magnetic proximity from cobalt.
As graphene is spatially buffering the MoS2 monolayer from the Co thin film, we speculate on the role of graphene in the magnetic proximity transfer by replacing graphene with vacuum and other 2D materials in our computations.
We finish our discussion by investigating the K-doped MoS2/graphene/Co system and the influence of this doping on the electronic and spin structure as well as on the magnetic proximity effect.
In summary, using a scalable MBE approach we synthesised the MoS2/Au(111) and MoS2/graphene/Co systems. We found a Rashba effect in MoS2/Au(111), which proves that the MoS2 monolayer in-plane spin structure can be modified. In MoS2/graphene/Co, the in-plane magnetic proximity effect indeed takes place, which raises the possibility of fine-tuning the MoS2 optical properties via manipulation of the substrate magnetisation.
Creative intensive processes
(2023)
Creativity – developing something new and useful – is a constant challenge in the working world. Work processes, services, or products must be sensibly adapted to changing times. To be able to analyze and, if necessary, adapt creativity in work processes, a precise understanding of these creative activities is necessary. Process modeling techniques are often used to capture business processes, represent them graphically, and analyze them for adaptation possibilities. So far, this has only been possible to a very limited extent for creative work. An accurate understanding of creative work faces the challenge that, on the one hand, such work is usually very complex and iterative; on the other hand, it is at least partially unpredictable, as new things emerge. How can the complexity of creative business processes be adequately addressed and simultaneously kept manageable? This dissertation attempts to answer this question by first developing a precise process understanding of creative work. In an interdisciplinary approach, the literature on the process description of creativity-intensive work is analyzed from the perspectives of psychology, organizational studies, and business informatics. In addition, a digital ethnographic study in the context of software development is used to analyze creative work. A model is developed on the basis of which four elementary process components can be analyzed: Intention of the creative activity, Creation to develop the new, Evaluation to assess its meaningfulness, and Planning of the activities arising in the process – in short, the ICEP model. These four process elements are then translated into the Knowledge Modeling and Description Language (KMDL), which was developed to capture and represent knowledge-intensive business processes. The modeling extension based on the ICEP model enables creative business processes to be identified and specified without the need for extensive modeling of all process details.
The modeling extension proposed here was developed using ethnographic data and then applied to other organizational process contexts. The modeling method was applied to other business contexts and evaluated by external parties as part of two expert studies. The developed ICEP model provides an analytical framework for complex creative work processes. It can be comprehensively integrated into process models by transforming it into a modeling method, thus expanding the understanding of existing creative work in as-is process analyses.
The collaboration-based professional development approach Lesson Study (LS), which has its roots in the Japanese education system, has gained international recognition over the past three decades and spread quickly throughout the world. LS is a collaborative approach to professional development (PD) that incorporates multiple characteristics that have been identified in the research literature as key to effective PD. Specifically, LS is a long-term process that consists of subsequent inquiry cycles, it is site-based and integrated in teachers’ practice, it encourages collaboration and reflection, places a strong emphasis on student learning, and it typically involves external experts that support the process or offer additional insights.
As LS integrates all these characteristics, it has rapidly gained international popularity since the turn of the 21st century and is currently being practiced in over 40 countries around the world. This international borrowing of the idea of LS to new national contexts has given rise to a research field that aims to investigate the effectiveness of LS on teacher learning as well as the circumstances and mechanisms that make LS effective in various settings around the world. Such research is important, as borrowing educational innovations and adapting them to new contexts can be a challenging process. Educational innovations that fail to deliver the expected outcomes tend to be abandoned prematurely and before they have been completely understood or a substantial research base has been established.
In order to prevent LS from early abandonment, Lewis and colleagues outlined three critical research needs in 2006, not long after LS was initially introduced to the United States. These research needs included (1) developing a descriptive knowledge base on LS, (2) examining the mechanisms by which teachers learn through LS, and (3) using design-based research cycles to analyze and improve LS.
This dissertation set out to take stock of the progress that has been made on these research needs over the past 20 years. The scoping review conducted for the framework of this dissertation indicates that, while a large and international knowledge base has been developed, the field has not yet produced reliable evidence of the effectiveness of LS. Based on the scoping review, this dissertation makes the case that Lewis et al.’s (2006) critical research needs should be updated. In order to do so, a number of limitations to the current knowledge base on LS need to be addressed. These limitations include (1) the frequent lack of comparable and replicable descriptions of the LS intervention in publications, (2) the incoherent use or lack of use of theoretical frameworks to explain teacher learning through LS, (3) the inconsistent use of terminology and concepts, and (4) the lack of scientific rigor in research studies and of established ways or tools to measure the effectiveness of LS.
This dissertation aims to advance the critical research needs in the field by examining the extent and nature of these limitations in three research studies. The focus of these studies lies on the LS stages of observation and reflection, as these stages have a high potential to facilitate teacher learning. The first study uses a mixed-method design to examine how teachers at German primary schools reflect critically together. The study derives a theory-based definition of critical and collaborative reflection in order to re-frame the reflection element in LS.
The second study, a systematic review of 129 articles on LS, assesses how transparent research articles are in reporting how teachers observed and reflected together. In addition, it investigates whether these articles provide any kind of theorization for the stages of observation and reflection.
The third study proposes a conceptual model for the field of LS that is based on existing models of continuous professional development and research findings on team effectiveness and collaboration. The model describes the dimensions of input, mediating mechanisms, and outcomes in order to provide a conceptual grid to teachers’ continuous professional development through LS.
Visual perception is a complex and dynamic process that plays a crucial role in how we perceive and interact with the world. The eyes move in a sequence of saccades and fixations, actively modulating perception by moving different parts of the visual world into focus. Eye movement behavior can therefore offer rich insights into the underlying cognitive mechanisms and decision processes. Computational models in combination with a rigorous statistical framework are critical for advancing our understanding in this field, facilitating the testing of theory-driven predictions and accounting for observed data. In this thesis, I investigate eye movement behavior through the development of two mechanistic, generative, and theory-driven models. The first model is based on experimental research regarding the distribution of attention, particularly around the time of a saccade, and explains statistical characteristics of scan paths. The second model implements a self-avoiding random walk within a confining potential to represent the microscopic fixational drift, which is present even while the eye is at rest, and investigates the relationship to microsaccades. Both models are implemented in a likelihood-based framework, which supports the use of data assimilation methods to perform Bayesian parameter inference at the level of individual participants, analyses of the marginal posteriors of the interpretable parameters, model comparisons, and posterior predictive checks. The application of these methods enables a thorough investigation of individual variability in the space of parameters. Results show that dynamical modeling and the data assimilation framework are highly suitable for eye movement research and, more generally, for cognitive modeling.
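The second model's core mechanism, a self-avoiding random walk in a confining potential, can be illustrated with a toy lattice simulation: the walker avoids recently visited sites via a slowly decaying activation field, while a quadratic potential pulls it back toward the centre. This is only a schematic sketch; the parameter names (`lam`, `eps`) and all numerical values are illustrative assumptions, not the dissertation's model.

```python
import numpy as np

def fixational_drift(steps=1000, L=51, lam=1.0, eps=1e-3, seed=0):
    """Toy self-avoiding random walk in a quadratic confining potential.

    At each step the walker moves to the neighbouring lattice site with the
    lowest combined activation h (recently visited sites are penalized) and
    potential u (distance from the lattice centre); activations decay slowly.
    Returns the visited path as an array of (row, col) positions.
    """
    rng = np.random.default_rng(seed)
    c = L // 2
    ii, jj = np.mgrid[0:L, 0:L]
    u = lam * ((ii - c) ** 2 + (jj - c) ** 2) / c**2  # confining potential
    h = rng.random((L, L))                            # activation (memory) field
    pos = (c, c)
    path = [pos]
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    for _ in range(steps):
        i, j = pos
        h[i, j] += 1.0  # mark the current site as visited
        cand = [(i + di, j + dj) for di, dj in moves
                if 0 <= i + di < L and 0 <= j + dj < L]
        # self-avoidance + confinement: pick the cheapest neighbour
        pos = min(cand, key=lambda q: h[q] + u[q])
        h *= 1.0 - eps  # activations decay slowly over time
        path.append(pos)
    return np.array(path)
```

Because visited sites accumulate activation while the potential grows quadratically, the trajectory wanders yet stays confined near the centre, qualitatively resembling fixational drift.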
The Andes reflect Cenozoic deformation and uplift along the South American margin in the context of regional shortening associated with the interaction between the subducting Nazca plate and the overriding continental South American plate. Simultaneously, multiple levels of uplifted marine terraces constitute laterally continuous geomorphic features related to the accumulation of permanent forearc deformation in the coastal realm. However, the mechanisms responsible for permanent coastal uplift and the persistency of current/decadal deformation patterns over millennial timescales are still not fully understood. This dissertation presents a continental-scale database of last interglacial terrace elevations and uplift rates along the South American coast that provides the basis for an analysis of a variety of mechanisms that are possibly responsible for the accumulation of permanent coastal uplift. Regional-scale mapping and analysis of multiple, late Pleistocene terrace levels in central Chile furthermore provide valuable insights regarding the persistency of current seismic asperities, the role of upper-plate faulting, and the impact of bathymetric ridges on permanent forearc deformation.
The database of last interglacial terrace elevations reveals an almost continuous signal of background-uplift rates along the South American coast at ~0.22 mm/yr that is modified by various short- to long-wavelength changes. Spatial correlations with crustal faults and subducted bathymetric ridges suggest long-term deformation to be affected by these features, while the latitudinal variability of climate forcing factors has a profound impact on the generation and preservation of marine terraces. Systematic wavelength analyses and comparisons of the terrace-uplift rate signal with different tectonic parameters reveal short-wavelength deformation to result from crustal faulting, while intermediate- to long-wavelength deformation might indicate various extents of long-term seismotectonic segments on the megathrust, which are at least partially controlled by the subduction of bathymetric anomalies. The observed signal of background-uplift rate is likely accumulated by moderate earthquakes near the Moho, suggesting multiple, spatiotemporally distinct phases of uplift that manifest as a continuous uplift signal over millennial timescales.
Various levels of late Pleistocene marine terraces in the 2015 M8.3 Illapel-earthquake area reveal a range of uplift rates between 0.1 and 0.6 mm/yr and indicate decreasing uplift rates since ~400 ka. These glacial-cycle uplift rates do not correlate with current or decadal estimates of coastal deformation, suggesting that seismic asperities are not persistent features on the megathrust that control the accumulation of permanent forearc deformation over long timescales of 10⁵ years. Trench-parallel, crustal normal faults modulate the characteristics of permanent forearc deformation; upper-plate extension likely represents a second-order phenomenon resulting from subduction erosion and subsequent underplating that lead to regional tectonic uplift and local gravitational collapse of the forearc. In addition, variable activity with respect to the subduction of the Juan Fernández Ridge can be detected in the upper plate over the course of multiple interglacial periods, emphasizing the role of bathymetric anomalies in causing local increases in terrace-uplift rate. This thesis therefore provides new insights into the current understanding of subduction-zone processes and the dynamics of coastal forearc deformation, whose different interacting forcing factors impact the topographic and geomorphic evolution of the western South American coast.
Defining the metaverse
(2023)
The term Metaverse is emerging as a result of the late push by multinational technology conglomerates and a recent surge of interest in Web 3.0, Blockchain, NFT, and Cryptocurrencies. From a scientific point of view, there is no definite consensus on what the Metaverse will be like. This paper collects, analyzes, and synthesizes scientific definitions and the accompanying major characteristics of the Metaverse using the methodology of a Systematic Literature Review (SLR). Two revised definitions for the Metaverse are presented, both condensing the key attributes. The first one is rather simplistic and holistic, describing “a three-dimensional online environment in which users represented by avatars interact with each other in virtual spaces decoupled from the real physical world”. In contrast, the second definition is specified in a more detailed manner in the paper and further discussed. These comprehensive definitions offer specialized and general scholars an application within and beyond the scientific contexts of systems science, information systems science, computer science, and business informatics, also introducing open research challenges. Furthermore, an outlook on the social, economic, and technical implications is given, and the preconditions that are necessary for a successful implementation are discussed.
Digital technologies have enabled a variety of learning offers, such as MOOCs, that have opened new challenges in terms of the recognition of formal, informal and non-formal learning.
This paper focuses on how providing relevant data to describe a MOOC can increase the transparency of information and, ultimately, the flexibility of European higher education.
The EU-funded project ECCOE took up these challenges and developed a solution by identifying the most relevant descriptors of a learning opportunity with a view to supporting a European system for micro-credentials. Descriptors indicate the specific properties of a learning opportunity according to European standards. They can also provide a recognition framework for small volumes of learning (micro-credentials), supporting the integration of non-formal learning (MOOCs) into formal learning (e.g. institutional university courses) and tackling skills shortages, upskilling and reskilling through the acquisition of relevant competencies. The focus on learning outcomes can facilitate the recognition of students' skills and competences and enhance both virtual and physical mobility and employability.
This paper presents two contexts where ECCOE descriptors have been adopted: the Politecnico di Milano MOOC platform (Polimi Open Knowledge – POK), which is using these descriptors as the standard information to document the features of its learning opportunities, and the EU-funded Uforest project on urban forestry, which developed a blended training program for students of partner universities whose MOOCs used the ECCOE descriptors.
Practice with ECCOE descriptors shows how they can be used not only to detail MOOC features, but also as a compass to design the learning offer. In addition, some rules of thumb can be derived and applied when using specific descriptors.
Recent research suggests that design thinking practices may foster the development of needed capabilities in new digitalised landscapes. However, existing publications represent individual contributions, and we lack a holistic understanding of the value of design thinking in a digital world. No review, to date, has offered a holistic retrospection of this research. In response, in this bibliometric review, we aim to shed light on the intellectual structure of multidisciplinary design thinking literature related to capabilities relevant to the digital world in higher education and business settings, highlight current trends and suggest further studies to advance theoretical and empirical underpinnings. Our study addresses this aim using bibliometric methods—bibliographic coupling and co-word analysis as they are particularly suitable for identifying current trends and future research priorities at the forefront of the research. Overall, bibliometric analyses of the publications dealing with the related topics published in the last 10 years (extracted from the Web of Science database) expose six trends and two possible future research developments highlighting the expanding scope of the design thinking scientific field related to capabilities required for the (more sustainable and human-centric) digital world. Relatedly, design thinking becomes a relevant approach to be included in higher education curricula and human resources training to prepare students and workers for the changing work demands. This paper is well-suited for education and business practitioners seeking to embed design thinking capabilities in their curricula and for design thinking and other scholars wanting to understand the field and possible directions for future research.
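As a toy illustration of the bibliographic-coupling method mentioned above (not the review's actual pipeline), coupling strength can be computed as the overlap of two publications' reference lists; all paper and reference IDs below are invented placeholders:

```python
# Minimal sketch of bibliographic coupling: two publications are "coupled"
# when their reference lists overlap. IDs are invented, not review data.

from itertools import combinations

def coupling_strength(refs_a, refs_b):
    """Number of references shared by two publications."""
    return len(set(refs_a) & set(refs_b))

# Toy corpus: publication -> set of cited references
corpus = {
    "P1": {"R1", "R2", "R3"},
    "P2": {"R2", "R3", "R4"},
    "P3": {"R5"},
}

# Pairwise coupling matrix (keeping only non-zero pairs)
pairs = {
    (a, b): coupling_strength(corpus[a], corpus[b])
    for a, b in combinations(sorted(corpus), 2)
    if coupling_strength(corpus[a], corpus[b]) > 0
}
print(pairs)  # {('P1', 'P2'): 2}
```

Clustering such coupling scores is what groups publications into the research trends the review reports.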
Design Thinking is a human-centered approach to innovation that has become increasingly popular worldwide over the last decade. While the spread of Design Thinking is well understood and documented in Western cultural contexts, particularly in Europe and the US owing to the popularity of the Stanford-Potsdam Design Thinking education model, this is not the case in non-Western cultural contexts. This thesis fills a gap identified in the literature regarding how Design Thinking emerged, was perceived, adopted, and practiced in the Arab world. The culture in that part of the world differs from that of the Western context, which affects people's mindsets and how they interact with Design Thinking tools and methods.
A mixed-methods research approach was followed in which both quantitative and qualitative methods were employed. First, two methods were used in the quantitative phase: a social media analysis using Twitter as a source of data, and an online questionnaire. The results and analysis of the quantitative data informed the design of the qualitative phase in which two methods were employed: ten semi-structured interviews, and participant observation of seven Design Thinking training events.
According to the analyzed data, the Arab world appears to have seen an early, though relatively weak and slow, adoption of Design Thinking since 2006. Increasing adoption, however, has been witnessed over the last decade, especially in Saudi Arabia, the United Arab Emirates and Egypt. The results also show that, despite its limited spread, Design Thinking has been practiced most in education, information technology and communication, administrative services, and the non-profit sector. The way it is practiced, though, is not fully aligned with how it is practiced and taught in the US and Europe, as most people in the region do not necessarily subscribe to all the mindset attributes introduced by the Stanford-Potsdam tradition.
Practitioners in the Arab world also seem to shy away from the 'wild side' of Design Thinking in particular, and do not fully appreciate the connection between art and design on the one hand and science and engineering on the other. This calls into question the role of educational institutions in the region, since, according to the findings, they appear to be leading the movement in promoting and developing Design Thinking in the Arab world. Nonetheless, it is notable that people seem to be aware of the positive impact of applying Design Thinking in the region and of its potential to bring meaningful transformation. However, they also seem concerned that current cultural, social, political, and economic conditions may hinder this transformation. Therefore, they call for more awareness and demand the creation of Arabic, culturally appropriate programs that respond to local needs. On another note, the lack of Arabic content and of local case studies on Design Thinking was identified by several interviewees, and confirmed by the participant observation, as a major challenge that is slowing down the spread of Design Thinking or sometimes hampering capacity building in the region. Other challenges revealed by the study are: changing people's mindsets, the lack of dedicated Design Thinking spaces, and the need for clear instructions on how to apply Design Thinking methods and activities. The concept of time and how Arabs deal with it, gender management during trainings, and hierarchy and power dynamics among training participants are also among the identified challenges. Another key finding is the confirmation of التفكير التصميمي as the Arabic term most widely adopted in the region to refer to Design Thinking, among four other Arabic terms found to be associated with it.
Based on the findings of the study, the thesis concludes by presenting a list of recommendations on how to overcome the mentioned challenges and what factors should be considered when designing and implementing culturally-customized Design Thinking training in the Arab region.
At the beginning of 2020, with COVID-19, courts of justice worldwide had to move online to continue providing judicial services. Digital technologies materialized court practices in ways unthinkable shortly before the pandemic, creating both resonances and frictions with judicial and legal regulation. A better understanding of the dynamics at play in the digitalization of courts is paramount for designing justice systems that serve their users better, ensure fair and timely dispute resolution, and foster access to justice. Building on three major bodies of literature (e-justice; digitalization and organization studies; and design research), Designing for Digital Justice takes a nuanced approach to account for human and more-than-human agencies.
Using a qualitative approach, I studied in depth the digitalization of Chilean courts during the pandemic, specifically between April 2020 and September 2022. Leveraging a comprehensive source of primary and secondary data, I traced the genealogy of the novel materializations of court practices structured by the possibilities offered by digital technologies. In five case studies, I show in detail how the courts came to 1) work remotely, 2) host hearings via videoconference, 3) engage with users via social media (i.e., Facebook and Chat Messenger), 4) broadcast a show with judges answering users' questions via Facebook Live, and 5) record, stream, and upload judicial hearings to YouTube to fulfil the publicity requirement of criminal hearings. The digitalization of courts during the pandemic is characterized by a suspended normativity, which makes innovation possible yet presents risks. While digital technologies enabled the judiciary to provide services continuously, they also created the risk of displacing traditional judicial and legal regulation.
Contributing to liminal innovation and digitalization research, Designing for Digital Justice theorizes four phases: 1) the pre-digitalization phase, resulting in the development of regulation; 2) the hotspot of digitalization, resulting in the extension of regulation; 3) the digital innovation phase, redeveloping regulation (moving to a new, preliminary phase); and 4) the permanence of temporal practices, displacing regulation. Contributing to design research, Designing for Digital Justice opens new possibilities for innovation in the courts, focusing on different levels to better address the tensions generated by digitalization. Fellow researchers will find in these pages a sound theoretical advancement at the intersection of digitalization and justice, with novel methodological references. Practitioners will benefit from the actionable governance framework, the Designing for Digital Justice Model, which provides three fields of possibilities for action to design better justice systems. Only by taking digital, legal, and social factors into account can we design systems that promote access to justice, the rule of law, and, ultimately, social peace.
Desperados at Sea
(2023)
Pirates are fortune-seeking fighters at sea. Their exploits fire the imaginations of their victims and admirers, drawing a veil over individuals who rarely bear a real name and pursue their adventurous occupations as buccaneers, filibusters, freebooters, privateers, pirates, or corsairs. Piracy, corsairing, and contraband trade were epidemic among the Egyptians and the Phoenicians, the Greeks and the Vikings, the Spaniards and the Ottomans, the Muslims, and the Christians. And the Jews.
Species are adapted to the environment they live in. Today, most environments are subjected to rapid global changes induced by human activity, most prominently land cover and climate changes. Such transformations can cause adjustments or disruptions in various eco-evolutionary processes. The repercussions of this can appear at the population level as shifted ranges and altered abundance patterns. This is where global change effects on species are usually detected first.
To understand how eco-evolutionary processes act and interact to generate patterns of range and abundance, and how these processes themselves are influenced by environmental conditions, spatially-explicit models provide effective tools. They estimate a species' niche as the set of environmental conditions in which it can persist. However, the models currently in most common use rely on static correlative associations established between a set of spatial predictors and observed species distributions. For this, they assume stationary conditions and are therefore unsuitable in contexts of global change. Better equipped are process-based models, which explicitly implement algorithmic representations of eco-evolutionary mechanisms and evaluate their joint dynamics. These models have long been regarded as difficult to parameterise, but increased data availability and improved methods for data integration lessen this challenge. Hence, the goal of this thesis is to further develop process-based models, integrate them into a complete modelling workflow, and provide the tools and guidance for their successful application.
With my thesis, I presented an integrated platform for spatially-explicit eco-evolutionary modelling and provided a workflow for its inverse calibration to observational data. In the first chapter, I introduced RangeShiftR, a software tool that implements an individual-based modelling platform for the statistical programming language R. Its open-source licensing, extensive help pages and available tutorials make it accessible to a wide audience. In the second chapter, I demonstrated a comprehensive workflow for the specification, calibration and validation of RangeShiftR using the example of the red kite in Switzerland. The integration of heterogeneous data sources, such as literature and monitoring data, made it possible to calibrate the model successfully. It was then used to make validated, spatio-temporal predictions of future red kite abundance. The presented workflow can be adapted to any study species for which data are available. In the third chapter, I extended RangeShiftR to directly link demographic processes to climatic predictors. This allowed me to explore the climate-change responses of eight Swiss breeding birds in more detail. Specifically, the model could identify the most influential climatic predictors, delineate areas of projected demographic suitability, and attribute current population trends to contemporary climate change.
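The inverse-calibration idea (tuning process parameters until simulations match monitoring data) can be sketched in miniature with an Approximate Bayesian Computation (ABC) rejection sampler. The logistic growth model, the prior, the distance threshold and all numbers below are invented for illustration; RangeShiftR itself is an R package built around far richer individual-based models:

```python
# Hedged sketch of inverse calibration via ABC rejection sampling:
# draw a parameter from a prior, simulate, keep draws whose simulated
# abundance series stays close to the "observed" one.

import random

def simulate(growth_rate, n0=10.0, capacity=500.0, years=20):
    """Toy logistic population model returning yearly abundances."""
    n, series = n0, []
    for _ in range(years):
        n = n + growth_rate * n * (1.0 - n / capacity)
        series.append(n)
    return series

def distance(sim, obs):
    """Root-mean-square distance between simulated and observed series."""
    return (sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs)) ** 0.5

random.seed(42)
observed = simulate(0.35)  # stand-in for real monitoring data

accepted = []
for _ in range(5000):
    r = random.uniform(0.0, 1.0)          # uniform prior on the growth rate
    if distance(simulate(r), observed) < 5.0:
        accepted.append(r)                # keep draws that fit the data

posterior_mean = sum(accepted) / len(accepted)
print(round(posterior_mean, 2))  # close to the true value 0.35
```

The accepted draws approximate a posterior distribution, so the calibration also conveys parameter uncertainty rather than a single best fit.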
My work shows that the application of complex, process-based models in conservation-relevant contexts is feasible using available tools and data. Such models can be successfully calibrated and outperform other currently used modelling approaches in terms of predictive accuracy. Their projections can be used to predict future abundances or to assess alternative conservation scenarios, and they further improve our mechanistic understanding of niche and range dynamics under climate change. However, only fully mechanistic models that include all relevant processes make it possible to precisely disentangle the effects of single processes on observed abundances. In this respect, the RangeShiftR model still has potential for further extensions that implement missing influential processes, such as species interactions.
Dynamic, process-based models are needed to adequately model a dynamic reality. My work contributes towards the advancement, integration and dissemination of such models. This will facilitate numeric, model-based approaches for species assessments, generate ecological insights and strengthen the reliability of predictions on large spatial scales under changing conditions.
Development of electrochemical antibody-based and enzymatic assays for mycotoxin analysis in food
(2023)
Electrochemical methods are promising for meeting the demand for easy-to-use devices that monitor key parameters in the food industry. Many companies run their own lab procedures for mycotoxin analysis, but simplifying the analysis is a major goal. The enzyme-linked immunosorbent assay using horseradish peroxidase (HRP) as an enzymatic label, together with 3,3',5,5'-tetramethylbenzidine (TMB)/H2O2 as substrates, allows sensitive mycotoxin detection with optical detection methods. To miniaturize the detection step, an electrochemical system for mycotoxin analysis was developed. To this end, the electrochemical detection of TMB was studied by cyclic voltammetry on different screen-printed electrodes (carbon and gold) and at different pH values (pH 1 and pH 4). A stable electrode reaction, the basis for the further construction of the electrochemical detection system, could be achieved at pH 1 on gold electrodes. An amperometric detection method for oxidized TMB, using a custom-made flow cell for screen-printed electrodes, was established and applied to a competitive magnetic bead-based immunoassay for the mycotoxin ochratoxin A. A limit of detection of 150 pM (60 ng/L) was obtained, and the results were verified with optical detection. The applicability of the magnetic bead-based immunoassay was tested in spiked beer using a handheld potentiostat connected via Bluetooth to a smartphone for amperometric detection, allowing ochratoxin A to be quantified down to 1.2 nM (0.5 µg/L).
Based on the developed electrochemical detection system for TMB, the applicability of the approach was demonstrated with a magnetic bead-based immunoassay for the ergot alkaloid ergometrine. Under optimized assay conditions, a limit of detection of 3 nM (1 µg/L) was achieved, and ergometrine levels in a range from 25 to 250 µg/kg could be quantified in spiked rye flour samples. All results were verified with optical detection. The developed electrochemical detection method for TMB holds great promise for the detection of TMB in many other HRP-based assays.
A new sensing approach, based on an enzymatic electrochemical detection system for the mycotoxin fumonisin B1, was established using an Aspergillus niger fumonisin amine oxidase (AnFAO). AnFAO was produced recombinantly in E. coli as a maltose-binding protein fusion and catalyzes the oxidative deamination of fumonisins, producing hydrogen peroxide. AnFAO was found to have high storage and temperature stability. The enzyme was coupled covalently to magnetic particles, and the H2O2 produced enzymatically in the reaction with fumonisin B1 was detected amperometrically in a flow-injection system using Prussian blue/carbon electrodes and the custom-made wall-jet flow cell. Fumonisin B1 could be quantified down to 1.5 µM (≈ 1 mg/L). The developed system represents a new approach to detecting mycotoxins using enzymes and electrochemical methods.
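As a generic aside, limits of detection like those reported above are often derived from a calibration line via the common 3σ/slope convention. The sketch below uses invented current/concentration values, not the thesis data:

```python
# Sketch of a limit-of-detection (LOD) estimate from a calibration line,
# using the widespread LOD = 3 * sigma_blank / slope convention.
# All numbers are invented for illustration.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Toy calibration: analyte concentration (nM) vs. amperometric signal (nA)
conc   = [0.0, 0.5, 1.0, 2.0, 4.0]
signal = [0.10, 0.62, 1.08, 2.11, 4.05]
slope, intercept = linear_fit(conc, signal)

sigma_blank = 0.05            # std. deviation of repeated blank readings
lod = 3 * sigma_blank / slope # LOD in concentration units (nM)
print(round(lod, 3))          # → 0.152
```

The same arithmetic applies regardless of whether the signal is read optically or amperometrically; only the calibration data change.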
Digital technology offers significant political, economic, and societal opportunities. At the same time, the notion of digital sovereignty has become a leitmotif in German discourse: the state’s capacity to assume its responsibilities and safeguard society’s – and individuals’ – ability to shape the digital transformation in a self-determined way. The education sector is exemplary for the challenge faced by Germany, and indeed Europe, of harnessing the benefits of digital technology while navigating concerns around sovereignty. It encompasses education as a core public good, a rapidly growing field of business, and growing pools of highly sensitive personal data. The report describes pathways to mitigating the tension between digitalization and sovereignty at three different levels – state, economy, and individual – through the lens of concrete technical projects in the education sector: the HPI Schul-Cloud (state sovereignty), the MERLOT data spaces (economic sovereignty), and the openHPI platform (individual sovereignty).
Digitalization, as well as sustainability, is gaining increased relevance and has attracted significant attention in research and practice. However, published research examining digitalization in the retail sector considers neither the acceptance of related innovations nor their impact on sustainability. Therefore, this article critically analyzes customers' acceptance of digital technologies in fashion stores as well as their impact on sustainability in the textile industry. A comprehensive analysis of the literature and the current state of research provides the basis of this paper. Theoretical models, such as the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology 2 (UTAUT 2), enable the evaluation of expectations and acceptance, as well as the assessment of possible inhibitory factors, for the subsequent descriptive and statistical examination of the acceptance of digital technologies in fashion stores. The subject was examined quantitatively. The key findings show that customers do accept digital technologies in fashion stores. The final part of this contribution describes the innovative Digitalization 4 Sustainability Framework, which shows that digital technologies at the point of sale (PoS) in fashion stores could have a positive impact on sustainability. Overall, this paper shows that it is particularly important for fashion stores to concentrate on their individual strengths and customer needs and to indicate a more sustainable way of using digital technologies, in order to achieve added value for customers and to set themselves apart from the competition while designing a more sustainable future. Moreover, fashion stores should make it a point of honor to harness the power of digitalization for the sake of sustainability and economic value creation.
In the last two decades, process mining has developed from a niche discipline into a significant research area with considerable impact on academia and industry. Process mining enables organisations to identify their running business processes from historical execution data. The first requirement of any process mining technique is an event log, an artifact that represents concrete business process executions in the form of sequences of events. These logs can be extracted from the organisation's information systems and are used by process experts to gain deep insights into the organisation's running processes. From the events in such logs, process models can be automatically discovered and enhanced or annotated with performance-related information. Besides behavioral information, event logs contain domain-specific data, albeit implicitly. However, such data are usually overlooked and, thus, not utilized to their full potential.
Within the process mining area, this thesis addresses the research gap of discovering, from event logs, contextual information that cannot be captured by existing process mining techniques. Within this gap, we identify four key problems and tackle them by looking at an event log from different angles. First, we address the problem of deriving an event log in the absence of proper database access and domain knowledge. The second problem is the under-utilization of the implicit domain knowledge present in an event log, which could increase the understandability of the discovered process model. Next, there is a lack of a holistic representation of historical data manipulation at the process-model level of abstraction. Last but not least, each process model discovered from an event log is presumed to be independent of other process models, thus ignoring possible data dependencies between processes within an organisation.
For each of the problems mentioned above, this thesis proposes a dedicated method. The first method provides a solution to extract an event log only from the transactions performed on the database that are stored in the form of redo logs. The second method deals with discovering the underlying data model that is implicitly embedded in the event log, thus, complementing the discovered process model with important domain knowledge information. The third method captures, on the process model level, how the data affects the running process instances. Lastly, the fourth method is about the discovery of the relations between business processes (i.e., how they exchange data) from a set of event logs and explicitly representing such complex interdependencies in a business process architecture.
All the methods introduced in this thesis are implemented as prototypes, and their feasibility is demonstrated by applying them to real-life event logs.
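As a minimal illustration of the event-log concept (not the dedicated methods developed in the thesis), the directly-follows relation that underlies many discovery algorithms can be counted from traces in a few lines; the log below is invented:

```python
# Toy event log: each trace is the ordered activity sequence of one
# process instance (e.g. one purchase order). Counting which activity
# directly follows which yields a directly-follows graph (DFG), a common
# first step in process discovery.

from collections import Counter

event_log = [
    ["register", "check", "approve", "archive"],
    ["register", "check", "reject"],
    ["register", "check", "approve", "archive"],
]

# Count directly-follows pairs (a, b): activity b directly succeeds a.
dfg = Counter(
    (a, b)
    for trace in event_log
    for a, b in zip(trace, trace[1:])
)
print(dfg[("register", "check")])  # → 3
```

Real event logs additionally carry timestamps and domain-specific attributes per event, which is exactly the implicit data the thesis sets out to exploit.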
In this work, the role of the TusA protein in cell functionality and FtsZ ring assembly in Escherichia coli was investigated. TusA is the tRNA-2-thiouridine synthase that acts as a sulfur transferase in tRNA thiolation, forming 2-thiouridine at position 34 (the wobble base) of tRNA-Lys, tRNA-Glu and tRNA-Gln. It binds the persulfide form of sulfur and transfers it to further proteins during mnm5s2U tRNA modification at the wobble position and during Moco biosynthesis. This thiomodification makes ribosome binding more efficient and averts frameshifting during protein translation. Previous studies revealed an essential role of TusA in bacterial cell physiology: deletion of the tusA gene resulted in retarded growth and filamentous cells during the exponential growth phase in rich medium, a phenotype that suddenly disappeared during the stationary phase. This points to a problem in the cell division process. Therefore, the focus of this work was to investigate the role of TusA in cell functionality and FtsZ ring formation, and thus in cell separation.
The reason for the filamentous growth of the tusA mutant strain was investigated by growth and morphological analyses. ΔtusA cells showed retarded growth during the exponential phase compared to the WT strain, and morphological analysis of ΔtusA cells confirmed the filamentous cell shape. The growth and cell division defects in ΔtusA pointed to a defect involving FtsZ, the key player of cell division. Microscopic investigation revealed that filamentous ΔtusA cells possessed multiple DNA regions arranged next to each other. This suggests that, although DNA replication occurred correctly, there is a defect at the step where FtsZ should act; probably FtsZ is unable to assemble into the ring structure, or the assembled ring is unable to constrict. All tested mutant strains involved in the mnm5s2U34 tRNA modification pathway (ΔtusD, ΔtusE and ΔmnmA) shared the retarded growth and filamentous cell shape of the ΔtusA strain. Thus, the cell division defect arises from a defect in mnm5s2U34 tRNA thiolation.
Since FtsZ ring formation was presumed to be defective in the filaments, a possible intracellular interaction of TusA and FtsZ was examined by expressing fluorescent (EGFP and mCherry) fusion proteins and by FRET. FtsZ-expressing tusA mutant (DE3) cells showed a red mCherry signal at the cell poles, indicating that FtsZ was still in the assembling phase. Interestingly, the cellular distribution of the EGFP-TusA fusion protein expressed in ΔtusA (DE3) was conspicuous: the EGFP signal was spread throughout the whole cell and, in addition, a slight accumulation of EGFP-TusA fluorescence was detectable at the cell poles, the same part of the cell as for mCherry-FtsZ. This strongly suggests an interaction of TusA and FtsZ.
Furthermore, the cellular FtsZ and Fis concentrations, and their changes during different growth phases, were determined via immunoblotting. All tested deletion strains of the mnm5s2U34 tRNA modification pathway showed high cellular FtsZ and Fis levels in the exponential phase, with this peak shifted towards later growth phases. The shift reflects the retarded growth, whereby the deletion strains reach the exponential phase later. In conclusion, the growth and cell division defect, and thus the formation of filaments, is most likely caused by changes in the cellular FtsZ and Fis concentrations.
Finally, the translation efficiencies of certain proteins (RpoS, Fur, Fis and mFis) in the tusA mutant and in additional gene deletion strains were studied to determine whether they are affected by the use of unmodified U34 tRNAs for Lys, Glu and Gln. In strains impaired in mnm5s2U34 tRNA modification, translation efficiency is decreased, in addition to their existing growth and cell division defect, reflecting the dependence of these transcripts on the three affected amino acids. These results confirm and reinforce the importance of Lys, Glu and Gln and of mnm5s2U34 tRNA thiolation for efficient protein translation. Thus, these findings verify that the translation of fur, fis and rpoS is regulated by mnm5s2U34 tRNA modifications in a growth phase-dependent manner.
In total, this work demonstrated the importance of TusA for bacterial cell functionality and physiology. Deletion of the tusA gene disrupts a complex regulatory network within the cell that is influenced most by the decreased translation of Fis and RpoS, caused by the absence of mnm5s2U34 tRNA modifications. The disruption of the RpoS and Fis cellular network in turn influences the cellular FtsZ level in the early exponential phase. Finally, the reduced FtsZ concentration leads to elongated, filamentous E. coli cells that are unable to divide.
Divergent thinking is the ability to produce numerous and diverse responses to questions or tasks, and it is used as a predictor of creative achievement. It plays a significant role in the business organization’s innovation process and the recognition of new business opportunities. Drawing upon the cumulative process model of creativity in entrepreneurship, we hypothesize that divergent thinking has a lasting effect on post-launch entrepreneurial outcomes related to innovation and growth, but that this relation might not always be linear. Additionally, we hypothesize that domain-specific experience has a moderating role in this relation. We test our hypotheses based on a representative longitudinal sample of 457 German business founders, which we observe up until 40 months after start-up. We find strong relative effects for innovation and growth outcomes. For survival we find conclusive evidence for non-linearities in the effects of divergent thinking. Additionally, we show that such effects are moderated by the type of domain-specific experience that entrepreneurs gathered pre-launch, as it shapes the individual’s ideational abilities to fit into more sophisticated strategies regarding entrepreneurial creative achievement. Our findings have relevant policy implications in characterizing and identifying business start-ups with growth and innovation potential, allowing a more efficient allocation of public and private funds.
River flooding is a constant peril for societies, causing direct economic losses in the order of $100 billion worldwide each year. Under global change, the prolonged concentration of people and assets in floodplains is accompanied by an emerging intensification of flood extremes due to anthropogenic global warming, ultimately exacerbating flood risk in many regions of the world.
Flood adaptation plays a key role in the mitigation of impacts, but poor understanding of vulnerability and its dynamics limits the validity of predominant risk assessment methods and impedes effective adaptation strategies. Therefore, this thesis investigates new methods for flood risk assessment that embrace the complexity of flood vulnerability, using the understudied commercial sector as an application example.
Despite its importance for accurate risk evaluation, flood loss modeling has long been based on univariable and deterministic stage-damage functions. However, such simplistic methods only insufficiently describe the large variation in damage processes, which initiated the development of multivariable and probabilistic loss estimation techniques. The first study of this thesis developed flood loss models for companies based on emerging statistical and machine learning approaches (i.e., random forest, Bayesian network, Bayesian regression). In a benchmarking experiment on the basis of object-level loss survey data, the study showed that all proposed models reproduced the heterogeneity in damage processes and outperformed conventional stage-damage functions with respect to predictive accuracy. Another advantage of the novel methods is that their predictions convey probabilistic information, which communicates the large remaining uncertainties transparently and hence supports well-informed risk assessment.
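The contrast the study draws can be sketched as follows: a deterministic stage-damage function returns a single loss ratio per water depth, while a probabilistic model reports a distribution. The depth-damage rule, the survey values and the bootstrap below are illustrative assumptions, not the thesis models:

```python
# Deterministic vs. probabilistic loss estimation, as a toy sketch.
# All numbers are invented for illustration.

import random

def stage_damage(depth_m):
    """Univariable, deterministic: loss ratio grows with water depth."""
    return min(1.0, 0.2 * depth_m)

# Toy survey data: observed loss ratios of companies flooded at ~1.5 m depth
observed_losses = [0.18, 0.35, 0.22, 0.50, 0.28, 0.41, 0.25, 0.33]

def bootstrap_interval(data, n_boot=2000, seed=1):
    """Probabilistic alternative: bootstrap the mean loss with a ~90% interval."""
    rng = random.Random(seed)
    means = sorted(
        sum(rng.choices(data, k=len(data))) / len(data)
        for _ in range(n_boot)
    )
    return means[len(means) // 2], means[n_boot // 20], means[-(n_boot // 20)]

point = stage_damage(1.5)        # one number, no uncertainty information
median, lo, hi = bootstrap_interval(observed_losses)
print(round(point, 2), round(lo, 2), round(hi, 2))
```

The interval makes the remaining uncertainty visible, which is the practical advantage of probabilistic loss models over a single stage-damage value.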
Flood risk assessment combines vulnerability assessment (e.g., loss estimation) with hazard and exposure analyses. Although all three risk drivers interact and change over time, such dependencies and dynamics are usually not explicitly included in flood risk models. Recently, systemic risk assessment, which dissolves the isolated consideration of risk drivers, has gained traction, but the move to holistic risk assessment has so far come with limited thoroughness in loss estimation and with data limitations. In the second study, I augmented a socio-hydrological system dynamics model for companies in Dresden, Germany, with the multivariable Bayesian regression loss model from the first study. The additional process detail and calibration data improved the loss estimation in the systemic risk assessment framework and contributed to more accurate and reliable simulations. The model uses Bayesian inference to quantify uncertainty and to learn the model parameters from a combination of prior knowledge and diverse data.
The third study demonstrates the potential of the socio-hydrological flood risk model for continuous, long-term risk assessment and management. Using hydroclimatic and socioeconomic forcing data, I projected a wide range of possible risk trajectories until the end of the century, taking into account the adaptive behavior of companies. The study results underline the necessity of increased adaptation efforts to counteract the expected intensification of flood risk due to climate change. A sensitivity analysis of the effectiveness of different adaptation measures and strategies revealed that optimized adaptation has the potential to mitigate flood risk by up to 60%, particularly when combining structural and non-structural measures. Additionally, the application shows that systemic risk assessment is capable of capturing adverse long-term feedbacks in the human-flood system such as the levee effect.
Overall, this thesis advances the representation of vulnerability in flood risk modeling by offering modeling solutions that embrace the complexity of human-flood interactions and quantify uncertainties consistently using probabilistic modeling. The studies show how scarce information from data and previous experiments can be integrated into the inference process to provide model predictions and simulations that are reliable and rich in information. Finally, the focus on the flood vulnerability of companies provides new insights into the heterogeneous damage processes and distinct flood coping of this sector.
Early sensitivity to prosodic phrase boundary cues: Behavioral evidence from German-learning infants
(2023)
This dissertation seeks to shed light on the relation of phrasal prosody and developmental speech perception in German-learning infants. Three independent empirical studies explore the role of acoustic correlates of major prosodic boundaries, specifically pitch change, final lengthening, and pause, in infant boundary perception. Moreover, it was examined whether the sensitivity to prosodic phrase boundary markings changes during the first year of life as a result of perceptual attunement to the ambient language (Aslin & Pisoni, 1980).
Using the headturn preference procedure, six- and eight-month-old monolingual German-learning infants were tested on their discrimination of two different prosodic groupings of the same list of coordinated names, either with or without an internal intonational phrase boundary (IPB) after the second name, that is, [Moni und Lilli] [und Manu] or [Moni und Lilli und Manu]. The boundary marking was systematically varied with respect to single prosodic cues or specific cue combinations.
Results revealed that six- and eight-month-old German-learning infants successfully detect the internal prosodic boundary when it is signaled by all three main boundary cues: pitch change, final lengthening, and pause. For eight-, but not for six-month-olds, the combination of pitch change and final lengthening, without the occurrence of a pause, is sufficient. This mirrors adult-like perception by eight months (Holzgrefe-Lang et al., 2016). Six-month-olds detect a prosodic phrase boundary signaled by final lengthening and pause. The findings suggest a developmental change in German prosodic boundary cue perception, from a strong reliance on the pause cue at six months to a differentiated sensitivity to the more subtle cues pitch change and final lengthening at eight months. For neither six- nor eight-month-olds is the occurrence of pitch change or final lengthening as a single cue sufficient, similar to what has been observed for adult speakers of German (Holzgrefe-Lang et al., 2016).
The present dissertation provides new scientific knowledge on infants’ sensitivity to individual prosodic phrase boundary cues in the first year of life. Methodologically, the studies are pathbreaking since they used exactly the same stimulus materials – phonologically thoroughly controlled lists of names – that have also been used with adults (Holzgrefe-Lang et al., 2016) and with infants in a neurophysiological paradigm (Holzgrefe-Lang, Wellmann, Höhle, & Wartenburger, 2018), allowing for comparisons across age (six/eight months and adults) and method (behavioral vs. neurophysiological). Moreover, the materials are suited to be transferred to other languages, allowing for crosslinguistic comparison. Taken together with a study using similar French materials (van Ommen et al., 2020), the observed change in sensitivity in German-learning infants can be interpreted as a language-specific one: from an initial language-general processing mechanism that primarily focuses on the presence of pauses to a language-specific processing that takes into account the prosodic properties available in the ambient language. The developmental pattern is discussed as an interplay of acoustic salience, prosodic typology (prosodic regularity), and cue reliability.
Conservation of the jaguar relies on holistic and transdisciplinary conservation strategies that integratively safeguard essential, connected habitats, sustain viable populations and their genetic exchange, and foster peaceful human-jaguar coexistence. These strategies define four research priorities to advance jaguar conservation throughout the species’ range. In this thesis I provide several relevant ecological and sociological insights into these research priorities, each addressed in a separate chapter. I focus on the effects of anthropogenic landscapes on jaguar habitat use and population gene flow, on spatial patterns of jaguar habitat suitability and functional population connectivity, and on innovative governance approaches which can work synergistically to help achieve human-wildlife conviviality. Furthermore, I translate these insights into recommendations for conservation practice by providing tools and suggestions that conservation managers and stakeholders can use to implement local actions but also to make broad-scale conservation decisions in Central America. In Chapter 2, I model regional habitat use of jaguars, producing spatially-explicit maps for management of key areas of habitat suitability. Using an occupancy model fitted to 13 years of camera-trap occurrence data, I show that human influence has the strongest impact on jaguar habitat use, and that Jaguar Conservation Units are the most important reservoirs of high-quality habitat in this region. In Chapter 3, I build upon these results by zooming in to northern Central America, an area of high habitat suitability loss. Here I study the drivers of jaguar gene flow, and I produce spatially-explicit maps for management of key areas of functional population connectivity in this region.
I use microsatellite data and pseudo-optimized multiscale, multivariate resistance surfaces of gene flow to show that jaguar gene flow is influenced by environmental variables and, even more strongly, by human influence variables, and that the areas of lowest gene flow resistance largely coincide with the locations of the Jaguar Conservation Units. Given that human activities significantly impact jaguar habitat use and gene flow, securing viable jaguar populations in anthropogenic landscapes also requires fostering peaceful human-wildlife coexistence. This is a complex challenge that cannot be met without transdisciplinary academic research and cross-sectoral, collaborative governance structures that effectively respond to the multiple challenges of such coexistence. With this in mind, I focus in Chapter 4 on carnivore conservation initiatives that apply transformative governance approaches to enact transformative change towards human-carnivore coexistence. Using the frameworks of transformative biodiversity governance and convivial conservation, I highlight in this chapter concrete pathways, supported by more inclusive, democratic forms of conservation decision-making and participation, that promote truly transformative change towards human-jaguar conviviality.
EMOOCs 2023
(2023)
From June 14 to June 16, 2023, Hasso Plattner Institute, Potsdam, hosted the eighth European MOOC Stakeholder Summit (EMOOCs 2023).
The pandemic is fortunately over. It has once again shown how important digital education is. How well prepared a country was could be seen in its schools, universities, and companies. In different countries, the problems manifested themselves differently, and the measures and approaches to solving them varied accordingly. Digital education, whether micro-credentials, MOOCs, blended learning formats, or other e-learning tools, received a major boost.
EMOOCs 2023 focuses on the effects of this emergency situation. How has it affected the development and delivery of MOOCs and other e-learning offerings across Europe? Which projects can serve as models for successful digital learning and teaching? What role can MOOCs and micro-credentials play in the current business transformation? Is there a backlash towards the routines we knew from pre-Corona times? Or have many things, e.g. remote work and hybrid conferences, become firmly established in the meantime?
Furthermore, EMOOCs 2023 takes a closer look at the development and formalization of digital learning. Micro-credentials are just the starting point; further steps in this direction would be complete online study programs or fully online universities.
Another main topic is the networking of learning offers and the standardization of formats and metadata. Examples of fruitful cooperation are the MOOChub, the European MOOC Consortium, and the Common Micro-Credential Framework.
The learnings, derived from practical experience and research, are explored at EMOOCs 2023 in four tracks and additional workshops covering various aspects of this field. In this publication, we present papers from the conference’s Research & Experience Track, the Business Track, and the International Track.
This research focuses on empowering leadership, a leadership style that shares autonomy and responsibilities with the followers. Empowering leadership enhances the meaningfulness of work by fostering participation in decision-making, expressing confidence in high performance, and providing autonomy in target setting (Cheong, 2016). I examine how empowering leadership affects followers’ reflection. Using data from 528 individuals across 172 teams, I found a positive relationship between empowering leadership and followers’ reflection. Followers’ reflection, in turn, is negatively associated with followers’ withdrawal, which mediates the beneficial effect of empowering leadership on leaders’ emotional exhaustion. As for the leaders, I propose that empowering leadership is also negatively related to leaders’ emotional exhaustion. This research broadens our understanding of empowering leadership’s effects on both followers and leaders. Moreover, it integrates the empowering leadership, leader emotional exhaustion, and burnout literatures. Overall, empowering leadership strengthens members’ reflective attitudes and behaviors, which result in reduced withdrawal (and increased presence and contribution) in teams. Because the members contribute more to the team effort, the leaders experience less emotional exhaustion. Hence, my work not only identifies new ways through which empowering leadership positively affects followers but also shows how these positive effects on followers benefit the leaders’ well-being.
Increasing demand for food, healthcare, and transportation arising from the growing world population is accompanied by, and driving, global warming challenges due to the rise of the atmospheric CO2 concentration. Industrialization for human needs has been releasing ever more CO2 into the atmosphere for the last century or more. In recent years, the possibility of recycling CO2 to stabilize the atmospheric CO2 concentration and combat rising temperatures has gained attention. Using CO2 as a feedstock to address future world demands is thus an ultimate solution that also helps curb rapid climate change. Valorizing CO2 to produce activated, stable one-carbon feedstocks like formate and methanol, and feeding these into industrial microbial processes to replace unsustainable feedstocks, would be crucial for a future biobased circular economy. However, not all microbes can grow on formate as a feedstock, and those that can are not well established for industrial processes.
S. cerevisiae is one of the industrially well-established microbes, and it is a significant contributor to bioprocess industries. However, it cannot grow on formate as a sole carbon and energy source. Thus, engineering S. cerevisiae to grow on formate could potentially pave the way to sustainable biomass and value-added chemicals production.
The Reductive Glycine Pathway (RGP), designed as the aerobic twin of the anaerobic Reductive Acetyl-CoA pathway, is an efficient formate and CO2 assimilation pathway. The RGP comprises the glycine synthesis module (Mis1p, Gcv1p, Gcv2p, Gcv3p, and Lpd1p), the glycine-to-serine conversion module (Shmtp), the pyruvate synthesis module (Cha1p), and the energy supply module (Fdh1p). The RGP requires formate and elevated CO2 levels to operate the glycine synthesis module. In this study, I established the RGP in the yeast system using growth-coupled selection strategies to achieve formate- and CO2-dependent biomass formation under aerobic conditions.
Firstly, I constructed serine biosensor strains by disrupting the native serine and glycine biosynthesis routes in the prototrophic S288c and FL100 yeast strains, insulating serine, glycine, and one-carbon metabolism from the central metabolic network. These strains cannot grow on glucose as the sole carbon source but require a supply of serine or glycine to complement the engineered auxotrophies. Using growth as a readout, I employed these strains as selection hosts to establish the RGP. To achieve this, I first engineered different serine hydroxymethyltransferases into the genome of the serine biosensor strains for efficient glycine-to-serine conversion. Then, I implemented the glycine synthesis module of the RGP in these strains for glycine and serine synthesis from formate and CO2. I successfully conducted Adaptive Laboratory Evolution (ALE) using these strains, which yielded a strain capable of glycine and serine biosynthesis from formate and CO2. Significant growth improvements, from 0.0041 h-1 to 0.03695 h-1, were observed during ALE. To validate glycine and serine synthesis, I conducted carbon-tracing experiments with 13C formate and 13CO2, confirming that more than 90% of glycine and serine biosynthesis in the evolved strains occurs via the RGP. Interestingly, the labeling data also revealed that 10-15% of alanine was labeled, indicating pyruvate synthesis from formate-derived serine via native serine deaminase (Cha1p) activity. Thus, the RGP contributes a small pyruvate pool, which is converted to alanine even without selection pressure for pyruvate synthesis from formate. Hence, these data confirm the activity of all three modules of the RGP even in the presence of glucose. Further ALE under glucose-limiting conditions did not improve pyruvate flux via the RGP.
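As a quick sanity check on the reported rates, a specific growth rate μ converts to a doubling time via t_d = ln(2)/μ, assuming exponential growth:

```python
import math

def doubling_time(mu):
    # exponential growth N(t) = N0 * exp(mu * t)  =>  t_d = ln(2) / mu
    return math.log(2) / mu

t_before = doubling_time(0.0041)   # ~169 h before evolution
t_after = doubling_time(0.03695)   # ~19 h after adaptive laboratory evolution
```

So the roughly nine-fold increase in growth rate corresponds to the doubling time shrinking from about a week to under a day.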
Growth characterization of these strains showed that the best growth rates were achieved at formate concentrations between 25 mM and 300 mM. Optimum growth required 5% CO2, and growth dropped when the CO2 concentration was reduced from 5% to 2.5%.
Whole-genome sequencing of the evolved strains revealed mutations in the genes encoding Gdh1p, Pet9p, and Idh1p. These enzymes might influence intracellular NADPH, ATP, and NADH levels, indicating an adjustment to meet the energy demand of the RGP. I reverse-engineered the GDH1 truncation mutation in unevolved serine biosensor strains and reproduced formate-dependent growth. To elucidate the effect of the GDH1 mutation on formate assimilation, I reintroduced this mutation into the S288c strain and conducted carbon-tracing experiments to compare formate assimilation between the WT and ∆gdh1 mutant strains. Enhanced formate assimilation was recorded in the ∆gdh1 mutant strain.
Although the 13C carbon-tracing experiments confirmed the activity of all three modules of the RGP, the overall pyruvate flux via the RGP might be limited by the supply of reducing power. Hence, in a different approach, I overexpressed the formate dehydrogenase (Fdh1p) for energy supply and the serine deaminase (Cha1p) for active pyruvate synthesis in the S288c parental strain and established growth on formate and serine without glucose in the medium. Further reengineering and evolution of this strain, with a consistent supply of energy and formate-derived serine for pyruvate synthesis, is essential to achieve complete formatotrophic growth in the yeast system.
Essays in public economics
(2023)
This cumulative dissertation uses economic theory and micro-econometric tools and evaluation methods to analyse public policies and their impact on welfare and individual behaviour. In particular, it focuses on policies in two distinct areas that represent fundamental societal challenges in the 21st century: the ageing of society and life in densely-populated urban agglomerations. Together, these areas shape important financial decisions in a person's life, impact welfare, and are driving forces behind many of the challenges in today's societies. The five self-contained research chapters of this thesis analyse the forward-looking effects of pension reforms, affordable housing policies, and a public transport subsidy and its effect on air pollution.
This cumulative doctoral thesis consists of five empirical studies examining various aspects of crisis and change from a management-accounting perspective. Within the first study, a bibliometric analysis is conducted. More precisely, based on publications between the financial crisis (since 2007) and the COVID-19 crisis (starting in 2020), the crisis literature in management accounting is investigated to uncover the most influential aspects of the field and to analyze the theoretical foundations of the literature. Moreover, this investigation also serves to identify future research streams and to provide starting points for future research. Based on a survey, the second study investigates the impact of several management-accounting tools on organizational resilience and its effect on a company’s competitive advantage during a crisis. The results show that their target-oriented use positively influences organizational resilience and contributes to the company’s competitive advantage during the crisis. The third study provides a more detailed view on the relationship between budgeting and risk management and their benefit for companies in times of crisis. For this purpose, the relationship between the relevance of budgeting functions and risk management in the company and the corresponding impact on company performance are investigated. The results show a positive relationship. However, a crisis can also affect the relationship between the company and its shareholders: Thus, the fourth study – based on publicly available data and a survey – examines the consequences of virtual annual general meetings on shareholder rights. The results show that, temporarily, particularly the right to information was severely restricted. For the following year, this problem was fixed, and ultimately, the virtual option was introduced permanently. The crisis has thus brought about a lasting change. 
But not only crises cause changes: The fifth study, also based on survey data, investigates the changes in the role of management accountants caused by digitalization. More precisely, it investigates how management accountants deal with tasks that are considered outdated and unattractive. The results of the study show that different types of personalities also act differently as far as the willingness to do those unattractive tasks is concerned, and career ambitions also influence that willingness. In addition to this, the results provide insights into the motivation of management accountants to conduct tasks and thus counteract existing assumptions based on stereotypes and clichés circulating within the research community.
In model-driven engineering, the adaptation of large software systems with dynamic structure is enabled by architectural runtime models. Such a model represents an abstract state of the system as a graph of interacting components. Every relevant change in the system is mirrored in the model and triggers an evaluation of model queries, which search the model for structural patterns that should be adapted. This thesis focuses on a type of runtime models where the expressiveness of the model and model queries is extended to capture past changes and their timing. These history-aware models and temporal queries enable more informed decision-making during adaptation, as they support the formulation of requirements on the evolution of the pattern that should be adapted. However, evaluating temporal queries during adaptation poses significant challenges. First, it implies the capability to specify and evaluate requirements on the structure, as well as the ordering and timing in which structural changes occur. Then, query answers have to reflect that the history-aware model represents the architecture of a system whose execution may be ongoing, and thus answers may depend on future changes. Finally, query evaluation needs to be adequately fast and memory-efficient despite the increasing size of the history---especially for models that are altered by numerous, rapid changes.
The thesis presents a query language and a querying approach for the specification and evaluation of temporal queries. These contributions aim to cope with the challenges of evaluating temporal queries at runtime, a prerequisite for history-aware architectural monitoring and adaptation which has not been systematically treated by prior model-based solutions. The distinguishing features of our contributions are: the specification of queries based on a temporal logic which encodes structural patterns as graphs; the provision of formally precise query answers which account for timing constraints and ongoing executions; the incremental evaluation which avoids the re-computation of query answers after each change; and the option to discard history that is no longer relevant to queries. The query evaluation searches the model for occurrences of a pattern whose evolution satisfies a temporal logic formula. Therefore, besides model-driven engineering, another related research community is runtime verification. The approach differs from prior logic-based runtime verification solutions by supporting the representation and querying of structure via graphs and graph queries, respectively, which is more efficient for queries with complex patterns. We present a prototypical implementation of the approach and measure its speed and memory consumption in monitoring and adaptation scenarios from two application domains, with executions of an increasing size. We assess scalability by a comparison to the state-of-the-art from both related research communities. The implementation yields promising results, which pave the way for sophisticated history-aware self-adaptation solutions and indicate that the approach constitutes a highly effective technique for runtime monitoring on an architectural level.
Evaluation of nitrogen dynamics in high-order streams and rivers based on high-frequency monitoring
(2023)
Nutrient storage, transformation, and transport are important processes for achieving environmental and ecological health, as well as for implementing water management plans. Nitrogen is one of the most closely watched elements because of its role in the severe consequences of eutrophication in aquatic systems. Among all nitrogen components, research on nitrate is booming thanks to the widespread deployment of in-situ high-frequency sensors. Monitoring and studying nitrate can become a paradigm for other reactive substances that may damage environmental conditions and cause economic losses.
Identifying nitrate storage and its transport within a catchment can inform the management of agricultural activities and municipal planning. Storm events are periods when hydrological dynamics activate the exchange between nitrate storage and flow pathways. In this dissertation, long-term high-frequency monitoring data from three gauging stations in the Selke River were used to quantify event-scale nitrate concentration-discharge (C-Q) hysteretic relationships. The Selke catchment comprises three nested subcatchments with heterogeneous physiographic conditions and land use. With quantified hysteresis indices, the impacts of seasonality and landscape gradients on C-Q relationships are explored. For example, arable areas hold a deep nitrate legacy that can be activated by high-intensity precipitation during wetting/wet periods (i.e., under strong hydrological connectivity). Hence, specific shapes of C-Q relationships in river networks can identify target locations and periods for agricultural management actions within the catchment to decrease nitrate export to downstream aquatic systems such as the ocean.
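The idea of a hysteresis index can be sketched numerically. The variant below (comparing normalized concentration on the rising and falling limbs at the same normalized discharge) is a simplified illustration in the spirit of published indices, not necessarily the one used in the dissertation, and the event data are invented:

```python
def hysteresis_index(q_rise, c_rise, q_fall, c_fall, q_level=0.5):
    """Event-scale hysteresis index at one normalized discharge level:
    normalized C on the rising limb minus normalized C on the falling limb.
    Positive values indicate a clockwise C-Q loop (fast, proximal sources)."""
    def normalize(xs):
        lo, hi = min(xs), max(xs)
        return [(x - lo) / (hi - lo) for x in xs]

    def interp(qs, cs, q):
        # linear interpolation of C at normalized discharge q along one limb
        pts = list(zip(qs, cs))
        for (q0, c0), (q1, c1) in zip(pts, pts[1:]):
            if min(q0, q1) <= q <= max(q0, q1):
                w = 0.0 if q1 == q0 else (q - q0) / (q1 - q0)
                return c0 + w * (c1 - c0)
        raise ValueError("q outside limb range")

    # normalize Q and C jointly over the whole event, then split by limb
    q_all = normalize(list(q_rise) + list(q_fall))
    c_all = normalize(list(c_rise) + list(c_fall))
    n = len(q_rise)
    return interp(q_all[:n], c_all[:n], q_level) - interp(q_all[n:], c_all[n:], q_level)

# invented event: concentration peaks before discharge -> clockwise loop
hi = hysteresis_index([1, 3, 5], [2, 8, 6], [5, 3, 1], [6, 3, 2])  # > 0
```

Applied to many events across stations and seasons, such indices are what make the landscape and seasonality comparisons described above quantitative.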
The capacity of streams to remove nitrate is of both scientific and social interest, motivating its quantification. Although measurements of nitrate dynamics are advanced compared to those of other substances, methods to directly quantify nitrate uptake pathways remain spatiotemporally limited. The major problem is the complex convolution of hydrological and biogeochemical processes, which usually restricts in-situ measurements (e.g., isotope addition) to small streams with steady flow conditions. This makes the extrapolation of nitrate dynamics to large streams highly uncertain; hence, an understanding of in-stream nitrate dynamics in large rivers is still needed. High-frequency monitoring of the nitrate mass balance between upstream and downstream measurement sites can quantitatively disentangle multi-path nitrate uptake dynamics at the reach scale (3-8 km). In this dissertation, this approach was applied to large stream reaches with varying hydro-morphological and environmental conditions over several periods, confirming its success in disentangling nitrate uptake pathways and their temporal dynamics. Net nitrate uptake, autotrophic assimilation, and heterotrophic uptake were disentangled, along with their varying diel and seasonal patterns. Natural streams can generally remove more nitrate under similar environmental conditions, and heterotrophic uptake becomes dominant during post-wet seasons. Such two-station monitoring provided novel insights into reach-scale nitrate uptake processes in large streams.
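In its simplest form, the two-station mass balance behind this approach compares the nitrate load entering and leaving a reach. The sketch below uses invented numbers and ignores travel time, lateral inflows, and groundwater exchange, which a real application must account for:

```python
def net_areal_uptake(q_up, c_up, q_down, c_down, reach_area_m2):
    """Net nitrate uptake per unit streambed area [mg N m^-2 s^-1] from a
    two-station mass balance. q in m^3/s; c in mg N/L (= g N/m^3)."""
    load_up = q_up * c_up        # g N/s entering the reach
    load_down = q_down * c_down  # g N/s leaving the reach
    return (load_up - load_down) * 1000.0 / reach_area_m2

# invented numbers: a 5 km reach, 20 m wide, at steady discharge
u = net_areal_uptake(q_up=10.0, c_up=2.0, q_down=10.0, c_down=1.9,
                     reach_area_m2=5000 * 20)
# positive u -> net uptake; negative u -> net release
```

Resolving this balance at high frequency is what allows diel patterns (e.g., daytime autotrophic assimilation) to be separated from the seasonal signal.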
Long-term in-stream nitrate dynamics can also be evaluated with a water quality model. This is among the first studies to use a data-model fusion approach to upscale the two-station methodology to large streams with complex flow dynamics under long-term high-frequency monitoring, assessing in-stream nitrate retention and its responses to drought disturbances from the seasonal to the sub-daily scale. Nitrate retention (both net uptake and net release) exhibited substantial seasonality, which also differed between the investigated normal and drought years. In the normal years, the winter and early spring seasons exhibited extensive net release; general net uptake then occurred after the annual high-flow season, in later spring and early summer with autotrophic processes dominating, and during later summer-autumn low-flow periods with heterotrophic characteristics predominating. Net nitrate release occurred from late autumn until the following early spring. In the drought years, the late-autumn net releases did not persist as consistently as in the normal years, and autotrophic processes predominated across seasons. These comprehensive results on stream-scale nitrate dynamics facilitate the understanding of in-stream processes and underline the importance of scientific monitoring schemes for hydrological and water quality parameters.
Exercise or not?
(2023)
Objective: Individuals’ decisions to engage in exercise are often the result of in-the-moment choices between exercise and a competing behavioral alternative. The purpose of this study was to investigate processes that occur in-the-moment (i.e., situated processes) when individuals are faced with the choice between exercise and a behavioral alternative during a computerized task. These were analyzed against the background of interindividual differences in individuals’ automatic valuation and controlled evaluation of exercise.
Method: In a behavioral alternatives task, 101 participants were asked in 25 trials whether they would rather choose an exercise option or a behavioral alternative. Participants’ gaze behavior (first gaze and fixations) was recorded using eye-tracking. An exercise-specific affect misattribution procedure (AMP) was used to assess participants’ automatic valuation of exercise before the task. After the task, self-reported feelings towards exercise (controlled evaluation) and usual weekly exercise volume were assessed. Mixed effects models with random effects for subjects and trials were used for data analysis.
Results: Choosing exercise was positively correlated with individuals’ automatic valuation (r = 0.20, p = 0.05), controlled evaluation (r = 0.58, p < 0.001), and weekly exercise volume (r = 0.43, p < 0.001). Participants showed no bias in their initial gaze or in their number of fixations towards the exercise or the non-exercise alternative. However, participants were 1.30 times more likely to fixate first and more frequently on the chosen alternative, although this gaze behavior was not related to individuals’ automatic valuation, controlled evaluation, or weekly exercise volume.
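"1.30 times more likely" reads as an odds ratio, the natural effect size of a logit-link mixed model. As a numerical illustration only (the coefficient below is back-calculated from the reported ratio, not taken from the study):

```python
import math

# A logit-link model reports effects on the log-odds scale;
# exponentiating a coefficient recovers the odds ratio.
beta = math.log(1.30)        # illustrative coefficient implied by OR = 1.30
odds_ratio = math.exp(beta)  # back to 1.30

def inv_logit(eta):
    # inverse-logit link: probability from a linear predictor
    return 1.0 / (1.0 + math.exp(-eta))

# with even baseline odds (intercept 0), an OR of 1.30 shifts the
# probability of fixating the chosen alternative first from 0.50 to ~0.565
p_chosen_first = inv_logit(0.0 + beta)
```

This shows why an odds ratio of 1.30 corresponds to a modest, rather than dramatic, shift in gaze probability.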
Conclusion: The results suggest that situated processes arising from defined behavioral alternatives may be independent of individuals’ general preferences. Despite one’s best general intention to exercise more, a non-exercise alternative may seem more appealing in the moment and eventually be chosen. New psychological theories of health behavior change should therefore better consider the role of potentially conflicting alternatives when it comes to initiating physical activity or exercise.
Humboldtian science aims at an empirically supported, transdisciplinary, and at the same time transareal development of a world consciousness. In the development of this world consciousness, not only Europe and the Americas but also Central Asia and especially China play an important role. The Humboldt Center for Transdisciplinary Studies (HCTS) in Changsha attempts to address the fact that China has been largely left out of international Humboldt studies, even though Alexander von Humboldt was intensively engaged with Central Asia and China for decades. The Humboldt Center in Changsha therefore sets itself the goal of expanding Humboldt Studies to include this important aspect, stimulating and coordinating dedicated research, and building scientific and cultural bridges between Germany and China, Europe and Asia.
Poor dietary quality is a major cause of morbidity, making the promotion of healthy eating a societal priority. Older adults are a critical target group for promoting healthy eating to enable healthy aging. One factor suggested to promote healthy eating is the willingness to try unfamiliar foods, referred to as food neophilia. This two-wave longitudinal study explored the stability of food neophilia and dietary quality and their prospective relationship over three years, analyzing self-reported data from N = 960 older adults (MT1 = 63.4, range = 50–84) participating in the NutriAct Family Study (NFS) in a cross-lagged panel design. Dietary quality was rated using the NutriAct diet score, based on the current evidence for chronic disease prevention. Food neophilia was measured using the Variety Seeking Tendency Scale. The analyses revealed a high longitudinal stability of both constructs and a small positive cross-sectional correlation between them. Food neophilia had no prospective effect on dietary quality, whereas a very small positive prospective effect of dietary quality on food neophilia was found. Our findings give initial insights into the positive relation between food neophilia and a health-promoting diet in aging and underscore the need for more in-depth research, e.g., on the constructs’ developmental trajectories and potential critical windows of opportunity for promoting food neophilia.
The role of biogenic carbonate producers in the evolution of the geometries of carbonate systems has been the subject of numerous research projects. Attempts to classify modern and ancient carbonate systems by their biotic components have led to the discrimination of biogenic carbonate producers broadly into Photozoans, which are characterised by an affinity for warm tropical waters and a high dependence on light penetration, and Heterozoans, which are generally associated with both cool water environments and nutrient-rich settings with little to no light penetration. These broad categories of carbonate sediment producers have also been recognised to dominate in specific carbonate systems. Photozoans are commonly dominant in flat-topped platforms with steep margins, while Heterozoans generally dominate carbonate ramps. However, comparatively little is known about how these two main groups of carbonate producers interact in the same system and impact depositional geometries in response to changes in environmental conditions such as sea level fluctuation, antecedent slope, sediment transport processes, etc. This thesis presents numerical models to investigate the evolution of Miocene carbonate systems in the Mediterranean from two shallow marine domains: 1) a Miocene flat-topped platform dominated by Photozoans, with a significant component of Heterozoans on the slope, and 2) a Heterozoan distally steepened ramp with a seagrass-influenced (Photozoan) inner ramp. The overarching aim of the three articles comprising this cumulative thesis is to provide a numerical study of the role of Photozoans and Heterozoans in the evolution of carbonate system geometries and of how these biotas respond to changes in environmental conditions. This aim was achieved using stratigraphic forward modelling, which provides an approach to quantitatively integrate multi-scale datasets to reconstruct sedimentary processes and products during the evolution of a sedimentary system.
In a Photozoan-dominated carbonate system, such as the Miocene Llucmajor platform in the Western Mediterranean, stratigraphic forward modelling dovetailed with a robust set of sensitivity tests reveals how the geometry of the carbonate system is determined by the complex interaction of Heterozoan and Photozoan biotas in response to variable conditions of sea level fluctuation, substrate configuration, sediment transport processes, and the dominance of Photozoan over Heterozoan production. This study provides an enhanced understanding of the different carbonate systems that are possible under different ecological and hydrodynamic conditions. The research also gives insight into the roles of different biotic associations in the evolution of carbonate geometries through time and space. The results further show that the main driver of platform progradation in a Llucmajor-type system is the lowstand production of Heterozoan sediments, which form the necessary substratum for Photozoan production.
In Heterozoan systems, sediment production is mainly characterised by high-transport deposits that are prone to redistribution by waves and gravity, thereby precluding the development of steep margins. However, in the Menorca ramp, sediment trapping by seagrass led to the evolution of distal slope steepening. We investigated, through numerical modelling, how such a seagrass-influenced ramp responds to the frequency and amplitude of sea level changes, variable carbonate production between the euphotic and oligophotic zones, and changes in the configuration of the paleoslope. The study reinforces some previous hypotheses and presents alternative scenarios to the established concepts of high-transport ramp evolution. The results of sensitivity experiments show that steep slopes are favoured in ramps that develop under high-frequency sea level fluctuations with amplitudes between 20 m and 40 m. We also show that ramp profiles are significantly impacted by the paleoslope inclination, such that an optimal antecedent slope of about 0.15 degrees is required for the Menorca distally steepened ramp to develop.
The third part presents an experimental case to argue for the existence of a Photozoan sediment threshold required for the development of steep margins in carbonate platforms. This was carried out by developing sensitivity tests on the forward models of the flat-topped (Llucmajor) platform and the distally steepened (Menorca) platform. The results show that models with Photozoan sediment proportion below a threshold of about 40% are incapable of forming steep slopes. The study also demonstrates that though it is possible to develop steep margins by seagrass sediment trapping, such slopes can only be stabilized by the appropriate sediment fabric and/or microbial binding. In the Photozoan-dominated system, the magnitude of slope steepness depends on the proportion of Photozoan sediments in the system. Therefore, this study presents a novel tool for characterizing carbonate systems based on their biogenic components.
In this bachelor’s thesis I implement the automatic theorem prover nanoCoP-Ω. This system is the result of porting the arithmetic and equality handling procedures first introduced in the automatic theorem prover with arithmetic, leanCoP-Ω, into the similar system nanoCoP 2.0. To understand these procedures, I first introduce the mathematical background of both automatic theorem proving and arithmetic expressions. I present the predecessor projects leanCoP, nanoCoP and leanCoP-Ω, out of which nanoCoP-Ω was developed. This is followed by an extensive description of the concepts by which the non-clausal connection calculus needed to be extended to allow for proving arithmetic expressions and equalities, as well as of their implementation in nanoCoP-Ω. An extensive comparison of both the runtimes and the number of solved problems of the systems nanoCoP-Ω and leanCoP-Ω was made. I come to the conclusion that nanoCoP-Ω is considerably faster than leanCoP-Ω for small problems, though less well suited for larger problems. Additionally, I was able to construct a non-theorem for which nanoCoP-Ω generates a false proof. I discuss how this pressing issue could be resolved, as well as some possible optimizations and expansions of the system.
Mountain ranges can fundamentally influence the physical and chemical processes that shape Earth’s surface. With elevations of up to several kilometers they create climatic enclaves by interacting with atmospheric circulation and hydrologic systems, thus leading to a specific distribution of flora and fauna. As a result, the interiors of many Cenozoic mountain ranges are characterized by an arid climate, internally drained and sediment-filled basins, as well as unique ecosystems that are isolated from the adjacent humid, low-elevation regions along their flanks and forelands. These high-altitude interiors of orogens are often characterized by low relief and coalesced sedimentary basins, commonly referred to as plateaus, tectono-geomorphic entities that result from the complex interactions between mantle-driven geological and tectonic conditions and superposed atmospheric and hydrological processes. The efficiency of these processes and the fate of orogenic plateaus is therefore closely tied to the balance of constructive and destructive processes – tectonic uplift and erosion, respectively. Numerous geological studies have shown that mountain ranges are delicate systems that can be obliterated by an imbalance of these underlying forces. As such, Cenozoic mountain ranges might not persist on long geological timescales and will be destroyed by erosion or tectonic collapse. Advancing headward erosion of river systems that drain the flanks of the orogen may ultimately sever the internal drainage conditions and the storage of sediments within the plateau, leading to the destruction of plateau morphology and connectivity with the foreland. Orogenic collapse may be associated with the changeover from a compressional stress field with regional shortening and topographic growth to a tensional stress field with regional extensional deformation and ensuing incision of the plateau.
While the latter case is well-expressed by active extensional faults in the interior parts of the Tibetan Plateau and the Himalaya, for example, the former has been invoked to explain the breaching of the internally drained areas of the high-elevation sectors of the Iranian Plateau.
In the case of the Andes of South America and their internally drained Altiplano-Puna Plateau, signs of both processes have been previously described. However, in the orogenic collapse scenario the nature of the extensional structures had been primarily investigated in the northern and southern terminations of the plateau; in some cases, the extensional faults were even regarded as inactive. After a shallow earthquake in 2020 within the Eastern Cordillera of Argentina that was associated with extensional deformation, the state of active deformation and the character of the stress field in the central parts of the plateau received renewed interest to explain a series of extensional structures in the northernmost sectors of the plateau in north-western Argentina. This study addresses (1) the issue of tectonic orogenic collapse of the Andes and the destruction of plateau morphology by studying the fill and erosion history of the central eastern Andean Plateau using sedimentological and geochronological data and (2) the kinematics, timing and magnitude of extensional structures that form well-expressed fault scarps in sediments of the regional San Juan del Oro surface, which is an integral part of the Andean Plateau and adjacent morphotectonic provinces to the east.
Importantly, sediment properties and depositional ages document that the San Juan del Oro Surface was not part of the internally-drained Andean Plateau, but rather associated with a foreland-directed drainage system, which was modified by the Andean orogeny and became successively incorporated into the orogen by the eastward migration of the Andean deformation front during late Miocene – Pliocene time. Structural and geomorphic observations within the plateau indicate that extensional processes must have been repeatedly active between the late Miocene and the Holocene, supporting the notion of plateau-wide extensional processes, potentially associated with Mw ~ 7 earthquakes. The close relationship between extensional joints and fault orientations underscores that σ3 was oriented horizontally in a NW–SE direction and σ1 was vertical. This unambiguously documents that the observed deformation is related to gravitational forces that drive the orogenic collapse of the plateau. Applied geochronological analyses suggest that normal faulting in the northern Puna was active at about 3 Ma, based on paired cosmogenic nuclide dating of sediment fill units. Possibly due to regional normal faulting, the drainage system within the plateau was modified, promoting fluvial incision.
Background: The characteristics of osteoporosis are decreased bone mass and destruction of the microarchitecture of bone tissue, which raises the risk of fracture. Psychosocial stress and osteoporosis are linked by the sympathetic nervous system, the hypothalamic-pituitary-adrenal axis, and other endocrine factors. Psychosocial stress causes a series of effects on the organism, and this long-term depletion at the cellular level is considered to be mitochondrial allostatic load, including mitochondrial dysfunction and oxidative stress. Extracellular vesicles (EVs) are involved in the process of mitochondrial allostatic load and may serve as biomarkers in this setting. As critical participants in cell-to-cell communication, EVs serve as transport vehicles for nucleic acids and proteins, alter the phenotypic and functional characteristics of their target cells, and promote cell-to-cell contact. Hence, they play a significant role in the diagnosis and therapy of many diseases, such as osteoporosis.
Aim: This narrative review attempts to outline the features of EVs, investigate their involvement in both psychosocial stress and osteoporosis, and analyze if EVs can be potential mediators between both.
Methods: The online databases PubMed, Google Scholar, and Science Direct were searched for keywords related to the main topic of this study, and the availability of all the selected studies was verified. Afterward, the findings from the articles were summarized and synthesized.
Results: Psychosocial stress affects bone remodeling through the increased release of glucocorticoids and catecholamines, as well as increased glucose metabolism. Furthermore, psychosocial stress leads to mitochondrial allostatic load, including oxidative stress, which may affect bone remodeling. In vitro and in vivo data suggest EVs might be involved in the link between psychosocial stress and bone remodeling through the transfer of bioactive substances and could thus be a potential mediator of psychosocial stress leading to osteoporosis.
Conclusions: According to the included studies, psychosocial stress affects bone remodeling, leading to osteoporosis. By summarizing the specific properties of EVs and the function of EVs in psychosocial stress and osteoporosis, respectively, it has been demonstrated that EVs are possible mediators of both and hold promise for innovative research areas.
In the past decades, scholars and courts have paid considerable attention to the extraterritorial applicability of human rights treaties. By contrast, the extraterritorial application of constitutional rights has received comparable scholarly attention only in the United States. Specifically, there is a paucity of comparative research in this area, which contributes to the prevailing view that human rights law provides the proper framework under which domestic courts should examine extraterritoriality questions under constitutional law.
This article argues that domestic constitutional regimes and their judicial enforcers can and should provide an important counterweight to the deadlocked extraterritoriality debate at the international level. Using two case studies from Germany and the United States, it shows that domestic constitutional courts are sometimes better suited than treaty bodies to guard the normative values of human dignity and universality in an extraterritoriality context. This is most apparent in the case of Germany, which has a long tradition of integration into international multi-level governance systems and "bottom-up" resistance based on fundamental rights within such systems. Recent cases from the Federal Constitutional Court (Bundesverfassungsgericht) about the extraterritorial application of the Basic Law (Grundgesetz) to foreign intelligence gathering and climate change support this theory. However, an independent constitutional approach can also achieve some normative effects in domestic systems that are more isolated from the international human rights system. Thus, the US Supreme Court likewise used domestic constitutional doctrine to sidestep the American government's strictly territorial interpretation of the ICCPR and employ a functional approach to the extraterritorial applicability of fundamental rights in the case of detention of suspected terrorists in the Guantánamo Bay naval base.
The study of these two examples does not purport to be comprehensive or even representative of the world’s diverse array of constitutions and their relationships with international human rights law. However, the independent power of constitutional frameworks in these two disparate cases should all the more provide an impetus for increased comparative research into constitutional extraterritoriality regimes and their value for the project of human rights.
Background: The role of fatty acid (FA) intake and metabolism in type 2 diabetes (T2D) incidence is controversial. Some FAs are not synthesised endogenously and, therefore, these circulating FAs reflect dietary intake; examples are the trans fatty acids (TFAs), saturated odd chain fatty acids (OCFAs), and linoleic acid, an n-6 polyunsaturated fatty acid (PUFA). It remains unclear whether intake of TFAs influences T2D risk and whether industrial TFAs (iTFAs) and ruminant TFAs (rTFAs) exert the same effect. Unlike even chain saturated FAs, the OCFAs have been inversely associated with T2D risk, but this association is poorly understood. Furthermore, the associations of n-6 PUFA intake with T2D risk are still debated, while delta-5 desaturase (D5D), a key enzyme in the metabolism of PUFAs, has been consistently related to T2D risk. To better understand these relationships, the FA composition in circulating lipid fractions can be used as a biomarker of dietary intake and metabolism. The exploration of TFA subtypes in plasma phospholipids, and of OCFAs and n-6 PUFAs within a wide range of lipid classes, may give insights into the pathophysiology of T2D.
Aim: This thesis aimed mainly to analyse the association of TFAs, OCFAs and n-6 PUFAs with self-reported dietary intake and prospective T2D risk, using seven types of TFAs in plasma phospholipids and deep lipidomics profiling data from fifteen lipid classes.
Methods: A prospective case-cohort study was designed within the European Prospective Investigation into Cancer and Nutrition (EPIC)-Potsdam study, including all the participants who developed T2D (median follow-up 6.5 years) and a random subsample of the full cohort (subcohort: n=1248; T2D cases: n=820). The main analyses included two lipid profiles. The first was an assessment of seven TFAs in plasma phospholipids, with a modified method for the analysis of FAs with very low abundances. The second lipid profile was derived from a high-throughput lipid profiling technology, which identified 940 distinct molecular species and allowed quantification of OCFA and PUFA composition across 15 lipid classes. D5D activity was estimated as the 20:4/20:3-ratio. Using multivariable Cox regression models, we examined the associations of TFA subtypes with incident T2D and class-specific associations of OCFAs and n-6 PUFAs with T2D risk.
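The estimated enzyme activity used here is a simple product-to-precursor ratio. A minimal sketch (with hypothetical concentrations in arbitrary units; the lipid-class abbreviations follow the abstract) of computing the 20:4/20:3-ratio per lipid class:

```python
# Hypothetical FA concentrations (arbitrary units) per lipid class.
concentrations = {
    "PC": {"20:3": 2.0, "20:4": 18.0},  # phosphatidylcholines
    "CE": {"20:3": 1.5, "20:4": 12.0},  # cholesterol esters
}

def estimated_d5d(fa):
    """Estimated delta-5 desaturase activity: product/precursor = 20:4 / 20:3."""
    return fa["20:4"] / fa["20:3"]

d5d = {cls: estimated_d5d(fa) for cls, fa in concentrations.items()}
print(d5d)  # → {'PC': 9.0, 'CE': 8.0}
```

Such product-to-precursor ratios are conventional proxies for desaturase activity; they do not measure the enzyme directly.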
Results: 16:1n-7t, 18:1n-7t, and c9t11-CLA were positively correlated with the intake of fat-rich dairy foods. iTFA 18:1 isomers were positively correlated with margarine. After adjustment for confounders and other TFAs, higher plasma phospholipid concentrations of two rTFAs were associated with a lower incidence of T2D: 18:1n-7t and t10c12-CLA. In contrast, the rTFA c9t11-CLA was associated with a higher incidence of T2D. rTFA 16:1n-7t and iTFAs (18:1n-6t, 18:1n-9t, 18:2n-6,9t) were not statistically significantly associated with T2D risk.
We observed heterogeneous integration of OCFAs in different lipid classes, and the contribution of 15:0 versus 17:0 to the total OCFA abundance differed across lipid classes. Consumption of fat-rich dairy and fiber-rich foods was positively, and red meat consumption inversely, correlated with OCFA abundance in plasma phospholipid classes. In women only, higher abundances of 15:0 in phosphatidylcholines (PC) and diacylglycerols (DG), and of 17:0 in PC, lysophosphatidylcholines (LPC), and cholesterol esters (CE) were inversely associated with T2D risk. In men and women, a higher abundance of 15:0 in monoacylglycerols (MG) was also inversely associated with T2D. Conversely, a higher 15:0 concentration in LPC and triacylglycerols (TG) was associated with higher T2D risk in men. Women with a higher concentration of 17:0 as free fatty acids (FFA) also had a higher T2D incidence.
The integration of n-6 PUFAs in lipid classes was also heterogeneous. 18:2 was highly abundant in phospholipids (particularly PC), CE, and TG; 20:3 represented a small fraction of FAs in most lipid classes, and 20:4 accounted for a large proportion of circulating phosphatidylinositols (PI) and phosphatidylethanolamines (PE). Higher concentrations of 18:2 were inversely associated with T2D risk, especially within DG, TG, and LPC. However, 18:2 as part of MG was positively associated with T2D risk. Higher concentrations of 20:3 in phospholipids (PC, PE, PI), FFA, CE, and MG were linked to a higher T2D incidence. 20:4 was unrelated to risk in most lipid classes, except for positive associations observed for 20:4 enriched in FFA and PE. The estimated D5D activities in PC, PE, PI, LPC, and CE were inversely associated with T2D, and the variance in estimated D5D activity explained by genomic variation in the FADS locus was substantial only in those lipid classes.
Conclusion: The TFAs’ conformation is essential in their relationship to diabetes risk, as indicated by plasma rTFA subtype concentrations having opposite directions of association with diabetes risk. Plasma OCFA concentrations are linked to T2D risk in a lipid class- and sex-specific manner. Plasma n-6 PUFA concentrations are associated differently with T2D incidence depending on the specific FA and the lipid class. Overall, these results highlight the complexity of circulating FAs and their heterogeneous association with T2D risk depending on the specific FA structure, lipid class, and sex. My results extend the evidence on the relationship between diet, lipid metabolism, and subsequent T2D risk. In addition, my work generated several potential new biomarkers of dietary intake and prospective T2D risk.
Earthquake modeling is the key to a profound understanding of a rupture. Its kinematics or dynamics are derived from advanced rupture models that allow, for example, reconstruction of the direction and velocity of the rupture front or of the evolving slip distribution behind the rupture front. Such models are often parameterized by a lattice of interacting sub-faults with many degrees of freedom, where, for example, the time histories of slip and rake on each sub-fault are inverted. To avoid overfitting or other numerical instabilities during a finite-fault estimation, most models are stabilized by geometric rather than physical constraints, such as smoothing.
As a basis for the inversion approach of this study, we build on a new pseudo-dynamic rupture model (PDR) with only a few free parameters and a simple geometry as a physics-based solution of an earthquake rupture. The PDR derives the instantaneous slip from a given stress drop on the fault plane, with boundary conditions on the developing crack surface guaranteed at all times via a boundary element approach. As a side product, the source time function at each point on the rupture plane is not constrained and develops by itself without additional parametrization. The code was made publicly available as part of the Pyrocko and Grond Python packages. The approach was compared with conventional modeling for different earthquakes. For example, for the Mw 7.1 2016 Kumamoto, Japan, earthquake, the effects of geometric changes in the rupture surface on the slip and slip rate distributions could be reproduced by simply projecting stress vectors. For the Mw 7.5 2018 Palu, Indonesia, strike-slip earthquake, we also modelled rupture propagation using the 2D Eikonal equation and assuming a linear relationship between rupture and shear wave velocity. This allowed us to propose a deeper and faster propagating rupture front and the resulting upward refraction as a new possible explanation for the apparent supershear observed at the Earth's surface.
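The Eikonal-based rupture-front propagation described above can be illustrated with a toy first-arrival computation. The sketch below is not the Pyrocko/Grond implementation: it approximates the 2D Eikonal solution with a Dijkstra search on a 4-connected grid (a proper solver would use fast marching), and the assumed linear scaling between rupture and shear wave velocity would simply enter as a constant factor on the velocity grid.

```python
import heapq

def travel_times(velocity, src, h=1.0):
    """Dijkstra approximation to Eikonal first-arrival times on a 2D grid.

    velocity: nested list of rupture velocities per cell; src: (row, col)
    of the nucleation point; h: grid spacing. The edge cost between two
    neighbouring cells is h times the mean slowness of the two cells.
    """
    n, m = len(velocity), len(velocity[0])
    times = [[float("inf")] * m for _ in range(n)]
    times[src[0]][src[1]] = 0.0
    heap = [(0.0, src)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if t > times[i][j]:
            continue  # stale heap entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < m:
                cost = h * 0.5 * (1.0 / velocity[i][j] + 1.0 / velocity[ni][nj])
                if t + cost < times[ni][nj]:
                    times[ni][nj] = t + cost
                    heapq.heappush(heap, (t + cost, (ni, nj)))
    return times

# Uniform velocity of 1: arrival time equals Manhattan distance on this stencil.
t = travel_times([[1.0] * 4 for _ in range(4)], src=(0, 0))
print(t[0][3], t[3][3])  # → 3.0 6.0
```

Feeding a depth-dependent velocity grid into such a solver is what produces the deeper, faster rupture front and upward refraction invoked in the Palu interpretation.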
The thesis investigates three aspects of earthquake inversion using PDR: (1) to test whether implementing a simplified rupture model with few parameters into a probabilistic Bayesian scheme without constraining geometric parameters is feasible, and whether this leads to fast and robust results that can be used for subsequent fast information systems (e.g., ground motion predictions). (2) To investigate whether combining broadband and strong-motion seismic records together with near-field ground deformation data improves the reliability of estimated rupture models in a Bayesian inversion. (3) To investigate whether a complex rupture can be represented by the inversion of multiple PDR sources and for what type of earthquakes this is recommended.
I developed the PDR inversion approach and applied the joint data inversions to two seismic sequences in different tectonic settings. Using multiple frequency bands and a multiple source inversion approach, I captured the multi-modal behaviour of the Mw 8.2 2021 South Sandwich subduction earthquake with a large, curved and slow rupturing shallow earthquake bounded by two faster and deeper smaller events. I could cross-validate the results with other methods, i.e., P-wave energy back-projection, a clustering analysis of aftershocks and a simple tsunami forward model.
The joint analysis of ground deformation and seismic data within a multiple source inversion also shed light on an earthquake triplet, which occurred in July 2022 in SE Iran. From the inversion and aftershock relocalization, I found indications for a vertical separation between the shallower mainshocks within the sedimentary cover and deeper aftershocks at the sediment-basement interface. The vertical offset could be caused by the ductile response of the evident salt layer to stress perturbations from the mainshocks.
The applications highlight the versatility of the simple PDR in probabilistic seismic source inversion, capturing features of rather different, complex earthquakes. Limitations, such as the evident focus on the major slip patches of the rupture, are discussed, as well as differences to other finite-fault modeling methods.
The effects of energy price increases are heterogeneous between households and firms. Financially constrained poorer households, who spend a larger relative share of their income on energy, are particularly affected. In this analysis, we examine the macroeconomic and welfare effects of energy price shocks in the presence of credit-constrained households that have a subsistence-level energy demand. Within a Dynamic Stochastic General Equilibrium (DSGE) model calibrated for the German economy, we compare the performance of different policy measures (transfers and energy subsidies) and different financing schemes (income tax vs. debt). Our results show that credit-constrained households prefer debt over tax financing regardless of the compensation measure, due to their difficulty in smoothing consumption. In contrast, rich households tend to prefer tax-financed measures, as these increase the labor supply of poor households. From an aggregate perspective, tax-financed measures targeting firms effectively cushion aggregate output losses.
Natural gas hydrates are ice-like crystalline compounds containing water cavities that trap natural gas molecules like methane (CH4), which is a potent greenhouse gas with high energy density. The Mallik site at the Mackenzie Delta in the Canadian Arctic contains a large volume of technically recoverable CH4 hydrate beneath the base of the permafrost. Understanding how the sub-permafrost hydrate is distributed can aid in searching for the ideal locations for deploying CH4 production wells to develop the hydrate as a cleaner alternative to crude oil or coal. Globally, atmospheric warming driving permafrost thaw results in sub-permafrost hydrate dissociation, releasing CH4 into the atmosphere to intensify global warming. It is therefore crucial to evaluate the potential risk of hydrate dissociation due to permafrost degradation. To quantitatively predict hydrate distribution and volume in complex sub-permafrost environments, a numerical framework was developed to simulate sub-permafrost hydrate formation by coupling the equilibrium CH4-hydrate formation approach with a fluid flow and transport simulator (TRANSPORTSE). In addition, integrating the equations of state describing ice melting and forming with TRANSPORTSE enabled this framework to simulate the permafrost evolution during the sub-permafrost hydrate formation. A modified sub-permafrost hydrate formation mechanism for the Mallik site is presented in this study. According to this mechanism, the CH4-rich fluids have been vertically transported since the Late Pleistocene from deep overpressurized zones via geologic fault networks to form the observed hydrate deposits in the Kugmallit–Mackenzie Bay Sequences. The established numerical framework was verified by a benchmark of hydrate formation via dissolved methane. Model calibration was performed based on laboratory data measured during a multi-stage hydrate formation experiment undertaken in the LArge scale Reservoir Simulator (LARS). 
As the temporal and spatial evolution of simulated and observed hydrate saturation matched well, the LARS model was considered validated. This laboratory-scale model was then upscaled to a field-scale 2D model generated from a seismic transect across the Mallik site. The simulation confirmed the feasibility of the introduced sub-permafrost hydrate formation mechanism by demonstrating consistency with field observations. The 2D model was extended to the first 3D model of the Mallik site by using well-logs and seismic profiles, to investigate the geologic controls on the spatial hydrate distribution. An assessment of this simulation revealed the hydraulic contribution of each geological element, including relevant fault networks and sedimentary sequences. Based on the simulation results, the observed heterogeneous distribution of sub-permafrost hydrate resulted from the combined factors of the source-gas generation rate, subsurface temperature, and the permeability of geologic elements. Analysis of the results revealed that the Mallik permafrost was heated by 0.8–1.3 °C, induced by the global temperature increase of 0.44 °C and accelerated by Arctic amplification from the early 1970s to the mid-2000s. This study presents a numerical framework that can be applied to study the formation of the permafrost-hydrate system from laboratory to field scales, across timescales ranging from hours to millions of years. Overall, these simulations deepen the knowledge about the dominant factors controlling the spatial hydrate distribution in sub-permafrost environments with heterogeneous geologic elements. The framework can support improving the design of hydrate formation experiments and provide valuable contributions to future industrial hydrate exploration and exploitation activities.
Movement is a mechanism that shapes biodiversity patterns across spatiotemporal scales. Thereby, the movement process affects species interactions, population dynamics and community composition. In this thesis, I disentangled the effects of movement on the biodiversity of zooplankton, ranging from the individual to the community level. On the individual movement level, I used video-based analysis to explore the implication of movement behavior for prey-predator interactions. My results showed that swimming behavior was of great importance, as it determined prey survival in the face of predation. The findings additionally highlighted the relevance of the defense status/morphology of prey, as it affected the prey-predator relationship not only through the defense itself but also through plastic movement behavior. On the community movement level, I used a field mesocosm experiment to explore the role of dispersal (in time, i.e., from the egg bank into the water body, and in space, i.e., between water bodies) in shaping zooplankton metacommunities. My results revealed that priority effects and taxon-specific dispersal limitation influenced community composition. Additionally, different modes of dispersal also generated distinct community structures. The egg bank and biotic vectors (i.e., mobile links) played significant roles in the colonization of newly available habitat patches. One crucial aspect that influences zooplankton species after arrival in new habitats is the local environmental conditions. Using common garden experiments, I assessed the performance of zooplankton communities in their home vs. away environments in a group of ponds embedded within an agricultural landscape. I identified environmental filtering as a driving factor, as zooplankton communities from individual ponds developed differently in their home and away environments. On the individual species level, there was no consistent indication of local adaptation.
For some species, I found higher abundance or fitness in their home environment; for others, the opposite was the case; and in some cases there was no difference.
Overall, this thesis highlights the links between movement and biodiversity patterns, from individual active movement to the community level.
From MOOC to “2M-POC”
(2023)
IFP School has been developing and producing MOOCs since 2014. After the COVID-19 crisis, demand from our industrial and international partners for continuous training of their employees increased drastically, in an energy-transition and sustainable-mobility environment that is in constant and rapid evolution. It was therefore time for a new format of digital learning tools to train a large number of employees efficiently and rapidly. To address this new demand, in an increasingly digital learning environment, we completely changed our initial MOOC model to propose an innovative SPOC business model mixing synchronous and asynchronous modules. This paper describes the work done to transform our MOOCs into a hybrid SPOC model. We changed the format itself from a standard MOOC model of several weeks to small modules of one week on average, better adapted to our clients’ demands. We precisely engineered the exchanges between learners and the social aspect throughout the SPOC duration. We propose a multimodal approach combining asynchronous activities, such as online modules and exercises, with synchronous activities, such as webinars with experts and after-work sessions. Additionally, this new format increases the reuse of MOOC resources by our professors in our own master programs.
With all these actions, we were able to reach a completion rate of 80–96% of total enrolled, compared with the completion rate of 15–28% of total enrolled recorded in our original MOOC format. This holds for small groups (50–100 learners) run as SPOCs, but also for large groups (more than 2,500 learners) run as a Massive and Multimodal Private Online Course (“2M-POC”). Today a MOOC is not a simple assembly of videos, texts, discussion forums and validation exercises, but a complete multimodal learning path including social learning, personal follow-up, and synchronous and asynchronous modules. We conclude that the original MOOC format is not at all suitable for offering efficient training to companies, and that the learning path must be re-engineered into a hybrid, multimodal SPOC compatible with a cost-effective business model.
The evolution of a galaxy is pivotally governed by its pattern of star formation over time. The star formation rate at any given time depends strongly on the amount of cold gas available in the galaxy. Accretion of pristine gas from the intergalactic medium (IGM) is thought to be one of the primary sources of star-forming gas. This gas first passes through the virial regions of the galaxy before reaching the interstellar medium (ISM), the hub of star formation. On the other hand, owing to the evolutionary course of young and massive stars, energetic winds are ejected from the ISM into the virial regions of the galaxy. A set of interlinked, complex astrophysical processes, arising from the concurrent presence of both infalling and outbound gas, plays out over a range of timescales in the halo region, or circumgalactic medium (CGM), of a galaxy. The CGM thus holds sway over the gas reserves of a galaxy and plays a behind-the-scenes yet pivotal role in shaping many galactic properties, some of which are readily observable. Observing the multi-phase CGM (via spectral-line ion measurements), however, remains a non-trivial effort even today. Low particle densities and the CGM’s vast spatial extent, coupled with likely deviations from a spherical distribution, mar the possibility of obtaining complete, unbiased, high-quality spectral information tracing the full extent of the gaseous halo. This incomplete information leads to divergent inferences about CGM properties, which in turn give rise to contradictory models. In this regard, computer simulations offer a neat solution for testing and, subsequently, falsifying many of these existing CGM models.
Thanks to their controlled environments, simulations are able not only to effortlessly transcend several orders of magnitude in time and space, but also to get around many of the observational limitations and provide unique views of many CGM properties. In this thesis, I focus on effectively using different computer simulations to understand the role of the CGM in various astrophysical contexts, namely the effect of the Local Group (LG) environment, major merger events and satellite galaxies. In Chapter 2, I discuss the approach used for modeling various phases of the simulated z = 0 LG CGM in the Hestia constrained simulations. Each of the three realizations contains a Milky Way (MW)–Andromeda (M31) galaxy pair, along with their corresponding sets of satellite galaxies, all embedded within the larger cosmological context. To characterize the different temperature–density phases within the CGM, I model five tracer ions with cloudy ionization modeling. The cold and cool-ionized CGM (H i and Si iii, respectively) in Hestia is very clumpy and distributed close to the galactic centers, while the warm-hot and hot CGM (O vi, O vii and O viii) is tenuous and volume-filling. On comparing the H i and Si iii column densities for the simulated M31 with observational measurements from the Project AMIGA survey and other low-z galaxies, I found that the Hestia galaxies produce less gas in the outer CGM than observed. My carefully designed observational bias model subsequently revealed the possibility that some MW gas clouds might be incorrectly associated with the M31 CGM in observations, which may partly explain the detected mismatch between simulated data and observations. In Chapter 3, I present results from four zoom-in, major-merger, gas-rich simulations and the subsequent role of the gas originally situated in the CGM in influencing some of the galactic observables.
The progenitor parameters are selected such that the post-merger remnants are MW-mass galaxies. We generally see a very clear gas bridge joining the merging galaxies in the case of multiple-passage mergers, while such a bridge is mostly absent when a direct collision occurs. On the basis of particle-to-galaxy distance computations and tracer particle analysis, I found that about 33–48 percent of the cold gas contributing to the merger-induced star formation in the bridge originated from the CGM regions. In Chapter 4, I used a sample of 234 MW-mass, L* galaxies from the TNG50 cosmological simulations, with the aim of characterizing the impact of their global satellite populations on the extended cold CGM properties of their host L* halos. On the basis of halo mass and number of satellite galaxies (N_sats), I categorized the sample into low- and high-mass bins, and subsequently into bottom, intermediate and top quartiles, respectively. After confirming that satellites indeed influence the extended cold halo gas density profiles of the host galaxies, I investigated the effects of different satellite population parameters on the host halo cold CGMs. My analysis showed that there is hardly any cold gas associated with the satellite population of the lowest-mass halos. The stellar mass of the most massive satellite (M_*mms) impacted the cold gas in low-mass-bin halos the most, while N_sats (followed by M_*mms) was the most influential factor for the high-mass halos. In either case, how easily cold gas was stripped off the most massive satellite played little role. The number of massive (stellar mass M* > 10^8 M_solar) satellites and the M_*mms associated with a galaxy are two of the most crucial parameters determining how much cold gas ultimately finds its way from the satellites to the host halo. Unlike their high-mass counterparts, low-mass galaxies are found lacking on both these fronts.
This work highlights some aspects of the complex gas physics that constitute the basic essence of a low-z CGM. My analysis demonstrated the importance of the cosmological environment, local surroundings and merger history in defining some key observable properties of a galactic CGM. Furthermore, I found that different satellite properties were responsible for affecting the cold-dense CGM of low- and high-mass parent galaxies. Finally, the LG emerged as an exciting prospect for testing and pinning down several intricate details about the CGM.
Economic agents often irrationally base their decision-making on irrelevant information. This research analyzes whether men and women react to futile information about past outcomes. For this purpose, we run a laboratory experiment (Study 1) and use field data (Study 2). In both studies, the behavior of men is consistent with falsely assumed negative autocorrelation, often referred to as the gambler’s fallacy. Women’s behavior aligns with falsely assumed positive autocorrelation, a notion of the hot hand fallacy. In the aggregate, the two fallacies cancel out. Yet even when individuals are, on average, rational, the biases in the decision-making of subgroups might cause inefficient outcomes. In a mediation analysis, we find that a) the agents’ stated perceived probabilities of future outcomes are not blurred by the irrelevant information and b) about 40% of the observed biases are driven by differences in the perceived attractiveness of available choices caused by the irrelevant information.
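The two-step logic of such a mediation analysis can be sketched as a minimal Baron–Kenny-style decomposition on synthetic data. All variable definitions and effect sizes below are invented for illustration and are not the study’s actual data or estimates:

```python
import numpy as np

def ols(y, X):
    """Least-squares coefficients for y = X @ beta (X includes an intercept column)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)                       # irrelevant information about past outcomes
m = 0.8 * x + rng.normal(size=n)             # hypothetical mediator: perceived attractiveness of choices
y = 0.3 * x + 0.5 * m + rng.normal(size=n)   # observed decision bias

ones = np.ones(n)
c_total = ols(y, np.column_stack([ones, x]))[1]   # total effect of x on y
a = ols(m, np.column_stack([ones, x]))[1]         # path x -> m
b = ols(y, np.column_stack([ones, x, m]))[2]      # path m -> y, controlling for x
prop_mediated = (a * b) / c_total                 # share of the bias running through m
print(round(prop_mediated, 2))                    # true value in this DGP: 0.4 / 0.7 ≈ 0.57
```

Here the mediated share is a property of the invented data-generating process; in the study, the analogous quantity (the share of bias driven by perceived attractiveness) was estimated at roughly 40%.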
Transposable elements (TEs) are loci that can replicate and multiply within the genome of their host. Through transposition, TEs generate variation in genomic architecture and gene regulation across all vertebrates. Genome assemblies have increased in number in recent years. However, to explore in depth the variation within different genomes, such as SNPs (single nucleotide polymorphisms), INDELs (insertions/deletions), satellites and transposable elements, we need high-quality genomes. Over the past 10 years, studies of molecular markers have been limited in their ability to correlate markers with biological differences, because molecular markers rely on the accuracy of the underlying genomic resources. As a consequence, a substantial share of recent TE studies has focused on species with high-quality genomic resources, such as Drosophila, zebra finch and maize. Testudines have a slow mutation rate, second only to crocodilians, and with more than 300 species adapted to different environments across the globe, the clade is well suited to the study of variation. Here we propose Testudines as a clade for studying variation and TE abundance across species that diverged long ago. We investigated the genomic diversity of sea turtles, identifying key genomic regions associated with gene-family duplications and phenotypic differentiation, specific expansions of particular TE families in Dermochelyidae, the impact of environmental changes on their populations, and the dynamics of TEs within different lineages. In chapter 1, we show that despite high levels of genome synteny among sea turtles, regions of reduced collinearity and microchromosomes exhibit higher concentrations of multicopy gene families, as well as greater genetic distances between species, indicating their potential importance as sources of variation underlying phenotypic differentiation.
We found that differences in the ecological niches occupied by leatherback and green turtles have led to contrasting evolutionary paths for their olfactory receptor genes. We identified a long-term low population size in leatherback turtles. Nonetheless, we found no correlation between the regions of reduced collinearity and TE abundance, or the accumulation of any particular TE group. In chapter 2, we show that sea turtle genomes contain a significant proportion of TEs, with differences in TE abundance between species; the discovery of a recent expansion of Penelope-like elements (PLEs) in the otherwise highly conserved sea turtle genome provides new insights into the dynamics of TEs within Testudines. In chapter 3, we compared the proportion of TEs across the testudine clade and found that the overall proportion of TEs within the clade is stable, regardless of the quality of the assemblies. However, the detected proportion of individual TE orders does correlate with genome quality, depending on their abundance: for retrotransposons, highly abundant elements in this clade, we found no correlation, whereas for DNA elements, which are rarer in this clade, the detected proportion correlates with the quality of the assemblies.
Here we confirm that high-quality genomes are fundamental for the study of transposable element evolution and conservation within the clade. The detection and abundance of specific orders of TEs are influenced by the quality of the genomes. We found that a reduction in population size in D. coriacea has left signals of long-term low population size in its genome. Likewise, we identified a TE expansion in D. coriacea that is not present in any other available testudine genome, strongly suggesting that it reflects TE deregulation as a consequence of the low population size.
We have identified genomic regions and gene families important for phenotypic differentiation and highlighted the impact of environmental changes on sea turtle populations. Accurate classification and analysis of TE families are important and require high-quality genome assemblies. Using TE analysis, we were able to identify differences between highly syntenic species. These findings have significant implications for conservation and provide a foundation for further research into genome evolution and gene function in turtles and other vertebrates. Overall, this study contributes to our understanding of evolutionary change and adaptation mechanisms.
Due to anthropogenic greenhouse gas emissions, Earth’s average surface temperature is steadily increasing. As a consequence, many weather extremes are likely to become more frequent and intense. This poses a threat to natural and human systems, with local impacts capable of destroying exposed assets and infrastructure, and disrupting economic and societal activity. Yet, these effects are not locally confined to the directly affected regions, as they can trigger indirect economic repercussions through loss propagation along supply chains. As a result, local extremes yield a potentially global economic response. To build economic resilience and design effective adaptation measures that mitigate adverse socio-economic impacts of ongoing climate change, it is crucial to gain a comprehensive understanding of indirect impacts and the underlying economic mechanisms.
Presenting six articles in this thesis, I contribute towards this understanding. To this end, I expand on local impacts under current and future climate, the resulting global economic response, as well as the methods and tools to analyze this response.
Starting with a traditional assessment of weather extremes under climate change, the first article investigates extreme snowfall in the Northern Hemisphere until the end of the century. Analyzing an ensemble of global climate model projections reveals an increase of the most extreme snowfall, while mean snowfall decreases.
Assessing repercussions beyond local impacts, I employ numerical simulations to compute indirect economic effects from weather extremes with the numerical agent-based shock propagation model Acclimate. This model is used in conjunction with the recently emerged storyline framework, which involves analyzing the impacts of a particular reference extreme event and comparing them to impacts in plausible counterfactual scenarios under various climate or socio-economic conditions. Using this approach, I introduce three primary storylines that shed light on the complex mechanisms underlying economic loss propagation.
In the second and third articles of this thesis, I analyze storylines for the historical Hurricanes Sandy (2012) and Harvey (2017) in the USA. For this, I first estimate local economic output losses and then simulate the resulting global economic response with Acclimate. The storyline for Hurricane Sandy focuses on global consumption price anomalies and the resulting changes in consumption. I find that the local economic disruption leads to a global wave-like economic price ripple, with upstream effects propagating in the supplier direction and downstream effects in the buyer direction. Initially, an upstream demand reduction causes consumption price decreases, followed by a downstream supply shortage and increasing prices, before the anomalies decay in a normalization phase. A dominant upstream or downstream effect leads to net consumption gains or losses of a region, respectively. Moreover, I demonstrate that a longer direct economic shock intensifies the downstream effect for many regions, leading to an overall consumption loss.
The third article of my thesis builds upon the developed loss estimation method by incorporating projections to future global warming levels. I use these projections to explore how the global production response to Hurricane Harvey would change under further increased global warming. The results show that, while the USA is able to nationally offset direct losses in the reference configuration, other countries have to compensate for increasing shares of counterfactual future losses. This compensation is mainly achieved by large exporting countries, but gradually shifts towards smaller regions. These findings not only highlight the economy’s ability to flexibly mitigate disaster losses to a certain extent, but also reveal the vulnerability and economic disadvantage of regions that are exposed to extreme weather events.
The storyline in the fourth article of my thesis investigates the interaction between global economic stress and the propagation of losses from weather extremes. I examine indirect impacts of weather extremes — tropical cyclones, heat stress, and river floods — worldwide under two different economic conditions: an unstressed economy and a globally stressed economy, as seen during the Covid-19 pandemic. I demonstrate that the adverse effects of weather extremes on global consumption are strongly amplified when the economy is under stress. Specifically, consumption losses in the USA and China double and triple, respectively, due to the global economy’s decreased capacity for disaster loss compensation. An aggravated scarcity intensifies the price response, causing consumption losses to increase.
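The loss-propagation mechanism these storylines examine can be illustrated, in a deliberately simplified static form, with a textbook Leontief input–output calculation. The three-region coefficient matrix and demand figures below are invented for illustration; this is not the dynamic agent-based Acclimate model itself:

```python
import numpy as np

# Hypothetical technical-coefficient matrix A: A[i, j] is the input from
# region/sector i needed per unit of output of j (illustrative numbers only).
A = np.array([
    [0.10, 0.20, 0.05],
    [0.15, 0.05, 0.25],
    [0.05, 0.10, 0.10],
])
final_demand = np.array([100.0, 80.0, 120.0])

# Gross output required to satisfy final demand: x = (I - A)^(-1) d
L = np.linalg.inv(np.eye(3) - A)
x_baseline = L @ final_demand

# A disaster cuts final demand in region 0 by 30 units; the Leontief
# inverse propagates this local shock through all supply links.
shocked_demand = final_demand - np.array([30.0, 0.0, 0.0])
x_shocked = L @ shocked_demand

losses = x_baseline - x_shocked
print(np.round(losses, 2))  # total output losses per region, exceeding the direct 30-unit shock
```

The point of the sketch is that the output loss in region 0 exceeds the direct 30-unit demand shock, and regions 1 and 2 lose output despite no local disaster, mirroring the indirect repercussions the thesis quantifies dynamically.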
Advancing on the methods and tools used here, the final two articles in my thesis extend the agent-based model Acclimate and formalize the storyline approach. With the model extension described in the fifth article, regional consumers make rational choices on the goods bought such that their utility is maximized under a constrained budget. In an out-of-equilibrium economy, these rational consumers are shown to temporarily increase consumption of certain goods in spite of rising prices.
The sixth article of my thesis proposes a formalization of the storyline framework, drawing on multiple studies including storylines presented in this thesis. The proposed guideline defines eight central elements that can be used to construct a storyline.
Overall, this thesis contributes towards a better understanding of economic repercussions of weather extremes. It achieves this by providing assessments of local direct impacts, highlighting mechanisms and impacts of loss propagation, and advancing on methods and tools used.
Enacted in 2009, the National Policy on Climate Change (PNMC) is a milestone in the institutionalisation of climate action in Brazil. It establishes greenhouse gas (GHG) emission reduction targets and a set of principles and directives that are intended to lay the foundations for a cross-sectoral and multilevel climate policy in the country. However, more than a decade since its establishment, the PNMC has experienced several obstacles related to its governance, such as coordination, planning and implementation issues. All of these issues pose threats to the effectiveness of GHG mitigation actions in the country.
By looking at the intragovernmental and intergovernmental relationships that have taken place during the lifetime of the PNMC and its sectoral plans on agriculture (the Sectoral Plan for Mitigation and Adaptation to Climate Change for the Consolidation of a Low-Carbon Economy in Agriculture [ABC Plan]), transport and urban mobility (the Sectoral Plan for Transportation and Urban Mobility for Mitigation and Adaption of Climate Change [PSTM]), this exploratory qualitative research investigates the Brazilian climate change governance guided by the following relevant questions: how are climate policy arrangements organised and coordinated among governmental actors to mitigate GHG emissions in Brazil? What might be the reasons behind how such arrangements are established? What are the predominant governance gaps of the different GHG mitigation actions examined? Why do these governance gaps occur?
Theoretically grounded in the literature on multilevel governance and coordination of public policies, this study employs a novel analytical framework that aims to identify and discuss the occurrence of four types of governance gaps (i.e. politics; institutions and processes; resources; and information) in the three GHG mitigation actions (cases) examined (i.e. the PNMC, ABC Plan and PSTM). The research results are twofold. First, they reveal that Brazil has struggled to organise and coordinate governmental actors from different policy constituencies and different levels of government in the implementation of the GHG mitigation actions examined. Moreover, climate policymaking has mostly been influenced by the Ministry of Environment (MMA), overlooking the multilevel and cross-sectoral approaches required for a country’s climate policy to mitigate and adapt to climate change, especially when it takes the form of an economy-wide Nationally Determined Contribution (NDC), as the Brazilian one does.
Second, the study identifies a greater manifestation of gaps in politics (e.g. lack of political will in supporting climate action), institutions and processes (e.g. failures in the design of institutions and policy instruments, coordination and monitoring flaws, and difficulties in building climate federalism) in all cases studied. It also identifies that there have been important advances in the production of data and information for decision-making and, to a lesser extent, in the allocation of technical and financial resources in the cases studied; however, it is necessary to highlight the limitation of these improvements due to turf wars, a low willingness to share information among federal government players, a reduced volume of financial resources and an unequal distribution of capacities among the federal ministries and among the three levels of government.
A relevant finding is that these gaps tend to be explained by a combination of general and sector-specific aspects. Among the general aspects, which are common to all cases examined, the following can be mentioned: i) unbalanced policy capabilities existing among the different levels of government, ii) a limited (bureaucratic) practice of producing a positive coordination mode within cross-sectoral policies, iii) the socioeconomic inequalities that affect the way different governments and economic sectors perceive the climate issue (selective perception) and iv) the reduced dialogue between national and subnational governments on the climate agenda (poor climate federalism). Among the sectoral aspects, the following can be mentioned: i) the presence of path dependencies that make the adoption of transformative actions harder and ii) the absence of perceived co-benefits that the climate agenda can bring to each economic sector (e.g. reputational gains, climate protection and access to climate financial markets).
By addressing the theoretical and practical implications of the results, this research provides key insights to tackle the governance gaps identified and to help Brazil pave the way to achieving its NDCs and net-zero targets. At the theoretical level, this research and the country’s current GHG emissions profile suggest that Brazilian climate policy is embedded in a cross-sectoral and multilevel arena, which requires the effective involvement of different levels of political and bureaucratic powers and the consideration of the country’s socioeconomic differences. Thus, the research argues that future improvements of the Brazilian climate policy and its governance setting must frame climate policy as an economic development agenda, the ramifications of which go beyond the environmental sector. An initial consequence of this new perspective may be a shift in the political and technical leadership from the MMA to the institutions of the centre of government (Executive Office of the President of Brazil) and those in charge of the country’s economic policy (Ministry of Economy). This change could provide greater capacity for coordination, integration and enforcement as well as for addressing certain expected gaps (e.g. financial and technical resources). It could also lead to greater political prioritisation of the agenda at the highest levels of government. Moreover, this shift of the institutional locus could contribute to greater harmonisation between domestic development priorities and international climate politics. Finally, the research also suggests that this approach would reduce the bureaucratic elitism currently shaping the management of Brazilian climate policy, which remains the province of a few ministries and a source of turf wars.
Digitalisation in industry – also called “Industry 4.0” – is seen by numerous actors as an opportunity to reduce the environmental impact of the industrial sector. The scientific assessments of the effects of digitalisation in industry on environmental sustainability, however, are ambivalent. This cumulative dissertation uses three empirical studies to examine the expected and observed effects of digitalisation in industry on environmental sustainability. The aim of this dissertation is to identify opportunities and risks of digitalisation at different system levels and to derive options for action in politics and industry for a more sustainable design of digitalisation in industry. I use an interdisciplinary, socio-technical approach and look at selected countries of the Global South (Study 1) and the example of China (all studies). In the first study (section 2, joint work with Marcel Matthess), I use qualitative content analysis to examine digital and industrial policies from seven different countries in Africa and Asia for expectations regarding the impact of digitalisation on sustainability and compare these with the potentials of digitalisation for sustainability in the respective country contexts. The analysis reveals that the documents express a wide range of vague expectations that relate more to positive indirect impacts of information and communication technology (ICT) use, such as improved energy efficiency and resource management, and less to negative direct impacts of ICT, such as electricity consumption through ICT. In the second study (section 3, joint work with Marcel Matthess, Grischa Beier and Bing Xue), I conduct and analyse interviews with 18 industry representatives of the electronics industry from Europe, Japan and China on digitalisation measures in supply chains using qualitative content analysis. 
I find that while there are positive expectations regarding the effects of digital technologies on supply chain sustainability, their actual use and observable effects are still limited. Interview partners could provide only a few examples from their own companies in which sustainability goals have already been pursued through digitalisation of the supply chain or in which sustainability effects, such as resource savings, have been demonstrably achieved. In the third study (section 4, joint work with Peter Neuhäusler, Melissa Dachrodt and Marcel Matthess), I conduct an econometric panel data analysis. I examine the relationship between the degree of Industry 4.0, energy consumption and energy intensity in ten manufacturing sectors in China between 2006 and 2019. The results suggest that, overall, there is no significant relationship between the degree of Industry 4.0 and energy consumption or energy intensity in manufacturing sectors in China. However, differences can be found in subgroups of sectors. I find a negative correlation of Industry 4.0 and energy intensity in highly digitalised sectors, indicating an efficiency-enhancing effect of Industry 4.0 in these sectors. On the other hand, there is a positive correlation of Industry 4.0 and energy consumption for sectors with low energy consumption, which could be explained by the fact that digitalisation, such as the automation of previously mainly labour-intensive sectors, requires energy and also induces growth effects. In the discussion section (section 6) of this dissertation, I use a classification scheme of three levels (macro, meso and micro) and of direct and indirect environmental effects to sort the empirical observations into opportunities and risks, for example with regard to the probability of rebound effects of digitalisation at the three levels.
I link the investigated actor perspectives (policy makers, industry representatives), statistical data and additional literature across the system levels and consider political economy aspects to suggest fields of action for more sustainable (digitalised) industries. The dissertation thus makes two overarching contributions to the academic and societal discourse. First, my three empirical studies expand the limited state of research at the interface between digitalisation in industry and sustainability, especially by considering selected countries in the Global South and the example of China. Second, exploring the topic through data and methods from different disciplinary contexts and taking a socio-technical point of view enables an analysis of (path) dependencies, uncertainties and interactions in the socio-technical system across different system levels, which have often not been sufficiently considered in previous studies. The dissertation thus aims to create a scientifically and practically relevant knowledge basis for a value-guided, sustainability-oriented design of digitalisation in industry.
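The within-estimator at the core of such a fixed-effects panel regression can be sketched on synthetic data. The sector count and time span mirror the study’s setup (ten sectors, 2006–2019), but the data-generating process and the assumed coefficient are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sectors, n_years = 10, 14                   # ten sectors, 2006-2019

alpha = rng.normal(size=n_sectors)            # unobserved sector fixed effects
i40 = rng.uniform(0, 1, size=(n_sectors, n_years))   # degree of Industry 4.0 (synthetic)
beta_true = -0.3                              # assumed efficiency-enhancing effect
log_intensity = (alpha[:, None] + beta_true * i40
                 + 0.05 * rng.normal(size=(n_sectors, n_years)))

# Within (demeaning) transformation removes the sector fixed effects exactly,
# so the slope can be estimated by pooled OLS on the demeaned variables.
x_dm = i40 - i40.mean(axis=1, keepdims=True)
y_dm = log_intensity - log_intensity.mean(axis=1, keepdims=True)
beta_hat = (x_dm * y_dm).sum() / (x_dm ** 2).sum()
print(round(beta_hat, 2))  # recovers a value near the assumed -0.3
```

A negative estimate here corresponds to the efficiency-enhancing association the study reports for highly digitalised sectors; in the actual analysis, standard errors, year effects and further controls would be required before drawing such a conclusion.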
The aesthetic phenomenon of the uncanny in literature and art is a spatial and gendered aesthetic concept, which is expressed in the spatial characteristics of a literary or photographic narrative. The intention of this thesis is to evaluate the entanglement of the uncanny, space, domesticity and femininity in the context of Gothic literature and photography. These four concepts can only be read in their interplay with each other and in how they each function as structural principles within the framework of Gothic fiction and photography. The literary texts, Charlotte Perkins Gilman’s “The Yellow Wall-Paper” (1892) and Shirley Jackson’s “The Lovely House” (1950) and The Haunting of Hill House (1959), as well as Francesca Woodman’s self-portraits that will be discussed further, share one particular quality: they use the haunted house motif to express the protagonist’s psychological state by transferring mental hauntings onto the narrative’s spatial layer. The establishment of a connection between the concepts at hand, the uncanny, domesticity, spatiality and femininity, is the basis for the first half of the thesis. What follows is an overview of how domestic politics and gendered perceptions of and behaviors in spaces are expressed in the Gothic mode in particular. In the literary analysis, two ways in which the Freudian uncanny constitutes itself in the haunted house narrative are examined: first, the house as the site of repetition, and second, the house as a stand-in for the maternal body. Drawing from Gernot Böhme’s and Martina Löw’s theoretical work on space and atmosphere, the thesis focuses on the different aesthetic strategies that produce the uncanny atmosphere associated with the Gothic haunted house. The female subjects at the narratives’ center are in the ambiguous process of disappearing or becoming, and this (dis)appearing act is facilitated by their haunted surroundings.
In the case of the unnamed narrator in “Wall-Paper”, her suppressed rage at her husband is mirrored in the strangled woman trapped inside the yellow wallpaper. Once she recognizes her doppelganger, the union of her two selves takes place in the short story’s dramatic climax. In Shirley Jackson’s literary works the haunted houses, protagonists in themselves, entrap, transform, and ultimately devour their female daughter-victims. The haunted houses are symbols, means and places of the continuous tradition of female entrapment within the domestic sphere, be it as wives, mothers or daughters. In Francesca Woodman’s self-portraits the themes of creation/destruction and becoming/disappearing within the ruinous (post)domestic sphere are acted out by the fragmented and blurry female figure who intriguingly oscillates between self-empowerment and submission to destruction.
The North Pamir, part of the India-Asia collision zone, essentially formed during the late Paleozoic to late Triassic–early Jurassic. Coeval to the subduction of the Turkestan ocean—during the Carboniferous Hercynian orogeny in the Tien Shan—a portion of the Paleo-Tethys ocean subducted northward and led to the formation and obduction of a volcanic arc. This Carboniferous North Pamir arc is of Andean style in the western Darvaz segment and trends towards an intraoceanic arc in the eastern Oytag segment. A suite of arc-volcanic rocks and intercalated, marine sediments together with intruded voluminous plagiogranites (trondhjemite and tonalite) and granodiorites was uplifted and eroded during the Permian, as demonstrated by widespread sedimentary unconformities. Today it constitutes a major portion of the North Pamir.
In this work, the first comprehensive Uranium-Lead (U-Pb) laser-ablation inductively-coupled-plasma mass-spectrometry (LA-ICP-MS) radiometric age data are presented along with geochemical data from the volcanic and plutonic rocks of the North Pamir volcanic arc. Zircon U-Pb data indicate a major intrusive phase between 340 and 320 Ma. The magmatic rocks show an arc signature, with more primitive signatures in the Oytag segment compared to the Darvaz segment. Volcanic rocks in the Chinese North Pamir were indirectly dated by determining the age of ocean floor alteration. We investigate calcite-filled vesicles and show that oxidative sea water and the basaltic host rock are major trace element sources. The age of ocean floor alteration, within a range of 25 Ma, constrains the extrusion age of the volcanic rocks. In the Chinese Pamir, arc-volcanic basalts have been dated to the Visean-Serpukhovian boundary. This relates the North Pamir volcanic arc to coeval units in the Tien Shan. Our findings further question the idea of a continuous Tarim-Tajik continent in the Paleozoic.
From the Permian (Guadalupian) on, a progressive sea-retreat led to continental conditions in the northeastern Pamir. Large parts of Central Asia were affected by transcurrent tectonics, while subduction of the Paleo-Tethys went on south of the accreted North Pamir arc, likely forming an accretionary wedge, representing an early stage of the later Karakul-Mazar tectonic unit. Graben systems dissected the Permian carbonate platforms that formed on top of the uplifted Carboniferous arc in the central and western North Pamir. A continental graben formed in the eastern North Pamir. Zircon U-Pb dating suggests initiation of volcanic activity at ~260 Ma. Extensional tectonics prevailed throughout the Triassic, forming the Hindukush-North Pamir rift system. New geochemistry and zircon U-Pb data tie volcanic rocks, found in the Chinese Pamir, to coeval arc-related plutonic rocks found within the Karakul-Mazar arc-accretionary complex. The sedimentary environment in the continental North Pamir rift evolved from an alluvial-plain, lake-dominated environment in the Guadalupian to a coarser-clastic, alluvial, braided-river-dominated environment in the Triassic. Volcanic activity terminated in the early Jurassic. We conducted Potassium-Argon (K-Ar) fine-fraction dating on the Shala Tala thrust fault, a major structure juxtaposing Paleozoic marine units of lower greenschist to amphibolite facies conditions against continental Permian deposits. Fault slip under epizonal conditions is dated to 204.8 ± 3.7 Ma (2σ), implying Rhaetian nappe emplacement. This pinpoints the Central–North Pamir collision, since the Shala Tala thrust was a back-thrust at that time.
“Financial Analysis” is an online course designed for professionals consisting of three MOOCs, offering a professionally and institutionally recognized certificate in finance. The course is open but not free of charge and attracts mostly professionals from the banking industry. The primary objective of this study is to identify indicators that can predict learners at high risk of failure. To achieve this, we analyzed data from a previous course that had 875 learners enrolled and active in the course during Fall 2021. We utilized correspondence analysis to examine demographic and behavioral variables.
The initial results indicate that demographic factors have a minor impact on the risk of failure in comparison to learners’ behaviors on the course platform. Two primary profiles were identified: (1) successful learners who utilized all the documents offered and spent between one and two hours per week, and (2) unsuccessful learners who used less than half of the proposed documents and spent less than one hour per week. Between these groups, at-risk students were identified as those who used more than half of the proposed documents and spent more than two hours per week. The goal is to identify those in group 1 who may be at risk of failing and those in group 2 who may succeed in the current MOOC, and to implement strategies to assist all learners in achieving success.
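Correspondence analysis, as used in the study above, decomposes a contingency table of categorical variables into a few principal axes. The abstract gives no implementation details; the following is a minimal NumPy sketch with a hypothetical learners-by-outcome table (all counts invented for illustration, not data from the study).

```python
import numpy as np

# Hypothetical contingency table: behaviour profiles (rows) x outcome (cols)
table = np.array([[120.0, 30.0],    # all documents, 1-2 h/week
                  [40.0, 25.0],     # >half documents, >2 h/week
                  [20.0, 90.0]])    # <half documents, <1 h/week

P = table / table.sum()                              # correspondence matrix
r, c = P.sum(axis=1), P.sum(axis=0)                  # row and column masses
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))   # standardised residuals
U, sv, Vt = np.linalg.svd(S, full_matrices=False)    # decompose into axes
inertia = sv ** 2                                    # principal inertia per axis
row_coords = (U * sv) / np.sqrt(r)[:, None]          # principal row coordinates
```

With only two outcome columns there is a single non-trivial axis, so the second singular value is numerically zero; the sign of the first row coordinate then separates pass-heavy from fail-heavy profiles.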
Supporting reflection in preservice teachers during university-based training is, without doubt, a crucial aspect in attaining teacher professionalism. Therefore, an on-campus seminar designed to relate theory to practice and vice versa – the so-called ‘Lehr-Lern-Labor-Seminar (LLLS)’ – was implemented over the course of five terms to stimulate reflective skills of English and Physics teacher trainees. Investigations on the effectiveness of three types of the LLLS (no video and two types of video-supported reflections) compared to a parallel group (PG) and a control group (CG) occurred in a mixed-methods quasi-experimental study. Reflective skills were elicited with vignettes, relevant covariates with questionnaires. Reflective development was then traced in the dimensions depth and breadth employing a qualitative content analysis. MANCOVA (Multivariate Analysis of Covariance) and regression analyses revealed a substantive increase of reflective depth for English and Physics teacher trainees and breadth development for English LLLS-participants in contrast to both a PG and a CG, even when controlling for the subjects’ individual prerequisites.
How to reuse inclusive STEM MOOCs in blended settings to engage young girls in scientific careers
(2023)
The FOSTWOM project (2019–2022), funded by ERASMUS+, gave METID (Politecnico di Milano) and the MOOC Técnico (Instituto Superior Técnico, University of Lisbon), together with other partners, the opportunity to support the design and creation of gender-inclusive MOOCs. Among other project outputs, we designed a toolkit and a framework that enabled the production of two MOOCs for undergraduate and graduate students in Science, Technology, Engineering and Maths (STEM) and used them as academic content free of gender stereotypes about intellectual ability. In this short paper, the authors aim to 1) briefly share the main outputs of the project; 2) tell the story of how the FOSTWOM approach, together with 3) a motivational strategy, the Heroine’s Learning Journey, proved to be effective in the context of rural and marginal areas in Brazil, with young girls as a specific target audience.
Decubitus is one of the most relevant diseases in nursing and the most expensive to treat. It is caused by sustained pressure on tissue, so it particularly affects bed-bound patients. This work lays a foundation for pressure mattress-based decubitus prophylaxis by implementing a solution to the single-frame 2D Human Pose Estimation problem.
For this, methods of Deep Learning are employed. Two approaches are examined, a coarse-to-fine Convolutional Neural Network for direct regression of joint coordinates and a U-Net for the derivation of probability distribution heatmaps.
We conclude that training our models on a combined dataset of the publicly available Bodies at Rest and SLP data yields the best results. Furthermore, various preprocessing techniques are investigated, and a hyperparameter optimization is performed to discover an improved model architecture.
Another finding indicates that the heatmap-based approach outperforms direct regression.
This model achieves a mean per-joint position error of 9.11 cm for the Bodies at Rest data and 7.43 cm for the SLP data.
We find that it generalizes well on data from mattresses other than those seen during training but has difficulties detecting the arms correctly.
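The mean per-joint position errors quoted above (9.11 cm and 7.43 cm) refer to the average Euclidean distance between predicted and ground-truth joint positions. A minimal sketch of the metric, using toy coordinates rather than the thesis data:

```python
import numpy as np

def mpjpe(pred, true):
    """Mean per-joint position error: the average Euclidean distance
    between predicted and ground-truth joint positions."""
    return float(np.mean(np.linalg.norm(pred - true, axis=-1)))

# toy example: 3 joints in 2D (cm); every prediction is off by (3, 4)
true = np.zeros((3, 2))
pred = true + np.array([3.0, 4.0])
print(mpjpe(pred, true))  # 5.0
```

The same function applies unchanged to 3D joints, since the norm is taken over the last axis.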
Additionally, we give a brief overview of the medical data annotation tool annoto, which we developed in the bachelor project, and conclude that the Scrum framework and agile practices enhanced our development workflow.
Watershed management requires an understanding of key hydrochemical processes. The Pra Basin is one of the five major river basins in Ghana with a population of over 4.2 million people. Currently, water resources management faces challenges due to surface water pollution caused by the unregulated release of untreated household and industrial waste into aquatic ecosystems and illegal mining activities. This has increased the need for groundwater as the most reliable water supply. Our understanding of groundwater recharge mechanisms and chemical evolution in the basin has been inadequate, making effective management difficult. Therefore, the main objective of this work is to gain insight into the processes that determine the hydrogeochemical evolution of groundwater quality in the Pra Basin. The combined use of stable isotope, hydrochemistry, and water level data provides the basis for conceptualizing the chemical evolution of groundwater in the Pra Basin. For this purpose, the origin and evaporation rates of water infiltrating into the unsaturated zone were evaluated. In addition, Chloride Mass Balance (CMB) and Water Table Fluctuations (WTF) were considered to quantify groundwater recharge for the basin. Indices such as water quality index (WQI), sodium adsorption ratio (SAR), Wilcox diagram, and the US Salinity Laboratory (USSL) salinity diagram were used in this study to determine the quality of the resource for use as drinking water and for irrigation purposes. Due to the heterogeneity of the hydrochemical data, the statistical techniques of hierarchical cluster and factor analysis were applied to subdivide the data according to their spatial correlation. A conceptual hydrogeochemical model was developed and subsequently validated by applying combinatorial inverse and reaction pathway-based geochemical models to determine plausible mineral assemblages that control the chemical composition of the groundwater.
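Of the recharge methods mentioned, the Chloride Mass Balance is the simplest to state: recharge equals precipitation scaled by the ratio of the chloride concentration in rainfall to that in groundwater, R = P · Cl_p / Cl_gw. A one-line sketch with invented example values (not figures from the thesis):

```python
def cmb_recharge(precip_mm, cl_precip_mgl, cl_gw_mgl):
    """Chloride Mass Balance: recharge R = P * Cl_p / Cl_gw.
    Precipitation in mm/yr, chloride as concentrations in mg/L."""
    return precip_mm * cl_precip_mgl / cl_gw_mgl

# invented example: 1500 mm/yr rainfall, 0.5 mg/L Cl in rain,
# 15 mg/L Cl in groundwater
print(cmb_recharge(1500.0, 0.5, 15.0))  # 50.0  (mm/yr)
```

The higher the chloride enrichment in groundwater relative to rainfall, the smaller the inferred recharge fraction.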
The interactions between water and rock determine the groundwater quality in the Pra Basin. The results underline that the groundwater is of good quality and can be used for drinking water and irrigation purposes. It was demonstrated that there is a large groundwater potential to meet the entire Pra Basin’s current and future water demands. The main recharge area was identified as the northern zone, while the southern zone is the discharge area. The predominant influence of weathering of silicate minerals plays a key role in the chemical evolution of the groundwater. The work presented here provides fundamental insights into the hydrochemistry of the Pra Basin and provides data important to water managers for informed decision-making in planning and allocating water resources for various purposes. A novel inverse modelling approach was used in this study to identify different mineral compositions that determine the chemical evolution of groundwater in the Pra Basin. This modelling technique has the potential to simulate the composition of groundwater at the basin scale with large hydrochemical heterogeneity, using average water composition to represent established spatial groupings of water chemistry.
Understanding hydrological processes is of fundamental importance for the Vietnamese national food security and the livelihood of the population in the Vietnamese Mekong Delta (VMD). As a consequence of sparse data in this region, however, hydrologic processes, such as the controlling processes of precipitation, the interaction between surface and groundwater, and groundwater dynamics, have not been thoroughly studied. The lack of this knowledge may negatively impact the long-term strategic planning for sustainable groundwater resources management and may result in insufficient groundwater recharge and freshwater scarcity. It is essential to develop useful methods for a better understanding of hydrological processes in such data-sparse regions. The goal of this dissertation is to advance methodologies that can improve the understanding of fundamental hydrological processes in the VMD, based on the analyses of stable water isotopes and monitoring data. The thesis mainly focuses on the controlling processes of precipitation, the mechanism of surface–groundwater interaction, and the groundwater dynamics. These processes have not been fully addressed in the VMD so far. The thesis is based on statistical analyses of isotopic data from the Global Network of Isotopes in Precipitation (GNIP), of meteorological and hydrological data from Vietnamese agencies, and of the stable water isotopes and monitoring data collected as part of this work.
First, the controlling processes of precipitation were quantified by the combination of trajectory analysis, multi-factor linear regression, and relative importance analysis (hereafter, a model-based statistical approach). The validity of this approach is confirmed by similar, but mainly qualitative results obtained in other studies. The total variation in precipitation isotopes (δ18O and δ2H) can be better explained by multiple linear regression (up to 80%) than by single-factor linear regression (30%). The relative importance analysis indicates that atmospheric moisture regimes control precipitation isotopes rather than local climatic conditions. The most crucial factor is the upstream rainfall along the trajectories of air mass movement. However, the influences of regional and local climatic factors vary in importance over the seasons. The developed model-based statistical approach is a robust tool for the interpretation of precipitation isotopes and could also be applied to understand the controlling processes of precipitation in other regions.
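The model-based statistical approach combines a multiple linear regression of precipitation isotopes on candidate drivers with an assessment of how much each predictor contributes. A self-contained sketch on synthetic data (predictor names and coefficients are invented; the thesis's roughly 80% explained variance motivates the construction):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# synthetic, independent unit-variance drivers (invented for illustration)
upstream_rain = rng.normal(size=n)   # rainfall along the air-mass trajectory
local_temp = rng.normal(size=n)      # local temperature
d18o = -8.0 - 1.5 * upstream_rain + 0.3 * local_temp + rng.normal(0.0, 0.5, n)

# ordinary least squares fit of the multi-factor linear model
X = np.column_stack([np.ones(n), upstream_rain, local_temp])
beta, *_ = np.linalg.lstsq(X, d18o, rcond=None)
pred = X @ beta
r2 = 1.0 - np.sum((d18o - pred) ** 2) / np.sum((d18o - d18o.mean()) ** 2)
# with independent unit-variance predictors, the magnitude of each
# coefficient doubles as a crude relative-importance measure
```

Here the upstream-rainfall coefficient dominates, mirroring the finding that moisture-regime variables matter more than local conditions.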
Second, the concept of the two-component lumped-parameter model (LPM) in conjunction with stable water isotopes was applied to examine the surface–groundwater interaction in the VMD. A calibration framework was also set up to evaluate the behaviour, parameter identifiability, and uncertainties of two-component LPMs. The modelling results provided insights into the subsurface flow conditions, the recharge contributions, and the spatial variation of groundwater transit time. The subsurface flow conditions at the study site can be best represented by the linear-piston flow distribution. The contributions of the recharge sources change with distance to the river. The mean transit time (mTT) of riverbank infiltration increases with the length of the horizontal flow path and the decreasing gradient between river and groundwater. River water infiltrates horizontally mainly via the highly permeable aquifer, resulting in short mTTs (<40 weeks) for locations close to the river (<200 m). The vertical infiltration from precipitation takes place primarily via a low-permeable overlying aquitard, resulting in considerably longer mTTs (>80 weeks). Notably, the transit time of precipitation infiltration is independent of the distance to the river. All these results are hydrologically plausible and could be quantified by the presented method for the first time. This study indicates that the highly complex mechanism of surface–groundwater interaction at riverbank infiltration systems can be conceptualized by exploiting two-component LPMs. It is illustrated that the model concept can be used as a tool to investigate the hydrological functioning of mixing processes and the flow path of multiple water components in riverbank infiltration systems.
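A lumped-parameter model of the kind described treats the tracer signal in groundwater as the convolution of the tracer input with a transit-time distribution. A minimal sketch using a single exponential distribution (the thesis uses a two-component linear-piston variant; all values here are illustrative):

```python
import numpy as np

def exp_ttd_output(tracer_in, mtt):
    """Convolve a tracer input series with an exponential transit-time
    distribution of mean `mtt` (in sample steps): a one-component LPM."""
    t = np.arange(len(tracer_in))
    g = np.exp(-t / mtt)
    g /= g.sum()                              # normalise discrete weights
    return np.convolve(tracer_in, g)[: len(tracer_in)]

weeks = np.arange(520)                         # ten years of weekly samples
d18o_in = -8.0 + 2.0 * np.sin(2 * np.pi * weeks / 52)   # seasonal input
d18o_out = exp_ttd_output(d18o_in, mtt=40)
# the longer the mean transit time, the more the seasonal isotope
# amplitude is damped in the simulated groundwater signal
```

Fitting the damping (and phase shift) of the observed groundwater signal against such simulated outputs is what yields the mean transit times reported above.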
Lastly, a suite of time series analysis approaches was applied to examine the groundwater dynamics in the VMD. The assessment was focused on the time-variant trends of groundwater levels (GWLs), the groundwater memory effect (representing the time that an aquifer holds water), and the hydraulic response between surface water and multi-layer alluvial aquifers. The analysis indicates that the aquifers act as low-pass filters to reduce the high-frequency signals in the GWL variations, and limit the recharge to the deep groundwater. Groundwater abstraction exceeded groundwater recharge between 1997 and 2017, leading to the decline of groundwater levels (0.01-0.55 m/year) in all considered aquifers in the VMD. The memory effect varies according to the geographical location, being shorter in shallow aquifers and flood-prone areas and longer in deep aquifers and coastal regions. Groundwater depth, season, and location primarily control the variation of the response time between the river and alluvial aquifers. These findings are important contributions to the hydrogeological literature of a little-known groundwater system in an alluvial setting. It is suggested that time series analysis can be used as an efficient tool to understand groundwater systems where resources are insufficient to develop a physical-based groundwater model.
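One common way to estimate the hydraulic response time between a river and an aquifer from paired time series, consistent with the analysis described (though the abstract does not name the exact estimator), is the lag at which their cross-correlation peaks:

```python
import numpy as np

def response_lag(river, gw):
    """Lag (in samples) at which the cross-correlation of the two
    mean-removed series peaks; positive means gw lags the river."""
    r = river - river.mean()
    g = gw - gw.mean()
    cc = np.correlate(g, r, mode="full")
    return int(np.argmax(cc) - (len(r) - 1))

t = np.arange(200)
river = np.sin(2 * np.pi * t / 50)   # synthetic river-stage signal
gw = np.roll(river, 7)               # aquifer responds 7 steps later
print(response_lag(river, gw))  # 7
```

Applied to observed river-stage and groundwater-level records, the same estimator would show the lag growing with groundwater depth, as reported above.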
This doctoral thesis demonstrates that important aspects of hydrological processes can be understood by statistical analysis of stable water isotope and monitoring data. The approaches developed in this thesis can be easily transferred to regions in similar tropical environments, particularly those in alluvial settings. The results of the thesis can be used as a baseline for future isotope-based studies and contribute to the hydrogeological literature of little-known groundwater systems in the VMD.
Aptamers are single-stranded DNA (ssDNA) or RNA molecules that can bind specifically and with high affinity for target molecules due to their unique three-dimensional structure. For this reason, they are often compared to antibodies and sometimes even referred to as “chemical antibodies”. They are simple and inexpensive to synthesize, easy to modify, and smaller than conventional antibodies. Enzymes, especially hydrolases, are interesting targets in this context. This class of enzymes is capable of hydrolytically cleaving various macromolecules such as proteins, as well as smaller molecules such as antibiotics. Hence, they play an important role in many biological processes including diseases and their treatment. Hydrolase detection as well as the understanding of their function is therefore of great importance for diagnostics and therapy. Due to their various desirable features compared to antibodies, aptamers are being discussed as alternative agents for analytical and diagnostic use in various applications. The use of aptamers in therapy is also frequently investigated, as the binding of aptamers can have effects on the catalytic activity, protein-protein interactions, or proteolytic cascades. Aptamers are generated by an in vitro selection process. Potential aptamer candidates are selected from a pool of enriched nucleic acid sequences with affinity to the target, and their binding affinity and specificity are investigated. This is one of the most important steps in aptamer generation to obtain specific aptamers with high affinity for use in analytical and diagnostic applications. The binding properties or binding domains and their effects on enzyme functions form the basis for therapeutic applications.
In this work, the binding properties of DNA aptamers against two different hydrolases were investigated. In view of their potential utility for analytical methods, aptamers against human urokinase (uPA) and New Delhi metallo-β-lactamase-1 (NDM-1) were evaluated for their binding affinity and specificity using different methods. Using the uPA aptamers, a protocol for measuring the binding kinetics of an aptamer-protein-interaction by surface plasmon resonance spectroscopy (SPR) was developed. Based on the increased expression of uPA in different types of cancer, uPA is discussed as a prognostic and diagnostic tumor marker. As uPA aptamers showed different binding sites on the protein, microtiter plate-based aptamer sandwich assay systems for the detection of uPA were developed. Because of the function of urokinase in cancer cell proliferation and metastasis, uPA is also discussed as a therapeutic target. In this regard, the different binding sites of aptamers showed different effects on uPA function. In vitro experiments demonstrated both inhibition of uPA binding to its receptor as well as the inhibition of uPA catalytic activity for different aptamers. Thus, in addition to their specificity and affinity for their targets, the utility of the aptamers for potential diagnostic and therapeutic applications was demonstrated. First, as an alternative inhibitor of human urokinase for therapeutic purposes, and second, as valuable recognition molecules for the detection of urokinase, as a prognostic and diagnostic marker for cancer, and for NDM-1 to detect resistance to carbapenem antibiotics.
Cells are built from a variety of macromolecules and metabolites. Both the proteome and the metabolome are highly dynamic and responsive to environmental cues and developmental processes. But it is not their sheer numbers but their interactions that enable life. The protein-protein (PPI) and protein-metabolite interactions (PMI) facilitate and regulate all aspects of cell biology, from metabolism to mitosis. Therefore, the study of PPIs and PMIs and their dynamics in a cell-wide context is of great scientific interest. In this dissertation, I aim to chart a map of the dynamic PPIs and PMIs across metabolic and cellular transitions. As a model system, I study the shift from the fermentative to the respiratory growth, known as the diauxic shift, in the budding yeast Saccharomyces cerevisiae. To do so, I apply a co-fractionation mass spectrometry (CF-MS) based method, dubbed protein metabolite interactions using size separation (PROMIS). PROMIS, as well as comparable methods, will be discussed in detail in chapter 1.
Since PROMIS was developed originally for Arabidopsis thaliana, in chapter 2, I will describe the adaptation of PROMIS to S. cerevisiae. Here, the obtained results demonstrated a wealth of protein-metabolite interactions and experimentally validated 225 previously predicted PMIs. Orthogonal, targeted approaches validating the interactions of a proteogenic dipeptide, Ser-Leu, identified five novel protein interactors. One of those proteins, phosphoglycerate kinase, is inhibited by Ser-Leu, implicating the dipeptide in the regulation of glycolysis.
In chapter 3, I present PROMISed, a novel web-tool designed for the analysis of PROMIS- and other CF-MS-datasets. Starting with raw fractionation profiles, PROMISed performs data pre-processing and profile deconvolution, scores differences in fractionation profiles between experimental conditions, and ultimately charts interaction networks. PROMISed comes with a user-friendly graphical interface and thus enables the routine analysis of CF-MS data by non-computational biologists.
Finally, in chapter 4, I applied PROMIS in combination with the isothermal shift assay to the diauxic shift in S. cerevisiae to study changes in the PPI and PMI landscape across this metabolic transition. I found a major rewiring of protein-protein-metabolite complexes, exemplified by the disassembly of the proteasome in the respiratory phase, the loss of interaction of an enzyme involved in amino acid biosynthesis and its cofactor, as well as phase and structure specific interactions between dipeptides and enzymes of central carbon metabolism.
In chapter 5, I summarize the presented results and discuss a strategy to unravel the potential patterns of dipeptide accumulation and binding specificities. Lastly, I recapitulate recently postulated guidelines for CF-MS experiments and give an outlook on protein interaction studies in the near future.