The deformation style of mountain belts is greatly influenced by the upper-plate architecture created during preceding deformation phases. The Mesozoic Salta Rift extensional phase created a dominant structural and lithological framework that controls Cenozoic deformation and exhumation patterns in the Central Andes. Studying the nature of these pre-existing anisotropies is key to understanding the spatiotemporal distribution of exhumation and its controlling factors. The Eastern Cordillera, in particular, has a structural grain that is in part controlled by Salta Rift structures and their orientation relative to Andean shortening. As a result, there are areas in which Andean deformation prevails and areas where the influence of the Salta Rift is the main control on deformation patterns.
Between 23 and 24°S, lithological and structural heterogeneities imposed by the Lomas de Olmedo sub-basin (Salta Rift basin) affect the development of the Eastern Cordillera fold-and-thrust belt. The inverted northern margin of the sub-basin now forms the southern boundary of the intermontane Cianzo basin. The former western margin of the sub-basin is located at the confluence of the Subandean Zone, the Santa Barbara System, and the Eastern Cordillera. Here, the Salta Rift basin architecture is responsible for the distribution of these morphotectonic provinces. In this study, we use a multi-method approach consisting of low-temperature (U-Th-Sm)/He and apatite fission track thermochronology, detrital geochronology, and structural and sedimentological analyses to investigate the Mesozoic structural inheritance of the Lomas de Olmedo sub-basin and Cenozoic exhumation patterns.
Characterization of the extension-related Tacurú Group as an intermediate succession between Paleozoic basement and the syn-rift infill of the Lomas de Olmedo sub-basin reveals a Jurassic maximum depositional age. Zircon (U-Th-Sm)/He cooling ages record a pre-Cretaceous onset of exhumation for the rift shoulders in the northern part of the sub-basin, whereas the western shoulder shows a more recent onset (140–115 Ma). Variations in the sedimentary thickness of syn- and post-rift strata document the evolution of accommodation space in the sub-basin. While the thickness of syn-rift strata increases rapidly toward the northern basin margin, the post-rift strata thickness decreases toward the margin and forms a condensed section on the rift shoulder.
Inversion of Salta Rift structures commenced between the late Oligocene and Miocene (24–15 Ma) in the ranges surrounding the Cianzo basin. The eastern and western limbs of the Cianzo syncline, located in the hanging wall of the basin-bounding Hornocal fault, show diachronous exhumation. At the same time, western fault blocks of the Tilcara Range, south of the Cianzo basin, began exhuming in the late Oligocene to early Miocene (26–16 Ma). Eastward propagation to the frontal thrust and to the Paleozoic strata east of the Tilcara Range occurred in the middle Miocene (22–10 Ma) and the late Miocene–early Pliocene (10–4 Ma), respectively.
Traditional ways of reducing flood risk have encountered limitations in a climate-changing and rapidly urbanizing world. For instance, maintaining a consistent level of security demands massive investment, while flood protection infrastructure can create a false sense of security that increases the flood exposure of people and property. Against this background, nature-based solutions (NBS) have gained popularity as a sustainable alternative for dealing with diverse societal challenges such as climate change and biodiversity loss. In particular, their ability to reduce flood risks while also offering ecological benefits has recently received global attention. The diverse co-benefits of NBS that favor both humans and nature are viewed as promising a wide endorsement of NBS. However, people's perceptions of NBS are not always positive. Local resistance to NBS projects, as well as decision-makers' and practitioners' unwillingness to adopt NBS, has been identified as a bottleneck to the successful realization and mainstreaming of NBS. In this regard, there is a growing need to investigate people's perceptions of NBS. Current research lacks an integrative perspective on the attitudinal and contextual factors that guide perceptions of NBS; empirical evidence is scarce, and the few existing studies report conflicting results without grounding in underlying theories. This has led to the overarching research question of this dissertation: "What shapes people's perceptions of NBS in the context of flooding?" The dissertation aims to answer the following sub-questions in the three papers that make up this dissertation: 1. What are the topics reflected in the previous literature influencing perceptions of NBS as a means to reduce hydro-meteorological risks? (Paper I) 2. What are the stimulating and hampering attitudinal and contextual factors for mainstreaming NBS for flood risk management?
How are NBS conceptualized? (Paper II) 3. How are public attitudes toward NBS projects shaped? How do risk- and place-related factors shape individual attitudes toward NBS? (Paper III) This dissertation follows an integrative approach that considers "place" and "risk", as well as the surrounding context, by analyzing attitudinal (i.e., individual) and contextual (i.e., systemic) factors. "Place" is mainly concerned with affective elements (e.g., bond to locality and natural environment), whereas "risk" is related to cognitive elements (e.g., threat appraisal). The surrounding context provides systemic drivers and barriers that may interfere with the influence of place and risk on perceptions of NBS. To empirically address the research questions, the current state of knowledge about people's perceptions of NBS for flood risks was first investigated through a systematic review (Paper I). Based on these insights, a case study of South Korea was used to identify key contextual and attitudinal factors for mainstreaming NBS through the lens of experts (Paper II). Lastly, a citizen survey was used to investigate the relationships between the concepts discussed in Papers I and II using structural equation modeling, focusing on the core concepts of risk and place (Paper III). As a result, Paper I identified the key topics relating to people's perceptions, including the perceived value of co-benefits, perceived effectiveness of risk reduction, participation of stakeholders, socio-economic and place-specific conditions, environmental attitude, and uncertainty of NBS. Paper II confirmed Paper I's findings regarding attitudinal factors. In addition, several contextual hampering or stimulating factors were found to be similar to those of other emerging technologies (i.e., path dependence, lack of operational and systemic capacity).
Among these, one distinctive feature of NBS contexts, at least in the South Korean case, is the politicization of NBS, which can lead to a polarization of ideas and undermine the decision-making process. Finally, Paper III provides a framework built on the core topics (i.e., place and risk) that were considered critical in Papers I and II. This place-based risk appraisal model (PRAM) connects people at risk with the places where hazards (i.e., floods) and interventions (i.e., NBS) take place. The empirical analysis shows that, among the place-related variables, nature bonding was a positive predictor of the perceived risk-reduction effectiveness of NBS, and place identity was a negative predictor of supportive attitude. Among the risk-related variables, threat appraisal had a negative effect on perceived risk-reduction effectiveness and supportive attitude, while well-communicated information, trust in flood risk management, and perceived co-benefits were positive predictors. This dissertation demonstrates that the place and risk attributes of NBS shape people's perceptions of NBS. To optimize NBS implementation, it is necessary to consider the meanings and values held in place before project implementation, and how these attributes interact with individual and/or community risk profiles and other contextual factors. With the increasing necessity of using NBS to lower flood risks, these results offer important suggestions for future NBS project strategy and NBS governance.
Mountain ranges can fundamentally influence the physical and chemical processes that shape Earth's surface. With elevations of up to several kilometers, they create climatic enclaves by interacting with atmospheric circulation and hydrologic systems, thus leading to a specific distribution of flora and fauna. As a result, the interiors of many Cenozoic mountain ranges are characterized by an arid climate, internally drained and sediment-filled basins, as well as unique ecosystems that are isolated from the adjacent humid, low-elevation regions along their flanks and forelands. These high-altitude interiors of orogens are often characterized by low relief and coalesced sedimentary basins, commonly referred to as plateaus: tectono-geomorphic entities that result from the complex interactions between mantle-driven geological and tectonic conditions and superposed atmospheric and hydrological processes. The efficiency of these processes and the fate of orogenic plateaus are therefore closely tied to the balance of constructive and destructive processes – tectonic uplift and erosion, respectively. Numerous geological studies have shown that mountain ranges are delicate systems that can be obliterated by an imbalance of these underlying forces. As such, Cenozoic mountain ranges might not persist on long geological timescales and will be destroyed by erosion or tectonic collapse. Advancing headward erosion of river systems that drain the flanks of the orogen may ultimately sever the internal drainage conditions and the storage of sediments within the plateau, leading to the destruction of plateau morphology and connectivity with the foreland. Orogenic collapse may be associated with the changeover from a compressional stress field with regional shortening and topographic growth to a tensional stress field with regional extensional deformation and ensuing incision of the plateau.
While the latter case is well expressed by active extensional faults in the interior parts of the Tibetan Plateau and the Himalaya, for example, the former is thought to have breached the internally drained areas of the high-elevation sectors of the Iranian Plateau.
In the case of the Andes of South America and their internally drained Altiplano-Puna Plateau, signs of both processes have been previously described. However, in the orogenic collapse scenario, the nature of the extensional structures has primarily been investigated at the northern and southern terminations of the plateau; in some cases, the extensional faults were even regarded as inactive. After a shallow earthquake in 2020 within the Eastern Cordillera of Argentina that was associated with extensional deformation, the state of active deformation and the character of the stress field in the central parts of the plateau received renewed interest as a means to explain a series of extensional structures in the northernmost sectors of the plateau in north-western Argentina. This study addresses (1) the issue of tectonic orogenic collapse of the Andes and the destruction of plateau morphology, by studying the fill and erosion history of the central eastern Andean Plateau using sedimentological and geochronological data, and (2) the kinematics, timing, and magnitude of extensional structures that form well-expressed fault scarps in sediments of the regional San Juan del Oro surface, which is an integral part of the Andean Plateau and adjacent morphotectonic provinces to the east.
Importantly, sediment properties and depositional ages document that the San Juan del Oro Surface was not part of the internally drained Andean Plateau, but rather associated with a foreland-directed drainage system, which was modified by the Andean orogeny and became successively incorporated into the orogen by the eastward migration of the Andean deformation front during late Miocene – Pliocene time. Structural and geomorphic observations within the plateau indicate that extensional processes must have been repeatedly active between the late Miocene and Holocene, supporting the notion of plateau-wide extensional processes, potentially associated with Mw ~ 7 earthquakes. The close relationship between extensional joints and fault orientations underscores that σ3 was oriented horizontally in a NW–SE direction and σ1 was vertical. This unambiguously documents that the observed deformation is related to gravitational forces that drive the orogenic collapse of the plateau. The applied geochronological analyses suggest that normal faulting in the northern Puna was active at about 3 Ma, based on paired cosmogenic nuclide dating of sediment fill units. Possibly due to regional normal faulting, the drainage system within the plateau was modified, promoting fluvial incision.
Sulfur is essential for the functionality of several important biomolecules in humans, such as iron-sulfur clusters, tRNAs, the molybdenum cofactor, and some vitamins. The trafficking of sulfur involves proteins collectively called sulfurtransferases. Among these are TUM1, MOCS3, and NFS1.
This research investigated the role of TUM1 in molybdenum cofactor (Moco) biosynthesis and cytosolic tRNA thiolation in humans. The rhodanese-like protein MOCS3 and the L-cysteine desulfurase NFS1 have previously been demonstrated to interact with TUM1. These interactions suggested a dual function of TUM1 in sulfur transfer for Moco biosynthesis and cytosolic tRNA thiolation. TUM1 deficiency has been implicated in a rare inheritable disorder known as mercaptolactate-cysteine disulfiduria (MCDU), which is associated with mental disorders similar to the neurological symptoms of sulfite oxidase deficiency. Therefore, the role of TUM1 as a sulfurtransferase in humans was investigated in CRISPR/Cas9-generated TUM1 knockout HEK 293T cell lines.
For the first time, TUM1 was implicated in Moco biosynthesis in humans by quantifying the intermediate product cPMP and Moco using HPLC. Compared to the wild-type, the TUM1 knockout cell lines showed an accumulation of cPMP and a reduction of Moco. The effect of the TUM1 knockout on the activity of a Moco-dependent enzyme, sulfite oxidase, was also investigated. Sulfite oxidase is essential for the detoxification of sulfite to sulfate. Sulfite oxidase activity and protein abundance were reduced due to the lower availability of Moco. This shows that TUM1 is essential for efficient sulfur transfer for Moco biosynthesis. In addition, a reduction in cystathionine γ-lyase was quantified in TUM1 knockout cells, a possible coping mechanism of the cell against sulfite production through cysteine catabolism.
Secondly, the involvement of TUM1 in tRNA thio-modification at the wobble uridine 34 position was reported by quantifying the amounts of mcm5s2U and mcm5U via HPLC. A reduction of mcm5s2U and an accumulation of mcm5U were observed in the nucleoside analysis of TUM1 knockout cells. Herein, exogenous treatment with NaHS, a hydrogen sulfide donor, rescued the Moco biosynthesis, cytosolic tRNA thiolation, and cell proliferation deficits in TUM1 knockout cells.
Further, TUM1 was shown to impact mitochondrial bioenergetics, as demonstrated by measuring the oxygen consumption rate and the extracellular acidification rate (ECAR) with the Seahorse Cell Mito Stress analyzer. A reduction in total ATP production was also measured. This reveals how important TUM1 is for H2S biosynthesis in the mitochondria of HEK 293T cells.
Finally, the inhibition of NFS1 in HEK 293T cells and of purified NFS1 protein by 2-methylene-3-quinuclidinone (MQ) was demonstrated via spectrophotometric and radioactivity quantification. Inhibition of NFS1 by MQ further affected the activity of the iron-sulfur cluster-dependent enzyme aconitase.
Research within the framework of Basic Psychological Need Theory (BPNT) finds strong associations between basic need frustration and depressive symptoms. This study examined the role of rumination as an underlying mechanism in the association between basic psychological need frustration and depressive symptoms. A cross-sectional sample of N = 221 adults (55.2% female, mean age = 27.95, range = 18–62, SD = 10.51) completed measures assessing their level of basic psychological need frustration, rumination, and depressive symptoms. Correlational analyses and multiple mediation models were conducted. Brooding partially mediated the relation between need frustration and depressive symptoms. BPNT and Response Styles Theory are compatible and can further advance knowledge about depression vulnerabilities.
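The mediation logic described above (need frustration → brooding → depressive symptoms) can be sketched in a few lines. The following is a hedged illustration on synthetic data, not the study's analysis: the path coefficients, sample size, and variable names are invented, and the code only shows how an indirect effect (the product of paths a and b) is estimated alongside the direct path c′ with ordinary least squares.

```python
import random

random.seed(7)

def ols_slope(x, y):
    # OLS slope of y on a single predictor x
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def ols_two(x1, x2, y):
    # OLS coefficients for y ~ x1 + x2 via the 2x2 normal equations
    n = len(y)
    m1, m2, my = sum(x1) / n, sum(x2) / n, sum(y) / n
    c11 = sum((a - m1) ** 2 for a in x1)
    c22 = sum((a - m2) ** 2 for a in x2)
    c12 = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
    c1y = sum((a - m1) * (b - my) for a, b in zip(x1, y))
    c2y = sum((a - m2) * (b - my) for a, b in zip(x2, y))
    det = c11 * c22 - c12 ** 2
    return (c22 * c1y - c12 * c2y) / det, (c11 * c2y - c12 * c1y) / det

# Hypothetical data-generating process with made-up path coefficients
n = 5000
X = [random.gauss(0, 1) for _ in range(n)]                 # need frustration
M = [0.5 * x + random.gauss(0, 1) for x in X]              # brooding (mediator)
Y = [0.3 * x + 0.6 * m + random.gauss(0, 1)                # depressive symptoms
     for x, m in zip(X, M)]

a_hat = ols_slope(X, M)                 # path a: X -> M
c_prime_hat, b_hat = ols_two(X, M, Y)   # direct path c' and path b: M -> Y
indirect = a_hat * b_hat                # mediated (indirect) effect
total = ols_slope(X, Y)                 # total effect c
# In linear OLS the decomposition c = c' + a*b holds exactly:
assert abs(total - (c_prime_hat + indirect)) < 1e-9
```

Partial mediation, as reported for brooding, corresponds to both the direct path `c_prime_hat` and the indirect effect `indirect` being meaningfully different from zero.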
Cells are built from a variety of macromolecules and metabolites. Both the proteome and the metabolome are highly dynamic and responsive to environmental cues and developmental processes. But it is not their sheer numbers, but their interactions, that enable life. Protein-protein interactions (PPIs) and protein-metabolite interactions (PMIs) facilitate and regulate all aspects of cell biology, from metabolism to mitosis. Therefore, the study of PPIs and PMIs and their dynamics in a cell-wide context is of great scientific interest. In this dissertation, I aim to chart a map of the dynamic PPIs and PMIs across metabolic and cellular transitions. As a model system, I study the shift from fermentative to respiratory growth, known as the diauxic shift, in the budding yeast Saccharomyces cerevisiae. To do so, I apply a co-fractionation mass spectrometry (CF-MS) based method, dubbed protein-metabolite interactions using size separation (PROMIS). PROMIS, as well as comparable methods, will be discussed in detail in chapter 1.
Since PROMIS was originally developed for Arabidopsis thaliana, in chapter 2 I describe its adaptation to S. cerevisiae. The obtained results demonstrated a wealth of protein-metabolite interactions and experimentally validated 225 previously predicted PMIs. Applying orthogonal, targeted approaches to validate the interactions of a proteogenic dipeptide, Ser-Leu, five novel protein interactors were found. One of these proteins, phosphoglycerate kinase, is inhibited by Ser-Leu, implicating the dipeptide in the regulation of glycolysis.
In chapter 3, I present PROMISed, a novel web tool designed for the analysis of PROMIS and other CF-MS datasets. Starting with raw fractionation profiles, PROMISed enables data pre-processing and profile deconvolution, scores differences in fractionation profiles between experimental conditions, and ultimately charts interaction networks. PROMISed comes with a user-friendly graphical interface and thus enables the routine analysis of CF-MS data by non-computational biologists.
Finally, in chapter 4, I applied PROMIS in combination with the isothermal shift assay to the diauxic shift in S. cerevisiae to study changes in the PPI and PMI landscape across this metabolic transition. I found a major rewiring of protein-protein-metabolite complexes, exemplified by the disassembly of the proteasome in the respiratory phase, the loss of interaction between an enzyme involved in amino acid biosynthesis and its cofactor, as well as phase- and structure-specific interactions between dipeptides and enzymes of central carbon metabolism.
In chapter 5, I summarize the presented results and discuss a strategy to unravel the potential patterns of dipeptide accumulation and binding specificities. Lastly, I recapitulate recently postulated guidelines for CF-MS experiments and give an outlook on protein interaction studies in the near future.
Point processes are a common methodology to model sets of events. From earthquakes to social media posts, from the arrival times of neuronal spikes to the timing of crimes, from stock prices to disease spreading -- these phenomena can be reduced to occurrences of events concentrated in points. Often, these events happen one after another, defining a time series.
Models of point processes can be used to deepen our understanding of such events and for classification and prediction. Such models include an underlying random process that generates the events. This work uses Bayesian methodology to infer the underlying generative process from observed data. Our contribution is twofold -- we develop new models and new inference methods for these processes.
We propose a model that extends the family of point processes in which the occurrence of an event depends on the previous events. This family is known as Hawkes processes. Whereas most existing models of such processes assume that past events have only an excitatory effect on future events, we focus on the newly developed nonlinear Hawkes process, where past events can have both excitatory and inhibitory effects. After defining the model, we present its inference method and apply it to data from different fields, among others, to neuronal activity.
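As a hedged illustration of the process class described above (not the thesis' actual model or its Bayesian inference method), a nonlinear Hawkes process with a rectified intensity and exponentially decaying kernels can be simulated with Ogata's thinning algorithm; all rate parameters below are arbitrary.

```python
import math
import random

def simulate_nonlinear_hawkes(mu, w_excite, w_inhibit, tau, t_end, seed=1):
    """Ogata thinning for a rectified (nonlinear) Hawkes process with
    intensity lambda(t) = max(0, mu + sum_i w_i * exp(-(t - t_i) / tau)).
    Each accepted event is randomly marked excitatory or inhibitory."""
    rng = random.Random(seed)
    events = []   # (event time, kernel weight)
    times = []
    t = 0.0
    while True:
        # Upper bound: baseline plus the excitatory part only; it can only
        # decay until the next event, so it dominates lambda after t.
        lam_bar = mu + sum(w * math.exp(-(t - ti) / tau)
                           for ti, w in events if w > 0)
        t += rng.expovariate(lam_bar)       # propose next candidate time
        if t >= t_end:
            return times
        lam_t = max(0.0, mu + sum(w * math.exp(-(t - ti) / tau)
                                  for ti, w in events))
        if rng.random() * lam_bar <= lam_t:  # accept with prob lam_t/lam_bar
            w = w_excite if rng.random() < 0.5 else w_inhibit
            events.append((t, w))
            times.append(t)

times = simulate_nonlinear_hawkes(mu=1.0, w_excite=0.8,
                                  w_inhibit=-0.8, tau=1.0, t_end=50.0)
```

With only positive weights this reduces to the classical (linear) Hawkes process; the `max(0, ...)` rectification is what allows inhibitory kernels without the intensity turning negative.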
The second model described in the thesis concerns a specific instance of point processes --- the decision process underlying human gaze control. This process results in a series of fixated locations in an image. We developed a new model to describe this process, motivated by the known Exploration--Exploitation dilemma. Alongside the model, we present a Bayesian inference algorithm to infer the model parameters.
Remaining in the realm of human scene viewing, we identify the lack of best practices for Bayesian inference in this field. We survey four popular algorithms and compare their performances for parameter inference in two scan path models.
The novel models and inference algorithms presented in this dissertation enrich the understanding of point process data and allow us to uncover meaningful insights.
Lanthanide-based ceria nanomaterials are important practical materials due to their redox properties, which are useful in technology and the life sciences. This PhD thesis examined various properties and the potential for catalytic and bio-applications of Ln3+-doped ceria nanomaterials. Ce1-xGdxO2-y: Eu3+, gadolinium-doped ceria (GDC) (0 ≤ x ≤ 0.4) nanoparticles were synthesized by flame spray pyrolysis (FSP) and studied, followed by 15 % CexZr1-xO2-y: Eu3+|YSZ (0 ≤ x ≤ 1) nanocomposites. Furthermore, Ce1-xYbxO2-y (0.004 ≤ x ≤ 0.22) nanoparticles were synthesized by thermal decomposition and characterized. Finally, CeO2-y: Eu3+ nanoparticles were synthesized by a microemulsion method, biofunctionalized, and characterized. The studies undertaken present a novel approach to structurally elucidating ceria-based nanomaterials by way of Eu3+ and Yb3+ spectroscopy and processing the spectroscopic data with the multi-way decomposition method PARAFAC. Data sets of three variables – excitation wavelength, emission wavelength, and time – were used to perform the deconvolution of the spectra.
GDC nanoparticles from FSP are nano-sized and of roughly cubic shape and crystal structure (Fm3̅m). Raman data revealed four vibrational modes exhibited by the Gd3+-containing samples, whereas CeO2-y: Eu3+ displays only two. The room-temperature, time-resolved emission spectra recorded at λexcitation = 464 nm show that Gd3+ doping results in significantly altered emission spectra compared to pure ceria. The PARAFAC analysis for the pure ceria samples reveals two species: a high-symmetry species and a low-symmetry species. The GDC samples yield two low-symmetry spectra in the same experiment. High-resolution emission spectra recorded at 4 K after probing the 5D0-7F0 transition revealed additional variation in the low-symmetry Eu3+ sites in pure ceria and GDC. The data for the Gd3+-containing samples indicate that the average charge density around the Eu3+ ions in the lattice is inversely related to the Gd3+ and oxygen vacancy concentrations.
The particle crystallites of the 773 K and 1273 K annealed Yb3+-ceria nanostructure materials are nano-sized and have a cubic fluorite structure with four Raman vibrational modes. Elemental maps clearly show that cluster formation occurs in the 773 K annealed samples with high Yb3+ ion concentrations, from 15 mol % in the ceria lattice. These clusters are destroyed upon annealing to 1273 K. The emission spectra observed from room-temperature and 4 K measurements for the Ce1-xYbxO2-y samples have a manifold that corresponds to the 2F5/2-2F7/2 transition of Yb3+ ions. Some small shifts are observed in the Stark splitting pattern; these are induced by variations of the crystal field influenced by where the Yb3+ ions are located in the crystal lattices of the samples. Upon mixing ceria with high Yb3+ concentrations, the 2F5/2-2F7/2 transition is also observed in the Stark splitting pattern, but the spectra consist of two broad, high-background-dominated peaks. Annealing the nanomaterials at 1273 K for 2 h changes the spectral signature as new peaks emerge. The deconvolution yielded luminescence decay kinetics as well as the accompanying luminescence spectra of three species for each of the low-Yb3+-doped ceria samples annealed at 773 K and one species for the 1273 K annealed samples. However, after PARAFAC analysis, the ceria samples with high Yb3+ concentrations annealed at the two temperatures yielded one species with lower decay times compared to the Yb3+-doped ceria samples.
Through the calcination of the nanocomposites at two high temperatures, the evolution of the emission patterns from specific Eu3+ lattice sites was followed to indicate structural changes in the nanocomposites. The spectroscopy results effectively complemented the data obtained from the conventional techniques. Annealing the samples at 773 K resulted in amorphous, unordered domains, whereas the TLS of the 1273 K nanocomposites reveal two distinct sites, with the most red-shifted Eu3+ species coming from pure Eu3+-doped ZrO2 on the YSZ support.
Finally, for Eu3+-doped ceria, a successful transfer from the hydrophobic to the water phase and subsequent biocompatibility was achieved using ssDNA. PARAFAC analysis of the Eu3+ in nanoparticles dispersed in toluene and water revealed one Eu3+ species, with slightly differing surface properties for the nanoparticles as far as the luminescence kinetics and solvent environments were concerned. Several functionalized nanoparticles conjugated onto origami triangles after hybridization were visualized by atomic force microscopy (AFM). Taking all of this into consideration, Eu3+ and Yb3+ spectroscopy was used to monitor the structural changes and to determine the feasibility of the nanoparticle transfer into water. PARAFAC proves to be a powerful tool for analyzing lanthanide spectra in crystalline solid materials and in solutions, which are characterized by numerous Stark transitions and where measurements usually yield a superposition of different emission contributions to any given spectrum.
The effects of energy price increases are heterogeneous between households and firms. Financially constrained poorer households, who spend a larger relative share of their income on energy, are particularly affected. In this analysis, we examine the macroeconomic and welfare effects of energy price shocks in the presence of credit-constrained households that have subsistence-level energy demand. Within a Dynamic Stochastic General Equilibrium (DSGE) model calibrated for the German economy, we compare the performance of different policy measures (transfers and energy subsidies) and different financing schemes (income tax vs. debt). Our results show that credit-constrained households prefer debt over tax financing regardless of the compensation measure, due to their difficulty in smoothing consumption. In contrast, rich households tend to prefer tax-financed measures, as these increase the labor supply of poor households. From an aggregate perspective, tax-financed measures targeting firms effectively cushion aggregate output losses.
Supporting reflection in preservice teachers during university-based training is, without doubt, a crucial aspect of attaining teacher professionalism. Therefore, an on-campus seminar designed to relate theory to practice and vice versa – the so-called ‘Lehr-Lern-Labor-Seminar (LLLS)’ – was implemented over the course of five terms to stimulate the reflective skills of English and Physics teacher trainees. The effectiveness of three types of the LLLS (no video and two types of video-supported reflection) compared to a parallel group (PG) and a control group (CG) was investigated in a mixed-methods quasi-experimental study. Reflective skills were elicited with vignettes, relevant covariates with questionnaires. Reflective development was then traced in the dimensions of depth and breadth, employing a qualitative content analysis. MANCOVA (Multivariate Analysis of Covariance) and regression analyses revealed a substantive increase in reflective depth for English and Physics teacher trainees and breadth development for English LLLS participants in contrast to both a PG and a CG, even when controlling for the subjects’ individual prerequisites.
Economic agents often irrationally base their decision-making on irrelevant information. This research analyzes whether men and women react to futile information about past outcomes. For this purpose, we run a laboratory experiment (Study 1) and use field data (Study 2). In both studies, the behavior of men is consistent with falsely assumed negative autocorrelation, often referred to as the gambler’s fallacy. Women’s behavior aligns with falsely assumed positive autocorrelation, a notion of the hot hand fallacy. In the aggregate, the two fallacies cancel out. Yet even when individuals are, on average, rational, the biases in the decision-making of subgroups might cause inefficient outcomes. In a mediation analysis, we find that (a) the agents’ stated perceived probabilities of future outcomes are not blurred by the irrelevant information and (b) about 40 % of the observed biases are driven by differences in the perceived attractiveness of available choices caused by the irrelevant information.
RailChain
(2023)
The RailChain project designed, implemented, and experimentally evaluated a juridical recorder based on a distributed consensus protocol. This juridical blockchain recorder has been realized as a distributed ledger on board the advanced TrainLab (ICE-TD 605 017) of Deutsche Bahn.
For the project, a consortium consisting of DB Systel, Siemens, Siemens Mobility, the Hasso Plattner Institute for Digital Engineering, Technische Universität Braunschweig, TÜV Rheinland InterTraffic, and Spherity has been formed. These partners not only concentrated competencies in railway operation, computer science, regulation, and approval, but also combined experiences from industry, research from academia, and enthusiasm from startups.
Distributed ledger technologies (DLTs) define distributed databases and express a digital protocol for transactions between business partners without the need for a trusted intermediary. The implementation of a blockchain with real-time requirements for the local network of a railway system (e.g., interlocking or train) makes it possible to log data in the distributed system verifiably and in real time. For this, railway-specific assumptions can be leveraged to make modifications to standard blockchain protocols.
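The core idea of verifiable, tamper-evident logging can be illustrated with a minimal hash-chained log. This is a generic sketch and not the RailChain consensus protocol; the entry fields and payloads are hypothetical:

```python
import hashlib
import json
import time

def make_entry(prev_hash: str, payload: dict) -> dict:
    """Create a log entry chained to its predecessor by a SHA-256 hash."""
    body = {"prev": prev_hash, "ts": time.time(), "payload": payload}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify_chain(entries: list) -> bool:
    """Check that every entry's hash matches its contents and links to the previous entry."""
    prev = "0" * 64  # genesis marker
    for e in entries:
        body = {k: e[k] for k in ("prev", "ts", "payload")}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != digest:
            return False
        prev = e["hash"]
    return True
```

Modifying any logged payload after the fact invalidates the chain from that point on, which is the property that makes such a log usable for juridical purposes; an actual DLT additionally replicates the chain across nodes under a consensus protocol.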
EULYNX and OCORA (Open CCS On-board Reference Architecture) are parts of a future European reference architecture for control command and signalling (CCS, Reference CCS Architecture – RCA). Both architectural concepts outline heterogeneous IT systems with components from multiple manufacturers. Such systems introduce novel challenges for the approved and safety-relevant CCS of railways that have so far been considered neither for road-side nor for on-board systems. Logging implementations, such as the common juridical recorder on vehicles, can no longer be realized as a central component of a single manufacturer. All centralized approaches are in question.
The research project RailChain is funded by the mFUND program and gives practical evidence that distributed consensus protocols are a proper means to immutably (for legal purposes) store state information of many system components from multiple manufacturers. The results of RailChain have been published, prototypically implemented, and experimentally evaluated in large-scale field tests on the advanced TrainLab. At the same time, the project showed how RailChain can be integrated into the road-side and on-board architecture given by OCORA and EULYNX.
Logged data can now be analysed sooner, and their trustworthiness is increased. This enables, e.g., auditable predictive maintenance, because it is ensured that data is authentic and unmodified at any point in time.
This technical report presents the results of student projects which were prepared during the lecture “Operating Systems II” offered by the “Operating Systems and Middleware” group at HPI in the Summer term of 2020. The lecture covered advanced aspects of operating system implementation and architecture on topics such as Virtualization, File Systems and Input/Output Systems. In addition to attending the lecture, the participating students were encouraged to gather practical experience by completing a project on a closely related topic over the course of the semester. The results of 10 selected exceptional projects are covered in this report.
The students have completed hands-on projects on the topics of Operating System Design Concepts and Implementation, Hardware/Software Co-Design, Reverse Engineering, Quantum Computing, Static Source-Code Analysis, Operating Systems History, Application Binary Formats and more. It should be recognized that over the course of the semester all of these projects have achieved outstanding results which went far beyond the scope and the expectations of the lecture, and we would like to thank all participating students for their commitment and their effort in completing their respective projects, as well as their work on compiling this report.
Leveraging two cohort-specific pension reforms, this paper estimates the forward-looking effects of an exogenous increase in the working horizon on (un)employment behaviour for individuals with a long remaining statutory working life. Using difference-in-differences and regression discontinuity approaches based on administrative and survey data, I show that a longer legal working horizon increases individuals’ subjective expectations about the length of their work life, raises the probability of employment, decreases the probability of unemployment, and increases the intensity of job search among the unemployed. Heterogeneity analyses show that the demonstrated employment effects are strongest for women and in occupations with comparatively low physical intensity, i.e., occupations that can be performed at older ages.
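The difference-in-differences logic behind such estimates can be sketched in a few lines. This is a textbook 2x2 comparison of group means, not the paper's actual specification, which relies on administrative and survey data with covariates and a regression discontinuity design:

```python
import numpy as np

def did_estimate(y, treated, post):
    """2x2 difference-in-differences: the outcome change for the treated
    cohort minus the change for the control cohort (common-trend assumption)."""
    y, treated, post = map(np.asarray, (y, treated, post))

    def cell_mean(t, p):
        return y[(treated == t) & (post == p)].mean()

    return (cell_mean(1, 1) - cell_mean(1, 0)) - (cell_mean(0, 1) - cell_mean(0, 0))
```

Under the common-trend assumption, subtracting the control group's change nets out the time trend, so the remainder is attributed to the reform-induced extension of the working horizon.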
Carbon dioxide removal from the atmosphere is becoming an important option to achieve net zero climate targets. This paper develops a welfare and public economics perspective on optimal policies for carbon removal and storage in non-permanent sinks like forests, soil, oceans, wood products or chemical products. We derive a new metric for the valuation of non-permanent carbon storage, the social cost of carbon removal (SCC-R), which also embeds the conventional social cost of carbon emissions. We show that the contribution of CDR is to create new carbon sinks that should be used to reduce transition costs, even if the stored carbon is eventually released to the atmosphere. Importantly, CDR does not raise the ambition of optimal temperature levels unless initial atmospheric carbon stocks are excessively high. For high initial atmospheric carbon stocks, CDR allows the optimal temperature to be reduced below initial levels. Finally, we characterize three different policy regimes that ensure an optimal deployment of carbon removal: downstream carbon pricing, upstream carbon pricing, and carbon storage pricing. The policy regimes differ in their informational and institutional requirements regarding monitoring, liability and financing.
Self-efficacy reflects the self-belief that one can persistently perform difficult and novel tasks while coping with adversity. Since such beliefs shape how individuals behave, think, and act, they are key to successful entrepreneurial activities. While the existing literature mainly analyzes the influence of the task-related construct of entrepreneurial self-efficacy, we take a different perspective and investigate, based on a representative sample of 1,405 German business founders, how the personality characteristic of generalized self-efficacy influences start-up performance as measured by a broad set of business outcomes up to 19 months after business creation. Outcomes include start-up survival and entrepreneurial income, as well as growth-oriented outcomes such as job creation and innovation. We find statistically significant and economically important positive effects of high scores of self-efficacy on start-up survival and entrepreneurial income, which become even stronger when focusing on the growth-oriented outcome of innovation. Furthermore, we observe that generalized self-efficacy is similarly distributed between female and male business founders, with effects being partly stronger for female entrepreneurs. Our findings are important for policy instruments that are meant to support firm growth by facilitating the design of more target-oriented offers for training, coaching, and entrepreneurial incubators.
National Action Plans (NAPs) have been increasingly adopted worldwide after the Vienna Declaration in 1993, which urged the improvement and promotion of human rights. In this paper, we discuss their usefulness and success by analysing the challenges presented during NAP processes as well as the benefits this set of actions entails: the challenges for their implementation outweigh the actual benefits. Nevertheless, NAPs have great potential. Based on new research, we elaborate a set of recommendations for improving the design and implementation of national action planning. In order to effectively bring NAPs into practice, we consider it crucial to plan and analyse every state's local circumstances in detail. The latter is important, since the implementation of a concrete set of actions is intended to directly transform and improve the local living conditions of the people. In a long-term perspective, we defend the benefit of NAP implementation for complying with obligations set up by human rights treaties.
The last years have been affected by Covid-19 and the international emergency mechanism to deal with health-related threats. The effects of this period manifested differently worldwide, depending on matters such as international relations, national policies, power dynamics, etc. Additionally, the impact of this time will likely have long-term effects which are yet to be known. This paper gives a critical overview of the Public Health Emergency of International Concern (PHEIC) mechanism in the context of Covid-19. It does so by explaining the legal framework for states of emergency, specifically in the context of a PHEIC, while considering its restrictions and limitations on human rights. It further outlines issues in the manifestation of global protections and limitations on human rights during Covid-19. Lastly, considering the likelihood of future PHEICs and the known systemic obstructions, this paper offers ways to improve this mechanism from a holistic, non-zero-sum perspective.
International migration has been an increasing phenomenon during the past decades and has involved all the regions of the globe. Together with fertility and mortality rates, net migration rates represent the components that fully define the demographic evolution of the population in a country. Therefore, being able to capture the patterns of international migration flows and to produce projections of how they might change in the future is of great importance for demographic studies and for designing policies informed on the potential scenarios. Existing forecasting methods do not account explicitly for the main drivers and processes shaping international migration flows: existing migrant communities at the destination country, termed diasporas, would reduce the costs of migration and facilitate the settling for new migrants, ultimately producing a positive feedback; accounting for the heterogeneity in the type of migration flows, e.g. return and transit flows, becomes critical in some specific bilateral migration channels; in low- to middle-income countries economic development could relax the poverty constraint and result in an increase of emigration rates.
Economic conditions at both origin and destination are identified as major drivers of international migration. At the same time, climate change impacts have already appeared on natural and human-made systems such as the economic productivity. These economic impacts might have already produced a measurable effect on international migration flows. Studies that provide a quantification of the number of migration moves that might have been affected by climate change are usually specific to small regions, do not provide a mechanistic understanding of the pathway leading from climate change to migration and restrict their focus to the effective induced flows, disregarding the impact that climate change might have had in inhibiting other flows.
Global climate change is likely to produce impacts on the economic development of countries over the next decades as well. Understanding how these impacts might alter future global migration patterns is relevant for preparing future societies and for understanding whether the response in migration flows would reduce or increase the population's exposure to climate change impacts.
This doctoral research aims at investigating these questions and filling the research gaps outlined above. First, I have built a global bilateral international migration model which accounts explicitly for the diaspora feedback, distinguishes between transit and return flows, and accounts for the observed non-linear effects that link emigration rates to income levels in the country of origin. I have used this migration model within a population dynamic model where I also account for fertility and mortality rates, producing hindcasts and future projections of international migration flows, covering more than 170 countries. Results show that the model reproduces past patterns and trends well. Future projections highlight the fact that, depending on the assumptions regarding the future evolution of income levels and between-country inequality, migration at the end of the century might approach net zero or still be high in many countries. The model, parsimonious in the explanatory variables that it includes, represents a versatile tool for assessing the impacts of different socioeconomic scenarios on international migration.
I consider then a counterfactual past without climate change impacts on the economic productivity. By prescribing these counterfactual economic conditions to the migration model I produce counterfactual migration flows for the past 30 years. I compare the counterfactual migration flows to factual ones, where historical economic conditions are used to produce migration flows. This provides an estimation of the recent international migration flows attributed to climate change impacts. Results show that a counterfactual world without climate change would have seen less migration globally. This effect becomes larger if I consider separately the increase and decrease in migration moves: a figure of net change in the migration flows is not representative of the effective magnitude of the climate change impact on migration. Indeed, in my results climate change produces a divergent effect on richer and poorer countries: by slowing down the economic development, climate change might have reduced international mobility from and to countries of the Global South, and increased it from and to richer countries in the Global North.
I apply the same methodology to a scenario of future 3℃ global warming above pre-industrial conditions. I find that climate change impacts, acting by reorganizing the relative economic attractiveness of destination countries or by affecting the economic growth in the origin, might produce a substantial effect in international migration flows, inhibiting some moves and inducing others.
Overall, my results suggest that climate change might have had, and might have in the future, a significant effect on global patterns of international migration. It also emerges clearly that, for a comprehensive understanding of the effects of climate change on international migration, we need to go beyond net effects and consider induced and inhibited flows separately.
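The point about net versus gross effects can be made concrete with a small decomposition of factual against counterfactual flows. The function and the toy inputs below are illustrative only, not output of the dissertation's model:

```python
import numpy as np

def flow_attribution(factual, counterfactual):
    """Decompose climate-attributed migration into induced and inhibited flows.

    factual / counterfactual: arrays of bilateral flows with and without
    climate-change impacts on the economy (hypothetical inputs).
    """
    diff = np.asarray(factual) - np.asarray(counterfactual)
    induced = diff[diff > 0].sum()       # moves that occur only with climate change
    inhibited = -diff[diff < 0].sum()    # moves suppressed by climate change
    return {"net": diff.sum(), "induced": induced,
            "inhibited": inhibited, "gross": induced + inhibited}
```

If some channels gain flows while others lose them, the net change can be near zero even though the gross (induced plus inhibited) effect is large, which is exactly why the net figure understates the magnitude of the climate impact.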
What is it good for?
(2023)
Military conflicts and wars affect a country’s development in various dimensions. Rising inflation rates are a potentially important economic effect associated with conflict. High inflation can undermine investment, weigh on private consumption, and threaten macroeconomic stability. Furthermore, these effects are not necessarily restricted to the locality of the conflict, but can also spill over to other countries. Therefore, to understand how conflict affects the economy and to make a more comprehensive assessment of the costs of armed conflict, it is important to take inflationary effects into account. To disentangle the conflict-inflation nexus and to quantify this relationship, we conduct a panel analysis for 175 countries over the period 1950–2019. To capture indirect inflationary effects, we construct a distance-based spillover index. In general, the results of our analysis confirm a statistically significant positive direct association between conflicts and inflation rates. This finding is robust across various model specifications. Moreover, our results indicate that conflict-induced inflation is not solely driven by increasing money supply. Furthermore, we document a statistically significant positive indirect association between conflicts and inflation rates in uninvolved countries.
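One common way to build a distance-based spillover index is an inverse-distance-weighted average of conflict intensity in all other countries. The sketch below uses that standard construction; the paper's exact weighting scheme may differ:

```python
import numpy as np

def spillover_index(conflict, dist):
    """Inverse-distance-weighted conflict exposure from all other countries.

    conflict: length-n vector of conflict intensity per country.
    dist: n x n matrix of bilateral distances (diagonal ignored).
    """
    conflict = np.asarray(conflict, dtype=float)
    dist = np.asarray(dist, dtype=float)
    n = len(conflict)
    w = np.zeros_like(dist)
    off = ~np.eye(n, dtype=bool)
    w[off] = 1.0 / dist[off]             # closer countries get more weight
    w /= w.sum(axis=1, keepdims=True)    # row-normalize the weights
    return w @ conflict                  # weighted conflict in the "neighbourhood"
```

A country surrounded by nearby conflicts thus receives a high index value even if it is itself at peace, which is the variation the indirect-association result exploits.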
“They Took to the Sea”
(2023)
The sea and maritime spaces have long been neglected in the field of Jewish studies despite their relevance in the context of Jewish religious texts and historical narratives. The images of Noah’s ark, King Solomon’s maritime activities or the miracle of the parting of the Red Sea immediately come to mind; however, they illustrate only a few aspects of Jewish maritime activities. Consequently, the relationship of Jews to the sea has to be seen in a much broader spatial and temporal framework in order to understand the overall importance of maritime spaces in Jewish history and culture.
Almost sixty years after Samuel Tolkowsky’s pivotal study on maritime Jewish history and culture and the publication of his book “They Took to the Sea” in 1964, this volume of PaRDeS seeks to follow these ideas, revisit Jewish history and culture from different maritime perspectives and shed new light on current research in the field, which brings together Jewish and maritime studies.
The articles in this volume therefore reflect a wide range of topics and illustrate how maritime perspectives can enrich our understanding of Jewish history and culture and its entanglement with the sea – especially in modern times. They study different spaces and examine their embedded narratives and functions. They follow in one way or another the discussions which evolved in the last decades, focused on the importance of spatial dimensions and opened up possibilities for studying the production and construction of spaces, their influences on cultural practices and ideas, as well as structures and changes of social processes. By taking these debates into account, the articles offer new insights into Jewish history and culture by taking us out to “sea” and inviting us to revisit Jewish history and culture from different maritime perspectives.
A new solid-state material, N-butyl pyridinium diiodido argentate(I), is synthesized using a simple and effective one-pot approach. In the solid state, the compound exhibits 1D ([AgI2]^-)_n chains that are stabilized by the N-butyl pyridinium cation. The 1D structure is further manifested by the formation of long, needle-like crystals, as revealed from electron microscopy. As the general composition is derived from metal halide-based ionic liquids, the compound has a low melting point of 100-101 °C, as confirmed by differential scanning calorimetry. Most importantly, the compound has a conductivity of 10^-6 S cm^-1 at room temperature. At higher temperatures the conductivity increases and reaches 10^-4 S cm^-1 at 70 °C. In contrast to AgI, however, the current material has a highly anisotropic 1D arrangement of the ionic domains. This provides direct and tuneable access to fast and anisotropic ionic conduction. The material is thus a significant step beyond current ion conductors and a highly promising prototype for the rational design of highly conductive ionic solid-state conductors for battery or solar cell applications.
A degree course in IT and business administration solely for women (FIW) has been offered since 2009 at the HTW Berlin – University of Applied Sciences. This contribution discusses student motivations for enrolling in such a women-only degree course and gives details of our experience over recent years. In particular, the approach to attracting new female students is described and the composition of the intake is discussed. It is shown that the women-only setting together with other factors can attract a new clientele for computer science.
The integration of MOOCs into Moroccan Higher Education (MHE) took place in 2013 by developing different partnerships and projects at national and international levels. As elsewhere, the Covid-19 crisis has played an important role in accelerating distance education in MHE. However, based on our experience as both university professors and specialists in educational engineering, the effective execution of the digital transition has not yet been implemented. Thus, in this article, we present retrospective feedback on MOOCs in Morocco, focusing on the policies taken by the government to better support the digital transition in general and MOOCs in particular. We are therefore seeking to establish an optimal scenario for the promotion of MOOCs, which emphasizes the policies to be considered, and which recalls the importance of conducting a delicate articulation taking into account four levels, namely environmental, institutional, organizational and individual. We conclude with recommendations that are inspired by the Moroccan academic context and that focus on the major role that MOOCs play for university students and on maintaining lifelong learning.
Early sensitivity to prosodic phrase boundary cues: Behavioral evidence from German-learning infants
(2023)
This dissertation seeks to shed light on the relation of phrasal prosody and developmental speech perception in German-learning infants. Three independent empirical studies explore the role of acoustic correlates of major prosodic boundaries, specifically pitch change, final lengthening, and pause, in infant boundary perception. Moreover, it was examined whether the sensitivity to prosodic phrase boundary markings changes during the first year of life as a result of perceptual attunement to the ambient language (Aslin & Pisoni, 1980).
Using the headturn preference procedure, six- and eight-month-old monolingual German-learning infants were tested on their discrimination of two different prosodic groupings of the same list of coordinated names, either with or without an internal IPB after the second name, that is, [Moni und Lilli] [und Manu] or [Moni und Lilli und Manu]. The boundary marking was systematically varied with respect to single prosodic cues or specific cue combinations.
Results revealed that six- and eight-month-old German-learning infants successfully detect the internal prosodic boundary when it is signaled by all three main boundary cues: pitch change, final lengthening, and pause. For eight-, but not for six-month-olds, the combination of pitch change and final lengthening, without the occurrence of a pause, is sufficient. This mirrors an adult-like perception by eight months (Holzgrefe-Lang et al., 2016). Six-month-olds detect a prosodic phrase boundary signaled by final lengthening and pause. The findings suggest a developmental change in German prosodic boundary cue perception from a strong reliance on the pause cue at six months to a differentiated sensitivity to the more subtle cues pitch change and final lengthening at eight months. For neither six- nor eight-month-olds is the occurrence of pitch change or final lengthening as a single cue sufficient, similar to what has been observed for adult speakers of German (Holzgrefe-Lang et al., 2016).
The present dissertation provides new scientific knowledge on infants’ sensitivity to individual prosodic phrase boundary cues in the first year of life. Methodologically, the studies are pathbreaking since they used exactly the same stimulus materials – phonologically thoroughly controlled lists of names – that have also been used with adults (Holzgrefe-Lang et al., 2016) and with infants in a neurophysiological paradigm (Holzgrefe-Lang, Wellmann, Höhle, & Wartenburger, 2018), allowing for comparisons across age (six/eight months and adults) and method (behavioral vs. neurophysiological methods). Moreover, the materials are suited to be transferred to other languages, allowing for a crosslinguistic comparison. Taken together with a study with similar French materials (van Ommen et al., 2020), the observed change in sensitivity in German-learning infants can be interpreted as a language-specific one, from an initial language-general processing mechanism that primarily focuses on the presence of pauses to a language-specific processing that takes into account prosodic properties available in the ambient language. The developmental pattern is discussed as an interplay of acoustic salience, prosodic typology (prosodic regularity) and cue reliability.
Decubitus is one of the most relevant diseases in nursing and the most expensive to treat. It is caused by sustained pressure on tissue, so it particularly affects bed-bound patients. This work lays a foundation for pressure mattress-based decubitus prophylaxis by implementing a solution to the single-frame 2D Human Pose Estimation problem.
For this, deep learning methods are employed. Two approaches are examined: a coarse-to-fine Convolutional Neural Network for direct regression of joint coordinates, and a U-Net for the derivation of probability-distribution heatmaps.
We conclude that training our models on a combined dataset of the publicly available Bodies at Rest and SLP data yields the best results. Furthermore, various preprocessing techniques are investigated, and a hyperparameter optimization is performed to discover an improved model architecture.
Another finding indicates that the heatmap-based approach outperforms direct regression.
This model achieves a mean per-joint position error of 9.11 cm for the Bodies at Rest data and 7.43 cm for the SLP data.
We find that it generalizes well on data from mattresses other than those seen during training but has difficulties detecting the arms correctly.
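To illustrate how joint coordinates are recovered from a heatmap-based model and how the mean per-joint position error is computed, here is a generic argmax decoding step under assumed array shapes; it is not the project's exact pipeline:

```python
import numpy as np

def heatmaps_to_joints(heatmaps):
    """Convert per-joint probability heatmaps of shape (J, H, W) to (J, 2)
    pixel coordinates by taking the location of each heatmap's maximum."""
    J, H, W = heatmaps.shape
    flat = heatmaps.reshape(J, -1).argmax(axis=1)
    ys, xs = np.unravel_index(flat, (H, W))
    return np.stack([xs, ys], axis=1)

def mpjpe(pred, gt):
    """Mean per-joint position error: average Euclidean distance over joints."""
    return np.linalg.norm(pred - gt, axis=1).mean()
```

Predicting a spatial distribution per joint and decoding it afterwards tends to be easier to learn than regressing coordinates directly, which is consistent with the finding that the heatmap-based approach outperforms direct regression.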
Additionally, we give a brief overview of the medical data annotation tool annoto we developed in the bachelor project and furthermore conclude that the Scrum framework and agile practices enhanced our development workflow.
Divergent thinking is the ability to produce numerous and diverse responses to questions or tasks, and it is used as a predictor of creative achievement. It plays a significant role in the business organization’s innovation process and the recognition of new business opportunities. Drawing upon the cumulative process model of creativity in entrepreneurship, we hypothesize that divergent thinking has a lasting effect on post-launch entrepreneurial outcomes related to innovation and growth, but that this relation might not always be linear. Additionally, we hypothesize that domain-specific experience has a moderating role in this relation. We test our hypotheses based on a representative longitudinal sample of 457 German business founders, which we observe up until 40 months after start-up. We find strong relative effects for innovation and growth outcomes. For survival we find conclusive evidence for non-linearities in the effects of divergent thinking. Additionally, we show that such effects are moderated by the type of domain-specific experience that entrepreneurs gathered pre-launch, as it shapes the individual’s ideational abilities to fit into more sophisticated strategies regarding entrepreneurial creative achievement. Our findings have relevant policy implications in characterizing and identifying business start-ups with growth and innovation potential, allowing a more efficient allocation of public and private funds.
We analyze the impact of women’s managerial representation on the gender pay gap among employees on the establishment level using German Linked-Employer-Employee-Data from the years 2004 to 2018. For identification of a causal effect we employ a panel model with establishment fixed effects and industry-specific time dummies. Our results show that a higher share of women in management significantly reduces the gender pay gap within the firm. An increase in the share of women in first-level management e.g. from zero to above 33 percent decreases the adjusted gender pay gap from a baseline of 15 percent by 1.2 percentage points, i.e. to roughly 14 percent. The effect is stronger for women in second-level than first-level management, indicating that women managers with closer interactions with their subordinates have a higher impact on the gender pay gap than women on higher management levels. The results are similar for East and West Germany, despite the lower gender pay gap and more gender egalitarian social norms in East Germany. From a policy perspective, we conclude that increasing the number of women in management positions has the potential to reduce the gender pay gap to a limited extent. However, further policy measures will be needed in order to fully close the gender gap in pay.
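The within-transformation behind a fixed-effects panel model can be sketched for the simplest case. The code below is a one-way, single-regressor illustration, not the paper's two-way specification with establishment fixed effects and industry-specific time dummies:

```python
import numpy as np

def fe_estimate(y, x, group):
    """One-way fixed-effects (within) estimator for a single regressor:
    demean y and x within each group, then compute the OLS slope
    on the demeaned data. Group-specific intercepts drop out."""
    y, x, group = map(np.asarray, (y, x, group))
    yd, xd = y.astype(float).copy(), x.astype(float).copy()
    for g in np.unique(group):
        m = group == g
        yd[m] -= yd[m].mean()
        xd[m] -= xd[m].mean()
    return (xd @ yd) / (xd @ xd)
```

Demeaning within each establishment removes all time-invariant establishment characteristics, so the slope is identified only from within-establishment variation, e.g. changes in the share of women in management over time.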
Layered structures are ubiquitous in nature and industrial products. The individual layers can have different mechanical/thermal properties and functions, each independently contributing to the performance of the whole layered structure in its relevant application. Tuning each layer affects the performance of the whole layered system.
Pores are utilized in various disciplines where low density but large surfaces are demanded. Besides, open and interconnected pores act as transfer channels for guest chemical molecules. The shape of the pores influences the compression behavior of the material. Moreover, introducing pores decreases the density and consequently the mechanical strength. To maintain a defined mechanical strength under various stresses, a porous structure can be reinforced by adding a reinforcing agent such as fibers, fillers or a layered structure to bear the mechanical stress in the intended application.
In this context, this thesis aimed to generate new functions in bilayer systems by combining layers having different moduli and/or porosity, and to develop suitable processing techniques to access these structures.
Manufacturing processes for layered structures often employ organic solvents, which frequently cause environmental pollution. Therefore, the bilayer structures studied here were manufactured by processes free of organic solvents.
In this thesis, three bilayer systems were studied to answer the individual questions.
First, while various methods of introducing pores in the melt phase have been reported for one-layer constructs with simple geometry, can such methods be applied to a bilayer structure, giving two porous layers?
This was addressed with Bilayer System 1. Two porous layers were obtained from melt-blending of two different polyurethanes (PU) and polyvinyl alcohol (PVA) in a co-continuous phase, followed by sequential injection molding and leaching of the PVA phase in deionized water. A porosity of 50 ± 5% with a high interconnectivity was obtained, in which the pore sizes ranged from 1 µm to 100 µm with an average of 22 µm in both layers. The obtained pores were tailored by applying an annealing treatment at the relevant high temperatures of 110 °C and 130 °C, which allowed the porosity to be kept constant. The disadvantage of this system is that a maximum of 50% porosity could be reached and the removal of the leaching material in the weld line section of both layers is not guaranteed. Such a construct serves as a model bilayer porous structure for determining structure-property relationships with respect to the pore size, porosity and mechanical properties of each layer. This fabrication method is also applicable to complex geometries by designing a suitable mold for injection molding.
Secondly, utilizing the scCO2 foaming process at elevated temperature and pressure is considered a green manufacturing process. Employing this method as a post-treatment can alter the orientation history of the polymer chains created by previous fabrication steps. Can a bilayer structure be fabricated by a combination of sequential injection molding and the scCO2 foaming process, in which a porous layer is supported by a compact layer?
Such a construct (Bilayer System 2) was generated by sequential injection molding of a PCL (Tm ≈ 58 °C) layer and a PLLA (Tg ≈ 58 °C) layer. Soaking this structure in the autoclave with scCO2 at T = 45 °C and P = 100 bar led to the selective foaming of PCL with a porosity of 80%, while the PLLA layer was kept compact. The scCO2 autoclave led to the formation of a porous core and skin layer of the PCL; however, the degree of crystallinity of the PLLA layer increased from 0 to 50% at the defined temperature and pressure. The microcellular structure of PCL as well as the degree of crystallinity of PLLA were controlled by increasing the soaking time.
Thirdly, micro/nanoscale wrinkles on surfaces alter surface-related properties. Wrinkles form on the surface of a bilayer structure having a compliant substrate and a stiff thin film. However, the wrinkles reported so far were not reversible. Moreover, dynamic wrinkles at the nano- and microscale have numerous examples in nature, such as gecko foot hairs offering reversible adhesion and the self-cleaning ability of lotus leaves, which alters the hydrophobicity of the surface. It was envisioned to imitate this biomimetic function in the bilayer structure, where self-assembled on/off patterns would be realized on the surface of this construct.
In summary, developing layered constructs whose individual layers have different properties or functions, or which exhibit a new function as a consequence of the layered structure, can provide novel insights for designing layered constructs in various fields such as the packaging and transport industries, aerospace, and health technology.
Thai MOOC Academy
(2023)
Thai MOOC Academy is a national digital learning platform that has served as a mechanism for promoting lifelong learning in Thailand since 2017. It has recently undergone significant improvements and upgrades, including the implementation of a credit bank system and a learner’s e-portfolio system interconnected with the platform. Thai MOOC Academy is introducing a national credit bank system for accreditation and management, which allows for the transfer of expected learning outcomes and educational qualifications between formal, non-formal, and informal education. The credit bank system has five distinct features: issuing forgery-protected certificates, recording learning results, transferring external credits within the same wallet, accumulating learning results, and creating a QR code for verification purposes. The paper discusses the features and future potential of Thai MOOC Academy as it is extended towards a sandbox for the national credit bank system in Thailand.
Long COVID patients show symptoms such as fatigue, muscle weakness and pain. Adequate diagnostics are still lacking. Investigating muscle function might be a beneficial approach. The holding capacity (maximal isometric Adaptive Force; AFisomax) was previously suggested to be especially sensitive to impairments. This longitudinal, non-clinical study aimed to investigate the AF in long COVID patients and their recovery process. AF parameters of the elbow and hip flexors were assessed in 17 patients at three time points (pre: long COVID state, post: immediately after first treatment, end: recovery) by an objectified manual muscle test. The tester applied an increasing force to the limb of the patient, who had to resist isometrically for as long as possible. The intensities of 13 common symptoms were queried. At pre, patients started to lengthen their muscles at ~50% of the maximal AF (AFmax), which was then reached during eccentric motion, indicating unstable adaptation. At post and end, AFisomax increased significantly to ~99% and 100% of AFmax, respectively, reflecting stable adaptation. AFmax was statistically similar at all three time points. Symptom intensity decreased significantly from pre to end. The findings revealed a substantially impaired maximal holding capacity in long COVID patients, which returned to normal function with substantial health improvement. AFisomax might be a suitable, sensitive functional parameter to assess long COVID patients and to support the therapy process.
The Adaptive Force (AF) reflects the neuromuscular capacity to adapt to external loads during holding muscle actions and is similar to motions in real life and sports. The maximal isometric AF (AFisomax) was considered to be the most relevant parameter and was assumed to have major importance regarding injury mechanisms and the development of musculoskeletal pain. The aim of this study was to investigate the behavior of different torque parameters over the course of 30 repeated maximal AF trials. In addition, maximal holding vs. maximal pushing isometric muscle actions were compared. A side consideration was the behavior of torques in the course of repeated AF actions when comparing strength and endurance athletes. The elbow flexors of n = 12 males (six strength/six endurance athletes, non-professionals) were measured 30 times (120 s rest) using a pneumatic device. Maximal voluntary isometric contraction (MVIC) was measured pre and post. MVIC, AFisomax, and AFmax (maximal torque of one AF measurement) were evaluated regarding different considerations and statistical tests. AFmax and AFisomax declined in the course of 30 trials [slope regression (mean ± standard deviation): AFmax = −0.323 ± 0.263; AFisomax = −0.45 ± 0.45]. The decline from start to end amounted to −12.8% ± 8.3% (p < 0.001) for AFmax and −25.41% ± 26.40% (p < 0.001) for AFisomax. AF parameters declined more in strength vs. endurance athletes. Thereby, strength athletes showed a rather stable decline for AFmax and a plateau formation for AFisomax after 15 trials. In contrast, endurance athletes reduced their AFmax, especially after the first five trials, and remained on a rather similar level for AFisomax. The maximum of AFisomax across all 30 trials amounted to 67.67% ± 13.60% of MVIC (p < 0.001, n = 12), supporting the hypothesis of two types of isometric muscle action (holding vs. pushing).
The findings provided the first data on the behavior of torque parameters after repeated isometric–eccentric actions and revealed further insights into neuromuscular control strategies. Additionally, they highlighted the importance of investigating AF parameters in athletes, given their different behavior compared to MVIC. This is assumed to be especially relevant regarding injury mechanisms.
Recent research suggests that design thinking practices may foster the development of needed capabilities in new digitalised landscapes. However, existing publications represent individual contributions, and we lack a holistic understanding of the value of design thinking in a digital world. No review, to date, has offered a holistic retrospection of this research. In response, in this bibliometric review, we aim to shed light on the intellectual structure of multidisciplinary design thinking literature related to capabilities relevant to the digital world in higher education and business settings, highlight current trends and suggest further studies to advance theoretical and empirical underpinnings. Our study addresses this aim using bibliometric methods (bibliographic coupling and co-word analysis), as they are particularly suitable for identifying current trends and future research priorities at the forefront of research. Overall, bibliometric analyses of publications dealing with the related topics published in the last 10 years (extracted from the Web of Science database) expose six trends and two possible future research developments, highlighting the expanding scope of the design thinking scientific field related to capabilities required for the (more sustainable and human-centric) digital world. Relatedly, design thinking is becoming a relevant approach to include in higher education curricula and human resources training to prepare students and workers for changing work demands. This paper is well suited for education and business practitioners seeking to embed design thinking capabilities in their curricula and for design thinking and other scholars wanting to understand the field and possible directions for future research.
Digital technology offers significant political, economic, and societal opportunities. At the same time, the notion of digital sovereignty has become a leitmotif in German discourse: the state’s capacity to assume its responsibilities and safeguard society’s – and individuals’ – ability to shape the digital transformation in a self-determined way. The education sector is exemplary for the challenge faced by Germany, and indeed Europe, of harnessing the benefits of digital technology while navigating concerns around sovereignty. It encompasses education as a core public good, a rapidly growing field of business, and growing pools of highly sensitive personal data. The report describes pathways to mitigating the tension between digitalization and sovereignty at three different levels – state, economy, and individual – through the lens of concrete technical projects in the education sector: the HPI Schul-Cloud (state sovereignty), the MERLOT data spaces (economic sovereignty), and the openHPI platform (individual sovereignty).
How to reuse inclusive STEM MOOCs in blended settings to engage young girls in scientific careers
(2023)
The FOSTWOM project (2019–2022), funded by ERASMUS+, gave METID (Politecnico di Milano) and MOOC Técnico (Instituto Superior Técnico, University of Lisbon), together with other partners, the opportunity to support the design and creation of gender-inclusive MOOCs. Among other project outputs, we designed a toolkit and a framework that enabled the production of two MOOCs for undergraduate and graduate students in Science, Technology, Engineering and Maths (STEM), with academic content free of gender stereotypes about intellectual ability. In this short paper, the authors aim to 1) briefly share the main outputs of the project and 2) tell the story of how the FOSTWOM approach, together with a motivational strategy, the Heroine’s Learning Journey, proved effective in the context of rural and marginal areas in Brazil, with young girls as a specific target audience.
Personal data privacy is considered to be a fundamental right. It forms a part of our highest ethical standards and is anchored in legislation and various best practices from the technical perspective. Yet, protecting against personal data exposure is a challenging problem from the perspective of generating privacy-preserving datasets to support machine learning and data mining operations. The issue is further compounded by the fact that devices such as consumer wearables and sensors track user behaviours on such a fine-grained level, thereby accelerating the formation of multi-attribute and large-scale high-dimensional datasets.
In recent years, there has been increasing news coverage of de-anonymisation incidents that exposed sensitive private information, in sectors including telecommunication, transportation, financial transactions, and healthcare. These incidents indicate that releasing privacy-preserving datasets requires serious consideration from the pre-processing perspective. A critical problem in this regard is the time complexity of applying syntactic anonymisation methods, such as k-anonymity, l-diversity, or t-closeness, to generate privacy-preserving data. Previous studies have shown that this problem is NP-hard.
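As a minimal illustration of the syntactic notion mentioned above (not the thesis's implementation), k-anonymity can be measured by grouping records on their quasi-identifier values: a dataset is k-anonymous if every such group contains at least k records. The attribute names and toy records below are hypothetical:

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Return the size of the smallest group of records that share the same
    quasi-identifier values; the dataset is k-anonymous for any k up to it."""
    groups = Counter(tuple(rec[a] for a in quasi_identifiers) for rec in records)
    return min(groups.values())

# Hypothetical toy dataset (attribute names invented for illustration).
people = [
    {"zip": "14482", "age": 34, "sex": "F"},
    {"zip": "14482", "age": 34, "sex": "M"},
    {"zip": "14469", "age": 34, "sex": "F"},
    {"zip": "14469", "age": 34, "sex": "F"},
]
print(k_anonymity(people, ["zip", "age"]))         # 2: each (zip, age) group has two records
print(k_anonymity(people, ["zip", "age", "sex"]))  # 1: adding "sex" makes some records unique
```

The NP-hardness referred to in the text arises not from this check, which is linear in the data, but from choosing how to generalise or suppress values so that the whole dataset satisfies a target k with minimal information loss.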
This thesis focuses on large high-dimensional datasets as a special case of data that is characteristically challenging to anonymise using syntactic methods. In essence, large high-dimensional data contains a large number of attributes relative to the population of attribute values. Applying standard syntactic anonymisation approaches to such data results either in high information loss, rendering the data useless for analytics, or in low privacy, due to inferences that remain possible when information loss is minimised.
We postulate that this problem can be resolved effectively by searching for and eliminating all the quasi-identifiers (QIDs) present in a high-dimensional dataset. Essentially, we formalise the privacy-preserving data sharing problem as the Find-QID problem.
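A brute-force sketch conveys the Find-QID idea (the thesis itself uses optimised search; the data and threshold below are hypothetical): enumerate attribute combinations and report the minimal ones whose value tuples occur fewer than k times, since such combinations can single out individual records.

```python
from itertools import combinations

def find_qids(records, attributes, k=2):
    """Brute-force sketch: return minimal attribute combinations whose value
    tuples occur fewer than k times, i.e. that violate k-anonymity and can
    act as quasi-identifiers. Exponential in the number of attributes."""
    qids = []
    for size in range(1, len(attributes) + 1):
        for combo in combinations(attributes, size):
            if any(set(combo) >= set(q) for q in qids):
                continue  # a subset is already a QID, so skip supersets (minimality)
            counts = {}
            for rec in records:
                key = tuple(rec[a] for a in combo)
                counts[key] = counts.get(key, 0) + 1
            if any(c < k for c in counts.values()):
                qids.append(combo)
    return qids

# Hypothetical toy dataset (attribute names invented for illustration).
people = [
    {"zip": "14482", "age": 34, "sex": "F"},
    {"zip": "14482", "age": 34, "sex": "M"},
    {"zip": "14469", "age": 34, "sex": "F"},
    {"zip": "14469", "age": 34, "sex": "F"},
]
print(find_qids(people, ["zip", "age", "sex"]))  # [('sex',)]: 'sex' alone singles out one record
```

The exponential candidate space of this naive enumeration is precisely what motivates the optimisation methods described below for large high-dimensional data.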
Further, we show that despite the complex nature of absolute privacy, the discovery of QIDs can be achieved reliably for large datasets. The risk of private data exposure through inferences can be circumvented, and both can be practicably achieved without the need for high-performance computers.
For this purpose, we present, implement, and empirically assess both mathematical and engineering optimisation methods for the deterministic discovery of privacy-violating inferences. This includes a greedy search scheme that efficiently queues QID candidates based on their tuple characteristics, projecting QIDs onto Bayesian inferences, and countering the Bayesian network’s state-space explosion with an aggregation strategy taken from the multigrid context and with vectorised GPU acceleration. Part of this work showcases orders-of-magnitude processing acceleration, particularly in high dimensions, even achieving near real-time runtimes for previously impractical applications. At the same time, we demonstrate how such contributions could be abused to de-anonymise Kristine A. and Cameron R. in a public Twitter dataset addressing the US Presidential Election 2020.
Finally, this work contributes, implements, and evaluates an extended and generalised version of the novel syntactic anonymisation methodology, attribute compartmentation. Attribute compartmentation promises sanitised datasets without remaining quasi-identifiers while minimising information loss. To prove its functionality in the real world, we partner with digital health experts to conduct a medical use case study. As part of the experiments, we illustrate that attribute compartmentation is suitable for everyday use and, as a positive side effect, even circumvents a common domain issue of base rate neglect.
In an effort to describe and produce different formats for video instruction, the research community in technology-enhanced learning, and MOOC scholars in particular, have focused on the general style of video production: whether it is a digitally scripted “talk-and-chalk” or a “talking head” version of a learning unit. Since these production styles comprise various sub-elements, this paper deconstructs the constituent elements of video production in the context of educational live-streams. Analysing over 700 videos from both synchronous and asynchronous modalities of large video-based platforms (YouTube and Twitch), we identified 92 features in eight categories of video production. These include commonly analyzed features such as the use of a green screen and a visible instructor, but also less studied features such as social media connections and changing the camera perspective depending on the topic being covered. Overall, the results enable an analysis of common video production styles and provide a toolbox for categorizing new formats, independent of their final (a)synchronous use in MOOCs. Keywords: video production, MOOC video styles, live-streaming.
Academia-industry collaborations are beneficial when both sides bring strengths to the partnership and the collaboration outcome is of mutual benefit. These types of collaboration projects are seen as a low-risk learning opportunity for both parties. In this paper, government initiatives that can change the business landscape and academia-industry collaborations that can provide upskilling opportunities to fill emerging business needs are discussed. In light of Japan’s push for next-level modernization, a Japanese software company took a positive stance towards building new capabilities outside what it had been offering its customers. Consequently, an academic research group is laying out infrastructure for learning analytics research. An existing learning analytics dashboard was modularized to allow the research group to focus on natural language processing experiments while the software company explores a development framework suitable for data visualization techniques and artificial intelligence development. The results of this endeavor demonstrate that companies working with academia can creatively explore collaborations outside typical university-supported avenues.
“One video fit for all”
(2023)
Online learning in mathematics has always been challenging, especially for mathematics in STEM education. This paper presents how to make “one fit for all” lecture videos for mathematics in STEM education. In general, we believe that there is no such thing as a “one fit for all” video: the curriculum requires a high level of prior mathematical knowledge from high school, and the variation in prior knowledge among STEM students is often large. This creates challenges for both online and on-campus teaching. This article presents experiments and research on a video format in which students get a real-time feeling and which fits their needs with regard to their existing prior knowledge. They can ask and receive answers during the video without feeling that they must jump between different sources, which helps to reduce unnecessary distractions. The fundamental video format presented here is that of dynamic branching videos, which has so far received little attention in educational research. One reason might be that this field is quite new to higher education, and the platforms available so far place relatively high demands on teachers’ video-editing skills. The videos were implemented for engineering students taking the Linear Algebra course at the Norwegian University of Science and Technology in spring 2023. Feedback from the students gathered via anonymous surveys so far (N = 21) is very positive. Given its high suitability for online teaching, this video format might lead the trend of online learning in the future. The design and implementation of dynamic videos for mathematics in higher education was presented for the first time at the EMOOCs conference 2023.
This paper investigates private university students’ language learning activities on MOOC platforms and their attitudes toward them. The study explores the development of MOOC use in Chinese private universities, with a focus on two modes: online and blended. We conducted empirical studies with students learning French and Japanese as a second foreign language, using questionnaires (N = 387) and interviews (N = 20) at a private university in Wuhan. Our results revealed that the majority of students used the MOOC platform more than twice a week and focused on the MOOC videos, materials and assignments. However, we also found that students showed less interest in online communication (forums). Those who worked in the blended learning mode, especially students learning Japanese, had a more positive attitude toward MOOCs than other students.
The TU Delft Extension School for Continuing Education develops and delivers MOOCs, programs and other online courses for lifelong learners and professionals worldwide focused on Science, Engineering & Design. At the beginning of 2022, we started a project to examine whether creating an online course had any impact on TU Delft campus education. Through a survey, we collected feedback from 68 TU Delft lecturers involved in developing and offering online courses and programs for lifelong learners and professionals. The lecturers reported on the impact of developing an online course on a personal and curricular level. The results showed that the developed online materials, and the acquired skills and experiences from creating online courses, were beneficial for campus education, especially during the transition to remote emergency teaching in the COVID-19 lockdown periods. In this short paper, we will describe the responses in detail and map the benefits and challenges experienced by lecturers when implementing their online course materials and newly acquired educational skills on campus. Finally, we will explore future possibilities to extend the reported, already relevant, impact of MOOCs and of other online courses on campus education.
Design Thinking is a human-centered approach to innovation that has become increasingly popular globally over the last decade. While the spread of Design Thinking is well understood and documented in the Western cultural contexts, particularly in Europe and the US due to the popularity of the Stanford-Potsdam Design Thinking education model, this is not the case when it comes to non-Western cultural contexts. This thesis fills a gap identified in the literature regarding how Design Thinking emerged, was perceived, adopted, and practiced in the Arab world. The culture in that part of the world differs from that of the Western context, which impacts the mindset of people and how they interact with Design Thinking tools and methods.
A mixed-methods research approach was followed in which both quantitative and qualitative methods were employed. First, two methods were used in the quantitative phase: a social media analysis using Twitter as a source of data, and an online questionnaire. The results and analysis of the quantitative data informed the design of the qualitative phase in which two methods were employed: ten semi-structured interviews, and participant observation of seven Design Thinking training events.
According to the analyzed data, the Arab world appears to have had an early, though relatively weak and slow, adoption of Design Thinking since 2006. Increasing adoption, however, has been witnessed over the last decade, especially in Saudi Arabia, the United Arab Emirates and Egypt. The results also show that despite its limited spread, Design Thinking has been practiced most in education, information technology and communication, administrative services, and the non-profit sectors. The way it is being practiced, though, is not fully aligned with how it is practiced and taught in the US and Europe, as most people in the region do not necessarily believe in all the mindset attributes introduced by the Stanford-Potsdam tradition.
Practitioners in the Arab world also seem to shy away from the 'wild side' of Design Thinking in particular, and do not fully appreciate the connection between art and design on the one hand and science and engineering on the other. This calls into question the role of educational institutions in the region since, according to the findings, they appear to be leading the movement in promoting and developing Design Thinking in the Arab world. Nonetheless, people seem to be aware of the positive impact of applying Design Thinking in the region and its potential to bring meaningful transformation. However, they also seem concerned that current cultural, social, political, and economic conditions may challenge this transformation. They therefore call for more awareness and demand culturally appropriate, Arabic-language programs that respond to local needs. Furthermore, the lack of Arabic content and local case studies on Design Thinking was identified by several interviewees, and confirmed by the participant observation, as a major challenge slowing down the spread of Design Thinking or sometimes hampering capacity building in the region. Other challenges revealed by the study are changing people's mindsets, the lack of dedicated Design Thinking spaces, and the need for clear instructions on how to apply Design Thinking methods and activities. The concept of time and how Arabs deal with it, gender management during trainings, and hierarchy and power dynamics among training participants are also among the identified challenges. Another key finding is the confirmation of التفكير التصميمي as the most widely adopted Arabic term for Design Thinking, while four other Arabic terms were also found to be associated with it.
Based on the findings of the study, the thesis concludes by presenting a list of recommendations on how to overcome the identified challenges and which factors should be considered when designing and implementing culturally customized Design Thinking training in the Arab region.
Innovat MOOC
(2023)
The COVID-19 pandemic has revealed the importance for university teachers to have adequate pedagogical and technological competences to cope with the various possible educational scenarios (face-to-face, online, hybrid, etc.), making use of appropriate active learning methodologies and supporting technologies to foster a more effective learning environment. In this context, the InnovaT project has been an important initiative to support the development of pedagogical and technological competences of university teachers in Latin America through several trainings aiming to promote teacher innovation. These trainings combined synchronous online training through webinars and workshops with asynchronous online training through the MOOC “Innovative Teaching in Higher Education.” This MOOC was released twice. The first run took place right during the lockdown of 2020, when Latin American teachers needed urgent training to move to emergency remote teaching overnight. The second run took place in 2022 with the return to face-to-face teaching and the implementation of hybrid educational models. This article shares the results of the design of the MOOC considering the constraints derived from the lockdowns applied in each country, the lessons learned from the delivery of such a MOOC to Latin American university teachers, and the results of the two runs of the MOOC.
The massive growth of MOOCs in 2011 laid the groundwork for the achievement of SDG 4. With the various benefits of MOOCs, there is also anticipation that online education should focus on more interactivity and global collaboration. In this context, the Global MOOC and Online Education Alliance (GMA) established a diverse group of 17 world-leading universities and three online education platforms from across 14 countries on all six continents in 2020. Through nearly three years of exploration, GMA has gained experience and achieved progress in fostering global cooperation in higher education. First, in joint teaching, GMA has promoted in-depth cooperation between members inside and outside the alliance. Examples include promoting the exchange of high-quality MOOCs, encouraging the creation of Global Hybrid Classroom, and launching Global Hybrid Classroom Certificate Programs. Second, in capacity building and knowledge sharing, GMA has launched Online Education Dialogues and the Global MOOC and Online Education Conference, inviting global experts to share best practices and attracting more than 10 million viewers around the world. Moreover, GMA is collaborating with international organizations to support teachers’ professional growth, create an online learning community, and serve as a resource for further development. Third, in public advocacy, GMA has launched the SDG Hackathon and Global Massive Open Online Challenge (GMOOC) and attracted global learners to acquire knowledge and incubate their innovative ideas within a cross-cultural community to solve real-world problems that all humans face and jointly create a better future. Based on past experiences and challenges, GMA will explore more diverse cooperation models with more partners utilizing advanced technology, provide more support for digital transformation in higher education, and further promote global cooperation towards building a human community with a shared future.
Founded in 2013, OpenClassrooms is a French online learning company that offers both paid courses and free MOOCs on a wide range of topics, including computer science and education. In 2021, in partnership with the EDA research unit, OpenClassrooms shared a database to address the problem of how to increase persistence in their paid courses, which consist of a series of MOOCs and human mentoring. Our statistical analysis aims to identify reasons for dropouts that are due to the course design rather than demographic predictors or external factors. We aim to identify at-risk students, i.e. those who are on the verge of dropping out at a specific moment. To achieve this, we use learning analytics to characterize student behavior. We conducted data analysis on a sample of data related to the “Web Designers” and “Instructional Design” courses. By visualizing the student flow and constructing speed and acceleration predictors, we can identify which parts of the course need to be calibrated and when particular attention should be paid to at-risk students.
This qualitative study explores the impact of Personalized Learning Experience (PLE) courses at a higher education institution from the perspective of undergraduate students. The PLE program requires students to take at least one of their elective courses in the form of MOOCs during their undergraduate studies. Drawing on interviews with six students across different faculties, the study identified four key themes that encapsulate the effects of PLE courses: (1) Certificate driven learning with a focus on occupation skill enhancement, (2) diverse course offerings to enhance personal and academic development, (3) learning flexibility, and (4) student satisfaction. The findings suggest that PLE courses offered through MOOC platforms allow students to broaden their academic horizons, gain valuable skills, and tailor their education to better align with their interests and goals. Furthermore, this study highlights the potential benefits of incorporating PLE courses in higher education institutions, emphasizing their role in promoting a more dynamic and student-centered learning environment.
This research paper provides an overview of the current state of MOOCs (massive open online courses) and universities in Austria, focusing on the national MOOC platform iMooX.at. The study begins by presenting the results of an analysis of the performance agreements of 22 Austrian public universities for the period 2022–2024, with a specific focus on the mention of MOOC activities and iMooX. The authors find that 12 of 22 (55 %) Austrian public universities use at least one of these terms, indicating a growing interest in MOOCs and online learning. Additionally, the authors analyze internal documentation data to share insights into how many universities in Austria have produced and/or used a MOOC on the iMooX platform since its launch in 2014. These findings provide a valuable measure of the current usage and monitoring of MOOCs and iMooX among Austrian higher education institutions. Overall, this research contributes to a better understanding of the current state of MOOCs and their integration within Austrian higher education.
Modularization describes the transformation of MOOCs from a comprehensive academic course format into smaller, more manageable learning offerings. It can be seen as one of the prerequisites for the successful implementation of MOOC-based micro-credentials in professional education and training. This short paper reports on the development and application of a modularization framework for Open Online Courses. Using the example of eGov-Campus, a German MOOC provider for the public sector linked to both academia and formal professional development, the structural specifications for modularized MOOC offerings and a methodology for course transformation as well as associated challenges in technology, organization and educational design are outlined. Following on from this, future prospects are discussed under the headings of individualization, certification and integration.
To implement OERs at HEIs sustainably, not just technical infrastructure is required, but also well-trained staff. The University of Graz is in charge of an OER training program for university staff as part of the collaborative project Open Education Austria Advanced (OEAA) with the aim of ensuring long-term competence growth in the use and creation of OERs. The program consists of a MOOC and a guided blended learning format that was evaluated to find out which accompanying teaching and learning concepts can best facilitate targeted competence development. The evaluation of the program shows that learning videos, self-study assignments and synchronous sessions are most useful for the learning process. The results indicate that the creation of OERs is a complex process that can be undergone more effectively in the guided program.
As Thailand moves towards becoming an innovation-driven economy, the need for human capital development has become crucial. Work-based skill MOOCs, offered on Thai MOOC, a national digital learning platform launched by the Thailand Cyber University Project, Ministry of Higher Education, Science, Research and Innovation, provide an effective way to meet this challenge. This paper discusses the challenges faced in designing instruction for work-based skill MOOCs that can serve as a foundation model for many more to come. The instructional design of work-based skill courses on Thai MOOC involves four simple steps: course selection, learning from accredited providers, completion of course requirements, and certification of acquired skills. The development of such courses is ongoing at the higher-education, vocational, and pre-university levels, serving as a foundation model for many more work-based skill MOOCs that will be offered on Thai MOOC soon. The instructional design of work-based skill courses should focus on developing currently demanded professional competencies and skills, increasing the efficiency of work in organizations, and fostering creativity and happiness in life, thereby meeting the human resources needs of industries in Thailand's economy 4.0 era. This paper presents the challenges of designing instruction for work-based skill MOOCs and suggests effective ways to design instruction that enhances workforce development in Thailand.
The main aim of this article is to explore how learning analytics and synchronous collaboration could improve course completion and learner outcomes in MOOCs, which traditionally have been delivered asynchronously. Based on our experience with developing BigBlueButton, a virtual classroom platform that provides educators with live analytics, this paper explores three scenarios with business-focused MOOCs to improve outcomes and strengthen the skills learners acquire.
This research paper aims to introduce a novel practitioner-oriented and research-based taxonomy of video genres. This taxonomy can serve as a scaffolding strategy to support educators throughout the entire educational system in creating videos for pedagogical purposes. A taxonomy of video genres is essential as videos are highly valued resources among learners. Although the use of videos in education has been extensively researched and well-documented in systematic research reviews, gaps remain in the literature. Predominantly, researchers employ sophisticated quantitative methods and similar approaches to measure the performance of videos. This trend has led to the emergence of a strong learning analytics research tradition with its embedded literature. This body of research includes analysis of the performance of videos in online courses such as Massive Open Online Courses (MOOCs). Surprisingly, this same literature is limited in terms of research outlining approaches to designing and creating educational videos, which applies to both video-based learning and online courses. This issue results in a knowledge gap, highlighting the need for developing pedagogical tools and strategies for video making. These can be found in frameworks, guidelines, and taxonomies, which can serve as scaffolding strategies. In contrast, apart from a handful of well-known frameworks, few are available for designing and creating videos for pedagogical purposes. In this regard, this research paper proposes a novel taxonomy of video genres that educators can utilize when creating videos intended for use in either video-based learning environments or online courses. To create this taxonomy, a large number of videos from online courses were collected and analyzed using a mixed-method research design approach.
The MOOChub is a joint web-based catalog of all relevant German and Austrian MOOC platforms that lists well over 750 Massive Open Online Courses (MOOCs). Automatically building such a catalog requires that all partners describe and publicly offer the metadata of their courses in the same way. The paper at hand presents the genesis of the idea to establish a common metadata standard and the story of its subsequent development. The result of this effort is, first, an open-licensed de facto standard, which is based on existing commonly used standards and, second, a first prototypical platform that uses this standard: the MOOChub, which lists all courses of the involved partners. This catalog is searchable and provides a comprehensive overview of essentially all MOOCs offered by German and Austrian MOOC platforms. Finally, the upcoming developments to further optimize the catalog and the metadata standard are reported.
Digital technologies have enabled a variety of learning offers that opened new challenges in terms of recognition of formal, informal and non-formal learning, such as MOOCs.
This paper focuses on how providing relevant data to describe a MOOC is conducive to increasing the transparency of information and, ultimately, the flexibility of European higher education.
The EU-funded project ECCOE took up these challenges and developed a solution by identifying the most relevant descriptors of a learning opportunity with a view to supporting a European system for micro-credentials. Descriptors indicate the specific properties of a learning opportunity according to European standards. They can provide a recognition framework also for small volumes of learning (micro-credentials) to support the integration of non-formal learning (MOOCs) into formal learning (e.g. institutional university courses) and to tackle skills shortage, upskilling and reskilling by acquiring relevant competencies. The focus on learning outcomes can facilitate the recognition of skills and competences of students and enhance both virtual and physical mobility and employability.
This paper presents two contexts where ECCOE descriptors have been adopted: the Politecnico di Milano MOOC platform (Polimi Open Knowledge – POK), which is using these descriptors as the standard information to document the features of its learning opportunities, and the EU-funded Uforest project on urban forestry, which developed a blended training program for students of partner universities whose MOOCs used the ECCOE descriptors.
Practice with ECCOE descriptors shows how they can be used not only to detail MOOC features, but also as a compass to design the learning offer. In addition, some rules of thumb can be derived and applied when using specific descriptors.
Loss of expertise in the fields of Nuclear and Radio-Chemistry (NRC) is problematic at a scientific and social level. This has been addressed by developing a MOOC that lets science students discover the benefits of NRC to society and improves their awareness of this discipline. The MOOC “Essential Radiochemistry for Society” addresses current societal challenges related to health, clean and sustainable energy, and the safety and quality of food and agriculture.
NRC teachers belonging to the CINCH network were invited to use the MOOC in their teaching according to various usage models. On the basis of these different experiences, usage patterns were designed, describing context characteristics (number and age of students, course), the scheduling and organization of activities, results, and students’ feedback, with the aim of encouraging the use of MOOCs in university teaching as an opportunity for both lecturers and students. These models formed the basis of a “toolkit for teachers”. By experiencing digital teaching resources created by different lecturers, CINCH teachers took a first meaningful step towards understanding the worth of Open Educational Resources (OER) and the importance of their creation, adoption and sharing for knowledge progress. In this paper, the entire path from the MOOC concept to the different MOOC usage models, and on to awareness-raising regarding OER, is traced in conceptual stages.
In 2020, the project “iMooX – The MOOC Platform as a Service for all Austrian Universities” was launched. It is co-financed by the Austrian Ministry of Education, Science and Research. After half of the funding period, the project management wants to assess and share results and outcomes, but also address (potential) additional “impacts” of the MOOC platform. Building upon work on OER impact assessment, this contribution describes in detail how the specific iMooX.at approach of impact measurement was developed. A literature review, a stakeholder analysis, and problem-based interviews formed the basis for developing a questionnaire addressing the defined key stakeholders, the “MOOC creators”. The article also presents the survey results in English for the first time but focuses more on the development, strengths, and weaknesses of the selected methods. The article is seen as a contribution to the further development of impact assessment for MOOC platforms.
This short paper sets out to propose a novel and interesting learning design that facilitates cooperative learning in which students do not conduct traditional group work in an asynchronous online education setting. This learning design will be explored in a Small Private Online Course (SPOC) among teachers and school managers at a teacher education institution. Such an approach can be made possible by applying specific criteria commonly used to define collaborative learning. Collaboration can be defined, among other things, as a structured way of working among students that includes elements of co-laboring. The cooperative learning design involves adapting various traditional collaborative learning approaches for use in an online learning environment. A critical component of this learning design is that students work on a self-defined case project related to their professional practices. Through an iterative process, students will receive ongoing feedback and formative assessments from instructors and fellow students at specific points, meaning that co-construction of knowledge and learning takes place as the SPOC progresses. This learning design can contribute to better learning experiences and outcomes for students, and be a valuable contribution to current research discussions on learning design in Massive Open Online Courses (MOOCs).
From MOOC to “2M-POC”
(2023)
IFP School has been developing and producing MOOCs since 2014. After the COVID-19 crisis, the demand from our industrial and international partners to offer continuous training to their employees increased drastically, in an energy transition and sustainable mobility environment that is in constant and rapid evolution. It was therefore time for a new format of digital learning tools to train a large number of employees efficiently and rapidly. To address this new demand in an increasingly digital learning environment, we have completely changed our initial MOOC model and propose an innovative SPOC business model mixing synchronous and asynchronous modules. This paper describes the work that has been done to transform our MOOCs into a hybrid SPOC model. We changed the format itself from a standard MOOC model of several weeks to small modules of one week on average, better adapted to our clients’ demand. We carefully engineered the exchanges between learners and the social aspect throughout the SPOC duration. We propose a multimodal approach combining asynchronous activities, such as online modules and exercises, with synchronous activities, such as webinars with experts and after-work sessions. Additionally, this new format increases the reuse of the MOOC resources by our professors in our own master programs.
With all these actions, we were able to reach a completion rate between 80 and 96% of total enrolled learners, compared to the completion rate of 15 to 28% of total enrolled learners recorded in our original MOOC format. This holds for small groups (50–100 learners) run as SPOCs, but also for large groups (more than 2500 learners) run as a Massive and Multimodal Private Online Course (“2M-POC”). Today a MOOC is not a simple assembly of videos, texts, discussion forums and validation exercises, but a complete multimodal learning path including social learning, personal follow-up, and synchronous and asynchronous modules. We conclude that the original MOOC format is not suitable for providing efficient training to companies, and that the learning path must be re-engineered into a hybrid, multimodal SPOC compatible with a cost-effective business model.
This work explores the use of different generative AI tools in the design of MOOC courses. The authors employed a variety of AI-based tools, including natural language processing tools (e.g. ChatGPT) and multimedia content authoring tools (e.g. DALL-E 2, Midjourney, Tome.ai), to assist in the course design process. The aim was to address the unique challenges of MOOC course design, which include creating engaging and effective content, designing interactive learning activities, and assessing student learning outcomes. The authors identified positive results from the incorporation of AI-based tools, which significantly improved the quality and effectiveness of MOOC course design. The tools proved particularly effective in analyzing and categorizing course content, identifying key learning objectives, and designing interactive learning activities that engaged students and facilitated learning. Moreover, the use of AI-based tools streamlined the course design process, significantly reducing the time required to design and prepare the courses. In conclusion, the integration of generative AI tools into the MOOC course design process holds great potential for improving the quality and efficiency of these courses. Researchers and course designers should consider the advantages of incorporating generative AI tools into their design process to enhance their course offerings and facilitate student learning outcomes while also reducing the time and effort required for course development.
At the beginning of 2020, with COVID-19, courts of justice worldwide had to move online to continue providing judicial service. Digital technologies materialized court practices in ways unthinkable shortly before the pandemic, creating resonances as well as frictions with judicial and legal regulation. A better understanding of the dynamics at play in the digitalization of courts is paramount for designing justice systems that serve their users better, ensure fair and timely dispute resolutions, and foster access to justice. Building on three major bodies of literature — e-justice, digitalization and organization studies, and design research — Designing for Digital Justice takes a nuanced approach to account for human and more-than-human agencies.
Using a qualitative approach, I have studied in depth the digitalization of Chilean courts during the pandemic, specifically between April 2020 and September 2022. Leveraging a comprehensive source of primary and secondary data, I traced back the genealogy of the novel materializations of courts’ practices structured by the possibilities offered by digital technologies. In five case studies, I show in detail how the courts got to 1) work remotely, 2) host hearings via videoconference, 3) engage with users via social media (i.e., Facebook and Chat Messenger), 4) broadcast a show with judges answering questions from users via Facebook Live, and 5) record, stream, and upload judicial hearings to YouTube to fulfil the publicity requirement of criminal hearings. The digitalization of courts during the pandemic is characterized by a suspended normativity, which makes innovation possible yet presents risks. While digital technologies enabled the judiciary to provide services continuously, they also created the risk of displacing traditional judicial and legal regulation.
Contributing to liminal innovation and digitalization research, Designing for Digital Justice theorizes four phases: 1) the pre-digitalization phase resulting in the development of regulation, 2) the hotspot of digitalization resulting in the extension of regulation, 3) the digital innovation redeveloping regulation (moving to a new, preliminary phase), and 4) the permanence of temporal practices displacing regulation. Contributing to design research, Designing for Digital Justice provides new possibilities for innovation in the courts, focusing at different levels to better address tensions generated by digitalization. Fellow researchers will find in these pages a sound theoretical advancement at the intersection of digitalization and justice with novel methodological references. Practitioners will benefit from the actionable governance framework, the Designing for Digital Justice Model, which provides three fields of possibilities for action to design better justice systems. Only by taking into account digital, legal, and social factors can we design better systems that promote access to justice, the rule of law, and, ultimately, social peace.
The Security Operations Center (SOC) represents a specialized unit responsible for managing security within enterprises. To aid in its responsibilities, the SOC relies heavily on a Security Information and Event Management (SIEM) system that functions as a centralized repository for all security-related data, providing a comprehensive view of the organization's security posture. Due to the ability to offer such insights, SIEMs are considered indispensable tools facilitating SOC functions, such as monitoring, threat detection, and incident response.
Despite advancements in big data architectures and analytics, most SIEMs fall short of keeping pace. Architecturally, they function merely as log search engines, lacking the support for distributed large-scale analytics. Analytically, they rely on rule-based correlation, neglecting the adoption of more advanced data science and machine learning techniques.
This thesis first proposes a blueprint for next-generation SIEM systems that emphasize distributed processing and multi-layered storage to enable data mining at a big data scale. Next, with the architectural support, it introduces two data mining approaches for advanced threat detection as part of SOC operations.
First, a novel graph mining technique that formulates threat detection within the SIEM system as a large-scale graph mining and inference problem, built on the principles of guilt-by-association and exempt-by-reputation. The approach entails the construction of a Heterogeneous Information Network (HIN) that models shared characteristics and associations among entities extracted from SIEM-related events/logs. On this network, a novel graph-based inference algorithm is used to infer a node's maliciousness score based on its associations with other entities in the HIN. Second, an innovative outlier detection technique that imitates a SOC analyst's reasoning process to find anomalies/outliers. The approach emphasizes explainability and simplicity, achieved by combining the output of simple context-aware univariate submodels that calculate an outlier score for each entry.
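The second approach described above — combining simple univariate submodels into one explainable outlier score — can be illustrated with a minimal sketch. The z-score-style submodel, the score-squashing rule, and the averaging of per-feature scores are illustrative assumptions, not the thesis's actual implementation:

```python
# Hedged sketch: combining context-aware univariate outlier submodels
# into a single explainable score. Submodel design and the averaging
# rule are illustrative assumptions.

def zscore_submodel(history):
    """Build a scoring function from a context's historical values."""
    n = len(history)
    mean = sum(history) / n
    std = (sum((x - mean) ** 2 for x in history) / n) ** 0.5

    def score(value):
        if std == 0:
            return 0.0 if value == mean else 1.0
        z = abs(value - mean) / std
        return z / (1.0 + z)  # squash |z| into [0, 1) for comparability

    return score

def combined_outlier_score(entry, submodels):
    """Average per-feature scores; keep the parts for explainability."""
    parts = {feat: submodels[feat](val)
             for feat, val in entry.items() if feat in submodels}
    return sum(parts.values()) / len(parts), parts

# Example context: a user's typical login hour and data volume (MB)
submodels = {
    "hour": zscore_submodel([9, 10, 9, 11, 10, 9, 10]),
    "mbytes": zscore_submodel([5, 6, 5, 7, 6, 5, 6]),
}
score, parts = combined_outlier_score({"hour": 3, "mbytes": 500}, submodels)
```

Because each per-feature contribution is retained in `parts`, an analyst can see which attribute drove a high score — the explainability property the thesis emphasizes.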
Both approaches were tested in academic and real-world settings, demonstrating high performance when compared to other algorithms as well as practicality alongside a large enterprise's SIEM system.
This thesis establishes the foundation for next-generation SIEM systems that can enhance today's SOCs and facilitate the transition from human-centric to data-driven security operations.
xMOOCs
(2023)
The World Health Organization designed OpenWHO.org to provide an inclusive and accessible online environment that equips learners across the globe with critical up-to-date information so that they can effectively protect themselves in health emergencies. The platform thus focuses on the eXtended Massive Open Online Course (xMOOC) modality – content-focused and expert-driven, one-to-many modelled, and self-paced for scalable learning. In this paper, we describe how OpenWHO utilized xMOOCs to reach mass audiences during the COVID-19 pandemic; the paper specifically examines the accessibility, language inclusivity and adaptability of hosted xMOOCs. As of February 2023, OpenWHO had 7.5 million enrolments across 200 xMOOCs on health emergency, epidemic, pandemic and other public health topics available across 65 languages, including 46 courses targeted for the COVID-19 pandemic. Our results suggest that the xMOOC modality allowed OpenWHO to expand learning during the pandemic to previously underrepresented groups, including women, participants ages 70 and older, and learners younger than age 20. The OpenWHO use case shows that xMOOCs should be considered when there is a need for massive knowledge transfer in health emergency situations, yet the approach should be context-specific according to the type of health emergency, targeted population and region. Our evidence also supports previous calls to put intervention elements that contribute to removing barriers to access at the core of learning and health information dissemination. Equity must be the fundamental principle and organizing criteria for public health work.
With the growing number of online learning resources, it becomes increasingly difficult and overwhelming to keep track of the latest developments and to find orientation in the plethora of offers. AI-driven services to recommend standalone learning resources or even complete learning paths are discussed as a possible solution for this challenge. To function properly, such services require a well-defined set of metadata provided by the learning resource. During the last few years, the so-called MOOChub metadata format has been established as a de-facto standard by a group of MOOC providers in German-speaking countries. This format, which is based on schema.org, already delivers a quite comprehensive set of metadata. So far, this set has been sufficient to list, display, sort, filter, and search for courses on several MOOC and open educational resources (OER) aggregators. AI recommendation services and further automated integration, beyond a plain listing, have special requirements, however. To optimize the format for proper support of such systems, several extensions and modifications have to be applied. We herein report on a set of suggested changes to prepare the format for this task.
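Since the MOOChub format builds on schema.org, a course record of the kind such aggregators and recommendation services consume can be sketched as follows. The field names follow schema.org's `Course` and `CourseInstance` types; the concrete values and the exact MOOChub profile are illustrative assumptions, not the official specification:

```python
# Hedged sketch of a schema.org-based course metadata record, of the
# kind the MOOChub format builds on. Values are illustrative assumptions.
import json

course = {
    "@context": "https://schema.org",
    "@type": "Course",
    "name": "Introduction to Open Education",
    "description": "A self-paced MOOC on OER fundamentals.",
    "inLanguage": "de",
    "provider": {"@type": "Organization", "name": "Example MOOC Platform"},
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "hasCourseInstance": {
        "@type": "CourseInstance",
        "courseMode": "online",
        "startDate": "2024-04-01",
    },
}

# Aggregators can list, filter, and search over such records once
# every platform serializes them uniformly.
serialized = json.dumps(course, indent=2)
```

The extensions the paper proposes would add further machine-readable fields on top of such a base record so that AI recommenders can go beyond plain listing.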
“How can a course structure be redesigned based on empirical data to enhance the learning effectiveness through a student-centered approach using objective criteria?”, was the research question we asked. “Digital Twins for Virtual Commissioning of Production Machines” is a course using several innovative concepts, including an in-depth practical part with online experiments, called virtual labs. The teaching-learning concept is continuously evaluated. Card sorting is a popular method for designing information architectures (IA), “a practice of effectively organizing, structuring, and labeling the content of a website or application into a structure that enables efficient navigation” [11]. In the presented higher education context, a so-called hybrid card sort was used, in which each participant had to sort 70 cards into seven predefined categories or create new categories themselves. Twelve out of 28 students voluntarily participated in the process, and short interviews were conducted after the activity. The analysis of the category mapping creates a quantitative measure of the (dis-)similarity of the keywords in specific categories using hierarchical clustering (HCA). The learning designer could then interpret the results to make decisions about the number, labeling and order of sections in the course.
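The quantitative step described above — turning participants' category mappings into a dissimilarity measure that hierarchical clustering can consume — can be sketched in a few lines. The cards, categories, and helper name are hypothetical examples, not the study's actual data:

```python
# Hedged sketch: build a pairwise card dissimilarity matrix from card-sort
# results (share of participants who placed two cards in different
# categories). Such a matrix is the usual input to hierarchical
# clustering (HCA). Cards and sorts below are hypothetical.

def dissimilarity_matrix(sorts, cards):
    """sorts: list of {card: category} dicts, one per participant."""
    n = len(cards)
    d = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            differing = sum(1 for s in sorts
                            if s[cards[i]] != s[cards[j]])
            d[i][j] = d[j][i] = differing / len(sorts)
    return d

cards = ["sensor", "actuator", "OPC UA", "PLC"]
sorts = [  # three hypothetical participants
    {"sensor": "hardware", "actuator": "hardware",
     "OPC UA": "protocols", "PLC": "hardware"},
    {"sensor": "hardware", "actuator": "hardware",
     "OPC UA": "protocols", "PLC": "control"},
    {"sensor": "hardware", "actuator": "hardware",
     "OPC UA": "protocols", "PLC": "protocols"},
]
d = dissimilarity_matrix(sorts, cards)
# cards always co-sorted get 0.0; cards never co-sorted get 1.0
```

Feeding `d` to an agglomerative clustering routine (e.g. average linkage) then yields the dendrogram from which section groupings can be read off.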
This paper presents a new design for MOOCs for professional development of skills needed to meet the UN Sustainable Development Goals – the CoMOOC or Co-designed Massive Open Online Collaboration. The CoMOOC model is based on co-design with multiple stakeholders, including end-users within the professional communities the CoMOOC aims to reach. This paper shows how the CoMOOC model could help the tertiary sector deliver on the UN Sustainable Development Goals (UNSDGs) – including but not limited to SDG 4 Education – by providing a more effective vehicle for professional development at the scale that the UNSDGs require. Interviews with professionals using MOOCs and design-based research with professionals have informed the development of the CoMOOC model. This research shows that open, online, collaborative learning experiences are highly effective for building professional community knowledge. Moreover, this research shows that the collaborative learning design at the heart of the CoMOOC model is feasible cross-platform. Research with teachers working in crisis contexts in Lebanon, many of whom were refugees, will be presented to show how this form of large-scale, co-designed, online learning can support professionals, even in the most challenging contexts, such as mass displacement, where expertise is urgently required.
“Financial Analysis” is an online course designed for professionals consisting of three MOOCs, offering a professionally and institutionally recognized certificate in finance. The course is open but not free of charge and attracts mostly professionals from the banking industry. The primary objective of this study is to identify indicators that can predict learners at high risk of failure. To achieve this, we analyzed data from a previous run of the course, which had 875 enrolled learners during Fall 2021. We used correspondence analysis to examine demographic and behavioral variables.
The initial results indicate that demographic factors have a minor impact on the risk of failure in comparison to learners’ behaviors on the course platform. Two primary profiles were identified: (1) successful learners who utilized all the documents offered and spent between one to two hours per week, and (2) unsuccessful learners who used less than half of the proposed documents and spent less than one hour per week. Between these groups, at-risk students were identified as those who used more than half of the proposed documents and spent more than two hours per week. The goal is to identify those in group 1 who may be at risk of failing and those in group 2 who may succeed in the current MOOC, and to implement strategies to assist all learners in achieving success.
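The two behavioral profiles and the in-between at-risk band reported above can be expressed as simple rules over document usage share and weekly study hours. The thresholds come directly from the abstract; the function name and the handling of cases on the boundaries are assumptions for illustration:

```python
# Hedged sketch: the learner profiles described above as explicit rules.
# Thresholds are taken from the study's reported profiles; boundary
# handling and labels are illustrative assumptions.

def profile(doc_share, hours_per_week):
    """doc_share: fraction of offered documents used (0..1)."""
    if doc_share >= 1.0 and 1 <= hours_per_week <= 2:
        return "likely successful"      # profile (1): all docs, 1-2 h/week
    if doc_share < 0.5 and hours_per_week < 1:
        return "likely unsuccessful"    # profile (2): <half docs, <1 h/week
    if doc_share > 0.5 and hours_per_week > 2:
        return "at risk / monitor"      # in-between band from the study
    return "unclassified"
```

Such explicit rules make it easy to flag, week by week, which learners drift from profile (1) toward the at-risk band so that support strategies can be targeted.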
Challenges and proposals for introducing digital certificates in higher education infrastructures
(2023)
Questions about the recognition of MOOCs within and outside higher education were already being raised in the early 2010s. Today, recognition decisions are still made more or less on a case-by-case basis. However, digital certification approaches are now emerging that could automate recognition processes. The technical development of the required machine-readable documents and infrastructures is already well advanced in some cases. The DigiCerts consortium has developed a solution based on a collective blockchain. There are ongoing and open discussions regarding the particular technology, but the institutional implementation of digital certificates raises further questions. A number of workshops have been held at the Institute for Interactive Systems at Technische Hochschule Lübeck, which have identified the need for new responsibilities for issuing certificates. It has also become clear that all members of higher education institutions need to develop skills in the use of digital certificates.
Due to anthropogenic greenhouse gas emissions, Earth’s average surface temperature is steadily increasing. As a consequence, many weather extremes are likely to become more frequent and intense. This poses a threat to natural and human systems, with local impacts capable of destroying exposed assets and infrastructure, and disrupting economic and societal activity. Yet, these effects are not locally confined to the directly affected regions, as they can trigger indirect economic repercussions through loss propagation along supply chains. As a result, local extremes yield a potentially global economic response. To build economic resilience and design effective adaptation measures that mitigate adverse socio-economic impacts of ongoing climate change, it is crucial to gain a comprehensive understanding of indirect impacts and the underlying economic mechanisms.
Presenting six articles in this thesis, I contribute towards this understanding. To this end, I expand on local impacts under current and future climate, the resulting global economic response, as well as the methods and tools to analyze this response.
Starting with a traditional assessment of weather extremes under climate change, the first article investigates extreme snowfall in the Northern Hemisphere until the end of the century. Analyzing an ensemble of global climate model projections reveals an increase of the most extreme snowfall, while mean snowfall decreases.
Assessing repercussions beyond local impacts, I employ numerical simulations to compute indirect economic effects from weather extremes with the numerical agent-based shock propagation model Acclimate. This model is used in conjunction with the recently emerged storyline framework, which involves analyzing the impacts of a particular reference extreme event and comparing them to impacts in plausible counterfactual scenarios under various climate or socio-economic conditions. Using this approach, I introduce three primary storylines that shed light on the complex mechanisms underlying economic loss propagation.
In the second and third articles of this thesis, I analyze storylines for the historical Hurricanes Sandy (2012) and Harvey (2017) in the USA. For this, I first estimate local economic output losses and then simulate the resulting global economic response with Acclimate. The storyline for Hurricane Sandy thereby focuses on global consumption price anomalies and the resulting changes in consumption. I find that the local economic disruption leads to a global wave-like economic price ripple, with upstream effects propagating in the supplier direction and downstream effects in the buyer direction. Initially, an upstream demand reduction causes consumption price decreases, followed by a downstream supply shortage and increasing prices, before the anomalies decay in a normalization phase. A dominant upstream or downstream effect leads to net consumption gains or losses of a region, respectively. Moreover, I demonstrate that a longer direct economic shock intensifies the downstream effect for many regions, leading to an overall consumption loss.
The third article of my thesis builds upon the developed loss estimation method by incorporating projections to future global warming levels. I use these projections to explore how the global production response to Hurricane Harvey would change under further increased global warming. The results show that, while the USA is able to nationally offset direct losses in the reference configuration, other countries have to compensate for increasing shares of counterfactual future losses. This compensation is mainly achieved by large exporting countries, but gradually shifts towards smaller regions. These findings not only highlight the economy’s ability to flexibly mitigate disaster losses to a certain extent, but also reveal the vulnerability and economic disadvantage of regions that are exposed to extreme weather events.
The storyline in the fourth article of my thesis investigates the interaction between global economic stress and the propagation of losses from weather extremes. I examine indirect impacts of weather extremes — tropical cyclones, heat stress, and river floods — worldwide under two different economic conditions: an unstressed economy and a globally stressed economy, as seen during the Covid-19 pandemic. I demonstrate that the adverse effects of weather extremes on global consumption are strongly amplified when the economy is under stress. Specifically, consumption losses in the USA and China double and triple, respectively, due to the global economy’s decreased capacity for disaster loss compensation. An aggravated scarcity intensifies the price response, causing consumption losses to increase.
Advancing on the methods and tools used here, the final two articles in my thesis extend the agent-based model Acclimate and formalize the storyline approach. With the model extension described in the fifth article, regional consumers make rational choices on the goods bought such that their utility is maximized under a constrained budget. In an out-of-equilibrium economy, these rational consumers are shown to temporarily increase consumption of certain goods in spite of rising prices.
The sixth article of my thesis proposes a formalization of the storyline framework, drawing on multiple studies including storylines presented in this thesis. The proposed guideline defines eight central elements that can be used to construct a storyline.
Overall, this thesis contributes towards a better understanding of economic repercussions of weather extremes. It achieves this by providing assessments of local direct impacts, highlighting mechanisms and impacts of loss propagation, and advancing on methods and tools used.
Starch is a biopolymer for which, despite its simple composition, understanding the precise mechanism behind its formation and regulation has been challenging. Several approaches and bioanalytical tools can be used to expand the knowledge on the different parts involved in starch metabolism. In this sense, a comprehensive analysis targeting two of the main groups of molecules involved in this process – proteins, as effectors/regulators of starch metabolism, and maltodextrins, as starch components and degradation products – was conducted in this research work using potato plants (Solanum tuberosum L. cv. Desiree) as the model of study. On one side, proteins physically interacting with potato starch were isolated and analyzed through mass spectrometry and western blot for their identification. Alternatively, starch-interacting proteins were explored in potato tubers from transgenic plants having antisense inhibition of starch-related enzymes and in tubers stored under variable environmental conditions. Most of the proteins recovered from the starch granules corresponded to previously described proteins having a specific role in the starch metabolic pathway. Another set of proteins could be grouped as protease inhibitors, which were found weakly interacting with starch. Variations in the protein profile obtained after electrophoretic separation became clear when tubers were stored under different temperatures, indicating a differential expression of proteins in response to changing environmental conditions.
On the other hand, since maltodextrin metabolism is thought to be involved in both starch initiation and degradation, the soluble maltooligosaccharide content in potato tubers was analyzed in this work under diverse experimental variables. For this, tuber disc samples from wild-type plants and from transgenic lines strongly repressing the plastidial or cytosolic forms of α-glucan phosphorylase and phosphoglucomutase were incubated with glucose, glucose-6-phosphate, and glucose-1-phosphate solutions to evaluate the influence of such enzymes on the conversion of these carbon sources into soluble maltodextrins. Relative maltodextrin amounts analyzed through capillary electrophoresis with laser-induced fluorescence detection (CE-LIF) revealed that wild-type tuber discs could immediately take up glucose-1-phosphate and use it to produce maltooligosaccharides with a degree of polymerization of up to 30 (DP30), in contrast to transgenic tubers with strong repression of the plastidial glucan phosphorylase. The results obtained from the maltodextrin analysis support previous indications that a specific transporter for glucose-1-phosphate may exist in both the plant cell and the plastidial membranes, thereby allowing a glucose-6-phosphate-independent transport. Furthermore, the results confirm that the plastidial glucan phosphorylase is responsible for producing longer maltooligosaccharides in the plastids by catalyzing a glucan polymerization reaction when glucose-1-phosphate is available. All these findings contribute to a better understanding of the role of the plastidial glucan phosphorylase as a key enzyme directly involved in the synthesis and degradation of glucans and of their implications for starch metabolism.
Its properties make copper one of the world’s most important functional metals, and numerous megatrends are increasing the demand for it. This requires the prospection and exploration of new deposits, as well as the monitoring of copper quality in the various production steps. A promising technique for these tasks is Laser-Induced Breakdown Spectroscopy (LIBS). Among its unique features is the ability to measure on site without sample collection and preparation. In this work, copper-bearing minerals from two different deposits are studied: the first set of field samples comes from a volcanogenic massive sulfide (VMS) deposit, the second from a stratiform sedimentary copper (SSC) deposit. Different approaches are used to analyze the data. First, univariate regression (UVR) is used; however, due to the strong influence of matrix effects, it is not suitable for the quantitative analysis of copper grades. Second, the multivariate method of partial least squares regression (PLSR) is used, which is more suitable for quantification. In addition, the effects of the surrounding matrices on the LIBS data are characterized by principal component analysis (PCA), alternative regression methods to PLSR are tested, and the PLSR calibration is validated using field samples.
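The contrast between univariate calibration and PLSR can be sketched on synthetic data. The minimal NIPALS PLS1 implementation below, the simulated spectra, and the channel positions are all illustrative assumptions, not the calibration models or LIBS data used in the thesis:

```python
import numpy as np

def pls1(X, y, n_comp):
    """Minimal NIPALS PLS1; returns regression coefficients for centered X."""
    Xc, yc = X - X.mean(0), y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)          # weight vector
        t = Xc @ w                      # scores
        p = Xc.T @ t / (t @ t)          # loadings
        q = (yc @ t) / (t @ t)
        Xc -= np.outer(t, p)            # deflate spectra
        yc -= q * t                     # deflate target
        W.append(w); P.append(p); Q.append(q)
    W, P = np.array(W).T, np.array(P).T
    return W @ np.linalg.solve(P.T @ W, np.array(Q))

# Synthetic LIBS-like spectra: a narrow Cu emission line overlapped by a
# broad, matrix-dependent background (all values are illustrative).
rng = np.random.default_rng(0)
n_samples, n_channels = 60, 100
cu = rng.uniform(0, 10, n_samples)        # "copper grade"
matrix = rng.uniform(0, 5, n_samples)     # interfering matrix contribution
chan = np.arange(n_channels)
line = np.exp(-0.5 * ((chan - 50) / 2.0) ** 2)
background = np.exp(-0.5 * ((chan - 55) / 12.0) ** 2)
X = np.outer(cu, line) + np.outer(matrix, background)
X += 0.01 * rng.standard_normal(X.shape)

# Univariate regression on the peak channel alone (matrix-sensitive)
slope, intercept = np.polyfit(X[:, 50], cu, 1)
pred_uvr = slope * X[:, 50] + intercept

# Two-component PLSR on the full spectrum
b = pls1(X, cu, n_comp=2)
pred_pls = (X - X.mean(0)) @ b + cu.mean()

rmse = lambda p: np.sqrt(np.mean((p - cu) ** 2))
rmse_uvr, rmse_pls = rmse(pred_uvr), rmse(pred_pls)
print(rmse_uvr, rmse_pls)
```

On data like this, the single-channel regression is biased by the overlapping matrix band, while the two-component PLSR separates the two spectral contributions, which mirrors why PLSR is the more suitable quantification method under strong matrix effects.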
Extreme flooding displaces an average of 12 million people every year. Marginalized populations in low-income countries are at particularly high risk, but industrialized countries are also susceptible to displacement and its inherent societal impacts. The risk of being displaced results from a complex interaction of flood hazard, the population exposed in floodplains, and socio-economic vulnerability. Ongoing global warming changes the intensity, frequency, and duration of flood hazards, undermining existing protection measures. Meanwhile, settlements in attractive yet hazardous flood-prone areas have led to a higher degree of population exposure. Finally, the vulnerability to displacement is altered by demographic and social change, shifting economic power, urbanization, and technological development. These risk components have been investigated intensively in the context of loss of life and economic damage; however, little is known about the risk of displacement under global change.
This thesis aims to improve our understanding of flood-induced displacement risk under global climate change and socio-economic change. This objective is tackled by addressing the following three research questions. First, by focusing on the choice of input data, how well can a global flood modeling chain reproduce flood hazards of historic events that led to displacement? Second, what are the socio-economic characteristics that shape the vulnerability to displacement? Finally, to what degree has climate change potentially contributed to recent flood-induced displacement events?
To answer the first question, a global flood modeling chain is evaluated by comparing simulated flood extent with satellite-derived inundation information for eight major flood events. A focus is set on the sensitivity to different combinations of the underlying climate reanalysis datasets and global hydrological models, which serve as input for the global hydraulic model. An evaluation scheme of performance scores shows that, without considering flood protection, the simulated flood extent is mostly overestimated and only for a few events depends on the choice of global hydrological model. Results are more sensitive to the underlying climate forcing, with two datasets differing substantially from the third. In contrast, incorporating flood protection standards results in an underestimation of flood extent, pointing to potential deficiencies in the protection level estimates or the flood frequency distribution within the modeling chain.
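Contingency-table skill scores are a common way to compare simulated and satellite-derived flood extent cell by cell. The following sketch is illustrative (the chosen scores and the toy grids are assumptions, not the thesis' evaluation scheme):

```python
import numpy as np

def flood_scores(sim, obs):
    """Contingency-table scores for binary flood-extent grids."""
    sim, obs = sim.astype(bool), obs.astype(bool)
    hits = np.sum(sim & obs)           # flooded in both
    misses = np.sum(~sim & obs)        # observed but not simulated
    false_alarms = np.sum(sim & ~obs)  # simulated but not observed
    return {
        "hit_rate": hits / (hits + misses),
        "false_alarm_ratio": false_alarms / (hits + false_alarms),
        "critical_success_index": hits / (hits + misses + false_alarms),
    }

# Toy 3x4 grids (1 = flooded cell)
sim = np.array([[1, 1, 0, 0], [1, 1, 1, 0], [0, 1, 0, 0]])
obs = np.array([[1, 0, 0, 0], [1, 1, 0, 0], [0, 1, 1, 0]])
scores = flood_scores(sim, obs)
print(scores)
```

In such an evaluation, an overestimated flood extent shows up as a high false alarm ratio, while an underestimation (for example after adding protection standards) shows up as a reduced hit rate.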
Following the analysis of a physical flood hazard model, the socio-economic drivers of vulnerability to displacement are investigated in the next step. For this purpose, a satellite-based, global collection of flood footprints is linked with two disaster inventories to match societal impacts with the corresponding flood hazard. For each event, the number of affected people, assets, and critical infrastructure facilities, as well as socio-economic indicators, are computed. The resulting datasets are made publicly available and contain 335 displacement events and 695 mortality/damage events. Based on this new data product, event-specific displacement vulnerabilities are determined and multiple (national-level) dependencies on socio-economic predictors are derived. The results suggest that economic prosperity only partially shapes vulnerability to displacement; urbanization, infant mortality rate, the share of elderly people, population density, and critical infrastructure exhibit a stronger functional relationship, suggesting that higher levels of development are generally associated with lower vulnerability.
Besides examining the contextual drivers of vulnerability, the role of climate change in the context of human displacement is also explored. An impact attribution approach is applied to the example of Cyclone Idai and the associated extreme coastal flooding in Mozambique. A combination of coastal flood modeling and satellite imagery is used to construct factual and counterfactual flood events. This storyline-type attribution method allows for investigating the isolated or combined effects of sea level rise and the intensification of cyclone wind speeds on coastal flooding. The results suggest that displacement risk has increased by 3.1 to 3.5% due to the total effects of climate change on coastal flooding, with the effect of increasing wind speed being the dominant factor.
In conclusion, this thesis highlights the potential and challenges of modeling flood-induced displacement risk. While this work explores the sensitivity of global flood modeling to the choice of input data, new questions arise on how to effectively improve the reproduction of flood return periods and the representation of protection levels. It is also demonstrated that disentangling displacement vulnerabilities is feasible, with the results providing useful information for risk assessments, effective humanitarian aid, and disaster relief. The impact attribution study is a first step in assessing the effects of global warming on displacement risk, leading to new research challenges, e.g., coupling fluvial and coastal flood models or the attribution of other hazard types and displacement events. This thesis is one of the first to address flood-induced displacement risk from a global perspective. The findings motivate further development of the global flood modeling chain to improve our understanding of displacement vulnerability and the effects of global warming.
Desperados at Sea
(2023)
Pirates are fortune-seeking fighters at sea. Their exploits fire the imaginations of their victims and admirers, drawing a veil over individuals who rarely bear a real name and pursue their adventurous occupations as buccaneers, filibusters, freebooters, privateers, pirates, or corsairs. Piracy, corsairing, and contraband trade were epidemic among the Egyptians and the Phoenicians, the Greeks and the Vikings, the Spaniards and the Ottomans, the Muslims, and the Christians. And the Jews.
Portal Wissen = Learning
(2023)
Changing through learning is one of the most important characteristics we humans have. We are born and can – it seems – do nothing. We have to comprehend, copy, and acquire everything: grasping and walking, eating and speaking. Of course, we also have to learn to read and work with numbers. By now we know: we will never be able to finish this. At best, we learn for a lifetime. If we stop, it harms us. The Greek philosopher Plato said more than 2,400 years ago, “There is no shame in not knowing something. The shame is in not being willing to learn.”
As humans we are also capable of learning; thanks to more and more knowledge about the world around us, we have moved from the Stone Age into the digital age. That this development is not a finish line either, but that we still have a long way to go, is shown by man-made climate change – and above all by our inability as a global community to translate what research teaches us into appropriate action. Let us dare to hope that we comprehend this as well.
What we tend to ignore in the intensive discussion about the multi-layered levels of learning: We are by no means the only learners. Many, if not all, living beings on our planet learn, some in more purposeful, complex, and cognitive ways than others. And for some time now, machines have also been able to learn more or less independently. Artificial intelligence sends its regards.
The significance of learning for human beings can hardly be overestimated. Science, too, has understood this and has taken up learning processes and conditions in almost all contexts, whether they concern our own learning or that of the world around us. We have investigated some of these for the current issue of “Portal Wissen”.
Psycholinguist Natalie Boll-Avetisyan has developed a box that can be used to detect language learning disorders already in young children. The behavioral biologists Jana Eccard and Valeria Mazza investigated the behavior of small rodents and found that the animals not only develop different personality traits but also learn to adapt them to different environmental conditions. Computer linguist David Schlangen examines the question of what machines have to learn so that our communication with them works even better.
Since research is ultimately always a learning process that strives to understand something yet unknown, this time all texts somehow follow the motto of the title theme: It is about what the history of past centuries reveals about “military cultures of violence” and the question of what lessons we should learn from natural hazards for the future.
We talked with a legal scholar who looks beyond the university’s backyard and wants to make law comprehensible to everyone. We also talked with a philosopher who analyzes why “having an opinion” means something different today than 100 years ago. We report about an AI-based genome analysis that can change healthcare sustainably. Furthermore, it is about the job profile “YouTuber”, minor cosmopolitanisms, and wildlife management in Africa. When you have finished reading, you will have learnt something. Promised! Enjoy your read!
“Israel am Meere”
(2023)
For Jews in Germany, the period following the Nazis’ rise to power in January 1933 was a period of decision-making on many levels: How should they respond to the persecution? If they decided to emigrate, many more decisions had to be made: How does one leave a country, and where should one go? A key moment in the process and in the cultural practice of emigration is the beginning of the sea voyage – when the need for departure and the hope for a new arrival jointly create a period of liminality. Looking at reports from sea voyages of exploration and emigration from the 1930s, this contribution discusses the question of whether, and in what ways, such reflections can be read in the context of religious experiences and in the search for Jewish identities in times of turmoil.
“Creating a Maritime Future”
(2023)
This article explores the importance of the port city of Hamburg in the evolving discourses on the creation of a maritime future, a vision which became influential in the 1930s, 1940s and 1950s. While some Jewish representatives in the city aimed at preserving and intertwining Hanseatic and Jewish traditions in order to secure a Jewish presence in the port city under the pressure of the Nazi regime and thereafter, others wanted to create new emigration opportunities, especially to Mandatory Palestine, and create a Jewish maritime future in Eretz Israel. Different Zionist organizations supported the newly evolving maritime ideas, such as the “conquest of the sea”, and promoted the image of a Jewish seafaring nation. Despite the difficulties in the 1940s, these concepts gained influence post-1945 and led to the foundation of the fishery kibbutz “Zerubavel” in Blankenese/Hamburg. However, the idea of a Hanseatic Jewish future also remained influential and illustrates how differently a “Jewish maritime future” was imagined and used to link past, present and future.
Jacob Brandon Maduro’s Memoirs and Related Observations (Havana, 1953) speak to the lasting yet malleable legacy of the Jewish Caribbean/Atlantic mercantile communities that defined early modern settlement in the Americas. A close reading of the Memoirs, alongside relevant archival records and community narratives, lends new perspectives to scholarship on Port Jewries and the Atlantic Diaspora. Specifically concerned with Jacob’s adoption of such leading intellectual and political tropes as the Monroe Doctrine, José Martí’s Nuestra America, and a Zionism that evolved from an ideology to a reality, the Memoirs reveal a narrative at once defined by the tremendous upheavals of the first half of the 20th century, and an enduring sense of Jewish diasporic peoplehood defined through a Port Jew paradigm whereby the preservation of Jewish ethnicity is understood as synonymous with the championing of modernity.
Background: Physical fitness is a key aspect of children’s ability to perform activities of daily living and engage in leisure activities, and it is associated with important health characteristics. As such, it shows multi-directional associations with weight status as well as executive functions, and varies according to a variety of moderating factors, such as the child’s gender, age, geographical location, and socioeconomic conditions and context. The assessment and monitoring of children’s physical fitness has gained attention in recent decades, as has the question of how to promote physical fitness through the implementation of a variety of programs and interventions. However, these programs and interventions rarely focus on children with deficits in their physical fitness. Due to their deficits, these children are at the highest risk of suffering health impairments compared to their peers of average fitness. In efforts to promote physical fitness, schools could offer promising and viable approaches to interventions, as they provide access to large youth populations while providing useful infrastructure. Evidence suggests that school-based physical fitness interventions, particularly those that include supplementary physical education, are useful for promoting and improving physical fitness in children with normal fitness. However, there is little evidence on whether these interventions have similar or even greater effects on children with deficits in their physical fitness. Furthermore, the question arises whether these measures help to sustainably improve the development/trajectories of physical fitness in these children.
The present thesis aims to elucidate the following four objectives: (1) to evaluate the effects of a 14-week intervention with 2 × 45 minutes per week of additional remedial physical education on physical fitness and executive functions in children with deficits in their physical fitness; (2) to assess moderating effects of body height and body mass on physical fitness components in children with physical fitness deficits; (3) to assess moderating effects of age and skeletal growth on physical fitness in children with physical fitness deficits; and (4) to analyse moderating effects of different physical fitness components on executive functions in children with physical fitness deficits.
Methods: Using physical fitness data from the EMOTIKON study, 76 third graders with physical fitness deficits were identified in 11 schools in Brandenburg state that met the requirements for implementing a remedial physical education intervention (i.e., employing specially trained physical education teachers). The fitness intervention was implemented in a cross-over design, and schools were randomly assigned to either an intervention-control or a control-intervention group. The remedial physical education intervention consisted of a 14-week remedial physical education curriculum with 2 × 45 minutes per week, supplemented by a physical exercise homework program. Assessments were conducted at the beginning and end of each intervention and control period, and further assessments were conducted at the beginning and end of each school year until the end of sixth grade. Physical fitness as the primary outcome was assessed using fitness tests implemented in the EMOTIKON study (i.e., lower body muscular strength (standing long jump), speed (20 m sprint), cardiorespiratory fitness (6 min run), agility (star run), upper body muscular strength (ball push test), and balance (one-leg balance)). Executive functions as a secondary outcome were assessed using attention and psychomotor processing speed (digit symbol substitution test), mental flexibility and fine motor skills (trail making test), and inhibitory control (Simon task). Anthropometric measures such as body height, body mass, maturity offset, and body composition parameters, as well as socioeconomic information, were recorded as potential moderators.
Results: (1) The evaluation of possible effects of the remedial physical education intervention on the physical fitness and executive functions of children with deficits in their physical fitness did not reveal any detectable intervention-related improvements in physical fitness or executive functions. The implemented analysis strategies did, however, show moderating effects of body mass index (BMI) on performance in the 6 min run, star run, and standing long jump, with children with a lower BMI performing better; moderating effects of proximity to Berlin on performance in the 6 min run and standing long jump, with better performances found in children living closer to Berlin; and overall gender differences in executive function test performance, with boys performing better than girls. (2) When analysing moderating effects of body height and body mass on physical fitness performance, better overall physical fitness was found for taller children. For body mass, a negative effect was found on performance in the 6 min run (linear), standing long jump (linear), and 20 m sprint (quadratic), with better performance associated with lighter children, and a positive effect was found on performance in the ball push test, with heavier children performing better. In addition, the analysis revealed significant interactions between body height and body mass on performance in the 6 min run and 20 m sprint: higher body mass was associated with performance improvements in taller children but with performance declines in smaller children. The analysis also revealed overall age-related improvements in physical fitness and showed that children with better overall physical fitness also exhibit greater age-related improvements.
(3) In the analysis of moderating effects of age and maturity offset on physical fitness performance, two unrotated principal components of z-transformed age and maturity offset values were calculated (i.e., relative growth = (age + maturity offset)/2; growth delay = (age - maturity offset)) to avoid collinearity. Analysing these constructs revealed positive effects of relative growth on performances in the star run, 20 m sprint, and standing long jump, with children of higher relative growth performing better. For growth delay, positive effects were found on performances in the 6 min run and 20 m sprint, with children having larger growth delays showing better performances. Further, the model revealed gender differences in 6 min run and 20 m sprint performances, with girls performing better than boys. (4) Analysing the effects of physical fitness tests on executive functions revealed a positive effect of star run and one-leg balance performance and a negative effect of 6 min run performance on reaction speed in the Simon task. However, when individual differences were accounted for, these specific effects were no longer detectable; instead, all effects were overall positive, with better performances associated with faster reaction speeds. In addition, the analysis revealed a positive correlation between overall reaction speed and the effects of the 6 min run, suggesting that children with greater 6 min run effects had faster overall reaction speeds. Negative correlations were found between star run effects and age effects on Simon task reaction speed, meaning that children with larger star run effects had smaller age effects, and between 6 min run effects and star run effects on Simon task reaction speed, meaning that children with larger 6 min run effects tended to have smaller star run effects and vice versa.
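The two growth constructs defined in (3) can be reproduced directly from their formulas. The sketch below uses simulated age and maturity-offset values (illustrative, not the study data) and confirms that the transformation removes the collinearity between the two correlated inputs:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 76                                   # sample size matching the study cohort
age = rng.uniform(8.0, 9.5, n)           # age in years (illustrative values)
# maturity offset in years to peak height velocity, correlated with age
mo = -2.5 + 0.8 * (age - 8.0) + rng.normal(0, 0.3, n)

z = lambda x: (x - x.mean()) / x.std()   # z-transform
z_age, z_mo = z(age), z(mo)

relative_growth = (z_age + z_mo) / 2     # first unrotated principal component
growth_delay = z_age - z_mo              # second unrotated principal component

r_orig = np.corrcoef(age, mo)[0, 1]      # raw variables are strongly correlated
r_new = np.corrcoef(relative_growth, growth_delay)[0, 1]  # constructs are not
print(r_orig, r_new)
```

Because both inputs are z-transformed to unit variance, the sum and difference constructs are uncorrelated by construction, which is exactly why they can enter the model jointly without collinearity problems.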
Conclusions: (1) The lack of detectable intervention-related effects could have been caused by an insufficient intervention period, by the implementation of comprehensive and thus non-specific exercises, or by both. Accordingly, longer intervention periods and/or more specific exercises may have been more beneficial and could have led to detectable improvements in physical fitness and/or executive functions. However, it remains unclear whether such interventions can benefit children with deficits in physical fitness, as it is possible that their deficits are not caused by a mere lack of exercise but rather depend on the socioeconomic conditions of the children, their families, and their living areas. Therefore, further research is needed to assess the moderation of physical fitness in children with physical fitness deficits and, in particular, the links between children’s environment and their physical fitness trajectories. (2) Findings from this work suggest that using BMI as a composite of body height and body mass may not capture the variation associated with these parameters and their interactions. In particular, because of their multidirectional associations, further research would help elucidate how BMI and its subcomponents influence physical fitness and how they vary between children with and without physical fitness deficits. (3) The assessment of growth-related changes indicated negative effects associated with the growth spurt as children approach the age of peak height velocity, and furthermore showed significant differences in these effects between children. Thus, these effects and possible interindividual differences should be considered when assessing the development of physical fitness in children. (4) Furthermore, this work has shown that the associations between physical fitness and executive functions vary between children and may be moderated by children’s socioeconomic conditions and the structure of their daily activities.
Further research is needed to explore these associations using approaches that account for individual variance.
As a result of CMOS scaling, radiation-induced Single-Event Effects (SEEs) in electronic circuits have become a critical reliability issue for modern Integrated Circuits (ICs) operating under harsh radiation conditions. SEEs can be triggered in combinational or sequential logic by the impact of high-energy particles, leading to destructive or non-destructive faults that result in data corruption or even system failure. Typically, SEE mitigation methods are deployed statically in processing architectures based on worst-case radiation conditions, which is unnecessary most of the time and results in resource overhead. Moreover, space radiation conditions change dynamically, especially during Solar Particle Events (SPEs). The intensity of space radiation can vary over five orders of magnitude within a few hours or days, causing the fault probability in ICs to vary by several orders of magnitude during SPEs. This thesis introduces a comprehensive approach for designing a self-adaptive fault-resilient multiprocessing system to overcome the static mitigation overhead issue. This work mainly addresses the following topics: (1) design of an on-chip radiation particle monitor for real-time radiation environment detection, (2) investigation of a space environment predictor to support solar particle event forecasting, and (3) dynamic mode configuration in the resilient multiprocessing system. According to the detected and predicted in-flight space radiation conditions, the target system can thus be configured to use no mitigation or low-overhead mitigation during non-critical periods, and the redundant resources can be used to improve system performance or save power. During periods of increased radiation activity, such as SPEs, the mitigation methods can be dynamically configured as appropriate for the real-time space radiation environment, resulting in higher system reliability.
Thus, a dynamic trade-off between reliability, performance, and power consumption can be achieved in the target system in real time. All results of this work are evaluated in a highly reliable quad-core multiprocessing system that allows the self-adaptive setting of optimal radiation mitigation mechanisms during run-time. The proposed methods can serve as a basis for establishing a comprehensive self-adaptive resilient system design process. The successful implementation of the proposed design in the quad-core multiprocessor demonstrates its applicability to other designs as well.
Reliable and robust data processing is one of the hardest requirements for systems in fields such as medicine, security, automotive, aviation, and space, in order to prevent critical system failures caused by changes in operating or environmental conditions. In particular, Signal Integrity (SI) effects such as crosstalk may distort the signal information in sensitive mixed-signal designs. Radiation effects are a particular challenge for hardware systems used in space: Single Event Effects (SEEs) induced by high-energy particle hits may lead to faulty computation, corrupted configuration settings, undesired system behavior, or even total malfunction.
Since these applications require extra effort in design and implementation, it is beneficial to master the standard cell design process and the corresponding design flow methodologies optimized for such challenges. Especially for reliable, low-noise differential signaling logic such as Current Mode Logic (CML), a digital design flow is an approach orthogonal to traditional manual design. As a consequence, mandatory preliminary considerations need to be addressed in more detail. First of all, standard cell library concepts with suitable cell extensions for reliable systems and robust space applications have to be elaborated. The resulting design concepts at the cell level should enable logic synthesis for differential logic design or improve radiation-hardness. In parallel, the main objectives of the proposed cell architectures are to reduce the occupied area, power, and delay overhead. Second, a special setup for standard cell characterization is additionally required for proper and accurate logic gate modeling. Last but not least, design methodologies for mandatory design flow stages such as logic synthesis and place and route need to be developed for the respective hardware systems to keep the reliability or the radiation-hardness at an acceptable level.
This thesis proposes and investigates standard cell-based design methodologies and techniques for reliable and robust hardware systems implemented in a conventional semiconductor technology. The focus of this work is on reliable differential logic design and robust radiation-hardening-by-design circuits. The synergistic connections of the digital design flow stages are systematically addressed for these two types of hardware systems. In more detail, a library for differential logic is extended with single-ended pseudo-gates for intermediate design steps to support logic synthesis and layout generation with commercial Computer-Aided Design (CAD) tools. Special cell layouts are proposed to relax signal routing. A library set for space applications is similarly extended by novel Radiation-Hardening-by-Design (RHBD) Triple Modular Redundancy (TMR) cells, enabling single-fault correction. Therein, additional optimized architectures for glitch filter cells, robust scannable and self-correcting flip-flops, and clock-gates are proposed. The circuit concepts and the physical layout representation views of the differential logic gates and the RHBD cells are discussed. Since the quality of results implicitly depends on the accuracy of the standard cell characterization, the characterization is examined for both types. The entire design flow is elaborated from the hardware design description to the layout representations. A 2-Phase routing approach together with an intermediate design conversion step is proposed after the initial place and route stage for reliable, purely differential designs, whereas a special constraining scheme for RHBD applications in a standard technology is presented.
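The TMR principle behind these RHBD cells can be illustrated at the behavioral level: three redundant copies of a signal feed a majority voter, so a single-event upset in any one copy is out-voted. This is a minimal logic-level sketch, not the standard-cell implementation proposed in the thesis:

```python
def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority of three redundant copies.

    For every bit position, the output takes the value held by at least
    two of the three inputs, so one corrupted copy is always masked.
    """
    return (a & b) | (b & c) | (a & c)

word = 0b1011_0010
upset = word ^ 0b0000_0100          # a single-event upset flips one bit in one copy
assert tmr_vote(word, word, upset) == word   # the fault is masked
print(bin(tmr_vote(word, word, upset)))
```

Since two of the three copies always agree when only one is corrupted, the voter output equals the fault-free value at the cost of roughly tripled area, which is exactly the overhead the optimized TMR cell architectures aim to reduce.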
The digital design flow for differential logic design is successfully demonstrated on a reliable differential bipolar CML application. A balanced routing result for its differential signal pairs is obtained by the proposed 2-Phase routing approach. Moreover, the elaborated standard cell concepts and design methodology for RHBD circuits are applied to the digital part of a 7.5-15.5 MSPS 14-bit Analog-to-Digital Converter (ADC) and a complex microcontroller architecture. The ADC is implemented in an unhardened standard semiconductor technology and successfully verified by electrical measurements. The overhead of the proposed hardening approach is additionally evaluated by design exploration of the microcontroller application. Furthermore, the first measurement results obtained for the novel RHBD-∆TMR flip-flops show a radiation tolerance up to threshold Linear Energy Transfers (LET) of 46.1, 52.0, and 62.5 MeV cm² mg⁻¹ and savings in silicon area of 25-50 % for selected TMR standard cell candidates.
In conclusion, the presented design concepts at the cell and library levels, as well as the design flow modifications, are adaptable and transferable to other technology nodes. In particular, the standard cell concepts and design methods proposed in this work enable the design of hybrid solutions that integrate reliable differential logic modules with robust radiation-tolerant circuit parts.
Like conventional software projects, projects in model-driven software engineering require adequate management of multiple versions of development artifacts; importantly, developers must be able to live with temporary inconsistencies. In the case of model-driven software engineering, the employed versioning approaches also have to handle situations where different artifacts, that is, different models, are linked via automatic model transformations.
In this report, we propose a technique for jointly handling the transformation of multiple versions of a source model into corresponding versions of a target model, which enables the use of a more compact representation that may afford improved execution time of both the transformation and further analysis operations. Our approach is based on the well-known formalism of triple graph grammars and a previously introduced encoding of model version histories called multi-version models. In addition to showing the correctness of our approach with respect to the standard semantics of triple graph grammars, we conduct an empirical evaluation that demonstrates the potential benefit regarding execution time performance.
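The core idea of a multi-version model — storing an entire version history in one compact structure — can be sketched as follows. The element names and the set-based annotation are illustrative assumptions, not the report's actual triple-graph-grammar formalization:

```python
# Hypothetical multi-version model: each model element (here, a link between
# two classes) is stored exactly once, annotated with the set of versions
# in which it exists.
mv_edges = {
    ("ClassA", "ClassB"): {1, 2, 3},  # link present in versions 1-3
    ("ClassA", "ClassC"): {2, 3},     # link introduced in version 2
}

def project(version: int) -> set:
    """Recover one ordinary model version from the joint representation."""
    return {edge for edge, versions in mv_edges.items() if version in versions}

assert project(1) == {("ClassA", "ClassB")}
assert project(3) == {("ClassA", "ClassB"), ("ClassA", "ClassC")}
```

Because shared elements are stored once rather than per version, analyses that touch many versions can run over the compact structure instead of materializing each version separately.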
Watershed management requires an understanding of key hydrochemical processes. The Pra Basin is one of the five major river basins in Ghana, with a population of over 4.2 million people. Currently, water resources management faces challenges due to surface water pollution caused by the unregulated release of untreated household and industrial waste into aquatic ecosystems and by illegal mining activities. This has increased the need for groundwater as the most reliable water supply. Our understanding of groundwater recharge mechanisms and chemical evolution in the basin has been inadequate, making effective management difficult. Therefore, the main objective of this work is to gain insight into the processes that determine the hydrogeochemical evolution of groundwater quality in the Pra Basin. The combined use of stable isotope, hydrochemistry, and water level data provides the basis for conceptualizing the chemical evolution of groundwater in the Pra Basin. For this purpose, the origin and evaporation rates of water infiltrating into the unsaturated zone were evaluated. In addition, the Chloride Mass Balance (CMB) and Water Table Fluctuation (WTF) methods were used to quantify groundwater recharge for the basin. Indices such as the water quality index (WQI), the sodium adsorption ratio (SAR), the Wilcox diagram, and the USSL salinity diagram were used in this study to determine the quality of the resource for drinking water and irrigation purposes. Due to the heterogeneity of the hydrochemical data, the statistical techniques of hierarchical cluster and factor analysis were applied to subdivide the data according to their spatial correlation. A conceptual hydrogeochemical model was developed and subsequently validated by applying combinatorial inverse and reaction-pathway-based geochemical models to determine plausible mineral assemblages that control the chemical composition of the groundwater.
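The chloride mass balance method mentioned above estimates recharge from the enrichment of conservative chloride between precipitation and groundwater. A minimal sketch, with purely illustrative numbers (not values from the Pra Basin study):

```python
def cmb_recharge(precip_mm_yr: float, cl_precip: float, cl_groundwater: float) -> float:
    """Chloride mass balance: recharge R = P * Cl_P / Cl_GW.
    Precipitation P in mm/yr, chloride concentrations in mg/L."""
    return precip_mm_yr * cl_precip / cl_groundwater

# Illustrative values: 1500 mm/yr rainfall, chloride enriched 15x in groundwater
r = cmb_recharge(1500.0, 0.8, 12.0)
print(round(r, 1), "mm/yr")  # -> 100.0 mm/yr
```

The method assumes chloride enters only via precipitation and is concentrated solely by evapotranspiration, which is why it is typically cross-checked against an independent estimate such as the WTF method.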
The interactions between water and rock determine the groundwater quality in the Pra Basin. The results underline that the groundwater is of good quality and can be used for drinking water and irrigation purposes. It was demonstrated that there is a large groundwater potential to meet the entire Pra Basin’s current and future water demands. The main recharge area was identified as the northern zone, while the southern zone is the discharge area. The weathering of silicate minerals plays the key role in the chemical evolution of the groundwater. The work presented here provides fundamental insights into the hydrochemistry of the Pra Basin and delivers data that are important for water managers’ informed decision-making in planning and allocating water resources for various purposes. A novel inverse modelling approach was used in this study to identify the different mineral compositions that determine the chemical evolution of groundwater in the Pra Basin. This modelling technique has the potential to simulate the composition of groundwater at the basin scale with large hydrochemical heterogeneity, using average water compositions to represent established spatial groupings of water chemistry.
Continental rifts are key geodynamic regions where the complex interplay of magmatism and faulting activity can be studied to understand the driving forces of extension and the formation of new divergent plate boundaries. Well-preserved rift morphology can provide a wealth of information on the growth, interaction, and linkage of normal-fault systems through time. If rift basins are preserved over longer geologic time periods, sedimentary archives generated during extensional processes may mirror tectonic and climatic influences on erosional and sedimentary processes that have varied over time. Rift basins are furthermore strategic areas for hydrocarbon and geothermal energy exploration, and they play a central role in species dispersal and evolution as well as providing or inhibiting hydrologic connectivity along basins at emerging plate boundaries.
The Cenozoic East African rift system (EARS) is one of the most important continental extension zones, reflecting a range of evolutionary stages from an early rift stage with isolated basins in Malawi to an advanced stage of continental extension in southern Afar. Consequently, the EARS is an ideal natural laboratory that lends itself to the study of different stages in the breakup of a continent. The volcanically and seismically active eastern branch of the EARS is characterized by multiple, laterally offset tectonic and magmatic segments where adjacent extensional basins facilitate crustal extension either across a broad deformation zone or via major transfer faulting. The Broadly Rifted Zone (BRZ) in southern Ethiopia is an integral part of the eastern branch of the EARS; in this region, rift segments of the southern Main Ethiopian Rift (sMER) and northern Kenyan Rift (nKR) propagate in opposite directions in an area with one of the earliest manifestations of volcanism and extensional tectonism in East Africa. The basin margins of the Chew-Bahir Basin and the Gofa Province, characterized by a semi-arid climate and largely uniform lithology, provide ideal conditions for studying the tectonic and geomorphologic features of this complex kinematic transfer zone, but more importantly, this area is suitable for characterizing and quantifying the overlap between the propagating structures of the sMER and nKR and the resulting deformation patterns of the BRZ transfer zones.
In this study, I have combined data from thermochronology, thermal modeling, morphometry, paleomagnetic analysis, geochronology, and geomorphological field observations with information from published studies to reconstruct the spatiotemporal relationship between volcanism and fault activity in the BRZ and quantify the deformation patterns of the overlapping rift segments. I present the following results: (1) new thermochronological data from the en-échelon basin margins and footwall blocks of the rift flanks and morphometric results verified in the field to link different phases of magmatism and faulting during extension and infer geomorphological landscape features related to the current tectonic interaction between the nKR and the sMER; (2) temporally constrained paleomagnetic data from the BRZ overlap zone between the Ethiopian and Kenyan rifts to quantitatively determine block rotation between the two segments. Combining the collected data, time-temperature histories of thermal modeling results from representative samples show well-defined deformation phases between 25–20 Ma, 15–9 Ma, and ~5 Ma to the present. Each deformation phase is characterized by the onset of rapid cooling (>2°C/Ma) of the crust associated with uplift or exhumation of the rift shoulder. After an initial, spatially very diffuse phase of extension, the rift has gradually evolved into a system of connected structures formed in an increasingly focused rift zone during the last 5 Ma. Regarding the morphometric analysis of the rift structures, it can be shown that normalized slope indices of the river courses, the spatial arrangement of knickpoints in the river longitudinal profiles of the footwall blocks, local relief values, and the average maximum slope values of the river profiles indicate a gradual increase in the extension rate from north (Sawula basin: mature) to south (Chew Bahir: young).
The complexity of the structural evolution of the BRZ overlap zone between the nKR and sMER is further emphasized by the documentation of crustal block rotations about a vertical axis. A comparison of the mean directions obtained for the Eo-Oligocene (Ds=352.6°, Is=-17.0°, N=18, α95=5.5°) and Miocene (Ds=2.9°, Is=0.9°, N=9, α95=12.4°) volcanics relative to the pole for stable South Africa, and with respect to the corresponding ages of the analyzed units, records a significant counterclockwise (CCW) rotation of ~11.1° ± 6.4° and an insignificant CCW rotation of ~3.2° ± 11.5°, respectively.
The Andes reflect Cenozoic deformation and uplift along the South American margin in the context of regional shortening associated with the interaction between the subducting Nazca plate and the overriding continental South American plate. Simultaneously, multiple levels of uplifted marine terraces constitute laterally continuous geomorphic features related to the accumulation of permanent forearc deformation in the coastal realm. However, the mechanisms responsible for permanent coastal uplift and the persistence of current/decadal deformation patterns over millennial timescales are still not fully understood. This dissertation presents a continental-scale database of last interglacial terrace elevations and uplift rates along the South American coast that provides the basis for an analysis of a variety of mechanisms that are possibly responsible for the accumulation of permanent coastal uplift. Regional-scale mapping and analysis of multiple, late Pleistocene terrace levels in central Chile furthermore provide valuable insights regarding the persistence of current seismic asperities, the role of upper-plate faulting, and the impact of bathymetric ridges on permanent forearc deformation.
The database of last interglacial terrace elevations reveals an almost continuous signal of background-uplift rates along the South American coast at ~0.22 mm/yr that is modified by various short- to long-wavelength changes. Spatial correlations with crustal faults and subducted bathymetric ridges suggest long-term deformation to be affected by these features, while the latitudinal variability of climate forcing factors has a profound impact on the generation and preservation of marine terraces. Systematic wavelength analyses and comparisons of the terrace-uplift rate signal with different tectonic parameters reveal short-wavelength deformation to result from crustal faulting, while intermediate- to long-wavelength deformation might indicate various extents of long-term seismotectonic segments on the megathrust, which are at least partially controlled by the subduction of bathymetric anomalies. The observed signal of background-uplift rate is likely accumulated by moderate earthquakes near the Moho, suggesting multiple, spatiotemporally distinct phases of uplift that manifest as a continuous uplift signal over millennial timescales.
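The wavelength separation underlying such an analysis can be sketched with a Fourier low-pass filter. The synthetic uplift-rate signal below is an invented stand-in for the database signal, used only to show the decomposition:

```python
import numpy as np

lat = np.linspace(-40.0, 0.0, 400)    # sampling points along latitude (degrees)
# Invented signal: ~0.22 mm/yr background plus long- and short-wavelength terms
uplift = 0.22 + 0.05*np.sin(2*np.pi*lat/20) + 0.03*np.sin(2*np.pi*lat/2)

spec = np.fft.rfft(uplift)
freq = np.fft.rfftfreq(uplift.size, d=lat[1] - lat[0])  # cycles per degree
spec[freq > 1/10] = 0                  # keep wavelengths longer than 10 degrees
long_wave = np.fft.irfft(spec, n=uplift.size)   # background + segment-scale part
short_wave = uplift - long_wave                 # crustal-fault-scale residual
```

Short-wavelength residuals can then be compared against mapped crustal faults, and the long-wavelength component against the extent of seismotectonic segments and subducted bathymetric anomalies.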
Various levels of late Pleistocene marine terraces in the 2015 M8.3 Illapel-earthquake area reveal a range of uplift rates between 0.1 and 0.6 mm/yr and indicate decreasing uplift rates since ~400 ka. These glacial-cycle uplift rates do not correlate with current or decadal estimates of coastal deformation, suggesting that seismic asperities are not persistent features on the megathrust that control the accumulation of permanent forearc deformation over long timescales of 10^5 years. Trench-parallel crustal normal faults modulate the characteristics of permanent forearc deformation; upper-plate extension likely represents a second-order phenomenon resulting from subduction erosion and subsequent underplating, which lead to regional tectonic uplift and local gravitational collapse of the forearc. In addition, variable activity with respect to the subduction of the Juan Fernández Ridge can be detected in the upper plate over the course of multiple interglacial periods, emphasizing the role of bathymetric anomalies in causing local increases in terrace-uplift rate. This thesis therefore provides new insights into the current understanding of subduction-zone processes and the dynamics of coastal forearc deformation, whose interacting forcing factors impact the topographic and geomorphic evolution of the western South American coast.
This paper studies the effect of public child care on mothers’ career trajectories. To this end, we combine county-level data on child care coverage with detailed individual-level information from the German social security records and exploit a set of German reforms that led to substantial temporal and spatial variation in child care coverage for children under the age of three. We use an event-study approach that investigates the labor market outcomes of mothers in the years around the birth of their first child. We thereby explore career trajectories in terms of both the quantity and the quality of employment. We find that public child care improves maternal labor supply in the years immediately following childbirth. However, the results on quality-related outcomes suggest that the effect of child care provision does not reach far beyond pure employment effects. These results do not change for mothers with different ‘career costs of children’.
The evolution of a galaxy is pivotally governed by its pattern of star formation over a given period of time. The star formation rate at any given time is strongly dependent on the amount of cold gas available in the galaxy. Accretion of pristine gas from the intergalactic medium (IGM) is thought to be one of the primary sources of star-forming gas. This gas first passes through the virial regions of the galaxy before reaching the interstellar medium (ISM), the hub of star formation. On the other hand, owing to the evolutionary course of young and massive stars, energetic winds are ejected from the ISM into the virial regions of the galaxy. A set of interlinked, complex astrophysical processes, arising from the concurrent presence of both infalling and outbound gas, plays out over a range of timescales in the halo region, or the circumgalactic medium (CGM), of a galaxy. It would not be incorrect to say that the CGM has a stronghold over the gas reserves of a galaxy and thus plays a background, yet rather pivotal, role in shaping many galactic properties, some of which are also readily observable. Observing the multi-phase CGM (via spectral-line ion measurements), however, remains a non-trivial effort even today. Low particle densities as well as the CGM’s vast spatial extent, coupled with likely deviations from a spherical distribution, mar the possibility of obtaining complete, unbiased, high-quality spectral information tracing the full extent of the gaseous halo. This often incomplete information leads to multiple inferences about the CGM properties, which in turn give rise to multiple contradicting models. In this regard, computer simulations offer a neat solution for testing and, subsequently, falsifying many of these existing CGM models.
Thanks to their controlled environments, simulations are able not only to effortlessly transcend several orders of magnitude in time and space, but also to get around many of the observational limitations and provide some unique views on many CGM properties. In this thesis, I focus on effectively using different computer simulations to understand the role of the CGM in various astrophysical contexts, namely, the effect of the Local Group (LG) environment, major merger events, and satellite galaxies. In Chapter 2, I discuss the approach used for modeling various phases of the simulated z = 0 LG CGM in the Hestia constrained simulations. Each of the three realizations contains a Milky Way (MW)–Andromeda (M31) galaxy pair, along with their corresponding sets of satellite galaxies, all embedded within the larger cosmological context. For characterizing the different temperature–density phases within the CGM, I model five tracer ions with Cloudy ionization modeling. The cold and cool–ionized CGM (H i and Si iii, respectively) in Hestia is very clumpy and distributed close to the galactic centers, while the warm-hot and hot CGM (O vi, O vii and O viii) is tenuous and volume-filling. On comparing the H i and Si iii column densities for the simulated M31 with observational measurements from the Project AMIGA survey and other low-z galaxies, I found that the Hestia galaxies produce less gas in the outer CGM than observed. My carefully designed observational bias model subsequently revealed the possibility that some MW gas clouds might be incorrectly associated with the M31 CGM in observations and hence may be partly responsible for the detected mismatch between simulated data and observations. In Chapter 3, I present results from four zoom–in, major merger, gas–rich simulations and the subsequent role of the gas originally situated in the CGM in influencing some of the galactic observables.
The progenitor parameters are selected such that the post–merger remnants are MW–mass galaxies. We generally see a very clear gas bridge joining the merging galaxies in the case of multiple-passage mergers, while such a bridge is mostly absent when a direct collision occurs. On the basis of particle–to–galaxy distance computations and tracer particle analysis, I found that about 33–48 percent of the cold gas contributing to the merger–induced star formation in the bridge originated from the CGM regions. In Chapter 4, I used a sample of 234 MW-mass, L* galaxies from the TNG50 cosmological simulations, with the aim of characterizing the impact of their global satellite populations on the extended cold CGM properties of their host L* halos. On the basis of halo mass and number of satellite galaxies (N_sats), I categorized the sample into low and high mass bins, and subsequently into bottom, inter, and top quartiles, respectively. After confirming that satellites indeed influence the extended cold halo gas density profiles of the host galaxies, I investigated the effects of different satellite population parameters on the host halo cold CGMs. My analysis showed that there is hardly any cold gas associated with the satellite population of the lowest mass halos. The stellar mass of the most massive satellite (M_*mms) impacted the cold gas in low mass bin halos the most, while N_sats (followed by M_*mms) was the most influential factor for the high mass halos. In any case, how easily cold gas was stripped off the most massive satellite did not play a significant role. The number of massive (stellar mass M* > 10^8 M_solar) satellites as well as the M_*mms associated with a galaxy are two of the most crucial parameters determining how much cold gas ultimately finds its way from the satellites to the host halo. Low-mass galaxies are found to be lacking on both these fronts, unlike their high-mass counterparts.
This work highlights some aspects of the complex gas physics that constitute the basic essence of a low-z CGM. My analysis proved the importance of a cosmological environment, local surroundings and merger history in defining some key observable properties of a galactic CGM. Furthermore, I found that different satellite properties were responsible for affecting the cold–dense CGM of the low and high-mass parent galaxies. Finally, the LG emerged as an exciting prospect for testing and pinning down several intricate details about the CGM.
Under Brazil's ex-president Bolsonaro, deforestation of the Amazon increased dramatically. An Austrian NGO filed a complaint to the Prosecutor of the International Criminal Court (ICC) against Bolsonaro in October 2021, accusing him of crimes against humanity against the backdrop of his involvement in environmental destruction. This paper deals with the question of whether this initiative constitutes a promising means of juridification to mitigate conflicts revolving around mass deforestation in Brazil. It thematizes attempts to juridify environmental destruction in international criminal law and examines the Climate Fund Case at the Brazilian Supreme Court. Finally, emerging problems and arguments in favour of starting preliminary examinations at the ICC against Bolsonaro are illuminated. This paper provides arguments as to why the initiative might be a promising undertaking, even though it is unlikely that Bolsonaro will be arrested.
This essay takes an Anglophone Cultural Studies approach to reflect on the interdependence among as well as the individual (implicit) impact of the elements constituting our (embodied) power structures. These are, e.g., bodily experience/s such as shame and fear, everyday and institutional discourses and practices, but also manifestations of differences and particularities that we transform into phenomena such as “norms”, “binary systems” and “binary organisations”. The analysis of seemingly cyclic “Othering processes” and patterns of violence shows how people who identify as trans*, inter*, or non-binary have to live through and embody epistemological, emotional, and/or physical violence. At the same time, the descriptions illustrate numberless potential forms of resistance and change.
Poor dietary quality is a major cause of morbidity, making the promotion of healthy eating a societal priority. Older adults are a critical target group for promoting healthy eating to enable healthy aging. One factor suggested to promote healthy eating is the willingness to try unfamiliar foods, referred to as food neophilia. This two-wave longitudinal study explored the stability of food neophilia and dietary quality and their prospective relationship over three years, analyzing self-reported data from N = 960 older adults (mean age at T1 = 63.4 years, range = 50–84) participating in the NutriAct Family Study (NFS) in a cross-lagged panel design. Dietary quality was rated using the NutriAct diet score, based on the current evidence for chronic disease prevention. Food neophilia was measured using the Variety Seeking Tendency Scale. The analyses revealed a high longitudinal stability of both constructs and a small positive cross-sectional correlation between them. Food neophilia had no prospective effect on dietary quality, whereas a very small positive prospective effect of dietary quality on food neophilia was found. Our findings give initial insights into the positive relation between food neophilia and a health-promoting diet in aging and underscore the need for more in-depth research, e.g., on the constructs’ developmental trajectories and potential critical windows of opportunity for promoting food neophilia.
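A cross-lagged panel design of this kind regresses each wave-2 outcome on both wave-1 variables, separating stability paths from cross-lagged paths. The sketch below uses ordinary least squares on synthetic data; all variable names, effect sizes, and noise levels are invented for illustration and are not the NFS estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 960  # sample size as in the study; the data themselves are synthetic
neophilia_t1 = rng.normal(size=n)
diet_t1 = 0.2*neophilia_t1 + rng.normal(size=n)          # small concurrent link
diet_t2 = 0.8*diet_t1 + rng.normal(scale=0.3, size=n)    # stable, no cross-lag
neophilia_t2 = 0.8*neophilia_t1 + 0.05*diet_t1 + rng.normal(scale=0.3, size=n)

def paths(y, x_same, x_cross):
    """OLS of a T2 outcome on both T1 variables -> (stability, cross-lag)."""
    X = np.column_stack([np.ones_like(y), x_same, x_cross])
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    return b[1], b[2]

stab_diet, cross_neo_to_diet = paths(diet_t2, diet_t1, neophilia_t1)
stab_neo, cross_diet_to_neo = paths(neophilia_t2, neophilia_t1, diet_t1)
```

With the effects wired in above, the recovered stability coefficients are large and the neophilia-to-diet cross-lag is near zero, mirroring the qualitative pattern the abstract reports (full cross-lagged panel models are typically fit with structural equation modeling rather than two separate regressions).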
Understanding hydrological processes is of fundamental importance for the Vietnamese national food security and the livelihood of the population in the Vietnamese Mekong Delta (VMD). As a consequence of sparse data in this region, however, hydrologic processes, such as the controlling processes of precipitation, the interaction between surface and groundwater, and groundwater dynamics, have not been thoroughly studied. The lack of this knowledge may negatively impact the long-term strategic planning for sustainable groundwater resources management and may result in insufficient groundwater recharge and freshwater scarcity. It is essential to develop useful methods for a better understanding of hydrological processes in such data-sparse regions. The goal of this dissertation is to advance methodologies that can improve the understanding of fundamental hydrological processes in the VMD, based on the analyses of stable water isotopes and monitoring data. The thesis mainly focuses on the controlling processes of precipitation, the mechanism of surface–groundwater interaction, and the groundwater dynamics. These processes have not been fully addressed in the VMD so far. The thesis is based on statistical analyses of the isotopic data of Global Network of Isotopes in Precipitation (GNIP), of meteorological and hydrological data from Vietnamese agencies, and of the stable water isotopes and monitoring data collected as part of this work.
First, the controlling processes of precipitation were quantified by the combination of trajectory analysis, multi-factor linear regression, and relative importance analysis (hereafter, a model‐based statistical approach). The validity of this approach is confirmed by similar, but mainly qualitative results obtained in other studies. The total variation in precipitation isotopes (δ18O and δ2H) can be better explained by multiple linear regression (up to 80%) than single-factor linear regression (30%). The relative importance analysis indicates that atmospheric moisture regimes control precipitation isotopes rather than local climatic conditions. The most crucial factor is the upstream rainfall along the trajectories of air mass movement. However, the influences of regional and local climatic factors vary in importance over the seasons. The developed model‐based statistical approach is a robust tool for the interpretation of precipitation isotopes and could also be applied to understand the controlling processes of precipitation in other regions.
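The gain from multi-factor over single-factor regression described above can be sketched as follows. The driver names, coefficients, and noise level are assumptions for illustration, not the study's fitted values:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
upstream_rain = rng.normal(size=n)   # rainfall along the air-mass trajectory
local_temp = rng.normal(size=n)      # local climatic factor
local_rain = rng.normal(size=n)
# Invented model: the regional moisture factor dominates d18O
d18O = -6 - 1.5*upstream_rain - 0.3*local_temp - 0.2*local_rain \
       + rng.normal(scale=0.5, size=n)

def r_squared(y, *factors):
    """Explained variance of a least-squares fit on the given factors."""
    X = np.column_stack([np.ones_like(y), *factors])
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return 1 - resid.var() / y.var()

r2_single = r_squared(d18O, local_temp)                            # one local factor
r2_multi = r_squared(d18O, upstream_rain, local_temp, local_rain)  # all drivers
```

A relative importance analysis then apportions the multi-factor R² among the correlated predictors (e.g., by averaging each factor's contribution over all orderings), rather than relying on a single regression's raw coefficients.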
Second, the concept of the two-component lumped-parameter model (LPM) in conjunction with stable water isotopes was applied to examine the surface–groundwater interaction in the VMD. A calibration framework was also set up to evaluate the behaviour, parameter identifiability, and uncertainties of two-component LPMs. The modelling results provided insights on the subsurface flow conditions, the recharge contributions, and the spatial variation of groundwater transit time. The subsurface flow conditions at the study site can be best represented by the linear-piston flow distribution. The contributions of the recharge sources change with distance to the river. The mean transit time (mTT) of riverbank infiltration increases with the length of the horizontal flow path and the decreasing gradient between river and groundwater. River water infiltrates horizontally mainly via the highly permeable aquifer, resulting in short mTTs (<40 weeks) for locations close to the river (<200 m). The vertical infiltration from precipitation takes place primarily via a low‐permeable overlying aquitard, resulting in considerably longer mTTs (>80 weeks). Notably, the transit time of precipitation infiltration is independent of the distance to the river. All these results are hydrologically plausible and could be quantified by the presented method for the first time. This study indicates that the highly complex mechanism of surface–groundwater interaction at riverbank infiltration systems can be conceptualized by exploiting two‐component LPMs. It is illustrated that the model concept can be used as a tool to investigate the hydrological functioning of mixing processes and the flow path of multiple water components in riverbank infiltration systems.
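A two-component LPM of this kind convolves each input signal (river and precipitation) with its own transit time distribution and mixes the results. The sketch below uses an exponential-piston-flow distribution; the isotope series, mean transit times, and mixing fraction are invented parameters, not the calibrated values from the study:

```python
import numpy as np

def epm_weights(mtt: float, eta: float, n: int) -> np.ndarray:
    """Exponential-piston-flow transit time distribution, discretised weekly.
    mtt = mean transit time (weeks); eta = total/exponential volume ratio."""
    t = np.arange(n, dtype=float)
    g = np.where(t >= mtt*(1 - 1/eta),
                 (eta/mtt) * np.exp(-eta*t/mtt + eta - 1), 0.0)
    return g / g.sum()

def convolve(signal: np.ndarray, g: np.ndarray) -> np.ndarray:
    """Output today = past input weighted by the transit time distribution."""
    return np.convolve(signal, g)[:len(signal)]

weeks = np.arange(520)
river = -7 + 2*np.sin(2*np.pi*weeks/52)   # seasonal river d18O (invented)
precip = -5 + 3*np.sin(2*np.pi*weeks/52)  # seasonal precipitation d18O (invented)
f_river = 0.7                             # assumed riverbank-infiltration share
gw = f_river*convolve(river, epm_weights(30, 1.5, 260)) \
     + (1 - f_river)*convolve(precip, epm_weights(90, 1.5, 260))
```

Calibration then adjusts the mixing fraction and the two transit time parameters until the simulated groundwater isotope series matches the observations; the damping of the seasonal amplitude in `gw` relative to the inputs is what encodes the mean transit time.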
Lastly, a suite of time series analysis approaches was applied to examine the groundwater dynamics in the VMD. The assessment was focused on the time-variant trends of groundwater levels (GWLs), the groundwater memory effect (representing the time that an aquifer holds water), and the hydraulic response between surface water and multi-layer alluvial aquifers. The analysis indicates that the aquifers act as low-pass filters to reduce the high‐frequency signals in the GWL variations, and limit the recharge to the deep groundwater. The groundwater abstraction has exceeded groundwater recharge between 1997 and 2017, leading to the decline of groundwater levels (0.01-0.55 m/year) in all considered aquifers in the VMD. The memory effect varies according to the geographical location, being shorter in shallow aquifers and flood-prone areas and longer in deep aquifers and coastal regions. Groundwater depth, season, and location primarily control the variation of the response time between the river and alluvial aquifers. These findings are important contributions to the hydrogeological literature of a little-known groundwater system in an alluvial setting. It is suggested that time series analysis can be used as an efficient tool to understand groundwater systems where resources are insufficient to develop a physical-based groundwater model.
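The groundwater memory effect can be quantified from the autocorrelation decay of the water-level series. The sketch below uses one common convention (the lag at which autocorrelation falls below 0.2) and synthetic AR(1) series rather than the VMD monitoring data:

```python
import numpy as np

def memory_effect(gwl: np.ndarray, threshold: float = 0.2) -> int:
    """Lag (in samples) at which the autocorrelation first falls below the
    threshold -- one common way to quantify how long an aquifer 'remembers'."""
    x = gwl - gwl.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    acf /= acf[0]
    below = np.nonzero(acf < threshold)[0]
    return int(below[0]) if below.size else len(acf)

rng = np.random.default_rng(2)
def ar1(phi: float, n: int = 3000) -> np.ndarray:
    """Toy AR(1) water-level series; phi controls persistence."""
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi*x[i-1] + rng.normal()
    return x

# Stronger persistence (e.g. a deep aquifer) yields a longer memory effect
deep, shallow = memory_effect(ar1(0.95)), memory_effect(ar1(0.5))
```

The same autocorrelation machinery, applied as a cross-correlation between river stage and groundwater level, yields the hydraulic response times discussed above.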
This doctoral thesis demonstrates that important aspects of hydrological processes can be understood by statistical analysis of stable water isotope and monitoring data. The approaches developed in this thesis can be easily transferred to regions in similar tropical environments, particularly those in alluvial settings. The results of the thesis can be used as a baseline for future isotope-based studies and contribute to the hydrogeological literature of little-known groundwater systems in the VMD.
River flooding is a constant peril for societies, causing direct economic losses in the order of $100 billion worldwide each year. Under global change, the prolonged concentration of people and assets in floodplains is accompanied by an emerging intensification of flood extremes due to anthropogenic global warming, ultimately exacerbating flood risk in many regions of the world.
Flood adaptation plays a key role in the mitigation of impacts, but poor understanding of vulnerability and its dynamics limits the validity of predominant risk assessment methods and impedes effective adaptation strategies. Therefore, this thesis investigates new methods for flood risk assessment that embrace the complexity of flood vulnerability, using the understudied commercial sector as an application example.
Despite its importance for accurate risk evaluation, flood loss modeling has been based on univariable and deterministic stage-damage functions for a long time. However, such simplistic methods only insufficiently describe the large variation in damage processes, which initiated the development of multivariable and probabilistic loss estimation techniques. The first study of this thesis developed flood loss models for companies that are based on emerging statistical and machine learning approaches (i.e., random forest, Bayesian network, Bayesian regression). In a benchmarking experiment on basis of object-level loss survey data, the study showed that all proposed models reproduced the heterogeneity in damage processes and outperformed conventional stage-damage functions with respect to predictive accuracy. Another advantage of the novel methods is that they convey probabilistic information in predictions, which communicates the large remaining uncertainties transparently and, hence, supports well-informed risk assessment.
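The benchmarking logic — a univariable stage-damage function against a multivariable loss model — can be sketched on synthetic data. The predictor names and coefficients are invented, and plain least squares stands in for the random forest, Bayesian network, and Bayesian regression models actually studied:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
depth = rng.uniform(0, 3, n)         # inundation depth (m)
precaution = rng.integers(0, 2, n)   # binary precaution indicator (invented)
# Invented damage process: precaution dampens the depth effect
rel_loss = np.clip(0.3*depth - 0.15*precaution*depth
                   + rng.normal(scale=0.05, size=n), 0, 1)

def rmse(y, *predictors):
    """In-sample RMSE of a least-squares fit on the given predictors."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return float(np.sqrt(np.mean(resid**2)))

rmse_stage = rmse(rel_loss, depth)                    # stage-damage function
rmse_multi = rmse(rel_loss, depth, precaution*depth)  # multivariable model
assert rmse_multi <= rmse_stage  # nested model cannot fit worse in-sample
```

A real benchmark would of course compare out-of-sample predictive accuracy (e.g., via cross-validation) and, for the probabilistic models, the calibration of the predictive distributions rather than point errors alone.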
Flood risk assessment combines vulnerability assessment (e.g., loss estimation) with hazard and exposure analyses. Although all of the three risk drivers interact and change over time, such dependencies and dynamics are usually not explicitly included in flood risk models. Recently, systemic risk assessment that dissolves the isolated consideration of risk drivers has gained traction, but the move to holistic risk assessment comes with limited thoroughness in terms of loss estimation and data limitations. In the second study, I augmented a socio-hydrological system dynamics model for companies in Dresden, Germany, with the multivariable Bayesian regression loss model from the first study. The additional process-detail and calibration data improved the loss estimation in the systemic risk assessment framework and contributed to more accurate and reliable simulations. The model uses Bayesian inference to quantify uncertainty and learn the model parameters from a combination of prior knowledge and diverse data.
The third study demonstrates the potential of the socio-hydrological flood risk model for continuous, long-term risk assessment and management. Using hydroclimatic and socioeconomic forcing data, I projected a wide range of possible risk trajectories until the end of the century, taking into account the adaptive behavior of companies. The study results underline the necessity of increased adaptation efforts to counteract the expected intensification of flood risk due to climate change. A sensitivity analysis of the effectiveness of different adaptation measures and strategies revealed that optimized adaptation has the potential to mitigate flood risk by up to 60%, particularly when structural and non-structural measures are combined. Additionally, the application shows that systemic risk assessment is capable of capturing adverse long-term feedback effects in the human-flood system, such as the levee effect.
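The kind of long-term human-flood feedback described above can be caricatured in a few lines of simulation. The loop below is a generic socio-hydrological sketch, not the Dresden model: flood events build social memory, memory drives adaptation that lowers vulnerability, and forgetting lets vulnerability creep back up. All rates and probabilities are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy socio-hydrological loop (illustrative assumptions throughout).
years = 80
vulnerability, memory = 0.5, 0.0
losses = []
for t in range(years):
    flood = rng.random() < 0.1                 # assumed 10% annual flood probability
    loss = vulnerability * (0.8 if flood else 0.0)
    losses.append(loss)
    memory = 0.9 * memory + (1.0 if flood else 0.0)  # social memory decays between floods
    # Adaptation: recent flood experience lowers vulnerability,
    # forgetting slowly raises it again (the seed of a levee effect).
    vulnerability = float(np.clip(vulnerability - 0.1 * memory + 0.01, 0.05, 1.0))

print(round(float(np.mean(losses)), 3))
```

Even this caricature reproduces the qualitative pattern the study targets: quiet periods erode preparedness, so long flood-free stretches are followed by disproportionately large losses.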
Overall, this thesis advances the representation of vulnerability in flood risk modeling by offering modeling solutions that embrace the complexity of human-flood interactions and quantify uncertainties consistently using probabilistic modeling. The studies show how scarce information in data and previous experiments can be integrated in the inference process to provide model predictions and simulations that are reliable and rich in information. Finally, the focus on the flood vulnerability of companies provides new insights into the heterogeneous damage processes and distinct flood coping of this sector.
Background: The worldwide prevalence of diabetes has been increasing in recent years, with a projected prevalence of 700 million patients by 2045, placing an economic burden on societies. Type 2 diabetes mellitus (T2DM), which accounts for more than 95% of all diabetes cases, is a multifactorial metabolic disorder characterized by insulin resistance leading to an imbalance between insulin requirements and supply. Overweight and obesity are the main risk factors for developing T2DM. Lifestyle modification, i.e., following a healthy diet and engaging in physical activity, is the primary successful treatment and prevention method for T2DM. However, many patients do not achieve the recommended levels of physical activity. Electrical muscle stimulation (EMS) is an increasingly popular training method that has become a focus of research in recent years. It involves the external application of an electric field to muscles, which can lead to muscle contraction. Positive effects of EMS training have been found in healthy individuals as well as in various patient groups. New EMS devices offer a wide range of mobile applications for whole-body electrical muscle stimulation (WB-EMS) training, e.g., the intensification of dynamic low-intensity endurance exercises through WB-EMS. This dissertation project aims to investigate whether WB-EMS is suitable for intensifying low-intensity dynamic exercises such as walking and Nordic walking.
Methods: Two independent studies were conducted. The first study aimed to investigate the reliability of exercise parameters during the 10-meter Incremental Shuttle Walk Test (10MISWT) using superimposed WB-EMS (research question 1, sub-question a) and the difference in exercise intensity compared to conventional walking (CON-W; research question 1, sub-question b). The second study aimed to compare differences in exercise parameters between superimposed WB-EMS (WB-EMS-W) and conventional walking (CON-W), as well as between superimposed WB-EMS (WB-EMS-NW) and conventional Nordic walking (CON-NW), on a treadmill (research question 2). Both studies were conducted with groups of healthy, moderately active men aged 35-70 years. During all measurements, the Easy Motion Skin® WB-EMS low-frequency stimulation device with adjustable intensities for eight muscle groups was used. The current intensity was individually adjusted for each participant at each trial to ensure safety, avoiding pain and muscle cramps. In study 1, thirteen individuals were included for each sub-question. A randomized cross-over design with three measurement appointments was used to avoid confounding factors such as delayed-onset muscle soreness. The 10MISWT was performed until the participants no longer met the test criteria, and five outcome measures were recorded: peak oxygen uptake (VO2peak), relative VO2peak (rel.VO2peak), maximum walk distance (MWD), blood lactate concentration, and the rate of perceived exertion (RPE).
Eleven participants were included in study 2. A randomized cross-over design with four measurement appointments was used to avoid confounding factors. A treadmill test protocol at constant velocity (6.5 km/h) was developed to compare exercise intensities. Oxygen uptake (VO2), relative VO2 (rel.VO2), blood lactate, and the RPE were used as outcome variables. Test-retest reliability between measurements was determined using a combination of absolute and relative measures of reliability. Outcome measures in study 2 were analyzed using multifactorial analyses of variance.
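Test-retest reliability in designs like study 1 is commonly summarized with an intraclass correlation coefficient alongside absolute measures. As an illustration of one widely used relative measure, the sketch below computes ICC(3,1) (two-way mixed model, consistency) for hypothetical test-retest data; both the data and the choice of ICC form are assumptions, not taken from the dissertation.

```python
import numpy as np

def icc_3_1(data):
    """ICC(3,1), two-way mixed, consistency, for an (n_subjects, k_trials) array."""
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)          # per-subject means
    col_means = data.mean(axis=0)          # per-trial means
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((data - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Hypothetical rel.VO2peak values [ml/min/kg] for 13 participants, 2 trials:
# a stable subject-specific value plus independent trial-to-trial noise.
rng = np.random.default_rng(2)
true_vals = rng.normal(40, 5, 13)
trials = np.column_stack([true_vals + rng.normal(0, 1, 13),
                          true_vals + rng.normal(0, 1, 13)])
icc = icc_3_1(trials)
print(round(float(icc), 2))
```

With between-subject variability much larger than trial-to-trial noise, the ICC approaches 1, which is the pattern behind a "good reliability" verdict; absolute measures (e.g., the standard error of measurement) would complement this relative view.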
Results: The reliability analysis showed good reliability for VO2peak, rel.VO2peak, MWD and RPE, with no statistically significant difference for WB-EMS-W during the 10MISWT. However, no differences compared to conventional walking were found in the outcome variables. The analysis of the treadmill tests showed significant effects of the factors CON/WB-EMS and W/NW for the outcome variables VO2, rel.VO2 and lactate, with both factors leading to higher results. However, the difference in VO2 and relative VO2 lies within the range of biological variability of ±12%. The factor combination EMS∗W/NW was statistically non-significant for all three variables. WB-EMS resulted in higher RPE values; the RPE differences for W/NW and EMS∗W/NW were not significant.
Discussion: The present project found good reliability for measuring VO2peak, rel.VO2peak, MWD and RPE during the 10MISWT with WB-EMS-W, confirming prior research on the test. In healthy, moderately active men, the test appears to be limited technically rather than physiologically. However, it is unsuitable for investigating differences in exercise intensity between WB-EMS-W and CON-W due to the different perception of current intensity during exercise and at rest. For the second part of the project, a treadmill test with constant walking speed was therefore conducted, in which the individually maximally tolerable current intensity was adjusted. The treadmill test showed a significant increase in metabolic demands during WB-EMS-W and WB-EMS-NW, reflected in increased VO2 and blood lactate concentrations. However, the clinical relevance of these findings remains debatable. The study also found that WB-EMS-superimposed exercises are perceived as more strenuous than conventional exercise. While some comparable studies report higher VO2 values, our results are in line with those of other studies using the same stimulation frequency. Due to the minor clinical relevance, the use of WB-EMS as an exercise intensification tool during walking and Nordic walking is limited; the high device costs should also be considered. Habituation to WB-EMS could increase the tolerance of current intensity and thus VO2, and could make it a meaningful method in the treatment of T2DM. Recent figures show that WB-EMS is used by obese people to achieve health and weight goals. This supposed benefit should be further investigated scientifically.
EMOOCs 2023
(2023)
From June 14 to June 16, 2023, Hasso Plattner Institute, Potsdam, hosted the eighth European MOOC Stakeholder Summit (EMOOCs 2023).
The pandemic is fortunately over. It has once again shown how important digital education is. How well prepared a country was could be seen in its schools, universities, and companies. In different countries, the problems manifested themselves differently, and the measures and approaches to solving them varied accordingly. Digital education, whether micro-credentials, MOOCs, blended learning formats, or other e-learning tools, received a major boost.
EMOOCs 2023 focuses on the effects of this emergency situation. How has it affected the development and delivery of MOOCs and other e-learning offerings all over Europe? Which projects can serve as models for successful digital learning and teaching? Which roles can MOOCs and micro-credentials play in the current business transformation? Is there a return to the routines we knew from pre-Corona times? Or have many things become firmly established in the meantime, e.g., remote work, hybrid conferences, etc.?
Furthermore, EMOOCs 2023 takes a closer look at the development and formalization of digital learning. Micro-credentials are just the starting point; further steps in this direction would be complete online study programs or even full online universities.
Another main topic is the networking of learning offers and the standardization of formats and metadata. Examples of fruitful cooperations are the MOOChub, the European MOOC Consortium, and the Common Micro-Credential Framework.
These lessons, derived from practical experience and research, are explored at EMOOCs 2023 in four tracks and additional workshops covering various aspects of this field. In this publication, we present papers from the conference's Research & Experience Track, Business Track, and International Track.
Purpose – Design thinking has become an omnipresent process to foster innovativeness in various fields. Due to its popularity in both practice and theory, the number of publications has been growing rapidly. The authors aim to develop a research framework that reflects the current state of research and allows for the identification of research gaps.
Design/methodology/approach – The authors conduct a systematic literature review based on 164 scholarly articles on design thinking.
Findings – This study proposes a framework, which identifies individual and organizational context factors, the stages of a typical design thinking process with its underlying principles and tools, and the individual as well as organizational outcomes of a design thinking project.
Originality/value – Whereas previous reviews focused on particular aspects of design thinking, such as its characteristics, the organizational culture as a context factor, or its role in new product development, the authors provide a holistic overview of the current state of research.