Sharing marketplaces have emerged as the new Holy Grail of value creation by enabling exchanges between strangers. Identity disclosure, encouraged by platforms, cuts both ways: while it builds pre-transaction confidence, it is suspected of backfiring on the information senders through its discriminatory potential. This study employs a discrete choice experiment to explore the role of names as signifiers of attributes prone to discrimination and the importance of accompanying cues in peer choices of a ridesharing offer. We quantify users' preferences for quality signals in monetary terms and find a comparative disadvantage of male names indicating Middle Eastern descent for both drivers and co-travelers, which translates into a lower willingness to accept and to pay for an offer. Market simulations confirm the robustness of these findings. Further, we find that females are choosier and include more signifiers of involuntary personal attributes in their decision-making. Price discounts and positive information only partly compensate for the initial disadvantage, and identity concealment is perceived negatively.
One for all, all for one
(2022)
We propose a conceptual model of acceptance of contact tracing apps based on the privacy calculus perspective. Moving beyond the duality of personal benefits and privacy risks, we theorize that users hold social considerations (i.e., social benefits and risks) that underlie their acceptance decisions. To test our propositions, we chose the context of COVID-19 contact tracing apps and conducted a qualitative pre-study and longitudinal quantitative main study with 589 participants from Germany and Switzerland. Our findings confirm the prominence of individual privacy calculus in explaining intention to use and actual behavior. While privacy risks are a significant determinant of intention to use, social risks (operationalized as fear of mass surveillance) have a notably stronger impact. Our mediation analysis suggests that social risks represent the underlying mechanism behind the observed negative link between individual privacy risks and contact tracing apps' acceptance. Furthermore, we find a substantial intention–behavior gap.
Simultaneous Barcode Sequencing of Diverse Museum Collection Specimens Using a Mixed RNA Bait Set
(2022)
A growing number of publications presenting results from sequencing natural history collection specimens reflects the importance of DNA sequence information from such samples. Ancient DNA extraction and library preparation methods, combined with target gene capture, are a way of unlocking archival DNA, including from formalin-fixed wet-collection material. Here we report on an experiment in which we used an RNA bait set containing baits from a wide taxonomic range of species for DNA hybridisation capture of nuclear and mitochondrial targets in natural history collection specimens. The bait set used consists of 2,492 mitochondrial and 530 nuclear RNA baits and comprises specific barcode loci of diverse animal groups, including both invertebrates and vertebrates. The baits enabled us to capture DNA sequence information for target barcode loci from 84% of the 37 samples tested, with nuclear markers being captured more frequently and their consensus sequences being more complete than those of mitochondrial markers. Samples from dry material had a higher success rate than wet-collection specimens, although target sequence information could still be captured from 50% of formalin-fixed samples. Our study illustrates how efforts to obtain barcode sequence information from natural history collection specimens can be combined and offers a way of implementing barcoding inventories of scientific collection material.
The highly structured nature of the educational sector demands effective policy mechanisms close to the needs of the field. That is why evidence-based policy making, endorsed by the European Commission under Erasmus+ Key Action 3, aims to align the domains of policy and practice. Against this background, this article addresses two issues: first, that there is a vertical gap in the translation of higher-level policies into local strategies and regulations; second, that there is a horizontal gap between educational domains regarding the policy awareness of individual players. This was analyzed in quantitative and qualitative studies with domain experts from the fields of virtual mobility and teacher training. From our findings, we argue that the combination of both gaps puts the academic bridge from secondary to tertiary education at risk, including the associated knowledge proficiency levels. We discuss the role of digitalization in the academic bridge by asking: what value do the involved stakeholders expect from educational policies? As a theoretical basis, we rely on the model of value co-creation for and by stakeholders. We describe the instruments used, along with the obtained results and proposed benefits. Moreover, we reflect on the methodology applied and derive recommendations for future academic bridge policies.
The purpose of this study was to examine the test-retest reliability as well as the convergent and discriminative validity of a new taekwondo-specific change-of-direction (COD) speed test with striking techniques (TST) in elite taekwondo athletes. Twenty (10 male and 10 female) elite (competing at national level) and top-elite (competing at national and international level) taekwondo athletes with an average of 8.9 ± 1.3 years of systematic taekwondo training participated in this study. During the two-week test-retest period, various generic performance tests measuring COD speed, balance, speed, and jump performance were carried out in the first week and repeated as a retest in the second week. Three TST trials were conducted with each athlete, and the best trial was used for further analyses. The relevant performance measure derived from the TST was the time with striking penalty (TST-TSP). TST-TSP performances amounted to 10.57 ± 1.08 s for males and 11.74 ± 1.34 s for females. The reliability analysis of TST performance was conducted after logarithmic transformation in order to address the problem of heteroscedasticity. In both groups, the TST demonstrated high relative test-retest reliability (intraclass correlation coefficient 0.80, 90% compatibility limits 0.47 to 0.93). For absolute reliability, the TST's typical error of measurement (TEM) and 90% compatibility limits were 4.6% (3.4 to 7.7) for males and 5.4% (3.9 to 9.0) for females. Owing to the homogeneous sample of taekwondo athletes, the TST's TEM exceeded the usual smallest important change (SIC), defined as a 0.2 effect size, in both groups.
The new test showed mostly very large correlations with linear sprint speed (r = 0.71 to 0.85) and dynamic balance (r = −0.71 and −0.74), large correlations with COD speed (r = 0.57 to 0.60) and vertical jump performance (r = −0.50 to −0.65), and moderate correlations with horizontal jump performance (r = −0.34 to −0.45) and static balance (r = −0.39 to −0.44). Top-elite athletes showed better TST performances than their elite counterparts. Receiver operating characteristic analysis indicated that the TST effectively discriminated between top-elite and elite taekwondo athletes. In conclusion, the TST is a valid and sensitive test for evaluating COD speed with taekwondo-specific skills, and it is reliable when considering the ICC and TEM. Although the TST's usefulness for detecting small performance changes in the present population is limited, it can detect moderate changes in taekwondo-specific COD speed.
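As a rough illustration of the reliability statistics reported above — a log-transformed typical error expressed as a coefficient of variation, and an intraclass correlation coefficient — the following sketch computes both on invented test-retest times. The data values and the choice of ICC(3,1) are assumptions for illustration only, not the study's measurements.

```python
import math
from statistics import stdev

# Invented test-retest times (s) for eight athletes; NOT the study's data.
test = [10.2, 11.1, 9.8, 10.9, 11.5, 10.4, 10.0, 11.2]
retest = [10.4, 10.9, 9.9, 11.2, 11.3, 10.6, 10.1, 11.0]

# Log-transform to handle heteroscedasticity, as described in the abstract.
lt = [math.log(v) for v in test]
lr = [math.log(v) for v in retest]
diffs = [b - a for a, b in zip(lt, lr)]

# Typical error of measurement (TEM) on the log scale: SD of diffs / sqrt(2),
# back-transformed to a coefficient of variation in percent.
tem_log = stdev(diffs) / math.sqrt(2)
tem_cv_percent = 100 * (math.exp(tem_log) - 1)

# Two-way ICC(3,1) from a standard ANOVA decomposition of the log scores.
n, k = len(lt), 2
rows = list(zip(lt, lr))
grand = sum(map(sum, rows)) / (n * k)
row_means = [sum(r) / k for r in rows]
col_means = [sum(c) / n for c in zip(*rows)]
ss_rows = k * sum((m - grand) ** 2 for m in row_means)
ss_cols = n * sum((m - grand) ** 2 for m in col_means)
ss_total = sum((v - grand) ** 2 for r in rows for v in r)
ms_rows = ss_rows / (n - 1)
ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
icc_3_1 = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
```

With these toy data, the TEM comes out near 1–2% and the ICC well above 0.9, illustrating the kind of relative vs absolute reliability distinction the study reports.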
This paper sheds new light on the role of communication for cartel formation. Using machine learning to evaluate free-form chat communication among firms in a laboratory experiment, we identify typical communication patterns for both explicit cartel formation and indirect attempts to collude tacitly. We document that firms are less likely to communicate explicitly about price fixing and more likely to use indirect messages when sanctioning institutions are present. This effect of sanctions on communication reinforces the direct cartel-deterring effect of sanctions as collusion is more difficult to reach and sustain without an explicit agreement. Indirect messages have no, or even a negative, effect on prices.
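The abstract does not specify the machine learning pipeline used to evaluate the chat messages, so the following is only a generic text-classification sketch: a hand-rolled multinomial naive Bayes trained on invented chat snippets, sorting messages into "explicit" price-fixing talk versus "indirect" signalling.

```python
import math
from collections import Counter, defaultdict

# Toy labelled chat messages (invented for illustration; not the study's data).
train = [
    ("let us both set the price to 10", "explicit"),
    ("we should agree on a fixed price", "explicit"),
    ("i propose we both charge 9 next round", "explicit"),
    ("high prices are good for everyone", "indirect"),
    ("the market rewards patience", "indirect"),
    ("no need to undercut each other", "indirect"),
]

def tokenize(text):
    return text.lower().split()

class NaiveBayes:
    """Multinomial naive Bayes with add-one smoothing."""

    def fit(self, data):
        self.class_counts = Counter(label for _, label in data)
        self.word_counts = defaultdict(Counter)
        self.vocab = set()
        for text, label in data:
            for w in tokenize(text):
                self.word_counts[label][w] += 1
                self.vocab.add(w)
        self.total = sum(self.class_counts.values())
        return self

    def predict(self, text):
        best, best_lp = None, -math.inf
        for label, count in self.class_counts.items():
            # Log prior plus smoothed log likelihood of each token.
            lp = math.log(count / self.total)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in tokenize(text):
                lp += math.log((self.word_counts[label][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = label, lp
        return best

clf = NaiveBayes().fit(train)
```

With this toy training set, `clf.predict("let us agree to set the price to 12")` lands in the explicit class; a real analysis would of course need a much larger labelled corpus and a richer feature set.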
Sea level rise and coastal erosion have inundated large areas of Arctic permafrost. Submergence by warm and saline waters increases the rate of inundated permafrost thaw compared to sub-aerial thawing on land. Studying the contact between the unfrozen and frozen sediments below the seabed, also known as the ice-bearing permafrost table (IBPT), provides valuable information to understand the evolution of sub-aquatic permafrost, which is key to improving and understanding coastal erosion prediction models and potential greenhouse gas emissions. In this study, we use data from 2D electrical resistivity tomography (ERT) collected in the nearshore coastal zone of two Arctic regions that differ in their environmental conditions (e.g., seawater depth and resistivity) to image and study the subsea permafrost. The inversion of 2D ERT data sets is commonly performed using deterministic approaches that favor smoothed solutions, which are typically interpreted using a user-specified resistivity threshold to identify the IBPT position. In contrast, to target the IBPT position directly during inversion, we use a layer-based model parameterization and a global optimization approach to invert our ERT data. This approach results in ensembles of layered 2D model solutions, which we use to identify the IBPT and estimate the resistivity of the unfrozen and frozen sediments, including estimates of uncertainties. Additionally, we globally invert 1D synthetic resistivity data and perform sensitivity analyses to study, in a simpler way, the correlations and influences of our model parameters. The set of methods provided in this study may help to further exploit ERT data collected in such permafrost environments as well as for the design of future field experiments.
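As a loose illustration of the layer-based, global-optimization idea described above, the sketch below fits a two-layer model — unfrozen resistivity, frozen resistivity, and IBPT depth — to synthetic data using a crude random global search. The smooth mixing "forward model" and all parameter values are stand-ins; a real application would use a proper ERT forward operator and a dedicated global optimizer.

```python
import math
import random

# Toy forward model: apparent resistivity at pseudo-depth z for a two-layer
# model -- unfrozen sediments (rho1) above the IBPT at depth d, frozen
# sediments (rho2) below, blended smoothly. A stand-in, not the study's operator.
def forward(z, rho1, rho2, d, width=0.5):
    w = 1.0 / (1.0 + math.exp(-(z - d) / width))  # ~0 above the IBPT, ~1 below
    return (1 - w) * rho1 + w * rho2

# Synthetic "observed" data from a known model: rho1=30, rho2=300 ohm-m, d=4 m.
depths = [i * 0.5 for i in range(1, 21)]
obs = [forward(z, 30.0, 300.0, 4.0) for z in depths]

def misfit(params):
    return sum((forward(z, *params) - o) ** 2 for z, o in zip(depths, obs))

# Crude bounded random search, standing in for the paper's global optimizer;
# the bounds encode prior knowledge about the site.
random.seed(0)
best, best_m = None, float("inf")
for _ in range(20000):
    cand = (random.uniform(1, 100),     # rho1 (ohm-m)
            random.uniform(100, 1000),  # rho2 (ohm-m)
            random.uniform(1, 9))       # IBPT depth d (m)
    m = misfit(cand)
    if m < best_m:
        best, best_m = cand, m

rho1_est, rho2_est, d_est = best
```

Repeating the search many times (as ensemble-based global inversion does) would yield a distribution of layered solutions from which uncertainties on the IBPT depth and resistivities can be read off.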
Objective: To examine the effect of plyometric jump training on skeletal muscle hypertrophy in healthy individuals.
Methods: A systematic literature search was conducted in the databases PubMed, SPORTDiscus, Web of Science, and Cochrane Library up to September 2021.
Results: Fifteen studies met the inclusion criteria. The main overall finding (44 effect sizes across 15 clusters median = 2, range = 1–15 effects per cluster) indicated that plyometric jump training had small to moderate effects [standardised mean difference (SMD) = 0.47 (95% CIs = 0.23–0.71); p < 0.001] on skeletal muscle hypertrophy. Subgroup analyses for training experience revealed trivial to large effects in non-athletes [SMD = 0.55 (95% CIs = 0.18–0.93); p = 0.007] and trivial to moderate effects in athletes [SMD = 0.33 (95% CIs = 0.16–0.51); p = 0.001]. Regarding muscle groups, results showed moderate effects for the knee extensors [SMD = 0.72 (95% CIs = 0.66–0.78), p < 0.001] and equivocal effects for the plantar flexors [SMD = 0.65 (95% CIs = −0.25–1.55); p = 0.143]. As to the assessment methods of skeletal muscle hypertrophy, findings indicated trivial to small effects for prediction equations [SMD = 0.29 (95% CIs = 0.16–0.42); p < 0.001] and moderate-to-large effects for ultrasound imaging [SMD = 0.74 (95% CIs = 0.59–0.89); p < 0.001]. Meta-regression analysis indicated that the weekly session frequency moderates the effect of plyometric jump training on skeletal muscle hypertrophy, with a higher weekly session frequency inducing larger hypertrophic gains [β = 0.3233 (95% CIs = 0.2041–0.4425); p < 0.001]. We found no clear evidence that age, sex, total training period, single session duration, or the number of jumps per week moderate the effect of plyometric jump training on skeletal muscle hypertrophy [β = −0.0133 to 0.0433 (95% CIs = −0.0387 to 0.1215); p = 0.101–0.751].
Conclusion: Plyometric jump training can induce skeletal muscle hypertrophy, regardless of age and sex. There is evidence for relatively larger effects in non-athletes compared with athletes. Further, the weekly session frequency seems to moderate the effect of plyometric jump training on skeletal muscle hypertrophy, whereby more frequent weekly plyometric jump training sessions elicit larger hypertrophic adaptations.
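The standardised mean differences with confidence intervals reported above can be illustrated with a small helper. The formulae are the standard Hedges' g with a normal-approximation CI; the input numbers are invented, not taken from the meta-analysis.

```python
import math

# Hedges' g with a 95% normal-approximation CI -- the standardised mean
# difference (SMD) metric used in the results above. All inputs are invented.
def hedges_g(m1, sd1, n1, m2, sd2, n2):
    # Pooled standard deviation of the two groups.
    sp = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                        # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)       # small-sample correction factor
    g = j * d
    se = math.sqrt((n1 + n2) / (n1 * n2) + g ** 2 / (2 * (n1 + n2)))
    return g, (g - 1.96 * se, g + 1.96 * se)

# Invented example: muscle thickness (cm) after training vs control.
g, (lo, hi) = hedges_g(2.45, 0.30, 12, 2.30, 0.28, 12)
```

Here g comes out near 0.50 with a CI crossing zero — a small-to-moderate but imprecise effect, the situation meta-analytic pooling across studies is designed to resolve.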
The main goal of this dissertation is to experimentally investigate how focus is realised, perceived, and processed by native Turkish speakers, independent of preconceived notions of positional restrictions. Crucially, there are various issues and scientific debates surrounding focus in the Turkish language in the existing literature (chapter 1). It is argued in this dissertation that two factors led to the stagnant literature on focus in Turkish: the lack of clearly defined, modern understandings of information structure and its fundamental notion of focus, and the ongoing and ill-defined debate surrounding the question of whether there is an immediately preverbal focus position in Turkish. These issues gave rise to specific research questions addressed across this dissertation. Specifically, we were interested in how the focus dimensions such as focus size (comparing narrow constituent and broad sentence focus), focus target (comparing narrow subject and narrow object focus), and focus type (comparing new-information and contrastive focus) affect Turkish focus realisation and, in turn, focus comprehension when speakers are provided syntactic freedom to position focus as they see fit.
To provide data on these core goals, we presented three behavioural experiments based on a systematic framework of information structure and its notions (chapter 2): (i) a production task with trigger wh-questions and contextual animations manipulated to elicit the focus dimensions of interest (chapter 3), (ii) a timed acceptability judgment task in listening to the recorded answers in our production task (chapter 4), and (iii) a self-paced reading task to gather on-line processing data (chapter 5).
Based on the results of the conducted experiments, multiple conclusions are made in this dissertation (chapter 6). Firstly, this dissertation demonstrated empirically that there is no focus position in Turkish, neither in the sense of a strict focus position language nor as a focally loaded position facilitating focus perception and/or processing. While focus is, in fact, syntactically variable in the Turkish preverbal area, this is a consequence of movement triggered by other IS aspects like topicalisation and backgrounding, and the observational markedness of narrow subject focus compared to narrow object focus. As for focus type in Turkish, this dimension is not associated with word order in production, perception, or processing. Significant acoustic correlates of focus size (broad sentence focus vs narrow constituent focus) and focus target (narrow subject focus vs narrow object focus) were observed in fundamental frequency and intensity, representing focal boost, (postfocal) deaccentuation, and the presence or absence of a phrase-final rise in the prenucleus, while the perceivability of these effects remains to be investigated. In contrast, no acoustic correlates of focus type in simple, three-word transitive structures were observed, with focus types being interchangeable in mismatched question-answer pairs. Overall, the findings of this dissertation highlight the need for experimental investigations regarding focus in Turkish, as theoretical predictions do not necessarily align with experimental data. As such, the fallacy of implying causation from correlation should be strictly kept in mind, especially when constructions coincide with canonical structures, such as the immediately preverbal position in narrow object foci. Finally, numerous open questions remain to be explored, especially as focus and word order in Turkish are multifaceted. 
As shown, givenness is a confounding factor when investigating focus types, while thematic role assignment potentially confounds word order preferences. Further research based on established, modern information structure frameworks is needed, with chapter 5 concluding with specific recommendations for such future research.
The Ice, Cloud, and Land Elevation Satellite-2 (ICESat-2) mission, with its land and vegetation height data product (ATL08), and the Global Ecosystem Dynamics Investigation (GEDI) mission, with its terrain elevation and height metrics data product (GEDI Level 2A), have great potential to map ground and canopy heights globally. Canopy height is a key factor in estimating above-ground biomass and its seasonal changes; these satellite missions can also improve estimates of above-ground carbon stocks. This study presents a novel Sparse Vegetation Detection Algorithm (SVDA) which uses ICESat-2 geolocated photon data (ATL03) to map tree and vegetation heights in a sparsely vegetated savanna ecosystem. The SVDA consists of three main steps: first, noise photons are filtered using the signal confidence flag from the ATL03 data and local point statistics; second, ground photons are classified based on photon height percentiles; third, tree and grass photons are classified based on the number of neighbors. We validated tree heights against field measurements (n = 55), finding root-mean-square errors of 1.82 m for the SVDA, 1.33 m for GEDI Level 2A, and 5.59 m for ATL08. Our results indicate that the SVDA is effective in identifying canopy photons in savanna ecosystems, where ATL08 performs poorly. We further identify seasonal vegetation height changes, with an emphasis on vegetation below 3 m; widespread height changes in this class over two wet-dry cycles show maximum seasonal changes of 1 m, possibly related to seasonal differences in grass height. Our study shows the difficulties of vegetation measurements in savanna ecosystems but provides the first estimates of seasonal biomass changes.
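The three SVDA steps described above can be caricatured on synthetic photon data. The window size, the percentile threshold, and the fixed height cut standing in for the paper's neighbour-count criterion are all illustrative assumptions.

```python
import random

# Synthetic photon cloud: (along-track x, height h). Roughly 70% ground
# returns near 0 m, 20% grass returns below 1 m, 10% canopy returns at 3-8 m.
random.seed(1)
photons = []
for _ in range(500):
    x = random.uniform(0.0, 100.0)
    r = random.random()
    if r < 0.7:
        photons.append((x, random.gauss(0.0, 0.1)))    # ground
    elif r < 0.9:
        photons.append((x, random.uniform(0.3, 1.0)))  # grass
    else:
        photons.append((x, random.uniform(3.0, 8.0)))  # tree canopy

# Step 1 (noise filtering via the confidence flag and local point statistics)
# is omitted: this synthetic cloud contains no noise photons.

def percentile(vals, p):
    s = sorted(vals)
    return s[min(len(s) - 1, int(p / 100 * len(s)))]

# Step 2: ground photons = those below a local height percentile.
window, ground_pct = 10.0, 30
ground, other = [], []
for x, h in photons:
    local = [hh for xx, hh in photons if abs(xx - x) <= window]
    (ground if h <= percentile(local, ground_pct) else other).append((x, h))

# Step 3: split the remainder into tree vs grass; a fixed height cut stands
# in for the paper's neighbour-count criterion.
tree = [(x, h) for x, h in other if h > 3.0]
grass = [(x, h) for x, h in other if h <= 3.0]
```

Subtracting the local ground height from each canopy photon would then give the vegetation heights whose seasonal changes the study tracks.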
The starting point of this article is the occurrence of determiner-less and bare que relative complementizers like (en) que, ‘(in) that’, instead of (en) el que, ‘(in) which’, in Yucatecan Spanish (southeast Mexico). While reference grammars treat complementizers with a determiner as the standard option, previous diachronic research has shown that determiner-less complementizers actually predate relative complementizers with a determiner. Additionally, Yucatecan Spanish has been in long-standing contact with Yucatec Maya. Relative complementation in Yucatec Maya differs from that in Spanish (at least) in that the non-complex complementizer tu’ux (‘where’) is generally the only option for locative complementation. The paper explores monolingual and bilingual data from Yucatecan Spanish to discuss the question of whether the determiner-less and bare que relative complementizers in our data constitute a historic remnant or a dialectal recast, possibly (but not necessarily) due to language contact. Although our pilot study may not answer these far-reaching questions, it does reveal two separate but intertwined developments: (i) a generally increased rate of bare que relative complementation, across both monolingual speakers of Spanish and Spanish–Maya bilinguals, compared to other Spanish varieties, and (ii) a preference for donde at the cost of other locative complementizer constructions in the bilingual group. Our analysis thus reveals intriguing differences between the complementizer preferences of monolingual and bilingual speakers, suggesting that different variational patterns caused by different (socio-)linguistic factors can co-develop in parallel in one and the same region.
The COVID-19 pandemic created the largest experiment in working from home. We study how persistent telework may change energy and transport consumption and costs in Germany in order to assess the distributional and environmental implications if working from home sticks. Based on data from the German Microcensus and available classifications of working-from-home feasibility for different occupations, we calculate the change in energy consumption and travel to work when 15% of employees work full time from home. Our findings suggest that telework translates into an annual increase in heating energy expenditure of 110 euros per worker and a decrease in transport expenditure of 840 euros per worker. All income groups would gain from telework, but high-income workers gain twice as much as low-income workers. The value of time saved is between 1.3 and 6 times greater than the savings from reduced travel costs, and almost 9 times higher for high-income workers than for low-income workers. The direct effect on CO₂ emissions due to reduced car commuting amounts to 4.5 million tons of CO₂, representing around 3 percent of carbon emissions in the transport sector.
Over the past decades, there has been a growing interest in ‘extreme events’ owing to the increasing threats that climate-related extremes such as floods, heatwaves, and droughts pose to society. While extreme events have diverse definitions across disciplines, ranging from earth science to neuroscience, they are mainly characterized as dynamic occurrences within a limited time frame that impede the normal functioning of a system. Although extreme events are rare, various hydro-meteorological and physiological time series (e.g., river flows, temperatures, heartbeat intervals) show that they may exhibit recurrent behavior, i.e., they do not end the lifetime of the system. The aim of this thesis is to develop sophisticated methods to study various properties of extreme events.
One of the main challenges in analyzing such extreme event-like time series is that they have large temporal gaps due to the paucity of observations of extreme events. As a result, existing time series analysis tools are usually not helpful for decoding the underlying information. I use the edit distance (ED) method to analyze extreme event-like time series in their unaltered form. ED is a specific distance metric, mainly designed to measure the similarity/dissimilarity between point-process-like data. I combine ED with recurrence plot techniques to identify the recurrence properties of flood events in the Mississippi River in the United States. I also use recurrence quantification analysis to show the deterministic properties and serial dependency of flood events.
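An edit distance of the kind described above can be sketched as a small dynamic program in which events may be shifted (at a cost proportional to the time offset), inserted, or deleted. The cost parameter `lam` and the cap of 2 follow the spirit of Victor–Purpura-style spike-train metrics and are illustrative, not the thesis's exact metric.

```python
# Minimal edit distance between two event-time sequences (point-process-like
# data): shift an event (cost lam * |dt|, capped at 2), or insert/delete
# an event (cost 1 each). Illustrative parameters, not the thesis's metric.
def edit_distance(a, b, lam=0.5):
    n, m = len(a), len(b)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = float(i)               # delete all remaining events of a
    for j in range(1, m + 1):
        d[0][j] = float(j)               # insert all remaining events of b
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            shift = lam * abs(a[i - 1] - b[j - 1])
            d[i][j] = min(
                d[i - 1][j] + 1,                    # delete a[i-1]
                d[i][j - 1] + 1,                    # insert b[j-1]
                d[i - 1][j - 1] + min(shift, 2.0),  # shift (capped at 2)
            )
    return d[n][m]

# Toy flood-event times (e.g., day of record); invented for illustration.
print(edit_distance([3.0, 40.0, 95.0], [5.0, 41.0, 96.0]))
```

Computing this distance between all pairs of sliding windows of an event series yields the distance matrix from which a recurrence plot can be drawn.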
After that, I use this non-linear similarity measure (ED) to compute the pairwise dependency in extreme precipitation event series. I incorporate the similarity measure within the framework of complex network theory to study the collective behavior of climate extremes. Under this architecture, the nodes are defined by the spatial grid points of the given spatio-temporal climate dataset. Each node is associated with a time series corresponding to the temporal evolution of the climate observation at that grid point. Finally, the network links are functions of the pairwise statistical interdependence between the nodes. Various network measures, such as degree, betweenness centrality, and clustering coefficient, can be used to quantify the network's topology. I apply this methodology to study the spatio-temporal coherence pattern of extreme rainfall events in the United States and the Ganga River basin, which reveals its relation to various climate processes and the orography of the region.
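The network construction described above can be miniaturised as follows. The synchronisation count used as the pairwise similarity is a simple stand-in for the edit-distance-based measure, and all event series are synthetic.

```python
import random

# Synthetic binary event series (1 = extreme event) at 20 "grid points".
random.seed(2)
n_nodes, t_len = 20, 200
series = [[1 if random.random() < 0.05 else 0 for _ in range(t_len)]
          for _ in range(n_nodes)]
series[1] = series[0][:]   # couple two nodes strongly, for illustration

# Pairwise similarity: number of co-occurring events (a simple stand-in for
# the edit-distance-based dependence measure).
def sync(a, b):
    return sum(1 for x, y in zip(a, b) if x and y)

sim = {(i, j): sync(series[i], series[j])
       for i in range(n_nodes) for j in range(i + 1, n_nodes)}

# Keep roughly the top 5% most similar pairs as network links.
cutoff = sorted(sim.values(), reverse=True)[max(1, len(sim) // 20) - 1]
links = [pair for pair, s in sim.items() if s >= cutoff and s > 0]

# Node degree, one of the network measures mentioned above.
degree = [0] * n_nodes
for i, j in links:
    degree[i] += 1
    degree[j] += 1
```

The perfectly coupled pair (nodes 0 and 1) reliably ends up linked, while the uncoupled background pairs mostly do not; on real data the resulting degree and centrality fields trace the spatial coherence patterns of the extremes.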
Identifying precursors of extreme events in the near future is extremely important for preparing the public for an upcoming disaster and mitigating the associated risks. With this motivation, I propose an in-data prediction recipe for predicting the data structures that typically occur prior to extreme events using the echo state network, a type of recurrent neural network that is part of the reservoir computing framework. Unlike previous works that identify precursory structures in the same variable in which extreme events are manifested (the active variable), I predict these structures using data from another dynamic variable (the passive variable), which does not show large excursions from the nominal condition but carries imprints of these extreme events. Furthermore, my results demonstrate that the quality of prediction depends on the magnitude of the events: the higher the magnitude of the extreme, the better its predictability. I show quantitatively that this is because the input signals collectively form a more coherent pattern for an extreme event of higher magnitude, which enhances the machine's efficiency in predicting forthcoming extreme events.
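A minimal echo state network can illustrate the reservoir computing idea: a fixed random recurrent "reservoir" is driven by an input signal, and only a linear readout is trained (here by ridge regression via the normal equations). The toy task, network size, and scalings below are illustrative assumptions, not the thesis's setup.

```python
import math
import random

random.seed(3)
N = 50                                     # reservoir size (illustrative)
W_in = [random.uniform(-0.5, 0.5) for _ in range(N)]
W = [[random.uniform(-1, 1) if random.random() < 0.1 else 0.0 for _ in range(N)]
     for _ in range(N)]
# Crudely scale W so its spectral radius is below 1 (via the row-sum bound),
# which keeps the reservoir in the echo-state regime.
rho = max(sum(abs(w) for w in row) for row in W)
W = [[w * 0.9 / rho for w in row] for row in W]

def run(inputs):
    """Drive the reservoir with a scalar input sequence; collect states."""
    x, states = [0.0] * N, []
    for u in inputs:
        x = [math.tanh(W_in[i] * u + sum(W[i][j] * x[j] for j in range(N)))
             for i in range(N)]
        states.append(x)
    return states

# Toy task: reconstruct a one-step-delayed copy of the input -- the reservoir's
# memory must carry the information forward in time.
inputs = [math.sin(0.3 * t) for t in range(300)]
target = [0.0] + inputs[:-1]
states = run(inputs)

# Train the linear readout by ridge regression (normal equations), discarding
# a warm-up period while the reservoir forgets its initial state.
warm, lam = 50, 1e-4
X, y = states[warm:], target[warm:]
A = [[sum(s[i] * s[j] for s in X) + (lam if i == j else 0.0) for j in range(N)]
     for i in range(N)]
b = [sum(s[i] * yy for s, yy in zip(X, y)) for i in range(N)]
for col in range(N):                       # Gaussian elimination with pivoting
    p = max(range(col, N), key=lambda r: abs(A[r][col]))
    A[col], A[p] = A[p], A[col]
    b[col], b[p] = b[p], b[col]
    for r in range(col + 1, N):
        f = A[r][col] / A[col][col]
        for c in range(col, N):
            A[r][c] -= f * A[col][c]
        b[r] -= f * b[col]
w = [0.0] * N
for i in reversed(range(N)):
    w[i] = (b[i] - sum(A[i][j] * w[j] for j in range(i + 1, N))) / A[i][i]

pred = [sum(wi * si for wi, si in zip(w, s)) for s in X]
mse = sum((p - t) ** 2 for p, t in zip(pred, y)) / len(y)
```

The cross-variable prediction in the thesis follows the same template, with the passive variable as input and the precursory structures of the active variable as the readout target.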
Text is a ubiquitous entity in our world and daily life. We encounter it nearly everywhere: in shops, on the street, or in our flats. Nowadays, more and more text is contained in digital images. These images are taken either using cameras, e.g., smartphone cameras, or using scanning devices such as document scanners. The sheer amount of available data, e.g., millions of images taken by Google Streetview, prohibits manual analysis and metadata extraction. Although much progress has been made in the area of optical character recognition (OCR) for printed text in documents, broad areas of OCR are still not fully explored and hold many research challenges. With the mainstream usage of machine learning, and especially deep learning, one of the most pressing problems is the availability and acquisition of annotated ground truth for training machine learning models, because obtaining annotated training data through manual annotation is time-consuming and costly. In this thesis, we address the question of how we can reduce the costs of acquiring ground truth annotations for the application of state-of-the-art machine learning methods to optical character recognition pipelines. To this end, we investigate how we can reduce the annotation cost by using only a fraction of the typically required ground truth annotations, e.g., for scene text recognition systems. We also investigate how we can use synthetic data to reduce the need for manual annotation work, e.g., in the area of document analysis for archival material. In the area of scene text recognition, we have developed a novel end-to-end scene text recognition system that can be trained using inexact supervision and shows competitive, state-of-the-art performance on standard benchmark datasets for scene text recognition. Our method consists of two independent neural networks, combined using spatial transformer networks.
Both networks learn together to perform text localization and text recognition at the same time while only using annotations for the recognition task. We apply our model to end-to-end scene text recognition (meaning localization and recognition of words) and pure scene text recognition without any changes in the network architecture.
In the second part of this thesis, we introduce novel approaches for using and generating synthetic data to analyze handwriting in archival data. First, we propose a novel preprocessing method to determine whether a given document page contains any handwriting. We propose a novel data synthesis strategy to train a classification model and show that this strategy is viable by evaluating the trained model on real images from an archive. Second, we introduce the new analysis task of handwriting classification, which entails classifying a given handwritten word image into classes such as date, word, or number. Such an analysis step allows us to select the best-fitting recognition model for subsequent text recognition; it also allows us to reason about the semantic content of a given document page without the need for fine-grained text recognition and further analysis steps, such as Named Entity Recognition. We show that our proposed approaches work well when trained on synthetic data. Further, we propose a flexible metric learning approach to allow zero-shot classification of classes unseen during the network's training. Last, we propose a novel data synthesis algorithm to train off-the-shelf pixel-wise semantic segmentation networks for documents. Our data synthesis pipeline is based on the well-known StyleGAN architecture and can synthesize realistic document images with their corresponding segmentation annotations without the need for any annotated data.
Core-shell upconversion nanoparticles - investigation of dopant intermixing and surface modification
(2022)
Frequency upconversion nanoparticles (UCNPs) are inorganic nanocrystals capable of converting incident photons from the near-infrared (NIR) region of the electromagnetic spectrum into higher-energy photons, which are re-emitted in the visible (Vis) and even ultraviolet (UV) range. The frequency upconversion (UC) process is realized in nanocrystals doped with trivalent lanthanoid ions (Ln(III)). The Ln(III) ions provide the (excited) electronic states that form a ladder-like electronic structure for the Ln(III) electrons in the nanocrystals. The absorption of at least two low-energy photons by the nanoparticle and the subsequent energy transfer to one Ln(III) ion promote one Ln(III) electron into a higher excited electronic state. One high-energy photon is then emitted during the radiative relaxation of this electron back into the electronic ground state of the Ln(III) ion.
The UC process is very interesting in the biological/medical context. Biological samples (like organic tissue, blood, urine, and stool) absorb high-energy photons (UV and blue light) more strongly than low-energy photons (red and NIR light). Thanks to a naturally occurring optical window, NIR light can penetrate deeper than UV light into biological samples. Hence, UCNPs in bio-samples can be excited by NIR light. This possibility opens a pathway for in vitro as well as in vivo applications, like optical imaging by cell labeling or staining of specific organic tissue. Furthermore, early detection and diagnosis of diseases by predictive and diagnostic biomarkers can be realized with bio-recognition elements being labeled to the UCNPs. Additionally, "theranostic" becomes possible, in which the identification and the treatment of a disease are tackled simultaneously.
For this to succeed, certain parameters of the UCNPs must be met: high upconversion efficiency, high photoluminescence quantum yield, dispersibility and dispersion stability in aqueous media, as well as the availability of functional groups for the fast and easy introduction of bio-recognition elements. The UCNPs used in this work were prepared by solvothermal decomposition synthesis, yielding particles with NaYF4 or NaGdF4 as the host lattice. They were doped with the Ln(III) ions Yb3+ and Er3+, which form one possible upconversion pair. Their upconversion efficiency and photoluminescence quantum yield were improved by adding a passivating shell to reduce surface quenching.
However, the brightness of core-shell UCNPs falls short of that of the corresponding bulk material (particles of at least μm size). The core and shell structures are not sharply separated from each other, a point discussed in the literature: instead, there is a transition layer between core and shell, which results from the migration of the dopants within the host lattice during synthesis. This ion migration was examined by time-resolved laser spectroscopy and the interlanthanoid resonance energy transfer (LRET) in the two host lattices mentioned above. The results are presented in two publications dealing with core-shell-shell structured nanoparticles. The core is doped with the LRET-acceptor (either Nd3+ or Pr3+). The intermediate shell of pure host lattice material serves as an insulation layer; its thickness was varied within one set of samples of otherwise identical composition, thereby changing the spatial separation of LRET-acceptor and -donor. The outer shell, of the same host lattice, is doped with the LRET-donor (Eu3+). The effect of increasing insulation shell thickness is significant, although the LRET cannot be suppressed completely.
In addition to Ln(III) migration within a host lattice, various phase transfer reactions were investigated in order to subsequently perform surface modifications for bio-applications. One result of this research has been published, using a promising ligand that equips the UCNP with bio-modifiable groups and shows good potential for bio-medical applications. This particular ligand mimics naturally occurring mechanisms of mussel protein adhesion and of blood coagulation, which is why it encapsulates the UCNPs very effectively while simultaneously introducing bio-functional groups. In a proof of concept, the encapsulated UCNP was successfully coupled with a dye (representative of a biomarker) and the system's photoluminescence properties were investigated.
Eight d-metal-containing N-butylpyridinium ionic liquids (ILs) with the nominal composition (C4Py)2[Ni0.5M0.5Cl4] or (C4Py)2[Zn0.5M0.5Cl4] (M = Cu, Co, Mn, Ni, Zn; C4Py = N-butylpyridinium) were synthesized, characterized, and investigated for their optical properties. Single crystal and powder X-ray analysis shows that the compounds are isostructural to existing examples based on other d-metal ions. Inductively coupled plasma optical emission spectroscopy measurements confirm that the metal/metal ratio is around 50 : 50. UV-Vis spectroscopy shows that the optical absorption can be tuned by selection of the constituent metals. Moreover, the compounds can act as an optical sensor for the detection of gases such as ammonia as demonstrated via a simple prototype setup.
Enterprise systems have long played an important role in businesses of various sizes. With the increasing complexity of today's business relationships, specialized application systems are being used more and more. Moreover, emerging technologies such as artificial intelligence are becoming accessible to enterprise systems. This raises the question of the future role of enterprise systems. This minitrack covers novel ideas that contribute to and shape the future role of enterprise systems, with five contributions.
Algorithmic management
(2022)
Land-use intensification is the main driver of the catastrophic decline of insect pollinators. However, land-use intensification comprises multiple processes that act across various scales and should affect pollinator guilds differently depending on their ecology. We aimed to reveal how two main pollinator guilds, wild bees and hoverflies, respond to different land-use intensification measures, that is, arable field cover (AFC), landscape heterogeneity (LH), and the functional flower composition of local plant communities as a measure of habitat quality. We sampled wild bees and hoverflies on 22 dry grassland sites within a highly intensified landscape (NE Germany) in three campaigns using pan traps. We estimated AFC and LH in consecutive radii (60–3000 m) around the dry grassland sites and assessed the local functional flower composition. Wild bee species richness and abundance were positively affected by LH and negatively by AFC at small scales (140–400 m). In contrast, hoverflies were positively affected by AFC and negatively by LH at larger scales (500–3000 m), where both landscape parameters were negatively correlated with each other. At small spatial scales, though, LH had a positive effect on hoverfly abundance. Functional flower diversity had no positive effect on pollinators, but conspicuous flowers seemed to increase hoverfly abundance. In conclusion, the two landscape parameters affect the two pollinator guilds in opposite directions and at different scales. The correlation between landscape parameters may influence the observed relationships between landscape parameters and pollinators. Hence, the effects of land-use intensification appear to be highly landscape-specific.
Individuals have an intrinsic need to express themselves to other humans within a given community by sharing their experiences, thoughts, actions, and opinions. As a means, they mostly prefer modern online social media platforms such as Twitter, Facebook, personal blogs, and Reddit. Users of these social networks interact by drafting their own status updates, publishing photos, and giving likes, leaving a considerable amount of data behind to be analyzed. Researchers have recently started exploring this shared social media data to understand online users better and predict their Big Five personality traits: agreeableness, conscientiousness, extraversion, neuroticism, and openness to experience. This thesis investigates the possible relationship between users' Big Five personality traits and the information published on their social media profiles. Public Facebook data such as linguistic status updates, metadata of liked objects, profile pictures, and emotion or reaction records were used to address the proposed research questions. Several machine learning prediction models were constructed in various experiments to utilize the engineered features correlated with the Big Five personality traits. The final models improved prediction accuracy compared to state-of-the-art approaches and were evaluated against established benchmarks in the domain. The research experiments were implemented with ethical and privacy considerations in mind. Furthermore, the research aims to raise awareness about privacy among social media users and to show what third parties can reveal about users' private traits from what they share and how they act on different social networking platforms.
In the second part of the thesis, variation in personality development is studied in a cross-platform environment, namely Facebook and Twitter. The personality profiles constructed on these social platforms are compared to evaluate the effect of the platform used on a user's personality development. Likewise, personality continuity and stability analyses are performed using samples from the two social media platforms. The experiments are based on ten-year longitudinal samples, aiming to understand users' long-term personality development and to further unlock the potential of cooperation between psychologists and data scientists.
Recently, epidemiological studies have highlighted a strong association of dairy intake with lower disease risk and, similarly, with increased levels of odd-chain fatty acids (OCFA). While the OCFA also demonstrate inverse associations with disease incidence, the direct dietary sources and mode of action of the OCFA remain poorly understood.
The overall aim of this thesis was to determine the impact of two main fractions of dairy, milk fat and milk protein, on OCFA levels and their influence on health outcomes under high-fat (HF) diet conditions. Both fractions represent viable sources of OCFA, as milk fats contain a significant amount of OCFA and milk proteins are high in branched chain amino acids (BCAA), namely valine (Val) and isoleucine (Ile), which can produce propionyl-CoA (Pr-CoA), a precursor for endogenous OCFA synthesis, while leucine (Leu) does not. Additionally, this project sought to clarify the specific metabolic effects of the OCFA heptadecanoic acid (C17:0).
Both short-term and long-term feeding studies were performed using male C57BL/6JRj mice fed HF diets supplemented with milk fat or C17:0, as well as milk protein or individual BCAA (Val; Leu) to determine their influences on OCFA and metabolic health. Short-term feeding revealed that both milk fractions induce OCFA in vivo, and the increases elicited by milk protein could be, in part, explained by Val intake. In vitro studies using primary hepatocytes further showed an induction of OCFA after Val treatment via de novo lipogenesis and increased α-oxidation. In the long-term studies, both milk fat and milk protein increased hepatic and circulating OCFA levels; however, only milk protein elicited protective effects on adiposity and hepatic fat accumulation—likely mediated by the anti-obesogenic effects of an increased Leu intake. In contrast, Val feeding did not increase OCFA levels nor improve obesity, but rather resulted in glucotoxicity-induced insulin resistance in skeletal muscle mediated by its metabolite 3-hydroxyisobutyrate (3-HIB). Finally, while OCFA levels correlated with improved health outcomes, C17:0 produced negligible effects in preventing HF-diet induced health impairments.
The results presented herein demonstrate that the beneficial health outcomes associated with dairy intake are likely mediated through the effects of milk protein, while OCFA levels are likely a mere association and do not play a significant causal role in metabolic health under HF conditions. Furthermore, the highly divergent metabolic effects of the two BCAA, Leu and Val, unraveled herein highlight the importance of protein quality.
Optimal carbon pricing with fluctuating energy prices — emission targeting vs. price targeting
(2022)
Prices of primary energy commodities display marked fluctuations over time. Market-based climate policy instruments (e.g., emissions pricing) create incentives to reduce energy consumption by increasing the user cost of fossil energy. This raises the question of whether climate policy should respond to fluctuations in fossil energy prices. We study this question within an environmental dynamic stochastic general equilibrium (E-DSGE) model calibrated to the German economy. Our results indicate that the welfare implications of dynamic emissions pricing crucially depend on how the revenues are used. When revenues are fully absorbed, a reduction in emissions prices stabilizes the economy in response to energy price shocks. However, when revenues are at least partially recycled, a stable emissions price improves overall welfare. This result is robust to different modeling assumptions.
Background: As the number of cardiac diseases has continuously increased in recent years in modern society, so has cardiac treatment, especially cardiac catheterization. The procedure of a cardiac catheterization is challenging for both patients and practitioners. Several potential stressors of a psychological or physical nature can occur during the procedure. The objective of the study is to develop and implement a stress management intervention for both practitioners and patients that aims to reduce the psychological and physical strain of a cardiac catheterization.
Methods: The clinical study (DRKS00026624) includes two randomized controlled intervention trials with parallel groups, for patients undergoing elective cardiac catheterization and for practitioners at the catheterization lab, at two clinic sites of the Ernst-von-Bergmann clinic network in Brandenburg, Germany. Both groups received different interventions for stress management. The intervention for patients comprises a psychoeducational video presenting different stress management techniques in addition to standardized medical information about the cardiac catheterization examination. The control condition consists of the medical patient education practiced in hospitals before the examination (usual care). Primary and secondary outcomes are measured by physiological parameters and validated questionnaires the day before (M1) and after (M2) the cardiac catheterization and at a postal follow-up 6 months later (M3). It is expected that patients given standardized information and psychoeducation show fewer complications during cardiac catheterization procedures, better pre- and post-operative wellbeing, regeneration, and mood, and lower stress levels over time. The intervention for practitioners includes a mindfulness-based stress reduction (MBSR) program over 8 weeks, supervised by an experienced MBSR practitioner directly at the clinic site, and an operative guideline. It is expected that practitioners receiving the intervention show reduced perceived and chronic stress and improved occupational health, physical and mental function, effort-reward balance, regeneration, and quality of life. Primary and secondary outcomes are measured by physiological parameters (heart rate variability, saliva cortisol) and validated questionnaires and will be assessed before (M1) and after (M2) the MBSR intervention and at a postal follow-up 6 months later (M3). Physiological biomarkers in practitioners will be assessed before (M1) and after the intervention (M2) on two working days and two days off.
Intervention effects in both groups (practitioners and patients) will be evaluated separately using multivariate variance analysis.
Discussion: This study evaluates the effectiveness of two stress management intervention programs for patients and practitioners within the cardiac catheterization laboratory. The study will disclose strains during a cardiac catheterization affecting both patients and practitioners. For practitioners, it may contribute to improved working conditions and occupational safety, preservation of earning capacity, and avoidance of participation restrictions and loss of performance. In both groups, less anxiety and stress and fewer complications before and during the procedures can be expected. The study may add knowledge on how to eliminate stressful exposures and contribute to more (psychological) security, fewer output losses, and less exhaustion during work. The resulting stress management guidelines, training manuals, and the standardized patient education should be transferred into clinical routines.
The self-employed faced strong income losses during the Covid-19 pandemic. Many governments, including Germany's, introduced programs to financially support the self-employed during the pandemic. The German Ministry for Economic Affairs announced a €50bn emergency-aid program in March 2020, offering one-off lump-sum payments of up to €15,000 to those facing substantial revenue declines. By reassuring the self-employed that the government 'would not let them down' during the crisis, the program also had the important aim of motivating the self-employed to get through the crisis. We investigate whether the program affected the confidence of the self-employed to survive the crisis, using real-time online-survey data comprising more than 20,000 observations. We employ propensity score matching, making use of a rich set of variables that influence the subjective survival probability, our main outcome measure. We observe that the program had significant effects, moderately increasing the subjective survival probability of the self-employed. We reveal important effect heterogeneities with respect to education, industry, and speed of payment. Notably, positive effects only occur among those self-employed whose application was processed quickly. This suggests stress-induced waiting costs due to the uncertainty associated with administrative processing and the overall pandemic situation. Our findings have policy implications for the design of support programs, while also contributing to the literature on the instruments and effects of entrepreneurship policy interventions in crisis situations.
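The core idea of propensity score matching as used above can be illustrated with a minimal sketch. The single covariate, the crude gradient-descent logistic fit, and the toy outcome equation below are illustrative assumptions, not the study's actual specification:

```python
import math

def fit_logistic(X, t, lr=0.5, epochs=2000):
    """Fit a simple logistic regression P(treated | x) by gradient descent."""
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * d, 0.0
        for xi, ti in zip(X, t):
            p = 1.0 / (1.0 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
            err = p - ti
            for j in range(d):
                gw[j] += err * xi[j]
            gb += err
        w = [wj - lr * gj / n for wj, gj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def att_nearest_neighbour(X, t, y, w, b):
    """Match each treated unit to the control unit with the closest
    propensity score and average the outcome differences (ATT)."""
    score = [1.0 / (1.0 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
             for xi in X]
    controls = [i for i, ti in enumerate(t) if ti == 0]
    diffs = [y[i] - y[min(controls, key=lambda k: abs(score[k] - score[i]))]
             for i, ti in enumerate(t) if ti == 1]
    return sum(diffs) / len(diffs)

# Toy data: outcome = 50 + 20*x + 10*treatment, so the true ATT is 10
X = [[0.2], [0.3], [0.6], [0.7], [0.8], [0.6], [0.7], [0.8]]
t = [0, 0, 0, 0, 0, 1, 1, 1]
y = [50 + 20 * x[0] + 10 * ti for x, ti in zip(X, t)]
w, b = fit_logistic(X, t)
print(round(att_nearest_neighbour(X, t, y, w, b), 6))
```

Because every treated covariate value has an exact control counterpart here, matching recovers the treatment effect without bias; in real survey data the matched controls only approximate the treated units, which is why a rich covariate set matters.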
Property tax competition
(2022)
We develop a model of property taxation and characterize equilibria under three alternative taxation regimes often used in the public finance literature: decentralized taxation, centralized taxation, and "rent seeking" regimes. We show that decentralized taxation results in inefficiently high tax rates, whereas centralized taxation yields a common optimal tax rate, and tax rates in the rent-seeking regime can be either inefficiently high or low. We quantify the effects of switching from the observed tax system to the three regimes for Japan and Germany. The decentralized or rent-seeking regime best describes the Japanese tax system, whereas the centralized regime does so for Germany. We also quantify the welfare effects of regime changes.
Urban pollution
(2022)
We use worldwide satellite data to analyse how population size and density affect urban pollution. We find that density significantly increases pollution exposure. Looking only at urban areas, we find that population size affects exposure more than density. Moreover, the effect is driven mostly by the population commuting to core cities rather than the core city population itself. We analyse heterogeneity by geography and income level. By and large, the influence of population on pollution is greatest in Asia and in middle-income countries. A counterfactual simulation shows that PM2.5 exposure would fall by up to 36% and NO2 exposure by up to 53% if population size were equalized across all cities within countries.
The prevalence of obesity in the pediatric population has become a major public health issue. Indeed, the dramatic increase of this epidemic causes multiple and harmful consequences. Physical activity, particularly physical exercise, remains the cornerstone of interventions against childhood obesity. Given the conflicting findings in the literature on the effects of exercise on adiposity and physical fitness outcomes in obese children and adolescents, the effect of duration-matched concurrent training (CT) [50% resistance training (RT) and 50% high-intensity interval training (HIIT)] on body composition and physical fitness in obese youth remains to be elucidated. Thus, the purpose of this study was to examine the effects of 9 weeks of CT compared to RT or HIIT alone on body composition and selected physical fitness components in healthy sedentary obese youth. Out of 73 participants, only 37 [14 males and 23 females; age 13.4 ± 0.9 years; body mass index (BMI): 31.2 ± 4.8 kg·m-2] were eligible and were randomized into three groups: HIIT (n = 12): 3–4 sets × 12 runs at 80–110% peak velocity, with 10-s passive recovery between bouts; RT (n = 12): 6 exercises; 3–4 sets × 10 repetition maximum (RM); and CT (n = 13): 50% serial completion of RT and HIIT. CT promoted significantly greater gains than HIIT and RT in body composition (p < 0.01, d = large), 6-min walking test distance (6MWT distance), and 6MWT-VO2max (p < 0.03, d = large). In addition, CT showed substantially greater improvements than HIIT in the medicine ball throw test (20.2 vs. 13.6%, p < 0.04, d = large). On the other hand, RT exhibited significantly greater gains in relative hand grip strength (p < 0.03, d = large) and countermovement jump (CMJ) performance (p < 0.01, d = large) than HIIT and CT. CT promoted greater benefits for fat and body mass loss and for cardiorespiratory fitness than the HIIT or RT modalities.
This study provides important information for practitioners and therapists on the application of effective exercise regimes with obese youth to induce significant and beneficial body composition changes. The applied CT program and its programming parameters in terms of exercise intensity and volume can be used by practitioners as an effective exercise treatment to fight the pandemic of overweight and obesity in youth.
Cognitive resources contribute to balance control. There is evidence that mental fatigue reduces cognitive resources and impairs balance performance, particularly in older adults and when balance tasks are complex, for example when trying to walk or stand while concurrently performing a secondary cognitive task.
We conducted a systematic literature search in PubMed (MEDLINE), Web of Science and Google Scholar to identify eligible studies and performed a random effects meta-analysis to quantify the effects of experimentally induced mental fatigue on balance performance in healthy adults. Subgroup analyses were computed for age (healthy young vs. healthy older adults) and balance task complexity (balance tasks with high complexity vs. balance tasks with low complexity) to examine the moderating effects of these factors on fatigue-mediated balance performance.
We identified 7 eligible studies with 9 study groups and 206 participants. Analysis revealed that performing a prolonged cognitive task had a small but significant effect (SMDwm = −0.38) on subsequent balance performance in healthy young and older adults. However, age- and task-related differences in balance responses to fatigue could not be confirmed statistically.
Overall, aggregation of the available literature indicates that mental fatigue generally reduces balance performance in healthy adults. However, interactions between cognitive resource reduction, aging, and balance task complexity remain elusive.
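The pooling step of a random effects meta-analysis like the one above can be sketched as follows. The DerSimonian–Laird estimator shown here is a common choice; whether this review used exactly this estimator, and the effect sizes in the example, are assumptions made for illustration:

```python
def dersimonian_laird(effects, variances):
    """Pool study effect sizes (e.g. SMDs) under a random effects model.

    Returns the pooled effect, the between-study variance tau^2,
    and the standard error of the pooled effect.
    """
    k = len(effects)
    w = [1.0 / v for v in variances]              # fixed-effect weights
    sw = sum(w)
    mu_fe = sum(wi * ei for wi, ei in zip(w, effects)) / sw
    q = sum(wi * (ei - mu_fe) ** 2 for wi, ei in zip(w, effects))
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)            # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]  # random-effects weights
    mu_re = sum(wi * ei for wi, ei in zip(w_re, effects)) / sum(w_re)
    se = (1.0 / sum(w_re)) ** 0.5
    return mu_re, tau2, se

# Illustrative (made-up) SMDs and sampling variances for three study groups
mu, tau2, se = dersimonian_laird([-0.5, -0.3, -0.35], [0.04, 0.05, 0.06])
print(round(mu, 3), round(tau2, 3), round(se, 3))
```

When the observed heterogeneity statistic Q does not exceed its degrees of freedom, tau^2 is truncated to zero and the random-effects estimate coincides with the fixed-effect weighted mean.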
Background: The COVID-19 pandemic has highlighted the importance of scientific endeavors. The goal of this systematic review is to evaluate the quality of the research on physical activity (PA) behavior change and its potential to contribute to policy-making processes in the early days of COVID-19 related restrictions.
Methods: We conducted a systematic review, following PRISMA guidelines and using PubMed and Web of Science, of the methodological quality of articles on PA behavior change published within 365 days after COVID-19 was declared a pandemic by the World Health Organization (WHO). Items from the JBI checklist and the AXIS tool were used for additional risk of bias assessment. Evidence mapping is used for better visualization of the main results. Conclusions about the significance of published articles are based on hypotheses on PA behavior change in the light of the COVID-19 pandemic.
Results: Among the 1,903 identified articles, there were 36% opinion pieces, 53% empirical studies, and 9% reviews. Of the 332 studies included in the systematic review, 213 used self-report measures to recollect prepandemic behavior, often in small convenience samples. Most focused on changes in PA volume, whereas changes in PA types were rarely measured. The majority had methodological reporting flaws. Few had very large samples with objective measures using a repeated-measures design (before and during the pandemic). In addition to the expected decline in PA duration, these studies show that many of those who were active prepandemic continued to be active during the pandemic.
Conclusions: Research responded quickly at the onset of the pandemic. However, most of the studies lacked robust methodology, and PA behavior change data lacked the accuracy needed to guide policy makers. To improve the field, we propose the implementation of longitudinal cohort studies by larger organizations such as WHO to ease access to data on PA behavior, and suggest those institutions set clear standards for this research. Researchers need to ensure a better fit between the measurement method and the construct being measured, and use both objective and subjective measures where appropriate to complement each other and provide a comprehensive picture of PA behavior.
A multidimensional and analytical perspective on Open Educational Practices in the 21st century
(2022)
Participatory approaches to teaching and learning are experiencing a new lease on life in the 21st century as a result of rapid technological development. Knowledge, practices, and tools can be shared across spatial and temporal boundaries in higher education by means of Open Educational Resources, Massive Open Online Courses, and open-source technologies. In this context, the Open Education Movement calls for new didactic approaches that encourage greater learner participation in formal higher education. Based on a representative literature review and focus group research, this study develops an analytical framework that enables researchers and practitioners to assess the form of participation in formal, collaborative teaching and learning practices. The analytical framework focuses on the micro-level of higher education, in particular on the interaction between students and lecturers when organizing the curriculum. For this purpose, the research reflects anew on the concept of participation, taking into account existing stage models of participation in the educational context. These are then brought together with the dimensions of teaching and learning processes, such as methods, objectives, and content. This paper aims to make a valuable contribution to the opening up of learning and teaching, and expands the discourse around possibilities for interpreting Open Educational Practices.
The negative impact of crude oil on the environment has led to a necessary transition toward alternative, renewable, and sustainable resources. In this regard, lignocellulosic biomass (LCB) is a promising renewable and sustainable alternative to crude oil for the production of fine chemicals and fuels in a so-called biorefinery process. LCB is composed of polysaccharides (cellulose and hemicellulose), as well as aromatics (lignin). The development of a sustainable and economically advantageous biorefinery depends on the complete and efficient valorization of all components. Therefore, in the new generation of biorefinery, the so-called biorefinery of type III, the LCB feedstocks are selectively deconstructed and catalytically transformed into platform chemicals. For this purpose, the development of highly stable and efficient catalysts is crucial for progress toward viability in biorefinery. Furthermore, a modern and integrated biorefinery relies on process and reactor design, toward more efficient and cost-effective methodologies that minimize waste. In this context, the usage of continuous flow systems has the potential to provide safe, sustainable, and innovative transformations with simple process integration and scalability for biorefinery schemes.
This thesis addresses three main challenges for the future biorefinery: catalyst synthesis, waste feedstock valorization, and the use of continuous flow technology. First, a cheap, scalable, and sustainable approach is presented for the synthesis of an efficient and stable 35 wt.-% Ni catalyst on a highly porous nitrogen-doped carbon support (35Ni/NDC) in pellet shape. Initially, the performance of this catalyst was evaluated for the aqueous-phase hydrogenation of LCB-derived compounds such as glucose, xylose, and vanillin in continuous flow systems. The 35Ni/NDC catalyst exhibited high catalytic performance in the three tested hydrogenation reactions, yielding sorbitol, xylitol, and 2-methoxy-4-methylphenol in 82 mol%, 62 mol%, and 100 mol% yield, respectively. In addition, the 35Ni/NDC catalyst exhibited remarkable stability over a long time on stream in continuous flow (40 h). Furthermore, the 35Ni/NDC catalyst was combined with commercially available Beta zeolite in a dual-column integrated process for isosorbide production from glucose (83 mol% yield).
Finally, 35Ni/NDC was applied to the valorization of industrial waste products, namely sodium lignosulfonate (LS) and beech wood sawdust (BWS), in continuous flow systems. The LS depolymerization was conducted by combining solvothermal fragmentation in water/alcohol mixtures (i.e., methanol/water and ethanol/water) with catalytic hydrogenolysis/hydrogenation (SHF). The depolymerization was found to occur thermally in the absence of a catalyst, with the molecular weight tunable by temperature. Furthermore, the SHF generated an optimized cumulative yield of lignin-derived phenolic monomers of 42 mg gLS-1. Similarly, a solvothermal and reductive catalytic fragmentation (SF-RCF) of BWS was conducted using MeOH and MeTHF as solvents. In this case, the optimized total yield of lignin-derived phenolic monomers was 247 mg gKL-1.
Sustainable urban growth
(2022)
This dissertation explores the determinants of sustainable and socially optimal growth in a city. Two general equilibrium models establish the basis for this evaluation, each adding its puzzle piece to the urban sustainability discourse and examining the role of non-market-based and market-based policies for balanced growth and welfare improvements in different theory settings. Sustainable urban growth calls either for policy action or for a green energy transition. Further, R&D market failures can pose severe challenges to the sustainability of urban growth and the social optimality of decentralized allocation decisions. Still, a careful (holistic) combination of policy instruments can achieve sustainable growth and even be first best.
Technological progress allows for producing ever more complex predictive models on the basis of increasingly big datasets. For risk management of natural hazards, a multitude of models is needed as a basis for decision-making, e.g. for the evaluation of observational data, the prediction of hazard scenarios, or statistical estimates of expected damage. The question arises as to how modern modelling approaches like machine learning or data-mining can be meaningfully deployed in this thematic field. In addition, with respect to data availability and accessibility, the trend is towards open data. The topic of this thesis is therefore to investigate the possibilities and limitations of machine learning and open geospatial data in the field of flood risk modelling in the broad sense. As this overarching topic is broad in scope, individual relevant aspects are identified and inspected in detail.
A prominent data source in the flood context is satellite-based mapping of inundated areas, for example made openly available by the Copernicus service of the European Union. Great expectations are directed towards these products in the scientific literature, both for acute support of relief forces during emergency response and for modelling via hydrodynamic models or for damage estimation. Therefore, one focus of this work was the evaluation of these flood masks. Starting from the observation that the quality of these products is insufficient in forested and built-up areas, a procedure for subsequent improvement via machine learning was developed. This procedure is based on a classification algorithm that only requires training data from the particular class to be predicted, in this case flooded areas, but not from the negative class (dry areas). The application to hurricane Harvey in Houston shows the high potential of this method, which depends on the quality of the initial flood mask.
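The one-class idea described above, training only on examples of the positive class, can be illustrated with a minimal sketch. The thesis's actual algorithm is not specified here; the diagonal Gaussian density model, the quantile threshold, and the toy features below are assumptions made for illustration:

```python
import math

class OneClassGaussian:
    """Toy one-class classifier: fit a diagonal Gaussian to samples of
    the positive class only (e.g. features of flooded pixels) and flag
    new points whose density falls below a low quantile of the
    training densities."""

    def fit(self, X, quantile=0.05):
        n, d = len(X), len(X[0])
        self.mu = [sum(x[j] for x in X) / n for j in range(d)]
        self.var = [max(1e-9, sum((x[j] - self.mu[j]) ** 2 for x in X) / n)
                    for j in range(d)]
        scores = sorted(self.log_density(x) for x in X)
        self.threshold = scores[int(quantile * n)]
        return self

    def log_density(self, x):
        return -0.5 * sum((x[j] - self.mu[j]) ** 2 / self.var[j]
                          + math.log(2 * math.pi * self.var[j])
                          for j in range(len(x)))

    def predict(self, x):
        """1 = looks like the trained (flooded) class, 0 = outlier."""
        return 1 if self.log_density(x) >= self.threshold else 0

# Train on illustrative 'flooded pixel' feature vectors clustered near (0, 0)
train = [[0.1 * i, 0.1 * j] for i in range(-5, 6) for j in range(-5, 6)]
clf = OneClassGaussian().fit(train)
print(clf.predict([0.0, 0.0]), clf.predict([10.0, 10.0]))
```

The appeal in the flood-mapping setting is exactly what the abstract states: reliable training labels exist only for the flooded class, so a decision rule must be learned without examples of dry areas.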
Next, it is investigated how strongly the statistical risk predicted by a process-based model chain depends on the implemented physical process details. This demonstrates what a risk study based on established models can deliver. Even for fluvial flooding, such model chains are already quite complex, and they are hardly available for compound or cascading events comprising torrential rainfall, flash floods, and other processes. In the fourth chapter of this thesis it is therefore tested whether machine learning based on comprehensive damage data can offer a more direct path towards damage modelling that avoids the explicit construction of such a model chain. For that purpose, a state-collected dataset of damaged buildings from the severe El Niño event of 2017 in Peru is used. In this context, the possibilities of data-mining for extracting process knowledge are explored as well. It can be shown that various openly available geodata sources contain useful information for flood hazard and damage modelling of complex events, e.g. satellite-based rainfall measurements, topographic and hydrographic information, mapped settlement areas, as well as indicators from spectral data. Further, insights into damaging processes are discovered, which are mainly in line with prior expectations. The maximum intensity of rainfall, for example, has a stronger effect in cities and steep canyons, while the sum of rainfall was found to be more informative in low-lying river catchments and forested areas. Rural areas of Peru exhibited higher vulnerability in the presented study than urban areas. However, the general limitations of the methods and their dependence on specific datasets and algorithms also become obvious.
In the overarching discussion, the different methods – process-based modelling, predictive machine learning, and data mining – are evaluated with respect to the overall research questions. For hazard observation, a focus on novel algorithms appears sensible for future research. For hazard modelling, especially of river floods, the improvement of physical models and the integration of process-based and statistical procedures is suggested. For damage modelling, the large and representative datasets necessary for a broad application of machine learning are still lacking. Improving the damage data basis is therefore currently regarded as more important than the choice of algorithms.
Public administrations confront fundamental challenges, including globalization, digitalization, and an eroding level of trust from society. By developing joint public service delivery with other stakeholders, public administrations can respond to these challenges. This increases the importance of inter-organizational governance—a development often referred to as New Public Governance, which to date has not been realized because public administrations focus on intra-organizational practices and follow the traditional “governmental chain.”
E-government initiatives, which can lead to high levels of interconnected public services, are currently perceived as insufficient to meet this goal. They are not designed holistically and barely affect the interactions of public and non-public stakeholders. A fundamental shift toward joint public service delivery would require scrutiny of established processes, roles, and interactions between stakeholders.
Various scientists and practitioners within the public sector assume that the use of blockchain institutional technology could fundamentally change the relationship between public and non-public stakeholders. At first glance, inter-organizational, joint public service delivery could benefit from the use of blockchain. This dissertation aims to shed light on this widespread assumption. Hence, the objective of this dissertation is to substantiate the effect of blockchain on the relationship between public administrations and non-public stakeholders.
This objective is pursued by defining three major areas of interest. First, this dissertation strives to answer the question of whether or not blockchain is suited to enable New Public Governance and to identify instances where blockchain may not be the proper solution. The second area aims to understand empirically the status quo of existing blockchain implementations in the public sector and whether they comply with the major theoretical conclusions. The third area investigates the changing role of public administrations, as the blockchain ecosystem can significantly increase the number of stakeholders.
Corresponding research is conducted to provide insights into these areas: combining theoretical concepts with empirical realities, conducting interviews with subject-matter experts and key stakeholders of leading blockchain implementations, and performing a comprehensive stakeholder analysis followed by a visualization of its results.
The results of this dissertation demonstrate that blockchain can support New Public Governance in many ways while having a minor impact on certain aspects (e.g., decentralized control) that are constitutive of this public service paradigm. Furthermore, the existing projects indicate changes to relationships between public administrations and non-public stakeholders, although not necessarily the fundamental shift proposed by New Public Governance. Lastly, the results suggest that power relations are shifting, including the decreasing influence of public administrations within the blockchain ecosystem. The results raise questions about the governance models and regulations required to support mature solutions and the further diffusion of blockchain for public service delivery.
Strategic uncertainty is the uncertainty that players face with respect to the purposeful behavior of other players in an interactive decision situation. Our paper develops a new method for measuring strategic-uncertainty attitudes and distinguishing them from risk and ambiguity attitudes. We vary the source of uncertainty (whether strategic or not) across conditions in a ceteris paribus manner. We elicit certainty equivalents of participating in two strategic 2x2 games (a stag-hunt and a market-entry game) as well as certainty equivalents of related lotteries that yield the same possible payoffs with exogenously given probabilities (risk) and lotteries with unknown probabilities (ambiguity). We provide a structural model of uncertainty attitudes that allows us to measure a preference for or an aversion to the source of uncertainty, as well as optimism or pessimism regarding the desired outcome. We document systematic attitudes towards strategic uncertainty that vary across contexts. Under strategic complementarity [substitutability], the majority of participants tend to be pessimistic [optimistic] regarding the desired outcome. However, preferences for the source of uncertainty are distributed around zero.
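The certainty equivalents the design elicits can be illustrated with a small numeric example. The following assumes CRRA expected utility for a binary lottery; it is not the paper's structural model, and the payoffs are hypothetical.

```python
# Certainty equivalent of a binary lottery under CRRA utility
# u(x) = x**(1-r) / (1-r), r != 1. Illustration only; not the
# structural model estimated in the paper.
def crra_u(x, r):
    return x ** (1 - r) / (1 - r)

def crra_u_inv(u, r):
    return ((1 - r) * u) ** (1 / (1 - r))

def certainty_equivalent(p_high, high, low, r):
    # expected utility of the lottery, mapped back to a sure payoff
    eu = p_high * crra_u(high, r) + (1 - p_high) * crra_u(low, r)
    return crra_u_inv(eu, r)

# A risk-averse agent (r = 0.5) facing a 50/50 lottery over 100 or 25
# (payoffs hypothetical) values it below its expected value of 62.5.
ce_averse = certainty_equivalent(0.5, 100.0, 25.0, 0.5)
ev = 0.5 * 100 + 0.5 * 25
```

A certainty equivalent below the expected value signals aversion to that source of uncertainty; the paper compares such gaps across strategic, risky, and ambiguous sources.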
Struggle for existence (2022)
In this project, I sought to understand how Palestinian claim-making in the West Bank is possible within the context of continuing Israeli occupation and repression by the Palestinian political leadership. I explored the questions of what channels non-state actors use to advance their claims, what opportunities they have for making these claims, and what challenges they face. This exploration covers the time period from the Oslo Accords in the mid-1990s to the so-called Great March of Return in 2018.
I demonstrated that Palestinians used different modes and strategies of resistance in the past century, as the area of what today is Israel/Palestine has historically been a target for foreign penetration. Yet, the Oslo agreements between the Israeli government and the Palestinian leadership have ended Palestinians’ decentralized and pluralist social governance, reinforced Israeli rule in the Palestinian territories, promoted continuing dispossession and segregation of Palestinians, and further restricted their rights and their claim-making opportunities to this day. Therefore, today, Palestinian society in the West Bank is characterized by fragmentation, geographical and societal segregation, and double repression by Israeli occupation and Palestinian Authority (PA) policies. What is more, Palestinian claim-making is legally curtailed due to the establishment of different geographical entities in which Palestinians are subjected to different forms of Israeli rule and regulations.
I argue that the concepts of civil society and acts of citizenship, which are often used to describe non-state actors’ rights-seeking activities, fall short of comprehensively capturing Palestinian claim-making in the West Bank. By delineating the boundaries of these concepts, acts of subjecthood emerged within the research process as a novel theoretical approach and as a description of claim-making within repressive contexts where claim makers’ rights are curtailed and opportunities for rights-seeking activities are few. This study thereby applies a new theoretical framework to the conflict in Israel/Palestine and contributes to a better understanding of rights-seeking activities within the West Bank. Further, I argue that Palestinian acts of subjecthood against hostile Israeli rule in the West Bank are embedded within the comprehensive structure of settler colonialism. As a form of colonialism that aims at replacing an indigenous population, Israeli settler colonialism in the West Bank manifests itself in restrictions of Palestinian movement, settlement construction, home demolitions, violence, and detentions.
By using grounded theory and inductive reasoning as methodological approaches, I was able to make generalizations about the state of Palestinian claim-making. These generalizations are based on the analysis of secondary materials and data collected via face-to-face and video interviews with non-state actors in Israel/Palestine. The conducted research shows that there is not a single measure or a standalone condition that hinders Palestinian claim-making, but a complex and comprehensive structure that, on the one hand, shrinks Palestinian living space by occupation and destruction and, on the other hand, diminishes Palestinian civic space by limiting the fundamental rights to organize and build social movements to change the status Palestinians live in.
Although the concrete, tangible outcomes of Palestinian acts of subjecthood are marginal, they contribute to strengthening and perpetuating Palestinians’ long history of resistance against Israeli oppression. Given the lack of adherence to international law and the neglect of UN resolutions by the Israeli government, the continuous defeats of rights organizations in Israeli courts, and the repression of institutions based in the West Bank by PA and occupation policies, Palestinian acts of subjecthood cannot overturn current power structures. Nevertheless, the ongoing persistence of non-state actors claiming rights, as well as the emergence of new initiatives and youth movements, are essential for strengthening Palestinians’ resilience and documenting current injustices. They can thereby build the pillars for social change in the future.
Biomimicry is the art of mimicking nature to overcome a particular technical or scientific challenge. The approach studies how evolution has found solutions to the most complex problems in nature, which makes it a powerful method for science. Combined with the rapid development of manufacturing and information technologies in the digital age, structures and materials that were previously thought to be unrealizable can now be created from a simple sketch at the touch of a button. The primary goal of this doctoral thesis was to investigate how digital tools, such as programming, modelling, 3D-design tools and 3D-printing, with the help of biomimicry, could lead to new analysis methods in science and new medical devices in medicine.
The Electrical Discharge Machining (EDM) process is commonly applied to shape hard metals that are difficult to work with conventional machinery. A workpiece submerged in an electrolyte is eroded while in close vicinity to an electrode: when a high voltage is applied between the workpiece and the electrode, sparks create craters on the substrate, removing material that is flushed away by the electrolyte. Usually, such surfaces are analysed based on roughness; in this work, an alternative based on a novel curvature analysis method is presented. In addition, to better understand how the surface changes over the processing time of the EDM process, a digital impact model was created that places craters and ridges on an initially flat substrate. These simulated substrates were then analysed with the curvature analysis method at different processing times. It was found that a substrate reaches an equilibrium at around 10,000 impacts. The proposed curvature analysis method has the potential to be used in the design of new cell culture substrates for stem cells.
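A curvature analysis of a discretized surface, as named above, can be sketched by computing the mean curvature of a height map with finite differences. This is a generic illustration, not the thesis's specific method; it is verified on a paraboloid bowl whose mean curvature at the bottom is exactly 1.

```python
import numpy as np

def mean_curvature(z, dx):
    """Mean curvature H of a height map z(x, y) sampled on a square
    grid with spacing dx, via central finite differences:
    H = ((1+zy^2)zxx - 2*zx*zy*zxy + (1+zx^2)zyy) / (2(1+zx^2+zy^2)^1.5)."""
    zy, zx = np.gradient(z, dx)       # axis 0 = y, axis 1 = x
    zxy, zxx = np.gradient(zx, dx)
    zyy, _ = np.gradient(zy, dx)
    num = (1 + zy**2) * zxx - 2 * zx * zy * zxy + (1 + zx**2) * zyy
    return num / (2 * (1 + zx**2 + zy**2) ** 1.5)

# Synthetic 'crater': a paraboloid bowl z = (x^2 + y^2) / 2.
ax = np.linspace(-1, 1, 201)
dx = ax[1] - ax[0]
X, Y = np.meshgrid(ax, ax)
Z = 0.5 * (X**2 + Y**2)
H = mean_curvature(Z, dx)
center = len(ax) // 2   # grid point at the bottom of the bowl
```

Applied to a simulated EDM substrate, the distribution of such curvature values is the kind of quantity a curvature-based surface analysis would track instead of roughness.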
The Venus flytrap can shut its jaws at amazing speed. Its shutting mechanism may be interesting for science and is an example of a so-called mechanically bi-stable system: there are two stable states. In this work, two truncated pyramid structures were modelled using a non-linear mechanical model, the Chained Beam Constraint Model (CBCM). The structure with a slope angle of 30 degrees is not bi-stable, whereas the structure with a slope angle of 45 degrees is. Developing this idea further using PEVA, which has a shape-memory effect, the structure that is not bi-stable could be programmed to be bi-stable and then switched back again; this could be used as an energy storage system. Another species with an interesting mechanism is the tapeworm. Some species of this animal have a crown of hooks and suckers located on its side. The parasite is commonly found in the lower intestine of mammals and attaches to the intestinal walls using its suckers. When the tapeworm has found a suitable spot, it ejects its hooks and permanently attaches to the wall. This function could be used in minimally invasive medicine to gain better control of implants during the implantation process. Using the CBCM and a 3D-printer capable of tuning how hard or soft a printed part is, a design strategy was developed to investigate how one could create a device that mimics the tapeworm. In the end, a prototype was created that attached to a pork loin at a negative pressure of 20 kPa and ejected its hooks at a negative pressure of 50 kPa or above.
These three projects demonstrate how digital tools and biomimicry can be combined to develop applicable solutions in science and in medicine.
Accurately solving classification problems is arguably the most relevant machine-learning task today. Binary classification, which separates only two classes, is algorithmically simpler but has fewer potential applications, as many real-world problems are multi-class. Conversely, separating only a subset of classes simplifies the classification task. Even though existing multi-class machine learning algorithms are very flexible regarding the number of classes, they assume that the target set Y is fixed and cannot be restricted once training is finished. At the same time, state-of-the-art production environments are becoming increasingly interconnected with the advance of Industry 4.0 and related technologies, so that additional information can simplify the respective classification problems. In light of this, the main aim of this thesis is to introduce dynamic classification, which generalizes multi-class classification such that the target class set can be restricted arbitrarily to a non-empty class subset M of Y at any time between two consecutive predictions.
This task is solved by a combination of two algorithmic approaches. The first is classifier calibration, which transforms predictions into posterior probability estimates intended to be well calibrated. The analysis provided focuses on monotonic calibration and in particular corrects wrong statements that appeared in the literature. It also reveals that bin-based evaluation metrics, which became popular in recent years, are unjustified and should not be used at all. Next, the validity of Platt scaling, the most relevant parametric calibration approach, is analyzed in depth. In particular, its optimality for classifier predictions distributed according to four different families of probability distributions, as well as its equivalence with Beta calibration up to a sigmoidal preprocessing, are proven. For non-monotonic calibration, extended variants based on kernel density estimation and the ensemble method EKDE are introduced. Finally, the calibration techniques are evaluated in a simulation study with complete information as well as on a selection of 46 real-world data sets.
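Platt scaling, discussed above, fits a sigmoid to raw classifier scores. The following is a minimal sketch using plain gradient descent on synthetic scores; the classical procedure additionally smooths the target labels and uses a Newton-type optimizer, which is omitted here.

```python
import numpy as np

def platt_scale(scores, labels, lr=0.01, steps=5000):
    """Fit Platt scaling p(y=1|s) = sigmoid(a*s + b) by gradient
    descent on the average log-loss. Minimal sketch only."""
    a, b = 1.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(a * scores + b)))
        g = p - labels                 # dLoss/dlogit per example
        a -= lr * np.mean(g * scores)
        b -= lr * np.mean(g)
    return a, b

rng = np.random.default_rng(2)
# Synthetic uncalibrated classifier scores: class 1 shifted upward.
s0 = rng.normal(-1.0, 1.0, 1000)
s1 = rng.normal(+1.0, 1.0, 1000)
scores = np.concatenate([s0, s1])
labels = np.concatenate([np.zeros(1000), np.ones(1000)])

a, b = platt_scale(scores, labels)
prob_high = 1.0 / (1.0 + np.exp(-(a * 3.0 + b)))    # clearly class-1 score
prob_low = 1.0 / (1.0 + np.exp(-(a * -3.0 + b)))    # clearly class-0 score
```

The fitted sigmoid maps raw scores to posterior probability estimates, which is the building block the thesis then feeds into decomposition-based classification.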
Building on this, classifier calibration is applied as part of decomposition-based classification, which aims to reduce multi-class problems to simpler (usually binary) prediction tasks. For the fusing step performed at prediction time, a new approach based on evidence theory is presented that uses classifier calibration to model mass functions. This allows decomposition-based classification to be analyzed against a strictly formal background and closed-form equations for the overall combinations to be proven. Furthermore, the same formalism leads to a consistent integration of dynamic class information, yielding a theoretically justified and computationally tractable dynamic classification model. The insights gained from this modeling are combined with pairwise coupling, one of the most relevant reduction-based classification approaches, such that all individual predictions are combined with weights. This not only generalizes existing work on pairwise coupling but also enables the integration of dynamic class information.
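The core idea of dynamic classification, restricting predictions to a subset M of Y at prediction time, can be illustrated in its simplest form by renormalizing a calibrated posterior over the allowed classes. The thesis builds the combination via evidence theory and pairwise coupling; the sketch below shows only the restriction step.

```python
import numpy as np

def restrict_prediction(probs, class_ids, allowed):
    """Restrict a calibrated posterior over the full class set Y to a
    dynamic non-empty subset M (allowed) by renormalization, then
    predict. Illustrative sketch, not the thesis's evidence-theoretic
    combination."""
    mask = np.isin(class_ids, list(allowed))
    p = np.where(mask, probs, 0.0)
    p = p / p.sum()
    return class_ids[int(np.argmax(p))], p

classes = np.array([0, 1, 2, 3])
posterior = np.array([0.40, 0.35, 0.15, 0.10])  # full-set winner: class 0

full_pred, _ = restrict_prediction(posterior, classes, {0, 1, 2, 3})
dyn_pred, dyn_p = restrict_prediction(posterior, classes, {1, 2})  # class 0 ruled out
```

When external process information rules out class 0, the prediction switches to class 1 and the remaining mass is renormalized, which is exactly the kind of accuracy gain the thesis evaluates empirically.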
Lastly, a thorough empirical study is performed that compares all newly introduced approaches to existing state-of-the-art techniques. For this, evaluation metrics for dynamic classification are introduced that depend on corresponding sampling strategies. Thereafter, these are applied during a three-part evaluation. First, support vector machines and random forests are applied on 26 data sets from the UCI Machine Learning Repository. Second, two state-of-the-art deep neural networks are evaluated on five benchmark data sets from a relatively recent reference work. Here, computationally feasible strategies to apply the presented algorithms in combination with large-scale models are particularly relevant because a naive application is computationally intractable. Finally, reference data from a real-world process allowing the inclusion of dynamic class information are collected and evaluated. The results show that in combination with support vector machines and random forests, pairwise coupling approaches yield the best results, while in combination with deep neural networks, differences between the different approaches are mostly small to negligible. Most importantly, all results empirically confirm that dynamic classification succeeds in improving the respective prediction accuracies. Therefore, it is crucial to pass dynamic class information in respective applications, which requires an appropriate digital infrastructure.
In this thesis, the dependence of charge localization and itinerancy on structural and environmental factors is investigated in two classes of aromatic molecules: pyridones and porphyrins. The focus lies on the effects of isomerism, complexation, solvation, and optical excitation, which accompany crucial biological functions of specific members of these groups of compounds. Several porphyrins play key roles in the metabolism of plants and animals. The nucleobases, which store the genetic information in DNA and RNA, are pyridone derivatives. Additionally, a number of vitamins are based on these two groups of substances.
This thesis aims to answer the question of how the electronic structure of these classes of molecules is modified, enabling the versatile natural functionality. The resulting insights into the effect of constitutional and external factors are expected to facilitate the design of new processes for medicine, light-harvesting, catalysis, and environmental remediation.
The common denominator of pyridones and porphyrins is their aromatic character. As aromaticity was an early-on topic in chemical physics, the overview of relevant theoretical models in this work also mirrors the development of this scientific field in the 20th century. The spectroscopic investigation of these compounds has long been centered on their global, optical transition between frontier orbitals.
The utilization and advancement of X-ray spectroscopic methods characterizing the local electronic structure of molecular samples form the core of this thesis. The element selectivity of the near-edge X-ray absorption fine structure (NEXAFS) is employed to probe the unoccupied density of states at the nitrogen site, which is key for the chemical reactivity of pyridones and porphyrins. The results contribute to the growing database of NEXAFS features and their interpretation, e.g., by advancing the debate on the porphyrin N K-edge through systematic experimental and theoretical arguments. Further, a state-of-the-art laser pump – NEXAFS probe scheme is used to characterize the relaxation pathway of a photoexcited porphyrin on the atomic level.
Resonant inelastic X-ray scattering (RIXS) provides complementary results by accessing the highest occupied valence levels, including symmetry information. It is shown that RIXS is an effective experimental tool for gaining detailed information on the charge densities of individual species in tautomeric mixtures. Additionally, the hRIXS and METRIXS high-resolution RIXS spectrometers, which were commissioned in part in the course of this thesis, will provide access to the ultra-fast and thermal chemistry of pyridones, porphyrins, and many other compounds.
With respect to both classes of bio-inspired aromatic molecules, this thesis establishes that even though pyridones and porphyrins differ largely in their optical absorption bands and hydrogen-bonding abilities, they share a global stabilization of local constitutional changes and relevant external perturbations. It is because of this wide-ranging response that pyridones and porphyrins can be applied in a multitude of biological and technical processes.
We provide the first estimates of the impact of managers’ risk preferences on their training allocation decisions. Our conceptual framework links managers’ risk preferences to firms’ training decisions through the bonuses they expect to receive. Risk-averse managers are expected to select workers with low turnover risk and invest in specific rather than general training. Empirical evidence supporting these predictions is provided using a novel vignette study embedded in a nationally representative survey of firm managers. Risk-tolerant and risk-averse decision makers have significantly different training preferences. Risk aversion results in increased sensitivity to turnover risk. Managers who are risk-averse offer significantly less general training and, in some cases, are more reluctant to train workers with a history of job mobility. All managers, irrespective of their risk preferences, are sensitive to the investment risk associated with training, avoiding training that is more costly or targets those with less occupational expertise or nearing retirement. This suggests the risks of training are primarily due to the risk that trained workers will leave the firm (turnover risk) rather than the risk that the benefits of training do not outweigh the costs (investment risk).
We investigate the effect of the COVID-19 pandemic on self-employed people’s mental health. Using representative longitudinal survey data from Germany, we reveal differential effects by gender: whereas self-employed women experienced a substantial deterioration in their mental health, self-employed men displayed no significant changes up to early 2021. Financial losses are important in explaining these differences. In addition, we find larger mental health responses among self-employed women who were directly affected by government-imposed restrictions and bore an increased childcare burden due to school and daycare closures. We also find that self-employed individuals who are more resilient coped better with the crisis.
Predicting entrepreneurial development based on individual and business-related characteristics is a key objective of entrepreneurship research. In this context, we investigate whether the motives for becoming an entrepreneur influence subsequent entrepreneurial development. In our analysis, we examine a broad range of business outcomes, including survival and income as well as job creation, expansion and innovation activities, for up to 40 months after business formation. Using self-determination theory as conceptual background, we aggregate the start-up motives into a continuous motivational index. Based on a unique dataset of German start-ups founded out of unemployment and non-unemployment, we show that the higher founders score on this index, the better their later business performance. Effects are particularly strong for growth-oriented outcomes like innovation and expansion activities. In a next step, we examine three underlying motivational categories that we term opportunity, career ambition, and necessity. We show that individuals driven by opportunity motives perform better in terms of innovation and business expansion activities, while career ambition is positively associated with survival, income, and the probability of hiring employees. All effects are robust to the inclusion of a large battery of covariates that are proven to be important determinants of entrepreneurial performance.
Subsidizing the geographical mobility of unemployed workers may improve welfare by relaxing their financial constraints and allowing them to find jobs in more prosperous regions. We exploit regional variation in the promotion of mobility programs along administrative borders of German employment agency districts to investigate the causal effect of offering such financial incentives on the job search behavior and labor market integration of unemployed workers. We show that promoting mobility – as intended – causes job seekers to increase their search radius, apply for and accept distant jobs. At the same time, local job search is reduced with adverse consequences for reemployment and earnings. These unintended negative effects are provoked by spatial search frictions. Overall, the unconditional provision of mobility programs harms the welfare of unemployed job seekers.
The importance of carbohydrate structures is enormous due to their ubiquity in our lives. The development of so-called glycomaterials is a result of this tremendous significance. These are used not only for research into fundamental biological processes but also, among other things, as inhibitors of pathogens or as drug delivery systems. This work describes the development of glycomaterials, involving the synthesis of glycoderivatives, -monomers and -polymers. Glycosylamines were synthesized as precursors in a single synthesis step under microwave irradiation to significantly shorten the usual reaction time. Derivatization at the anomeric position was carried out according to the methods developed by Kochetkov and Likhosherstov, which do not require the introduction of protecting groups. Aminated saccharide structures formed the basis for the synthesis of glycomonomers in the β-configuration by methacrylation. In order to obtain α-Man-based monomers for interactions with certain α-Man-binding lectins, a monomer synthesis by Staudinger ligation was developed in this work, which also does not require protecting groups. Modification of the primary hydroxyl group of a saccharide was accomplished by enzyme-catalyzed synthesis. Ribose-containing cytidine was transesterified using the lipase Novozym 435 and microwave irradiation. The resulting monomer synthesis was optimized by varying the reaction partners. To create an amide bond instead of an ester bond, protected cytidine was modified by oxidation followed by amide coupling to form the monomer. This synthetic route was also used to obtain the monomer of its counterpart guanosine. After the nucleoside-based monomers were obtained, they were block copolymerized using the RAFT method. Pre-synthesized pHPMA served as macroCTA to yield cytidine- or guanosine-containing block copolymers.
These isolated block copolymers were then investigated for their self-assembly behavior using UV-Vis, DLS and SEM to serve as a potential thermoresponsive drug delivery system.
Successful communication is something people pursue throughout their lives. To transfer one's own information to others effectively, people employ various linguistic tools, such as word order, prosodic cues, and lexical choices. The exploration of these linguistic cues is known as the study of information structure (IS). An important issue in children's language acquisition is how they acquire IS. This thesis seeks to improve our understanding of how children acquire different tools of focus marking (i.e., prosodic cues, syntactic cues, and the focus particle only) from a cross-linguistic perspective.
In the first study, following Szendrői and colleagues (2017), a sentence-picture verification task was performed to investigate whether three- to five-year-old Mandarin-speaking children as well as Mandarin-speaking adults could use prosodic information to recognize focus in sentences. In the second study, German-speaking adults and children were included in addition to Mandarin-speaking adults and children to test the assumption that children show adult-like performance in understanding sentence focus by identifying language-specific cues in their mother tongue from early on. This study employed the same sentence-picture verification paradigm as the first study, combined with the eye-tracking method. Finally, the last study examined whether five-year-old Mandarin-speaking children could understand pre-subject only sentences and, again, whether prosodic information would help them to better understand such sentences.
The overall results suggest that Mandarin-speaking children can make use of the specific linguistic cues of their ambient language from early on. In Mandarin, a topic-prominent tone language, word order plays a more important role than prosodic information, and even three-year-old Mandarin-speaking children could follow word order information. Although German-speaking children appeared to follow prosodic information, they did not show adult-like performance in the object-accented condition. A plausible reason is that German offers more ways of marking focus, such as flexible word order, prosodic information, and focus particles, so German-speaking children need more time to master these linguistic tools. Another important empirical finding regarding syntactically marked focus in German is that the cleft construction does not appear to be a valid focus construction, which corroborates previous observations (Dufter, 2009). Further, the eye-tracking method helped to uncover how parsers direct their attention when recognizing focus. The final study showed that, given explicit verbal context, Mandarin-speaking children could understand pre-subject only sentences, bringing a better understanding of how Mandarin-speaking children acquire the focus particle only.
Transitional justice is conventionally theorized as how a society deals with past injustices after regime change and alongside democratization. Nonetheless, scholars have not reached a consensus on what is to be included or excluded. Recent ideas of transformative justice seek to expand the understanding of transitional justice to include systemic restructuring and socioeconomic considerations. In the context of Nicaragua — where two transitions occurred within an 11-year span — very little transitional justice took place, in terms of the conventional concept of top-down legalistic mechanisms; however, distinct structural changes and socioeconomic policies can be found with each regime change. By analyzing the transformative justice elements of Nicaragua’s dual transition, this chapter seeks to expand the understanding of transitional justice to include how these factors influence goals of transitions such as sustainable peace and reconciliation for past injustices. The results argue for increased attention to transformative justice theories and a more nuanced conception of justice.
Entrepreneurial failure
(2022)
Although entrepreneurial failure (EF) is a fairly recent topic in the entrepreneurship literature, the number of publications has been growing particularly rapidly. Our systematic review maps and integrates the research on EF based on a multi-method approach to give structure and consistency to this fragmented field of research. The results reveal that the field revolves around six thematic clusters of EF: 1) Soft underpinnings of EF, 2) Contextuality of EF, 3) Perception of EF, 4) Two-sided effects of EF, 5) Multi-stage EF effects, and 6) Institutional drivers of EF. An integrative framework of the positive and negative effects of entrepreneurial failure is proposed, and a research agenda is suggested.
The discovery that certain diseases have specific miRNA signatures which correspond to disease progression opens a new biomarker category. The detection of these small non-coding RNAs is routinely performed on body fluids or tissues with real-time PCR, next-generation sequencing, or amplification-based miRNA assays. Antibody-based detection systems are easier to handle than PCR or sequencing and can be considered alternative methods to support miRNA diagnostics in the future. In this study, we describe the generation of a camelid heavy-chain-only antibody specifically recognizing miRNAs to establish an antibody-based detection method. The generation of nucleic acid-specific binders is a challenge. We selected camelid binders via phage display, expressed them as VHH as well as full-length antibodies, and characterized the binding to several miRNAs from a signature specific for dilated cardiomyopathy. The described workflow can be used to create miRNA-specific binders and establish antibody-based detection methods to provide an additional way to analyze disease-specific miRNA signatures.
Isometric muscle function
(2022)
The cumulative dissertation consists of four original articles. These considered isometric muscle actions in healthy humans from a basic physiological view (oxygen and blood supply) as well as possibilities of their distinction. It includes a novel approach to measure a specific form of isometric holding function which has not been considered in motor science so far. This function is characterized by an adaptation to varying external forces and is of particular importance in daily activities and sports.
The first part of the research program analyzed how the biceps brachii muscle is supplied with oxygen and blood by adapting to a moderate constant load until task failure (publication 1). In this regard, regulative mechanisms were investigated in relation to the issue of presumably compressed capillaries due to high intramuscular pressures (publication 2).
Furthermore, it was examined whether oxygenation and time to task failure (TTF) differ compared to another isometric muscle function (publication 3). This function is mainly of diagnostic interest, measured as the maximal voluntary isometric contraction (MVIC) as a gold standard. For that, a person pulls on or pushes against an insurmountable resistance. However, the underlying pulling or pushing form of isometric muscle action (PIMA) differs from the holding one (HIMA).
HIMAs have mainly been examined using constant loads. In order to quantify the adaptability to varying external forces, a new approach was necessary and was developed in the second part of the research program. A device was constructed based on a previously developed pneumatic measurement system. The device had to be able to measure the Adaptive Force (AF) of elbow extensor muscles. The AF quantifies the adaptability to increasing external forces under isometric (AFiso) and eccentric (AFecc) conditions. At first, it was examined whether these parameters can be reliably assessed with the new device (publication 4). Subsequently, the main research question was investigated: Is the maximal AFiso a specific and independent variable of muscle function in comparison to the MVIC? Furthermore, both research parts contained a sub-question of how results can be influenced.
Parameters of local oxygen saturation (SvO2) and capillary blood filling (rHb) were non-invasively recorded by a spectrophotometer during maximal and submaximal HIMAs and PIMAs.
These were the main findings: Under load, SvO2 and rHb always adjusted into a steady state after an initial decrease. Nevertheless, their behavior could roughly be categorized into two types. In type I, both parameters behaved nearly parallel to each other. In contrast, their progression over time was partly inverse in type II. The inverse behavior probably depends on the level of deoxygenation since rHb increased reliably at a suggested threshold of about 59% SvO2. This triggered mechanism and the found homeostatic steady states seem to be in conflict with the concept of mechanically compressed capillaries and consequently with a restricted blood flow. Anatomical configuration of blood vessels might provide one hypothetical explanation of how blood flow might be maintained. HIMA and PIMA did not differ regarding oxygenation and allocation to the described types. The TTF tended to be longer during PIMA.
As a sub-question, oxygenation and TTF were compared between HIMA and intermittent voluntary muscle twitches during a weight holding task. TTF, but not oxygenation, differed significantly (Twitch > HIMA). A changed neuromuscular control might serve as a speculative explanation of these results. This is supported by the finding that the TTF did not correlate significantly with the extent of deoxygenation irrespective of the performed task (HIMA, PIMA or Twitch).
Other neuromuscular aspects of muscle function were considered in the second part of the research program. The new device mentioned above detected different force capacities within four trials on each of two days. Among AF measurements, the functional counterpart of a concentric muscle action merging into an isometric one was analyzed in comparison to the MVIC.
Based on the results, it can be assumed that a prior concentric muscle action does not influence the MVIC. However, the results were inconsistent and possibly influenced by systematic errors. In contrast, maximal variables of the AF (AFisomax and AFeccmax) could be measured reliably, as indicated by a high test-retest reliability. Despite substantial correlations between force variables, the AFisomax differed significantly from MVIC and AFmax, which was identical with AFeccmax in almost all cases. Moreover, AFisomax showed the highest variability between trials.
These results indicate that maximal force capacities should be assessed separately. The adaptive holding capacity of a muscle can be lower compared to a commonly determined MVIC. This is of relevance since muscles frequently need to respond adequately to external forces. If their response does not correspond to the external impact, the muscle is forced to lengthen. In this scenario, joints are not completely stabilized and an injury may occur. This outlined issue should be addressed in future research in the field of sport and health sciences.
At last, the dissertation presents another possibility to quantify the AFisomax by use of a handheld device applied in combination with a manual muscle test. This assessment delivers a more practical way for clinical purposes.
Background
Isometric muscle actions can be performed either by initiating the action, e.g., pulling on an immovable resistance (PIMA), or by reacting to an external load, e.g., holding a weight (HIMA). In the present study, it was mainly examined if these modalities could be differentiated by oxygenation variables as well as by time to task failure (TTF). Furthermore, it was analyzed if variables are changed by intermittent voluntary muscle twitches during weight holding (Twitch). It was assumed that twitches during a weight holding task change the character of the isometric muscle action from reacting (≙ HIMA) to acting (≙ PIMA).
Methods
Twelve subjects (two drop outs) randomly performed two tasks (HIMA vs. PIMA or HIMA vs. Twitch, n = 5 each) with the elbow flexors at 60% of maximal torque maintained until muscle failure with each arm. Local capillary venous oxygen saturation (SvO2) and relative hemoglobin amount (rHb) were measured by light spectrometry.
Results
Within subjects, no significant differences were found between tasks regarding the behavior of SvO2 and rHb, the slope and extent of deoxygenation (max. SvO2 decrease), SvO2 level at global rHb minimum, and time to SvO2 steady states. The TTF was significantly longer during Twitch and PIMA (incl. Twitch) compared to HIMA (p = 0.043 and 0.047, respectively). There was no substantial correlation between TTF and maximal deoxygenation independently of the task (r = − 0.13).
Conclusions
HIMA and PIMA seem to have a similar microvascular oxygen and blood supply. The supply might be sufficient, which is expressed by homeostatic steady states of SvO2 in all trials and increases in rHb in most of the trials. Intermittent voluntary muscle twitches might not serve as a further support but extend the TTF. A changed neuromuscular control is discussed as possible explanation.
Traditional organizations are strongly encouraged by emerging digital customer behavior and digital competition to transform their businesses for the digital age. Incumbents are particularly exposed to the field of tension between maintaining and renewing their business model. Banking is one of the industries most affected by digitalization, with a large stream of digital innovations around Fintech. Most research contributions focus on digital innovations, such as Fintech, but there are only a few studies on the related challenges and perspectives of incumbent organizations, such as traditional banks. Against this background, this dissertation examines the specific causes, effects and solutions for traditional banks in digital transformation − an underrepresented research area so far.
The first part of the thesis examines how digitalization has changed the latent customer expectations in banking and studies the underlying technological drivers of evolving business-to-consumer (B2C) business models. Online consumer reviews are systematized to identify latent concepts of customer behavior and future decision paths as strategic digitalization effects. Furthermore, the service attribute preferences, the impact of influencing factors and the underlying customer segments are uncovered for checking accounts in a discrete choice experiment. The dissertation contributes here to customer behavior research in digital transformation, moving beyond the technology acceptance model. In addition, the dissertation systematizes value proposition types in the evolving discourse around smart products and services as key drivers of business models and market power in the platform economy.
The second part of the thesis focuses on the effects of digital transformation on the strategy development of financial service providers, which are classified along with their firm performance levels. Standard types are derived based on fuzzy-set qualitative comparative analysis (fsQCA), with facade digitalization as one typical standard type for low performing incumbent banks that lack a holistic strategic response to digital transformation. Based on this, the contradictory impact of digitalization measures on key business figures is examined for German savings banks, confirming that the shift towards digital customer interaction was not accompanied by new revenue models diminishing bank profitability. The dissertation further contributes to the discourse on digitalized work designs and the consequences for job perceptions in banking customer advisory. The threefold impact of the IT support perceived in customer interaction on the job satisfaction of customer advisors is disentangled.
In the third part of the dissertation, design-oriented solutions are developed for core action areas of digitalized business models, i.e., data and platforms. A consolidated taxonomy for data-driven business models and a future reference model for digital banking have been developed. The impact of the platform economy is demonstrated here using the example of the market entry by Bigtech. The role-based e3-value modeling is extended by meta-roles and role segments and linked to value co-creation mapping in VDML. In this way, the dissertation extends enterprise modeling research on platform ecosystems and value co-creation using the example of banking.
Objective: A role for microRNAs is implicated in several biological and pathological processes. We investigated the effects of high-intensity interval training (HIIT) and moderate-intensity continuous training (MICT) on molecular markers of diabetic cardiomyopathy in rats.
Methods: Eighteen male Wistar rats (260 ± 10 g; aged 8 weeks) with streptozotocin (STZ)-induced type 1 diabetes mellitus (55 mg/kg, IP) were randomly allocated to three groups: control, MICT, and HIIT. The two different training protocols were performed 5 days each week for 5 weeks. Cardiac performance (end-systolic and end-diastolic dimensions, ejection fraction), the expression of miR-206, HSP60, and markers of apoptosis (cleaved PARP and cytochrome C) were determined at the end of the exercise interventions.
Results: Both exercise interventions (HIIT and MICT) decreased blood glucose levels and improved cardiac performance, with greater changes in the HIIT group (p < 0.001, η2: 0.909). While the expressions of miR-206 and apoptotic markers decreased in both training protocols (p < 0.001, η2: 0.967), HIIT caused greater reductions in apoptotic markers and produced a 20% greater reduction in miR-206 compared with the MICT protocol (p < 0.001). Furthermore, both training protocols enhanced the expression of HSP60 (p < 0.001, η2: 0.976), with a nearly 50% greater increase in the HIIT group compared with MICT.
Conclusions: Our results indicate that both exercise protocols, HIIT and MICT, have the potential to reduce diabetic cardiomyopathy by modifying the expression of miR-206 and its downstream targets of apoptosis. It seems, however, that HIIT is even more effective than MICT at modulating these molecular markers.
Aims: High-intensity interval training (HIIT) improves mitochondrial characteristics. This study compared the impact of two workload-matched HIIT protocols with different work:recovery ratios on regulatory factors related to mitochondrial biogenesis in the soleus muscle of diabetic rats.
Materials and methods: Twenty-four Wistar rats were randomly divided into four equal-sized groups: non-diabetic control, diabetic control (DC), diabetic with long recovery exercise [4–5 × 2-min running at 80%–90% of the maximum speed reached with 2-min of recovery at 40% of the maximum speed reached (DHIIT1:1)], and diabetic with short recovery exercise (5–6 × 2-min running at 80%–90% of the maximum speed reached with 1-min of recovery at 30% of the maximum speed reached [DHIIT2:1]). Both HIIT protocols were completed five times/week for 4 weeks while maintaining equal running distances in each session.
Results: Gene and protein expressions of PGC-1α, p53, and citrate synthase of the muscles increased significantly following DHIIT1:1 and DHIIT2:1 compared to DC (p < 0.05). Most parameters, except for PGC-1α protein (p = 0.597), were significantly higher in DHIIT2:1 than in DHIIT1:1 (p < 0.05). Both DHIIT groups showed significant increases in maximum speed, with larger increases in DHIIT2:1 compared with DHIIT1:1.
Conclusion: Our findings indicate that both HIIT protocols can potently up-regulate gene and protein expression of PGC-1α, p53, and CS. However, DHIIT2:1 has superior effects compared with DHIIT1:1 in improving mitochondrial adaptive responses in diabetic rats.
Business incubators hatch start-ups, helping them to survive their early stage and to create a solid foundation for sustainable growth by providing services and access to knowledge. The great practical relevance led to a strong interest of researchers and a high output of scholarly publications, which made the field complex and scattered. To organize the research on incubators and provide a systematic overview of the field, we conducted bibliometric performance analyses and science mappings. The performance analyses depict the temporal development of the number of incubator publications and their citations, the most cited and most productive journals, countries, and authors, and the 20 most cited articles. The author keyword co-occurrence analysis distinguishes six, and the bibliographic coupling seven research themes. Based on a content analysis of the science mappings, we propose a research framework for future research on business incubators.
This thesis deals with the synthesis of protein and composite protein-mineral microcapsules by the application of high-intensity ultrasound at the oil-water interface. While one system is stabilized by BSA molecules, the other system is stabilized by different nanoparticles modified with BSA. A comprehensive study of all synthesis stages as well as of the resulting capsules was carried out, and a plausible explanation of the capsule formation mechanism was proposed. During the formation of BSA microcapsules, the protein molecules first adsorb at the O/W interface and unfold there, forming an interfacial network stabilized by hydrophobic interactions and hydrogen bonds between neighboring molecules. Simultaneously, the ultrasonic treatment causes the cross-linking of the BSA molecules via the formation of intermolecular disulfide bonds. In this thesis, experimental evidence of ultrasonically induced cross-linking of the BSA in the shells of protein-based microcapsules is demonstrated. Thereby, the concept proposed many years ago by Suslick and co-workers is confirmed by experimental evidence for the first time. Moreover, a consistent mechanism for the formation of intermolecular disulfide bonds in capsule shells is proposed that is based on the redistribution of thiol and disulfide groups in BSA under the action of high-energy ultrasound. The formation of composite protein-mineral microcapsules loaded with three different oils and shells composed of nanoparticles was also successful. The nature of the loaded oil and the type of nanoparticles in the shell influenced the size and shape of the microcapsules. The examination of the composite capsules revealed that the BSA molecules adsorbed on the nanoparticle surface in the capsule shell are not cross-linked by intermolecular disulfide bonds. Instead, a Pickering emulsion formation takes place.
The surface modification of composite microcapsules through both pre-modification of main components and also the post-modification of the surface of ready composite microcapsules was successfully demonstrated. Additionally, the mechanical properties of protein and composite protein-mineral microcapsules were compared. The results showed that the protein microcapsules are more resistant to elastic deformation.
The Role of the Precuneus in Human Spatial Updating in a Real Environment Setting—A cTBS Study
(2022)
As we move through an environment, we update the positions of our body relative to other objects, even when some objects temporarily or permanently leave our field of view—this ability is termed egocentric spatial updating and plays an important role in everyday life. Yet our knowledge about its representation in the brain remains scarce, with previous studies using virtual movements in virtual environments or patients with brain lesions suggesting that the precuneus might play an important role. However, it is unclear whether this assumption also holds when healthy humans move in real environments where full body-based cues are available in addition to the visual cues typically used in many VR studies. Therefore, in this study we investigated the role of the precuneus in egocentric spatial updating in a real-environment setting in 20 healthy young participants who underwent two conditions in a cross-over design: (a) stimulation, achieved by applying continuous theta-burst stimulation (cTBS) to inhibit the precuneus, and (b) a sham condition (activated coil turned upside down). In both conditions, participants had to walk back blindfolded to objects they had previously memorized while walking with open eyes. Simplified trials (without spatial updating) were used as a control condition to make sure the participants were not affected by factors such as walking blindfolded, vestibular deficits or working memory deficits. A significant interaction was found, with participants performing better in the sham condition than under real stimulation, showing smaller errors in both distance and angle. The results of our study reveal evidence of an important role of the precuneus in real-environment egocentric spatial updating; studies on larger samples are necessary to confirm and further investigate this finding.
Microplastics (MPs) in the environment are expected to increase in the near future due to the increasing consumption of plastic products and further fragmentation into small pieces. The fate and effects of MPs once released into the freshwater environment are still scarcely studied compared to the marine environment. To understand possible effects and interactions of MPs in the freshwater environment, planktonic zooplankton organisms are very useful because of their crucial trophic role. In particular, freshwater rotifers are among the most abundant organisms and form the interface between primary producers and secondary consumers. The aim of my thesis was to investigate the ingestion and the effects of MPs in rotifers under a more natural scenario and to identify processes, such as the aggregation of MPs, the food dilution effect and increasing MP concentrations, that could influence the final outcome of MPs in the environment. In a near-natural scenario, the interaction of MPs with bacteria and algae, their aggregation, and their size and concentration are considered drivers of ingestion and effect. The aggregation of MPs makes smaller MPs more available to rotifers and larger MPs less ingested. The negative effect caused by the ingestion of MPs was modulated by their size but also by the quantity and quality of food, which caused variable responses. Rotifers in the environment are subject to food limitation, and the presence of MPs could exacerbate this condition and decrease population growth and reproduction. Finally, in a scenario incorporating an entire zooplanktonic community, MPs were ingested by most individuals depending on their feeding mode but also on the concentration of MPs, which was found to be essential for the availability of MPs.
This study highlights the importance of investigating MPs from a more environmental perspective, which could provide an alternative and realistic view of the effects of MPs in the ecosystem.
Duplicate detection describes the process of finding multiple representations of the same real-world entity in the absence of a unique identifier, and has many application areas, such as customer relationship management, genealogy and social sciences, or online shopping. Due to the increasing amount of data in recent years, the problem has become even more challenging on the one hand, but has led to a renaissance in duplicate detection research on the other hand.
This thesis examines the effects and opportunities of transitive relationships on the duplicate detection process. Transitivity implies that if record pairs ⟨ri,rj⟩ and ⟨rj,rk⟩ are classified as duplicates, then record pair ⟨ri,rk⟩ also has to be a duplicate. However, this reasoning might contradict the pairwise classification, which is usually based on the similarity of objects. An essential property of similarity, in contrast to equivalence, is that similarity is not necessarily transitive.
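To make this contradiction concrete, here is a minimal sketch (not the thesis implementation; the names and the similarity threshold are illustrative assumptions): record pairs are classified as duplicates by a string-similarity threshold, and a union-find structure computes the transitive closure. The closure can merge a pair whose own pairwise similarity is below the threshold.

```python
# Illustrative sketch: threshold-based pairwise classification + transitive
# closure via union-find. Names and threshold are hypothetical.
from difflib import SequenceMatcher

def similar(a, b, threshold=0.8):
    """Classify a record pair as a duplicate if string similarity >= threshold."""
    return SequenceMatcher(None, a, b).ratio() >= threshold

class UnionFind:
    def __init__(self, items):
        self.parent = {x: x for x in items}
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)

records = ["Jon Smith", "John Smith", "John Smith Jr."]
uf = UnionFind(records)
for i, ri in enumerate(records):
    for rj in records[i + 1:]:
        if similar(ri, rj):          # pairwise decision
            uf.union(ri, rj)         # closure may add non-similar pairs

clusters = {}
for r in records:
    clusters.setdefault(uf.find(r), []).append(r)
```

Here "Jon Smith" and "John Smith Jr." are not similar enough pairwise, yet the transitive closure puts all three records into one cluster via "John Smith".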
First, we experimentally evaluate the effect of an increasing data volume on the threshold selection to classify whether a record pair is a duplicate or non-duplicate. Our experiments show that independently of the pair selection algorithm and the used similarity measure, selecting a suitable threshold becomes more difficult with an increasing number of records due to an increased probability of adding a false duplicate to an existing cluster. Thus, the best threshold changes with the dataset size, and a good threshold for a small (possibly sampled) dataset is not necessarily a good threshold for a larger (possibly complete) dataset. As data grows over time, earlier selected thresholds are no longer a suitable choice, and the problem becomes worse for datasets with larger clusters.
Second, we present with the Duplicate Count Strategy (DCS) and its enhancement DCS++ two alternatives to the standard Sorted Neighborhood Method (SNM) for the selection of candidate record pairs. DCS adapts SNM's window size based on the number of detected duplicates, and DCS++ uses transitive dependencies to save complex comparisons for finding duplicates in larger clusters. We prove that with a proper (domain- and data-independent!) threshold, DCS++ is more efficient than SNM without loss of effectiveness.
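For reference, the baseline SNM can be sketched in a few lines (a simplified illustration with hypothetical names; DCS replaces the fixed window size w with one adapted to the number of duplicates detected):

```python
# Sketch of the standard Sorted Neighborhood Method: sort records by a
# sorting key, then compare each record only with the next w-1 records.
def snm_pairs(records, key, w=3):
    """Yield candidate record pairs from a sliding window of size w."""
    ordered = sorted(records, key=key)
    for i in range(len(ordered)):
        for j in range(i + 1, min(i + w, len(ordered))):
            yield ordered[i], ordered[j]
```

With n records this produces O(n * w) candidate pairs instead of all O(n^2) comparisons; duplicates whose sort keys place them further than w-1 positions apart are missed, which is the weakness the adaptive window targets.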
Third, we tackle the problem of contradicting pairwise classifications. Usually, the transitive closure is used for pairwise classifications to obtain a transitively closed result set. However, the transitive closure disregards negative classifications. We present three new and several existing clustering algorithms and experimentally evaluate them on various datasets and under various algorithm configurations. The results show that the commonly used transitive closure is inferior to most other clustering algorithms, especially for the precision of results. In scenarios with larger clusters, our proposed EMCC algorithm is, together with Markov Clustering, the best performing clustering approach for duplicate detection, although its runtime is longer than Markov Clustering due to the subexponential time complexity. EMCC especially outperforms Markov Clustering regarding the precision of the results and additionally has the advantage that it can also be used in scenarios where edge weights are not available.
A decade ago, it became feasible to store multi-terabyte databases in main memory. These in-memory databases (IMDBs) profit from DRAM's low latency and high throughput as well as from the removal of costly abstractions used in disk-based systems, such as the buffer cache. However, as the DRAM technology approaches physical limits, scaling these databases becomes difficult. Non-volatile memory (NVM) addresses this challenge. This new type of memory is persistent, has more capacity than DRAM (4x), and does not suffer from its density-inhibiting limitations. Yet, as NVM has a higher latency (5-15x) and a lower throughput (0.35x), it cannot fully replace DRAM.
IMDBs thus need to navigate the trade-off between the two memory tiers. We present a solution to this optimization problem. Leveraging information about access frequencies and patterns, our solution utilizes NVM's additional capacity while minimizing the associated access costs. Unlike buffer cache-based implementations, our tiering abstraction does not add any costs when reading data from DRAM. As such, it can act as a drop-in replacement for existing IMDBs. Our contributions are as follows:
(1) As the foundation for our research, we present Hyrise, an open-source, columnar IMDB that we re-engineered and rewrote from scratch. Hyrise enables realistic end-to-end benchmarks of SQL workloads and offers query performance that is competitive with other research and commercial systems. At the same time, Hyrise is easy to understand and modify, as repeatedly demonstrated by its use in research and teaching.
(2) We present a novel memory management framework for different memory and storage tiers. By encapsulating the allocation and access methods of these tiers, we enable existing data structures to be stored on different tiers with no modifications to their implementation. Besides DRAM and NVM, we also support and evaluate SSDs and have made provisions for upcoming technologies such as disaggregated memory.
(3) To identify the parts of the data that can be moved to (s)lower tiers with little performance impact, we present a tracking method that identifies access skew both in the row and column dimensions and that detects patterns within consecutive accesses. Unlike existing methods that have substantial associated costs, our access counters exhibit no identifiable overhead in standard benchmarks despite their increased accuracy.
(4) Finally, we introduce a tiering algorithm that optimizes the data placement for a given memory budget. In the TPC-H benchmark, this allows us to move 90% of the data to NVM while the throughput is reduced by only 10.8% and the query latency is increased by 11.6%. With this, we outperform approaches that ignore the workload's access skew and access patterns and increase the query latency by 20% or more.
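As a rough illustration of the tiering decision in contribution (4) — not the thesis's actual algorithm — data placement under a fixed DRAM budget can be sketched as a greedy assignment by access density; the segment names and access counts below are hypothetical:

```python
# Illustrative sketch (not the thesis's algorithm): greedily keep the
# hottest data segments in DRAM under a fixed byte budget and spill the
# rest to the slower NVM tier.

def place_segments(segments, dram_budget):
    """segments: list of (name, size_bytes, access_count) tuples.
    Returns a dict mapping each segment name to "DRAM" or "NVM"."""
    # Rank by access density: accesses per byte of DRAM consumed.
    ranked = sorted(segments, key=lambda s: s[2] / s[1], reverse=True)
    placement, used = {}, 0
    for name, size, _ in ranked:
        if used + size <= dram_budget:
            placement[name] = "DRAM"
            used += size
        else:
            placement[name] = "NVM"
    return placement

placement = place_segments(
    [("orders.id", 100, 9000),
     ("orders.comment", 800, 50),
     ("lineitem.price", 300, 6000)],
    dram_budget=450,
)
# Hot, small columns stay in DRAM; the rarely read comment column moves to NVM.
```

A workload-aware variant would replace the raw access count with the pattern-aware counters described in contribution (3).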
Individually, our contributions provide novel approaches to current challenges in systems engineering and database research. Combining them allows IMDBs to scale past the limits of DRAM while continuing to profit from the benefits of in-memory computing.
Learning from failure
(2022)
Regression testing is a widespread practice in today's software industry to ensure software product quality. Developers derive a set of test cases, and execute them frequently to ensure that their change did not adversely affect existing functionality. As the software product and its test suite grow, the time to feedback during regression test sessions increases, and impedes programmer productivity: developers wait longer for tests to complete, and delays in fault detection render fault removal increasingly difficult.
Test case prioritization addresses the problem of long feedback loops by reordering test cases, such that test cases with a high failure probability run first and test case failures become actionable early in the testing process. We ask: given test execution schedules reconstructed from publicly available data, to what extent can their fault detection efficiency be improved, and which technique yields the most efficient test schedules with respect to APFD (Average Percentage of Faults Detected)?
To this end, we recover 6,200 regression test sessions from the build log files of Travis CI, a popular continuous integration service, and gather 62,000 accompanying changelists. We evaluate the efficiency of current test schedules, and examine the prioritization results of state-of-the-art lightweight, history-based heuristics. We propose and evaluate a novel set of prioritization algorithms, which connect software changes and test failures in a matrix-like data structure.
Our studies indicate that the optimization potential is substantial, because the existing test plans score only 30% APFD. The predictive power of past test failures proves to be outstanding: simple heuristics, such as repeating tests with failures in recent sessions, result in efficiency scores of 95% APFD. The best-performing matrix-based heuristic achieves a similar score of 92.5% APFD. In contrast to prior approaches, we argue that matrix-based techniques are useful beyond the scope of effective prioritization, and enable a number of use cases involving software maintenance.
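The recent-failure heuristic and the APFD metric referred to above can be sketched as follows (test names and failure history are hypothetical, and this simple APFD form assumes one fault per failing test):

```python
# Illustrative sketch: "repeat recently failed tests first" prioritization
# and the APFD score used to compare schedules.

def apfd(schedule, failing_tests):
    """Average Percentage of Faults Detected (one fault per failing test)."""
    n, m = len(schedule), len(failing_tests)
    # 1-based position at which each fault is first detected
    first_positions = [schedule.index(t) + 1 for t in failing_tests]
    return 1 - sum(first_positions) / (n * m) + 1 / (2 * n)

def prioritize(tests, recent_failures):
    """Run tests that failed in recent sessions first; keep order otherwise."""
    return sorted(tests, key=lambda t: t not in recent_failures)

tests = ["t1", "t2", "t3", "t4"]
schedule = prioritize(tests, recent_failures={"t3"})
# schedule: ["t3", "t1", "t2", "t4"] -- the recent failure runs first
```

Running the previously failing test first pushes its APFD close to the optimum; an unprioritized schedule that detects the fault last would score far lower.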
We validate our findings from continuous integration processes by extending a continuous testing tool within development environments with means of test prioritization, and pose further research questions. We think that our findings are suited to propel adoption of (continuous) testing practices, and that programmers' toolboxes should contain test prioritization as an essential productivity tool.
Fitness, risk taking, and spatial behavior covary with boldness in experimental vole populations
(2022)
Individuals of a population may vary along a pace-of-life syndrome from highly fecund, short-lived, bold, dispersive “fast” types at one end of the spectrum to less fecund, long-lived, shy, plastic “slow” types at the other end. Risk-taking behavior might mediate the underlying life history trade-off, but empirical evidence supporting this hypothesis is still ambiguous. Using experimentally created populations of common voles (Microtus arvalis)—a species with distinct seasonal life history trajectories—we aimed to test whether individual differences in boldness behavior covary with risk taking, space use, and fitness. We quantified risk taking, space use (via automated tracking), survival, and reproductive success (via genetic parentage analysis) in 8 to 14 experimental, mixed-sex populations of 113 common voles of known boldness type in large grassland enclosures over a significant part of their adult life span and two reproductive events. Populations were assorted to contain extreme boldness types (bold or shy) of both sexes. Bolder individuals took more risks than shyer ones, which did not affect survival. Bolder males but not females produced more offspring than shy conspecifics. Daily home range and core area sizes, based on 95% and 50% Kernel density estimates (20 ± 10 per individual, n = 54 individuals), were highly repeatable over time. Individual space use unfolded differently for sex-boldness type combinations over the course of the experiment. While day ranges decreased for shy females, they increased for bold females and all males. Space use trajectories may, hence, indicate differences in coping styles when confronted with a novel social and physical environment. Thus, interindividual differences in boldness predict risk taking under near-natural conditions and have consequences for fitness in males, which have a higher reproductive potential than females. 
Given extreme inter- and intra-annual fluctuations in population density in the study species and its short life span, density-dependent fluctuating selection operating differently on the sexes might maintain (co)variation in boldness, risk taking, and pace-of-life.
The increasing introduction of non-native plant species may pose a threat to local biodiversity. However, the basis of successful plant invasion is not conclusively understood, especially since these plant species can adapt to the new range within a short period of time despite impoverished genetic diversity of the starting populations. In this context, DNA methylation is considered promising to explain successful adaptation mechanisms in the new habitat. DNA methylation is a heritable variation in gene expression without changing the underlying genetic information. Thus, DNA methylation is considered a so-called epigenetic mechanism, but has been studied in mainly clonally reproducing plant species or genetic model plants. An understanding of this epigenetic mechanism in the context of non-native, predominantly sexually reproducing plant species might help to expand knowledge in biodiversity research on the interaction between plants and their habitats and, based on this, may enable more precise measures in conservation biology.
For my studies, I combined chemical DNA demethylation of field-collected seed material from predominantly sexually reproducing species and rearing offspring under common climatic conditions to examine DNA methylation in an ecological-evolutionary context. The contrast of chemically treated (demethylated) plants, whose variation in DNA methylation was artificially reduced, and untreated control plants of the same species allowed me to study the impact of this mechanism on adaptive trait differentiation and local adaptation. With this experimental background, I conducted three studies examining the effect of DNA methylation in non-native species along a climatic gradient and also between climatically divergent regions.
The first study focused on adaptive trait differentiation in two invasive perennial goldenrod species, Solidago canadensis sensu lato and S. gigantea AITON, along a climate gradient of more than 1000 km in length in Central Europe. I found population differences in flowering timing, plant height, and biomass in the longer-established S. canadensis, but only in the number of regrowing shoots for S. gigantea. While S. canadensis did not show any population structure, I was able to identify three genetic groups along this climatic gradient in S. gigantea. Surprisingly, demethylated plants of both species showed no change in the majority of traits studied. In the subsequent second study, I focused on the longer-established goldenrod species S. canadensis and used molecular analyses to infer spatial epigenetic and genetic population differences in the same specimens from the previous study. I found weak genetic but no epigenetic spatial variation between populations. Additionally, I was able to identify one genetic marker and one epigenetic marker putatively susceptible to selection. However, the results of this study reconfirmed that the epigenetic mechanism of DNA methylation appears to be hardly involved in adaptive processes within the new range in S. canadensis.
Finally, I conducted a third study in which I reciprocally transplanted short-lived plant species between two climatically divergent regions in Germany to investigate local adaptation at the plant family level. For this purpose, I used four plant families (Amaranthaceae, Asteraceae, Plantaginaceae, Solanaceae) and here I additionally compared between non-native and native plant species. Seeds were transplanted to regions with a distance of more than 600 kilometers and had either a temperate-oceanic or a temperate-continental climate. In this study, some species were found to be maladapted to their own local conditions, both in non-native and native plant species alike. In demethylated individuals of the plant species studied, DNA methylation had inconsistent but species-specific effects on survival and biomass production. The results of this study highlight that DNA methylation did not make a substantial contribution to local adaptation in the non-native as well as native species studied.
In summary, my work showed that DNA methylation plays a negligible role in both adaptive trait variation along climatic gradients and local adaptation in non-native plant species that either exhibit a high degree of genetic variation or rely mainly on sexual reproduction with low clonal propagation. I was able to show that the adaptive success of these non-native plant species can hardly be explained by DNA methylation, but could be a possible consequence of multiple introductions, dispersal corridors and meta-population dynamics. Similarly, my results illustrate that the use of plant species that do not predominantly reproduce clonally and are not model plants is essential to characterize the effect size of epigenetic mechanisms in an ecological-evolutionary context.
Biological invasions may result from multiple introductions, which might compensate for reduced gene pools caused by bottleneck events, but could also dilute adaptive processes. A previous common-garden experiment showed heritable latitudinal clines in fitness-related traits in the invasive goldenrod Solidago canadensis in Central Europe. These latitudinal clines remained stable even in plants chemically treated with zebularine to reduce epigenetic variation. However, despite the heritability of traits investigated, genetic isolation-by-distance was non-significant. Utilizing the same specimens, we applied a molecular analysis of (epi)genetic differentiation with standard and methylation-sensitive (MSAP) AFLPs. We tested whether this variation was spatially structured among populations and whether zebularine had altered epigenetic variation. Additionally, we used genome scans to mine for putative outlier loci susceptible to selection processes in the invaded range. Despite the absence of isolation-by-distance, we found spatial genetic neighborhoods among populations and two AFLP clusters differentiating northern and southern Solidago populations. Genetic and epigenetic diversity were significantly correlated, but not linked to phenotypic variation. Hence, no spatial epigenetic patterns were detected along the latitudinal gradient sampled. Applying genome-scan approaches (BAYESCAN, BAYESCENV, RDA, and LFMM), we found 51 genetic and epigenetic loci putatively responding to selection. One of these genetic loci was significantly more frequent in populations at the northern range. Also, one epigenetic locus was more frequent in populations in the southern range, but this pattern was lost under zebularine treatment. Our results point to some genetic, but not epigenetic adaptation processes along a large-scale latitudinal gradient of S. canadensis in its invasive range.
Hunting Down Animal Verbs
(2022)
Language change is an essential feature of human language, and it is therefore one of the focal areas of the scientific study of language. Language change is always tacitly at work in all languages of the world and at all levels of a given language, be it phonology, morphology, syntax, semantics, etc. It has been suggested that it is precisely the capacity to constantly change and adjust that allows language to keep serving the communicative goals of its users, from ancient to modern times (Fauconnier & Turner, 2003, p. 179).
This thesis investigates an especially salient pattern of lexicogrammatical change, namely word-formation of verbs from animal nouns by zero-derivation, in the process of which such nouns as, for example, dog, horse, or beaver change their usage and meaning to produce animal verbs: to dog ‘to follow someone persistently and with a malicious intent’, to horse about/around ‘to make fun of, to ‘rag’, to ridicule someone’ and to beaver away ‘to work with great enthusiasm’ respectively. In the previous literature this pattern of language change has been termed verbal zoosemy (e.g. Kiełtyka, 2016), i.e. metaphorical construal of human actions by means of linguistic material from the domain of animals.
The approach taken in this study is not to simply report on the objective changes in the morphology, syntactic distribution and meaning of such linguistic units before and after conversion, but to uncover the complexity of cognitive mechanisms which allow the speakers of English to reclassify such well-established nominal units as animal nouns into verbs. It is assumed that the grammatical change in these lexical units is predicated on and triggered by preceding semantic change. Thus, the study is set in the framework of Cognitive Historical Semantics and employs the Conceptual Metaphor and Metonymy Theory (CMMT) to untangle the intricacies of the semantic change making the grammatical change of animal nouns into verbs possible and acceptable in the minds of English speakers.
To this end, this study employed the Oxford English Dictionary Online (OED Online) to compile a glossary of 96 denominal animal verbal forms tied to 209 verbal senses (most verbs in the dataset displayed polysemy). The data collected from the OED Online included not only the senses of the verbs, but also the date of the earliest recorded use of the verbal form with the given sense (regarded in the study as the date of conversion), the earliest usage examples for individual senses and morphologically or semantically related linguistic units from the lexical field of the respective parent noun which were amenable to explaining the observed instances of semantic change. Each instance of zoosemisation, i.e. of the creation of a separate metaphorical verbal sense, was then carefully analysed on the basis of the data collected and classified with the help of the CMMT. In the final stage, a comprehensive and systematic classification of the senses of animal verbs in accordance with the cognitive mechanisms of their creation (metaphor, metonymy, or a combination thereof) was produced together with a timeline of the first appearance of individual metaphorical senses of animal verbs recorded in the OED.
The results show that animal verbs are produced through the interaction of conceptual metaphor and metonymy. Specifically, it was established that two major patterns of metaphor-metonymy interaction underpinning the process of verbal zoosemisation are metaphor from metonymy and metonymy from metaphor. In the former pattern, either an already existing metonymic animal verb is expanded to include the target domain PEOPLE, or the animal noun itself acts as a metonymic vehicle to a certain element of the idealised cognitive model of the given animal, which is metaphorically projected onto people. In the latter mechanism, a metaphorical projection of an animal term initially enters the lexicon in the form of a metaphorical animal noun referring to a human entity, and later in the course of language development it comes to metonymically stand for the action, which the given entity either performs or is involved in. Secondarily, it was observed that individual animal nouns can undergo multiple rounds of zoosemic conversion over time depending on the semantic frame in which the given linguistic unit undergoes denominal conversion, and that results in the polysemy of most animal verbs.
Intelligence, as well as working memory and attention, affect the acquisition of mathematical competencies. This paper aimed to examine the influence of working memory and attention when taking different mathematical skills into account as a function of children’s intellectual ability. Overall, intelligence, working memory, attention and numerical skills were assessed twice in 1868 German pre-school children (t1, t2) and again at 2nd grade (t3). We defined three intellectual ability groups based on the results of intellectual assessment at t1 and t2. Group comparisons revealed significant differences between the three intellectual ability groups. Over time, children with low intellectual ability showed the lowest achievement in domain-general and numerical and mathematical skills compared to children of average intellectual ability. The highest achievement on the aforementioned variables was found for children of high intellectual ability. Additionally, path modelling revealed that, depending on the intellectual ability, different models of varying complexity could be generated. These models differed with regard to the relevance of the predictors (t2) and the future mathematical skills (t3). Causes and conclusions of these findings are discussed.
Language developers who design domain-specific languages or new language features need a way to make fast changes to language definitions. Those fast changes require immediate feedback. Also, it should be possible to parse the developed languages quickly to handle extensive sets of code.
Parsing expression grammars provide an easy-to-understand method for language definitions. Packrat parsing is a method to parse grammars of this kind, but it is unable to handle left recursion properly. Existing solutions either rewrite some left-recursive rules and forbid the rest, or use complex extensions to packrat parsing that are hard to understand and costly. We investigated methods to make parsing as fast as possible, using easy-to-follow algorithms, while not losing the ability to make fast changes to grammars.
We focused our efforts on two approaches. The first is to start from an existing technique for limited left-recursion rewriting and enhance it to handle general left-recursive grammars. The second is to design a grammar compilation process that finds left recursion before parsing, thereby reducing computational costs wherever possible and generating ready-to-use parser classes.
Rewriting parsing expression grammars is a task that, if done in a general way, uncovers so many cases that any rewriting algorithm surpasses the complexity of other left-recursive parsing algorithms. Lookahead operators introduce this complexity. However, most languages have only small portions that are left-recursive and, in virtually all cases, no indirect or hidden left recursion. This means that distinguishing the left-recursive parts of a grammar from its non-left-recursive components holds great improvement potential for existing parsers.
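The kind of rewriting discussed here can be illustrated with the classic transformation of a directly left-recursive rule, e.g. expr <- expr "-" num / num into expr <- num ("-" num)*. The toy recognizer below is only a sketch of that idea, not the report's framework (which additionally handles grammar analysis and syntax-tree restructuring):

```python
# Illustrative sketch: a directly left-recursive PEG rule
#   expr <- expr "-" num / num
# rewritten into the iteration form
#   expr <- num ("-" num)*
# with the left-associative tree rebuilt by folding from the left.

import re

def parse_expr(text):
    """Left-associative subtraction, parsed without left recursion."""
    tokens = re.findall(r"\d+|-", text)
    if not tokens or tokens[0] == "-":
        raise ValueError("expected a number")
    value = int(tokens[0])
    i = 1
    while i < len(tokens):
        if tokens[i] != "-" or i + 1 >= len(tokens):
            raise ValueError("malformed input")
        # Fold left, restoring the associativity of the original rule
        value -= int(tokens[i + 1])
        i += 2
    return value

parse_expr("10-2-3")  # left-associative: (10 - 2) - 3 = 5
```

A general rewriting algorithm must also restructure the resulting parse tree, since the iteration form would otherwise flatten what the left-recursive rule expressed as nested nodes.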
In this report, we list all the required steps for grammar rewriting to handle left recursion, including grammar analysis, the grammar rewriting itself, and syntax tree restructuring. Also, we describe the implementation of a parsing expression grammar framework in Squeak/Smalltalk and its possible interactions with the already existing parser Ohm/S. We quantitatively benchmarked this framework, focusing on parsing time and the ability to use it in a live programming context. Compared with Ohm, we achieved massive parsing time improvements while preserving the ability to use our parser as a live programming tool.
This work matters because, for one, we outline the difficulties and complexity that come with grammar rewriting. Also, we remove the limitations previously imposed by left recursion by eliminating it before parsing.
Background
Maximal isokinetic strength ratios of joint flexors and extensors are important parameters to indicate the level of muscular balance at the joint. Further, in combat sports athletes, upper and lower limb muscle strength is affected by the type of sport. Thus, this study aimed to examine the differences in maximal isokinetic strength of the flexors and extensors and the corresponding flexor–extensor strength ratios of the elbows and knees in combat sports athletes.
Method
Forty male participants (age = 22.3 ± 2.5 years) from four different combat sports (amateur boxing, taekwondo, karate, and judo; n = 10 per sport) were tested for eccentric peak torque of the elbow/knee flexors (EF/KF) and concentric peak torque of the elbow/knee extensors (EE/KE) at three different angular velocities (60, 120, and 180°/s) on the dominant and non-dominant side using an isokinetic device.
Results
Analyses revealed significant, large-sized group × velocity × limb interactions for EF, EE, and EF–EE ratio, KF, KE, and KF–KE ratio (p ≤ 0.03; 0.91 ≤ d ≤ 1.75). Post-hoc analyses indicated that amateur boxers displayed the largest EE strength values on the non-dominant side at ≤ 120°/s and the dominant side at ≥ 120°/s (p < 0.03; 1.21 ≤ d ≤ 1.59). The largest EF–EE strength ratios were observed on amateur boxers’ and judokas’ non-dominant side at ≥ 120°/s (p < 0.04; 1.36 ≤ d ≤ 2.44). Further, we found lower KF–KE strength measures in karate (p < 0.04; 1.12 ≤ d ≤ 6.22) and judo athletes (p ≤ 0.03; 1.60 ≤ d ≤ 5.31) particularly on the non-dominant side.
Conclusions
The present findings indicated combat sport-specific differences in maximal isokinetic strength measures of EF, EE, KF, and KE particularly in favor of amateur boxers on the non-dominant side.
Objective
To improve consumer decision making, the results of risk assessments on food, feed, consumer products or chemicals need to be communicated not only to experts but also to non-expert audiences. The present study draws on evidence from literature reviews and focus groups with diverse stakeholders to identify content to integrate into an existing risk assessment communication (Risk Profile).
Methods
A combination of rapid literature reviews and focus groups with experts (risk assessors (n = 15), risk managers (n = 8)), and non-experts (general public (n = 18)) were used to identify content and strategies for including information about risk assessment results in the “Risk Profile” from the German Federal Institute for Risk Assessment. Feedback from initial focus groups was used to develop communication prototypes that informed subsequent feedback rounds in an iterative process. A final prototype was validated in usability tests with experts.
Results
Focus group feedback and suggestions from risk assessors were largely in line with findings from the literature. Risk managers and lay persons offered similar suggestions on how to improve the existing communication of risk assessment results (e.g., including more explanatory detail, reporting probabilities for individual health impairments, and specifying risks for subgroups in additional sections). Risk managers found information about quality of evidence important to communicate, whereas people from the general public found this information less relevant. Participants from lower educational backgrounds had difficulties understanding the purpose of risk assessments. User tests found that the final prototype was appropriate and feasible to implement by risk assessors.
Conclusion
An iterative and evidence-based process was used to develop content to improve the communication of risk assessments to the general public while being feasible to use by risk assessors. Remaining challenges include how to communicate dose-response relationships and standardise quality of evidence ratings across disciplines.
Dynamic resource management is an essential requirement for private and public cloud computing environments. With dynamic resource management, the assignment of physical resources to the cloud's virtual resources depends on the actual needs of the applications or running services, which improves the utilization of the cloud's physical resources and reduces the cost of the offered services. In addition, virtual resources can be moved across different physical resources in the cloud environment without a noticeable impact on the running applications or services. This means that the availability of the running services and applications in the cloud is independent of failures of hardware resources, including servers, switches, and storage. This increases the reliability of using cloud services compared to classical data-center environments.
In this thesis, we briefly discuss dynamic resource management and then focus in depth on live migration as the core mechanism of dynamic compute resource management. Live migration is a commonly used and essential feature in cloud and virtual data-center environments. Cloud computing load balancing, power saving, and fault tolerance features all depend on live migration to optimize virtual and physical resource usage. As we discuss in this thesis, live migration brings many benefits to cloud and virtual data-center environments, but its cost cannot be ignored: it includes migration time, downtime, network overhead, increased power consumption, and CPU overhead.
IT administrators often run virtual machine live migrations without an estimate of the migration cost, so resource bottlenecks, higher migration costs, and migration failures can occur. The first problem we discuss in this thesis is how to model the cost of virtual machine live migration. Second, we investigate how machine learning techniques can help cloud administrators estimate this cost before initiating the migration of one or multiple virtual machines. We also discuss the optimal timing for live-migrating a specific virtual machine to another server. Finally, we propose practical solutions that cloud administrators can integrate with cloud administration portals to answer the research questions raised above.
Our research methodology is to propose empirical models based on VMware test beds with different benchmarking tools. We then use machine learning techniques to build a prediction approach for the cost of virtual machine live migration. Timing optimization for live migration is also proposed in this thesis, based on the cost prediction and on predictions of data-center network utilization. Live migration in clusters with persistent memory is discussed at the end of the thesis. The cost prediction and timing optimization techniques proposed here can be integrated with the VMware vSphere cluster portal, so that IT administrators can use the cost prediction feature and the timing optimization option before proceeding with a virtual machine live migration.
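As an illustration of the kind of cost model involved — a simple analytical pre-copy estimate, not the thesis's trained predictor — migration time can be approximated from memory size M, page dirty rate D, and network bandwidth B (with D < B), since each pre-copy round must resend the pages dirtied during the previous round:

```python
# Illustrative sketch: iterative pre-copy live migration time estimate.
# All parameter values below are hypothetical.

def estimate_migration_time(mem_gb, dirty_rate_gbps, bw_gbps, max_rounds=30):
    """Sum of per-round transfer times for iterative pre-copy migration."""
    if dirty_rate_gbps >= bw_gbps:
        raise ValueError("migration cannot converge: pages dirty faster "
                         "than they can be sent")
    total, to_send = 0.0, mem_gb
    for _ in range(max_rounds):
        t = to_send / bw_gbps           # time to transfer this round
        total += t
        to_send = dirty_rate_gbps * t   # pages dirtied meanwhile
        if to_send < 0.001:             # small enough for final stop-and-copy
            break
    return total

# Geometric series: the total converges toward M / (B - D).
t = estimate_migration_time(8, 1, 10)   # 8 GB VM, 1 Gb/s dirtying, 10 Gb/s link
```

A learned model, as proposed in the thesis, can additionally account for effects such a closed-form estimate misses, e.g. CPU contention and workload-specific dirty-page locality.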
Testing results show that our proposed approach for predicting VM live migration cost achieves acceptable accuracy, with less than 20% prediction error, and can easily be implemented and integrated with VMware vSphere as an example of a commonly used resource management portal for virtual data centers and private cloud environments. The results also show that our proposed migration timing optimization technique can save up to 51% of the migration time for memory-intensive workloads and up to 27% for network-intensive workloads. This technique can help network administrators save migration time by migrating at higher available network rates and with a higher probability of success.
At the end of this thesis, we discuss persistent memory as a new trend in server memory technology. Persistent memory modes of operation and configurations are discussed in detail to explain how live migration works between servers with different memory configurations. We then build a VMware cluster containing both servers with persistent memory and DRAM-only servers to show the difference in live migration cost between VMs backed by DRAM only and VMs backed by persistent memory.
Aldehyde oxidases (AOXs) (E.C. 1.2.3.1) are molybdoflavo-enzymes belonging to the xanthine oxidase (XO) family. AOXs in mammals contain one molybdenum cofactor (Moco), one flavin adenine dinucleotide (FAD) and two [2Fe-2S] clusters, the presence of which is essential for the activity of the enzyme. Human aldehyde oxidase (hAOX1) is a cytosolic enzyme mainly expressed in the liver. hAOX1 is involved in the metabolism of xenobiotics. It oxidizes aldehydes to their corresponding carboxylic acids and hydroxylates N-heterocyclic compounds. Since these functional groups are widely present in therapeutics, understanding the behaviour of hAOX1 has important implications in medicine. During the catalytic cycle of hAOX1, the substrate is oxidized at Moco and electrons are internally transferred to FAD via the FeS clusters. An electron acceptor juxtaposed to the FAD receives the electrons and re-oxidizes the enzyme for the next catalytic cycle. Molecular oxygen is the endogenous electron acceptor of hAOX1 and in doing so it is reduced and produces reactive oxygen species (ROS) including hydrogen peroxide (H2O2) and superoxide (O2.-). The production of ROS has pathophysiological importance, as ROS can have a wide range of effects on cell components including the enzyme itself.
In this thesis, we have shown that hAOX1 loses its activity over multiple cycles of catalysis due to endogenous ROS production and have identified a cysteine-rich motif that protects hAOX1 from the damaging effects of ROS. We have also shown that a sulfido ligand, which is bound at Moco and is essential for the catalytic activity of the enzyme, is vulnerable during turnover. The ROS produced during the course of the reaction are also able to remove this sulfido ligand from Moco. ROS, in addition, oxidize particular cysteine residues. The combined effects of ROS on the sulfido ligand and on specific cysteine residues in the enzyme result in its inactivation. Furthermore, we report that small reducing agents containing reactive sulfhydryl groups, in a selective manner, inactivate some of the mammalian AOXs by modifying the sulfido ligand at Moco. The mechanism of ROS production by hAOX1 is another aspect investigated in this thesis. We have shown that the ratio of the types of ROS, i.e. hydrogen peroxide (H2O2) and superoxide (O2.-), produced by hAOX1 is determined by a particular position on a flexible loop located in close proximity to FAD. The size of the cavity at the ROS-producing site, i.e. the N5 position of the FAD isoalloxazine ring, kinetically affects the amount of each type of ROS generated by hAOX1. Taken together, hAOX1 is an enzyme of emerging importance in pharmacological and medical studies, not only due to its involvement in drug metabolism, but also due to ROS production, which has physiological and pathological implications.
Symmetric, elegantly entangled structures are a curious mathematical construction that has found its way into the heart of the chemistry lab and the toolbox of constructive geometry. Of particular interest are those structures—knots, links and weavings—which are composed locally of simple twisted strands and are globally symmetric. This paper considers the symmetric tangling of multiple 2-periodic honeycomb networks. We do this using a constructive methodology borrowing elements of graph theory, low-dimensional topology and geometry. The result is a wide-ranging enumeration of symmetric tangled honeycomb networks, providing a foundation for their exploration in both the chemistry lab and the geometer's toolbox.
Doing good by doing bad
(2022)
This study investigates how tone at the top, implemented by top management, and tone at the bottom, in an employee's immediate work environment, determine noncompliance. We focus on the disallowed actions of employees that improve their own and, in turn, the company's performance, referred to as performance-improving noncompliant behavior (PINC behavior). We conduct a survey of German sales employees to investigate specifically how, on the one hand, (1) corporate rules and (2) performance pressure, both implemented by top management, and, on the other hand, (3) others' PINC expectations and (4) others' PINC behavior, both arising from the employee's immediate work environment, influence PINC behavior. When considered in isolation, we find that corporate rules, as top management's main instrument to guide employee behavior, decrease employee PINC behavior. However, this effect is negatively influenced by the employees' immediate work environment when employees are expected to engage in PINC or when others engage in PINC. In contrast, even though top management places great performance pressure on employees, that by itself does not increase PINC behavior. Overall, our study informs practitioners and researchers about whether and how the four determinants increase or decrease employees' PINC behavior, which is important to comprehend triggers and to counteract such misconduct.
In light of climate change mitigation efforts, revenues from climate policies are growing, with no consensus yet on how they should be used. Potential efficiency gains from reducing distortionary taxes and the distributional implications of different revenue recycling schemes are currently debated. To account for household heterogeneity and dynamic trade-offs, we study the macroeconomic and welfare performance of different revenue recycling schemes using an Environmental Two-Agent New-Keynesian model calibrated on the German economy. We find that, in the long run, welfare gains are higher when revenues are used to reduce distortionary taxes on capital, but this comes at the cost of higher inequality: while all households prefer labor income tax reductions to lump-sum transfers, only financially unconstrained households are better off when taxes on capital income are reduced. Interestingly, we find that over the transition period relevant to meeting short- to medium-run climate targets, labor income tax cuts are the most efficient and equitable instrument.
Hydraulically driven fractures play a key role in subsurface energy technologies across several scales. By injecting fluid at high hydraulic pressure into rock with intrinsically low permeability, the in-situ stress field and fracture development patterns can be characterised and rock permeability can be enhanced. Hydraulic fracturing is a standard commercial procedure in the petroleum industry for enhanced oil and gas production from low-permeability rock reservoirs. In enhanced geothermal system (EGS) utilization, however, a major geological concern is the unsolicited generation of earthquakes due to fault reactivation, referred to as induced seismicity, with a magnitude large enough to be felt on the surface or to damage facilities and buildings. Furthermore, reliable interpretation of hydraulic fracturing tests for stress measurement remains a great challenge for these energy technologies. This cumulative doctoral thesis therefore investigates the following research questions: (1) How do hydraulic fractures grow in hard rock at various scales? (2) Which parameters control hydraulic fracturing and hydro-mechanical coupling? (3) How can hydraulic fracturing in hard rock be modelled?
In the laboratory-scale study, several laboratory hydraulic fracturing experiments, performed on intact cubic Pocheon granite samples from South Korea using different injection protocols, are investigated numerically with Irazu2D. The goal of the laboratory experiments is to test the concept of cyclic soft stimulation, which may enable sustainable permeability enhancement (Publication 1).
In the borehole-scale study, hydraulic fracturing tests are reported that were performed in boreholes located in central Hungary to determine the in-situ stress for a geological site investigation. At a depth of about 540 m, the recorded pressure-versus-time curves in mica schist with low-dip-angle foliation show an atypical evolution. To explain this observation, a series of discrete element computations using Particle Flow Code 2D is performed (Publication 2).
In the reservoir-scale study, the hydro-mechanical behaviour of fractured crystalline rock during one of the five hydraulic stimulations at the Pohang Enhanced Geothermal site in South Korea is studied. Fluid pressure perturbation at faults several hundred meters in length during hydraulic stimulation is simulated using FracMan (Publication 3).
The doctoral research shows that the resulting hydraulic fracture geometry depends “locally”, i.e. at the length scale of the representative elementary volume (REV) and below (sub-REV), on the geometry and strength of natural fractures, and “globally”, i.e. at super-REV domain volumes, on far-field stresses. Regarding hydro-mechanical coupling, it is suggested to define separate coupling relationships for the intact rock mass and for natural fractures. Furthermore, the relative importance of parameters affecting the magnitude of the formation breakdown pressure, a parameter characterising hydro-mechanical coupling, is defined. It can also be concluded that there is a clear gap between the capacity of the simulation software and the complexity of the studied problems. The computational time of simulating complex hydraulic fracture geometries must therefore be reduced while maintaining high-fidelity simulation results. This can be achieved either by extending the computational resources via parallelization techniques or by using time-scaling techniques. The ongoing development of the numerical models used focuses on tackling these methodological challenges.
Piloting a Survey-Based Assessment of Transparency and Trustworthiness with Three Medical AI Tools
(2022)
Artificial intelligence (AI) offers the potential to support healthcare delivery, but poorly trained or validated algorithms bear risks of harm. Ethical guidelines state transparency about model development and validation as a requirement for trustworthy AI. Abundant guidance exists for providing transparency through reporting, yet poorly reported medical AI tools are common. To close this transparency gap, we developed and piloted a framework to quantify the transparency of medical AI tools with three use cases. Our framework comprises a survey reporting on the intended use, training and validation data and processes, ethical considerations, and deployment recommendations. The transparency of each response was scored with 0, 0.5, or 1 to reflect whether the requested information was not, partially, or fully provided. Additionally, we assessed on an analogous three-point scale whether the provided responses fulfilled the transparency requirement for a set of trustworthiness criteria from ethical guidelines. The degree of transparency and trustworthiness was calculated on a scale from 0% to 100%. Our assessment of three medical AI use cases pinpointed reporting gaps and resulted in transparency scores of 67% for two use cases and 59% for the third. We report anecdotal evidence that business constraints and limited information from external datasets were major obstacles to providing transparency for the three use cases. The observed transparency gaps also lowered the degree of trustworthiness, indicating compliance gaps with ethical guidelines. All three pilot use cases faced challenges in providing transparency about medical AI tools, but more studies are needed to investigate these challenges in the wider medical AI sector. Applying this framework for an external assessment of transparency may be infeasible if business constraints prevent the disclosure of information. New strategies may be necessary to enable audits of medical AI tools while preserving business secrets.
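The scoring scheme described above reduces to a simple computation: each survey item receives 0, 0.5, or 1, and the overall degree of transparency is the mean expressed as a percentage. A minimal sketch follows; the item names and scores are invented for illustration and are not the study's actual survey items.

```python
# Three-point transparency scoring: 0 (not provided), 0.5 (partially),
# 1 (fully provided); the overall degree is the mean on a 0-100% scale.

VALID_SCORES = {0.0, 0.5, 1.0}

def transparency_score(item_scores: dict[str, float]) -> float:
    """Return the degree of transparency on a 0-100% scale."""
    if not item_scores:
        raise ValueError("at least one scored item is required")
    for item, score in item_scores.items():
        if score not in VALID_SCORES:
            raise ValueError(f"{item}: score must be 0, 0.5, or 1, got {score}")
    return 100.0 * sum(item_scores.values()) / len(item_scores)

# Hypothetical example survey, for illustration only:
example = {
    "intended use described": 1.0,
    "training data documented": 0.5,
    "validation process reported": 1.0,
    "ethical considerations stated": 0.5,
    "deployment recommendations given": 0.0,
}
print(f"{transparency_score(example):.0f}%")  # 60%
```

The same calculation applies to the trustworthiness criteria, which the framework scores on an analogous three-point scale.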
Background
The aim of this study was to analyze the shoulder functional profile (rotation range of motion [ROM] and strength), upper and lower body performance, and throwing speed of U13 versus U15 male handball players, and to establish the relationship between these measures of physical fitness and throwing speed.
Methods
One hundred and nineteen young male handball players (under-13 (U13) [n = 85] and under-15 (U15) [n = 34]) volunteered to participate in this study. The participating athletes had a mean background of systematic handball training of 5.5 ± 2.8 years and exercised on average 540 ± 10.1 min per week, including sport-specific team handball training and strength and conditioning programs. Players were tested for passive shoulder range of motion (ROM) for both internal (IR) and external rotation (ER) and isometric strength (i.e., IR and ER) of the dominant/non-dominant shoulders, overhead medicine ball throw (OMB), hip isometric abductor (ABD) and adductor (ADD) strength, hip ROM, jumps (countermovement jump [CMJ] and triple leg-hop [3H] for distance), a linear sprint test, a modified 505 change-of-direction (COD) test, and handball throwing speed (7 m [HT7] and 9 m [HT9]).
Results
U15 players outperformed U13 players in upper body (i.e., HT7 and HT9 speed, OMB, absolute IR and ER strength of the dominant and non-dominant sides; Cohen’s d: 0.76–2.13) and lower body (i.e., CMJ, 3H, 20-m sprint and COD, hip ABD and ADD; d: 0.70–2.33) performance measures. Regarding shoulder ROM outcomes, a lower IR ROM was found on the dominant side in the U15 group compared to the U13 group, and a higher ER ROM on both sides in U15 (d: 0.76–1.04). It seems that primarily anthropometric characteristics (i.e., body height, body mass) and upper body strength/power (OMB distance) are the most important factors explaining throwing speed variance in male handball players, particularly in U13.
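The effect sizes reported above (Cohen's d) express group differences in units of the pooled standard deviation. A minimal implementation is sketched below; the sample throwing speeds are made up for illustration and are not the study's data.

```python
import math

def cohens_d(group_a: list[float], group_b: list[float]) -> float:
    """Cohen's d for two independent groups, using the pooled SD."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    # Sample variances (Bessel-corrected)
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n_b - 1)
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

# Hypothetical throwing speeds (km/h), for illustration only:
u15 = [72.0, 75.0, 78.0, 74.0]
u13 = [60.0, 63.0, 65.0, 62.0]
print(round(cohens_d(u15, u13), 2))
```

By the conventional reading, |d| around 0.2 is a small effect, 0.5 medium, and 0.8 or more large, which is why the ranges reported above (0.70–2.33) indicate medium to very large age-group differences.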
Conclusions
Findings from this study imply that regular performance monitoring is important for performance development and for minimizing injury risk of the shoulder in both age categories of young male handball players. Besides measures of physical fitness, anthropometric data should be recorded because handball throwing performance is related to these measures.
The protein fraction, important for coffee cup quality, is modified during post-harvest treatment prior to roasting. Proteins may interact with phenolic compounds, which constitute the major metabolites of coffee, and processing affects these interactions. This supports the hypothesis that the proteins are denatured and modified via enzymatic and/or redox activation steps. The present study was initiated to encompass changes in the protein fraction. The investigations were limited to the major storage protein of green coffee beans. Fourteen Coffea arabica samples from various processing methods and countries were used. Different extraction protocols were compared to maintain the status quo of the protein modification. The extracts contained about 4–8 µg of chlorogenic acid derivatives per mg of extracted protein. High-resolution chromatography with multiple reaction monitoring was used to detect lysine modifications in the coffee protein. Marker peptides were allocated for the storage protein of the coffee beans. Among these, the modified peptides K.FFLANGPQQGGK.E and R.LGGK.T of the α-chain and R.ITTVNSQK.I and K.VFDDEVK.Q of the β-chain were detected. Results showed a significant increase (p < 0.05) of modified peptides in wet-processed green beans as compared to dry-processed ones. The present study contributes to a better understanding of the influence of the different processing methods on protein quality and its role in coffee cup quality and aroma.
Countries processing raw coffee beans are burdened with low economic incomes while having to fight the serious environmental problems caused by the by-products and wastewater generated during wet coffee processing. The aim of this work was to develop alternative methods for improving the quality of waste by-products, thus making the process economically more attractive through valorization options that can be brought to the coffee producers.
The type of processing influences not only the constitution of green coffee but also that of by-products and wastewater. Therefore, coffee bean samples as well as by-products and wastewater collected at different production steps were analyzed. Results show that the composition of wastewater depends on how much and how often the wastewater is recycled during processing. Considering the coffee beans, the results indicate that the proteins might be affected during processing, and a positive effect of fermentation on the solubility and accessibility of proteins seems probable. The steps of coffee processing influence the different constituents of green coffee beans which, during roasting, give rise to aroma compounds and express the characteristics of roasted coffee beans. Since this group of compounds is involved in the Maillard reaction during roasting, coffee producers could exploit this to improve the quality of green coffee beans and ultimately the coffee cup quality.
The valorization of coffee wastes through modification to activated carbon has been considered a low-cost option for creating an adsorbent with the prospect of competing with commercial carbons. An activation protocol using spent coffee grounds and parchment was developed to assess their adsorption capacity for organic compounds. Spent coffee grounds and parchment proved to have adsorption efficiencies similar to commercial activated carbon.
The results of this study document significant information originating from the processing of de-pulped to green coffee beans. Furthermore, they show that coffee parchment and spent coffee grounds can be valorized as a low-cost option to produce activated carbons. Further work needs to be directed toward optimizing the activation methods to improve the quality of the materials produced and the viability of applying such experiments in situ to bring the coffee producers further valorization opportunities with environmental benefits.
Coffee producers would profit from establishing appropriate simple technologies to improve green coffee quality, re-use coffee by-products, and valorize wastewater.
More than a century ago, the phenomenon of non-Mendelian inheritance (NMI), defined as any type of inheritance pattern in which traits do not segregate in accordance with Mendel’s laws, was first reported. In the plant kingdom, three genomic compartments, the nucleus, chloroplast, and mitochondrion, can participate in such phenomena. High-throughput sequencing (HTS) has proved to be a key technology for investigating NMI phenomena by assembling and/or resequencing entire genomes. However, the generation, analysis and interpretation of such datasets remain challenging due to the multi-layered biological complexity. To advance our knowledge in the field of NMI, I conducted three studies involving different HTS technologies and implemented two new algorithms to analyze them.
In the first study, I implemented a novel post-assembly pipeline, called the Semi-Automated Graph-Based Assembly Curator (SAGBAC), which visualizes non-graph-based assemblies as graphs, identifies recombinogenic repeat pairs (RRPs), and reconstructs plant mitochondrial genomes (PMGs) in a semi-automated workflow. We applied this pipeline to assemblies of three Oenothera species, resulting in a spatially folded and circularized model. This model was confirmed by PCR and Southern blot analyses and was used to predict a defined set of 70 PMG isoforms. With Illumina Mate Pair and PacBio RSII data, the stoichiometry of the RRPs was determined quantitatively, differing up to three-fold.
In the second study, I developed a post-multiple-sequence-alignment algorithm, called correlation mapping (CM), which correlates segment-wise numbers of nucleotide changes with a numerically ascertainable phenotype. We applied this algorithm to 14 wild-type and 18 mutagenized plastome assemblies within the Oenothera genus and identified two genes, accD and ycf2, that may cause the competitive behavior of plastid genotypes, as plastids can be biparentally inherited in Oenothera. Moreover, the lipid composition of the plastid envelope membrane is affected by polymorphisms within these two genes.
For the third study, I programmed a pipeline to investigate an NMI phenomenon, known as paramutation, in tomato by analyzing DNA and bisulfite sequencing data as well as microarray data. We identified the responsible gene (Solyc02g0005200) and were able to fully repress the phenotype it causes by heterologous complementation with a paramutation-insensitive transgene of the Arabidopsis thaliana orthologue. Additionally, a suppressor mutant shows a globally altered DNA methylation pattern and carries a large deletion leading to a gene fusion involving a histone deacetylase.
In conclusion, the algorithms and data analysis pipelines I developed and implemented are suitable for investigating NMI and led to novel insights into such phenomena: by reconstructing PMGs (SAGBAC) as a requirement for studying mitochondria-associated phenotypes, by identifying genes (CM) causing interplastidial competition, and by applying a DNA/bisulfite-seq analysis pipeline to shed light on a transgenerational epigenetic inheritance phenomenon.
This article examines public service resilience during the COVID-19 pandemic and studies the switch to telework due to social distancing measures. We argue that the pandemic and related policies led to increasing demands on public organisations and their employees. Following the job demands-resources model, we argue that resilience can only arise in the presence of resources for buffering these demands. Survey data were collected from 1,189 German public employees; 380 participants were included in the analysis. The results suggest that the public service was resilient against the crisis and that the shift to telework was not as demanding as expected.
Molecules are often naturally embedded in a complex environment. As a consequence, characteristic properties of a molecular subsystem can be substantially altered or new properties emerge due to interactions between molecular and environmental degrees of freedom. The present thesis is concerned with the numerical study of quantum dynamical and stationary properties of molecular vibrational systems embedded in selected complex environments.
In the first part, we discuss "strong-coupling" model scenarios for molecular vibrations interacting with few quantized electromagnetic field modes of an optical Fabry-Pérot cavity. We thoroughly elaborate on properties of emerging "vibrational polariton" light-matter hybrid states and examine the relevance of the dipole self-energy. Further, we identify cavity-induced quantum effects and an emergent dynamical resonance in a cavity-altered thermal isomerization model, which lead to significant suppression of thermal reaction rates. Moreover, for a single rovibrating diatomic molecule in an optical cavity, we observe non-adiabatic signatures in dynamics due to "vibro-polaritonic conical intersections" and discuss spectroscopically accessible "rovibro-polaritonic" light-matter hybrid states.
In the second part, we study a weakly coupled but numerically challenging quantum mechanical adsorbate-surface model system comprising a few thousand surface modes. We introduce an efficient construction scheme for a "hierarchical effective mode" approach to reduce the number of surface modes in a controlled manner. In combination with the multilayer multiconfigurational time-dependent Hartree (ML-MCTDH) method, we examine the vibrational adsorbate relaxation dynamics from different excited adsorbate states by solving the full non-Markovian system-bath dynamics for the characteristic relaxation time scale. We examine half-lifetime scaling laws from vibrational populations and identify prominent non-Markovian signatures as deviations from Markovian reduced system density matrix theory in vibrational coherences, system-bath entanglement and energy transfer dynamics.
In the final part of this thesis, we approach the dynamics and spectroscopy of vibronic model systems at finite temperature by formulating the ML-MCTDH method in the non-stochastic framework of thermofield dynamics. We apply our method to thermally-altered ultrafast internal conversion in the well-known vibronic coupling model of pyrazine. Numerically beneficial representations of multilayer wave functions ("ML-trees") are identified for different temperature regimes, which allow us to access thermal effects on both electronic and vibrational dynamics as well as spectroscopic properties for several pyrazine models.
The echo chamber model describes the development of groups in heterogeneous social networks. By a heterogeneous social network we mean a set of individuals, each of whom represents exactly one opinion. The existing relationships between individuals can then be represented by a graph. The echo chamber model is a time-discrete model which, like a board game, is played in rounds. In each round, an existing relationship is selected uniformly at random from the network and the two connected individuals interact. If the opinions of the individuals involved are sufficiently similar, they move closer together in their opinions, whereas in the case of opinions that are too far apart, they break off their relationship and one of the individuals seeks a new relationship. In this paper we examine the building blocks of this model. We start from the observation that changes in the structure of relationships in the network can be described by a system of interacting particles in a more abstract space.
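One round of the dynamics described above can be sketched as a small simulation. The similarity threshold, step size, and rewiring rule below are illustrative choices, not the thesis's exact parameterisation.

```python
import random

def echo_chamber_round(opinions, edges, threshold=0.5, step=0.1, rng=random):
    """One round: pick an edge uniformly at random; if the endpoint
    opinions are close enough, both move toward each other; otherwise
    the edge is broken and one endpoint rewires to a new partner."""
    i, j = rng.choice(edges)
    if abs(opinions[i] - opinions[j]) <= threshold:
        # Similar opinions: both individuals move toward the midpoint.
        mid = (opinions[i] + opinions[j]) / 2
        opinions[i] += step * (mid - opinions[i])
        opinions[j] += step * (mid - opinions[j])
    else:
        # Opinions too far apart: break the tie, individual i rewires.
        edges.remove((i, j))
        candidates = [k for k in opinions if k not in (i, j)]
        edges.append((i, rng.choice(candidates)))
    return opinions, edges

# Toy network: two loose opinion clusters connected by one bridge edge.
rng = random.Random(42)
opinions = {0: 0.1, 1: 0.2, 2: 0.9, 3: 0.95}
edges = [(0, 1), (1, 2), (2, 3)]
for _ in range(100):
    opinions, edges = echo_chamber_round(opinions, edges, rng=rng)
```

Because every broken edge is immediately replaced, the total number of relationships is conserved from round to round, which is what allows the relational configurations to be treated as states of a Markov chain.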
These reflections lead to the definition of a new abstract graph that encompasses all possible relational configurations of the social network. This provides us with the geometric understanding necessary to analyse the dynamic components of the echo chamber model in Part III. As a first step, in Part 7, we leave aside the opinions of the individuals and assume that the position of the edges changes with each move as described above, in order to obtain a basic understanding of the underlying dynamics. Using Markov chain theory, we find upper bounds on the speed of convergence of an associated Markov chain to its unique stationary distribution and show that there are mutually identifiable networks that are not distinguishable in the dynamics under analysis, in the sense that the stationary distribution of the associated Markov chain gives equal weight to these networks.
In the reversible cases, we focus in particular on the explicit form of the stationary distribution as well as on the lower bounds of the Cheeger constant to describe the convergence speed.
The final result of Section 8, based on absorbing Markov chains, shows that in a reduced version of the echo chamber model, a hierarchical structure of the number of conflicting relations can be identified.
We can use this structure to determine an upper bound on the expected absorption time, using a quasi-stationary distribution. This hierarchical structure also provides a bridge to classical theories of pure death processes. We conclude by showing how future research can exploit this link and by discussing the importance of the results as building blocks for a full theoretical understanding of the echo chamber model. Finally, Part IV presents a published paper on the birth-death process with partial catastrophe. The paper is based on the explicit calculation of the first moment of a catastrophe. The first part is entirely based on an analytical approach to second-degree recurrences with linear coefficients. The convergence to 0 of the resulting sequence as well as the speed of convergence are proved. We further determine upper bounds on the expected value of the population size as well as its variance, and quantify the difference between the determined upper bound and the actual expected value. For these results we use almost exclusively the theory of ordinary nonlinear differential equations.
High-mountain regions provide valuable ecosystem services, including food, water, and energy production, to more than 900 million people worldwide. Projections hold that this population number will rapidly increase in the next decades, accompanied by continued urbanisation of cities located in mountain valleys. One of the manifestations of this ongoing socio-economic change in mountain societies is a rise in settlement areas and transportation infrastructure, while increased power needs fuel the construction of hydropower plants along rivers in the high-mountain regions of the world. However, physical processes governing the cryosphere of these regions are highly sensitive to changes in climate, and global warming will likely alter the conditions in the headwaters of high-mountain rivers. One of the potential implications of this change is an increase in the frequency and magnitude of outburst floods – highly dynamic flows capable of carrying large amounts of water and sediment. Sudden outbursts from lakes formed behind natural dams are complex geomorphological processes and are often part of a hazard cascade. In contrast to other types of natural hazards in high-alpine areas, for example landslides or avalanches, outburst floods are highly infrequent. Observations and data describing, for example, the mode of outburst or the hydraulic properties of the downstream-propagating flow are therefore very limited, which is a major challenge in contemporary (glacial) lake outburst flood research. Although glacial lake outburst floods (GLOFs) and landslide-dammed lake outburst floods (LLOFs) are rare, a number of documented events caused high fatality counts and damage. The highest documented losses due to outburst floods since the start of the 20th century were caused by only a few high-discharge events. Thus, outburst floods can be a significant hazard to downvalley communities and infrastructure in high-mountain regions worldwide.
This thesis focuses on the Greater Himalayan region, a vast mountain belt stretching across 0.89 million km². Although potentially hundreds of outburst floods have occurred there since the beginning of the 20th century, data on these events are still scarce. Projections of cryospheric change, including glacier-mass wastage and permafrost degradation, will likely result in an overall increase of the water volume stored in meltwater lakes as well as the destabilisation of mountain slopes in the Greater Himalayan region. Thus, the potential for outburst floods to affect the increasingly densely populated valleys of this mountain belt is also likely to increase in the future. A prime example of one of these valleys is the Pokhara valley in Nepal, which is drained by the Seti Khola, a river crossing one of the steepest topographic gradients in the Himalayas. This valley is also home to Nepal’s second-largest, rapidly growing city, Pokhara, which currently has a population of more than half a million people – some of whom live in informal settlements within the floodplain of the Seti Khola. Although there is ample evidence of past outburst floods along this river in recent and historic times, these events have hardly been quantified.
The main motivation of my thesis is to address the data scarcity on past and potential future outburst floods in the Greater Himalayan region, at both a regional and a local scale. For the former, I compiled an inventory of >3,000 moraine-dammed lakes, of which about 1% had a documented sudden failure in the past four decades. I used these data to test whether a number of predictors that have been widely applied in previous GLOF assessments are statistically relevant when estimating past GLOF susceptibility. To this end, I set up four Bayesian multi-level logistic regression models, in which I explored the credibility of the predictors lake area, lake-area dynamics, lake elevation, parent-glacier mass balance, and monsoonality. By using a hierarchical approach consisting of two levels, this probabilistic framework also allowed for spatial variability in GLOF susceptibility across the vast study area, which until now had not been considered in studies of this scale. The model results suggest that in the Nyainqentanglha and Eastern Himalayas – regions with strongly negative glacier-mass balances – lakes have been more prone to releasing GLOFs than in regions with less negative or even stable glacier-mass balances. Similarly, larger lakes in larger catchments had, on average, a higher probability of having had a GLOF in the past four decades. Yet, monsoonality, lake elevation, and lake-area dynamics were more ambiguous predictors. This challenges the credibility of a lake’s rapid growth in surface area as an indicator of a pending outburst; a metric that has been applied to regional GLOF assessments worldwide.
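At its core, such a susceptibility model is a logistic regression of a binary outburst indicator on lake predictors. The deliberately simplified sketch below fits a single-level MAP estimate (gradient descent with a Gaussian prior) on synthetic data; it does not reproduce the study's Bayesian multi-level model, and the coefficients and data are invented for illustration.

```python
import math
import random

def fit_logistic(X, y, l2=1.0, lr=0.1, epochs=2000):
    """MAP estimate of logistic-regression weights via gradient descent
    with a Gaussian (L2) prior; rows of X are predictor vectors."""
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        # Gradient of the (averaged) negative log posterior.
        gw, gb = [l2 * wi / n for wi in w], 0.0
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
            err = p - yi
            gw = [g + err * xj / n for g, xj in zip(gw, xi)]
            gb += err / n
        w = [wi - lr * gi for wi, gi in zip(w, gw)]
        b -= lr * gb
    return w, b

# Synthetic lakes with standardised predictors (lake area, glacier-mass
# balance): larger area and more negative mass balance -> higher GLOF odds.
random.seed(0)
X, y = [], []
for _ in range(400):
    area, mb = random.gauss(0, 1), random.gauss(0, 1)
    logit = 1.5 * area - 2.0 * mb - 1.0
    y.append(1 if random.random() < 1.0 / (1.0 + math.exp(-logit)) else 0)
    X.append([area, mb])
w, b = fit_logistic(X, y)
```

On this synthetic data, the fitted weights recover the signs of the generating coefficients (positive for lake area, negative for mass balance), mirroring the qualitative findings quoted above; the actual study additionally models region-level variability via a second hierarchy level.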
At a local scale, my thesis aims to overcome data scarcity concerning the flow characteristics of the catastrophic May 2012 flood along the Seti Khola, which caused 72 fatalities, as well as its potentially much larger predecessors, which deposited >1 km³ of sediment in the Pokhara valley between the 12th and 14th century CE. To reconstruct peak discharges, flow depths, and flow velocities of the 2012 flood, I mapped the extents of flood sediments from RapidEye satellite imagery and used these as a proxy for inundation limits. To constrain the latter for the Mediaeval events, I utilised outcrops of slackwater deposits in the fills of tributary valleys. Using steady-state hydrodynamic modelling for a wide range of plausible scenarios, from meteorological (1,000 m³ s-1) to cataclysmic outburst floods (600,000 m³ s-1), I assessed the likely initial discharges of the recent and the Mediaeval floods based on the lowest mismatch between sedimentary evidence and simulated flood limits. One-dimensional HEC-RAS simulations suggest that the 2012 flood most likely had a peak discharge of 3,700 m³ s-1 in the upper Seti Khola, which attenuated to 500 m³ s-1 by the time the flood arrived in Pokhara’s suburbs some 15 km downstream.
Simulations of flow in two dimensions with peak discharges that are orders of magnitude higher, run in ANUGA, show extensive backwater effects in the main tributary valleys. These backwater effects match the locations of slackwater deposits and hence attest to the flood character of the Mediaeval sediment pulses. This thesis provides the first quantitative support for the hypothesis that the latter were linked to earthquake-triggered outbursts of large former lakes in the headwaters of the Seti Khola – producing floods with peak discharges of >50,000 m³ s-1.
Building on this improved understanding of past floods along the Seti Khola, my thesis continues with an analysis of the impacts of potential future outburst floods on land cover, including built-up areas and infrastructure mapped from high-resolution satellite and OpenStreetMap data. HEC-RAS simulations of ten flood scenarios, with peak discharges ranging from 1,000 to 10,000 m³ s-1, show that the relative inundation hazard is highest in Pokhara’s north-western suburbs. There, the potential effects of hydraulic ponding upstream of narrow gorges might locally sustain higher flow depths. Along this reach, informal settlements and gravel mining activities are close to the active channel. By tracing the construction dynamics in two of these potentially affected informal settlements on multi-temporal RapidEye, PlanetScope, and Google Earth imagery, I found that exposure increased locally between three- and twentyfold in just over a decade (2008 to 2021).
In conclusion, this thesis provides new quantitative insights into the past controls on the susceptibility of glacial lakes to sudden outbursts at a regional scale and into the flow dynamics of flood waves released by past events at a local scale, which can aid future hazard assessments on transient scales in the Greater Himalayan region. My subsequent exploration of the impacts of potential future outburst floods on exposed infrastructure and (informal) settlements may provide valuable inputs to anticipatory assessments of multiple risks in the Pokhara valley.
Pokhara (ca. 850 m a.s.l.), Nepal's second-largest city, lies at the foot of the Higher Himalayas and has more than tripled its population in the past three decades. Construction materials are in high demand in rapidly expanding built-up areas, and several informal settlements cater to unregulated sand and gravel mining in the Pokhara Valley's main river, the Seti Khola. This river is fed by the Sabche glacier below Annapurna III (7,555 m a.s.l.), some 35 km upstream of the city, and traverses one of the steepest topographic gradients in the Himalayas. In May 2012, a sudden flood caused >70 fatalities and intense damage along this river and rekindled concerns about flood risk management. We estimate the flow dynamics and inundation depths of flood scenarios using the hydrodynamic model HEC-RAS (Hydrologic Engineering Center's River Analysis System). We simulate the potential impacts of peak discharges from 1,000 to 10,000 m³ s⁻¹ on land cover based on high-resolution Maxar satellite imagery and OpenStreetMap data (buildings and road network). We also trace the dynamics of two informal settlements near Kaseri and Yamdi with high potential flood impact from RapidEye, PlanetScope, and Google Earth imagery of the past two decades. Our hydrodynamic simulations highlight several sites of potential hydraulic ponding that would largely affect these informal settlements and sites of sand and gravel mining. These built-up areas grew between 3- and 20-fold, thus likely raising local flood exposure well beyond changes in flood hazard. Besides these drastic local changes, about 1 % of Pokhara's built-up urban area and essential rural road network lies in the highest-hazard zones highlighted by our flood simulations. Our results stress the need to adapt early-warning strategies to locally differing hydrological and geomorphic conditions in this rapidly growing urban watershed.
As climate change worsens, there is a growing urgency to promote renewable energies and improve their accessibility to society. Here, solar energy harvesting is of particular importance. Currently, metal halide perovskite (MHP) solar cells are indispensable in future solar energy generation research. MHPs are crystalline semiconductors increasingly relevant as low-cost, high-performance materials for optoelectronics. Their processing from solution at low temperature enables easy fabrication of thin film elements, encompassing solar cells and light-emitting diodes or photodetectors. Understanding the coordination chemistry of MHPs in their precursor solution would allow control over the thin film crystallization, the material properties and the final device performance.
In this work, we elaborate on the key parameters for manipulating the precursor solution, with the long-term objective of enabling systematic process control. We focus on the nanostructural characterization of the initial arrangements of MHPs in the precursor solutions. Small-angle scattering is particularly well suited for measuring nanoparticles in solution, and the technique proved valuable for the direct analysis of perovskite precursor solutions at standard processing concentrations without causing radiation damage. We gain insights into the chemical nature of widely used precursors such as methylammonium lead iodide (MAPbI3), presenting the first insights into the complex arrangements and interactions within this precursor state. Furthermore, we transfer the preceding results to other, more complex perovskite precursors. The influence of compositional engineering is investigated using the addition of alkali cations as an example. As a result, we propose a detailed working mechanism for how the alkali cations suppress the formation of intermediate phases and improve the quality of the crystalline thin film. In addition, we investigate the crystallization process of a tin-based perovskite composition (FASnI3) under the influence of fluoride chemistry. We show that the frequently used additive tin fluoride (SnF2) selectively binds undesired oxidized tin (Sn(IV)) in the precursor solution. This prevents its incorporation into the crystal structure and thus reduces the defect density of the material. Furthermore, SnF2 leads to a more homogeneous crystal growth process, which results in improved crystal quality of the thin film material.
In total, this study provides a detailed characterization of the complex system of perovskite precursor chemistry. We thereby cover parameters relevant to future MHP solar cell process control, such as (I) the environmental impact of concentration and temperature, (II) the addition of counter-ions to reduce the diffuse layer surrounding the precursor nanostructures, and (III) the targeted use of additives to selectively eliminate unwanted components and to ensure more homogeneous crystal growth.
Modeling and Formal Analysis of Meta-Ecosystems with Dynamic Structure using Graph Transformation
(2022)
The dynamics of ecosystems are of crucial importance. Various model-based approaches exist to understand and analyze their internal effects. In this paper, we model the space structure dynamics and ecological dynamics of meta-ecosystems using the formal technique of Graph Transformation (GT). We build GT models to describe how a meta-ecosystem (modeled as a graph) can evolve over time (modeled by GT rules) and analyze these GT models with respect to qualitative properties such as the existence of structural stabilities. As a case study, we build three GT models describing the space structure dynamics and ecological dynamics of three different savanna meta-ecosystems. The first GT model considers a savanna meta-ecosystem that is limited in space to two ecosystem patches, whereas the other two GT models consider two savanna meta-ecosystems that are unlimited in the number of ecosystem patches and differ only in one GT rule describing how the space structure of the meta-ecosystem grows. In the first two GT models, the space structure dynamics and ecological dynamics of the meta-ecosystem show two main structural stabilities: one based on grassland-savanna-woodland transitions and one based on grassland-desert transitions. The transition between these two structural stabilities is driven by high-intensity fires affecting the tree components. In the third GT model, the GT rule for savanna regeneration induces desertification and therefore a collapse of the meta-ecosystem. We believe that GT models provide a complementary avenue to existing approaches for rigorously studying ecological phenomena.
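The rule-based mechanics described here can be sketched minimally: a graph with labelled patch nodes and adjacency edges, and rules that match a pattern and rewrite it. The patch states and the two rules below are illustrative stand-ins, not the paper's actual GT rule set:

```python
# A meta-ecosystem as a labelled graph: nodes are patches with a vegetation
# state, edges are spatial adjacencies. A rule matches a node pattern and
# relabels it (illustrative rules, not the paper's GT models).

def apply_rule(graph, match_state, new_state):
    """Apply a simple relabelling rule to every matching patch; return the
    rewritten graph and whether anything matched."""
    nodes = {name: (new_state if state == match_state else state)
             for name, state in graph["nodes"].items()}
    changed = nodes != graph["nodes"]
    return {"nodes": nodes, "edges": graph["edges"]}, changed

# Two adjacent patches, as in the space-limited first model.
meta = {"nodes": {"p1": "grassland", "p2": "savanna"},
        "edges": [("p1", "p2")]}

# Rule: grassland patches transition to savanna (tree encroachment).
meta, changed = apply_rule(meta, "grassland", "savanna")
# Rule: high-intensity fire pushes savanna back to grassland.
meta, _ = apply_rule(meta, "savanna", "grassland")
```

Iterating such rules and checking which labellings recur is the graph-level analogue of the structural-stability analysis the paper performs formally.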
Stimuli-promoted in situ formation of hydrogels with thiol/thioester containing peptide precursors
(2022)
Hydrogels are potential synthetic ECM-like substitutes since they provide functional and structural similarities to soft tissues. They can be prepared by crosslinking macromolecules or by polymerizing suitable precursors. The crosslinks are not necessarily covalent bonds but can also be formed by physical interactions such as π-π interactions, hydrophobic interactions, or H-bonding. On-demand in situ forming hydrogels have garnered increased interest over preformed gels, especially for biomedical applications, due to the relative ease of in vivo delivery and of filling cavities. The thiol-Michael addition reaction provides a straightforward and robust strategy for in situ gel formation owing to its fast reaction kinetics and ability to proceed under physiological conditions. The incorporation of a trigger function into a crosslinking system becomes even more interesting, since gelling can then be controlled with a stimulus of choice. The use of small-molar-mass crosslinker precursors with active groups orthogonal to the thiol-Michael-type electrophile provides the opportunity to implement on-demand in situ crosslinking without compromising the fast reaction kinetics.
It was postulated that short peptide sequences, owing to the broad range of structure-function relations afforded by the different constituent amino acids, can be exploited for stimuli-promoted in situ covalent crosslinking and gelation applications. The advantage of this system over conventional polymer-polymer hydrogel systems is the ability to tune and predict material properties at the molecular level.
The main aim of this work was to develop a simplified and biologically friendly stimuli-promoted in situ crosslinking and hydrogelation system using peptide mimetics as latent crosslinkers. The approach uses a single thiodepsipeptide sequence to achieve separate pH- and enzyme-promoted gelation systems with little modification to the thiodepsipeptide sequence. The realization of this aim required the completion of three milestones.
First, after deciding on the thiol-Michael reaction as an effective in situ crosslinking strategy, a thiodepsipeptide, Ac-Pro-Leu-Gly-SLeu-Leu-Gly-NEtSH (TDP), with an expected propensity towards pH-dependent thiol-thioester exchange (TTE) activation, was proposed as a suitable crosslinker precursor for the pH-promoted gelation system. Prior to the synthesis of the proposed peptide mimetic, knowledge of the thiol-Michael reactivity of the would-be activated thiol moiety, SH-Leu, which is internally embedded in the thiodepsipeptide, was required. In line with pKa requirements for a successful TTE, the reactivity of a more acidic thiol, SH-Phe, was also investigated to aid the selection of the best thiol to incorporate into the thioester-bearing peptide-based crosslinker precursor. Using ‘pseudo’ 2D-NMR investigations, it was found that only reactions involving SH-Leu yielded the expected thiol-Michael product, an observation attributed to the steric hindrance of the bulkier SH-Phe. The fast reaction rates and complete acrylate/maleimide conversion obtained with SH-Leu at pH 7.2 and higher allowed SH-Phe to be ruled out as a candidate thiol for the synthesis of the peptide mimetic.
Based on the initial studies, the proposed Ac-Pro-Leu-Gly-SLeu-Leu-Gly-NEtSH was kept unmodified for the pH-promoted gelation system. The subtle difference in pKa values between SH-Leu (the thioester thiol) and the terminal cysteamine thiol should, from theoretical considerations, be enough to effect a ‘pseudo’ intramolecular TTE. In polar protic solvents and under basic aqueous conditions, TDP successfully undergoes a ‘pseudo’ intramolecular TTE reaction to yield an α,ω-dithiol tripeptide, HSLeu-Leu-Gly-NEtSH. The pH dependence of thiolate ion generation by the cysteamine thiol provided the needed stimulus (pH) for the overall success of the TTE (activation step) – thiol-Michael addition (crosslinking) strategy.
Secondly, with potential biomedical applications in focus, the susceptibility of TDP, like other thioesters, to intermolecular TTE was probed with a group of thiols of varying pKa values, since biological milieus characteristically contain peptide/protein thiols. L-cysteine, a biologically relevant thiol, and methylthioglycolate, a small-molecular-weight thiol with a relatively similar thiol pKa, both led to an increased concentration of the dithiol crosslinker when reacted with TDP. In the presence of acidic thiols (p-NTP and 4MBA), a decrease in the dithiol concentration was observed, which can be attributed to the inability of the TTE tetrahedral intermediate to dissociate into exchange products and is in line with the pKa requirements for a successful TTE reaction. These results make TDP more attractive and potentially the first crosslinker precursor for applications in biologically relevant media.
Finally, the ability of TDP to promote pH-sensitive in situ gel formation was probed with maleimide-functionalized 4-arm polyethylene glycol polymers in tris-buffered media of varying pH. When a 1:1 thiol:maleimide molar ratio was used, TDP-PEG4MAL hydrogels formed within 3, 12, and 24 hours at pH values of 8.5, 8.0, and 7.5, respectively. However, gelation times of 3, 5, and 30 minutes were observed for the same pH trend when the thiol:maleimide molar ratio was increased to 2:1.
A direct correlation of thiol content with the G′ of the gels at each pH could also be drawn by comparing gels with thiol:maleimide ratios of 1:1 to those with 2:1 ratios. This is supported by the fact that the storage modulus (G′) depends linearly on the crosslinking density of the polymer. The initial G′ of all gels ranged between 200 and 5000 Pa, which falls in the range of elasticities of certain tissue microenvironments, for example brain tissue (200–1000 Pa) and adipose tissue (2500–3500 Pa).
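The stated linear relation between G′ and crosslinking density is captured by the affine rubber-elasticity estimate G′ ≈ νRT, with ν the molar density of elastically effective network strands. A sketch of the densities implied by the reported modulus range (an order-of-magnitude illustration, not a measurement from this work):

```python
R = 8.314   # gas constant, J mol^-1 K^-1
T = 310.0   # K, near-physiological temperature

def crosslink_density(G_prime):
    """Molar density of elastically effective network strands (mol m^-3)
    implied by the affine rubber-elasticity estimate G' = nu * R * T
    (order-of-magnitude illustration only)."""
    return G_prime / (R * T)

# The reported range of initial storage moduli, 200-5000 Pa:
nu_soft = crosslink_density(200.0)    # softest gels
nu_stiff = crosslink_density(5000.0)  # stiffest gels
```

Because the relation is linear, the 25-fold spread in G′ maps directly onto a 25-fold spread in implied strand density, consistent with the thiol:maleimide ratio comparison above.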
The knowledge gained from this study on designing and tuning the exchange reaction of thioester-containing peptide mimetics will give those working in the field further insight into the development of new sequences tailored towards specific applications.
TTE substrate design using peptide mimetics as presented in this work has revealed interesting new insights relative to the state of the art. Using the results obtained as a reference, the strategy can be extended to the controlled delivery of active molecules needed for other robust and high-yielding crosslinking reactions in biomedical applications. Applications for this sequentially coupled functional system could be seen, e.g., in the treatment of inflamed tissues associated with the urinary tract, such as bladder infections, for which pH levels above 7 have been reported. By the inclusion of cell-adhesion peptide motifs, the hydrogel network formed at this pH could act as a new support layer for the healing of damaged epithelium, as shown in interfacial gel formation experiments using TDP and PEG4MAL droplets.
The versatility of the thiodepsipeptide sequence Ac-Pro-Leu-Gly-SLeu-Leu-Gly (TDPo) was extended to the design and synthesis of an MMP-sensitive 4-arm PEG-TDPo conjugate. Cleavage of TDPo at the Gly-SLeu bond yields active thiol units for the subsequent reaction of orthogonal Michael-acceptor moieties. One advantage of stimuli-promoted in situ crosslinking systems using short peptides should be the ease of designing the required peptide molecules, owing to the predictability of peptide function from sequence structure. Consequently, the functionalisation of a 4-arm PEG core with the collagenase-active TDPo sequence yielded an MMP-sensitive 4-arm thiodepsipeptide-PEG conjugate (PEG4TDPo) substrate.
Cleavage studies using a thiol fluorometric assay in the presence of MMP-2 and MMP-9 confirmed the susceptibility of PEG4TDPo towards these enzymes. The time-dependent increase in fluorescence intensity in the presence of the thiol assay signifies the successful cleavage of TDPo at the Gly-SLeu bond, as expected. It was observed that cleavage studies with the thiol fluorometric assay introduce a sigmoidal, non-Michaelis-Menten-type kinetic profile, making it difficult to accurately determine the enzyme cycling parameters kcat and KM.
Gelation studies with PEG4MAL at 10 wt% concentration revealed faster gelation with MMP-2 than with MMP-9, with gelation times of 28 and 40 min, respectively. Possible hydrolytic cleavage of PEG4TDPo resulted in the gelation of PEG4MAL blank samples, but only after 60 minutes of reaction. From theoretical considerations, the simultaneous gelation reaction would be expected to impact the enzymatic cleavage more negatively than the hydrolytic cleavage. Quantifying the exact contribution of hydrolytic cleavage of PEG4TDPo would, however, require additional studies.
In summary, this new and simplified in situ crosslinking system using peptide-based crosslinker precursors with tuneable properties exhibited in situ gelation kinetics comparable to those reported for already active dithiols. The advantageous on-demand functionality associated with its pH sensitivity and physiological compatibility makes it a strong candidate worth further research as far as biomedical applications in general and on-demand material synthesis are concerned.
Results from the MMP-promoted gelation system unveil a simple but unexplored approach for the in situ synthesis of covalently crosslinked soft materials, which could lead to an alternative pathway for addressing cancer metastasis by using MMP overexpression as a trigger. This goal has so far not been reached with MMP inhibitors, despite the extensive work in this regard.
X-rays are integral to furthering our knowledge of exoplanetary systems. In this work we discuss the use of X-ray observations to understand star-planet interactions, mass-loss rates of an exoplanet’s atmosphere, and the study of an exoplanet’s atmospheric components using future X-ray spectroscopy.
The low-mass star GJ 1151 was reported to display variable low-frequency radio emission, which is an indication of coronal star-planet interactions with an unseen exoplanet. In chapter 5 we report the first X-ray detection of GJ 1151’s corona based on XMM-Newton data. Averaged over the observation, we detect the star with a low coronal temperature of 1.6 MK and an X-ray luminosity of LX = 5.5 × 10²⁶ erg/s. This is compatible with the coronal assumptions for a sub-Alfvénic star-planet interaction origin of the observed radio signals from this star.
In chapter 6, we aim to characterise the high-energy environment of known exoplanets and estimate their mass-loss rates. This work is based on the soft X-ray instrument eROSITA on board the Spectrum Roentgen Gamma (SRG) mission, along with archival data from ROSAT, XMM-Newton, and Chandra. We use these four X-ray source catalogues to derive X-ray luminosities of exoplanet host stars in the 0.2-2 keV energy band. A catalogue of the mass-loss rates of 287 exoplanets is presented, with 96 of these planets characterised for the first time using new eROSITA detections. Of these first-time detections, 14 are of transiting exoplanets that undergo irradiation from their host stars at a level known to cause observable evaporation signals in other systems, making them suitable for follow-up observations.
In the next generation of space observatories, X-ray transmission spectroscopy of an exoplanet’s atmosphere will be possible, allowing for a detailed look into the atmospheric composition of these planets. In chapter 7, we model sample spectra using a toy model of an exoplanetary atmosphere to predict what exoplanet transit observations with future X-ray missions such as Athena will look like. We then estimate the observable X-ray transmission spectrum for a typical Hot Jupiter-type exoplanet, giving us insights into the advances in X-ray observations of exoplanets in the decades to come.
This paper aims to contribute a different approach to transitional justice, one in which political decisions are brought to the forefront of the research. Theory asserts that, after a transition to democracy, it is the constituency that defines the direction a country will take; therefore, pleasing them should be at the fore of the responses taken by those in power. However, reality distances itself from theory: history provides many examples of the contrary, which indicates that the politicization of transitional justice is an ever-present event. The first section will outline current definitions and obstacles faced by transitional justice, focusing on the implicit ties between them and the aforementioned politicization. An original categorization of transitional justice as a method of analysis will also be introduced, which I term Political Opportunism. The case of Argentina, a country that is usually described as a model to export but that, after 35 years, is still dealing with the consequences of the contradictions of using several methods of justice, will then be reinterpreted through this perspective. At the end of the paper, the inevitable question will be posed: can this new angle be exported and implemented in every transition?
Cosmic-ray neutron sensing (CRNS) is a non-invasive tool for measuring hydrogen pools such as soil moisture, snow or vegetation. The intrinsic integration over a radial hectare-scale footprint is a clear advantage for averaging out small-scale heterogeneity, but on the other hand the data may become hard to interpret in complex terrain with patchy land use.
This study presents a directional shielding approach that blocks neutrons arriving from certain angles while counting those entering the detector from other angles, and explores its potential to gain a sharper horizontal view of the surrounding soil moisture distribution.
Using the Monte Carlo code URANOS (Ultra Rapid Neutron-Only Simulation), we modelled the effect of additional polyethylene shields on the horizontal field of view and assessed its impact on the epithermal count rate, propagated uncertainties and aggregation time.
The results demonstrate that directional CRNS measurements are strongly dominated by isotropic neutron transport, which dilutes the signal of the targeted direction especially from the far field. For typical count rates of customary CRNS stations, directional shielding of half-spaces could not lead to acceptable precision at a daily time resolution. However, the mere statistical distinction of two rates should be feasible.
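Whether two directional count rates can be "merely statistically distinguished" is a Poisson-counting question. A sketch with assumed, illustrative count rates (not the study's numbers): requiring the count difference to exceed z standard deviations of its own noise yields the needed aggregation time.

```python
def aggregation_hours(r1, r2, z=1.96):
    """Hours of integration needed to separate two Poisson count rates
    r1 > r2 (counts per hour) at significance z, using the fact that the
    difference N1 - N2 of two Poisson counts has variance N1 + N2.
    Solve (r1 - r2) * t >= z * sqrt((r1 + r2) * t) for t."""
    return z ** 2 * (r1 + r2) / (r1 - r2) ** 2

# Illustrative: two half-space detectors counting 600 vs 500 neutrons/hour.
t_needed = aggregation_hours(600.0, 500.0)
```

With these assumed rates the two half-spaces separate at 95% confidence in well under an hour, while the quadratic dependence on the rate difference shows why resolving small soil-moisture contrasts at daily resolution is much harder.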
Carbon dioxide removal (CDR) moves atmospheric carbon to geological or land-based sinks. In a first-best setting, the optimal use of CDR is achieved by a removal subsidy that equals the optimal carbon tax and marginal damages. We derive second-best subsidies for CDR when no global carbon price exists but a national government implements a unilateral climate policy. We find that the optimal carbon tax differs from an optimal CDR subsidy because of carbon leakage, terms-of-trade and fossil resource rent dynamics. First, the optimal removal subsidy tends to be larger than the carbon tax because of lower supply-side leakage on fossil resource markets. Second, terms-of-trade effects exacerbate this wedge for net resource exporters, implying even larger removal subsidies. Third, the optimal removal subsidy may fall below the carbon tax for resource-poor countries when marginal environmental damages are small.
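The leakage-driven wedge between tax and subsidy can be illustrated with a stylized calculation (toy functional form and numbers, not the paper's model): equalizing the marginal cost per unit of net atmospheric reduction across the two instruments implies s = t·(1 − l_s)/(1 − l_t), which exceeds t whenever removal leakage l_s is below emission leakage l_t.

```python
def removal_subsidy(carbon_tax, leak_tax, leak_removal):
    """Toy second-best removal subsidy that equates the marginal cost per
    unit of *net* atmospheric reduction across instruments:
        s / (1 - leak_removal) = t / (1 - leak_tax).
    Stylized illustration only; not the paper's derivation."""
    return carbon_tax * (1.0 - leak_removal) / (1.0 - leak_tax)

# Illustrative numbers: a 100 $/tCO2 tax, 20% supply-side leakage on taxed
# emissions, 5% leakage on subsidized removals.
s = removal_subsidy(100.0, 0.20, 0.05)   # exceeds the tax, as in the paper
```

With equal leakage rates the formula collapses to s = t, recovering the first-best equivalence of tax and subsidy stated in the abstract.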
Pictures are a medium that helps make the past tangible and preserve memories. Without context, they are not able to do so. Pictures are brought to life by their associated stories. However, the older pictures become, the fewer contemporary witnesses can tell these stories.
Especially for large, analog picture archives, knowledge and memories are spread over many people. This creates several challenges: First, the pictures must be digitized to save them from decaying and make them available to the public. Since a simple listing of all the pictures is confusing, the pictures should be structured accessibly. Second, known information that makes the stories vivid needs to be added to the pictures. Users should get the opportunity to contribute their knowledge and memories. To make this usable for all interested parties, even for older, less technophile generations, the interface should be intuitive and error-tolerant.
The resulting requirements are not covered in their entirety by any existing software solution without losing the intuitive interface or the scalability of the system.
Therefore, we have developed our digital picture archive within the scope of a bachelor project in cooperation with the Bad Harzburg-Stiftung. For the implementation of this web application, we use the UI framework React in the frontend, which communicates via a GraphQL interface with the Content Management System Strapi in the backend. The use of this system enables our project partner to create an efficient process from scanning analog pictures to presenting them to visitors in an organized and annotated way. To customize the solution for both picture delivery and information contribution for our target group, we designed prototypes and evaluated them with people from Bad Harzburg. This helped us gain valuable insights into our system’s usability and future challenges as well as requirements.
Our web application is already being used daily by our project partner. During the project, we still came up with numerous ideas for additional features to further support the exchange of knowledge.
Timing of initial school enrollment may vary considerably for various reasons, such as early or delayed enrollment and skipped or repeated school classes. Accordingly, the age range within school grades includes older- (OTK) and younger-than-keyage (YTK) children. Hardly any information is available on the impact of the timing of school enrollment on physical fitness. There is evidence from a related research topic showing large differences in academic performance between OTK and YTK children versus keyage children. Thus, the aim of this study was to compare the physical fitness of OTK (N = 26,540) and YTK (N = 2,586) children versus keyage children (N = 108,295) in a representative sample of German third graders. Physical fitness tests comprised cardiorespiratory endurance, coordination, speed, and lower and upper limb muscle power. Predictions of physical fitness performance for YTK and OTK children were estimated using data from keyage children, taking age, sex, school, and assessment year into account. Data were recorded annually between 2011 and 2019. The difference between observed and predicted z-scores yielded a delta z-score that was used as the dependent variable in the linear mixed models. Findings indicate that OTK children showed poorer performance compared to keyage children, especially in coordination, and that YTK children outperformed keyage children, especially in coordination. Teachers should be aware that OTK children show poorer physical fitness performance compared to keyage children.
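The delta z-score construction reduces to simple arithmetic: predict each OTK/YTK child's z-score from a model fitted on keyage children, then subtract. The linear form and coefficients below are hypothetical, purely to show the arithmetic (the study additionally conditions on school and assessment year):

```python
def predicted_z(age_years, sex_is_male, b0=-2.2, b_age=0.25, b_sex=0.10):
    """Predicted fitness z-score from a hypothetical linear model fitted on
    keyage children; intercept and coefficients are illustrative only."""
    return b0 + b_age * age_years + b_sex * (1 if sex_is_male else 0)

def delta_z(observed_z, age_years, sex_is_male):
    """Observed minus predicted z-score: the dependent variable used in the
    study's linear mixed models."""
    return observed_z - predicted_z(age_years, sex_is_male)

# An OTK boy aged 9.5 who scored z = -0.1 on coordination:
d = delta_z(-0.1, 9.5, True)   # negative delta => poorer than predicted
```

A negative delta means the child underperforms what a keyage child of the same age and sex would be expected to score, which is the pattern reported for OTK children.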
Neural conversation models aim to predict appropriate contributions to a (given) conversation by using neural networks trained on dialogue data. A specific strand focuses on non-goal-driven dialogues, first proposed by Ritter et al. (2011), who investigated the task of transforming an utterance into an appropriate reply. This strand then evolved into dialogue system approaches using long dialogue histories and additional background context. Contributing meaningfully and appropriately to a conversation is a complex task, and therefore research in this area has been very diverse: Serban et al. (2016), for example, looked into utilizing variable-length dialogue histories, Zhang et al. (2018) added additional context to the dialogue history, Wolf et al. (2019) proposed a model based on pre-trained Self-Attention neural networks (Vaswani et al., 2017), and Dinan et al. (2021) investigated safety issues of these approaches. This trend can be seen as a transformation from trying to somehow carry on a conversation to generating appropriate replies in a controlled and reliable way.
In this thesis, we first elaborate on the meaning of appropriateness in the context of neural conversation models by drawing inspiration from the Cooperative Principle (Grice, 1975). We define what an appropriate contribution has to be by operationalizing these maxims as demands on conversation models: being fluent, informative, consistent with the given context, coherent, and following a social norm. We then identify different targets (or intervention points) for achieving conversational appropriateness by reviewing recent research in the field.
In this thesis, we investigate the aspect of consistency towards context in greater detail, being one aspect of our interpretation of appropriateness.
During the research, we developed a new context-based dialogue dataset (KOMODIS) that combines factual and opinionated context with dialogues. The KOMODIS dataset is publicly available, and we use the data in this thesis to gather new insights into context-augmented dialogue generation. We further introduce a new way of encoding context within Self-Attention-based neural networks. To that end, we elaborate on the issue of space complexity arising from knowledge graphs and propose a concise encoding strategy for structured context, inspired by graph neural networks (Gilmer et al., 2017), to reduce the space complexity of the additional context. We discuss the limitations of context augmentation for neural conversation models, explore the characteristics of knowledge graphs, and explain how we create and augment knowledge graphs for our experiments.
Lastly, we analyzed the potential of reinforcement and transfer learning to improve context-consistency for neural conversation models. We find that current reward functions need to be more precise to enable the potential of reinforcement learning, and that sequential transfer learning can improve the subjective quality of generated dialogues.
Current business organizations want to be more efficient and constantly evolving to find ways to retain talent. It is well established that visionary leadership plays a vital role in organizational success and contributes to a better working environment. This study aims to determine the effect of visionary leadership on employees' perceived job satisfaction. Specifically, it investigates whether the mediators meaningfulness at work and commitment to the leader impact the relationship. I take support from job demand resource theory to explain the overarching model used in this study and broaden-and-build theory to leverage the use of mediators.
To test the hypotheses, evidence was collected in a multi-source, time-lagged field study of 95 leader-follower dyads. The data were collected in a three-wave study, with each survey administered one month apart. Data on employee perceptions of visionary leadership were collected at T1, data for both mediators at T2, and employee perceptions of job satisfaction at T3. The findings show that meaningfulness at work and commitment to the leader play positive intervening roles (in the form of a chain) in the indirect influence of visionary leadership on employee perceptions of job satisfaction.
This research offers contributions to literature and theory by first broadening the existing knowledge on the effects of visionary leadership on employees. Second, it contributes to the literature on constructs meaningfulness at work, commitment to the leader, and job satisfaction. Third, it sheds light on the mediation mechanism dealing with study variables in line with the proposed model. Fourth, it integrates two theories, job demand resource theory and broaden-and-build theory providing further evidence. Additionally, the study provides practical implications for business leaders and HR practitioners.
Overall, my study discusses the potential of visionary leadership behavior to elevate employee outcomes. The study aligns with previous research and answers several calls for further research on visionary leadership, job satisfaction, and mediation mechanisms involving meaningfulness at work and commitment to the leader.
Physical fatigue (PF) negatively affects postural control, resulting in impaired balance performance in young and older adults. Similar effects on postural control can be observed for mental fatigue (MF), mainly in older adults; controversial results exist for young adults. There is a void in the literature on the effects of fatigue on balance and cortical activity. Therefore, this study aimed to examine the acute effects of PF and MF on postural sway and cortical activity. Fifteen healthy young adults aged 28 ± 3 years participated in this study. MF and PF protocols, comprising a computer-based attention network test and an all-out repeated sit-to-stand task, respectively, were applied in random order. Pre- and post-fatigue, cortical activity and postural sway (i.e., center of pressure displacements [CoPd], velocity [CoPv], and CoP variability [CV CoPd, CV CoPv]) were tested during a challenging bipedal balance board task. Absolute spectral power was calculated for theta (4–7.5 Hz), alpha-2 (10.5–12.5 Hz), beta-1 (13–18 Hz), and beta-2 (18.5–25 Hz) in frontal, central, and parietal regions of interest (ROI) and baseline-normalized. Inference statistics revealed a significant time-by-fatigue interaction for CoPd (p = 0.009, d = 0.39, Δ 9.2%) and CoPv (p = 0.009, d = 0.36, Δ 9.2%), and a significant main effect of time for CoP variability (CV CoPd: p = 0.001, d = 0.84; CV CoPv: p = 0.05, d = 0.62). Post hoc analyses showed a significant increase in CoPd (p = 0.002, d = 1.03) and CoPv (p = 0.003, d = 1.03) following PF but not MF. For cortical activity, a significant time-by-fatigue interaction was found for relative alpha-2 power in parietal (p < 0.001, d = 0.06) areas. Post hoc tests indicated larger alpha-2 power increases after PF (p < 0.001, d = 1.69, Δ 3.9%) compared to MF (p = 0.001, d = 1.03, Δ 2.5%). In addition, changes in parietal alpha-2 power and measures of postural sway did not correlate significantly, irrespective of the applied fatigue protocol.
No significant changes were found for the other frequency bands, irrespective of the fatigue protocol and ROI under investigation. Thus, the applied PF protocol resulted in increased postural sway (CoPd and CoPv) and CoP variability, accompanied by enhanced alpha-2 power in the parietal ROI, whereas MF led to increased CoP variability and alpha-2 power in our sample of young adults. Potential cortical mechanisms underlying the greater increase in parietal alpha-2 power after PF are discussed but could not be clearly identified as the cause. Further research is therefore needed to evaluate alternative interpretations.
There is broad agreement among researchers that mind wandering is an obstacle to learning because it draws attention away from learning tasks. Accordingly, empirical findings reveal negative correlations between the frequency of mind wandering during learning and various learning outcomes (e.g., text retention). However, a few studies indicate positive effects of mind wandering on creativity in real-world learning environments. The present article reviews these studies and highlights potential benefits of mind wandering for learning that are mediated by creative processes. Furthermore, we propose various ways to promote useful mind wandering while minimizing its negative impact on learning.
These days, design thinking is no longer a “new approach”. Among practitioners as well as academics, interest in the topic has gathered pace over the past two decades. However, opinions are divided over the longevity of the phenomenon: whether design thinking is merely “old wine in new bottles,” a passing trend, or still evolving as it spreads to an increasing number of organizations and industries. Despite the growing relevance and diffusion of design thinking, knowledge of its actual status quo in organizations remains scarce. In a new study, the research team of Prof. Uebernickel and Stefanie Gerken investigates temporal developments and changes in design thinking practices in organizations over the past six years, comparing the results of the 2015 “Parts without a whole” study with current practices and future developments. Companies of all sizes and from different parts of the world participated in the survey. The findings from qualitative interviews with experts, i.e., people with years of experience in design thinking, were cross-checked against the results of an exploratory analysis of the survey data. This analysis uncovers significant variances and similarities in how design thinking is interpreted and applied in businesses.
An important goal in biotechnology and (bio-)medical research is the isolation of single cells from a heterogeneous cell population. These specialised cells are of great interest for bioproduction, diagnostics, drug development, (cancer) therapy, and research. To tackle emerging questions, an ever finer differentiation between target cells and non-target cells is required. Such precise differentiation poses a challenge for the growing number of available methods.
Since the physiological properties of cells are closely linked to their morphology, it is beneficial to include their appearance in the sorting decision. For established methods, however, this parameter cannot be addressed, so new methods are required for the identification and isolation of target cells. Consequently, a variety of new flow-based methods that use 2D imaging data to identify target cells within a sample have been developed and presented in recent years. As these methods aim for high throughput, the devices typically require highly complex fluid-handling techniques, making them expensive while offering limited image quality.
In this work, a new continuous flow system for image-based cell sorting was developed that uses dielectrophoresis to precisely handle cells in a microchannel. Dielectrophoretic forces are exerted by inhomogeneous alternating electric fields on polarisable particles (here: cells). In the present system, the electric fields can be switched on and off precisely and quickly by a signal generator. In addition to the resulting simple and effective cell handling, the system is characterised by the outstanding quality of the image data generated and its compatibility with standard microscopes. These aspects result in low complexity, making it both affordable and user-friendly.
With the developed cell sorting system, cells could be sorted reliably and efficiently according to cytosolic staining as well as morphological properties at different optical magnifications. The achieved purity of the target cell population was up to 95%, and about 85% of the sorted cells could be recovered from the system. The results obtained were in good agreement with theoretical considerations. The system achieved a throughput of up to 12,000 cells per hour, and cell viability studies indicated its high biocompatibility.
The results presented demonstrate the potential of image-based cell sorting using dielectrophoresis. The outstanding image quality and highly precise yet gentle handling of the cells set the system apart from other technologies. This results in enormous potential for processing valuable and sensitive cell samples.