Sharing marketplaces have emerged as the new Holy Grail of value creation by enabling exchanges between strangers. Identity disclosure, encouraged by platforms, cuts both ways: while it induces pre-transaction confidence, it is suspected of backfiring on information senders through its discriminatory potential. This study employs a discrete choice experiment to explore the role of names as signifiers of attributes prone to discrimination and the importance of accompanying cues in peer choices of a ridesharing offer. We quantify users' preferences for quality signals in monetary terms and provide evidence of a comparative disadvantage for male names of Middle Eastern descent, both for drivers and for co-travelers. This disadvantage translates into a lower willingness to accept and to pay for an offer. Market simulations confirm the robustness of the findings. Further, we find that females are choosier and include more signifiers of involuntary personal attributes in their decision-making. Price discounts and positive information only partly compensate for the initial disadvantage, and identity concealment is perceived negatively.
One for all, all for one
(2022)
We propose a conceptual model of acceptance of contact tracing apps based on the privacy calculus perspective. Moving beyond the duality of personal benefits and privacy risks, we theorize that users hold social considerations (i.e., social benefits and risks) that underlie their acceptance decisions. To test our propositions, we chose the context of COVID-19 contact tracing apps and conducted a qualitative pre-study and longitudinal quantitative main study with 589 participants from Germany and Switzerland. Our findings confirm the prominence of individual privacy calculus in explaining intention to use and actual behavior. While privacy risks are a significant determinant of intention to use, social risks (operationalized as fear of mass surveillance) have a notably stronger impact. Our mediation analysis suggests that social risks represent the underlying mechanism behind the observed negative link between individual privacy risks and contact tracing apps' acceptance. Furthermore, we find a substantial intention–behavior gap.
Simultaneous Barcode Sequencing of Diverse Museum Collection Specimens Using a Mixed RNA Bait Set
(2022)
A growing number of publications presenting results from sequencing natural history collection specimens reflect the importance of DNA sequence information from such samples. Ancient-DNA extraction and library preparation methods, in combination with target gene capture, are a way of unlocking archival DNA, including from formalin-fixed wet-collection material. Here we report on an experiment in which we used an RNA bait set containing baits from a wide taxonomic range of species for DNA hybridisation capture of nuclear and mitochondrial targets in natural history collection specimens. The bait set consists of 2,492 mitochondrial and 530 nuclear RNA baits and comprises specific barcode loci of diverse animal groups, including both invertebrates and vertebrates. The baits allowed us to capture DNA sequence information of target barcode loci from 84% of the 37 samples tested, with nuclear markers being captured more frequently and their consensus sequences being more complete than those of mitochondrial markers. Samples from dry material had a higher success rate than wet-collection specimens, although target sequence information could still be captured from 50% of the formalin-fixed samples. Our study illustrates how such efforts can be combined to implement barcode inventories of scientific collection material.
The highly structured nature of the educational sector demands effective policy mechanisms close to the needs of the field. This is why evidence-based policy making, endorsed by the European Commission under Erasmus+ Key Action 3, aims to align the domains of policy and practice. Against this background, this article addresses two issues: first, that there is a vertical gap in the translation of higher-level policies into local strategies and regulations; second, that there is a horizontal gap between educational domains in the policy awareness of individual players. These gaps were analyzed in quantitative and qualitative studies with domain experts from the fields of virtual mobility and teacher training. From our findings, we argue that the combination of both gaps puts the academic bridge from secondary to tertiary education at risk, including the associated knowledge proficiency levels. We discuss the role of digitalization in the academic bridge by asking: what value do the involved stakeholders expect from educational policies? As a theoretical basis, we rely on the model of value co-creation for and by stakeholders. We describe the instruments used along with the obtained results and proposed benefits. Moreover, we reflect on the methodology applied and derive recommendations for future academic bridge policies.
The purpose of this study was to examine the test-retest reliability and the convergent and discriminative validity of a new taekwondo-specific change-of-direction (COD) speed test with striking techniques (TST) in elite taekwondo athletes. Twenty (10 male and 10 female) elite (competing at national level) and top-elite (competing at national and international level) taekwondo athletes with an average background of 8.9 ± 1.3 years of systematic taekwondo training participated in this study. During the two-week test-retest period, various generic performance tests measuring COD speed, balance, speed, and jump performance were carried out in the first week and repeated as a retest in the second week. Three TST trials were conducted with each athlete, and the best trial was used for further analyses. The relevant performance measure derived from the TST was the time with striking penalty (TST-TSP). TST-TSP performances amounted to 10.57 ± 1.08 s for males and 11.74 ± 1.34 s for females. The reliability analysis of the TST performance was conducted after logarithmic transformation in order to address the problem of heteroscedasticity. In both groups, the TST demonstrated high relative test-retest reliability (intraclass correlation coefficient 0.80, 90% compatibility limits 0.47 to 0.93). For absolute reliability, the TST's typical error of measurement (TEM) and 90% compatibility limits were 4.6% (3.4 to 7.7) for males and 5.4% (3.9 to 9.0) for females. Owing to the homogeneous sample of taekwondo athletes, the TST's TEM exceeded the usual smallest important change (SIC), calculated with a 0.2 effect size, in both groups.
The new test showed mostly very large correlations with linear sprint speed (r = 0.71 to 0.85) and dynamic balance (r = −0.71 and −0.74), large correlations with COD speed (r = 0.57 to 0.60) and vertical jump performance (r = −0.50 to −0.65), and moderate correlations with horizontal jump performance (r = −0.34 to −0.45) and static balance (r = −0.39 to −0.44). Top-elite athletes showed better TST performances than their elite counterparts. Receiver operating characteristic analysis indicated that the TST effectively discriminated between top-elite and elite taekwondo athletes. In conclusion, the TST is a valid and sensitive test for evaluating COD speed with taekwondo-specific skills, and it is reliable when considering the ICC and TEM. Although the usefulness of the TST for detecting small performance changes in the present population is questionable, the TST can detect moderate changes in taekwondo-specific COD speed.
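The log-transform-based absolute reliability measure described above can be sketched as follows. The test-retest times below are invented for illustration (they are not the study's data), and the back-transformation of the typical error to a percent value is an assumption about the exact procedure used.

```python
import math

# Illustrative test-retest times (s) for a hypothetical group of athletes;
# these values are made up for demonstration, not taken from the study.
test = [10.4, 11.1, 10.8, 9.9, 10.6]
retest = [10.6, 10.9, 11.0, 10.1, 10.4]

# Log-transform to address heteroscedasticity, as described above.
diffs = [math.log(t) - math.log(r) for t, r in zip(test, retest)]

mean_d = sum(diffs) / len(diffs)
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (len(diffs) - 1))

# Typical error of measurement (TEM) on the log scale is the SD of the
# difference scores divided by sqrt(2); back-transformed to a coefficient
# of variation in percent.
tem_log = sd_d / math.sqrt(2)
tem_pct = 100 * (math.exp(tem_log) - 1)
print(f"TEM ≈ {tem_pct:.1f}%")
```

Whether such a TEM is useful then depends on comparing it against the smallest important change, as the abstract notes.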
This article chronologically examines film screenplays written by women in the Russian Empire during the 1910s. It first considers the earliest women screenwriters, Makarova and Tat'jana Suchotina-Tolstaja, who worked in the early 1910s in co-authorship with well-known authors (Makarova with the director Vladimir Gončarov; Suchotina-Tolstaja with her father Leo Tolstoy), and their films. It then discusses in more detail the film Ključi sčastʹja / The Keys to Happiness (Vladimir Gardin, Jakov Protazanov, 1913, Russian Empire), based on the novel by Anastasija Verbickaja. Verbickaja's film demonstrated that a woman screenwriter could be an independent author; it served as an impulse for the development of women's film dramaturgy in the Russian Empire, whose upswing began in the second half of the 1910s, and shaped certain expectations of films based on women's screenplays. Maria Kallaš, who worked in 1913 on the screenplays for adaptations of the Russian literary canon, criticised Verbickaja's text as pseudo-feminist and claimed in her essay "Ženskie kabare" ("Women's Cabaret") that women's literature did not yet have "a language of its own" (1916). Anna Mar began her work in cinema in 1914, parallel to Verbickaja's successors, and focused her films on social issues, namely the position of modern women in society. In doing so, Mar opened up a new perspective for the development of women's screenwriting.
This paper sheds new light on the role of communication for cartel formation. Using machine learning to evaluate free-form chat communication among firms in a laboratory experiment, we identify typical communication patterns for both explicit cartel formation and indirect attempts to collude tacitly. We document that firms are less likely to communicate explicitly about price fixing and more likely to use indirect messages when sanctioning institutions are present. This effect of sanctions on communication reinforces the direct cartel-deterring effect of sanctions as collusion is more difficult to reach and sustain without an explicit agreement. Indirect messages have no, or even a negative, effect on prices.
Sea level rise and coastal erosion have inundated large areas of Arctic permafrost. Submergence by warm and saline waters increases the rate of inundated permafrost thaw compared to sub-aerial thawing on land. Studying the contact between the unfrozen and frozen sediments below the seabed, also known as the ice-bearing permafrost table (IBPT), provides valuable information to understand the evolution of sub-aquatic permafrost, which is key to improving and understanding coastal erosion prediction models and potential greenhouse gas emissions. In this study, we use data from 2D electrical resistivity tomography (ERT) collected in the nearshore coastal zone of two Arctic regions that differ in their environmental conditions (e.g., seawater depth and resistivity) to image and study the subsea permafrost. The inversion of 2D ERT data sets is commonly performed using deterministic approaches that favor smoothed solutions, which are typically interpreted using a user-specified resistivity threshold to identify the IBPT position. In contrast, to target the IBPT position directly during inversion, we use a layer-based model parameterization and a global optimization approach to invert our ERT data. This approach results in ensembles of layered 2D model solutions, which we use to identify the IBPT and estimate the resistivity of the unfrozen and frozen sediments, including estimates of uncertainties. Additionally, we globally invert 1D synthetic resistivity data and perform sensitivity analyses to study, in a simpler way, the correlations and influences of our model parameters. The set of methods provided in this study may help to further exploit ERT data collected in such permafrost environments as well as for the design of future field experiments.
Objective: To examine the effect of plyometric jump training on skeletal muscle hypertrophy in healthy individuals.
Methods: A systematic literature search was conducted in the databases PubMed, SPORTDiscus, Web of Science, and Cochrane Library up to September 2021.
Results: Fifteen studies met the inclusion criteria. The main overall finding (44 effect sizes across 15 clusters; median = 2, range = 1–15 effects per cluster) indicated that plyometric jump training had small to moderate effects [standardised mean difference (SMD) = 0.47 (95% CIs = 0.23–0.71); p < 0.001] on skeletal muscle hypertrophy. Subgroup analyses for training experience revealed trivial to large effects in non-athletes [SMD = 0.55 (95% CIs = 0.18–0.93); p = 0.007] and trivial to moderate effects in athletes [SMD = 0.33 (95% CIs = 0.16–0.51); p = 0.001]. Regarding muscle groups, results showed moderate effects for the knee extensors [SMD = 0.72 (95% CIs = 0.66–0.78), p < 0.001] and equivocal effects for the plantar flexors [SMD = 0.65 (95% CIs = −0.25 to 1.55); p = 0.143]. As to the assessment methods of skeletal muscle hypertrophy, findings indicated trivial to small effects for prediction equations [SMD = 0.29 (95% CIs = 0.16–0.42); p < 0.001] and moderate to large effects for ultrasound imaging [SMD = 0.74 (95% CIs = 0.59–0.89); p < 0.001]. Meta-regression analysis indicated that the weekly session frequency moderates the effect of plyometric jump training on skeletal muscle hypertrophy, with a higher weekly session frequency inducing larger hypertrophic gains [β = 0.3233 (95% CIs = 0.2041–0.4425); p < 0.001]. We found no clear evidence that age, sex, total training period, single session duration, or the number of jumps per week moderate the effect of plyometric jump training on skeletal muscle hypertrophy [β = −0.0133 to 0.0433 (95% CIs = −0.0387 to 0.1215); p = 0.101–0.751].
Conclusion: Plyometric jump training can induce skeletal muscle hypertrophy, regardless of age and sex. There is evidence for relatively larger effects in non-athletes compared with athletes. Further, the weekly session frequency seems to moderate the effect of plyometric jump training on skeletal muscle hypertrophy, whereby more frequent weekly plyometric jump training sessions elicit larger hypertrophic adaptations.
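As a reference for the effect-size metric pooled in the meta-analysis above, a standardised mean difference with small-sample correction (Hedges' g) can be computed as follows. The muscle-thickness numbers are invented for illustration, and `hedges_g` is a hypothetical helper, not the meta-analysis code itself.

```python
import math

def hedges_g(mean_exp, sd_exp, n_exp, mean_ctrl, sd_ctrl, n_ctrl):
    """Standardised mean difference between two groups (Hedges' g)."""
    # Pooled standard deviation of the two groups
    sd_pooled = math.sqrt(((n_exp - 1) * sd_exp**2 + (n_ctrl - 1) * sd_ctrl**2)
                          / (n_exp + n_ctrl - 2))
    d = (mean_exp - mean_ctrl) / sd_pooled
    # Small-sample bias correction factor J
    j = 1 - 3 / (4 * (n_exp + n_ctrl) - 9)
    return d * j

# Hypothetical muscle-thickness data (cm), invented for illustration.
g = hedges_g(mean_exp=2.45, sd_exp=0.30, n_exp=12,
             mean_ctrl=2.30, sd_ctrl=0.28, n_ctrl=12)
print(round(g, 2))
```

Values of g around 0.2, 0.5, and 0.8 are conventionally read as small, moderate, and large, which is the scale the subgroup results above are phrased in.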
Hier geblieben?
(2022)
Historical research has long established that migration is not a deviation from a norm but rather a "constitutive element of human history" (J. Oltmer), and that the human being has thus always been a "homo migrans" (K.-J. Bade). The history of Brandenburg, too, has always been shaped by processes of immigration. There can, of course, be no talk of "tolerance" in the modern sense; in most cases the concern was the economically profitable admission of particular groups. Very often, however, settlement was also the result of flight, expulsion, and the violence of war. Using examples from the early Middle Ages to the present, this volume demonstrates the significance of immigration for Brandenburg. The arc spans from the Slavic immigration of the 8th/9th centuries to the arrival of Russian-Jewish "quota refugees" in the wake of German reunification, and from the Dutch, Jews, Huguenots, and revolutionary refugees of the early modern period to the Muslims, forced labourers, expellees, and GDR "foreign workers" of the 20th century: a history of the diversity of the Brandenburg region and its population as reflected in immigration.
The main goal of this dissertation is to experimentally investigate how focus is realised, perceived, and processed by native Turkish speakers, independent of preconceived notions of positional restrictions. Crucially, there are various issues and scientific debates surrounding focus in the Turkish language in the existing literature (chapter 1). It is argued in this dissertation that two factors led to the stagnant literature on focus in Turkish: the lack of clearly defined, modern understandings of information structure and its fundamental notion of focus, and the ongoing and ill-defined debate surrounding the question of whether there is an immediately preverbal focus position in Turkish. These issues gave rise to specific research questions addressed across this dissertation. Specifically, we were interested in how the focus dimensions such as focus size (comparing narrow constituent and broad sentence focus), focus target (comparing narrow subject and narrow object focus), and focus type (comparing new-information and contrastive focus) affect Turkish focus realisation and, in turn, focus comprehension when speakers are provided syntactic freedom to position focus as they see fit.
To provide data on these core goals, we presented three behavioural experiments based on a systematic framework of information structure and its notions (chapter 2): (i) a production task with trigger wh-questions and contextual animations manipulated to elicit the focus dimensions of interest (chapter 3), (ii) a timed acceptability judgment task in which participants listened to the recorded answers from our production task (chapter 4), and (iii) a self-paced reading task to gather on-line processing data (chapter 5).
Based on the results of the conducted experiments, multiple conclusions are made in this dissertation (chapter 6). Firstly, this dissertation demonstrated empirically that there is no focus position in Turkish, neither in the sense of a strict focus position language nor as a focally loaded position facilitating focus perception and/or processing. While focus is, in fact, syntactically variable in the Turkish preverbal area, this is a consequence of movement triggered by other IS aspects like topicalisation and backgrounding, and the observational markedness of narrow subject focus compared to narrow object focus. As for focus type in Turkish, this dimension is not associated with word order in production, perception, or processing. Significant acoustic correlates of focus size (broad sentence focus vs narrow constituent focus) and focus target (narrow subject focus vs narrow object focus) were observed in fundamental frequency and intensity, representing focal boost, (postfocal) deaccentuation, and the presence or absence of a phrase-final rise in the prenucleus, while the perceivability of these effects remains to be investigated. In contrast, no acoustic correlates of focus type in simple, three-word transitive structures were observed, with focus types being interchangeable in mismatched question-answer pairs. Overall, the findings of this dissertation highlight the need for experimental investigations regarding focus in Turkish, as theoretical predictions do not necessarily align with experimental data. As such, the fallacy of implying causation from correlation should be strictly kept in mind, especially when constructions coincide with canonical structures, such as the immediately preverbal position in narrow object foci. Finally, numerous open questions remain to be explored, especially as focus and word order in Turkish are multifaceted. 
As shown, givenness is a confounding factor when investigating focus types, while thematic role assignment potentially confounds word order preferences. Further research based on established, modern information structure frameworks is needed, with chapter 5 concluding with specific recommendations for such future research.
The Ice, Cloud, and Land Elevation Satellite-2 (ICESat-2) with its land and vegetation height data product (ATL08), and Global Ecosystem Dynamics Investigation (GEDI) with its terrain elevation and height metrics data product (GEDI Level 2A) missions have great potential to globally map ground and canopy heights. Canopy height is a key factor in estimating above-ground biomass and its seasonal changes; these satellite missions can also improve estimated above-ground carbon stocks. This study presents a novel Sparse Vegetation Detection Algorithm (SVDA) which uses ICESat-2 (ATL03, geolocated photons) data to map tree and vegetation heights in a sparsely vegetated savanna ecosystem. The SVDA consists of three main steps: First, noise photons are filtered using the signal confidence flag from ATL03 data and local point statistics. Second, we classify ground photons based on photon height percentiles. Third, tree and grass photons are classified based on the number of neighbors. We validated tree heights against field measurements (n = 55), finding a root-mean-square error (RMSE) of 1.82 m for SVDA, 1.33 m for GEDI Level 2A (Geolocated Elevation and Height Metrics product), and 5.59 m for ATL08. Our results indicate that the SVDA is effective in identifying canopy photons in savanna ecosystems, where ATL08 performs poorly. We further identify seasonal vegetation height changes with an emphasis on vegetation below 3 m; widespread height changes in this class from two wet-dry cycles show maximum seasonal changes of 1 m, possibly related to seasonal grass-height differences. Our study shows the difficulties of vegetation measurements in savanna ecosystems but provides the first estimates of seasonal biomass changes.
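The three SVDA steps can be illustrated on a toy photon cloud. The search radius, neighbour threshold, percentile, and height cut-offs below are illustrative assumptions (not the published parameter values), and the grass/tree split uses height above ground as a simplified stand-in for the published neighbour-count rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy photon cloud: columns are along-track distance (m) and height (m).
ground = np.column_stack([rng.uniform(0, 100, 200), rng.normal(0.0, 0.2, 200)])
trees = np.column_stack([rng.uniform(40, 60, 80), rng.uniform(2.0, 8.0, 80)])
noise = np.column_stack([rng.uniform(0, 100, 20), rng.uniform(20, 50, 20)])
photons = np.vstack([ground, trees, noise])

# Step 1: filter noise photons using local point statistics
# (drop photons with few neighbours within a search window).
def neighbour_count(pts, radius=5.0):
    dx = np.abs(pts[:, None, 0] - pts[None, :, 0])
    dh = np.abs(pts[:, None, 1] - pts[None, :, 1])
    return ((dx < radius) & (dh < radius)).sum(axis=1) - 1  # exclude self

signal = photons[neighbour_count(photons) >= 3]

# Step 2: classify ground photons from a low height percentile.
ground_level = np.percentile(signal[:, 1], 10)
is_ground = signal[:, 1] < ground_level + 0.5

# Step 3: split the remaining photons into grass vs tree; here simply by
# height above ground, standing in for the neighbour-count rule.
height_above = signal[:, 1] - ground_level
is_tree = ~is_ground & (height_above > 3.0)
is_grass = ~is_ground & ~is_tree

print(is_ground.sum(), is_grass.sum(), is_tree.sum())
```

On real ATL03 data the signal-confidence flag would be applied before the local-statistics filter, and the percentile would be computed in along-track windows rather than globally.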
The starting point of this article is the occurrence of determiner-less and bare que relative complementizers like (en) que, ‘(in) that’, instead of (en) el que, ‘(in) which’, in Yucatecan Spanish (southeast Mexico). While reference grammars treat complementizers with a determiner as the standard option, previous diachronic research has shown that determiner-less complementizers actually predate relative complementizers with a determiner. Additionally, Yucatecan Spanish has been in long-standing contact with Yucatec Maya. Relative complementation in Yucatec Maya differs from that in Spanish (at least) in that the non-complex complementizer tu’ux (‘where’) is generally the only option for locative complementation. The paper explores monolingual and bilingual data from Yucatecan Spanish to discuss whether the determiner-less and bare que relative complementizers in our data constitute a historic remnant or a dialectal recast, possibly (but not necessarily) due to language contact. Although our pilot study may not answer these far-reaching questions, it does reveal two separate but intertwined developments: (i) a generally increased rate of bare que relative complementation, across both monolingual speakers of Spanish and Spanish–Maya bilinguals, compared to other Spanish varieties, and (ii) a preference for donde at the cost of other locative complementizer constructions in the bilingual group. Our analysis thus reveals intriguing differences between the complementizer preferences of monolingual and bilingual speakers, suggesting that different variational patterns caused by different (socio-)linguistic factors can co-develop in parallel in one and the same region.
The COVID-19 pandemic created the largest experiment in working from home. We study how persistent telework may change energy and transport consumption and costs in Germany, to assess the distributional and environmental implications should working from home stick. Based on data from the German Microcensus and available classifications of working-from-home feasibility for different occupations, we calculate the change in energy consumption and travel to work when 15% of employees work full time from home. Our findings suggest that telework translates into an annual increase in heating energy expenditure of 110 euros per worker and a decrease in transport expenditure of 840 euros per worker. All income groups would gain from telework, but high-income workers gain twice as much as low-income workers. The value of time saving is between 1.3 and 6 times greater than the savings from reduced travel costs, and almost 9 times higher for high-income workers than for low-income workers. The direct effects on CO₂ emissions due to reduced car commuting amount to 4.5 million tons of CO₂, representing around 3 percent of carbon emissions in the transport sector.
After a long phase of restriction and suppression by state language policy, regional languages in France have experienced growing interest and both private and state support since the 1970s. This also applies to Breton. Despite the continuing decline in the number of speakers, a constructed link between the Breton language and a "Breton identity" is noticeable, expressed in positive language attitudes towards the regional language. This master's thesis analyses language attitudes towards Breton on the basis of publicly broadcast video interviews with the leading candidates in the 2021 regional elections. Using a discourse-analytical approach, the politicians' spoken statements are examined for explicit and implicit positive and negative evaluations of the Breton language. Patterns across the interviews in the metaphors, argumentation structures, and topoi that occur point to the collective bodies of knowledge and assumptions on which the language attitudes are based. These form the basis for linguistic as well as language-policy actions.
Over the past decades, there has been a growing interest in ‘extreme events’ owing to the increasing threats that climate-related extremes such as floods, heatwaves, and droughts pose to society. While extreme events have diverse definitions across various disciplines, ranging from earth science to neuroscience, they are characterized mainly as dynamic occurrences within a limited time frame that impede the normal functioning of a system. Although extreme events are rare in occurrence, it has been found in various hydro-meteorological and physiological time series (e.g., river flows, temperatures, heartbeat intervals) that they may exhibit recurrent behavior, i.e., they do not end the lifetime of the system. The aim of this thesis is to develop sophisticated methods to study various properties of extreme events.
One of the main challenges in analyzing such extreme event-like time series is that they have large temporal gaps due to the paucity of observations of extreme events. As a result, existing time series analysis tools are usually not helpful for decoding the underlying information. I use the edit distance (ED) method to analyze extreme event-like time series in their unaltered form. ED is a specific distance metric, mainly designed to measure the similarity/dissimilarity between point process-like data. I combine ED with recurrence plot techniques to identify the recurrence property of flood events in the Mississippi River in the United States. I also use recurrence quantification analysis to show the deterministic properties and serial dependency in flood events.
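A minimal sketch of such an edit distance between two event sequences, in the spirit of the metric described above (and of the classic Victor–Purpura spike-train distance): events can be deleted or inserted at unit cost, or shifted in time at a cost proportional to the shift. The cost parameter `lam` and the cap on the shift cost are illustrative choices, not the thesis's exact formulation.

```python
def edit_distance(a, b, lam=1.0):
    """Edit distance between two sorted event-time sequences a and b."""
    n, m = len(a), len(b)
    # d[i][j] = cost of transforming the first i events of a
    # into the first j events of b (dynamic programming table).
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i          # delete all i events of a
    for j in range(1, m + 1):
        d[0][j] = j          # insert all j events of b
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            shift = lam * abs(a[i - 1] - b[j - 1])      # move an event in time
            d[i][j] = min(d[i - 1][j] + 1,              # deletion
                          d[i][j - 1] + 1,              # insertion
                          d[i - 1][j - 1] + min(shift, 2.0))  # capped shift
    return d[n][m]

# Two nearly identical flood-event sequences differ only by small shifts.
print(round(edit_distance([1.0, 5.0, 9.0], [1.2, 5.1, 9.0]), 3))  # → 0.3
```

Pairwise distances of this kind between windows of an event series are what recurrence plots and, later, network links are built from.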
After that, I use this non-linear similarity measure (ED) to compute the pairwise dependency in extreme precipitation event series. I incorporate the similarity measure within the framework of complex network theory to study the collective behavior of climate extremes. Under this architecture, the nodes are defined by the spatial grid points of the given spatio-temporal climate dataset. Each node is associated with a time series corresponding to the temporal evolution of the climate observation at that grid point. Finally, the network links are functions of the pairwise statistical interdependence between the nodes. Various network measures, such as degree, betweenness centrality, clustering coefficient, etc., can be used to quantify the network’s topology. We apply this methodology to study the spatio-temporal coherence pattern of extreme rainfall events in the United States and the Ganga River basin, which reveals its relation to various climate processes and the orography of the region.
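The network construction just described can be caricatured on toy data: nodes are grid points, links come from thresholded pairwise similarity of their event series, and degree summarises the topology. The similarity used here (Jaccard overlap of event days) is a simplified stand-in for the edit-distance measure of the thesis, and all sizes and thresholds are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_nodes, n_days = 6, 365

# Binary extreme-event series per node (1 = extreme event on that day).
events = (rng.random((n_nodes, n_days)) < 0.05).astype(int)
events[1] = events[0]  # force nodes 0 and 1 to behave coherently

# Similarity matrix: Jaccard index of event days for each node pair.
sim = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    for j in range(n_nodes):
        inter = np.sum(events[i] & events[j])
        union = np.sum(events[i] | events[j])
        sim[i, j] = inter / union if union else 0.0

# Adjacency matrix: link node pairs whose similarity exceeds a threshold,
# excluding self-links; degree counts each node's links.
adj = (sim > 0.5) & ~np.eye(n_nodes, dtype=bool)
degree = adj.sum(axis=1)
print(degree)
```

Only the artificially coherent pair ends up linked; on real data the degree field over the grid is what reveals spatially coherent rainfall regimes.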
The identification of precursors associated with the occurrence of extreme events in the near future is extremely important to prepare for an upcoming disaster and to mitigate the potential risks associated with such events. With this motivation, I propose an in-data prediction recipe for predicting the data structures that typically occur prior to extreme events using the echo state network, a type of recurrent neural network belonging to the reservoir computing framework. Unlike previous works that identify precursory structures in the same variable in which extreme events are manifested (the active variable), I predict these structures using data from another dynamic variable (the passive variable) that does not show large excursions from the nominal condition but carries imprints of these extreme events. Furthermore, my results demonstrate that the quality of prediction depends on the magnitude of the events: the higher the magnitude of the extreme, the better its predictability. I show quantitatively that this is because the input signals collectively form a more coherent pattern for an extreme event of higher magnitude, which enhances the efficiency of the machine in predicting forthcoming extreme events.
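The reservoir-computing setup above can be sketched with a minimal echo state network: a fixed random reservoir is driven by an input series, and only the linear readout is trained. Everything below (reservoir size, scalings, the noisy sine standing in for the driving variable, the least-squares readout) is an illustrative assumption, not the thesis's configuration.

```python
import numpy as np

rng = np.random.default_rng(42)
n_res, n_steps = 50, 500

# Driving input: a noisy sine; target: the same signal 5 steps ahead.
t = np.arange(n_steps)
u = np.sin(0.1 * t) + 0.1 * rng.standard_normal(n_steps)
y = np.roll(u, -5)

# Fixed random reservoir, rescaled to spectral radius 0.9 so that the
# echo state property (fading memory) holds.
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
w_in = rng.standard_normal(n_res)

# Drive the reservoir and collect its states.
states = np.zeros((n_steps, n_res))
x = np.zeros(n_res)
for k in range(n_steps):
    x = np.tanh(W @ x + w_in * u[k])
    states[k] = x

# Train the linear readout by least squares on the first 400 steps and
# evaluate on later steps (stopping before np.roll wraps the target).
w_out, *_ = np.linalg.lstsq(states[:400], y[:400], rcond=None)
pred = states[400:495] @ w_out
err = np.sqrt(np.mean((pred - y[400:495]) ** 2))
print(f"test RMSE ≈ {err:.2f}")
```

Only the readout weights are trained; the reservoir itself stays fixed, which is what makes this approach cheap enough to scan many candidate precursor signals. The noise on the target is inherently unpredictable, so the readout can at best track the sinusoidal component.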
Accessibility, in the sense of (spatial) adaptation to heterogeneous student bodies, has so far hardly been researched in schools, and specifically not in technology education. This thesis examined five Potsdam primary schools certified as "Schule des gemeinsamen Lernens" (school of joint learning) with respect to accessibility in technical classrooms. The thesis records the current state of the furnishing and equipment of technical classrooms at the above-mentioned Potsdam schools and at the same time presents various classroom concepts. To this end, technical classrooms were examined for accessible design elements on the one hand, and evaluated by WAT subject teachers with regard to their accessibility on the other.
Text is a ubiquitous entity in our world and daily life. We encounter it nearly everywhere: in shops, on the street, or in our flats. Nowadays, more and more text is contained in digital images. These images are either taken using cameras, e.g., smartphone cameras, or using scanning devices such as document scanners. The sheer amount of available data, e.g., the millions of images taken by Google Streetview, prohibits manual analysis and metadata extraction. Although much progress has been made in optical character recognition (OCR) for printed text in documents, broad areas of OCR are still not fully explored and hold many research challenges. With the mainstream usage of machine learning, and especially deep learning, one of the most pressing problems is the availability and acquisition of annotated ground truth for training machine learning models, because obtaining annotated training data through manual annotation is time-consuming and costly. In this thesis, we address the question of how we can reduce the cost of acquiring ground truth annotations for applying state-of-the-art machine learning methods to optical character recognition pipelines. To this end, we investigate how we can reduce annotation cost by using only a fraction of the typically required ground truth annotations, e.g., for scene text recognition systems. We also investigate how we can use synthetic data to reduce the need for manual annotation work, e.g., in the area of document analysis for archival material. In the area of scene text recognition, we have developed a novel end-to-end scene text recognition system that can be trained using inexact supervision and shows competitive, state-of-the-art performance on standard benchmark datasets for scene text recognition. Our method consists of two independent neural networks, combined using spatial transformer networks.
Both networks learn together to perform text localization and text recognition at the same time while only using annotations for the recognition task. We apply our model to end-to-end scene text recognition (meaning localization and recognition of words) and pure scene text recognition without any changes in the network architecture.
In the second part of this thesis, we introduce novel approaches for using and generating synthetic data to analyze handwriting in archival data. First, we propose a novel preprocessing method to determine whether a given document page contains any handwriting. We propose a novel data synthesis strategy to train a classification model and show that our data synthesis strategy is viable by evaluating the trained model on real images from an archive. Second, we introduce the new analysis task of handwriting classification. Handwriting classification entails classifying a given handwritten word image into classes such as date, word, or number. Such an analysis step allows us to select the best-fitting recognition model for subsequent text recognition; it also allows us to reason about the semantic content of a given document page without the need for fine-grained text recognition and further analysis steps, such as Named Entity Recognition. We show that our proposed approaches work well when trained on synthetic data. Further, we propose a flexible metric learning approach to allow zero-shot classification of classes unseen during the network's training. Last, we propose a novel data synthesis algorithm to train off-the-shelf pixel-wise semantic segmentation networks for documents. Our data synthesis pipeline is based on the StyleGAN architecture and can synthesize realistic document images with their corresponding segmentation annotation without the need for any annotated data.
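The zero-shot idea mentioned in this abstract — comparing an embedding of a word image against class prototypes in a learned metric space — can be illustrated with a minimal sketch. The embeddings, class names, and dimensionality below are invented for illustration and do not reflect the thesis's actual network:

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def zero_shot_classify(embedding, prototypes):
    # Assign the class whose prototype is most similar in the metric space.
    # Classes unseen during training only need a prototype vector, no retraining.
    return max(prototypes, key=lambda c: cosine(embedding, prototypes[c]))

# Toy prototypes for handwriting classes (hypothetical 3-d embeddings).
prototypes = {
    "date":   [0.9, 0.1, 0.0],
    "word":   [0.1, 0.9, 0.1],
    "number": [0.0, 0.2, 0.9],
}

print(zero_shot_classify([0.8, 0.2, 0.1], prototypes))  # closest prototype: "date"
```

In a real metric-learning setup the embeddings would come from a trained network and the prototypes from averaged class embeddings, but the nearest-prototype decision rule is the same.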
Core-shell upconversion nanoparticles - investigation of dopant intermixing and surface modification
(2022)
Frequency upconversion nanoparticles (UCNPs) are inorganic nanocrystals capable of converting incident photons from the near-infrared (NIR) region of the electromagnetic spectrum into higher-energy photons. These photons are re-emitted as visible (Vis) and even ultraviolet (UV) light. The frequency upconversion (UC) process is realized with nanocrystals doped with trivalent lanthanoid ions (Ln(III)). The Ln(III) ions provide the electronic (excited) states forming a ladder-like electronic structure for the Ln(III) electrons in the nanocrystals. The absorption of at least two low-energy photons by the nanoparticle and the subsequent energy transfer to one Ln(III) ion promote one of its electrons into higher excited electronic states. One high-energy photon is then emitted when this excited electron relaxes radiatively back into the electronic ground state of the Ln(III) ion.
The UC process is of particular interest in biological and medical contexts. Biological samples (such as organic tissue, blood, urine, and stool) absorb high-energy photons (UV and blue light) more strongly than low-energy photons (red and NIR light). Thanks to a naturally occurring optical window, NIR light can penetrate deeper than UV light into biological samples. Hence, UCNPs in bio-samples can be excited by NIR light. This possibility opens a pathway for in vitro as well as in vivo applications, like optical imaging by cell labeling or staining of specific organic tissue. Furthermore, early detection and diagnosis of diseases by predictive and diagnostic biomarkers can be realized with bio-recognition elements coupled to the UCNPs. Additionally, theranostics becomes possible, in which the identification and the treatment of a disease are tackled simultaneously.
For this to succeed, certain parameters of the UCNPs must be met: high upconversion efficiency, high photoluminescence quantum yield, dispersibility and dispersion stability in aqueous media, as well as the availability of functional groups to attach bio-recognition elements quickly and easily. The UCNPs used in this work were prepared with a solvothermal decomposition synthesis yielding particles with NaYF4 or NaGdF4 as the host lattice. They were doped with the Ln(III) ions Yb3+ and Er3+, which is only one possible upconversion pair. Their upconversion efficiency and photoluminescence quantum yield were improved by adding a passivating shell to reduce surface quenching.
However, the brightness of core-shell UCNPs falls short of expectations compared to the corresponding bulk material (particles at least μm-sized). The core and shell structures are not clearly separated from each other, an issue discussed in the literature. Instead, there is a transition layer between the core and the shell, which relates to the migration of the dopants within the host lattice during the synthesis. This ion migration has been examined by time-resolved laser spectroscopy and the interlanthanoid resonance energy transfer (LRET) in the two host lattices mentioned above. The results are presented in two publications, which deal with core-shell-shell structured nanoparticles. The core is doped with the LRET-acceptor (either Nd3+ or Pr3+). The intermediate shell serves as an insulation shell of pure host lattice material; its thickness has been varied within one set of samples of otherwise identical composition, so that the spatial separation of LRET-acceptor and -donor changes. The outer shell, of the same host lattice material, is doped with the LRET-donor (Eu3+). The effect of the increasing insulation shell thickness is significant, although the LRET cannot be suppressed completely.
In addition to the Ln(III) migration within a host lattice, various phase transfer reactions were investigated in order to subsequently perform surface modifications for bioapplications. One result of this research has been published, featuring a promising ligand that equips the UCNPs with bio-modifiable groups and has good potential for biomedical applications. This particular ligand mimics naturally occurring mechanisms of mussel protein adhesion and of blood coagulation, which is why the UCNPs are encapsulated very effectively. At the same time, bio-functional groups are introduced. In a proof of concept, the encapsulated UCNPs were successfully coupled with a dye (representative of a biomarker) and the system's photoluminescence properties were investigated.
This thesis examines how extrapersonal factors influence knowledge-sharing behavior during offboarding in Germany's public administration. A research gap exists here that needs to be closed, particularly in view of an approaching wave of retirements and the resulting risk of a massive loss of knowledge. To this end, different levels of analysis are linked, influencing factors are derived from the literature and embedded in the theory of planned behavior. Hypotheses are then formulated as to how extrapersonal factors, arising from the administration as an organizational context and from the offboarding process, promote or inhibit knowledge-sharing behavior. The hypotheses are tested by collecting and analyzing qualitative interview data. The resulting findings make clear that the upcoming retirement wave in the German administration requires organizational knowledge management to focus more strongly on the offboarding process and its design in order to reduce knowledge losses.
Eight d-metal-containing N-butylpyridinium ionic liquids (ILs) with the nominal composition (C4Py)2[Ni0.5M0.5Cl4] or (C4Py)2[Zn0.5M0.5Cl4] (M = Cu, Co, Mn, Ni, Zn; C4Py = N-butylpyridinium) were synthesized, characterized, and investigated for their optical properties. Single crystal and powder X-ray analysis shows that the compounds are isostructural to existing examples based on other d-metal ions. Inductively coupled plasma optical emission spectroscopy measurements confirm that the metal/metal ratio is around 50 : 50. UV-Vis spectroscopy shows that the optical absorption can be tuned by selection of the constituent metals. Moreover, the compounds can act as an optical sensor for the detection of gases such as ammonia as demonstrated via a simple prototype setup.
Enterprise systems have long played an important role in businesses of various sizes. With the increasing complexity of today's business relationships, specialized application systems are being used more and more. Moreover, emerging technologies such as artificial intelligence are becoming accessible for enterprise systems. This raises the question of the future role of enterprise systems. With its five contributions, this minitrack covers novel ideas that contribute to and shape the future role of enterprise systems.
Algorithmic management
(2022)
Land-use intensification is the main factor in the catastrophic decline of insect pollinators. However, land-use intensification comprises multiple processes that act across various scales and should affect pollinator guilds differently depending on their ecology. We aimed to reveal how two main pollinator guilds, wild bees and hoverflies, respond to different land-use intensification measures, that is, arable field cover (AFC), landscape heterogeneity (LH), and the functional flower composition of local plant communities as a measure of habitat quality. We sampled wild bees and hoverflies on 22 dry grassland sites within a highly intensified landscape (NE Germany) in three campaigns using pan traps. We estimated AFC and LH on consecutive radii (60–3000 m) around the dry grassland sites and estimated the local functional flower composition. Wild bee species richness and abundance were positively affected by LH and negatively by AFC at small scales (140–400 m). In contrast, hoverflies were positively affected by AFC and negatively by LH at larger scales (500–3000 m), where both landscape parameters were negatively correlated with each other. At small spatial scales, though, LH had a positive effect on hoverfly abundance. Functional flower diversity had no positive effect on pollinators, but conspicuous flowers seem to attract hoverflies, increasing their abundance. In conclusion, the two pollinator guilds are affected by landscape parameters in contrasting ways and at different scales. The correlation of landscape parameters may influence the observed relationships between landscape parameters and pollinators. Hence, effects of land-use intensification seem to be highly landscape-specific.
Individual self-determination is a core element of human dignity and thus a supreme value of the constitution. Nevertheless, its protection appears to be limited to the absence of the state. In fact, it is exposed to numerous threats. This contribution therefore aims to raise its protection to the required level. Article 1(1) of the Grundgesetz obliges the state not only to respect but also to protect human dignity. If the state takes this mandate seriously, it must place itself accordingly in the service of its citizens' self-determination. To that end, it may and at times must set limits for them in order to foster their capacity for responsibility.
Individuals have an intrinsic need to express themselves to other humans within a given community by sharing their experiences, thoughts, actions, and opinions. As a means, they mostly prefer modern online social media platforms such as Twitter, Facebook, personal blogs, and Reddit. Users of these social networks interact by drafting their own status updates, publishing photos, and giving likes, leaving a considerable amount of data behind them to be analyzed. Researchers recently started exploring this shared social media data to understand online users better and predict their Big Five personality traits: agreeableness, conscientiousness, extraversion, neuroticism, and openness to experience. This thesis investigates the possible relationship between users' Big Five personality traits and the information published on their social media profiles. Public Facebook data such as linguistic status updates, metadata of liked objects, profile pictures, and records of emotions or reactions were adopted to address the proposed research questions. Several machine learning prediction models were constructed in various experiments, utilizing engineered features correlated with the Big Five personality traits. The final predictive performance improved the prediction accuracy compared to state-of-the-art approaches, and the models were evaluated against established benchmarks in the domain. The research experiments were implemented with attention to ethical and privacy considerations. Furthermore, the research aims to raise awareness of privacy among social media users and to show what third parties can reveal about users' private traits from what they share and how they act on different social networking platforms.
In the second part of the thesis, the variation in personality development is studied within a cross-platform environment comprising Facebook and Twitter. The personality profiles constructed on these platforms are compared to evaluate the effect of the platform used on a user's personality development. Likewise, personality continuity and stability are analyzed using samples from the two social media platforms. The experiments are based on ten-year longitudinal samples, aiming to understand users' long-term personality development and to further unlock the potential of cooperation between psychologists and data scientists.
Recently, epidemiological studies have highlighted a strong association of dairy intake with lower disease risk, and similarly with increased levels of odd-chain fatty acids (OCFA). While the OCFA also demonstrate inverse associations with disease incidence, the direct dietary sources and mode of action of the OCFA remain poorly understood.
The overall aim of this thesis was to determine the impact of two main fractions of dairy, milk fat and milk protein, on OCFA levels and their influence on health outcomes under high-fat (HF) diet conditions. Both fractions represent viable sources of OCFA, as milk fats contain a significant amount of OCFA and milk proteins are high in branched chain amino acids (BCAA), namely valine (Val) and isoleucine (Ile), which can produce propionyl-CoA (Pr-CoA), a precursor for endogenous OCFA synthesis, while leucine (Leu) does not. Additionally, this project sought to clarify the specific metabolic effects of the OCFA heptadecanoic acid (C17:0).
Both short-term and long-term feeding studies were performed using male C57BL/6JRj mice fed HF diets supplemented with milk fat or C17:0, as well as milk protein or individual BCAA (Val; Leu) to determine their influences on OCFA and metabolic health. Short-term feeding revealed that both milk fractions induce OCFA in vivo, and the increases elicited by milk protein could be, in part, explained by Val intake. In vitro studies using primary hepatocytes further showed an induction of OCFA after Val treatment via de novo lipogenesis and increased α-oxidation. In the long-term studies, both milk fat and milk protein increased hepatic and circulating OCFA levels; however, only milk protein elicited protective effects on adiposity and hepatic fat accumulation—likely mediated by the anti-obesogenic effects of an increased Leu intake. In contrast, Val feeding did not increase OCFA levels nor improve obesity, but rather resulted in glucotoxicity-induced insulin resistance in skeletal muscle mediated by its metabolite 3-hydroxyisobutyrate (3-HIB). Finally, while OCFA levels correlated with improved health outcomes, C17:0 produced negligible effects in preventing HF-diet induced health impairments.
The results presented herein demonstrate that the beneficial health outcomes associated with dairy intake are likely mediated through the effects of milk protein, while OCFA levels are likely a mere association and do not play a significant causal role in metabolic health under HF conditions. Furthermore, the highly divergent metabolic effects of the two BCAA, Leu and Val, unraveled herein highlight the importance of protein quality.
Optimal carbon pricing with fluctuating energy prices — emission targeting vs. price targeting
(2022)
Prices of primary energy commodities display marked fluctuations over time. Market-based climate policy instruments (e.g., emissions pricing) create incentives to reduce energy consumption by increasing the user cost of fossil energy. This raises the question of whether climate policy should respond to fluctuations in fossil energy prices. We study this question within an environmental dynamic stochastic general equilibrium (E-DSGE) model calibrated to the German economy. Our results indicate that the welfare implications of dynamic emissions pricing crucially depend on how the revenues are used. When revenues are fully absorbed, a reduction in emissions prices stabilizes the economy in response to energy price shocks. However, when revenues are at least partially recycled, a stable emissions price improves overall welfare. This result is robust to different modeling assumptions.
Background: As the number of cardiac diseases in modern society has continuously increased over the last years, so has cardiac treatment, especially cardiac catheterization. The procedure of a cardiac catheterization is challenging for both patients and practitioners. Several potential stressors of a psychological or physical nature can occur during the procedure. The objective of the study is to develop and implement a stress management intervention for both practitioners and patients that aims to reduce the psychological and physical strain of a cardiac catheterization.
Methods: The clinical study (DRKS00026624) includes two randomized controlled intervention trials with parallel groups, for patients with elective cardiac catheterization and for practitioners at the catheterization lab, at two clinic sites of the Ernst-von-Bergmann clinic network in Brandenburg, Germany. Both groups receive different interventions for stress management. The intervention for patients comprises a psychoeducational video presenting different stress management techniques, plus standardized medical information about the cardiac catheterization examination. The control condition consists of the medical patient education practiced in hospitals before the examination (usual care). Primary and secondary outcomes are measured by physiological parameters and validated questionnaires the day before (M1) and after (M2) the cardiac catheterization and at a postal follow-up 6 months later (M3). It is expected that patients receiving standardized information and psychoeducation show fewer complications during cardiac catheterization procedures, better pre- and post-operative wellbeing, regeneration, and mood, and lower stress levels over time. The intervention for practitioners includes a mindfulness-based stress reduction (MBSR) program over 8 weeks, supervised by an experienced MBSR practitioner directly at the clinic site, and an operative guideline. It is expected that practitioners receiving the intervention show improved perceived and chronic stress, occupational health, physical and mental function, a higher effort-reward balance, regeneration, and quality of life. Primary and secondary outcomes are measured by physiological parameters (heart rate variability, saliva cortisol) and validated questionnaires and will be assessed before (M1) and after (M2) the MBSR intervention and at a postal follow-up 6 months later (M3). Physiological biomarkers in practitioners will be assessed before (M1) and after the intervention (M2) on two work days and two days off.
Intervention effects in both groups (practitioners and patients) will be evaluated separately using multivariate variance analysis.
Discussion: This study evaluates the effectiveness of two stress management intervention programs for patients and practitioners within the cardiac catheterization laboratory. The study will disclose strains during a cardiac catheterization affecting both patients and practitioners. For practitioners, it may contribute to improved working conditions and occupational safety, preservation of earning capacity, and avoidance of participation restrictions and loss of performance. In both groups, less anxiety and stress and fewer complications before and during the procedures can be expected. The study may add knowledge on how to eliminate stressful exposures and contribute to more (psychological) security, fewer output losses, and less exhaustion during work. The developed stress management guidelines, training manuals, and the standardized patient education should be transferred into clinical routines.
The self-employed faced strong income losses during the Covid-19 pandemic. Many governments introduced programs to financially support the self-employed during the pandemic, including Germany. The German Ministry for Economic Affairs announced a €50bn emergency-aid program in March 2020, offering one-off lump-sum payments of up to €15,000 to those facing substantial revenue declines. By reassuring the self-employed that the government 'would not let them down' during the crisis, the program also had the important aim of motivating the self-employed to get through the crisis. We investigate whether the program affected the confidence of the self-employed to survive the crisis using real-time online-survey data comprising more than 20,000 observations. We employ propensity score matching, making use of a rich set of variables that influence the subjective survival probability as the main outcome measure. We observe that this program had significant effects, with the subjective survival probability of the self-employed being moderately increased. We reveal important effect heterogeneities with respect to education, industries, and speed of payment. Notably, positive effects only occur among those self-employed whose application was processed quickly. This suggests stress-induced waiting costs due to the uncertainty associated with the administrative processing and the overall pandemic situation. Our findings have policy implications for the design of support programs, while also contributing to the literature on the instruments and effects of entrepreneurship policy interventions in crisis situations.
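The matching step of a propensity-score analysis like the one described above can be sketched in a simplified form: each treated unit (an aid recipient) is paired with the control unit whose estimated propensity score is closest, and outcome differences are averaged. The scores and outcomes below are hypothetical, and the paper's actual estimation pipeline is richer than this sketch:

```python
def nearest_neighbor_att(treated, controls):
    """Average treatment effect on the treated via 1-NN propensity matching.

    treated, controls: lists of (propensity_score, outcome) tuples.
    Matching is done with replacement on the propensity score.
    """
    diffs = []
    for p_t, y_t in treated:
        # Find the control unit whose propensity score is closest.
        p_c, y_c = min(controls, key=lambda c: abs(c[0] - p_t))
        diffs.append(y_t - y_c)
    return sum(diffs) / len(diffs)

# Hypothetical units: (propensity score, subjective survival probability).
treated = [(0.8, 0.9), (0.6, 0.8), (0.4, 0.7)]
controls = [(0.79, 0.8), (0.55, 0.7), (0.35, 0.65), (0.1, 0.6)]
print(nearest_neighbor_att(treated, controls))  # positive value: aid raised confidence
```

In practice the propensity scores themselves would first be estimated, e.g. with a logistic regression of treatment status on the covariates, and matching quality would be checked via covariate balance.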
Property tax competition
(2022)
We develop a model of property taxation and characterize equilibria under three alternative taxation regimes often used in the public finance literature: decentralized taxation, centralized taxation, and "rent seeking" regimes. We show that decentralized taxation results in inefficiently high tax rates, whereas centralized taxation yields a common optimal tax rate, and tax rates in the rent-seeking regime can be either inefficiently high or low. We quantify the effects of switching from the observed tax system to the three regimes for Japan and Germany. The decentralized or rent-seeking regime best describes the Japanese tax system, whereas the centralized regime does so for Germany. We also quantify the welfare effects of regime changes.
Urban pollution
(2022)
We use worldwide satellite data to analyse how population size and density affect urban pollution. We find that density significantly increases pollution exposure. Looking only at urban areas, we find that population size affects exposure more than density. Moreover, the effect is driven mostly by the population commuting to core cities rather than by the core city population itself. We analyse heterogeneity by geography and income levels. By and large, the influence of population on pollution is greatest in Asia and in middle-income countries. A counterfactual simulation shows that PM2.5 exposure would fall by up to 36% and NO2 exposure by up to 53% if, within countries, population size were equalized across all cities.
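The logic of such a counterfactual — comparing population-weighted exposure before and after equalizing city sizes — can be illustrated with a toy calculation. The city populations, exposure levels, and the assumed elasticity of pollution with respect to population are all hypothetical, not the paper's estimates:

```python
def weighted_exposure(pops, expos):
    # Population-weighted mean exposure across cities.
    return sum(p * e for p, e in zip(pops, expos)) / sum(pops)

def equalized_counterfactual(pops, expos, elasticity=0.3):
    # Assume pollution scales with population^elasticity (hypothetical value).
    # Redistribute the total population equally and rescale each city's exposure.
    mean_pop = sum(pops) / len(pops)
    new_expos = [e * (mean_pop / p) ** elasticity for p, e in zip(pops, expos)]
    new_pops = [mean_pop] * len(pops)
    return weighted_exposure(new_pops, new_expos)

pops = [8_000_000, 1_000_000, 200_000]   # hypothetical city populations
expos = [40.0, 20.0, 10.0]               # hypothetical PM2.5 levels (µg/m³)
baseline = weighted_exposure(pops, expos)
counterfactual = equalized_counterfactual(pops, expos)
print(baseline, counterfactual)  # exposure falls once large cities shrink
```

Because most people live in the largest, most polluted city, shrinking it lowers the population-weighted average twice over: fewer people face high exposure, and the exposure itself declines with city size.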
Hardly any other school subject is as inherently cross-disciplinary as music, whose topics and content offer manifold links to other subjects and academic disciplines. Nevertheless, regarding the state of the literature and research, it can be stated that, although theoretical approaches and models for cross-disciplinary music teaching exist, music education research has not yet addressed cross-disciplinary music teaching and its implementation by music teachers. The number of practice-oriented publications on cross-disciplinary music teaching is likewise limited, as is the range of professional development offerings for music teachers.
For this reason, volume 9 of the "Potsdamer Schriftenreihe zur Musikpädagogik" is devoted to the topic of cross-disciplinary music teaching from various perspectives. On the one hand, the currently relevant theoretical foundations form an important basis. On the other hand, aspects of teacher training and of methodology for implementing cross-disciplinary music teaching also flow into the texts. In the established tradition of the series, contributions come from lecturers at the Chair of Music Education and Music Didactics at the University of Potsdam as well as from students and from the chair's cooperation partners in music teacher education. The aim is to show, on the basis of various theoretical approaches, ways of implementing cross-disciplinary music teaching as a contribution to achieving the cross-disciplinary competence goals listed in Part B of the framework curriculum for Berlin and Brandenburg.
The prevalence of obesity in the pediatric population has become a major public health issue. Indeed, the dramatic increase of this epidemic causes multiple and harmful consequences. Physical activity, particularly physical exercise, remains the cornerstone of interventions against childhood obesity. Given the conflicting findings in the relevant literature addressing the effects of exercise on adiposity and physical fitness outcomes in obese children and adolescents, the effect of duration-matched concurrent training (CT) [50% resistance training (RT) and 50% high-intensity interval training (HIIT)] on body composition and physical fitness in obese youth remains to be elucidated. Thus, the purpose of this study was to examine the effects of 9 weeks of CT compared to RT or HIIT alone on body composition and selected physical fitness components in healthy sedentary obese youth. Out of 73 participants, only 37 [14 males and 23 females; age 13.4 ± 0.9 years; body mass index (BMI): 31.2 ± 4.8 kg·m-2] were eligible and were randomized into three groups: HIIT (n = 12): 3-4 sets × 12 runs at 80–110% peak velocity, with 10-s passive recovery between bouts; RT (n = 12): 6 exercises; 3–4 sets × 10 repetition maximum (RM); and CT (n = 13): 50% serial completion of RT and HIIT. CT promoted significantly greater gains than HIIT and RT in body composition (p < 0.01, d = large), 6-min walking test distance (6MWT-distance), and 6MWT-VO2max (p < 0.03, d = large). In addition, CT showed substantially greater improvements than HIIT in the medicine ball throw test (20.2 vs. 13.6%, p < 0.04, d = large). On the other hand, RT exhibited significantly greater gains in relative handgrip strength (p < 0.03, d = large) and countermovement jump (CMJ) performance (p < 0.01, d = large) than HIIT and CT. CT promoted greater benefits for fat and body mass loss and for cardiorespiratory fitness than the HIIT or RT modalities.
This study provides important information for practitioners and therapists on the application of effective exercise regimes with obese youth to induce significant and beneficial body composition changes. The applied CT program and the respective programming parameters in terms of exercise intensity and volume can be used by practitioners as an effective exercise treatment to combat the overweight and obesity pandemic in youth.
Giros Topográficos
(2022)
Giros topográficos explores the symbolic productions of space in a series of narrative texts published in Latin America since the turn of the millennium. Taking up the theoretical frameworks of the spatial turn and of geocriticism, the study approaches literary topographies from four angles that exceed and transform territorial and national boundaries: dynamics of media hyperconnectivity and accelerated mobility; affective genealogies; urban ecologies; and representations of alterity.
Through the analysis of works by Lina Meruane, Guillermo Fadanelli, Andrés Neuman, Andrea Jeftanovic, Sergio Chejfec and Bernardo Carvalho, among others, the book points to the flows, ambiguities, and tensions projected by the new imagined communities of the twenty-first century. In doing so, the essay seeks to contribute to rethinking the status of Latin American literature within the framework of its advanced globalization and the consequent consolidation of translocalized spaces of enunciation.
Cognitive resources contribute to balance control. There is evidence that mental fatigue reduces cognitive resources and impairs balance performance, particularly in older adults and when balance tasks are complex, for example when trying to walk or stand while concurrently performing a secondary cognitive task.
We conducted a systematic literature search in PubMed (MEDLINE), Web of Science and Google Scholar to identify eligible studies and performed a random effects meta-analysis to quantify the effects of experimentally induced mental fatigue on balance performance in healthy adults. Subgroup analyses were computed for age (healthy young vs. healthy older adults) and balance task complexity (balance tasks with high complexity vs. balance tasks with low complexity) to examine the moderating effects of these factors on fatigue-mediated balance performance.
We identified 7 eligible studies with 9 study groups and 206 participants. Analysis revealed that performing a prolonged cognitive task had a small but significant effect (SMDwm = −0.38) on subsequent balance performance in healthy young and older adults. However, age- and task-related differences in balance responses to fatigue could not be confirmed statistically.
Overall, aggregation of the available literature indicates that mental fatigue generally impairs balance performance in healthy adults. However, interactions between cognitive resource reduction, aging, and balance task complexity remain elusive.
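The random-effects pooling behind a summary effect such as the SMD reported above can be sketched with the DerSimonian–Laird estimator. The study-level effect sizes and sampling variances below are invented for illustration and are not the review's actual data:

```python
def dersimonian_laird(effects, variances):
    """Pool standardized mean differences under a random-effects model."""
    w = [1.0 / v for v in variances]          # fixed-effect (inverse-variance) weights
    fe = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q and the between-study variance tau^2.
    q = sum(wi * (e - fe) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)             # truncated at zero
    # Random-effects weights and pooled estimate.
    w_re = [1.0 / (v + tau2) for v in variances]
    return sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)

# Hypothetical study-level SMDs (negative = fatigue impairs balance) and variances.
effects = [-0.5, -0.3, -0.45, -0.2]
variances = [0.04, 0.06, 0.05, 0.08]
print(dersimonian_laird(effects, variances))
```

When heterogeneity is low (Q below its degrees of freedom), tau² is truncated to zero and the estimate coincides with the fixed-effect pooled SMD; otherwise the between-study variance widens each study's weight denominator.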
Background: The COVID-19 pandemic has highlighted the importance of scientific endeavors. The goal of this systematic review is to evaluate the quality of the research on physical activity (PA) behavior change and its potential to contribute to policy-making processes in the early days of COVID-19 related restrictions.
Methods: We conducted a systematic review of methodological quality of current research according to PRISMA guidelines using Pubmed and Web of Science, of articles on PA behavior change that were published within 365 days after COVID-19 was declared a pandemic by the World Health Organization (WHO). Items from the JBI checklist and the AXIS tool were used for additional risk of bias assessment. Evidence mapping is used for better visualization of the main results. Conclusions about the significance of published articles are based on hypotheses on PA behavior change in the light of the COVID-19 pandemic.
Results: Among the 1,903 identified articles, 36% were opinion pieces, 53% empirical studies, and 9% reviews. Of the 332 studies included in the systematic review, 213 used self-report measures to recollect prepandemic behavior, often in small convenience samples. Most focused on changes in PA volume, whereas changes in PA types were rarely measured. The majority had methodological reporting flaws. Few had very large samples with objective measures using a repeated-measures design (before and during the pandemic). In addition to the expected decline in PA duration, these studies show that many of those who were active prepandemic continued to be active during the pandemic.
Conclusions: Research responded quickly at the onset of the pandemic. However, most of the studies lacked robust methodology, and PA behavior change data lacked the accuracy needed to guide policy makers. To improve the field, we propose the implementation of longitudinal cohort studies by larger organizations such as WHO to ease access to data on PA behavior, and suggest those institutions set clear standards for this research. Researchers need to ensure a better fit between the measurement method and the construct being measured, and use both objective and subjective measures where appropriate to complement each other and provide a comprehensive picture of PA behavior.
A multidimensional and analytical perspective on Open Educational Practices in the 21st century
(2022)
Participatory approaches to teaching and learning are experiencing a new lease on life in the 21st century as a result of rapid technological development. Knowledge, practices, and tools can be shared across spatial and temporal boundaries in higher education by means of Open Educational Resources, Massive Open Online Courses, and open-source technologies. In this context, the Open Education Movement calls for new didactic approaches that encourage greater learner participation in formal higher education. Based on a representative literature review and focus group research, in this study an analytical framework was developed that enables researchers and practitioners to assess the form of participation in formal, collaborative teaching and learning practices. The analytical framework is focused on the micro-level of higher education, in particular on the interaction between students and lecturers when organizing the curriculum. For this purpose, the research reflects anew on the concept of participation, taking into account existing stage models for participation in the educational context. These are then brought together with the dimensions of teaching and learning processes, such as methods, objectives, and content. This paper aims to make a valuable contribution to the opening up of learning and teaching, and expands the discourse around possibilities for interpreting Open Educational Practices.
The negative impact of crude oil on the environment has led to a necessary transition toward alternative, renewable, and sustainable resources. In this regard, lignocellulosic biomass (LCB) is a promising renewable and sustainable alternative to crude oil for the production of fine chemicals and fuels in a so-called biorefinery process. LCB is composed of polysaccharides (cellulose and hemicellulose), as well as aromatics (lignin). The development of a sustainable and economically advantageous biorefinery depends on the complete and efficient valorization of all components. Therefore, in the new generation of biorefinery, the so-called biorefinery of type III, the LCB feedstocks are selectively deconstructed and catalytically transformed into platform chemicals. For this purpose, the development of highly stable and efficient catalysts is crucial for progress toward viable biorefineries. Furthermore, a modern and integrated biorefinery relies on process and reactor design to achieve more efficient and cost-effective methodologies that minimize waste. In this context, continuous flow systems have the potential to provide safe, sustainable, and innovative transformations with simple process integration and scalability for biorefinery schemes.
This thesis addresses three main challenges for the future biorefinery: catalyst synthesis, waste feedstock valorization, and the use of continuous flow technology. Firstly, a cheap, scalable, and sustainable approach is presented for the synthesis of an efficient and stable 35 wt.-% Ni catalyst on a highly porous nitrogen-doped carbon support (35Ni/NDC) in pellet shape. Initially, the performance of this catalyst was evaluated for the aqueous-phase hydrogenation of LCB-derived compounds such as glucose, xylose, and vanillin in continuous flow systems. The 35Ni/NDC catalyst showed high catalytic performance in all three hydrogenation reactions, yielding sorbitol, xylitol, and 2-methoxy-4-methylphenol in 82 mol%, 62 mol%, and 100 mol%, respectively. In addition, the 35Ni/NDC catalyst exhibited remarkable stability over a long time on stream in continuous flow (40 h). Furthermore, the 35Ni/NDC catalyst was combined with commercially available Beta zeolite in a dual-column integrated process for isosorbide production from glucose (yield 83 mol%).
Finally, 35Ni/NDC was applied to the valorization of industrial waste products, namely sodium lignosulfonate (LS) and beech wood sawdust (BWS), in continuous flow systems. The LS depolymerization combined solvothermal fragmentation in water/alcohol mixtures (i.e., methanol/water and ethanol/water) with catalytic hydrogenolysis/hydrogenation (SHF). The depolymerization was found to occur thermally in the absence of a catalyst, with the molecular weight tunable by temperature. Furthermore, the SHF generated an optimized cumulative yield of lignin-derived phenolic monomers of 42 mg gLS-1. Similarly, a solvothermal and reductive catalytic fragmentation (SF-RCF) of BWS was conducted using MeOH and MeTHF as solvents. In this case, the optimized total yield of lignin-derived phenolic monomers was found to be 247 mg gKL-1.
Sustainable urban growth
(2022)
This dissertation explores the determinants of sustainable and socially optimal growth in a city. Two general equilibrium models establish the basis for this evaluation, each adding its puzzle piece to the urban sustainability discourse and examining the role of non-market-based and market-based policies for balanced growth and welfare improvements in different theory settings. Sustainable urban growth either calls for policy action or a green energy transition. Further, R&D market failures can pose severe challenges to the sustainability of urban growth and the social optimality of decentralized allocation decisions. Still, a careful (holistic) combination of policy instruments can achieve sustainable growth and even be first-best.
Technological progress allows for producing ever more complex predictive models on the basis of increasingly big datasets. For the risk management of natural hazards, a multitude of models is needed as a basis for decision-making, e.g. in the evaluation of observational data, for the prediction of hazard scenarios, or for statistical estimates of expected damage. The question arises of how modern modelling approaches like machine learning or data-mining can be meaningfully deployed in this field. In addition, with respect to data availability and accessibility, the trend is towards open data. The topic of this thesis is therefore to investigate the possibilities and limitations of machine learning and open geospatial data in flood risk modelling in the broad sense. As this overarching topic is broad in scope, individual relevant aspects are identified and inspected in detail.
A prominent data source in the flood context is satellite-based mapping of inundated areas, for example made openly available by the Copernicus service of the European Union. Great expectations are directed towards these products in the scientific literature, both for the acute support of relief forces during emergency response and for modelling via hydrodynamic models or for damage estimation. A focus of this work was therefore set on evaluating these flood masks. Starting from the observation that the quality of these products is insufficient in forested and built-up areas, a procedure for their subsequent improvement via machine learning was developed. This procedure is based on a classification algorithm that requires training data only from the class to be predicted (here, flooded areas) but not from the negative class (dry areas). The application to hurricane Harvey in Houston shows the high potential of this method, which depends on the quality of the initial flood mask.
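The one-class setup described here (training only on examples of the flooded class) can be illustrated with a toy classifier. The following is a simplified stand-in under assumed data, not the thesis' actual algorithm:

```python
import math

class OneClassThreshold:
    """Toy one-class classifier: it learns per-feature mean and standard
    deviation from positive (e.g. flooded) examples only and flags a point
    as positive if it lies within k standard deviations on every feature.
    Illustrative only; real one-class methods (e.g. one-class SVMs) are
    more sophisticated."""

    def __init__(self, k=3.0):
        self.k = k

    def fit(self, X):
        n = len(X)
        dims = len(X[0])
        self.mean = [sum(row[j] for row in X) / n for j in range(dims)]
        # Guard against zero variance with a small floor.
        self.std = [max(1e-9, math.sqrt(sum((row[j] - self.mean[j]) ** 2
                                            for row in X) / n))
                    for j in range(dims)]
        return self

    def predict(self, X):
        return [all(abs(row[j] - self.mean[j]) <= self.k * self.std[j]
                    for j in range(len(row)))
                for row in X]
```

Fitting on a handful of "flooded" feature vectors and scoring new pixels then requires no dry-area training data at all, which mirrors the constraint described in the text.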
Next, it is investigated how strongly the statistical risk predicted by a process-based model chain depends on the physical process details implemented, thereby demonstrating what a risk study based on established models can deliver. Even for fluvial flooding, such model chains are already quite complex, and they are hardly available for compound or cascading events comprising torrential rainfall, flash floods, and other processes. The fourth chapter of this thesis therefore tests whether machine learning based on comprehensive damage data can offer a more direct path towards damage modelling that avoids the explicit construction of such a model chain. For that purpose, a state-collected dataset of damaged buildings from the severe 2017 El Niño event in Peru is used. In this context, the possibilities of data-mining for extracting process knowledge are explored as well. It can be shown that various openly available geodata sources contain useful information for flood hazard and damage modelling of complex events, e.g. satellite-based rainfall measurements, topographic and hydrographic information, mapped settlement areas, and indicators derived from spectral data. Further, insights into damaging processes are uncovered that are mainly in line with prior expectations. The maximum rainfall intensity, for example, has a stronger effect in cities and steep canyons, while total rainfall was more informative in low-lying river catchments and forested areas. Rural areas of Peru exhibited higher vulnerability in the presented study than urban areas. However, the general limitations of the methods and their dependence on specific datasets and algorithms also become obvious.
In the overarching discussion, the different methods (process-based modelling, predictive machine learning, and data-mining) are evaluated with respect to the overall research questions. For hazard observation, a focus on novel algorithms seems sensible for future research. For hazard modelling, especially of river floods, the improvement of physical models and the integration of process-based and statistical procedures is suggested. For damage modelling, the large and representative datasets necessary for the broad application of machine learning are still lacking. Improving the data basis in the damage domain is therefore currently regarded as more important than the selection of algorithms.
Public administrations confront fundamental challenges, including globalization, digitalization, and an eroding level of trust from society. By developing joint public service delivery with other stakeholders, public administrations can respond to these challenges. This increases the importance of inter-organizational governance—a development often referred to as New Public Governance, which to date has not been realized because public administrations focus on intra-organizational practices and follow the traditional “governmental chain.”
E-government initiatives, which can lead to high levels of interconnected public services, are currently perceived as insufficient to meet this goal. They are not designed holistically and merely affect the interactions of public and non-public stakeholders. A fundamental shift toward a joint public service delivery would require scrutiny of established processes, roles, and interactions between stakeholders.
Various scientists and practitioners within the public sector assume that the use of blockchain as an institutional technology could fundamentally change the relationship between public and non-public stakeholders. At first glance, inter-organizational, joint public service delivery could benefit from the use of blockchain. This dissertation aims to shed light on this widespread assumption. Hence, its objective is to substantiate the effect of blockchain on the relationship between public administrations and non-public stakeholders.
This objective is pursued by defining three major areas of interest. First, this dissertation strives to answer the question of whether or not blockchain is suited to enable New Public Governance and to identify instances where blockchain may not be the proper solution. The second area aims to understand empirically the status quo of existing blockchain implementations in the public sector and whether they comply with the major theoretical conclusions. The third area investigates the changing role of public administrations, as the blockchain ecosystem can significantly increase the number of stakeholders.
Corresponding research is conducted to provide insights into these areas, for example, combining theoretical concepts with empirical actualities, conducting interviews with subject matter experts and key stakeholders of leading blockchain implementations, and performing a comprehensive stakeholder analysis, followed by visualization of its results.
The results of this dissertation demonstrate that blockchain can support New Public Governance in many ways, while having only a minor impact on certain aspects (e.g., decentralized control) that are constitutive of this public service paradigm. Furthermore, the existing projects indicate changes to relationships between public administrations and non-public stakeholders, although not necessarily the fundamental shift proposed by New Public Governance. Lastly, the results suggest that power relations are shifting, including a decreasing influence of public administrations within the blockchain ecosystem. The results raise questions about the governance models and regulations required to support mature solutions and the further diffusion of blockchain for public service delivery.
A careful use of resources and the environment is an essential part of modern mining and of the future supply of our society with essential raw materials. This thesis deals with the development of analytical strategies that meet the technical and practical requirements of the mining process through exact and fast on-site analysis and thus contribute to a targeted and sustainable use of raw material deposits. The analyses are based on spectroscopic data obtained by laser-induced breakdown spectroscopy (LIBS) and evaluated by multivariate data analysis. LIBS is a promising technique for this task. Its appeal lies in particular in the possibility of measuring field samples on site without sampling or sample preparation, but also in the detectability of all elements of the periodic table and its independence from the state of matter. Combined with multivariate data analysis, fast data processing is possible, allowing statements about the qualitative elemental composition of the investigated samples. With the aim of determining the distribution of element contents in a deposit, calibration and quantification strategies are evaluated in this thesis. Exploratory data analysis methods are applied to characterize matrix effects and to classify minerals. The spectroscopic investigations are carried out on soils and rocks as well as on minerals containing copper or rare earth elements, originating from different deposits and from different agricultural areas.
To develop a calibration strategy, both synthetic and field samples from two different agricultural areas were analyzed by LIBS. Using calcium, iron, and magnesium as example analytes, various calibration methods based on univariate and multivariate approaches were evaluated. The quantification strategies rest on the multivariate methods of partial least squares regression (PLSR) and interval PLSR (iPLSR), which consider the entire detected spectrum or subspectra in the analysis. The investigation is based on synthetic and field samples of copper minerals as well as samples containing rare earth elements. The samples originate from different deposits and exhibit different accompanying matrices. These accompanying matrices were characterized by exploratory data analysis. The principal component analysis applied for this purpose groups data according to differences and regularities. This allows statements about similarities and differences of the investigated samples with regard to their origin, chemical composition, or local characteristics. Finally, copper-bearing minerals were classified on the basis of non-negative tensor factorization. This method was used with the aim of assigning unknown samples to classes based on their properties.
The combination of LIBS and multivariate data analysis offers the possibility of largely dispensing with sampling and the corresponding laboratory analysis through on-site analysis, and can thus contribute to environmental protection and to conserving natural resources during the prospecting and exploration of new ore veins and deposits. The distribution of element contents in the investigated areas also enables targeted mining and thus an efficient use of mineral raw materials.
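The univariate calibration strategies discussed above map the intensity of an emission line to an element concentration. A minimal ordinary-least-squares sketch with illustrative values (not the thesis' data; the multivariate PLSR models would replace this single-feature fit with whole-spectrum regression):

```python
def fit_calibration(intensities, concentrations):
    """Ordinary least-squares fit of a calibration line:
    concentration = slope * intensity + intercept."""
    n = len(intensities)
    mx = sum(intensities) / n
    my = sum(concentrations) / n
    sxx = sum((x - mx) ** 2 for x in intensities)
    sxy = sum((x - mx) * (y - my)
              for x, y in zip(intensities, concentrations))
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

def predict_concentration(slope, intercept, intensity):
    """Apply the calibration line to a measured line intensity."""
    return slope * intensity + intercept
```

Once the line is fitted on reference samples of known concentration, unknown field samples are quantified by inserting their measured line intensities.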
Strategic uncertainty is the uncertainty that players face with respect to the purposeful behavior of other players in an interactive decision situation. Our paper develops a new method for measuring strategic-uncertainty attitudes and distinguishing them from risk and ambiguity attitudes. We vary the source of uncertainty (whether strategic or not) across conditions in a ceteris paribus manner. We elicit certainty equivalents of participating in two strategic 2x2 games (a stag-hunt and a market-entry game) as well as certainty equivalents of related lotteries that yield the same possible payoffs with exogenously given probabilities (risk) and lotteries with unknown probabilities (ambiguity). We provide a structural model of uncertainty attitudes that allows us to measure a preference for or an aversion against the source of uncertainty, as well as optimism or pessimism regarding the desired outcome. We document systematic attitudes towards strategic uncertainty that vary across contexts. Under strategic complementarity [substitutability], the majority of participants tend to be pessimistic [optimistic] regarding the desired outcome. However, preferences for the source of uncertainty are distributed around zero.
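A certainty equivalent is the sure payment a participant values exactly as much as the lottery. As a hedged numerical illustration (the CRRA utility function here is an assumption for this sketch; the paper's structural model is richer and also captures source preference and optimism):

```python
def certainty_equivalent(payoffs, probs, rho):
    """Certainty equivalent of a lottery under CRRA utility
    u(x) = x**(1 - rho) / (1 - rho), for rho != 1 and payoffs >= 0.
    rho > 0 means risk aversion (CE below the expected value);
    rho = 0 means risk neutrality (CE equals the expected value)."""
    eu = sum(p * x ** (1 - rho) / (1 - rho)
             for p, x in zip(probs, payoffs))
    # Invert the utility function to express expected utility in money.
    return (eu * (1 - rho)) ** (1 / (1 - rho))
```

For a 50/50 lottery over 100 and 0, a moderately risk-averse agent (rho = 0.5) has a certainty equivalent of 25, well below the expected value of 50; comparing elicited certainty equivalents across the game and lottery conditions is what lets the authors separate attitudes toward strategic uncertainty, risk, and ambiguity.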
Respiratory diseases increasingly constitute a relevant global problem. Expanding or modifying the routes of administration of potential drugs for targeted topical applications is therefore of great importance. Varying a known route of administration through different technological implementations can increase the diversity of application options as well as patient compliance. A simple and flexible procedure, rapid availability, and handy technology are important properties in a product's development process today. Direct topical treatment of respiratory diseases at the site of action in the form of inhalative application offers many advantages over systemic therapy. However, the medical inhalation of active substances via the lungs is a complex challenge. Inhalers are application forms that require explanation and must be designed as simply as possible to increase consistent adherence to the prescription. At the same time, approximately 68 million people worldwide own and use the technology of an inhalative applicator, the electronic cigarette, to deliberately damage their health. This well-known application offers the potential of an available, inexpensive, and quality-tested health measure for the control, prevention, and cure of respiratory diseases. It generates an aerosol by electrothermally heating a so-called liquid, which reaches a heating element via the capillary forces of a carrier material and evaporates. Its popularity shows that an intended effect occurs in the respiratory tract. This effect could also be transferable to potential pharmaceutical fields of application. The advantages of pulmonary administration are manifold. Compared with peroral application, the active substance reaches the site of action directly.
Where systemic administration leads to drug concentrations in the lung below therapeutic efficacy, inhalative administration could produce the desired higher concentrations at the site of action even at low doses. Due to the lung's large absorption surface, higher bioavailability and a faster onset of action are possible because the first-pass effect is absent. Systemic side effects are also minimal. Like medical inhalers, the electronic cigarette generates respirable particles. The breath-actuated technique allows uncomplicated and intuitive use. The basic design consists of an electrically heated coil and a rechargeable battery. The heating coil is surrounded by a so-called liquid in a tank and generates the aerosol. The liquid contains a base mixture of propylene glycol, glycerol, and pure water in varying percentages. It is assumed that the base liquid can also be loaded with pharmaceutical active substances for pulmonary application. Because of the thermal stress exerted by the e-cigarette, potential active substances as well as the vehicle must be thermally stable.
The potential medical application of the technology of a commercially available e-cigarette was investigated with respect to three focal points using four active substances. The three essential oils eucalyptus oil, mint oil, and clove oil were chosen because of their high volatility and their historical pharmaceutical use in inhalations for cold symptoms and in dentistry. The cannabinoid cannabidiol (CBD) is currently relevant to the German pharmaceutical market with regard to the legalization of cannabis-containing products and to medical research on inhalative consumption. Relevant liquid formulations containing the active substances were developed and evaluated with respect to their vaporizability into aerosols. In the quantitative and qualitative chromatographic investigations, specific vaporization profiles of the active substances were recorded and evaluated. The vaporized mass of the lead substances 1,8-cineole (eucalyptus oil), menthol (mint oil), and eugenol (clove oil) rose between 33.6 µg and 156.2 µg per puff, proportionally to the concentration in the liquid over the range between 0.5% and 1.5%, at a power of 20 watts. The release rate of cannabidiol, in contrast, appeared to be independent of the concentration in the liquid, averaging 13.3 µg per puff. This was shown for five CBD-containing liquids in the concentration range between 31 µg/g and 5120 µg/g of liquid. In addition, an increase in vaporized mass with increasing e-cigarette power was observed. The interaction of the liquids and aerosols with the components of saliva and other gastrointestinal fluids was examined using corresponding in vitro models and enzyme activity assays. Changes in enzyme activities were determined using the key oral enzyme α-amylase as well as proteases, in order to exemplarily examine a possible influence on physiological and metabolic processes in the human organism. Vapor exposure of biological suspensions at low e-cigarette power (20 watts) led to no or only a slight change in enzyme activity. Applying high power (80 watts) tended to reduce the enzyme activities. An increase in enzyme activities could lead to enzymatic degradation of mucous substances such as mucins, which in turn would compromise the effective mechanical defense against bacterial infections. Since an application would be conceivable in particular for bacterial respiratory diseases, the antibacterial properties of the liquids and aerosols were finally investigated in vitro. Six clinically relevant bacterial pathogens were selected, which can be grouped by two characteristics. The three multidrug-resistant bacteria Pseudomonas aeruginosa, Klebsiella pneumoniae, and methicillin-resistant Staphylococcus aureus cannot be killed by standard antibiotic therapies and are primarily of nosocomial relevance. The second group has properties primarily associated with respiratory diseases: the bacteria Streptococcus pneumoniae, Moraxella catarrhalis, and Haemophilus influenzae are representatively involved in respiratory diseases with diverse symptoms. The bacterial species were treated with, or exposed to the vapor of, the respective liquids, and their basic dose-response relationships were characterized. An antibacterial activity of the formulations was determined; adding an active substance enhanced the already antibacterial effect of the components glycerol and propylene glycol. The hygroscopic properties of these substances are presumably responsible for the effect in aerosolized form: they withdraw moisture from the air and have a desiccating effect on the bacteria.
Vapor exposure of the bacterial species Streptococcus pneumoniae, Moraxella catarrhalis, and Haemophilus influenzae had an antibacterial effect whose time course depended on the power of the e-cigarette.
The results of these investigations lead to the conclusion that each active substance or substance class must be evaluated individually, and that inhaler and formulation must therefore be matched to each other. Using the e-cigarette as a medical device for drug administration always requires testing according to the European Pharmacopoeia. Through modifications, dosing could be made well controllable, and the particle size distribution could be regulated so that, depending on particle size, the active substances are transported to a suitable application site such as the mouth, throat, or bronchi. A comparison with the properties of other medical inhalers leads to the conclusion that e-cigarette technology could well offer equal or better performance for thermally stable active substances. This hypothetical medical device could consist of a manufacturer-unspecific, rechargeable power source with a universal thread for repeated use, and a manufacturer- and substance-specific unit comprising vaporizer and drug. The drug, a medical liquid (vehicle and active substance), can be produced patient-specifically in the vaporizer tank with constant, non-variable parameters. Inhalative applications will likely play an increasing role in the future, not least because of the current COVID-19 pandemic, and the demand for alternative therapy options will continue to rise. This thesis contributes to the use of electronic cigarette technology, an electronic nicotine delivery system (ENDS), modified into a potential pulmonary application system, an electronic drug delivery system (EDDS), for inhalative, thermally stable drugs in the form of a medical device.
This work provides insight into the communicative practices on city tours guided by (formerly) homeless people, which, in their self-conception, aim to create understanding, tolerance, and recognition for people affected by homelessness. First, the discourse on slum tourism is introduced and, given the diversity of its manifestations, slumming is defined as an organized encounter with social inequality. The central lines of discourse and the moral positions woven into them are traced and, within the adopted perspective of the sociology of knowledge, reinterpreted as the expression of a per se polycontextural practice. Slumming then appears as an organized encounter between forms of life that are foreign to each other to such a degree that immediate understanding seems unlikely and must, for precisely this reason, be negotiated on the basis of common-sense interpretations. Against this background, the present study examines how participants and guides reach a practical understanding of the experience of homelessness, and what kind of understanding this produces for homeless people, who are subject to manifold stigmatizing attributions in public discourse. Of particular interest is with respect to which aspects of the experience of homelessness a shared understanding becomes possible and where this understanding reaches its limits. To this end, the conversations on nine city tours with (formerly) homeless guides from different providers in German-speaking countries were transcribed and analyzed using the documentary method. The comparative examination of these communicative practices opens up, not least, a differentiated perspective on the practices of recognition that are always already woven into processes of mutual understanding.
With regard to the moral debate about organized encounters with social inequality, this suggests an ethical perspective centered on questions of mediation work.
Struggle for existence
(2022)
In this project, I sought to understand how Palestinian claim-making in the West Bank is possible within the context of continuing Israeli occupation and repression by the Palestinian political leadership. I explored the questions of what channels non-state actors use to advance their claims, what opportunities they have for making these claims, and what challenges they face. This exploration covers the time period from the Oslo Accords in the mid-1990s to the so-called Great March of Return in 2018.
I demonstrated that Palestinians used different modes and strategies of resistance in the past century, as the area of what today is Israel/Palestine has historically been a target for foreign penetration. Yet, the Oslo agreements between the Israeli government and the Palestinian leadership have ended Palestinians’ decentralized and pluralist social governance, reinforced Israeli rule in the Palestinian territories, promoted continuing dispossession and segregation of Palestinians, and further restricted their rights and their claim-making opportunities until this day. Therefore, today, Palestinian society in the West Bank is characterized by fragmentation, geographical and societal segregation, and double repression by Israeli occupation and Palestinian Authority (PA) policies. What is more, Palestinian claim-making is legally curtailed due to the establishment of different geographical entities in which Palestinians are subjugated to different forms of Israeli rule and regulations.
I argue that the concepts of civil society and acts of citizenship, which are often used to describe non-state actors' rights-seeking activities, fall short of comprehensively capturing Palestinian claim-making in the West Bank. By determining the boundaries of these concepts, the concept of acts of subjecthood evolved within the research process as a novel theoretical approach and as a means of claim-making within repressive contexts where claim makers' rights are curtailed and opportunities for rights-seeking activities are few. This study thereby applies a new theoretical framework to the conflict in Israel/Palestine and contributes to a better understanding of rights-seeking activities within the West Bank. Further, I argue that Palestinian acts of subjecthood against hostile Israeli rule in the West Bank are embedded within the comprehensive structure of settler colonialism. As a form of colonialism that aims at replacing an indigenous population, Israeli settler colonialism in the West Bank manifests itself in restrictions on Palestinian movement, settlement construction, home demolitions, violence, and detentions.
Using grounded theory and inductive reasoning as methodological approaches, I was able to make generalizations about the state of Palestinian claim-making. These generalizations are based on the analysis of secondary materials and of data collected via face-to-face and video interviews with non-state actors in Israel/Palestine. The research shows that Palestinian claim-making is hindered not by a single measure or standalone condition but by a complex and comprehensive structure that, on the one hand, shrinks Palestinian living space through occupation and destruction and, on the other, diminishes Palestinian civic space by limiting the fundamental rights to organize and to build social movements capable of changing the conditions in which Palestinians live.
Although the concrete, tangible outcomes of Palestinian acts of subjecthood are marginal, they help to strengthen and perpetuate Palestinians' long history of resistance against Israeli oppression. Given the lack of adherence to international law and the neglect of UN resolutions by the Israeli government, the continuous defeats of rights organizations in Israeli courts, and the repression of West Bank-based institutions by PA and occupation policies, Palestinian acts of subjecthood cannot overturn current power structures. Nevertheless, the ongoing persistence of non-state actors in claiming rights, as well as the emergence of new initiatives and youth movements, is essential for strengthening Palestinians' resilience and documenting current injustices. They can thereby build the pillars for social change in the future.
The aim of this dissertation was to examine how Palestinian claim-making, that is, the articulation of demands and the assertion of particular rights, can be pursued in the West Bank against the backdrop of the ongoing Israeli occupation and reprisals by the Palestinian political leadership. The study asks which channels non-state actors use to assert their claims, which opportunities are open to them, and which challenges they face. The period under investigation extends from the Oslo peace process in the mid-1990s to the so-called Great March of Return in 2018.
The Palestinians living in the area of today's Israel/Palestine employed a wide range of forms and strategies of resistance in times of foreign interference, for example during the British occupation in the past century. However, the Oslo agreements between the Israeli government and the Palestinian leadership impeded the decentralized and participatory mobilization of Palestinian society, facilitated the ongoing dispossession of Palestinians, and have further restricted their rights to this day. Today's Palestinian society in the West Bank is therefore characterized by fragmentation, geographical and societal segregation, and double repression by the Israeli occupation and the Palestinian Authority. Moreover, the establishment of different geographical entities, in which Palestinians are subject to different forms of Israeli rule, regulations, and powers of intervention, means that Palestinian claim-making is also restricted in formal legal terms.
The concepts of civil society and acts of citizenship are often used to describe the activities of non-state actors in this context. This thesis argues, however, that these concepts are only partially applicable to the status quo in the West Bank and cannot adequately capture Palestinian claim-making. In the course of the research process, the concept of acts of subjecthood therefore emerged as a new theoretical approach describing claim-making in repressive contexts in which non-state actors have little room for maneuver to assert their demands. Through this theoretical lens, my research offers a novel perspective on the Israeli-Palestinian conflict and thus contributes to a better understanding of claim-making activities in the West Bank. In addition, this thesis embeds acts of subjecthood in the larger context of settler colonialism, a form of colonialism that aims to replace an indigenous population with that of the colonial power. In the West Bank, Israeli settler colonialism manifests itself in restrictions on Palestinians' freedom of movement, the construction of settlements, house demolitions, violence, and detentions.
Using grounded theory and inductive reasoning as methodological approaches made it possible to draw generalizable conclusions about the state of Palestinian claim-making. These generalizations are based on the analysis of secondary sources and of data collected through interviews with representatives of non-state organizations in Israel/Palestine. The analysis makes clear that Palestinian claim-making is hindered not by a single measure or condition but by a complex, multilayered, and deliberately implemented structure. This structure shrinks the living space of Palestinians through occupation and destruction on the one hand and, on the other, restricts their civic space by denying them basic rights and fundamental freedoms.
Although the concrete effects of Palestinian acts of subjecthood are marginal, they help to strengthen and sustain resistance against political oppression. In view of the Israeli government's violations of international law and disregard of numerous UN resolutions, the defeats of human rights organizations before Israeli courts, and the repression of institutions in the West Bank by the Palestinian Authority and the occupation policy, acts of subjecthood cannot break up the current power structures. Nevertheless, the continued persistence of non-state actors in articulating demands and claiming rights, as well as the founding of new initiatives and organizations, is essential for strengthening societal resilience and documenting injustices and rights violations. These actors thus lay the groundwork for possible socio-political change in the future.
Biomimicry is the art of imitating nature to overcome a particular technical or scientific challenge. The approach studies how evolution has found solutions to the most complex problems in nature, which makes it a powerful method for science. Combined with the rapid development of manufacturing and information technologies in the digital age, structures and materials previously thought to be unrealizable can now be created from a simple sketch at the touch of a button. The primary goal of this doctoral thesis was to investigate how digital tools such as programming, modelling, 3D design tools, and 3D printing, aided by biomimicry, could lead to new analysis methods in science and new medical devices in medicine.
Electrical discharge machining (EDM) is commonly applied to shape or mold hard metals that are difficult to work with conventional machinery. A workpiece submerged in an electrolyte is shaped while in close vicinity to an electrode. A high voltage applied between the workpiece and the electrode causes sparks that create cavities on the substrate; these remove material, which is flushed away by the electrolyte. Such surfaces are usually analysed in terms of roughness; in this work, a novel curvature analysis method is presented as an alternative. In addition, to better understand how the surface changes over the processing time of the EDM process, a digital impact model was created that generates craters and ridges on an originally flat substrate. These simulated substrates were then analysed with the curvature analysis method at different processing times. It was found that a substrate reaches an equilibrium at around 10,000 impacts. The proposed curvature analysis method has potential for the design of new cell culture substrates for stem cells.
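The flavor of such a digital impact model can be illustrated with a toy sketch (this is an illustrative caricature under simple assumptions, not the model developed in the thesis, and the function name and parameters are hypothetical): random Gaussian-shaped craters are subtracted from an initially flat 1-D height profile, and a surface statistic is recorded at each processing step.

```python
import numpy as np

def simulate_impacts(n_points=500, n_impacts=10_000, crater_width=10.0,
                     crater_depth=1.0, seed=0):
    """Toy 1-D EDM impact model: repeatedly subtract Gaussian-shaped
    craters at random positions from an initially flat substrate and
    record the surface roughness (standard deviation of heights).

    Returns the final height profile and the roughness history, which
    can be inspected at different processing times.
    """
    rng = np.random.default_rng(seed)
    x = np.arange(n_points, dtype=float)
    height = np.zeros(n_points)
    roughness = np.empty(n_impacts)
    for i in range(n_impacts):
        center = rng.uniform(0, n_points)  # random impact position
        crater = crater_depth * np.exp(-((x - center) ** 2)
                                       / (2 * crater_width ** 2))
        height -= crater                   # material removal
        roughness[i] = np.std(height)      # surface statistic over time
    return height, roughness
```

A curvature-based analysis in the spirit of the thesis would replace the roughness statistic with a local curvature estimate of the resulting profile; the simple standard deviation is used here only to keep the sketch short.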
The Venus flytrap can shut its jaws at an amazing speed. Its shutting mechanism may be of interest to science as an example of a so-called mechanically bi-stable system, i.e., a system with two stable states. In this work, two truncated-pyramid structures were modelled using a non-linear mechanical model called the chained beam constraint model (CBCM). The structure with a slope angle of 30 degrees is not bi-stable, whereas the structure with a slope angle of 45 degrees is. Developing this idea further with PEVA, a material with a shape-memory effect, the non-bi-stable structure could be programmed to be bi-stable and then switched back again; this could be used as an energy storage system. Another species with an interesting mechanism is the tapeworm. Some tapeworm species have a crown of hooks and suckers on the head. The parasite is commonly found in the lower intestine of mammals and attaches to the intestinal wall using its suckers. When the tapeworm has found a suitable spot, it ejects its hooks and permanently attaches to the wall. This function could be used in minimally invasive medicine to gain better control of implants during the implantation process. Using the CBCM and a 3D printer capable of tuning how hard or soft a printed part is, a design strategy was developed to investigate how a device mimicking the tapeworm could be created. In the end, a prototype was built that attached to a pork loin at a negative pressure of 20 kPa and ejected its hooks at a negative pressure of 50 kPa or above.
These three projects exhibit how digital tools and biomimicry can be combined to produce applicable solutions in science and medicine.
Accurately solving classification problems is arguably the most relevant machine learning task today. Binary classification, which separates only two classes, is algorithmically simpler but has fewer potential applications, as many real-world problems are multi-class. Conversely, separating only a subset of classes simplifies the classification task. Even though existing multi-class machine learning algorithms are very flexible regarding the number of classes, they assume that the target set Y is fixed and cannot be restricted once training is finished. At the same time, state-of-the-art production environments are becoming increasingly interconnected with the advance of Industry 4.0 and related technologies, so that additional information can simplify the respective classification problems. In light of this, the main aim of this thesis is to introduce dynamic classification, which generalizes multi-class classification such that the target class set can be restricted arbitrarily to a non-empty class subset M of Y at any time between two consecutive predictions.
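The restriction step can be illustrated with a minimal sketch (the function names are hypothetical and the thesis's model is far more elaborate): given posterior probability estimates over the full class set Y, restricting to a subset M amounts to renormalizing the probability mass over M before predicting.

```python
def restrict_posterior(posterior, M):
    """Restrict a posterior over the full class set Y to a subset M.

    posterior: dict mapping class label -> estimated probability
    M: non-empty subset of the keys of posterior
    Returns the posterior renormalized over M only.
    """
    mass = sum(posterior[c] for c in M)
    if mass == 0:
        # degenerate case: fall back to a uniform distribution over M
        return {c: 1.0 / len(M) for c in M}
    return {c: posterior[c] / mass for c in M}

def predict_dynamic(posterior, M):
    """Predict the most probable class within the restricted set M."""
    restricted = restrict_posterior(posterior, M)
    return max(restricted, key=restricted.get)
```

For example, with posterior {"a": 0.5, "b": 0.3, "c": 0.2} and M = {"b", "c"}, the restricted posterior assigns 0.6 to "b" and 0.4 to "c", so "b" is predicted even though "a" was the most probable class before restriction.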
This task is solved by combining two algorithmic approaches. The first is classifier calibration, which transforms predictions into posterior probability estimates that are intended to be well calibrated. The analysis focuses on monotonic calibration and, in particular, corrects erroneous statements that have appeared in the literature. It also reveals that bin-based evaluation metrics, which have become popular in recent years, are unjustified and should not be used at all. Next, the validity of Platt scaling, the most relevant parametric calibration approach, is analyzed in depth. In particular, its optimality for classifier predictions distributed according to four different families of probability distributions is proven, as well as its equivalence with Beta calibration up to a sigmoidal preprocessing. For non-monotonic calibration, extended variants of kernel density estimation and the ensemble method EKDE are introduced. Finally, the calibration techniques are evaluated in a simulation study with complete information as well as on a selection of 46 real-world data sets.
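To make the parametric approach concrete: Platt scaling fits a sigmoid p(s) = 1/(1 + exp(-(A·s + B))) to raw classifier scores by minimizing the negative log-likelihood. The following is a minimal gradient-descent sketch, not the thesis's implementation; it also omits Platt's label-smoothing correction for the training targets.

```python
import numpy as np

def platt_scale(scores, labels, lr=0.01, steps=5000):
    """Fit Platt scaling parameters (A, B) so that
    sigmoid(A * s + B) approximates P(y = 1 | score s).

    scores: 1-D array of raw classifier scores
    labels: 1-D array of 0/1 class labels
    Returns (A, B) found by gradient descent on the logistic loss.
    """
    s = np.asarray(scores, dtype=float)
    y = np.asarray(labels, dtype=float)
    A, B = 0.0, 0.0  # start at sigmoid(0) = 0.5 everywhere
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(A * s + B)))
        grad = p - y              # d(NLL)/d(logit) for the logistic loss
        A -= lr * np.mean(grad * s)
        B -= lr * np.mean(grad)
    return A, B

def platt_predict(scores, A, B):
    """Map raw scores to calibrated probability estimates."""
    s = np.asarray(scores, dtype=float)
    return 1.0 / (1.0 + np.exp(-(A * s + B)))
```

In practice one would fit (A, B) on a held-out calibration set rather than on the training data, since the raw scores of an overfit classifier are systematically too confident.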
Building on this, classifier calibration is applied as part of decomposition-based classification, which reduces multi-class problems to simpler (usually binary) prediction tasks. For the fusing step performed at prediction time, a new approach based on evidence theory is presented that uses classifier calibration to model mass functions. This allows decomposition-based classification to be analyzed against a strictly formal background and closed-form equations for the overall combinations to be proven. Furthermore, the same formalism leads to a consistent integration of dynamic class information, yielding a theoretically justified and computationally tractable dynamic classification model. The insights gained from this modeling are combined with pairwise coupling, one of the most relevant reduction-based classification approaches, such that each individual prediction enters the combination with a weight. This not only generalizes existing work on pairwise coupling but also enables the integration of dynamic class information.
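The structure of weighted pairwise coupling can be sketched as follows. This is a simple normalized-voting heuristic, not the evidence-theoretic combination derived in the thesis, and the function name is hypothetical; it shows how pairwise probability estimates are combined and how dynamic class information enters as zero weights on pairs involving inadmissible classes.

```python
import numpy as np

def weighted_pairwise_coupling(R, M=None):
    """Combine pairwise class probabilities into a multi-class estimate.

    R[i, j] estimates P(class i | class i or class j), with
    R[i, j] + R[j, i] = 1 for i != j; the diagonal is ignored.
    M: optional iterable of admissible class indices (the dynamic
    class set); pairs involving inadmissible classes get weight zero.

    Returns a length-k probability vector (zeros outside M).
    """
    k = R.shape[0]
    admissible = set(range(k)) if M is None else set(M)
    scores = np.zeros(k)
    for i in admissible:
        for j in admissible:
            if i != j:
                scores[i] += R[i, j]  # accumulate pairwise support
    total = scores.sum()
    return scores / total if total > 0 else scores
```

Restricting M changes not just the argmax but the whole combination, because only pairwise comparisons between admissible classes contribute, which mirrors how dynamic class information is integrated into the coupling step described above.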
Lastly, a thorough empirical study is performed that compares all newly introduced approaches to existing state-of-the-art techniques. For this, evaluation metrics for dynamic classification are introduced that depend on corresponding sampling strategies. Thereafter, these are applied during a three-part evaluation. First, support vector machines and random forests are applied on 26 data sets from the UCI Machine Learning Repository. Second, two state-of-the-art deep neural networks are evaluated on five benchmark data sets from a relatively recent reference work. Here, computationally feasible strategies to apply the presented algorithms in combination with large-scale models are particularly relevant because a naive application is computationally intractable. Finally, reference data from a real-world process allowing the inclusion of dynamic class information are collected and evaluated. The results show that in combination with support vector machines and random forests, pairwise coupling approaches yield the best results, while in combination with deep neural networks, differences between the different approaches are mostly small to negligible. Most importantly, all results empirically confirm that dynamic classification succeeds in improving the respective prediction accuracies. Therefore, it is crucial to pass dynamic class information in respective applications, which requires an appropriate digital infrastructure.
This thesis investigates the interplay of charge localization and itinerancy in two classes of aromatic molecules: pyridones and porphyrins. The focus lies on the effects of isomerism, complexation, solvation, and optical excitation, which accompany crucial biological functions of specific members of these groups of compounds. Several porphyrins play key roles in the metabolism of plants and animals. The nucleobases, which store the genetic information in DNA and RNA, are pyridone derivatives. In addition, a number of vitamins are based on these two groups of substances.
This thesis aims to answer the question of how the electronic structure of these classes of molecules is modified, enabling the versatile natural functionality. The resulting insights into the effect of constitutional and external factors are expected to facilitate the design of new processes for medicine, light-harvesting, catalysis, and environmental remediation.
The common denominator of pyridones and porphyrins is their aromatic character. As aromaticity was an early-on topic in chemical physics, the overview of relevant theoretical models in this work also mirrors the development of this scientific field in the 20th century. The spectroscopic investigation of these compounds has long been centered on their global, optical transition between frontier orbitals.
The utilization and advancement of X-ray spectroscopic methods characterizing the local electronic structure of molecular samples form the core of this thesis. The element selectivity of the near-edge X-ray absorption fine structure (NEXAFS) is employed to probe the unoccupied density of states at the nitrogen site, which is key for the chemical reactivity of pyridones and porphyrins. The results contribute to the growing database of NEXAFS features and their interpretation, e.g., by advancing the debate on the porphyrin N K-edge through systematic experimental and theoretical arguments. Further, a state-of-the-art laser pump – NEXAFS probe scheme is used to characterize the relaxation pathway of a photoexcited porphyrin on the atomic level.
Resonant inelastic X-ray scattering (RIXS) provides complementary results by accessing the highest occupied valence levels, including symmetry information. It is shown that RIXS is an effective experimental tool for gaining detailed information on the charge densities of individual species in tautomeric mixtures. Additionally, the hRIXS and METRIXS high-resolution RIXS spectrometers, which were partly commissioned in the course of this thesis, will open access to the ultra-fast and thermal chemistry of pyridones, porphyrins, and many other compounds.
With respect to both classes of bio-inspired aromatic molecules, this thesis establishes that even though pyridones and porphyrins differ greatly in their optical absorption bands and hydrogen-bonding abilities, they share a global stabilization of local constitutional changes and of relevant external perturbations. It is because of this wide-ranging response that pyridones and porphyrins can be employed in a manifold of biological and technical processes.
We provide the first estimates of the impact of managers’ risk preferences on their training allocation decisions. Our conceptual framework links managers’ risk preferences to firms’ training decisions through the bonuses they expect to receive. Risk-averse managers are expected to select workers with low turnover risk and invest in specific rather than general training. Empirical evidence supporting these predictions is provided using a novel vignette study embedded in a nationally representative survey of firm managers. Risk-tolerant and risk-averse decision makers have significantly different training preferences. Risk aversion results in increased sensitivity to turnover risk. Managers who are risk-averse offer significantly less general training and, in some cases, are more reluctant to train workers with a history of job mobility. All managers, irrespective of their risk preferences, are sensitive to the investment risk associated with training, avoiding training that is more costly or targets those with less occupational expertise or nearing retirement. This suggests the risks of training are primarily due to the risk that trained workers will leave the firm (turnover risk) rather than the risk that the benefits of training do not outweigh the costs (investment risk).
We investigate the effect of the COVID-19 pandemic on self-employed people’s mental health. Using representative longitudinal survey data from Germany, we reveal differential effects by gender: whereas self-employed women experienced a substantial deterioration in their mental health, self-employed men displayed no significant changes up to early 2021. Financial losses are important in explaining these differences. In addition, we find larger mental health responses among self-employed women who were directly affected by government-imposed restrictions and bore an increased childcare burden due to school and daycare closures. We also find that self-employed individuals who are more resilient coped better with the crisis.
Predicting entrepreneurial development based on individual and business-related characteristics is a key objective of entrepreneurship research. In this context, we investigate whether the motives for becoming an entrepreneur influence subsequent entrepreneurial development. In our analysis, we examine a broad range of business outcomes, including survival and income as well as job creation, expansion, and innovation activities, for up to 40 months after business formation. Using self-determination theory as conceptual background, we aggregate the start-up motives into a continuous motivational index. Based on a unique dataset of German start-ups founded out of unemployment and non-unemployment, we show that the higher founders score on this index, the better their later business performance. Effects are particularly strong for growth-oriented outcomes such as innovation and expansion activities. In a next step, we examine three underlying motivational categories that we term opportunity, career ambition, and necessity. We show that individuals driven by opportunity motives perform better in terms of innovation and business expansion activities, while career ambition is positively associated with survival, income, and the probability of hiring employees. All effects are robust to the inclusion of a large battery of covariates known to be important determinants of entrepreneurial performance.
Subsidizing the geographical mobility of unemployed workers may improve welfare by relaxing their financial constraints and allowing them to find jobs in more prosperous regions. We exploit regional variation in the promotion of mobility programs along administrative borders of German employment agency districts to investigate the causal effect of offering such financial incentives on the job search behavior and labor market integration of unemployed workers. We show that promoting mobility – as intended – causes job seekers to increase their search radius, apply for and accept distant jobs. At the same time, local job search is reduced with adverse consequences for reemployment and earnings. These unintended negative effects are provoked by spatial search frictions. Overall, the unconditional provision of mobility programs harms the welfare of unemployed job seekers.
Indigenous rights and COVID-19 (Brazil) – indigenous land and health under serious threat
(2022)
The importance of carbohydrate structures is enormous due to their ubiquitousness in our lives. The development of so-called glycomaterials is the result of this tremendous significance. These are not exclusively used for research into fundamental biological processes, but also, among other things, as inhibitors of pathogens or as drug delivery systems. This work describes the development of glycomaterials involving the synthesis of glycoderivatives, -monomers and -polymers. Glycosylamines were synthesized as precursors in a single synthesis step under microwave irradiation to significantly shorten the usual reaction time. Derivatization at the anomeric position was carried out according to the methods developed by Kochetkov and Likhorshetov, which do not require the introduction of protecting groups. Aminated saccharide structures formed the basis for the synthesis of glycomonomers in β-configuration by methacrylation. In order to obtain α-Man-based monomers for interactions with certain α-Man-binding lectins, a monomer synthesis by Staudinger ligation was developed in this work, which also does not require protective groups. Modification of the primary hydroxyl group of a saccharide was accomplished by enzyme-catalyzed synthesis. Ribose-containing cytidine was transesterified using the lipase Novozym 435 and microwave irradiation. The resulting monomer synthesis was optimized by varying the reaction partners. To create an amide bond instead of an ester bond, protected cytidine was modified by oxidation followed by amide coupling to form the monomer. This synthetic route was also used to isolate the monomer from its counterpart guanosine. After obtaining the nucleoside-based monomers, they were block copolymerized using the RAFT method. Pre-synthesized pHPMA served as macroCTA to yield cytidine- or guanosine-containing block copolymer. 
These isolated block copolymers were then investigated for their self-assembly behavior using UV-Vis, DLS and SEM to serve as a potential thermoresponsive drug delivery system.
Nation, migration, narration
(2022)
In France and Germany, immigration has become a central issue in recent decades. It is in this context that rap emerged. Rap enjoys enormous popularity among populations with an immigrant background. Nevertheless, rappers are no less confronted with their French or German identity.
The aim of this work is to explain this apparent contradiction: how can people with an immigrant background, who express unease in the face of a racism they consider omnipresent, feel fully French or German?
The work is divided into the following chapters: context of the study, methodology and theories (I); analysis of the different forms of national identity through the prism of the corpus (II); analysis, in three chronological stages, of the relationship to society in the rappers' lyrics (III-V); case studies of Kery James in France and Samy Deluxe in Germany (VI).
Successful communication is something people explore throughout their lives. To effectively convey their own information to others, people employ various linguistic tools such as word order, prosodic cues, and lexical choices. The exploration of these linguistic cues is known as the study of information structure (IS). An important issue in children's language acquisition is therefore how they acquire IS. This thesis seeks to improve our understanding of how children acquire different tools of focus marking (i.e., prosodic cues, syntactic cues, and the focus particle only) from a cross-linguistic perspective.
In the first study, following Szendrői and colleagues (2017), a sentence-picture verification task was performed to investigate whether three- to five-year-old Mandarin-speaking children, as well as Mandarin-speaking adults, could apply prosodic information to recognize focus in sentences. In the second study, German-speaking adults and children were included alongside the Mandarin-speaking groups to test the assumption that children show adult-like performance in understanding sentence focus by identifying language-specific cues in their mother tongue from early on. This study employed the same sentence-picture verification paradigm as the first study, combined with eye tracking. Finally, the last study examined whether five-year-old Mandarin-speaking children could understand sentences with a pre-subject only, and again whether prosodic information would help them to better understand this kind of sentence.
The overall results suggest that Mandarin-speaking children can make use of the specific linguistic cues in their ambient language from early on. In Mandarin, a topic-prominent tone language, word order plays a more important role than prosodic information, and even three-year-old Mandarin-speaking children could follow the word order information. Moreover, although German-speaking children seemed to follow the prosodic information, they did not show adult-like performance in the object-accented condition. A plausible reason is that German offers more ways of marking focus, such as flexible word order, prosodic information, and focus particles, so that German-speaking children need more time to master these linguistic tools. Another important empirical finding regarding syntactically marked focus in German is that the cleft construction does not appear to be a valid focus construction, which corroborates previous observations (Dufter, 2009). Further, the eye-tracking method helped to uncover how the parser directs its attention when recognizing focus. The final study showed that, given explicit verbal context, Mandarin-speaking children could understand sentences with a pre-subject only, contributing to a better understanding of how Mandarin-speaking children acquire the focus particle only.
Instructors in teacher education constantly face the paradox of presenting innovative methods of modern school teaching to students in a traditional, receptive format. In Germany, roughly 40 universities train computer science teachers. However, there are only a few concepts that address the connection between educational science and computer science with its didactics, and none that pursue constructivist teaching in computer science.
This master's thesis therefore aims to address this gap and, using the module "Didaktik der Informatik I" at the University of Potsdam, to develop a model for constructivist university teaching. An existing constructivist teaching model is to be transferred to computer science didactics, incorporating elements that connect educational science, the subject discipline, and subject didactics. This can provide a basis for planning modules in computer science didactics, but can also serve as inspiration for transferring existing innovative teaching concepts to other subject didactics.
To create such a constructivist teaching-learning model, the relationship between educational science, subject disciplines, and subject didactics is first explained, and the need for linking them is highlighted. This is followed by an overview of relevant learning theories and previously developed innovative learning concepts. Subsequently, the requirements that the Standing Conference of the Ministers of Education and Cultural Affairs (Kultusministerkonferenz) places on teacher training are discussed, as well as how this training is currently carried out for computer science at the University of Potsdam. From these findings, requirements for a constructivist teaching model are derived. Taking into account the study regulations for the computer science teaching degree, a model for constructivist computer science didactics is then presented.
Future research could examine the extent to which motivation and performance change compared with the original module, and whether competencies in lesson planning and lesson design can be developed more strongly through the new module concept.
Transitional justice is conventionally theorized as how a society deals with past injustices after regime change and alongside democratization. Nonetheless, scholars have not reached a consensus on what is to be included or excluded. Recent ideas of transformative justice seek to expand the understanding of transitional justice to include systemic restructuring and socioeconomic considerations. In the context of Nicaragua — where two transitions occurred within an 11-year span — very little transitional justice took place, in terms of the conventional concept of top-down legalistic mechanisms; however, distinct structural changes and socioeconomic policies can be found with each regime change. By analyzing the transformative justice elements of Nicaragua’s dual transition, this chapter seeks to expand the understanding of transitional justice to include how these factors influence goals of transitions such as sustainable peace and reconciliation for past injustices. The results argue for increased attention to transformative justice theories and a more nuanced conception of justice.
Entrepreneurial failure
(2022)
Although entrepreneurial failure (EF) is a fairly recent topic in entrepreneurship literature, the number of publications has been growing rapidly. Our systematic review maps and integrates the research on EF based on a multi-method approach to give structure and consistency to this fragmented field of research. The results reveal that the field revolves around six thematic clusters of EF: 1) Soft underpinnings of EF, 2) Contextuality of EF, 3) Perception of EF, 4) Two-sided effects of EF, 5) Multi-stage EF effects, and 6) Institutional drivers of EF. An integrative framework of the positive and negative effects of entrepreneurial failure is proposed, and a research agenda is suggested.
The discovery that certain diseases have specific miRNA signatures which correspond to disease progression opens a new biomarker category. The detection of these small non-coding RNAs is performed routinely using body fluids or tissues with real-time PCR, next-generation sequencing, or amplification-based miRNA assays. Antibody-based detection systems are easier to handle than PCR or sequencing and can be considered as alternative methods to support miRNA diagnostics in the future. In this study, we describe the generation of a camelid heavy-chain-only antibody specifically recognizing miRNAs to establish an antibody-based detection method. The generation of nucleic acid-specific binders is a challenge. We selected camelid binders via phage display, expressed them as VHH as well as full-length antibodies, and characterized their binding to several miRNAs from a signature specific for dilated cardiomyopathy. The described workflow can be used to create miRNA-specific binders and establish antibody-based detection methods to provide an additional way to analyze disease-specific miRNA signatures.
Isometric muscle function
(2022)
The cumulative dissertation consists of four original articles. These considered isometric muscle actions in healthy humans from a basic physiological view (oxygen and blood supply) as well as possibilities of distinguishing them. It includes a novel approach to measuring a specific form of isometric holding function which has not been considered in motor science so far. This function is characterized by an adaptation to varying external forces and is of particular importance in daily activities and sports.
The first part of the research program analyzed how the biceps brachii muscle is supplied with oxygen and blood by adapting to a moderate constant load until task failure (publication 1). In this regard, regulative mechanisms were investigated in relation to the issue of presumably compressed capillaries due to high intramuscular pressures (publication 2).
Furthermore, it was examined whether oxygenation and time to task failure (TTF) differ compared with another isometric muscle function (publication 3). This function is mainly of diagnostic interest, as it is used for measuring the maximal voluntary isometric contraction (MVIC) as a gold standard. For that, a person pulls on or pushes against an insurmountable resistance. However, this pulling or pushing form of isometric muscle action (PIMA) differs from the holding one (HIMA).
HIMAs have mainly been examined using constant loads. In order to quantify the adaptability to varying external forces, a new approach was necessary and was developed in the second part of the research program. A device was constructed based on a previously developed pneumatic measurement system and designed to measure the Adaptive Force (AF) of the elbow extensor muscles. The AF determines the adaptability to increasing external forces under isometric (AFiso) and eccentric (AFecc) conditions. First, it was examined whether these parameters can be reliably assessed with the new device (publication 4). Subsequently, the main research question was investigated: Is the maximal AFiso a specific and independent variable of muscle function in comparison to the MVIC? Furthermore, both research parts contained a sub-question of how the results can be influenced.
Parameters of local oxygen saturation (SvO2) and capillary blood filling (rHb) were non-invasively recorded by a spectrophotometer during maximal and submaximal HIMAs and PIMAs.
These were the main findings: Under load, SvO2 and rHb always adjusted into a steady state after an initial decrease. Nevertheless, their behavior could roughly be categorized into two types. In type I, both parameters behaved nearly parallel to each other. In contrast, their progression over time was partly inverse in type II. The inverse behavior probably depends on the level of deoxygenation since rHb increased reliably at a suggested threshold of about 59% SvO2. This triggered mechanism and the found homeostatic steady states seem to be in conflict with the concept of mechanically compressed capillaries and consequently with a restricted blood flow. Anatomical configuration of blood vessels might provide one hypothetical explanation of how blood flow might be maintained. HIMA and PIMA did not differ regarding oxygenation and allocation to the described types. The TTF tended to be longer during PIMA.
As a sub-question, oxygenation and TTF were compared between HIMA and intermittent voluntary muscle twitches during a weight-holding task. TTF, but not oxygenation, differed significantly (Twitch > HIMA). A changed neuromuscular control might serve as a speculative explanation for this result. This is supported by the finding that the TTF did not correlate significantly with the extent of deoxygenation irrespective of the performed task (HIMA, PIMA, or Twitch).
Other neuromuscular aspects of muscle function were considered in the second part of the research program. The new device mentioned above detected different force capacities within four trials on each of two days. Among the AF measurements, the functional counterpart of a concentric muscle action merging into an isometric one was analyzed in comparison to the MVIC.
Based on the results, it can be assumed that a prior concentric muscle action does not influence the MVIC. However, the results were inconsistent and possibly influenced by systematic errors. In contrast, the maximal variables of the AF (AFisomax and AFeccmax) could be measured reliably, as indicated by a high test-retest reliability. Despite substantial correlations between force variables, the AFisomax differed significantly from MVIC and AFmax, which was identical with AFeccmax in almost all cases. Moreover, AFisomax revealed the highest variability between trials.
These results indicate that maximal force capacities should be assessed separately. The adaptive holding capacity of a muscle can be lower compared to a commonly determined MVIC. This is of relevance since muscles frequently need to respond adequately to external forces. If their response does not correspond to the external impact, the muscle is forced to lengthen. In this scenario, joints are not completely stabilized and an injury may occur. This outlined issue should be addressed in future research in the field of sport and health sciences.
Finally, the dissertation presents another possibility to quantify the AFisomax using a handheld device applied in combination with a manual muscle test. This assessment offers a more practical approach for clinical purposes.
Background
Isometric muscle actions can be performed either by initiating the action, e.g., pulling on an immovable resistance (PIMA), or by reacting to an external load, e.g., holding a weight (HIMA). In the present study, it was mainly examined if these modalities could be differentiated by oxygenation variables as well as by time to task failure (TTF). Furthermore, it was analyzed if variables are changed by intermittent voluntary muscle twitches during weight holding (Twitch). It was assumed that twitches during a weight holding task change the character of the isometric muscle action from reacting (≙ HIMA) to acting (≙ PIMA).
Methods
Twelve subjects (two dropouts) randomly performed two tasks (HIMA vs. PIMA or HIMA vs. Twitch, n = 5 each) with the elbow flexors at 60% of maximal torque, maintained until muscle failure with each arm. Local capillary venous oxygen saturation (SvO2) and relative hemoglobin amount (rHb) were measured by light spectrometry.
Results
Within subjects, no significant differences were found between tasks regarding the behavior of SvO2 and rHb, the slope and extent of deoxygenation (max. SvO2 decrease), SvO2 level at global rHb minimum, and time to SvO2 steady states. The TTF was significantly longer during Twitch and PIMA (incl. Twitch) compared to HIMA (p = 0.043 and 0.047, respectively). There was no substantial correlation between TTF and maximal deoxygenation independently of the task (r = − 0.13).
Conclusions
HIMA and PIMA seem to have a similar microvascular oxygen and blood supply. The supply might be sufficient, which is expressed by homeostatic steady states of SvO2 in all trials and increases in rHb in most of the trials. Intermittent voluntary muscle twitches might not serve as a further support but extend the TTF. A changed neuromuscular control is discussed as possible explanation.
Traditional organizations are strongly encouraged by emerging digital customer behavior and digital competition to transform their businesses for the digital age. Incumbents are particularly exposed to the tension between maintaining and renewing their business model. Banking is one of the industries most affected by digitalization, with a large stream of digital innovations around Fintech. Most research contributions focus on digital innovations such as Fintech, but there are only a few studies on the related challenges and perspectives of incumbent organizations such as traditional banks. Against this background, this dissertation examines the specific causes, effects, and solutions for traditional banks in digital transformation, a research area that has so far been underrepresented.
The first part of the thesis examines how digitalization has changed the latent customer expectations in banking and studies the underlying technological drivers of evolving business-to-consumer (B2C) business models. Online consumer reviews are systematized to identify latent concepts of customer behavior and future decision paths as strategic digitalization effects. Furthermore, the service attribute preferences, the impact of influencing factors and the underlying customer segments are uncovered for checking accounts in a discrete choice experiment. The dissertation contributes here to customer behavior research in digital transformation, moving beyond the technology acceptance model. In addition, the dissertation systematizes value proposition types in the evolving discourse around smart products and services as key drivers of business models and market power in the platform economy.
The second part of the thesis focuses on the effects of digital transformation on the strategy development of financial service providers, which are classified along with their firm performance levels. Standard types are derived based on fuzzy-set qualitative comparative analysis (fsQCA), with facade digitalization as one typical standard type for low performing incumbent banks that lack a holistic strategic response to digital transformation. Based on this, the contradictory impact of digitalization measures on key business figures is examined for German savings banks, confirming that the shift towards digital customer interaction was not accompanied by new revenue models diminishing bank profitability. The dissertation further contributes to the discourse on digitalized work designs and the consequences for job perceptions in banking customer advisory. The threefold impact of the IT support perceived in customer interaction on the job satisfaction of customer advisors is disentangled.
In the third part of the dissertation, solutions are developed design-oriented for core action areas of digitalized business models, i.e., data and platforms. A consolidated taxonomy for data-driven business models and a future reference model for digital banking have been developed. The impact of the platform economy is demonstrated here using the example of the market entry by Bigtech. The role-based e3-value modeling is extended by meta-roles and role segments and linked to value co-creation mapping in VDML. In this way, the dissertation extends enterprise modeling research on platform ecosystems and value co-creation using the example of banking.
Objective: A role for microRNAs is implicated in several biological and pathological processes. We investigated the effects of high-intensity interval training (HIIT) and moderate-intensity continuous training (MICT) on molecular markers of diabetic cardiomyopathy in rats.
Methods: Eighteen male Wistar rats (260 ± 10 g; aged 8 weeks) with streptozotocin (STZ)-induced type 1 diabetes mellitus (55 mg/kg, IP) were randomly allocated to three groups: control, MICT, and HIIT. The two different training protocols were performed 5 days each week for 5 weeks. Cardiac performance (end-systolic and end-diastolic dimensions, ejection fraction), the expression of miR-206, HSP60, and markers of apoptosis (cleaved PARP and cytochrome C) were determined at the end of the exercise interventions.
Results: Both exercise interventions (HIIT and MICT) decreased blood glucose levels and improved cardiac performance, with greater changes in the HIIT group (p < 0.001, η2: 0.909). While the expressions of miR-206 and apoptotic markers decreased in both training protocols (p < 0.001, η2: 0.967), HIIT caused greater reductions in apoptotic markers and produced a 20% greater reduction in miR-206 compared with the MICT protocol (p < 0.001). Furthermore, both training protocols enhanced the expression of HSP60 (p < 0.001, η2: 0.976), with a nearly 50% greater increase in the HIIT group compared with MICT.
Conclusions: Our results indicate that both exercise protocols, HIIT and MICT, have the potential to reduce diabetic cardiomyopathy by modifying the expression of miR-206 and its downstream targets of apoptosis. It seems however that HIIT is even more effective than MICT to modulate these molecular markers.
Aims: High-intensity interval training (HIIT) improves mitochondrial characteristics. This study compared the impact of two workload-matched HIIT protocols with different work:recovery ratios on regulatory factors related to mitochondrial biogenesis in the soleus muscle of diabetic rats.
Materials and methods: Twenty-four Wistar rats were randomly divided into four equal-sized groups: non-diabetic control, diabetic control (DC), diabetic with long recovery exercise [4–5 × 2-min running at 80%–90% of the maximum speed reached with 2-min of recovery at 40% of the maximum speed reached (DHIIT1:1)], and diabetic with short recovery exercise [5–6 × 2-min running at 80%–90% of the maximum speed reached with 1-min of recovery at 30% of the maximum speed reached (DHIIT2:1)]. Both HIIT protocols were completed five times/week for 4 weeks while maintaining equal running distances in each session.
Results: Gene and protein expressions of PGC-1α, p53, and citrate synthase of the muscles increased significantly following DHIIT1:1 and DHIIT2:1 compared to DC (p < 0.05). Most parameters, except for PGC-1α protein (p = 0.597), were significantly higher in DHIIT2:1 than in DHIIT1:1 (p < 0.05). Both DHIIT groups showed significant increases in maximum speed, with larger increases in DHIIT2:1 compared with DHIIT1:1.
Conclusion: Our findings indicate that both HIIT protocols can potently up-regulate gene and protein expression of PGC-1α, p53, and CS. However, DHIIT2:1 has superior effects compared with DHIIT1:1 in improving mitochondrial adaptive responses in diabetic rats.
Like natural transcription factors, synthetic transcription factors consist of a DNA-binding domain, which attaches specifically to the binding-site sequence upstream of the target gene, and an activation domain, which recruits the transcription machinery so that the target gene is expressed. The difference from natural transcription factors is that both the DNA-binding domain and the activation domain can be foreign to the host, allowing artificial metabolic pathways to be induced in the host, mostly chemically. The optogenetic synthetic transcription factors developed here go one step further. Here, the DNA-binding domain is no longer coupled to the activation domain but to the blue-light photoreceptor CRY2, while the activation domain was fused to its interaction partner CIB1. Under blue-light irradiation, CRY2 and CIB1 dimerize, bringing the two domains together and producing a functional transcription factor. This system was genomically integrated into Saccharomyces cerevisiae. The constructed system was verified with the reporter yEGFP, which could be detected by flow cytometry. It was shown that yEGFP expression can be tuned by emitting blue-light pulses of different lengths, or by varying the DNA-binding domain, the activation domain, or the number of binding sites at which the DNA-binding domain attaches. To make the system attractive for industrial applications, it was scaled up from deep-well plates to photobioreactor scale. The blue-light system also proved functional both in the laboratory strain YPH500 and in the industrially widely used yeast strain CEN.PK. Furthermore, an industrially relevant protein could likewise be expressed with the verified system.
Finally, in this work the established blue-light system was successfully combined with a red-light system, which had not been described before.
Business incubators hatch start-ups, helping them to survive their early stage and to create a solid foundation for sustainable growth by providing services and access to knowledge. The great practical relevance led to a strong interest of researchers and a high output of scholarly publications, which made the field complex and scattered. To organize the research on incubators and provide a systematic overview of the field, we conducted bibliometric performance analyses and science mappings. The performance analyses depict the temporal development of the number of incubator publications and their citations, the most cited and most productive journals, countries, and authors, and the 20 most cited articles. The author keyword co-occurrence analysis distinguishes six, and the bibliographic coupling seven research themes. Based on a content analysis of the science mappings, we propose a research framework for future research on business incubators.
In recent years, the amicus curiae, rooted in the Anglo-American legal sphere, has found its way, albeit in different forms, into the administrative jurisdictions of Germany and France. From a comparative-law perspective, the French code of administrative justice proves progressive, since the procedural instrument, in contrast to the German legal situation, is already codified in positive law. So far, however, this progressiveness has not had a noticeable impact on the practice of third-party intervention: amicus curiae briefs remain rare in both countries and across all levels of administrative jurisdiction.
Since no generalizations about this legal practice are therefore possible, an analysis of the potential functional role of such amicus curiae briefs can rely only on theoretical considerations. On that basis, an information function vis-à-vis the court with regard to questions of fact and law can clearly be affirmed. The procedural mechanism is also likely to hold an additional, though not democratic, legitimation potential for judicial decisions: by enabling societal participation and thereby embedding administrative court proceedings in their respective social context, it can help increase societal acceptance of a judicial power that is coming under growing pressure to justify itself.
This thesis deals with the synthesis of protein and composite protein-mineral microcapsules by the application of high-intensity ultrasound at the oil-water interface. While one system is stabilized by BSA molecules, the other system is stabilized by different nanoparticles modified with BSA. A comprehensive study of all synthesis stages as well as of the resulting capsules was carried out, and a plausible explanation of the capsule formation mechanism was proposed. During the formation of BSA microcapsules, the protein molecules first adsorb at the O/W interface and unfold there, forming an interfacial network stabilized by hydrophobic interactions and hydrogen bonds between neighboring molecules. Simultaneously, the ultrasonic treatment causes the cross-linking of the BSA molecules via the formation of intermolecular disulfide bonds. In this thesis, experimental evidence of the ultrasonically induced cross-linking of BSA in the shells of protein-based microcapsules is presented. The concept proposed many years ago by Suslick and co-workers is thereby confirmed experimentally for the first time. Moreover, a consistent mechanism for the formation of intermolecular disulfide bonds in capsule shells is proposed, based on the redistribution of thiol and disulfide groups in BSA under the action of high-energy ultrasound. The formation of composite protein-mineral microcapsules loaded with three different oils and with shells composed of nanoparticles was also successful. The nature of the loaded oil and the type of nanoparticles in the shell influenced the size and shape of the microcapsules. The examination of the composite capsules revealed that the BSA molecules adsorbed on the nanoparticle surfaces in the capsule shell are not cross-linked by intermolecular disulfide bonds. Instead, a Pickering emulsion is formed.
The surface modification of composite microcapsules was successfully demonstrated, both through pre-modification of the main components and through post-modification of the surface of the finished composite microcapsules. Additionally, the mechanical properties of protein and composite protein-mineral microcapsules were compared. The results showed that the protein microcapsules are more resistant to elastic deformation.
The Role of the Precuneus in Human Spatial Updating in a Real Environment Setting—A cTBS Study
(2022)
As we move through an environment, we update the positions of our body relative to other objects, even when some objects temporarily or permanently leave our field of view; this ability is termed egocentric spatial updating and plays an important role in everyday life. However, our knowledge about its representation in the brain is still scarce, with previous studies using virtual movements in virtual environments or patients with brain lesions suggesting that the precuneus might play an important role. Whether this assumption also holds when healthy humans move in real environments, where full body-based cues are available in addition to the visual cues typically used in many VR studies, is unclear. Therefore, in this study we investigated the role of the precuneus in egocentric spatial updating in a real environment setting in 20 healthy young participants who underwent two conditions in a cross-over design: (a) a stimulation condition, in which continuous theta-burst stimulation (cTBS) was applied to inhibit the precuneus, and (b) a sham condition (activated coil turned upside down). In both conditions, participants had to walk back blindfolded to objects they had previously memorized while walking with open eyes. Simplified trials (without spatial updating) were used as a control condition to make sure the participants were not affected by factors such as walking blindfolded, vestibular deficits, or working memory deficits. A significant interaction was found, with participants performing better in the sham condition compared to real stimulation, showing smaller errors in both distance and angle. The results of our study reveal evidence of an important role of the precuneus in real-environment egocentric spatial updating; studies with larger samples are necessary to confirm and further investigate this finding.
Microplastics (MPs) in the environment are estimated to increase in the near future due to the increasing consumption of plastic products and further fragmentation into small pieces. The fate and effects of MPs once released into the freshwater environment are still scarcely studied compared to the marine environment. In order to understand the possible effects and interactions of MPs in the freshwater environment, planktonic zooplankton organisms are very useful because of their crucial trophic role. In particular, freshwater rotifers are among the most abundant organisms and form the interface between primary producers and secondary consumers. The aim of my thesis was to investigate the ingestion and the effects of MPs in rotifers in a more natural scenario and to identify processes, such as the aggregation of MPs, the food dilution effect, and increasing MP concentrations, that could influence the final outcome of MPs in the environment. In fact, in a near-natural scenario, the interaction of MPs with bacteria and algae, their aggregation, and their size and concentration are considered drivers of ingestion and effect. The aggregation of MPs makes smaller MPs more available to rotifers and larger MPs less ingested. The negative effect caused by the ingestion of MPs was modulated by their size but also by the quantity and quality of food, which caused variable responses. In fact, rotifers in the environment are subject to food limitation, and the presence of MPs could exacerbate this condition and decrease population growth and reproduction. Finally, in a scenario incorporating an entire zooplanktonic community, MPs were ingested by most individuals depending on their feeding mode but also on the concentration of MPs, which was found to be essential for the availability of MPs.
This study highlights the importance of investigating MPs from a more environmental perspective, which could provide an alternative and realistic view of the effects of MPs in the ecosystem.
Duplicate detection describes the process of finding multiple representations of the same real-world entity in the absence of a unique identifier, and has many application areas, such as customer relationship management, genealogy and social sciences, or online shopping. Due to the increasing amount of data in recent years, the problem has become even more challenging on the one hand, but has led to a renaissance in duplicate detection research on the other hand.
This thesis examines the effects and opportunities of transitive relationships on the duplicate detection process. Transitivity implies that if record pairs ⟨ri,rj⟩ and ⟨rj,rk⟩ are classified as duplicates, then also record pair ⟨ri,rk⟩ has to be a duplicate. However, this reasoning might contradict with the pairwise classification, which is usually based on the similarity of objects. An essential property of similarity, in contrast to equivalence, is that similarity is not necessarily transitive.
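To make the contradiction concrete, the closure step can be sketched as follows. This is an illustrative toy example, not code from the thesis: it shows how classified pairs ⟨r0,r1⟩ and ⟨r1,r2⟩ force ⟨r0,r2⟩ into the same cluster, even if that pair's similarity was below the classification threshold.

```python
# Transitive closure over pairwise duplicate classifications via
# union-find: records connected by classified pairs end up in one cluster.
def transitive_closure(n: int, duplicate_pairs):
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in duplicate_pairs:
        parent[find(a)] = find(b)

    clusters = {}
    for r in range(n):
        clusters.setdefault(find(r), []).append(r)
    return list(clusters.values())

# The pairwise classifier accepted <0,1> and <1,2> but NOT <0,2>;
# the closure nevertheless merges all three records.
print(transitive_closure(4, [(0, 1), (1, 2)]))  # [[0, 1, 2], [3]]
```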
First, we experimentally evaluate the effect of an increasing data volume on the threshold selection to classify whether a record pair is a duplicate or non-duplicate. Our experiments show that independently of the pair selection algorithm and the used similarity measure, selecting a suitable threshold becomes more difficult with an increasing number of records due to an increased probability of adding a false duplicate to an existing cluster. Thus, the best threshold changes with the dataset size, and a good threshold for a small (possibly sampled) dataset is not necessarily a good threshold for a larger (possibly complete) dataset. As data grows over time, earlier selected thresholds are no longer a suitable choice, and the problem becomes worse for datasets with larger clusters.
Second, we present, with the Duplicate Count Strategy (DCS) and its enhancement DCS++, two alternatives to the standard Sorted Neighborhood Method (SNM) for the selection of candidate record pairs. DCS adapts SNM's window size based on the number of detected duplicates, and DCS++ uses transitive dependencies to save complex comparisons for finding duplicates in larger clusters. We prove that with a proper (domain- and data-independent!) threshold, DCS++ is more efficient than SNM without loss of effectiveness.
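The core idea of a duplicate-count-driven window can be sketched as follows. This is a simplified illustration of the principle, not the authors' exact DCS/DCS++ algorithm; the comparator, window size, and growth step are invented for the example.

```python
# Sorted-Neighborhood-style comparison with an adaptive window: the
# window for a record grows whenever duplicates are found, so larger
# clusters receive more comparisons while sparse regions stay cheap.
def adaptive_snm(records, is_duplicate, base_window=3, growth=2):
    pairs = []
    for i, rec in enumerate(records):
        window = base_window
        j = i + 1
        while j < len(records) and j < i + window:
            if is_duplicate(rec, records[j]):
                pairs.append((i, j))
                window += growth  # duplicate found -> widen the window
            j += 1
    return pairs

# Toy comparator on an already sorted list: duplicates share a first letter.
recs = ["anna", "anne", "anny", "bob", "bobby"]
print(adaptive_snm(recs, lambda a, b: a[0] == b[0]))
```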
Third, we tackle the problem of contradicting pairwise classifications. Usually, the transitive closure is used for pairwise classifications to obtain a transitively closed result set. However, the transitive closure disregards negative classifications. We present three new and several existing clustering algorithms and experimentally evaluate them on various datasets and under various algorithm configurations. The results show that the commonly used transitive closure is inferior to most other clustering algorithms, especially for the precision of results. In scenarios with larger clusters, our proposed EMCC algorithm is, together with Markov Clustering, the best performing clustering approach for duplicate detection, although its runtime is longer than Markov Clustering due to the subexponential time complexity. EMCC especially outperforms Markov Clustering regarding the precision of the results and additionally has the advantage that it can also be used in scenarios where edge weights are not available.
A decade ago, it became feasible to store multi-terabyte databases in main memory. These in-memory databases (IMDBs) profit from DRAM's low latency and high throughput as well as from the removal of costly abstractions used in disk-based systems, such as the buffer cache. However, as the DRAM technology approaches physical limits, scaling these databases becomes difficult. Non-volatile memory (NVM) addresses this challenge. This new type of memory is persistent, has more capacity than DRAM (4x), and does not suffer from its density-inhibiting limitations. Yet, as NVM has a higher latency (5-15x) and a lower throughput (0.35x), it cannot fully replace DRAM.
IMDBs thus need to navigate the trade-off between the two memory tiers. We present a solution to this optimization problem. Leveraging information about access frequencies and patterns, our solution utilizes NVM's additional capacity while minimizing the associated access costs. Unlike buffer cache-based implementations, our tiering abstraction does not add any costs when reading data from DRAM. As such, it can act as a drop-in replacement for existing IMDBs. Our contributions are as follows:
(1) As the foundation for our research, we present Hyrise, an open-source, columnar IMDB that we re-engineered and re-wrote from scratch. Hyrise enables realistic end-to-end benchmarks of SQL workloads and offers query performance which is competitive with other research and commercial systems. At the same time, Hyrise is easy to understand and modify as repeatedly demonstrated by its uses in research and teaching.
(2) We present a novel memory management framework for different memory and storage tiers. By encapsulating the allocation and access methods of these tiers, we enable existing data structures to be stored on different tiers with no modifications to their implementation. Besides DRAM and NVM, we also support and evaluate SSDs and have made provisions for upcoming technologies such as disaggregated memory.
(3) To identify the parts of the data that can be moved to (s)lower tiers with little performance impact, we present a tracking method that identifies access skew both in the row and column dimensions and that detects patterns within consecutive accesses. Unlike existing methods that have substantial associated costs, our access counters exhibit no identifiable overhead in standard benchmarks despite their increased accuracy.
(4) Finally, we introduce a tiering algorithm that optimizes the data placement for a given memory budget. In the TPC-H benchmark, this allows us to move 90% of the data to NVM while the throughput is reduced by only 10.8% and the query latency is increased by 11.6%. With this, we outperform approaches that ignore the workload's access skew and access patterns and increase the query latency by 20% or more.
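The placement decision in (4) resembles a knapsack-style optimization: keep the hottest data in DRAM within a budget and tier the rest out to NVM. The following is a deliberately simplified greedy sketch; the chunk granularity, access counts, and ranking by access density are illustrative assumptions, not the thesis's actual algorithm:

```python
def place_chunks(chunks, dram_budget):
    """Greedy data placement across DRAM and NVM.

    chunks: list of (name, size_bytes, access_count) tuples.
    Chunks are ranked by access density (accesses per byte), so the
    hottest data stays in DRAM until the budget is exhausted; all
    remaining chunks are placed on the larger but slower NVM tier.
    """
    ranked = sorted(chunks, key=lambda c: c[2] / c[1], reverse=True)
    placement, used = {}, 0
    for name, size, _ in ranked:
        if used + size <= dram_budget:
            placement[name] = "DRAM"
            used += size
        else:
            placement[name] = "NVM"
    return placement
```

A workload-aware variant would additionally weight sequential versus random access patterns, since NVM's latency penalty differs between the two.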
Individually, our contributions provide novel approaches to current challenges in systems engineering and database research. Combining them allows IMDBs to scale past the limits of DRAM while continuing to profit from the benefits of in-memory computing.
Learning from failure
(2022)
Regression testing is a widespread practice in today's software industry to ensure software product quality. Developers derive a set of test cases, and execute them frequently to ensure that their change did not adversely affect existing functionality. As the software product and its test suite grow, the time to feedback during regression test sessions increases, and impedes programmer productivity: developers wait longer for tests to complete, and delays in fault detection render fault removal increasingly difficult.
Test case prioritization addresses the problem of long feedback loops by reordering test cases, such that test cases with a high failure probability run first and test case failures become actionable early in the testing process. We ask: given test execution schedules reconstructed from publicly available data, to what extent can their fault detection efficiency be improved, and which technique yields the most efficient test schedules with respect to APFD (Average Percentage of Faults Detected)?
To this end, we recover 6,200 regression test sessions from the build log files of Travis CI, a popular continuous integration service, and gather 62,000 accompanying changelists. We evaluate the efficiency of current test schedules, and examine the prioritization results of state-of-the-art lightweight, history-based heuristics. We propose and evaluate a novel set of prioritization algorithms, which connect software changes and test failures in a matrix-like data structure.
Our studies indicate that the optimization potential is substantial, because the existing test plans score only 30% APFD. The predictive power of past test failures proves to be outstanding: simple heuristics, such as repeating tests with failures in recent sessions, result in efficiency scores of 95% APFD. The best-performing matrix-based heuristic achieves a similar score of 92.5% APFD. In contrast to prior approaches, we argue that matrix-based techniques are useful beyond the scope of effective prioritization, and enable a number of use cases involving software maintenance.
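APFD, the metric used throughout these results, rewards schedules that reveal faults early. A small sketch of its standard formula (the test and fault data below are invented for illustration):

```python
def apfd(schedule, faults_detected_by):
    """Average Percentage of Faults Detected for a test schedule.

    schedule: ordered list of test names, in execution order.
    faults_detected_by: dict mapping each fault to the set of tests
    that reveal it.

    APFD = 1 - (sum of first-detection positions) / (n * m) + 1 / (2n)
    where n is the number of tests and m the number of faults.
    """
    n = len(schedule)
    m = len(faults_detected_by)
    position = {test: i + 1 for i, test in enumerate(schedule)}  # 1-based
    first_detection = [
        min(position[t] for t in tests if t in position)
        for tests in faults_detected_by.values()
    ]
    return 1 - sum(first_detection) / (n * m) + 1 / (2 * n)
```

Moving a fault-revealing test from the end of a four-test schedule to the front raises the score substantially, which is exactly the effect the prioritization heuristics above exploit.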
We validate our findings from continuous integration processes by extending a continuous testing tool within development environments with means of test prioritization, and pose further research questions. We think that our findings are suited to propel the adoption of (continuous) testing practices, and that programmers' toolboxes should contain test prioritization as an essential productivity tool.
Fitness, risk taking, and spatial behavior covary with boldness in experimental vole populations
(2022)
Individuals of a population may vary along a pace-of-life syndrome from highly fecund, short-lived, bold, dispersive “fast” types at one end of the spectrum to less fecund, long-lived, shy, plastic “slow” types at the other end. Risk-taking behavior might mediate the underlying life history trade-off, but empirical evidence supporting this hypothesis is still ambiguous. Using experimentally created populations of common voles (Microtus arvalis)—a species with distinct seasonal life history trajectories—we aimed to test whether individual differences in boldness behavior covary with risk taking, space use, and fitness. We quantified risk taking, space use (via automated tracking), survival, and reproductive success (via genetic parentage analysis) in 8 to 14 experimental, mixed-sex populations of 113 common voles of known boldness type in large grassland enclosures over a significant part of their adult life span and two reproductive events. Populations were composed of extreme boldness types (bold or shy) of both sexes. Bolder individuals took more risks than shyer ones; this, however, did not affect survival. Bolder males, but not females, produced more offspring than shy conspecifics. Daily home range and core area sizes, based on 95% and 50% kernel density estimates (20 ± 10 per individual, n = 54 individuals), were highly repeatable over time. Individual space use unfolded differently for sex-boldness type combinations over the course of the experiment. While day ranges decreased for shy females, they increased for bold females and all males. Space use trajectories may, hence, indicate differences in coping styles when individuals are confronted with a novel social and physical environment. Thus, interindividual differences in boldness predict risk taking under near-natural conditions and have consequences for fitness in males, which have a higher reproductive potential than females.
Given extreme inter- and intra-annual fluctuations in population density in the study species and its short life span, density-dependent fluctuating selection operating differently on the sexes might maintain (co)variation in boldness, risk taking, and pace-of-life.
The increasing introduction of non-native plant species may pose a threat to local biodiversity. However, the basis of successful plant invasion is not conclusively understood, especially since these plant species can adapt to the new range within a short period of time despite the impoverished genetic diversity of the founding populations. In this context, DNA methylation is considered a promising candidate for explaining successful adaptation mechanisms in the new habitat. DNA methylation is a heritable variation in gene expression that does not change the underlying genetic information; it is therefore considered a so-called epigenetic mechanism, but it has so far been studied mainly in clonally reproducing plant species or genetic model plants. Understanding this epigenetic mechanism in the context of non-native, predominantly sexually reproducing plant species might help to expand knowledge in biodiversity research on the interaction between plants and their habitats and, on this basis, may enable more precise measures in conservation biology.
For my studies, I combined chemical DNA demethylation of field-collected seed material from predominantly sexually reproducing species with rearing the offspring under common climatic conditions to examine DNA methylation in an ecological-evolutionary context. The contrast between chemically treated (demethylated) plants, whose variation in DNA methylation was artificially reduced, and untreated control plants of the same species allowed me to study the impact of this mechanism on adaptive trait differentiation and local adaptation. Against this experimental background, I conducted three studies examining the effect of DNA methylation in non-native species along a climatic gradient and between climatically divergent regions.
The first study focused on adaptive trait differentiation in two invasive perennial goldenrod species, Solidago canadensis sensu lato and S. gigantea AITON, along a climate gradient of more than 1000 km in length in Central Europe. I found population differences in flowering timing, plant height, and biomass in the longer-established S. canadensis, but only in the number of regrowing shoots for S. gigantea. While S. canadensis did not show any population structure, I was able to identify three genetic groups along this climatic gradient in S. gigantea. Surprisingly, demethylated plants of both species showed no change in the majority of traits studied. In the subsequent second study, I focused on the longer-established goldenrod species S. canadensis and used molecular analyses to infer spatial epigenetic and genetic population differences in the same specimens from the previous study. I found weak genetic but no epigenetic spatial variation between populations. Additionally, I was able to identify one genetic marker and one epigenetic marker putatively susceptible to selection. However, the results of this study reconfirmed that the epigenetic mechanism of DNA methylation appears to be hardly involved in adaptive processes within the new range of S. canadensis.
Finally, I conducted a third study in which I reciprocally transplanted short-lived plant species between two climatically divergent regions in Germany to investigate local adaptation at the plant family level. For this purpose, I used four plant families (Amaranthaceae, Asteraceae, Plantaginaceae, Solanaceae) and additionally compared non-native with native plant species. Seeds were transplanted between regions more than 600 kilometers apart with either a temperate-oceanic or a temperate-continental climate. In this study, some species, non-native and native alike, were found to be maladapted to their own local conditions. In demethylated individuals of the plant species studied, DNA methylation had inconsistent but species-specific effects on survival and biomass production. The results of this study highlight that DNA methylation did not make a substantial contribution to local adaptation in either the non-native or the native species studied.
In summary, my work showed that DNA methylation plays a negligible role in both adaptive trait variation along climatic gradients and local adaptation in non-native plant species that either exhibit a high degree of genetic variation or rely mainly on sexual reproduction with low clonal propagation. I was able to show that the adaptive success of these non-native plant species can hardly be explained by DNA methylation, but could be a possible consequence of multiple introductions, dispersal corridors and meta-population dynamics. Similarly, my results illustrate that the use of plant species that do not predominantly reproduce clonally and are not model plants is essential to characterize the effect size of epigenetic mechanisms in an ecological-evolutionary context.
Biological invasions may result from multiple introductions, which might compensate for reduced gene pools caused by bottleneck events, but could also dilute adaptive processes. A previous common-garden experiment showed heritable latitudinal clines in fitness-related traits in the invasive goldenrod Solidago canadensis in Central Europe. These latitudinal clines remained stable even in plants chemically treated with zebularine to reduce epigenetic variation. However, despite the heritability of traits investigated, genetic isolation-by-distance was non-significant. Utilizing the same specimens, we applied a molecular analysis of (epi)genetic differentiation with standard and methylation-sensitive (MSAP) AFLPs. We tested whether this variation was spatially structured among populations and whether zebularine had altered epigenetic variation. Additionally, we used genome scans to mine for putative outlier loci susceptible to selection processes in the invaded range. Despite the absence of isolation-by-distance, we found spatial genetic neighborhoods among populations and two AFLP clusters differentiating northern and southern Solidago populations. Genetic and epigenetic diversity were significantly correlated, but not linked to phenotypic variation. Hence, no spatial epigenetic patterns were detected along the latitudinal gradient sampled. Applying genome-scan approaches (BAYESCAN, BAYESCENV, RDA, and LFMM), we found 51 genetic and epigenetic loci putatively responding to selection. One of these genetic loci was significantly more frequent in populations at the northern range. Also, one epigenetic locus was more frequent in populations in the southern range, but this pattern was lost under zebularine treatment. Our results point to some genetic, but not epigenetic adaptation processes along a large-scale latitudinal gradient of S. canadensis in its invasive range.
Hunting Down Animal Verbs
(2022)
Language change is an essential feature of human language, and it is therefore one of the focal areas of the scientific study of language. Language change is always tacitly at work in all languages of the world and at all levels of a given language, be it phonology, morphology, syntax, semantics, etc. It has been suggested that it is precisely the capacity to constantly change and adjust that allows language to keep serving the communicative goals of its users, from ancient to modern times (Fauconnier & Turner, 2003, p. 179).
This thesis investigates an especially salient pattern of lexicogrammatical change, namely the word-formation of verbs from animal nouns by zero-derivation, in the process of which nouns such as dog, horse, or beaver change their usage and meaning to produce animal verbs: to dog ‘to follow someone persistently and with a malicious intent’, to horse about/around ‘to make fun of, to “rag”, to ridicule someone’, and to beaver away ‘to work away with great enthusiasm’, respectively. In the previous literature this pattern of language change has been termed verbal zoosemy (e.g. Kiełtyka, 2016), i.e. the metaphorical construal of human actions by means of linguistic material from the domain of animals.
The approach taken in this study is not simply to report on the objective changes in the morphology, syntactic distribution, and meaning of such linguistic units before and after conversion, but to uncover the complexity of the cognitive mechanisms which allow the speakers of English to reclassify such well-established nominal units as animal nouns into verbs. It is assumed that the grammatical change in these lexical units is predicated on and triggered by preceding semantic change. Thus, the study is set in the framework of Cognitive Historical Semantics and employs Conceptual Metaphor and Metonymy Theory (CMMT) to untangle the intricacies of the semantic change that makes the grammatical change of animal nouns into verbs possible and acceptable in the minds of English speakers.
To this end, this study employed the Oxford English Dictionary Online (OED Online) to compile a glossary of 96 denominal animal verbal forms tied to 209 verbal senses (most verbs in the dataset displayed polysemy). The data collected from the OED Online included not only the senses of the verbs, but also the date of the earliest recorded use of the verbal form with the given sense (regarded in the study as the date of conversion), the earliest usage examples for individual senses and morphologically or semantically related linguistic units from the lexical field of the respective parent noun which were amenable to explaining the observed instances of semantic change. Each instance of zoosemisation, i.e. of the creation of a separate metaphorical verbal sense, was then carefully analysed on the basis of the data collected and classified with the help of the CMMT. In the final stage, a comprehensive and systematic classification of the senses of animal verbs in accordance with the cognitive mechanisms of their creation (metaphor, metonymy, or a combination thereof) was produced together with a timeline of the first appearance of individual metaphorical senses of animal verbs recorded in the OED.
The results show that animal verbs are produced through the interaction of conceptual metaphor and metonymy. Specifically, it was established that two major patterns of metaphor-metonymy interaction underpinning the process of verbal zoosemisation are metaphor from metonymy and metonymy from metaphor. In the former pattern, either an already existing metonymic animal verb is expanded to include the target domain PEOPLE, or the animal noun itself acts as a metonymic vehicle to a certain element of the idealised cognitive model of the given animal, which is metaphorically projected onto people. In the latter mechanism, a metaphorical projection of an animal term initially enters the lexicon in the form of a metaphorical animal noun referring to a human entity, and later in the course of language development it comes to metonymically stand for the action, which the given entity either performs or is involved in. Secondarily, it was observed that individual animal nouns can undergo multiple rounds of zoosemic conversion over time depending on the semantic frame in which the given linguistic unit undergoes denominal conversion, and that results in the polysemy of most animal verbs.
This master's thesis examines the role of Reich Economics Minister Hjalmar Schacht's "New Plan" in National Socialist foreign trade policy in five consecutive steps. First, a brief overview of the current state of research on the "New Plan" is provided, together with a discussion of the sources available for addressing the research question. Second, in order to assess the relationship between the "New Plan" and the foreign trade policy guidelines of National Socialism, these guidelines are reconstructed from suitable primary sources, both for the NSDAP as a party and for Hitler as its unchallenged political leader. Third, drawing on the relevant economic developments, the thesis outlines the foreign trade crisis that emerged from mid-1934 onwards and that the "New Plan" was meant to resolve in the interests of the Nazi regime. Fourth, in the main part of the thesis, the "New Plan" is explained in several steps: first the political developments on which the "New Plan" could build, then the various components of its mechanism that were reformed, extended, or newly created on this basis. The extent to which these measures were compatible with the foreign trade policy guidelines of Nazi ideology is subsequently analysed and critically assessed. The effectiveness of the "New Plan" is also evaluated, on the basis of the sources, for five thematic areas using economic indicators from the Statistical Yearbook of the German Reich (Statistisches Jahrbuch des Deutschen Reiches). This analysis covers the period from the beginning of the "New Plan" in 1934 to Schacht's removal from power as Reich Economics Minister at the end of 1937.
Intelligence, as well as working memory and attention, affect the acquisition of mathematical competencies. This paper aimed to examine the influence of working memory and attention when taking different mathematical skills into account as a function of children’s intellectual ability. Overall, intelligence, working memory, attention and numerical skills were assessed twice in 1868 German pre-school children (t1, t2) and again at 2nd grade (t3). We defined three intellectual ability groups based on the results of intellectual assessment at t1 and t2. Group comparisons revealed significant differences between the three intellectual ability groups. Over time, children with low intellectual ability showed the lowest achievement in domain-general and numerical and mathematical skills compared to children of average intellectual ability. The highest achievement on the aforementioned variables was found for children of high intellectual ability. Additionally, path modelling revealed that, depending on the intellectual ability, different models of varying complexity could be generated. These models differed with regard to the relevance of the predictors (t2) and the future mathematical skills (t3). Causes and conclusions of these findings are discussed.
The reform of the Common European Asylum System (CEAS; German: GEAS) is one of the greatest challenges and most pressing tasks facing the EU and its member states. The question of a "fair sharing of burdens" in asylum and migration policy is putting the cohesion of the EU to a severe test. Since the failed negotiations on the CEAS reform in 2016/2017, the member states have been trying to strike a balance between the principles of solidarity and responsibility, as Article 80 TFEU requires for the CEAS. Depending on their interests, however, member states attach very different meanings to these principles.
This thesis examines the reform efforts concerning the CEAS after the Commission presented its proposals in September 2020 and sheds light on the diverging interests of the member states regarding the reception and distribution of refugees. The aim of the thesis is to assess the prospects of reaching an agreement on the principles of solidarity and responsibility. To this end, the obligations under asylum law based on international agreements such as the Geneva Refugee Convention are first presented. Subsequently, the CEAS and the Dublin system, which assigns responsibility for asylum procedures to the state of first entry, as well as the reasons for its failure, are analysed.
This division of responsibility, which places a disproportionate burden on the southern member states, is a focal point of conflicts, mutual accusations, and mistrust among the member states. As a result of genuine overload and of a partly self-inflicted inability to fulfil their CEAS obligations, the southern states call for support from the north and in some cases even pursue a laissez-passer policy. The at times catastrophic conditions in asylum procedures, accommodation, and care for refugees create obstacles to return and put pressure on the destination states to show more solidarity.
Based on these findings, the meaning of the principle of solidarity in Article 80 TFEU is examined in normative and descriptive terms. Normatively, it constitutes an abstract legal obligation of mutual support, whose concrete design lies within the political discretion of the member states. Descriptively, "solidarity" can be understood as expressing the idea that the realisation of individual interests requires a collective effort, which in turn promotes the common good and is thus in everyone's interest. It would follow that all member states should have an interest in mastering the challenges of migration to Europe.
The interests of the member states, however, point in a different direction. Mediterranean states such as Greece and Italy, which bear a heavy burden from the arrivals of those seeking protection from the south, demand a departure from the Dublin system. The migration-sceptical Visegrád states essentially refuse any support in receiving refugees and insist that they are fulfilling their legal obligations. States that long pursued a liberal migration policy and were popular destination countries, such as Sweden, have been struggling since the migration crisis of 2015/2016 to find a migration policy course that does not further strengthen right-wing populist forces. The main destination countries Germany and France, too, seek, in line with their respective domestic political discourses, to prevent secondary migration and want to support the external border states in different ways, with Germany supporting the redistribution of all those seeking protection.
The Commission proposals presented in September 2020 attempt to accommodate these differing interests. The creation of a border procedure is meant to reduce the number of refugees entering the EU and needing to be distributed. Amending the Dublin criteria is meant to extend the responsibility of the potential destination countries in order to relieve the southern states and counteract secondary migration. With the same aim, a new solidarity mechanism is to provide for the relocation of unaccompanied minors and of persons rescued from distress at sea. In times of crisis, this is to grow into a general relocation of all those seeking protection, while solidarity is still to be allowed to take various forms.
In view of the negotiations during the German EU Council Presidency and the interim results achieved, there is scepticism that the member states will agree on a CEAS reform any time soon. Their interests, including with regard to solidarity, lie too far apart. Moreover, a question arises that is worrying with regard to European integration and the future of the EU: what is the common good in asylum policy, lying in the interest of all, that would turn the joint effort into an individual interest of each member state? For, unlike the creation of the Schengen area as an area without internal borders, gains in prosperity cannot be expected from the reception of refugees for the time being.
Language developers who design domain-specific languages or new language features need a way to make fast changes to language definitions. Those fast changes require immediate feedback. Also, it should be possible to parse the developed languages quickly to handle extensive sets of code.
Parsing expression grammars provide an easy-to-understand method for language definitions. Packrat parsing is a method for parsing grammars of this kind, but it cannot handle left recursion properly. Existing solutions either rewrite left-recursive rules where possible and forbid the remaining cases, or use complex extensions of packrat parsing that are hard to understand and cost-intensive. We investigated methods to make parsing as fast as possible, using easy-to-follow algorithms, while not losing the ability to make fast changes to grammars.
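The core idea of packrat parsing, memoizing a recursive descent over input positions, can be illustrated with a toy rule. This is a generic sketch, not code from the framework described in this report:

```python
from functools import lru_cache

def make_packrat_parser(text):
    """Minimal packrat-style parser for the toy PEG rule

        As <- 'a' As / 'a'

    Memoizing each (rule, position) result is what distinguishes
    packrat parsing from naive recursive descent: every position is
    parsed at most once, giving linear time. Note that rewriting
    this rule as the left-recursive  As <- As 'a' / 'a'  would make
    the naive descent loop forever, which is the problem this report
    addresses.
    """
    @lru_cache(maxsize=None)
    def parse_as(pos):
        if pos < len(text) and text[pos] == 'a':
            end = parse_as(pos + 1)   # first alternative: 'a' As
            if end is not None:
                return end
            return pos + 1            # second alternative: just 'a'
        return None                   # both alternatives fail
    return parse_as
```

Calling the returned function with a start position yields the position after the match, or None on failure.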
We focused our efforts on two approaches.
One is to start from an existing technique for limited left-recursion rewriting and extend it to work for general left-recursive grammars. The second is to design a grammar compilation process that finds left recursion before parsing, thereby reducing computational costs wherever possible and generating ready-to-use parser classes.
Rewriting parsing expression grammars is a task that, if done in full generality, unveils so many cases that any rewriting algorithm surpasses the complexity of other left-recursive parsing algorithms. Lookahead operators introduce this complexity. However, most languages contain only small left-recursive portions and, in virtually all cases, no indirect or hidden left recursion. This means that distinguishing the left-recursive parts of a grammar from the non-left-recursive parts holds great improvement potential for existing parsers.
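Distinguishing left-recursive rules before parsing can be approximated by a reachability analysis over the leftmost symbols of each rule. The grammar representation below is a simplification invented for illustration, and the analysis deliberately ignores PEG subtleties such as lookahead operators and nullable prefixes:

```python
def left_recursive_rules(grammar):
    """Find directly and indirectly left-recursive nonterminals.

    grammar: dict mapping a rule name to a list of alternatives,
    each alternative a list of symbols (rule names or terminals).
    A rule is left-recursive if it can reach itself through the
    leftmost symbols of alternatives. This is a first-pass analysis
    only; hidden left recursion via nullable prefixes is not covered.
    """
    # leftmost-nonterminal edges: A -> B if an alternative of A starts with B
    edges = {
        name: {alt[0] for alt in alts if alt and alt[0] in grammar}
        for name, alts in grammar.items()
    }
    recursive = set()
    for start in grammar:
        seen, stack = set(), [start]
        while stack:
            node = stack.pop()
            for nxt in edges[node]:
                if nxt == start:
                    recursive.add(start)   # cycle back to the start rule
                elif nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
    return recursive
```

Rules outside the returned set can be handed to an unmodified packrat parser; only the flagged rules need rewriting or a special parsing strategy.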
In this report, we list all the steps required for grammar rewriting to handle left recursion, including grammar analysis, the grammar rewriting itself, and syntax tree restructuring. We also describe the implementation of a parsing expression grammar framework in Squeak/Smalltalk and its possible interactions with the existing parser Ohm/S. We benchmarked this framework quantitatively, focusing on parsing time and on its usability in a live programming context. Compared with Ohm, we achieved massive parsing time improvements while preserving the ability to use our parser as a live programming tool.
This work is relevant because, for one, we outline the difficulties and complexity that come with grammar rewriting; for another, we remove the existing limitations associated with left recursion by eliminating it before parsing.
Background
Maximal isokinetic strength ratios of joint flexors and extensors are important parameters to indicate the level of muscular balance at the joint. Further, in combat sports athletes, upper and lower limb muscle strength is affected by the type of sport. Thus, this study aimed to examine the differences in maximal isokinetic strength of the flexors and extensors and the corresponding flexor–extensor strength ratios of the elbows and knees in combat sports athletes.
Method
Forty male participants (age = 22.3 ± 2.5 years) from four different combat sports (amateur boxing, taekwondo, karate, and judo; n = 10 per sport) were tested for eccentric peak torque of the elbow/knee flexors (EF/KF) and concentric peak torque of the elbow/knee extensors (EE/KE) at three different angular velocities (60, 120, and 180°/s) on the dominant and non-dominant side using an isokinetic device.
Results
Analyses revealed significant, large-sized group × velocity × limb interactions for EF, EE, and EF–EE ratio, KF, KE, and KF–KE ratio (p ≤ 0.03; 0.91 ≤ d ≤ 1.75). Post-hoc analyses indicated that amateur boxers displayed the largest EE strength values on the non-dominant side at ≤ 120°/s and the dominant side at ≥ 120°/s (p < 0.03; 1.21 ≤ d ≤ 1.59). The largest EF–EE strength ratios were observed on amateur boxers’ and judokas’ non-dominant side at ≥ 120°/s (p < 0.04; 1.36 ≤ d ≤ 2.44). Further, we found lower KF–KE strength measures in karate (p < 0.04; 1.12 ≤ d ≤ 6.22) and judo athletes (p ≤ 0.03; 1.60 ≤ d ≤ 5.31) particularly on the non-dominant side.
Conclusions
The present findings indicated combat sport-specific differences in maximal isokinetic strength measures of EF, EE, KF, and KE particularly in favor of amateur boxers on the non-dominant side.
Objective
To improve consumer decision making, the results of risk assessments on food, feed, consumer products or chemicals need to be communicated not only to experts but also to non-expert audiences. The present study draws on evidence from literature reviews and focus groups with diverse stakeholders to identify content to integrate into an existing risk assessment communication (Risk Profile).
Methods
A combination of rapid literature reviews and focus groups with experts (risk assessors (n = 15), risk managers (n = 8)), and non-experts (general public (n = 18)) were used to identify content and strategies for including information about risk assessment results in the “Risk Profile” from the German Federal Institute for Risk Assessment. Feedback from initial focus groups was used to develop communication prototypes that informed subsequent feedback rounds in an iterative process. A final prototype was validated in usability tests with experts.
Results
Focus group feedback and suggestions from risk assessors were largely in line with findings from the literature. Risk managers and lay persons offered similar suggestions on how to improve the existing communication of risk assessment results (e.g., including more explanatory detail, reporting probabilities for individual health impairments, and specifying risks for subgroups in additional sections). Risk managers found information about quality of evidence important to communicate, whereas people from the general public found this information less relevant. Participants from lower educational backgrounds had difficulties understanding the purpose of risk assessments. User tests found that the final prototype was appropriate and feasible to implement by risk assessors.
Conclusion
An iterative and evidence-based process was used to develop content to improve the communication of risk assessments to the general public while being feasible to use by risk assessors. Remaining challenges include how to communicate dose-response relationships and standardise quality of evidence ratings across disciplines.
Dynamic resource management is an essential requirement for private and public cloud computing environments. With dynamic resource management, the assignment of physical resources to the cloud's virtual resources depends on the actual needs of the applications or running services, which improves the utilization of the cloud's physical resources and reduces the cost of the offered services. In addition, virtual resources can be moved across different physical resources in the cloud environment without a noticeable impact on the running applications or services. This means that the availability of the services and applications running in the cloud is independent of hardware failures, including server, switch, and storage failures. This increases the reliability of cloud services compared to classical data-center environments.
In this thesis we briefly discuss dynamic resource management and then focus in depth on live migration as the core mechanism of dynamic compute resource management. Live migration is a commonly used and essential feature in cloud and virtual data-center environments. Load balancing, power saving and fault tolerance in cloud computing all depend on live migration to optimize the usage of virtual and physical resources. As we discuss in this thesis, live migration brings many benefits to cloud and virtual data-center environments, but its cost cannot be ignored: it comprises migration time, downtime, network overhead, increased power consumption and CPU overhead.
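The cost factors listed above can be pictured as a simple additive model. The following is a minimal illustrative sketch, not the thesis's actual cost model; the field names and weights are hypothetical placeholders.

```python
from dataclasses import dataclass

@dataclass
class MigrationCost:
    """Hypothetical breakdown of the live migration cost factors named above."""
    migration_time_s: float      # total time the migration takes
    downtime_ms: float           # time the VM is unresponsive at switch-over
    network_overhead_mb: float   # extra traffic for copying (dirtied) memory pages
    extra_power_w: float         # increase in power draw on the involved hosts
    cpu_overhead_pct: float      # extra CPU load for page tracking and copying

def weighted_cost(c: MigrationCost, w=(1.0, 0.5, 0.01, 0.02, 0.1)) -> float:
    """Collapse the factors into one comparable score (illustrative weights)."""
    return (w[0] * c.migration_time_s
            + w[1] * c.downtime_ms / 1000
            + w[2] * c.network_overhead_mb
            + w[3] * c.extra_power_w
            + w[4] * c.cpu_overhead_pct)
```

A scalar score like this makes two candidate migrations directly comparable; a real model would calibrate the weights against measurements.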
IT administrators often initiate live migrations of virtual machines without any estimate of the migration cost, which can lead to resource bottlenecks, higher migration costs and migration failures. The first problem we address in this thesis is how to model the cost of virtual machine live migration. Second, we investigate how machine learning techniques can help cloud administrators estimate this cost before initiating the migration of one or multiple virtual machines. We also discuss how to determine the optimal time to live-migrate a specific virtual machine to another server. Finally, we propose practical solutions that can be integrated into cloud administration portals to answer the research questions raised above.
Our research methodology is to derive empirical models from VMware test-beds running different benchmark tools. We then use machine learning techniques to build a prediction approach for the cost of virtual machine live migration. Timing optimization for live migration is also proposed, based on the cost prediction and on predicted data-center network utilization. Live migration in clusters with persistent memory is discussed at the end of the thesis. The cost prediction and timing optimization techniques proposed here could be integrated into the VMware vSphere cluster portal, so that IT administrators can use the cost prediction feature and the timing optimization option before proceeding with a live migration.
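To illustrate the prediction idea, the sketch below fits the simplest possible regressor, ordinary least squares, on synthetic data standing in for test-bed measurements. The feature set (VM memory size, page dirty rate, available bandwidth) and all coefficients are assumptions for illustration, not the thesis's actual model or features.

```python
import numpy as np

# Hypothetical per-migration features: VM memory size (GB), page dirty
# rate (MB/s), available bandwidth (Gbit/s); target: migration time (s).
rng = np.random.default_rng(0)
X = rng.uniform([1, 0, 1], [64, 500, 25], size=(200, 3))
# Synthetic target: time grows with memory and dirty rate, shrinks with
# bandwidth, plus measurement noise (stand-in for real test-bed data).
y = 0.8 * X[:, 0] + 0.05 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(0, 1, 200)

A = np.c_[X, np.ones(len(X))]                 # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # ordinary least squares fit

def predict_migration_time(mem_gb, dirty_mb_s, bw_gbit):
    """Predict migration time (s) for a planned migration."""
    return float(np.dot([mem_gb, dirty_mb_s, bw_gbit, 1.0], coef))
```

In practice one would replace the synthetic data with measured migrations and likely a richer model, but the interface, features in and predicted cost out, stays the same.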
Our evaluation shows that the proposed approach for predicting VM live migration cost achieves acceptable results, with less than 20% prediction error, and can be easily integrated with VMware vSphere as an example of a commonly used resource management portal for virtual data-centers and private cloud environments. The results further show that our proposed timing optimization technique can save up to 51% of migration time for memory-intensive workloads and up to 27% for network-intensive workloads. This timing optimization can help network administrators save migration time while achieving higher network rates and a higher probability of success.
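The timing optimization can be pictured as choosing a migration window from a network-utilization forecast. The following minimal sketch assumes a hypothetical hourly forecast and a fixed utilization threshold; the thesis's actual technique additionally combines this with the migration cost prediction.

```python
def best_migration_slot(util_forecast, threshold=0.6):
    """Pick the forecast slot with the lowest predicted network utilization
    below the threshold; return None if no suitable window exists."""
    candidates = {h: u for h, u in util_forecast.items() if u < threshold}
    if not candidates:
        return None  # defer: no suitable window in the forecast horizon
    return min(candidates, key=candidates.get)

# Hypothetical utilization forecast (fraction of link capacity) per hour.
forecast = {0: 0.82, 1: 0.75, 2: 0.31, 3: 0.28, 4: 0.55, 5: 0.90}
slot = best_migration_slot(forecast)  # → 3, the lowest-utilization window
```

Migrating in a low-utilization window leaves more bandwidth for the pre-copy phase, which is exactly why the reported time savings are larger for memory-intensive workloads with high page dirty rates.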
At the end of this thesis, we discuss persistent memory as a new trend in server memory technology. Its modes of operation and configurations are covered in detail to explain how live migration works between servers with different memory configurations. We then build a VMware cluster containing both persistent-memory servers and DRAM-only servers to show the difference in live migration cost between VMs backed by DRAM only and VMs backed by persistent memory.
Zwischenbericht
(2022)
Around 1.8 million people registered as seeking protection currently live in Germany, and their integration is a task for society as a whole. Many of these people are highly qualified and worked as teachers in their countries of origin. The qualification programme Lehrkräfte Plus enables migrated teachers to re-enter their profession in Germany. Since little scientific evidence on the effectiveness of such qualification programmes exists so far, the Lehrkräfte Plus programme is being examined in a research project at the University of Potsdam. This interim report presents first results of the accompanying scientific research, based on the initial rounds of data collection.