Additive manufacturing (AM) processes enable the production of metal structures with exceptional design freedom; laser powder bed fusion (PBF-LB) is one of the most common of these processes. In this process, a laser melts a bed of loose feedstock powder particles layer by layer to build a structure with the desired geometry. During fabrication, the repeated melting and rapid, directional solidification create large temperature gradients that generate substantial thermal stress. This thermal stress can itself lead to cracking or delamination during fabrication. More often, large residual stresses remain in the final part as a footprint of the thermal stress. Such residual stress can cause distortion or even premature failure of the part in service. Hence, knowledge of the residual stress field is critical for both process optimization and structural integrity.
Diffraction-based techniques allow the non-destructive characterization of residual stress fields. However, such methods require good knowledge of the material of interest, as certain assumptions must be made to determine residual stress accurately. First, the measured lattice plane spacings must be converted to lattice strains using a known strain-free reference state of the material. Second, the measured lattice strains must be related to the macroscopic stress using Hooke's law, which requires knowledge of the stiffness of the material. Since most crystal structures exhibit anisotropic material behavior, the elastic behavior is specific to each lattice plane of the single crystal. Thus, the use of individual lattice planes in monochromatic diffraction residual stress analysis requires knowledge of the lattice-plane-specific elastic properties. In addition, knowledge of the microstructure of the material is required for a reliable assessment of residual stress.
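The two conversion steps described above (plane spacing to strain, strain to stress) can be sketched in a few lines. This is a minimal illustrative example, not code from this work; all numerical values are hypothetical placeholders:

```python
# Minimal sketch: converting a measured lattice plane spacing to strain and
# then to a single stress component via Hooke's law. All numerical values
# below are hypothetical placeholders, not data from this work.

def lattice_strain(d_measured, d0):
    """Lattice strain relative to the strain-free spacing d0."""
    return (d_measured - d0) / d0

def uniaxial_stress(strain, E_hkl):
    """Hooke's law for one stress component, using the lattice-plane-specific
    (diffraction) elastic modulus E_hkl."""
    return E_hkl * strain

d0 = 1.0795      # hypothetical strain-free 311 plane spacing (angstrom)
d = 1.0802       # hypothetical measured spacing (angstrom)
E_311 = 185e9    # hypothetical diffraction elastic modulus (Pa)

eps = lattice_strain(d, d0)
sigma = uniaxial_stress(eps, E_311)
print(f"strain = {eps:.2e}, stress = {sigma / 1e6:.0f} MPa")
```

In practice, multi-axial stress states are evaluated from strains measured in several directions, and the lattice-plane-specific elastic constants must be chosen to match the material's microstructure and texture.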
This work presents a toolbox for reliable diffraction-based residual stress analysis. This is presented for a nickel-based superalloy produced by PBF-LB. First, this work reviews the existing literature in the field of residual stress analysis of laser-based AM using diffraction-based techniques. Second, the elastic and plastic anisotropy of the nickel-based superalloy Inconel 718 produced by PBF-LB is studied using in situ energy dispersive synchrotron X-ray and neutron diffraction techniques. These experiments are complemented by ex situ material characterization techniques. These methods establish the relationship between the microstructure and texture of the material and its elastic and plastic anisotropy. Finally, surface, sub-surface, and bulk residual stress are determined using a texture-based approach. Uncertainties of different methods for obtaining stress-free reference values are discussed.
The tensile behavior in the as-built condition is shown to be controlled by texture and cellular sub-grain structure, while in the heat-treated condition the precipitation of strengthening phases and grain morphology dictate the behavior. In fact, the results of this thesis show that the diffraction elastic constants depend on the underlying microstructure, including texture and grain morphology. For columnar microstructures in both as-built and heat-treated conditions, the diffraction elastic constants are best described by the Reuss iso-stress model. Furthermore, the low accumulation of intergranular strains during deformation demonstrates the robustness of using the 311 reflection for the diffraction-based residual stress analysis with columnar textured microstructures. The differences between texture-based and quasi-isotropic approaches for the residual stress analysis are shown to be insignificant in the observed case. However, the analysis of the sub-surface residual stress distributions shows that different scanning strategies result in a change in the orientation of the residual stress tensor. Furthermore, the location of the critical sub-surface tensile residual stress is related to the surface roughness and the microstructure. Finally, recommendations are given for the diffraction-based determination and evaluation of residual stress in textured additively manufactured alloys.
Limiting systemic risk is an essential component of the new international financial market order. The aim was to break not only the interconnectedness among banks but also the link between public finances and the solvency of national banking systems (the so-called sovereign-bank nexus). This article traces the evolution of claims on sovereigns in the bank balance sheets of the euro countries and the Eurosystem over time, as well as the resulting risks to financial stability. To this end, the determinants of the sovereign-bank nexus are analyzed both theoretically and empirically. The fiscal capacity of the euro states is assessed using various factors such as the debt ratio, the current account balance, and the credit-to-GDP gap; subsequently, the structures of the banking systems in the euro area are examined. In detail, total private and public debt, the consolidated balance-sheet total of the banking sector and the liabilities it contains, and the banking sector's share of gross value added are considered in relation to economic output. In addition, stocks of non-performing exposures (NPEs) on bank balance sheets, the yields of issued government bonds, and the associated CDS spreads are examined. Furthermore, concentration, leverage, liquidity ratios, and country-specific differences in the type and maturity of the banking sectors' refinancing are presented. Based on the empirical findings, implications for financial market regulation are discussed with regard to the mutual contagion effects between banks and sovereigns.
The European Water Framework Directive (WFD) has identified river morphological alteration and diffuse pollution as the two main pressures affecting water bodies in Europe at the catchment scale. Consequently, river restoration has become a priority to achieve the WFD's objective of good ecological status. However, little is known about the effects of stream morphological changes, such as re-meandering, on in-stream nitrate retention at the river network scale. Therefore, catchment nitrate modeling is necessary to guide the implementation of spatially targeted and cost-effective mitigation measures. Meanwhile, Germany, like many other regions in central Europe, experienced consecutive summer droughts from 2015 to 2018, resulting in significant changes in river nitrate concentrations in various catchments. However, a mechanistic exploration of catchment nitrate responses to changing weather conditions is still lacking.
Firstly, a fully distributed, process-based catchment nitrate model (mHM-Nitrate) was used, which was properly calibrated and comprehensively evaluated at numerous spatially distributed nitrate sampling locations. Three calibration schemes were designed, taking into account land use, stream order, and mean nitrate concentrations; they varied in spatial coverage but used data from the same period (2011–2019). The model performance for discharge was similar among the three schemes, with Nash-Sutcliffe Efficiency (NSE) scores ranging from 0.88 to 0.92. However, for nitrate concentrations, scheme 2 outperformed schemes 1 and 3 when compared to observed data from eight gauging stations. This was likely because scheme 2 incorporated a diverse range of data, including low discharge values and nitrate concentrations, and thus provided a better representation of within-catchment heterogeneity. Therefore, the study suggests that strategically selecting gauging stations that reflect the full range of within-catchment heterogeneity is more important for calibration than simply increasing the number of stations.
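As a point of reference for the discharge scores quoted above, the Nash-Sutcliffe Efficiency compares the model's squared errors to the variance of the observations; a perfect fit gives 1. A minimal sketch with synthetic numbers (not data from the study):

```python
import numpy as np

# Minimal sketch of the Nash-Sutcliffe Efficiency (NSE).
# NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)
def nse(observed, simulated):
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2
    )

obs = [3.1, 2.8, 4.0, 5.2, 3.3]   # hypothetical daily discharge (m3/s)
sim = [3.0, 2.9, 4.2, 5.0, 3.4]   # hypothetical simulated discharge
print(round(nse(obs, sim), 3))    # close to 1 for a good fit
```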
Secondly, the mHM-Nitrate model was used to reveal the causal relations between sequential droughts and nitrate concentration in the Bode catchment (3200 km2) in central Germany, where stream nitrate concentrations exhibited contrasting trends from upstream to downstream reaches. The model was evaluated using data from six gauging stations, reflecting different levels of runoff components and their associated nitrate mixing from upstream to downstream. Results indicated that the mHM-Nitrate model reproduced the dynamics of daily discharge and nitrate concentration well, with Nash-Sutcliffe Efficiency ≥ 0.73 for discharge and Kling-Gupta Efficiency ≥ 0.50 for nitrate concentration at most stations. In particular, the spatially contrasting trends of nitrate concentration were successfully captured by the model. The decrease of nitrate concentration in the lowland area in drought years (2015-2018) was presumably due to (1) limited terrestrial export loading (ca. 40% lower than that of the normal years 2004-2014), and (2) increased in-stream retention efficiency (20% higher in summer within the whole river network). From a mechanistic modelling perspective, this study provided insights into spatially heterogeneous flow and nitrate dynamics and the effects of sequential droughts, which shed light on water-quality responses to future climate change, as droughts are projected to become more frequent.
Thirdly, this study investigated the effects of stream restoration via re-meandering on in-stream nitrate retention at the network scale in the well-monitored Bode catchment. The mHM-Nitrate model showed good performance in reproducing daily discharge and nitrate concentrations, with median Kling-Gupta Efficiency values of 0.78 and 0.74, respectively. The mean and standard deviation of gross nitrate retention efficiency, which accounted for both denitrification and assimilatory uptake, were 5.1 ± 0.61% and 74.7 ± 23.2% in winter and summer, respectively, within the stream network. The study found that in the summer, denitrification rates were about two times higher in lowland sub-catchments dominated by agricultural lands than in mountainous sub-catchments dominated by forested areas, with median ± SD of 204 ± 22.6 and 102 ± 22.1 mg N m-2 d-1, respectively. Similarly, assimilatory uptake rates were approximately five times higher in streams surrounded by lowland agricultural areas than in those in higher-elevation, forested areas, with median ± SD of 200 ± 27.1 and 39.1 ± 8.7 mg N m-2 d-1, respectively. Therefore, restoration strategies targeting lowland agricultural areas may have greater potential for increasing nitrate retention. The study also found that restoring stream sinuosity could increase net nitrate retention efficiency by up to 25.4 ± 5.3%, with greater effects seen in small streams. These results suggest that restoration efforts should consider augmenting stream sinuosity to increase nitrate retention and decrease nitrate concentrations at the catchment scale.
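The Kling-Gupta Efficiency used for the nitrate evaluation combines correlation, variability ratio, and bias ratio into a single score (1 is a perfect fit). A minimal sketch, not code from the study:

```python
import numpy as np

# Minimal sketch of the Kling-Gupta Efficiency (KGE, Gupta et al. 2009 form):
# KGE = 1 - sqrt((r - 1)^2 + (alpha - 1)^2 + (beta - 1)^2)
# r: linear correlation, alpha: ratio of standard deviations, beta: bias ratio.
def kge(observed, simulated):
    o = np.asarray(observed, dtype=float)
    s = np.asarray(simulated, dtype=float)
    r = np.corrcoef(o, s)[0, 1]
    alpha = s.std() / o.std()
    beta = s.mean() / o.mean()
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

# hypothetical nitrate concentrations (mg/L)
print(round(kge([2.1, 1.8, 2.6, 3.0], [2.0, 1.9, 2.7, 2.9]), 3))
```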
In an exploratory development, the present study created and subsequently evaluated a science communication concept for a research training group investigating photochemical processes. The motivation is the ever-growing demand for science communication on the part of policymakers. It is further demanded that communicating one's own research become an integral part of scientific work in the future. To prepare young scientists for this task at an early stage, science communication is also implemented within research consortia.
For this reason, a preliminary study investigated which requirements a science communication concept within a research consortium must meet, by evaluating the doctoral researchers' attitudes toward science communication and their communication skills using a closed questionnaire. In addition, science communication types were derived from the data. Based on the results, different science communication measures were developed that differ in their conception, their target audiences, the form of communication, and the content.
As part of this development, a learning unit related to the content of the research training group was designed, consisting of a teaching-learning experiment and accompanying materials. The learning unit was then integrated into one of the science communication measures. Depending on the demands placed on the doctoral researchers, the measures were supplemented by preparatory workshops.
A semi-open pre-post questionnaire was used to evaluate the influence of the science communication measures and the associated workshops on the doctoral researchers' self-efficacy, in order to draw conclusions about how the interventions change the perception of one's own communication skills. The results suggest that the individual science communication measures affect the different communication types in different ways. It can be assumed that, depending on one's own assessment of one's communication skills, there are different support needs, which can be addressed by dedicated science communication measures.
On this basis, first approaches are proposed for a generally applicable strategy that fosters individual science communication skills in a scientific research consortium.
Climate change fundamentally transforms glaciated high-alpine regions, with well-known cryospheric and hydrological implications, such as accelerating glacier retreat, transiently increased runoff, longer snow-free periods and more frequent and intense summer rainstorms. These changes affect the availability and transport of sediments in high alpine areas by altering the interaction and intensity of different erosion processes and catchment properties.
Gaining insight into future alterations in suspended sediment transport by high alpine streams is crucial, given its wide-ranging implications, e.g. for flood damage potential, flood hazard in downstream river reaches, hydropower production, riverine ecology and water quality. However, the current understanding of how climate change will impact suspended sediment dynamics in these high alpine regions is limited. This is due in part to the scarcity of measurement time series long enough to, for example, infer trends, and in part to the difficulty – if not impossibility – of developing process-based models, given the complexity and multitude of processes involved in high alpine sediment dynamics. Therefore, knowledge has so far been confined to conceptual models (which do not facilitate deriving concrete timings or magnitudes for individual catchments) or qualitative estimates (‘higher export in warmer years’) that may not be able to capture decreases in sediment export. Recently, machine-learning approaches have gained in popularity for modeling sediment dynamics, since their black-box nature tailors them to the problem at hand, i.e. relatively well-understood input and output data, linked by very complex processes.
Therefore, the overarching aim of this thesis is to estimate sediment export from the high alpine Ötztal valley in Tyrol, Austria, over decadal timescales in the past and future – i.e. timescales relevant to anthropogenic climate change. This is achieved by informing, extending, evaluating and applying a quantile regression forest (QRF) approach, i.e. a nonparametric, multivariate machine-learning technique based on random forest.
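A quantile regression forest predicts a conditional distribution rather than a single value. The following is a rough illustrative stand-in, not the model used in this thesis: a full QRF (Meinshausen, 2006) weights the training observations in each leaf, whereas this sketch simply takes quantiles over the individual trees' predictions, using purely synthetic data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Illustrative stand-in for a quantile regression forest: instead of the
# forest's mean prediction, collect each tree's prediction and report
# quantiles of that ensemble. All data below are synthetic.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(500, 2))          # e.g. discharge, temperature
y = X[:, 0] * 2 + rng.normal(0, 1, size=500)   # synthetic sediment proxy

forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

X_new = np.array([[5.0, 3.0]])
tree_preds = np.array([t.predict(X_new)[0] for t in forest.estimators_])
q10, q50, q90 = np.quantile(tree_preds, [0.1, 0.5, 0.9])
print(f"median ≈ {q50:.1f}, 80% interval ≈ [{q10:.1f}, {q90:.1f}]")
```

Predicting an interval rather than a point estimate is what allows the uncertainty statements made for the past and future export estimates below.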
The first study included in this thesis aimed to understand present sediment dynamics, i.e. in the period with available measurements (up to 15 years). To inform the modeling setup for the two subsequent studies, this study identified the most important predictors, areas within the catchments and time periods. To that end, water and sediment yields from three nested gauges in the upper Ötztal, Vent, Sölden and Tumpen (98 to almost 800 km² catchment area, 930 to 3772 m a.s.l.) were analyzed for their distribution in space, their seasonality and spatial differences therein, and the relative importance of short-term events. The findings suggest that the areas situated above 2500 m a.s.l., containing glacier tongues and recently deglaciated areas, play a pivotal role in sediment generation across all sub-catchments. In contrast, precipitation events were relatively unimportant (on average, 21 % of annual sediment yield was associated with precipitation events). Thus, the second and third study focused on the Vent catchment and its sub-catchment above gauge Vernagt (11.4 and 98 km², 1891 to 3772 m a.s.l.), due to their higher share of areas above 2500 m. Additionally, they included discharge, precipitation and air temperature (as well as their antecedent conditions) as predictors.
The second study aimed to estimate sediment export since the 1960s/70s at gauges Vent and Vernagt. This was facilitated by the availability of long records of the predictors, discharge, precipitation and air temperature, and shorter records (four and 15 years) of turbidity-derived sediment concentrations at the two gauges. The third study aimed to estimate future sediment export until 2100, by applying the QRF models developed in the second study to pre-existing precipitation and temperature projections (EURO-CORDEX) and discharge projections (physically-based hydroclimatological and snow model AMUNDSEN) for the three representative concentration pathways RCP2.6, RCP4.5 and RCP8.5.
The combined results of the second and third study show overall increasing sediment export in the past and decreasing export in the future. This suggests that peak sediment is underway or has already passed – unless precipitation changes unfold differently than represented in the projections or changes in the catchment erodibility prevail and override these trends. Despite the overall future decrease, very high sediment export is possible in response to precipitation events. This two-fold development has important implications for managing sediment, flood hazard and riverine ecology.
This thesis shows that QRF can be a very useful tool to model sediment export in high-alpine areas. Several validations in the second study showed good performance of QRF and its superiority to traditional sediment rating curves – especially in periods that contained high sediment export events, which points to its ability to deal with threshold effects. A technical limitation of QRF is the inability to extrapolate beyond the range of values represented in the training data. We assessed the number and severity of such out-of-observation-range (OOOR) days in both studies, which showed that there were few OOOR days in the second study and that uncertainties associated with OOOR days were small before 2070 in the third study. As the pre-processed data and model code have been made publicly available, future studies can easily test further approaches or apply QRF to further catchments.
Large parts of the Earth’s interior are inaccessible to direct observation, yet global geodynamic processes are governed by the physical material properties under extreme pressure and temperature conditions. It is therefore essential to investigate the deep Earth’s physical properties through in-situ laboratory experiments. With this goal in mind, the optical properties of mantle minerals at high pressure offer a unique way to determine a variety of physical properties in a straightforward, reproducible, and time-effective manner, thus providing valuable insights into the physical processes of the deep Earth. This thesis focusses on the system Mg-Fe-O, specifically on the optical properties of periclase (MgO) and its iron-bearing variant ferropericlase ((Mg,Fe)O), which forms a major planetary building block. The primary objective is to establish links between physical material properties and optical properties. In particular, the spin transition in ferropericlase, the second-most abundant phase of the lower mantle, is known to change the physical material properties. Although the spin transition region likely extends down to the core-mantle boundary, the effects of the mixed-spin state, where both high- and low-spin states are present, remain poorly constrained.
In the studies presented herein, we show how optical properties are linked to physical properties such as electrical conductivity, radiative thermal conductivity and viscosity. We also show how the optical properties reveal changes in the chemical bonding. Furthermore, we unveil how the chemical bonding, the optical and other physical properties are affected by the iron spin transition. We find opposing trends in the pressure dependence of the refractive index of MgO and (Mg,Fe)O. From 1 atm to ~140 GPa, the refractive index of MgO decreases by ~2.4% from 1.737 to 1.696 (±0.017). In contrast, the refractive index of (Mg0.87Fe0.13)O (Fp13) and (Mg0.76Fe0.24)O (Fp24) ferropericlase increases with pressure, likely because Fe–Fe interactions between adjacent iron sites hinder a strong decrease of polarizability, as is observed with increasing density in the case of pure MgO. An analysis of the index dispersion in MgO (decreasing by ~23% from 1 atm to ~103 GPa) reflects a widening of the band gap from ~7.4 eV at 1 atm to ~8.5 (±0.6) eV at ~103 GPa. The index dispersion (between 550 and 870 nm) of Fp13 reveals a decrease by a factor of ~3 over the spin transition range (~44–100 GPa). We show that the electrical band gap of ferropericlase significantly widens up to ~4.7 eV in the mixed-spin region, equivalent to an increase by a factor of ~1.7. We propose that this is due to a lower electron mobility between adjacent Fe2+ sites of opposite spin, explaining the previously observed low electrical conductivity in the mixed-spin region. From the study of absorbance spectra in Fp13, we show an increasing covalency of the Fe-O bond with pressure for high-spin ferropericlase, whereas in the low-spin state a trend toward a more ionic nature of the Fe-O bond is observed, indicating a bond-weakening effect of the spin transition.
We found that the spin transition is ultimately caused by both an increase of the ligand field-splitting energy and a decrease of the spin-pairing energy of high-spin Fe2+.
The reliance on fossil fuels has resulted in an abnormal increase in the concentration of greenhouse gases, contributing to the global climate crisis. In response, a rapid transition to renewable energy sources has begun, with lithium-ion batteries playing a crucial role in the green energy transformation. However, concerns regarding the availability and geopolitical implications of lithium have prompted the exploration of alternative rechargeable battery systems, such as sodium-ion batteries. Sodium is significantly more abundant and more homogeneously distributed in the crust and seawater, making it easier and less expensive to extract than lithium. However, because the behavior of its components is not yet fully understood, sodium-ion batteries are not yet sufficiently advanced to take the place of lithium-ion batteries. Specifically, sodium exhibits a more metallic character and a larger ionic radius, resulting in an ion storage mechanism different from that utilized in lithium-ion batteries. Innovations in synthetic methods, post-treatments, and interface engineering clearly demonstrate the significance of developing high-performance carbonaceous anode materials for sodium-ion batteries. The objective of this dissertation is to present a systematic approach for fabricating efficient, high-performance, and sustainable carbonaceous anode materials for sodium-ion batteries, involving a comprehensive investigation of different chemical environments and post-modification techniques.
This dissertation focuses on three main objectives. Firstly, it explores the significance of post-synthetic methods in designing interfaces. A conformal carbon nitride coating is deposited through chemical vapor deposition on a carbon electrode as an artificial solid-electrolyte interface layer, resulting in improved electrochemical performance. The interaction between the carbon nitride artificial interface and the carbon electrode enhances initial Coulombic efficiency, rate performance, and total capacity. Secondly, a novel process for preparing sulfur-rich carbon as a high-performing anode material for sodium-ion batteries is presented. The method involves using an oligo-3,4-ethylenedioxythiophene precursor for high sulfur content hard carbon anode to investigate the sulfur heteroatom effect on the electrochemical sodium storage mechanism. By optimizing the condensation temperature, a significant transformation in the materials’ nanostructure is achieved, leading to improved electrochemical performance. The use of in-operando small-angle X-ray scattering provides valuable insights into the interaction between micropores and sodium ions during the electrochemical processes. Lastly, the development of high-capacity hard carbon, derived from 5-hydroxymethyl furfural, is examined. This carbon material exhibits exceptional performance at both low and high current densities. Extensive electrochemical and physicochemical characterizations shed light on the sodium storage mechanism concerning the chemical environment, establishing the material’s stability and potential applications in sodium-ion batteries.
The past is past; history is made. This construction process involves not only the historical actors and their sources, but to a particular degree also the historians who engage with them. It is they who first make the sources flow. What comes to light is thus highly dependent on the researchers themselves, on their presuppositions and methods, but also on their social, cultural, and biographical backgrounds. The process model presented here attempts to capture and render visible these factors of influence, in order to enable an expanded scholarly (self-)reflection on this basis.
The evaluation of process-oriented cognitive theories through time-ordered observations is crucial for the advancement of cognitive science. The findings presented herein integrate insights from research on eye-movement control and sentence comprehension during reading, addressing challenges in modeling time-ordered data, statistical inference, and interindividual variability. Using kernel density estimation and a pseudo-marginal likelihood for fixation durations and locations, a likelihood implementation of the SWIFT model of eye-movement control during reading (Engbert et al., Psychological Review, 112, 2005, pp. 777–813) is proposed. Within the broader framework of data assimilation, Bayesian parameter inference with adaptive Markov Chain Monte Carlo techniques is facilitated for reliable model fitting. Across the different studies, this framework has been shown to enable reliable parameter recovery from simulated data and prediction of experimental summary statistics. Despite its complexity, SWIFT can be fitted within a principled Bayesian workflow, capturing interindividual differences and modeling experimental effects on reading across different geometrical alterations of text. Based on these advancements, the integrated dynamical model SEAM is proposed, which combines eye-movement control, a traditionally psychological research area, and post-lexical language processing in the form of cue-based memory retrieval (Lewis & Vasishth, Cognitive Science, 29, 2005, pp. 375–419), typically the purview of psycholinguistics. This proof-of-concept integration marks a significant step forward in modeling natural language comprehension during reading and suggests that the presented methodology can be useful for developing complex cognitive dynamical models that integrate processes at the levels of perception, higher cognition, and (oculo-)motor control.
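The kernel-density-based likelihood idea can be illustrated in a few lines. This is a simplified stand-in, not the SWIFT implementation: a hypothetical simulator generates fixation-duration samples for a candidate parameter, and a kernel density estimate over those samples is evaluated at the observed data:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Simplified stand-in for a simulation-based (KDE) likelihood.
# simulate_durations is a hypothetical placeholder for a cognitive model
# that produces fixation durations given a parameter; it is NOT SWIFT.

def simulate_durations(rate, n, rng):
    return rng.gamma(shape=2.0, scale=1.0 / rate, size=n)

def log_likelihood(data, rate, n_sim=5000, seed=0):
    rng = np.random.default_rng(seed)
    sims = simulate_durations(rate, n_sim, rng)
    kde = gaussian_kde(sims)           # density estimate of simulated durations
    return np.sum(np.log(kde(data)))   # evaluate observed data under the KDE

data = np.random.default_rng(1).gamma(2.0, 1.0 / 4.0, size=200)
# The likelihood should prefer the generating parameter (4.0) over a wrong one:
print(log_likelihood(data, 4.0) > log_likelihood(data, 1.0))
```

Such a likelihood can then be plugged into an MCMC sampler for Bayesian parameter inference, as described above for the SWIFT model.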
These findings collectively advance process-oriented cognitive modeling and highlight the importance of Bayesian inference, individual differences, and interdisciplinary integration for a holistic understanding of reading processes. Implications for theory and methodology, including proposals for model comparison and hierarchical parameter inference, are briefly discussed.
Organic solar cells (OSCs) represent a new generation of solar cells with a range of captivating attributes including low-cost, light-weight, aesthetically pleasing appearance, and flexibility. Different from traditional silicon solar cells, the photon-electron conversion in OSCs is usually accomplished in an active layer formed by blending two kinds of organic molecules (donor and acceptor) with different energy levels together.
The first part of this thesis focuses on a better understanding of the role of the energetic offset and of each recombination channel in the performance of these low-offset OSCs. By combining advanced experimental techniques with optical and electrical simulation of the energetic offsets between charge-transfer (CT) states and excitons, several important insights were achieved: 1. The short-circuit current density and fill factor of low-offset systems are largely determined by field-dependent charge generation. There is strong evidence that this field-dependent charge generation originates from a field-dependent exciton dissociation yield. 2. The reduced energetic offset was found to be accompanied by a strongly enhanced bimolecular recombination coefficient, which cannot be explained solely by exciton repopulation from CT states. This implies the existence of another dark decay channel apart from the CT states.
The second focus of the thesis is technical. The influence of optical artifacts in differential absorption spectroscopy upon changes of sample configuration and active layer thickness was studied. It is exemplified and discussed thoroughly and systematically, in terms of optical simulations and experiments, how optical artifacts originating from non-uniform carrier profiles and interference can distort not only the measured spectra but also the decay dynamics under various measurement conditions. Finally, a generalized methodology based on an inverse optical transfer matrix formalism is provided to correct spectra and decay dynamics affected by optical artifacts.
Overall, this thesis paves the way for a deeper understanding of the keys toward higher PCEs in low-offset OSC devices, from the perspectives of both device physics and characterization techniques.
Open edX is an incredible platform for delivering MOOCs and SPOCs, designed to be robust and to support hundreds of thousands of students at the same time. Nevertheless, it lacks much of the fine-grained functionality needed to handle students individually in an on-campus course. This short session will present the ongoing project undertaken by the six public universities of the Region of Madrid plus the Universitat Politècnica de València, in the framework of a national initiative called UniDigital, funded by the Ministry of Universities of Spain within the Plan de Recuperación, Transformación y Resiliencia of the European Union. This project, led by three of these Spanish universities (UC3M, UPV, UAM), is investing more than half a million euros with the purpose of bringing the Open edX platform closer to the functionalities required for an LMS to support on-campus teaching. The aim of the project is to coordinate what is going to be developed with the Open edX development community, so that these developments are incorporated into the core of the Open edX platform in its next releases. The functionalities to be developed include a complete redesign of platform analytics to make them real-time, the creation of dashboards based on these analytics, the integration of a system for customized automatic feedback, improvement of exams and tasks and the extension of grading capabilities, improvements in the graphical interfaces for both students and teachers, the extension of the emailing capabilities, a redesign of the file management system, integration of H5P content, the integration of a tool to create mind maps, the creation of a system to detect students at risk, and the integration of an advanced voice assistant and a gamification mobile app, among others. The idea is to transform a first-class MOOC platform into the next on-campus LMS.
It is a well-attested finding in head-initial languages that individuals with aphasia (IWA) have greater difficulties in comprehending object-extracted relative clauses (ORCs) as compared to subject-extracted relative clauses (SRCs). Adopting the linguistically based approach of Relativized Minimality (RM; Rizzi, 1990, 2004), the subject-object asymmetry is attributed to the occurrence of a Minimality effect in ORCs due to reduced processing capacities in IWA (Garraffa & Grillo, 2008; Grillo, 2008, 2009). For ORCs, it is claimed that the embedded subject intervenes in the syntactic dependency between the moved object and its trace, resulting in greater processing demands. In contrast, no such intervener is present in SRCs. Based on the theoretical framework of RM and findings from language acquisition (Belletti et al., 2012; Friedmann et al., 2009), it is assumed that Minimality effects are alleviated when the moved object and the intervening subject differ in terms of relevant syntactic features. For German, the language under investigation, number (i.e., singular vs. plural) and the lexical restriction [+NP] feature (i.e., lexically restricted determiner phrases vs. lexically unrestricted pronouns) are considered relevant in the computation of Minimality. Greater degrees of featural distinctiveness are predicted to result in facilitated processing of ORCs, because IWA can more easily distinguish between the moved object and the intervener.
This cumulative dissertation aims to provide empirical evidence on the validity of the RM approach in accounting for comprehension patterns during relative clause (RC) processing in German-speaking IWA. For that purpose, I conducted two studies including visual-world eye-tracking experiments embedded within an auditory referent-identification task to study the offline and online processing of German RCs. More specifically, target sentences were created to evaluate (a) whether IWA demonstrate a subject-object asymmetry, (b) whether dissimilarity in the number and/or the [+NP] features facilitates ORC processing, and (c) whether sentence processing in IWA benefits from greater degrees of featural distinctiveness. Furthermore, by comparing RCs disambiguated through case marking (at the relative pronoun or the following noun phrase) and number marking (inflection of the sentence-final verb), it was possible to consider the role of the relative position of the disambiguation point. The RM approach predicts that dissimilarity in case should not affect the occurrence of Minimality effects. However, the case cue to sentence interpretation appears earlier within RCs than the number cue, which may result in lower processing costs in case-disambiguated RCs compared to number-disambiguated RCs.
In study I, target sentences varied with respect to word order (SRC vs. ORC) and dissimilarity in the [+NP] feature (lexically restricted determiner phrase vs. pronouns as embedded element). Moreover, by comparing the impact of these manipulations in case- and number-disambiguated RCs, the effect of dissimilarity in the number feature was explored. IWA demonstrated a subject-object asymmetry, indicating the occurrence of a Minimality effect in ORCs. However, dissimilarity neither in the number feature nor in the [+NP] feature alone facilitated ORC processing. Instead, only ORCs involving distinct specifications of both the number and the [+NP] features were well comprehended by IWA. In study II, only temporarily ambiguous ORCs disambiguated through case or number marking were investigated, while controlling for varying points of disambiguation. There was a slight processing advantage of case marking as cue to sentence interpretation as compared to number marking.
Taken together, these findings suggest that the RM approach can only partially capture empirical data from German IWA. In processing complex syntactic structures, IWA are susceptible to the occurrence of the intervening subject in ORCs. The new findings reported in the thesis show that structural dissimilarity can modulate sentence comprehension in aphasia. Interestingly, IWA can override Minimality effects in ORCs and derive correct sentence meaning if the featural specifications of the constituents are maximally different, because they can more easily distinguish the moved object and the intervening subject given their reduced processing capacities. This dissertation presents new scientific knowledge that highlights how the syntactic theory of RM helps to uncover selective effects of morpho-syntactic features on sentence comprehension in aphasia, emphasizing the close link between assumptions from theoretical syntax and empirical research.
To manage tabular data files and leverage their content in a given downstream task, practitioners often design and execute complex transformation pipelines to prepare them. The complexity of such pipelines stems from different factors, including the nature of the preparation tasks, often exploratory or ad-hoc to specific datasets; the large repertory of tools, algorithms, and frameworks that practitioners need to master; and the volume, variety, and velocity of the files to be prepared. Metadata plays a fundamental role in reducing this complexity: characterizing a file assists end users in the design of data preprocessing pipelines, and furthermore paves the way for suggestion, automation, and optimization of data preparation tasks.
Previous research in the areas of data profiling, data integration, and data cleaning, has focused on extracting and characterizing metadata regarding the content of tabular data files, i.e., about the records and attributes of tables. Content metadata are useful for the latter stages of a preprocessing pipeline, e.g., error correction, duplicate detection, or value normalization, but they require a properly formed tabular input. Therefore, these metadata are not relevant for the early stages of a preparation pipeline, i.e., to correctly parse tables out of files. In this dissertation, we turn our focus to what we call the structure of a tabular data file, i.e., the set of characters within a file that do not represent data values but are required to parse and understand the content of the file. We provide three different approaches to represent file structure: an explicit representation based on context-free grammars; an implicit representation based on file-wise similarity; and a learned representation based on machine learning.
In our first contribution, we use the grammar-based representation to characterize a set of over 3000 real-world CSV files and identify multiple structural issues that cause files to deviate from the CSV standard, e.g., by having inconsistent delimiters or containing multiple tables. We leverage our learnings about real-world files and propose Pollock, a benchmark to test how well systems parse CSV files that have a non-standard structure, without any previous preparation. We report on our experiments using Pollock to evaluate the performance of 16 real-world data management systems.
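The kind of structural deviation Pollock targets can be illustrated with a minimal, hypothetical example using Python's stdlib `csv` module (this is not part of Pollock itself): a file whose data rows disagree with the delimiter sniffed from the header produces silently malformed records that a field-count check can flag.

```python
import csv
import io

# A file with an inconsistent delimiter: the header and first row use ';',
# but the second data row uses ',' -- the kind of deviation from the CSV
# standard (RFC 4180) that the Pollock benchmark targets.
raw = "id;name;city\n1;Alice;Berlin\n2,Bob,Potsdam\n"

# Stdlib heuristic dialect detection on the header line; robust systems
# must still cope when later rows disagree with the sniffed dialect.
dialect = csv.Sniffer().sniff(raw.splitlines()[0])
rows = list(csv.reader(io.StringIO(raw), dialect=dialect))

# The inconsistent row collapses into a single field.
field_counts = {len(r) for r in rows}
print(dialect.delimiter)   # ';'
print(field_counts)        # {1, 3}
```

A parser that only reports the majority field count would hide the malformed row; benchmarks like Pollock make such silent failures measurable.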
Next, we characterize the structure of files implicitly, by defining a measure of structural similarity for file pairs. We design a novel algorithm to compute this measure, based on a graph representation of the files' content. We leverage this algorithm and propose Mondrian, a graphical system that assists users in identifying layout templates in a dataset, i.e., classes of files that have the same structure and can therefore be prepared by applying the same preparation pipeline.
Finally, we introduce MaGRiTTE, a novel architecture that uses self-supervised learning to automatically learn structural representations of files in the form of vectorial embeddings at three different levels: cell level, row level, and file level. We experiment with the application of structural embeddings for several tasks, namely dialect detection, row classification, and data preparation effort estimation.
Our experimental results show that structural metadata, whether identified explicitly via parsing grammars, derived implicitly as file-wise similarity, or learned with the help of machine learning architectures, is fundamental to automate several tasks, to scale up preparation to large quantities of files, and to provide repeatable preparation pipelines.
The landscape of software self-adaptation is shaped by the need to cost-effectively achieve and maintain (software) quality at runtime and in the face of dynamic operation conditions. Optimization-based solutions perform an exhaustive search in the adaptation space and may thus provide quality guarantees. However, these solutions render the attainment of optimal adaptation plans time-intensive, thereby hindering scalability. Conversely, deterministic rule-based solutions yield only sub-optimal adaptation decisions, as they are typically bound by design-time assumptions, yet they offer efficient processing and implementation, readability, and expressivity of individual rules, which supports early verification. Addressing the quality-cost trade-off requires solutions that simultaneously exhibit the scalability and cost-efficiency of rule-based policy formalisms and the optimality of optimization-based policy formalisms as explicit artifacts for adaptation. Utility functions, i.e., high-level specifications that capture system objectives, support the explicit treatment of the quality-cost trade-off. Nevertheless, non-linearities, complex dynamic architectures, black-box models, and runtime uncertainty that renders prior knowledge obsolete are a few of the sources of uncertainty and subjectivity that make the elicitation of utility non-trivial.
This thesis proposes a twofold solution for incremental self-adaptation of dynamic architectures. First, we introduce Venus, a solution that combines in its design a rule- and an optimization-based formalism, enabling optimal and scalable adaptation of dynamic architectures. Venus incorporates rule-like constructs and relies on utility theory for decision-making. Using a graph-based representation of the architecture, Venus captures rules as graph patterns that represent architectural fragments, thus enabling runtime extensibility and, in turn, support for dynamic architectures; the architecture is evaluated by assigning utility values to fragments; and the pattern-based definition of rules and utility enables incremental computation of the changes in utility that result from rule executions, rather than evaluation of the complete architecture, which supports scalability. Second, we introduce HypeZon, a hybrid solution for runtime coordination of multiple off-the-shelf adaptation policies, which typically offer only partial satisfaction of the quality and cost requirements. Realized based on meta-self-aware architectures, HypeZon complements Venus by re-using existing policies at runtime to balance the quality-cost trade-off.
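The scalability argument rests on incrementality: when the overall utility is an aggregate over fragments, a rule execution that touches one fragment only requires a local delta rather than a full re-evaluation. A minimal sketch of this idea (illustrative only, with hypothetical fragment names; not Venus's actual implementation):

```python
# Minimal sketch of incremental utility maintenance: total utility is the
# sum of per-fragment utilities, so a rule that rewrites one fragment
# applies an O(1) delta instead of re-evaluating the whole architecture.

class UtilityTracker:
    def __init__(self, fragment_utils):
        self.utils = dict(fragment_utils)   # fragment id -> utility
        self.total = sum(self.utils.values())

    def apply_rule(self, fragment, new_util):
        """A rule execution changed one fragment: update incrementally."""
        delta = new_util - self.utils.get(fragment, 0.0)
        self.utils[fragment] = new_util
        self.total += delta                  # O(1), not O(#fragments)
        return delta

# Hypothetical architecture fragments with assigned utilities.
tracker = UtilityTracker({"server-a": 0.9, "server-b": 0.4})
tracker.apply_rule("server-b", 0.8)   # e.g. a self-healing rule fired
print(round(tracker.total, 2))         # 1.7
```

The same bookkeeping generalizes to pattern matches over a graph: only the fragments whose matches are created or destroyed by a rule contribute to the delta.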
The twofold solution of this thesis is integrated in an adaptation engine that leverages state- and event-based principles for incremental execution and is therefore scalable to large and dynamic software architectures of growing size and complexity. The utility elicitation challenge is resolved by defining a methodology to train utility-change prediction models. The thesis addresses the quality-cost trade-off in adaptation of dynamic software architectures via design-time combination (Venus) and runtime coordination (HypeZon) of rule- and optimization-based policy formalisms, while offering supporting mechanisms for optimal, cost-effective, scalable, and robust adaptation. The solutions are evaluated according to a methodology derived from our systematic literature review of evaluation in self-healing systems; the applicability and effectiveness of the contributions are demonstrated to go beyond the state of the art in covering a wide spectrum of the problem space for software self-adaptation.
Sigmund Freud, the founder of psychoanalysis, began his intellectual life with the Jewish Bible and ended it with it as well. He began by reading the Philippson Bible, especially together with his father Jacob Freud, and ended by studying the figure of Moses. This study systematically traces this preoccupation and shows that the Jewish Bible was a constant reference point for Freud and shaped his Jewish identity. This is demonstrated by analysing family documents, religious instruction, and references to the Bible in Freud's writings and correspondence.
Guide to drawing up municipal action plans for increasing urban climate resilience
(2024)
The effects of climate change on people and the environment are becoming ever more apparent: in addition to the health risks posed by heat waves, which have caused rising numbers of deaths and illnesses across Germany for several years, heavy rainfall events and the resulting floods and flash floods have occurred increasingly in recent years. These entail in some cases immense economic damage, but also impairments of human health, both physical and psychological, and even fatalities. It must be assumed that such extreme weather events will occur even more frequently in the future.
To better protect the population from the consequences of these weather extremes, precautionary and adaptation measures to increase municipal climate resilience are urgently needed in addition to climate mitigation measures. This requires, on the one hand, an examination of a municipality's own risks and the resulting need for action and, on the other hand, interdisciplinary, cross-sectoral, and process-oriented planning and action. Action plans are intended to bundle these two aspects.
In recent years, a number of municipal and inter-municipal (heat) action plans have been drawn up. However, they differ considerably in content and scope. The present guide is intended to provide effective assistance in supporting municipalities and municipal administrations on the way to their own action plan. The guide focuses on the challenges posed by increasingly frequent heat and heavy rainfall events. It builds on existing working aids, recommendations, guides, and other resources, and refers to them in many places. The result is intended to be a practice-oriented guide that can be applied flexibly. With its help, municipalities can focus their activities on heat or heavy rain, or draw up a comprehensive action plan covering both.
Concepts and techniques for 3D-embedded treemaps and their application to software visualization
(2024)
This thesis addresses concepts and techniques for interactive visualization of hierarchical data using treemaps. It explores (1) how treemaps can be embedded in 3D space to improve their information content and expressiveness, (2) how the readability of treemaps can be improved using level-of-detail and degree-of-interest techniques, and (3) how to design and implement a software framework for the real-time web-based rendering of treemaps embedded in 3D. With a particular emphasis on their application, use cases from software analytics are taken to test and evaluate the presented concepts and techniques.
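As background, the classic slice-and-dice layout underlying all treemap variants can be sketched as follows (an illustrative 2D sketch with a toy module hierarchy and hypothetical sizes, not the thesis's 3D-embedded implementation):

```python
def weight(node):
    """Subtree weight, e.g. lines of code of a module."""
    kids = node.get("children", [])
    return node.get("size", 0) + sum(weight(c) for c in kids)

def slice_and_dice(node, x, y, w, h, depth=0):
    """Classic slice-and-dice treemap layout: each node gets a rectangle;
    children split it along alternating axes in proportion to their
    subtree weights."""
    rects = [(node["name"], x, y, w, h)]
    children = node.get("children", [])
    if not children:
        return rects
    total = sum(weight(c) for c in children)
    offset = 0.0
    for c in children:
        frac = weight(c) / total
        if depth % 2 == 0:   # split along x at even depths
            rects += slice_and_dice(c, x + offset * w, y, frac * w, h, depth + 1)
        else:                # split along y at odd depths
            rects += slice_and_dice(c, x, y + offset * h, w, frac * h, depth + 1)
        offset += frac
    return rects

# A toy module hierarchy, weighted by hypothetical lines of code.
tree = {"name": "root", "children": [
    {"name": "core", "size": 300},
    {"name": "ui", "children": [
        {"name": "widgets", "size": 100},
        {"name": "views", "size": 100}]}]}

for name, x, y, w, h in slice_and_dice(tree, 0.0, 0.0, 1.0, 1.0):
    print(f"{name}: x={x:.2f} y={y:.2f} w={w:.2f} h={h:.2f}")
```

A 3D-embedded treemap keeps this planar partitioning as the reference plane and uses the third dimension (e.g., block height) as an additional visual variable.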
Concerning the first challenge, this thesis shows that a 3D attribute space offers enhanced possibilities for the visual mapping of data compared to classical 2D treemaps. In particular, embedding in 3D allows for improved implementation of visual variables (e.g., by sketchiness and color weaving), provision of new visual variables (e.g., by physically based materials and in situ templates), and integration of visual metaphors (e.g., by reference surfaces and renderings of natural phenomena) into the three-dimensional representation of treemaps.
For the second challenge, the readability of an information visualization, the work shows that the generally higher visual clutter and increased cognitive load typically associated with three-dimensional information representations can be kept low in treemap-based representations of both small and large hierarchical datasets. By introducing an adaptive level-of-detail technique, we can not only declutter the visualization results, thereby reducing cognitive load and mitigating occlusion problems, but also summarize and highlight relevant data. Furthermore, this approach facilitates automatic labeling, supports the emphasis on data outliers, and allows visual variables to be adjusted via degree-of-interest measures.
The third challenge is addressed by developing a real-time rendering framework with WebGL and accumulative multi-frame rendering. The framework removes hardware constraints and graphics API requirements, reduces interaction response times, and simplifies high-quality rendering. At the same time, the implementation effort for a web-based deployment of treemaps is kept reasonable.
The presented visualization concepts and techniques are applied and evaluated for use cases in software analysis. In this domain, data about software systems, especially about the state and evolution of the source code, does not have a descriptive appearance or natural geometric mapping, making information visualization a key technology here. In particular, software source code can be visualized with treemap-based approaches because of its inherently hierarchical structure. With treemaps embedded in 3D, we can create interactive software maps that visually map software metrics, software developer activities, or information about the evolution of software systems alongside their hierarchical module structure.
Discussions on remaining challenges and opportunities for future research for 3D-embedded treemaps and their applications conclude the thesis.
The project seminar concept presented in this article responds to a perceived distance and uncertainty towards religion-related topics among students in the subject Lebensgestaltung-Ethik-Religionskunde. Drawing on conceptual change research, various strategies were used to encourage students to perceive and reflect on their own cultural position and their own conceptions of religion(s). The students recorded their learning process in work journal entries, which were in turn examined by means of qualitative content analysis. After presenting the students' conceptions of religion and teaching collected in this way, the article offers suggestions as to how the analyzed findings can serve as a basis for improving university teaching in this subject area.
The increasing number of known exoplanets raises questions about their demographics and the mechanisms that shape planets into how we observe them today. Young planets in close-in orbits are exposed to harsh environments due to the host star being magnetically highly active, which results in high X-ray and extreme UV fluxes impinging on the planet. Prolonged exposure to this intense photoionizing radiation can cause planetary atmospheres to heat up, expand and escape into space via a hydrodynamic escape process known as photoevaporation. For super-Earth and sub-Neptune-type planets, this can even lead to the complete erosion of their primordial gaseous atmospheres. A factor of interest for this particular mass-loss process is the activity evolution of the host star. Stellar rotation, which drives the dynamo and with it the magnetic activity of a star, changes significantly over the stellar lifetime. This strongly affects the amount of high-energy radiation received by a planet as stars age. At a young age, planets still host warm and extended envelopes, making them particularly susceptible to atmospheric evaporation. Especially in the first gigayear, when X-ray and UV levels can be 100 to 10,000 times higher than for the present-day Sun, the characteristics of the host star and the detailed evolution of its high-energy emission are of importance.
In this thesis, I study the impact of stellar activity evolution on the high-energy-induced atmospheric mass loss of young exoplanets. The PLATYPOS code was developed as part of this thesis to calculate photoevaporative mass-loss rates over time. The code, which couples parameterized planetary mass-radius relations with an analytical hydrodynamic escape model, was used, together with Chandra and eROSITA X-ray observations, to investigate the future mass loss of the two young multiplanet systems V1298 Tau and K2-198. Further, in a numerical ensemble study, the effect of a realistic spread of activity tracks on the small-planet radius gap was investigated for the first time. The works in this thesis show that for individual systems, in particular if planetary masses are unconstrained, the difference between a young host star following a low-activity track vs. a high-activity one can have major implications: the exact shape of the activity evolution can determine whether a planet can hold on to some of its atmosphere, or completely loses its envelope, leaving only the bare rocky core behind. For an ensemble of simulated planets, an observationally-motivated distribution of activity tracks does not substantially change the final radius distribution at ages of several gigayears. My simulations indicate that the overall shape and slope of the resulting small-planet radius gap is not significantly affected by the spread in stellar activity tracks. However, it can account for a certain scattering or fuzziness observed in and around the radius gap of the observed exoplanet population.
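For context, photoevaporative mass-loss rates of the kind computed here are commonly based on the energy-limited approximation; the generic textbook form (not necessarily the exact parameterization used in PLATYPOS) reads:

```latex
\dot{M}_{\mathrm{esc}} \;\simeq\; \epsilon \,
\frac{\pi\, F_{\mathrm{XUV}}\, R_{\mathrm{XUV}}^{3}}{G\, M_{\mathrm{pl}}\, K}
```

where $\epsilon$ is the heating efficiency, $F_{\mathrm{XUV}}$ the incident X-ray and extreme-UV flux, $R_{\mathrm{XUV}}$ the effective absorption radius of the atmosphere, $M_{\mathrm{pl}}$ the planetary mass, and $K \le 1$ a correction factor for Roche-lobe effects. The stellar activity track enters through the time evolution of $F_{\mathrm{XUV}}(t)$, which is why low- versus high-activity histories can lead to such different atmospheric fates.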
Organic-inorganic hybrids based on P3HT and mesoporous silicon for thermoelectric applications
(2024)
This thesis presents a comprehensive study on the synthesis, structure, and thermoelectric transport properties of organic-inorganic hybrids based on P3HT and porous silicon. The effect of embedding the polymer in silicon pores on electrical and thermal transport is studied. Morphological studies confirm successful polymer infiltration and diffusion doping, with roughly 50% of the pore space occupied by conjugated polymer. Synchrotron diffraction experiments reveal no specific ordering of the polymer inside the pores. P3HT-pSi hybrids show electrical transport improved by five orders of magnitude compared to porous silicon, and power factor values comparable to or exceeding those of other P3HT-inorganic hybrids. The analysis suggests different transport mechanisms in the two materials. In pSi, the transport mechanism relates to a Meyer-Neldel compensation rule. The analysis of the hybrids' data using the power law of the Kang-Snyder model suggests that the doped polymer mainly provides charge carriers to the pSi matrix, similar to the behavior of a doped semiconductor. The heavily suppressed thermal transport in porous silicon is treated with a modified Landauer/Lundstrom model and effective medium theories, which reveal that pSi agrees well with the Kirkpatrick model with a 68% percolation threshold. The thermal conductivities of the hybrids show an increase compared to empty pSi, but the overall thermoelectric figure of merit ZT of the P3HT-pSi hybrid exceeds that of pSi and P3HT as well as bulk Si.
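For reference, the figure of merit quoted above follows the standard definition:

```latex
ZT \;=\; \frac{S^{2}\,\sigma\,T}{\kappa},
\qquad \kappa = \kappa_{\mathrm{el}} + \kappa_{\mathrm{lat}}
```

with Seebeck coefficient $S$, electrical conductivity $\sigma$, absolute temperature $T$, and total thermal conductivity $\kappa$; the numerator $S^{2}\sigma$ is the power factor. The hybrid strategy exploits this ratio: filling the pores with doped P3HT raises $\sigma$ (and thus the power factor) by orders of magnitude while the porous scaffold keeps $\kappa$ strongly suppressed, so $ZT$ can exceed that of either constituent alone.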
The present paper proposes a novel approach for equilibrium selection in the infinitely repeated prisoner’s dilemma where players can communicate before choosing their strategies. This approach yields a critical discount factor that makes different predictions for cooperation than the usually considered sub-game perfect or risk dominance critical discount factors. In laboratory experiments, we find that our factor is useful for predicting cooperation. For payoff changes where the usually considered factors and our factor make different predictions, the observed cooperation is consistent with the predictions based on our factor.
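For comparison, the sub-game perfect benchmark mentioned above is the standard grim-trigger threshold (shown here with generic stage-game payoffs; the paper's own critical factor differs from this):

```latex
\frac{R}{1-\delta} \;\ge\; T + \frac{\delta P}{1-\delta}
\quad\Longleftrightarrow\quad
\delta \;\ge\; \delta^{*}_{\mathrm{SPE}} \;=\; \frac{T-R}{T-P}
```

where $T > R > P > S$ denote the temptation, reward, punishment, and sucker payoffs of the prisoner's dilemma: cooperation is sustainable as an equilibrium whenever the discount factor exceeds this threshold. The paper's contribution is a differently constructed critical discount factor that better predicts observed cooperation when pre-play communication is possible.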
It is a common finding that preschoolers have difficulties in identifying who is doing what to whom in non-canonical sentences, such as object-verb-subject (OVS) and passive sentences in German. This dissertation investigates how German monolingual and German-Italian simultaneous bilingual children process German OVS sentences (Study 1) and German passives (Study 2). Offline data (i.e., accuracy data) and online data (i.e., eye-gaze and pupillometry data) were analyzed to explore whether children can assign thematic roles during sentence comprehension and processing. Executive functions as well as language-internal and language-external factors were investigated as potential predictors of children's sentence comprehension and processing.
Throughout the literature, there are contradictory findings on the relation between language and executive functions. While some results show a bilingual cognitive advantage over monolingual speakers, others suggest there is no relationship between bilingualism and executive functions. If bilingual children possess more advanced executive function abilities than monolingual children, then this might also be reflected in a better performance on linguistic tasks. In the current studies, monolingual and bilingual children were tested by means of two executive function tasks: the Flanker task and the task-switching paradigm. However, the findings showed no bilingual cognitive advantage and no better performance by bilingual children in the linguistic tasks. Performance was rather comparable between bilingual and monolingual children, or even better for the monolingual group. This may be due to cross-linguistic influences and language experience (i.e., language input and output). Italian was used because it does not syntactically overlap with the structure of German OVS sentences, and it overlapped with only one of the two types of sentence condition used for the passive study, considering the subject-(finite)verb alignment. The findings showed a better performance of bilingual children in the passive sentence structure that syntactically overlapped in the two languages, providing evidence for cross-linguistic influences.
Further factors for children’s sentence comprehension were considered. The parents’ education, the number of older siblings and language experience variables were derived from a language background questionnaire completed by parents. Scores of receptive vocabulary and grammar, visual and short-term memory and reasoning ability were measured by means of standardized tests. It was shown that higher German language experience by bilinguals correlates with better accuracy in German OVS sentences but not in passive sentences. Memory capacity had a positive effect on the comprehension of OVS and passive sentences in the bilingual group. Additionally, executive function abilities played a role in the comprehension of OVS sentences but not of passive sentences. It is suggested that executive function abilities might help children in the sentence comprehension task when the linguistic structures are not yet fully mastered.
Altogether, these findings show that bilinguals’ poorer performance in the comprehension and processing of German OVS is mainly due to reduced language experience in German, and that the different performance of bilingual children with the two types of passives is mainly due to cross-linguistic influences.
Protected cultivation in greenhouses or polytunnels offers the potential for sustainable production of high-yield, high-quality vegetables. This is related to the ability to produce more on less land and to use resources responsibly and efficiently. Crop yield has long been considered the most important factor. However, as plant-based diets have been proposed for a sustainable food system, the targeted enrichment of health-promoting plant secondary metabolites should be addressed. These metabolites include carotenoids and flavonoids, which are associated with several health benefits, such as cardiovascular health and cancer protection.
Cover materials generally have an influence on the climatic conditions, which in turn can affect the levels of secondary metabolites in vegetables grown underneath. Plastic materials are cost-effective and their properties can be modified by incorporating additives, making them the first choice. However, these additives can migrate and leach from the material, resulting in reduced service life, increased waste and possible environmental release. Antifogging additives are used in agricultural films to prevent the formation of droplets on the film surface, thereby increasing light transmission and preventing microbiological contamination.
This thesis focuses on LDPE/EVA covers and incorporated antifogging additives for sustainable protected cultivation, following two different approaches. The first addressed the direct effects of leached antifogging additives using simulation studies on lettuce leaves (Lactuca sativa var. capitata L.). The second determined the effect of antifog polytunnel covers on lettuce quality. Lettuce is usually grown under protective cover and can provide high nutritional value due to its carotenoid and flavonoid content, depending on the cultivar.
To study the influence of simulated leached antifogging additives on lettuce leaves, a GC-MS method was first developed to analyze these additives based on their fatty acid moieties. Three structurally different antifogging additives (reference material) were characterized outside of a polymer matrix for the first time. All of them contained further fatty acids in addition to the main fatty acid specified by the manufacturer. Furthermore, the additives were found to adhere to the leaf surface and could not be removed by water, and only partially by hexane.
The incorporation of these additives into polytunnel covers affects carotenoid levels in lettuce, but not flavonoids, caffeic acid derivatives and chlorophylls. Specifically, carotenoids were higher in lettuce grown under polytunnels without antifog than with antifog. This has been linked to their effect on the light regime and was suggested to be related to carotenoid function in photosynthesis.
In terms of protected cultivation, the use of LDPE/EVA polytunnels affected light and temperature, and both are closely related. The carotenoid and flavonoid contents of lettuce grown under polytunnels shifted in opposite directions, with higher carotenoid and lower flavonoid levels. At the individual level, the flavonoids detected in lettuce did not differ; however, lettuce carotenoids adapted specifically depending on the time of cultivation. Flavonoid reduction was shown to be transcriptionally regulated (CHS) in response to UV light (UVR8). In contrast, carotenoids are thought to be regulated post-transcriptionally, as indicated by the lack of correlation between carotenoid levels and transcripts of the first enzyme in carotenoid biosynthesis (PSY) and a carotenoid-degrading enzyme (CCD4), as well as by the increased carotenoid metabolic flux. Understanding the regulatory mechanisms and metabolite adaptation strategies could further advance the strategic development and selection of cover materials.
The African weakly electric fishes (Mormyridae) exhibit a remarkable adaptive radiation, possibly due to their species-specific electric organ discharges (EODs). The EOD is produced by a muscle-derived electric organ located in the caudal peduncle. Divergence in EODs acts as a pre-zygotic isolation mechanism that drives species radiations. However, the mechanisms behind EOD diversification are only partially understood.
The aim of this study is to explore the genetic basis of EOD diversification at the gene expression level across Campylomormyrus species/hybrids and across ontogeny. First, I produced a high-quality genome of the species C. compressirostris as a valuable resource for understanding electric fish evolution.
The next study compared the gene expression patterns of electric organs and skeletal muscles in Campylomormyrus species/hybrids with different EOD durations. I identified several candidate genes with electric organ-specific expression, e.g., KCNA7a, KLF5, KCNJ2, SCN4aa, NDRG3, and MEF2. The overall gene expression pattern exhibited a significant association with EOD duration in all analyzed species/hybrids. The expression of several candidate genes, e.g., KCNJ2, KLF5, KCNK6, and KCNQ5, may contribute to the regulation of EOD duration in Campylomormyrus through their increasing or decreasing expression. Several potassium channel genes, e.g., KCNJ2, showed differential expression during ontogeny in species and hybrids with EOD alteration.
I next explored allele-specific expression in intragenus hybrids obtained by crossing C. compressirostris with the medium-duration EOD species C. tshokwe and the elongated-duration EOD species C. rhynchophorus. The hybrids exhibited global expression dominance of the C. compressirostris allele in the adult skeletal muscle and electric organ, as well as in the juvenile electric organ. Only KCNJ2 showed dominant expression of the allele from C. rhynchophorus, and this dominance increased during ontogeny. This supports our hypothesis that KCNJ2 is a key gene in the regulation of EOD duration. Our results help us understand, from a genetic perspective, how gene expression affects EOD diversification in the African weakly electric fish.
The Revolution of 1848/49 is remembered as a key event in the history of German democracy, yet the participation of women still occupies a subordinate place in collective memory. This Master's thesis therefore focuses specifically on the role of women in the Revolution of 1848/49 and offers suggestions for integrating the topic into civics education.
As the results of the thesis show, numerous women used the spirit of change of the 1840s to engage politically in a variety of ways. Many remained within the dichotomous division of gender roles, which rested on the bourgeois gender model emerging in the nineteenth century. Some, however, deliberately crossed these boundaries despite harsh sanctions.
It becomes apparent that female participation at this point did not yet lead to a fundamental questioning of gender polarity, but women increasingly claimed public space for themselves and thereby laid the foundations for the German women's movement of the following decades.
Addressing the role of women in the Revolution of 1848/49 in school lends itself to history and civics classes as well as to interdisciplinary teaching, with many points of connection. Integrating the topic into the classroom can, in particular, help preserve the historical legacy of the beginnings of the women's movement and make it usable for conveying democratic values.
Optimizing power analysis for randomized experiments: Design parameters for student achievement
(2024)
Randomized trials (RTs) are promising methodological tools to inform evidence-based reform to enhance schooling. Establishing a robust knowledge base on how to promote student achievement requires sensitive RT designs demonstrating sufficient statistical power and precision to draw conclusive and correct inferences on the effectiveness of educational programs and innovations. Proper power analysis is therefore an integral component of any informative RT on student achievement. This venture critically hinges on the availability of reasonable input variance design parameters (and their inherent uncertainties) that optimally reflect the realities around the prospective RT—precisely, its target population and outcome, possibly applied covariates, the concrete design as well as the planned analysis. However, existing compilations in this vein show far-reaching shortcomings.
The overarching endeavor of the present doctoral thesis was to substantively expand the resources available for refining the planning of RTs evaluating educational interventions. At its core is a systematic analysis of design parameters for student achievement, generating reliable and versatile compendia and developing thorough guidance to support apt power analysis for designing strong RTs. To this end, the thesis bundles two complementary studies which capitalize on rich data from several national probability samples of major German longitudinal large-scale assessments.
Study I applied two- and three-level latent (covariate) modeling to analyze design parameters for a wide spectrum of mathematical-scientific, verbal, and domain-general achievement outcomes. Three vital covariate sets were covered comprising (a) pretests, (b) sociodemographic characteristics, and (c) their combination. The accumulated estimates were additionally summarized in terms of normative distributions.
Study II specified (manifest) single-, two-, and three-level models and referred to influential psychometric heuristics to analyze design parameters and develop concise selection guidelines for covariate (a) types of varying bandwidth-fidelity (domain-identical, cross-domain, fluid intelligence pretests; sociodemographic characteristics), (b) combinations quantifying incremental validities, and (c) time lags of 1- to 7-year-lagged pretests scrutinizing validity degradation. The estimates for various mathematical-scientific and verbal achievement outcomes were meta-analytically integrated and employed in precision simulations.
In doing so, Studies I and II addressed essential gaps identified in previous repertoires in six major dimensions: Taken together, this thesis accumulated novel design parameters and deliberate guidance for RT power analysis (1) tailored to four German student (sub)populations across the entire school career from Grade 1 to 12, (2) matched to 21 achievement (sub)domains, (3) adjusted for 11 covariate sets enriched by empirically supported guidelines, (4) adapted to six RT designs, (5) suitable for latent and manifest analysis models, (6) which were cataloged along with quantifications of their associated uncertainties. These resources are complemented by a plethora of illustrative application examples to gently direct psychological and educational researchers through pivotal steps in the process of RT design.
The striking heterogeneity of the design parameter estimates across all these dimensions constitutes the overall, joint key result of Studies I and II. Hence, this work convincingly reinforces calls for a close match between design parameters and the specific peculiarities of the target RT’s research context.
All in all, the present doctoral thesis offers a so far unique, nuanced, and extensive toolkit to optimize power analysis for sound RTs on student achievement in the German (and similar) school context. It is of utmost importance that research does not tire of generating robust evidence on what actually works to improve schooling. With this in mind, I hope that the emerging compendia and guidance contribute to the quality and rigor of our randomized experiments in psychology and education.
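The role such design parameters play in power analysis can be illustrated with a minimal sketch. Under the standard formula for a two-level cluster-randomized trial with balanced assignment, the minimum detectable effect size (MDES) depends directly on the intraclass correlation and the covariate-explained variance at each level. All numeric values below are hypothetical illustrations, not estimates from the thesis.

```python
import math

def mdes_cluster_rt(icc, r2_l2, r2_l1, n_clusters, cluster_size, multiplier=2.8):
    """Approximate standardized MDES for a two-level cluster-randomized trial
    with half of the clusters treated (P = 0.5).

    icc          : intraclass correlation of the outcome (a design parameter)
    r2_l2, r2_l1 : variance explained by covariates at cluster / student level
    multiplier   : ~2.8 approximates the sum of critical t-values for
                   alpha = .05 (two-sided) and power = .80 with large df
    """
    p = 0.5  # proportion of clusters assigned to treatment
    var_term = (icc * (1 - r2_l2)) / (p * (1 - p) * n_clusters) \
             + ((1 - icc) * (1 - r2_l1)) / (p * (1 - p) * n_clusters * cluster_size)
    return multiplier * math.sqrt(var_term)

# Hypothetical design parameters: no covariates vs. a strong pretest covariate.
no_cov  = mdes_cluster_rt(icc=0.20, r2_l2=0.0,  r2_l1=0.0,  n_clusters=40, cluster_size=25)
pretest = mdes_cluster_rt(icc=0.20, r2_l2=0.80, r2_l1=0.50, n_clusters=40, cluster_size=25)
print(round(no_cov, 3), round(pretest, 3))  # the pretest markedly shrinks the MDES
```

The comparison shows why accurate, context-matched values for the ICC and covariate R² are decisive: with the same sample, an RT that can only detect an effect of about 0.43 standard deviations without covariates becomes sensitive to roughly half that effect size once a strong pretest is included.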
This study focuses on William Faulkner, whose works explore the demise of the slavery-based Old South during the Civil War in a highly experimental narrative style. Central to this investigation is the analysis of the temporal dimensions of both individual and collective guilt, thus offering a new approach to the often-discussed problem of Faulkner’s portrayal of social decay. The thesis examines how Faulkner re-narrates the legacy of the Old South as a guilt narrative and argues that Faulkner uses guilt in order to corroborate his concept of time and the idea of the continuity of the past. The focus of the analysis is on three of Faulkner’s arguably most important novels: The Sound and the Fury, Absalom, Absalom!, and Go Down, Moses. Each of these novels features a main character deeply overwhelmed by the crimes of the past, whether private, familial, or societal. As a result, guilt is explored both from a domestic as well as a social perspective. In order to show how Faulkner blends past and present by means of guilt, this work examines several methods and motifs borrowed from different fields and genres with which Faulkner narratively negotiates guilt. These include religious notions of original sin, the motif of the ancestral curse prevalent in the Southern Gothic genre, and the psychological concept of trauma. Each of these motifs emphasizes the temporal dimensions of guilt, which are the core of this study, and makes clear that guilt in Faulkner’s work is primarily to be understood as a temporal rather than a moral problem.
Advancing digitalization permeates ever more areas of life and leads to increasingly complex socio-technical systems. Although these systems are developed to make life easier, undesirable side effects can also arise. One such side effect could be, for example, the use of data from fitness apps for disadvantageous insurance decisions. These side effects manifest themselves at all levels between the individual and society. Systems with previously unexpected side effects can lead to declining acceptance or a loss of trust. Since such side effects often only become apparent in use, they require special attention already during the design process. This thesis aims to contribute a suitable tool for systematic reflection to the design process.
In this thesis, an analysis tool was developed for identifying and analyzing complex interaction situations in software development projects. Complex interaction situations are characterized by high dynamics, from which an unpredictability of cause-effect relationships follows. As a result, actors can no longer foresee the consequences of their own actions but can only reconstruct them in retrospect. This can give rise to faulty interaction processes on many levels and to the side effects mentioned above. The analysis tool supports designers in every phase of development through guided reflection, helping them anticipate potentially complex interaction situations and counter them by analyzing the possible causes of the perceived complexity.
Starting from the definition of interaction complexity, item indicators for capturing complex interaction situations were developed, which are then analyzed using suitable criteria for complexity. The analysis tool is structured as a "do-it-yourself" questionnaire with self-contained evaluation. The genesis of the questionnaire and the results of an evaluation with five software developers are presented. The analysis tool was perceived by the respondents as applicable, effective, and helpful, and thus enjoys high acceptance among the target group. This finding supports the tool's good integration into the software development process.
The thesis "Combating Transnational Crime in the Context of Fragile Statehood" addresses the phenomenon of organized crime actors operating across borders who exploit the fact that some internationally recognized governments exercise only insufficient control over parts of their territory. It examines why the legal framework created by the international community to combat transnational crime in the context of these fragile states contributes little or nothing to actually combating such crime.
After first clarifying what is meant by transnational crime within the scope of the study, the international legal framework is described on the basis of five selected examples of transnational crime. The main part of the study then asks why this legal framework, created by the international community, contributes so little to effectively countering such crime precisely in fragile states. It is found that the genesis of the international legal framework results in a legitimacy deficit. The inadequate consideration of the specific realities of life found in many fragile states also has a negative effect on the enforceability of the international legal framework. It is shown that differing levels of human rights protection lead to conflicts of norms in international cooperation between states, particularly in the context of international mutual legal assistance. Since fragile states in particular are often characterized by a deficient human rights situation, this frequently poses challenges for consolidated states cooperating with them. Finally, it is shown that extraterritorial jurisdiction, and thus the prosecution of transnational offenses by third states, also entails legal and practical problems.
A final chapter examines whether an alternative prosecution mechanism should be created to prosecute transnational crimes committed from within fragile states, and how such a mechanism should be designed in concrete terms.
The dance creativity test is a valid instrument based on dance-specific tasks and designed for the differentiated and standardized assessment of dance creativity in children aged 8 to 12. The test can be used not only to address questions about the state and development of creative dance abilities in childhood, but also provides valuable information for optimizing training, support, and teaching measures. It assesses the following creative dance abilities: (1) variety and originality in locomotion and body positions, and (2) richness of ideas, variety, and originality in designing movement patterns and compositions. The test can be administered to larger groups with minimal material effort, has no time limit, and makes it possible to identify different performance levels. The dance creativity test thus offers researchers and teachers a valuable means of analyzing and fostering children's creative dance abilities.
The case of T. Annius Milo offers great didactic potential for Latin instruction, since it allows the reading of a Latin text to be linked with aspects of Roman daily life and plausible connections to the present to be drawn. This Master's thesis shows the rich range of topics contained in Cicero's speech Pro Milone, including the historical context of the case, the offense of murder, and the course of the trial at the time. In addition, Roman law is compared with the criminal law in force in Germany today. Finally, the credibility of various written testimonies is examined, in particular the question of whether the transmitted speech authentically reflects the original proceedings.
In this paper, we study one channel through which communication may facilitate cooperative behavior – belief precision. In a prisoner’s dilemma experiment, we show that communication not only makes individuals more optimistic that their partner will cooperate but also increases the precision of this belief, thereby reducing strategic uncertainty. To disentangle the shift in mean beliefs from the increase in precision, we elicit beliefs and precision in a two-stage procedure and in three situations: without communication, before communication, and after communication. We find that the precision of beliefs increases during communication.
Digital Fashion
(2024)
The virtual garment, as a medial and socio-cultural everyday phenomenon of the present, is the subject of this interdisciplinary study. At the intersection of people, media, and fashion, the virtual garment can be experienced only on a screen, in unreal places and synthetic situations. Within this dispositif, concepts of the body, conventions of representation, social patterns of action, and communication strategies can be identified that, while based on a radical detachment from textile material, nevertheless cannot do without very concrete references to it. This leads to new approaches to engaging with garments, which must now be regarded as visualizations of bundled data packages. The dynamic development of new forms and their seamless integration into traditional business models and existing fashion concepts make it necessary to take stock, particularly with regard to current sustainability discourses around immaterial products. For this study, the processes behind the images (economic orientation, production, use, and reception) provide the methodological approach for the analysis. Using a typologizing toolkit, a set of research-guiding examples is compiled from the multitude and variety of representations; a multi-stage context analysis then leads to a conceptual definition of the virtual garment and to five contextual units. Using the example of the virtual garment, this study traces technical, societal, and social change and elaborates its significance for future developments in fashion. The study thus contributes to contemporary fashion research in media studies and the social sciences.
Advancing digitalization is changing society and has far-reaching effects on people and companies. Fundamental to these changes are the new technological possibilities for processing data on an ever-increasing scale and for various purposes. The availability of large and high-quality data sets, especially those based on personal data, is crucial. They are used either to improve the productivity, quality, and individuality of products and services or to develop new types of services. Today, user behavior is tracked more actively and comprehensively than ever, despite increasing legal requirements for protecting personal data worldwide. This increasingly raises ethical, moral, and social questions, which have moved to the forefront of the political debate, not least due to prominent cases of data misuse. Given this discourse and the legal requirements, today's data management must fulfill three conditions: legality, i.e., legal conformity of use; ethical legitimacy; and, thirdly, value creation from a business perspective. Within the framework of these conditions, this cumulative dissertation pursues four research objectives, focused on gaining a better understanding of
(1) the challenges of implementing privacy laws,
(2) the factors that influence customers' willingness to share personal data,
(3) the role of data protection for digital entrepreneurship, and
(4) the interdisciplinary scientific significance, its development, and its interrelationships.
Èto-clefts are Russian focus constructions with the demonstrative pronoun èto 'this' at the beginning: "Èto Mark vyigral gonku" ("It was Mark who won the race"). They are often compared with English it-clefts, German es-clefts, and the corresponding focus-background structures in other languages.
In terms of semantics, èto-clefts have two important properties that are cross-linguistically typical for clefts: existence presupposition ("Someone won the race") and exhaustivity ("Nobody except Mark won the race"). However, the exhaustivity effects are not as strong as those in structures with the exclusive particle only, and they require more research.
At the same time, the question of whether the syntactic structure of èto-clefts matches the biclausal structure of English and German clefts remains open. There are arguments in favor of both biclausality and monoclausality. Moreover, there is no consensus regarding the status of èto itself.
Finally, the information structure of èto-clefts has remained underexplored in the existing literature.
This research investigates the information-structural, syntactic, and semantic properties of Russian clefts, both theoretically (supported by examples from Russian text corpora and judgments from native speakers) and experimentally. It is determined which desired changes in the information structure motivate native speakers to choose an èto-cleft and not the canonical structure or other focus realization tools. Novel syntactic tests are conducted to find evidence for bi-/monoclausality of èto-clefts, as well as for base-generation or movement of the cleft pivot. It is hypothesized that èto has a certain important function in clefts, and its status is investigated. Finally, new experiments on the nature of exhaustivity in èto-clefts are conducted. They allow for direct cross-linguistic comparison, using an incremental-information paradigm with truth-value judgments.
In terms of information structure, this research makes a new proposal that presents èto-clefts as structures with an inherent focus-background bipartitioning. Even though èto-clefts are used in typical focus contexts, evidence was found that èto-clefts (as well as Russian thetic clefts) allow for both new information focus and contrastive focus. Èto-clefts are pragmatically acceptable when a singleton answer to the implied question is expected (e.g. "It was Mark who won the race" but not "It was Mark who came to the party"). Importantly, èto in Russian clefts is neither a dummy element nor redundant: it is a topic expression; it conveys familiarity, which triggers the existence presupposition; it refers to an instantiated event or a known/perceivable situation; and it plays an important role in the spoken language as a tool for speech coherence and a focus marker.
In terms of syntax, this research makes a new monoclausal proposal and shows evidence that the cleft pivot undergoes movement to the left peripheral position. Èto is proposed to be TopP.
Finally, in terms of semantics, a novel cross-linguistic evaluation of Russian clefts is made. Experiments show that the exhaustivity inference in èto-clefts is not robust. Participants used different strategies in resolving exhaustivity, falling into two groups: one group considered èto-clefts exhaustive, while the other considered them non-exhaustive. Hence, there is evidence for the pragmatic nature of exhaustivity in èto-clefts. The experimental results for èto-clefts are similar to those for clefts in German, French, and Akan. It is concluded that speakers use different tools available in their languages to produce structures with similar interpretive properties.
The present dissertation investigates changes in lingual coarticulation across childhood in German-speaking children from three to nine years of age and adults. Coarticulation refers to the mismatch between the abstract phonological units and their seemingly commingled realization in continuous speech. Being a process at the intersection of phonology and phonetics, addressing its changes across childhood allows for insights in speech motor as well as phonological developments. Because specific predictions for changes in coarticulation across childhood can be derived from existing speech production models, investigating children’s coarticulatory patterns can help us model human speech production.
While coarticulatory changes may shed light on some of the central questions of speech production development, previous studies on the topic were sparse and presented a puzzling picture of conflicting findings. One of the reasons for this lack is the difficulty in articulatory data acquisition in a young population. Within the research program this dissertation is embedded in, we accepted this challenge and successfully set up the hitherto largest corpus of articulatory data from children using ultrasound tongue imaging. In contrast to earlier studies, a high number of participants in tight age cohorts across a wide age range and a thoroughly controlled set of pseudowords allowed for statistically powerful investigations of a process known as variable and complicated to track.
The specific focus of my studies is on lingual vocalic coarticulation as measured by the horizontal position of the highest point of the tongue dorsum. Based on three studies on (a) anticipatory coarticulation towards the left side of the utterance, (b) carryover coarticulation towards the right side, and (c) anticipatory coarticulatory extent in repeated versus read-aloud speech, I derive the following main theses:
1. Maturing speech motor control is responsible for some developmental changes in coarticulation.
2. Coarticulation can be modeled as the coproduction of articulatory gestures.
3. The developmental change in coarticulation results from a decrease of vocalic activation width.
With the many challenges facing the agricultural system, such as water scarcity, loss of arable land due to climate change, population growth, urbanization, and trade disruptions, new agri-food systems are needed to ensure food security in the future. In addition, healthy diets are needed to combat non-communicable diseases; plant-based diets rich in health-promoting plant secondary metabolites are therefore desirable. A saline indoor farming system represents a sustainable and resilient new agri-food system and can preserve valuable fresh water. Since indoor farming relies on artificial lighting, assessment of lighting conditions is essential. In this thesis, the cultivation of halophytes in a saline indoor farming system was evaluated, and the influence of cultivation conditions was assessed with a view to improving the nutritional quality of halophytes for human consumption. To this end, five selected edible halophyte species (Brassica oleracea var. palmifolia, Cochlearia officinalis, Atriplex hortensis, Chenopodium quinoa, and Salicornia europaea) were cultivated in saline indoor farming. The halophyte species were selected according to their salt tolerance levels and mechanisms. First, the suitability of halophytes for saline indoor farming and the influence of salinity on their nutritional properties, e.g. plant secondary metabolites and minerals, were investigated. Changes in plant performance and nutritional properties were observed as a function of salinity. The response to salinity was found to be species-specific and related to the salt tolerance mechanism of the halophytes. At their optimal salinity levels, the halophytes showed improved carotenoid content. In addition, a negative correlation was found between the nitrate and chloride content of halophytes as a function of salinity. Since chloride and nitrate can be antinutrient compounds, depending on their content, monitoring is essential, especially in halophytes.
Second, regional brine water was introduced as an alternative saline water resource in the saline indoor farming system. Brine water was shown to be feasible for the saline indoor farming of halophytes, as there was no adverse effect on growth or nutritional properties, e.g. carotenoids. Carotenoids were shown to be less affected by salt composition than by salt concentration. In addition, the interaction between salinity and light regime in indoor farming and greenhouse cultivation was studied. It was shown that the interaction of light regime and salinity alters the content of carotenoids and chlorophylls. Glucosinolate and nitrate content were also shown to be influenced by the light regime. Finally, the influence of UVB light on halophytes was investigated using supplemental narrow-band UVB LEDs. UVB light was shown to affect the growth, phenotype, and metabolite profile of halophytes, with a species-specific UVB response. Furthermore, a modulation of carotenoid content in S. europaea could be achieved to enhance health-promoting properties and thus improve nutritional quality. This modulation was shown to be dose-dependent, and the underlying mechanisms of carotenoid accumulation were also investigated, revealing that carotenoid accumulation is related to oxidative stress.
In conclusion, this work demonstrated the potential of halophytes as alternative vegetables produced in a saline indoor farming system for future diets, which could contribute to ensuring food security. To improve the sustainability of the saline indoor farming system, LED lamps and regional brine water could be integrated into the system. Since the nutritional properties have been shown to be influenced by salt, light regime, and UVB light, these abiotic stressors must be taken into account when considering halophytes as alternative vegetables for human nutrition.
The urban heat island (UHI) effect, describing an elevated temperature of urban areas compared with their natural surroundings, can expose urban dwellers to additional heat stress, especially during hot summer days. A comprehensive understanding of UHI dynamics along with urbanization is of great importance for efficient heat stress mitigation strategies towards sustainable urban development. This remains challenging, however, due to the difficulty of isolating the influences of various contributing factors that interact with each other. In this work, I present a systematic and quantitative analysis of how urban intrinsic properties (e.g., urban size, density, and morphology) influence UHI intensity.
To this end, we innovatively combine urban growth modelling and urban climate simulation to separate the influence of urban intrinsic factors from that of background climate, so as to focus on the impact of urbanization on the UHI effect. The urban climate model can create a laboratory environment which makes it possible to conduct controlled experiments to separate the influences from different driving factors, while the urban growth model provides detailed 3D structures that can be then parameterized into different urban development scenarios tailored for these experiments. The novelty in the methodology and experiment design leads to the following achievements of our work.
First, we develop a stochastic gravitational urban growth model that can generate 3D structures varying in size, morphology, compactness, and density gradient. We compare various characteristics, like fractal dimensions (box-counting, area-perimeter scaling, area-population scaling, etc.), and radial gradient profiles of land use share and population density, against those of real-world cities from empirical studies. The model shows the capability of creating 3D structures resembling real-world cities. This model can generate 3D structure samples for controlled experiments to assess the influence of some urban intrinsic properties in question. [Chapter 2]
With the generated 3D structures, we run several series of simulations with urban structures varying in properties like size, density, and morphology, under the same weather conditions. Analyzing how the canopy-layer urban heat island (CUHI) intensity, based on 2 m air temperature, varies in response to changes in the considered urban factors, we find that the CUHI intensity of a city is directly related to its built-up density and to an amplifying effect that urban sites have on each other. We propose a Gravitational Urban Morphology (GUM) indicator to capture this neighbourhood warming effect. We build a regression model to estimate the CUHI intensity based on urban size, urban gross building volume, and the GUM indicator. Taking the Berlin area as an example, we show the regression model capable of predicting the CUHI intensity under various urban development scenarios. [Chapter 3]
Based on the multi-annual average summer surface urban heat island (SUHI) intensity derived from land surface temperature, we further study how urban intrinsic factors influence the SUHI effect of the 5,000 largest urban clusters in Europe. We find a similar 3D GUM indicator to be an effective predictor of the SUHI intensity of these European cities. Together with other urban factors (vegetation condition, elevation, water coverage), we build different multivariate linear regression models and a climate-space-based Geographically Weighted Regression (GWR) model that can better predict SUHI intensity. By investigating the roles background climate factors play in modulating the coefficients of the GWR model, we extend the multivariate linear model to a nonlinear one by integrating climate parameters such as the average of daily maximal temperature and latitude. This makes it applicable across a range of background climates. The nonlinear model outperforms linear models in SUHI assessment as it captures the interaction of urban factors and the background climate. [Chapter 4]
Our work reiterates the essential roles of urban density and morphology in shaping the urban thermal environment. In contrast to many previous studies that link bigger cities with higher UHI intensity, we show that cities larger in area do not necessarily experience a stronger UHI effect. In addition, the results extend our knowledge by demonstrating the influence of urban 3D morphology on the UHI effect. This underlines the importance of inspecting cities as a whole from the 3D perspective. While urban 3D morphology is an aggregated feature of small-scale urban elements, its influence on city-scale UHI intensity cannot simply be scaled up from that of its neighbourhood-scale components. The spatial composition and configuration of urban elements both need to be captured when quantifying urban 3D morphology, as nearby neighbourhoods also influence each other. Our model serves as a useful UHI assessment tool for the quantitative comparison of urban intervention and development scenarios. It can support harnessing the capacity for UHI mitigation through optimizing urban morphology, with the potential to integrate climate change into heat mitigation strategies.
Leadership plays an important role in the efficient and fair resolution of social dilemmas, but the effectiveness of a leader can vary substantially. Two main factors of leadership impact are the ability to induce high contributions by all group members and the (expected) fair use of power. Participants in our experiment decide about contributions to a public good. After all contributions are made, the leader can choose how much of the joint earnings to assign to herself; the remainder is distributed equally among the followers. Using machine learning techniques, we study whether the content of initial open statements by the group members predicts their behavior as a leader and whether groups are able to identify such clues and endogenously appoint a “good” leader to solve the dilemma. We find that leaders who promise fairness are more likely to behave fairly, and that followers appoint as leaders those who write more explicitly about fairness and efficiency. However, in their contribution decision, followers focus on the leader’s first-move contribution and place less importance on the content of the leader’s statements.
Access to digital finance
(2024)
Financing entrepreneurship spurs innovation and economic growth. Digital financial platforms that crowdfund equity for entrepreneurs have emerged globally, yet they remain poorly understood. We model equity crowdfunding in terms of the relationship between the number of investors and the amount of money raised per pitch. We examine heterogeneity in the average amount raised per pitch that is associated with differences across three countries and seven platforms. Using a novel dataset of successful fundraising on the most prominent platforms in the UK, Germany, and the USA, we find that the underlying relationship between the number of investors and the amount of money raised for entrepreneurs is log-linear, with a coefficient less than one and concave to the origin. We identify significant variation in the average amount invested in each pitch across countries and platforms. Our findings have implications for market actors as well as regulators who set competitive frameworks.
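A log-linear relation of the kind reported, amount ≈ a·(investors)^b with b < 1, can be recovered by a straight-line fit in log-log space. The data and the exponent 0.8 below are synthetic, chosen only to illustrate the concave shape, not estimates from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic pitches: investor counts and amounts following a power law
# with exponent 0.8 (< 1, hence concave) plus multiplicative noise.
investors = rng.integers(10, 2000, 300)
amount = 500.0 * investors ** 0.8 * np.exp(rng.normal(0, 0.2, 300))

# Fitting log(amount) against log(investors) recovers the exponent b.
b, log_a = np.polyfit(np.log(investors), np.log(amount), 1)
```

A fitted slope below one means the average amount per investor falls as more investors join a pitch, which is the concavity the abstract refers to.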
Starting from the observation that current digitalization research recognizes the ambivalence of digitalization but does not make it the subject of its analyses, this cumulative dissertation focuses on the ambivalent dichotomy of potentials and problems that accompanies the digital transformation of organizations. Across six publications, a systems-theoretical view of organizations is used to trace this tense dichotomy along three ambivalent relationships: First, with regard to the relationship between digitalization and post-bureaucracy, it becomes clear that digital transformations have the potential to facilitate post-bureaucratic ways of working. At the same time, the problem arises that consensus-based post-bureaucratic structures impede digitalization initiatives, since these depend on a large number of decisions. Second, regarding the ambivalent relationship between digitalization and networking, organization-wide cooperation is enabled on the one hand, while on the other hand the danger of digital communication of dissent emerges. In the third relationship, between digitalization and gender, new digital technologies hint at a potential for gender inclusion, while at the same time the problem of programmed-in gender biases arises, which often exacerbate discrimination. Juxtaposing the potentials and problems not only makes the ambivalence of organizational digitalization analyzable and comprehensible; it also turns out that digital transformations entail a double formalization: organizations are not only confronted with the adjustments of formal structures typical of reforms, but must additionally make formal decisions on the introduction and retention of technology and establish formal solutions in order to be able to react to unforeseen potentials and problems.
The aim of the dissertation is to provide an analytically generalized heuristic that can be used to identify the achievements and opportunities of digital transformations, while at the same time explaining their relationship to the challenges and follow-on problems that arise alongside them.
Nils-Hendrik Grohmann examines the still ongoing process of strengthening the UN human rights treaty bodies. He analyzes which legal powers the committees have, whether they can put forward proposals on their own initiative, and to what extent they have so far aligned their working procedures with one another. A further focus lies on the cooperation between the various committees and on the question of what role the meeting of chairpersons can play in the strengthening process.
Classification, prediction and evaluation of graph neural networks on online social media platforms
(2024)
The vast amount of data generated on social media platforms has made them a valuable source of information for businesses, governments, and researchers. Social media data can provide insights into user behavior, preferences, and opinions. In this work, we address two important challenges in social media analytics. Predicting user engagement with online content has become a critical task for content creators to increase user engagement and reach larger audiences. Traditional user engagement prediction approaches rely solely on features derived from the user and content. However, a new class of deep learning methods based on graphs captures not only the content features but also the graph structure of social media networks.
This thesis proposes a novel Graph Neural Network (GNN) approach to predict user interaction with tweets. The proposed approach combines the features of users, tweets and their engagement graphs. The tweet text features are extracted using pre-trained embeddings from language models, and a GNN layer is used to embed the user in a vector space. The GNN model then combines the features and graph structure to predict user engagement. The proposed approach achieves an accuracy value of 94.22% in classifying user interactions, including likes, retweets, replies, and quotes.
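The core message-passing step of such a GNN can be sketched as combining a node's own features with an aggregation over its neighbours' features. The layer below is a generic, untrained illustration with random weights, not the thesis' architecture:

```python
import numpy as np

def gnn_layer(features, adjacency, w_self, w_neigh):
    """One message-passing step: each node's new embedding combines
    its own features with the mean of its neighbours' features."""
    deg = adjacency.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1                       # avoid division by zero
    neigh_mean = adjacency @ features / deg  # mean neighbour aggregation
    # ReLU over the combined self and neighbourhood transforms.
    return np.maximum(0, features @ w_self + neigh_mean @ w_neigh)

rng = np.random.default_rng(0)
features = rng.normal(size=(4, 8))           # 4 nodes, 8-dim input features
adjacency = np.array([[0, 1, 1, 0],          # undirected toy engagement graph
                      [1, 0, 0, 1],
                      [1, 0, 0, 0],
                      [0, 1, 0, 0]], float)
out = gnn_layer(features, adjacency,
                rng.normal(size=(8, 16)), rng.normal(size=(8, 16)))
```

In a trained model, stacking such layers lets a node's embedding reflect its multi-hop neighbourhood, which is what allows graph structure to inform the engagement prediction.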
Another major challenge in social media analysis is detecting and classifying social bot accounts. Social bots are automated accounts used to manipulate public opinion by spreading misinformation or generating fake interactions. Detecting social bots is critical to prevent their negative impact on public opinion and trust in social media. In this thesis, we classify social bots on Twitter by applying Graph Neural Networks. The proposed approach uses a combination of both the features of a node and an aggregation of the features of a node’s neighborhood to classify social bot accounts. Our final results indicate a 6% improvement in the area under the curve score in the final predictions through the utilization of GNN.
Overall, our work highlights the importance of social media data and the potential of new methods such as GNNs to predict user engagement and detect social bots. These methods have important implications for improving the quality and reliability of information on social media platforms and mitigating the negative impact of social bots on public opinion and discourse.
The Central Andean region is characterized by diverse climate zones with sharp transitions between them. In this work, the area of interest is the South-Central Andes in northwestern Argentina, bordering Bolivia and Chile. The focus is the observation of soil moisture and water vapour with Global Navigation Satellite System (GNSS) remote-sensing methodologies. Because of the rapid temporal and spatial variations of water vapour and moisture circulation, monitoring this part of the hydrological cycle is crucial for understanding the mechanisms that control the local climate. Moreover, GNSS-based techniques have previously shown high potential and are appropriate for further investigation. This study comprises both a logistical-organizational effort and data analysis. As for the former, three GNSS ground stations were installed at remote locations in northwestern Argentina to acquire observations where no third-party data were available.
The methodological developments for observing the climate variables soil moisture and water vapour are independent and rely on different approaches. Soil-moisture estimation with GNSS reflectometry is an approach that has demonstrated promising results but has yet to be employed operationally. Thus, a more advanced algorithm that exploits more observations from multiple satellite constellations was developed using data from two pilot stations in Germany. Additionally, this algorithm was slightly modified and used in a sea-level measurement campaign. Although the objective of this application is not related to monitoring hydrological parameters, its methodology is based on the same principles and helps to evaluate the core algorithm. Water-vapour monitoring with GNSS observations, on the other hand, is a well-established technique that is used operationally. Hence, the scope of this study is a meteorological analysis that examines air-moisture levels along the zenith and introduces indices related to the azimuthal gradient.
The results of the experiments indicate higher-quality soil moisture observations with the new algorithm. Furthermore, the analysis using the stations in northwestern Argentina illustrates the limits of this technology because of varying soil conditions and shows future research directions. The water-vapour analysis points out the strong influence of the topography on atmospheric moisture circulation and rainfall generation. Moreover, the GNSS time series allows for the identification of seasonal signatures, and the azimuthal-gradient indices permit the detection of main circulation pathways.
The remarkable antifouling properties of zwitterionic polymers in controlled environments are often counteracted by their delicate mechanical stability. In order to improve the mechanical stability of zwitterionic hydrogels, the effect of increased crosslinker densities was thus explored. In a first approach, terpolymers of the zwitterionic monomer 3-[N-2-(methacryloyloxy)ethyl-N,N-dimethyl]ammonio propane-1-sulfonate (SPE), the hydrophobic monomer butyl methacrylate (BMA), and the photo-crosslinker 2-(4-benzoylphenoxy)ethyl methacrylate (BPEMA) were synthesized. Thin hydrogel coatings of the copolymers were then produced and photo-crosslinked. Studies of the swollen hydrogel films showed that not only the mechanical stability but also, unexpectedly, the antifouling properties were improved by the presence of hydrophobic BMA units in the terpolymers.
Based on the positive results shown by the amphiphilic terpolymers, and in order to further test the impact that hydrophobicity has on both the antifouling properties of zwitterionic hydrogels and their mechanical stability, a new amphiphilic zwitterionic methacrylic monomer, 3-((2-(methacryloyloxy)hexyl)dimethylammonio)propane-1-sulfonate (M1), was synthesized in good yields in a multistep synthesis. Homopolymers of M1 were obtained by free-radical polymerization. Similarly, terpolymers of M1, the zwitterionic monomer SPE, and the photo-crosslinker BPEMA were synthesized by free-radical copolymerization and thoroughly characterized, including their solubilities in selected solvents.
Also, a new family of vinyl amide zwitterionic monomers, namely 3-(dimethyl(2-(N-vinylacetamido)ethyl)ammonio)propane-1-sulfonate (M2), 4-(dimethyl(2-(N-vinylacetamido)ethyl)ammonio)butane-1-sulfonate (M3), and 3-(dimethyl(2-(N-vinylacetamido)ethyl)ammonio)propyl sulfate (M4), together with the new photo-crosslinker 4-benzoyl-N-vinylbenzamide (M5), which is well-suited for copolymerization with vinyl amides, is introduced within the scope of the present work. The monomers are synthesized in good yields via a multistep synthesis. Homopolymers of the new vinyl amide zwitterionic monomers are obtained by free-radical polymerization and thoroughly characterized. Remarkably, the solubility tests show that the homopolymers produced are fully soluble in water, evidence of their high hydrophilicity. Copolymerization of the vinyl amide zwitterionic monomers M2, M3, and M4 with the vinyl amide photo-crosslinker M5 proved to require very specific polymerization conditions. Nevertheless, copolymers were successfully obtained by free-radical copolymerization under appropriate conditions.
Moreover, in an attempt to mitigate the intrinsic hydrophobicity introduced in the copolymers by the photo-crosslinkers, and based on the proven affinity of quaternized diallylamines to copolymerize with vinyl amides, a new quaternized diallylamine sulfobetaine photo-crosslinker 3-(diallyl(2-(4-benzoylphenoxy)ethyl)ammonio)propane-1-sulfonate (M6) is synthesized. However, despite a priori promising copolymerization suitability, copolymerization with the vinyl amide zwitterionic monomers could not be achieved.
Efficiently managing large state is a key challenge for data management systems. Traditionally, state is split into fast but volatile state in memory for processing and persistent but slow state on secondary storage for durability. Persistent memory (PMem), a new technology in the storage hierarchy, blurs the lines between these states by offering byte-addressability and low latency like DRAM as well as persistence like secondary storage. These characteristics have the potential to cause a major performance shift in database systems.
Driven by the potential impact that PMem has on data management systems, in this thesis we explore its use in such systems. We first evaluate the performance of real PMem hardware, in the form of Intel Optane, in a wide range of setups. To this end, we propose PerMA-Bench, a configurable benchmark framework that allows users to evaluate the performance of customizable database-related PMem access. Based on experimental results obtained with PerMA-Bench, we discuss findings and identify general and implementation-specific aspects that influence PMem performance and should be considered in future work to improve PMem-aware designs. We then propose Viper, a hybrid PMem-DRAM key-value store. Based on PMem-aware access patterns, we show how to leverage PMem and DRAM efficiently to design a key database component. Our evaluation shows that Viper outperforms existing key-value stores by 4–18x for inserts while offering full data persistence and achieving similar or better lookup performance. Next, we show which changes must be made to integrate PMem components into larger systems. Using stream processing engines as an example, we highlight limitations of current designs and propose a prototype engine that overcomes them. This allows our prototype to fully leverage PMem's performance for its internal state management. Finally, in light of Optane's discontinuation, we discuss how insights from PMem research can be transferred to future multi-tier memory setups, using Compute Express Link (CXL) as an example.
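The hybrid split described for Viper, a fast volatile index in DRAM pointing into persistently stored key-value data, can be illustrated in miniature. The sketch below is our own toy illustration, not Viper's implementation: an ordinary append-only file stands in for PMem, whereas the real system works on byte-addressable persistent memory rather than file I/O:

```python
import os
import json
import tempfile

class HybridKV:
    """Toy hybrid store: volatile dict index ("DRAM") over a
    persistent append-only record log (standing in for PMem)."""

    def __init__(self, path):
        self.path = path
        self.index = {}                  # volatile: key -> record offset
        self.log = open(path, "a+b")     # persistent record log

    def put(self, key, value):
        record = json.dumps([key, value]).encode() + b"\n"
        self.log.seek(0, os.SEEK_END)
        offset = self.log.tell()
        self.log.write(record)
        self.log.flush()                 # persistence boundary
        self.index[key] = offset         # only the index stays volatile

    def get(self, key):
        # Lookup hits the fast index, then one read from the log.
        self.log.seek(self.index[key])
        _, value = json.loads(self.log.readline())
        return value

path = os.path.join(tempfile.mkdtemp(), "kv.log")
kv = HybridKV(path)
kv.put("a", 1)
kv.put("a", 2)   # overwrite: the index now points at the newest record
kv.put("b", "x")
```

After a crash, the volatile index is lost but can be rebuilt by scanning the persistent log, which mirrors the recovery idea behind keeping only the lookup structure in DRAM.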
Overall, we show that PMem offers high performance for state management, bridging the gap between fast but volatile DRAM and persistent but slow secondary storage. Although Optane was discontinued, new memory technologies are continuously emerging in various forms and we outline how novel designs for them can build on insights from existing PMem research.
Actin is one of the most highly conserved proteins in eukaryotes, and distinct actin-related proteins with filament-forming properties are even found in prokaryotes. Due to these commonalities, actin-modulating proteins of many species share similar structural properties and proposed functions. The polymerization and depolymerization of actin are critical processes for a cell, as they contribute to shape changes that let the cell adapt to its environment and to the movement and distribution of nutrients and cellular components within the cell. However, the extent to which functions of actin-binding proteins are conserved between distantly related species has only been addressed in a few cases. In this work, the functions of Coronin-A (CorA) and Actin-interacting protein 1 (Aip1), two proteins involved in actin dynamics, were characterized. In addition, the interchangeability and function of Aip1 were investigated in two phylogenetically distant model organisms. The flowering plant Arabidopsis thaliana (encoding two homologs, AIP1-1 and AIP1-2) and the amoeba Dictyostelium discoideum (encoding one homolog, DdAip1) were chosen because the functions of their actin cytoskeletons may differ in many aspects. Cross-species functional analyses were conducted for the AIP1 homologs, as flowering plants do not harbor a CorA gene.
In the first part of the study, the effect of four different mutation methods on the function of Coronin-A protein and the resulting phenotype in D. discoideum was revealed in two genetic knockouts, one RNAi knockdown and a sudden loss-of-function mutant created by chemical-induced dislocation (CID). The advantages and disadvantages of the different mutation methods on the motility, appearance and development of the amoebae were investigated, and the results showed that not all observed properties were affected with the same intensity. Remarkably, a new combination of Selection-Linked Integration and CID could be established.
In the second and third parts of the thesis, the exchange of Aip1 between plant and amoeba was carried out. The two A. thaliana homologs (AIP1-1 and AIP1-2) were analyzed for functionality in the plant as well as in D. discoideum. In the Aip1-deficient amoeba, rescue with AIP1-1 was more effective than with AIP1-2. The main results in the plant showed that, in the aip1-2 mutant background, reintroduced AIP1-2 displayed the most efficient rescue, and A. thaliana AIP1-1 rescued better than DdAip1. The choice of the tagging site was important for the function of Aip1, as steric hindrance is a problem: DdAip1 was less effective when tagged at the C-terminus, while the plant AIP1s showed mixed results depending on the tag position. In conclusion, the foreign proteins partially rescued phenotypes of mutant plants and mutant amoebae, despite the organisms being only very distantly related in evolutionary terms.
Bildung:digital
(2024)
Swiped, liked, or posted in bed this morning? Joined a video conference at work, used a database, or written code? Quickly paid with your smartphone in the shop on the way home, listened to podcasts, and renewed your library loans? And in the evening, on the couch with a tablet, filled in your tax return on ELSTER.de, shopped online, or paid bills before the streaming platform lured you in with a series?
Our lives are digitalized through and through. These changes make many things faster, easier, more efficient. But keeping pace demands a lot of us, and by no means everyone succeeds. There are people who prefer going to the bank for a transfer, leave programming to the experts, send their tax return by post, and use their smartphone only for phone calls. They do not want to; perhaps they also cannot. They never learned it. Others, younger people, grow up as “digital natives” surrounded by digital devices, tools, and processes. But does that really mean they can handle them? Or do they, too, need digital education?
But what does successful digital education actually look like? Is it about learning to operate a tablet, to google properly, and to write Excel spreadsheets? Perhaps it is about more: about understanding the comprehensive transformation that has gripped our world ever since it began to be decomposed into ones and zeros and rebuilt virtually. But how do we learn to live in a world of digitality, with everything that entails and to our benefit? For the current issue of “Portal Wissen” we looked around the University of Potsdam to see what role the combination of digitalization and learning plays in the research of the various disciplines: We spoke with Katharina Scheiter, Professor of Digital Education, about the future of German schools, and had several experts show us examples of how digital instruments can improve learning at school as well as continuing education in professional life. Researchers from computer science and agricultural research also demonstrated how even seasoned farmers can learn a great deal about their land and their work thanks to digital tools. We talked to education researchers who use big data to analyze how boys and girls learn and where possible causes of differences may lie. The education and political scientist Nina Kolleck, in turn, looks at education against the backdrop of globalization, drawing on the analysis of large volumes of social media data.
Of course, we do not lose sight of the diversity of research at the University of Potsdam: we put 33 questions to the criminal law scholar Anna Albrecht, accompany a group of geoscientists into the Himalayas, and learn what alternatives to antibiotics may soon be available. This magazine also covers stress and how it makes us ill, research on sustainable ore extraction, and new approaches to school development.
Also new is a whole series of shorter contributions that invite browsing: from research news and personnel updates to photographic glimpses into laboratories, simple explanations of complex phenomena, and outlooks into the wider world of research, all the way to a small scientific utopia, a personal thank-you to research, and a science comic. All in the name of education, of course. Enjoy the read!
In the aftermath of the Shoah and the ostensible triumph of nationalism, it became common in historiography to relegate Jews to the position of the “eternal other” in a series of binaries: Christian/Jewish, Gentile/Jewish, European/Jewish, non-Jewish/Jewish, and so forth. For the longest time, these binaries remained characteristic of Jewish historiography, including in the Central European context. If one assumes instead, as more recent approaches in Habsburg studies do, that pluriculturalism was the basis of common experience in formerly Habsburg Central Europe, and accepts that no single “majority culture” existed but that hegemonies were instead imposed in certain contexts, then the often-used binaries are misleading and conceal the complex and sometimes even paradoxical conditions that shaped Jewish life in the region before the Shoah.
The very complexity of Habsburg Central Europe both in synchronic and diachronic perspective precludes any singular historical narrative of “Habsburg Jewry,” and it is not the intention of this volume to offer an overview of “Habsburg Jewish history.” The selected articles in this volume illustrate instead how important it is to reevaluate categories, deconstruct historical narratives, and reconceptualize implemented approaches in specific geographic, temporal, and cultural contexts in order to gain a better understanding of the complex and pluricultural history of the Habsburg Empire and the region as a whole.
This master's thesis addresses the question of the extent to which the latest textbooks for French instruction at the Gymnasium level, Découvertes 1 (Klett) and À plus 1 (Cornelsen), both from 2020, use cross-linguistic content to point to, or draw on, previously learned languages and earlier language acquisition processes. The focus lies on German as the language of schooling and/or first language and on English as the first foreign language, though other languages that appear are also included in the analysis.
The thesis contributes to the subject-didactic discourse on plurilingual content in foreign language textbooks. Beyond that, it can show teachers how these current textbooks can support their plurilingualism-oriented teaching.
The introduction emphasizes the relevance of cross-linguistic networking for foreign language teaching, particularly with regard to the individual multilingualism of students. It points to the potential of interlingual transfer, which includes, among other things, easing learning and fostering language awareness and language learning awareness.
Chapter 2 lays the theoretical foundations for the analysis by taking a closer look at multilingualism and plurilingual didactics, cross-linguistic networking, and its potential. Using German and English, it also shows what linguistic transfer potential could be brought into beginning French instruction. The conditions under which students employ interlingual transfer in their language acquisition are discussed as well.
Chapter 3 provides an overview of the state of research on cross-linguistic networking and multilingualism in foreign language textbooks and identifies the research gap that this thesis seeks to close.
Chapter 4 formulates the research question and its sub-questions, describes the textbooks under examination, and justifies the selection of the textbooks and of the textbook components analyzed. The methodology of the comparative textbook analysis is also explained.
The results of the analysis are presented in detail in Chapter 5. It shows which cross-linguistic content appears in each textbook, in what form, and with which languages and linguistic levels involved.
Chapter 6 discusses and analyzes the results, addressing the textbooks' concepts of multilingualism and the trends in their cross-linguistic content.
The concluding Chapter 7 emphasizes in summary that both textbooks offer a good deal of cross-linguistic content with the potential to support plurilingual teaching. At the production level, however, too few transfer processes are initiated as yet. The thesis also outlines which complementary studies would be possible, for instance on the use of cross-linguistic content in the classroom.